tfts

- Name: tfts
- Version: 0.0.13
- Summary: Deep learning time series with TensorFlow
- Home page: https://time-series-prediction.readthedocs.io
- Author: Longxing Tan
- Requires Python: >=3.8, <=3.13
- Upload time: 2025-01-12 16:51:06
- Requirements: tensorflow, pandas, scikit-learn, joblib, matplotlib

[license-image]: https://img.shields.io/badge/License-MIT-blue.svg
[license-url]: https://opensource.org/licenses/MIT
[pypi-image]: https://badge.fury.io/py/tfts.svg
[pypi-url]: https://pypi.python.org/pypi/tfts
[pepy-image]: https://pepy.tech/badge/tfts/month
[pepy-url]: https://pepy.tech/project/tfts
[build-image]: https://github.com/LongxingTan/Time-series-prediction/actions/workflows/test.yml/badge.svg?branch=master
[build-url]: https://github.com/LongxingTan/Time-series-prediction/actions/workflows/test.yml?query=branch%3Amaster
[lint-image]: https://github.com/LongxingTan/Time-series-prediction/actions/workflows/lint.yml/badge.svg?branch=master
[lint-url]: https://github.com/LongxingTan/Time-series-prediction/actions/workflows/lint.yml?query=branch%3Amaster
[docs-image]: https://readthedocs.org/projects/time-series-prediction/badge/?version=latest
[docs-url]: https://time-series-prediction.readthedocs.io/en/latest/?version=latest
[coverage-image]: https://codecov.io/gh/longxingtan/Time-series-prediction/branch/master/graph/badge.svg
[coverage-url]: https://codecov.io/github/longxingtan/Time-series-prediction?branch=master
[contributing-image]: https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat
[contributing-url]: https://github.com/longxingtan/Time-series-prediction/blob/master/CONTRIBUTING.md
[codeql-image]: https://github.com/longxingtan/Time-series-prediction/actions/workflows/codeql-analysis.yml/badge.svg
[codeql-url]: https://github.com/longxingtan/Time-series-prediction/actions/workflows/codeql-analysis.yml

<h1 align="center">
<img src="./docs/source/_static/logo.svg" width="400" align=center/>
</h1><br>

[![LICENSE][license-image]][license-url]
[![PyPI Version][pypi-image]][pypi-url]
[![Build Status][build-image]][build-url]
[![Lint Status][lint-image]][lint-url]
[![Docs Status][docs-image]][docs-url]
[![Code Coverage][coverage-image]][coverage-url]
[![Contributing][contributing-image]][contributing-url]

**[Documentation](https://time-series-prediction.readthedocs.io)** | **[Tutorials](https://time-series-prediction.readthedocs.io/en/latest/tutorials.html)** | **[Release Notes](https://time-series-prediction.readthedocs.io/en/latest/CHANGELOG.html)** | **[中文](https://github.com/LongxingTan/Time-series-prediction/blob/master/README_CN.md)**

**TFTS** (TensorFlow Time Series) is an easy-to-use time series package supporting both classical and the latest deep learning methods in TensorFlow/Keras.
- Delivers state-of-the-art performance on time series tasks (prediction, classification, anomaly detection)
- Provides advanced deep learning models for industry, research, and competitions
- Documentation lives at [time-series-prediction.readthedocs.io](https://time-series-prediction.readthedocs.io)


## Tutorial

**Installation**

- python >= 3.8, <= 3.13
- tensorflow >= 2.4

```shell
pip install tfts
```

**Quick start**

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LHdbrXmQGBSQuNTsbbM5-lAk5WENWF-Q?usp=sharing)
[![Open in Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://www.kaggle.com/code/tanlongxing/tensorflow-time-series-starter-tfts/notebook)

```python
import matplotlib.pyplot as plt
import tfts
from tfts import AutoModel, AutoConfig, KerasTrainer

train_length = 24
predict_sequence_length = 8
(x_train, y_train), (x_valid, y_valid) = tfts.get_data("sine", train_length, predict_sequence_length, test_size=0.2)

model_name_or_path = 'seq2seq'  # 'wavenet', 'transformer'
config = AutoConfig.for_model(model_name_or_path)
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model)
trainer.train((x_train, y_train), (x_valid, y_valid), epochs=15)

pred = trainer.predict(x_valid)
trainer.plot(history=x_valid, true=y_valid, pred=pred)
plt.show()
```
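
As a quick sanity check after training, you can score the validation predictions with plain NumPy; a minimal sketch, assuming `pred` comes back with the same `(batch, predict_sequence_length, 1)` shape as `y_valid`:

```python
import numpy as np

# Error of the quick-start validation predictions.
# Assumes pred and y_valid share the shape (batch, predict_sequence_length, 1).
pred = np.asarray(pred)
mse = np.mean((pred - y_valid) ** 2)
mae = np.mean(np.abs(pred - y_valid))
print(f"valid MSE: {mse:.4f}, valid MAE: {mae:.4f}")
```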

**Prepare your own data**

You can train on your own data by preparing 3D arrays for both inputs and targets, in either of two forms:
- option 1: `np.ndarray`
- option 2: `tf.data.Dataset`

Encoder-only model inputs

```python
import numpy as np
from tfts import AutoConfig, AutoModel, KerasTrainer

train_length = 24
predict_sequence_length = 8
n_feature = 2

x_train = np.random.rand(1, train_length, n_feature)  # inputs: (batch, train_length, feature)
y_train = np.random.rand(1, predict_sequence_length, 1)  # target: (batch, predict_sequence_length, 1)
x_valid = np.random.rand(1, train_length, n_feature)
y_valid = np.random.rand(1, predict_sequence_length, 1)

config = AutoConfig.for_model('rnn')
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model)
trainer.train(train_dataset=(x_train, y_train), valid_dataset=(x_valid, y_valid), epochs=1)
```
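
The same encoder-only data can also be wrapped as a `tf.data.Dataset` (option 2 above); a minimal sketch built on the arrays just defined, assuming the trainer accepts a batched dataset of `(inputs, targets)` pairs the same way it does in the seq2seq example further down:

```python
import tensorflow as tf

# Wrap the NumPy arrays from above into batched (inputs, targets) datasets.
train_loader = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(1)
valid_loader = tf.data.Dataset.from_tensor_slices((x_valid, y_valid)).batch(1)

trainer.train(train_dataset=train_loader, valid_dataset=valid_loader, epochs=1)
```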

Encoder-decoder model inputs

```python
# option1: np.ndarray
import numpy as np
from tfts import AutoConfig, AutoModel, KerasTrainer

train_length = 24
predict_sequence_length = 8
n_encoder_feature = 2
n_decoder_feature = 3

x_train = (
    np.random.rand(1, train_length, 1),  # inputs: (batch, train_length, 1)
    np.random.rand(1, train_length, n_encoder_feature),  # encoder_feature: (batch, train_length, encoder_features)
    np.random.rand(1, predict_sequence_length, n_decoder_feature),  # decoder_feature: (batch, predict_sequence_length, decoder_features)
)
y_train = np.random.rand(1, predict_sequence_length, 1)  # target: (batch, predict_sequence_length, 1)

x_valid = (
    np.random.rand(1, train_length, 1),
    np.random.rand(1, train_length, n_encoder_feature),
    np.random.rand(1, predict_sequence_length, n_decoder_feature),
)
y_valid = np.random.rand(1, predict_sequence_length, 1)

config = AutoConfig.for_model("seq2seq")
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model)
trainer.train((x_train, y_train), (x_valid, y_valid), epochs=1)
```

```python
# option2: tf.data.Dataset
import numpy as np
import tensorflow as tf
from tfts import AutoConfig, AutoModel, KerasTrainer

class FakeReader(object):
    def __init__(self, predict_sequence_length):
        train_length = 24
        n_encoder_feature = 2
        n_decoder_feature = 3
        self.x = np.random.rand(15, train_length, 1)
        self.encoder_feature = np.random.rand(15, train_length, n_encoder_feature)
        self.decoder_feature = np.random.rand(15, predict_sequence_length, n_decoder_feature)
        self.target = np.random.rand(15, predict_sequence_length, 1)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return {
            "x": self.x[idx],
            "encoder_feature": self.encoder_feature[idx],
            "decoder_feature": self.decoder_feature[idx],
        }, self.target[idx]

    def iter(self):
        for i in range(len(self.x)):
            yield self[i]

predict_sequence_length = 10
train_reader = FakeReader(predict_sequence_length=predict_sequence_length)
train_loader = tf.data.Dataset.from_generator(
    train_reader.iter,
    ({"x": tf.float32, "encoder_feature": tf.float32, "decoder_feature": tf.float32}, tf.float32),
)
train_loader = train_loader.batch(batch_size=1)
valid_reader = FakeReader(predict_sequence_length=predict_sequence_length)
valid_loader = tf.data.Dataset.from_generator(
    valid_reader.iter,
    ({"x": tf.float32, "encoder_feature": tf.float32, "decoder_feature": tf.float32}, tf.float32),
)
valid_loader = valid_loader.batch(batch_size=1)

config = AutoConfig.for_model("seq2seq")
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model)
trainer.train(train_dataset=train_loader, valid_dataset=valid_loader, epochs=1)
```

**Prepare custom model config**

```python
from tfts import AutoModel, AutoConfig

config = AutoConfig.for_model('rnn')
print(config)
config.rnn_hidden_size = 128

model = AutoModel.from_config(config, predict_sequence_length=7)
```
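
The customized config is used exactly like the default one; a minimal end-to-end sketch reusing the random-data pattern from the encoder-only example (only `rnn_hidden_size`, the field changed above, differs from the defaults):

```python
import numpy as np
from tfts import AutoConfig, AutoModel, KerasTrainer

train_length, predict_sequence_length, n_feature = 24, 7, 2
x_train = np.random.rand(8, train_length, n_feature)     # inputs: (batch, train_length, feature)
y_train = np.random.rand(8, predict_sequence_length, 1)  # target: (batch, predict_sequence_length, 1)

config = AutoConfig.for_model("rnn")
config.rnn_hidden_size = 128  # the field customized above
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model)
trainer.train((x_train, y_train), (x_train, y_train), epochs=1)
```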

**Build your own model**

<details><summary> Full list of models supported by AutoModel </summary>

- rnn
- tcn
- bert
- nbeats
- seq2seq
- wavenet
- transformer
- informer

</details>
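
Each name in the list above can be passed to `AutoConfig.for_model`; a minimal sketch that instantiates every backbone in turn (assuming all listed names are accepted as-is):

```python
from tfts import AutoConfig, AutoModel

# Build every backbone from the list above with an 8-step prediction horizon.
for name in ["rnn", "tcn", "bert", "nbeats", "seq2seq", "wavenet", "transformer", "informer"]:
    config = AutoConfig.for_model(name)
    model = AutoModel.from_config(config, predict_sequence_length=8)
    print(name, type(model).__name__)
```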

You can build custom models on top of tfts, for example to
- add custom embeddings for categorical variables (a sketch follows the example below)
- add custom head layers for classification or anomaly detection tasks

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tfts import AutoModel, AutoConfig

def build_model():
    train_length = 24
    train_features = 15
    predict_sequence_length = 8

    inputs = Input([train_length, train_features])
    config = AutoConfig.for_model("seq2seq")
    backbone = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
    outputs = backbone(inputs)
    outputs = Dense(1, activation="sigmoid")(outputs)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(loss="mse", optimizer="rmsprop")
    return model
```
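
For the first bullet (categorical variables), one possible pattern is to embed the categorical ids with standard Keras layers and concatenate the result with the numeric features before the tfts backbone; a minimal sketch, where the feature split, vocabulary size, and embedding width are illustrative assumptions rather than part of the tfts API:

```python
import tensorflow as tf
from tensorflow.keras.layers import Concatenate, Dense, Embedding, Input
from tfts import AutoConfig, AutoModel

def build_model_with_categorical(train_length=24, n_numeric=14, vocab_size=100, emb_dim=4,
                                 predict_sequence_length=8):
    # Numeric features: (batch, train_length, n_numeric); categorical ids: (batch, train_length)
    numeric_inputs = Input([train_length, n_numeric])
    category_inputs = Input([train_length], dtype="int32")

    # Embed the categorical ids and concatenate with the numeric features along the channel axis.
    category_emb = Embedding(vocab_size, emb_dim)(category_inputs)  # (batch, train_length, emb_dim)
    features = Concatenate(axis=-1)([numeric_inputs, category_emb])

    config = AutoConfig.for_model("seq2seq")
    backbone = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
    outputs = Dense(1, activation="sigmoid")(backbone(features))

    model = tf.keras.Model(inputs=[numeric_inputs, category_inputs], outputs=outputs)
    model.compile(loss="mse", optimizer="rmsprop")
    return model
```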


## Examples

- [TFTS-Bert](https://github.com/LongxingTan/KDDCup2022-Baidu) won **3rd place** in the KDD Cup 2022 wind power forecasting challenge
- [TFTS-Seq2seq](https://github.com/LongxingTan/Data-competitions/tree/master/tianchi-enso-prediction) won **4th place** in the 2021 Tianchi ENSO prediction competition

<!-- ### Performance

[Time series prediction](./examples/run_prediction_simple.py) performance is evaluated by tfts implementation, not official

| Performance | [web traffic<sup>mape</sup>]() | [grocery sales<sup>wrmse</sup>](https://www.kaggle.com/competitions/favorita-grocery-sales-forecasting/data) | [m5 sales<sup>val</sup>]() | [ventilator<sup>val</sup>]() |
| :-- | :-: | :-: | :-: | :-: |
| [RNN]() | 672 | 47.7% |52.6% | 61.4% |
| [DeepAR]() | 672 | 47.7% |52.6% | 61.4% |
| [Seq2seq]() | 672 | 47.7% |52.6% | 61.4% |
| [TCN]() | 672 | 47.7% |52.6% | 61.4% |
| [WaveNet]() | 672 | 47.7% |52.6% | 61.4% |
| [Bert]() | 672 | 47.7% |52.6% | 61.4% |
| [Transformer]() | 672 | 47.7% |52.6% | 61.4% |
| [Temporal-fusion-transformer]() | 672 | 47.7% |52.6% | 61.4% |
| [Informer]() | 672 | 47.7% |52.6% | 61.4% |
| [AutoFormer]() | 672 | 47.7% |52.6% | 61.4% |
| [N-beats]() | 672 | 47.7% |52.6% | 61.4% |
| [U-Net]() | 672 | 47.7% |52.6% | 61.4% |

### More demos
- [More complex prediction task](./notebooks)
- [Time series classification](./examples/run_classification.py)
- [Anomaly detection](./examples/run_anomaly.py)
- [Uncertainty prediction](examples/run_uncertainty.py)
- [Parameters tuning by optuna](examples/run_optuna_tune.py)
- [Serving by tf-serving](./examples) -->

For other deep learning frameworks, try [pytorch-forecasting](https://github.com/jdb78/pytorch-forecasting), [gluonts](https://github.com/awslabs/gluonts), or [paddlets](https://github.com/PaddlePaddle/PaddleTS).


## Citation

If you find the tfts project useful in your research, please consider citing it:

```bibtex
@misc{tfts2020,
  author = {Longxing Tan},
  title = {Time series prediction},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/longxingtan/time-series-prediction}},
}
```

            
