| Field | Value |
| --- | --- |
| Name | timeagi |
| Version | 0.0.1 |
| home_page | https://github.com/DC-research/TEMPO |
| Summary | Time Series Foundation Model - TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting |
| upload_time | 2024-11-30 05:06:40 |
| maintainer | None |
| docs_url | None |
| author | Defu Cao |
| requires_python | <4,>=3.7.2 |
| license | None |
| keywords | time-series, ml, llm |
| requirements | No requirements were recorded. |
# Time Series Foundation Model - TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting
[![preprint](https://img.shields.io/static/v1?label=arXiv&message=2310.04948&color=B31B1B&logo=arXiv)](https://arxiv.org/pdf/2310.04948)
[![huggingface](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-FFD21E)](https://huggingface.co/Melady/TEMPO)
[![License: Apache-2.0](https://img.shields.io/badge/License-Apache--2.0-green.svg)](https://opensource.org/licenses/Apache-2.0)
<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/TEMPO_logo.png width=80% /></div>
The official code for [["TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting (ICLR 2024)"]](https://arxiv.org/pdf/2310.04948).
TEMPO (v1.0) is one of the first open-source **Time Series Foundation Models** for forecasting.
<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/TEMPO.png width=80% /></div>
## ⏳ Upcoming Features

- [x] Parallel pre-training pipeline
- [ ] Probabilistic forecasting
- [ ] Multimodal dataset
- [ ] Multimodal pre-training script
## 🚀 News

- **Oct 2024**: 🚀 We've streamlined our code structure, enabling users to download the pre-trained model and perform zero-shot inference with a single line of code! Check out our [demo](./run_TEMPO_demo.py) for more details. Our model's download count on HuggingFace is now trackable!
- **Jun 2024**: 🚀 We added demos for reproducing the zero-shot experiments in [Colab](https://colab.research.google.com/drive/11qGpT7H1JMaTlMlm9WtHFZ3_cJz7p-og?usp=sharing). We also added a demo of building a custom dataset and running inference directly with our pre-trained foundation model: [Colab](https://colab.research.google.com/drive/1ZpWbK0L6mq1pav2yDqOuORo4rHbv80-A?usp=sharing)
- **May 2024**: 🚀 TEMPO launched a GUI-based online [demo](https://4171a8a7484b3e9148.gradio.live/), allowing users to interact directly with our foundation model!
- **May 2024**: 🚀 TEMPO published the 80M pre-trained foundation model on [HuggingFace](https://huggingface.co/Melady/TEMPO)!
- **May 2024**: 🧪 We added the code for pre-training and running inference with TEMPO models. You can find a pre-training script demo in [this folder](./scripts/etth2.sh). We also added [a script](./scripts/etth2_test.sh) for the inference demo.
- **Mar 2024**: 📈 Released the [TETS dataset](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link), built from the [S&P 500](https://www.spglobal.com/spdji/en/indices/equity/sp-500/#overview) and used in TEMPO's multimodal experiments.
- **Mar 2024**: 🧪 TEMPO published the project [code](https://github.com/DC-research/TEMPO) and the pre-trained checkpoint [online](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link)!
- **Jan 2024**: 🚀 The TEMPO [paper](https://openreview.net/pdf?id=YH5w12OUuU) was accepted by ICLR!
- **Oct 2023**: 🚀 The TEMPO [paper](https://arxiv.org/pdf/2310.04948) was released on arXiv!
## Build the environment

```
conda create -n tempo python=3.8
conda activate tempo
pip install timeagi
```
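The package metadata above records no pinned requirements, so before running the demo it is worth confirming that PyTorch and NumPy (both imported by the Script Demo below) are importable. A minimal sanity check, assuming `pip install torch numpy` covers anything missing:

```python
# Sanity-check the environment used by the Script Demo below.
import numpy as np
import torch

print("numpy:", np.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # the demo falls back to CPU if False
```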
## Script Demo

A streamlined example showing how to perform forecasting with TEMPO:
```python
# Third-party library imports
import numpy as np
import torch

# Local imports
from models.TEMPO import TEMPO

# Download the pre-trained checkpoint from HuggingFace and load it
model = TEMPO.load_pretrained_model(
    device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu'),
    repo_id="Melady/TEMPO",
    filename="TEMPO-80M_v1.pth",
    cache_dir="./checkpoints/TEMPO_checkpoints",
)

input_data = np.random.rand(336)  # Random input sequence of length 336
with torch.no_grad():
    predicted_values = model.predict(input_data, pred_length=96)
print("Predicted values:")
print(predicted_values)
```
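The same `model.predict` call can also be rolled forward for horizons longer than 96 steps. A hypothetical sketch (nothing beyond `predict` itself is confirmed by this README), assuming `predict` returns an array-like of length `pred_length` and reusing the `model` loaded above:

```python
import numpy as np
import torch

# Roll the 96-step predictor out to a 288-step horizon by feeding
# each forecast chunk back into the 336-point context window.
context = np.random.rand(336)   # stand-in for real history
horizon, step = 288, 96
chunks = []
with torch.no_grad():
    for _ in range(horizon // step):
        pred = np.asarray(model.predict(context, pred_length=step)).ravel()
        chunks.append(pred)
        context = np.concatenate([context, pred])[-336:]  # slide the window
forecast = np.concatenate(chunks)
print(forecast.shape)  # (288,)
```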
## Demos
### 1. Reproducing zero-shot experiments on ETTh2:

Please try reproducing the zero-shot experiments on ETTh2 [[here on Colab]](https://colab.research.google.com/drive/11qGpT7H1JMaTlMlm9WtHFZ3_cJz7p-og?usp=sharing).
### 2. Zero-shot experiments on a custom dataset:

The following Colab page demonstrates building a custom dataset and running inference directly with our pre-trained foundation model: [[Colab]](https://colab.research.google.com/drive/1ZpWbK0L6mq1pav2yDqOuORo4rHbv80-A?usp=sharing)
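For reference, a minimal offline sketch of the same idea, assuming your series sits in one CSV column; `my_series.csv` and the `value` column are placeholders, and `model` is the object loaded in the Script Demo above:

```python
import pandas as pd
import torch

# Load your own series (file name and column are hypothetical placeholders).
df = pd.read_csv("my_series.csv")
context = df["value"].to_numpy()[-336:]  # last 336 observations as model context

with torch.no_grad():
    pred = model.predict(context, pred_length=96)  # 96-step zero-shot forecast
print(pred)
```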
### 3. Online demo:
Please try our foundation model demo [[here]](https://4171a8a7484b3e9148.gradio.live).
<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/TEMPO_demo.jpg width=80% /></div>
## Practice on your end
Our models are also available on HuggingFace: [[Melady/TEMPO]](https://huggingface.co/Melady/TEMPO).
### Get Data
Download the data from [[Google Drive]](https://drive.google.com/drive/folders/13Cg1KYOlzM5C7K8gK8NfC-F3EYxkM3D2?usp=sharing) or [[Baidu Drive]](https://pan.baidu.com/s/1r3KhGd0Q9PJIUZdfEYoymg?pwd=i9iy) and place it in the folder `./dataset`. You can also download the STL results from [[Google Drive]](https://drive.google.com/file/d/1gWliIGDDSi2itUAvYaRgACru18j753Kw/view?usp=sharing) and place them in the folder `./stl`.
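After downloading, a quick check that the layout is in place (a sketch; the folder names follow the instructions above):

```python
from pathlib import Path

# Verify the data folders referenced above exist before launching any script.
for folder in ("dataset", "stl"):
    status = "found" if Path(folder).is_dir() else "MISSING"
    print(f"./{folder}: {status}")
```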
### Run TEMPO
### Pre-Training Stage

Run the script for the dataset you want; the bracketed list is a placeholder for a single name, e.g. `bash etth2.sh`:

```
bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather].sh
```
### Test / Inference Stage

After training, we can test the TEMPO model in the zero-shot setting:
```
bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather]_test.sh
```
<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/results.jpg width=90% /></div>
## Pre-trained Models
You can download the pre-trained model from [[Google Drive]](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link) and then run the test script for fun.
## TETS dataset
Here are the prompts used to generate the corresponding textual information for the time series via the [[OpenAI ChatGPT-3.5 API]](https://platform.openai.com/docs/guides/text-generation):
<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/TETS_prompt.png width=80% /></div>
The time series data come from the [[S&P 500]](https://www.spglobal.com/spdji/en/indices/equity/sp-500/#overview). Here is the EBITDA case for one company from the dataset:
<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/Company1_ebitda_summary.png width=80% /></div>
Example of generated contextual information for the company shown above:
<div align="center"><img src=https://raw.githubusercontent.com/DC-research/TEMPO/main/tempo/pics/Company1_ebitda_summary_words.jpg width=80% /></div>
You can download the processed data with GPT-2 text embeddings from [[TETS]](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link).
## Contact
Feel free to contact DefuCao@USC.EDU / YanLiu.CS@USC.EDU if you're interested in applying TEMPO to your real-world applications.
## Cite our work
```
@inproceedings{
cao2024tempo,
title={{TEMPO}: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting},
author={Defu Cao and Furong Jia and Sercan O Arik and Tomas Pfister and Yixiang Zheng and Wen Ye and Yan Liu},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=YH5w12OUuU}
}
```
```
@article{
Jia_Wang_Zheng_Cao_Liu_2024,
title={GPT4MTS: Prompt-based Large Language Model for Multimodal Time-series Forecasting},
volume={38},
url={https://ojs.aaai.org/index.php/AAAI/article/view/30383},
DOI={10.1609/aaai.v38i21.30383},
number={21},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
author={Jia, Furong and Wang, Kevin and Zheng, Yixiang and Cao, Defu and Liu, Yan},
year={2024}, month={Mar.}, pages={23343-23351}
}
```