# Mantis: Lightweight Calibrated Foundation Model for User-Friendly Time Series Classification
<p align="center">
<img src="figures/mantis_logo_white_with_font.png" alt="Logo" height="300"/>
</p>
## Overview
**Mantis** is an open-source time series classification foundation model implemented by [Huawei Noah's Ark Lab](https://huggingface.co/paris-noah).\
The key features are:
- *Zero-shot feature extraction:* The model can be used in a frozen state to extract deep features and train a classifier on them.
- *Fine-tuning:* To achieve the highest performance, the model can be further fine-tuned for a new task.
- *Lightweight:* The model contains 8 million parameters, which allows it to be fine-tuned on a single GPU (even feasible on a CPU).
- *Calibration:* In our studies, we have shown that Mantis is the best-calibrated time series classification foundation model to date.
- *Adaptable to large-scale datasets:* For datasets with a large number of channels, we propose additional adapters that reduce memory requirements.
<p align="center">
<img src="figures/zero-shot-exp-results.png" alt="Logo" height="300"/>
<img src="figures/fine-tuning-exp-results.png" alt="Logo" height="300"/>
</p>
Please find our technical report on [arXiv](https://arxiv.org/abs/2502.15637). Our pre-trained weights are available on [Hugging Face](https://huggingface.co/paris-noah/Mantis-8M).
Below, we give instructions on how to install and use the package.
## Installation
### Pip installation
> [!WARNING]
> The package will be released to PyPI very soon. In the meantime, please use the editable-mode installation described below.
>
```bash
pip install mantis-tsfm
```
### Editable mode using Poetry
First, install Poetry and add the directory containing the Poetry binary to your `PATH` in your shell configuration file.
For example, on Linux systems, you can do this by running:
```bash
curl -sSL https://install.python-poetry.org | python3 -
export PATH="$HOME/.local/bin:$PATH"
```
Now you can create a virtual environment based on one of your already installed Python interpreters.
For example, if your default Python is 3.9, create the environment by running:
```bash
poetry env use 3.9
```
Alternatively, you can specify a path to the interpreter. For example, to use an Anaconda Python interpreter:
```bash
poetry env use /path/to/anaconda3/envs/my_env/bin/python
```
If you want to run any command within the environment, instead of activating the environment manually, you can use `poetry run`:
```bash
poetry run <command>
```
For example, to install the dependencies and run tests:
```bash
poetry install
poetry run pytest
```
If dependencies are not resolving correctly, try re-generating the lock file:
```bash
poetry lock
poetry install
```
## Getting started
Please refer to the `getting_started/` folder for reproducible examples of how the package can be used.
Below, we summarize the basic commands needed to use the package.
### Initialization
To load our pre-trained model with 8M parameters from Hugging Face, it is sufficient to run:
``` python
from mantis.architecture import Mantis8M
network = Mantis8M(device='cuda')
network = network.from_pretrained("paris-noah/Mantis-8M")
```
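Since the model is lightweight, it can also be loaded on a machine without a GPU; assuming the `device` argument accepts standard PyTorch device strings, it is enough to switch it to `'cpu'`:
``` python
# Same loading procedure, but on CPU (feasible thanks to the model's small size)
network = Mantis8M(device='cpu')
network = network.from_pretrained("paris-noah/Mantis-8M")
```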
### Feature Extraction
We provide a scikit-learn-like wrapper, `MantisTrainer`, that allows you to use Mantis as a feature extractor:
``` python
from mantis.trainer import MantisTrainer
model = MantisTrainer(device='cuda', network=network)
Z = model.transform(X) # X is your time series dataset
```
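The extracted features can then be fed to any standard classifier. A minimal sketch with scikit-learn (the choice of `RandomForestClassifier` and the variables `y` and `X_new` are illustrative, not part of the Mantis API):
``` python
from sklearn.ensemble import RandomForestClassifier

# Z contains one deep feature vector per time series,
# so any off-the-shelf classifier can be fitted on it.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(Z, y)  # y is a vector with class labels

# At inference time, extract features with the frozen model and predict.
Z_new = model.transform(X_new)
y_pred = clf.predict(Z_new)
```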
### Fine-tuning
If you want to fine-tune the model on your supervised dataset, use the `fit` method of `MantisTrainer`:
``` python
from mantis.trainer import MantisTrainer
model = MantisTrainer(device='cuda', network=network)
model.fit(X, y) # y is a vector with class labels
probs = model.predict_proba(X)
y_pred = model.predict(X)
```
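Since `MantisTrainer` follows the scikit-learn interface, a held-out evaluation fits in a few lines. A minimal sketch, assuming labeled arrays `X`, `y` as above and scikit-learn installed:
``` python
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hold out a test split to measure generalization after fine-tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = MantisTrainer(device='cuda', network=network)
model.fit(X_train, y_train)
print('Test accuracy:', accuracy_score(y_test, model.predict(X_test)))
```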
### Adapters
The framework allows the input to be passed through an adapter before it is sent to the foundation model. This may be useful for time series datasets with a large number of channels, since a high channel count may induce the curse of dimensionality or make fine-tuning the model infeasible.
A straightforward way to overcome these issues is to use a dimensionality reduction approach such as PCA:
``` python
from mantis.adapters import MultichannelProjector
adapter = MultichannelProjector(new_num_channels=5, base_projector='pca')
adapter.fit(X)
X_transformed = adapter.transform(X)
model = MantisTrainer(device='cuda', network=network)
Z = model.transform(X_transformed)
```
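Note that the adapter is fitted once on the training data; the same fitted adapter must then be applied to any new time series before extracting features with the frozen model.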
Another way is to add learnable layers before the foundation model and fine-tune them jointly with the prediction head:
``` python
from mantis.adapters import LinearChannelCombiner
model = MantisTrainer(device='cuda', network=network)
adapter = LinearChannelCombiner(num_channels=X.shape[1], new_num_channels=5)
model.fit(X, y, adapter=adapter, fine_tuning_type='adapter_head')
```
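Unlike the PCA projection above, which is fitted without labels before feature extraction, `LinearChannelCombiner` is trained jointly with the prediction head, so the learned channel combination is tailored to the downstream task.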
## Structure
```
├── data/              <-- two datasets for demonstration
├── getting_started/   <-- jupyter notebooks with tutorials
└── src/mantis/        <-- the main package
    ├── adapters/      <-- adapters for multichannel time series
    ├── architecture/  <-- foundation model architectures
    └── trainer/       <-- a scikit-learn-like wrapper for feature extraction or fine-tuning
```
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.
## Open-source Participation
We would be happy to receive feedback and integrate your suggestions, so do not hesitate to contribute to this project by raising a GitHub issue or contacting us by email:
- Vasilii Feofanov - vasilii [dot] feofanov [at] huawei [dot] com
## Citing Mantis 📚
If you use Mantis in your work, please cite this technical report:
```bibtex
@article{feofanov2025mantis,
  title={Mantis: Lightweight Calibrated Foundation Model for User-Friendly Time Series Classification},
  author={Vasilii Feofanov and Songkang Wen and Marius Alonso and Romain Ilbert and Hongbo Guo and Malik Tiomoko and Lujia Pan and Jianfeng Zhang and Ievgen Redko},
  journal={arXiv preprint arXiv:2502.15637},
  year={2025}
}
```