foreblocks

| Field           | Value |
| --------------- | ----- |
| Name            | foreblocks |
| Version         | 0.1.4 |
| Summary         | Modular Time Series Forecasting Library |
| Upload time     | 2025-08-03 09:28:29 |
| Requires Python | >=3.8 |
| License         | MIT |
| Keywords        | time series, forecasting, deep learning, transformer, lstm, pytorch |
| Requirements    | torch, captum, numpy, pandas, matplotlib, seaborn, scikit-learn, scipy, ewtpy, statsmodels, tqdm, requests, wandb |
# foreBlocks: Modular Deep Learning Library for Time Series Forecasting

[![PyPI Version](https://img.shields.io/pypi/v/foreblocks.svg)](https://pypi.org/project/foreblocks/)
[![Python Versions](https://img.shields.io/pypi/pyversions/foreblocks.svg)](https://pypi.org/project/foreblocks/)
[![License](https://img.shields.io/github/license/lseman/foreblocks)](LICENSE)

![ForeBlocks Logo](logo.svg#gh-light-mode-only)
![ForeBlocks Logo](logo_dark.svg#gh-dark-mode-only)

**foreBlocks** is a flexible and modular deep learning library for time series forecasting, built on PyTorch. It provides a wide range of neural network architectures and forecasting strategies through a clean, research-friendly API, enabling fast experimentation and scalable deployment.

🔗 **[GitHub Repository](https://github.com/lseman/foreblocks)**

---

## 🚀 Quick Start

```bash
# Clone and install
git clone https://github.com/lseman/foreblocks
cd foreblocks
pip install -e .
```

Or install directly via PyPI:

```bash
pip install foreblocks
```

```python
from foreblocks import TimeSeriesSeq2Seq, ModelConfig, TrainingConfig
import pandas as pd
import torch

# Load your time series dataset
data = pd.read_csv('your_data.csv')
X = data.values

# Configure the model
model_config = ModelConfig(
    model_type="lstm",
    input_size=X.shape[1],
    output_size=1,
    hidden_size=64,
    target_len=24,
    teacher_forcing_ratio=0.5
)

# Initialize and train
model = TimeSeriesSeq2Seq(model_config=model_config)
X_train, y_train, _ = model.preprocess(X, self_tune=True)

# Create DataLoader and start training
from torch.utils.data import DataLoader, TensorDataset
train_dataset = TensorDataset(
    torch.tensor(X_train, dtype=torch.float32),
    torch.tensor(y_train, dtype=torch.float32)
)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

history = model.train_model(train_loader)
# X_test: a held-out input window, preprocessed the same way as X_train
predictions = model.predict(X_test)
```

---

## ✨ Key Features

| Feature                     | Description                                                         |
| --------------------------- | ------------------------------------------------------------------- |
| 🔧 **Multiple Strategies**  | Seq2Seq, Autoregressive, and Direct forecasting modes               |
| 🧩 **Modular Design**       | Easily swap and extend model components                             |
| 🤖 **Advanced Models**      | LSTM, GRU, Transformer, VAE, and more                               |
| ⚡ **Smart Preprocessing**   | Automatic normalization, differencing, EWT, and outlier handling    |
| 🎯 **Attention Modules**    | Pluggable attention layers for enhanced temporal modeling           |
| 📊 **Multivariate Support** | Designed for multi-feature time series with dynamic input handling  |
| 📈 **Training Utilities**   | Built-in trainer with callbacks, metrics, and visualizations        |
| 🔍 **Transparent API**      | Clean and extensible codebase with complete documentation           |

---

## 📖 Documentation

| Section       | Description                                      | Link                           |
| ------------- | ------------------------------------------------ | ------------------------------ |
| Preprocessing | EWT, normalization, differencing, outliers       | [Guide](docs/preprocessor.md)  |
| Custom Blocks | Registering new encoder/decoder/attention blocks | [Guide](docs/custom_blocks.md) |
| Transformers  | Transformer-based modules                        | [Docs](docs/transformer.md)    |
| Fourier       | Frequency-based forecasting layers               | [Docs](docs/fourier.md)        |
| Wavelet       | Wavelet transform modules                        | [Docs](docs/wavelet.md)        |
| DARTS         | Architecture search for forecasting              | [Docs](docs/darts.md)          |

---

## ๐Ÿ—๏ธ Architecture Overview

ForeBlocks is built around clean and extensible abstractions:

* `TimeSeriesSeq2Seq`: High-level interface for forecasting workflows
* `ForecastingModel`: Core model engine combining encoders, decoders, and heads
* `TimeSeriesPreprocessor`: Adaptive preprocessing with feature engineering
* `Trainer`: Handles training loop, validation, and visual feedback

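As a rough sketch of how these pieces relate in practice (argument names follow the examples elsewhere in this README; `ForecastingModel`, `TimeSeriesPreprocessor`, and `Trainer` are presumably composed internally by the high-level class, so only `TimeSeriesSeq2Seq` is touched directly here):

```python
from foreblocks import TimeSeriesSeq2Seq, ModelConfig, TrainingConfig

# architecture and optimization hyperparameters live in two small config objects
model_config = ModelConfig(model_type="lstm", input_size=3, output_size=1,
                           hidden_size=64, target_len=24)
training_config = TrainingConfig(num_epochs=50, learning_rate=1e-3, patience=10)

# the high-level interface builds the forecasting model and exposes
# preprocessing (model.preprocess) and training (model.train_model) on top of it
model = TimeSeriesSeq2Seq(model_config=model_config, training_config=training_config)
```
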
---

## 🔮 Forecasting Models

### 1. **Sequence-to-Sequence** (default)

```python
ModelConfig(
    model_type="lstm",
    strategy="seq2seq",
    input_size=3,
    output_size=1,
    hidden_size=64,
    num_encoder_layers=2,
    num_decoder_layers=2,
    target_len=24
)
```

### 2. **Autoregressive**

```python
ModelConfig(
    model_type="lstm",
    strategy="autoregressive",
    input_size=1,
    output_size=1,
    hidden_size=64,
    target_len=12
)
```

### 3. **Direct Multi-Step**

```python
ModelConfig(
    model_type="lstm",
    strategy="direct",
    input_size=5,
    output_size=1,
    hidden_size=128,
    target_len=48
)
```

### 4. **Transformer-based**

```python
ModelConfig(
    model_type="transformer",
    strategy="transformer_seq2seq",
    input_size=4,
    output_size=4,
    hidden_size=128,
    dim_feedforward=512,
    nheads=8,
    num_encoder_layers=3,
    num_decoder_layers=3,
    target_len=96
)
```

---

## ⚙️ Advanced Features

### Multi-Encoder/Decoder

```python
ModelConfig(
    multi_encoder_decoder=True,
    input_size=5,
    output_size=1,
    hidden_size=64,
    model_type="lstm",
    target_len=24
)
```

### Attention Integration

```python
from foreblocks.attention import AttentionLayer

attention = AttentionLayer(
    method="dot",
    attention_backend="self",
    encoder_hidden_size=64,
    decoder_hidden_size=64
)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    attention_module=attention
)
```

### Custom Preprocessing

```python
X_train, y_train, _ = model.preprocess(
    X,
    normalize=True,
    differencing=True,
    detrend=True,
    apply_ewt=True,
    window_size=48,
    horizon=24,
    remove_outliers=True,
    outlier_method="iqr",
    self_tune=True
)
```

### Scheduled Sampling

```python
def schedule(epoch):
    return max(0.0, 1.0 - 0.1 * epoch)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    scheduled_sampling_fn=schedule
)
```

---

## 🧪 Examples

### LSTM + Attention

```python
model_config = ModelConfig(
    model_type="lstm",
    input_size=3,
    output_size=1,
    hidden_size=64,
    target_len=24
)

attention = AttentionLayer(
    method="dot",
    encoder_hidden_size=64,
    decoder_hidden_size=64
)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    attention_module=attention
)
```

### Transformer Model

```python
model_config = ModelConfig(
    model_type="transformer",
    input_size=4,
    output_size=4,
    hidden_size=128,
    dim_feedforward=512,
    nheads=8,
    num_encoder_layers=3,
    num_decoder_layers=3,
    target_len=96
)

training_config = TrainingConfig(
    num_epochs=100,
    learning_rate=1e-4,
    weight_decay=1e-5,
    patience=15
)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    training_config=training_config
)
```

---

## 🛠️ Configuration Reference

### `ModelConfig`

| Parameter               | Type  | Description                        | Default |
| ----------------------- | ----- | ---------------------------------- | ------- |
| `model_type`            | str   | "lstm", "gru", "transformer", etc. | "lstm"  |
| `input_size`            | int   | Number of input features           | —       |
| `output_size`           | int   | Number of output features          | —       |
| `hidden_size`           | int   | Hidden layer dimension             | 64      |
| `target_len`            | int   | Forecast steps                     | —       |
| `num_encoder_layers`    | int   | Encoder depth                      | 1       |
| `num_decoder_layers`    | int   | Decoder depth                      | 1       |
| `teacher_forcing_ratio` | float | Ratio of teacher forcing           | 0.5     |

### `TrainingConfig`

| Parameter       | Type  | Description             | Default |
| --------------- | ----- | ----------------------- | ------- |
| `num_epochs`    | int   | Training epochs         | 100     |
| `learning_rate` | float | Learning rate           | 1e-3    |
| `batch_size`    | int   | Mini-batch size         | 32      |
| `patience`      | int   | Early stopping patience | 10      |
| `weight_decay`  | float | L2 regularization       | 0.0     |

---

## 🩺 Troubleshooting

<details>
<summary><strong>🔴 Dimension Mismatch</strong></summary>

* Confirm `input_size` and `output_size` match your data
* Ensure encoder/decoder hidden sizes are compatible

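A quick check along these lines can catch the mismatch before training (a minimal sketch; `model_config` is the `ModelConfig` instance from your setup, and the random array stands in for your data):

```python
import numpy as np

X = np.random.rand(1000, 3).astype(np.float32)  # stand-in for your [timesteps, features] array

# input_size should equal the number of feature columns fed to the model
assert X.shape[1] == model_config.input_size, (
    f"input_size={model_config.input_size}, but the data has {X.shape[1]} feature columns"
)
```
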
</details>

<details>
<summary><strong>🟡 Memory Issues</strong></summary>

* Reduce `batch_size`, `hidden_size`, or sequence length
* Use gradient accumulation or mixed precision

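For the second point, a minimal sketch of gradient accumulation plus mixed precision in plain PyTorch (this is generic PyTorch, not a foreblocks API; `net` is a stand-in module and `train_loader` any DataLoader of matching (input, target) batches):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1)).cuda()  # stand-in forecaster
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()
accum_steps = 4  # effective batch size = DataLoader batch_size * accum_steps

for step, (xb, yb) in enumerate(train_loader):
    xb, yb = xb.cuda(), yb.cuda()
    with torch.cuda.amp.autocast():               # run the forward pass in reduced precision
        loss = nn.functional.mse_loss(net(xb), yb) / accum_steps
    scaler.scale(loss).backward()                 # accumulate scaled gradients
    if (step + 1) % accum_steps == 0:             # update weights every accum_steps batches
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```
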
</details>

<details>
<summary><strong>🟠 Poor Predictions</strong></summary>

* Try different `strategy`
* Use attention mechanisms
* Fine-tune hyperparameters (e.g. `hidden_size`, dropout)

</details>

<details>
<summary><strong>🔵 Training Instability</strong></summary>

* Clip gradients (`clip_grad_norm_`)
* Use learning rate schedulers (`ReduceLROnPlateau`)

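Both fixes in plain PyTorch, as a generic sketch (shown with a stand-in `nn.Module` rather than the built-in trainer, which may already expose these options):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))  # stand-in forecaster
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
# halves the learning rate when the monitored metric stops improving for 5 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=5)

def training_step(xb, yb):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(net(xb), yb)
    loss.backward()
    # cap the global gradient norm at 1.0 to keep updates bounded
    torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()

# after each validation epoch:
# scheduler.step(val_loss)
```
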
</details>

---

## 💡 Best Practices

* ✅ Always normalize input data
* ✅ Evaluate with appropriate multi-step metrics (e.g. MAPE, MAE)
* ✅ Try simple models (LSTM) before complex ones (Transformer)
* ✅ Use `self_tune=True` in preprocessing for sensible defaults
* ✅ Split validation data chronologically (see the sketch below)
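For the metric and split recommendations, a minimal NumPy sketch (generic code, independent of the foreblocks API):

```python
import numpy as np

def chronological_split(X, val_fraction=0.2):
    """Keep the most recent observations for validation; never shuffle time series."""
    split = int(len(X) * (1 - val_fraction))
    return X[:split], X[split:]

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred, eps=1e-8):
    # eps avoids division by zero for targets at or near zero
    return 100.0 * np.mean(np.abs((y_true - y_pred) / (np.abs(y_true) + eps)))

train_part, val_part = chronological_split(np.arange(100, dtype=np.float32).reshape(-1, 1))
```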

---

## 🤝 Contributing

We welcome contributions! Visit the [GitHub repo](https://github.com/lseman/foreblocks) to:

* Report bugs 🐛
* Request features 💡
* Improve documentation 📚
* Submit PRs 🔧

---

## 📄 License

This project is licensed under the MIT License. See [LICENSE](LICENSE).

            
