# FedCast: Federated Learning for Time Series Forecasting
<p align="center">
<img src="https://raw.githubusercontent.com/NKDataConv/FedCast/main/assets/fedcast-logo.png" alt="FedCast Logo" width="100">
</p>
FedCast is a comprehensive Python framework designed for time series forecasting using federated learning. It leverages the powerful [Flower (flwr)](https://flower.ai/) framework to enable privacy-preserving, decentralized model training on distributed time series data.
## Project Overview
The core goal of FedCast is to provide a modular, extensible, and easy-to-use platform for researchers and practitioners to develop and evaluate personalized federated learning strategies for time series analysis. The framework addresses the unique challenges of time series forecasting in federated settings, where data privacy, communication efficiency, and model personalization are critical concerns.
### Problem Statement
Traditional centralized approaches to time series forecasting require all data to be collected at a central location, which poses significant challenges:
- **Privacy Concerns**: Sensitive time series data (medical, financial, IoT) cannot be shared
- **Communication Overhead**: Large-scale time series data is expensive to transmit
- **Heterogeneity**: Different clients may have varying data distributions and patterns
- **Personalization**: Global models may not perform well for individual client patterns
FedCast addresses these challenges through federated learning, enabling collaborative model training while keeping data distributed and private.
## Architecture
FedCast is built on a modular architecture that seamlessly integrates with the Flower framework while providing specialized components for time series forecasting:
### Core Components
#### 1. **Flower Integration Layer**
- Direct integration with Flower's core functionality
- Custom client and server implementations
- Support for both synchronous and asynchronous federated learning
- Preservation of all Flower features and capabilities
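To make the integration concrete, below is a minimal sketch of what a Flower client for a PyTorch forecasting model can look like using Flower's `NumPyClient` interface. This illustrates the general pattern only, not FedCast's internal client implementation; the model and data loaders are assumed to be supplied by the caller.

```python
import flwr as fl
import torch
import torch.nn as nn


class ForecastingClient(fl.client.NumPyClient):
    """Sketch of a Flower client wrapping a PyTorch forecasting model."""

    def __init__(self, model, train_loader, val_loader):
        self.model = model
        self.train_loader = train_loader
        self.val_loader = val_loader
        self.criterion = nn.MSELoss()

    def get_parameters(self, config):
        # Hand the current weights to the server as NumPy arrays.
        return [p.detach().cpu().numpy() for p in self.model.parameters()]

    def set_parameters(self, parameters):
        # Load the aggregated global weights received from the server.
        with torch.no_grad():
            for p, new in zip(self.model.parameters(), parameters):
                p.copy_(torch.tensor(new, dtype=p.dtype))

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        optimizer = torch.optim.Adam(self.model.parameters(), lr=1e-3)
        self.model.train()
        for x, y in self.train_loader:  # one local epoch
            optimizer.zero_grad()
            loss = self.criterion(self.model(x), y)
            loss.backward()
            optimizer.step()
        return self.get_parameters(config), len(self.train_loader.dataset), {}

    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        self.model.eval()
        total, n = 0.0, 0
        with torch.no_grad():
            for x, y in self.val_loader:
                total += self.criterion(self.model(x), y).item() * len(x)
                n += len(x)
        return total / max(n, 1), n, {}
```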
#### 2. **Data Management**
- **Time Series Datasets**: Support for multiple data types (synthetic, energy, medical, financial, IoT, network, weather)
- **Data Validation**: Automatic data cleaning and validation
- **Transformation Pipelines**: Flexible data preprocessing
- **Heterogeneous Data Handling**: Support for varying data distributions across clients
- **Automatic Downloading**: Built-in data source connectors with caching
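As a simple illustration of how time series are typically prepared for federated clients, the sketch below turns synthetic sine-wave series (similar in spirit to the built-in synthetic dataset) into sliding-window samples, with each client holding a differently shifted and noised series. The `make_windows` helper is illustrative, not part of the FedCast API.

```python
import numpy as np


def make_windows(series: np.ndarray, window: int = 100):
    """Slice a univariate series into (input window, next-step target) pairs."""
    X = np.stack([series[i : i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y


# Each client holds its own series with a different phase and noise level,
# a simple way to emulate heterogeneous client data.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2_000)
client_series = [
    np.sin(t + phase) + rng.normal(0, noise, t.shape)
    for phase, noise in [(0.0, 0.05), (1.0, 0.10), (2.0, 0.20)]
]
client_data = [make_windows(s, window=100) for s in client_series]
```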
#### 3. **Model Management**
- **Model Registry**: Centralized model factory system
- **Version Control**: Model serialization and deserialization
- **Adaptation**: Model personalization and fine-tuning
- **Architecture Support**: MLP, Linear models, and extensible framework for custom models
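The registry idea can be illustrated with a small factory pattern: model builders are registered under a name and instantiated from configuration. This is a generic sketch of the pattern, not FedCast's actual registry module.

```python
import torch.nn as nn

# Illustrative registry: map names to model factories so experiments can
# select architectures from a configuration string.
MODEL_REGISTRY = {}


def register_model(name):
    def wrapper(factory):
        MODEL_REGISTRY[name] = factory
        return factory
    return wrapper


@register_model("linear")
def build_linear(window: int = 100, horizon: int = 1):
    return nn.Linear(window, horizon)


@register_model("mlp")
def build_mlp(window: int = 100, hidden: int = 64, horizon: int = 1):
    return nn.Sequential(
        nn.Linear(window, hidden),
        nn.ReLU(),
        nn.Linear(hidden, horizon),
    )


model = MODEL_REGISTRY["mlp"](window=100, hidden=64)
```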
#### 4. **Federated Learning Strategies**
- **Communication-Efficient Algorithms**: FedLAMA reduces communication overhead by up to 70%
- **Robust Aggregation**: FedNova addresses objective inconsistency in heterogeneous settings
- **Personalization**: FedTrend and other specialized strategies for time series
- **Standard Algorithms**: FedAvg, FedProx, FedOpt, SCAFFOLD, and more
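These strategies differ mainly in how client updates are weighted and corrected during aggregation. As a baseline reference, the FedAvg rule simply averages client parameters weighted by their local sample counts; a minimal NumPy sketch of that rule:

```python
import numpy as np


def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (the FedAvg update rule)."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]


# Two toy clients, each contributing one parameter array.
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
print(fedavg_aggregate([w_a, w_b], client_sizes=[100, 300]))  # -> [array([2.5, 3.5])]
```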
#### 5. **Evaluation & Experimentation**
- **Time Series Metrics**: Specialized evaluation metrics for forecasting tasks
- **MLflow Integration**: Comprehensive experiment tracking and logging
- **Visualization**: Automatic plotting of training progress and results
- **Grid Experiments**: Automated testing across multiple configurations
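The exact metric set reported by FedCast may vary by experiment, but typical point-forecast error metrics look like the following NumPy sketch (MAE, RMSE, and sMAPE):

```python
import numpy as np


def forecast_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Common point-forecast error metrics (MAE, RMSE, sMAPE)."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    smape = 100 * np.mean(
        2 * np.abs(err) / (np.abs(y_true) + np.abs(y_pred) + 1e-8)
    )
    return {"mae": float(mae), "rmse": float(rmse), "smape": float(smape)}


print(forecast_metrics(np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.3])))
```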
#### 6. **Telemetry & Monitoring**
- **MLflow Logger**: Centralized experiment tracking
- **Performance Monitoring**: Real-time training metrics
- **Result Analysis**: Comparative analysis tools
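Round-level logging with MLflow can be illustrated as follows; the experiment name, parameters, and loss values below are toy placeholders, and FedCast's own MLflow logger is assumed to record comparable information.

```python
import mlflow

# Illustrative round-by-round logging with toy values.
mlflow.set_experiment("fedcast-demo")
with mlflow.start_run(run_name="fedavg_sinus"):
    mlflow.log_params({"strategy": "FedAvg", "num_rounds": 50, "num_clients": 10})
    for rnd, loss in enumerate([0.92, 0.41, 0.18], start=1):
        mlflow.log_metric("val_loss", loss, step=rnd)
```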
### Design Principles
- **Modularity**: Clear separation of concerns with independent, replaceable components
- **Extensibility**: Plugin architecture for easy integration of new algorithms and data sources
- **Privacy-First**: Built-in privacy preservation mechanisms
- **Performance**: Optimized for communication efficiency and computational speed
- **Reproducibility**: Comprehensive logging and experiment tracking
## Key Features
- **Federated Time Series Forecasting**: Train models on time-series data without centralizing it
- **Built on Flower**: Extends the robust and flexible Flower framework
- **Modular Architecture**: Easily customize components like data loaders, models, and aggregation strategies
- **Personalization**: Supports various strategies for building models tailored to individual clients
- **Communication Efficiency**: Advanced strategies like FedLAMA reduce communication overhead significantly
- **Comprehensive Evaluation**: Specialized metrics and visualization tools for time series forecasting
- **Experiment Tracking**: Full MLflow integration for reproducible research
- **Multiple Data Sources**: Support for synthetic, real-world, and domain-specific datasets
## Technical Stack
- **Python 3.9+**: Core programming language (the package declares support for Python 3.9.2 through 3.13.2)
- **Flower**: Federated learning framework foundation
- **PyTorch**: Deep learning model implementation
- **Pandas/NumPy**: Data manipulation and numerical computing
- **MLflow**: Experiment tracking and model management
- **Poetry**: Dependency management and packaging
- **Pytest**: Testing framework
## Quick Start Example
```python
from fedcast.datasets import SinusDataset
from fedcast.cast_models import MLP
from fedcast.federated_learning_strategies import FedTrend
from fedcast.experiments import run_federated_experiment
# Load time series data
dataset = SinusDataset(num_clients=10, sequence_length=100)
# Define model architecture
model = MLP(input_size=100, hidden_size=64, output_size=1)
# Choose federated learning strategy
strategy = FedTrend()
# Run federated learning experiment
results = run_federated_experiment(
    dataset=dataset,
    model=model,
    strategy=strategy,
    num_rounds=50,
)
# Results are automatically logged to MLflow
print(f"Final accuracy: {results['final_accuracy']}")
```
## Getting Started
### Installation
#### Option 1: Install from PyPI (Recommended)
```bash
pip install fedcast
```
> **Note**: FedCast is currently in **Beta** (v0.1.1b1). While the core functionality is stable, some features may still be under development. We welcome feedback and contributions!
#### Option 2: Install from source
1. **Clone the repository:**
   ```bash
   git clone https://github.com/NKDataConv/FedCast.git
   cd FedCast
   ```
2. **Install dependencies:**
   This project uses [Poetry](https://python-poetry.org/) for dependency management and packaging.
   ```bash
   poetry install
   ```
   Or install directly with pip:
   ```bash
   pip install -e .
   ```
## Quick Start
After installation, you can start using FedCast:
```python
import fedcast
from fedcast.datasets import load_sinus_dataset
from fedcast.cast_models import MLPModel
from fedcast.federated_learning_strategies import build_fedavg_strategy
# Create a dataset
dataset = load_sinus_dataset(partition_id=0)
# Create a model
model = MLPModel()
# Create a federated learning strategy
strategy = build_fedavg_strategy()
# Your federated learning experiment here...
```
## Development
### Running Tests
To ensure the reliability and correctness of the framework, we use `pytest` for testing.
To run the full test suite, execute the following command from the root of the project:
```bash
poetry run pytest
```
This will automatically discover and run all tests located in the `tests/` directory.
### Running Experiments
FedCast provides several ways to run federated learning experiments:
#### 1. Basic Experiments
Run individual experiments with specific configurations:
```bash
# FedAvg experiment
poetry run python fedcast/experiments/basic_fedavg.py
# FedTrend experiment
poetry run python fedcast/experiments/basic_fedtrend.py
```
#### 2. Grid Search Experiments
Run comprehensive experiments across multiple configurations:
```bash
# Run all combinations of datasets, models, and strategies
poetry run python fedcast/experiments/grid_all.py
```
#### 3. Custom Experiments
Create your own experiment scripts by importing FedCast components:
```python
from fedcast.datasets import YourDataset
from fedcast.cast_models import YourModel
from fedcast.federated_learning_strategies import YourStrategy
# Implement your custom experiment logic
```
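One way to wire such a custom experiment together is Flower's simulation API, sketched below. `build_client` is a hypothetical placeholder for whatever assembles a dataset, model, and client for a given partition, and depending on your Flower version you may use `run_simulation` instead of `start_simulation`.

```python
import flwr as fl


def client_fn(cid: str):
    # `build_client` is a hypothetical helper that constructs a dataset,
    # model, and Flower NumPyClient for the given partition.
    return build_client(partition_id=int(cid)).to_client()


history = fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=10,
    config=fl.server.ServerConfig(num_rounds=20),
    strategy=fl.server.strategy.FedAvg(),
)
```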
## Monitoring and Visualization
### MLflow UI
View experiment results, compare runs, and analyze performance:
```bash
mlflow ui --host 127.0.0.1 --port 5000
```
Access the UI at `http://127.0.0.1:5000` to:
- Track experiment parameters and metrics
- Compare different federated learning strategies
- Visualize training progress and convergence
- Download model artifacts and results
### Automatic Plotting
FedCast automatically generates plots for:
- Training and validation losses per round
- Client-specific performance metrics
- Communication efficiency comparisons
- Model convergence analysis
Plots are saved in the `runs/<experiment_name>/` directory.
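If you want to reproduce a similar figure yourself from logged metrics, a minimal matplotlib sketch (with toy loss values, not real results) could look like this:

```python
import matplotlib.pyplot as plt

# Toy per-round losses for illustration only.
rounds = list(range(1, 11))
train_loss = [0.90, 0.60, 0.45, 0.36, 0.30, 0.27, 0.25, 0.24, 0.23, 0.22]
val_loss = [1.00, 0.70, 0.52, 0.44, 0.39, 0.36, 0.34, 0.33, 0.32, 0.32]

plt.plot(rounds, train_loss, label="train loss")
plt.plot(rounds, val_loss, label="validation loss")
plt.xlabel("federated round")
plt.ylabel("MSE loss")
plt.legend()
plt.savefig("loss_per_round.png")
```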
## Supporters
This project is supported by the Bundesministerium für Forschung, Technologie und Raumfahrt (BMFTR). We are grateful for their support, without which this project would not be possible.
<img src="https://raw.githubusercontent.com/NKDataConv/FedCast/main/assets/logo_bmftr.jpg" alt="BMFTR Logo" width=250>