# 🚀 NanoFed
![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/camille-004/nanofed/ci.yml?style=for-the-badge)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/nanofed?style=for-the-badge)
![Read the Docs](https://img.shields.io/readthedocs/nanofed?style=for-the-badge)
![GitHub License](https://img.shields.io/github/license/camille-004/nanofed?style=for-the-badge)
![PyPI - Status](https://img.shields.io/pypi/status/nanofed?style=for-the-badge)
**NanoFed**: *Simplifying the development of privacy-preserving distributed ML models.*
---
## 🌍 What is Federated Learning?
Federated Learning (FL) is a **distributed machine learning paradigm** that trains a global model across multiple clients (devices or organizations) without sharing their raw data. Each client trains locally and sends only model updates to a central server, which aggregates them into the global model.
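To make the aggregation step concrete, here is a minimal sketch of federated averaging (FedAvg) in plain PyTorch. The function name and structure are illustrative only, not part of the NanoFed API:

```python
import torch

def fedavg(client_states: list[dict[str, torch.Tensor]],
           client_sizes: list[int]) -> dict[str, torch.Tensor]:
    """Weighted-average client model updates by local dataset size (FedAvg)."""
    total = sum(client_sizes)
    global_state = {}
    for key in client_states[0]:
        # Each parameter tensor is averaged, weighted by how much data the client holds.
        global_state[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```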
### **Key Benefits**
| 🌟 Feature | Description |
|------------------------|--------------------------------------------------|
| 🔒 **Privacy Preservation** | Data stays securely on devices. |
| 🚀 **Resource Efficiency** | Decentralized training reduces transfer overhead.|
| 🌐 **Scalable AI** | Enables collaborative training environments. |
---
## 📦 Installation
### **Requirements**
- Python `>=3.10, <3.13`
- All other dependencies are installed automatically
### **Install with Pip**
```bash
pip install nanofed
```
### **Development Installation**
```bash
git clone https://github.com/camille-004/nanofed.git
cd nanofed
make install
```
---
## 📖 Documentation
📚 **Learn how to use NanoFed in our guides and API references.**
👉 [**Read the Docs**](https://nanofed.readthedocs.io)
---
## ✨ Key Features
- 🔒 **Privacy-First**: Keep data on devices while training.
- 🚀 **Easy-to-Use**: Simple APIs with seamless PyTorch integration.
- 🔧 **Flexible**: Customizable aggregation strategies and extensible architecture.
- 💻 **Production Ready**: Robust error handling and logging.
### **Feature Overview**
| Feature | Description |
|--------------------------|--------------------------------------------------|
| 🔒 **Privacy-First** | Data never leaves devices. |
| 🚀 **Intuitive API** | Built for developers with PyTorch support. |
| 🔧 **Flexible Aggregation** | Supports custom strategies. |
| 💻 **Production Ready** | Async communication, robust error handling. |
---
## 🔧 Quick Start
Train a model using federated learning in just a few lines of code:
```python
import asyncio

from nanofed import HTTPClient, TorchTrainer, TrainingConfig

async def run_client(client_id: str, server_url: str):
    # Configure local training (consumed by TorchTrainer; the training loop is omitted here for brevity).
    training_config = TrainingConfig(epochs=1, batch_size=256, learning_rate=0.1)

    async with HTTPClient(server_url, client_id) as client:
        # Fetch the current global model from the server.
        model_state, _ = await client.fetch_global_model()
        # ... train locally, then submit the updated parameters for aggregation.
        await client.submit_update(model_state)

if __name__ == "__main__":
    asyncio.run(run_client("client1", "http://localhost:8080"))
```
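The snippet above assumes a NanoFed server is already running at `http://localhost:8080`. For context, here is a rough, framework-only illustration of what the local training step between fetching and submitting could look like, written in plain PyTorch rather than against NanoFed's `TorchTrainer` (whose exact interface may differ):

```python
import torch
from torch import nn

def local_train(model: nn.Module, loader, epochs: int = 1, lr: float = 0.1) -> dict:
    """Run a few epochs of local SGD and return the updated state dict."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model.state_dict()
```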
---
## 🛠️ Getting Help
Need assistance? Here are some helpful resources:
| Resource | Description |
|------------------------|------------------------------------------------|
| 📚 **[Documentation](https://nanofed.readthedocs.io)** | Learn how to use NanoFed effectively. |
| 🐛 **[Issue Tracker](https://github.com/camille-004/nanofed/issues)** | Report bugs or request features. |
| 🛠️ **[Source Code](https://github.com/camille-004/nanofed)** | Browse the NanoFed repository on GitHub. |
---
## ⚖️ License
NanoFed is licensed under the **GNU General Public License v3.0 or later (GPL-3.0-or-later)**.
See the [LICENSE](https://github.com/camille-004/nanofed/blob/main/LICENSE) file for details.
---
## 👩‍💻 Contributing
Contributions are welcome! We follow the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) specification. See our [contribution guidelines](https://github.com/camille-004/nanofed/blob/main/CONTRIBUTING.md) for detailed instructions.
Example commit message:
```bash
feat(client): add retry mechanism
```
---
## 🛠️ Development Roadmap
### ✅ Completed
**Core Features for V1**
- Basic client-server architecture with HTTP communication
- Simple global model management
- Basic FedAvg implementation
- Local training support
- Support for PyTorch models
- Synchronous training (all clients must complete before aggregation)
- Basic error handling and logging
---
### 🚀 Future Enhancements
**Planned Features**
- Advanced privacy features: Differential Privacy (DP), Secure Multiparty Computation (MPC), Homomorphic Encryption (HE)
- Asynchronous updates for faster and more flexible training
- Non-IID data handling for diverse client datasets
- Custom aggregation strategies for specific use cases
- gRPC implementation for high-performance communication
- Model compression techniques to reduce bandwidth usage
- Fault tolerance mechanisms for unreliable clients or servers
---
> Made with ❤️ and 🧠 by [Camille Dunning](https://github.com/camille-004).