# Meta-Learning Toolkit
<div align="center">
[](https://pypi.org/project/meta-learning-toolkit/)
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://github.com/benedictchen/meta-learning-toolkit/actions)
[](https://meta-learning-toolkit.readthedocs.io)
[](https://github.com/astral-sh/ruff)
**Production-ready meta-learning algorithms with research-accurate implementations**
*Based on 30+ foundational papers spanning 1987-2025*
[📚 Documentation](https://meta-learning-toolkit.readthedocs.io) •
[🚀 Quick Start](#-60-second-quickstart) •
[💻 CLI Tool](#-cli-tool) •
[🎯 Examples](#-examples) •
[❤️ Support](#️-support-this-research)
</div>
---
## 🧠 What is Meta-Learning?
Meta-learning, or "learning to learn," enables AI systems to rapidly adapt to new tasks with minimal examples. Instead of training from scratch on each task, meta-learning algorithms develop learning strategies that generalize across tasks.
**Key Insight**: Train on many tasks → Learn to learn → Rapidly adapt to new tasks
## ✨ Why This Toolkit?
Unlike existing libraries ([learn2learn](https://github.com/learnables/learn2learn), [torchmeta](https://github.com/tristandeleu/pytorch-meta), [higher](https://github.com/facebookresearch/higher)), this toolkit provides:
- ✅ **Test-Time Compute Scaling** - First public implementation (2024 breakthrough)
- ✅ **Research-Accurate Math** - Fixes common bugs in distance metrics, gradients
- ✅ **Production CLI** - Professional `mlfew` command-line interface
- ✅ **Complete Documentation** - Mathematical foundations + working examples
- ✅ **Modern Architecture** - Clean API, type hints, comprehensive tests
## 🚀 60-Second Quickstart
### Installation
```bash
pip install meta-learning-toolkit
```
### Basic Usage
```python
import meta_learning as ml
import torch.nn as nn
# 1. Create a feature extractor
feature_extractor = nn.Sequential(
nn.Conv2d(1, 64, 3, padding=1),
nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
nn.Flatten()
)
# 2. Create Prototypical Networks model
model = ml.ProtoHead(feature_extractor)
# 3. Load dataset and sample episode
dataset = ml.get_dataset("omniglot", split="train")
support_x, support_y, query_x, query_y = ml.make_episode(
dataset, n_way=5, k_shot=1, n_query=15
)
# 4. Run few-shot learning
logits = model(support_x, support_y, query_x)
accuracy = (logits.argmax(-1) == query_y).float().mean()
print(f"5-way 1-shot accuracy: {accuracy:.3f}")
```
**That's it!** You just ran few-shot learning with Prototypical Networks.
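For meta-training rather than a single evaluation episode, the same objects compose into an episodic loop. The sketch below is a minimal illustration only: it reuses `model`, `dataset`, and `ml.make_episode` from the quickstart and assumes `model` is a standard `nn.Module` whose logits can be fed to cross-entropy; the optimizer, learning rate, and episode count are arbitrary choices, not part of the toolkit's API.

```python
import torch
import torch.nn.functional as F

# Minimal episodic training sketch (assumes the quickstart objects above exist).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for episode in range(1000):
    support_x, support_y, query_x, query_y = ml.make_episode(
        dataset, n_way=5, k_shot=1, n_query=15
    )
    logits = model(support_x, support_y, query_x)  # query logits over the 5 ways
    loss = F.cross_entropy(logits, query_y)        # episode-level loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```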
## 💻 CLI Tool
The `mlfew` command provides a complete workflow:
```bash
# Train a model
mlfew fit --dataset omniglot --algorithm protonet --n-way 5 --k-shot 1
# Evaluate performance
mlfew eval --model checkpoints/protonet_omniglot.pt --dataset omniglot
# Run benchmarks
mlfew benchmark --datasets omniglot,miniimagenet --algorithms protonet,maml
```
## 📊 Supported Datasets
All datasets include automatic downloading, checksum verification, and canonical splits:
| Dataset | Classes | Samples/Class | Paper | Auto-Download |
|---------|---------|---------------|-------|---------------|
| **Omniglot** | 1,623 characters | 20 | Lake et al. 2015 | ✅ |
| **miniImageNet** | 100 classes | 600 | Vinyals et al. 2016 | ⚠️ Manual* |
| **CIFAR-FS** | 100 classes | 600 | Bertinetto et al. 2018 | ✅ |
*Manual download required due to ImageNet licensing. Automatic CIFAR-10 proxy provided.
## 🧪 Algorithms Implemented
| Algorithm | Paper | Year | Implementation Status |
|-----------|--------|------|----------------------|
| **Prototypical Networks** | Snell et al. | 2017 | ✅ Research-accurate |
| **MAML** | Finn et al. | 2017 | ✅ Second-order gradients |
| **Test-Time Compute Scaling** | 2024 Research | 2024 | ✅ **First public impl** |
| **Multi-Scale ProtoNet** | Enhanced | 2024 | ✅ Complete |
| **Online Meta-Learning** | Finn et al. | 2019 | ✅ Continual learning |
## 🔬 Research Accuracy
All implementations follow exact mathematical formulations from original papers:
### Prototypical Networks
```
Prototype computation: c_k = (1/|S_k|) Σ f_φ(x_i) for (x_i, y_i) ∈ S_k
Classification: p(y=k|x) = exp(-d(f_φ(x), c_k)) / Σ_k' exp(-d(f_φ(x), c_k'))
Distance: d(·,·) = ||· - ·||₂² (squared Euclidean)
```
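For reference, here is a self-contained PyTorch sketch of the same computation, independent of the toolkit's own `ProtoHead`: class prototypes as support-set means, squared Euclidean distances, and logits defined as negative distances (the sign is one of the common pitfalls noted below).

```python
import torch

def proto_logits(support_z: torch.Tensor,   # [n_support, d] embedded support set
                 support_y: torch.Tensor,   # [n_support] labels in 0..n_way-1
                 query_z: torch.Tensor,     # [n_query, d] embedded queries
                 n_way: int) -> torch.Tensor:
    # c_k = mean of the support embeddings belonging to class k
    prototypes = torch.stack(
        [support_z[support_y == k].mean(dim=0) for k in range(n_way)]
    )                                               # [n_way, d]
    # Squared Euclidean distance ||f(x) - c_k||_2^2
    dists = torch.cdist(query_z, prototypes) ** 2   # [n_query, n_way]
    # Logits are NEGATIVE distances; softmax over them gives p(y=k|x)
    return -dists

# Tiny smoke test with random embeddings (5-way, 1-shot, 15 queries)
z_s, y_s = torch.randn(5, 64), torch.arange(5)
z_q = torch.randn(15, 64)
print(proto_logits(z_s, y_s, z_q, n_way=5).shape)   # torch.Size([15, 5])
```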
### MAML
```
Inner update: φ_i = θ - α∇_θ L_Ti(f_θ)
Outer update: θ ← θ - β∇_θ Σ_Ti L_Ti(f_φi)
Gradients: Second-order (create_graph=True)
```
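The corresponding second-order update can be sketched in plain PyTorch as below; this is an illustrative toy, not the toolkit's internal implementation. It uses `torch.func.functional_call` (PyTorch 2.0+) to evaluate the model under adapted parameters and `create_graph=True` so the outer gradient flows through the inner step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

model = nn.Linear(10, 5)                                  # toy learner f_theta
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-3)   # outer step size beta
alpha = 0.01                                              # inner step size

def maml_step(support_x, support_y, query_x, query_y):
    params = dict(model.named_parameters())
    # Inner update: phi = theta - alpha * grad_theta L_support(theta)
    inner_loss = F.cross_entropy(functional_call(model, params, (support_x,)), support_y)
    grads = torch.autograd.grad(inner_loss, tuple(params.values()), create_graph=True)
    adapted = {name: p - alpha * g for (name, p), g in zip(params.items(), grads)}
    # Outer update: theta <- theta - beta * grad_theta L_query(phi)
    outer_loss = F.cross_entropy(functional_call(model, adapted, (query_x,)), query_y)
    meta_opt.zero_grad()
    outer_loss.backward()          # backprops through the inner step (second order)
    meta_opt.step()
    return outer_loss.item()

# Example: one meta-update on a random toy task
sx, sy = torch.randn(25, 10), torch.randint(0, 5, (25,))
qx, qy = torch.randn(75, 10), torch.randint(0, 5, (75,))
print(maml_step(sx, sy, qx, qy))
```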
**Common bugs fixed**: Wrong distance signs, missing second-order gradients, BatchNorm episodic leakage.
## 🚢 Installation Options
### Option 1: PyPI (Recommended)
```bash
pip install meta-learning-toolkit
```
### Option 2: Development Install
```bash
git clone https://github.com/benedictchen/meta-learning-toolkit
cd meta-learning-toolkit
pip install -e .[dev,test,datasets,visualization]
```
## 🧑‍💻 Requirements
- **Python**: 3.9+
- **PyTorch**: 2.0+
- **Core**: `numpy`, `scipy`, `scikit-learn`, `tqdm`, `rich`, `pyyaml`
- **Optional**: `matplotlib`, `seaborn`, `wandb` (for visualization)
- **Development**: `pytest`, `ruff`, `mypy`, `pre-commit`
## 📚 Documentation
Complete documentation available at: **https://meta-learning-toolkit.readthedocs.io**
- 🚀 [Quick Start Guide](https://meta-learning-toolkit.readthedocs.io/quickstart/)
- 📖 [API Reference](https://meta-learning-toolkit.readthedocs.io/api/)
- 💡 [Examples & Tutorials](https://meta-learning-toolkit.readthedocs.io/examples/)
- 🔬 [Research & Theory](https://meta-learning-toolkit.readthedocs.io/research/)
## 🧪 Testing
Comprehensive test suite with 90%+ coverage:
```bash
# Run all tests
pytest
# Run specific test categories
pytest -m "not slow" # Skip slow tests
pytest -m "regression" # Mathematical correctness
# With coverage report
pytest --cov=src/meta_learning --cov-report=html
```
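If you add your own algorithms, the `regression` category is a natural place for mathematical-correctness checks. The test below is a hypothetical example (the name and values are ours, and it assumes the `regression` marker is registered in the project's pytest configuration) verifying that a prototype equals the hand-computed class mean.

```python
import pytest
import torch

@pytest.mark.regression
def test_prototype_is_class_mean():
    # Two classes, two 3-d support embeddings each.
    support_z = torch.tensor([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0],
                              [1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
    support_y = torch.tensor([0, 0, 1, 1])
    prototypes = torch.stack(
        [support_z[support_y == k].mean(dim=0) for k in range(2)]
    )
    expected = torch.tensor([[1.0, 1.0, 1.0], [2.0, 0.0, 0.0]])
    assert torch.allclose(prototypes, expected)
```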
## 📄 License
Custom Non-Commercial License - See [LICENSE](LICENSE) for details.
**TL;DR**: Free for research and educational use. Commercial use requires permission.
## 🎓 Citation
If this toolkit helps your research, please cite:
```bibtex
@software{chen2025metalearning,
title={Meta-Learning Toolkit: Production-Ready Few-Shot Learning},
author={Chen, Benedict},
year={2025},
url={https://github.com/benedictchen/meta-learning-toolkit},
version={2.0.0}
}
```
## ❤️ Support This Research
This toolkit is developed and maintained by [Benedict Chen](mailto:benedict@benedictchen.com). If it helps your research or projects, please consider:
<div align="center">
[](https://github.com/sponsors/benedictchen)
[](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=WXQKYYKPHWXHS)
</div>
- ⭐ **Star the repository**
- 💳 **[Donate via PayPal](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=WXQKYYKPHWXHS)**
- 💖 **[Sponsor on GitHub](https://github.com/sponsors/benedictchen)**
- 🐦 **Share on social media**
- 📝 **Cite in your papers**
*Your support enables continued development of cutting-edge AI research tools!*
---
<div align="center">
**Built with ❤️ by [Benedict Chen](mailto:benedict@benedictchen.com)**
*Turning research papers into production-ready code*
</div>