<div align="center">
# smolmodels ✨
[PyPI](https://pypi.org/project/smolmodels/)
[Discord](https://discord.gg/SefZDepGMv)
Build machine learning models using natural language and minimal code
[Quickstart](#1-quickstart) |
[Features](#2-features) |
[Installation & Setup](#3-installation--setup) |
[Documentation](#4-documentation) |
[Benchmarks](#5-benchmarks)
<br>
Create machine learning models with minimal code by describing what you want them to do in
plain words. You explain the task, and the library builds a model for you, including data generation, feature
engineering, training, and packaging.
</div>
> [!NOTE]
> This library is in early development, and we're actively working on new features and improvements! Please report any
> bugs or share your feature requests on [GitHub](https://github.com/plexe-ai/smolmodels/issues)
> or [Discord](https://discord.gg/SefZDepGMv) 💛
## 1. Quickstart
Installation:
```bash
pip install smolmodels
```
Define, train and save a `Model`:
```python
import smolmodels as sm
# Step 1: define the model
model = sm.Model(
intent="Predict sentiment on a news article such that [...]",
input_schema={"headline": str, "content": str}, # [optional - can be pydantic or dict]
output_schema={"sentiment": str} # [optional - can be pydantic or dict]
)
# Step 2: build and train the model on data
model.build(
datasets=[dataset, auxiliary_dataset],
provider="openai/gpt-4o-mini",
timeout=3600
)
# Step 3: use the model to get predictions on new data
sentiment = model.predict({
"headline": "600B wiped off NVIDIA market cap",
"content": "NVIDIA shares fell 38% after [...]",
})
# Step 4: save the model so it can be loaded later for reuse
sm.save_model(model, "news-sentiment-predictor")
# Step 5: load the saved model and use it
loaded_model = sm.load_model("news-sentiment-predictor.tar.gz")
```
## 2. Features
`smolmodels` combines graph search, LLM code/data generation and code execution to produce a machine learning model
that meets the criteria of the task description. When you call `model.build()`, the library generates a graph of
possible model solutions, evaluates them, and selects the one that maximises the performance metric for this task.
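Conceptually, the selection step is a search over candidate solutions. The sketch below is purely illustrative and not the library's actual internals; `evaluate` and the candidate list are hypothetical stand-ins:

```python
import random

def evaluate(candidate: str) -> float:
    """Hypothetical stand-in for training and scoring one candidate solution."""
    return random.random()  # in reality: train the candidate and compute its metric

# Candidate solutions stand in for nodes of the generated solution graph
candidates = ["solution_a", "solution_b", "solution_c"]
best = max(candidates, key=evaluate)  # keep the highest-scoring candidate
```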
### 2.1. 💬 Define Models using Natural Language
A model is defined as a transformation from an **input schema** to an **output schema**, which behaves according to an
**intent**. The schemas can be defined either using `pydantic` models, or plain dictionaries that are convertible to
`pydantic` models.
```python
# This defines the model's identity
model = sm.Model(
intent="Predict sentiment on a news article such that [...]",
input_schema={"headline": str, "content": str}, # supported: pydantic or dict
output_schema={"sentiment": str} # supported: pydantic or dict
)
```
You describe the model's expected behaviour in plain English. The library will select a metric to optimise for,
and produce logic for feature engineering, model training, evaluation, and so on.
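For example, the same schema can be expressed with `pydantic` models instead of dictionaries. A minimal sketch mirroring the dict example above (the class names are illustrative):

```python
from pydantic import BaseModel

import smolmodels as sm

class ArticleInput(BaseModel):
    headline: str
    content: str

class SentimentOutput(BaseModel):
    sentiment: str

model = sm.Model(
    intent="Predict sentiment on a news article such that [...]",
    input_schema=ArticleInput,
    output_schema=SentimentOutput,
)
```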
### 2.2. 🎯 Model Building
The model is built by calling `model.build()`. This method takes one or more datasets and
generates a set of possible model solutions, training and evaluating them to select
the best one. The model with the highest performance metric becomes the "implementation" of the predictor.
You can specify the model building cutoff in terms of a timeout, a maximum number of solutions to explore, or both.
```python
model.build(
datasets=[dataset_a, dataset_b],
provider="openai/gpt-4o-mini",
timeout=3600, # [optional] max time in seconds
max_iterations=10 # [optional] max number of model solutions to explore
)
```
The model can now be used to make predictions, and can be saved or loaded using `sm.save_model()` or `sm.load_model()`.
```python
sentiment = model.predict({"headline": "600B wiped off NVIDIA market cap", ...})
```
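A typical persistence round trip, using the `sm.save_model()` and `sm.load_model()` calls from the quickstart (the archive name and inputs are illustrative):

```python
# Persist the trained model to an archive for later reuse
sm.save_model(model, "news-sentiment-predictor")

# Later, e.g. in a separate serving process, restore the model and predict
restored = sm.load_model("news-sentiment-predictor.tar.gz")
sentiment = restored.predict({"headline": "...", "content": "..."})
```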
### 2.3. 🎲 Data Generation and Schema Inference
The library can generate synthetic data for training and testing. This is useful if you have no data available, or
want to augment existing data. You can do this with the `sm.DatasetGenerator` class:
```python
dataset = sm.DatasetGenerator(
schema={"headline": str, "content": str, "sentiment": str}, # supported: pydantic or dict
data=existing_data
)
dataset.generate(1000)
model.build(
datasets=[dataset],
...
)
```
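If you have no seed data at all, the generator can also be used on its own. A minimal sketch, assuming the `data` argument may be omitted for purely synthetic generation:

```python
# Purely synthetic dataset; assumes `data` is optional when nothing exists yet
dataset = sm.DatasetGenerator(
    schema={"headline": str, "content": str, "sentiment": str}
)
dataset.generate(100)  # start small and scale up
```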
> [!CAUTION]
> Data generation can consume a lot of tokens. Start with a conservative number of samples in `generate()` and
> increase it if needed.
The library can also infer the input and/or output schema of your predictor if you don't specify them, based either
on the dataset you provide or on the model's intent. This is useful when you don't yet know what the model's interface
should look like. As with model definition, you can specify the schemas using `pydantic` models or plain dictionaries.
```python
# In this case, the library will infer a schema from the intent and generate data for you
model = sm.Model(intent="Predict sentiment on a news article such that [...]")
model.build(provider="openai/gpt-4o-mini")
```
> [!TIP]
> If you know how the model will be used, you will get better results by specifying the schemas explicitly.
> Schema inference is primarily intended for cases where you don't know in advance what the input/output schema
> should be at prediction time.
### 2.4. 🌐 Multi-Provider Support
You can use a range of LLM providers for model generation. Specify the provider and model in the format `provider/model`:
```python
model.build(provider="openai/gpt-4o-mini", ...)
```
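The same format works for other backends. For example, with Anthropic (the model identifier is illustrative; the corresponding API key must be set, as described in the next section):

```python
# Anthropic via the same provider/model format
model.build(
    datasets=[dataset],
    provider="anthropic/claude-3-haiku-20240307",
)
```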
See the section on installation and setup for more details on supported providers and how to configure API keys.
## 3. Installation & Setup
Install the library in the usual manner:
```bash
pip install smolmodels
```
Set your API key as an environment variable based on which provider you want to use. For example:
```bash
# For OpenAI
export OPENAI_API_KEY=<your-API-key>
# For Anthropic
export ANTHROPIC_API_KEY=<your-API-key>
# For Gemini
export GEMINI_API_KEY=<your-API-key>
```
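If you prefer to configure the key from Python, for example in a notebook, setting the environment variable before calling `model.build()` works too:

```python
import os

# Equivalent to `export OPENAI_API_KEY=...` in the shell
os.environ["OPENAI_API_KEY"] = "<your-API-key>"
```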
> [!TIP]
> The library uses LiteLLM as its provider abstraction layer. For other supported providers and models,
> check the [LiteLLM](https://docs.litellm.ai/docs/providers) documentation.
## 4. Documentation
For full documentation, visit [docs.plexe.ai](https://docs.plexe.ai).
## 5. Benchmarks
Performance was evaluated on 20 OpenML benchmark datasets and 12 Kaggle competitions. smolmodels outperformed the
baseline on 12 of the 20 OpenML datasets, and on the remaining datasets its performance was within 0.005 of the
baseline. Experiments were run on standard infrastructure (8 vCPUs, 30 GB RAM) with a 1-hour runtime limit per dataset.
Complete code and results are available at [plexe-ai/plexe-results](https://github.com/plexe-ai/plexe-results).
## 6. Contributing
We love contributions! You can get started with [issues](https://github.com/plexe-ai/smolmodels/issues),
submitting a PR with improvements, or joining the [Discord](https://discord.gg/3czW7BMj) to chat with the team.
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
## 7. License
Apache-2.0 License - see [LICENSE](LICENSE) for details.
## 8. Product Roadmap
- [X] Fine-tuning and transfer learning for small pre-trained models
- [ ] Support for non-tabular data types in model generation
- [ ] Use Pydantic for schemas and split data generation into a separate module
- [ ] Smolmodels self-hosted platform ⭐ (More details coming soon!)