| | |
| --- | --- |
| Name | python-aiconfig |
| Version | 1.1.34 |
| Summary | Python library for AIConfig SDK |
| Author | LastMile AI |
| Requires Python | >=3.10 |
| Upload time | 2024-04-23 18:03:44 |
> Full documentation: **[aiconfig.lastmileai.dev](https://aiconfig.lastmileai.dev/)**
## Overview
AIConfig saves prompts, models, and model parameters as source-control-friendly configs. This allows you to iterate on prompts and model parameters _separately from your application code_.
1. **Prompts as configs**: a [standardized JSON format](https://aiconfig.lastmileai.dev/docs/overview/ai-config-format) to store generative AI model settings, prompt inputs/outputs, and flexible metadata.
2. **Model-agnostic SDK**: Python & Node SDKs to use `aiconfig` in your application code. AIConfig is designed to be **model-agnostic** and **multi-modal**, so you can extend it to work with any generative AI model, including text, image and audio.
3. **AI Workbook editor**: A [notebook-like playground](https://lastmileai.dev/workbooks/clooqs3p200kkpe53u6n2rhr9) to edit `aiconfig` files visually, run prompts, tweak models and model settings, and chain things together.
### What problem it solves
Today, application code is tightly coupled with the gen AI settings for the application -- prompts, parameters, and model-specific logic are all jumbled in with app code. This coupling:
- increases complexity
- makes it hard to iterate on prompts or try different models easily
- makes it hard to evaluate prompt/model performance
AIConfig helps unwind complexity by separating prompts, model parameters, and model-specific logic from your application.
- simplifies application code -- simply call `config.run()`
- open the `aiconfig` in a playground to iterate quickly
- version control and evaluate the `aiconfig` - it's the AI artifact for your application.
![AIConfig flow](aiconfig-docs/static/img/aiconfig_dataflow.png)
### Quicknav
<ul style="margin-bottom:0; padding-bottom:0;">
<li><a href="#install">Getting Started</a></li>
<ul style="margin-bottom:0; padding-bottom:0;">
<li><a href="https://aiconfig.lastmileai.dev/docs/overview/create-an-aiconfig">Create an AIConfig</a></li>
<li><a href="https://aiconfig.lastmileai.dev/docs/overview/run-aiconfig">Run a prompt</a></li>
<li><a href="https://aiconfig.lastmileai.dev/docs/overview/parameters">Pass data into prompts</a></li>
<li><a href="https://aiconfig.lastmileai.dev/docs/overview/define-prompt-chain">Prompt Chains</a></li>
<li><a href="https://aiconfig.lastmileai.dev/docs/overview/monitoring-aiconfig">Callbacks and monitoring</a></li>
</ul>
<li><a href="#aiconfig-sdk">SDK Cheatsheet</a></li>
<li><a href="#cookbooks">Cookbooks and guides</a></li>
<ul style="margin-bottom:0; padding-bottom:0;">
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Wizard-GPT">CLI Chatbot</a></li>
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/RAG-with-ChromaDB">RAG with ChromaDB</a></li>
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/RAG-with-MongoDB">RAG with MongoDB</a></li>
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Basic-Prompt-Routing">Prompt routing</a></li>
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Function-Calling-OpenAI">OpenAI function calling</a></li>
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification">Chain of Verification</a></li>
</ul>
<li><a href="#supported-models">Supported models</a></li>
<ul style="margin-bottom:0; padding-bottom:0;">
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/llama">LLaMA2 example</a></li>
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/HuggingFace">Hugging Face (Mistral-7B) example</a></li>
<li><a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Multi-LLM-Consistency">PaLM</a></li>
</ul>
<li><a href="#extensibility">Extensibility</a></li>
<li><a href="#contributing-to-aiconfig">Contributing</a></li>
<li><a href="#roadmap">Roadmap</a></li>
<li><a href="#faqs">FAQ</a></li>
</ul>
## Features
- [x] **Source-control friendly** [`aiconfig` format](https://aiconfig.lastmileai.dev/docs/overview/ai-config-format) to save prompts and model settings, which you can use for evaluation, reproducibility and simplifying your application code.
- [x] **Multi-modal and model agnostic**. Use with any model, and serialize/deserialize data with the same `aiconfig` format.
- [x] **Prompt chaining and parameterization** with [{{handlebars}}](https://handlebarsjs.com/) templating syntax, allowing you to pass dynamic data into prompts (as well as between prompts).
- [x] **Streaming** supported out of the box, allowing you to get playground-like streaming wherever you use `aiconfig`.
- [x] **Notebook editor**. [AI Workbooks editor](https://lastmileai.dev/workbooks/clooqs3p200kkpe53u6n2rhr9) to visually create your `aiconfig`, and use the SDK to connect it to your application code.
## Install
Install with your favorite package manager for Node or Python.
### Node.js
#### `npm` or `yarn`
```bash
npm install aiconfig
```
```bash
yarn add aiconfig
```
### Python
#### `pip` or `poetry`
```bash
pip install python-aiconfig
```
```bash
poetry add python-aiconfig
```
[Detailed installation instructions](https://aiconfig.lastmileai.dev/docs/getting-started/#installation).
## Getting Started
> We cover Python instructions here; for Node.js, please see the [detailed Getting Started guide](https://aiconfig.lastmileai.dev/docs/getting-started).
In this quickstart, you will create a customizable NYC travel itinerary using `aiconfig`.
This AIConfig contains a prompt chain to get a list of travel activities from an LLM and then generate an itinerary in an order specified by the user.
> **Link to tutorial code: [here](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Getting-Started)**
https://github.com/lastmile-ai/aiconfig/assets/25641935/d3d41ad2-ab66-4eb6-9deb-012ca283ff81
### Download `travel.aiconfig.json`
> **Note**: Don't worry if you don't understand all the pieces of this yet; we'll go over it step by step.
```json
{
  "name": "NYC Trip Planner",
  "description": "Intrepid explorer with ChatGPT and AIConfig",
  "schema_version": "latest",
  "metadata": {
    "models": {
      "gpt-3.5-turbo": {
        "model": "gpt-3.5-turbo",
        "top_p": 1,
        "temperature": 1
      },
      "gpt-4": {
        "model": "gpt-4",
        "max_tokens": 3000,
        "system_prompt": "You are an expert travel coordinator with exquisite taste."
      }
    },
    "default_model": "gpt-3.5-turbo"
  },
  "prompts": [
    {
      "name": "get_activities",
      "input": "Tell me 10 fun attractions to do in NYC."
    },
    {
      "name": "gen_itinerary",
      "input": "Generate an itinerary ordered by {{order_by}} for these activities: {{get_activities.output}}.",
      "metadata": {
        "model": "gpt-4",
        "parameters": {
          "order_by": "geographic location"
        }
      }
    }
  ]
}
```
### Run the `get_activities` prompt.
> **Note**: Make sure to specify the API keys (such as [`OPENAI_API_KEY`](https://platform.openai.com/api-keys)) in your environment before proceeding.
```bash
export OPENAI_API_KEY=my_key
```
You don't need to worry about how to run inference for the model; it's all handled by AIConfig. The prompt runs with gpt-3.5-turbo since that is the `default_model` for this AIConfig.
```python
import asyncio
from aiconfig import AIConfigRuntime, InferenceOptions

async def main():
    # Load the aiconfig
    config = AIConfigRuntime.load('travel.aiconfig.json')

    # Run a single prompt (with streaming)
    inference_options = InferenceOptions(stream=True)
    await config.run("get_activities", options=inference_options)

asyncio.run(main())
```
### Run the `gen_itinerary` prompt.
This prompt depends on the output of `get_activities`. It also takes in parameters (user input) to determine the customized itinerary.
Let's take a closer look:
**`gen_itinerary` prompt:**
```
"Generate an itinerary ordered by {{order_by}} for these activities: {{get_activities.output}}."
```
**prompt metadata:**
```json
{
  "metadata": {
    "model": "gpt-4",
    "parameters": {
      "order_by": "geographic location"
    }
  }
}
```
Observe the following:
1. The prompt depends on the output of the `get_activities` prompt.
2. It also depends on an `order_by` parameter (using {{handlebars}} syntax).
3. It uses **gpt-4**, whereas the `get_activities` prompt it depends on uses **gpt-3.5-turbo**.
> Effectively, this is a prompt chain between the `gen_itinerary` and `get_activities` prompts, _as well as_ a model chain between **gpt-3.5-turbo** and **gpt-4**.
Let's run this with AIConfig:
Replace `config.run` above with this:
```python
await config.run("gen_itinerary", params={"order_by": "duration"}, options=inference_options, run_with_dependencies=True)
```
Notice how simple the syntax is for a fairly complex task: running two different prompts across two different models and chaining one's output into the other's input.
The code will just run `get_activities`, then pipe its output as an input to `gen_itinerary`, and finally run `gen_itinerary`.
### Save the AIConfig
Let's save the AIConfig back to disk, and serialize the outputs from the latest inference run as well:
```python
# Save the aiconfig to disk, and serialize outputs from the model run
config.save('updated.aiconfig.json', include_outputs=True)
```
### Edit `aiconfig` in a notebook editor
We can iterate on an `aiconfig` using a notebook-like editor called an **AI Workbook**. Now that we have an `aiconfig` file artifact that encapsulates the generative AI part of our application, we can iterate on it separately from the application code that uses it.
1. Go to https://lastmileai.dev.
2. Go to Workbooks page: https://lastmileai.dev/workbooks
3. Click the dropdown next to '+ New Workbook' and select 'Create from AIConfig'
4. Upload `travel.aiconfig.json`
https://github.com/lastmile-ai/aiconfig/assets/81494782/5d901493-bbda-4f8e-93c7-dd9a91bf242e
Try out the workbook playground here: **[NYC Travel Workbook](https://lastmileai.dev/workbooks/clooqs3p200kkpe53u6n2rhr9)**
> **We are working on a local editor that you can run yourself. For now, please use the hosted version on https://lastmileai.dev.**
### Additional Guides
There is a lot you can do with `aiconfig`. We have several other tutorials to help get you started:
- [Create an AIConfig from scratch](https://aiconfig.lastmileai.dev/docs/overview/create-an-aiconfig)
- [Run a prompt](https://aiconfig.lastmileai.dev/docs/overview/run-aiconfig)
- [Pass data into prompts](https://aiconfig.lastmileai.dev/docs/overview/parameters)
- [Prompt chains](https://aiconfig.lastmileai.dev/docs/overview/define-prompt-chain)
- [Callbacks and monitoring](https://aiconfig.lastmileai.dev/docs/overview/monitoring-aiconfig)
Here are some example uses:
- [CLI Chatbot](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Wizard-GPT)
- [RAG with AIConfig](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/RAG-with-AIConfig)
- [Prompt routing](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Basic-Prompt-Routing)
- [OpenAI function calling](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Function-Calling-OpenAI)
- [Chain of Verification](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification)
### OpenAI Introspection API
If you are already using the OpenAI completion API in your application, you can very quickly start saving those messages in an `aiconfig`.
Usage: see openai_wrapper.ipynb.
You can then continue using the `openai` completion API as normal. When you want to save the config, just call `new_config.save()` and all your OpenAI completion calls will get serialized to disk.
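A minimal sketch of that flow, assuming the wrapper helper referenced in the linked notebook and guide; the import path, the `create_and_save_to_config` name, and its signature are assumptions here, so treat the guide below as the source of truth:
```python
import openai

# Assumed import path and helper name -- see the linked guide for the real API
from aiconfig.ChatCompletion import create_and_save_to_config

# Wrap the OpenAI completion entrypoint so calls are recorded into an AIConfig
# (config_file_path is an illustrative keyword argument)
new_config = create_and_save_to_config(config_file_path="aiconfig.json")

# ... continue making openai completion calls as normal ...

# Serialize every recorded completion call to disk
new_config.save()
```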
> [**Detailed guide here**](https://aiconfig.lastmileai.dev/docs/overview/create-an-aiconfig#openai-api-python-wrapper)
## Supported Models
AIConfig supports the following models out of the box:
- OpenAI chat models (GPT-3, GPT-3.5, GPT-4)
- LLaMA2 (running locally)
- Google PaLM models (PaLM chat)
- Hugging Face text generation models (e.g. Mistral-7B)
### Examples
- [OpenAI](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Function-Calling-OpenAI)
- [LLaMA example](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/llama)
- [Hugging Face (Mistral-7B) example](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/HuggingFace)
- [PaLM](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Multi-LLM-Consistency)
> If you need to use a model that isn't provided out of the box, you can implement a `ModelParser` for it (see [Extensibility](#extensibility)). **We welcome [contributions](https://aiconfig.lastmileai.dev/docs/contributing).**
## AIConfig Schema
[AIConfig specification](https://aiconfig.lastmileai.dev/docs/overview/ai-config-format)
## AIConfig SDK
> Read the [Usage Guide](https://aiconfig.lastmileai.dev/docs/usage-guide) for more details.
The AIConfig SDK supports CRUD operations for prompts, models, parameters and metadata. Here are some common examples.
The root interface is the `AIConfigRuntime` object. That is the entrypoint for interacting with an AIConfig programmatically.
Let's go over a few key CRUD operations to give a glimpse.
### AIConfig `create`
```python
config = AIConfigRuntime.create("aiconfig name", "description")
```
### Prompt `resolve`
`resolve` deserializes an existing `Prompt` into the data object that its model expects.
```python
config.resolve("prompt_name", params)
```
`params` are overrides you can specify to resolve any `{{handlebars}}` templates in the prompt. See the `gen_itinerary` prompt in the Getting Started example.
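For instance, reusing the Getting Started config (treating `resolve` as awaitable is an assumption, consistent with the `await config.run(...)` calls shown earlier):
```python
# Inspect the fully-templated gpt-4 payload without running inference;
# treating resolve as awaitable is an assumption
resolved = await config.resolve("gen_itinerary", {"order_by": "cost"})
print(resolved)
```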
### Prompt `serialize`
`serialize` is the inverse of `resolve` -- it turns the data object that a model understands into a `Prompt` object that can be stored in the `aiconfig` format.
```python
config.serialize("model_name", data, "prompt_name")
```
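Here `data` is the model-native request payload. A hedged illustration with an OpenAI-style chat payload (the exact shape each parser expects is model-specific, and treating `serialize` as awaitable is an assumption):
```python
# Illustrative OpenAI-style payload; field values are assumptions
data = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Tell me a joke about NYC."}],
}

# Produces Prompt object(s) that can be stored in the aiconfig
prompts = await config.serialize("gpt-4", data, "tell_joke")
```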
### Prompt `run`
`run` is used to run inference for the specified `Prompt`.
```python
await config.run("prompt_name", params)
```
### `run_with_dependencies`
This is a variant of `run` that also re-runs all upstream prompt dependencies.
For example, in [`travel.aiconfig.json`](#download-travelaiconfigjson), the `gen_itinerary` prompt references the output of the `get_activities` prompt using `{{get_activities.output}}`.
Running this function will first execute `get_activities`, and use its output to resolve the `gen_itinerary` prompt before executing it.
This is transitive, so it computes the Directed Acyclic Graph of dependencies to execute. Complex relationships can be modeled this way.
```python
config.run_with_dependencies("gen_itinerary")
```
### Updating metadata and parameters
Use the `get/set_metadata` and `get/set_parameter` methods to interact with metadata and parameters (`set_parameter` is just syntactic sugar to update `"metadata.parameters"`).
```python
config.set_metadata("key", data, "prompt_name")
```
Note: if `"prompt_name"` is specified, the metadata is updated specifically for that prompt. Otherwise, the global metadata is updated.
### Register new `ModelParser`
Use `AIConfigRuntime.register_model_parser` if you want to use a different `ModelParser`, or to configure AIConfig to work with an additional model.
AIConfig uses the model name string to retrieve the right `ModelParser` for a given Prompt (see `AIConfigRuntime.get_model_parser`), so you can register a different ModelParser for the same ID to override which `ModelParser` handles a Prompt.
For example, suppose I want to use `MyOpenAIModelParser` to handle `gpt-4` prompts. I can do the following at the start of my application:
```python
# myModelParserInstance is an instance of your ModelParser subclass
AIConfigRuntime.register_model_parser(myModelParserInstance, ["gpt-4"])
```
### Callback events
Use callback events to trace and monitor what's going on -- helpful for debugging and observability.
```python
from aiconfig import AIConfigRuntime, CallbackEvent, CallbackManager

config = AIConfigRuntime.load('aiconfig.json')

async def my_custom_callback(event: CallbackEvent) -> None:
    print(f"Event triggered: {event.name}", event)

callback_manager = CallbackManager([my_custom_callback])
config.set_callback_manager(callback_manager)

await config.run("prompt_name")
```
[**Read more** here](https://aiconfig.lastmileai.dev/docs/overview/monitoring-aiconfig)
## Extensibility
AIConfig is designed to be customized and extended for your use-case. The [Extensibility](https://aiconfig.lastmileai.dev/docs/extensibility) guide goes into more detail.
Currently, there are 3 core ways to extend AIConfig:
1. [Supporting other models](https://aiconfig.lastmileai.dev/docs/extensibility#1-bring-your-own-model) - define a ModelParser extension
2. [Callback event handlers](https://aiconfig.lastmileai.dev/docs/extensibility#2-callback-handlers) - tracing and monitoring
3. [Custom metadata](https://aiconfig.lastmileai.dev/docs/extensibility#3-custom-metadata) - save custom fields in `aiconfig`
## Contributing to `aiconfig`
This is our first open-source project and we'd love your help.
See our [contributing guidelines](https://aiconfig.lastmileai.dev/docs/contributing) -- we would especially love help adding support for additional models that the community wants.
## Cookbooks
We provide several guides to demonstrate the power of `aiconfig`.
> **See the [`cookbooks`](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks) folder for examples to clone.**
### Chatbot
- [Wizard GPT](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Wizard-GPT) - speak to a wizard on your CLI
- [CLI-mate](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Cli-Mate) - helps you make code mods interactively on your codebase.
### Retrieval Augmented Generation (RAG)
- [RAG with AIConfig](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/RAG-with-AIConfig)
At its core, RAG is about passing data into prompts. Read how to [pass data](https://aiconfig.lastmileai.dev/docs/overview/parameters) with AIConfig; a short sketch follows.
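A hedged sketch of the pattern (the retrieval stub and the prompt/parameter names are hypothetical):
```python
from aiconfig import AIConfigRuntime

def retrieve(query: str) -> list[str]:
    # Hypothetical retrieval step -- swap in ChromaDB, MongoDB, etc.
    return ["Doc snippet A about NYC", "Doc snippet B about NYC"]

async def answer(config: AIConfigRuntime, question: str):
    docs = retrieve(question)
    # Template the retrieved context into the prompt via handlebars parameters;
    # "answer_question", "context", and "question" are illustrative names
    return await config.run(
        "answer_question",
        params={"context": "\n\n".join(docs), "question": question},
    )
```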
### Function calling
- [OpenAI function calling](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Function-Calling-OpenAI)
### Prompt routing
- [Prompt routing](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Basic-Prompt-Routing)
### Chain of Thought
A variant of chain-of-thought prompting is Chain of Verification (CoVe), which helps reduce hallucinations. Check out the aiconfig cookbook for CoVe:
- [Chain of Verification](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification)
### Using local LLaMA2 with `aiconfig`
- [LLaMA example](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/llama)
### Hugging Face text generation
- [Hugging Face (Mistral-7B) example](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/HuggingFace)
### Google PaLM
- [PaLM](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Multi-LLM-Consistency)
## Roadmap
This project is under active development.
If you'd like to help, please see the [contributing guidelines](#contributing-to-aiconfig).
Please create issues for additional capabilities you'd like to see.
Here's what's already on our roadmap:
- Evaluation interfaces: allow `aiconfig` artifacts to be evaluated with user-defined eval functions.
  - We are also considering integrating with existing evaluation frameworks.
- Local editor for `aiconfig`: enable you to interact with aiconfigs more intuitively.
- OpenAI Assistants API support
- Multi-modal ModelParsers:
  - GPT4-V support
  - DALLE-3
  - Whisper
  - HuggingFace image generation
## FAQs
### How should I edit an `aiconfig` file?
Editing a config should be done either programmatically via the SDK or via the UI (workbooks):
- [Programmatic](https://github.com/lastmile-ai/aiconfig/blob/main/cookbooks/Create-AIConfig-Programmatically/create_aiconfig_programmatically.ipynb) editing.
- [Edit with a workbook](#edit-aiconfig-in-a-notebook-editor) editor: this is similar to editing an `ipynb` file as a notebook (most people never edit the raw `ipynb` JSON directly)
You should only edit the `aiconfig` by hand for minor modifications, like tweaking a prompt string or updating some metadata.
### Does this support custom endpoints?
Out of the box, AIConfig already supports all OpenAI GPT\* models, Google’s PaLM model, and any “text generation” model on Hugging Face (like Mistral). See [Supported Models](#supported-models) for more details.
Additionally, you can install `aiconfig` [extensions](https://github.com/lastmile-ai/aiconfig/tree/main/extensions) for additional models (see question below).
### Is OpenAI function calling supported?
Yes. [This example](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Function-Calling-OpenAI) goes through how to do it.
We are also working on adding support for the Assistants API.
### How can I use aiconfig with my own model endpoint?
Model support is implemented as `ModelParser`s in the AIConfig SDK, and the idea is that anyone, including you, can define a `ModelParser` (and even publish it as an extension package).
All that’s needed to use a model with AIConfig is a `ModelParser` that knows:
- how to serialize data from a model into the aiconfig format
- how to deserialize data from an aiconfig into the type the model expects
- how to run inference for the model
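As an illustration only, a minimal skeleton of what such a parser might look like. The base-class import path and the method signatures below are assumptions for sketching purposes; the Extensibility guide linked next is the source of truth:
```python
# A hedged sketch, not the official interface: the import path and
# method signatures here are illustrative assumptions.
from aiconfig.model_parser import ModelParser  # assumed import path

class MyEndpointParser(ModelParser):
    def id(self) -> str:
        # Unique name used to route prompts to this parser
        return "my-endpoint"

    async def serialize(self, prompt_name, data, aiconfig, parameters=None):
        # Turn a model-native request payload into aiconfig Prompt object(s)
        raise NotImplementedError

    async def deserialize(self, prompt, aiconfig, params=None):
        # Turn an aiconfig Prompt into the payload the endpoint expects
        raise NotImplementedError

    async def run(self, prompt, aiconfig, options=None, parameters=None):
        # Call the model endpoint with the deserialized payload
        # and return the outputs
        raise NotImplementedError
```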
For more details, see [Extensibility](https://aiconfig.lastmileai.dev/docs/extensibility).
### When should I store outputs in an `aiconfig`?
The `AIConfigRuntime` object is used to interact with an aiconfig programmatically (see [SDK usage guide](#aiconfig-sdk)). As you run prompts, this object keeps track of the outputs returned from the model.
You can choose to serialize these outputs back into the `aiconfig` by using the `config.save(include_outputs=True)` API. This can be useful for preserving context -- think of it like session state.
For example, you can use aiconfig to create a chatbot, and use the same format to save the chat history so it can be resumed for the next session.
You can also choose to save outputs to a _different_ file than the original config -- `config.save("history.aiconfig.json", include_outputs=True)`.
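Putting those pieces together, a short sketch of the session pattern (file names are illustrative):
```python
from aiconfig import AIConfigRuntime

# Session 1: run prompts, then persist outputs as chat history
config = AIConfigRuntime.load("chatbot.aiconfig.json")
# ... await config.run(...) calls ...
config.save("history.aiconfig.json", include_outputs=True)

# Session 2: reload the history and continue with prior outputs intact
config = AIConfigRuntime.load("history.aiconfig.json")
```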
### Why should I use `aiconfig` instead of things like [configurator](https://pypi.org/project/configurator/)?
It helps to have a [standardized format](http://aiconfig.lastmileai.dev/docs/overview/ai-config-format) specifically for storing generative AI prompts, inference results, model parameters and arbitrary metadata, as opposed to a general-purpose configuration schema.
With that standardization, you just need a layer that knows how to serialize/deserialize from that format into whatever the inference endpoints require.
### This looks similar to `ipynb` for Jupyter notebooks
We believe that notebooks are a perfect iteration environment for generative AI -- they are flexible, multi-modal, and collaborative.
The multi-modality and flexibility of notebooks and [`ipynb`](https://ipython.org/ipython-doc/3/notebook/nbformat.html) offer a good interaction model for generative AI. The `aiconfig` file format is extensible like `ipynb`, and the AI Workbook editor allows rapid iteration in a notebook-like IDE.
_AI Workbooks are to AIConfig what Jupyter notebooks are to `ipynb`._
There are 2 areas where we are going beyond what notebooks offer:
1. `aiconfig` is more **source-control friendly** than `ipynb`. `ipynb` stores binary data (images, etc.) by encoding it in the file, while `aiconfig` recommends using file URI references instead.
2. `aiconfig` can be imported and **connected to application code** using the AIConfig SDK.