| Field | Value |
| --- | --- |
| Name | neat-llm |
| Version | 0.1.1a17 |
| Summary | Neat is a simpler and more intuitive abstraction for quickly working with LLMs. Easily create tool-calling agents, generate structured output, and switch between a wide range of model providers, simplifying the process of building and prototyping LLM applications. |
| home_page | None |
| upload_time | 2024-11-16 09:34:25 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | None |
| keywords | ai, language-model, llm, llm-agents |
| requirements | No requirements were recorded. |
# neat-llm
A simpler abstraction for working with Large Language Models (LLMs).
## Features
- **Unified Interface**: Work with multiple LLM providers (OpenAI, Anthropic, Cohere, Mistral) through a single, consistent API.
- **Async-First**: Built for modern async Python applications with streaming support.
- **Prompt Management**: Create, version, and reuse prompts easily. (In development)
- **Tool Integration**: Seamlessly integrate custom tools and functions for LLMs to use.
- **Structured Outputs**: Define and validate structured outputs using Pydantic models.
- **Type Safety**: Leverage Python's type hinting for a safer development experience.
- **Flexible Configuration**: Easy-to-use configuration management with environment variable support.
- **Conversation Mode**: Engage in multi-turn dialogues with your agent.
- **Streaming Support**: Stream responses chunk-by-chunk for real-time applications.
- **Flexible Message Formatting**: Construct messages with either neat's helper methods or the traditional message-dictionary format.
**Note**: Prompt versioning and database features are currently under development and may change in future releases.
## Installation
```bash
pip install neat-llm
```
## API Key Setup
To use neat-llm with various LLM providers, set up your API keys using one of these methods:
1. Create a `.env` file in your project root:
```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
COHERE_API_KEY=your_cohere_api_key
MISTRAL_API_KEY=your_mistral_api_key
```
2. Set API keys programmatically:
```python
from neat import neat_config

neat_config.openai_api_key = "your_openai_api_key"
neat_config.anthropic_api_key = "your_anthropic_api_key"
neat_config.cohere_api_key = "your_cohere_api_key"
neat_config.mistral_api_key = "your_mistral_api_key"
```
Replace `your_*_api_key` with your actual API keys from the respective providers.
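If you manage secrets outside a `.env` file, the two approaches can be combined by reading keys from the process environment yourself. A minimal sketch, assuming `neat_config` accepts plain string assignments as shown above:

```python
import os

from neat import neat_config

# Pull keys from the process environment; only set the ones that are present.
for provider in ("openai", "anthropic", "cohere", "mistral"):
    key = os.environ.get(f"{provider.upper()}_API_KEY")
    if key:
        setattr(neat_config, f"{provider}_api_key", key)
```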
## Quick Start
neat-llm offers two ways to construct messages: helper methods for convenience, and the traditional message-dictionary format for those who prefer it. Both approaches are fully supported.
### Basic Usage
Here's a simple example using neat's helper methods:
```python
from neat import Neat
import asyncio

neat = Neat()

@neat.lm()
async def generate_story(theme: str, length: int):
    return [
        neat.system("You are a creative story writer."),
        neat.user(f"Write a {length}-word story about {theme}."),
    ]

async def main():
    story = await generate_story("time travel", 100)
    print(f"Generated Story:\n{story}")

if __name__ == "__main__":
    asyncio.run(main())
```
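The feature list promises easy switching between model providers, though this README does not show the exact API for it. A hypothetical sketch, assuming `neat.lm()` accepts a model identifier (the `model` parameter name is an assumption, not confirmed here):

```python
# Hypothetical: the `model` parameter below is assumed, not documented in this README.
@neat.lm(model="gpt-4o-mini")
async def summarize(text: str):
    return [
        neat.system("You summarize text in one sentence."),
        neat.user(text),
    ]
```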
### Using Traditional Dictionary Format
Here's the same example using the traditional dictionary format:
```python
from neat import Neat
import asyncio

neat = Neat()

@neat.lm()
async def generate_story(theme: str, length: int):
    return [
        {"role": "system", "content": "You are a creative story writer."},
        {"role": "user", "content": f"Write a {length}-word story about {theme}."},
    ]

async def main():
    story = await generate_story("time travel", 100)
    print(f"Generated Story:\n{story}")

if __name__ == "__main__":
    asyncio.run(main())
```
### Streaming Responses
neat-llm supports streaming responses for real-time applications:
```python
from neat import Neat
import asyncio

neat = Neat()

@neat.lm(stream=True)
async def generate_story(theme: str):
    return [
        neat.system("You are a creative story writer."),
        neat.user(f"Write a story about {theme}, piece by piece."),
    ]

async def main():
    async for chunk in await generate_story("time travel"):
        print(chunk, end="", flush=True)
    print("\nDone!")

if __name__ == "__main__":
    asyncio.run(main())
```
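Since each chunk arrives as a piece of text, assembling the full response is plain string concatenation. For example, to display the story as it streams while also keeping it for later use:

```python
# Collect streamed chunks into the complete story while printing them live.
chunks = []
async for chunk in await generate_story("time travel"):
    print(chunk, end="", flush=True)
    chunks.append(chunk)
story = "".join(chunks)
```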
## Advanced Usage
### Custom Tools
neat-llm supports tool integration for enhanced capabilities:
```python
from neat import Neat
import random
import asyncio

neat = Neat()

# Custom tool to get weather information
def get_weather(location: str) -> dict:
    """Fetch current weather information for a given location."""
    # Simulating weather data for demonstration
    temp = round(random.uniform(-5, 35), 1)
    conditions = random.choice(["Sunny", "Cloudy", "Rainy", "Windy", "Snowy"])
    return {"temperature": temp, "conditions": conditions}

# Custom tool to recommend clothing based on weather
def recommend_clothing(weather: dict) -> dict:
    """Recommend clothing based on weather conditions."""
    if weather["temperature"] < 10:
        return {"top": "Warm coat", "bottom": "Thick pants", "accessories": "Scarf and gloves"}
    elif 10 <= weather["temperature"] < 20:
        return {"top": "Light jacket", "bottom": "Jeans", "accessories": "Light scarf"}
    else:
        return {"top": "T-shirt", "bottom": "Shorts", "accessories": "Sunglasses"}

# Register the tools
neat.add_tool(get_weather)
neat.add_tool(recommend_clothing)

@neat.lm(tools=[get_weather, recommend_clothing])
async def assistant():
    return [
        neat.system(
            "You are a helpful weather and fashion assistant. Use the get_weather tool "
            "to check the weather for specific locations, and the recommend_clothing "
            "tool to suggest appropriate outfits based on the weather."
        ),
        neat.user("What's the weather like in Paris today, and what should I wear?"),
    ]

async def main():
    result = await assistant()
    print(f"Weather and Fashion Assistant:\n{result}")

if __name__ == "__main__":
    asyncio.run(main())
```
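Because tools are registered as plain Python functions, they can be exercised directly, which is handy for checking their output shape before handing them to the model:

```python
# Tools are ordinary functions, so they can be tested without an LLM call.
weather = get_weather("Paris")
print(weather)                      # e.g. {"temperature": 12.3, "conditions": "Cloudy"}
print(recommend_clothing(weather))  # e.g. {"top": "Light jacket", ...}
```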
### Streaming with Tools
Tools can also be used with streaming responses:
```python
@neat.lm(tools=[get_weather, recommend_clothing], stream=True)
async def assistant():
    return [
        neat.system("You are a helpful weather and fashion assistant."),
        neat.user("What's the weather like in Paris today, and what should I wear?"),
    ]

async def main():
    async for chunk in await assistant():
        if isinstance(chunk, dict):  # Tool call result
            print("\nTool Call:", chunk)
        else:  # Regular content
            print(chunk, end="", flush=True)

if __name__ == "__main__":
    asyncio.run(main())
```
### Structured Outputs
Use Pydantic models to define and validate structured outputs:
```python
from neat import Neat
from pydantic import BaseModel, Field
import asyncio

neat = Neat()

class MovieRecommendation(BaseModel):
    """Represents a movie recommendation with details."""

    title: str = Field(..., description="The title of the recommended movie")
    year: int = Field(..., description="The release year of the movie")
    genre: str = Field(..., description="The primary genre of the movie")
    reason: str = Field(..., description="A brief explanation for why this movie is recommended")

@neat.lm(response_model=MovieRecommendation)
async def recommend_movie(preferences: str):
    return [
        neat.system("You are a movie recommendation expert."),
        neat.user(f"Recommend a movie based on these preferences: {preferences}"),
    ]

async def main():
    preferences = "I like sci-fi movies with mind-bending plots"
    movie = await recommend_movie(preferences)
    print(f"Movie: {movie.title} ({movie.year})\nGenre: {movie.genre}\nReason: {movie.reason}")

if __name__ == "__main__":
    asyncio.run(main())
```
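Because the output is validated against a Pydantic model, a malformed response can fail validation. How neat-llm surfaces that failure isn't documented here; a defensive sketch, assuming a `pydantic.ValidationError` propagates to the caller:

```python
from pydantic import ValidationError

async def safe_recommend(preferences: str):
    # Assumption: validation failures propagate as pydantic.ValidationError.
    try:
        return await recommend_movie(preferences)
    except ValidationError as exc:
        print(f"Model output failed validation: {exc}")
        return None
```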
### Streaming Structured Outputs
Structured outputs can also be streamed:
```python
@neat.lm(response_model=MovieRecommendation, stream=True)
async def recommend_movie(preferences: str):
    return [
        neat.system("You are a movie recommendation expert."),
        neat.user(f"Recommend a movie based on these preferences: {preferences}"),
    ]

async def main():
    preferences = "I like sci-fi movies with mind-bending plots"
    async for chunk in await recommend_movie(preferences):
        if isinstance(chunk, MovieRecommendation):
            print("\nReceived recommendation:", chunk)
        else:
            print(chunk, end="", flush=True)

if __name__ == "__main__":
    asyncio.run(main())
```
### Conversation Mode
Engage in interactive dialogues with your AI assistant:
```python
from neat import Neat
import asyncio

neat = Neat()

@neat.lm(conversation=True)
async def chat_with_ai():
    return [
        neat.system("You are a friendly and knowledgeable AI assistant."),
        neat.user("Hello! What shall we discuss?"),
    ]

async def main():
    await chat_with_ai()  # This starts an interactive conversation

if __name__ == "__main__":
    asyncio.run(main())
```
In conversation mode, you'll see a rich console interface with color-coded messages and formatted text. To exit the conversation, type "exit" or "quit".
## License
This project is licensed under the MIT License - see the LICENSE file for details.