agentics

Name: agentics
Version: 0.1.9
Summary: A minimal LLM agent library
Author: Facundo Goiriz
Requires Python: <4.0, >=3.9
Upload time: 2025-02-07 08:24:50
# Agentics

Minimalist Python library for LLM usage

## Installation

```bash
pip install agentics
```

## Why Agentics?

Compare:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```

To this:

```python
from agentics import LLM

llm = LLM()
response: str = llm("Hello!")
print(response)
```

## Quickstart

### Simple Chat

```python
from agentics import LLM

llm = LLM(system_prompt="You know everything about the world")

response: str = llm("What is the capital of France?")

print(response)
# The capital of France is Paris.
```

### Structured Output

```python
from agentics import LLM
from pydantic import BaseModel

class ExtractUser(BaseModel):
    name: str
    age: int

llm = LLM()

res = llm.chat("John Doe is 30 years old.", response_format=ExtractUser)

assert res.name == "John Doe"
assert res.age == 30
```

### Tool Usage

```python
from agentics import LLM
import requests

def visit_url(url: str):
    """Fetch the content of a URL"""
    return requests.get(url).content.decode()

llm = LLM()

res = llm.chat("What's the top story on Hacker News?", tools=[visit_url])

print(res)
# The top story on Hacker News is: "Operating System in 1,000 Lines – Intro"
```

### Tool Usage with Structured Output

```python
from agentics import LLM
from pydantic import BaseModel
import requests

class HackerNewsStory(BaseModel):
    title: str
    points: int

def visit_url(url: str):
    """Fetch the content of a URL"""
    return requests.get(url).content.decode()

llm = LLM()

res = llm.chat(
    "What's the top story on Hacker News?", 
    tools=[visit_url], 
    response_format=HackerNewsStory
)

print(res)
# title='Operating System in 1,000 Lines – Intro' points=29
```

### Multiple Tools with Structured Output

```python
from agentics import LLM
from pydantic import BaseModel

def calculate_area(width: float, height: float):
    """Calculate the area of a rectangle"""
    return width * height

def calculate_volume(area: float, depth: float):
    """Calculate volume from area and depth"""
    return area * depth

class BoxDimensions(BaseModel):
    width: float
    height: float
    depth: float
    area: float
    volume: float

llm = LLM()

res = llm.chat(
    "Calculate the area and volume of a box that is 5.5 meters wide, 3.2 meters high and 2.1 meters deep", 
    tools=[calculate_area, calculate_volume],
    response_format=BoxDimensions
)

print(res)
# width=5.5 height=3.2 depth=2.1 area=17.6 volume=36.96
```

### Text Embeddings and Similarity Search

The `Embedding` class provides a simple interface for generating text embeddings and performing similarity searches:

```python
from agentics import Embedding

# Create an embedding instance
embedding = Embedding()

# Get embedding for a single string
vector = embedding("Hello, how are you?")

# Get embeddings for multiple strings at once
vectors = embedding([
    "Good morning, how's it going?",
    "Today is a great day",
    "I'm feeling sad",
    "Greetings"
])

# Compare two texts using cosine similarity
similarity = embedding.cosine_similarity(
    embedding("Hello!"),
    embedding("Hi there!")
)

# Rank texts by similarity (returns IDs by default)
reference = embedding("Hello, how are you?")
candidates = embedding([
    "Good morning, how's it going?",
    "Today is a great day",
    "I'm feeling sad",
    "Greetings"
])

# Get ranked results with similarity scores
ranked_results = embedding.rank(reference, candidates)
for idx, score in ranked_results:
    print(f"Text ID: {idx} Similarity Score: {score}")

# Rank texts by similarity but return actual vectors instead of IDs
ranked_results_vectors = embedding.rank(reference, candidates, return_vectors=True)
for vector, score in ranked_results_vectors:
    print(f"Vector: {vector[:5]}... Similarity Score: {score}")  # Show first 5 values for readability
```

The `Embedding` class features:
- Simple interface for generating embeddings from text
- Support for both single strings and lists of strings
- Built-in cosine similarity computation
- Efficient similarity ranking for multiple vectors
- Option to return either vector IDs (default) or actual vectors
- Uses OpenAI’s text embedding models (defaults to "text-embedding-3-small")

# API Reference

## LLM

The main interface for interacting with language models through chat completions. Provides a flexible and minimal API for handling conversations, function calling, and structured outputs.

### Constructor Parameters

- `system_prompt` (str, optional): Initial system prompt to set context. Example:
  ```python
  llm = LLM(system_prompt="You are a helpful assistant")
  ```

- `model` (str, optional): The model identifier to use (default: "gpt-4o-mini")
- `client` (OpenAI, optional): Custom OpenAI client instance. Useful for alternative providers:
  ```python
  client = OpenAI(api_key=os.getenv("DEEPSEEK_API_KEY"), base_url="https://api.deepseek.com")
  llm = LLM(client=client, model="deepseek-chat")
  ```

- `messages` (list[dict], optional): Pre-populate conversation history:
  ```python
  llm = LLM(messages=[{"role": "user", "content": "Initial message"}])
  ```

### Chat Method

`llm.chat()` and `llm()` are equivalent; either can be used as the main interface for interactions.

#### Parameters

- `prompt` (str, optional): The input prompt to send to the model. If provided, it is appended to the conversation history.
- `tools` (list[Callable], optional): Function tools the model may use. Each tool should be a callable with type hints.
- `response_format` (BaseModel, optional): Pydantic model used to structure and validate the response.
- `single_tool_call_request` (bool, optional): When True, limits the model to one round of tool use (it can still call multiple tools within that round); see the example below.
- `**kwargs`: Additional arguments passed directly to the chat completion API.
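
For example, limiting tool use to a single round (a minimal sketch; `get_time` is an illustrative tool, not part of the library):

```python
from datetime import datetime

from agentics import LLM

def get_time() -> str:
    """Return the current time as an ISO 8601 string"""
    return datetime.now().isoformat()

llm = LLM()

# The model gets one round of tool calls before producing its final answer
res = llm.chat("What time is it?", tools=[get_time], single_tool_call_request=True)
print(res)
```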

#### Return Value
- `Union[str, BaseModel]`: Either a string response or structured data matching `response_format`

#### Behavior Flows

1. Basic Chat (no tools/response_format):
   - Simple text completion
   - Returns string response

2. With Tools:
   - Model can choose to use available tools or respond directly
   - When tools are used, multiple tools can be called in a single request
   - Tools are called automatically and results fed back
   - Process repeats if model decides to use tools again
   - Use `single_tool_call_request=True` to limit the model to a single round of tool use (it can still call multiple tools in that round); this flow is sketched below

3. With Response Format:
   - Response is cast to specified Pydantic model
   - Returns structured data

4. Combined Tools + Response Format:
   - Follows tool flow first
   - Final text response is cast to model
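
The tool flow in item 2 amounts to a loop like the one below, written directly against the OpenAI client to show what Agentics automates. This is a conceptual sketch, not Agentics' internal code; the `add` tool and its hand-written schema are illustrative (Agentics builds the schema for you from the callable and its type hints):

```python
import json

from openai import OpenAI

def add(a: float, b: float) -> float:
    """Add two numbers"""
    return a + b

# Hand-written tool schema for the sketch
tools = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two numbers",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    },
}]

client = OpenAI()
messages = [{"role": "user", "content": "What is 2 + 3?"}]

while True:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    msg = reply.choices[0].message
    if not msg.tool_calls:  # the model answered directly: done
        print(msg.content)
        break
    messages.append(msg)  # keep the assistant's tool request in the history
    for call in msg.tool_calls:  # run every requested tool, feed results back
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": str(add(**args)),
        })
```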

The conversation history is accessible via the `.messages` attribute, making it easy to inspect or manipulate the context.
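
For example (a minimal sketch; the printed dicts are indicative, following the chat-completions message format):

```python
from agentics import LLM

llm = LLM(system_prompt="You are terse")
llm("Hi!")

# Inspect the accumulated conversation history
for message in llm.messages:
    print(message)
# {'role': 'system', 'content': 'You are terse'}
# {'role': 'user', 'content': 'Hi!'}
# {'role': 'assistant', 'content': '...'}
```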

## Embedding

The interface for generating text embeddings and performing similarity operations. Provides a simple API for embedding generation and similarity ranking.

### Constructor Parameters

- `model` (str, optional): The model identifier to use (default: "text-embedding-3-small")
- `client` (OpenAI, optional): Custom OpenAI client instance. If None, creates new instance.
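
For example (a minimal sketch; "text-embedding-3-large" is another OpenAI embedding model, and the custom client mirrors the `LLM` constructor):

```python
from openai import OpenAI

from agentics import Embedding

embedding = Embedding()  # defaults to "text-embedding-3-small"

# A larger model with an explicit client (e.g. to configure the endpoint)
client = OpenAI()
embedding_large = Embedding(model="text-embedding-3-large", client=client)
```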

### Methods

#### `embed()` / `__call__()`

Both `embedding.embed()` and `embedding()` provide identical functionality for generating embeddings.

##### Parameters
- `input` (Union[str, List[str]]): Text input, either a single string or a list of strings.

##### Returns
- `Union[List[float], List[List[float]]]`: 
  - For single string input: a list of floats (the embedding vector)
  - For list input: a list of embeddings (list of float lists)
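
A quick illustration of the two return shapes (a minimal sketch):

```python
from agentics import Embedding

embedding = Embedding()

single = embedding("hello")            # List[float]: one embedding vector
batch = embedding(["hello", "world"])  # List[List[float]]: one vector per input

assert isinstance(single[0], float)
assert len(batch) == 2 and isinstance(batch[0][0], float)
```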

#### cosine_similarity()

Compute cosine similarity between two embedding vectors.

##### Parameters
- `a` (List[float]): The first embedding vector
- `b` (List[float]): The second embedding vector

##### Returns
- `float`: The cosine similarity score between -1 and 1
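
Cosine similarity is the dot product of the two vectors divided by the product of their norms. A minimal reference implementation for intuition (not the library's code):

```python
import math
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """cos(theta) = (a . b) / (||a|| * ||b||)"""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0: same direction
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0: orthogonal
```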

#### rank()

Rank a list of vectors by similarity to a reference vector.

##### Parameters
- `vector` (List[float]): The reference embedding vector
- `vectors` (List[List[float]]): A list of embedding vectors to compare against
- `return_vectors` (bool, optional, default=False): If True, returns the actual vectors instead of their indices.

##### Returns
- `List[Tuple[Union[int, List[float]], float]]`: A list of `(item, score)` tuples, sorted in descending order of similarity, where:
  - `item` is the index of the vector in the input list (default) or the embedding vector itself when `return_vectors=True`
  - `score` is the cosine similarity (higher is more similar)
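
Behaviorally, `rank()` amounts to scoring every candidate against the reference and sorting. A reference sketch (not the library's implementation), reusing the `cosine_similarity` function from the sketch above:

```python
from typing import List, Tuple, Union

def rank(
    vector: List[float],
    vectors: List[List[float]],
    return_vectors: bool = False,
) -> List[Tuple[Union[int, List[float]], float]]:
    # Score each candidate against the reference, keeping its index
    scored = [(i, cosine_similarity(vector, v)) for i, v in enumerate(vectors)]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # most similar first
    if return_vectors:
        return [(vectors[i], score) for i, score in scored]
    return scored
```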

## Inspiration

Agentics was born from a desire to simplify LLM interactions in Python. The existing landscape often requires verbose boilerplate:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```

My goal was to be able to simply write `llm("Hello!")`, and building toward that interface is how Agentics started. The boilerplate above now turns into this:

```python
from agentics import LLM

llm = LLM()
response = llm("Hello!")
print(response)
```

Agentics makes things simple while bringing these powerful features into the same library:

- **Simple API**: Talk to LLMs with just a few lines of code
- **Structured Output**: Like [instructor](https://github.com/instructor-ai/instructor), turns responses into Pydantic models
- **Function Calling**: Like [Marvin's assistants](https://www.askmarvin.ai/docs/interactive/assistants/), but using direct message-based communication instead of the Assistants API

I built this to make working with OpenAI's LLMs easier. It handles structured outputs and function calling without any fuss. Right now it only works with OpenAI, but it makes common LLM tasks way simpler.
