# Manifest ✨
Call an LLM by calling a function.
- Define a function name, arguments, return type, and docstring.
- Call your function as normal, passing in your values.
- For those values, an LLM will return a response that conforms to your return type.
# Installation
```
pip install manifest
```
Now make sure your OpenAI key is set:
```
export OPENAI_API_KEY="your_api_key_here"
```
# Examples
## Sentiment analysis
```python
from manifest import ai

@ai
def is_optimistic(text: str) -> bool:
    """Determines if the text is optimistic"""

print(is_optimistic("This is amazing!"))  # Prints True
```
## Translation
```python
from manifest import ai

@ai
def translate(english_text: str, target_lang: str) -> str:
    """Translates text from English into a target language"""

print(translate("Hello", "fr"))  # Prints "Bonjour"
```
## Image analysis
```python
from pathlib import Path
from manifest import ai

@ai
def breed_of_dog(image: Path) -> str:
    """Determines the breed of dog from a photo"""

image = Path("path/to/dog.jpg")
print(breed_of_dog(image))  # Prints "German Shepherd" (or whatever)
```
## Complex objects
```python
from dataclasses import dataclass
from manifest import ai

@dataclass
class Actor:
    name: str
    character: str

@dataclass
class Movie:
    title: str
    director: str
    year: int
    top_cast: list[Actor]

@ai
def similar_movie(movie: str, before_year: int | None = None) -> Movie:
    """Discovers a similar movie, before a certain year, if the year is
    provided."""

like_inception = similar_movie("Inception")
print(like_inception)  # Prints a movie similar to Inception
```
## Recursive types
It can handle self-referential types. For example, each `Character` has a `social_graph`, and each `SocialGraph` is composed of `Character` objects.
```python
from dataclasses import dataclass
from pprint import pprint

from manifest import ai

@dataclass
class Character:
    name: str
    occupation: str
    social_graph: "SocialGraph"

@dataclass
class SocialGraph:
    friends: list[Character]
    enemies: list[Character]

@ai
def get_character_social_graph(character_name: str) -> SocialGraph:
    """For a given fictional character, return their social graph, resolving
    each friend and enemy's social graph recursively."""

graph = get_character_social_graph("Walter White")
pprint(graph)
```
```
SocialGraph(
    friends=[
        Character(
            name='Jesse Pinkman',
            occupation='Meth Manufacturer',
            social_graph=SocialGraph(
                friends=[Character(name='Walter White', occupation='Chemistry Teacher', social_graph=SocialGraph(friends=[], enemies=[]))],
                enemies=[Character(name='Hank Schrader', occupation='DEA Agent', social_graph=SocialGraph(friends=[], enemies=[]))]
            )
        ),
        Character(
            name='Saul Goodman',
            occupation='Lawyer',
            social_graph=SocialGraph(friends=[Character(name='Walter White', occupation='Chemistry Teacher', social_graph=SocialGraph(friends=[], enemies=[]))], enemies=[])
        )
    ],
    enemies=[
        Character(
            name='Hank Schrader',
            occupation='DEA Agent',
            social_graph=SocialGraph(
                friends=[Character(name='Marie Schrader', occupation='Radiologic Technologist', social_graph=SocialGraph(friends=[], enemies=[]))],
                enemies=[Character(name='Walter White', occupation='Meth Manufacturer', social_graph=SocialGraph(friends=[], enemies=[]))]
            )
        ),
        Character(
            name='Gus Fring',
            occupation='Businessman',
            social_graph=SocialGraph(
                friends=[Character(name='Mike Ehrmantraut', occupation='Fixer', social_graph=SocialGraph(friends=[], enemies=[]))],
                enemies=[Character(name='Walter White', occupation='Meth Manufacturer', social_graph=SocialGraph(friends=[], enemies=[]))]
            )
        )
    ]
)
```
# How does it work?
Manifest relies heavily on runtime metadata, such as a function's name,
docstring, arguments, and type hints. It uses all of these to compose a prompt
behind the scenes, then sends the prompt to an LLM. The LLM "executes" the
prompt and returns a JSON response that Manifest safely parses back into the
appropriate return type.
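For intuition, here is a minimal sketch (not Manifest's actual implementation) of the metadata that the standard library already exposes for a decorator like this to work with:
```python
import inspect
import typing

def describe(fn) -> dict:
    """Gather the runtime metadata an @ai-style decorator could fold into a prompt."""
    return {
        "name": fn.__name__,
        "docstring": inspect.getdoc(fn),
        "signature": str(inspect.signature(fn)),
        "type_hints": typing.get_type_hints(fn),
    }

def is_optimistic(text: str) -> bool:
    """Determines if the text is optimistic"""

print(describe(is_optimistic)["type_hints"])
# {'text': <class 'str'>, 'return': <class 'bool'>}
```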
To get the most out of the `@ai` decorator:
- Name your function well.
- Add type hints to your function.
- Add a high-value docstring to your function.
# Limitations
## REPL
Manifest doesn't work from the REPL, because it needs access to the source code
of the functions it decorates.
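This mirrors a general Python limitation: `inspect` cannot recover source for functions typed into the plain interactive interpreter. You can see it for yourself, independent of Manifest:
```python
# In a plain `python` REPL session (not IPython), source retrieval fails:
import inspect

def typed_in_repl():
    pass

inspect.getsource(typed_in_repl)  # raises OSError: could not get source code
```
Running the same code from a `.py` file works, which is why the examples above assume a script or module.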
## Types
You can only pass in and return the following types (see the sketch after this list):
- Dataclasses
- `Enum` subclasses
- Primitives (`str`, `int`, `bool`, `None`, etc.)
- Basic container types (`list`, `dict`, `tuple`)
- Unions
- Any combination of the above
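Those building blocks still allow fairly rich signatures. Here is a hypothetical example (the names `Priority`, `Ticket`, and `triage` are invented for illustration) that combines a dataclass, an enum, primitives, a list, and a union:
```python
from dataclasses import dataclass
from enum import Enum

from manifest import ai

class Priority(Enum):
    LOW = "low"
    HIGH = "high"

@dataclass
class Ticket:
    title: str
    priority: Priority
    tags: list[str]

@ai
def triage(report: str, max_tags: int | None = None) -> Ticket:
    """Turns a free-form bug report into a structured ticket."""
```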
## Prompts
The generated prompt templates can also be a little fiddly at times, and there is room to improve them.
# Initialization
To make things super simple, Manifest uses ambient LLM credentials, currently
just `OPENAI_API_KEY`. If no credentials are found in the environment, you will
be instructed to initialize the library yourself.
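If you prefer to set the key from Python rather than your shell, placing it in the process environment before any `@ai` function runs should behave the same as the shell export above (a minimal sketch; how you load the secret is up to you):
```python
import os

# Assumed: Manifest reads the key from the environment, just like the
# shell `export OPENAI_API_KEY=...` shown earlier. Set it before the
# first call to an @ai-decorated function.
os.environ["OPENAI_API_KEY"] = "your_api_key_here"
```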