# JustAI
Package to make working with large language models in Python super easy.
Supports OpenAI, Anthropic Claude, Google Gemini, X Grok, DeepSeek, Perplexity, OpenRouter and open source .gguf models.
Author: Hans-Peter Harmsen (hp@harmsen.nl) \
Current version: 5.2.0
Version 4.x is not compatible with the 3.x series.
## Installation
1. Install the package:
~~~~bash
python -m pip install justai
~~~~
2. Create an OpenAI account (for OpenAI models) [here](https://platform.openai.com/), an Anthropic account [here](https://console.anthropic.com/) or a Google account
3. Create an OpenAI API key [here](https://platform.openai.com/account/api-keys), an Anthropic API key [here](https://console.anthropic.com/settings/keys) or a Google API key [here](https://aistudio.google.com/app/apikey)
4. Create a .env file with the following content, depending on the model you intend to use:
```bash
OPENAI_API_KEY=your-openai-api-key
OPENAI_ORGANIZATION=your-openai-organization-id
ANTHROPIC_API_KEY=your-anthropic-api-key
GOOGLE_API_KEY=your-google-api-key
X_API_KEY=your-x-ai-api-key
DEEPSEEK_API_KEY=your-deepseek-api-key
```
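justai reads these variables from the environment (the README's dependencies include python-dotenv, which loads `.env` files). Purely to illustrate the file format, a minimal stdlib-only parser could look like this — `parse_env` is a hypothetical helper, not part of justai:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse KEY=value lines, skipping blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore comments and empty lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = "OPENAI_API_KEY=sk-test\n# a comment\nGOOGLE_API_KEY=g-test"
print(parse_env(sample))
```

In practice, just keep the `.env` file next to your script; justai picks the keys up automatically.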
## Basic usage
```python
from justai import Model

model = Model('gpt-5-mini')
model.system = """You are a movie critic. I feed you with movie
titles and you give me a review in 50 words."""

message = model.chat("Forrest Gump", cached=True)
print(message)
```
Here, `cached=True` specifies that justai should cache the prompt and the model's response.
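Conceptually, this kind of cache maps a (model, prompt) pair to the earlier response, so a repeated call never hits the API. The sketch below illustrates the idea and is not justai's actual implementation; `chat_cached` and `fake_llm` are hypothetical names:

```python
import hashlib

_cache: dict[str, str] = {}

def cache_key(model_name: str, prompt: str) -> str:
    # Key on a hash of the model name plus the full prompt text.
    return hashlib.sha256(f"{model_name}\x00{prompt}".encode()).hexdigest()

def chat_cached(model_name: str, prompt: str, call_llm) -> str:
    key = cache_key(model_name, prompt)
    if key not in _cache:
        _cache[key] = call_llm(prompt)  # only call the model on a cache miss
    return _cache[key]

calls = []
def fake_llm(prompt):
    # Stand-in for a real API call, so the flow can be run offline.
    calls.append(prompt)
    return f"review of {prompt}"

print(chat_cached("gpt-5-mini", "Forrest Gump", fake_llm))
print(chat_cached("gpt-5-mini", "Forrest Gump", fake_llm))  # served from cache
```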
#### output
```
Forrest Gump is an American classic that tells the story of
a man with a kind heart and simple mind who experiences major
events in history. Tom Hanks gives an unforgettable performance,
making us both laugh and cry. A heartwarming and nostalgic
movie that still resonates with audiences today.
```
## Models
Justai can use different types of models:

* **OpenAI** models like GPT-5 and O3
* **Anthropic** models like the Claude models
* **Google** models like the Gemini models
* **X AI** models like the Grok models
* **DeepSeek** models like DeepSeek-V3 (`deepseek-chat`) and the reasoning model DeepSeek-R1 (`deepseek-reasoning`)
* **Open source** models like Llama2-7b or Mixtral-8x7b-instruct, as long as they are in the GGUF format
* **OpenRouter** models. To use these, pass a model name of the form `openrouter/<provider>/<modelname>`

Except for OpenRouter, the provider is chosen based on the model name. E.g. if a model name starts with `gpt`, OpenAI is chosen as the provider.
To use an open source model, just pass the full path to the .gguf file as the model name.
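The name-based dispatch can be pictured roughly like this. This is a simplified sketch for illustration only; `pick_provider` is hypothetical and justai's real mapping covers more prefixes:

```python
def pick_provider(model_name: str) -> str:
    """Guess the provider from the model name (simplified illustration)."""
    name = model_name.lower()
    if name.startswith("openrouter/"):
        return "openrouter"
    if name.endswith(".gguf"):
        return "local"  # open source model loaded from a file on disk
    if name.startswith(("gpt", "o3")):
        return "openai"
    if name.startswith("claude"):
        return "anthropic"
    if name.startswith("gemini"):
        return "google"
    if name.startswith("grok"):
        return "xai"
    if name.startswith("deepseek"):
        return "deepseek"
    raise ValueError(f"Unknown model: {model_name}")

print(pick_provider("gpt-5-mini"))
print(pick_provider("/models/llama2-7b.Q4.gguf"))
```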
## More advanced usage
### Returning JSON or other types
```bash
python examples/return_types.py
```
You can have the completion returned as a specific type (like a list of dicts).
This is useful when you want to extract structured data from the completion.
To return structured data, pass `return_json=True` to `model.chat()` and tell the model in the
prompt how you want your JSON to be structured.
#### Example returning json data
~~~python
import json

from justai import Model

model = Model('gemini-1.5-flash')
prompt = "Give me the main characters from Seinfeld with their characteristics. " + \
         "Return json with keys name, profession and weirdness"

data = model.chat(prompt, return_json=True)
print(json.dumps(data, indent=4))
~~~
#### Specifying the return type
To define a specific return type, use the `return_type` parameter.
Currently this works with the Google models (pass a Python type definition; returns JSON)
and with OpenAI (pass a Pydantic type definition; returns a Pydantic model).
See the example code for further examples.
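Whatever return type you request, it pays to validate the parsed result before using it, since models occasionally deviate from the requested shape. A minimal stdlib-only check against the Seinfeld example's expected keys — `validate_characters` is a hypothetical helper, not a justai API — could be:

```python
import json

EXPECTED_KEYS = {"name", "profession", "weirdness"}

def validate_characters(raw: str) -> list[dict]:
    """Parse the model's JSON reply and check each record has the expected keys."""
    data = json.loads(raw)
    for record in data:
        missing = EXPECTED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {record.get('name')} is missing {missing}")
    return data

reply = '[{"name": "Kramer", "profession": "unknown", "weirdness": 10}]'
print(validate_characters(reply))
```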
### Images
Pass images to the model. An image can be:
* A URL to an image
* The raw image data
* A PIL image
#### Example with a PIL image
```python
import io

import httpx
from PIL import Image
from justai import Model

model = Model("gpt-5-nano")
url = 'https://upload.wikimedia.org/wikipedia/commons/9/94/Common_dolphin.jpg'
image = Image.open(io.BytesIO(httpx.get(url).content))
message = model.chat("What is in this image?", images=image, cached=False)
print(message)
```
### Asynchronous use
```python
import asyncio

from justai import Model

async def print_words(model_name, prompt):
    model = Model(model_name)
    async for word in model.chat_async(prompt):
        print(word, end='')

prompt = "Give me 5 names for a juice bar that focuses on senior citizens."
asyncio.run(print_words("sonar-pro", prompt))
```
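The same consumption pattern works with any async generator. In the sketch below the model call is replaced by a stand-in generator (`fake_chat_async`, a hypothetical name), so the streaming flow can be run offline:

```python
import asyncio

async def fake_chat_async(prompt: str):
    # Stand-in for model.chat_async: yields the reply word by word.
    for word in f"Echo: {prompt}".split():
        yield word

async def collect(prompt: str) -> list[str]:
    """Consume the async stream into a list, mirroring the async-for loop above."""
    words = []
    async for word in fake_chat_async(prompt):
        words.append(word)
    return words

words = asyncio.run(collect("juice bar names"))
print(words)
```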
### Prompt caching
Anthropic models support prompt caching via `model.cached_prompt`:
```python
from justai import Model

model = Model('claude-3.7-sonnet')
model.system_message = "You are an experienced book analyzer"  # This is how you set the system message in justai
model.cached_prompt = SOME_STORY
res = model.chat("Who is Mr. Thompson's neighbour? Give me just the name.",
                 cached=False)  # Disable justai's own cache
print(res)
print('input_token_count', model.input_token_count)
print('output_token_count', model.output_token_count)
print('cache_creation_input_tokens', model.cache_creation_input_tokens)
print('cache_read_input_tokens', model.cache_read_input_tokens)
```
### Creating images
Some models can create images. To use this, pass an image-generating model name to `Model`.
```python
from justai import Model

model = Model('gpt-5')
pil_image = model.generate_image("Create an image of a dolphin reading a book")
```
Passing other images alongside the prompt is also possible.
This can be used to alter images or to do style transfer.
```python
import io

import httpx
from PIL import Image
from justai import Model

model = Model('gemini-2.5-flash-image-preview')
url = 'https://upload.wikimedia.org/wikipedia/commons/9/94/Common_dolphin.jpg'
image = Image.open(io.BytesIO(httpx.get(url).content))
pil_image = model.generate_image("Convert this image into the style of van Gogh", images=image)
```
Image input can be a single image or a list of images. \
Each image can be a URL, a PIL image or raw image data. \
Output is always a PIL image.