# JustAI
Package to make working with Large Language Models in Python super easy.
Supports OpenAI, Anthropic Claude, Google Gemini, X Grok, DeepSeek, Perplexity, OpenRouter and open source .gguf models.
Author: Hans-Peter Harmsen (hp@harmsen.nl) \
Current version: 4.2.2
Version 4.x is not compatible with the 3.x series.
## Installation
1. Install the package:
~~~~bash
python -m pip install justai
~~~~
2. Create an OpenAI account (for OpenAI models) [here](https://platform.openai.com/), an Anthropic account [here](https://console.anthropic.com/) or a Google account
3. Create an OpenAI api key [here](https://platform.openai.com/account/api-keys) or an Anthropic api key [here](https://console.anthropic.com/settings/keys) or a Google api key [here](https://aistudio.google.com/app/apikey)
4. Create a .env file with the following content, depending on the model you intend to use:
```bash
OPENAI_API_KEY=your-openai-api-key
OPENAI_ORGANIZATION=your-openai-organization-id
ANTHROPIC_API_KEY=your-anthropic-api-key
GOOGLE_API_KEY=your-google-api-key
X_API_KEY=your-x-ai-api-key
DEEPSEEK_API_KEY=your-deepseek-api-key
```
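justai picks these variables up from the environment. If you prefer to load the .env file explicitly in your own script, a minimal sketch using python-dotenv (justai may already do this for you automatically):
```python
# Optional: load the .env file explicitly (a sketch; justai may load it on its own)
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory into os.environ
```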
## Basic usage
```Python
from justai import Model
model = Model('gpt-4o-mini')
model.system = """You are a movie critic. I feed you with movie
titles and you give me a review in 50 words."""
message = model.chat("Forrest Gump", cached=True)
print(message)
```
Here, `cached=True` specifies that justai should cache the prompt and the model's response.
#### output
```
Forrest Gump is an American classic that tells the story of
a man with a kind heart and simple mind who experiences major
events in history. Tom Hanks gives an unforgettable performance,
making us both laugh and cry. A heartwarming and nostalgic
movie that still resonates with audiences today.
```
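Repeating the exact same call should then be answered from the local cache instead of the API. A rough sketch of the effect (the timing code is purely illustrative):
```python
import time
from justai import Model

model = Model('gpt-4o-mini')
for attempt in ('first', 'second'):
    start = time.perf_counter()
    model.chat("Forrest Gump", cached=True)  # the second identical call should come from the cache
    print(f"{attempt} call took {time.perf_counter() - start:.2f}s")
```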
## Models
Justai can use different types of models:
**OpenAI** models like GPT-4 and O3
**Anthropic** models like the Claude-3 models
**Google** models like the Gemini models
**X AI** models like the Grok models
**DeepSeek** models like DeepSeek-V3 (deepseek-chat) and the reasoning model DeepSeek-R1 (deepseek-reasoning)
**Open source** models like Llama2-7b or Mixtral-8x7b-instruct, as long as they are in the GGUF format.
**OpenRouter** models. To use these, use the model name `openrouter/<provider>/<model-name>`.
Except for OpenRouter, the provider is chosen based on the model name, e.g. if a model name starts with `gpt`, OpenAI is chosen as the provider.
To use an open source model, just pass the full path to the .gguf file as the model name.
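For illustration, a sketch of how different model names select a provider (the specific model identifiers below are assumptions and may need updating to currently available models):
```python
from justai import Model

gpt = Model('gpt-4o-mini')                                   # "gpt..."    -> OpenAI
claude = Model('claude-3.7-sonnet')                          # "claude..." -> Anthropic
gemini = Model('gemini-1.5-flash')                           # "gemini..." -> Google
router = Model('openrouter/mistralai/mistral-7b-instruct')   # explicit OpenRouter routing
local = Model('/models/llama-2-7b.Q4_K_M.gguf')              # path to a GGUF file -> local model
```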
## More advanced usage
### Returning JSON or other types
```bash
python examples/return_types.py
```
You can specify a return type (like a list of dicts) for the completion.
This is useful when you want to extract structured data from the completion.
To return structured data, just pass `return_json=True` to `model.chat()` and tell the model in the
prompt how you want your JSON to be structured.
#### Example returning json data
~~~python
import json
from justai import Model

model = Model('gemini-1.5-flash')
prompt = ("Give me the main characters from Seinfeld with their characteristics. "
          "Return json with keys name, profession and weirdness")
data = model.chat(prompt, return_json=True)
print(json.dumps(data, indent=4))
~~~
#### Specifying the return type
To define a specific return type you can use the `return_type` parameter.
Currently this works with the Google models (pass a Python type definition; returns JSON)
and with OpenAI (pass a Pydantic type definition; returns a Pydantic model).
See the example code for further examples.
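A minimal sketch with an OpenAI model and a Pydantic class (the schema fields and model name are illustrative choices; see examples/return_types.py for the package's own examples):
```python
from pydantic import BaseModel
from justai import Model

# Illustrative schema; the field names are our own choice
class Character(BaseModel):
    name: str
    profession: str
    weirdness: int

model = Model('gpt-4o')
character = model.chat("Describe Kramer from Seinfeld.", return_type=Character)
print(character.name, character.profession, character.weirdness)
```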
### Images
Pass images to the model. An image can be any of the following:
* A URL to an image
* The raw image data
* A PIL image
#### Example with PIL image and GPT4o-mini
```python
model = Model("gpt-4o-mini-2024-07-18")
url = 'https://upload.wikimedia.org/wikipedia/commons/9/94/Common_dolphin.jpg'
image = Image.open(io.BytesIO(httpx.get(url).content))
message = model.chat("What is in this image", images=url, cached=False)
print(message)
```
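The other accepted image forms work with the same call; for example (reusing `url` from above):
```python
message = model.chat("What is in this image?", images=url)                     # URL string
message = model.chat("What is in this image?", images=httpx.get(url).content)  # raw image data
```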
### Asynchronous use
```python
import asyncio
from justai import Model

async def print_words(model_name, prompt):
    model = Model(model_name)
    async for word in model.chat_async(prompt):
        print(word, end='')

prompt = "Give me 5 names for a juice bar that focuses on senior citizens."
asyncio.run(print_words("sonar-pro", prompt))
```
### Prompt caching
The following example shows how to use prompt caching with Anthropic models.
```python
from justai import Model

model = Model('claude-3.7-sonnet')
model.system_message = "You are an experienced book analyzer"  # This is how you set the system message in justai
model.cached_prompt = SOME_STORY  # SOME_STORY is a placeholder for a long text to cache
res = model.chat("Who is Mr. Thompson's neighbour? Give me just the name.",
                 cached=False)  # Disable justai's own cache
print(res)
print('input_token_count', model.input_token_count)
print('output_token_count', model.output_token_count)
print('cache_creation_input_tokens', model.cache_creation_input_tokens)
print('cache_read_input_tokens', model.cache_read_input_tokens)
```