| Name | ollama |
| Version | 0.5.4 |
| download | https://files.pythonhosted.org/packages/72/62/a36be4555e4218d6c8b35e72e0dfe0823845400097275cd81c9aec4ddf39/ollama-0.5.4.tar.gz |
| home_page | None |
| Summary | The official Python client for Ollama. |
| upload_time | 2025-09-16 00:25:25 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | None |
| keywords | None |
| VCS | https://github.com/ollama/ollama-python |
| bugtrack_url | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

            # Ollama Python Library
The Ollama Python library provides the easiest way to integrate Python 3.8+ projects with [Ollama](https://github.com/ollama/ollama).
## Prerequisites
- [Ollama](https://ollama.com/download) should be installed and running
- Pull a model to use with the library: `ollama pull <model>`, e.g. `ollama pull gemma3` (a quick sanity check follows this list)
  - See [Ollama.com](https://ollama.com/search) for more information on the available models.
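To confirm the server is reachable and the model is available locally, a quick check from the shell with the standard Ollama CLI:
```sh
# pull the model once, then verify it shows up in the local list
ollama pull gemma3
ollama list
```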
## Install
```sh
pip install ollama
```
## Usage
```python
from ollama import chat
from ollama import ChatResponse
response: ChatResponse = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
# or access fields directly from the response object
print(response.message.content)
```
See [_types.py](ollama/_types.py) for more information on the response types.
## Streaming responses
Response streaming can be enabled by setting `stream=True`.
```python
from ollama import chat
stream = chat(
    model='gemma3',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
)
for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)
```
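Each chunk carries only a fragment of the reply, so the complete text has to be accumulated by the caller. A minimal sketch using only the `chat` interface shown above:
```python
from ollama import chat

full_reply = ''
stream = chat(
    model='gemma3',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
)
for chunk in stream:
  # each chunk's message.content is a fragment; concatenate to rebuild the reply
  full_reply += chunk.message.content or ''
print(full_reply)
```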
## Custom client
A custom client can be created by instantiating `Client` or `AsyncClient` from `ollama`.
All extra keyword arguments are passed into the [`httpx.Client`](https://www.python-httpx.org/api/#client).
```python
from ollama import Client
client = Client(
  host='http://localhost:11434',
  headers={'x-some-header': 'some-value'}
)
response = client.chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
```
## Async client
The `AsyncClient` class is used to make asynchronous requests. It can be configured with the same fields as the `Client` class.
```python
import asyncio
from ollama import AsyncClient
async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='gemma3', messages=[message])
  print(response.message.content)
asyncio.run(chat())
```
Setting `stream=True` modifies functions to return a Python asynchronous generator:
```python
import asyncio
from ollama import AsyncClient
async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='gemma3', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)
asyncio.run(chat())
```
## API
The Ollama Python library's API is designed around the [Ollama REST API](https://github.com/ollama/ollama/blob/main/docs/api.md).
### Chat
```python
ollama.chat(model='gemma3', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
```
### Generate
```python
ollama.generate(model='gemma3', prompt='Why is the sky blue?')
```
### List
```python
ollama.list()
```
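The response exposes the locally available models as a list. A short sketch; the attribute names below follow the REST API and are an assumption about the typed response:
```python
import ollama

for m in ollama.list().models:
  # assumption: each entry exposes the model name and its size on disk
  print(m.model, m.size)
```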
### Show
```python
ollama.show('gemma3')
```
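`show` returns the model's metadata. A hedged sketch of reading it; the `details` fields below mirror the REST API's show endpoint and are assumptions about the typed response:
```python
import ollama

info = ollama.show('gemma3')
# assumption: details carries the model family and parameter size, as in the REST API
print(info.details.family, info.details.parameter_size)
```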
### Create
```python
ollama.create(model='example', from_='gemma3', system="You are Mario from Super Mario Bros.")
```
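Once created, the new model behaves like any other and works with the calls above, for example:
```python
import ollama

# chat with the model created above; it should answer in the configured persona
response = ollama.chat(model='example', messages=[{'role': 'user', 'content': 'Hi! Who are you?'}])
print(response.message.content)
```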
### Copy
```python
ollama.copy('gemma3', 'user/gemma3')
```
### Delete
```python
ollama.delete('gemma3')
```
### Pull
```python
ollama.pull('gemma3')
```
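Pulls can take a while for large models; like `chat`, `pull` accepts `stream=True` to report progress. The `status`/`completed`/`total` fields below come from the REST API and are assumptions about the typed response:
```python
import ollama

for progress in ollama.pull('gemma3', stream=True):
  if progress.total:
    print(f'{progress.status}: {progress.completed or 0}/{progress.total} bytes')
  else:
    print(progress.status)
```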
### Push
```python
ollama.push('user/gemma3')
```
### Embed
```python
ollama.embed(model='gemma3', input='The sky is blue because of Rayleigh scattering')
```
### Embed (batch)
```python
ollama.embed(model='gemma3', input=['The sky is blue because of Rayleigh scattering', 'Grass is green because of chlorophyll'])
```
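The batch call returns one vector per input, which makes quick similarity checks easy. A small sketch comparing the two sentences above, assuming the vectors are exposed on an `embeddings` field as in the REST API:
```python
import math

import ollama

res = ollama.embed(model='gemma3', input=[
  'The sky is blue because of Rayleigh scattering',
  'Grass is green because of chlorophyll',
])
a, b = res.embeddings  # assumption: one vector per input, in order
dot = sum(x * y for x, y in zip(a, b))
cos = dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
print(f'cosine similarity: {cos:.3f}')
```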
### Ps
```python
ollama.ps()
```
## Errors
Errors are raised if requests return an error status or if an error is detected while streaming.
```python
model = 'does-not-yet-exist'
try:
  ollama.chat(model)
except ollama.ResponseError as e:
  print('Error:', e.error)
  if e.status_code == 404:
    ollama.pull(model)
```
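A common follow-up to the pattern above is to retry the request once the missing model has been pulled. A sketch built only from the calls shown earlier, not part of the library:
```python
import ollama

model = 'gemma3'
messages = [{'role': 'user', 'content': 'Why is the sky blue?'}]
try:
  response = ollama.chat(model, messages=messages)
except ollama.ResponseError as e:
  if e.status_code != 404:
    raise
  ollama.pull(model)  # fetch the missing model, then retry once
  response = ollama.chat(model, messages=messages)
print(response.message.content)
```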
            
         
        Raw data
        
            {
    "_id": null,
    "home_page": null,
    "name": "ollama",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": null,
    "author": null,
    "author_email": "hello@ollama.com",
    "download_url": "https://files.pythonhosted.org/packages/72/62/a36be4555e4218d6c8b35e72e0dfe0823845400097275cd81c9aec4ddf39/ollama-0.5.4.tar.gz",
    "platform": null,
    "description": "# Ollama Python Library\n\nThe Ollama Python library provides the easiest way to integrate Python 3.8+ projects with [Ollama](https://github.com/ollama/ollama).\n\n## Prerequisites\n\n- [Ollama](https://ollama.com/download) should be installed and running\n- Pull a model to use with the library: `ollama pull <model>` e.g. `ollama pull gemma3`\n  - See [Ollama.com](https://ollama.com/search) for more information on the models available.\n\n## Install\n\n```sh\npip install ollama\n```\n\n## Usage\n\n```python\nfrom ollama import chat\nfrom ollama import ChatResponse\n\nresponse: ChatResponse = chat(model='gemma3', messages=[\n  {\n    'role': 'user',\n    'content': 'Why is the sky blue?',\n  },\n])\nprint(response['message']['content'])\n# or access fields directly from the response object\nprint(response.message.content)\n```\n\nSee [_types.py](ollama/_types.py) for more information on the response types.\n\n## Streaming responses\n\nResponse streaming can be enabled by setting `stream=True`.\n\n```python\nfrom ollama import chat\n\nstream = chat(\n    model='gemma3',\n    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],\n    stream=True,\n)\n\nfor chunk in stream:\n  print(chunk['message']['content'], end='', flush=True)\n```\n\n## Custom client\nA custom client can be created by instantiating `Client` or `AsyncClient` from `ollama`.\n\nAll extra keyword arguments are passed into the [`httpx.Client`](https://www.python-httpx.org/api/#client).\n\n```python\nfrom ollama import Client\nclient = Client(\n  host='http://localhost:11434',\n  headers={'x-some-header': 'some-value'}\n)\nresponse = client.chat(model='gemma3', messages=[\n  {\n    'role': 'user',\n    'content': 'Why is the sky blue?',\n  },\n])\n```\n\n## Async client\n\nThe `AsyncClient` class is used to make asynchronous requests. 
It can be configured with the same fields as the `Client` class.\n\n```python\nimport asyncio\nfrom ollama import AsyncClient\n\nasync def chat():\n  message = {'role': 'user', 'content': 'Why is the sky blue?'}\n  response = await AsyncClient().chat(model='gemma3', messages=[message])\n\nasyncio.run(chat())\n```\n\nSetting `stream=True` modifies functions to return a Python asynchronous generator:\n\n```python\nimport asyncio\nfrom ollama import AsyncClient\n\nasync def chat():\n  message = {'role': 'user', 'content': 'Why is the sky blue?'}\n  async for part in await AsyncClient().chat(model='gemma3', messages=[message], stream=True):\n    print(part['message']['content'], end='', flush=True)\n\nasyncio.run(chat())\n```\n\n## API\n\nThe Ollama Python library's API is designed around the [Ollama REST API](https://github.com/ollama/ollama/blob/main/docs/api.md)\n\n### Chat\n\n```python\nollama.chat(model='gemma3', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])\n```\n\n### Generate\n\n```python\nollama.generate(model='gemma3', prompt='Why is the sky blue?')\n```\n\n### List\n\n```python\nollama.list()\n```\n\n### Show\n\n```python\nollama.show('gemma3')\n```\n\n### Create\n\n```python\nollama.create(model='example', from_='gemma3', system=\"You are Mario from Super Mario Bros.\")\n```\n\n### Copy\n\n```python\nollama.copy('gemma3', 'user/gemma3')\n```\n\n### Delete\n\n```python\nollama.delete('gemma3')\n```\n\n### Pull\n\n```python\nollama.pull('gemma3')\n```\n\n### Push\n\n```python\nollama.push('user/gemma3')\n```\n\n### Embed\n\n```python\nollama.embed(model='gemma3', input='The sky is blue because of rayleigh scattering')\n```\n\n### Embed (batch)\n\n```python\nollama.embed(model='gemma3', input=['The sky is blue because of rayleigh scattering', 'Grass is green because of chlorophyll'])\n```\n\n### Ps\n\n```python\nollama.ps()\n```\n\n\n## Errors\n\nErrors are raised if requests return an error status or if an error is detected while streaming.\n\n```python\nmodel = 'does-not-yet-exist'\n\ntry:\n  ollama.chat(model)\nexcept ollama.ResponseError as e:\n  print('Error:', e.error)\n  if e.status_code == 404:\n    ollama.pull(model)\n```\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "The official Python client for Ollama.",
    "version": "0.5.4",
    "project_urls": {
        "homepage": "https://ollama.com",
        "issues": "https://github.com/ollama/ollama-python/issues",
        "repository": "https://github.com/ollama/ollama-python"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "1bafd0a23c8fdec4c8ddb771191d9b36a57fbce6741835a78f1b18ab6d15ae7d",
                "md5": "8102cc6afc32b5370d012175f2544452",
                "sha256": "6374c9bb4f2a371b3583c09786112ba85b006516745689c172a7e28af4d4d1a2"
            },
            "downloads": -1,
            "filename": "ollama-0.5.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "8102cc6afc32b5370d012175f2544452",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 13548,
            "upload_time": "2025-09-16T00:25:24",
            "upload_time_iso_8601": "2025-09-16T00:25:24.186242Z",
            "url": "https://files.pythonhosted.org/packages/1b/af/d0a23c8fdec4c8ddb771191d9b36a57fbce6741835a78f1b18ab6d15ae7d/ollama-0.5.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "7262a36be4555e4218d6c8b35e72e0dfe0823845400097275cd81c9aec4ddf39",
                "md5": "32874eb5d43cfe536c145fb6e70ceeaf",
                "sha256": "75857505a5d42e5e58114a1b78cc8c24596d8866863359d8a2329946a9b6d6f3"
            },
            "downloads": -1,
            "filename": "ollama-0.5.4.tar.gz",
            "has_sig": false,
            "md5_digest": "32874eb5d43cfe536c145fb6e70ceeaf",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 45233,
            "upload_time": "2025-09-16T00:25:25",
            "upload_time_iso_8601": "2025-09-16T00:25:25.785755Z",
            "url": "https://files.pythonhosted.org/packages/72/62/a36be4555e4218d6c8b35e72e0dfe0823845400097275cd81c9aec4ddf39/ollama-0.5.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-09-16 00:25:25",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "ollama",
    "github_project": "ollama-python",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "ollama"
}