ollama

Name: ollama
Version: 0.4.4
Summary: The official Python client for Ollama.
Home page: https://ollama.com
Author: Ollama
License: MIT
Requires Python: <4.0,>=3.8
Upload time: 2024-12-08 03:39:17
# Ollama Python Library

The Ollama Python library provides the easiest way to integrate Python 3.8+ projects with [Ollama](https://github.com/ollama/ollama).

## Prerequisites

- [Ollama](https://ollama.com/download) should be installed and running
- Pull a model to use with the library: `ollama pull <model>` e.g. `ollama pull llama3.2`
  - See [Ollama.com](https://ollama.com/search) for more information on the models available.

## Install

```sh
pip install ollama
```

## Usage

```python
from ollama import chat
from ollama import ChatResponse

response: ChatResponse = chat(model='llama3.2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
# or access fields directly from the response object
print(response.message.content)
```

See [_types.py](ollama/_types.py) for more information on the response types.

## Streaming responses

Response streaming can be enabled by setting `stream=True`.

```python
from ollama import chat

stream = chat(
    model='llama3.2',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
)

for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)
```

## Custom client

A custom client can be created by instantiating `Client` or `AsyncClient` from `ollama`.

All extra keyword arguments are passed into the [`httpx.Client`](https://www.python-httpx.org/api/#client).

```python
from ollama import Client
client = Client(
  host='http://localhost:11434',
  headers={'x-some-header': 'some-value'}
)
response = client.chat(model='llama3.2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
```
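
Any other `httpx.Client` option can be passed the same way; for example, a request timeout (a minimal sketch, `timeout` being a standard `httpx.Client` argument):

```python
from ollama import Client

# timeout is forwarded to the underlying httpx.Client; requests that
# take longer than 10 seconds raise an httpx timeout error.
client = Client(host='http://localhost:11434', timeout=10.0)
```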

## Async client

The `AsyncClient` class is used to make asynchronous requests. It can be configured with the same fields as the `Client` class.

```python
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama3.2', messages=[message])

asyncio.run(chat())
```

Setting `stream=True` modifies functions to return a Python asynchronous generator:

```python
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama3.2', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())
```

## API

The Ollama Python library's API is designed around the [Ollama REST API](https://github.com/ollama/ollama/blob/main/docs/api.md).

### Chat

```python
ollama.chat(model='llama3.2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
```
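
The chat endpoint is stateless, so a multi-turn conversation is built by resending the accumulated message history on each call; a minimal sketch:

```python
# Append the assistant's reply to the history before asking a follow-up.
messages = [{'role': 'user', 'content': 'Why is the sky blue?'}]
response = ollama.chat(model='llama3.2', messages=messages)

messages.append({'role': 'assistant', 'content': response.message.content})
messages.append({'role': 'user', 'content': 'How does that change at sunset?'})
response = ollama.chat(model='llama3.2', messages=messages)
```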

### Generate

```python
ollama.generate(model='llama3.2', prompt='Why is the sky blue?')
```
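
Like `chat`, `generate` accepts `stream=True`; following the REST API, each streamed part carries its text in the `response` field:

```python
# Print the completion incrementally as parts arrive.
for part in ollama.generate(model='llama3.2', prompt='Why is the sky blue?', stream=True):
  print(part['response'], end='', flush=True)
```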

### List

```python
ollama.list()
```
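
The result mirrors the REST API's model listing; a sketch that prints each local model's tag, assuming the 0.4.x typed response where the tag is stored under `model`:

```python
# Iterate over the locally available models.
for m in ollama.list()['models']:
  print(m['model'])
```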

### Show

```python
ollama.show('llama3.2')
```

### Create

```python
modelfile='''
FROM llama3.2
SYSTEM You are Mario from Super Mario Bros.
'''

ollama.create(model='example', modelfile=modelfile)
```
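
Once created, the new tag behaves like any other local model:

```python
# The freshly created model can be used immediately with chat or generate.
response = ollama.chat(model='example', messages=[{'role': 'user', 'content': 'Hello!'}])
```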

### Copy

```python
ollama.copy('llama3.2', 'user/llama3.2')
```

### Delete

```python
ollama.delete('llama3.2')
```

### Pull

```python
ollama.pull('llama3.2')
```
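
`pull` also supports `stream=True` for reporting download progress; per the REST API, each part carries a `status` field (plus byte counts while layers download):

```python
# Stream pull progress instead of blocking until the download finishes.
for progress in ollama.pull('llama3.2', stream=True):
  print(progress['status'])
```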

### Push

```python
ollama.push('user/llama3.2')
```

### Embed

```python
ollama.embed(model='llama3.2', input='The sky is blue because of Rayleigh scattering')
```

### Embed (batch)

```python
ollama.embed(model='llama3.2', input=['The sky is blue because of Rayleigh scattering', 'Grass is green because of chlorophyll'])
```
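
In both cases the response carries one vector per input under `embeddings`, mirroring the REST API's embed endpoint:

```python
# Each input string maps to one embedding vector.
response = ollama.embed(model='llama3.2', input=['The sky is blue', 'Grass is green'])
for vector in response['embeddings']:
  print(len(vector))
```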

### Ps

```python
ollama.ps()
```


## Errors

Errors are raised if requests return an error status or if an error is detected while streaming.

```python
model = 'does-not-yet-exist'

try:
  ollama.chat(model)
except ollama.ResponseError as e:
  print('Error:', e.error)
  if e.status_code == 404:
    ollama.pull(model)
```
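
For streamed requests, the same `ResponseError` can surface while iterating, so the loop itself belongs inside the `try` block; a sketch:

```python
try:
  for chunk in ollama.chat(model='llama3.2', messages=[{'role': 'user', 'content': 'Hi'}], stream=True):
    print(chunk['message']['content'], end='', flush=True)
except ollama.ResponseError as e:
  print('Error:', e.error)
```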

