| Field | Value |
| --- | --- |
| Name | ollama |
| Version | 0.3.3 |
| home_page | https://ollama.ai |
| Summary | The official Python client for Ollama. |
| upload_time | 2024-09-09 17:23:43 |
| maintainer | None |
| docs_url | None |
| author | Ollama |
| requires_python | <4.0,>=3.8 |
| license | MIT |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

# Ollama Python Library
The Ollama Python library provides the easiest way to integrate Python 3.8+ projects with [Ollama](https://github.com/ollama/ollama).
## Install
```sh
pip install ollama
```
## Usage
```python
import ollama
response = ollama.chat(model='llama3.1', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
```
## Streaming responses
Response streaming can be enabled by setting `stream=True`, which modifies function calls to return a Python generator where each part is an object in the stream.
```python
import ollama
stream = ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
)

for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
```
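The same flag applies to other calls as well; for example, `generate` can be streamed in the same way (a brief sketch, assuming each streamed part carries its text chunk under `response`, as in the REST API):
```python
import ollama

# Sketch: stream a completion; each part is assumed to expose its text
# chunk under 'response', mirroring the /api/generate streaming payload.
for part in ollama.generate(model='llama3.1', prompt='Why is the sky blue?', stream=True):
    print(part['response'], end='', flush=True)
```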
## API
The Ollama Python library's API is designed around the [Ollama REST API](https://github.com/ollama/ollama/blob/main/docs/api.md).
### Chat
```python
ollama.chat(model='llama3.1', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
```
### Generate
```python
ollama.generate(model='llama3.1', prompt='Why is the sky blue?')
```
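The non-streaming call returns a dictionary; assuming it mirrors the REST API's `/api/generate` response, the generated text is available under the `response` key (a minimal sketch):
```python
import ollama

result = ollama.generate(model='llama3.1', prompt='Why is the sky blue?')
# 'response' is assumed here, mirroring the /api/generate payload.
print(result['response'])
```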
### List
```python
ollama.list()
```
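A short sketch of inspecting the result, assuming it mirrors the REST API's `/api/tags` payload with a `models` list whose entries include a `name` field:
```python
import ollama

# Assumes each entry in the 'models' list carries its tag under 'name'.
for model in ollama.list().get('models', []):
    print(model['name'])
```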
### Show
```python
ollama.show('llama3.1')
```
### Create
```python
modelfile='''
FROM llama3.1
SYSTEM You are mario from super mario bros.
'''
ollama.create(model='example', modelfile=modelfile)
```
### Copy
```python
ollama.copy('llama3.1', 'user/llama3.1')
```
### Delete
```python
ollama.delete('llama3.1')
```
### Pull
```python
ollama.pull('llama3.1')
```
### Push
```python
ollama.push('user/llama3.1')
```
### Embeddings
```python
ollama.embeddings(model='llama3.1', prompt='The sky is blue because of Rayleigh scattering')
```
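The call returns the vector itself; a brief sketch of reading it, assuming the REST API's `/api/embeddings`-style payload with an `embedding` field:
```python
import ollama

result = ollama.embeddings(model='llama3.1', prompt='The sky is blue because of Rayleigh scattering')
# The vector is assumed to be returned under 'embedding', as in the REST API.
print(len(result['embedding']))
```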
### Ps
```python
ollama.ps()
```
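`ps` reports the models currently loaded into memory; a quick sketch of listing them, assuming the result mirrors the REST API's `/api/ps` payload with a `models` list:
```python
import ollama

# Assumes an '/api/ps'-style payload: a 'models' list whose entries include 'name'.
for running in ollama.ps().get('models', []):
    print(running['name'])
```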
## Custom client
A custom client can be created with the following fields:
- `host`: The Ollama host to connect to
- `timeout`: The timeout for requests
```python
from ollama import Client
client = Client(host='http://localhost:11434')
response = client.chat(model='llama3.1', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
```
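Both fields can be combined; a small sketch (the 30-second timeout is just an illustration):
```python
from ollama import Client

# Illustrative values: a non-default host plus a 30-second request timeout.
client = Client(host='http://localhost:11434', timeout=30)
print(client.list())
```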
## Async client
```python
import asyncio
from ollama import AsyncClient
async def chat():
    message = {'role': 'user', 'content': 'Why is the sky blue?'}
    response = await AsyncClient().chat(model='llama3.1', messages=[message])

asyncio.run(chat())
```
Setting `stream=True` modifies functions to return a Python asynchronous generator:
```python
import asyncio
from ollama import AsyncClient
async def chat():
    message = {'role': 'user', 'content': 'Why is the sky blue?'}
    async for part in await AsyncClient().chat(model='llama3.1', messages=[message], stream=True):
        print(part['message']['content'], end='', flush=True)

asyncio.run(chat())
```
## Errors
Errors are raised if requests return an error status or if an error is detected while streaming.
```python
model = 'does-not-yet-exist'
try:
    ollama.chat(model)
except ollama.ResponseError as e:
    print('Error:', e.error)
    if e.status_code == 404:
        ollama.pull(model)
```
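Since `pull` can take a while, a common pattern (shown here as a sketch, not part of the library) is to retry the original call once the missing model has been pulled:
```python
import ollama

model = 'llama3.1'
message = {'role': 'user', 'content': 'Why is the sky blue?'}

try:
    response = ollama.chat(model=model, messages=[message])
except ollama.ResponseError as e:
    if e.status_code == 404:
        ollama.pull(model)  # download the missing model, then retry once
        response = ollama.chat(model=model, messages=[message])
    else:
        raise

print(response['message']['content'])
```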
## Raw data
{
"_id": null,
"home_page": "https://ollama.ai",
"name": "ollama",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.8",
"maintainer_email": null,
"keywords": null,
"author": "Ollama",
"author_email": "hello@ollama.com",
"download_url": "https://files.pythonhosted.org/packages/a6/8e/60a9b065eb796ef3996451cbe2d8044f6b030696166693b9805ae33b8b4c/ollama-0.3.3.tar.gz",
"platform": null,
"description": "# Ollama Python Library\n\nThe Ollama Python library provides the easiest way to integrate Python 3.8+ projects with [Ollama](https://github.com/ollama/ollama).\n\n## Install\n\n```sh\npip install ollama\n```\n\n## Usage\n\n```python\nimport ollama\nresponse = ollama.chat(model='llama3.1', messages=[\n {\n 'role': 'user',\n 'content': 'Why is the sky blue?',\n },\n])\nprint(response['message']['content'])\n```\n\n## Streaming responses\n\nResponse streaming can be enabled by setting `stream=True`, modifying function calls to return a Python generator where each part is an object in the stream.\n\n```python\nimport ollama\n\nstream = ollama.chat(\n model='llama3.1',\n messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],\n stream=True,\n)\n\nfor chunk in stream:\n print(chunk['message']['content'], end='', flush=True)\n```\n\n## API\n\nThe Ollama Python library's API is designed around the [Ollama REST API](https://github.com/ollama/ollama/blob/main/docs/api.md)\n\n### Chat\n\n```python\nollama.chat(model='llama3.1', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])\n```\n\n### Generate\n\n```python\nollama.generate(model='llama3.1', prompt='Why is the sky blue?')\n```\n\n### List\n\n```python\nollama.list()\n```\n\n### Show\n\n```python\nollama.show('llama3.1')\n```\n\n### Create\n\n```python\nmodelfile='''\nFROM llama3.1\nSYSTEM You are mario from super mario bros.\n'''\n\nollama.create(model='example', modelfile=modelfile)\n```\n\n### Copy\n\n```python\nollama.copy('llama3.1', 'user/llama3.1')\n```\n\n### Delete\n\n```python\nollama.delete('llama3.1')\n```\n\n### Pull\n\n```python\nollama.pull('llama3.1')\n```\n\n### Push\n\n```python\nollama.push('user/llama3.1')\n```\n\n### Embeddings\n\n```python\nollama.embeddings(model='llama3.1', prompt='The sky is blue because of rayleigh scattering')\n```\n\n### Ps\n\n```python\nollama.ps()\n```\n\n## Custom client\n\nA custom client can be created with the following fields:\n\n- `host`: The Ollama host to connect to\n- `timeout`: The timeout for requests\n\n```python\nfrom ollama import Client\nclient = Client(host='http://localhost:11434')\nresponse = client.chat(model='llama3.1', messages=[\n {\n 'role': 'user',\n 'content': 'Why is the sky blue?',\n },\n])\n```\n\n## Async client\n\n```python\nimport asyncio\nfrom ollama import AsyncClient\n\nasync def chat():\n message = {'role': 'user', 'content': 'Why is the sky blue?'}\n response = await AsyncClient().chat(model='llama3.1', messages=[message])\n\nasyncio.run(chat())\n```\n\nSetting `stream=True` modifies functions to return a Python asynchronous generator:\n\n```python\nimport asyncio\nfrom ollama import AsyncClient\n\nasync def chat():\n message = {'role': 'user', 'content': 'Why is the sky blue?'}\n async for part in await AsyncClient().chat(model='llama3.1', messages=[message], stream=True):\n print(part['message']['content'], end='', flush=True)\n\nasyncio.run(chat())\n```\n\n## Errors\n\nErrors are raised if requests return an error status or if an error is detected while streaming.\n\n```python\nmodel = 'does-not-yet-exist'\n\ntry:\n ollama.chat(model)\nexcept ollama.ResponseError as e:\n print('Error:', e.error)\n if e.status_code == 404:\n ollama.pull(model)\n```\n\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "The official Python client for Ollama.",
"version": "0.3.3",
"project_urls": {
"Homepage": "https://ollama.ai",
"Repository": "https://github.com/jmorganca/ollama-python"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "6acad22905ac3f768523f778189d38c9c6cd9edf4fa9dd09cb5a3fc57b184f90",
"md5": "0ea6d64afa669ece1d70b701ac0b80a9",
"sha256": "ca6242ce78ab34758082b7392df3f9f6c2cb1d070a9dede1a4c545c929e16dba"
},
"downloads": -1,
"filename": "ollama-0.3.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "0ea6d64afa669ece1d70b701ac0b80a9",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.8",
"size": 10267,
"upload_time": "2024-09-09T17:23:41",
"upload_time_iso_8601": "2024-09-09T17:23:41.841050Z",
"url": "https://files.pythonhosted.org/packages/6a/ca/d22905ac3f768523f778189d38c9c6cd9edf4fa9dd09cb5a3fc57b184f90/ollama-0.3.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "a68e60a9b065eb796ef3996451cbe2d8044f6b030696166693b9805ae33b8b4c",
"md5": "437ca6a41c5225283de3392f3e9ad9a7",
"sha256": "f90a6d61803117f40b0e8ff17465cab5e1eb24758a473cfe8101aff38bc13b51"
},
"downloads": -1,
"filename": "ollama-0.3.3.tar.gz",
"has_sig": false,
"md5_digest": "437ca6a41c5225283de3392f3e9ad9a7",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.8",
"size": 10390,
"upload_time": "2024-09-09T17:23:43",
"upload_time_iso_8601": "2024-09-09T17:23:43.495573Z",
"url": "https://files.pythonhosted.org/packages/a6/8e/60a9b065eb796ef3996451cbe2d8044f6b030696166693b9805ae33b8b4c/ollama-0.3.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-09-09 17:23:43",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "jmorganca",
"github_project": "ollama-python",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "ollama"
}