| Field | Value |
| --- | --- |
| Name | ollama-hass |
| Version | 0.1.7 |
| home_page | https://ollama.ai |
| Summary | A fork of the official Python client for Ollama for Home Assistant. |
| upload_time | 2024-03-25 15:18:25 |
| maintainer | None |
| docs_url | None |
| author | Ollama |
| requires_python | <4.0,>=3.8 |
| license | MIT |
| keywords | None |
| VCS | https://github.com/jmorganca/ollama-python |
| bugtrack_url | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

**NOTE**: This is a fork of the official Ollama Python library with [loosened dependencies](https://github.com/ollama/ollama-python/pull/97) in order to make it compatible with [Home Assistant](https://www.home-assistant.io/).
# Ollama Python Library
The Ollama Python library provides the easiest way to integrate Python 3.8+ projects with [Ollama](https://github.com/jmorganca/ollama).
## Install
```sh
pip install ollama
```
## Usage
```python
import ollama
response = ollama.chat(model='llama2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
```
## Streaming responses
Response streaming can be enabled by setting `stream=True`. This modifies the function call to return a Python generator that yields each part of the stream as it arrives.
```python
import ollama

stream = ollama.chat(
  model='llama2',
  messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
  stream=True,
)

for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)
```
## API
The Ollama Python library's API is designed around the [Ollama REST API](https://github.com/jmorganca/ollama/blob/main/docs/api.md).
### Chat
```python
ollama.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
```
### Generate
```python
ollama.generate(model='llama2', prompt='Why is the sky blue?')
```
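As with chat, the call returns the full response object. A minimal sketch, assuming the result carries a `response` key as in the REST `/api/generate` reply:
```python
import ollama

# The 'response' key is assumed from the REST /api/generate reply shape.
result = ollama.generate(model='llama2', prompt='Why is the sky blue?')
print(result['response'])
```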
### List
```python
ollama.list()
```
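The return value mirrors the REST `/api/tags` response. A sketch, assuming a top-level `models` list whose entries carry a `name` field:
```python
import ollama

# List locally available models and print their names.
# Assumes the response mirrors /api/tags: {'models': [{'name': ...}, ...]}.
for model in ollama.list()['models']:
  print(model['name'])
```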
### Show
```python
ollama.show('llama2')
```
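The returned fields track the REST `/api/show` endpoint. A hedged example, assuming a `modelfile` key in the response:
```python
import ollama

# Inspect a model's Modelfile; the key name is assumed from the /api/show REST response.
info = ollama.show('llama2')
print(info['modelfile'])
```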
### Create
```python
modelfile = '''
FROM llama2
SYSTEM You are Mario from Super Mario Bros.
'''

ollama.create(model='example', modelfile=modelfile)
```
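Once created, the new model can be used like any other, for example with the chat call shown earlier:
```python
import ollama

# Chat with the model created above.
response = ollama.chat(model='example', messages=[
  {'role': 'user', 'content': 'Who are you?'},
])
print(response['message']['content'])
```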
### Copy
```python
ollama.copy('llama2', 'user/llama2')
```
### Delete
```python
ollama.delete('llama2')
```
### Pull
```python
ollama.pull('llama2')
```
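Like chat, pull accepts `stream=True` to report progress. A sketch, assuming progress events carry a `status` field as in the REST `/api/pull` responses:
```python
import ollama

# Stream pull progress; the 'status' key is assumed from the REST /api/pull events.
for progress in ollama.pull('llama2', stream=True):
  print(progress.get('status'))
```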
### Push
```python
ollama.push('user/llama2')
```
### Embeddings
```python
ollama.embeddings(model='llama2', prompt='The sky is blue because of Rayleigh scattering')
```
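The response mirrors the REST `/api/embeddings` endpoint. A sketch, assuming an `embedding` key holding a list of floats:
```python
import ollama

# The 'embedding' key is assumed from the REST /api/embeddings response shape.
result = ollama.embeddings(model='llama2', prompt='The sky is blue because of Rayleigh scattering')
print(len(result['embedding']))  # dimensionality of the embedding vector
```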
## Custom client
A custom client can be created with the following fields:
- `host`: The Ollama host to connect to
- `timeout`: The timeout for requests
```python
from ollama import Client
client = Client(host='http://localhost:11434')
response = client.chat(model='llama2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
```
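Both fields can be set together. For example, a client with a request timeout (the 30-second value here is purely illustrative):
```python
from ollama import Client

# host and timeout are the two documented constructor fields;
# the timeout value below is an illustrative choice.
client = Client(host='http://localhost:11434', timeout=30)
```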
## Async client
```python
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama2', messages=[message])
  print(response['message']['content'])

asyncio.run(chat())
```
Setting `stream=True` modifies functions to return a Python asynchronous generator:
```python
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama2', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())
```
## Errors
Errors are raised if requests return an error status or if an error is detected while streaming.
```python
import ollama

model = 'does-not-yet-exist'

try:
  ollama.chat(model)
except ollama.ResponseError as e:
  print('Error:', e.error)
  if e.status_code == 404:
    ollama.pull(model)
```
## Raw data

```json
{
  "_id": null,
  "home_page": "https://ollama.ai",
  "name": "ollama-hass",
  "maintainer": null,
  "docs_url": null,
  "requires_python": "<4.0,>=3.8",
  "maintainer_email": null,
  "keywords": null,
  "author": "Ollama",
  "author_email": "hello@ollama.com",
  "download_url": "https://files.pythonhosted.org/packages/ee/dc/c45d42f94fd05a94d00cc1ea02ca7e4553dac19540c87b169b6cfeb5e210/ollama_hass-0.1.7.tar.gz",
  "platform": null,
"description": "**NOTE**: This is a fork of the official Ollama Python library with [loosened dependencies](https://github.com/ollama/ollama-python/pull/97) in order to make it compatible with [Home Assistant](https://www.home-assistant.io/).\n\n# Ollama Python Library\n\nThe Ollama Python library provides the easiest way to integrate Python 3.8+ projects with [Ollama](https://github.com/jmorganca/ollama).\n\n## Install\n\n```sh\npip install ollama\n```\n\n## Usage\n\n```python\nimport ollama\nresponse = ollama.chat(model='llama2', messages=[\n {\n 'role': 'user',\n 'content': 'Why is the sky blue?',\n },\n])\nprint(response['message']['content'])\n```\n\n## Streaming responses\n\nResponse streaming can be enabled by setting `stream=True`, modifying function calls to return a Python generator where each part is an object in the stream.\n\n```python\nimport ollama\n\nstream = ollama.chat(\n model='llama2',\n messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],\n stream=True,\n)\n\nfor chunk in stream:\n print(chunk['message']['content'], end='', flush=True)\n```\n\n## API\n\nThe Ollama Python library's API is designed around the [Ollama REST API](https://github.com/jmorganca/ollama/blob/main/docs/api.md)\n\n### Chat\n\n```python\nollama.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])\n```\n\n### Generate\n\n```python\nollama.generate(model='llama2', prompt='Why is the sky blue?')\n```\n\n### List\n\n```python\nollama.list()\n```\n\n### Show\n\n```python\nollama.show('llama2')\n```\n\n### Create\n\n```python\nmodelfile='''\nFROM llama2\nSYSTEM You are mario from super mario bros.\n'''\n\nollama.create(model='example', modelfile=modelfile)\n```\n\n### Copy\n\n```python\nollama.copy('llama2', 'user/llama2')\n```\n\n### Delete\n\n```python\nollama.delete('llama2')\n```\n\n### Pull\n\n```python\nollama.pull('llama2')\n```\n\n### Push\n\n```python\nollama.push('user/llama2')\n```\n\n### Embeddings\n\n```python\nollama.embeddings(model='llama2', prompt='They sky is blue because of rayleigh scattering')\n```\n\n## Custom client\n\nA custom client can be created with the following fields:\n\n- `host`: The Ollama host to connect to\n- `timeout`: The timeout for requests\n\n```python\nfrom ollama import Client\nclient = Client(host='http://localhost:11434')\nresponse = client.chat(model='llama2', messages=[\n {\n 'role': 'user',\n 'content': 'Why is the sky blue?',\n },\n])\n```\n\n## Async client\n\n```python\nimport asyncio\nfrom ollama import AsyncClient\n\nasync def chat():\n message = {'role': 'user', 'content': 'Why is the sky blue?'}\n response = await AsyncClient().chat(model='llama2', messages=[message])\n\nasyncio.run(chat())\n```\n\nSetting `stream=True` modifies functions to return a Python asynchronous generator:\n\n```python\nimport asyncio\nfrom ollama import AsyncClient\n\nasync def chat():\n message = {'role': 'user', 'content': 'Why is the sky blue?'}\n async for part in await AsyncClient().chat(model='llama2', messages=[message], stream=True):\n print(part['message']['content'], end='', flush=True)\n\nasyncio.run(chat())\n```\n\n## Errors\n\nErrors are raised if requests return an error status or if an error is detected while streaming.\n\n```python\nmodel = 'does-not-yet-exist'\n\ntry:\n ollama.chat(model)\nexcept ollama.ResponseError as e:\n print('Error:', e.error)\n if e.status_code == 404:\n ollama.pull(model)\n```\n\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A fork of the official Python client for Ollama for Home Assistant.",
"version": "0.1.7",
"project_urls": {
"Homepage": "https://ollama.ai",
"Repository": "https://github.com/jmorganca/ollama-python"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "1625afb47ee6b27911de140bf4b53b41bea2b128f7f8c2aca59d5648f7a2f30c",
"md5": "8034b3c2779a50151892c2973fe11b86",
"sha256": "130fdf6cdd2bf86be0cce3e5328676c5de5c2fb4d34f9478c2890bec4fbcb7e2"
},
"downloads": -1,
"filename": "ollama_hass-0.1.7-py3-none-any.whl",
"has_sig": false,
"md5_digest": "8034b3c2779a50151892c2973fe11b86",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.8",
"size": 9507,
"upload_time": "2024-03-25T15:18:24",
"upload_time_iso_8601": "2024-03-25T15:18:24.663692Z",
"url": "https://files.pythonhosted.org/packages/16/25/afb47ee6b27911de140bf4b53b41bea2b128f7f8c2aca59d5648f7a2f30c/ollama_hass-0.1.7-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "eedcc45d42f94fd05a94d00cc1ea02ca7e4553dac19540c87b169b6cfeb5e210",
"md5": "e6c000652e3f84feb80ba277963486d5",
"sha256": "ac0ac9e68d97e2b74dfe8278671c2c67c3ed4b796df1b195c82e440350918684"
},
"downloads": -1,
"filename": "ollama_hass-0.1.7.tar.gz",
"has_sig": false,
"md5_digest": "e6c000652e3f84feb80ba277963486d5",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.8",
"size": 9684,
"upload_time": "2024-03-25T15:18:25",
"upload_time_iso_8601": "2024-03-25T15:18:25.888647Z",
"url": "https://files.pythonhosted.org/packages/ee/dc/c45d42f94fd05a94d00cc1ea02ca7e4553dac19540c87b169b6cfeb5e210/ollama_hass-0.1.7.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-03-25 15:18:25",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "jmorganca",
"github_project": "ollama-python",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "ollama-hass"
}