ollama-flow

Name: ollama-flow
Version: 0.1.0
Summary: A Python library for the Ollama API.
License: MIT
Requires Python: >=3.12
Keywords: ai, api, chat, embed, generate, llm, ollama
Upload time: 2025-07-12 01:19:04
# Ollama Flow

A powerful and easy-to-use Python library for interacting with the Ollama API.

## Features

- 🚀 **Simple, intuitive API** - a clean Python interface
- 🎯 **Structured output** - supports JSON Schema and Pydantic models
- 🌊 **Streaming mode** - receive generated content in real time
- 💬 **Full chat support** - multi-turn conversations and tool calls
- 🔤 **Embeddings** - generate text embedding vectors
- ✅ **Smart model validation** - automatically checks that a model exists and gives friendly error messages
- 💾 **Caching** - caches the model list to improve performance
- 🛡️ **Type safety** - full type-hint coverage
- 📦 **Minimal dependencies** - only `requests` and `pydantic`

## Supported API Endpoints

- `/api/generate` - text generation
- `/api/chat` - chat completion
- `/api/embed` - embedding generation
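
These are the same REST endpoints a local Ollama server exposes, so a plain `requests` call shows roughly what the library wraps. A minimal sketch, assuming Ollama is running on the default port with `llama3.2` pulled:

```python
import requests

# POST /api/generate on a local Ollama server (the public Ollama API shape).
# Assumes the server is at the default address and llama3.2 is available.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Hello!", "stream": False},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["response"])
```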

## Installation

```bash
pip install ollama-flow
```

Or install from source:

```bash
git clone https://github.com/your-username/ollama-flow.git
cd ollama-flow
pip install -e .
```

## Quick Start

### Basic Usage

```python
from ollama_flow import OllamaClient

# Create a client
client = OllamaClient(base_url="http://localhost:11434")

# Generate text
response = client.generate(
    model="llama3.2",
    prompt="Explain what machine learning is.",
    stream=False
)
print(response.response)
```

### Chat

```python
from ollama_flow import OllamaClient, ChatMessage

client = OllamaClient()

messages = [
    ChatMessage(role="system", content="你是一個有用的助手。"),
    ChatMessage(role="user", content="你好!")
]

response = client.chat(
    model="llama3.2",
    messages=messages,
    stream=False
)
print(response.message.content)
```
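
Chat also works in streaming mode. A hedged sketch, assuming streamed chat chunks mirror the raw Ollama `/api/chat` stream (dicts with a nested `message`):

```python
# Assumption: client.chat(stream=True) yields dicts shaped like the raw
# Ollama /api/chat stream: {"message": {"content": ...}, "done": ...}.
for chunk in client.chat(model="llama3.2", messages=messages, stream=True):
    if chunk.get("done", False):
        break
    print(chunk.get("message", {}).get("content", ""), end="", flush=True)
```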

### Structured Output

```python
from ollama_flow import OllamaClient
from pydantic import BaseModel, Field
from typing import List

class Product(BaseModel):
    name: str = Field(..., description="Product name")
    price: float = Field(..., description="Price")
    features: List[str] = Field(..., description="Features")

client = OllamaClient()

# Use a Pydantic model
response = client.generate_structured(
    model="llama3.2",
    prompt="Create smartphone product information. Respond in JSON format.",
    schema=Product,
    stream=False
)

# Parse the structured response
product = client.parse_structured_response(response.response, Product)
print(f"Product name: {product.name}")
print(f"Price: ${product.price}")
print(f"Features: {', '.join(product.features)}")
```

### Streaming Mode

```python
from ollama_flow import OllamaClient

client = OllamaClient()

response_stream = client.generate(
    model="llama3.2",
    prompt="寫一篇關於人工智慧的文章。",
    stream=True
)

print("生成中:", end="", flush=True)
for chunk in response_stream:
    if chunk.get("done", False):
        print("\n生成完成!")
        break
    else:
        print(chunk.get("response", ""), end="", flush=True)
```

### Generating Embeddings

```python
from ollama_flow import OllamaClient

client = OllamaClient()

response = client.embed(
    model="all-minilm",
    input="這是要轉換為嵌入向量的文本。"
)

print(f"嵌入維度:{len(response.embeddings[0])}")
print(f"嵌入向量:{response.embeddings[0][:5]}...")  # 顯示前5個值
```
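
Since `EmbedRequest.input` also accepts a list of texts (see the data models below), batch embedding should work as well. A sketch under the assumption that `embed()` forwards a list unchanged and returns one embedding per input:

```python
# Assumption: embed() accepts a list of strings, per EmbedRequest's `input`.
batch = client.embed(
    model="all-minilm",
    input=["First sentence.", "Second sentence."],
)
print(f"Number of embeddings: {len(batch.embeddings)}")  # expected: 2
```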

## Advanced Features

### Model Validation

```python
# Model validation is enabled by default
client = OllamaClient(check_models=True)

# Get the list of available models
models = client.list_models()
print(f"Available models: {models}")

# Refresh the model cache
models = client.refresh_models_cache()

# Disable model validation (not recommended)
client_no_check = OllamaClient(check_models=False)

# Using a model that does not exist raises a ValueError
try:
    client.generate(model="nonexistent-model", prompt="Hello")
except ValueError as e:
    print(f"Model validation error: {e}")
```

### JSON Mode

```python
import json

# Use JSON mode
response = client.generate_json(
    model="llama3.2",
    prompt="List three programming languages and their characteristics. Respond in JSON format.",
    stream=False
)

data = json.loads(response.response)
print(json.dumps(data, indent=2, ensure_ascii=False))
```

### Custom JSON Schema

```python
# Use a custom JSON Schema
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "hobbies": {"type": "array", "items": {"type": "string"}}
    },
    "required": ["name", "age"]
}

response = client.generate_structured(
    model="llama3.2",
    prompt="創建一個虛構角色的資訊。",
    schema=person_schema,
    stream=False
)
```

### Context Manager

```python
# Use a context manager to clean up the connection automatically
with OllamaClient() as client:
    response = client.generate(
        model="llama3.2",
        prompt="你好!",
        stream=False
    )
    print(response.response)
```

## API Reference

### OllamaClient

The main client class, providing the interface to the Ollama API.

#### Initialization

```python
client = OllamaClient(
    base_url="http://localhost:11434",  # Ollama server URL
    timeout=30,  # request timeout in seconds
    check_models=True  # whether to verify a model exists before calling it
)
```

#### Methods

- `generate()` - generate a text completion
- `chat()` - chat completion
- `embed()` - generate embeddings
- `generate_json()` - generate a JSON-formatted response
- `generate_structured()` - generate a structured response
- `chat_json()` - chat with a JSON-formatted response
- `chat_structured()` - chat with a structured response (see the sketch after this list)
- `parse_structured_response()` - parse a structured response
- `list_models()` - get the list of available models
- `refresh_models_cache()` - refresh the model cache
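
`chat_structured()` is listed above but not demonstrated in this README. A hedged sketch, assuming it mirrors `generate_structured()` but takes `messages` instead of `prompt` and returns a chat-style response:

```python
from pydantic import BaseModel

from ollama_flow import ChatMessage

class Answer(BaseModel):
    topic: str
    summary: str

# Assumption: chat_structured() parallels generate_structured(), and the
# structured JSON comes back in response.message.content.
response = client.chat_structured(
    model="llama3.2",
    messages=[ChatMessage(role="user", content="Summarize what an LLM is, in JSON.")],
    schema=Answer,
    stream=False,
)
answer = client.parse_structured_response(response.message.content, Answer)
print(answer.summary)
```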

### Data Models

#### GenerateRequest
- `model`: model name
- `prompt`: prompt text
- `format`: response format (optional)
- `stream`: whether to stream the response
- `options`: model parameters (optional; see the sketch after this list)

#### ChatMessage
- `role`: role (system/user/assistant/tool)
- `content`: message content
- `images`: list of images (optional)

#### ChatRequest
- `model`: model name
- `messages`: list of chat messages
- `format`: response format (optional)
- `stream`: whether to stream the response
- `tools`: tool definitions (optional)

#### EmbedRequest
- `model`: model name
- `input`: input text or list of texts
- `truncate`: whether to truncate long inputs
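
The `options` field is where Ollama's sampling parameters go. A sketch, assuming `generate()` exposes an `options` argument and forwards it verbatim (`temperature` and `num_predict` are standard Ollama option names):

```python
# Assumption: generate() accepts an `options` dict and passes it through
# to the API unchanged.
response = client.generate(
    model="llama3.2",
    prompt="Name one prime number.",
    options={"temperature": 0.2, "num_predict": 64},
    stream=False,
)
print(response.response)
```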

## Examples

See the `examples/` directory for detailed examples:

- `basic_usage.py` - basic usage
- `structured_output.py` - structured output
- `streaming.py` - streaming mode
- `model_validation.py` - model validation

## Error Handling

```python
from ollama_flow import OllamaClient

client = OllamaClient()

try:
    response = client.generate(
        model="llama3.2",
        prompt="Hello!",
        stream=False
    )
    print(response.response)
except Exception as e:
    print(f"An error occurred: {e}")
```

## Requirements

- Python 3.12+
- requests >= 2.31.0
- pydantic >= 2.0.0
- A running Ollama server

## License

MIT License

## Contributing

Issues and pull requests are welcome!

1. Fork the project
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## Changelog

### v0.1.0
- Initial release
- Support for the generate, chat, and embed APIs
- Structured output support
- Streaming mode support
- Smart model validation
- Model list caching
- Full type hints

## Related Links

- [Ollama](https://ollama.com/) - run LLMs locally
- [Ollama API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md)
- [Pydantic](https://docs.pydantic.dev/) - data validation library

            
