<div align="center">
<img src="logo/logo.png" alt="Generate Logo" width="200"/>
</div>
<div align="center">
<h1>Generate</h1>
<p>
A Python Package to Access World-Class Generative Models.
</p>
<p>
<a href="https://wangyuxinwhy.github.io/generate/">Documentation (Chinese)</a>
|
<a href="https://colab.research.google.com/github/wangyuxinwhy/generate/blob/main/examples/tutorial.ipynb">Interactive Tutorial</a>
</p>
[![Python Version](https://img.shields.io/badge/python-3.8+-blue.svg)](#)
[![CI Status](https://github.com/wangyuxinwhy/generate/actions/workflows/ci.yml/badge.svg)](https://github.com/wangyuxinwhy/generate/actions/workflows/ci.yml)
[![CD Status](https://github.com/wangyuxinwhy/generate/actions/workflows/cd.yml/badge.svg)](https://github.com/wangyuxinwhy/generate/actions/workflows/cd.yml)
[![License](https://img.shields.io/github/license/wangyuxinwhy/generate)](https://github.com/wangyuxinwhy/generate/blob/main/LICENSE)
[![Documentation](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://wangyuxinwhy.github.io/generate/)
[![Made with Love](https://img.shields.io/badge/made%20with-love-red.svg)](#)
</div>
<br>
<br>
# Introduction
Generate lets users access generative models across multiple platforms through a single, unified API. Currently supported:
| Platform 🤖 | Sync 🔄 | Async ⏳ | Streaming 🌊 | Vision 👀 | Tools 🛠️ |
| ----------------- | ------- | ------- | ------- | --------- | -------- |
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Azure | ✅ | ✅ | ❌ | ✅ | ✅ |
| Anthropic | ✅ | ✅ | ✅ | ✅ | ❌ |
| 文心 Wenxin | ✅ | ✅ | ✅ | ❌ | ✅ |
| 百炼 Bailian | ✅ | ✅ | ✅ | ❌ | ❌ |
| 灵积 DashScope | ✅ | ✅ | ✅ | ✅ | ❌ |
| 百川智能 Baichuan | ✅ | ✅ | ✅ | ❌ | ❌ |
| Minimax | ✅ | ✅ | ✅ | ❌ | ✅ |
| 混元 Hunyuan | ✅ | ✅ | ✅ | ❌ | ❌ |
| 智谱 Zhipu | ✅ | ✅ | ✅ | ✅ | ✅ |
| 月之暗面 Moonshot | ✅ | ✅ | ✅ | ❌ | ❌ |
| DeepSeek | ✅ | ✅ | ✅ | ❌ | ❌ |
| 零一万物 Yi | ✅ | ✅ | ✅ | ✅ | ❌ |
| 阶跃星辰 StepFun | ✅ | ✅ | ✅ | ✅ | ❌ |
## Features
- **Multimodal**: text generation, multimodal text generation, structured generation, image generation, speech generation...
- **Cross-platform**: supports 10+ platforms at home and abroad, including OpenAI, Azure, Minimax, Zhipu, Moonshot, and Wenxin
- **One API**: unifies message formats, inference parameters, interface wrappers, and response parsing across platforms, so users never have to deal with per-platform differences
- **Async, streaming, and concurrency**: streaming and non-streaming calls, sync and async calls, and async concurrent batch calls, to fit different application scenarios
- **Batteries included**: a chainlit UI, input validation, parameter validation, cost accounting, rate limiting, _Agent_, _Tool call_, and more
- **Lightweight**: minimal dependencies; request and authentication logic for every platform is implemented natively
- **High-quality code**: 100% type hints, pylance strict, ruff lint & format, test coverage > 85% ...
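The "One API" idea can be illustrated with a minimal adapter sketch. This is a conceptual illustration, not the library's internals; the `Fake*` classes below are hypothetical stand-ins for the real platform clients. Each adapter exposes the same `generate` signature, so caller code never branches on the platform:

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """Common interface every platform adapter implements."""
    @abstractmethod
    def generate(self, prompt: str, **params) -> str: ...

class FakeOpenAIChat(ChatModel):
    def generate(self, prompt: str, **params) -> str:
        # A real adapter would translate params and call the OpenAI API here.
        return f'[openai] reply to: {prompt}'

class FakeZhipuChat(ChatModel):
    def generate(self, prompt: str, **params) -> str:
        # A real adapter would translate params into Zhipu's request format.
        return f'[zhipu] reply to: {prompt}'

def ask(model: ChatModel, prompt: str) -> str:
    # Platform-agnostic caller: only the constructor names the platform.
    return model.generate(prompt, temperature=0)

for model in (FakeOpenAIChat(), FakeZhipuChat()):
    print(ask(model, 'hello'))
```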
## Basic Usage
<a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/generate/blob/main/examples/tutorial.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
### Installation
```bash
pip install generate-core
```
### List Available Models
```python
from generate.chat_completion import ChatModelRegistry
print('\n'.join([model_cls.__name__ for model_cls, _ in ChatModelRegistry.values()]))
# ----- Output -----
AzureChat
AnthropicChat
OpenAIChat
MinimaxProChat
MinimaxChat
ZhipuChat
ZhipuCharacterChat
WenxinChat
HunyuanChat
BaichuanChat
BailianChat
DashScopeChat
DashScopeMultiModalChat
MoonshotChat
DeepSeekChat
YiChat
```
### Configure the Model API
```python
from generate import WenxinChat
# Show how to configure Wenxin; other models work the same way
print(WenxinChat.how_to_settings())
# ----- Output -----
WenxinChat Settings

# Platform
Qianfan

# Required Environment Variables
['QIANFAN_API_KEY', 'QIANFAN_SECRET_KEY']

# Optional Environment Variables
['QIANFAN_PLATFORM_URL', 'QIANFAN_COMLPETION_API_BASE', 'QIANFAN_IMAGE_GENERATION_API_BASE', 'QIANFAN_ACCESS_TOKEN_API']

You can get more information from this link: https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Dlkm79mnx

tips: You can also set these variables in the .env file, and generate will automatically load them.
```
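Conceptually, the settings loader just reads these environment variables and fails fast when a required one is missing. A hedged stdlib-only sketch of that idea (the `load_settings` helper is hypothetical, not the library's actual loader; the variable names come from the output above):

```python
import os

def load_settings(required: list, optional: list) -> dict:
    """Collect platform credentials from the environment, failing fast on gaps."""
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise EnvironmentError(f'missing required environment variables: {missing}')
    return {name: os.environ[name] for name in required + optional if name in os.environ}

# Demo values only; real keys would come from your shell or a .env file.
os.environ['QIANFAN_API_KEY'] = 'demo-key'
os.environ['QIANFAN_SECRET_KEY'] = 'demo-secret'
settings = load_settings(
    required=['QIANFAN_API_KEY', 'QIANFAN_SECRET_KEY'],
    optional=['QIANFAN_PLATFORM_URL'],
)
print(settings['QIANFAN_API_KEY'])  # demo-key
```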
### Chat Completion Models
#### Text Generation
```python
from generate import OpenAIChat
model = OpenAIChat()
model.generate('你好,GPT!', temperature=0, seed=2023)
# ----- Output -----
ChatCompletionOutput(
    model_info=ModelInfo(task='chat_completion', type='openai', name='gpt-3.5-turbo-0613'),
    cost=0.000343,
    extra={'usage': {'prompt_tokens': 13, 'completion_tokens': 18, 'total_tokens': 31}},
    message=AssistantMessage(
        role='assistant',
        name=None,
        content='你好!有什么我可以帮助你的吗?',
        function_call=None,
        tool_calls=None
    ),
    finish_reason='stop'
)
```
#### Multimodal Text Generation
```python
from generate import OpenAIChat
model = OpenAIChat(model='gpt-4-vision-preview')
user_message = {
    'role': 'user',
    'content': [
        {'text': '这个图片是哪里?'},
        {'image_url': {'url': 'https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg'}},
    ],
}
model.generate(user_message, max_tokens=1000)
# ----- Output -----
ChatCompletionOutput(
    model_info=ModelInfo(task='chat_completion', type='openai', name='gpt-4-1106-vision-preview'),
    cost=0.10339000000000001,
    extra={'usage': {'prompt_tokens': 1120, 'completion_tokens': 119, 'total_tokens': 1239}},
    message=AssistantMessage(
        role='assistant',
        name=None,
        content='这张图片显示的是一名女士和一只狗在沙滩上。他们似乎在享受日落时分的宁静时刻',
        function_call=None,
        tool_calls=None
    ),
    finish_reason='stop'
)
```
### Derived Features
#### Structured Generation
```python
from generate import OpenAIChat
from pydantic import BaseModel
class Country(BaseModel):
    name: str
    capital: str
model = OpenAIChat().structure(output_structure_type=Country)
model.generate('Paris is the capital of France and also the largest city in the country.')
# ----- Output -----
StructureModelOutput(
    model_info=ModelInfo(task='chat_completion', type='openai', name='gpt-3.5-turbo-0613'),
    cost=0.000693,
    extra={'usage': {'prompt_tokens': 75, 'completion_tokens': 12, 'total_tokens': 87}},
    structure=Country(name='France', capital='Paris')
)
```
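Under the hood, structured generation typically asks the model to emit JSON matching the schema and then validates the reply into the target type. A stdlib-only sketch of that parse-and-validate step (the library itself works with pydantic models; the `parse_structure` helper here is hypothetical):

```python
import json
from dataclasses import dataclass

@dataclass
class Country:
    name: str
    capital: str

def parse_structure(model_reply: str) -> Country:
    """Validate a model's JSON reply against the expected schema."""
    data = json.loads(model_reply)
    # Construction raises TypeError on missing or unexpected fields.
    return Country(**data)

country = parse_structure('{"name": "France", "capital": "Paris"}')
print(country)  # Country(name='France', capital='Paris')
```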
#### Rate Limiting
```python
import time
from generate import OpenAIChat
# Rate limit: at most 2 requests per 10-second window
limit_model = OpenAIChat().limit(max_generates_per_time_window=2, num_seconds_in_time_window=10)
start_time = time.time()
for output in limit_model.batch_generate([f'1 + {i} = ?' for i in range(4)]):
    print(output.reply)
    print(f'elapsed time: {time.time() - start_time:.2f} seconds')
# ----- Output -----
1
elapsed time: 0.70 seconds
2
elapsed time: 1.34 seconds
3
elapsed time: 11.47 seconds
4
elapsed time: 12.15 seconds
```
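The timing pattern above (two fast replies, then a roughly 10-second pause) matches a fixed-window limiter: once the quota for the current window is used, the caller sleeps until the window resets. A minimal sketch with an injectable clock so it can be exercised without real waiting (hypothetical, not the library's implementation):

```python
class WindowLimiter:
    """Allow at most `limit` acquisitions per `window` seconds."""

    def __init__(self, limit: int, window: float, clock, sleep):
        self.limit, self.window = limit, window
        self.clock, self.sleep = clock, sleep
        self.window_start = None
        self.count = 0

    def acquire(self) -> None:
        now = self.clock()
        if self.window_start is None or now - self.window_start >= self.window:
            self.window_start, self.count = now, 0  # fresh window
        if self.count >= self.limit:
            # Quota exhausted: wait out the remainder of the window.
            self.sleep(self.window_start + self.window - now)
            self.window_start, self.count = self.clock(), 0
        self.count += 1

# Simulated clock: "sleeping" just advances fake time.
fake_time = [0.0]
limiter = WindowLimiter(
    limit=2, window=10.0,
    clock=lambda: fake_time[0],
    sleep=lambda s: fake_time.__setitem__(0, fake_time[0] + s),
)
for _ in range(4):
    limiter.acquire()
print(fake_time[0])  # 10.0: the third request had to wait for a new window
```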
#### Keeping Conversation History
```python
from generate import OpenAIChat
session_model = OpenAIChat().session()
session_model.generate('i am bob')
print(session_model.generate('What is my name?').reply)
# ----- Output -----
Your name is Bob.
```
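Session mode works by accumulating the message history and resending it with each new turn, which is how the model can answer "What is my name?". A minimal sketch of that bookkeeping, with a stub in place of a real API call (hypothetical, not the library's `session` implementation):

```python
class ChatSession:
    """Accumulates messages so every call sees the full conversation."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.history = []

    def generate(self, user_input: str) -> str:
        self.history.append({'role': 'user', 'content': user_input})
        reply = self.model_fn(self.history)  # real code would call the chat API
        self.history.append({'role': 'assistant', 'content': reply})
        return reply

# Stub model: just reports how many messages it was shown.
session = ChatSession(lambda history: f'saw {len(history)} messages')
session.generate('i am bob')
print(session.generate('What is my name?'))  # saw 3 messages
```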
#### Tool Calls
```python
from generate import OpenAIChat, tool
@tool
def get_weather(location: str) -> str:
    return f'{location}, 27°C, Sunny'
agent = OpenAIChat().agent(tools=get_weather)
print(agent.generate('what is the weather in Beijing?').reply)
# ----- Output -----
The weather in Beijing is currently 27°C and sunny.
```
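The agent flow behind this is a loop: the model returns a tool call, the runtime executes the matching function, and the result is fed back to the model for the final answer. A simplified single-round sketch with a stub model in place of a real API (hypothetical message shapes, not the library's wire format):

```python
import json

def get_weather(location: str) -> str:
    return f'{location}, 27°C, Sunny'

TOOLS = {'get_weather': get_weather}

def run_agent(model_fn, user_prompt: str) -> str:
    reply = model_fn(user_prompt, tool_result=None)
    if reply.get('tool_call'):  # model asked to use a tool
        call = reply['tool_call']
        result = TOOLS[call['name']](**json.loads(call['arguments']))
        reply = model_fn(user_prompt, tool_result=result)  # second round with the result
    return reply['content']

# Stub model: first round requests the tool, second round phrases the answer.
def stub_model(prompt, tool_result):
    if tool_result is None:
        return {'tool_call': {'name': 'get_weather', 'arguments': '{"location": "Beijing"}'}}
    return {'content': f'The weather in Beijing is: {tool_result}', 'tool_call': None}

print(run_agent(stub_model, 'what is the weather in Beijing?'))
```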
### Image Generation Models
```python
from generate import OpenAIImageGeneration
model = OpenAIImageGeneration()
model.generate('black hole')
# ----- Output -----
ImageGenerationOutput(
    model_info=ModelInfo(task='image_generation', type='openai', name='dall-e-3'),
    cost=0.56,
    extra={},
    images=[
        GeneratedImage(
            url='https://oaidalleapiprodscus.blob.core.windows.net/...',
            prompt='Visualize an astronomical illustration featuring a black hole at its core. The black hole
should be portrayed with strong gravitational lensing effect that distorts the light around it. Include a
surrounding accretion disk, glowing brightly with blue and white hues, streaked with shades of red and orange,
indicating heat and intense energy. The cosmos in the background should be filled with distant stars, galaxies, and
nebulas, illuminating the vast, infinite space with specks of light.',
            image_format='png',
            content=b'<image bytes>'
        )
    ]
)
```
### Speech Generation Models
```python
from generate import MinimaxSpeech
model = MinimaxSpeech()
model.generate('你好,世界!')
# ----- Output -----
TextToSpeechOutput(
    model_info=ModelInfo(task='text_to_speech', type='minimax', name='speech-01'),
    cost=0.01,
    extra={},
    audio=b'<audio bytes>',
    audio_format='mp3'
)
```
### Multiple Invocation Styles
```python
from generate import OpenAIChat
model = OpenAIChat()
for stream_output in model.stream_generate('介绍一下唐朝'):
    print(stream_output.stream.delta, end='', flush=True)
# synchronous call: model.generate
# asynchronous call: model.async_generate
# streaming call: model.stream_generate
# async streaming call: model.async_stream_generate
# batch call: model.batch_generate
# async batch call: model.async_batch_generate
```
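Async batch calling generally fans requests out concurrently while capping the number in flight, for which a semaphore is the usual tool. A self-contained asyncio sketch of that pattern, using a dummy coroutine in place of a real model call (hypothetical names, not the library's internals):

```python
import asyncio

async def fake_generate(prompt: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for network latency
    return f'answer to {prompt}'

async def batch_generate(prompts, max_concurrency: int = 2):
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(prompt):
        async with sem:  # at most max_concurrency requests in flight
            return await fake_generate(prompt)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(p) for p in prompts))

results = asyncio.run(batch_generate([f'1 + {i} = ?' for i in range(4)]))
print(results[0])  # answer to 1 + 0 = ?
```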
### Launch the chainlit UI
```bash
python -m generate.ui
# help
# python -m generate.ui --help
```