modelscope-agent

Name: modelscope-agent
Version: 0.7.1
Home page: https://github.com/modelscope/modelscope-agent
Summary: ModelScope Agent: a powerful models-and-tools agent framework built on ModelScope and open-source LLMs.
Upload time: 2024-11-11 11:06:22
Author: ModelScope Team
License: Apache License 2.0
Keywords: python, agent, llm, aigc, qwen, modelscope
            <h1> ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models</h1>

<p align="center">
    <br>
    <img src="https://modelscope.oss-cn-beijing.aliyuncs.com/modelscope.gif" width="400"/>
    <br>
</p>

<p align="center">
<a href="https://modelscope.cn/home">Modelscope Hub</a> | <a href="https://arxiv.org/abs/2309.00986">Paper</a> | <a href="https://modelscope.cn/studios/damo/ModelScopeGPT/summary">Demo</a>
<br>
        <a href="README_CN.md">中文</a>&nbsp | &nbspEnglish
</p>

<p align="center">
<img src="https://img.shields.io/badge/python-%E2%89%A53.8-5be.svg">
<a href='https://modelscope-agent.readthedocs.io/en/latest/?badge=latest'>
    <img src='https://readthedocs.org/projects/modelscope-agent/badge/?version=latest' alt='Documentation Status' />
</a>
<a href="https://github.com/modelscope/modelscope"><img src="https://img.shields.io/badge/modelscope[framework]-%E2%89%A51.16.0-5D91D4.svg"></a>
<a href="https://github.com/modelscope/modelscope-agent/actions?query=branch%3Amaster+workflow%3Acitest++"><img src="https://img.shields.io/github/actions/workflow/status/modelscope/modelscope-agent/citest.yaml?branch=master&logo=github&label=CI"></a>
<a href="https://github.com/modelscope/modelscope-agent/blob/main/LICENSE"><img src="https://img.shields.io/github/license/modelscope/modelscope-agent"></a>
<a href="https://github.com/modelscope/modelscope-agent/pulls"><img src="https://img.shields.io/badge/PR-welcome-55EB99.svg"></a>
<a href="https://pypi.org/project/modelscope-agent/"><img src="https://badge.fury.io/py/modelscope-agent.svg"></a>
<a href="https://pepy.tech/project/modelscope-agent"><img src="https://pepy.tech/badge/modelscope-agent"></a>
</p>

<p align="center">
<a href="https://trendshift.io/repositories/323" target="_blank"><img src="https://trendshift.io/api/badge/repositories/323" alt="modelscope%2Fmodelscope-agent | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>

## Introduction

Modelscope-Agent is a customizable and scalable Agent framework. A single agent has abilities such as role-playing, LLM calling, tool usage, planning, and memory.
It mainly has the following characteristics:

- **Simple Agent Implementation Process**: To implement an Agent application, simply specify a role instruction, an LLM name, and a list of tool names. The framework automatically orchestrates the workflows of tool usage, planning, and memory.
- **Rich models and tools**: The framework is equipped with a rich set of LLM interfaces, such as the Dashscope and ModelScope model interfaces and OpenAI-compatible model interfaces. Built-in tools such as **code interpreter**, **weather query**, **text-to-image**, and **web browsing** make it easy to customize dedicated agents.
- **Unified interface and high scalability**: The framework has a clear registration mechanism for tools and LLMs, making it convenient for users to extend it with more diverse Agent applications.
- **Low coupling**: Developers can easily use the built-in tools, LLMs, memory, and other components directly, without having to bind them to a higher-level agent.


## 🎉 News
* 🔥🔥🔥Aug 8, 2024: Modelscope-Agent released [CodexGraph](https://arxiv.org/abs/2408.03910), a new graph-based code generation tool that has proven effective and versatile across a variety of code-related tasks; please check the [example](https://github.com/modelscope/modelscope-agent/tree/master/apps/codexgraph_agent).
* 🔥🔥Aug 1, 2024: A highly efficient and reliable Data Science Assistant is now running on Modelscope-Agent; please find details in the [example](https://github.com/modelscope/modelscope-agent/tree/master/apps/datascience_assistant).
* 🔥July 17, 2024: Parallel tool calling is available on Modelscope-Agent-Server; please find details in the [doc](https://github.com/modelscope/modelscope-agent/blob/master/modelscope_agent_servers/README.md).
* 🔥June 17, 2024: Upgraded the RAG flow based on LlamaIndex, allowing users to run hybrid knowledge search with different strategies and modalities; please find details in the [doc](https://github.com/modelscope/modelscope-agent/blob/master/modelscope_agent/rag/README_zh.md).
* 🔥June 6, 2024: With [Modelscope-Agent-Server](https://github.com/modelscope/modelscope-agent/blob/master/modelscope_agent_servers/README.md), **Qwen2** can be used through the OpenAI SDK with tool-calling support; please find details in the [doc](https://github.com/modelscope/modelscope-agent/blob/master/docs/llms/qwen2_tool_calling.md).
* 🔥June 4, 2024: Modelscope-Agent now supports [Mobile-Agent-V2](https://arxiv.org/abs/2406.01014), based on an Android ADB environment; please check the [application](https://github.com/modelscope/modelscope-agent/tree/master/apps/mobile_agent).
* 🔥May 17, 2024: Modelscope-Agent now supports multi-role room chat in a [Gradio app](https://github.com/modelscope/modelscope-agent/tree/master/apps/multi_roles_chat_room).
* May 14, 2024: Modelscope-Agent supports image input in `RolePlay` agents with the latest OpenAI model `GPT-4o`. Developers can try this feature by specifying the `image_url` parameter.
* May 10, 2024: Modelscope-Agent launched a user-friendly `Assistant API`, along with a `Tools API` that executes utilities in isolated, secure containers; please find the [document](https://github.com/modelscope/modelscope-agent/blob/master/modelscope_agent_servers/).
* Apr 12, 2024: The [Ray](https://docs.ray.io/en/latest/) version of the multi-agent solution has landed in modelscope-agent; please find the [document](https://github.com/modelscope/modelscope-agent/blob/master/modelscope_agent/multi_agents_utils/README.md).
* Mar 15, 2024: Modelscope-Agent and [AgentFabric](https://github.com/modelscope/modelscope-agent/tree/master/apps/agentfabric) (an open-source version of GPTs) are running in the production environment of [ModelScope Studio](https://modelscope.cn/studios/agent).
* Feb 10, 2024: For Chinese New Year, we upgraded modelscope-agent to v0.3 so that developers can customize various types of agents more conveniently in code and build multi-agent demos more easily. For more details, please refer to [#267](https://github.com/modelscope/modelscope-agent/pull/267) and [#293](https://github.com/modelscope/modelscope-agent/pull/293).
* Nov 26, 2023: [AgentFabric](https://github.com/modelscope/modelscope-agent/tree/master/apps/agentfabric) now supports collaborative use in ModelScope's [Creation Space](https://modelscope.cn/studios/modelscope/AgentFabric/summary), allowing custom applications to be shared in the Creation Space. The update also integrates the latest [GTE](https://modelscope.cn/models/damo/nlp_gte_sentence-embedding_chinese-base/summary) text embedding.
* Nov 17, 2023: [AgentFabric](https://github.com/modelscope/modelscope-agent/tree/master/apps/agentfabric) released: an interactive framework that facilitates the creation of agents tailored to various real-world applications.
* Oct 30, 2023: A local version of the [Facechain Agent](https://modelscope.cn/studios/CVstudio/facechain_agent_studio/summary) was released and can be run locally. For detailed usage instructions, please refer to [Facechain Agent](#facechain-agent).
* Oct 25, 2023: A local version of the [Story Agent](https://modelscope.cn/studios/damo/story_agent/summary) for generating storybook illustrations was released and can be run locally. For detailed usage instructions, please refer to [Story Agent](#story-agent).
* Sep 20, 2023: [ModelScope GPT](https://modelscope.cn/studios/damo/ModelScopeGPT/summary) offers a local Gradio version: navigate to the demo/msgpt/ directory and execute `bash run_msgpt.sh`.
* Sep 4, 2023: Three demos, [demo_qwen](demo/demo_qwen_agent.ipynb), [demo_retrieval_agent](demo/demo_retrieval_agent.ipynb) and [demo_register_tool](demo/demo_register_new_tool.ipynb), were added, along with detailed tutorials.
* Sep 2, 2023: The [preprint paper](https://arxiv.org/abs/2309.00986) associated with this project was published.
* Aug 22, 2023: Added support for accessing various AI model APIs using ModelScope tokens.
* Aug 7, 2023: The initial version of the modelscope-agent repository was released.


## Installation

Clone the repo and install the dependencies:
```shell
git clone https://github.com/modelscope/modelscope-agent.git
cd modelscope-agent && pip install -r requirements.txt
```
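
The package is also published on [PyPI](https://pypi.org/project/modelscope-agent/) (this page documents version 0.7.1), so it can alternatively be installed directly with pip:

```shell
pip install modelscope-agent
```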

### ModelScope notebook (recommended)

The ModelScope Notebook offers a free tier that allows ModelScope users to run the demo with minimal setup; see [ModelScope Notebook](https://modelscope.cn/my/mynotebook/preset).

```shell
# Step 1: My Notebook -> PAI-DSW -> GPU environment

# Step 2: Download the demo file (https://github.com/modelscope/modelscope-agent/blob/master/demo/demo_qwen_agent.ipynb) and upload it to the GPU instance.

# Step 3: Execute the demo notebook cells in order.
```


## Quickstart

The agent incorporates an LLM along with task-specific tools, and uses the LLM to determine which tool or tools to invoke in order to complete the user's tasks.

To start, all you need to do is initialize a `RolePlay` object with the corresponding tasks.

- This sample code uses the qwen-max model, a drawing tool, and a weather forecast tool.
     - Using the qwen-max model requires replacing YOUR_DASHSCOPE_API_KEY in the example with your API key for the code to run. YOUR_DASHSCOPE_API_KEY can be obtained [here](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key). The drawing tool also calls the DashScope API (wanx), so no additional configuration is required.
     - When using the weather forecast tool, replace YOUR_AMAP_TOKEN in the example with your AMAP weather API key so the code can run. YOUR_AMAP_TOKEN is available [here](https://lbs.amap.com/api/javascript-api-v2/guide/services/weather).


```Python
# Set the environment variables; this step can be skipped if the API keys
# are already configured in your runtime environment.
import os
os.environ['DASHSCOPE_API_KEY'] = YOUR_DASHSCOPE_API_KEY
os.environ['AMAP_TOKEN'] = YOUR_AMAP_TOKEN

# Use RolePlay to configure the agent.
from modelscope_agent.agents.role_play import RolePlay  # NOQA

# Role instruction (in Chinese): "You play a weather forecast assistant; you need to
# query the weather of the given region and call the drawing tool provided to you to
# draw a picture of the city."
role_template = '你扮演一个天气预报助手，你需要查询相应地区的天气，并调用给你的画图工具绘制一张城市的图。'

llm_config = {'model': 'qwen-max', 'model_server': 'dashscope'}

# Input tool names.
function_list = ['amap_weather', 'image_gen']

bot = RolePlay(
    function_list=function_list, llm=llm_config, instruction=role_template)

# "How is the weather in Chaoyang District?"
response = bot.run('朝阳区天气怎样？')

# `run` returns a stream; accumulate the chunks into the full response text.
text = ''
for chunk in response:
    text += chunk
```
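
Since `run` streams its output, a small variation on the loop above prints the response incrementally as it arrives:

```python
for chunk in bot.run('朝阳区天气怎样？'):  # "How is the weather in Chaoyang District?"
    print(chunk, end='', flush=True)  # print each streamed chunk without a newline
```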

Result
- Terminal output:
```shell
# Output of the first LLM call
Action: amap_weather
Action Input: {"location": "朝阳区"}

# Output of the second LLM call
# (Translation: "Currently, the weather in Chaoyang District is overcast, with a temperature of 1°C.")
目前，朝阳区的天气状况为阴天，气温为1度。

Action: image_gen
Action Input: {"text": "朝阳区城市风光", "resolution": "1024*1024"}

# Output of the third LLM call
# (Translation: "Currently, the weather in Chaoyang District is overcast, with a temperature of 1°C.
# I have also generated a cityscape image of Chaoyang District for you, shown below:")
目前，朝阳区的天气状况为阴天，气温为1度。同时，我已为你生成了一张朝阳区的城市风光图，如下所示：

![](https://dashscope-result-sh.oss-cn-shanghai.aliyuncs.com/1d/45/20240204/3ab595ad/96d55ca6-6550-4514-9013-afe0f917c7ac-1.jpg?Expires=1707123521&OSSAccessKeyId=LTAI5tQZd8AEcZX6KZV4G8qL&Signature=RsJRt7zsv2y4kg7D9QtQHuVkXZY%3D)
```

## Modules
### Agent

An `Agent` object consists of the following components:

- `LLM`: A large language model that is responsible for processing the inputs and deciding which tools to call.
- `function_list`: A list of the tools available to the agent.

Currently, the configuration of `Agent` may contain the following arguments (see the example below):
- `llm`: the LLM config of this agent
    - When a Dict: the LLM config, in the form {'model': '', 'api_key': '', 'model_server': ''}
    - When a BaseChatModel: an LLM instance, which may be passed in by another agent
- `function_list`: a list of tools
    - When str: a tool name
    - When Dict: a tool config
- `storage_path`: unless specified otherwise, all memory data is stored here as KV pairs
- `instruction`: the system instruction of this agent
- `name`: the name of the agent
- `description`: the description of the agent, used for multi-agent setups
- `kwargs`: other optional parameters
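
As a minimal sketch, these arguments map onto the constructor like this (using the `RolePlay` subclass from the quickstart; the `name` and `description` values are illustrative):

```python
from modelscope_agent.agents.role_play import RolePlay

bot = RolePlay(
    llm={'model': 'qwen-max', 'model_server': 'dashscope'},  # dict-style llm config
    function_list=['amap_weather'],                          # tools referenced by name
    instruction='You are a weather forecast assistant.',     # system instruction
    name='weather_bot',                                      # agent name
    description='Looks up and reports the weather.',         # used in multi-agent setups
)
```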

`Agent`, as a base class, cannot be directly initialized and called. Subclasses must inherit it and implement the `_run` function, which mainly consists of three parts: generating the messages/prompt, calling the LLM(s), and calling tools based on the LLM results. We provide an implementation of these components in `RolePlay`, and you can also customize the components to your own requirements.

```python
from modelscope_agent import Agent


class YourCustomAgent(Agent):

    def _run(self, user_request, **kwargs):
        # Customize your workflow here: build the prompt, call the LLM,
        # and invoke tools based on the LLM's response.
        ...
```
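
For illustration only, here is a sketch of the three parts `_run` is expected to cover. The helpers `_build_messages` and `_select_tool` are hypothetical names, not framework APIs, and the real `RolePlay` implementation differs:

```python
from modelscope_agent import Agent


class SketchAgent(Agent):
    """Illustrative sketch; `_build_messages` and `_select_tool` are hypothetical."""

    def _run(self, user_request, **kwargs):
        # 1. Generate the messages/prompt from the user request.
        messages = self._build_messages(user_request)  # hypothetical helper

        # 2. Call the LLM (`self.llm` is the chat model built from the `llm` config).
        llm_result = self.llm.chat(messages=messages)

        # 3. Call a tool based on the LLM result, if one was requested.
        tool_name, tool_args = self._select_tool(llm_result)  # hypothetical helper
        if tool_name:
            yield self.function_map[tool_name].call(tool_args)
        else:
            yield llm_result
```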


### LLM
The LLM is the core module of an agent; it determines the quality of the agent's interaction results.

Currently, the `llm` configuration may contain the following arguments:
- `model`: the specific model name, which is passed directly to the model service provider.
- `model_server`: the provider of the model service.

`BaseChatModel`, the base class for LLMs, cannot be directly initialized and called. Subclasses must inherit it and implement the functions `_chat_stream` and `_chat_no_stream`, which correspond to streaming and non-streaming output respectively.
Optionally, implement `chat_with_functions` and `chat_with_raw_prompt` for function calling and text completion.

Currently we provide implementations for three model service providers: dashscope (for the Qwen series of models), zhipu (for the GLM series of models), and openai (for all models exposing an OpenAI-compatible API). You can directly use models supported by these providers, or you can customize your own LLM.
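
A minimal sketch of a custom LLM, assuming only the subclassing contract described above and that `BaseChatModel` is importable from `modelscope_agent.llm.base`; the echo logic is a placeholder, not a real model backend:

```python
from modelscope_agent.llm.base import BaseChatModel


class EchoLLM(BaseChatModel):
    """Placeholder backend that echoes the last user message."""

    def _chat_no_stream(self, messages, **kwargs):
        # Non-streaming: return the whole response at once.
        return 'echo: ' + messages[-1]['content']

    def _chat_stream(self, messages, **kwargs):
        # Streaming: yield the response chunk by chunk.
        for token in ('echo: ' + messages[-1]['content']).split():
            yield token + ' '
```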

For more information, please refer to `docs/modules/llm.md`.

### Tool

We provide several multi-domain tools that can be configured and used in the agent.

You can also create custom tools by inheriting the base tool and setting the tool's name, description, and parameters according to a predefined pattern; implement `call()` as your needs require (a sketch of the pattern follows below).
An example of a custom tool is provided in [demo_register_new_tool](/demo/demo_register_new_tool.ipynb).
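
A minimal sketch of that pattern, assuming the `BaseTool`/`register_tool` interface from `modelscope_agent.tools.base` as used in the demo notebook; the tool itself is a made-up echo example:

```python
from modelscope_agent.tools.base import BaseTool, register_tool


@register_tool('my_echo')  # register the tool under the name 'my_echo'
class MyEchoTool(BaseTool):
    name = 'my_echo'
    description = 'Echoes the given text back to the caller.'
    parameters: list = [{
        'name': 'text',
        'description': 'The text to echo',
        'required': True,
        'type': 'string'
    }]

    def call(self, params: str, **kwargs) -> str:
        params = self._verify_args(params)  # parse and validate the JSON arguments
        return params['text']
```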

You can pass the tool name or configuration you want to use to the agent.
```python
# by tool name
function_list = ['amap_weather', 'image_gen']
bot = RolePlay(function_list=function_list, ...)

# by tool configuration
from langchain.tools import ShellTool
function_list = [{'terminal': ShellTool()}]
bot = RolePlay(function_list=function_list, ...)

# by mixture
function_list = ['amap_weather', {'terminal': ShellTool()}]
bot = RolePlay(function_list=function_list, ...)
```

#### Built-in tools
- `image_gen`: [Wanx Image Generation](https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-wanxiang). [DASHSCOPE_API_KEY](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key) needs to be configured in the environment variable.
- `code_interpreter`: [Code Interpreter](https://jupyter-client.readthedocs.io/en/5.2.2/api/client.html)
- `web_browser`: [Web Browsing](https://python.langchain.com/docs/use_cases/web_scraping)
- `amap_weather`: [AMAP Weather](https://lbs.amap.com/api/javascript-api-v2/guide/services/weather). AMAP_TOKEN needs to be configured in the environment variable.
- `wordart_texture_generation`: [Word art texture generation](https://help.aliyun.com/zh/dashscope/developer-reference/wordart). [DASHSCOPE_API_KEY](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key) needs to be configured in the environment variable.
- `web_search`: [Web Searching](https://learn.microsoft.com/en-us/bing/search-apis/bing-web-search/overview).
- `qwen_vl`: [Qwen-VL image recognition](https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-plus-api). [DASHSCOPE_API_KEY](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key) needs to be configured in the environment variable.
- `style_repaint`: [Character style redrawn](https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-wanxiang-style-repaint). [DASHSCOPE_API_KEY](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key) needs to be configured in the environment variable.
- `image_enhancement`: [Chasing shadow-magnifying glass](https://github.com/dreamoving/Phantom). [DASHSCOPE_API_KEY](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key) needs to be configured in the environment variable.
- `text-address`: [Geocoding](https://www.modelscope.cn/models/iic/mgeo_geographic_elements_tagging_chinese_base/summary). [MODELSCOPE_API_TOKEN](https://www.modelscope.cn/my/myaccesstoken) needs to be configured in the environment variable.
- `speech-generation`: [Speech generation](https://www.modelscope.cn/models/iic/speech_sambert-hifigan_tts_zh-cn_16k/summary). [MODELSCOPE_API_TOKEN](https://www.modelscope.cn/my/myaccesstoken) needs to be configured in the environment variable.
- `video-generation`: [Video generation](https://www.modelscope.cn/models/iic/text-to-video-synthesis/summary). [MODELSCOPE_API_TOKEN](https://www.modelscope.cn/my/myaccesstoken) needs to be configured in the environment variable.
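
Most of these tools only require the corresponding key to be present in the environment. For example, in a shell (with placeholder values):

```shell
export DASHSCOPE_API_KEY=YOUR_DASHSCOPE_API_KEY        # image_gen, wordart_texture_generation, qwen_vl, style_repaint, image_enhancement
export AMAP_TOKEN=YOUR_AMAP_TOKEN                      # amap_weather
export MODELSCOPE_API_TOKEN=YOUR_MODELSCOPE_API_TOKEN  # text-address, speech-generation, video-generation
```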

### Multi-agent

Please refer to the multi-agent [readme](modelscope_agent/multi_agents_utils/README.md).


## Related Tutorials

If you would like to learn more about the practical details of Agent, you can refer to our articles and video tutorials:

* [Article Tutorial](https://mp.weixin.qq.com/s/L3GiV2QHeybhVZSg_g_JRw)
* [Video Tutorial](https://b23.tv/AGIzmHM)

## Share Your Agent

We appreciate your enthusiasm for participating in our open-source ModelScope-Agent project. If you encounter any issues, please feel free to report them to us. If you have built a new Agent demo and are ready to share your work with us, please create a pull request at any time! If you need any further assistance, please contact us via email at [contact@modelscope.cn](mailto:contact@modelscope.cn) or in the [communication group](https://modelscope.cn/docs/%E8%81%94%E7%B3%BB%E6%88%91%E4%BB%AC)!

### Facechain Agent
Facechain is an open-source project for generating personalized portraits in various styles using facial images uploaded by users. By integrating the capabilities of Facechain into the modelscope-agent framework, we have greatly simplified the usage process. The generation of personalized portraits can now be done through dialogue with the Facechain Agent.

FaceChainAgent Studio Application Link: https://modelscope.cn/studios/CVstudio/facechain_agent_studio/summary

You can run it directly in a notebook/Colab/local environment: https://www.modelscope.cn/my/mynotebook

```shell
!git clone -b feat/facechain_agent https://github.com/modelscope/modelscope-agent.git

!cd modelscope-agent && pip install -r requirements.txt
!cd modelscope-agent/demo/facechain_agent/demo/facechain_agent && pip install -r requirements.txt
!pip install http://dashscope-cn-beijing.oss-cn-beijing.aliyuncs.com/zhicheng/modelscope_agent-0.1.0-py3-none-any.whl
!cd modelscope-agent/demo/facechain_agent/demo/facechain_agent && PYTHONPATH=/mnt/workspace/modelscope-agent/demo/facechain_agent python app_v1.0.py
```

## License

This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=modelscope/modelscope-agent&type=Date)](https://star-history.com/#modelscope/modelscope-agent&Date)
