qwen-agent


Name: qwen-agent
Version: 0.0.3
Home page: https://github.com/QwenLM/Qwen-Agent
Summary: Qwen-Agent: Enhancing LLMs with Agent Workflows, RAG, Function Calling, and Code Interpreter.
Upload time: 2024-04-25 06:01:35
Author: Qwen Team
Keywords: LLM, Agent, Function Calling, RAG, Code Interpreter
            [中文](https://github.com/QwenLM/Qwen-Agent/blob/main/README_CN.md) | English

<p align="center">
    <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/logo-qwen-agent.png" width="400"/>
</p>
<br>

Qwen-Agent is a framework for developing LLM applications based on the instruction following, tool usage, planning, and
memory capabilities of Qwen.
It also comes with example applications such as Browser Assistant, Code Interpreter, and Custom Assistant.

# Getting Started

## Installation

- Install the stable version from PyPI:
```bash
pip install -U qwen-agent
```

- Alternatively, you can install the latest development version from the source:
```bash
git clone https://github.com/QwenLM/Qwen-Agent.git
cd Qwen-Agent
pip install -e ./
```

## Preparation: Model Service

You can either use the model service provided by Alibaba
Cloud's [DashScope](https://help.aliyun.com/zh/dashscope/developer-reference/quick-start), or deploy and use your own
model service using the open-source Qwen models.

- If you choose to use the model service offered by DashScope, please ensure that you set the environment
variable `DASHSCOPE_API_KEY` to your unique DashScope API key.

- Alternatively, if you prefer to deploy and use your own model service, please follow the instructions provided in the README of Qwen1.5 for deploying an OpenAI-compatible API service.
Specifically, consult the [vLLM](https://github.com/QwenLM/Qwen1.5?tab=readme-ov-file#vllm) section for high-throughput GPU deployment or the [Ollama](https://github.com/QwenLM/Qwen1.5?tab=readme-ov-file#ollama) section for local CPU (+GPU) deployment.
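
The two options above can be sketched as shell commands. This is a minimal sketch: the API key value is a placeholder, and the vLLM invocation follows the OpenAI-compatible server entry point described in the Qwen1.5 README; adjust the model name and flags to your hardware.

```shell
# Option 1: DashScope — export your API key (placeholder value):
export DASHSCOPE_API_KEY="YOUR_DASHSCOPE_API_KEY"

# Option 2: self-hosted OpenAI-compatible service via vLLM,
# serving a Qwen1.5 chat model on http://localhost:8000/v1:
python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen1.5-7B-Chat \
    --served-model-name Qwen1.5-7B-Chat
```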

## Developing Your Own Agent

Qwen-Agent offers atomic components, such as LLMs (which inherit from `class BaseChatModel` and come with [function calling](https://github.com/QwenLM/Qwen-Agent/blob/main/examples/function_calling.py)) and Tools (which inherit
from `class BaseTool`), along with high-level components like Agents (derived from `class Agent`).

The following example illustrates the process of creating an agent capable of reading PDF files and utilizing tools, as
well as incorporating a custom tool:

```py
import pprint
import urllib.parse
import json5
from qwen_agent.agents import Assistant
from qwen_agent.tools.base import BaseTool, register_tool


# Step 1 (Optional): Add a custom tool named `my_image_gen`.
@register_tool('my_image_gen')
class MyImageGen(BaseTool):
    # The `description` tells the agent the functionality of this tool.
    description = 'AI painting (image generation) service: input a text description and it returns the URL of an image drawn from that description.'
    # The `parameters` tell the agent what input parameters the tool has.
    parameters = [{
        'name': 'prompt',
        'type': 'string',
        'description': 'Detailed description of the desired image content, in English',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        # `params` are the arguments generated by the LLM agent.
        prompt = json5.loads(params)['prompt']
        prompt = urllib.parse.quote(prompt)
        return json5.dumps(
            {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
            ensure_ascii=False)


# Step 2: Configure the LLM you are using.
llm_cfg = {
    # Use the model service provided by DashScope:
    'model': 'qwen-max',
    'model_server': 'dashscope',
    # 'api_key': 'YOUR_DASHSCOPE_API_KEY',
    # It will use the `DASHSCOPE_API_KEY` environment variable if 'api_key' is not set here.

    # Use a model service compatible with the OpenAI API, such as vLLM or Ollama:
    # 'model': 'Qwen1.5-7B-Chat',
    # 'model_server': 'http://localhost:8000/v1',  # base_url, also known as api_base
    # 'api_key': 'EMPTY',

    # (Optional) LLM hyperparameters for generation:
    'generate_cfg': {
        'top_p': 0.8
    }
}

# Step 3: Create an agent. Here we use the `Assistant` agent as an example, which is capable of using tools and reading files.
system_instruction = '''You are a helpful assistant.
After receiving the user's request, you should:
- first draw an image and obtain the image url,
- then run code `requests.get(image_url)` to download the image,
- and finally select an image operation from the given document to process the image.
Please show the image using `plt.show()`.'''
tools = ['my_image_gen', 'code_interpreter']  # `code_interpreter` is a built-in tool for executing code.
files = ['./examples/resource/doc.pdf']  # Give the bot a PDF file to read.
bot = Assistant(llm=llm_cfg,
                system_message=system_instruction,
                function_list=tools,
                files=files)

# Step 4: Run the agent as a chatbot.
messages = []  # This stores the chat history.
while True:
    # For example, enter the query "draw a dog and rotate it 90 degrees".
    query = input('user query: ')
    # Append the user query to the chat history.
    messages.append({'role': 'user', 'content': query})
    response = []
    for response in bot.run(messages=messages):
        # Streaming output.
        print('bot response:')
        pprint.pprint(response, indent=2)
    # Append the bot responses to the chat history.
    messages.extend(response)
```
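The heart of `MyImageGen.call` is plain string handling: parse the JSON arguments the model produced, percent-encode the prompt, and return a JSON string. The pattern can be exercised on its own, without qwen-agent; this sketch substitutes the stdlib `json` module for `json5` (which accepts a superset of JSON) and a standalone function for the tool class:

```python
import json
import urllib.parse

def image_gen_call(params: str) -> str:
    # `params` stands in for the JSON argument string an LLM would generate.
    prompt = json.loads(params)['prompt']
    # Percent-encode the prompt so it is safe inside a URL path.
    prompt = urllib.parse.quote(prompt)
    return json.dumps(
        {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
        ensure_ascii=False)

result = json.loads(image_gen_call('{"prompt": "a red fox"}'))
print(result['image_url'])  # → https://image.pollinations.ai/prompt/a%20red%20fox
```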

In addition to using built-in agent implementations such as `class Assistant`, you can also develop your own agent implementation by inheriting from `class Agent`.
Please refer to the [examples](https://github.com/QwenLM/Qwen-Agent/blob/main/examples) directory for more usage examples.
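The Step 4 loop relies on `bot.run` yielding progressively longer snapshots of the response list, so `response` holds the latest complete snapshot when the loop ends. That bookkeeping can be checked in isolation with a hypothetical stub generator standing in for the agent:

```python
def fake_run(messages):
    # Stub for bot.run: yields the full response list at each streaming step,
    # each snapshot longer than the last, mimicking streaming output.
    for chunk in ('The answer', 'The answer is', 'The answer is 42.'):
        yield [{'role': 'assistant', 'content': chunk}]

messages = [{'role': 'user', 'content': 'What is the answer?'}]
response = []
for response in fake_run(messages):
    pass  # after the loop, `response` is the final snapshot
messages.extend(response)
print(messages[-1]['content'])  # → The answer is 42.
```

Because each yielded snapshot is complete, appending only the final value of `response` keeps the chat history free of partial messages.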

# Application: BrowserQwen

BrowserQwen is a browser assistant built upon Qwen-Agent. Please refer to its [documentation](https://github.com/QwenLM/Qwen-Agent/blob/main/browser_qwen.md) for details.

# Disclaimer

The code interpreter is not sandboxed, and it executes code in your own environment. Please do not ask Qwen to perform dangerous tasks, and do not directly use the code interpreter for production purposes.

            
