| Field | Value |
| --- | --- |
| Name | gemini-agents-toolkit |
| Version | 3.7.1 |
| Summary | Toolkit For Creating Gemini Based Agents |
| Upload time | 2024-12-25 19:00:18 |
| Author | Viacheslav Kovalevskyi |
| License | MIT |
| Home page | None |
| Requires Python | None |
| Requirements | No requirements were recorded. |
![img.png](docs/main_readme/header_ducks.png)
# gemini-agents-toolkit
This project is an SDK for implementing Agent-Driven Development (ADD) applications.
ADD aims to offload routine coding tasks to LLM agents so you can concentrate on the algorithmic, fun parts of implementing business logic.
<h4 align="center">
<a href="https://opensource.org/licenses/mit">
<img src="https://img.shields.io/badge/mit-blue.svg?style=flat-square&label=license" alt="license" style="height: 20px;">
</a>
<a href="https://discord.gg/qPWcJhgAx4">
<img src="https://img.shields.io/badge/discord-7289da.svg?style=flat-square&logo=discord" alt="discord" style="height: 20px;">
</a>
<a href="https://www.youtube.com/watch?v=Y4QW_ILmcn8">
<img src="https://img.shields.io/badge/youtube-d95652.svg?style=flat-square&logo=youtube" alt="youtube" style="height: 20px;">
</a>
</h4>
⭐ Add a star for a duck!
---
<p align="center">
<a href="#how-it-works">How It Works</a> •
<a href="#requirements">Requirements</a> •
<a href="#getting-started">Getting Started</a> •
<a href="#run-examples">Run Examples</a> •
<a href="#how-to-contribute">How To Contribute</a>
</p>
---
## 🚀How It Works
`gemini-agents-toolkit` is an SDK that creates LLM agents and enables their integration into pipelines for modifying generated responses.
See the picture describing the process:
![img.png](docs/main_readme/big_picture.jpg)
The roles of every component are as follows:
1. **Application:** Define custom functions to guide LLM executions, launch agents for task execution, and combine tasks into pipelines.
2. `gemini-agents-toolkit`: A tool for creating agents, executing pipelines and interacting with Gemini.
3. **Vertex AI API:** A Google API for interacting with Gemini models.
4. **Gemini:** LLM models that generate text, code, or instructions to guide agent execution.
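As a rough illustration of step 1 (this is not the toolkit's actual internals): agent SDKs typically turn each registered function's name, signature, and docstring into a tool declaration that tells the model when and how to call the function. The `describe_tool` helper below is invented for this sketch:

```python
import inspect


def say_to_duck(say: str):
    """say something to a duck"""
    return f"duck answer is: duck duck {say} duck duck duck"


def describe_tool(fn):
    # Build a minimal tool declaration from the function's metadata,
    # similar in spirit to what agent SDKs send to the model.
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": [p.name for p in sig.parameters.values()],
    }


print(describe_tool(say_to_duck))
# → {'name': 'say_to_duck', 'description': 'say something to a duck', 'parameters': ['say']}
```

The model never sees your Python source, only declarations like this; when it decides a tool call is needed, the SDK invokes the real function and feeds the return value back.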
---
## 📝Requirements
**❗️ The project uses the Vertex AI API to access Gemini models, which is a paid Google API ❗**
You can find pricing [here](https://cloud.google.com/vertex-ai/generative-ai/pricing).
Agents only request text generation, which is billed as Text Input and Text Output. Running a pipeline will issue several requests, depending on the pipeline's complexity.
**Supported Python versions:** Python 3.8 to 3.12 (gcloud requirement)
---
## 🤖Getting Started
### Big Picture
This project uses Gemini via Google Cloud API. To use the project you must first set up the following:
- Google Cloud account with Billing
- Allow requesting LLM models via APIs ([pricing](https://cloud.google.com/vertex-ai/generative-ai/pricing))
- Set up gcloud tool in your environment
- Set up the project
### Google Cloud account with Billing
1. Go to [Google AI Studio](https://aistudio.google.com/) and select "Get API key" in the menu on the left. Click the "Create API key" button and select the Google Cloud project to associate the API key with.
2. Click the "Set up Billing" suggestion near your new API key and initialize your Google Billing account.
3. [Optional] You can add a Budget for your account, which will email you when your spending reaches configured thresholds. Choose "Budgets & alerts" in the menu on the left of the Billing console and follow the [instructions](https://cloud.google.com/billing/docs/how-to/budgets#steps-to-create-budget).
4. After this step, you should be able to test your Google Cloud project setup by running the `curl` command presented under the API keys table.
### Allow requesting LLM models via APIs
1. The project uses the [Vertex AI API](https://cloud.google.com/vertex-ai/) to request Gemini models, so the Vertex AI API must be enabled in your Google Cloud account. You can enable it from the [Google Cloud Vertex AI API product page](https://console.cloud.google.com/apis/library/aiplatform.googleapis.com).
2. [Optional] With these [instructions](https://cloud.google.com/apis/docs/capping-api-usage) you can limit or increase your API request quota.
### Set up gcloud tool in your environment
Follow the Google instructions to [install](https://cloud.google.com/sdk/docs/install) gcloud and to [initialize](https://cloud.google.com/sdk/docs/initializing) it.
### Set up the project
1. Clone `gemini-agents-toolkit` repository.
2. For the development process, you will probably need other repositories from the project. You can find the other packages [here](https://github.com/GeminiAgentsToolkit). Currently, the project uses this list of other repositories:
| Module | Description | Link |
|------------|---------------------------------------| --- |
| json-agent | Agent to convert data formats to JSON | [link](https://github.com/GeminiAgentsToolkit/json-agent) |
3. `gemini-agents-toolkit` uses several environment variables that you need to set up:
| Env Variable | Description | Which Value To Set | Command To Set Up |
| --- |-----------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|
| GOOGLE_PROJECT | Project id from Google AI Studio | Open [Google AI studio](https://aistudio.google.com/app/apikey) and choose from the table of projects | `export GOOGLE_PROJECT=my-amazing-project` |
| GOOGLE_API_KEY | API key from Google AI Studio used to request your project | Find the API key in the row with your project | `export GOOGLE_API_KEY=my-api-key` |
| GOOGLE_REGION | The region whose API to request | You can omit this variable to get the default value `us-west1`, or find all available regions in the Google Cloud [docs](https://cloud.google.com/compute/docs/regions-zones/) | `export GOOGLE_REGION=us-west1` |
4. Now, you need to register custom Python modules to use them in examples:
```bash
export PYTHONPATH=/my/custom/module/path:$PYTHONPATH
```
Modules you need to register:
| Module | Description | Link |
|------------|---------------------------------------| --- |
| config | Using env variables and constants | [link](https://github.com/GeminiAgentsToolkit/gemini-agents-toolkit/tree/main/config) |
| json-agent | Agent to convert data formats to JSON | [link](https://github.com/GeminiAgentsToolkit/json-agent) |
❗ To set these variables permanently, register them in `.bashrc` (`.zshrc`, etc., depending on your operating system and shell).
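The variables from the table above can be set and then sanity-checked from a child process like this (the values are placeholders, substitute your own project id and API key):

```shell
export GOOGLE_PROJECT=my-amazing-project
export GOOGLE_API_KEY=my-api-key
export GOOGLE_REGION=us-west1

# Verify that the variables are visible to child processes such as Python:
python3 -c 'import os; print(os.environ["GOOGLE_PROJECT"], os.environ["GOOGLE_REGION"])'
```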
---
## 🎉Run Examples
Now it's time for fun!
Run `examples/simple_example.py` with
```bash
python examples/simple_example.py
```
In this example, you create a custom function and give Gemini the ability to use it:
<details>
<summary>See code</summary>
```python
import vertexai
from config import (PROJECT_ID, REGION, SIMPLE_MODEL)
from gemini_agents_toolkit import agent


def say_to_duck(say: str):
    """say something to a duck"""
    return f"duck answer is: duck duck {say} duck duck duck"


vertexai.init(project=PROJECT_ID, location=REGION)

all_functions = [say_to_duck]
duck_comms_agent = agent.create_agent_from_functions_list(functions=all_functions,
                                                          model_name=SIMPLE_MODEL)

print(duck_comms_agent.send_message("say to the duck message: I am hungry"))
```
</details>
---
You can create several agents, which will delegate function execution to each other:
```bash
python examples/multi_agent_example.py
```
<details>
<summary>See code</summary>
```python
import datetime
import vertexai
from config import (PROJECT_ID, REGION, SIMPLE_MODEL, DEFAULT_MODEL)
from gemini_agents_toolkit import agent
from gemini_agents_toolkit.history_utils import summarize

vertexai.init(project=PROJECT_ID, location=REGION)


def generate_duck_comms_agent():
    """create an agent to say to a duck"""

    def say_to_duck(say: str):
        """say something to a duck"""
        return f"duck answer is: duck duck {say} duck duck duck"

    return agent.create_agent_from_functions_list(
        functions=[say_to_duck],
        delegation_function_prompt=("""Agent can communicate to ducks and can say something to them.
        And provides the answer from the duck."""),
        model_name=DEFAULT_MODEL)


def generate_time_checker_agent():
    """create an agent to get the time"""

    def get_local_time():
        """get the current local time"""
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    return agent.create_agent_from_functions_list(
        functions=[get_local_time],
        delegation_function_prompt="Agent can provide the current local time.",
        model_name=SIMPLE_MODEL)


duck_comms_agent = generate_duck_comms_agent()
time_checker_agent = generate_time_checker_agent()

main_agent = agent.create_agent_from_functions_list(
    delegates=[time_checker_agent, duck_comms_agent],
    model_name=SIMPLE_MODEL)

result_say_operation, history_say_operation = main_agent.send_message("say to the duck message: I am hungry")
result_time_operation, history_time_operation = main_agent.send_message("can you tell me what time it is?")

print(result_say_operation)
print(result_time_operation)
print(summarize(agent=main_agent, history=history_say_operation + history_time_operation))
```
</details>
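To build intuition for what delegation does, here is a toy, offline sketch (this is not the toolkit's real routing logic; the keyword matching stands in for the LLM's decision, which is guided by each delegate's `delegation_function_prompt`):

```python
def duck_delegate(msg):
    """Pretend duck-communication agent."""
    return f"duck duck {msg} duck"


def time_delegate(_msg):
    """Pretend time-checker agent (fixed answer for the sketch)."""
    return "12:00:00"


def main_agent(message):
    # Route on simple keywords instead of an LLM decision.
    if "duck" in message:
        return duck_delegate(message)
    if "time" in message:
        return time_delegate(message)
    return "no delegate matched"


print(main_agent("say to the duck: I am hungry"))
print(main_agent("can you tell me what time it is?"))
# → duck duck say to the duck: I am hungry duck
# → 12:00:00
```

In the real toolkit, the main agent's model reads the delegates' prompts and decides which agent to hand the message to; the mechanics of forwarding and returning an answer are the same shape as above.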
---
You can execute your code periodically:
```bash
python examples/simple_scheduler_example.py
```
<details>
<summary>See code</summary>
```python
import time
import vertexai
from config import (PROJECT_ID, REGION, SIMPLE_MODEL)
from gemini_agents_toolkit import agent


def say_to_duck(say: str):
    """say something to a duck"""
    return f"duck answer is: duck duck {say} duck duck duck"


def print_msg_from_agent(msg: str):
    """print a message to the console"""
    print(msg)


vertexai.init(project=PROJECT_ID, location=REGION)

all_functions = [say_to_duck]
duck_comms_agent = agent.create_agent_from_functions_list(functions=all_functions,
                                                          model_name=SIMPLE_MODEL,
                                                          add_scheduling_functions=True,
                                                          on_message=print_msg_from_agent)

# no need to print the result directly since we passed on_message to the agent
duck_comms_agent.send_message("can you be saying, each minute, to the duck that I am hungry")

# wait 3 min to see the results
time.sleep(180)
```
</details>
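Conceptually, `add_scheduling_functions` gives the agent tools for registering periodic jobs. A rough, self-contained sketch of the idea using a plain timer thread (this is not the toolkit's implementation; `schedule_every` and the `messages` list are invented for illustration, and the intervals are shortened so the sketch finishes quickly):

```python
import threading
import time


def say_to_duck(say):
    """say something to a duck"""
    return f"duck answer is: duck duck {say} duck duck duck"


messages = []


def schedule_every(interval_sec, fn, arg, repeats):
    """Run fn(arg) `repeats` times, once per interval, on timer threads."""
    def tick(remaining):
        messages.append(fn(arg))
        if remaining > 1:
            threading.Timer(interval_sec, tick, args=(remaining - 1,)).start()
    tick(repeats)


schedule_every(0.05, say_to_duck, "I am hungry", repeats=3)
time.sleep(0.3)  # give the timers time to fire
print(len(messages), "messages delivered")
# → 3 messages delivered
```

The toolkit's agent does the equivalent in response to a natural-language request ("each minute, ..."), delivering each result through the `on_message` callback instead of a list.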
Find more advanced examples in the `examples` directory.
---
## 💡How To Contribute
### If you want to be a code contributor
- Just pick a task from the Issues in this repository, assign it to yourself, and raise a pull request with the proposed changes.
- If you need help, join the [Discord](https://discord.gg/qPWcJhgAx4) and ask. The contributor team will be happy to see you!
### If you want to contribute ideas or participate in discussions
Feel free to join our [Discord](https://discord.gg/qPWcJhgAx4), where you can discuss the project with the contributors and help shape the way the project evolves.
---
[Back to top](#gemini-agents-toolkit)