| Field | Value |
| --- | --- |
| Name | iointel |
| Version | 1.5.0 |
| Summary | A framework to create agents, tasks, and workflows. |
| Upload time | 2025-07-08 16:51:29 |
| Requires Python | >=3.10 |
| Keywords | ai, agents, llm, framework, tools |
# IO Intelligence Agent Framework
> **IMPORTANT**:
> **Beta Notice:** This project is in rapid development and may not be stable for production use.
This repository provides a flexible system for building and orchestrating **agents** and **workflows**. It offers two modes:
- **Client Mode**: Where tasks call out to a remote API client (e.g., your `client.py` functions).
- **Local Mode**: Where tasks run directly in the local environment, utilizing `run_agents(...)` and local logic.
It also supports loading **YAML or JSON** workflows to define multi-step tasks.
---
## Table of Contents<a id="table-of-contents"></a>
1. [Overview](#overview)
2. [Installation](#installation)
3. [Concepts](#concepts)
- [Agents](#agents)
- [Tasks](#tasks)
- [Client Mode vs Local Mode](#client-mode-vs-local-mode)
- [Workflows (YAML/JSON)](#workflows-yamljson)
4. [Usage](#usage)
- [Creating Agents](#creating-agents)
- [Creating an Agent with custom Persona](#creating-an-agent-with-a-persona)
- [Building a Workflow](#building-a-workflow)
- [Running a Local Workflow](#running-a-local-workflow)
- [Running a Remote Workflow (Client Mode)](#running-a-remote-workflow-client-mode)
- [Uploading YAML/JSON Workflows](#uploading-yamljson-workflows)
5. [Examples](#examples)
- [Simple Summarize Task](#simple-summarize-task)
- [Chainable Workflows](#chainable-workflows)
- [Custom Workflow](#custom-workflow)
- [Loading From a YAML File](#loading-from-a-yaml-file)
6. [API Endpoints](#api-endpoints)
7. [License](#license)
---
## Overview<a id="overview"></a>
The framework distills agent systems into three distinct pieces:
- **Agents**
- **Tasks**
- **Workflows**
The **Agent** can be configured with:
- **Model Provider** (e.g., OpenAI, Llama, etc.)
- **Tools** (e.g., specialized functions)
Users can define tasks (like `sentiment`, `translate_text`, etc.) in a **local** or **client** mode. They can also upload workflows (in YAML or JSON) to orchestrate multiple steps in sequence.
---
## Installation<a id="installation"></a>
1. **Install the latest release**:
```bash
pip install --upgrade iointel
```
2. **Set Required Environment Variable**:
- `OPENAI_API_KEY` or `IO_API_KEY` for the default OpenAI-based `ChatOpenAI`.
3. **Optional Environment Variables**:
- `AGENT_LOGGING_LEVEL` (optional) to configure logging verbosity: `DEBUG`, `INFO`, etc.
- `OPENAI_API_BASE_URL` or `IO_API_BASE_URL` to point to an OpenAI-compatible API implementation, like `https://api.intelligence.io.solutions/api/v1`
- `OPENAI_API_MODEL` or `IO_API_MODEL` to pick a specific LLM model as the "agent brain", like `meta-llama/Llama-3.3-70B-Instruct`
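For example, the variables above can be set in a shell before launching. The key below is a placeholder; the base URL and model name are taken verbatim from the list above:

```shell
# Placeholder API key -- substitute your own.
export IO_API_KEY="your-api-key"
export IO_API_BASE_URL="https://api.intelligence.io.solutions/api/v1"
export IO_API_MODEL="meta-llama/Llama-3.3-70B-Instruct"
export AGENT_LOGGING_LEVEL="INFO"
```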
---
## Concepts<a id="concepts"></a>
### Agents<a id="agents"></a>
- Agents can use a custom model (e.g., `OpenAIModel`, a Llama-based model, etc.).
- Agents can have tools attached, which are specialized functions accessible during execution.
- Agents can have a custom Persona Profile configured.
### Tasks<a id="tasks"></a>
- A **task** is a single step in a workflow, e.g., `schedule_reminder`, `sentiment`, `translate_text`, etc.
- Tasks are managed by the `Workflow` class in `workflow.py`.
- Tasks can be chained for multi-step logic into a workflow (e.g., `await Workflow(objective="...").translate_text().sentiment().run_tasks()`).
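The chaining style above can be illustrated with a minimal stand-alone analogue. This is a sketch of the pattern only, not iointel's actual implementation: each method queues a step and returns `self`, so calls compose left to right, and `run_tasks()` executes the queue.

```python
import asyncio

class MiniWorkflow:
    """Toy analogue of the chaining API (illustrative, not iointel's code)."""

    def __init__(self, objective: str):
        self.objective = objective
        self.steps = []

    def translate_text(self, target_language: str):
        # Queue the step and return self so further calls can chain.
        self.steps.append(("translate_text", {"target_language": target_language}))
        return self

    def sentiment(self):
        self.steps.append(("sentiment", {}))
        return self

    async def run_tasks(self):
        # A real implementation would dispatch each queued step to an agent;
        # here we just report the execution order.
        return [name for name, _ in self.steps]

order = asyncio.run(
    MiniWorkflow("some text").translate_text("french").sentiment().run_tasks()
)
print(order)  # ['translate_text', 'sentiment']
```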
### Client Mode vs Local Mode<a id="client-mode-vs-local-mode"></a>
- **Local Mode**: The system calls `run_agents(...)` directly in your local environment.
- **Client Mode**: The system calls out to remote endpoints in a separate API.
- In `client_mode=True`, each task (e.g. `sentiment`) triggers a client function (`sentiment_analysis(...)`) instead of local logic.
This allows you to **switch** between running tasks locally or delegating them to a server.
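The switch can be pictured as a single dispatch point. The function names below are illustrative stand-ins, not iointel's API:

```python
def run_task(task_name: str, payload: str, client_mode: bool) -> str:
    """Route one task either to a remote client call or to local logic."""
    if client_mode:
        return call_remote_api(task_name, payload)  # analogous to sentiment_analysis(...)
    return run_locally(task_name, payload)          # analogous to run_agents(...)

# Illustrative stand-ins for the two backends:
def call_remote_api(task_name: str, payload: str) -> str:
    return f"remote:{task_name}"

def run_locally(task_name: str, payload: str) -> str:
    return f"local:{task_name}"

print(run_task("sentiment", "great product!", client_mode=False))  # local:sentiment
print(run_task("sentiment", "great product!", client_mode=True))   # remote:sentiment
```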
### Workflows (YAML/JSON)<a id="workflows-yamljson"></a>
_Note: this part is under active development and might not always function!_
- You can define multi-step workflows in YAML or JSON.
- The endpoint `/run-file` accepts a file (via multipart form data).
- First tries parsing the payload as **JSON**.
- If that fails, it tries parsing the payload as **YAML**.
- The file is validated against a `WorkflowDefinition` Pydantic model.
- Each step has a `type` (e.g., `"sentiment"`, `"custom"`) and optional parameters (like `agents`, `target_language`, etc.).
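The JSON-first, YAML-fallback parsing described above can be sketched as follows. This is a minimal sketch; the real endpoint additionally validates the result against the `WorkflowDefinition` Pydantic model:

```python
import json

def parse_workflow_payload(raw: str) -> dict:
    """Try JSON first; if that fails, fall back to YAML."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        import yaml  # third-party PyYAML; only needed for YAML payloads
        return yaml.safe_load(raw)

payload = parse_workflow_payload('{"name": "demo", "workflow": [{"type": "sentiment"}]}')
print(payload["workflow"][0]["type"])  # sentiment
```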
---
## Usage<a id="usage"></a>
### Creating Agents<a id="creating-agents"></a>
```python
from iointel import Agent

my_agent = Agent(
    name="MyAgent",
    instructions="You are a helpful agent.",
    # one can also pass a custom model using pydantic_ai.models.openai.OpenAIModel
    # or pass args to OpenAIModel() as kwargs to Agent()
)
```
### Creating an Agent with a Persona<a id="creating-an-agent-with-a-persona"></a>
```python
from iointel import PersonaConfig, Agent

my_persona = PersonaConfig(
    name="Elandria the Arcane Scholar",
    age=164,
    role="an ancient elven mage",
    style="formal and slightly archaic",
    domain_knowledge=["arcane magic", "elven history", "ancient runes"],
    quirks="often references centuries-old events casually",
    bio="Once studied at the Grand Academy of Runic Arts",
    lore="Elves in this world can live up to 300 years",
    personality="calm, wise, but sometimes condescending",
    conversation_style="uses 'thee' and 'thou' occasionally",
    description="Tall, silver-haired, wearing intricate robes with arcane symbols",
    emotional_stability=0.85,
    friendliness=0.45,
    creativity=0.68,
    curiosity=0.95,
    formality=0.1,
    empathy=0.57,
    humor=0.99,
)

agent = Agent(
    name="ArcaneScholarAgent",
    instructions="You are an assistant specialized in arcane knowledge.",
    persona=my_persona,
)

print(agent.instructions)
```
### Building a Workflow<a id="building-a-workflow"></a>
In Python code, you can create tasks by instantiating the `Workflow` class and chaining methods:
```python
from iointel import Workflow

tasks = Workflow(objective="This is the text to analyze", client_mode=False)
(
    tasks
    .sentiment(agents=[my_agent])
    .translate_text(target_language="french")  # a second step
)

results = await tasks.run_tasks()
print(results)
```
Because `client_mode=False`, everything runs locally.
### Running a Local Workflow<a id="running-a-local-workflow"></a>
```python
tasks = Workflow(objective="Breaking news: local sports team wins!", client_mode=False)
await tasks.summarize_text(max_words=50).run_tasks()
```
### Running a Remote Workflow (Client Mode)<a id="running-a-remote-workflow-client-mode"></a>
```python
tasks = Workflow(objective="Breaking news: local sports team wins!", client_mode=True)
await tasks.summarize_text(max_words=50).run_tasks()
```
Now, `summarize_text` calls the client function (e.g., `summarize_task(...)`) instead of local logic.
### Uploading YAML/JSON Workflows<a id="uploading-yamljson-workflows"></a>
_Note: this part is under active development and might not always function!_
1. Create a YAML or JSON file specifying the workflow:
```yaml
name: "My YAML Workflow"
text: "Large text to analyze"
workflow:
  - type: "sentiment"
  - type: "summarize_text"
    max_words: 20
  - type: "moderation"
    threshold: 0.7
  - type: "custom"
    name: "special-step"
    objective: "Analyze the text"
    instructions: "Use advanced analysis"
    context:
      extra_info: "some metadata"
```
2. Upload it via the `/run-file` endpoint (multipart file upload). The server parses it as JSON or YAML and runs the tasks sequentially in local mode.
## Examples<a id="examples"></a>
### Simple Summarize Task<a id="simple-summarize-task"></a>
```python
tasks = Workflow("Breaking news: new Python release!", client_mode=False)
await tasks.summarize_text(max_words=30).run_tasks()
```
Returns a summarized result.
### Chainable Workflows<a id="chainable-workflows"></a>
```python
tasks = Workflow("Tech giant acquires startup for $2B", client_mode=False)
(
    tasks
    .translate_text(target_language="spanish")
    .sentiment()
)
results = await tasks.run_tasks()
```
1. Translate to Spanish.
2. Run sentiment analysis.
### Custom Workflow<a id="custom-workflow"></a>
```python
tasks = Workflow("Analyze this special text", client_mode=False)
tasks.custom(
    name="my-unique-step",
    objective="Perform advanced analysis",
    instructions="Focus on entity extraction and sentiment",
    agents=[my_agent],
    **{"extra_context": "some_val"}
)
results = await tasks.run_tasks()
```
A `custom` task can reference a function registered in the `CUSTOM_WORKFLOW_REGISTRY` or fall back to default behavior.
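The registry-plus-fallback behavior can be sketched like this. It is a toy model of the pattern only; the real registry lives inside iointel, and the names below are illustrative:

```python
# Toy registry: maps a custom step name to a handler callable.
CUSTOM_REGISTRY = {}

def register_custom(name):
    """Decorator that records a handler under the given step name."""
    def decorator(fn):
        CUSTOM_REGISTRY[name] = fn
        return fn
    return decorator

@register_custom("my-unique-step")
def my_unique_step(objective: str, instructions: str, **context) -> str:
    return f"custom handler ran: {objective}"

def default_custom(objective: str, instructions: str, **context) -> str:
    return f"default handler ran: {objective}"

def run_custom_step(name: str, objective: str, instructions: str, **context) -> str:
    # Look up the registered handler; fall back to the default if unknown.
    handler = CUSTOM_REGISTRY.get(name, default_custom)
    return handler(objective, instructions, **context)

print(run_custom_step("my-unique-step", "Perform advanced analysis", "..."))
print(run_custom_step("unknown-step", "Perform advanced analysis", "..."))
```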
### Loading From a YAML File<a id="loading-from-a-yaml-file"></a>
_Note: this part is under active development and might not always function!_
```bash
curl -X POST "https://api.intelligence.io.solutions/api/v1/workflows/run-file" \
  -F "yaml_file=@path/to/workflow.yaml"
```
## API Endpoints<a id="api-endpoints"></a>
Please refer to the [IO.net documentation](https://docs.io.net/docs/exploring-ai-agents) for the available endpoints and their documentation.
## License<a id="license"></a>
See the [LICENSE](https://github.com/ionet-official/iointel?tab=Apache-2.0-1-ov-file#readme) file for license rights and limitations (Apache 2.0).
# IOIntel: Agentic Tools with Beautiful UI
## Features<a id="features"></a>
- **Agentic tool use**: Agents can call Python tools, return results, and chain reasoning.
- **Rich tool call visualization**: Tool calls and results are rendered as beautiful, gold-accented "pills" in both CLI (with [rich](https://github.com/Textualize/rich)) and Gradio UI.
- **Dynamic UI**: Agents can generate forms (textboxes, sliders, etc.) on the fly in the Gradio app.
- **Live CSS theming**: Agents can change the UI theme at runtime.
- **Jupyter compatible**: The Gradio UI can be launched in a notebook cell.
---
## Quickstart: CLI Usage<a id="quickstart-cli-usage"></a>
```python
from iointel import Agent, register_tool

@register_tool
def add(a: float, b: float) -> float:
    return a + b

agent = Agent(
    name="Solar",
    instructions="You are a helpful assistant.",
    model="gpt-4o",
    api_key="sk-...",
    tools=[add],
    show_tool_calls=True,  # Pretty rich tool call output!
)

import asyncio

async def main():
    result = await agent.run("What is 2 + 2?", pretty=True)
    # Tool calls/results are shown in rich formatting!

asyncio.run(main())
```


---
## Quickstart: Gradio UI<a id="quickstart-gradio-ui"></a>
```python
from iointel import Agent, register_tool

@register_tool
def get_weather(city: str) -> dict:
    return {"temp": 72, "condition": "Sunny"}

agent = Agent(
    name="GradioSolar",
    instructions="You are a helpful assistant.",
    model="gpt-4o",
    api_key="sk-...",
    tools=[get_weather],
    show_tool_calls=True,
)

# Launch the beautiful Gradio Chat UI (works in Jupyter too!)
agent.launch_gradio_ui(interface_title="Iointel Gradio Solar")

# Or, for more control across different agents:
# from iointel.src.ui.io_gradio_ui import IOGradioUI
# ui = IOGradioUI(agent, interface_title="Iointel GradioSolar")
# ui.launch(share=True)
```

- **Tool calls** are rendered as beautiful, gold-trimmed panels in the chat.
- **Dynamic UI**: If your agent/tool returns a UI spec, it will be rendered live.
- **Works in Jupyter**: Just run the above in a notebook cell!
---