<p align="center">
<img style="width: 200px; height: 178px" src="Logo_Portia_Stacked_Black.png" />
</p>
# Portia SDK Python
Welcome to our repository! Portia AI is an open source developer framework for stateful, authenticated agentic workflows. The core product accessible in this repository is extensible with our complementary cloud features, which are aimed at making production deployments easier and faster.
Play around, break things and tell us how you're getting on in our <a href="https://discord.gg/DvAJz9ffaR" target="_blank">**Discord channel (↗)**</a>. Most importantly please be kind to your fellow humans (<a href="https://github.com/portiaAI/portia-sdk-python/blob/main/CODE_OF_CONDUCT.md" target="_blank" rel="noopener noreferrer">**Code of Conduct (↗)**</a>).
## Why Portia AI
| Problem | Portia's answer |
| ------- | --------------- |
| **Planning:** Many use cases require visibility into the LLM’s reasoning, particularly for complex tasks requiring multiple steps and tools. LLMs also struggle picking the right tools as their tool set grows: a recurring limitation for production deployments | **Multi-agent plans:** Our open source, multi-shot prompter guides your LLM to produce a [`Plan`](https://docs.portialabs.ai/generate-plan) in response to a prompt, weaving the relevant tools, inputs and outputs for every step. |
| **Execution:** Tracking an LLM’s progress mid-task is difficult, making it harder to intervene when guidance is needed. This is especially critical for enforcing company policies or correcting hallucinations (hello, missing arguments in tool calls!) | **Stateful workflows:** Portia will spin up a multi-agent [`Workflow`](https://docs.portialabs.ai/execute-workflow) to execute on generated plans and track their state throughout execution. Using our [`Clarification`](https://docs.portialabs.ai/manage-clarifications) abstraction you can define points where you want to take control of workflow execution e.g. to resolve missing information or multiple choice decisions. Portia serialises the workflow state, and you can manage its storage / retrieval yourself or use our cloud offering for simplicity. |
| **Authentication:** Existing solutions often disrupt the user experience with cumbersome authentication flows or require pre-emptive, full access to every tool—an approach that doesn’t scale for multi-agent assistants. | **Extensible, authenticated tool calling:** Bring your own tools on our extensible [`Tool`](https://docs.portialabs.ai/extend-tool-definitions) abstraction, or use our growing plug and play authenticated [tool library](https://docs.portialabs.ai/run-portia-tools), which will include a number of popular SaaS providers over time (Google, Zendesk, Hubspot, Github etc.). All Portia tools feature just-in-time authentication with token refresh, offering security without compromising on user experience. |
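To make the `Plan` abstraction above concrete, here is an illustrative sketch of the shape a serialised plan might take. The field names below are invented for intuition only, not the SDK's exact schema:

```python
# Illustrative only: a hypothetical shape for a serialised Plan.
# Field names here are for intuition, not the SDK's exact schema.
hypothetical_plan = {
    "query": "Which stock price grew faster in 2024, Amazon or Google?",
    "steps": [
        {"task": "Search for Amazon's 2024 stock performance", "tool": "search_tool", "output": "$amazon_growth"},
        {"task": "Search for Google's 2024 stock performance", "tool": "search_tool", "output": "$google_growth"},
        {"task": "Compare the two growth figures", "tool": "llm_tool", "output": "$comparison"},
    ],
}

# Each step names the tool it needs and the output it feeds into later steps.
for step in hypothetical_plan["steps"]:
    print(step["task"], "->", step["output"])
```

The key idea is that the plan is structured data, so it can be inspected, audited and stored before anything is executed.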
## Quickstart
### Installation
0. Ensure you have Python 3.10 or higher installed. If you need to update your Python version, please visit the [Python docs](https://www.python.org/downloads/).
```bash
python --version
```
1. Install the Portia Python SDK
```bash
pip install portia-sdk-python
```
2. Ensure you have an API key set up
```bash
export OPENAI_API_KEY='your-api-key-here'
```
3. Validate your installation by submitting a simple maths prompt from the command line
```bash
portia-cli run "add 1 + 2"
```
>[!NOTE]
> We support Anthropic and Mistral AI as well, and we're working on adding more models asap. For now, if you want to use either of those providers you'll need to set up the relevant API key and add one of these args to your CLI command:<br/>
> `portia-cli run --llm-provider="anthropic" "add 1 + 2"` or `portia-cli run --llm-provider="mistralai" "add 1 + 2"`
**All set? Now let's explore some basic usage of the product 🚀**
### E2E example with open source tools
This example is meant to get you familiar with a few of our core abstractions:
- A `Plan` is the set of steps an LLM thinks it should take in order to respond to a user prompt. Plans are immutable, structured and human-readable.
- A `Workflow` is a unique instantiation of a `Plan`. The purpose of a `Workflow` is to capture the state of a unique plan run at every step in an auditable way.
- A `Runner` is the main orchestrator of plan generation. It is also capable of workflow creation, execution, pausing and resumption.
Before running the code below, make sure you have the following keys set as environment variables in your .env file:
- An OpenAI API key (or other LLM API key) set as `OPENAI_API_KEY=`
- A Tavily <a href="https://tavily.com/" target="_blank">(**↗**)</a> API key set as `TAVILY_API_KEY=`
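For reference, your `.env` file should contain lines like the following (the values shown are placeholders for your own keys):

```
OPENAI_API_KEY=<your-openai-api-key>
TAVILY_API_KEY=<your-tavily-api-key>
```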
```python
from dotenv import load_dotenv
from portia.runner import Runner
from portia.config import default_config
from portia.open_source_tools.registry import example_tool_registry
load_dotenv()
# Instantiate a Portia runner. Load it with the default config and with the example tools.
runner = Runner(config=default_config(), tool_registry=example_tool_registry)
# Generate the plan from the user query
plan = runner.generate_plan('Which stock price grew faster in 2024, Amazon or Google?')
print(plan.model_dump_json(indent=2))
# Create and execute the workflow from the generated plan
workflow = runner.create_workflow(plan)
workflow = runner.execute_workflow(workflow)
# Serialise into JSON and print the output
print(workflow.model_dump_json(indent=2))
```
### E2E example with Portia cloud storage
Our cloud offering allows you to easily store and retrieve workflows in the Portia cloud, access our library of cloud-hosted tools, and use the Portia dashboard to view workflow, clarification and tool call logs. Head over to <a href="https://apps.portialabs.ai" target="_blank">**apps.portialabs.ai (↗)**</a> and get your Portia API key. You will need to set it as the env variable `PORTIA_API_KEY`.<br/>
Note that this example also requires the environment variables `OPENAI_API_KEY` (or the Anthropic / Mistral AI equivalent if you're using one of those providers) and `TAVILY_API_KEY`, just like the [previous one](#e2e-example-with-open-source-tools).
The example below introduces **some** of the config options available with Portia AI:
- The `storage_class` is set using the `StorageClass` enum to `CLOUD`. So long as your `PORTIA_API_KEY` is set, workflows and tool calls will be logged and appear automatically in your Portia dashboard at <a href="https://apps.portialabs.ai" target="_blank">**apps.portialabs.ai (↗)**</a>.
- The `default_log_level` is set using the `LogLevel` enum to `DEBUG` so you can get some insight into the sausage factory in your terminal, including plan generation, workflow states, tool calls and outputs at every step 😅
- The `llm_provider`, `llm_model` and `xxx_api_key` (varies depending on the model provider chosen) are used to choose the specific LLM provider and model. In the example below we're splurging and using GPT-4o!
Finally we also introduce the concept of a `tool_registry`, which is a flexible grouping of tools.
```python
import os
from dotenv import load_dotenv
from portia.runner import Runner
from portia.config import Config, StorageClass, LogLevel, LLMProvider, LLMModel
from portia.open_source_tools.registry import example_tool_registry
load_dotenv()
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
# Load the default config and override the storage class to point to the Portia cloud
my_config = Config.from_default(
storage_class=StorageClass.CLOUD,
default_log_level=LogLevel.DEBUG,
llm_provider=LLMProvider.OPENAI, # You can use `MISTRAL`, `ANTHROPIC` instead
llm_model=LLMModel.GPT_4_O, # You can use any of the available models instead
openai_api_key=OPENAI_API_KEY # Use `mistralai_api_key=MISTRALAI_API_KEY` or `anthropic_api_key=ANTHROPIC_API_KEY` instead
)
# Instantiate a Portia runner. Load it with the config and with the open source example tool registry
runner = Runner(config=my_config, tool_registry=example_tool_registry)
# Execute a workflow from the user query
workflow = runner.execute_query('Which stock price grew faster in 2024, Amazon or Google?')
# Serialise into JSON and print the output
print(workflow.model_dump_json(indent=2))
```
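To build intuition for the `tool_registry` concept used above, the sketch below models a registry as a named grouping of tools that can be composed together. The class and method names are invented for illustration; the SDK's own `Tool` and registry classes have their own interfaces:

```python
# Conceptual sketch of what a tool registry groups together.
# Names are invented for illustration, not the Portia SDK API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimpleTool:
    name: str
    description: str
    run: Callable[[str], str]

class SimpleToolRegistry:
    def __init__(self, tools: list[SimpleTool]):
        self._tools = {t.name: t for t in tools}

    def get_tool(self, name: str) -> SimpleTool:
        return self._tools[name]

    def __add__(self, other: "SimpleToolRegistry") -> "SimpleToolRegistry":
        # Registries compose, so you can mix your own tools
        # with an off-the-shelf library of tools.
        return SimpleToolRegistry(list(self._tools.values()) + list(other._tools.values()))

search = SimpleTool("search_tool", "Web search", lambda q: f"results for {q}")
calc = SimpleTool("calculator_tool", "Maths", lambda e: str(eval(e)))  # eval: sketch only
registry = SimpleToolRegistry([search]) + SimpleToolRegistry([calc])
print(registry.get_tool("calculator_tool").run("1 + 2"))
```

The composition step is the point: grouping tools into registries lets you hand an agent exactly the tool set a given workflow needs, rather than full access to everything.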
## Learn more
- Head over to our docs at <a href="https://docs.portialabs.ai" target="_blank">**docs.portialabs.ai (↗)**</a>.
- Join the conversation on our <a href="https://discord.gg/DvAJz9ffaR" target="_blank">**Discord channel (↗)**</a>.
- Watch us embarrass ourselves on our <a href="https://www.youtube.com/@PortiaAI" target="_blank">**YouTube channel (↗)**</a>.
- Follow us on <a href="https://www.producthunt.com/posts/portia-ai" target="_blank">**Product Hunt (↗)**</a>.
## Contribution guidelines
Head on over to our <a href="https://github.com/portiaAI/portia-sdk-python/blob/main/CONTRIBUTING.md" target="_blank">**contribution guide (↗)**</a> for details.