# Python Code Execution Container
A secure, lightweight open source solution for executing Python code in an isolated environment, specifically designed for AI/ML applications and LLM agents.
<p align="center">
<img src="docs/logo.png" alt="logo">
</p>
## Overview
This project provides a sandboxed Python code execution environment built on IPython and Docker. It offers:
- **Secure Execution**: Code runs in Docker containers, preventing unauthorized system access
- **Flexible Dependencies**: Supports static and runtime dependency management
- **Real-time Streaming**: Chunked streaming of execution output as it's generated
- **Image Support**: Handle image outputs from matplotlib and other visualization libraries
- **Resource Control**: Built-in timeout mechanisms and container lifecycle management
- **Reproducible Environment**: Consistent execution environment across different systems
- **LLM Agent Ready**: Ideal for AI applications that need to execute Python code
This project is currently in beta, with active development of new features ongoing.
## Installation
### Python package
```bash
pip install gradion-executor
```
### Container image
**Note**: The container image build process requires [Docker](https://www.docker.com/) to be installed on your system.
#### Default build
To build a container image with default settings and no extra dependencies:
```bash
python -m gradion.executor build
```
This creates a Docker image tagged as `gradion/executor` with base Python dependencies required for the code execution environment.
#### Custom build
To create a custom image with additional dependencies for your application, create a dependencies file (e.g., `dependencies.txt`). For example:
```txt
pandas = "^2.2"
scikit-learn = "^1.5"
matplotlib = "^3.9"
```
To build the image with custom tag and dependencies:
```bash
python -m gradion.executor build \
--tag my-executor:v1 \
--dependencies path/to/dependencies.txt
```
The dependencies file should list Python packages in [Poetry dependency specification format](https://python-poetry.org/docs/dependency-specification/). These will be installed in addition to the base dependencies required for the execution environment. The execution container also supports [installing dependencies at runtime](#installing-dependencies-at-runtime).
## Usage
The following examples demonstrate how to use the `ExecutionContainer` and `ExecutionClient` context managers to execute Python code in an IPython environment running in a Docker container. Runnable scripts for the code snippets below are available in the [examples](examples/) directory.
### Basic usage
Here's a simple example that demonstrates how to execute Python code in an execution container. The `ExecutionContainer` context manager creates and starts a container for code execution, and the `ExecutionClient` context manager connects to an IPython kernel running in the container. The example below executes the code `print('Hello, world!')` and prints the output text:
```python
from gradion.executor import ExecutionContainer, ExecutionClient

# Create and start a container for code execution
async with ExecutionContainer(tag="gradion/executor") as container:
    # Create and connect to an IPython kernel
    async with ExecutionClient(host="localhost", port=container.port) as client:
        # Execute Python code and await the result
        result = await client.execute("print('Hello, world!')")
        # Print the execution output text
        print(f"Output: {result.text}")  # Output: Hello, world!
```
The default image used by `ExecutionContainer` is `gradion/executor`. You can specify a custom image with the `tag` argument, e.g. `ExecutionContainer(tag="my-executor:v1")`.
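Because the snippets in this README use `async with` at the top level, they must run inside a coroutine. A minimal runnable wrapper for the example above:
```python
import asyncio

from gradion.executor import ExecutionClient, ExecutionContainer

async def main():
    async with ExecutionContainer(tag="gradion/executor") as container:
        async with ExecutionClient(host="localhost", port=container.port) as client:
            result = await client.execute("print('Hello, world!')")
            print(f"Output: {result.text}")

asyncio.run(main())
```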
### State management
Code execution within the same client context is stateful, i.e. you can reference variables from previous executions. Executions in different client contexts are isolated from each other:
```python
from gradion.executor import ExecutionError  # raised when executed code fails

async with ExecutionContainer() as container:
    async with ExecutionClient(host="localhost", port=container.port) as client_1:
        # Execute code that defines variable x
        result = await client_1.execute("x = 1")
        assert result.text is None

        # Reference variable x defined in previous execution
        result = await client_1.execute("print(x)")
        assert result.text == "1"

    async with ExecutionClient(host="localhost", port=container.port) as client_2:
        # Variable x is not defined in this client context
        try:
            await client_2.execute("print(x)")
        except ExecutionError as e:
            assert e.args[0] == "NameError: name 'x' is not defined"
```
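Since the API is fully async, independent client contexts can also run executions concurrently. The following is only a sketch, under the assumption that the container accepts multiple simultaneous kernel connections (the example above only shows sequential clients):
```python
import asyncio

async with ExecutionContainer() as container:
    async with ExecutionClient(host="localhost", port=container.port) as client_a:
        async with ExecutionClient(host="localhost", port=container.port) as client_b:
            # Assuming each client gets its own kernel, the executions don't interfere
            result_a, result_b = await asyncio.gather(
                client_a.execute("x = 'A'; print(x)"),
                client_b.execute("x = 'B'; print(x)"),
            )
            print(result_a.text, result_b.text)  # A B
```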
### Output streaming
The execution client supports streaming output as it's generated during code execution:
```python
async with ExecutionContainer() as container:
    async with ExecutionClient(host="localhost", port=container.port) as client:
        # Code that produces output gradually
        code = """
import time
for i in range(5):
    print(f"Processing step {i}")
    time.sleep(1)
"""

        # Submit the execution and stream the output
        execution = await client.submit(code)
        async for chunk in execution.stream():
            # Output will be printed gradually:
            print(f"Received output: {chunk}")
            # Received output: Processing step 0
            # Received output: Processing step 1
            # Received output: Processing step 2
            # Received output: Processing step 3
            # Received output: Processing step 4

        # Get the aggregated result
        result = await execution.result()
        # Print the aggregated output text
        print(f"Aggregated output:\n{result.text}")
        # Aggregated output:
        # Processing step 0
        # Processing step 1
        # Processing step 2
        # Processing step 3
        # Processing step 4
```
The `stream()` method accepts an optional `timeout` argument (defaults to 120 seconds). In case of timeout, the execution is automatically terminated by interrupting the kernel.
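For long-running code you can pass a shorter timeout. A sketch, continuing the client context above; the exact exception type raised on timeout is not documented here, so it is caught broadly:
```python
execution = await client.submit("import time; time.sleep(300)")
try:
    # Use a shorter timeout than the 120-second default
    async for chunk in execution.stream(timeout=10):
        print(chunk, end="", flush=True)
except Exception as e:  # the library may raise a more specific error type
    print(f"Execution was terminated: {e}")
```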
### Installing dependencies at runtime
```python
async with ExecutionContainer() as container:
    async with ExecutionClient(host="localhost", port=container.port) as client:
        # Install the einops package
        await client.execute("!pip install einops")
        # Then you can use it in the following code
        # execution within the same client context
        result = await client.execute("""
import einops
print(einops.__version__)
""")
        print(f"Output: {result.text}")  # Output: 0.8.0
```
### Creating and returning plots
Plots created with `matplotlib` or other libraries are returned as PIL images:
```python
async with ExecutionContainer() as container:
    async with ExecutionClient(host="localhost", port=container.port) as client:
        execution = await client.submit("""
!pip install matplotlib

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
plt.figure(figsize=(8, 6))
plt.plot(x, np.sin(x))
plt.title('Sine Wave')
plt.show()

print("Plot generation complete!")
""")

        # Stream the output text as it's generated
        async for chunk in execution.stream():
            print(chunk, end="", flush=True)

        # Obtain the execution result
        result = await execution.result()
        assert "Plot generation complete!" in result.text

        # Save the created PIL image from the result
        result.images[0].save("output.png")
```
Images are not part of the output stream but are available as an `images` list on the `result` object. The example above saves the created image as [output.png](docs/sine.png).
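If an execution produces several figures, each appears as an entry in `result.images`. A small sketch that inspects and saves all of them (`size` and `mode` are standard PIL image attributes):
```python
# Inspect and save every PIL image returned by the execution
for i, image in enumerate(result.images):
    print(f"Image {i}: {image.size[0]}x{image.size[1]} pixels, mode {image.mode}")
    image.save(f"figure_{i}.png")
```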
### Bind mounts
You can mount host directories into the container to allow code execution to access external files:
```python
import aiofiles  # needed for the async file I/O below

# Map host paths to container paths. Host paths can be absolute or relative.
# Container paths must be relative and are created as subdirectories of the
# /app directory in the container. The /app directory is the container's
# working directory.
binds = {
    "./data": "data",      # Read data from host
    "./output": "output",  # Write results to host
}

# Create a data file on the host
async with aiofiles.open("data/input.txt", "w") as f:
    await f.write("hello world")

async with ExecutionContainer(binds=binds) as container:
    async with ExecutionClient(host="localhost", port=container.port) as client:
        # Read from mounted data directory
        result = await client.execute("""
with open('data/input.txt') as f:
    data = f.read()

# Process data...
processed = data.upper()

# Write to mounted output directory
with open('output/result.txt', 'w') as f:
    f.write(processed)
""")

# Check the result file on the host
async with aiofiles.open("output/result.txt", "r") as f:
    assert await f.read() == "HELLO WORLD"
```
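The example assumes the `data` and `output` directories already exist on the host before the input file is written and the container is started; if they might not, create them first:
```python
from pathlib import Path

# Ensure the host directories for the bind mounts exist
Path("data").mkdir(exist_ok=True)
Path("output").mkdir(exist_ok=True)
```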
### Environment variables
Environment variables can be passed to the container for configuration or secrets:
```python
env = {
    "API_KEY": "secret-key-123",
    "DEBUG": "1",
}

async with ExecutionContainer(env=env) as container:
    async with ExecutionClient(host="localhost", port=container.port) as client:
        # Access environment variables in executed code
        result = await client.execute("""
import os

api_key = os.environ['API_KEY']
print(f"Using API key: {api_key}")

debug = bool(int(os.environ.get('DEBUG', '0')))
if debug:
    print("Debug mode enabled")
""")
        print(result.text)
        # Using API key: secret-key-123
        # Debug mode enabled
```
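For real secrets, avoid hardcoding values in source. A sketch that forwards selected variables from the host environment instead (assumes `API_KEY` is set on the host):
```python
import os

# Forward a secret from the host environment instead of hardcoding it
env = {
    "API_KEY": os.environ["API_KEY"],  # raises KeyError if unset on the host
    "DEBUG": os.environ.get("DEBUG", "0"),
}
```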