# AlloAI
[CI](https://github.com/m4xw311/AlloAI/actions/workflows/workflow.yml)
[PyPI version](https://badge.fury.io/py/alloai)
[PyPI](https://pypi.org/project/alloai/)
[License](https://github.com/m4xw311/AlloAI/blob/main/LICENSE)
An agentless vibe coding framework for seamlessly mixing code and LLM instructions in executable markdown files. AlloAI enables you to write polyglot programs that support both traditional programming languages and natural language instructions, all executed in a shared runtime environment.
## Overview
AlloAI lets you write markdown files that mix Python code with natural language instructions. The code blocks execute normally, while text between them becomes prompts for an LLM to generate and execute additional code - all in the same runtime environment with shared variables.
## Features
- **Seamless Integration**: Mix Python code and natural language instructions in markdown files
- **Shared Runtime**: All code blocks and LLM-generated code share the same execution context
- **State Preservation**: Variables and their values persist across code blocks and LLM instructions
- **Simple Syntax**: Use standard markdown code blocks and plain text instructions
- **Flexible LLM Backend**: Supports OpenAI-compatible APIs (including local models)
- **Easy Installation**: Available as a pip-installable package with CLI support
- **Code Export**: Generate standalone Python scripts from your AlloAI executions for reuse
## Installation
### From PyPI (Recommended)
```bash
pip install alloai
```
### From Source (Development)
```bash
git clone https://github.com/m4xw311/AlloAI.git
cd AlloAI
pip install -e .
```
For development with additional tools:
```bash
pip install -e ".[dev]"
```
## Configuration
Create a `.env` file in your project directory with your OpenAI API configuration:
```env
OPENAI_API_KEY=your_api_key_here
OPENAI_BASE_URL=https://api.openai.com/v1 # Optional: for custom endpoints
OPENAI_MODEL=gpt-3.5-turbo # Optional: specify model
```
You can copy the provided example:
```bash
cp .env.example .env
# Then edit .env with your API key
```
**Configuration Options:**
- `OPENAI_API_KEY`: Required. Your OpenAI API key
- `OPENAI_BASE_URL`: Optional. Custom API endpoint for OpenAI-compatible services
- `OPENAI_MODEL`: Optional. Model to use (default: gpt-3.5-turbo)
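The table above maps directly onto simple environment lookups. A minimal sketch of equivalent resolution logic (a hypothetical helper, not AlloAI's actual code; it assumes the variables are already in the environment, e.g. via python-dotenv):

```python
import os

def resolve_config(env=os.environ):
    """Resolve the three settings described above, with the stated defaults."""
    api_key = env.get("OPENAI_API_KEY")
    if not api_key:
        # OPENAI_API_KEY is the only required setting
        raise RuntimeError("OPENAI_API_KEY is required (see .env setup above)")
    return {
        "api_key": api_key,
        "base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "model": env.get("OPENAI_MODEL", "gpt-3.5-turbo"),
    }
```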
## Usage
### Basic Usage
Once installed, you can run AlloAI scripts directly from the command line:
```bash
alloai script.md
```
### Command-Line Options
```bash
alloai --help # Show help message
alloai --version # Show version
alloai -v script.md # Run with verbose output
alloai --env /path/to/.env script.md # Use specific .env file
alloai -o output.py script.md # Export generated code to a file
```
### Writing AlloAI Scripts
Create a markdown file with interleaved code blocks and natural language instructions:
**example.md:**
````markdown
```python
x = 5
```
Increment x by 1
```python
print(x)
```
Multiply x by 10 and display it
````
Run it:
```bash
alloai example.md
```
Output:
```
6
60
```
### More Examples
**Data Processing Example:**
````markdown
```python
data = [1, 2, 3, 4, 5]
```
Calculate the sum and average of the data list, store them in variables called total and average
```python
print(f"Final sum: {total}")
print(f"Final average: {average}")
```
````
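For an instruction like the one above, the LLM would typically produce code along these lines (illustrative only, not the model's exact output):

```python
data = [1, 2, 3, 4, 5]

# Compute the sum and average, using the variable names the instruction asks for
total = sum(data)
average = total / len(data)
```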
**String Manipulation Example:**
````markdown
```python
text = "hello world"
```
Convert the text to uppercase and reverse it, update the text variable
```python
print(f"Result: {text}")
```
````
**Working with Files:**
````markdown
```python
import json
data = {"name": "AlloAI", "version": "0.1.0"}
```
Write the data dictionary to a file called output.json with proper formatting
```python
with open("output.json", "r") as f:
    loaded = json.load(f)
    print(f"Loaded: {loaded}")
```
````
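For the file-writing instruction in that example, the LLM would typically generate something like the following (illustrative, not the model's exact output):

```python
import json

data = {"name": "AlloAI", "version": "0.1.0"}

# Write the dictionary to output.json with indentation for readability
with open("output.json", "w") as f:
    json.dump(data, f, indent=2)
```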
## Code Generation and Export
AlloAI can generate a standalone Python script containing all the code that was executed during a run, including both the original code blocks and any LLM-generated code. This is useful for:
- Debugging and understanding what code the LLM generated
- Creating reusable scripts from your AlloAI experiments
- Sharing the complete execution flow with others
- Running the same logic without requiring the LLM on subsequent runs
### Exporting Generated Code
Use the `-o` or `--output` flag to save the complete executed code:
```bash
alloai script.md -o generated_script.py
```
This will:
1. Execute your AlloAI script normally
2. Collect all executed code (both from markdown and LLM-generated)
3. Save it to a standalone Python file with helpful comments
4. Make the file executable (on Unix-like systems)
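The steps above could be sketched roughly as follows (a hypothetical helper, not AlloAI's actual implementation): collect each executed segment with a comment header, join them under a shebang, and mark the result executable.

```python
import os
import stat

def export_script(segments, path):
    """segments: list of (comment, code) pairs collected during execution."""
    lines = ["#!/usr/bin/env python3", "# This file was generated by AlloAI", ""]
    for comment, code in segments:
        lines.append(f"# {comment}")
        lines.append(code.rstrip())
        lines.append("")
    with open(path, "w") as f:
        f.write("\n".join(lines))
    # Make the file executable on Unix-like systems
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
```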
### Example
Given this AlloAI script (`calculation.md`):
````markdown
```python
x = 10
y = 20
```
Calculate the sum of x and y and store it in a variable called result
```python
print(f"The result is: {result}")
```
````
Running with export:
```bash
alloai calculation.md -o calculation_standalone.py
```
Will generate `calculation_standalone.py`:
```python
#!/usr/bin/env python3
# This file was generated by AlloAI
# You can run this file directly with: python3 calculation_standalone.py
# Code block from markdown
x = 10
y = 20
# LLM-generated code for: Calculate the sum of x and y...
result = x + y
# Code block from markdown
print(f"The result is: {result}")
```
You can then run the generated script directly:
```bash
python3 calculation_standalone.py
# Output: The result is: 30
```
## Python API
You can also use AlloAI programmatically in your Python code:
```python
from alloai import parse_markdown, execute_markdown
# Read and parse markdown content
with open("script.md", "r") as f:
    content = f.read()
# Parse the markdown
parts = parse_markdown(content)
# Execute the parsed content
execute_markdown(parts)
# Or, execute and save the generated code
generated_code = execute_markdown(parts, output_file="output.py")
print(f"Generated code:\n{generated_code}")
```
## How It Works
1. **Parse**: AlloAI reads your markdown file and identifies code blocks and text instructions
2. **Execute**: Code blocks run directly in a persistent Python environment
3. **Generate**: Text instructions are sent to the LLM with the current program state
4. **Continue**: LLM-generated code executes in the same environment, preserving all variables
The key insight is that everything shares the same runtime - your code, LLM-generated code, and all variables persist throughout execution.
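The four steps above can be sketched as a single loop over parsed parts. This is a minimal illustration with a hypothetical `generate_code` stand-in for the LLM call, not AlloAI's actual implementation:

```python
def run_parts(parts, generate_code):
    """parts: list of ("code", source) or ("text", instruction) tuples.
    generate_code(instruction, state) returns Python source (LLM stand-in)."""
    namespace = {}  # the shared runtime: everything executes here
    for kind, content in parts:
        if kind == "code":
            exec(content, namespace)  # run markdown code blocks directly
        else:
            # Summarize current variables so the LLM sees program state
            state = {k: repr(v) for k, v in namespace.items()
                     if not k.startswith("__")}
            exec(generate_code(content, state), namespace)  # LLM-generated code
    return namespace
```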
## Requirements
- Python 3.8+
- OpenAI API key (or compatible API endpoint)
## Supported LLM Providers
AlloAI works with any OpenAI-compatible API:
- **OpenAI**: GPT-3.5, GPT-4, etc.
- **Azure OpenAI**: Use custom `OPENAI_BASE_URL`
- **Local Models**: Via LM Studio, Ollama, llama.cpp, etc.
- **Alternative Providers**: Any service with OpenAI-compatible endpoints
## Limitations
- Currently supports only Python code blocks
- LLM instructions are limited by the model's code generation capabilities
- Error handling in LLM-generated code may require manual intervention
- Large variable states may exceed LLM context limits
## Troubleshooting
- **API Key Not Found**: Create a `.env` file with `OPENAI_API_KEY=your_key_here`
- **Import Errors**: Run `pip install alloai`
- **Connection Issues**: Check your API key and internet connection
## Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines. For technical details, see [docs/DEVELOPMENT.md](docs/DEVELOPMENT.md).
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for a detailed history of changes to AlloAI.