# FunctAI
DSPy-powered function decorators that turn typed Python into single-call LLM programs.
Highlights
- Single-call `@magic` functions with `step()`/`final()` markers
- Per-function adapters (`adapter="json" | "chat" | dspy.Adapter`)
- Pass LM by string (`lm="gpt-4.1"`) or LM instance; DSPy resolves providers
- Works with DSPy modules: `Predict`, `ChainOfThought`, `ReAct` (with tools)
- Structured outputs: plain types, lists/dicts, dataclasses, namedtuples
- Batch with `parallel(...)`, compile with `optimize(...)`
## Installation
```bash
# Using uv (recommended)
uv pip install functai
# Or with pip
pip install functai
```
## Configure LM
You can let FunctAI/DSPy resolve the provider by passing a model string:
```python
@magic(lm="gpt-4.1")
def echo(text: str) -> str:
    """Return the input text."""
    ...
```
This is equivalent to setting the underlying module's LM to `dspy.LM("gpt-4.1")`. You can also pass an LM instance directly:
```python
import dspy
@magic(lm=dspy.LM(model="gpt-4.1"))
def echo(text: str) -> str: ...
```
Alternatively, configure globally:
```python
import dspy
dspy.settings.configure(lm=dspy.LM("gpt-4.1"))
```
Ensure provider environment variables are set (e.g., `OPENAI_API_KEY`).
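A small helper along these lines can fail fast when a key is missing before any call is made (the helper name is my own, not part of FunctAI):

```python
import os

def require_env(name: str) -> bool:
    """Return True when the given provider variable is set and non-empty."""
    return bool(os.environ.get(name))
```

For example, check `require_env("OPENAI_API_KEY")` at startup and raise a clear error instead of letting the first LLM call fail with an opaque provider exception.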
## Quick Start
```python
from functai import magic
@magic(adapter="json")
def classify(text: str) -> str:
    """Return 'positive' or 'negative'."""
    ...
result = classify("This library is amazing!") # → "positive"
```
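Conceptually, `@magic` reads the function's annotations and docstring to build the LLM signature. A rough, FunctAI-independent sketch of that mapping (the helper name is hypothetical):

```python
from typing import get_type_hints

def sketch_signature(fn) -> str:
    """Derive an 'inputs -> output' signature string from annotations."""
    hints = get_type_hints(fn)
    out = hints.pop("return", str)
    ins = ", ".join(f"{name}: {tp.__name__}" for name, tp in hints.items())
    return f"{ins} -> result: {out.__name__}"
```

Applied to `classify` above, this yields `"text: str -> result: str"` — the shape of what the model is asked to fill in.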
## Markers and Lazy Outputs
Use `step()` and `final()` markers to define intermediate and final outputs. FunctAI builds a DSPy signature from your function and makes a single LLM call when a marked value is first used.
```python
from functai import magic, step, final
from typing import List
@magic(adapter="json", lm="gpt-4.1")
def analyze(text: str) -> dict:
    _sentiment: str = step(desc="Determine sentiment")
    _keywords: List[str] = step(desc="Extract keywords")
    summary: dict = final(desc="Combine analysis")
    return summary
res = analyze("FunctAI makes AI programming fun and easy!")
```
Lazy proxies materialize on first use. You can:
- Access directly: `str(res)` or `dict(res)`
- Force materialize: `res.value`
- Get raw DSPy output: `analyze(..., _prediction=True)` (returns `dspy.Prediction`)
Tip: Return eager values by doing `return summary.value`.
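The behavior can be pictured with a toy proxy (not FunctAI's actual class) that defers a computation until first use and caches the result:

```python
class LazyOutput:
    """Toy stand-in for a lazy output proxy: computes once, on first access."""

    def __init__(self, compute):
        self._compute = compute  # the deferred work, e.g. the single LLM call
        self._done = False
        self._value = None

    @property
    def value(self):
        if not self._done:
            self._value = self._compute()
            self._done = True
        return self._value

    def __str__(self):
        return str(self.value)
```

The first `str(proxy)` or `proxy.value` triggers the computation; every later access reuses the cached value, which is why a single LLM call suffices.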
Unannotated markers are supported too:
```python
@magic(lm="gpt-4.1")
def fn(x: str) -> str:
    tmp = step(desc="Intermediate")  # typed Any
    out = final(desc="Final")        # typed Any
    return out
```
You can also return `final(...)` directly without naming a variable. In that case, the output is named `result` by default and typed from your function’s return annotation:
```python
from functai import magic, step, final
from typing import List
@magic(adapter="json", lm="gpt-4.1")
def analyze(text: str) -> dict:
    _sentiment: str = step("Determine sentiment")
    _keywords: List[str] = step("Extract keywords")
    return final("Combine analysis")  # final output name defaults to 'result'
res = analyze("FunctAI makes AI programming fun and easy!")
```
## Structured Outputs
You can return dataclasses or namedtuples. Fields become model outputs and are reconstructed after the call.
```python
from dataclasses import dataclass
from typing import List
from functai import magic, step, final
@dataclass
class Analysis:
    sentiment: str
    confidence: float
    keywords: List[str]

@magic(adapter="json", lm="gpt-4.1")
def analyze_text(text: str) -> Analysis:
    _sentiment: str = step("Determine sentiment")
    _confidence: float = step("Confidence score between 0 and 1")
    _keywords: List[str] = step("Important keywords")
    result: Analysis = final("Complete analysis")
    return result
```
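The field-to-output mapping can be sketched without FunctAI: enumerate the dataclass fields, then rebuild an instance from a dict of per-field model outputs (the `rebuild` helper is illustrative, not FunctAI API):

```python
from dataclasses import dataclass, fields
from typing import List

@dataclass
class Analysis:
    sentiment: str
    confidence: float
    keywords: List[str]

def rebuild(cls, outputs: dict):
    """Reconstruct a dataclass instance from per-field model outputs."""
    names = [f.name for f in fields(cls)]
    return cls(**{n: outputs[n] for n in names})
```

This is the shape of "fields become model outputs and are reconstructed after the call": each field is requested individually, then reassembled into the declared return type.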
## Choosing Modules and Adapters
`module` accepts a string (`"predict"`, `"cot"`, `"react"`), a `dspy.Module` subclass, or an instance. All `module_kwargs` are forwarded to the module constructor. For ReAct, pass tools via `tools=[...]` or `module_kwargs={"tools": [...]}`.
```python
import dspy
from functai import magic, final
def get_weather(city: str) -> str:
    return "sunny"

@magic(lm="gpt-4.1", module=dspy.ReAct, tools=[get_weather])
def agent(question: str) -> str:
    answer: str = final("Answer the question")
    return answer

@magic(module="cot", lm="gpt-4.1")
def derive(text: str) -> str:
    proof: str = final("Show your reasoning")
    return proof
```
ReAct notes
- ReAct builds two subprograms: an agent (react) and an extractor (extract). Step/Final fields are produced by the extract stage.
- If a module only returns `result`, FunctAI maps your `final()` to that `result` automatically.
Adapters
- `adapter="json"` → `dspy.JSONAdapter()`
- `adapter="chat"` → `dspy.ChatAdapter()`
- Custom adapters are supported (class or instance). A two-step adapter requires `adapter_kwargs`.
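String aliases like these typically resolve through a small lookup that also accepts classes and instances. A generic sketch of the pattern, using placeholder classes rather than dspy's:

```python
class JSONAdapter:  # placeholder, stands in for dspy.JSONAdapter
    pass

class ChatAdapter:  # placeholder, stands in for dspy.ChatAdapter
    pass

ALIASES = {"json": JSONAdapter, "chat": ChatAdapter}

def resolve_adapter(spec):
    """Accept a string alias, an adapter class, or a ready instance."""
    if isinstance(spec, str):
        return ALIASES[spec]()
    if isinstance(spec, type):
        return spec()
    return spec
```

This is why `adapter="json"`, `adapter=dspy.JSONAdapter`, and `adapter=dspy.JSONAdapter()` can all be accepted by the same parameter.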
## Batch and Optimize
Run a parallel batch over the underlying module:
```python
from functai import parallel
rows = parallel(classify, inputs=[{"text": "great"}, {"text": "bad"}])
```
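Batching of this shape can be pictured with a thread pool; a minimal, FunctAI-free sketch (`run_parallel` is my own name, not the library's):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(fn, inputs, max_workers=8):
    """Apply fn to each kwargs dict concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(fn, **kw) for kw in inputs]
        return [f.result() for f in futures]
```

Threads suit this workload because each call is I/O-bound (waiting on the provider), so concurrency helps even under the GIL.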
Compile with an optimizer (e.g., BootstrapFewShot). The per-function adapter and LM are preserved:
```python
from functai import optimize
trainset = [("I love it", "positive"), ("Terrible UX", "negative")]
compiled_classify = optimize(classify, trainset=trainset)
compiled_classify("So good!")
```
## Prediction Mode
Every `@magic` function accepts `_prediction=True` to return the raw `dspy.Prediction`:
```python
pred = agent("What is the surprise?", _prediction=True)
print(pred.result) # the final answer
print(pred.reasoning) # when available (e.g., ReAct extract stage)
print(pred.trajectory) # full tool-use trace for ReAct
```
## Inspect & Preview Prompts
Preview the adapter-formatted messages without making a call:
```python
from functai import format_prompt
preview = format_prompt(analyze, text="Hello world")
print(preview["render"]) # nice human-readable view
print(preview["messages"]) # list of {role, content}
print(preview["demos"]) # extracted demos (if any)
```
After a call, view the provider-level history via DSPy:
```python
from functai import inspect_history_text
print(inspect_history_text()) # captures dspy.inspect_history() output
```
## Examples
See the `examples/` directory:
- `01_format_preview.qmd`: Preview adapter-formatted prompts (with demos)
- `02_history.qmd`: Inspect provider-level history after a call
- `03_optimization.qmd`: Optimize a function with DSPy
- `04_react_agent.qmd`: Build an agent with tools via ReAct
Render with Quarto or open as Markdown:
```bash
quarto preview examples/01_format_preview.qmd
```
## Linting Tips
Some linters (e.g., Ruff F841) flag variables assigned but not used. This is common with `step()` markers that are consumed by the LLM, not Python. Two options:
1) Prefix with underscores (sanitized when building the signature)
```python
_sentiment: str = step("Determine sentiment")
```
2) Mark as used with `use(...)`
```python
from functai import use
sentiment: str = step("Determine sentiment")
use(sentiment)
```
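A `use` of this kind can be as simple as a no-op that makes the read explicit — a sketch of the idea, not necessarily FunctAI's implementation:

```python
def use(*values) -> None:
    """No-op: referencing the values here satisfies unused-variable linters."""
    return None
```

The call site is the point: the linter sees `sentiment` consumed, even though the value only matters to the LLM signature.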
## Development
```bash
# Install with dev dependencies
uv pip install -e ".[dev]"
# Run tests
uv run pytest
# Format and lint
uv run ruff format .
uv run ruff check .
```