# FunctAI Specification: The Function-is-the-Prompt Paradigm
- [FunctAI Specification: The Function-is-the-Prompt Paradigm](#functai-specification-the-function-is-the-prompt-paradigm)
- [FunctAI: The Function-is-the-Prompt Paradigm](#functai-the-function-is-the-prompt-paradigm)
- [1. Getting Started](#1-getting-started)
- [1.1. Installation](#11-installation)
- [1.2. Configuration](#12-configuration)
- [1.3. Your First AI Function](#13-your-first-ai-function)
- [2. Core Concepts](#2-core-concepts)
- [2.1. The `@ai` Decorator](#21-the-ai-decorator)
- [2.2. The `_ai` Sentinel](#22-the-_ai-sentinel)
- [2.3. Post-processing and Validation](#23-post-processing-and-validation)
- [3. Structured Output and Type System](#3-structured-output-and-type-system)
- [3.1. Basic Types](#31-basic-types)
- [3.2. Dataclasses and Complex Structures](#32-dataclasses-and-complex-structures)
- [3.3. Restricted Choices](#33-restricted-choices)
- [4. Configuration and Flexibility](#4-configuration-and-flexibility)
- [4.1. The Configuration Cascade](#41-the-configuration-cascade)
- [4.2. Global Configuration](#42-global-configuration)
- [4.3. Per-Function Configuration](#43-per-function-configuration)
- [4.4. Contextual Overrides](#44-contextual-overrides)
- [5. Advanced Execution Strategies](#5-advanced-execution-strategies)
- [5.1. Chain of Thought (CoT) Reasoning](#51-chain-of-thought-cot-reasoning)
- [5.2. Accessing Intermediate Steps](#52-accessing-intermediate-steps)
- [5.3. Multiple Explicit Outputs](#53-multiple-explicit-outputs)
- [5.4. Tool Usage (ReAct Agents)](#54-tool-usage-react-agents)
- [6. Stateful Interactions (Memory)](#6-stateful-interactions-memory)
- [7. Optimization (In-place Compilation)](#7-optimization-in-place-compilation)
- [7.1. The Optimization Workflow](#71-the-optimization-workflow)
- [7.2. Reverting Optimization](#72-reverting-optimization)
- [8. Inspection and Debugging](#8-inspection-and-debugging)
- [10. Real-World Examples](#10-real-world-examples)
- [10.1. Data Extraction Pipeline](#101-data-extraction-pipeline)
- [10.2. Research Assistant Agent](#102-research-assistant-agent)
# FunctAI: The Function-is-the-Prompt Paradigm
Welcome to FunctAI. This library reimagines how Python developers
integrate Large Language Models (LLMs) into their applications. FunctAI
allows you to treat AI models as reliable, typed Python functions,
abstracting away the complexities of prompt engineering and output
parsing.
The core philosophy of FunctAI is simple yet powerful:
> **The function definition *is* the prompt, and the function body *is*
> the program definition.**
By leveraging Python’s native features—docstrings for instructions, type
hints for structure, and variable assignments for logic flow—you can
define sophisticated AI behaviors with minimal boilerplate.
FunctAI is built on the powerful
[DSPy](https://github.com/stanfordnlp/dspy) framework, unlocking
advanced strategies like Chain-of-Thought, automatic optimization, and
agentic tool usage through an ergonomic, decorator-based API.
------------------------------------------------------------------------
## 1. Getting Started
### 1.1. Installation
Install FunctAI and its core dependency, DSPy.
``` bash
pip install functai
```
### 1.2. Configuration
Before using FunctAI, you must configure a default Language Model (LM).
This requires initializing a DSPy LM provider.
``` python
from functai import configure
# Configure FunctAI globally
configure(lm="gpt-4.1", temperature=1.0, api_key="<YOUR_API_KEY>")
```
### 1.3. Your First AI Function
Creating an AI function is as simple as defining a standard Python
function with type hints and a docstring, then decorating it with `@ai`.
``` python
from functai import ai, _ai
@ai
def summarize(text: str, focus: str = "key points") -> str:
"""Summarize the text in one concise sentence,
concentrating on the specified focus area."""
# The _ai sentinel represents the LLM output
return _ai
# Call it exactly like a normal Python function
long_text = "FunctAI bridges the gap between Python's expressive syntax and the dynamic capabilities of LLMs. It allows developers to focus on logic rather than boilerplate."
summarize(long_text, focus="developer benefits")
```
"FunctAI enables developers to concentrate on core logic by reducing boilerplate and leveraging Python's syntax with LLM capabilities."
**What happens when you call `summarize`?**
1. FunctAI intercepts the call.
2. It constructs a prompt using the docstring and the inputs (`text`,
`focus`).
3. It invokes the configured LM (GPT-4.1).
4. It parses the LM’s response and returns the result, ensuring it
matches the return type (`str`).
## 2. Core Concepts
### 2.1. The `@ai` Decorator
The `@ai` decorator is the magic wand. It transforms a Python function
into an LLM-powered program. It analyzes the function’s signature
(parameters, return type, and docstring) to understand the task
requirements and a prompt is automatically constructed out of that for
you.
``` python
@ai
def sentiment(text: str) -> str:
"""Analyze the sentiment of the given text.
Return 'positive', 'negative', or 'neutral'."""
... # An empty body or Ellipsis also works like returning _ai
```
### 2.2. The `_ai` Sentinel
The `_ai` object is a special sentinel used within an `@ai` function. It
represents the value(s) that will be generated by the AI. It acts as a
proxy, deferring the actual LM execution.
Returning `_ai` directly indicates that the LLM’s output is the
function’s final result.
``` python
@ai
def extract_price(description: str) -> float:
"""Extract the price from a product description."""
return _ai
```
### 2.3. Post-processing and Validation
FunctAI encourages writing robust code. You can assign `_ai` to a
variable and apply standard Python operations before returning. `_ai`
behaves dynamically as if it were the expected return type.
This allows you to combine the power of AI with the reliability of code
for validation, cleaning, or transformation.
``` python
@ai
def sentiment_score(text: str) -> float:
"""Returns a sentiment score between 0.0 (negative) and 1.0 (positive)."""
# _ai behaves like a float here due to the return type hint
score = _ai
# Post-processing: ensure the score is strictly within bounds
return max(0.0, min(1.0, float(score)))
sentiment_score("I think that FunctAI is amazing!")
```
0.95
## 3. Structured Output and Type System
FunctAI excels at extracting structured data. Python type hints serve as
the contract between your code and the LLM.
### 3.1. Basic Types
FunctAI handles standard Python types (`int`, `float`, `bool`, `str`,
`list`, `dict`).
``` python
@ai
def calculate(expression) -> int:
"""Evaluate the mathematical expression."""
return _ai
result = calculate("What is 15 times 23?")
print(result)
```
345
``` python
@ai
def get_keywords(article) -> list[str]:
"""Extract 5 key terms from the article."""
keywords: list[str] = _ai
# Post-processing example: ensure lowercase
return [k.lower() for k in keywords]
get_keywords("FunctAI excels at extracting structured data. Python type hints serve as the contract between your code and the LLM.")
```
['functai', 'structured data', 'python type hints', 'contract', 'llm']
### 3.2. Dataclasses and Complex Structures
For complex data extraction, define a `dataclass` (or Pydantic model)
and use it as the return type.
``` python
from dataclasses import dataclass
from typing import List
@dataclass
class ProductInfo:
name: str
price: float
features: List[str]
in_stock: bool
@ai
def extract_product(description: str) -> ProductInfo:
"""Extract product information from the description."""
return _ai
info = extract_product("iPhone 15 Pro - $999, 5G, titanium design, available now")
# The output is a validated ProductInfo instance
print(info)
```
ProductInfo(name='iPhone 15 Pro', price=999.0, features=['5G', 'titanium design'], in_stock=True)
### 3.3. Restricted Choices
Use `Enum` to restrict the LM’s output to a predefined set of values,
increasing reliability for classification tasks.
``` python
from enum import Enum
class TicketPriority(Enum):
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
@ai
def classify_priority(issue_description: str) -> TicketPriority:
"""Analyzes the issue and classifies its priority level."""
return _ai
result = classify_priority(issue_description="The main database is unresponsive.")
result
```
<TicketPriority.HIGH: 'high'>
You can also use typing `Literal`:
``` python
from typing import Literal
@ai
def categorize(text: str) -> Literal["sport", "fashion"]:
...
categorize("Vibe Coding as been declared a sport")
```
'sport'
## 4. Configuration and Flexibility
### 4.1. The Configuration Cascade
FunctAI uses a flexible, cascading configuration system. Settings are
applied in the following order of precedence (highest to lowest):
1. **Function-Level** (e.g., `@ai(temperature=0.1)`)
2. **Contextual** (e.g., `with configure(temperature=0.1):`)
3. **Global** (e.g., `configure(temperature=0.1)`)
### 4.2. Global Configuration
Use `functai.configure()` for project-wide defaults.
``` python
import functai
# Example using GPT-3.5-Turbo as a default
functai.configure(
lm="groq/openai/gpt-oss-120b",
temperature=0.5
)
@ai
def math(formal):
...
math("2+2")
```
2025/08/30 13:32:38 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.
4
### 4.3. Per-Function Configuration
Override defaults for specific functions directly in the decorator. This
is useful when different tasks require different models or creativity
levels.
``` python
# Deterministic task requiring a powerful model
@ai(temperature=0.0, lm="openai/gpt-4o")
def legal_analysis(document):
"""Provide precise legal analysis of the document."""
return _ai
# Creative task using a different provider
@ai(temperature=0.9, lm="anthropic/claude-sonnet-4-20250514")
def creative_story(prompt):
"""Write a creative story based on the prompt."""
return _ai
```
### 4.4. Contextual Overrides
Use the `functai.configure()` context manager to temporarily override
defaults for a block of code.
``` python
from functai import configure
@ai
def analyze(data): return _ai
analyze("data1") # Uses global defaults
# Temporarily switch model and temperature
with configure(temperature=0.0, lm="gpt-4o"):
print(analyze("data2")) # Uses GPT-4.1, Temp 0.0
analyze("data3") # Back to global defaults
```
2025/08/30 13:32:43 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.
2025/08/30 13:32:49 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.
data2
'data3'
## 5. Advanced Execution Strategies
FunctAI truly shines by allowing developers to define complex execution
strategies directly within the function body, adhering to the “function
body is the program definition” philosophy.
### 5.1. Chain of Thought (CoT) Reasoning
Eliciting step-by-step reasoning (Chain of Thought) often significantly
improves the quality and accuracy of the final answer, especially for
complex tasks.
In FunctAI, you define CoT by declaring intermediate reasoning steps
within the function body using `_ai` assignments with descriptions.
``` python
@ai
def solve_math_problem(question: str) -> float:
"""Solves a math word problem and returns the numerical answer."""
# Define the reasoning step:
# 1. The variable name ('reasoning') becomes the field name.
# 2. The type hint (str) defines the output type for this step.
# 3. The subscript _ai["..."] provides specific instructions for the LLM.
reasoning: str = _ai["Step-by-step thinking process to reach the solution."]
# The final return value (float) is the main output
return _ai
```
**Behavior:** FunctAI analyzes the function body, detects the
intermediate `reasoning` assignment, and automatically configures the
execution to generate the reasoning *before* attempting to generate the
final result.
*(Note: You can also enable a generic CoT by setting
`@ai(module="cot")`, but the explicit definition above offers more
control.)*
### 5.2. Accessing Intermediate Steps
While the function call normally returns only the final result, you can
access the intermediate steps (like `reasoning`) by adding the special
argument `all=True` to the function call.
This returns the raw prediction object containing all generated fields.
``` python
question = "If a train travels 120 miles in 2 hours, what is its speed?"
prediction = solve_math_problem(question, all=True)
print("--- Reasoning ---")
print(prediction.reasoning)
print("\n--- Answer ---")
# When _ai is returned directly, the main output is stored in the 'result' attribute
print(prediction.result)
```
--- Reasoning ---
Speed is calculated by dividing distance by time. The train travels 120 miles in 2 hours, so speed = 120 miles / 2 hours = 60 miles per hour.
--- Answer ---
60.0
### 5.3. Multiple Explicit Outputs
You can define and return multiple distinct outputs from a single
function call by declaring them inline, similar to the CoT pattern.
``` python
from typing import Tuple
@ai()
def critique_and_improve(text: str) -> Tuple[str, str, int]:
"""
Analyze the text, provide constructive criticism, and suggest an improved version.
"""
# Define explicit output fields using _ai[...]
critique: str = _ai["Constructive criticism focusing on clarity and tone."]
improved_text: str = _ai["The improved version of the text."]
# Return the materialized fields (Python handles the Tuple structure)
return critique, improved_text, 1.0
critique, improved, number = critique_and_improve(text="U should fix this asap, it's broken.")
print("--- Critique ---")
print(critique)
print("--- Improved ---")
print(improved)
```
--- Critique ---
The original message is overly informal and uses shorthand ('U') and vague urgency ('asap') that may come across as unprofessional. It lacks specific details about what is broken, which can make it harder for the recipient to address the issue efficiently. A clearer, more courteous tone with a brief description of the problem would improve communication.
--- Improved ---
Please address this issue as soon as possible; the current functionality appears to be broken.
### 5.4. Tool Usage (ReAct Agents)
FunctAI supports the ReAct (Reasoning + Acting) pattern for creating
agents that can interact with external tools. Tools are standard, typed
Python functions.
When the `tools` argument is provided to the `@ai` decorator, the
execution strategy automatically upgrades to an agentic loop (using
`dspy.ReAct`).
``` python
# 1. Define tools
def search_web(query: str) -> str:
"""Searches the web for information. (Mock implementation)"""
print(f"[Tool executing: Searching for '{query}']")
# In a real scenario, this would call a search API
return f"Mock search results for {query}."
def calculate(expression: str) -> float:
"""Performs mathematical calculations. (Mock implementation)"""
print(f"[Tool executing: Calculating '{expression}']")
# WARNING: eval() is unsafe in production. Use a safe math library.
return eval(expression)
# 2. Define the AI function with access to the tools
@ai(tools=[search_web, calculate])
def research_assistant(question: str) -> str:
"""Answer questions using available tools to gather data and perform calculations."""
return _ai
# 3. Execute the agent
# The AI will potentially use search_web and then calculate.
answer = research_assistant("What is the result of (15 * 23) + 10?")
```
[Tool executing: Calculating '(15 * 23) + 10']
2025/08/30 13:32:58 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.
``` python
import functai
#| echo: false
print(functai.phistory())
```
[2025-08-30T13:32:58.957207]
System message:
Your input fields are:
1. `question` (str):
2. `trajectory` (str):
Your output fields are:
1. `reasoning` (str):
2. `result` (str):
All interactions will be structured in the following way, with the appropriate values filled in.
Inputs will have the following structure:
[[ ## question ## ]]
{question}
[[ ## trajectory ## ]]
{trajectory}
Outputs will be a JSON object with the following fields.
{
"reasoning": "{reasoning}",
"result": "{result}"
}
In adhering to this structure, your objective is:
Function: research_assistant
Answer questions using available tools to gather data and perform calculations.
User message:
[[ ## question ## ]]
What is the result of (15 * 23) + 10?
[[ ## trajectory ## ]]
[[ ## thought_0 ## ]]
I need to compute the arithmetic expression (15 * 23) + 10.
[[ ## tool_name_0 ## ]]
calculate
[[ ## tool_args_0 ## ]]
{"expression": "(15 * 23) + 10"}
[[ ## observation_0 ## ]]
355
[[ ## thought_1 ## ]]
The calculation is complete; the result of (15 * 23) + 10 is 355.
[[ ## tool_name_1 ## ]]
finish
[[ ## tool_args_1 ## ]]
{}
[[ ## observation_1 ## ]]
Completed.
Respond with a JSON object in the following order of fields: `reasoning`, then `result`.
Response:
{"reasoning":"I used the calculate tool to evaluate the expression (15 * 23) + 10, which gave 355. No further steps are needed.","result":"355"}
**Behavior:** The function will iteratively think about the task, decide
which tool to use, execute the tool, observe the results, and repeat
until it can provide the final answer.
## 6. Stateful Interactions (Memory)
By default, `@ai` functions are stateless; each call is independent. To
maintain context across calls (e.g., in a chatbot scenario), set
`stateful=True`.
``` python
from functai import ai, _ai
@ai(lm="gpt-4.1", stateful=True)
def assistant(message):
"""A friendly AI assistant that remembers the conversation history."""
return _ai
response1 = assistant("Hello, my name is Alex.")
print(f"Output1: {response1}")
response2 = assistant("What is my name?")
print(f"Output2: {response2}")
```
Output1: Hello Alex! It's nice to meet you. How can I assist you today?
Output2: Your name is Alex.
[2025-08-30T13:32:59.675591]
System message:
Your input fields are:
1. `message` (str):
2. `history` (History):
Your output fields are:
1. `result` (Any):
All interactions will be structured in the following way, with the appropriate values filled in.
Inputs will have the following structure:
[[ ## message ## ]]
{message}
[[ ## history ## ]]
{history}
Outputs will be a JSON object with the following fields.
{
"result": "{result} # note: the value you produce must adhere to the JSON schema: {}"
}
In adhering to this structure, your objective is:
Function: assistant
A friendly AI assistant that remembers the conversation history.
User message:
[[ ## message ## ]]
Hello, my name is Alex.
Assistant message:
{
"result": "Hello Alex! It's nice to meet you. How can I assist you today?"
}
User message:
[[ ## message ## ]]
What is my name?
Respond with a JSON object in the following order of fields: `result` (must be formatted as a valid Python Any).
Response:
{
"result": "Your name is Alex."
}
**Behavior:** When `stateful=True`, FunctAI automatically includes the
history of previous inputs and outputs in the context of the next call.
## 7. Optimization (In-place Compilation)
**WARNING: optimization is still a work in progress**
FunctAI integrates seamlessly with DSPy’s optimization capabilities
(Teleprompters). Optimization (often called compilation in DSPy)
improves the quality and reliability of your AI functions by using a
dataset of examples.
The optimizer can automatically generate effective few-shot examples or
refine instructions. This happens *in place* using the `.opt()` method
on the function object.
### 7.1. The Optimization Workflow
``` python
import dspy
from dspy import Example
from functai import ai, _ai
dspy.configure(lm = dspy.LM("gpt-4.1"))
# 1. Define the function
@ai
def classify_intent(user_query: str) -> str:
"""Classify user intent as 'booking', 'cancelation', or 'information'."""
return _ai
# 2. Define the training data (List of DSPy Examples)
# .with_inputs() specifies which keys are inputs to the function
trainset = [
Example(user_query="I need to reserve a room.", result="booking").with_inputs("user_query"),
Example(user_query="How do I get there?", result="information").with_inputs("user_query"),
Example(user_query="I want to cancel my reservation.", result="cancelation").with_inputs("user_query"),
]
# 3. Optimize the function in place
# strategy="launch" typically uses a default like BootstrapFewShot
print("Optimizing...")
classify_intent.opt(trainset=trainset)
print("Optimization complete.")
# 4. The function is now optimized (it includes generated few-shot examples in its prompt)
result = classify_intent("Can I book a suite for next Tuesday?")
# Output: "booking"
```
Optimizing...
0%| | 0/3 [00:00<?, ?it/s]100%|██████████| 3/3 [00:00<00:00, 36.98it/s]
Bootstrapped 3 full traces after 2 examples for up to 1 rounds, amounting to 3 attempts.
Optimization complete.
2025/08/30 13:33:04 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.
### 7.2. Reverting Optimization
FunctAI tracks optimization steps. If the results are not satisfactory,
you can revert using `.undo_opt()`.
``` python
# Revert the last optimization step
classify_intent.undo_opt(steps=1)
```
## 8. Inspection and Debugging
**More to come**
To see the last call, use `functai.phistory()`.
``` python
from functai import phistory
# After running some AI functions...
print(phistory()) # Show the last call
```
[2025-08-30T13:33:06.742620]
System message:
Your input fields are:
1. `user_query` (str):
Your output fields are:
1. `result` (str):
All interactions will be structured in the following way, with the appropriate values filled in.
Inputs will have the following structure:
[[ ## user_query ## ]]
{user_query}
Outputs will be a JSON object with the following fields.
{
"result": "{result}"
}
In adhering to this structure, your objective is:
Function: classify_intent
Classify user intent as 'booking', 'cancelation', or 'information'.
User message:
[[ ## user_query ## ]]
I need to reserve a room.
Assistant message:
{
"result": "booking"
}
User message:
[[ ## user_query ## ]]
How do I get there?
Assistant message:
{
"result": "information"
}
User message:
[[ ## user_query ## ]]
I want to cancel my reservation.
Assistant message:
{
"result": "cancelation"
}
User message:
[[ ## user_query ## ]]
Can I book a suite for next Tuesday?
Respond with a JSON object in the following order of fields: `result`.
Response:
{
"result": "booking"
}
## 10. Real-World Examples
### 10.1. Data Extraction Pipeline
This example demonstrates chaining multiple AI functions to process
unstructured data reliably.
``` python
from dataclasses import dataclass
from typing import List
from functai import ai, _ai, configure
# Ensure deterministic extraction
configure(lm = "gpt-4.1", temperature=0.0, adapter="json")
# 1. Define the target structure
@dataclass
class Invoice:
invoice_number: str
vendor_name: str
total: float
items: List[str]
# 2. Define the extraction function with CoT for accuracy
@ai
def extract_invoice(document_text: str) -> Invoice:
"""Extract invoice information from the document text.
Parse all relevant fields accurately. Convert amounts to float.
"""
thought_process: str = _ai["Analyze the document layout and identify the location of each field before extracting."]
return _ai
# 3. Define a validation function
@ai
def validate_invoice(invoice: Invoice) -> bool:
"""Validate if the invoice data is complete and reasonable.
Check if the total is positive and required fields are present.
"""
return _ai
# 4. Define a summarization function
@ai
def summarize_invoice(invoice: Invoice) -> str:
"""Create a brief, human-readable summary of the invoice."""
return _ai
# 5. Execute the pipeline
document = """
INVOICE
Vendor: TechCorp Inc.
Invoice #: INV-2025-101
Items: 5x Laptops, 2x Monitors
Total: $5600.00
"""
invoice = extract_invoice(document)
if validate_invoice(invoice):
summary = summarize_invoice(invoice)
print("Invoice Validated Successfully!")
print(summary)
else:
print("Invoice Validation Failed.")
```
Invoice Validated Successfully!
Invoice INV-2025-101 from TechCorp Inc. totals $5,600.00 and includes 5 laptops and 2 monitors.
### 10.2. Research Assistant Agent
This example builds a sophisticated agent using tools and structured
internal outputs.
``` python
from functai import ai, _ai, configure
configure(lm = "gpt-4.1")
# Define Tools (Placeholders)
def search_web(query: str) -> str:
"""Search the web for information."""
print(f"[Searching: {query}]")
return f"Mock search results for {query}."
def read_paper(paper_id: str) -> str:
"""Read the content of a specific research paper."""
print(f"[Reading: {paper_id}]")
return f"Mock content of paper {paper_id}."
# Define the Agent
@ai(tools=[search_web, read_paper])
def research_assistant(query: str) -> str:
"""Advanced research assistant.
Use available tools to gather information. Synthesize findings.
"""
# Define intermediate outputs for better structure and inspection
research_notes: list[str] = _ai["Key findings gathered during the ReAct process."]
confidence: str = _ai["Confidence level: high/medium/low."]
sources: list[str] = _ai["Sources consulted during the ReAct process."]
answer = _ai
# Post-processing: Add metadata to the final response
return f"{answer}\n\nConfidence: {confidence}\nSources: {', '.join(sources)}"
# Execution
research_assistant("What are the latest breakthroughs in quantum computing?")
```
[Searching: latest breakthroughs in quantum computing 2024]
"The latest breakthroughs in quantum computing as of 2024 include IBM's unveiling of the 1,121-qubit 'Condor' processor, which represents a major step forward in hardware scalability. Researchers at Google, Microsoft, and Quantinuum have achieved significant progress in quantum error correction, with logical qubits now outperforming physical qubits in some cases. Advances in quantum networking have enabled entanglement distribution over longer distances, moving closer to a functional quantum internet. Additionally, new quantum algorithms and hybrid approaches are being developed for practical applications in chemistry, optimization, and machine learning. Commercial access to quantum computing continues to grow, with more cloud-based services available to researchers and businesses.\n\nConfidence: high\nSources: https://www.ibm.com/blog/quantum-condor-1121-qubit-processor/, https://www.nature.com/articles/d41586-024-00000-0, https://www.quantamagazine.org/quantum-error-correction-breakthroughs-2024-20240110/, https://www.microsoft.com/en-us/research/blog/advances-in-fault-tolerant-quantum-computing-2024/, https://www.scientificamerican.com/article/quantum-internet-milestones-2024/"
Raw data
{
"_id": null,
"home_page": null,
"name": "functai",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "ai, artificial-intelligence, decorators, dspy, function-decorators, llm, machine-learning, prompt-engineering",
"author": null,
"author_email": "Maxime Rivest <maxime.rivest@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/60/2a/924c4d30192364a1347ca35a90c32386b4ad6d262d40198fac02349fd98c/functai-0.12.0.tar.gz",
"platform": null,
"description": "# FunctAI Specification: The Function-is-the-Prompt Paradigm\n\n\n- [FunctAI Specification: The Function-is-the-Prompt Paradigm](#functai-specification-the-function-is-the-prompt-paradigm)\n- [FunctAI: The Function-is-the-Prompt Paradigm](#functai-the-function-is-the-prompt-paradigm)\n - [1. Getting Started](#1-getting-started)\n - [1.1. Installation](#11-installation)\n - [1.2. Configuration](#12-configuration)\n - [1.3. Your First AI Function](#13-your-first-ai-function)\n - [2. Core Concepts](#2-core-concepts)\n - [2.1. The `@ai` Decorator](#21-the-ai-decorator)\n - [2.2. The `_ai` Sentinel](#22-the-_ai-sentinel)\n - [2.3. Post-processing and Validation](#23-post-processing-and-validation)\n - [3. Structured Output and Type System](#3-structured-output-and-type-system)\n - [3.1. Basic Types](#31-basic-types)\n - [3.2. Dataclasses and Complex Structures](#32-dataclasses-and-complex-structures)\n - [3.3. Restricted Choices](#33-restricted-choices)\n - [4. Configuration and Flexibility](#4-configuration-and-flexibility)\n - [4.1. The Configuration Cascade](#41-the-configuration-cascade)\n - [4.2. Global Configuration](#42-global-configuration)\n - [4.3. Per-Function Configuration](#43-per-function-configuration)\n - [4.4. Contextual Overrides](#44-contextual-overrides)\n - [5. Advanced Execution Strategies](#5-advanced-execution-strategies)\n - [5.1. Chain of Thought (CoT) Reasoning](#51-chain-of-thought-cot-reasoning)\n - [5.2. Accessing Intermediate Steps](#52-accessing-intermediate-steps)\n - [5.3. Multiple Explicit Outputs](#53-multiple-explicit-outputs)\n - [5.4. Tool Usage (ReAct Agents)](#54-tool-usage-react-agents)\n - [6. Stateful Interactions (Memory)](#6-stateful-interactions-memory)\n - [7. Optimization (In-place Compilation)](#7-optimization-in-place-compilation)\n - [7.1. The Optimization Workflow](#71-the-optimization-workflow)\n - [7.2. Reverting Optimization](#72-reverting-optimization)\n - [8. Inspection and Debugging](#8-inspection-and-debugging)\n - [10. Real-World Examples](#10-real-world-examples)\n - [10.1. Data Extraction Pipeline](#101-data-extraction-pipeline)\n - [10.2. Research Assistant Agent](#102-research-assistant-agent)\n\n# FunctAI: The Function-is-the-Prompt Paradigm\n\nWelcome to FunctAI. This library reimagines how Python developers\nintegrate Large Language Models (LLMs) into their applications. FunctAI\nallows you to treat AI models as reliable, typed Python functions,\nabstracting away the complexities of prompt engineering and output\nparsing.\n\nThe core philosophy of FunctAI is simple yet powerful:\n\n> **The function definition *is* the prompt, and the function body *is*\n> the program definition.**\n\nBy leveraging Python\u2019s native features\u2014docstrings for instructions, type\nhints for structure, and variable assignments for logic flow\u2014you can\ndefine sophisticated AI behaviors with minimal boilerplate.\n\nFunctAI is built on the powerful\n[DSPy](https://github.com/stanfordnlp/dspy) framework, unlocking\nadvanced strategies like Chain-of-Thought, automatic optimization, and\nagentic tool usage through an ergonomic, decorator-based API.\n\n------------------------------------------------------------------------\n\n## 1. Getting Started\n\n### 1.1. Installation\n\nInstall FunctAI and its core dependency, DSPy.\n\n``` bash\npip install functai\n```\n\n### 1.2. Configuration\n\nBefore using FunctAI, you must configure a default Language Model (LM).\nThis requires initializing a DSPy LM provider.\n\n``` python\nfrom functai import configure\n\n# Configure FunctAI globally\nconfigure(lm=\"gpt-4.1\", temperature=1.0, api_key=\"<YOUR_API_KEY>\")\n```\n\n### 1.3. Your First AI Function\n\nCreating an AI function is as simple as defining a standard Python\nfunction with type hints and a docstring, then decorating it with `@ai`.\n\n``` python\nfrom functai import ai, _ai\n\n@ai\ndef summarize(text: str, focus: str = \"key points\") -> str:\n \"\"\"Summarize the text in one concise sentence,\n concentrating on the specified focus area.\"\"\"\n # The _ai sentinel represents the LLM output\n return _ai\n\n# Call it exactly like a normal Python function\nlong_text = \"FunctAI bridges the gap between Python's expressive syntax and the dynamic capabilities of LLMs. It allows developers to focus on logic rather than boilerplate.\"\nsummarize(long_text, focus=\"developer benefits\")\n```\n\n \"FunctAI enables developers to concentrate on core logic by reducing boilerplate and leveraging Python's syntax with LLM capabilities.\"\n\n**What happens when you call `summarize`?**\n\n1. FunctAI intercepts the call.\n2. It constructs a prompt using the docstring and the inputs (`text`,\n `focus`).\n3. It invokes the configured LM (GPT-4.1).\n4. It parses the LM\u2019s response and returns the result, ensuring it\n matches the return type (`str`).\n\n## 2. Core Concepts\n\n### 2.1. The `@ai` Decorator\n\nThe `@ai` decorator is the magic wand. It transforms a Python function\ninto an LLM-powered program. It analyzes the function\u2019s signature\n(parameters, return type, and docstring) to understand the task\nrequirements and a prompt is automatically constructed out of that for\nyou.\n\n``` python\n@ai\ndef sentiment(text: str) -> str:\n \"\"\"Analyze the sentiment of the given text.\n Return 'positive', 'negative', or 'neutral'.\"\"\"\n ... # An empty body or Ellipsis also works like returning _ai\n```\n\n### 2.2. The `_ai` Sentinel\n\nThe `_ai` object is a special sentinel used within an `@ai` function. It\nrepresents the value(s) that will be generated by the AI. It acts as a\nproxy, deferring the actual LM execution.\n\nReturning `_ai` directly indicates that the LLM\u2019s output is the\nfunction\u2019s final result.\n\n``` python\n@ai\ndef extract_price(description: str) -> float:\n \"\"\"Extract the price from a product description.\"\"\"\n return _ai\n```\n\n### 2.3. Post-processing and Validation\n\nFunctAI encourages writing robust code. You can assign `_ai` to a\nvariable and apply standard Python operations before returning. `_ai`\nbehaves dynamically as if it were the expected return type.\n\nThis allows you to combine the power of AI with the reliability of code\nfor validation, cleaning, or transformation.\n\n``` python\n@ai\ndef sentiment_score(text: str) -> float:\n \"\"\"Returns a sentiment score between 0.0 (negative) and 1.0 (positive).\"\"\"\n\n # _ai behaves like a float here due to the return type hint\n score = _ai\n\n # Post-processing: ensure the score is strictly within bounds\n return max(0.0, min(1.0, float(score)))\n\nsentiment_score(\"I think that FunctAI is amazing!\")\n```\n\n 0.95\n\n## 3. Structured Output and Type System\n\nFunctAI excels at extracting structured data. Python type hints serve as\nthe contract between your code and the LLM.\n\n### 3.1. Basic Types\n\nFunctAI handles standard Python types (`int`, `float`, `bool`, `str`,\n`list`, `dict`).\n\n``` python\n@ai\ndef calculate(expression) -> int:\n \"\"\"Evaluate the mathematical expression.\"\"\"\n return _ai\n\nresult = calculate(\"What is 15 times 23?\")\nprint(result)\n```\n\n 345\n\n``` python\n@ai\ndef get_keywords(article) -> list[str]:\n \"\"\"Extract 5 key terms from the article.\"\"\"\n keywords: list[str] = _ai\n # Post-processing example: ensure lowercase\n return [k.lower() for k in keywords]\n\nget_keywords(\"FunctAI excels at extracting structured data. Python type hints serve as the contract between your code and the LLM.\")\n```\n\n ['functai', 'structured data', 'python type hints', 'contract', 'llm']\n\n### 3.2. Dataclasses and Complex Structures\n\nFor complex data extraction, define a `dataclass` (or Pydantic model)\nand use it as the return type.\n\n``` python\nfrom dataclasses import dataclass\nfrom typing import List\n\n@dataclass\nclass ProductInfo:\n name: str\n price: float\n features: List[str]\n in_stock: bool\n\n@ai\ndef extract_product(description: str) -> ProductInfo:\n \"\"\"Extract product information from the description.\"\"\"\n return _ai\n\ninfo = extract_product(\"iPhone 15 Pro - $999, 5G, titanium design, available now\")\n\n# The output is a validated ProductInfo instance\nprint(info)\n```\n\n ProductInfo(name='iPhone 15 Pro', price=999.0, features=['5G', 'titanium design'], in_stock=True)\n\n### 3.3. Restricted Choices\n\nUse `Enum` to restrict the LM\u2019s output to a predefined set of values,\nincreasing reliability for classification tasks.\n\n``` python\nfrom enum import Enum\n\nclass TicketPriority(Enum):\n LOW = \"low\"\n MEDIUM = \"medium\"\n HIGH = \"high\"\n\n@ai\ndef classify_priority(issue_description: str) -> TicketPriority:\n \"\"\"Analyzes the issue and classifies its priority level.\"\"\"\n return _ai\n\nresult = classify_priority(issue_description=\"The main database is unresponsive.\")\nresult\n```\n\n <TicketPriority.HIGH: 'high'>\n\nYou can also use typing `Literal`:\n\n``` python\nfrom typing import Literal\n\n@ai\ndef categorize(text: str) -> Literal[\"sport\", \"fashion\"]:\n ...\n\ncategorize(\"Vibe Coding as been declared a sport\")\n```\n\n 'sport'\n\n## 4. Configuration and Flexibility\n\n### 4.1. The Configuration Cascade\n\nFunctAI uses a flexible, cascading configuration system. Settings are\napplied in the following order of precedence (highest to lowest):\n\n1. **Function-Level** (e.g., `@ai(temperature=0.1)`)\n2. **Contextual** (e.g., `with configure(temperature=0.1):`)\n3. **Global** (e.g., `configure(temperature=0.1)`)\n\n### 4.2. Global Configuration\n\nUse `functai.configure()` for project-wide defaults.\n\n``` python\nimport functai\n\n# Example using GPT-3.5-Turbo as a default\nfunctai.configure(\n lm=\"groq/openai/gpt-oss-120b\",\n temperature=0.5\n)\n\n@ai\ndef math(formal):\n ...\n\nmath(\"2+2\")\n```\n\n 2025/08/30 13:32:38 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.\n\n 4\n\n### 4.3. Per-Function Configuration\n\nOverride defaults for specific functions directly in the decorator. This\nis useful when different tasks require different models or creativity\nlevels.\n\n``` python\n# Deterministic task requiring a powerful model\n@ai(temperature=0.0, lm=\"openai/gpt-4o\")\ndef legal_analysis(document):\n \"\"\"Provide precise legal analysis of the document.\"\"\"\n return _ai\n\n# Creative task using a different provider\n@ai(temperature=0.9, lm=\"anthropic/claude-sonnet-4-20250514\")\ndef creative_story(prompt):\n \"\"\"Write a creative story based on the prompt.\"\"\"\n return _ai\n```\n\n### 4.4. Contextual Overrides\n\nUse the `functai.configure()` context manager to temporarily override\ndefaults for a block of code.\n\n``` python\nfrom functai import configure\n\n@ai\ndef analyze(data): return _ai\n\nanalyze(\"data1\") # Uses global defaults\n\n# Temporarily switch model and temperature\nwith configure(temperature=0.0, lm=\"gpt-4o\"):\n print(analyze(\"data2\")) # Uses GPT-4.1, Temp 0.0\n\nanalyze(\"data3\") # Back to global defaults\n```\n\n 2025/08/30 13:32:43 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.\n 2025/08/30 13:32:49 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.\n\n data2\n\n 'data3'\n\n## 5. Advanced Execution Strategies\n\nFunctAI truly shines by allowing developers to define complex execution\nstrategies directly within the function body, adhering to the \u201cfunction\nbody is the program definition\u201d philosophy.\n\n### 5.1. Chain of Thought (CoT) Reasoning\n\nEliciting step-by-step reasoning (Chain of Thought) often significantly\nimproves the quality and accuracy of the final answer, especially for\ncomplex tasks.\n\nIn FunctAI, you define CoT by declaring intermediate reasoning steps\nwithin the function body using `_ai` assignments with descriptions.\n\n``` python\n@ai\ndef solve_math_problem(question: str) -> float:\n \"\"\"Solves a math word problem and returns the numerical answer.\"\"\"\n\n # Define the reasoning step:\n # 1. The variable name ('reasoning') becomes the field name.\n # 2. The type hint (str) defines the output type for this step.\n # 3. The subscript _ai[\"...\"] provides specific instructions for the LLM.\n reasoning: str = _ai[\"Step-by-step thinking process to reach the solution.\"]\n\n # The final return value (float) is the main output\n return _ai\n```\n\n**Behavior:** FunctAI analyzes the function body, detects the\nintermediate `reasoning` assignment, and automatically configures the\nexecution to generate the reasoning *before* attempting to generate the\nfinal result.\n\n*(Note: You can also enable a generic CoT by setting\n`@ai(module=\"cot\")`, but the explicit definition above offers more\ncontrol.)*\n\n### 5.2. Accessing Intermediate Steps\n\nWhile the function call normally returns only the final result, you can\naccess the intermediate steps (like `reasoning`) by adding the special\nargument `all=True` to the function call.\n\nThis returns the raw prediction object containing all generated fields.\n\n``` python\nquestion = \"If a train travels 120 miles in 2 hours, what is its speed?\"\nprediction = solve_math_problem(question, all=True)\n\nprint(\"--- Reasoning ---\")\nprint(prediction.reasoning)\n\nprint(\"\\n--- Answer ---\")\n# When _ai is returned directly, the main output is stored in the 'result' attribute\nprint(prediction.result)\n```\n\n --- Reasoning ---\n Speed is calculated by dividing distance by time. The train travels 120 miles in 2 hours, so speed = 120 miles / 2 hours = 60 miles per hour.\n\n --- Answer ---\n 60.0\n\n### 5.3. Multiple Explicit Outputs\n\nYou can define and return multiple distinct outputs from a single\nfunction call by declaring them inline, similar to the CoT pattern.\n\n``` python\nfrom typing import Tuple\n\n@ai()\ndef critique_and_improve(text: str) -> Tuple[str, str, int]:\n \"\"\"\n Analyze the text, provide constructive criticism, and suggest an improved version.\n \"\"\"\n # Define explicit output fields using _ai[...]\n critique: str = _ai[\"Constructive criticism focusing on clarity and tone.\"]\n improved_text: str = _ai[\"The improved version of the text.\"]\n\n # Return the materialized fields (Python handles the Tuple structure)\n return critique, improved_text, 1.0\n\ncritique, improved, number = critique_and_improve(text=\"U should fix this asap, it's broken.\")\n\n\nprint(\"--- Critique ---\")\nprint(critique)\nprint(\"--- Improved ---\")\nprint(improved)\n```\n\n --- Critique ---\n The original message is overly informal and uses shorthand ('U') and vague urgency ('asap') that may come across as unprofessional. It lacks specific details about what is broken, which can make it harder for the recipient to address the issue efficiently. A clearer, more courteous tone with a brief description of the problem would improve communication.\n --- Improved ---\n Please address this issue as soon as possible; the current functionality appears to be broken.\n\n### 5.4. Tool Usage (ReAct Agents)\n\nFunctAI supports the ReAct (Reasoning + Acting) pattern for creating\nagents that can interact with external tools. Tools are standard, typed\nPython functions.\n\nWhen the `tools` argument is provided to the `@ai` decorator, the\nexecution strategy automatically upgrades to an agentic loop (using\n`dspy.ReAct`).\n\n``` python\n# 1. Define tools\ndef search_web(query: str) -> str:\n \"\"\"Searches the web for information. (Mock implementation)\"\"\"\n print(f\"[Tool executing: Searching for '{query}']\")\n # In a real scenario, this would call a search API\n return f\"Mock search results for {query}.\"\n\ndef calculate(expression: str) -> float:\n \"\"\"Performs mathematical calculations. (Mock implementation)\"\"\"\n print(f\"[Tool executing: Calculating '{expression}']\")\n # WARNING: eval() is unsafe in production. Use a safe math library.\n return eval(expression)\n\n# 2. Define the AI function with access to the tools\n@ai(tools=[search_web, calculate])\ndef research_assistant(question: str) -> str:\n \"\"\"Answer questions using available tools to gather data and perform calculations.\"\"\"\n return _ai\n\n# 3. Execute the agent\n# The AI will potentially use search_web and then calculate.\nanswer = research_assistant(\"What is the result of (15 * 23) + 10?\")\n```\n\n [Tool executing: Calculating '(15 * 23) + 10']\n\n 2025/08/30 13:32:58 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.\n\n``` python\nimport functai\n#| echo: false\nprint(functai.phistory())\n```\n\n\n\n\n\n [2025-08-30T13:32:58.957207]\n\n System message:\n\n Your input fields are:\n 1. `question` (str): \n 2. `trajectory` (str):\n Your output fields are:\n 1. `reasoning` (str): \n 2. `result` (str):\n All interactions will be structured in the following way, with the appropriate values filled in.\n\n Inputs will have the following structure:\n\n [[ ## question ## ]]\n {question}\n\n [[ ## trajectory ## ]]\n {trajectory}\n\n Outputs will be a JSON object with the following fields.\n\n {\n \"reasoning\": \"{reasoning}\",\n \"result\": \"{result}\"\n }\n In adhering to this structure, your objective is: \n Function: research_assistant\n \n Answer questions using available tools to gather data and perform calculations.\n\n\n User message:\n\n [[ ## question ## ]]\n What is the result of (15 * 23) + 10?\n\n [[ ## trajectory ## ]]\n [[ ## thought_0 ## ]]\n I need to compute the arithmetic expression (15 * 23) + 10.\n\n [[ ## tool_name_0 ## ]]\n calculate\n\n [[ ## tool_args_0 ## ]]\n {\"expression\": \"(15 * 23) + 10\"}\n\n [[ ## observation_0 ## ]]\n 355\n\n [[ ## thought_1 ## ]]\n The calculation is complete; the result of (15 * 23) + 10 is 355.\n\n [[ ## tool_name_1 ## ]]\n finish\n\n [[ ## tool_args_1 ## ]]\n {}\n\n [[ ## observation_1 ## ]]\n Completed.\n\n Respond with a JSON object in the following order of fields: `reasoning`, then `result`.\n\n\n Response:\n\n {\"reasoning\":\"I used the calculate tool to evaluate the expression (15 * 23) + 10, which gave 355. No further steps are needed.\",\"result\":\"355\"}\n\n\n\n\n**Behavior:** The function will iteratively think about the task, decide\nwhich tool to use, execute the tool, observe the results, and repeat\nuntil it can provide the final answer.\n\n## 6. Stateful Interactions (Memory)\n\nBy default, `@ai` functions are stateless; each call is independent. To\nmaintain context across calls (e.g., in a chatbot scenario), set\n`stateful=True`.\n\n``` python\nfrom functai import ai, _ai\n\n@ai(lm=\"gpt-4.1\", stateful=True)\ndef assistant(message):\n \"\"\"A friendly AI assistant that remembers the conversation history.\"\"\"\n return _ai\n\nresponse1 = assistant(\"Hello, my name is Alex.\")\nprint(f\"Output1: {response1}\")\n\nresponse2 = assistant(\"What is my name?\")\nprint(f\"Output2: {response2}\")\n```\n\n Output1: Hello Alex! It's nice to meet you. How can I assist you today?\n Output2: Your name is Alex.\n\n\n\n\n\n [2025-08-30T13:32:59.675591]\n\n System message:\n\n Your input fields are:\n 1. `message` (str): \n 2. `history` (History):\n Your output fields are:\n 1. `result` (Any):\n All interactions will be structured in the following way, with the appropriate values filled in.\n\n Inputs will have the following structure:\n\n [[ ## message ## ]]\n {message}\n\n [[ ## history ## ]]\n {history}\n\n Outputs will be a JSON object with the following fields.\n\n {\n \"result\": \"{result} # note: the value you produce must adhere to the JSON schema: {}\"\n }\n In adhering to this structure, your objective is: \n Function: assistant\n \n A friendly AI assistant that remembers the conversation history.\n\n\n User message:\n\n [[ ## message ## ]]\n Hello, my name is Alex.\n\n\n Assistant message:\n\n {\n \"result\": \"Hello Alex! It's nice to meet you. How can I assist you today?\"\n }\n\n\n User message:\n\n [[ ## message ## ]]\n What is my name?\n\n Respond with a JSON object in the following order of fields: `result` (must be formatted as a valid Python Any).\n\n\n Response:\n\n {\n \"result\": \"Your name is Alex.\"\n }\n\n\n\n\n**Behavior:** When `stateful=True`, FunctAI automatically includes the\nhistory of previous inputs and outputs in the context of the next call.\n\n## 7. Optimization (In-place Compilation)\n\n**WARNING: optimization is still a work in progress**\n\nFunctAI integrates seamlessly with DSPy\u2019s optimization capabilities\n(Teleprompters). Optimization (often called compilation in DSPy)\nimproves the quality and reliability of your AI functions by using a\ndataset of examples.\n\nThe optimizer can automatically generate effective few-shot examples or\nrefine instructions. This happens *in place* using the `.opt()` method\non the function object.\n\n### 7.1. The Optimization Workflow\n\n``` python\nimport dspy\nfrom dspy import Example\nfrom functai import ai, _ai\n\ndspy.configure(lm = dspy.LM(\"gpt-4.1\"))\n\n# 1. Define the function\n@ai\ndef classify_intent(user_query: str) -> str:\n \"\"\"Classify user intent as 'booking', 'cancelation', or 'information'.\"\"\"\n return _ai\n\n# 2. Define the training data (List of DSPy Examples)\n# .with_inputs() specifies which keys are inputs to the function\ntrainset = [\n Example(user_query=\"I need to reserve a room.\", result=\"booking\").with_inputs(\"user_query\"),\n Example(user_query=\"How do I get there?\", result=\"information\").with_inputs(\"user_query\"),\n Example(user_query=\"I want to cancel my reservation.\", result=\"cancelation\").with_inputs(\"user_query\"),\n]\n\n# 3. Optimize the function in place\n# strategy=\"launch\" typically uses a default like BootstrapFewShot\nprint(\"Optimizing...\")\nclassify_intent.opt(trainset=trainset)\nprint(\"Optimization complete.\")\n\n# 4. The function is now optimized (it includes generated few-shot examples in its prompt)\nresult = classify_intent(\"Can I book a suite for next Tuesday?\")\n# Output: \"booking\"\n```\n\n Optimizing...\n\n 0%| | 0/3 [00:00<?, ?it/s]100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3/3 [00:00<00:00, 36.98it/s]\n\n Bootstrapped 3 full traces after 2 examples for up to 1 rounds, amounting to 3 attempts.\n Optimization complete.\n\n\n 2025/08/30 13:33:04 WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.\n\n### 7.2. Reverting Optimization\n\nFunctAI tracks optimization steps. If the results are not satisfactory,\nyou can revert using `.undo_opt()`.\n\n``` python\n# Revert the last optimization step\nclassify_intent.undo_opt(steps=1)\n```\n\n## 8. Inspection and Debugging\n\n**More to come**\n\nTo see the last call, use `functai.phistory()`.\n\n``` python\nfrom functai import phistory\n\n# After running some AI functions...\nprint(phistory()) # Show the last call\n```\n\n\n\n\n\n [2025-08-30T13:33:06.742620]\n\n System message:\n\n Your input fields are:\n 1. `user_query` (str):\n Your output fields are:\n 1. `result` (str):\n All interactions will be structured in the following way, with the appropriate values filled in.\n\n Inputs will have the following structure:\n\n [[ ## user_query ## ]]\n {user_query}\n\n Outputs will be a JSON object with the following fields.\n\n {\n \"result\": \"{result}\"\n }\n In adhering to this structure, your objective is: \n Function: classify_intent\n \n Classify user intent as 'booking', 'cancelation', or 'information'.\n\n\n User message:\n\n [[ ## user_query ## ]]\n I need to reserve a room.\n\n\n Assistant message:\n\n {\n \"result\": \"booking\"\n }\n\n\n User message:\n\n [[ ## user_query ## ]]\n How do I get there?\n\n\n Assistant message:\n\n {\n \"result\": \"information\"\n }\n\n\n User message:\n\n [[ ## user_query ## ]]\n I want to cancel my reservation.\n\n\n Assistant message:\n\n {\n \"result\": \"cancelation\"\n }\n\n\n User message:\n\n [[ ## user_query ## ]]\n Can I book a suite for next Tuesday?\n\n Respond with a JSON object in the following order of fields: `result`.\n\n\n Response:\n\n {\n \"result\": \"booking\"\n }\n\n\n\n\n## 10. Real-World Examples\n\n### 10.1. Data Extraction Pipeline\n\nThis example demonstrates chaining multiple AI functions to process\nunstructured data reliably.\n\n``` python\nfrom dataclasses import dataclass\nfrom typing import List\nfrom functai import ai, _ai, configure\n\n# Ensure deterministic extraction\nconfigure(lm = \"gpt-4.1\", temperature=0.0, adapter=\"json\")\n\n# 1. Define the target structure\n@dataclass\nclass Invoice:\n invoice_number: str\n vendor_name: str\n total: float\n items: List[str]\n\n# 2. Define the extraction function with CoT for accuracy\n@ai\ndef extract_invoice(document_text: str) -> Invoice:\n \"\"\"Extract invoice information from the document text.\n Parse all relevant fields accurately. Convert amounts to float.\n \"\"\"\n thought_process: str = _ai[\"Analyze the document layout and identify the location of each field before extracting.\"]\n return _ai\n\n# 3. Define a validation function\n@ai\ndef validate_invoice(invoice: Invoice) -> bool:\n \"\"\"Validate if the invoice data is complete and reasonable.\n Check if the total is positive and required fields are present.\n \"\"\"\n return _ai\n\n# 4. Define a summarization function\n@ai\ndef summarize_invoice(invoice: Invoice) -> str:\n \"\"\"Create a brief, human-readable summary of the invoice.\"\"\"\n return _ai\n\n# 5. Execute the pipeline\ndocument = \"\"\"\nINVOICE\nVendor: TechCorp Inc.\nInvoice #: INV-2025-101\nItems: 5x Laptops, 2x Monitors\nTotal: $5600.00\n\"\"\"\n\ninvoice = extract_invoice(document)\n\nif validate_invoice(invoice):\n summary = summarize_invoice(invoice)\n print(\"Invoice Validated Successfully!\")\n print(summary)\nelse:\n print(\"Invoice Validation Failed.\")\n```\n\n Invoice Validated Successfully!\n Invoice INV-2025-101 from TechCorp Inc. totals $5,600.00 and includes 5 laptops and 2 monitors.\n\n### 10.2. Research Assistant Agent\n\nThis example builds a sophisticated agent using tools and structured\ninternal outputs.\n\n``` python\nfrom functai import ai, _ai, configure\n\nconfigure(lm = \"gpt-4.1\")\n\n# Define Tools (Placeholders)\ndef search_web(query: str) -> str:\n \"\"\"Search the web for information.\"\"\"\n print(f\"[Searching: {query}]\")\n return f\"Mock search results for {query}.\"\n\ndef read_paper(paper_id: str) -> str:\n \"\"\"Read the content of a specific research paper.\"\"\"\n print(f\"[Reading: {paper_id}]\")\n return f\"Mock content of paper {paper_id}.\"\n\n# Define the Agent\n@ai(tools=[search_web, read_paper])\ndef research_assistant(query: str) -> str:\n \"\"\"Advanced research assistant.\n Use available tools to gather information. Synthesize findings.\n \"\"\"\n # Define intermediate outputs for better structure and inspection\n research_notes: list[str] = _ai[\"Key findings gathered during the ReAct process.\"]\n confidence: str = _ai[\"Confidence level: high/medium/low.\"]\n sources: list[str] = _ai[\"Sources consulted during the ReAct process.\"]\n\n answer = _ai\n\n # Post-processing: Add metadata to the final response\n return f\"{answer}\\n\\nConfidence: {confidence}\\nSources: {', '.join(sources)}\"\n\n# Execution\nresearch_assistant(\"What are the latest breakthroughs in quantum computing?\")\n```\n\n [Searching: latest breakthroughs in quantum computing 2024]\n\n \"The latest breakthroughs in quantum computing as of 2024 include IBM's unveiling of the 1,121-qubit 'Condor' processor, which represents a major step forward in hardware scalability. Researchers at Google, Microsoft, and Quantinuum have achieved significant progress in quantum error correction, with logical qubits now outperforming physical qubits in some cases. Advances in quantum networking have enabled entanglement distribution over longer distances, moving closer to a functional quantum internet. Additionally, new quantum algorithms and hybrid approaches are being developed for practical applications in chemistry, optimization, and machine learning. Commercial access to quantum computing continues to grow, with more cloud-based services available to researchers and businesses.\\n\\nConfidence: high\\nSources: https://www.ibm.com/blog/quantum-condor-1121-qubit-processor/, https://www.nature.com/articles/d41586-024-00000-0, https://www.quantamagazine.org/quantum-error-correction-breakthroughs-2024-20240110/, https://www.microsoft.com/en-us/research/blog/advances-in-fault-tolerant-quantum-computing-2024/, https://www.scientificamerican.com/article/quantum-internet-milestones-2024/\"\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "DSPy-powered function decorators for AI-enhanced programming",
"version": "0.12.0",
"project_urls": {
"Bug Tracker": "https://github.com/maximerivest/functai/issues",
"Documentation": "https://github.com/maximerivest/functai#readme",
"Homepage": "https://github.com/maximerivest/functai",
"Source": "https://github.com/maximerivest/functai"
},
"split_keywords": [
"ai",
" artificial-intelligence",
" decorators",
" dspy",
" function-decorators",
" llm",
" machine-learning",
" prompt-engineering"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "24400d4c7d870c1e3dd46ac5959d50de23bdb7fd8b84304b48bc4d72e9d0f2db",
"md5": "1a6e5fdf8b1bc3e2d085b79977a487cd",
"sha256": "960e424d248c95ecb291889b656089ffb2512988db70c90f977ef643bdf863c0"
},
"downloads": -1,
"filename": "functai-0.12.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "1a6e5fdf8b1bc3e2d085b79977a487cd",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 42187,
"upload_time": "2025-09-05T02:24:34",
"upload_time_iso_8601": "2025-09-05T02:24:34.010267Z",
"url": "https://files.pythonhosted.org/packages/24/40/0d4c7d870c1e3dd46ac5959d50de23bdb7fd8b84304b48bc4d72e9d0f2db/functai-0.12.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "602a924c4d30192364a1347ca35a90c32386b4ad6d262d40198fac02349fd98c",
"md5": "607a6fcb6c009c51f1516b95278130ed",
"sha256": "9f9e367c624f9a4dbfd5fa8630ee1e234023161a76c3993f2b22f6cb9e167c1a"
},
"downloads": -1,
"filename": "functai-0.12.0.tar.gz",
"has_sig": false,
"md5_digest": "607a6fcb6c009c51f1516b95278130ed",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 48561,
"upload_time": "2025-09-05T02:24:35",
"upload_time_iso_8601": "2025-09-05T02:24:35.262484Z",
"url": "https://files.pythonhosted.org/packages/60/2a/924c4d30192364a1347ca35a90c32386b4ad6d262d40198fac02349fd98c/functai-0.12.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-09-05 02:24:35",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "maximerivest",
"github_project": "functai",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "functai"
}