graphbit

Name: graphbit
Version: 0.4.0
Homepage: https://graphbit.ai
Summary: Production-grade Python bindings for GraphBit agentic workflow automation
Upload time: 2025-10-06 19:09:45
Author: GraphBit Team
Keywords: ai, automation, workflow, agent, llm
# GraphBit Python API

Python bindings for GraphBit - a declarative agentic workflow automation framework.

## Installation

### From PyPI (Recommended)

```bash
pip install graphbit
```

### From Source

```bash
git clone https://github.com/InfinitiBit/graphbit.git
cd graphbit/
cargo build --release
cd python/
maturin develop --release
```

## Quick Start

```python
import graphbit
import os

graphbit.init()

api_key = os.getenv("OPENAI_API_KEY")
config = graphbit.LlmConfig.openai(api_key, "gpt-4")

workflow = graphbit.Workflow("Hello World")

node = graphbit.Node.agent(
    "Greeter", 
    "Say hello to the world!", 
    "greeter"
)

node_id = workflow.add_node(node)
workflow.validate()

executor = graphbit.Executor(config)
context = executor.execute(workflow)

print(f"Result: {context.get_variable('node_result_1')}")
```

## Core Classes

### LlmConfig

Configuration for LLM providers.

```python
openai_config = graphbit.LlmConfig.openai(api_key, "gpt-4")

anthropic_config = graphbit.LlmConfig.anthropic(api_key, "claude-3-sonnet-20240229")

fireworks_config = graphbit.LlmConfig.fireworks(api_key, "accounts/fireworks/models/llama-v3p1-8b-instruct")
```

### EmbeddingConfig & EmbeddingClient

Service for generating text embeddings.

```python
config = graphbit.EmbeddingConfig.openai(api_key, "text-embedding-ada-002")
embedding_client = graphbit.EmbeddingClient(config)

# Single embedding
embedding = embedding_client.embed("Hello world")

# Batch embedding
embeddings = embedding_client.embed_many(["First text", "Second text"])
```
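A common use of embeddings is similarity search. Assuming `embed` and `embed_many` return plain lists of floats (an assumption; check the return type in your installed version), a minimal cosine-similarity helper looks like:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, candidate_vecs):
    """Return (index, score) pairs sorted by similarity to the query."""
    scored = [(i, cosine_similarity(query_vec, v))
              for i, v in enumerate(candidate_vecs)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

With the client above, `rank_by_similarity(embedding_client.embed(query), embedding_client.embed_many(texts))` would order `texts` by relevance to `query`.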

### Workflow

Builder for defining workflows and connecting nodes.

```python
workflow = graphbit.Workflow("My Workflow")

input_data = "Quarterly sales figures"  # example input

agent_node = graphbit.Node.agent(
    name="Analyzer",
    prompt=f"Analyze input data: {input_data}",
    agent_id="analyzer"
)

node_id = workflow.add_node(agent_node)

transform_node = graphbit.Node.transform(
    "Uppercase",
    "uppercase"
)

transform_id = workflow.add_node(transform_node)

workflow.connect(node_id, transform_id)

workflow.validate()
```

### Node

Factory for creating different types of workflow nodes.

```python
# Agent node
topic = "sustainable energy"  # example topic

agent = graphbit.Node.agent(
    "Content Writer",
    f"Write an article about {topic}",
    "writer"
)

# Transform node
transform = graphbit.Node.transform(
    "Uppercase", 
    "uppercase"
)
# Available transformations: "uppercase", "lowercase", "json_extract", "split", "join"

# Condition node
condition = graphbit.Node.condition(
    "Quality Check", 
    "quality_score > 0.8"
)
```
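Condition expressions are evaluated by GraphBit against workflow variables at runtime. To illustrate the semantics of an expression like `quality_score > 0.8`, here is a minimal, hypothetical evaluator for single comparisons (a sketch for illustration, not GraphBit's implementation):

```python
import ast
import operator

# Supported comparison operators for the sketch
_OPS = {ast.Gt: operator.gt, ast.Lt: operator.lt, ast.GtE: operator.ge,
        ast.LtE: operator.le, ast.Eq: operator.eq, ast.NotEq: operator.ne}

def eval_condition(expr: str, variables: dict) -> bool:
    """Evaluate a single comparison like 'quality_score > 0.8'."""
    node = ast.parse(expr, mode="eval").body
    if not isinstance(node, ast.Compare) or len(node.ops) != 1:
        raise ValueError(f"unsupported condition: {expr!r}")
    left = variables[node.left.id]            # variable name on the left
    right = ast.literal_eval(node.comparators[0])  # literal on the right
    return _OPS[type(node.ops[0])](left, right)
```

For example, `eval_condition("quality_score > 0.8", {"quality_score": 0.9})` is true, so a condition node with that expression would route execution down its "true" branch.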

### DocumentLoaderConfig & DocumentLoader

Service for loading documents.

```python
from graphbit import DocumentLoaderConfig, DocumentLoader

# Basic configuration
config = DocumentLoaderConfig(
    max_file_size=50_000_000,    # 50MB limit
    default_encoding="utf-8",    # Text encoding
    preserve_formatting=True     # Keep formatting
)
loader = DocumentLoader(config)

# PDF processing
config = DocumentLoaderConfig(preserve_formatting=True)
config.extraction_settings = {
    "ocr_enabled": True,
    "table_detection": True
}
loader = DocumentLoader(config)
content = loader.load_document("report.pdf", "pdf")

# Text files processing
config = DocumentLoaderConfig(default_encoding="utf-8")
loader = DocumentLoader(config)
content = loader.load_document("notes.txt", "txt")

# Structured data processing
# JSON, CSV, XML automatically parsed as text
loader = DocumentLoader()
json_content = loader.load_document("data.json", "json")
csv_content = loader.load_document("data.csv", "csv")
```
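`load_document` takes the document type as an explicit second argument. When batch-loading mixed files, a small helper can infer that argument from the file extension; the mapping below is an assumption for illustration, not part of the GraphBit API:

```python
from pathlib import Path

# Hypothetical extension-to-type mapping; the type strings shown in the
# examples above are "pdf", "txt", "json", and "csv".
_EXT_TO_TYPE = {".pdf": "pdf", ".txt": "txt", ".json": "json",
                ".csv": "csv", ".xml": "xml", ".md": "txt"}

def infer_document_type(path: str) -> str:
    """Map a filename extension to a document type string."""
    ext = Path(path).suffix.lower()
    try:
        return _EXT_TO_TYPE[ext]
    except KeyError:
        raise ValueError(f"unsupported document extension: {ext!r}")
```

You could then write `loader.load_document(path, infer_document_type(path))` in a loop over a directory.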

### TextSplitterConfig & TextSplitter

Factory for creating different types of text splitters.

```python
# Character configuration
config = graphbit.TextSplitterConfig.character(
    chunk_size=1000,
    chunk_overlap=200
)

# Token configuration
config = graphbit.TextSplitterConfig.token(
    chunk_size=100,
    chunk_overlap=20,
    token_pattern=r'\w+'
)

# Code splitter configuration
config = graphbit.TextSplitterConfig.code(
    chunk_size=500,
    chunk_overlap=50,
    language="python"
)

# Markdown splitter configuration
config = graphbit.TextSplitterConfig.markdown(
    chunk_size=1000,
    chunk_overlap=100,
    split_by_headers=True
)

# Create a character splitter
splitter = graphbit.CharacterSplitter(
    chunk_size=1000,      # Maximum characters per chunk
    chunk_overlap=200     # Overlap between chunks
)

# Create a token splitter
splitter = graphbit.TokenSplitter(
    chunk_size=100,       # Maximum tokens per chunk
    chunk_overlap=20,     # Token overlap
    token_pattern=None    # Optional custom regex pattern
)

# Create a sentence splitter
splitter = graphbit.SentenceSplitter(
    chunk_size=500,       # Target size in characters
    chunk_overlap=1,      # Number of sentences to overlap
    sentence_endings=None # Optional list of sentence endings
)

# Create a recursive splitter
splitter = graphbit.RecursiveSplitter(
    chunk_size=1000,      # Target size in characters
    chunk_overlap=100,    # Overlap between chunks
    separators=None       # Optional list of separators
)
```
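To make the `chunk_size`/`chunk_overlap` parameters concrete: a character splitter emits fixed-size windows that advance by `chunk_size - chunk_overlap` characters, so consecutive chunks share `chunk_overlap` characters. A minimal sketch of that behavior (for illustration; not GraphBit's implementation):

```python
def split_characters(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Fixed-size character windows with overlap between consecutive chunks."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # how far each window advances
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

For example, `split_characters("abcdefghij", chunk_size=4, chunk_overlap=2)` yields `["abcd", "cdef", "efgh", "ghij", "ij"]`: each chunk repeats the last two characters of the previous one.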

### Executor

Executes workflows with the configured LLM.

```python
# Basic executor configuration
executor = graphbit.Executor(openai_config)

# High-throughput executor configuration
high_throughput_executor = graphbit.Executor.new_high_throughput(config)

# Low-latency executor configuration
low_latency_executor = graphbit.Executor.new_low_latency(config)

# Memory-optimized executor configuration
memory_optimized_executor = graphbit.Executor.new_memory_optimized(config)

context = executor.execute(workflow)

print(f"Workflow Status: {context.state()}")
print(f"Workflow Success: {context.is_success()}")
print(f"Workflow Failed: {context.is_failed()}")
print(f"Workflow Execution time: {context.execution_time_ms()}ms")
print(f"All variables: {context.get_all_variables()}")
```

## API Reference Summary

### Core Classes
- `LlmConfig`: LLM provider configuration
- `LlmClient`: LLM client with resilience patterns
- `LlmUsage`: Token usage statistics for cost tracking
- `FinishReason`: LLM completion reason (stop, length, tool_calls, etc.)
- `LlmToolCall`: Tool/function call information
- `LlmResponse`: Complete LLM response with metadata
- `EmbeddingConfig`: Embedding service configuration
- `EmbeddingClient`: Text embedding generation
- `Node`: Workflow node factory (agent, transform, condition)
- `Workflow`: Workflow definition and operations
- `DocumentLoaderConfig`: Document loading configuration
- `DocumentLoader`: Document loading service
- `TextSplitterConfig`: Text splitter service configuration
- `TextSplitter`: Text splitters for text processing
- `Executor`: Workflow execution engine

### LLM Response Observability

Access detailed LLM response information for comprehensive tracing across multiple providers:

```python
import graphbit
import os

# Configure different LLM providers
providers = {
    "OpenAI": graphbit.LlmConfig.openai(api_key=os.getenv("OPENAI_API_KEY")),
    "Anthropic": graphbit.LlmConfig.anthropic(api_key=os.getenv("ANTHROPIC_API_KEY")),
    "DeepSeek": graphbit.LlmConfig.deepseek(api_key=os.getenv("DEEPSEEK_API_KEY")),
    "Perplexity": graphbit.LlmConfig.perplexity(api_key=os.getenv("PERPLEXITY_API_KEY"))
}

# Test with DeepSeek for cost-effective AI
config = providers["DeepSeek"]
client = graphbit.LlmClient(config)

# Get full response with usage statistics
response = client.complete_full("What is AI?", max_tokens=100)

# Access detailed metrics
print(f"Content: {response.content}")
print(f"Model: {response.model}")
print(f"Prompt tokens: {response.usage.prompt_tokens}")
print(f"Completion tokens: {response.usage.completion_tokens}")
print(f"Total tokens: {response.usage.total_tokens}")
print(f"Finish reason: {response.finish_reason}")

# Test with Perplexity for real-time search capabilities
# (complete_full_async must be awaited inside a coroutine)
import asyncio

perplexity_client = graphbit.LlmClient(providers["Perplexity"])

async def search():
    return await perplexity_client.complete_full_async(
        "What are the latest developments in quantum computing?",
        max_tokens=150
    )

search_response = asyncio.run(search())

# Serialize for logging/monitoring
response_data = response.to_dict()
```
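The usage fields make per-request cost tracking straightforward. The sketch below estimates cost from token counts; the price table is hypothetical, so substitute your provider's current per-token pricing:

```python
# Hypothetical per-1M-token prices in USD (an assumption for illustration;
# check your provider's pricing page).
PRICES = {
    "deepseek-chat": {"prompt": 0.27, "completion": 1.10},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate request cost in USD from token usage counts."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"]
            + completion_tokens * p["completion"]) / 1_000_000
```

You would feed it the observability fields above, e.g. `estimate_cost(response.model, response.usage.prompt_tokens, response.usage.completion_tokens)`.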

This API provides everything needed to build agentic workflows with Python, from simple automation to multi-step pipelines with comprehensive LLM observability.


            
