campfires

- Name: campfires
- Version: 0.4.1
- Summary: A Python framework for orchestrating multimodal LLMs and tools
- Upload time: 2025-10-22 23:49:17
- Author: Campfires Team
- Requires Python: >=3.8
- License: MIT
- Keywords: llm, ai, orchestration, multimodal, framework

# Campfires Framework

A Python framework for orchestrating multimodal Large Language Models (LLMs) and tools to achieve emergent, task-driven behavior.

![Campfires Logo](images/logo.jpg)

## The Valley of Campfires

Imagine a peaceful valley at twilight, dotted with glowing campfires. Around each campfire, a group of **Campers** (AI agents) sit together, sharing stories, analyzing information, and working on tasks. Each fire is a **Campfire**, a collaborative workspace where agents can communicate and coordinate their efforts.

### The Campfire Community

At your campfire, **Campers** pass around **Torches** - glowing vessels that carry information, data, and insights from one agent to another. Each torch illuminates the conversation, bringing new perspectives and knowledge to the circle. As campers examine and discuss what each torch reveals, they add their own insights, transforming the information before passing it along.

### The Party Box Exchange

Between the campfires sits a magical **Party Box** - a shared storage space where campfires can exchange gifts, artifacts, and resources. When your campers discover something valuable (documents, images, audio files, or data), they can place it in the Party Box for other campfires to discover and use. It's like a community treasure chest that connects all the campfires in the valley.

![The Valley of Campfires](images/campfires.jpg)
*A peaceful valley at twilight, where AI agents gather around glowing campfires to collaborate, share knowledge through torches, and exchange resources via the central Party Box. Each campfire represents a collaborative workspace, while the glowing Party Box in the center connects all communities across the valley.*

### The Torch Bearer Network

When something important happens at your campfire - a breakthrough discovery, a completed task, or an urgent message - a **Torch Bearer** can carry the news to other campfires throughout the valley. These torch bearers use the **Model Context Protocol** (MCP) to deliver information packets, ensuring that all campfires stay connected and informed about events, notifications, and shared resources.

### Your Valley, Your Rules

Each campfire operates independently, with its own group of specialized campers, but they're all part of the same vibrant valley community. Whether you're running a single intimate campfire or orchestrating multiple campfires across the valley, the framework provides the tools to create emergent, collaborative AI behaviors that feel as natural as friends gathering around a fire.

Welcome to the valley. Pull up a log, grab a torch, and let's build something amazing together.

## Features

- **Modular Architecture**: Build complex AI workflows using composable "Campers" (AI agents)
- **LLM Integration**: Built-in support for OpenRouter and Ollama (local LLM deployment)
- **Enhanced Orchestration**: Advanced task orchestration with detailed execution stages, problem understanding, approach selection, and quality considerations
- **Interactive HTML Reports**: Rich HTML reports with expandable sections showing execution stages, RAG information, customization details, and impact analysis
- **Zeitgeist**: Internet knowledge and opinion mining for informed campers
- **Action Planning**: Generate structured action plans with priorities and timelines
- **Professional Character System**: Define unique personalities and perspectives with professional traits
- **RAG Integration**: Retrieval-Augmented Generation with document context and state management
- **MCP Protocol**: Model Context Protocol for inter-agent communication
- **Storage Management**: Flexible "Party Box" system for asset storage
- **State Management**: Persistent state tracking with SQLite backend
- **Template System**: Dynamic prompt templating with Jinja2

## Installation

### From PyPI (Recommended)

```bash
pip install campfires
```

### From Source

```bash
git clone https://github.com/campfires/campfires.git
cd campfires
pip install -e .
```

## Quick Start

### Basic Usage

```python
import asyncio
from campfires import Campfire, Camper, Torch, OpenRouterConfig, LLMCamperMixin

class MyCamper(Camper, LLMCamperMixin):
    async def process(self, torch: Torch) -> Torch:
        # Process the input torch and return a new torch
        response = await self.llm_completion(f"Analyze: {torch.claim}")
        return Torch(
            claim=response,
            confidence=0.8,
            metadata={"processed_by": "MyCamper"}
        )

async def main():
    # Setup LLM configuration
    config = OpenRouterConfig(
        api_key="your-openrouter-api-key",
        default_model="anthropic/claude-3-sonnet"
    )
    
    # Create camper and setup LLM
    camper = MyCamper("my-camper")
    camper.setup_llm(config)
    
    # Create campfire and add camper
    campfire = Campfire("my-campfire")
    campfire.add_camper(camper)
    
    # Start the campfire
    await campfire.start()
    
    # Send a torch for processing
    input_torch = Torch(claim="Hello, world!")
    await campfire.send_torch(input_torch)
    
    # Stop the campfire
    await campfire.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

### Local LLM with Ollama

```python
import asyncio
from campfires import Campfire, Camper, Torch, OllamaConfig, LLMCamperMixin

class LocalCamper(Camper, LLMCamperMixin):
    async def process(self, torch: Torch) -> Torch:
        # Process using local Ollama model
        response = await self.llm_completion(f"Analyze: {torch.claim}")
        return Torch(
            claim=response,
            confidence=0.8,
            metadata={"processed_by": "LocalCamper", "provider": "ollama"}
        )

async def main():
    # Setup Ollama configuration (requires Ollama server running)
    config = OllamaConfig(
        base_url="http://localhost:11434",
        model="llama2"
    )
    
    # Create camper and setup LLM
    camper = LocalCamper("local-camper")
    camper.setup_llm(config)
    
    # Create campfire and add camper
    campfire = Campfire("local-campfire")
    campfire.add_camper(camper)
    
    # Start the campfire
    await campfire.start()
    
    # Send a torch for processing
    input_torch = Torch(claim="Hello from local AI!")
    await campfire.send_torch(input_torch)
    
    # Stop the campfire
    await campfire.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

### Crisis Detection Example

```python
import asyncio
from campfires import (
    Campfire, Camper, Torch, 
    OpenRouterConfig, LLMCamperMixin,
    MCPProtocol, AsyncQueueTransport
)

class CrisisDetectionCamper(Camper, LLMCamperMixin):
    async def process(self, torch: Torch) -> Torch:
        # Analyze text for crisis indicators
        prompt = f"""
        Analyze this text for crisis indicators:
        "{torch.claim}"
        
        Return JSON with crisis_probability (0-1) and key_indicators.
        """
        
        response = await self.llm_completion_with_mcp(
            prompt, 
            channel="crisis_detection"
        )
        
        return Torch(
            claim=f"Crisis analysis: {response}",
            confidence=0.9,
            metadata={"analysis_type": "crisis_detection"}
        )

async def main():
    # Setup MCP protocol for inter-camper communication
    transport = AsyncQueueTransport()
    mcp_protocol = MCPProtocol(transport)
    await mcp_protocol.start()
    
    # Setup LLM configuration
    config = OpenRouterConfig(
        api_key="your-openrouter-api-key",
        default_model="anthropic/claude-3-sonnet"
    )
    
    # Create and configure camper
    camper = CrisisDetectionCamper("crisis-detector")
    camper.setup_llm(config, mcp_protocol)
    
    # Create campfire with MCP support
    campfire = Campfire("crisis-campfire", mcp_protocol=mcp_protocol)
    campfire.add_camper(camper)
    
    await campfire.start()
    
    # Process some text
    torch = Torch(claim="I'm feeling really overwhelmed and don't know what to do")
    await campfire.send_torch(torch)
    
    await campfire.stop()
    await mcp_protocol.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

### LLM-Enabled Campers with Custom Prompts

The framework supports advanced LLM integration through the `override_prompt` method, allowing campers to customize their LLM interactions:

```python
import asyncio
from campfires import Camper, Torch, OpenRouterConfig, LLMCamperMixin

class ExpertAnalyzer(Camper, LLMCamperMixin):
    def __init__(self, name: str, expertise: str):
        super().__init__(name)
        self.expertise = expertise
        
    def override_prompt(self, torch: Torch) -> dict:
        """Custom prompt generation with LLM call"""
        try:
            # Create enhanced prompt based on expertise
            enhanced_prompt = f"""
            You are an expert {self.expertise}. Analyze the following information 
            and provide professional insights:
            
            Input: {torch.claim}
            
            Please provide:
            1. Key insights from your {self.expertise} perspective
            2. Potential concerns or opportunities
            3. Recommended next steps
            """
            
            # Make LLM call directly in override_prompt
            response = self.llm_completion_with_mcp(enhanced_prompt)
            
            return {
                "claim": response,
                "confidence": 0.85,
                "metadata": {
                    "expertise": self.expertise,
                    "analysis_type": "expert_review"
                }
            }
        except Exception as e:
            return {
                "claim": f"Analysis failed: {str(e)}",
                "confidence": 0.1,
                "metadata": {"error": True}
            }

async def main():
    # Setup LLM configuration
    config = OpenRouterConfig(api_key="your-openrouter-api-key")
    
    # Create expert campers
    security_expert = ExpertAnalyzer("security-expert", "cybersecurity")
    security_expert.setup_llm(config)
    
    finance_expert = ExpertAnalyzer("finance-expert", "financial analysis")
    finance_expert.setup_llm(config)
    
    # Create campfire and add experts
    campfire = Campfire("expert-analysis")
    campfire.add_camper(security_expert)
    campfire.add_camper(finance_expert)
    
    await campfire.start()
    
    # Analyze a business proposal
    torch = Torch(claim="We're considering implementing a new payment system")
    await campfire.send_torch(torch)
    
    await campfire.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

### Team Collaboration with RAG Integration

Build intelligent team members that can access and reason over document collections:

```python
import asyncio
from campfires import Camper, Torch, OpenRouterConfig, LLMCamperMixin

class TeamMember(Camper, LLMCamperMixin):
    def __init__(self, name: str, role: str, rag_system_prompt: str):
        super().__init__(name)
        self.role = role
        self.rag_system_prompt = rag_system_prompt
        
    def override_prompt(self, torch: Torch) -> dict:
        """Generate responses using RAG-enhanced prompts"""
        try:
            # Combine RAG context with user question
            enhanced_prompt = f"""
            {self.rag_system_prompt}
            
            Role: {self.role}
            Question: {torch.claim}
            
            Please provide a detailed response based on your role and the 
            available context. Include specific recommendations and actionable insights.
            """
            
            # Make LLM call with enhanced context
            response = self.llm_completion_with_mcp(enhanced_prompt)
            
            return {
                "claim": response,
                "confidence": 0.9,
                "metadata": {
                    "role": self.role,
                    "rag_enhanced": True,
                    "response_type": "team_recommendation"
                }
            }
        except Exception as e:
            return {
                "claim": f"Unable to provide recommendation: {str(e)}",
                "confidence": 0.1,
                "metadata": {"error": True, "role": self.role}
            }

async def main():
    # Setup LLM configuration
    config = OpenRouterConfig(api_key="your-openrouter-api-key")
    
    # RAG system prompt with document context
    rag_context = """
    You have access to comprehensive documentation about our tax application system.
    The system handles tax calculations, user management, and compliance reporting.
    Key components include: authentication service, calculation engine, reporting module.
    """
    
    # Create team members with different roles
    backend_engineer = TeamMember(
        "backend-engineer", 
        "Senior Backend Engineer",
        rag_context
    )
    backend_engineer.setup_llm(config)
    
    devops_engineer = TeamMember(
        "devops-engineer",
        "Senior DevOps Engineer", 
        rag_context
    )
    devops_engineer.setup_llm(config)
    
    # Create team campfire
    team_campfire = Campfire("development-team")
    team_campfire.add_camper(backend_engineer)
    team_campfire.add_camper(devops_engineer)
    
    await team_campfire.start()
    
    # Ask for team input on a technical decision
    question = Torch(claim="How should we implement user authentication for the new tax module?")
    await team_campfire.send_torch(question)
    
    await team_campfire.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

## Using Enhanced Orchestration

The Enhanced Orchestration system provides sophisticated task management with detailed execution tracking. Here's how to leverage these powerful features:

### Basic Enhanced Orchestration Setup

```python
import asyncio
from campfires import Campfire, Camper, LLMCamperMixin, OpenRouterConfig
from campfires.core.enhanced_orchestration import EnhancedOrchestration

class AnalystCamper(Camper, LLMCamperMixin):
    def __init__(self, name: str, expertise: str):
        super().__init__(name)
        self.expertise = expertise
        
    async def override_prompt(self, raw_prompt: str, system_prompt: str = None) -> dict:
        """Enhanced prompt processing with detailed execution tracking"""
        try:
            enhanced_prompt = f"""
            As a {self.expertise} expert, analyze the following:
            {raw_prompt}
            
            Provide detailed insights including:
            1. Problem understanding
            2. Approach selection
            3. Key considerations
            4. Recommended actions
            """
            
            response = await self.llm_completion(enhanced_prompt)
            
            return {
                "claim": response,
                "confidence": 0.9,
                "metadata": {
                    "expertise": self.expertise,
                    "analysis_depth": "comprehensive",
                    "execution_stage": "expert_analysis"
                }
            }
        except Exception as e:
            return {
                "claim": f"Analysis failed: {str(e)}",
                "confidence": 0.1,
                "metadata": {"error": True}
            }

async def run_enhanced_orchestration():
    # Setup LLM configuration
    config = OpenRouterConfig(api_key="your-api-key")
    
    # Create specialized campers
    business_analyst = AnalystCamper("business-analyst", "business strategy")
    business_analyst.setup_llm(config)
    
    tech_analyst = AnalystCamper("tech-analyst", "technology architecture")
    tech_analyst.setup_llm(config)
    
    # Create campfire with enhanced orchestration
    campfire = Campfire("strategic-analysis")
    campfire.add_camper(business_analyst)
    campfire.add_camper(tech_analyst)
    
    # The enhanced orchestration automatically captures:
    # - Detailed execution stages
    # - Problem understanding phases
    # - Approach selection reasoning
    # - Quality considerations
    # - Risk assessments
    
    await campfire.start()
    
    # Process a complex business question
    from campfires import Torch
    question = Torch(
        claim="Should we migrate our legacy system to microservices?",
        metadata={"priority": "high", "stakeholders": ["engineering", "business"]}
    )
    
    await campfire.send_torch(question)
    await campfire.stop()
    
    # Enhanced HTML report will be generated automatically
    print("Check the generated HTML report for detailed execution analysis!")

if __name__ == "__main__":
    asyncio.run(run_enhanced_orchestration())
```

### Understanding the Interactive HTML Reports

The enhanced orchestration system generates rich HTML reports with expandable sections:

#### **Execution Stages Section** 🔍
Click the arrow to expand and see:
- **Problem Understanding**: How campers interpreted the task
- **Approach Selection**: Why specific strategies were chosen
- **Execution Strategy**: Step-by-step implementation details
- **Quality Considerations**: Quality checks and validations performed
- **Risk Assessment**: Potential risks identified and mitigation strategies

#### **RAG Information Section** 📚
Reveals how document context was used:
- **Document Retrieval**: Which documents were accessed
- **Context Integration**: How information was incorporated
- **Relevance Scoring**: Why specific content was prioritized
- **State Management**: How RAG state evolved during processing

#### **Customization Details Section** ⚙️
Shows how campers adapted their responses:
- **Role-Based Adaptations**: How expertise influenced analysis
- **Personality Integration**: How character traits affected responses
- **Context Awareness**: How situational factors were considered

#### **Impact Analysis Section** 📊
Provides outcome assessment:
- **Decision Quality**: Assessment of recommendation strength
- **Confidence Levels**: Reliability indicators for each insight
- **Follow-up Actions**: Suggested next steps
- **Success Metrics**: How to measure implementation success

### Advanced Multi-Camper Orchestration

```python
import asyncio
from campfires import Campfire, Camper, LLMCamperMixin, OpenRouterConfig

class SpecializedTeamMember(Camper, LLMCamperMixin):
    def __init__(self, name: str, role: str, personality: str, concerns: list):
        super().__init__(name)
        self.role = role
        self.personality = personality
        self.concerns = concerns
        
    async def override_prompt(self, raw_prompt: str, system_prompt: str = None) -> dict:
        """Role-specific analysis with personality integration"""
        try:
            role_prompt = f"""
            You are a {self.role} with the following personality: {self.personality}
            Your primary concerns are: {', '.join(self.concerns)}
            
            Task: {raw_prompt}
            
            Provide analysis from your unique perspective, considering:
            1. How this aligns with your role responsibilities
            2. What concerns you might have
            3. What opportunities you see
            4. Your recommended approach
            """
            
            response = await self.llm_completion(role_prompt)
            
            return {
                "claim": response,
                "confidence": 0.85,
                "metadata": {
                    "role": self.role,
                    "personality_influence": self.personality,
                    "key_concerns": self.concerns,
                    "perspective_type": "role_specialized"
                }
            }
        except Exception as e:
            return {
                "claim": f"Unable to provide {self.role} perspective: {str(e)}",
                "confidence": 0.1,
                "metadata": {"error": True, "role": self.role}
            }

async def run_team_orchestration():
    config = OpenRouterConfig(api_key="your-api-key")
    
    # Create diverse team members
    team_members = [
        SpecializedTeamMember(
            "sarah-pm", 
            "Project Manager",
            "detail-oriented and deadline-focused",
            ["timeline adherence", "resource allocation", "stakeholder communication"]
        ),
        SpecializedTeamMember(
            "alex-dev",
            "Senior Developer", 
            "pragmatic and quality-focused",
            ["code maintainability", "technical debt", "performance optimization"]
        ),
        SpecializedTeamMember(
            "jordan-ux",
            "UX Designer",
            "user-centric and creative",
            ["user experience", "accessibility", "design consistency"]
        )
    ]
    
    # Setup LLM for each team member
    for member in team_members:
        member.setup_llm(config)
    
    # Create collaborative campfire
    team_campfire = Campfire("product-development-team")
    for member in team_members:
        team_campfire.add_camper(member)
    
    await team_campfire.start()
    
    # Collaborative decision making
    from campfires import Torch
    decision = Torch(
        claim="We need to redesign our mobile app's onboarding flow to improve user retention",
        metadata={
            "urgency": "high",
            "impact": "user_retention",
            "timeline": "6_weeks"
        }
    )
    
    await team_campfire.send_torch(decision)
    await team_campfire.stop()
    
    print("Team collaboration complete! Check the HTML report for detailed insights from each perspective.")

if __name__ == "__main__":
    asyncio.run(run_team_orchestration())
```

### Leveraging RAG Integration

```python
import asyncio
from campfires import Campfire, Camper, LLMCamperMixin, OpenRouterConfig

class RAGEnabledCamper(Camper, LLMCamperMixin):
    def __init__(self, name: str, domain: str, rag_context: str):
        super().__init__(name)
        self.domain = domain
        self.rag_context = rag_context
        
    async def override_prompt(self, raw_prompt: str, system_prompt: str = None) -> dict:
        """RAG-enhanced analysis with document context"""
        try:
            rag_enhanced_prompt = f"""
            Domain Expertise: {self.domain}
            
            Available Context:
            {self.rag_context}
            
            Question: {raw_prompt}
            
            Using the provided context and your {self.domain} expertise:
            1. Identify relevant information from the context
            2. Apply domain-specific analysis
            3. Provide evidence-based recommendations
            4. Highlight any gaps in available information
            """
            
            response = await self.llm_completion(rag_enhanced_prompt)
            
            return {
                "claim": response,
                "confidence": 0.92,
                "metadata": {
                    "domain": self.domain,
                    "rag_enhanced": True,
                    "context_utilized": True,
                    "evidence_based": True
                }
            }
        except Exception as e:
            return {
                "claim": f"RAG analysis failed: {str(e)}",
                "confidence": 0.1,
                "metadata": {"error": True, "domain": self.domain}
            }

async def run_rag_orchestration():
    config = OpenRouterConfig(api_key="your-api-key")
    
    # Sample RAG context (in practice, this would come from document retrieval)
    financial_context = """
    Company Financial Overview:
    - Q3 Revenue: $2.4M (15% growth)
    - Operating Expenses: $1.8M
    - Cash Flow: Positive $600K
    - Key Investments: R&D (30%), Marketing (25%), Operations (45%)
    - Market Position: Growing market share in fintech sector
    """
    
    # Create RAG-enabled camper
    financial_analyst = RAGEnabledCamper(
        "financial-analyst",
        "financial analysis",
        financial_context
    )
    financial_analyst.setup_llm(config)
    
    # Create campfire
    analysis_campfire = Campfire("financial-analysis")
    analysis_campfire.add_camper(financial_analyst)
    
    await analysis_campfire.start()
    
    # Ask context-aware question
    from campfires import Torch
    question = Torch(
        claim="Should we increase our R&D investment by 50% next quarter?",
        metadata={"analysis_type": "investment_decision", "timeframe": "Q4"}
    )
    
    await analysis_campfire.send_torch(question)
    await analysis_campfire.stop()
    
    print("RAG-enhanced analysis complete! The HTML report shows how document context influenced the decision.")

if __name__ == "__main__":
    asyncio.run(run_rag_orchestration())
```

## Core Concepts

### Torches - The Light of Knowledge
In our valley, **Torches** are glowing vessels that carry information, insights, and data between campers. Each torch illuminates a piece of knowledge with its own confidence level - some burn bright with certainty, others flicker with uncertainty:

```python
from campfires import Torch

torch = Torch(
    claim="The weather is sunny today",
    confidence=0.95,  # How brightly this torch burns
    metadata={"source": "weather_api", "location": "NYC"}
)
```

### Campers - The Valley Inhabitants
**Campers** are the AI agents sitting around your campfire. Each camper has their own expertise and personality. When a torch is passed to them, they examine it, add their insights, and pass along a new torch with their findings:

```python
from campfires import Camper, Torch

class WeatherCamper(Camper):
    async def process(self, torch: Torch) -> Torch:
        # This camper specializes in weather analysis
        return Torch(claim=f"Weather insight: {torch.claim}")
```

### LLMCamperMixin - Bringing Intelligence to Your Campers
The **LLMCamperMixin** gives your campers the ability to think and reason using Large Language Models. When you mix this into your camper class, they gain access to powerful AI capabilities:

```python
from campfires import Camper, LLMCamperMixin, OpenRouterConfig

class IntelligentCamper(Camper, LLMCamperMixin):
    def __init__(self, name: str):
        super().__init__(name)
        # Setup LLM capabilities
        config = OpenRouterConfig(api_key="your-api-key")
        self.setup_llm(config)
    
    async def process(self, torch: Torch) -> Torch:
        # Use LLM to analyze the torch content
        response = await self.llm_completion_with_mcp(
            f"Analyze this: {torch.claim}"
        )
        return Torch(claim=response, confidence=0.9)
    
    def override_prompt(self, torch: Torch) -> dict:
        # Customize how the LLM processes information
        enhanced_prompt = f"As an expert, analyze: {torch.claim}"
        llm_response = self.llm_completion_with_mcp(enhanced_prompt)
        
        return {
            "claim": llm_response,
            "confidence": 0.85,
            "metadata": {"enhanced": True}
        }
```

### Campfires - The Gathering Circles
A **Campfire** is where your campers gather to collaborate. It orchestrates the conversation, ensuring torches are passed in the right order and that every camper gets a chance to contribute their expertise:

```python
from campfires import Campfire

campfire = Campfire("weather-analysis")
campfire.add_camper(weather_camper)
campfire.add_camper(analysis_camper)
# Now they can work together around the fire
```

### YAML Save/Restore - Preserving the Valley's Memory
The **YAML Save/Restore** functionality allows you to save your campfire configurations and restore them later, preserving the exact setup of campers, their roles, and configurations. This is perfect for sharing campfire setups or recreating successful collaborations:

```python
from campfires import Campfire, CampfireManager

# Save individual campfire
campfire = Campfire("analysis-team")
campfire.add_camper(analyst_camper)
campfire.add_camper(researcher_camper)

# Save to YAML with flexible location and template-based naming
await campfire.save_to_yaml(
    location="./saved_campfires",  # Directory or full path
    filename_template="{name}_backup_{timestamp}.yaml"  # Optional template
)

# Restore from YAML
restored_campfire = await Campfire.load_from_yaml("./saved_campfires/analysis-team_backup_20241201_143022.yaml")

# Bulk operations with CampfireManager
manager = CampfireManager()
manager.add_campfire(campfire1)
manager.add_campfire(campfire2)

# Save all campfires at once
await manager.save_all_to_yaml("./campfire_backups")

# Load multiple campfires
loaded_campfires = await manager.load_campfires_from_directory("./campfire_backups")
```

The YAML files contain complete campfire configurations including:
- Campfire names and metadata
- All camper configurations and roles
- LLM settings and API configurations
- Custom attributes and initialization parameters

### Zeitgeist - The Valley's Internet Knowledge
**Zeitgeist** gives your campers the ability to search the internet for current information, opinions, and trends relevant to their roles. It's like having a wise oracle at the campfire who can instantly access the collective knowledge of the world:

```python
from campfires import Camper, LLMCamperMixin

class ResearchCamper(LLMCamperMixin, Camper):
    def __init__(self, name: str, role: str, **kwargs):
        super().__init__(name=name, **kwargs)
        self.set_role(role)  # 'academic', 'developer', 'journalist', etc.
        self.enable_zeitgeist()
    
    async def research_topic(self, topic: str):
        # Get current internet knowledge about the topic
        zeitgeist_info = await self.get_zeitgeist(topic)
        role_opinions = await self.get_role_opinions(topic)
        trending_tools = await self.get_trending_tools(topic)
        return {
            'zeitgeist': zeitgeist_info,
            'opinions': role_opinions,
            'tools': trending_tools
        }
```

### Enhanced Orchestration - The Valley's Wisdom
The **Enhanced Orchestration** system provides sophisticated task management with detailed execution stages. When campers work on complex tasks, the system captures their thought processes, approach selection, and quality considerations:

```python
from campfires import Campfire, EnhancedOrchestration

# Create a campfire with enhanced orchestration
campfire = Campfire("strategic-planning")
orchestration = EnhancedOrchestration(campfire)

# The orchestration system automatically captures:
# - Problem understanding and analysis
# - Approach selection and reasoning
# - Execution strategy and implementation
# - Quality considerations and risk assessment
# - RAG context and document integration
# - Final outcomes and impact analysis

# All this information is available in interactive HTML reports
# with expandable sections for detailed exploration
```

### Interactive HTML Reports - Illuminating the Process
The framework generates rich HTML reports that reveal the inner workings of your campfire collaborations. These reports feature expandable sections that show:

- **Execution Stages**: Step-by-step breakdown of how tasks were approached and executed
- **RAG Information**: Details about document retrieval and context integration
- **Customization**: How campers adapted their responses based on their roles and expertise
- **Impact Analysis**: Assessment of outcomes and recommendations for future improvements

Click the arrow icons in the report headers to expand sections and explore the detailed execution process.

### Party Box - The Valley's Treasure Chest
The **Party Box** is the shared storage system where campfires can exchange valuable artifacts - documents, images, audio files, and data. It's like a magical chest that connects all campfires in the valley:

```python
from campfires import LocalDriver

# Store something in the party box
party_box = LocalDriver("./demo_storage")
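# Hypothetical payload for this sketch: read any file as bytes
file_data = open("shared_document.pdf", "rb").read()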
await party_box.store_asset(file_data, "shared_document.pdf")
```

### MCP Protocol - The Torch Bearer Network
The **Model Context Protocol** is how torch bearers carry messages between campfires throughout the valley. It ensures that important information, events, and notifications reach every campfire that needs to know:

```python
from campfires import MCPProtocol, AsyncQueueTransport

transport = AsyncQueueTransport()
mcp_protocol = MCPProtocol(transport)
await mcp_protocol.start()
# Now torch bearers can carry messages across the valley
```

## Configuration

### Environment Variables

Create a `.env` file in your project root:

```env
OPENROUTER_API_KEY=your_openrouter_api_key
OPENROUTER_DEFAULT_MODEL=anthropic/claude-3-sonnet
CAMPFIRES_LOG_LEVEL=INFO
CAMPFIRES_DB_PATH=./campfires.db
```
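
The examples above pass API keys directly; you can instead load them from the `.env` file when building a config (a minimal sketch assuming the `python-dotenv` package; the variable names match the file above):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

from campfires import OpenRouterConfig

# Read .env from the current directory into os.environ
load_dotenv()

config = OpenRouterConfig(
    api_key=os.environ["OPENROUTER_API_KEY"],
    default_model=os.environ.get("OPENROUTER_DEFAULT_MODEL", "anthropic/claude-3-sonnet"),
)
```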

### OpenRouter Configuration

```python
from campfires import OpenRouterConfig

config = OpenRouterConfig(
    api_key="your-api-key",
    default_model="anthropic/claude-3-sonnet",
    max_tokens=1000,
    temperature=0.7
)
```

### Ollama Configuration

For local LLM deployment with Ollama:

```python
from campfires import OllamaConfig, MultimodalOllamaConfig

# Basic text generation
config = OllamaConfig(
    base_url="http://localhost:11434",
    model="llama2",
    temperature=0.7,
    max_tokens=1000
)

# Multimodal capabilities (text + images)
multimodal_config = MultimodalOllamaConfig(
    base_url="http://localhost:11434",
    text_model="llama2",
    vision_model="llava",
    temperature=0.7,
    max_tokens=1000
)
```

**Prerequisites for Ollama:**
1. Install Ollama: Visit [ollama.ai](https://ollama.ai) for installation instructions
2. Start Ollama server: `ollama serve`
3. Download models: `ollama pull llama2` and `ollama pull llava` (for multimodal); a quick connectivity check follows below
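
Before starting a campfire against Ollama, you can verify that the server is reachable and the models are pulled (a minimal sketch using Ollama's `GET /api/tags` endpoint and the `requests` package; adjust the base URL if your server runs elsewhere):

```python
import requests  # pip install requests

OLLAMA_URL = "http://localhost:11434"

# List the models the local Ollama server has pulled
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Available models:", models)

# Warn about any model this README's examples rely on
for required in ("llama2", "llava"):
    if not any(name.startswith(required) for name in models):
        print(f"Missing '{required}': run `ollama pull {required}`")
```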

## Examples

Check out the `demos/` directory for complete examples:

- `sequential_orchestration_demo.py`: Advanced task orchestration with detailed execution stages and interactive HTML reports
- `hospital_zeitgeist_demo.py`: Healthcare team collaboration with professional AI personas, action planning, and enhanced reporting
- `tax_app_team_demo.py`: Software development team collaboration with RAG integration, LLM-powered recommendations, and detailed execution analysis
- `zeitgeist_demo.py`: Internet knowledge and opinion mining with Zeitgeist
- `reddit_crisis_tracker.py`: Crisis detection system for social media
- `ollama_demo.py`: Comprehensive Ollama integration demonstration with text generation, chat, and multimodal capabilities
- `quick_ollama_test.py`: Quick test script to verify Ollama integration
- `run_demo.py`: Simple demonstration of basic concepts

All demos generate interactive HTML reports with expandable sections showing execution stages, RAG information, and detailed analysis.

## Development

### Setting up for Development

```bash
git clone https://github.com/campfires/campfires.git
cd campfires
pip install -e ".[dev]"
```

### Running Tests

```bash
pytest
```

### Code Formatting

```bash
black campfires/
```

### Type Checking

```bash
mypy campfires/
```

## Optional Dependencies

### Zeitgeist Support
```bash
pip install duckduckgo-search beautifulsoup4 requests
```

### AWS Support
```bash
pip install "campfires[aws]"
```

### Redis Support
```bash
pip install "campfires[redis]"
```

## License

MIT License - see LICENSE file for details.

## Support

- Documentation: https://campfires.readthedocs.io
- GitHub Issues: https://github.com/campfires/campfires/issues
- Discussions: https://github.com/campfires/campfires/discussions

## Experiential RAG Demo
The experiential RAG demo simulates job search experiences under different psychological contexts (supportive, challenging, neutral) and generates HTML reports analyzing behavioral and mental health impacts.

### Key Reports
- [Supportive Experience](demos/party_box/other/alex_job_search_supportive_experience_20251023_004109.html)
- [Challenging Experience](demos/party_box/other/alex_job_search_challenging_experience_20251023_004109.html)
- [Neutral Experience](demos/party_box/other/alex_job_search_neutral_experience_20251023_004109.html)

These reports include narrative storytelling, mental health outlook summaries, and actionable insights from simulated scenarios.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "campfires",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "llm, ai, orchestration, multimodal, framework",
    "author": "Campfires Team",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/ab/b8/322f254ef74dac2a05e484d8e575b43b8fbb0665ccc63cd3ecf037dfd83a/campfires-0.4.1.tar.gz",
    "platform": null,
    "description": "# Campfires Framework\r\n\r\nA Python framework for orchestrating multimodal Large Language Models (LLMs) and tools to achieve emergent, task-driven behavior.\r\n\r\n![Campfires Logo](images/logo.jpg)\r\n\r\n## The Valley of Campfires\r\n\r\nImagine a peaceful valley at twilight, dotted with glowing campfires. Around each campfire, a group of **Campers** (AI agents) sit together, sharing stories, analyzing information, and working on tasks. Each campfire represents a **Campfire** - a collaborative workspace where agents can communicate and coordinate their efforts.\r\n\r\n### The Campfire Community\r\n\r\nAt your campfire, **Campers** pass around **Torches** - glowing vessels that carry information, data, and insights from one agent to another. Each torch illuminates the conversation, bringing new perspectives and knowledge to the circle. As campers examine and discuss what each torch reveals, they add their own insights, transforming the information before passing it along.\r\n\r\n### The Party Box Exchange\r\n\r\nBetween the campfires sits a magical **Party Box** - a shared storage space where campfires can exchange gifts, artifacts, and resources. When your campers discover something valuable (documents, images, audio files, or data), they can place it in the Party Box for other campfires to discover and use. It's like a community treasure chest that connects all the campfires in the valley.\r\n\r\n![The Valley of Campfires](images/campfires.jpg)\r\n*A peaceful valley at twilight, where AI agents gather around glowing campfires to collaborate, share knowledge through torches, and exchange resources via the central Party Box. Each campfire represents a collaborative workspace, while the glowing Party Box in the center connects all communities across the valley.*\r\n\r\n### The Torch Bearer Network\r\n\r\nWhen something important happens at your campfire - a breakthrough discovery, a completed task, or an urgent message - a **Torch Bearer** can carry the news to other campfires throughout the valley. These torch bearers use the **MCP Protocol** (Model Context Protocol) to deliver information packets, ensuring that all campfires stay connected and informed about events, notifications, and shared resources.\r\n\r\n### Your Valley, Your Rules\r\n\r\nEach campfire operates independently, with its own group of specialized campers, but they're all part of the same vibrant valley community. Whether you're running a single intimate campfire or orchestrating multiple campfires across the valley, the framework provides the tools to create emergent, collaborative AI behaviors that feel as natural as friends gathering around a fire.\r\n\r\nWelcome to the valley. 
Pull up a log, grab a torch, and let's build something amazing together.\r\n\r\n## Features\r\n\r\n- **Modular Architecture**: Build complex AI workflows using composable \"Campers\" (AI agents)\r\n- **LLM Integration**: Built-in support for OpenRouter and Ollama (local LLM deployment)\r\n- **Enhanced Orchestration**: Advanced task orchestration with detailed execution stages, problem understanding, approach selection, and quality considerations\r\n- **Interactive HTML Reports**: Rich HTML reports with expandable sections showing execution stages, RAG information, customization details, and impact analysis\r\n- **Zeitgeist**: Internet knowledge and opinion mining for informed campers\r\n- **Action Planning**: Generate structured action plans with priorities and timelines\r\n- **Professional Character System**: Define unique personalities and perspectives with professional traits\r\n- **RAG Integration**: Retrieval-Augmented Generation with document context and state management\r\n- **MCP Protocol**: Model Context Protocol for inter-agent communication\r\n- **Storage Management**: Flexible \"Party Box\" system for asset storage\r\n- **State Management**: Persistent state tracking with SQLite backend\r\n- **Template System**: Dynamic prompt templating with Jinja2\r\n\r\n## Installation\r\n\r\n### From PyPI (Recommended)\r\n\r\n```bash\r\npip install campfires\r\n```\r\n\r\n### From Source\r\n\r\n```bash\r\ngit clone https://github.com/campfires/campfires.git\r\ncd campfires\r\npip install -e .\r\n```\r\n\r\n## Quick Start\r\n\r\n### Basic Usage\r\n\r\n```python\r\nimport asyncio\r\nfrom campfires import Campfire, Camper, Torch, OpenRouterConfig, LLMCamperMixin\r\n\r\nclass MyCamper(Camper, LLMCamperMixin):\r\n    async def process(self, torch: Torch) -> Torch:\r\n        # Process the input torch and return a new torch\r\n        response = await self.llm_completion(f\"Analyze: {torch.claim}\")\r\n        return Torch(\r\n            claim=response,\r\n            confidence=0.8,\r\n            metadata={\"processed_by\": \"MyCamper\"}\r\n        )\r\n\r\nasync def main():\r\n    # Setup LLM configuration\r\n    config = OpenRouterConfig(\r\n        api_key=\"your-openrouter-api-key\",\r\n        default_model=\"anthropic/claude-3-sonnet\"\r\n    )\r\n    \r\n    # Create camper and setup LLM\r\n    camper = MyCamper(\"my-camper\")\r\n    camper.setup_llm(config)\r\n    \r\n    # Create campfire and add camper\r\n    campfire = Campfire(\"my-campfire\")\r\n    campfire.add_camper(camper)\r\n    \r\n    # Start the campfire\r\n    await campfire.start()\r\n    \r\n    # Send a torch for processing\r\n    input_torch = Torch(claim=\"Hello, world!\")\r\n    await campfire.send_torch(input_torch)\r\n    \r\n    # Stop the campfire\r\n    await campfire.stop()\r\n\r\nif __name__ == \"__main__\":\r\n    asyncio.run(main())\r\n```\r\n\r\n### Local LLM with Ollama\r\n\r\n```python\r\nimport asyncio\r\nfrom campfires import Campfire, Camper, Torch, OllamaConfig, LLMCamperMixin\r\n\r\nclass LocalCamper(Camper, LLMCamperMixin):\r\n    async def process(self, torch: Torch) -> Torch:\r\n        # Process using local Ollama model\r\n        response = await self.llm_completion(f\"Analyze: {torch.claim}\")\r\n        return Torch(\r\n            claim=response,\r\n            confidence=0.8,\r\n            metadata={\"processed_by\": \"LocalCamper\", \"provider\": \"ollama\"}\r\n        )\r\n\r\nasync def main():\r\n    # Setup Ollama configuration (requires Ollama server running)\r\n    config = 
OllamaConfig(\r\n        base_url=\"http://localhost:11434\",\r\n        model=\"llama2\"\r\n    )\r\n    \r\n    # Create camper and setup LLM\r\n    camper = LocalCamper(\"local-camper\")\r\n    camper.setup_llm(config)\r\n    \r\n    # Create campfire and add camper\r\n    campfire = Campfire(\"local-campfire\")\r\n    campfire.add_camper(camper)\r\n    \r\n    # Start the campfire\r\n    await campfire.start()\r\n    \r\n    # Send a torch for processing\r\n    input_torch = Torch(claim=\"Hello from local AI!\")\r\n    await campfire.send_torch(input_torch)\r\n    \r\n    # Stop the campfire\r\n    await campfire.stop()\r\n\r\nif __name__ == \"__main__\":\r\n    asyncio.run(main())\r\n```\r\n\r\n### Crisis Detection Example\r\n\r\n```python\r\nimport asyncio\r\nfrom campfires import (\r\n    Campfire, Camper, Torch, \r\n    OpenRouterConfig, LLMCamperMixin,\r\n    MCPProtocol, AsyncQueueTransport\r\n)\r\n\r\nclass CrisisDetectionCamper(Camper, LLMCamperMixin):\r\n    async def process(self, torch: Torch) -> Torch:\r\n        # Analyze text for crisis indicators\r\n        prompt = f\"\"\"\r\n        Analyze this text for crisis indicators:\r\n        \"{torch.claim}\"\r\n        \r\n        Return JSON with crisis_probability (0-1) and key_indicators.\r\n        \"\"\"\r\n        \r\n        response = await self.llm_completion_with_mcp(\r\n            prompt, \r\n            channel=\"crisis_detection\"\r\n        )\r\n        \r\n        return Torch(\r\n            claim=f\"Crisis analysis: {response}\",\r\n            confidence=0.9,\r\n            metadata={\"analysis_type\": \"crisis_detection\"}\r\n        )\r\n\r\nasync def main():\r\n    # Setup MCP protocol for inter-camper communication\r\n    transport = AsyncQueueTransport()\r\n    mcp_protocol = MCPProtocol(transport)\r\n    await mcp_protocol.start()\r\n    \r\n    # Setup LLM configuration\r\n    config = OpenRouterConfig(\r\n        api_key=\"your-openrouter-api-key\",\r\n        default_model=\"anthropic/claude-3-sonnet\"\r\n    )\r\n    \r\n    # Create and configure camper\r\n    camper = CrisisDetectionCamper(\"crisis-detector\")\r\n    camper.setup_llm(config, mcp_protocol)\r\n    \r\n    # Create campfire with MCP support\r\n    campfire = Campfire(\"crisis-campfire\", mcp_protocol=mcp_protocol)\r\n    campfire.add_camper(camper)\r\n    \r\n    await campfire.start()\r\n    \r\n    # Process some text\r\n    torch = Torch(claim=\"I'm feeling really overwhelmed and don't know what to do\")\r\n    await campfire.send_torch(torch)\r\n    \r\n    await campfire.stop()\r\n    await mcp_protocol.stop()\r\n\r\nif __name__ == \"__main__\":\r\n    asyncio.run(main())\r\n```\r\n\r\n### LLM-Enabled Campers with Custom Prompts\r\n\r\nThe framework supports advanced LLM integration through the `override_prompt` method, allowing campers to customize their LLM interactions:\r\n\r\n```python\r\nimport asyncio\r\nfrom campfires import Camper, Torch, OpenRouterConfig, LLMCamperMixin\r\n\r\nclass ExpertAnalyzer(Camper, LLMCamperMixin):\r\n    def __init__(self, name: str, expertise: str):\r\n        super().__init__(name)\r\n        self.expertise = expertise\r\n        \r\n    def override_prompt(self, torch: Torch) -> dict:\r\n        \"\"\"Custom prompt generation with LLM call\"\"\"\r\n        try:\r\n            # Create enhanced prompt based on expertise\r\n            enhanced_prompt = f\"\"\"\r\n            You are an expert {self.expertise}. 
Analyze the following information \r\n            and provide professional insights:\r\n            \r\n            Input: {torch.claim}\r\n            \r\n            Please provide:\r\n            1. Key insights from your {self.expertise} perspective\r\n            2. Potential concerns or opportunities\r\n            3. Recommended next steps\r\n            \"\"\"\r\n            \r\n            # Make LLM call directly in override_prompt\r\n            response = self.llm_completion_with_mcp(enhanced_prompt)\r\n            \r\n            return {\r\n                \"claim\": response,\r\n                \"confidence\": 0.85,\r\n                \"metadata\": {\r\n                    \"expertise\": self.expertise,\r\n                    \"analysis_type\": \"expert_review\"\r\n                }\r\n            }\r\n        except Exception as e:\r\n            return {\r\n                \"claim\": f\"Analysis failed: {str(e)}\",\r\n                \"confidence\": 0.1,\r\n                \"metadata\": {\"error\": True}\r\n            }\r\n\r\nasync def main():\r\n    # Setup LLM configuration\r\n    config = OpenRouterConfig(api_key=\"your-openrouter-api-key\")\r\n    \r\n    # Create expert campers\r\n    security_expert = ExpertAnalyzer(\"security-expert\", \"cybersecurity\")\r\n    security_expert.setup_llm(config)\r\n    \r\n    finance_expert = ExpertAnalyzer(\"finance-expert\", \"financial analysis\")\r\n    finance_expert.setup_llm(config)\r\n    \r\n    # Create campfire and add experts\r\n    campfire = Campfire(\"expert-analysis\")\r\n    campfire.add_camper(security_expert)\r\n    campfire.add_camper(finance_expert)\r\n    \r\n    await campfire.start()\r\n    \r\n    # Analyze a business proposal\r\n    torch = Torch(claim=\"We're considering implementing a new payment system\")\r\n    await campfire.send_torch(torch)\r\n    \r\n    await campfire.stop()\r\n\r\nif __name__ == \"__main__\":\r\n    asyncio.run(main())\r\n```\r\n\r\n### Team Collaboration with RAG Integration\r\n\r\nBuild intelligent team members that can access and reason over document collections:\r\n\r\n```python\r\nimport asyncio\r\nfrom campfires import Camper, Torch, OpenRouterConfig, LLMCamperMixin\r\n\r\nclass TeamMember(Camper, LLMCamperMixin):\r\n    def __init__(self, name: str, role: str, rag_system_prompt: str):\r\n        super().__init__(name)\r\n        self.role = role\r\n        self.rag_system_prompt = rag_system_prompt\r\n        \r\n    def override_prompt(self, torch: Torch) -> dict:\r\n        \"\"\"Generate responses using RAG-enhanced prompts\"\"\"\r\n        try:\r\n            # Combine RAG context with user question\r\n            enhanced_prompt = f\"\"\"\r\n            {self.rag_system_prompt}\r\n            \r\n            Role: {self.role}\r\n            Question: {torch.claim}\r\n            \r\n            Please provide a detailed response based on your role and the \r\n            available context. 
Include specific recommendations and actionable insights.\r\n            \"\"\"\r\n            \r\n            # Make LLM call with enhanced context\r\n            response = self.llm_completion_with_mcp(enhanced_prompt)\r\n            \r\n            return {\r\n                \"claim\": response,\r\n                \"confidence\": 0.9,\r\n                \"metadata\": {\r\n                    \"role\": self.role,\r\n                    \"rag_enhanced\": True,\r\n                    \"response_type\": \"team_recommendation\"\r\n                }\r\n            }\r\n        except Exception as e:\r\n            return {\r\n                \"claim\": f\"Unable to provide recommendation: {str(e)}\",\r\n                \"confidence\": 0.1,\r\n                \"metadata\": {\"error\": True, \"role\": self.role}\r\n            }\r\n\r\nasync def main():\r\n    # Setup LLM configuration\r\n    config = OpenRouterConfig(api_key=\"your-openrouter-api-key\")\r\n    \r\n    # RAG system prompt with document context\r\n    rag_context = \"\"\"\r\n    You have access to comprehensive documentation about our tax application system.\r\n    The system handles tax calculations, user management, and compliance reporting.\r\n    Key components include: authentication service, calculation engine, reporting module.\r\n    \"\"\"\r\n    \r\n    # Create team members with different roles\r\n    backend_engineer = TeamMember(\r\n        \"backend-engineer\", \r\n        \"Senior Backend Engineer\",\r\n        rag_context\r\n    )\r\n    backend_engineer.setup_llm(config)\r\n    \r\n    devops_engineer = TeamMember(\r\n        \"devops-engineer\",\r\n        \"Senior DevOps Engineer\", \r\n        rag_context\r\n    )\r\n    devops_engineer.setup_llm(config)\r\n    \r\n    # Create team campfire\r\n    team_campfire = Campfire(\"development-team\")\r\n    team_campfire.add_camper(backend_engineer)\r\n    team_campfire.add_camper(devops_engineer)\r\n    \r\n    await team_campfire.start()\r\n    \r\n    # Ask for team input on a technical decision\r\n    question = Torch(claim=\"How should we implement user authentication for the new tax module?\")\r\n    await team_campfire.send_torch(question)\r\n    \r\n    await team_campfire.stop()\r\n\r\nif __name__ == \"__main__\":\r\n    asyncio.run(main())\r\n```\r\n\r\n## Using Enhanced Orchestration\r\n\r\nThe Enhanced Orchestration system provides sophisticated task management with detailed execution tracking. Here's how to leverage these powerful features:\r\n\r\n### Basic Enhanced Orchestration Setup\r\n\r\n```python\r\nimport asyncio\r\nfrom campfires import Campfire, Camper, LLMCamperMixin, OpenRouterConfig\r\nfrom campfires.core.enhanced_orchestration import EnhancedOrchestration\r\n\r\nclass AnalystCamper(Camper, LLMCamperMixin):\r\n    def __init__(self, name: str, expertise: str):\r\n        super().__init__(name)\r\n        self.expertise = expertise\r\n        \r\n    async def override_prompt(self, raw_prompt: str, system_prompt: str = None) -> dict:\r\n        \"\"\"Enhanced prompt processing with detailed execution tracking\"\"\"\r\n        try:\r\n            enhanced_prompt = f\"\"\"\r\n            As a {self.expertise} expert, analyze the following:\r\n            {raw_prompt}\r\n            \r\n            Provide detailed insights including:\r\n            1. Problem understanding\r\n            2. Approach selection\r\n            3. Key considerations\r\n            4. 
Recommended actions\r\n            \"\"\"\r\n            \r\n            response = await self.llm_completion(enhanced_prompt)\r\n            \r\n            return {\r\n                \"claim\": response,\r\n                \"confidence\": 0.9,\r\n                \"metadata\": {\r\n                    \"expertise\": self.expertise,\r\n                    \"analysis_depth\": \"comprehensive\",\r\n                    \"execution_stage\": \"expert_analysis\"\r\n                }\r\n            }\r\n        except Exception as e:\r\n            return {\r\n                \"claim\": f\"Analysis failed: {str(e)}\",\r\n                \"confidence\": 0.1,\r\n                \"metadata\": {\"error\": True}\r\n            }\r\n\r\nasync def run_enhanced_orchestration():\r\n    # Setup LLM configuration\r\n    config = OpenRouterConfig(api_key=\"your-api-key\")\r\n    \r\n    # Create specialized campers\r\n    business_analyst = AnalystCamper(\"business-analyst\", \"business strategy\")\r\n    business_analyst.setup_llm(config)\r\n    \r\n    tech_analyst = AnalystCamper(\"tech-analyst\", \"technology architecture\")\r\n    tech_analyst.setup_llm(config)\r\n    \r\n    # Create campfire with enhanced orchestration\r\n    campfire = Campfire(\"strategic-analysis\")\r\n    campfire.add_camper(business_analyst)\r\n    campfire.add_camper(tech_analyst)\r\n    \r\n    # The enhanced orchestration automatically captures:\r\n    # - Detailed execution stages\r\n    # - Problem understanding phases\r\n    # - Approach selection reasoning\r\n    # - Quality considerations\r\n    # - Risk assessments\r\n    \r\n    await campfire.start()\r\n    \r\n    # Process a complex business question\r\n    from campfires import Torch\r\n    question = Torch(\r\n        claim=\"Should we migrate our legacy system to microservices?\",\r\n        metadata={\"priority\": \"high\", \"stakeholders\": [\"engineering\", \"business\"]}\r\n    )\r\n    \r\n    await campfire.send_torch(question)\r\n    await campfire.stop()\r\n    \r\n    # Enhanced HTML report will be generated automatically\r\n    print(\"Check the generated HTML report for detailed execution analysis!\")\r\n\r\nif __name__ == \"__main__\":\r\n    asyncio.run(run_enhanced_orchestration())\r\n```\r\n\r\n### Understanding the Interactive HTML Reports\r\n\r\nThe enhanced orchestration system generates rich HTML reports with expandable sections:\r\n\r\n#### **Execution Stages Section** \ud83d\udd0d\r\nClick the arrow to expand and see:\r\n- **Problem Understanding**: How campers interpreted the task\r\n- **Approach Selection**: Why specific strategies were chosen\r\n- **Execution Strategy**: Step-by-step implementation details\r\n- **Quality Considerations**: Quality checks and validations performed\r\n- **Risk Assessment**: Potential risks identified and mitigation strategies\r\n\r\n#### **RAG Information Section** \ud83d\udcda\r\nReveals how document context was used:\r\n- **Document Retrieval**: Which documents were accessed\r\n- **Context Integration**: How information was incorporated\r\n- **Relevance Scoring**: Why specific content was prioritized\r\n- **State Management**: How RAG state evolved during processing\r\n\r\n#### **Customization Details Section** \u2699\ufe0f\r\nShows how campers adapted their responses:\r\n- **Role-Based Adaptations**: How expertise influenced analysis\r\n- **Personality Integration**: How character traits affected responses\r\n- **Context Awareness**: How situational factors were considered\r\n\r\n#### **Impact Analysis 
#### **Impact Analysis Section** 📊
Provides outcome assessment:
- **Decision Quality**: Assessment of recommendation strength
- **Confidence Levels**: Reliability indicators for each insight
- **Follow-up Actions**: Suggested next steps
- **Success Metrics**: How to measure implementation success
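Since the reports are ordinary HTML files, you can open the newest one straight from a script. A minimal, standard-library-only sketch; the `demos/party_box/` location follows the demo report links later in this README and is an assumption - point the glob at wherever your own runs write their reports:

```python
import glob
import os
import webbrowser

# Assumed report location; adjust the pattern for your own output directory
reports = glob.glob("demos/party_box/**/*.html", recursive=True)
if reports:
    latest = max(reports, key=os.path.getmtime)  # most recently written report
    webbrowser.open("file://" + os.path.abspath(latest))
else:
    print("No HTML reports found yet - run an orchestration first.")
```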
### Advanced Multi-Camper Orchestration

```python
import asyncio
from campfires import Campfire, Camper, LLMCamperMixin, OpenRouterConfig

class SpecializedTeamMember(Camper, LLMCamperMixin):
    def __init__(self, name: str, role: str, personality: str, concerns: list):
        super().__init__(name)
        self.role = role
        self.personality = personality
        self.concerns = concerns
        
    async def override_prompt(self, raw_prompt: str, system_prompt: str = None) -> dict:
        """Role-specific analysis with personality integration"""
        try:
            role_prompt = f"""
            You are a {self.role} with the following personality: {self.personality}
            Your primary concerns are: {', '.join(self.concerns)}
            
            Task: {raw_prompt}
            
            Provide analysis from your unique perspective, considering:
            1. How this aligns with your role responsibilities
            2. What concerns you might have
            3. What opportunities you see
            4. Your recommended approach
            """
            
            response = await self.llm_completion(role_prompt)
            
            return {
                "claim": response,
                "confidence": 0.85,
                "metadata": {
                    "role": self.role,
                    "personality_influence": self.personality,
                    "key_concerns": self.concerns,
                    "perspective_type": "role_specialized"
                }
            }
        except Exception as e:
            return {
                "claim": f"Unable to provide {self.role} perspective: {str(e)}",
                "confidence": 0.1,
                "metadata": {"error": True, "role": self.role}
            }

async def run_team_orchestration():
    config = OpenRouterConfig(api_key="your-api-key")
    
    # Create diverse team members
    team_members = [
        SpecializedTeamMember(
            "sarah-pm", 
            "Project Manager",
            "detail-oriented and deadline-focused",
            ["timeline adherence", "resource allocation", "stakeholder communication"]
        ),
        SpecializedTeamMember(
            "alex-dev",
            "Senior Developer", 
            "pragmatic and quality-focused",
            ["code maintainability", "technical debt", "performance optimization"]
        ),
        SpecializedTeamMember(
            "jordan-ux",
            "UX Designer",
            "user-centric and creative",
            ["user experience", "accessibility", "design consistency"]
        )
    ]
    
    # Setup LLM for each team member
    for member in team_members:
        member.setup_llm(config)
    
    # Create collaborative campfire
    team_campfire = Campfire("product-development-team")
    for member in team_members:
        team_campfire.add_camper(member)
    
    await team_campfire.start()
    
    # Collaborative decision making
    from campfires import Torch
    decision = Torch(
        claim="We need to redesign our mobile app's onboarding flow to improve user retention",
        metadata={
            "urgency": "high",
            "impact": "user_retention",
            "timeline": "6_weeks"
        }
    )
    
    await team_campfire.send_torch(decision)
    await team_campfire.stop()
    
    print("Team collaboration complete! Check the HTML report for detailed insights from each perspective.")

if __name__ == "__main__":
    asyncio.run(run_team_orchestration())
```

### Leveraging RAG Integration

```python
import asyncio
from campfires import Campfire, Camper, LLMCamperMixin, OpenRouterConfig

class RAGEnabledCamper(Camper, LLMCamperMixin):
    def __init__(self, name: str, domain: str, rag_context: str):
        super().__init__(name)
        self.domain = domain
        self.rag_context = rag_context
        
    async def override_prompt(self, raw_prompt: str, system_prompt: str = None) -> dict:
        """RAG-enhanced analysis with document context"""
        try:
            rag_enhanced_prompt = f"""
            Domain Expertise: {self.domain}
            
            Available Context:
            {self.rag_context}
            
            Question: {raw_prompt}
            
            Using the provided context and your {self.domain} expertise:
            1. Identify relevant information from the context
            2. Apply domain-specific analysis
            3. Provide evidence-based recommendations
            4. Highlight any gaps in available information
            """
            
            response = await self.llm_completion(rag_enhanced_prompt)
            
            return {
                "claim": response,
                "confidence": 0.92,
                "metadata": {
                    "domain": self.domain,
                    "rag_enhanced": True,
                    "context_utilized": True,
                    "evidence_based": True
                }
            }
        except Exception as e:
            return {
                "claim": f"RAG analysis failed: {str(e)}",
                "confidence": 0.1,
                "metadata": {"error": True, "domain": self.domain}
            }

async def run_rag_orchestration():
    config = OpenRouterConfig(api_key="your-api-key")
    
    # Sample RAG context (in practice, this would come from document retrieval)
    financial_context = """
    Company Financial Overview:
    - Q3 Revenue: $2.4M (15% growth)
    - Operating Expenses: $1.8M
    - Cash Flow: Positive $600K
    - Key Investments: R&D (30%), Marketing (25%), Operations (45%)
    - Market Position: Growing market share in fintech sector
    """
    
    # Create RAG-enabled camper
    financial_analyst = RAGEnabledCamper(
        "financial-analyst",
        "financial analysis",
        financial_context
    )
    financial_analyst.setup_llm(config)
    
    # Create campfire
    analysis_campfire = Campfire("financial-analysis")
    analysis_campfire.add_camper(financial_analyst)
    
    await analysis_campfire.start()
    
    # Ask a context-aware question
    from campfires import Torch
    question = Torch(
        claim="Should we increase our R&D investment by 50% next quarter?",
        metadata={"analysis_type": "investment_decision", "timeframe": "Q4"}
    )
    
    await analysis_campfire.send_torch(question)
    await analysis_campfire.stop()
    
    print("RAG-enhanced analysis complete! The HTML report shows how document context influenced the decision.")

if __name__ == "__main__":
    asyncio.run(run_rag_orchestration())
```

## Core Concepts

### Torches - The Light of Knowledge
In our valley, **Torches** are glowing vessels that carry information, insights, and data between campers. Each torch illuminates a piece of knowledge with its own confidence level - some burn bright with certainty, others flicker with uncertainty:

```python
from campfires import Torch

torch = Torch(
    claim="The weather is sunny today",
    confidence=0.95,  # How brightly this torch burns
    metadata={"source": "weather_api", "location": "NYC"}
)
```

### Campers - The Valley Inhabitants
**Campers** are the AI agents sitting around your campfire. Each camper has their own expertise and personality.
When a torch is passed to them, they examine it, add their insights, and pass along a new torch with their findings:

```python
from campfires import Camper, Torch

class WeatherCamper(Camper):
    async def process(self, torch: Torch) -> Torch:
        # This camper specializes in weather analysis
        return Torch(claim=f"Weather insight: {torch.claim}")
```

### LLMCamperMixin - Bringing Intelligence to Your Campers
The **LLMCamperMixin** gives your campers the ability to think and reason using Large Language Models. When you mix this into your camper class, they gain access to powerful AI capabilities:

```python
from campfires import Camper, LLMCamperMixin, OpenRouterConfig, Torch

class IntelligentCamper(Camper, LLMCamperMixin):
    def __init__(self, name: str):
        super().__init__(name)
        # Setup LLM capabilities
        config = OpenRouterConfig(api_key="your-api-key")
        self.setup_llm(config)
    
    async def process(self, torch: Torch) -> Torch:
        # Use LLM to analyze the torch content
        response = await self.llm_completion_with_mcp(
            f"Analyze this: {torch.claim}"
        )
        return Torch(claim=response, confidence=0.9)
    
    async def override_prompt(self, raw_prompt: str, system_prompt: str = None) -> dict:
        # Customize how the LLM processes information
        enhanced_prompt = f"As an expert, analyze: {raw_prompt}"
        llm_response = await self.llm_completion_with_mcp(enhanced_prompt)
        
        return {
            "claim": llm_response,
            "confidence": 0.85,
            "metadata": {"enhanced": True}
        }
```

### Campfires - The Gathering Circles
A **Campfire** is where your campers gather to collaborate. It orchestrates the conversation, ensuring torches are passed in the right order and that every camper gets a chance to contribute their expertise:

```python
from campfires import Campfire

campfire = Campfire("weather-analysis")
campfire.add_camper(weather_camper)
campfire.add_camper(analysis_camper)
# Now they can work together around the fire
```
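With the circle assembled, the same `start` / `send_torch` / `stop` flow from the examples above lights the fire. A minimal driver - it assumes the base `Camper` constructor takes the camper's name, as the subclasses above suggest:

```python
import asyncio
from campfires import Campfire, Torch

async def run_weather_circle():
    campfire = Campfire("weather-analysis")
    campfire.add_camper(WeatherCamper("weather-watcher"))  # defined in the Campers example above
    await campfire.start()
    # Pass the first torch into the circle
    await campfire.send_torch(Torch(claim="The weather is sunny today"))
    await campfire.stop()

asyncio.run(run_weather_circle())
```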
### YAML Save/Restore - Preserving the Valley's Memory
The **YAML Save/Restore** functionality allows you to save your campfire configurations and restore them later, preserving the exact setup of campers, their roles, and configurations. This is perfect for sharing campfire setups or recreating successful collaborations:

```python
from campfires import Campfire, CampfireManager

# Save an individual campfire
campfire = Campfire("analysis-team")
campfire.add_camper(analyst_camper)
campfire.add_camper(researcher_camper)

# Save to YAML with a flexible location and template-based naming
await campfire.save_to_yaml(
    location="./saved_campfires",  # Directory or full path
    filename_template="{name}_backup_{timestamp}.yaml"  # Optional template
)

# Restore from YAML
restored_campfire = await Campfire.load_from_yaml("./saved_campfires/analysis-team_backup_20241201_143022.yaml")

# Bulk operations with CampfireManager
manager = CampfireManager()
manager.add_campfire(campfire1)
manager.add_campfire(campfire2)

# Save all campfires at once
await manager.save_all_to_yaml("./campfire_backups")

# Load multiple campfires
loaded_campfires = await manager.load_campfires_from_directory("./campfire_backups")
```

The YAML files contain complete campfire configurations, including:
- Campfire names and metadata
- All camper configurations and roles
- LLM settings and API configurations
- Custom attributes and initialization parameters

### Zeitgeist - The Valley's Internet Knowledge
**Zeitgeist** gives your campers the ability to search the internet for current information, opinions, and trends relevant to their roles. Like having a wise oracle at the campfire who can instantly access the collective knowledge of the world:

```python
from campfires import Camper, LLMCamperMixin

class ResearchCamper(LLMCamperMixin, Camper):
    def __init__(self, name: str, role: str, **kwargs):
        super().__init__(name=name, **kwargs)
        self.set_role(role)  # 'academic', 'developer', 'journalist', etc.
        self.enable_zeitgeist()
    
    async def research_topic(self, topic: str):
        # Get current internet knowledge about the topic
        zeitgeist_info = await self.get_zeitgeist(topic)
        role_opinions = await self.get_role_opinions(topic)
        trending_tools = await self.get_trending_tools(topic)
        return {
            'zeitgeist': zeitgeist_info,
            'opinions': role_opinions,
            'tools': trending_tools
        }
```
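A short driver to try it out; it reuses the `ResearchCamper` defined above, picks an arbitrary topic, and assumes the optional Zeitgeist dependencies (listed under Optional Dependencies below) are installed:

```python
import asyncio

async def main():
    # Role names follow the examples in the comment above
    researcher = ResearchCamper("trend-researcher", "developer")
    findings = await researcher.research_topic("multimodal LLM frameworks")
    print(findings["zeitgeist"])
    print(findings["tools"])

asyncio.run(main())
```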
### Enhanced Orchestration - The Valley's Wisdom
The **Enhanced Orchestration** system provides sophisticated task management with detailed execution stages. When campers work on complex tasks, the system captures their thought processes, approach selection, and quality considerations:

```python
from campfires import Campfire, EnhancedOrchestration

# Create a campfire with enhanced orchestration
campfire = Campfire("strategic-planning")
orchestration = EnhancedOrchestration(campfire)

# The orchestration system automatically captures:
# - Problem understanding and analysis
# - Approach selection and reasoning
# - Execution strategy and implementation
# - Quality considerations and risk assessment
# - RAG context and document integration
# - Final outcomes and impact analysis

# All of this information is available in interactive HTML reports
# with expandable sections for detailed exploration
```

### Interactive HTML Reports - Illuminating the Process
The framework generates rich HTML reports that reveal the inner workings of your campfire collaborations. These reports feature expandable sections that show:

- **Execution Stages**: Step-by-step breakdown of how tasks were approached and executed
- **RAG Information**: Details about document retrieval and context integration
- **Customization**: How campers adapted their responses based on their roles and expertise
- **Impact Analysis**: Assessment of outcomes and recommendations for future improvements

Click the arrow icons in the report headers to expand sections and explore the detailed execution process.

### Party Box - The Valley's Treasure Chest
The **Party Box** is the shared storage system where campfires can exchange valuable artifacts - documents, images, audio files, and data. It's like a magical chest that connects all campfires in the valley:

```python
from campfires import LocalDriver

# Store something in the party box
party_box = LocalDriver("./demo_storage")
await party_box.store_asset(file_data, "shared_document.pdf")
```
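The `file_data` in the snippet above has to come from somewhere. Here's a sketch of the full round trip that reads a local file and shares it; passing raw bytes to `store_asset` follows the call shape shown above and is an assumption about the driver's accepted input:

```python
import asyncio
from campfires import LocalDriver

async def share_document():
    party_box = LocalDriver("./demo_storage")
    # Read a local artifact and place it in the shared party box
    with open("shared_document.pdf", "rb") as f:
        file_data = f.read()
    await party_box.store_asset(file_data, "shared_document.pdf")

asyncio.run(share_document())
```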
### MCP Protocol - The Torch Bearer Network
The **Model Context Protocol** is how torch bearers carry messages between campfires throughout the valley. It ensures that important information, events, and notifications reach every campfire that needs to know:

```python
from campfires import MCPProtocol, AsyncQueueTransport

transport = AsyncQueueTransport()
mcp_protocol = MCPProtocol(transport)
await mcp_protocol.start()
# Now torch bearers can carry messages across the valley
```

## Configuration

### Environment Variables

Create a `.env` file in your project root:

```env
OPENROUTER_API_KEY=your_openrouter_api_key
OPENROUTER_DEFAULT_MODEL=anthropic/claude-3-sonnet
CAMPFIRES_LOG_LEVEL=INFO
CAMPFIRES_DB_PATH=./campfires.db
```

### OpenRouter Configuration

```python
from campfires import OpenRouterConfig

config = OpenRouterConfig(
    api_key="your-api-key",
    default_model="anthropic/claude-3-sonnet",
    max_tokens=1000,
    temperature=0.7
)
```
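Rather than hard-coding the key, you can assemble the config from the `.env` values above. A minimal sketch; it assumes you load the file yourself (here with the `python-dotenv` package), since whether the framework reads `.env` automatically isn't documented here:

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv
from campfires import OpenRouterConfig

load_dotenv()  # reads .env from the project root into os.environ

config = OpenRouterConfig(
    api_key=os.environ["OPENROUTER_API_KEY"],
    default_model=os.getenv("OPENROUTER_DEFAULT_MODEL", "anthropic/claude-3-sonnet"),
)
```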
### Ollama Configuration

For local LLM deployment with Ollama:

```python
from campfires import OllamaConfig, MultimodalOllamaConfig

# Basic text generation
config = OllamaConfig(
    base_url="http://localhost:11434",
    model="llama2",
    temperature=0.7,
    max_tokens=1000
)

# Multimodal capabilities (text + images)
multimodal_config = MultimodalOllamaConfig(
    base_url="http://localhost:11434",
    text_model="llama2",
    vision_model="llava",
    temperature=0.7,
    max_tokens=1000
)
```

**Prerequisites for Ollama:**
1. Install Ollama: visit [ollama.ai](https://ollama.ai) for installation instructions
2. Start the Ollama server: `ollama serve`
3. Download models: `ollama pull llama2` and `ollama pull llava` (for multimodal)

## Examples

Check out the `demos/` directory for complete examples:

- `sequential_orchestration_demo.py`: Advanced task orchestration with detailed execution stages and interactive HTML reports
- `hospital_zeitgeist_demo.py`: Healthcare team collaboration with professional AI personas, action planning, and enhanced reporting
- `tax_app_team_demo.py`: Software development team collaboration with RAG integration, LLM-powered recommendations, and detailed execution analysis
- `zeitgeist_demo.py`: Internet knowledge and opinion mining with Zeitgeist
- `reddit_crisis_tracker.py`: Crisis detection system for social media
- `ollama_demo.py`: Comprehensive Ollama integration demonstration with text generation, chat, and multimodal capabilities
- `quick_ollama_test.py`: Quick test script to verify the Ollama integration
- `run_demo.py`: Simple demonstration of basic concepts

All demos generate interactive HTML reports with expandable sections showing execution stages, RAG information, and detailed analysis.

## Development

### Setting up for Development

```bash
git clone https://github.com/campfires/campfires.git
cd campfires
pip install -e ".[dev]"
```

### Running Tests

```bash
pytest
```

### Code Formatting

```bash
black campfires/
```

### Type Checking

```bash
mypy campfires/
```

## Optional Dependencies

### Zeitgeist Support
```bash
pip install duckduckgo-search beautifulsoup4 requests
```

### AWS Support
```bash
pip install "campfires[aws]"
```

### Redis Support
```bash
pip install "campfires[redis]"
```

## License

MIT License - see LICENSE file for details.

## Support

- Documentation: https://campfires.readthedocs.io
- GitHub Issues: https://github.com/campfires/campfires/issues
- Discussions: https://github.com/campfires/campfires/discussions

## Experiential RAG Demo

The experiential RAG demo simulates job search experiences under different psychological contexts (supportive, challenging, neutral) and generates HTML reports analyzing behavioral and mental health impacts.

### Key Reports
- [Supportive Experience](demos/party_box/other/alex_job_search_supportive_experience_20251023_004109.html)
- [Challenging Experience](demos/party_box/other/alex_job_search_challenging_experience_20251023_004109.html)
- [Neutral Experience](demos/party_box/other/alex_job_search_neutral_experience_20251023_004109.html)

These reports include narrative storytelling, mental health outlook summaries, and actionable insights from simulated scenarios.
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "A Python framework for orchestrating multimodal LLMs and tools",
    "version": "0.4.1",
    "project_urls": {
        "Documentation": "https://campfires.readthedocs.io",
        "Homepage": "https://github.com/campfires/campfires",
        "Repository": "https://github.com/campfires/campfires"
    },
    "split_keywords": [
        "llm",
        " ai",
        " orchestration",
        " multimodal",
        " framework"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "2d3d89b5fe3853be48573342a3b0f4ff662d7f3b32e61d0f3b0b09fa362c76b3",
                "md5": "381bc1e36ed3ba0bb6c82faa768f2d10",
                "sha256": "8c852bfa944e720912509011b229cab76115808cfdc1d2310c700f8799a56dbc"
            },
            "downloads": -1,
            "filename": "campfires-0.4.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "381bc1e36ed3ba0bb6c82faa768f2d10",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 202719,
            "upload_time": "2025-10-22T23:49:16",
            "upload_time_iso_8601": "2025-10-22T23:49:16.400544Z",
            "url": "https://files.pythonhosted.org/packages/2d/3d/89b5fe3853be48573342a3b0f4ff662d7f3b32e61d0f3b0b09fa362c76b3/campfires-0.4.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "abb8322f254ef74dac2a05e484d8e575b43b8fbb0665ccc63cd3ecf037dfd83a",
                "md5": "ef4c7d38514549bc97613e51be09b610",
                "sha256": "24133af6abfd1a46b3e655d585b723e3e5f1bc8b44591ca4e9e5f3d8b4ca93e4"
            },
            "downloads": -1,
            "filename": "campfires-0.4.1.tar.gz",
            "has_sig": false,
            "md5_digest": "ef4c7d38514549bc97613e51be09b610",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 309088,
            "upload_time": "2025-10-22T23:49:17",
            "upload_time_iso_8601": "2025-10-22T23:49:17.961221Z",
            "url": "https://files.pythonhosted.org/packages/ab/b8/322f254ef74dac2a05e484d8e575b43b8fbb0665ccc63cd3ecf037dfd83a/campfires-0.4.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-22 23:49:17",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "campfires",
    "github_project": "campfires",
    "github_not_found": true,
    "lcname": "campfires"
}
        