Codemni


Name: Codemni
Version: 1.2.3
Summary: Build Intelligent AI Agents with Full Control and Zero Complexity - Lightweight, modular Python toolkit for AI development
Upload time: 2025-10-26 05:00:20
Author: CodexJitin
Requires Python: >=3.8
Keywords: ai, llm, agent, openai, anthropic, google, gemini, groq, ollama, memory, tool-calling
Requirements: requests, python-dotenv
            <div align="center">
  <img src="https://raw.githubusercontent.com/CodexJitin/Codemni/main/assets/codemni-logo.jpg" alt="Codemni Logo" width="200"/>
  
# Codemni

[![Python](https://img.shields.io/badge/Python-3.8%2B-blue.svg)](https://www.python.org/)
[![License](https://img.shields.io/badge/License-Proprietary-red.svg)](LICENSE)
[![Version](https://img.shields.io/badge/Version-1.2.3-green.svg)](https://github.com/CodexJitin/Codemni)
[![PyPI](https://img.shields.io/badge/PyPI-Codemni-brightgreen.svg)](https://pypi.org/project/Codemni/)

### *Build Intelligent AI Agents with Full Control and Zero Complexity*

**Lightweight • Modular • Production-Ready**

</div>

Codemni is a Python framework that puts you in control of AI agent development. Build powerful tool-calling agents with custom logic, multi-provider LLM support, and flexible memory—without the bloat of heavy abstractions.

## Table of Contents

- [Overview](#overview)
- [Key Features](#key-features)
- [Architecture](#architecture)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Core Components](#core-components)
  - [AI Agents](#1-ai-agents)
  - [LLM Module](#2-llm-module)
  - [Memory Module](#3-memory-module)
  - [Prebuild Tools](#4-prebuild-tools)
- [Complete Examples](#complete-examples)
- [Comparison Guide](#comparison-guide)
- [Best Practices](#best-practices)
- [Advanced Usage](#advanced-usage)
- [Project Structure](#project-structure)
- [Development](#development)
- [Contributing](#contributing)
- [License](#license)
- [Support](#support)
- [Author](#author)
- [Acknowledgments](#acknowledgments)
- [Changelog](#changelog)
- [Quick Reference](#quick-reference)

## Overview

**What is Codemni?**

Codemni empowers developers to create sophisticated AI agents without getting lost in complexity. Whether you're building chatbots, automation tools, or research assistants, Codemni provides the essential building blocks while keeping your code clean and maintainable.

**Why Choose Codemni?**

- **Full Control**: Write your own logic without fighting framework constraints
- **Clean Architecture**: Minimal abstractions mean you understand exactly what's happening
- **Production-Ready**: Built-in retries, error handling, and timeouts from day one
- **Truly Modular**: Use only what you need—every component works independently
- **Multi-Provider**: Switch between OpenAI, Google, Anthropic, Groq, and Ollama seamlessly

**Core Capabilities:**

- **3 Agent Types**: Standard, Reasoning, and Deep Reasoning agents for different use cases
- **5 LLM Providers**: OpenAI, Google Gemini, Anthropic Claude, Groq, and Ollama
- **4 Memory Strategies**: Buffer, Window, Token Buffer, and Summary Memory
- **Prebuild Tools**: Ready-to-use tools like Wikipedia integration
- **Custom Tools**: Add your own tools with simple Python functions

## Key Features

### Intelligent AI Agents

- **Three agent types** with varying reasoning capabilities
- **Dynamic tool selection** and execution based on context
- **Custom prompt support** to shape agent personality
- **Verbose mode** for debugging and monitoring
- **Memory integration** for stateful conversations

### Multi-Provider LLM Support

- **Unified interface** across all major LLM providers
- **Automatic retries** with exponential backoff for reliability (sketched after this list)
- **Configurable timeouts** to prevent hanging requests
- **Function and class-based APIs** for flexibility
- **Clear exception hierarchies** for better error handling
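
Conceptually, the retry behavior looks like the following minimal sketch. This is illustrative only: the function and parameter names (`call_with_retries`, `max_retries`, `base_delay`) are assumptions for the example, not Codemni's documented internals.

```python
import random
import time

def call_with_retries(request_fn, max_retries=3, base_delay=1.0):
    """Illustrative exponential-backoff retry loop (not Codemni's actual code)."""
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted: surface the error to the caller
            # Back off 1s, 2s, 4s, ... with jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```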

### Flexible Memory Management

- **Four memory strategies** optimized for different scenarios
- **Token-aware management** to control API costs
- **Intelligent summarization** for maintaining long conversation context
- **Sliding window** for recent message prioritization
- **Simple buffer** for complete conversation history

### Extensible Architecture

- **Easy tool creation** using standard Python functions
- **Plugin-style prebuild tools** that integrate in one line
- **Custom agent prompts** for specialized behaviors
- **Modular design** that scales with your needs

## Architecture

```
Codemni/
├── Agents/                          # AI Agent implementations
│   ├── TOOL_CALLING_AGENT/          # Standard tool-calling agent
│   ├── REASONING_TOOL_CALLING_AGENT/    # Agent with basic reasoning
│   └── DEEP_REASONING_TOOL_CALLING_AGENT/  # Advanced reasoning agent
├── llm/                             # LLM provider wrappers
│   ├── OpenAI_llm.py               # OpenAI GPT models
│   ├── Google_llm.py               # Google Gemini models
│   ├── Anthropic_llm.py            # Anthropic Claude models
│   ├── Groq_llm.py                 # Groq models
│   └── Ollama_llm.py               # Ollama local models
├── memory/                          # Conversation memory strategies
│   ├── conversational_buffer_memory.py      # Store all messages
│   ├── conversational_window_memory.py      # Sliding window
│   ├── conversational_token_buffer_memory.py  # Token-limited buffer
│   └── conversational_summary_memory.py     # Intelligent summarization
├── Prebuild_Tools/                  # Ready-to-use tools
│   └── Wikipedia_tool/              # Wikipedia search & retrieval
└── core/                            # Core utilities
    └── adapter.py                   # Tool execution engine
```

---

## Installation

### Prerequisites
- **Python 3.8+**
- An API key for your chosen LLM provider

### Install from PyPI

```bash
# Install core package (includes all modules)
pip install Codemni

# Install with specific LLM provider
pip install Codemni[openai]      # For OpenAI
pip install Codemni[google]      # For Google Gemini
pip install Codemni[anthropic]   # For Anthropic Claude
pip install Codemni[groq]        # For Groq
pip install Codemni[ollama]      # For Ollama

# Install with all providers
pip install Codemni[all]

# Install for development
pip install Codemni[dev]
```

### Dependencies

**Core (always installed):**
- `requests>=2.31.0`
- `python-dotenv>=1.0.0`

**Optional (install as needed):**
- `openai>=1.0.0` - For OpenAI models
- `google-generativeai>=0.3.0` - For Google Gemini
- `anthropic>=0.25.0` - For Anthropic Claude
- `groq>=0.4.0` - For Groq
- `ollama>=0.1.0` - For Ollama
- `wikipedia>=1.4.0` - For Wikipedia tool

## Quick Start

### Basic Tool-Calling Agent

```python
from Agents import Create_ToolCalling_Agent
from llm.Google_llm import GoogleLLM
import datetime

# Initialize LLM
llm = GoogleLLM(
    model="gemini-2.0-flash",
    api_key="YOUR_API_KEY"  # or set GOOGLE_API_KEY env var
)

# Create agent
agent = Create_ToolCalling_Agent(llm=llm, verbose=True)

# Define a tool
def get_current_time():
    """Get the current date and time"""
    return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

# Add tool to agent
agent.add_tool(
    name="get_current_time",
    description="Get the current date and time",
    function=get_current_time
)

# Use the agent
response = agent.run("What time is it?")
print(response)
```

### With Memory

```python
from Agents import Create_ToolCalling_Agent
from llm.OpenAI_llm import OpenAILLM
from memory import ConversationalWindowMemory

# Initialize with memory
llm = OpenAILLM(model="gpt-4", api_key="YOUR_API_KEY")
memory = ConversationalWindowMemory(window_size=10)

agent = Create_ToolCalling_Agent(
    llm=llm,
    memory=memory,
    verbose=True
)

# Conversations maintain context
agent.run("My name is Alice")
agent.run("What's my name?")  # Agent remembers: "Your name is Alice"
```

### Using Prebuild Tools

```python
from Agents import Create_ToolCalling_Agent
from llm.Anthropic_llm import AnthropicLLM
from Prebuild_Tools import WikipediaTool

llm = AnthropicLLM(model="claude-3-5-sonnet-20241022", api_key="YOUR_API_KEY")
agent = Create_ToolCalling_Agent(llm=llm, verbose=True)

# Add Wikipedia tool
wiki = WikipediaTool()
wiki.add_to_agent(agent)

# Query Wikipedia through the agent
response = agent.run("Tell me about quantum computing")
print(response)
```

## Core Components

Codemni consists of four main components. Each has detailed documentation in its respective directory:

### 1. AI Agents

Three types of agents with varying reasoning capabilities:

| Agent Type | Speed | Best For | Documentation |
|------------|-------|----------|---------------|
| **TOOL_CALLING_AGENT** | Fastest | Production APIs | [Agent Docs](Agents/TOOL_CALLING_AGENT/README.md) |
| **REASONING_TOOL_CALLING_AGENT** | Fast (4.6s) | General applications | [Agent Docs](Agents/REASONING_TOOL_CALLING_AGENT/README.md) |
| **DEEP_REASONING_TOOL_CALLING_AGENT** | Slower (11.1s) | Research & debugging | [Agent Docs](Agents/DEEP_REASONING_TOOL_CALLING_AGENT/README.md) |

**Quick Example:**
```python
from Agents import Create_ToolCalling_Agent
from llm.Google_llm import GoogleLLM

llm = GoogleLLM(model="gemini-2.0-flash", api_key="YOUR_API_KEY")
agent = Create_ToolCalling_Agent(llm=llm, verbose=True)

# Add tools and use
def calculator(expression):
    # eval is convenient for a demo but unsafe on untrusted input
    return str(eval(expression))

agent.add_tool("calculator", "Evaluate math expressions", calculator)
response = agent.run("What is 125 * 48?")
```

**Key Differences:**
- **Standard Agent**: Direct tool calling, fastest, lowest cost
- **Reasoning Agent**: Shows thinking process, balanced speed/transparency
- **Deep Reasoning Agent**: Comprehensive analysis, self-reflection, error recovery

**[View Full Agent Documentation](Agents/)**

### 2. LLM Module

Unified interface for multiple LLM providers with built-in retries and error handling.

**Supported Providers:**
- **OpenAI** - GPT-4, GPT-3.5-turbo, GPT-4-turbo
- **Google** - Gemini Pro, Gemini 2.0 Flash
- **Anthropic** - Claude 3 (Opus, Sonnet, Haiku)
- **Groq** - Llama, Mixtral, Gemma
- **Ollama** - Any local model

**Quick Example:**
```python
# Function-based API (one-off calls)
from llm.OpenAI_llm import openai_llm

response = openai_llm(
    prompt="Explain Python in one sentence",
    model="gpt-4",
    api_key="YOUR_API_KEY"
)

# Class-based API (for agents)
from llm.Google_llm import GoogleLLM

llm = GoogleLLM(model="gemini-2.0-flash", api_key="YOUR_API_KEY")
response = llm.generate_response("What is machine learning?")
```

**Features:**

- Automatic retries with exponential backoff
- Configurable timeouts
- Clear exception hierarchies
- Both function and class-based APIs
- Production-ready error handling

**[View Full LLM Documentation](llm/README.md)**

---

### 3. Memory Module

Four strategies for managing conversation history:

| Memory Type | Limit | Best For | Documentation |
|-------------|-------|----------|---------------|
| **Buffer Memory** | None | Short conversations | [📖 Memory Docs](memory/README.md#1-conversationalbuffermemory) |
| **Window Memory** | Message count | Long conversations | [📖 Memory Docs](memory/README.md#2-conversationalwindowmemory) |
| **Token Buffer** | Token count | Cost-conscious apps | [📖 Memory Docs](memory/README.md#3-conversationaltokenbuffermemory) |
| **Summary Memory** | Intelligent | Very long conversations | [📖 Memory Docs](memory/README.md#4-conversationalsummarymemory) |

**Quick Example:**
```python
from memory import ConversationalWindowMemory

# Keep last 10 messages
memory = ConversationalWindowMemory(window_size=10)
memory.add_user_message("Hello!")
memory.add_ai_message("Hi! How can I help?")

history = memory.get_history()
```

**Integration with Agents:**
```python
from Agents import Create_ToolCalling_Agent
from memory import ConversationalBufferMemory

memory = ConversationalBufferMemory()
agent = Create_ToolCalling_Agent(llm=llm, memory=memory)

# Agent maintains conversation context
agent.run("My name is Alice")
agent.run("What's my name?")  # Remembers: "Alice"
```

**[View Full Memory Documentation](memory/README.md)**

### 4. Prebuild Tools

Ready-to-use tools for common tasks:

**Available Tools:**
- **Wikipedia Tool** - Search and retrieve Wikipedia content

**Quick Example:**
```python
from Prebuild_Tools import WikipediaTool

wiki = WikipediaTool(language="en")

# Search Wikipedia
results = wiki.search("Python programming")

# Get article summary
summary = wiki.get_summary("Python (programming language)", sentences=3)

# Add all Wikipedia tools to an agent
wiki.add_to_agent(agent)
```

**Automatic Integration:**
The `add_to_agent()` method automatically registers all Wikipedia-related tools with your agent, enabling natural language queries like:
- "Search Wikipedia for quantum computing"
- "Get me a summary of machine learning from Wikipedia"
- "Find information about Albert Einstein on Wikipedia"

**[View Full Tools Documentation](Prebuild_Tools/README.md)**

## Complete Examples

### Example 1: Multi-Tool Calculator Agent

```python
from Agents import Create_ToolCalling_Agent
from llm.Google_llm import GoogleLLM
import math
import datetime

# Initialize
llm = GoogleLLM(model="gemini-2.0-flash", api_key="YOUR_API_KEY")
agent = Create_ToolCalling_Agent(llm=llm, verbose=True)

# Define tools
def calculator(expression):
    """Evaluate mathematical expressions"""
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error: {str(e)}"

def get_time():
    """Get current time"""
    return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def square_root(number):
    """Calculate square root"""
    try:
        return str(math.sqrt(float(number)))
    except Exception as e:
        return f"Error: {str(e)}"

# Add tools
agent.add_tool("calculator", "Evaluate math expressions", calculator)
agent.add_tool("get_time", "Get current time", get_time)
agent.add_tool("square_root", "Calculate square root", square_root)

# Use agent
print(agent.run("What is 125 * 48?"))
print(agent.run("What is the square root of 144?"))
print(agent.run("What time is it?"))
```

### Example 2: Research Assistant with Memory

```python
from Agents.REASONING_TOOL_CALLING_AGENT import Create_ToolCalling_Agent
from llm.Anthropic_llm import AnthropicLLM
from memory import ConversationalSummaryMemory
from Prebuild_Tools import WikipediaTool

# Initialize components
llm = AnthropicLLM(model="claude-3-5-sonnet-20241022", api_key="YOUR_API_KEY")
memory = ConversationalSummaryMemory(llm=llm, max_messages_before_summary=15)

# Create agent
agent = Create_ToolCalling_Agent(llm=llm, memory=memory, verbose=True)

# Add Wikipedia tool
wiki = WikipediaTool()
wiki.add_to_agent(agent)

# Multi-turn research conversation
agent.run("Tell me about quantum computing")
agent.run("What are its practical applications?")
agent.run("Who are the pioneers in this field?")
agent.run("Summarize everything we discussed")  # Memory maintains context
```

### Example 3: Deep Reasoning Problem Solver

```python
from Agents.DEEP_REASONING_TOOL_CALLING_AGENT import Create_Deep_Reasoning_Tool_Calling_Agent
from llm.OpenAI_llm import OpenAILLM

# Initialize with deep reasoning
llm = OpenAILLM(model="gpt-4", api_key="YOUR_API_KEY")
agent = Create_Deep_Reasoning_Tool_Calling_Agent(
    llm=llm,
    verbose=True,
    show_reasoning=True,
    min_confidence=0.7
)

# Define complex tool
def analyze_data(dataset_name):
    """Analyze a dataset and return statistics"""
    # Simulated analysis
    return f"Dataset '{dataset_name}': Mean=45.2, Median=43.0, StdDev=12.5"

agent.add_tool(
    "analyze_data",
    "Analyze dataset and return statistical summary",
    analyze_data
)

# Complex query - shows deep reasoning process
response = agent.run(
    "I need to understand the statistical properties of the sales_2024 dataset. "
    "Analyze it and explain what the numbers mean for business decisions."
)
```

### Example 4: Custom Prompt Agent

```python
from Agents import Create_ToolCalling_Agent
from llm.Google_llm import GoogleLLM

llm = GoogleLLM(model="gemini-2.0-flash", api_key="YOUR_API_KEY")

# Custom agent personality
custom_prompt = """You are a friendly and enthusiastic math tutor for children.
Always explain concepts in simple terms and use encouraging language.
Make math fun and approachable!"""

agent = Create_ToolCalling_Agent(
    llm=llm,
    prompt=custom_prompt,
    verbose=True
)

# Agent responds with custom personality
def calculator(expression):
    return str(eval(expression))

agent.add_tool("calculator", "Calculate math problems", calculator)
response = agent.run("What is 7 times 8?")
```

## Comparison Guide

### Agent Types Comparison

| Feature | TOOL_CALLING | REASONING | DEEP_REASONING |
|---------|-------------|-----------|----------------|
| **Speed** | Fastest | Fast (4.63s) | Slower (11.11s) |
| **Token Usage** | Lowest | Low (600-900) | High (1500-2800) |
| **Reasoning Display** | None | Basic | Deep chain-of-thought |
| **Problem Analysis** | None | Minimal | Comprehensive |
| **Situation Awareness** | No | No | Yes |
| **Self-Reflection** | No | No | Confidence + alternatives |
| **Error Recovery** | Basic | Basic retry | Strategic alternatives |
| **Best For** | Production APIs | General apps | Research & debugging |
| **Cost** | Lowest | Medium | Highest |
| **Transparency** | Low | Medium | Highest |

### Memory Types Comparison

| Memory Type | Size Limit | Use Case | Token Cost |
|-------------|-----------|----------|------------|
| **Buffer** | None | Short conversations | Can grow large |
| **Window** | Fixed count | Long conversations | Predictable |
| **Token Buffer** | Token limit | Cost-conscious apps | Optimized |
| **Summary** | Intelligent | Very long conversations | Medium + summary cost |
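
To make the Token Buffer trade-off concrete, here is a rough sketch of how token-limited trimming generally works. It illustrates the strategy only, not Codemni's internal implementation; the `count_tokens` callable is a stand-in for whatever tokenizer you use.

```python
def trim_to_token_budget(messages, max_tokens, count_tokens):
    """Drop the oldest messages until the history fits the token budget.

    `messages` is a list of {"role", "content"} dicts; `count_tokens` is any
    callable returning a token count for a string (e.g. a tiktoken encoder).
    """
    trimmed = list(messages)
    total = sum(count_tokens(m["content"]) for m in trimmed)
    while trimmed and total > max_tokens:
        evicted = trimmed.pop(0)  # evict from the front (oldest first)
        total -= count_tokens(evicted["content"])
    return trimmed
```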

### LLM Providers Comparison

| Provider | Speed | Cost | Best For |
|----------|-------|------|----------|
| **OpenAI (GPT-4)** | Medium | High | Best quality |
| **Google (Gemini)** | Fast | Low | Fast, cost-effective |
| **Anthropic (Claude)** | Medium | Medium | Long context, reasoning |
| **Groq** | Very Fast | Very Low | Speed priority |
| **Ollama** | Varies | Free | Local/private |

## Best Practices

### 1. Choosing the Right Agent

```python
# For production APIs - use TOOL_CALLING_AGENT
# Fast, cost-efficient, reliable
from Agents.TOOL_CALLING_AGENT import Create_ToolCalling_Agent

# For user-facing apps - use REASONING_TOOL_CALLING_AGENT
# Shows thinking, still efficient
from Agents.REASONING_TOOL_CALLING_AGENT import Create_ToolCalling_Agent

# For research/debugging - use DEEP_REASONING_TOOL_CALLING_AGENT
# Deep analysis, comprehensive reasoning
from Agents.DEEP_REASONING_TOOL_CALLING_AGENT import Create_Deep_Reasoning_Tool_Calling_Agent
```

### 2. Memory Selection

```python
# Short conversations (<10 exchanges)
memory = ConversationalBufferMemory()

# Long conversations (need recent context)
memory = ConversationalWindowMemory(window_size=10)

# Budget-conscious (control token usage)
memory = ConversationalTokenBufferMemory(max_tokens=2000, model="gpt-4")

# Very long conversations (need full context)
memory = ConversationalSummaryMemory(llm=llm)
```

### 3. Tool Design

```python
# GOOD: Clear, focused tool
def get_weather(city):
    """Get current weather for a specific city"""
    # Implementation
    return weather_data

# BAD: Too broad, unclear purpose
def do_stuff(input):
    """Does various things"""
    # Implementation
```

**Tool Best Practices:**

- Clear, descriptive names
- Focused functionality (one tool = one task)
- Good error handling
- Detailed docstrings
- Return strings or serializable data
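
Putting these points together, a complete tool might look like the sketch below. The weather endpoint is hypothetical (shown for illustration only); substitute your real API.

```python
import requests

def get_weather(city):
    """Get current weather for a specific city.

    Args:
        city: City name, e.g. "London".
    Returns:
        A short human-readable weather string, or an error message.
    """
    try:
        # Hypothetical endpoint for illustration; replace with a real API
        resp = requests.get(
            "https://api.example.com/weather",
            params={"q": city},
            timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()
        return f"{city}: {data['temp_c']} C, {data['condition']}"
    except Exception as e:
        return f"Error fetching weather for {city}: {e}"

agent.add_tool("get_weather", "Get current weather for a city", get_weather)
```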

### 4. Error Handling

```python
from llm.OpenAI_llm import OpenAILLM, OpenAILLMError

try:
    llm = OpenAILLM(model="gpt-4", api_key="YOUR_API_KEY")
    response = llm.generate_response("Hello")
except OpenAILLMError as e:
    # Handle LLM-specific errors
    print(f"LLM Error: {e}")
except Exception as e:
    # Handle other errors
    print(f"Unexpected error: {e}")
```

### 5. API Key Management

```python
import os
from dotenv import load_dotenv

# Use environment variables
load_dotenv()

llm = GoogleLLM(
    model="gemini-2.0-flash",
    api_key=os.getenv("GOOGLE_API_KEY")  # From .env file
)
```

**.env file:**
```bash
OPENAI_API_KEY=your_openai_key
GOOGLE_API_KEY=your_google_key
ANTHROPIC_API_KEY=your_anthropic_key
GROQ_API_KEY=your_groq_key
```

### 6. Verbose Mode Usage

```python
# Development: verbose=True for debugging
agent = Create_ToolCalling_Agent(llm=llm, verbose=True)

# Production: verbose=False for clean logs
agent = Create_ToolCalling_Agent(llm=llm, verbose=False)
```

## Advanced Usage

### Custom Tool with Complex Parameters

```python
def temperature_converter(temperature, from_unit, to_unit):
    """Convert temperature between Celsius, Fahrenheit, and Kelvin
    
    Args:
        temperature: Temperature value
        from_unit: Source unit (C, F, or K)
        to_unit: Target unit (C, F, or K)
    """
    temp = float(temperature)
    from_unit = from_unit.upper()
    to_unit = to_unit.upper()
    
    # Convert to Celsius first
    if from_unit == 'F':
        celsius = (temp - 32) * 5/9
    elif from_unit == 'K':
        celsius = temp - 273.15
    else:
        celsius = temp
    
    # Convert to target
    if to_unit == 'F':
        result = (celsius * 9/5) + 32
    elif to_unit == 'K':
        result = celsius + 273.15
    else:
        result = celsius
    
    return f"{result:.2f} {to_unit}"

agent.add_tool(
    "temperature_converter",
    "Convert temperature between Celsius (C), Fahrenheit (F), and Kelvin (K)",
    temperature_converter
)
```
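
Once registered, the agent can resolve natural-language conversion requests, for example:

```python
response = agent.run("Convert 98.6 F to Celsius")
print(response)  # the agent should call temperature_converter and report ~37.00 C
```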

### Multi-Agent System

```python
from Agents import Create_ToolCalling_Agent
from Agents.DEEP_REASONING_TOOL_CALLING_AGENT import Create_Deep_Reasoning_Tool_Calling_Agent
from Prebuild_Tools import WikipediaTool

# Create specialized agents (llm1 and llm2 are LLM instances set up as in Quick Start)
research_agent = Create_ToolCalling_Agent(llm=llm1)
wiki = WikipediaTool()
wiki.add_to_agent(research_agent)

analysis_agent = Create_Deep_Reasoning_Tool_Calling_Agent(llm=llm2)
# Add analysis tools...

# Coordinate agents
def coordinated_research(query):
    # First agent gathers information
    info = research_agent.run(f"Find information about {query}")
    
    # Second agent analyzes
    analysis = analysis_agent.run(f"Analyze this information: {info}")
    
    return analysis
```
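
Calling the coordinator chains the two agents; the topic string is arbitrary:

```python
report = coordinated_research("quantum error correction")
print(report)
```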

### Dynamic Tool Loading

```python
import importlib

def load_tools_from_config(agent, config):
    """Load tools from a configuration dictionary"""
    for tool_config in config["tools"]:
        name = tool_config["name"]
        description = tool_config["description"]

        # importlib.import_module resolves dotted paths correctly, whereas
        # __import__("a.b") returns only the top-level package "a"
        module = importlib.import_module(tool_config["module"])
        function = getattr(module, tool_config["function"])
        
        agent.add_tool(name, description, function)
```
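
A matching configuration might look like this; the module and function names are placeholders for your own tool modules:

```python
config = {
    "tools": [
        {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "module": "my_tools.weather",   # import path of your module
            "function": "get_weather",      # function defined in that module
        },
    ]
}

load_tools_from_config(agent, config)
```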

### Streaming Responses (Future Enhancement)

```python
# Note: streaming support is planned for a future version
# Current: Get complete response
response = agent.run("Tell me about AI")

# Future: Stream response chunks
for chunk in agent.run_stream("Tell me about AI"):
    print(chunk, end="", flush=True)
```

## Project Structure

```
Codemni/
│
├── __init__.py                      # Package initialization
├── pyproject.toml                   # Project configuration
├── requirements.txt                 # Dependencies
├── LICENSE                          # Proprietary license
├── README.md                        # This file
│
├── Agents/                          # AI Agent implementations
│   ├── __init__.py
│   ├── TOOL_CALLING_AGENT/
│   │   ├── __init__.py
│   │   ├── agent.py                 # Standard agent
│   │   ├── prompt.py                # Agent prompts
│   │   └── README.md                # Agent documentation
│   ├── REASONING_TOOL_CALLING_AGENT/
│   │   ├── __init__.py
│   │   ├── agent.py                 # Reasoning agent
│   │   ├── prompt.py
│   │   └── README.md
│   └── DEEP_REASONING_TOOL_CALLING_AGENT/
│       ├── __init__.py
│       ├── agent.py                 # Deep reasoning agent
│       ├── prompt.py
│       └── README.md
│
├── llm/                             # LLM provider wrappers
│   ├── __init__.py
│   ├── OpenAI_llm.py               # OpenAI integration
│   ├── Google_llm.py               # Google Gemini integration
│   ├── Anthropic_llm.py            # Anthropic Claude integration
│   ├── Groq_llm.py                 # Groq integration
│   ├── Ollama_llm.py               # Ollama integration
│   └── README.md                    # LLM documentation
│
├── memory/                          # Conversation memory
│   ├── __init__.py
│   ├── conversational_buffer_memory.py
│   ├── conversational_window_memory.py
│   ├── conversational_token_buffer_memory.py
│   ├── conversational_summary_memory.py
│   └── README.md                    # Memory documentation
│
├── Prebuild_Tools/                  # Ready-to-use tools
│   ├── __init__.py
│   ├── README.md
│   └── Wikipedia_tool/
│       ├── __init__.py
│       ├── wikipedia_tool.py
│       └── README.md
│
└── core/                            # Core utilities
    ├── __init__.py
    └── adapter.py                   # Tool execution engine
```

## Development

### Setting Up Development Environment

```bash
# Clone repository (if contributing)
git clone https://github.com/CodexJitin/Codemni.git
cd Codemni

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e .[dev]

# Install all providers
pip install -e .[all]
```

### Code Quality

```bash
# Format code
black .

# Lint code
flake8 .

# Type checking
mypy .
```

## Contributing

Codemni is proprietary software. However, we welcome:

- **Bug reports** - Submit issues on GitHub
- **Feature requests** - Share your ideas
- **Documentation improvements** - Help others learn
- **Example contributions** - Share your use cases

**To contribute:**

1. Open an issue describing your contribution
2. Wait for approval from maintainers
3. Submit a pull request if approved

## License

**Proprietary License** - Copyright (c) 2025 CodexJitin. All Rights Reserved.

### Permitted Use

- Install via PyPI (`pip install Codemni`)
- Use in commercial/non-commercial projects
- Integrate as a dependency

### Restrictions

- Cannot copy, modify, or redistribute source code
- Cannot reverse engineer
- Cannot remove proprietary notices

See [LICENSE](LICENSE) file for complete terms.

## Support

### Documentation

- **Main Documentation**: This README
- **LLM Module**: [llm/README.md](llm/README.md)
- **Memory Module**: [memory/README.md](memory/README.md)
- **Agent Guides**: Individual README in each agent folder
- **Tools**: [Prebuild_Tools/README.md](Prebuild_Tools/README.md)

### Getting Help

- **Issues**: [GitHub Issues](https://github.com/CodexJitin/Codemni/issues)
- **Discussions**: [GitHub Discussions](https://github.com/CodexJitin/Codemni/discussions)
- **Email**: Contact CodexJitin

### Useful Links

- **Homepage**: [https://github.com/CodexJitin/Codemni](https://github.com/CodexJitin/Codemni)
- **PyPI**: [https://pypi.org/project/Codemni/](https://pypi.org/project/Codemni/)
- **Bug Tracker**: [https://github.com/CodexJitin/Codemni/issues](https://github.com/CodexJitin/Codemni/issues)
- **Documentation**: [GitHub README](https://github.com/CodexJitin/Codemni#readme)

## Author

**CodexJitin**

- GitHub: [@CodexJitin](https://github.com/CodexJitin)
- Project: [Codemni](https://github.com/CodexJitin/Codemni)

## Acknowledgments

Built for the AI developer community.

Special thanks to:
- OpenAI, Google, Anthropic, Groq, and Ollama for their amazing LLM APIs
- The Python community for excellent tools and libraries
- All contributors and users of Codemni

## Changelog

### Version 1.2.3 (Current)
- Stable release with all core features
- Three agent types with varying reasoning capabilities
- Five LLM provider integrations
- Four memory management strategies
- Wikipedia prebuild tool
- Comprehensive documentation

### Roadmap

- Streaming response support
- More prebuild tools
- Agent analytics and monitoring
- Web search tool
- Database tool integration
- Enhanced customization options

## Quick Reference

### Installation
```bash
pip install Codemni[all]
```

### Basic Agent
```python
from Agents import Create_ToolCalling_Agent
from llm.Google_llm import GoogleLLM

llm = GoogleLLM(model="gemini-2.0-flash", api_key="KEY")
agent = Create_ToolCalling_Agent(llm=llm)
agent.add_tool("name", "description", function)
response = agent.run("query")
```

### With Memory
```python
from memory import ConversationalWindowMemory
memory = ConversationalWindowMemory(window_size=10)
agent = Create_ToolCalling_Agent(llm=llm, memory=memory)
```

### With Prebuild Tools
```python
from Prebuild_Tools import WikipediaTool
wiki = WikipediaTool()
wiki.add_to_agent(agent)
```

<div align="center">

**Made by CodexJitin**

**Star this project if you find it useful!**

[Report Bug](https://github.com/CodexJitin/Codemni/issues) · [Request Feature](https://github.com/CodexJitin/Codemni/issues) · [Documentation](https://github.com/CodexJitin/Codemni#readme)

</div>

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "Codemni",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "ai, llm, agent, openai, anthropic, google, gemini, groq, ollama, memory, tool-calling",
    "author": "CodexJitin",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/bc/67/04176a609bee6da54c9438c8f702e6c00b627ec69456e93a4c03d9ddee63/codemni-1.2.3.tar.gz",
    "platform": null,
    "description": "<div align=\"center\">\r\n  <img src=\"https://raw.githubusercontent.com/CodexJitin/Codemni/main/assets/codemni-logo.jpg\" alt=\"Codemni Logo\" width=\"200\"/>\r\n  \r\n# Codemni\r\n\r\n[![Python](https://img.shields.io/badge/Python-3.8%2B-blue.svg)](https://www.python.org/)\r\n[![License](https://img.shields.io/badge/License-Proprietary-red.svg)](LICENSE)\r\n[![Version](https://img.shields.io/badge/Version-1.2.2-green.svg)](https://github.com/CodexJitin/Codemni)\r\n[![PyPI](https://img.shields.io/badge/PyPI-Codemni-brightgreen.svg)](https://pypi.org/project/Codemni/)\r\n\r\n### *Build Intelligent AI Agents with Full Control and Zero Complexity*\r\n\r\n**Lightweight \u2022 Modular \u2022 Production-Ready**\r\n\r\n</div>\r\n\r\nCodemni is a Python framework that puts you in control of AI agent development. Build powerful tool-calling agents with custom logic, multi-provider LLM support, and flexible memory\u2014without the bloat of heavy abstractions.\r\n\r\n## Table of Contents\r\n\r\n- [Overview](#overview)\r\n- [Key Features](#key-features)\r\n- [Architecture](#architecture)\r\n- [Installation](#installation)\r\n- [Quick Start](#quick-start)\r\n- [Core Components](#core-components)\r\n  - [AI Agents](#1-ai-agents)\r\n  - [LLM Module](#2-llm-module)\r\n  - [Memory Module](#3-memory-module)\r\n  - [Prebuild Tools](#4-prebuild-tools)\r\n- [Complete Examples](#complete-examples)\r\n- [Comparison Guide](#comparison-guide)\r\n- [Best Practices](#best-practices)\r\n- [Advanced Usage](#advanced-usage)\r\n- [Project Structure](#project-structure)\r\n- [Development](#development)\r\n- [Contributing](#contributing)\r\n- [License](#license)\r\n- [Support](#support)\r\n- [Author](#author)\r\n- [Acknowledgments](#acknowledgments)\r\n- [Changelog](#changelog)\r\n- [Quick Reference](#quick-reference)\r\n\r\n## Overview\r\n\r\n**What is Codemni?**\r\n\r\nCodemni empowers developers to create sophisticated AI agents without getting lost in complexity. 
Whether you're building chatbots, automation tools, or research assistants, Codemni provides the essential building blocks while keeping your code clean and maintainable.\r\n\r\n**Why Choose Codemni?**\r\n\r\n- **Full Control**: Write your own logic without fighting framework constraints\r\n- **Clean Architecture**: Minimal abstractions mean you understand exactly what's happening\r\n- **Production-Ready**: Built-in retries, error handling, and timeouts from day one\r\n- **Truly Modular**: Use only what you need\u2014every component works independently\r\n- **Multi-Provider**: Switch between OpenAI, Google, Anthropic, Groq, and Ollama seamlessly\r\n\r\n**Core Capabilities:**\r\n\r\n- **3 Agent Types**: Standard, Reasoning, and Deep Reasoning agents for different use cases\r\n- **5 LLM Providers**: OpenAI, Google Gemini, Anthropic Claude, Groq, and Ollama\r\n- **4 Memory Strategies**: Buffer, Window, Token Buffer, and Summary Memory\r\n- **Prebuild Tools**: Ready-to-use tools like Wikipedia integration\r\n- **Custom Tools**: Add your own tools with simple Python functions\r\n\r\n## Key Features\r\n\r\n### Intelligent AI Agents\r\n\r\n- **Three agent types** with varying reasoning capabilities\r\n- **Dynamic tool selection** and execution based on context\r\n- **Custom prompt support** to shape agent personality\r\n- **Verbose mode** for debugging and monitoring\r\n- **Memory integration** for stateful conversations\r\n\r\n### Multi-Provider LLM Support\r\n\r\n- **Unified interface** across all major LLM providers\r\n- **Automatic retries** with exponential backoff for reliability\r\n- **Configurable timeouts** to prevent hanging requests\r\n- **Function and class-based APIs** for flexibility\r\n- **Clear exception hierarchies** for better error handling\r\n\r\n### Flexible Memory Management\r\n\r\n- **Four memory strategies** optimized for different scenarios\r\n- **Token-aware management** to control API costs\r\n- **Intelligent summarization** for maintaining long conversation context\r\n- **Sliding window** for recent message prioritization\r\n- **Simple buffer** for complete conversation history\r\n\r\n### Extensible Architecture\r\n\r\n- **Easy tool creation** using standard Python functions\r\n- **Plugin-style prebuild tools** that integrate in one line\r\n- **Custom agent prompts** for specialized behaviors\r\n- **Modular design** that scales with your needs\r\n\r\n## Architecture\r\n\r\n```\r\nCodemni/\r\n\u251c\u2500\u2500 Agents/                          # AI Agent implementations\r\n\u2502   \u251c\u2500\u2500 TOOL_CALLING_AGENT/          # Standard tool-calling agent\r\n\u2502   \u251c\u2500\u2500 REASONING_TOOL_CALLING_AGENT/    # Agent with basic reasoning\r\n\u2502   \u2514\u2500\u2500 DEEP_REASONING_TOOL_CALLING_AGENT/  # Advanced reasoning agent\r\n\u251c\u2500\u2500 llm/                             # LLM provider wrappers\r\n\u2502   \u251c\u2500\u2500 OpenAI_llm.py               # OpenAI GPT models\r\n\u2502   \u251c\u2500\u2500 Google_llm.py               # Google Gemini models\r\n\u2502   \u251c\u2500\u2500 Anthropic_llm.py            # Anthropic Claude models\r\n\u2502   \u251c\u2500\u2500 Groq_llm.py                 # Groq models\r\n\u2502   \u2514\u2500\u2500 Ollama_llm.py               # Ollama local models\r\n\u251c\u2500\u2500 memory/                          # Conversation memory strategies\r\n\u2502   \u251c\u2500\u2500 conversational_buffer_memory.py      # Store all messages\r\n\u2502   \u251c\u2500\u2500 conversational_window_memory.py      # Sliding 
window\r\n\u2502   \u251c\u2500\u2500 conversational_token_buffer_memory.py  # Token-limited buffer\r\n\u2502   \u2514\u2500\u2500 conversational_summary_memory.py     # Intelligent summarization\r\n\u251c\u2500\u2500 Prebuild_Tools/                  # Ready-to-use tools\r\n\u2502   \u2514\u2500\u2500 Wikipedia_tool/              # Wikipedia search & retrieval\r\n\u2514\u2500\u2500 core/                            # Core utilities\r\n    \u2514\u2500\u2500 adapter.py                   # Tool execution engine\r\n```\r\n\r\n---\r\n\r\n## \ud83d\udce6 Installation\r\n\r\n### Prerequisites\r\n- **Python 3.8+**\r\n- An API key for your chosen LLM provider\r\n\r\n### Install from PyPI\r\n\r\n```bash\r\n# Install core package (includes all modules)\r\npip install Codemni\r\n\r\n# Install with specific LLM provider\r\npip install Codemni[openai]      # For OpenAI\r\npip install Codemni[google]      # For Google Gemini\r\npip install Codemni[anthropic]   # For Anthropic Claude\r\npip install Codemni[groq]        # For Groq\r\npip install Codemni[ollama]      # For Ollama\r\n\r\n# Install with all providers\r\npip install Codemni[all]\r\n\r\n# Install for development\r\npip install Codemni[dev]\r\n```\r\n\r\n### Dependencies\r\n\r\n**Core (always installed):**\r\n- `requests>=2.31.0`\r\n- `python-dotenv>=1.0.0`\r\n\r\n**Optional (install as needed):**\r\n- `openai>=1.0.0` - For OpenAI models\r\n- `google-generativeai>=0.3.0` - For Google Gemini\r\n- `anthropic>=0.25.0` - For Anthropic Claude\r\n- `groq>=0.4.0` - For Groq\r\n- `ollama>=0.1.0` - For Ollama\r\n- `wikipedia>=1.4.0` - For Wikipedia tool\r\n\r\n## Quick Start\r\n\r\n### Basic Tool-Calling Agent\r\n\r\n```python\r\nfrom Agents import Create_ToolCalling_Agent\r\nfrom llm.Google_llm import GoogleLLM\r\nimport datetime\r\n\r\n# Initialize LLM\r\nllm = GoogleLLM(\r\n    model=\"gemini-2.0-flash\",\r\n    api_key=\"YOUR_API_KEY\"  # or set GOOGLE_API_KEY env var\r\n)\r\n\r\n# Create agent\r\nagent = Create_ToolCalling_Agent(llm=llm, verbose=True)\r\n\r\n# Define a tool\r\ndef get_current_time():\r\n    \"\"\"Get the current date and time\"\"\"\r\n    return datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\r\n\r\n# Add tool to agent\r\nagent.add_tool(\r\n    name=\"get_current_time\",\r\n    description=\"Get the current date and time\",\r\n    function=get_current_time\r\n)\r\n\r\n# Use the agent\r\nresponse = agent.run(\"What time is it?\")\r\nprint(response)\r\n```\r\n\r\n### With Memory\r\n\r\n```python\r\nfrom Agents import Create_ToolCalling_Agent\r\nfrom llm.OpenAI_llm import OpenAILLM\r\nfrom memory import ConversationalWindowMemory\r\n\r\n# Initialize with memory\r\nllm = OpenAILLM(model=\"gpt-4\", api_key=\"YOUR_API_KEY\")\r\nmemory = ConversationalWindowMemory(window_size=10)\r\n\r\nagent = Create_ToolCalling_Agent(\r\n    llm=llm,\r\n    memory=memory,\r\n    verbose=True\r\n)\r\n\r\n# Conversations maintain context\r\nagent.run(\"My name is Alice\")\r\nagent.run(\"What's my name?\")  # Agent remembers: \"Your name is Alice\"\r\n```\r\n\r\n### Using Prebuild Tools\r\n\r\n```python\r\nfrom Agents import Create_ToolCalling_Agent\r\nfrom llm.Anthropic_llm import AnthropicLLM\r\nfrom Prebuild_Tools import WikipediaTool\r\n\r\nllm = AnthropicLLM(model=\"claude-3-5-sonnet-20241022\", api_key=\"YOUR_API_KEY\")\r\nagent = Create_ToolCalling_Agent(llm=llm, verbose=True)\r\n\r\n# Add Wikipedia tool\r\nwiki = WikipediaTool()\r\nwiki.add_to_agent(agent)\r\n\r\n# Query Wikipedia through the agent\r\nresponse = agent.run(\"Tell me about 
quantum computing\")\r\nprint(response)\r\n```\r\n\r\n## Core Components\r\n\r\nCodemni consists of four main components. Each has detailed documentation in its respective directory:\r\n\r\n### 1. AI Agents\r\n\r\nThree types of agents with varying reasoning capabilities:\r\n\r\n| Agent Type | Speed | Best For | Documentation |\r\n|------------|-------|----------|---------------|\r\n| **TOOL_CALLING_AGENT** | Fastest | Production APIs | [Agent Docs](Agents/TOOL_CALLING_AGENT/README.md) |\r\n| **REASONING_TOOL_CALLING_AGENT** | Fast (4.6s) | General applications | [Agent Docs](Agents/REASONING_TOOL_CALLING_AGENT/README.md) |\r\n| **DEEP_REASONING_TOOL_CALLING_AGENT** | Slower (11.1s) | Research & debugging | [Agent Docs](Agents/DEEP_REASONING_TOOL_CALLING_AGENT/README.md) |\r\n\r\n**Quick Example:**\r\n```python\r\nfrom Agents import Create_ToolCalling_Agent\r\nfrom llm.Google_llm import GoogleLLM\r\n\r\nllm = GoogleLLM(model=\"gemini-2.0-flash\", api_key=\"YOUR_API_KEY\")\r\nagent = Create_ToolCalling_Agent(llm=llm, verbose=True)\r\n\r\n# Add tools and use\r\ndef calculator(expression):\r\n    return str(eval(expression))\r\n\r\nagent.add_tool(\"calculator\", \"Evaluate math expressions\", calculator)\r\nresponse = agent.run(\"What is 125 * 48?\")\r\n```\r\n\r\n**Key Differences:**\r\n- **Standard Agent**: Direct tool calling, fastest, lowest cost\r\n- **Reasoning Agent**: Shows thinking process, balanced speed/transparency\r\n- **Deep Reasoning Agent**: Comprehensive analysis, self-reflection, error recovery\r\n\r\n**[View Full Agent Documentation](Agents/)**\r\n\r\n### 2. LLM Module\r\n\r\nUnified interface for multiple LLM providers with built-in retries and error handling.\r\n\r\n**Supported Providers:**\r\n- **OpenAI** - GPT-4, GPT-3.5-turbo, GPT-4-turbo\r\n- **Google** - Gemini Pro, Gemini 2.0 Flash\r\n- **Anthropic** - Claude 3 (Opus, Sonnet, Haiku)\r\n- **Groq** - Llama, Mixtral, Gemma\r\n- **Ollama** - Any local model\r\n\r\n**Quick Example:**\r\n```python\r\n# Function-based API (one-off calls)\r\nfrom llm.OpenAI_llm import openai_llm\r\n\r\nresponse = openai_llm(\r\n    prompt=\"Explain Python in one sentence\",\r\n    model=\"gpt-4\",\r\n    api_key=\"YOUR_API_KEY\"\r\n)\r\n\r\n# Class-based API (for agents)\r\nfrom llm.Google_llm import GoogleLLM\r\n\r\nllm = GoogleLLM(model=\"gemini-2.0-flash\", api_key=\"YOUR_API_KEY\")\r\nresponse = llm.generate_response(\"What is machine learning?\")\r\n```\r\n\r\n**Features:**\r\n\r\n- Automatic retries with exponential backoff\r\n- Configurable timeouts\r\n- Clear exception hierarchies\r\n- Both function and class-based APIs\r\n- Production-ready error handling\r\n\r\n**[View Full LLM Documentation](llm/README.md)**\r\n\r\n**Features:**\r\n- \u2705 Automatic retries with exponential backoff\r\n- \u2705 Configurable timeouts\r\n- \u2705 Clear exception hierarchies\r\n- \u2705 Both function and class-based APIs\r\n- \u2705 Production-ready error handling\r\n\r\n\ud83d\udcda **[View Full LLM Documentation \u2192](llm/README.md)**\r\n\r\n---\r\n\r\n### 3. 
Memory Module\r\n\r\nFour strategies for managing conversation history:\r\n\r\n| Memory Type | Limit | Best For | Documentation |\r\n|-------------|-------|----------|---------------|\r\n| **Buffer Memory** | None | Short conversations | [\ud83d\udcd6 Memory Docs](memory/README.md#1-conversationalbuffermemory) |\r\n| **Window Memory** | Message count | Long conversations | [\ud83d\udcd6 Memory Docs](memory/README.md#2-conversationalwindowmemory) |\r\n| **Token Buffer** | Token count | Cost-conscious apps | [\ufffd Memory Docs](memory/README.md#3-conversationaltokenbuffermemory) |\r\n| **Summary Memory** | Intelligent | Very long conversations | [\ud83d\udcd6 Memory Docs](memory/README.md#4-conversationalsummarymemory) |\r\n\r\n**Quick Example:**\r\n```python\r\nfrom memory import ConversationalWindowMemory\r\n\r\n# Keep last 10 messages\r\nmemory = ConversationalWindowMemory(window_size=10)\r\nmemory.add_user_message(\"Hello!\")\r\nmemory.add_ai_message(\"Hi! How can I help?\")\r\n\r\nhistory = memory.get_history()\r\n```\r\n\r\n**Integration with Agents:**\r\n```python\r\nfrom Agents import Create_ToolCalling_Agent\r\nfrom memory import ConversationalBufferMemory\r\n\r\nmemory = ConversationalBufferMemory()\r\nagent = Create_ToolCalling_Agent(llm=llm, memory=memory)\r\n\r\n# Agent maintains conversation context\r\nagent.run(\"My name is Alice\")\r\nagent.run(\"What's my name?\")  # Remembers: \"Alice\"\r\n```\r\n\r\n**[View Full Memory Documentation](memory/README.md)**\r\n\r\n### 4. Prebuild Tools\r\n\r\nReady-to-use tools for common tasks:\r\n\r\n**Available Tools:**\r\n- **Wikipedia Tool** - Search and retrieve Wikipedia content\r\n\r\n**Quick Example:**\r\n```python\r\nfrom Prebuild_Tools import WikipediaTool\r\n\r\nwiki = WikipediaTool(language=\"en\")\r\n\r\n# Search Wikipedia\r\nresults = wiki.search(\"Python programming\")\r\n\r\n# Get article summary\r\nsummary = wiki.get_summary(\"Python (programming language)\", sentences=3)\r\n\r\n# Add all Wikipedia tools to an agent\r\nwiki.add_to_agent(agent)\r\n```\r\n\r\n**Automatic Integration:**\r\nThe `add_to_agent()` method automatically registers all Wikipedia-related tools with your agent, enabling natural language queries like:\r\n- \"Search Wikipedia for quantum computing\"\r\n- \"Get me a summary of machine learning from Wikipedia\"\r\n- \"Find information about Albert Einstein on Wikipedia\"\r\n\r\n**[View Full Tools Documentation](Prebuild_Tools/README.md)**\r\n\r\n## Complete Examples\r\n\r\n### Example 1: Multi-Tool Calculator Agent\r\n\r\n```python\r\nfrom Agents import Create_ToolCalling_Agent\r\nfrom llm.Google_llm import GoogleLLM\r\nimport math\r\nimport datetime\r\n\r\n# Initialize\r\nllm = GoogleLLM(model=\"gemini-2.0-flash\", api_key=\"YOUR_API_KEY\")\r\nagent = Create_ToolCalling_Agent(llm=llm, verbose=True)\r\n\r\n# Define tools\r\ndef calculator(expression):\r\n    \"\"\"Evaluate mathematical expressions\"\"\"\r\n    try:\r\n        return str(eval(expression))\r\n    except Exception as e:\r\n        return f\"Error: {str(e)}\"\r\n\r\ndef get_time():\r\n    \"\"\"Get current time\"\"\"\r\n    return datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\r\n\r\ndef square_root(number):\r\n    \"\"\"Calculate square root\"\"\"\r\n    try:\r\n        return str(math.sqrt(float(number)))\r\n    except Exception as e:\r\n        return f\"Error: {str(e)}\"\r\n\r\n# Add tools\r\nagent.add_tool(\"calculator\", \"Evaluate math expressions\", calculator)\r\nagent.add_tool(\"get_time\", \"Get current time\", 
get_time)\r\nagent.add_tool(\"square_root\", \"Calculate square root\", square_root)\r\n\r\n# Use agent\r\nprint(agent.run(\"What is 125 * 48?\"))\r\nprint(agent.run(\"What is the square root of 144?\"))\r\nprint(agent.run(\"What time is it?\"))\r\n```\r\n\r\n### Example 2: Research Assistant with Memory\r\n\r\n```python\r\nfrom Agents.REASONING_TOOL_CALLING_AGENT import Create_ToolCalling_Agent\r\nfrom llm.Anthropic_llm import AnthropicLLM\r\nfrom memory import ConversationalSummaryMemory\r\nfrom Prebuild_Tools import WikipediaTool\r\n\r\n# Initialize components\r\nllm = AnthropicLLM(model=\"claude-3-5-sonnet-20241022\", api_key=\"YOUR_API_KEY\")\r\nmemory = ConversationalSummaryMemory(llm=llm, max_messages_before_summary=15)\r\n\r\n# Create agent\r\nagent = Create_ToolCalling_Agent(llm=llm, memory=memory, verbose=True)\r\n\r\n# Add Wikipedia tool\r\nwiki = WikipediaTool()\r\nwiki.add_to_agent(agent)\r\n\r\n# Multi-turn research conversation\r\nagent.run(\"Tell me about quantum computing\")\r\nagent.run(\"What are its practical applications?\")\r\nagent.run(\"Who are the pioneers in this field?\")\r\nagent.run(\"Summarize everything we discussed\")  # Memory maintains context\r\n```\r\n\r\n### Example 3: Deep Reasoning Problem Solver\r\n\r\n```python\r\nfrom Agents.DEEP_REASONING_TOOL_CALLING_AGENT import Create_Deep_Reasoning_Tool_Calling_Agent\r\nfrom llm.OpenAI_llm import OpenAILLM\r\n\r\n# Initialize with deep reasoning\r\nllm = OpenAILLM(model=\"gpt-4\", api_key=\"YOUR_API_KEY\")\r\nagent = Create_Deep_Reasoning_Tool_Calling_Agent(\r\n    llm=llm,\r\n    verbose=True,\r\n    show_reasoning=True,\r\n    min_confidence=0.7\r\n)\r\n\r\n# Define complex tool\r\ndef analyze_data(dataset_name):\r\n    \"\"\"Analyze a dataset and return statistics\"\"\"\r\n    # Simulated analysis\r\n    return f\"Dataset '{dataset_name}': Mean=45.2, Median=43.0, StdDev=12.5\"\r\n\r\nagent.add_tool(\r\n    \"analyze_data\",\r\n    \"Analyze dataset and return statistical summary\",\r\n    analyze_data\r\n)\r\n\r\n# Complex query - shows deep reasoning process\r\nresponse = agent.run(\r\n    \"I need to understand the statistical properties of the sales_2024 dataset. 
\"\r\n    \"Analyze it and explain what the numbers mean for business decisions.\"\r\n)\r\n```\r\n\r\n### Example 4: Custom Prompt Agent\r\n\r\n```python\r\nfrom Agents import Create_ToolCalling_Agent\r\nfrom llm.Google_llm import GoogleLLM\r\n\r\nllm = GoogleLLM(model=\"gemini-2.0-flash\", api_key=\"YOUR_API_KEY\")\r\n\r\n# Custom agent personality\r\ncustom_prompt = \"\"\"You are a friendly and enthusiastic math tutor for children.\r\nAlways explain concepts in simple terms and use encouraging language.\r\nMake math fun and approachable!\"\"\"\r\n\r\nagent = Create_ToolCalling_Agent(\r\n    llm=llm,\r\n    prompt=custom_prompt,\r\n    verbose=True\r\n)\r\n\r\n# Agent responds with custom personality\r\ndef calculator(expression):\r\n    return str(eval(expression))\r\n\r\nagent.add_tool(\"calculator\", \"Calculate math problems\", calculator)\r\nresponse = agent.run(\"What is 7 times 8?\")\r\n```\r\n\r\n## Comparison Guide\r\n\r\n### Agent Types Comparison\r\n\r\n| Feature | TOOL_CALLING | REASONING | DEEP_REASONING |\r\n|---------|-------------|-----------|----------------|\r\n| **Speed** | Fastest | Fast (4.63s) | Slower (11.11s) |\r\n| **Token Usage** | Lowest | Low (600-900) | High (1500-2800) |\r\n| **Reasoning Display** | None | Basic | Deep chain-of-thought |\r\n| **Problem Analysis** | None | Minimal | Comprehensive |\r\n| **Situation Awareness** | No | No | Yes |\r\n| **Self-Reflection** | No | No | Confidence + alternatives |\r\n| **Error Recovery** | Basic | Basic retry | Strategic alternatives |\r\n| **Best For** | Production APIs | General apps | Research & debugging |\r\n| **Cost** | Lowest | Medium | Highest |\r\n| **Transparency** | Low | Medium | Highest |\r\n\r\n### Memory Types Comparison\r\n\r\n| Memory Type | Size Limit | Use Case | Token Cost |\r\n|-------------|-----------|----------|------------|\r\n| **Buffer** | None | Short conversations | Can grow large |\r\n| **Window** | Fixed count | Long conversations | Predictable |\r\n| **Token Buffer** | Token limit | Cost-conscious apps | Optimized |\r\n| **Summary** | Intelligent | Very long conversations | Medium + summary cost |\r\n\r\n### LLM Providers Comparison\r\n\r\n| Provider | Speed | Cost | Best For |\r\n|----------|-------|------|----------|\r\n| **OpenAI (GPT-4)** | Medium | High | Best quality |\r\n| **Google (Gemini)** | Fast | Low | Fast, cost-effective |\r\n| **Anthropic (Claude)** | Medium | Medium | Long context, reasoning |\r\n| **Groq** | Very Fast | Very Low | Speed priority |\r\n| **Ollama** | Varies | Free | Local/private |\r\n\r\n## Best Practices\r\n\r\n### 1. Choosing the Right Agent\r\n\r\n```python\r\n# For production APIs - use TOOL_CALLING_AGENT\r\n# Fast, cost-efficient, reliable\r\nfrom Agents.TOOL_CALLING_AGENT import Create_ToolCalling_Agent\r\n\r\n# For user-facing apps - use REASONING_TOOL_CALLING_AGENT\r\n# Shows thinking, still efficient\r\nfrom Agents.REASONING_TOOL_CALLING_AGENT import Create_ToolCalling_Agent\r\n\r\n# For research/debugging - use DEEP_REASONING_TOOL_CALLING_AGENT\r\n# Deep analysis, comprehensive reasoning\r\nfrom Agents.DEEP_REASONING_TOOL_CALLING_AGENT import Create_Deep_Reasoning_Tool_Calling_Agent\r\n```\r\n\r\n### 2. 
Memory Selection\r\n\r\n```python\r\n# Short conversations (<10 exchanges)\r\nmemory = ConversationalBufferMemory()\r\n\r\n# Long conversations (need recent context)\r\nmemory = ConversationalWindowMemory(window_size=10)\r\n\r\n# Budget-conscious (control token usage)\r\nmemory = ConversationalTokenBufferMemory(max_tokens=2000, model=\"gpt-4\")\r\n\r\n# Very long conversations (need full context)\r\nmemory = ConversationalSummaryMemory(llm=llm)\r\n```\r\n\r\n### 3. Tool Design\r\n\r\n```python\r\n# GOOD: Clear, focused tool\r\ndef get_weather(city):\r\n    \"\"\"Get current weather for a specific city\"\"\"\r\n    # Implementation\r\n    return weather_data\r\n\r\n# BAD: Too broad, unclear purpose\r\ndef do_stuff(input):\r\n    \"\"\"Does various things\"\"\"\r\n    # Implementation\r\n```\r\n\r\n**Tool Best Practices:**\r\n\r\n- Clear, descriptive names\r\n- Focused functionality (one tool = one task)\r\n- Good error handling\r\n- Detailed docstrings\r\n- Return strings or serializable data\r\n\r\n### 4. Error Handling\r\n\r\n```python\r\nfrom llm.OpenAI_llm import OpenAILLM, OpenAILLMError\r\n\r\ntry:\r\n    llm = OpenAILLM(model=\"gpt-4\", api_key=\"YOUR_API_KEY\")\r\n    response = llm.generate_response(\"Hello\")\r\nexcept OpenAILLMError as e:\r\n    # Handle LLM-specific errors\r\n    print(f\"LLM Error: {e}\")\r\nexcept Exception as e:\r\n    # Handle other errors\r\n    print(f\"Unexpected error: {e}\")\r\n```\r\n\r\n### 5. API Key Management\r\n\r\n```python\r\nimport os\r\nfrom dotenv import load_dotenv\r\n\r\n# Use environment variables\r\nload_dotenv()\r\n\r\nllm = GoogleLLM(\r\n    model=\"gemini-2.0-flash\",\r\n    api_key=os.getenv(\"GOOGLE_API_KEY\")  # From .env file\r\n)\r\n```\r\n\r\n**.env file:**\r\n```bash\r\nOPENAI_API_KEY=your_openai_key\r\nGOOGLE_API_KEY=your_google_key\r\nANTHROPIC_API_KEY=your_anthropic_key\r\nGROQ_API_KEY=your_groq_key\r\n```\r\n\r\n### 6. 
Verbose Mode Usage\r\n\r\n```python\r\n# Development: verbose=True for debugging\r\nagent = Create_ToolCalling_Agent(llm=llm, verbose=True)\r\n\r\n# Production: verbose=False for clean logs\r\nagent = Create_ToolCalling_Agent(llm=llm, verbose=False)\r\n```\r\n\r\n## Advanced Usage\r\n\r\n### Custom Tool with Complex Parameters\r\n\r\n```python\r\ndef temperature_converter(temperature, from_unit, to_unit):\r\n    \"\"\"Convert temperature between Celsius, Fahrenheit, and Kelvin\r\n    \r\n    Args:\r\n        temperature: Temperature value\r\n        from_unit: Source unit (C, F, or K)\r\n        to_unit: Target unit (C, F, or K)\r\n    \"\"\"\r\n    temp = float(temperature)\r\n    from_unit = from_unit.upper()\r\n    to_unit = to_unit.upper()\r\n    \r\n    # Convert to Celsius first\r\n    if from_unit == 'F':\r\n        celsius = (temp - 32) * 5/9\r\n    elif from_unit == 'K':\r\n        celsius = temp - 273.15\r\n    else:\r\n        celsius = temp\r\n    \r\n    # Convert to target\r\n    if to_unit == 'F':\r\n        result = (celsius * 9/5) + 32\r\n    elif to_unit == 'K':\r\n        result = celsius + 273.15\r\n    else:\r\n        result = celsius\r\n    \r\n    return f\"{result:.2f} {to_unit}\"\r\n\r\nagent.add_tool(\r\n    \"temperature_converter\",\r\n    \"Convert temperature between Celsius (C), Fahrenheit (F), and Kelvin (K)\",\r\n    temperature_converter\r\n)\r\n```\r\n\r\n### Multi-Agent System\r\n\r\n```python\r\n# Create specialized agents\r\nresearch_agent = Create_ToolCalling_Agent(llm=llm1)\r\nwiki = WikipediaTool()\r\nwiki.add_to_agent(research_agent)\r\n\r\nanalysis_agent = Create_Deep_Reasoning_Tool_Calling_Agent(llm=llm2)\r\n# Add analysis tools...\r\n\r\n# Coordinate agents\r\ndef coordinated_research(query):\r\n    # First agent gathers information\r\n    info = research_agent.run(f\"Find information about {query}\")\r\n    \r\n    # Second agent analyzes\r\n    analysis = analysis_agent.run(f\"Analyze this information: {info}\")\r\n    \r\n    return analysis\r\n```\r\n\r\n### Dynamic Tool Loading\r\n\r\n```python\r\ndef load_tools_from_config(agent, config):\r\n    \"\"\"Load tools from configuration\"\"\"\r\n    for tool_config in config[\"tools\"]:\r\n        name = tool_config[\"name\"]\r\n        description = tool_config[\"description\"]\r\n        \r\n        # Dynamically import tool function\r\n        module = __import__(tool_config[\"module\"])\r\n        function = getattr(module, tool_config[\"function\"])\r\n        \r\n        agent.add_tool(name, description, function)\r\n```\r\n\r\n### Streaming Responses (Future Enhancement)\r\n\r\n```python\r\n# Note: Streaming support coming in future version\r\n# Current: Get complete response\r\nresponse = agent.run(\"Tell me about AI\")\r\n\r\n# Future: Stream response chunks\r\nfor chunk in agent.run_stream(\"Tell me about AI\"):\r\n    print(chunk, end=\"\", flush=True)\r\n```\r\n\r\n## Project Structure\r\n\r\n```\r\nCodemni/\r\n\u2502\r\n\u251c\u2500\u2500 __init__.py                      # Package initialization\r\n\u251c\u2500\u2500 pyproject.toml                   # Project configuration\r\n\u251c\u2500\u2500 requirements.txt                 # Dependencies\r\n\u251c\u2500\u2500 LICENSE                          # Proprietary license\r\n\u251c\u2500\u2500 README.md                        # This file\r\n\u2502\r\n\u251c\u2500\u2500 Agents/                          # AI Agent implementations\r\n\u2502   \u251c\u2500\u2500 __init__.py\r\n\u2502   \u251c\u2500\u2500 TOOL_CALLING_AGENT/\r\n\u2502   
\u2502   \u251c\u2500\u2500 __init__.py\r\n\u2502   \u2502   \u251c\u2500\u2500 agent.py                 # Standard agent\r\n\u2502   \u2502   \u251c\u2500\u2500 prompt.py                # Agent prompts\r\n\u2502   \u2502   \u2514\u2500\u2500 README.md                # Agent documentation\r\n\u2502   \u251c\u2500\u2500 REASONING_TOOL_CALLING_AGENT/\r\n\u2502   \u2502   \u251c\u2500\u2500 __init__.py\r\n\u2502   \u2502   \u251c\u2500\u2500 agent.py                 # Reasoning agent\r\n\u2502   \u2502   \u251c\u2500\u2500 prompt.py\r\n\u2502   \u2502   \u2514\u2500\u2500 README.md\r\n\u2502   \u2514\u2500\u2500 DEEP_REASONING_TOOL_CALLING_AGENT/\r\n\u2502       \u251c\u2500\u2500 __init__.py\r\n\u2502       \u251c\u2500\u2500 agent.py                 # Deep reasoning agent\r\n\u2502       \u251c\u2500\u2500 prompt.py\r\n\u2502       \u2514\u2500\u2500 README.md\r\n\u2502\r\n\u251c\u2500\u2500 llm/                             # LLM provider wrappers\r\n\u2502   \u251c\u2500\u2500 __init__.py\r\n\u2502   \u251c\u2500\u2500 OpenAI_llm.py               # OpenAI integration\r\n\u2502   \u251c\u2500\u2500 Google_llm.py               # Google Gemini integration\r\n\u2502   \u251c\u2500\u2500 Anthropic_llm.py            # Anthropic Claude integration\r\n\u2502   \u251c\u2500\u2500 Groq_llm.py                 # Groq integration\r\n\u2502   \u251c\u2500\u2500 Ollama_llm.py               # Ollama integration\r\n\u2502   \u2514\u2500\u2500 README.md                    # LLM documentation\r\n\u2502\r\n\u251c\u2500\u2500 memory/                          # Conversation memory\r\n\u2502   \u251c\u2500\u2500 __init__.py\r\n\u2502   \u251c\u2500\u2500 conversational_buffer_memory.py\r\n\u2502   \u251c\u2500\u2500 conversational_window_memory.py\r\n\u2502   \u251c\u2500\u2500 conversational_token_buffer_memory.py\r\n\u2502   \u251c\u2500\u2500 conversational_summary_memory.py\r\n\u2502   \u2514\u2500\u2500 README.md                    # Memory documentation\r\n\u2502\r\n\u251c\u2500\u2500 Prebuild_Tools/                  # Ready-to-use tools\r\n\u2502   \u251c\u2500\u2500 __init__.py\r\n\u2502   \u251c\u2500\u2500 README.md\r\n\u2502   \u2514\u2500\u2500 Wikipedia_tool/\r\n\u2502       \u251c\u2500\u2500 __init__.py\r\n\u2502       \u251c\u2500\u2500 wikipedia_tool.py\r\n\u2502       \u2514\u2500\u2500 README.md\r\n\u2502\r\n\u2514\u2500\u2500 core/                            # Core utilities\r\n    \u251c\u2500\u2500 __init__.py\r\n    \u2514\u2500\u2500 adapter.py                   # Tool execution engine\r\n```\r\n\r\n## Development\r\n\r\n### Setting Up Development Environment\r\n\r\n```bash\r\n# Clone repository (if contributing)\r\ngit clone https://github.com/CodexJitin/Codemni.git\r\ncd Codemni\r\n\r\n# Create virtual environment\r\npython -m venv venv\r\nsource venv/bin/activate  # On Windows: venv\\Scripts\\activate\r\n\r\n# Install in development mode\r\npip install -e .[dev]\r\n\r\n# Install all providers\r\npip install -e .[all]\r\n```\r\n\r\n### Code Quality\r\n\r\n```bash\r\n# Format code\r\nblack .\r\n\r\n# Lint code\r\nflake8 .\r\n\r\n# Type checking\r\nmypy .\r\n```\r\n\r\n## Contributing\r\n\r\nCodemni is proprietary software. However, we welcome:\r\n\r\n- **Bug reports** - Submit issues on GitHub\r\n- **Feature requests** - Share your ideas\r\n- **Documentation improvements** - Help others learn\r\n- **Example contributions** - Share your use cases\r\n\r\n**To contribute:**\r\n\r\n1. Open an issue describing your contribution\r\n2. Wait for approval from maintainers\r\n3. 
<div align="center">

**Made by CodexJitin**

**Star this project if you find it useful!**

[Report Bug](https://github.com/CodexJitin/Codemni/issues) · [Request Feature](https://github.com/CodexJitin/Codemni/issues) · [Documentation](https://github.com/CodexJitin/Codemni#readme)

</div>
    "bugtrack_url": null,
    "license": null,
    "summary": "Build Intelligent AI Agents with Full Control and Zero Complexity - Lightweight, modular Python toolkit for AI development",
    "version": "1.2.3",
    "project_urls": {
        "Bug Tracker": "https://github.com/CodexJitin/Codemni/issues",
        "Documentation": "https://github.com/CodexJitin/Codemni#readme",
        "Homepage": "https://github.com/CodexJitin/Codemni",
        "Repository": "https://github.com/CodexJitin/Codemni"
    },
    "split_keywords": [
        "ai",
        " llm",
        " agent",
        " openai",
        " anthropic",
        " google",
        " gemini",
        " groq",
        " ollama",
        " memory",
        " tool-calling"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "709b1600432213db30cdaa5945a94b67e462baecfeea8f01b20fe3af2ffad358",
                "md5": "dbb4cc974466dbbb41f8dd1c64e8a573",
                "sha256": "e76c4fc5a3391bf79674d56d0b29c63780e591d750a8eceefafeee7437a2b76a"
            },
            "downloads": -1,
            "filename": "codemni-1.2.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "dbb4cc974466dbbb41f8dd1c64e8a573",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 95345,
            "upload_time": "2025-10-26T05:00:18",
            "upload_time_iso_8601": "2025-10-26T05:00:18.970139Z",
            "url": "https://files.pythonhosted.org/packages/70/9b/1600432213db30cdaa5945a94b67e462baecfeea8f01b20fe3af2ffad358/codemni-1.2.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "bc6704176a609bee6da54c9438c8f702e6c00b627ec69456e93a4c03d9ddee63",
                "md5": "7537f58866de9972213b51a7e6fe2990",
                "sha256": "bf2a255b8bf93618efef27fac0af97eb1b28e169f6e8a7912b615d9b9be6c313"
            },
            "downloads": -1,
            "filename": "codemni-1.2.3.tar.gz",
            "has_sig": false,
            "md5_digest": "7537f58866de9972213b51a7e6fe2990",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 106299,
            "upload_time": "2025-10-26T05:00:20",
            "upload_time_iso_8601": "2025-10-26T05:00:20.772537Z",
            "url": "https://files.pythonhosted.org/packages/bc/67/04176a609bee6da54c9438c8f702e6c00b627ec69456e93a4c03d9ddee63/codemni-1.2.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-26 05:00:20",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "CodexJitin",
    "github_project": "Codemni",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "requests",
            "specs": [
                [
                    ">=",
                    "2.31.0"
                ]
            ]
        },
        {
            "name": "python-dotenv",
            "specs": [
                [
                    ">=",
                    "1.0.0"
                ]
            ]
        }
    ],
    "lcname": "codemni"
}
        
Elapsed time: 1.23757s