# GRAMI-AI: Dynamic AI Agent Framework
<div align="center">
  <img src="https://img.shields.io/badge/version-0.4.4-blue.svg" alt="Version">
<img src="https://img.shields.io/badge/python-3.8+-blue.svg" alt="Python Versions">
<img src="https://img.shields.io/badge/license-MIT-green.svg" alt="License">
<img src="https://img.shields.io/github/stars/YAFATEK/grami-ai?style=social" alt="GitHub Stars">
</div>
## 📋 Table of Contents
- [Overview](#-overview)
- [Key Features](#-key-features)
- [Installation](#-installation)
- [Quick Start](#-quick-start)
- [Provider Examples](#-provider-examples)
- [Memory Management](#-memory-management)
- [Streaming Capabilities](#-streaming-capabilities)
- [Development Roadmap](#-development-roadmap)
- [TODO List](#-todo-list)
- [Contributing](#-contributing)
- [License](#-license)
## 🌟 Overview
GRAMI-AI is a cutting-edge, async-first AI agent framework designed for building sophisticated AI applications. With support for multiple LLM providers, advanced memory management, and streaming capabilities, GRAMI-AI enables developers to create powerful, context-aware AI systems.
### Why GRAMI-AI?
- **Async-First**: Built for high-performance asynchronous operations
- **Provider Agnostic**: Unified interface for Gemini and OpenAI, with Anthropic and Ollama support planned
- **Advanced Memory**: LRU and Redis-based memory management
- **Streaming Support**: Efficient token-by-token streaming responses
- **Enterprise Ready**: Production-grade security and scalability
## 🚀 Key Features
### LLM Providers
- Gemini (Google)
- OpenAI (GPT models)
- Anthropic (Claude, planned)
- Ollama (local models, planned)
### Memory Management
- LRU Memory (In-memory caching)
- Redis Memory (Distributed caching)
- Custom memory providers
### Communication
- Synchronous messaging
- Asynchronous streaming
- WebSocket support
- Custom interfaces
## 💻 Installation
```bash
pip install grami-ai
```
## 🔑 API Key Setup
Before using GRAMI-AI, you need to set up your API keys. You can do this by setting environment variables:
```bash
export GEMINI_API_KEY="your-gemini-api-key"
# Or for other providers:
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
```
Or store them in a `.env` file:
```env
GEMINI_API_KEY=your-gemini-api-key
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
```
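If you keep keys in a `.env` file, load it before constructing a provider. A minimal sketch using the third-party `python-dotenv` package (not bundled with GRAMI-AI):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Read variables from .env into the process environment
load_dotenv()

api_key = os.getenv("GEMINI_API_KEY")
```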
## 🎯 Quick Start
Here's a simple example of how to create an AI agent using GRAMI-AI:
```python
from grami.agents import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory
import asyncio
import os

async def main():
    # Initialize memory and provider
    memory = LRUMemory(capacity=5)
    provider = GeminiProvider(
        api_key=os.getenv("GEMINI_API_KEY"),
        generation_config={
            "temperature": 0.9,
            "top_p": 0.9,
            "top_k": 40,
            "max_output_tokens": 1000,
            "candidate_count": 1
        }
    )

    # Create agent
    agent = AsyncAgent(
        name="MyAssistant",
        llm=provider,
        memory=memory,
        system_instructions="You are a helpful AI assistant."
    )

    # Example: streaming responses
    message = "Tell me a short story about AI."
    async for chunk in agent.stream_message(message):
        print(chunk, end="", flush=True)
    print("\n")

    # Example: non-streaming responses
    response = await agent.send_message("What's the weather like today?")
    print(f"Response: {response}")

if __name__ == "__main__":
    asyncio.run(main())
```
## 📚 Provider Examples
### Gemini Provider
```python
import asyncio

from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

async def main():
    # Initialize the provider
    provider = GeminiProvider(
        api_key="YOUR_API_KEY",
        model="gemini-pro",  # Optional, defaults to gemini-pro
        generation_config={  # Optional
            "temperature": 0.7,
            "top_p": 0.8,
            "top_k": 40
        }
    )

    # Add a memory provider
    memory = LRUMemory(capacity=100)
    provider.set_memory_provider(memory)

    # Regular message (await requires an async context)
    response = await provider.send_message("What is AI?")
    print(response)

    # Streaming response
    async for chunk in provider.stream_message("Tell me a story"):
        print(chunk, end="", flush=True)

asyncio.run(main())
```
## 🧠 Memory Management
### LRU Memory
```python
from grami.memory.lru import LRUMemory

# Initialize with capacity
memory = LRUMemory(capacity=100)

# Add to agent
agent = AsyncAgent(
    name="MemoryAgent",
    llm=provider,
    memory=memory
)
```
### Redis Memory
```python
from grami.memory.redis import RedisMemory

# Initialize Redis memory
memory = RedisMemory(
    host="localhost",
    port=6379,
    capacity=1000
)

# Add to provider
provider.set_memory_provider(memory)
```
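Custom memory providers can also be plugged in via `set_memory_provider`. A minimal sketch of a toy backend, assuming the expected interface is an async store/retrieve pair (the method names here are an assumption for illustration; check the `grami.memory` base class for the real contract):

```python
from collections import OrderedDict
from typing import Optional

class DictMemory:
    """Toy in-process memory backend (illustrative only)."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self._items = OrderedDict()

    async def store(self, key: str, value: dict) -> None:
        # Evict the oldest entry once capacity is reached
        if len(self._items) >= self.capacity:
            self._items.popitem(last=False)
        self._items[key] = value

    async def retrieve(self, key: str) -> Optional[dict]:
        return self._items.get(key)

# Attach it like any built-in memory provider
provider.set_memory_provider(DictMemory(capacity=50))
```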
## 🌊 Streaming Capabilities
### Basic Streaming
```python
async def stream_example():
    async for chunk in provider.stream_message("Generate a story"):
        print(chunk, end="", flush=True)
```
### Streaming with Memory
```python
async def stream_with_memory():
    # First message
    response = await provider.send_message("My name is Alice")

    # Stream follow-up (will remember context)
    async for chunk in provider.stream_message("What's my name?"):
        print(chunk, end="", flush=True)
```
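Streaming also pairs naturally with the WebSocket support listed under Communication. A minimal sketch of one way to relay streamed chunks over a FastAPI WebSocket endpoint (FastAPI and uvicorn appear in the package requirements; the `agent` is assumed to be configured as in the Quick Start, and this endpoint is illustrative rather than a built-in GRAMI-AI interface):

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/chat")
async def chat(websocket: WebSocket):
    await websocket.accept()
    while True:
        # Receive a user message, then relay the agent's
        # streamed chunks back as they arrive
        message = await websocket.receive_text()
        async for chunk in agent.stream_message(message):
            await websocket.send_text(chunk)
        await websocket.send_text("[DONE]")  # arbitrary end-of-response marker

# Run with: uvicorn app:app --reload
```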
## 🗺 Development Roadmap
### Core Framework Design
- [x] Implement AsyncAgent base class with dynamic configuration
- [x] Create flexible system instruction definition mechanism
- [x] Design abstract LLM provider interface
- [ ] Develop dynamic role and persona assignment system
- [x] Comprehensive async example configurations
  - [x] Memory with streaming
  - [x] Memory without streaming
  - [x] No memory with streaming
  - [x] No memory without streaming
- [ ] Implement multi-modal agent capabilities (text, image, video)
### LLM Provider Abstraction
- [x] Unified interface for diverse LLM providers
- [x] Google Gemini integration
  - [x] Basic message sending
  - [x] Streaming support
  - [x] Memory integration
- [ ] OpenAI ChatGPT integration
  - [x] Basic message sending
  - [x] Streaming implementation
  - [ ] Memory support
- [ ] Anthropic Claude integration
- [ ] Ollama local LLM support
- [ ] Standardize function/tool calling across providers
- [ ] Dynamic prompt engineering support
- [x] Provider-specific configuration handling
### Communication Interfaces
- [x] WebSocket real-time communication
- [ ] REST API endpoint design
- [ ] Kafka inter-agent communication
- [ ] gRPC support
- [x] Event-driven agent notification system
- [ ] Secure communication protocols
### Memory and State Management
- [x] Pluggable memory providers
- [x] In-memory state storage (LRU)
- [x] Redis distributed memory
- [ ] DynamoDB scalable storage
- [ ] S3 content storage
- [x] Conversation and task history tracking
- [ ] Global state management for agent crews
- [x] Persistent task and interaction logs
- [ ] Advanced memory indexing
- [ ] Memory compression techniques
### Tool and Function Ecosystem
- [x] Extensible tool integration framework
- [ ] Default utility tools
  - [ ] Kafka message publisher
  - [ ] Web search utility
  - [ ] Content analysis tool
- [x] Provider-specific function calling support
- [ ] Community tool marketplace
- [x] Easy custom tool development
### Agent Crew Collaboration
- [ ] Inter-agent communication protocol
- [ ] Workflow and task delegation mechanisms
- [ ] Approval and review workflows
- [ ] Notification and escalation systems
- [ ] Dynamic team composition
- [ ] Shared context and memory management
### Use Case Implementations
- [ ] Digital Agency workflow template
  - [ ] Growth Manager agent
  - [ ] Content Creator agent
  - [ ] Trend Researcher agent
  - [ ] Media Creation agent
- [ ] Customer interaction management
- [ ] Approval and revision cycles
### Security and Compliance
- [x] Secure credential management
- [ ] Role-based access control
- [x] Audit logging
- [ ] Compliance with data protection regulations
### Performance and Scalability
- [x] Async-first design
- [x] Horizontal scaling support
- [ ] Performance benchmarking
- [x] Resource optimization
### Testing and Quality
- [x] Comprehensive unit testing
- [x] Integration testing for agent interactions
- [x] Mocking frameworks for LLM providers
- [x] Continuous integration setup
### Documentation and Community
- [x] Detailed API documentation
- [x] Comprehensive developer guides
- [x] Example use case implementations
- [x] Contribution guidelines
- [ ] Community tool submission process
- [ ] Regular maintenance and updates
### Future Roadmap
- [ ] Payment integration solutions
- [ ] Advanced agent collaboration patterns
- [ ] Specialized industry-specific agents
- [ ] Enhanced security features
- [ ] Extended provider support
## 📝 TODO List
- [x] Add support for Gemini provider
- [x] Implement advanced caching strategies (LRU)
- [ ] Add WebSocket support for real-time communication
- [x] Create comprehensive test suite
- [x] Add support for function calling
- [ ] Implement conversation branching
- [ ] Add support for multi-modal inputs
- [x] Enhance error handling and logging
- [x] Add rate limiting and quota management
- [x] Create detailed API documentation
- [x] Add support for custom prompt templates
- [ ] Implement conversation summarization
- [x] Add support for multiple languages
- [ ] Implement fine-tuning capabilities
- [ ] Add support for model quantization
- [ ] Create a web-based demo
- [ ] Add support for batch processing
- [x] Implement conversation history export/import
- [ ] Add support for custom model hosting
- [ ] Create visualization tools for conversation flows
- [x] Implement automated testing pipeline
- [x] Add support for conversation analytics
- [x] Create deployment guides for various platforms
- [x] Implement automated documentation generation
- [x] Add support for model performance monitoring
- [x] Create benchmarking tools
- [ ] Implement A/B testing capabilities
- [x] Add support for custom tokenizers
- [x] Create model evaluation tools
- [x] Implement conversation templates
- [ ] Add support for conversation routing
- [x] Create debugging tools
- [x] Implement conversation validation
- [x] Add support for custom memory backends
- [x] Create conversation backup/restore features
- [x] Implement conversation filtering
- [x] Add support for conversation tagging
- [x] Create conversation search capabilities
- [ ] Implement conversation versioning
- [ ] Add support for conversation merging
- [x] Create conversation export formats
- [x] Implement conversation import validation
- [ ] Add support for conversation scheduling
- [x] Create conversation monitoring tools
- [ ] Implement conversation archiving
- [x] Add support for conversation encryption
- [x] Create conversation access control
- [x] Implement conversation rate limiting
- [x] Add support for conversation quotas
- [x] Create conversation usage analytics
- [x] Implement conversation cost tracking
- [x] Add support for conversation billing
- [x] Create conversation audit logs
- [x] Implement conversation compliance checks
- [x] Add support for conversation retention policies
- [x] Create conversation backup strategies
- [x] Implement conversation recovery procedures
- [x] Add support for conversation migration
- [x] Create conversation optimization tools
- [x] Implement conversation caching strategies
- [x] Add support for conversation compression
- [x] Create conversation performance metrics
- [x] Implement conversation health checks
- [x] Add support for conversation monitoring
- [x] Create conversation alerting system
- [x] Implement conversation debugging tools
- [x] Add support for conversation profiling
- [x] Create conversation testing framework
- [x] Implement conversation documentation
- [x] Add support for conversation examples
- [x] Create conversation tutorials
- [x] Implement conversation guides
- [x] Add support for conversation best practices
- [x] Create conversation security guidelines
## 🤝 Contributing
We welcome contributions! Please feel free to submit a Pull Request.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🔗 Links
- [PyPI Package](https://pypi.org/project/grami-ai/)
- [GitHub Repository](https://github.com/yafatek/grami-ai)
- [Documentation](https://docs.grami-ai.dev)
## 📧 Support
For support, email support@yafatek.dev or create an issue on GitHub.