Name | ai-helper-agent |
Version | 1.0.1 |
home_page | None |
Summary | Interactive AI Helper Agent for code assistance, analysis, and bug fixing |
upload_time | 2025-07-29 19:00:49 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.8 |
license | MIT License
Copyright (c) 2025 AI Helper Agent
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |
keywords | ai, assistant, code-analysis, bug-fixing, automation |
VCS | GitHub |
bugtrack_url | None |
requirements | langchain-groq, langchain, structlog, pathlib2 |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# AI Helper Agent
[PyPI](https://badge.fury.io/py/ai-helper-agent)
[Project page](https://pypi.org/project/ai-helper-agent/)
[License: MIT](https://opensource.org/licenses/MIT)
An interactive AI assistant for code analysis, bug fixing, and development automation. Powered by advanced language models to provide intelligent code assistance.
> **⚠️ Caution:**
> This tool is currently in **beta** and under **active development**. At the moment, only the **GROQ provider** is supported for CLI-based usage.
> Support for additional LLM providers will be added in the near future.
>
> To use the tool, generate a GROQ API key from:
> [https://groq.com/](https://groq.com/)
> [https://console.groq.com/docs/overview](https://console.groq.com/docs/overview)
## 🚀 Features

- **🔧 Code Analysis & Bug Fixing**: Analyze Python, JavaScript, TypeScript, and other programming languages
- **📁 File Operations**: Read, analyze, and modify files with intelligent suggestions
- **🚀 Best Practices**: Follow language-specific conventions and implement proper error handling
- **🤖 Interactive Assistant**: Conversational interface for natural code assistance
- **⚡ Fast & Reliable**: Built with performance and accuracy in mind
## 📦 Installation
### From PyPI (Recommended)
```bash
pip install ai-helper-agent
```
### From Source
```bash
git clone https://github.com/AIMLDev726/ai-helper-agent.git
cd ai-helper-agent
# Install production dependencies
pip install -r requirements.txt
# Or install in development mode with all dependencies
pip install -r requirements-dev.txt
pip install -e .
```
## 🎯 Quick Start
### Command Line Interface
After installation, you can use the CLI directly:
```bash
# Interactive programming assistant
ai-helper-agent
# Show help
ai-helper-agent --help
# Show version
ai-helper-agent --version
```
The CLI provides an interactive session with conversation history and specialized commands:
```
AI Helper Agent CLI
Type 'help' for commands, 'exit' to quit
> help
Available commands:
- /analyze <file>: Analyze a code file
- /debug <file>: Debug code issues
- /optimize <file>: Suggest optimizations
- /explain <concept>: Explain programming concepts
- /history: Show conversation history
- /clear: Clear conversation history
- /help: Show this help
- /exit or /quit: Exit the program
> /analyze my_script.py
[Agent analyzes your code and provides feedback]
> How can I improve error handling in Python?
[Agent provides detailed explanation with examples]
> /exit
```
## ⚡ **Important: Async vs Sync Usage**

**AI Helper Agent uses async methods for AI operations but provides sync alternatives:**

| Method | Type | Usage |
|--------|------|-------|
| `agent.chat()` | ⚡ **Async** | `await agent.chat("message")` or `asyncio.run(agent.chat("message"))` |
| `agent.analyze_code()` | ⚡ **Async** | `await agent.analyze_code(code)` or `asyncio.run(agent.analyze_code(code))` |
| `agent.interactive_session()` | 🔄 **Sync** | `agent.interactive_session()` (handles async internally) |
| `validate_python_code()` | 🔄 **Sync** | `validate_python_code(code)` |
| `run_python_code()` | 🔄 **Sync** | `run_python_code(code)` |

**👍 Recommended for beginners:** Use `agent.interactive_session()` - it's synchronous and handles all async operations internally!
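For a concrete feel of the difference, here is a minimal sketch of the pattern. The `chat_sync` helper below is hypothetical (not part of the package); it simply wraps the async `chat()` call in `asyncio.run()`, which is roughly what the built-in sync wrappers are described as doing, and it assumes `chat()` returns a string.

```python
import asyncio

from ai_helper_agent import InteractiveAgent


def chat_sync(agent: InteractiveAgent, message: str) -> str:
    """Hypothetical helper: drive the async chat() method from plain
    synchronous code via asyncio.run() (assumes chat() returns a string)."""
    return asyncio.run(agent.chat(message))


agent = InteractiveAgent(api_key="your_groq_api_key_here")
print(chat_sync(agent, "Explain Python list comprehensions"))
```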
## 🛠️ Library Usage
### Basic Usage
#### **Option 1: Simple Interactive Session (Recommended for Beginners)**
```python
from ai_helper_agent import InteractiveAgent

# Initialize with API key directly - no environment variables needed!
agent = InteractiveAgent(
    workspace_path="./my_project",
    api_key="your_groq_api_key_here",
    model="llama3-8b-8192"  # Optional
)

# Start interactive chat session (handles async internally)
agent.interactive_session()
```
#### **Option 2: Programmatic Usage (Async)**
```python
import asyncio
from ai_helper_agent import InteractiveAgent

async def main():
    # Initialize the agent
    agent = InteractiveAgent(
        workspace_path="./my_project",
        api_key="your_groq_api_key_here"
    )

    # Analyze code
    result = await agent.analyze_code("def hello(): print('Hello World')")
    print(result)

    # Interactive conversation
    response = await agent.chat("Help me fix this Python function")
    print(response)

# Run the async function
asyncio.run(main())
```
#### **Option 3: Environment Variable (Traditional)**
```python
import os
from ai_helper_agent import InteractiveAgent

# Set environment variable
os.environ["GROQ_API_KEY"] = "your_groq_api_key_here"

# Initialize without API key parameter
agent = InteractiveAgent(workspace_path="./my_project")

# Use interactive session
agent.interactive_session()
```
#### **Option 4: Utility Functions (Synchronous)**
```python
from ai_helper_agent import validate_python_code, run_python_code

# These utility functions are synchronous - no async needed
result = validate_python_code("print('Hello World')")
print(result)  # {'valid': True, 'error': None}

output = run_python_code("print('Hello from AI!')")
print(output)  # {'success': True, 'stdout': 'Hello from AI!\n', ...}
```
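Because both utilities return plain dictionaries (as shown in the comments above), they compose naturally. The sketch below validates a snippet before running it, using the documented `valid`, `error`, `success`, and `stdout` keys; the exact payload on failure may differ.

```python
from ai_helper_agent import validate_python_code, run_python_code

snippet = "for i in range(3):\n    print(i * i)"

# Validate first, then execute only if the syntax check passes.
check = validate_python_code(snippet)
if check.get("valid"):
    result = run_python_code(snippet)
    if result.get("success"):
        print(result.get("stdout"))
    else:
        print("Execution failed:", result)
else:
    print("Syntax error:", check.get("error"))
```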
### Command Line Interface
**Note:** Set your API key as an environment variable first:
```bash
export GROQ_API_KEY="your_groq_api_key_here"  # Linux/Mac
# or
set GROQ_API_KEY=your_groq_api_key_here       # Windows
```

```bash
# Start interactive session
ai-helper

# Start with a specific workspace
ai-helper --workspace ./my_project

# Analyze a specific file
ai-helper analyze myfile.py

# Get help
ai-helper --help
```
## 📖 Documentation
### Core Classes
#### `InteractiveAgent`
The main class for interacting with the AI assistant.
```python
class InteractiveAgent:
    def __init__(self, llm=None, workspace_path=".", api_key=None, model=None):
        """
        Initialize the AI assistant

        Args:
            llm: Language model instance (optional)
            workspace_path: Path to your project workspace
            api_key: Groq API key (optional, will use env var if not provided)
            model: Model name to use (optional, defaults to llama3-8b-8192)
        """
```
#### Key Methods
- `async analyze_code(code: str, filename: str)` - Analyze code for issues and improvements ⚡ **Async**
- `async chat(message: str)` - Interactive conversation with the AI ⚡ **Async**
- `async fix_code(code: str, issues: str, filename: str)` - Fix code issues ⚡ **Async**
- `read_file(file_path: str)` - Read files from workspace 🔄 **Sync**
- `write_file(file_path: str, content: str)` - Write files to workspace 🔄 **Sync**
- `interactive_session()` - Start interactive chat session 🔄 **Sync** (handles async internally)
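As a rough illustration of how these methods fit together, the hypothetical `review_and_fix` sketch below reads a file, analyzes it, asks for a fix, and writes the result to a new file. It assumes `read_file` returns the file contents as a string and that `fix_code` returns revised source text; the actual return types may differ.

```python
import asyncio

from ai_helper_agent import InteractiveAgent


async def review_and_fix(path: str) -> None:
    agent = InteractiveAgent(workspace_path=".", api_key="your_groq_api_key_here")

    # Sync helper: read the source file from the workspace.
    code = agent.read_file(path)

    # Async analysis: ask the agent what could be improved.
    issues = await agent.analyze_code(code, path)
    print("Analysis:\n", issues)

    # Async fix pass; write the (assumed string) result to a new file
    # rather than overwriting the original.
    fixed = await agent.fix_code(code, issues, path)
    agent.write_file(f"fixed_{path}", str(fixed))


if __name__ == "__main__":
    asyncio.run(review_and_fix("my_script.py"))
```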
### Utility Functions
```python
from ai_helper_agent.utils import validate_python_code, run_python_code

# Validate Python syntax
result = validate_python_code("print('hello')")

# Run Python code safely
output = run_python_code("print('Hello World')")
```
## 🔧 Configuration
### Environment Variables
```bash
# Required: Set your Groq API key
export GROQ_API_KEY="your_api_key_here"
# Optional: Other model API keys
export OPENAI_API_KEY="your_openai_key"
export ANTHROPIC_API_KEY="your_anthropic_key"
# Optional: Security settings
export FILE_ACCESS_MODE="ask" # always, ask, never
export CODE_EXECUTION="restricted" # restricted, sandboxed, disabled
# Optional: Set workspace path
export AI_HELPER_WORKSPACE="/path/to/your/project"
```
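The same settings can also be applied from Python before the agent is created. The snippet below is a minimal sketch that simply sets the variables listed above via `os.environ` (the values shown are placeholders); how the optional variables are consumed is described by the list above.

```python
import os

from ai_helper_agent import InteractiveAgent

# Mirror the shell exports above from Python (placeholder values).
os.environ["GROQ_API_KEY"] = "your_api_key_here"
os.environ["FILE_ACCESS_MODE"] = "ask"         # always, ask, never
os.environ["CODE_EXECUTION"] = "restricted"    # restricted, sandboxed, disabled
os.environ["AI_HELPER_WORKSPACE"] = "/path/to/your/project"

agent = InteractiveAgent()  # picks up GROQ_API_KEY from the environment
agent.interactive_session()
```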
### Advanced Configuration
#### **Method 1: Direct Parameters (Easiest)**
```python
from ai_helper_agent import InteractiveAgent

# Direct configuration - no environment variables needed
agent = InteractiveAgent(
    api_key="your_groq_api_key",
    model="mixtral-8x7b-32768",  # or llama3-8b-8192, llama3-70b-8192
    workspace_path="./my_project"
)

# Use with async
import asyncio

async def example():
    result = await agent.chat("Help me write a function")
    print(result)

asyncio.run(example())
```
#### **Method 2: Custom LLM Configuration**
```python
from langchain_groq import ChatGroq
from ai_helper_agent import InteractiveAgent

# Custom LLM configuration
llm = ChatGroq(
    model="mixtral-8x7b-32768",
    temperature=0.1,
    api_key="your_api_key"
)

agent = InteractiveAgent(llm=llm, workspace_path="./")
```
#### **Method 3: Helper Function**
```python
from ai_helper_agent import create_agent

# Quick creation with all options
agent = create_agent(
    api_key="your_groq_api_key",
    model="llama3-8b-8192",
    workspace_path="./project"
)

# Start interactive session
agent.interactive_session()
```
## 🧪 Testing
Run the test suite:
```bash
# Install test dependencies
pip install -e ".[test]"
# Run tests
pytest
# Run with coverage
pytest --cov=ai_helper_agent
```
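If you want a quick starting point for tests of your own, a minimal, hypothetical pytest module exercising the synchronous utilities might look like the sketch below. It assumes the return keys documented above (and that invalid code yields `valid: False`); it is not part of the package's own test suite.

```python
# test_utils_example.py - hypothetical example tests, not part of the package.
from ai_helper_agent import validate_python_code, run_python_code


def test_validate_accepts_valid_code():
    result = validate_python_code("x = 1 + 1")
    assert result.get("valid") is True


def test_validate_rejects_syntax_error():
    result = validate_python_code("def broken(:")
    assert result.get("valid") is False


def test_run_captures_stdout():
    result = run_python_code("print('ok')")
    assert result.get("success") is True
    assert "ok" in result.get("stdout", "")
```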
## 🔧 **Troubleshooting**
### **Common Issues & Solutions**
#### **1. "RuntimeWarning: coroutine was never awaited"**
```python
# ❌ Wrong - will cause warning
result = agent.chat("hello")

# ✅ Correct - Option 1: Use asyncio
import asyncio
result = asyncio.run(agent.chat("hello"))

# ✅ Correct - Option 2: Use in async function
async def main():
    result = await agent.chat("hello")
    print(result)
asyncio.run(main())

# ✅ Correct - Option 3: Use interactive session (easiest)
agent.interactive_session()  # Handles async internally
```
#### **2. "GROQ_API_KEY not found"**
```python
# ✅ Solution 1: Pass API key directly (easiest)
agent = InteractiveAgent(api_key="your_key_here")

# ✅ Solution 2: Set environment variable
import os
os.environ["GROQ_API_KEY"] = "your_key_here"
agent = InteractiveAgent()
```
#### **3. "Import errors" during development**
```bash
# Install dependencies first
pip install -r requirements.txt
# or for development
pip install -r requirements-dev.txt
```
#### **4. Best Practices**
```python
# ✅ For beginners - use interactive session
from ai_helper_agent import InteractiveAgent
agent = InteractiveAgent(api_key="your_key")
agent.interactive_session()

# ✅ For advanced users - proper async handling
import asyncio
from ai_helper_agent import InteractiveAgent

async def main():
    agent = InteractiveAgent(api_key="your_key")
    result = await agent.chat("Help me code")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
### Development Setup
```bash
# Clone the repository
git clone https://github.com/AIMLDev726/ai-helper-agent.git
cd ai-helper-agent
# Install development dependencies
pip install -r requirements-dev.txt
# Install in development mode
pip install -e .
# Install pre-commit hooks
pre-commit install
# Run tests
pytest
```
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built with [LangChain](https://github.com/langchain-ai/langchain)
- Powered by [Groq](https://groq.com/) language models
- Inspired by the need for intelligent code assistance
## 📞 Support

- 📧 Email: aistudentlearn4@gmail.com
- 🐛 Issues: [GitHub Issues](https://github.com/AIMLDev726/ai-helper-agent/issues)
- 💬 Discussions: [GitHub Discussions](https://github.com/AIMLDev726/ai-helper-agent/discussions)

## 🗺️ Roadmap
- [ ] Support for more programming languages
- [ ] Integration with popular IDEs
- [ ] Advanced code refactoring capabilities
- [ ] Team collaboration features
- [ ] Plugin system for extensibility
---
**Made with ❤️ by the AI Helper Agent team**
Raw data
{
"_id": null,
"home_page": null,
"name": "ai-helper-agent",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": "AIMLDev726 <aistudentlearn4@gmail.com>",
"keywords": "ai, assistant, code-analysis, bug-fixing, automation",
"author": null,
"author_email": "AIStudent <aistudentlearn4@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/3a/c9/ef0c608fed9fc058426ab6966f2978f5cd2eee74561c663248d580ae5925/ai_helper_agent-1.0.1.tar.gz",
"platform": null,
"description": "# AI Helper Agent\r\n\r\n[](https://badge.fury.io/py/ai-helper-agent)\r\n[](https://pypi.org/project/ai-helper-agent/)\r\n[](https://opensource.org/licenses/MIT)\r\n\r\nAn interactive AI assistant for code analysis, bug fixing, and development automation. Powered by advanced language models to provide intelligent code assistance.\r\n\r\n\r\n> **\u26a0\ufe0f Caution:** \r\n> This tool is currently in **beta** and under **active development**. At the moment, only the **GROQ provider** is supported for CLI-based usage. \r\n> Support for additional LLM providers will be added in the near future. \r\n> \r\n> To use the tool, generate a GROQ API key from: \r\n> [https://groq.com/](https://groq.com/) \r\n> [https://console.groq.com/docs/overview](https://console.groq.com/docs/overview)\r\n\r\n\r\n## \ud83d\ude80 Features\r\n\r\n- **\ud83d\udd27 Code Analysis & Bug Fixing**: Analyze Python, JavaScript, TypeScript, and other code languages\r\n- **\ud83d\udcc1 File Operations**: Read, analyze, and modify files with intelligent suggestions\r\n- **\ud83d\ude80 Best Practices**: Follow language-specific conventions and implement proper error handling\r\n- **\ud83e\udd16 Interactive Assistant**: Conversational interface for natural code assistance\r\n- **\u26a1 Fast & Reliable**: Built with performance and accuracy in mind\r\n\r\n## \ud83d\udce6 Installation\r\n\r\n### From PyPI (Recommended)\r\n\r\n```bash\r\npip install ai-helper-agent\r\n```\r\n\r\n### From Source\r\n\r\n```bash\r\ngit clone https://github.com/AIMLDev726/ai-helper-agent.git\r\ncd ai-helper-agent\r\n\r\n# Install production dependencies\r\npip install -r requirements.txt\r\n\r\n# Or install in development mode with all dependencies\r\npip install -r requirements-dev.txt\r\npip install -e .\r\n```\r\n\r\n## \ud83c\udfaf Quick Start\r\n\r\n### Command Line Interface\r\n\r\nAfter installation, you can use the CLI directly:\r\n\r\n```bash\r\n# Interactive programming assistant\r\nai-helper-agent\r\n\r\n# Show help\r\nai-helper-agent --help\r\n\r\n# Show version\r\nai-helper-agent --version\r\n```\r\n\r\nThe CLI provides an interactive session with conversation history and specialized commands:\r\n\r\n```\r\nAI Helper Agent CLI\r\nType 'help' for commands, 'exit' to quit\r\n\r\n> help\r\nAvailable commands:\r\n- /analyze <file>: Analyze a code file\r\n- /debug <file>: Debug code issues\r\n- /optimize <file>: Suggest optimizations\r\n- /explain <concept>: Explain programming concepts\r\n- /history: Show conversation history\r\n- /clear: Clear conversation history\r\n- /help: Show this help\r\n- /exit or /quit: Exit the program\r\n\r\n> /analyze my_script.py\r\n[Agent analyzes your code and provides feedback]\r\n\r\n> How can I improve error handling in Python?\r\n[Agent provides detailed explanation with examples]\r\n\r\n> /exit\r\n```\r\n\r\n## \u26a1 **Important: Async vs Sync Usage**\r\n\r\n**AI Helper Agent uses async methods for AI operations but provides sync alternatives:**\r\n\r\n| Method | Type | Usage |\r\n|--------|------|-------|\r\n| `agent.chat()` | \u26a1 **Async** | `await agent.chat(\"message\")` or `asyncio.run(agent.chat(\"message\"))` |\r\n| `agent.analyze_code()` | \u26a1 **Async** | `await agent.analyze_code(code)` or `asyncio.run(agent.analyze_code(code))` |\r\n| `agent.interactive_session()` | \ud83d\udd04 **Sync** | `agent.interactive_session()` (handles async internally) |\r\n| `validate_python_code()` | \ud83d\udd04 **Sync** | `validate_python_code(code)` |\r\n| `run_python_code()` | \ud83d\udd04 
**Sync** | `run_python_code(code)` |\r\n\r\n**\ud83d\udc4d Recommended for beginners:** Use `agent.interactive_session()` - it's synchronous and handles all async operations internally!\r\n\r\n## \ud83d\udee0\ufe0f Quick Start\r\n\r\n### Basic Usage\r\n\r\n#### **Option 1: Simple Interactive Session (Recommended for Beginners)**\r\n```python\r\nfrom ai_helper_agent import InteractiveAgent\r\n\r\n# Initialize with API key directly - no environment variables needed!\r\nagent = InteractiveAgent(\r\n workspace_path=\"./my_project\",\r\n api_key=\"your_groq_api_key_here\",\r\n model=\"llama3-8b-8192\" # Optional\r\n)\r\n\r\n# Start interactive chat session (handles async internally)\r\nagent.interactive_session()\r\n```\r\n\r\n#### **Option 2: Programmatic Usage (Async)**\r\n```python\r\nimport asyncio\r\nfrom ai_helper_agent import InteractiveAgent\r\n\r\nasync def main():\r\n # Initialize the agent\r\n agent = InteractiveAgent(\r\n workspace_path=\"./my_project\",\r\n api_key=\"your_groq_api_key_here\"\r\n )\r\n\r\n # Analyze code\r\n result = await agent.analyze_code(\"def hello(): print('Hello World')\")\r\n print(result)\r\n\r\n # Interactive conversation\r\n response = await agent.chat(\"Help me fix this Python function\")\r\n print(response)\r\n\r\n# Run the async function\r\nasyncio.run(main())\r\n```\r\n\r\n#### **Option 3: Environment Variable (Traditional)**\r\n```python\r\nimport os\r\nfrom ai_helper_agent import InteractiveAgent\r\n\r\n# Set environment variable\r\nos.environ[\"GROQ_API_KEY\"] = \"your_groq_api_key_here\"\r\n\r\n# Initialize without API key parameter\r\nagent = InteractiveAgent(workspace_path=\"./my_project\")\r\n\r\n# Use interactive session\r\nagent.interactive_session()\r\n```\r\n\r\n#### **Option 4: Utility Functions (Synchronous)**\r\n```python\r\nfrom ai_helper_agent import validate_python_code, run_python_code\r\n\r\n# These utility functions are synchronous - no async needed\r\nresult = validate_python_code(\"print('Hello World')\")\r\nprint(result) # {'valid': True, 'error': None}\r\n\r\noutput = run_python_code(\"print('Hello from AI!')\")\r\nprint(output) # {'success': True, 'stdout': 'Hello from AI!\\n', ...}\r\n```\r\n\r\n### Command Line Interface\r\n\r\n**Note:** Set your API key as environment variable first:\r\n```bash\r\nexport GROQ_API_KEY=\"your_groq_api_key_here\" # Linux/Mac\r\n# or\r\nset GROQ_API_KEY=your_groq_api_key_here # Windows\r\n```\r\n\r\n```bash\r\n# Start interactive session\r\nai-helper\r\n\r\n# Start with specific workspace\r\nai-helper --workspace ./my_project\r\n\r\n# Get help\r\nai-helper --help\r\n```\r\n\r\n### Command Line Interface\r\n\r\n```bash\r\n# Start interactive session\r\nai-helper\r\n\r\n# Analyze a specific file\r\nai-helper analyze myfile.py\r\n\r\n# Get help\r\nai-helper --help\r\n```\r\n\r\n## \ud83d\udcd6 Documentation\r\n\r\n### Core Classes\r\n\r\n#### `InteractiveAgent`\r\n\r\nThe main class for interacting with the AI assistant.\r\n\r\n```python\r\nclass InteractiveAgent:\r\n def __init__(self, llm=None, workspace_path=\".\", api_key=None, model=None):\r\n \"\"\"\r\n Initialize the AI assistant\r\n \r\n Args:\r\n llm: Language model instance (optional)\r\n workspace_path: Path to your project workspace\r\n api_key: Groq API key (optional, will use env var if not provided)\r\n model: Model name to use (optional, defaults to llama3-8b-8192)\r\n \"\"\"\r\n```\r\n\r\n#### Key Methods\r\n\r\n- `async analyze_code(code: str, filename: str)` - Analyze code for issues and improvements \u26a1 **Async**\r\n- `async 
chat(message: str)` - Interactive conversation with the AI \u26a1 **Async**\r\n- `async fix_code(code: str, issues: str, filename: str)` - Fix code issues \u26a1 **Async**\r\n- `read_file(file_path: str)` - Read files from workspace \ud83d\udd04 **Sync**\r\n- `write_file(file_path: str, content: str)` - Write files to workspace \ud83d\udd04 **Sync**\r\n- `interactive_session()` - Start interactive chat session \ud83d\udd04 **Sync** (handles async internally)\r\n\r\n### Utility Functions\r\n\r\n```python\r\nfrom ai_helper_agent.utils import validate_python_code, run_python_code\r\n\r\n# Validate Python syntax\r\nresult = validate_python_code(\"print('hello')\")\r\n\r\n# Run Python code safely\r\noutput = run_python_code(\"print('Hello World')\")\r\n```\r\n\r\n## \ud83d\udd27 Configuration\r\n\r\n### Environment Variables\r\n\r\n```bash\r\n# Required: Set your Groq API key\r\nexport GROQ_API_KEY=\"your_api_key_here\"\r\n\r\n# Optional: Other model API keys\r\nexport OPENAI_API_KEY=\"your_openai_key\"\r\nexport ANTHROPIC_API_KEY=\"your_anthropic_key\"\r\n\r\n# Optional: Security settings\r\nexport FILE_ACCESS_MODE=\"ask\" # always, ask, never\r\nexport CODE_EXECUTION=\"restricted\" # restricted, sandboxed, disabled\r\n\r\n# Optional: Set workspace path\r\nexport AI_HELPER_WORKSPACE=\"/path/to/your/project\"\r\n```\r\n\r\n### Advanced Configuration\r\n\r\n#### **Method 1: Direct Parameters (Easiest)**\r\n```python\r\nfrom ai_helper_agent import InteractiveAgent\r\n\r\n# Direct configuration - no imports needed\r\nagent = InteractiveAgent(\r\n api_key=\"your_groq_api_key\",\r\n model=\"mixtral-8x7b-32768\", # or llama3-8b-8192, llama3-70b-8192\r\n workspace_path=\"./my_project\"\r\n)\r\n\r\n# Use with async\r\nimport asyncio\r\n\r\nasync def example():\r\n result = await agent.chat(\"Help me write a function\")\r\n print(result)\r\n\r\nasyncio.run(example())\r\n```\r\n\r\n#### **Method 2: Custom LLM Configuration**\r\n```python\r\nfrom langchain_groq import ChatGroq\r\nfrom ai_helper_agent import InteractiveAgent\r\n\r\n# Custom LLM configuration\r\nllm = ChatGroq(\r\n model=\"mixtral-8x7b-32768\",\r\n temperature=0.1,\r\n api_key=\"your_api_key\"\r\n)\r\n\r\nagent = InteractiveAgent(llm=llm, workspace_path=\"./\")\r\n```\r\n\r\n#### **Method 3: Helper Function**\r\n```python\r\nfrom ai_helper_agent import create_agent\r\n\r\n# Quick creation with all options\r\nagent = create_agent(\r\n api_key=\"your_groq_api_key\",\r\n model=\"llama3-8b-8192\",\r\n workspace_path=\"./project\"\r\n)\r\n\r\n# Start interactive session\r\nagent.interactive_session()\r\n```\r\n\r\n## \ud83e\uddea Testing\r\n\r\nRun the test suite:\r\n\r\n```bash\r\n# Install test dependencies\r\npip install -e \".[test]\"\r\n\r\n# Run tests\r\npytest\r\n\r\n# Run with coverage\r\npytest --cov=ai_helper_agent\r\n```\r\n\r\n## \ud83d\udd27 **Troubleshooting**\r\n\r\n### **Common Issues & Solutions**\r\n\r\n#### **1. \"RuntimeWarning: coroutine was never awaited\"**\r\n```python\r\n# \u274c Wrong - will cause warning\r\nresult = agent.chat(\"hello\")\r\n\r\n# \u2705 Correct - Option 1: Use asyncio\r\nimport asyncio\r\nresult = asyncio.run(agent.chat(\"hello\"))\r\n\r\n# \u2705 Correct - Option 2: Use in async function\r\nasync def main():\r\n result = await agent.chat(\"hello\")\r\n print(result)\r\nasyncio.run(main())\r\n\r\n# \u2705 Correct - Option 3: Use interactive session (easiest)\r\nagent.interactive_session() # Handles async internally\r\n```\r\n\r\n#### **2. 
\"GROQ_API_KEY not found\"**\r\n```python\r\n# \u2705 Solution 1: Pass API key directly (easiest)\r\nagent = InteractiveAgent(api_key=\"your_key_here\")\r\n\r\n# \u2705 Solution 2: Set environment variable\r\nimport os\r\nos.environ[\"GROQ_API_KEY\"] = \"your_key_here\"\r\nagent = InteractiveAgent()\r\n```\r\n\r\n#### **3. \"Import errors\" during development**\r\n```bash\r\n# Install dependencies first\r\npip install -r requirements.txt\r\n# or for development\r\npip install -r requirements-dev.txt\r\n```\r\n\r\n#### **4. Best Practices**\r\n```python\r\n# \u2705 For beginners - use interactive session\r\nfrom ai_helper_agent import InteractiveAgent\r\nagent = InteractiveAgent(api_key=\"your_key\")\r\nagent.interactive_session()\r\n\r\n# \u2705 For advanced users - proper async handling\r\nimport asyncio\r\nfrom ai_helper_agent import InteractiveAgent\r\n\r\nasync def main():\r\n agent = InteractiveAgent(api_key=\"your_key\")\r\n result = await agent.chat(\"Help me code\")\r\n print(result)\r\n\r\nif __name__ == \"__main__\":\r\n asyncio.run(main())\r\n```\r\n\r\n## \ud83e\udd1d Contributing\r\n\r\nWe welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.\r\n\r\n### Development Setup\r\n\r\n```bash\r\n# Clone the repository\r\ngit clone https://github.com/AIMLDev726/ai-helper-agent.git\r\ncd ai-helper-agent\r\n\r\n# Install development dependencies\r\npip install -r requirements-dev.txt\r\n\r\n# Install in development mode\r\npip install -e .\r\n\r\n# Install pre-commit hooks\r\npre-commit install\r\n\r\n# Run tests\r\npytest\r\n```\r\n\r\n## \ud83d\udcc4 License\r\n\r\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\r\n\r\n## \ud83d\ude4f Acknowledgments\r\n\r\n- Built with [LangChain](https://github.com/langchain-ai/langchain)\r\n- Powered by [Groq](https://groq.com/) language models\r\n- Inspired by the need for intelligent code assistance\r\n\r\n## \ud83d\udcde Support\r\n\r\n- \ud83d\udce7 Email: aistudentlearn4@gmail.com\r\n- \ud83d\udc1b Issues: [GitHub Issues](https://github.com/AIMLDev726/ai-helper-agent/issues)\r\n- \ud83d\udcac Discussions: [GitHub Discussions](https://github.com/AIMLDev726/ai-helper-agent/discussions)\r\n\r\n## \ud83d\uddfa\ufe0f Roadmap\r\n\r\n- [ ] Support for more programming languages\r\n- [ ] Integration with popular IDEs\r\n- [ ] Advanced code refactoring capabilities\r\n- [ ] Team collaboration features\r\n- [ ] Plugin system for extensibility\r\n\r\n---\r\n\r\n**Made with \u2764\ufe0f by the AI Helper Agent team**\r\n",
"bugtrack_url": null,
"license": "MIT License\r\n \r\n Copyright (c) 2025 AI Helper Agent\r\n \r\n Permission is hereby granted, free of charge, to any person obtaining a copy\r\n of this software and associated documentation files (the \"Software\"), to deal\r\n in the Software without restriction, including without limitation the rights\r\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\r\n copies of the Software, and to permit persons to whom the Software is\r\n furnished to do so, subject to the following conditions:\r\n \r\n The above copyright notice and this permission notice shall be included in all\r\n copies or substantial portions of the Software.\r\n \r\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\r\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\r\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\r\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\r\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\r\n SOFTWARE.",
"summary": "Interactive AI Helper Agent for code assistance, analysis, and bug fixing",
"version": "1.0.1",
"project_urls": {
"Bug Tracker": "https://github.com/AIMLDev726/ai-helper-agent/issues",
"Changelog": "https://github.com/AIMLDev726/ai-helper-agent/blob/main/CHANGELOG.md",
"Documentation": "https://github.com/AIMLDev726/ai-helper-agent#readme",
"Homepage": "https://github.com/AIMLDev726/ai-helper-agent",
"Repository": "https://github.com/AIMLDev726/ai-helper-agent"
},
"split_keywords": [
"ai",
" assistant",
" code-analysis",
" bug-fixing",
" automation"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "e211f3c9847fa0351793def89a63b736ea09df54be13df3eba651ad95c4e7402",
"md5": "95607349aadd65519266b73dc961c96d",
"sha256": "c489af50435a1c0b5a12353ce3982f3ddf3a34a4afa24668a791f94635c0d137"
},
"downloads": -1,
"filename": "ai_helper_agent-1.0.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "95607349aadd65519266b73dc961c96d",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 22931,
"upload_time": "2025-07-29T19:00:47",
"upload_time_iso_8601": "2025-07-29T19:00:47.640242Z",
"url": "https://files.pythonhosted.org/packages/e2/11/f3c9847fa0351793def89a63b736ea09df54be13df3eba651ad95c4e7402/ai_helper_agent-1.0.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "3ac9ef0c608fed9fc058426ab6966f2978f5cd2eee74561c663248d580ae5925",
"md5": "6bf94f6f2f2674f0aa5703dedab3db8c",
"sha256": "84836ea88e0a9454f102d6bda2d4234208cc6112d334504a559b95bd200161be"
},
"downloads": -1,
"filename": "ai_helper_agent-1.0.1.tar.gz",
"has_sig": false,
"md5_digest": "6bf94f6f2f2674f0aa5703dedab3db8c",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 25127,
"upload_time": "2025-07-29T19:00:49",
"upload_time_iso_8601": "2025-07-29T19:00:49.141691Z",
"url": "https://files.pythonhosted.org/packages/3a/c9/ef0c608fed9fc058426ab6966f2978f5cd2eee74561c663248d580ae5925/ai_helper_agent-1.0.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-29 19:00:49",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "AIMLDev726",
"github_project": "ai-helper-agent",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [
{
"name": "langchain-groq",
"specs": [
[
">=",
"0.1.0"
]
]
},
{
"name": "langchain",
"specs": [
[
">=",
"0.1.0"
]
]
},
{
"name": "structlog",
"specs": [
[
">=",
"23.0.0"
]
]
},
{
"name": "pathlib2",
"specs": []
}
],
"lcname": "ai-helper-agent"
}