prometheous

Name: prometheous
Version: 0.1.0
Home page: https://github.com/James4Ever0/prometheous
Summary: AI-powered documentation generation and vector indexing tool
Upload time: 2025-07-25 17:29:40
Author: James Brown
Requires Python: >=3.8
License: MIT
Keywords: documentation, ai, vector-search, code-analysis
Requirements: No requirements were recorded.

# Prometheous

AI-powered documentation generation and vector indexing tool for codebases.

## Overview

Prometheous is a comprehensive tool that combines automated code documentation generation with advanced vector indexing capabilities for Retrieval-Augmented Generation (RAG). It helps developers create beautiful, searchable documentation and enables intelligent code exploration through semantic search.

## Features

- 🤖 **AI-Powered Documentation**: Generate comprehensive documentation from your codebase using advanced language models
- 🔍 **Vector Indexing**: Create semantic search indexes for intelligent code exploration
- 💬 **RAG Chat Interface**: Interactive chat with your codebase using natural language queries
- 🌐 **Beautiful Web UI**: Modern, responsive documentation websites with search functionality
- 🔧 **Flexible Configuration**: Support for multiple AI providers (OpenAI, Ollama, etc.)
- 📦 **Easy Integration**: Simple CLI interface with sensible defaults

## Installation

```bash
pip install prometheous
```
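
To try the latest development version, installing straight from the GitHub repository should also work, assuming the repository builds as a standard Python package; the `git+` URL below just mirrors the project homepage, and `prom --help` is only a quick sanity check that the entry point landed on your PATH:

```bash
# Development install from GitHub (assumes a standard pyproject/setup.py build)
pip install "git+https://github.com/James4Ever0/prometheous.git"

# Quick sanity check that the `prom` entry point is available
prom --help
```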

## Quick Start

### Generate Documentation

```bash
# Generate documentation for current directory
prom doc

# Generate documentation with custom paths
prom doc --project-root ./src --doc-root ./docs --project-url https://github.com/user/repo
```

### Create Vector Index

```bash
# Create vector index for documentation
prom vec --source-path ./docs

# Create index with interactive chat
prom vec --source-path ./docs --interactive
```

## Configuration

### Environment Variables

```bash
# OpenAI Configuration
export OPENAI_API_KEY="your-api-key"
export OPENAI_API_BASE="https://api.openai.com/v1"  # or your custom endpoint

# Model Configuration
export PROMETHEOUS_MODEL_NAME="gpt-3.5-turbo"
export PROMETHEOUS_MAX_TOKENS=4096

# Embedding Configuration
export PROMETHEOUS_EMBEDDING_MODEL="text-embedding-ada-002"
export EMBEDDING_DIMENSION=1536

# Ollama Configuration (for local models)
export OLLAMA_BASE_URL="http://localhost:11434"
```
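
Rather than re-exporting these variables in every shell session, one option is to keep them in a small env file and load it before running the CLI. The `prometheous.env` file name below is purely illustrative; the tool itself only reads the environment:

```bash
# Write the settings once (placeholder values; keep real keys out of version control)
cat > prometheous.env <<'EOF'
OPENAI_API_KEY=your-api-key
OPENAI_API_BASE=https://api.openai.com/v1
PROMETHEOUS_MODEL_NAME=gpt-3.5-turbo
PROMETHEOUS_MAX_TOKENS=4096
PROMETHEOUS_EMBEDDING_MODEL=text-embedding-ada-002
EMBEDDING_DIMENSION=1536
EOF

# Export everything in the file into the current shell, then run the tool
set -a; source prometheous.env; set +a
prom doc
```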

## CLI Reference

### `prom doc` - Generate Documentation

| Option | Description | Default |
|--------|-------------|----------|
| `--project-root, -p` | Project root directory | `.` |
| `--doc-root, -d` | Documentation output directory | `./docs` |
| `--project-url, -u` | Project URL for links | `https://github.com/user/project` |
| `--model-name` | AI model to use | `gpt-3.5-turbo` |
| `--max-tokens` | Maximum tokens per request | `4096` |
| `--headless` | Run without browser preview | `false` |
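
Putting the table together, a fully spelled-out invocation using the short flags could look like the sketch below; it only restates the documented options with their default values, not an additional mode:

```bash
# Long-form run assembled from the options above;
# --headless skips the browser preview (e.g. on a machine without a display)
prom doc \
  -p . \
  -d ./docs \
  -u https://github.com/user/project \
  --model-name gpt-3.5-turbo \
  --max-tokens 4096 \
  --headless
```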

### `prom vec` - Vector Indexing

| Option | Description | Default |
|--------|-------------|----------|
| `--source-path, -s` | Source directory to index | Required |
| `--cache-dir, -c` | Cache directory for index | `./cache` |
| `--embedding-model` | Embedding model to use | `text-embedding-ada-002` |
| `--embedding-dimension` | Embedding vector dimension | `1536` |
| `--interactive, -i` | Start interactive chat | `false` |
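
For reference, the same options written out as one command; only `--source-path` is required, and the remaining values simply repeat the defaults from the table:

```bash
# Explicitly passing the documented defaults; only --source-path/-s is mandatory
prom vec \
  -s ./docs \
  -c ./cache \
  --embedding-model text-embedding-ada-002 \
  --embedding-dimension 1536 \
  -i  # drop -i to build the index without starting the chat
```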

## Examples

### Complete Documentation Workflow

```bash
# 1. Generate documentation
prom doc --project-root ./my-project --doc-root ./documentation

# 2. Create vector index
prom vec --source-path ./documentation --cache-dir ./vector-cache

# 3. Start interactive chat
prom vec --source-path ./documentation --interactive
```
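
If you rebuild documentation regularly, wrapping these steps in a small shell script keeps the run reproducible. The script below simply chains the commands shown above (the paths and the script name are examples, and `--headless` is added so no browser preview opens mid-script):

```bash
#!/usr/bin/env bash
# build-docs.sh -- regenerate docs and the vector index in one go (example wrapper)
set -euo pipefail

PROJECT_ROOT=./my-project
DOC_ROOT=./documentation
CACHE_DIR=./vector-cache

prom doc --project-root "$PROJECT_ROOT" --doc-root "$DOC_ROOT" --headless
prom vec --source-path "$DOC_ROOT" --cache-dir "$CACHE_DIR"

echo "Docs written to $DOC_ROOT, vector index cached in $CACHE_DIR"
```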

### Using with Local Models (Ollama)

```bash
# Set up Ollama environment
export OLLAMA_BASE_URL="http://localhost:11434"
export OPENAI_API_BASE="http://localhost:11434/v1"
export PROMETHEOUS_MODEL_NAME="llama2"
export PROMETHEOUS_EMBEDDING_MODEL="llama2"

# Generate documentation
prom doc --project-root ./src
```
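
Before pointing Prometheous at a local model, it helps to confirm that Ollama is running and the model has been pulled. Both commands below are standard Ollama tooling rather than part of Prometheous:

```bash
# Pull the model referenced in the environment variables above
ollama pull llama2

# List locally available models; llama2 should appear in the output
curl http://localhost:11434/api/tags
```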

## Development

For development setup and advanced usage, see:
- [Usage Guide](./usage.txt)
- [Project Understanding](./project_understanding.txt)
- [Legacy Documentation](./README.old.md)

## License

MIT License - see LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

            
