# crllm

- **Name**: crllm
- **Version**: 0.5.0
- **Home page**: https://lukasrump.github.io/crllm/
- **Summary**: Provides AI-powered code reviews using local or cloud-based Large Language Models (LLMs) to help developers improve code quality and catch bugs efficiently.
- **Upload time**: 2024-11-15 13:46:07
- **Author**: Lukas Rump
- **Requires Python**: <4.0, >=3.10
- **License**: MIT
- **Keywords**: code review, AI, LLM, static analysis, CLI
# CRLLM

**Effortlessly Get Code Reviews from Large Language Models!**

CRLLM is a powerful command-line tool that lets developers get code reviews from state-of-the-art Large Language Models (LLMs). Whether you want to use Ollama to run LLMs locally or connect to cloud services like OpenAI, Hugging Face, and Azure, CRLLM has you covered. Improve your code quality, catch potential bugs, and receive AI-driven suggestions on best practices, bug-prone areas, and potential refactoring.
Learn from the suggestions to gain new insights and techniques, and reduce back-and-forth during human code reviews by catching more issues upfront.

## 🚀 Features

- **Flexible LLM Options**: Use Ollama to run models locally or leverage APIs from cloud providers like OpenAI, Hugging Face, and Azure.
- **Comprehensive Code Review**: Get quality feedback on code, including recommendations for readability, efficiency, and best practices.
- **Developer Productivity**: Integrate seamlessly into your existing development environment to speed up code review processes.
- **Privacy Control**: Choose between local or cloud-based solutions depending on your privacy needs and computational power.
- **Multi-Language Support**: Review code written in various programming languages (e.g., Python, JavaScript, Java, etc.).
- **Git Support**: Get reviews for your Git changes or for differences between branches.

## 🛠️ Installation

To get started with crllm, follow these simple installation steps:

### Prerequisites

- **Python 3.10+**: Make sure you have Python installed (the package requires Python >=3.10, <4.0).
- **pipx**: https://pipx.pypa.io/stable/installation/
- **ollama**: https://ollama.com/download, needed if you want to run models locally; otherwise you will need the corresponding API keys for your cloud provider.



### Install from GitHub
```sh
pipx install git+https://github.com/lukasrump/crllm.git
```

### Install from PyPI
```sh
pipx install crllm
```

## 🌐 Configuration
CRLLM supports multiple backends for LLM code reviews. You can configure it by adding a configuration file `crllm_config.toml` in the root of your project. To initialize your project, run:

```bash
crllm -i .
```

This command guides you through the most important settings.
The TOML configuration file is split into four main sections:

### [project]
- **`description`**: Short project summary.

### [crllm]
- **`loader`**: Mechanism to load the source code, `"git"` by default.
- **`provider`**: LLM provider, `"ollama"` by default.
- **`git_main_branch`**: Specifies the main git branch, default is `"main"`.
- **`git_changed_lines`**: If `true`, only reviews changed lines.

#### Loaders
- **file**: Code review for a single source code file
- **git**: Reviews all changed files in the git repository
- **git_compare**: Reviews the difference between the current git branch and the `git_main_branch`
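
For example, to review a feature branch against `main` with the `git_compare` loader, the `[crllm]` section might look like this (a sketch; values are illustrative):

```toml
[crllm]
loader = "git_compare"      # diff the current branch against git_main_branch
git_main_branch = "main"
```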

### [model]
The model settings depend on the provider and are the same as those of the [LangChain](https://python.langchain.com/docs/integrations/chat/) ChatModels. By default, crllm tries to use a locally installed Ollama instance with llama3.1.

#### Ollama Local Setup
- **`model`**: Specifies the model to use, e.g `"llama3.1"`. Make sure that you pulled that model before you use it.

#### OpenAI API
- **`model`**: Specifies the model to use, e.g `"gpt-4o"`.

In addition you have to define the api key in your environment (`.env`)
```
OPENAI_API_KEY=your_openai_api_key
```
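
A minimal `[model]` section for this provider might then look like this (the key follows the description above; the value is illustrative):

```toml
[model]
model = "gpt-4o"
```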
#### Hugging Face API
- **`repo_id`**: Specifies the repository to use, e.g. `"HuggingFaceH4/zephyr-7b-beta"`.
- **`task`**: Specifies the task, e.g. `"text-generation"`.

In addition, you have to define the API token in your environment (`.env`):
```
HUGGINGFACEHUB_API_TOKEN=your_huggingface_api_key
```
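
A matching `[model]` section sketch (values are illustrative):

```toml
[model]
repo_id = "HuggingFaceH4/zephyr-7b-beta"
task = "text-generation"
```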

#### Azure OpenAI
- **`azure_deployment`**: Specifies the deployment to use, e.g. `"gpt-35-turbo"`.
- **`api_version`**: Specifies the API version to use, e.g. `"2023-06-01-preview"`.

In addition, you have to define these variables in your environment (`.env`):

```
AZURE_OPENAI_API_KEY=your_azure_api_key
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com
```
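
The corresponding `[model]` section might then read (illustrative values):

```toml
[model]
azure_deployment = "gpt-35-turbo"
api_version = "2023-06-01-preview"
```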

### [prompt]
- **`template`**: Override the prompt template that is used (optional).
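
Putting the sections together, a complete `crllm_config.toml` for a local Ollama setup might look like this (all values are illustrative):

```toml
[project]
description = "A small web service written in Python"

[crllm]
loader = "git"              # review all changed files in the repository
provider = "ollama"
git_main_branch = "main"
git_changed_lines = false   # review whole files, not just changed lines

[model]
model = "llama3.1"          # must be pulled locally first
```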

## ✨ Usage
CRLLM is designed to be easy to use right from your terminal. Below are some examples of how you can leverage the tool.

To perform a code review for a file or Git repository, run:
```sh
crllm path/to/your/codefile.py
```
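
With the `git` loader configured, you can also point crllm at the repository root instead of a single file (assuming the same positional-path interface as above):

```sh
# Review all changed files in the current repository
crllm .
```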

### Enabling RAG Support

To enhance code reviews with source context, enable RAG (Retrieval-Augmented Generation) in `crllm_config.toml`:

```toml
[rag]
enabled = true
embedding_model = "all-minilm"      # Specify the embedding model
src_path = "./"                     # Define the root path of your source code
src_glob = "**/*.py"                # Use glob patterns to match source files (e.g., Python files)
```
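
The glob controls which files are indexed for context; for a TypeScript project, for instance, the same section might use (illustrative):

```toml
[rag]
enabled = true
embedding_model = "all-minilm"
src_path = "./"
src_glob = "**/*.ts"        # index TypeScript sources instead
```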

### Ignore files
CRLLM supports a `.crllm_ignore` file to exclude specific files and directories from code reviews. This is similar to `.gitignore` but specific to CRLLM's code review process.
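
A sketch of a `.crllm_ignore` file (entries are illustrative; the syntax mirrors `.gitignore` as described above):

```
venv/
tests/fixtures/
*.min.js
```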
            
