fraim 0.2.0

A CLI app that runs AI-powered security workflows.

- License: MIT
- Requires Python: >=3.12
- Uploaded: 2025-07-10 21:41:09
- Keywords: ai, automation, cli, directory, langfuse, mcp, packaging, pandas, pydantic, python-dotenv, ratelimit, requests, security, tqdm, tree-sitter, urllib3, uv
# Fraim

A flexible framework for security teams to build and deploy AI-powered workflows that complement their existing security operations.

## 🔭 Overview

Fraim empowers security teams to easily create, customize, and deploy AI workflows tailored to their specific security needs. Rather than providing a one-size-fits-all solution, Fraim gives teams the building blocks to construct intelligent automation that integrates seamlessly with their existing security stack.

## ❓ Why Fraim?

- **Framework-First Approach**: Build custom AI workflows instead of using rigid, pre-built tools
- **Security Team Focused**: Designed specifically for security operations and threat analysis
- **Extensible Architecture**: Easily add new workflows, data sources, and AI models

## 💬 Community & Support

Join our growing community of security professionals using Fraim:

- **Documentation**: Visit [docs.fraim.dev](https://docs.fraim.dev) for comprehensive guides and tutorials
- **Schedule a Demo**: [Book time with our team](https://calendly.com/fraim-dev/fraim-intro) - We'd love to help! Schedule a call for anything related to Fraim (debugging, new integrations, customizing workflows, or even just to chat)
- **Slack Community**: [Join our Slack](https://join.slack.com/t/fraimworkspace/shared_invite/zt-38cunxtki-B80QAlLj7k8JoPaaYWUKNA) - Get help, share ideas, and connect with other security-minded people looking to use AI to help their team succeed
- **Issues**: Report bugs and request features via GitHub Issues
- **Contributing**: See the [contributing guide](CONTRIBUTING.md) for more information.

## 🔎 Preview

![CLI Preview](assets/cli-preview.gif)
*Example run of the CLI*


![UI Preview](assets/ui-preview.gif)
*Output of running the `code` workflow*

## 🚀 Quick Start

### Prerequisites

- **Python 3.12+**
- **[pipx](https://pipx.pypa.io/stable/installation/)** installation tool
- **API Key** for your chosen AI provider (Google Gemini, OpenAI, etc.)

### Installation

NOTE: These instructions are for Linux-based systems; see the [docs](https://docs.fraim.dev/installation) for Windows installation instructions.

1. **Install Fraim**:
```bash
pipx install fraim
```

2. **Configure your AI provider**:
   
    #### Google Gemini

    1. Get an API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
    2. Export it in your environment: 
        ```
        export GEMINI_API_KEY=your_api_key_here
        ```

    #### OpenAI

    1. Get an API key from [OpenAI Platform](https://platform.openai.com/api-keys)
    2. Export it in your environment:
        ```
        export OPENAI_API_KEY=your_api_key_here
        ```
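To persist the key across shell sessions, you can append the export to your shell profile instead of setting it each time. A minimal sketch for bash with Gemini; adjust the profile file and variable name for your shell and provider:

```bash
# Persist the API key for future shells (example for bash + Gemini).
echo 'export GEMINI_API_KEY=your_api_key_here' >> ~/.bashrc
source ~/.bashrc   # reload the profile in the current shell
```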

### Basic Usage

```bash
# Run code security analysis on a Git repository
fraim --repo https://github.com/username/repo-name --workflows code

# Analyze local directory
fraim --path /path/to/code --workflows code
```

## 📖 Documentation

### Running Workflows

```bash
# Specify particular workflows
fraim --path /code --workflows code iac

# Adjust performance settings
fraim --path /code --workflows code --processes 4 --chunk-size 1000

# Enable debug logging
fraim --path /code --workflows code --debug

# Custom output location
fraim --path /code --workflows code --output /path/to/results/
```

### Observability

Fraim supports optional observability and tracing through [Langfuse](https://langfuse.com), which helps track workflow performance, debug issues, and analyze AI model usage.

To enable observability:

1. **Install with observability support**:
```bash
pipx install 'fraim[langfuse]'
```

2. **Enable observability during execution**:
```bash
fraim --path /code --workflows code --observability langfuse
```

This will trace your workflow execution, LLM calls, and performance metrics in Langfuse for analysis and debugging.
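Langfuse also needs credentials to receive traces. The standard Langfuse SDK reads them from environment variables; assuming Fraim defers to the SDK for this (an assumption worth checking against the Fraim docs), a minimal setup looks like:

```bash
# Standard Langfuse SDK credentials (values come from your Langfuse project settings).
export LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key
export LANGFUSE_SECRET_KEY=sk-lf-your-secret-key
# Optional: point at a self-hosted instance instead of the default cloud host.
export LANGFUSE_HOST=https://cloud.langfuse.com
```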

### Configuration

Fraim uses a flexible configuration system that allows you to:
- Customize AI model parameters
- Configure workflow-specific settings
- Set up custom data sources
- Define output formats

See the `fraim/config/` directory for configuration options.
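The options exposed on the command line can also be listed directly; assuming the CLI follows the usual `--help` convention (an assumption, not verified here):

```bash
fraim --help
```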

### Key Components

- **Workflow Engine**: Orchestrates AI agents and tools
- **LLM Integrations**: Support for multiple AI providers
- **Tool System**: Extensible security analysis tools
- **Input Connectors**: Git repositories, file systems, APIs
- **Output Formatters**: JSON, SARIF, HTML reports
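As a concrete illustration of the last point, SARIF is a standard JSON format, so a report written by a Fraim run can be post-processed with a few lines of Python. This is a minimal sketch; the report path is a placeholder for wherever your `--output` setting puts results:

```python
import json

# Load a SARIF report produced by a Fraim run (the path is a placeholder).
with open("results/fraim_report.sarif") as f:
    report = json.load(f)

# SARIF nests findings under runs[].results[]; print rule, message, and location.
for run in report.get("runs", []):
    for result in run.get("results", []):
        locations = result.get("locations") or [{}]
        physical = locations[0].get("physicalLocation", {})
        uri = physical.get("artifactLocation", {}).get("uri", "<unknown file>")
        line = physical.get("region", {}).get("startLine", "?")
        message = result.get("message", {}).get("text", "")
        print(f"{result.get('ruleId', '<no rule>')}: {message} ({uri}:{line})")
```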

## 🔧 Available Workflows

Fraim includes several pre-built workflows that demonstrate the framework's capabilities:

### Code Security Analysis
*Status: Available*
*Workflow Name: code*

Automated source code vulnerability scanning using AI-powered analysis. Detects common security issues such as SQL injection, XSS, and CSRF across multiple programming languages.

Example:
```bash
fraim --repo https://github.com/username/repo-name --workflows code
```

### Infrastructure as Code (IAC) Analysis
*Status: Available*
*Workflow Name: iac*

Analyzes infrastructure configuration files for security misconfigurations and compliance violations.

Example:
```bash
fraim --repo https://github.com/username/repo-name --workflows iac
```

## 🛠️ Building Custom Workflows

Fraim makes it easy to create custom security workflows:

### 1. Define Input and Output Types

```python
# workflows/<name>/workflow.py
from dataclasses import dataclass
from typing import List

# Contextual, Config, and the sarif types are provided by the Fraim framework;
# import them from wherever your Fraim installation exposes them.

@dataclass
class MyWorkflowInput:
    """Input for the custom workflow."""
    code: Contextual[str]
    config: Config

type MyWorkflowOutput = List[sarif.Result]
```

### 2. Create Workflow Class

```python
# workflows/<name>/workflow.py
import os
from typing import List

# PromptTemplate, LiteLLM, LLMStep, PydanticOutputParser, Workflow, workflow,
# Config, and the sarif types are provided by the Fraim framework; import them
# from wherever your Fraim installation exposes them.

# Define file patterns for your workflow
FILE_PATTERNS = [
    '*.config', '*.ini', '*.yaml', '*.yml', '*.json'
]

# Load prompts from YAML files
PROMPTS = PromptTemplate.from_yaml(os.path.join(os.path.dirname(__file__), "my_prompts.yaml"))

@workflow('my_custom_workflow', file_patterns=FILE_PATTERNS)
class MyCustomWorkflow(Workflow[MyWorkflowInput, MyWorkflowOutput]):
    """Analyzes custom configuration files for security issues"""

    def __init__(self, config: Config, *args, **kwargs):
        super().__init__(config, *args, **kwargs)

        # Construct an LLM instance
        llm = LiteLLM.from_config(config)

        # Construct the analysis step
        parser = PydanticOutputParser(sarif.RunResults)
        self.analysis_step = LLMStep(llm, PROMPTS["system"], PROMPTS["user"], parser)

    async def workflow(self, input: MyWorkflowInput) -> MyWorkflowOutput:
        """Main workflow execution"""
        
        # 1. Analyze the configuration file
        analysis_results = await self.analysis_step.run({"code": input.code})
        
        # 2. Filter results by confidence threshold
        filtered_results = self.filter_results_by_confidence(
            analysis_results.results, input.config.confidence
        )
        
        return filtered_results
    
    def filter_results_by_confidence(self, results: List[sarif.Result], confidence_threshold: int) -> List[sarif.Result]:
        """Filter results by confidence."""
        return [result for result in results if result.properties.confidence > confidence_threshold]
```

### 3. Create Prompt Files

Create `my_prompts.yaml` in the same directory:

```yaml
system: |
  You are a configuration security analyzer.
  
  Your job is to analyze configuration files for security misconfigurations and vulnerabilities.
  
  <vulnerability_types>
    Valid vulnerability types (use EXACTLY as shown):
    
    - Hardcoded Credentials
    - Insecure Defaults
    - Excessive Permissions
    - Unencrypted Storage
    - Weak Cryptography
    - Missing Security Headers
    - Debug Mode Enabled
    - Exposed Secrets
    - Insecure Protocols
    - Missing Access Controls
  </vulnerability_types>

  {{ output_format }}

user: |
  Analyze the following configuration file for security issues:
  
  {{ code }}
```
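With the prompts in place, the workflow can be run like any other. Assuming registered workflows are invoked by the name passed to the `@workflow` decorator (an assumption based on the built-in examples above), an invocation would look like:

```bash
# Run the custom workflow against a local directory of configuration files
fraim --path /path/to/configs --workflows my_custom_workflow
```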

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

---

*Fraim is built by security teams, for security teams. Help us make AI-powered security accessible to everyone.*

            
