fraim 0.4.0

- Summary: A CLI app that runs AI-powered security workflows
- Requires Python: >=3.12
- License: MIT
- Uploaded: 2025-07-28 22:06:09
- Keywords: ai, automation, cli, directory, langfuse, mcp, packaging, pandas, pydantic, python-dotenv, ratelimit, requests, security, tqdm, tree-sitter, urllib3, uv
# Fraim

A flexible framework for security teams to build and deploy AI-powered workflows that complement their existing security operations.

## 🔭 Overview

Fraim empowers security teams to easily create, customize, and deploy AI workflows tailored to their specific security needs. Rather than providing a one-size-fits-all solution, Fraim gives teams the building blocks to construct intelligent automation that integrates seamlessly with their existing security stack.
Fraim ships as a CLI, but you can also run workflows via our GitHub Action.

## ❓ Why Fraim?

- **Framework-First Approach**: Build custom AI workflows instead of using rigid, pre-built tools
- **Security Team Focused**: Designed specifically for security operations and threat analysis
- **Extensible Architecture**: Easily add new workflows, data sources, and AI models

## 🔎 Preview

![UI Preview](assets/ui-preview.gif)
_Output of running the `code` workflow_

## GitHub Action Quick Start

NOTE: This example assumes you are using a Gemini-based model. If you'd like to use an OpenAI-based model, replace references to GEMINI with OPENAI and specify an OpenAI model in the action arguments.

1. Set your API key as a secret in your repo: Settings -> Secrets and Variables -> New Repository Secret -> `GEMINI_API_KEY`
2. Define your workflow inside your repo at `.github/workflows/<action_name>.yml`:

```yaml
name: AI Security Scan
on:
  pull_request:
    branches: [main]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      actions: read
      security-events: write # Required for uploading SARIF
      pull-requests: write # Required for PR comments and annotations

    steps:
      - name: Run Fraim Security Scan
        uses: fraim-dev/fraim-action@v0
        with:
          gemini-api-key: ${{ secrets.GEMINI_API_KEY }}
          workflows: "code"
```

## 🚀 CLI Quick Start

### Prerequisites

- **Python 3.12+**
- **[pipx](https://pipx.pypa.io/stable/installation/) installation tool**
- **API Key** for your chosen AI provider (Google Gemini, OpenAI, etc.)

### Installation

NOTE: These instructions are for Linux-based systems; see the [docs](https://docs.fraim.dev/installation) for Windows installation instructions.

1. **Install Fraim**:

```bash
pipx install fraim
```

2. **Configure your AI provider**:

   #### Google Gemini

   1. Get an API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
   2. Export it in your environment:
      ```bash
      export GEMINI_API_KEY=your_api_key_here
      ```

   #### OpenAI

   1. Get an API key from [OpenAI Platform](https://platform.openai.com/api-keys)
   2. Export it in your environment:
      ```bash
      export OPENAI_API_KEY=your_api_key_here
      ```

### Basic Usage

```bash
# Run code security analysis on a Git repository
fraim code --location https://github.com/username/repo-name

# Analyze local directory
fraim code --location /path/to/code
```

## 💬 Community & Support

Join our growing community of security professionals using Fraim:

- **Documentation**: Visit [docs.fraim.dev](https://docs.fraim.dev) for comprehensive guides and tutorials
- **Schedule a Demo**: [Book time with our team](https://calendly.com/fraim-dev/fraim-intro) - We'd love to help! Schedule a call for anything related to Fraim (debugging, new integrations, customizing workflows, or even just to chat)
- **Slack Community**: [Join our Slack](https://join.slack.com/t/fraimworkspace/shared_invite/zt-38cunxtki-B80QAlLj7k8JoPaaYWUKNA) - Get help, share ideas, and connect with other security-minded people looking to use AI to help their team succeed
- **Issues**: Report bugs and request features via GitHub Issues
- **Contributing**: See the [contributing guide](CONTRIBUTING.md) for more information.

## 📖 Documentation

### Running Workflows

```bash
# Adjust performance settings
fraim code --location /code --chunk-size 1000

# Enable debug logging
fraim --debug code --location /code

# Custom output location
fraim --output /path/to/results/ code --location /code
```

### Observability

Fraim supports optional observability and tracing through [Langfuse](https://langfuse.com), which helps track workflow performance, debug issues, and analyze AI model usage.

To enable observability:

1. **Install with observability support**:

```bash
pipx install 'fraim[langfuse]'
```

2. **Enable observability during execution**:

```bash
fraim --observability langfuse code --location /code
```

This will trace your workflow execution, LLM calls, and performance metrics in Langfuse for analysis and debugging.
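Langfuse clients are conventionally configured through environment variables. The exact variables fraim reads are an assumption here; the names below are the standard Langfuse client ones, with placeholder values you would replace from your Langfuse project settings:

```shell
# Standard Langfuse client environment variables (assumed -- confirm
# against your Langfuse project settings and the fraim docs)
export LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key
export LANGFUSE_SECRET_KEY=sk-lf-your-secret-key
export LANGFUSE_HOST=https://cloud.langfuse.com  # or a self-hosted URL

fraim --observability langfuse code --location /code
```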

### Configuration

Fraim uses a flexible configuration system that allows you to:

- Customize AI model parameters
- Configure workflow-specific settings
- Set up custom data sources
- Define output formats

See the `fraim/config/` directory for configuration options.

### Key Components

- **Workflow Engine**: Orchestrates AI agents and tools
- **LLM Integrations**: Support for multiple AI providers
- **Tool System**: Extensible security analysis tools
- **Input Connectors**: Git repositories, file systems, APIs
- **Output Formatters**: JSON, SARIF, HTML reports
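Since SARIF is a JSON format, workflow output can be post-processed with nothing but the standard library. A minimal sketch, assuming a SARIF 2.1.0-shaped document (the exact properties fraim emits are an assumption):

```python
import json

# Minimal SARIF 2.1.0-shaped document, roughly what a SARIF-emitting
# workflow might produce (fields here are illustrative assumptions)
sarif_doc = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "fraim"}},
        "results": [
            {"ruleId": "sql-injection", "level": "error",
             "message": {"text": "Unsanitized query input"}},
            {"ruleId": "debug-mode-enabled", "level": "warning",
             "message": {"text": "DEBUG=true in production config"}},
        ],
    }],
}

# Post-process: collect every "error"-severity result across all runs
errors = [
    r for run in sarif_doc["runs"]
    for r in run["results"]
    if r.get("level") == "error"
]
print([r["ruleId"] for r in errors])  # → ['sql-injection']
```

The same loop works unchanged on a real results file loaded with `json.load`.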

## 🔧 Available Workflows

Fraim includes several pre-built workflows that demonstrate the framework's capabilities:

### Code Security Analysis

_Status: Available_
_Workflow Name: code_

Automated source code vulnerability scanning using AI-powered analysis. Detects common security issues across multiple programming languages including SQL injection, XSS, CSRF, and more.

Example:

```bash
fraim code --location https://github.com/username/repo-name
```

### Infrastructure as Code (IAC) Analysis

_Status: Available_
_Workflow Name: iac_

Analyzes infrastructure configuration files for security misconfigurations and compliance violations.

Example:

```bash
fraim iac --location https://github.com/username/repo-name
```

## 🛠️ Building Custom Workflows

Fraim makes it easy to create custom security workflows:

### 1. Define Input and Output Types

```python
# workflows/<name>/workflow.py
@dataclass
class MyWorkflowInput:
    """Input for the custom workflow."""
    code: Contextual[str]
    config: Config

type MyWorkflowOutput = List[sarif.Result]
```

### 2. Create Workflow Class

```python
# workflows/<name>/workflow.py

# Define file patterns for your workflow
FILE_PATTERNS = [
    '*.config', '*.ini', '*.yaml', '*.yml', '*.json'
]

# Load prompts from YAML files
PROMPTS = PromptTemplate.from_yaml(os.path.join(os.path.dirname(__file__), "my_prompts.yaml"))

@workflow('my_custom_workflow')
class MyCustomWorkflow(Workflow[MyWorkflowInput, MyWorkflowOutput]):
    """Analyzes custom configuration files for security issues"""

    def __init__(self, config: Config, *args, **kwargs):
        super().__init__(config, *args, **kwargs)

        # Construct an LLM instance
        llm = LiteLLM.from_config(config)

        # Construct the analysis step
        parser = PydanticOutputParser(sarif.RunResults)
        self.analysis_step = LLMStep(llm, PROMPTS["system"], PROMPTS["user"], parser)

    async def workflow(self, input: MyWorkflowInput) -> MyWorkflowOutput:
        """Main workflow execution"""

        # 1. Analyze the configuration file
        analysis_results = await self.analysis_step.run({"code": input.code})

        # 2. Filter results by confidence threshold
        filtered_results = self.filter_results_by_confidence(
            analysis_results.results, input.config.confidence
        )

        return filtered_results

    def filter_results_by_confidence(self, results: List[sarif.Result], confidence_threshold: int) -> List[sarif.Result]:
        """Filter results by confidence."""
        return [result for result in results if result.properties.confidence > confidence_threshold]
```
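The confidence filter at the end of the class can be exercised in isolation. The sketch below uses hypothetical stand-in types for `sarif.Result` and its `properties` bag (the real ones come from fraim's sarif module) just to show the filtering behavior:

```python
from dataclasses import dataclass
from typing import List

# Stand-ins for sarif.Result and its properties bag, purely for
# illustration -- not fraim's actual types
@dataclass
class Properties:
    confidence: int

@dataclass
class Result:
    rule_id: str
    properties: Properties

def filter_results_by_confidence(results: List[Result], confidence_threshold: int) -> List[Result]:
    """Keep only results whose confidence exceeds the threshold."""
    return [r for r in results if r.properties.confidence > confidence_threshold]

results = [
    Result("hardcoded-credentials", Properties(confidence=9)),
    Result("insecure-defaults", Properties(confidence=4)),
]
high = filter_results_by_confidence(results, 7)
print([r.rule_id for r in high])  # → ['hardcoded-credentials']
```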

### 3. Create Prompt Files

Create `my_prompts.yaml` in the same directory:

```yaml
system: |
  You are a configuration security analyzer.

  Your job is to analyze configuration files for security misconfigurations and vulnerabilities.

  <vulnerability_types>
    Valid vulnerability types (use EXACTLY as shown):

    - Hardcoded Credentials
    - Insecure Defaults
    - Excessive Permissions
    - Unencrypted Storage
    - Weak Cryptography
    - Missing Security Headers
    - Debug Mode Enabled
    - Exposed Secrets
    - Insecure Protocols
    - Missing Access Controls
  </vulnerability_types>

  {{ output_format }}

user: |
  Analyze the following configuration file for security issues:

  {{ code }}
```
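The `{{ output_format }}` and `{{ code }}` placeholders get substituted at render time. A dependency-free sketch of that substitution (fraim's `PromptTemplate` is assumed to do something equivalent, e.g. via a templating library; the prompts are shown already parsed to avoid a YAML dependency):

```python
import re

# Prompts as they would look after loading my_prompts.yaml
# (parsed form shown inline to keep the sketch self-contained)
prompts = {
    "system": "You are a configuration security analyzer.\n\n{{ output_format }}",
    "user": "Analyze the following configuration file for security issues:\n\n{{ code }}",
}

def render(template: str, **variables: str) -> str:
    """Substitute {{ name }} placeholders with the given variables.
    Unknown placeholders are left intact."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )

user_msg = render(prompts["user"], code="debug = true")
print(user_msg.endswith("debug = true"))  # → True
```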

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

---

_Fraim is built by security teams, for security teams. Help us make AI-powered security accessible to everyone._

            
