# Fraim - A Security Engineer's AI Toolkit

## 🔭 Overview

Fraim gives security engineers AI-powered workflows that solve real business needs. The workflows in this project act as companions to a security engineer, helping them find, fix, and flag vulnerabilities across the development lifecycle.
You can run Fraim as a CLI or inside GitHub Actions.

## 🚩 Risk Flagger

Most security teams do not have visibility into the code changes happening day to day, and it is unrealistic to review every change. Risk Flagger solves this by requesting review on a pull request only when a "risk" is identified. These risks can be defined to match your specific use cases (e.g., "Flag any changes to authentication").

**Perfect for**:
- Security teams with no visibility into code changes
- Teams needing to focus limited security resources on the highest-priority risks
- Organizations wanting to shift security left

```bash
# Basic risk flagger with built-in risks
fraim run risk_flagger --model anthropic/claude-sonnet-4-20250514 --diff --base <base_sha> --head <head_sha> --approver security

# Custom risk considerations inline
fraim run risk_flagger --model anthropic/claude-sonnet-4-20250514 --diff --base <base_sha> --head <head_sha> --custom-risk-list-json '{"Database Changes": "All changes to a database should be flagged, similarly any networking changes that might affect the database should be flagged."}' --custom-risk-list-action replace --approver security

# Custom risk considerations
fraim run risk_flagger --model anthropic/claude-sonnet-4-20250514 --diff --base <base_sha> --head <head_sha> --custom-risk-list-filepath ./custom-risks.yaml --approver security
```

NOTE: We recommend using the latest Anthropic or OpenAI models for this workflow.
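
The file passed via `--custom-risk-list-filepath` can mirror the inline JSON form above: a mapping of risk names to descriptions. A hypothetical `custom-risks.yaml` sketch (the exact schema may differ; see the docs):

```yaml
# custom-risks.yaml: hypothetical sketch mirroring the inline JSON mapping
Database Changes: >
  All changes to a database should be flagged, as should any networking
  changes that might affect the database.
Authentication Changes: >
  Flag any changes to authentication or session-handling code.
```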


<img src="assets/risk-flagger-preview.png" alt="Risk Flagger Preview" width="500"/>

## 🛡️ Code Security Analysis

Most security teams rely on signature-based scanners and scattered linters that miss context and overwhelm engineers with noise. Code Security Analysis applies LLM-powered, context-aware review to surface real vulnerabilities across languages (e.g. injection, authentication/authorization flaws, insecure cryptography, secret exposure, and unsafe configurations), explaining impact and suggesting fixes. It integrates cleanly into CI via SARIF output and can run on full repos or just diffs to keep PRs secure without slowing delivery.

**Perfect for**:
- Security teams needing comprehensive vulnerability coverage
- Organizations requiring compliance with secure coding standards
- Teams wanting to catch vulnerabilities before they reach production

```bash
# Comprehensive code analysis
fraim run code --location https://github.com/username/repo-name

# Focus on recent changes
fraim run code --location . --diff --base main --head HEAD
```

## 🏗️ Infrastructure as Code (IAC) Analysis  

Cloud misconfigurations often slip through because policy-as-code checks and scattered linters miss context across modules, environments, and providers. Infrastructure as Code Analysis uses LLM-powered, context-aware review of Terraform, CloudFormation, and Kubernetes manifests to spot risky defaults, excessive permissions, insecure networking and storage, and compliance gaps—explaining impact and proposing safer configurations. It integrates cleanly into CI via SARIF and can run on full repos or just diffs to prevent drift without slowing delivery.

**Perfect for**:
- DevOps teams managing cloud infrastructure
- Organizations with strict compliance requirements
- Teams implementing Infrastructure as Code practices
- Security teams overseeing cloud security posture

```bash
# Analyze infrastructure configurations
fraim run iac --location https://github.com/username/repo-name
```
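
As with the code workflow, the diff flags can scope the scan to a change set, so a PR-focused run might look like:

```bash
# Scan only the IaC files touched by a change (assumes the same diff flags as the code workflow)
fraim run iac --location . --diff --base main --head HEAD
```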

## 🚀 Getting Started

### GitHub Action Quick Start

NOTE: This example assumes you are using an Anthropic-based model.

1. Set your API key as a secret in your repo: Settings -> Secrets and variables -> Actions -> New repository secret -> `ANTHROPIC_API_KEY`.
2. Define your workflow inside your repo at `.github/workflows/<action_name>.yml`:

```yaml
name: AI Security Scan
on:
  pull_request:
    branches: [main]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      actions: read
      security-events: write # Required for uploading SARIF
      pull-requests: write # Required for PR comments and annotations

    steps:
      - name: Run Fraim Security Scan
        uses: fraim-dev/fraim-action@v0
        with:
          anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          workflows: "code"
```

### CLI Quick Start

#### Prerequisites

- **Python 3.12+**
- **[pipx](https://pipx.pypa.io/stable/installation/) installation tool**
- **API Key** for your chosen AI provider (Google Gemini, OpenAI, etc.)

#### Installation

NOTE: These instructions are for Linux-based systems; see the [docs](https://docs.fraim.dev/installation) for Windows installation instructions.

1. **Install Fraim**:

```bash
pipx install fraim
```

2. **Configure your AI provider**:

   #### Google Gemini

   1. Get an API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
   2. Export it in your environment:
      ```bash
      export GEMINI_API_KEY=your_api_key_here
      ```

   #### OpenAI

   1. Get an API key from [OpenAI Platform](https://platform.openai.com/api-keys)
   2. Export it in your environment:
      ```bash
      export OPENAI_API_KEY=your_api_key_here
      ```
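
3. **Run a first scan** to confirm the install (the default model varies by workflow, e.g. `gemini/gemini-2.5-flash`):

```bash
# Scan the current directory with the workflow's default model
fraim run code --location .
```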

### Common CLI Arguments

#### Global Options (apply to all commands)

- `--debug`: Enable debug logging for troubleshooting
- `--show-logs SHOW_LOGS`: Print logs to standard error output  
- `--log-output LOG_OUTPUT`: Specify directory for log files
- `--observability langfuse`: Enable LLM observability and analytics

#### Workflow Options (apply to most workflows)

- `--location LOCATION`: Repository URL or local path to analyze
- `--model MODEL`: AI model to use (default varies by workflow, e.g., `gemini/gemini-2.5-flash`)
- `--temperature TEMPERATURE`: Model temperature setting (0.0-1.0, default: 0)
- `--chunk-size CHUNK_SIZE`: Number of lines per processing chunk
- `--limit LIMIT`: Maximum number of files to scan
- `--globs GLOBS`: File patterns to include in analysis
- `--max-concurrent-chunks MAX_CONCURRENT_CHUNKS`: Control parallelism

#### Git Diff Options

- `--diff`: Analyze only git diff instead of full repository
- `--head HEAD`: Git head commit for diff (default: HEAD)
- `--base BASE`: Git base commit for diff (default: empty tree)

#### Pull Request Integration  

- `--pr-url PR_URL`: URL of pull request to analyze
- `--approver APPROVER`: GitHub username/group to notify
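
These options compose. For example, a hypothetical risk-flagger run against a pull request with debug logging enabled (the PR URL is a placeholder):

```bash
# Illustrative combination of the options above; global flags precede `run`
fraim --debug run risk_flagger --model anthropic/claude-sonnet-4-20250514 \
  --pr-url https://github.com/org/repo/pull/123 --approver security
```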

### Observability

Fraim supports optional observability and tracing through [Langfuse](https://langfuse.com), which helps track workflow performance, debug issues, and analyze AI model usage.

To enable observability:

1. **Install with observability support**:

```bash
pipx install 'fraim[langfuse]'
```

2. **Enable observability during execution**:

```bash
fraim --observability langfuse run code --location /code
```

This will trace your workflow execution, LLM calls, and performance metrics in Langfuse for analysis and debugging.

## 💬 Community & Support

Join our growing community of security professionals using Fraim:

- **Documentation**: Visit [docs.fraim.dev](https://docs.fraim.dev) for comprehensive guides and tutorials
- **Schedule a Demo**: [Book time with our team](https://calendly.com/fraim-dev/fraim-intro) - We'd love to help! Schedule a call for anything related to Fraim (debugging, new integrations, customizing workflows, or even just to chat)
- **Slack Community**: [Join our Slack](https://join.slack.com/t/fraimworkspace/shared_invite/zt-38cunxtki-B80QAlLj7k8JoPaaYWUKNA) - Get help, share ideas, and connect with other security-minded people looking to use AI to help their team succeed
- **Issues**: Report bugs and request features via GitHub Issues
- **Contributing**: See the [contributing guide](CONTRIBUTING.md) for more information.

## 🛠️ "Fraim"-work Development

### Building Custom Workflows

Fraim makes it easy to create custom security workflows tailored to your organization's specific needs. The components below are the building blocks; the numbered steps that follow walk through a minimal custom workflow.

### Key Framework Components

- **Workflow Engine**: Orchestrates AI agents and tools in flexible, composable patterns
- **LLM Integrations**: Support for multiple AI providers with seamless switching
- **Tool System**: Extensible security analysis tools that can be combined and customized
- **Input Connectors**: Git repositories, file systems, APIs, and custom data sources
- **Output Formatters**: JSON, SARIF, HTML reports, and custom output formats

### Configuration System

Fraim uses a flexible configuration system that allows you to:

- Customize AI model parameters for optimal performance
- Configure workflow-specific settings and thresholds
- Set up custom data sources and input methods
- Define custom output formats and destinations
- Manage API keys and authentication

See the `fraim/config/` directory for configuration options.

#### 1. Define Input and Output Types

```python
# workflows/<name>/workflow.py
from dataclasses import dataclass
from typing import List

# Contextual, Config, and the sarif types are provided by the Fraim framework

@dataclass
class MyWorkflowInput:
    """Input for the custom workflow."""
    code: Contextual[str]
    config: Config

type MyWorkflowOutput = List[sarif.Result]
```

#### 2. Create Workflow Class

```python
# workflows/<name>/workflow.py
import os

# PromptTemplate, LiteLLM, PydanticOutputParser, LLMStep, workflow, and
# Workflow are provided by the Fraim framework

# Define file patterns for your workflow
FILE_PATTERNS = [
    '*.config', '*.ini', '*.yaml', '*.yml', '*.json'
]

# Load prompts from the YAML file in this directory
PROMPTS = PromptTemplate.from_yaml(os.path.join(os.path.dirname(__file__), "my_prompts.yaml"))

@workflow('my_custom_workflow')
class MyCustomWorkflow(Workflow[MyWorkflowInput, MyWorkflowOutput]):
    """Analyzes custom configuration files for security issues"""

    def __init__(self, config: Config, *args, **kwargs):
        super().__init__(config, *args, **kwargs)

        # Construct an LLM instance
        llm = LiteLLM.from_config(config)

        # Construct the analysis step
        parser = PydanticOutputParser(sarif.RunResults)
        self.analysis_step = LLMStep(llm, PROMPTS["system"], PROMPTS["user"], parser)

    async def workflow(self, input: MyWorkflowInput) -> MyWorkflowOutput:
        """Main workflow execution"""

        # 1. Analyze the configuration file
        analysis_results = await self.analysis_step.run({"code": input.code})

        # 2. Filter results by confidence threshold
        filtered_results = self.filter_results_by_confidence(
            analysis_results.results, input.config.confidence
        )

        return filtered_results

    def filter_results_by_confidence(self, results: List[sarif.Result], confidence_threshold: int) -> List[sarif.Result]:
        """Filter results by confidence."""
        return [result for result in results if result.properties.confidence > confidence_threshold]
```

#### 3. Create Prompt Files

Create `my_prompts.yaml` in the same directory:

```yaml
system: |
  You are a configuration security analyzer.

  Your job is to analyze configuration files for security misconfigurations and vulnerabilities.

  <vulnerability_types>
    Valid vulnerability types (use EXACTLY as shown):

    - Hardcoded Credentials
    - Insecure Defaults
    - Excessive Permissions
    - Unencrypted Storage
    - Weak Cryptography
    - Missing Security Headers
    - Debug Mode Enabled
    - Exposed Secrets
    - Insecure Protocols
    - Missing Access Controls
  </vulnerability_types>

  {{ output_format }}

user: |
  Analyze the following configuration file for security issues:

  {{ code }}
```
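
Assuming registered workflows are discoverable by the name given to the `@workflow` decorator, the custom workflow could then be run like the built-ins:

```bash
# Hypothetical invocation of the custom workflow defined above
fraim run my_custom_workflow --location .
```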

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

---

_Fraim is built by security teams, for security teams. Help us make AI-powered security accessible to everyone._

            
