aixploit

Name: aixploit
Version: 1.2.5
Home page: https://github.com/AINTRUST-AI/AIxploit
Summary: AI redTeaming Python library
Upload time: 2024-12-05 14:39:53
Author: aintrust
Requires Python: >=3.11
License: GPL-3.0
Keywords: AI, redteaming, AI redteaming, AI redteam, AI redteaming library, AI redteam library, LLM Guardrails, LLM Security, LLM, LLMs
Requirements: no requirements were recorded
# AIxploit


[![Downloads](https://static.pepy.tech/badge/aixploit)](https://pepy.tech/project/aixploit)
[![PyPI - Python Version](https://img.shields.io/pypi/v/aixploit)](https://pypi.org/project/aixploit)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Downloads](https://static.pepy.tech/badge/aixploit/month)](https://pepy.tech/project/aixploit)

AIxploit is a tool for analyzing and exploiting vulnerabilities in AI systems.
The project provides a comprehensive framework for testing the security and integrity of AI models.
It is aimed at AI security researchers and red teams who want to assess the security of their AI systems.

![AIxploit features](https://github.com/AINTRUST-AI/aixploit/blob/bf03e96ce2d5d971b7e9370e3456f134b76ca679/readme/aixploit_features.png)

## Installation

To get started with AIxploit, install the package:

```sh
pip install aixploit
```
and set the environment variables:
```bash
export OPENAI_KEY="sk-xxxxx"
export OLLAMA_URL="http://localhost:11434/v1"  # your Ollama endpoint, as used in the Usage examples
export OLLAMA_API_KEY="ollama"
```
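
The examples below read these variables with `os.getenv`, so a quick sanity check can fail fast before an attack run (a minimal sketch, not an aixploit API):

```python
import os

# Fail fast if a required variable is missing; run() reads these via os.getenv.
for var in ("OPENAI_KEY", "OLLAMA_URL", "OLLAMA_API_KEY"):
    if not os.getenv(var):
        raise RuntimeError(f"environment variable {var} is not set")
```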

## Usage

To use AIxploit, follow these steps:

1. Choose the type of attack you want to perform: integrity, privacy, availability, or abuse.
The full list of attackers is available in the plugins folder.
   ```python
   from aixploit.plugins import PromptInjection, Privacy, Integrity, Availability, Abuse
   ```
2. Choose your targets and the associated attackers. Each target is a three-element list: provider name, base URL, and model identifier (a hosted-provider variant is shown after this list).
   ```python
   target = ["Ollama", "http://localhost:11434/v1", "mistral"]
   attackers = [
       Privacy("quick"),
       Integrity("full"),
       Availability("quick"),
       Abuse("custom"),
   ]
   ```

3. Run your attack and analyze the results:
   ```python
   run(attackers, target, os.getenv("OLLAMA_API_KEY"))
   ```
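
For a hosted provider, the base URL slot is left empty; this variant is the OpenAI target used in the full script below:

```python
# Hosted provider: provider name, empty base URL, model identifier.
target = ["Openai", "", "gpt-3.5-turbo"]
```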


Example test.py:

```python
import os
from datetime import datetime

from aixploit.core import run
from aixploit.plugins import PromptInjection, Privacy, Integrity, Availability, Abuse

# Provider name, base URL (empty for OpenAI's hosted API), and model.
target = ["Openai", "", "gpt-3.5-turbo"]
attackers = [
    PromptInjection("quick"),
    Privacy("quick"),
    Integrity("quick"),
    Availability("quick"),
    Abuse("quick"),
    # PromptInjection("full")
]

start_time = datetime.now()
print("Redteaming exercise started at:", start_time.strftime("%H:%M:%S"))

(
    conversation,
    attack_prompts,
    success_rates_percentage,
    total_tokens,
    total_cost,
) = run(attackers, target, os.getenv("OPENAI_KEY"))

for idx, attacker in enumerate(attackers):
    try:
        print("Attacker:", attacker.__class__.__name__)
        prompts = conversation[idx]  # conversation for the current attacker
        print(f" \U00002705  Number of prompts tested for attacker {idx + 1}: {len(prompts)}")
        malicious_prompts = attack_prompts[idx]
        print(f" \U00002705  Number of successful prompts for attacker {idx + 1}: {len(malicious_prompts)}")
        print(f" \U00002705  Attack success rate for attacker {idx + 1}: {success_rates_percentage[idx] * 100:.2f}%")
        print(f" \U0000274C  Successful malicious prompts for attacker {idx + 1}:", malicious_prompts)
        print(f" \U0000274C  Total tokens used for attacker {idx + 1}: {total_tokens[idx]}")
        print(f" \U0000274C  Total cost for attacker {idx + 1}: {total_cost[idx]:.2f} USD")
        print("--------------------------------")
    except Exception:  # an attacker without results raises here; report it and continue
        print(" ⚠️  Error while running attacker:", attacker.__class__.__name__)

print("Redteaming exercise ended at:", datetime.now().strftime("%H:%M:%S"))
print("Total time taken:", datetime.now() - start_time)
```
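
If you prefer a machine-readable report over console output, the same return values can be collected per attacker. This is a minimal sketch; the report fields and file name are illustrative, not part of the aixploit API:

```python
import json

# Collect per-attacker results into a JSON report (field names are ours).
report = [
    {
        "attacker": attacker.__class__.__name__,
        "prompts_tested": len(conversation[idx]),
        "successful_prompts": list(attack_prompts[idx]),
        "success_rate_pct": success_rates_percentage[idx] * 100,
        "total_tokens": total_tokens[idx],
        "total_cost_usd": total_cost[idx],
    }
    for idx, attacker in enumerate(attackers)
]

with open("aixploit_report.json", "w") as fh:
    json.dump(report, fh, indent=2, default=str)
```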

## Contributing

We welcome contributions to AIxploit! If you would like to contribute, please follow these steps:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature-branch`).
3. Make your changes and commit them (`git commit -m 'Add new feature'`).
4. Push to the branch (`git push origin feature-branch`).
5. Open a pull request.

Please ensure that your code adheres to the project's coding standards and includes appropriate tests.


## Contact

For any inquiries or feedback, please contact:

- **Contact AINTRUST AI** - [contact@aintrust.ai](mailto:contact@aintrust.ai)
- **Project Link**: [AIxploit GitHub Repository](https://github.com/AINTRUST-AI/AIxploit)

---

Thank you for your interest in AIxploit! We hope you find it useful.

            
