# agentdojo

- **Name:** agentdojo
- **Version:** 0.1.15
- **Summary:** A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents
- **Upload time:** 2024-09-08 16:41:37
- **Requires Python:** >=3.10
- **Keywords:** benchmark, llms, machine learning, prompt injection, security
- **Homepage:** https://agentdojo.spylab.ai
- **Repository:** https://github.com/ethz-spylab/agentdojo
            <center>

# AgentDojo: Benchmarking the Capabilities and Adversarial Robustness of LLM Agents

[Edoardo Debenedetti](https://edoardo.science)<sup>1</sup>, [Jie Zhang](https://zj-jayzhang.github.io)<sup>1</sup>, [Mislav Balunović](https://www.sri.inf.ethz.ch/people/mislav)<sup>1,2</sup>, [Luca Beurer-Kellner](https://www.sri.inf.ethz.ch/people/luca)<sup>1,2</sup>, [Marc Fischer](https://marcfischer.at)<sup>1,2</sup>, [Florian Tramèr](https://floriantramer.com)<sup>1</sup>

<sup>1</sup>ETH Zurich and <sup>2</sup>Invariant Labs

Read our paper [here](https://arxiv.org/abs/2406.13352).
</center>

## Quickstart

```bash
pip install agentdojo
```

> [!IMPORTANT]
> The package API is still under development and may change in future releases.

If you want to use the prompt injection detector, you need to install the `transformers` extra:

```bash
pip install "agentdojo[transformers]"
```
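After installing, you can sanity-check which optional components are available without importing them. A minimal sketch; the `check_installed` helper is illustrative and not part of the package:

```python
import importlib.util

def check_installed(*modules):
    """Return a mapping of module name -> whether it can be imported."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

# The `transformers` extra is only needed for the prompt injection detector.
for name, ok in check_installed("agentdojo", "transformers").items():
    print(f"{name}: {'installed' if ok else 'missing'}")
```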

## Running the benchmark

The benchmark can be run with the [benchmark](src/agentdojo/scripts/benchmark.py) script. Documentation on how to use the script can be obtained with the `--help` flag.

For example, to run the `workspace` suite on user tasks 0 and 1, with `gpt-4o-2024-05-13` as the LLM, the tool filter as a defense, and the tool-knowledge attack, run the following command:

```bash
python -m agentdojo.scripts.benchmark -s workspace -ut user_task_0 \
    -ut user_task_1 --model gpt-4o-2024-05-13 \
    --defense tool_filter --attack tool_knowledge
```

To run the same configuration on all suites and tasks, omit the suite and task filters:

```bash
python -m agentdojo.scripts.benchmark --model gpt-4o-2024-05-13 \
    --defense tool_filter --attack tool_knowledge
```

## Documentation of the Dojo

Take a look at our [documentation](https://agentdojo.spylab.ai/).

## Development set-up

Take a look at the [development set-up](https://agentdojo.spylab.ai/docs/development) docs.

## Citing

If you use AgentDojo in your research, please consider citing our paper:

```bibtex
@misc{debenedetti2024agentdojo,
      title={AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents},
      author={Edoardo Debenedetti and Jie Zhang and Mislav Balunović and Luca Beurer-Kellner and Marc Fischer and Florian Tramèr},
      year={2024},
      eprint={2406.13352},
      archivePrefix={arXiv},
      primaryClass={cs.CR}
}
```

            
