eai-eval

- Name: eai-eval
- Version: 1.0.4
- Home page: https://github.com/embodied-agent-interface/embodied-agent-interface
- Author: stanford
- Requires Python: >=3.8
- Uploaded: 2024-11-07 07:37:28
<h1 align="center">Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making</h1>

<p align="center">
    <a href="https://arxiv.org/abs/2410.07166">
        <img src="https://img.shields.io/badge/arXiv-2410.07166-B31B1B.svg?style=plastic&logo=arxiv" alt="arXiv">
    </a>
    <a href="https://embodied-agent-interface.github.io/">
        <img src="https://img.shields.io/badge/Website-EAI-purple?style=plastic&logo=Google%20chrome" alt="Website">
    </a>
    <a href="https://huggingface.co/datasets/Inevitablevalor/EmbodiedAgentInterface" target="_blank">
        <img src="https://img.shields.io/badge/Dataset-Download-yellow?style=plastic&logo=huggingface" alt="Download the EmbodiedAgentInterface Dataset from Hugging Face">
    </a>
    <a href="https://hub.docker.com/repository/docker/jameskrw/eai-eval/general">
        <img src="https://img.shields.io/badge/Docker-EAI-blue?style=plastic&logo=Docker" alt="Docker">
    </a>
    <a href="https://embodied-agent-eval.readthedocs.io/en/latest/#">
        <img src="https://img.shields.io/badge/Docs-Online-blue?style=plastic&logo=Read%20the%20Docs" alt="Docs">
    </a>
    <a href="https://opensource.org/licenses/MIT">
        <img src="https://img.shields.io/badge/License-MIT-yellow.svg?style=plastic" alt="License: MIT">
    </a>
<!--     <a href="https://github.com/embodied-agent-interface/embodied-agent-interface/tree/main/dataset">
        <img src="https://img.shields.io/badge/Dataset-Download-yellow?style=plastic&logo=Data" alt="Dataset">
    </a> -->
</p>

<p align="center">
    <a href="https://limanling.github.io/">Manling Li</a>, 
    <a href="https://www.linkedin.com/in/shiyu-zhao-1124a0266/">Shiyu Zhao</a>, 
    <a href="https://qinengwang-aiden.github.io/">Qineng Wang</a>, 
    <a href="https://jameskrw.github.io/">Kangrui Wang</a>, 
    <a href="https://bryanzhou008.github.io/">Yu Zhou</a>, 
    <a href="https://example.com/sanjana-srivastava">Sanjana Srivastava</a>, 
    <a href="https://example.com/cem-gokmen">Cem Gokmen</a>, 
    <a href="https://example.com/tony-lee">Tony Lee</a>, 
    <a href="https://sites.google.com/site/lieranli/">Li Erran Li</a>, 
    <a href="https://example.com/ruohan-zhang">Ruohan Zhang</a>, 
    <a href="https://example.com/weiyu-liu">Weiyu Liu</a>, 
    <a href="https://cs.stanford.edu/~pliang/">Percy Liang</a>, 
    <a href="https://profiles.stanford.edu/fei-fei-li">Li Fei-Fei</a>, 
    <a href="https://jiayuanm.com/">Jiayuan Mao</a>, 
    <a href="https://jiajunwu.com/">Jiajun Wu</a>
</p>
<p align="center">Stanford Vision and Learning Lab, Stanford University</p>

<p align="center">
    <a href="https://cs.stanford.edu/~manlingl/projects/embodied-eval" target="_blank">
        <img src="./EAgent.png" alt="EAgent" width="80%" height="80%" border="10" />
    </a>
</p>

# Dataset Highlights

-  Standardized goal specifications.
-  Standardized modules and interfaces.
-  Broad coverage of evaluation and fine-grained metrics.
-  The full dataset is available on [Hugging Face](https://huggingface.co/datasets/Inevitablevalor/EmbodiedAgentInterface).
-  PDDL files for both BEHAVIOR ([domain file](https://github.com/embodied-agent-interface/embodied-agent-interface/blob/main/src/virtualhome_eval/resources/behavior/behavior.pddl), [problem files](https://github.com/embodied-agent-interface/embodied-agent-interface/tree/main/src/virtualhome_eval/resources/behavior/problem_pddl)) and VirtualHome ([domain file](https://github.com/embodied-agent-interface/embodied-agent-interface/blob/main/src/virtualhome_eval/resources/virtualhome/virtualhome.pddl), [problem files](https://github.com/embodied-agent-interface/embodied-agent-interface/tree/main/src/virtualhome_eval/resources/virtualhome/problem_pddl)). 
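
The domain files above use standard PDDL syntax. As a quick illustration of how they can be inspected programmatically, the sketch below pulls action names out of a PDDL domain string; note the snippet is a made-up miniature, not the contents of the actual VirtualHome domain file:

```python
import re

# Illustrative stand-in for a PDDL domain file; the real files are linked above.
DOMAIN_SNIPPET = """
(define (domain virtualhome)
  (:action walk_towards
    :parameters (?char - character ?obj - object))
  (:action grab
    :parameters (?char - character ?obj - object))
)
"""

def action_names(domain_text):
    # Each action definition starts with "(:action <name>"
    return re.findall(r"\(:action\s+([\w-]+)", domain_text)

print(action_names(DOMAIN_SNIPPET))  # → ['walk_towards', 'grab']
```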

# Overview

We aim to evaluate Large Language Models (LLMs) for embodied decision-making. While many works leverage LLMs for decision-making in embodied environments, a systematic understanding of their performance is still lacking. These models are applied in different domains, for various purposes, and with diverse inputs and outputs. Current evaluations tend to rely on final success rates alone, making it difficult to pinpoint where LLMs fall short and how to leverage them effectively in embodied AI systems.

To address this gap, we propose the **Embodied Agent Interface (EAI)**, which unifies:
1. A broad set of embodied decision-making tasks involving both state and temporally extended goals.
2. Four commonly used LLM-based modules: goal interpretation, subgoal decomposition, action sequencing, and transition modeling.
3. Fine-grained evaluation metrics, identifying errors such as hallucinations, affordance issues, and planning mistakes.

Our benchmark provides a comprehensive assessment of LLM performance across different subtasks, identifying their strengths and weaknesses in embodied decision-making contexts.
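
To make "fine-grained metrics" concrete, the sketch below scores a predicted set of goal conditions against a gold set with set-level precision, recall, and F1. The predicate strings and the metric itself are illustrative assumptions for exposition, not the benchmark's actual scoring code:

```python
def set_f1(predicted, gold):
    """Precision/recall/F1 over two sets of goal propositions (illustrative)."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # conditions the model got right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical predicted vs. gold goal conditions for a cleaning task
pred = {"onTop(rag, table)", "soaked(rag)"}
gold = {"onTop(rag, table)", "soaked(rag)", "open(cabinet)"}
p, r, f1 = set_f1(pred, gold)  # the model missed one gold condition
```

A set-level score like this localizes *which* conditions were missed or hallucinated, rather than reporting only end-to-end task success.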

# Installation
1. **Create and Activate a Conda Environment**:
   ```bash
   conda create -n eai-eval python=3.8 -y 
   conda activate eai-eval
   ```

2. **Install `eai-eval`**:
   
   You can install it from pip:
   ```bash
   pip install eai-eval
   ```

   Or, install from source:
   ```bash
   git clone https://github.com/embodied-agent-interface/embodied-agent-interface.git
   cd embodied-agent-interface
   pip install -e .
   ```

3. **(Optional) Install iGibson for behavior evaluation**:
   
   If you need to use `behavior_eval`, install iGibson. Follow these steps to minimize installation issues:

   - Make sure you are using Python 3.8 and meet the minimum system requirements in the [iGibson installation guide](https://stanfordvl.github.io/iGibson/installation.html).
   
   - Install CMake using Conda (do not use pip):
     ```bash
     conda install cmake
     ```

   - Install `iGibson`:
     We provide an installation script:
     ```bash
     python -m behavior_eval.utils.install_igibson_utils
     ```
     Alternatively, install it manually:
     ```bash
     git clone https://github.com/embodied-agent-interface/iGibson.git --recursive
     cd iGibson
     pip install -e .
     ```

   - Download assets:
     ```bash
     python -m behavior_eval.utils.download_utils
     ```

   We have successfully tested installation on Linux, Windows 10+, and macOS.

# Quick Start

1. **Arguments**:
   ```bash
   eai-eval \
     --dataset {virtualhome,behavior} \
     --mode {generate_prompts,evaluate_results} \
     --eval-type {action_sequencing,transition_modeling,goal_interpretation,subgoal_decomposition} \
     --llm-response-path <path_to_responses> \
     --output-dir <output_directory> \
     --num-workers <number_of_workers>
   ```

   Run the following command for further information:
   ```bash
   eai-eval --help
   ```

2. **Examples**:

-  ***Evaluate Results***

   Download our precomputed results first if you do not want to specify `<path_to_responses>`:
   ```bash
   python -m eai_eval.utils.download_utils
   ```

   Then, run the commands below:
   ```bash
   eai-eval --dataset virtualhome --eval-type action_sequencing --mode evaluate_results
   eai-eval --dataset virtualhome --eval-type transition_modeling --mode evaluate_results
   eai-eval --dataset virtualhome --eval-type goal_interpretation --mode evaluate_results
   eai-eval --dataset virtualhome --eval-type subgoal_decomposition --mode evaluate_results
   eai-eval --dataset behavior --eval-type action_sequencing --mode evaluate_results
   eai-eval --dataset behavior --eval-type transition_modeling --mode evaluate_results
   eai-eval --dataset behavior --eval-type goal_interpretation --mode evaluate_results
   eai-eval --dataset behavior --eval-type subgoal_decomposition --mode evaluate_results
   ```

-  ***Generate Prompts***

   To generate prompts, run:
   ```bash
   eai-eval --dataset virtualhome --eval-type action_sequencing --mode generate_prompts
   eai-eval --dataset virtualhome --eval-type transition_modeling --mode generate_prompts
   eai-eval --dataset virtualhome --eval-type goal_interpretation --mode generate_prompts
   eai-eval --dataset virtualhome --eval-type subgoal_decomposition --mode generate_prompts
   eai-eval --dataset behavior --eval-type action_sequencing --mode generate_prompts
   eai-eval --dataset behavior --eval-type transition_modeling --mode generate_prompts
   eai-eval --dataset behavior --eval-type goal_interpretation --mode generate_prompts
   eai-eval --dataset behavior --eval-type subgoal_decomposition --mode generate_prompts
   ```

-  ***Simulation***

   To see the effect of our magic actions, refer to this [notebook](https://github.com/embodied-agent-interface/embodied-agent-interface/blob/main/examples/action_sequencing_simulation.ipynb).

3. **Evaluate All Modules in One Command**

   To evaluate all modules with default parameters, use the command below:
   ```bash
   eai-eval --all
   ```
   This command will automatically traverse all unspecified parameter options.

   **Example Usage**:
   ```bash
   eai-eval --all --dataset virtualhome
   ```
   This will run both `generate_prompts` and `evaluate_results` for all modules in the `virtualhome` dataset. Make sure to download our results first if you do not want to specify `<path_to_responses>`.
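
The eight-command blocks in the examples above cover every dataset × eval-type × mode combination. If you prefer to script those runs rather than type each one, a small enumeration sketch (assuming the CLI flags shown above) is:

```python
from itertools import product

# Option values taken from the CLI arguments documented above
datasets = ["virtualhome", "behavior"]
eval_types = ["action_sequencing", "transition_modeling",
              "goal_interpretation", "subgoal_decomposition"]
modes = ["generate_prompts", "evaluate_results"]

# Build one eai-eval invocation per combination (2 x 4 x 2 = 16 commands);
# these could then be passed to subprocess.run or written to a shell script.
commands = [
    f"eai-eval --dataset {d} --eval-type {e} --mode {m}"
    for d, e, m in product(datasets, eval_types, modes)
]
for cmd in commands:
    print(cmd)
```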

# Docker
We provide a ready-to-use Docker image for easy installation and usage.

First, pull the Docker image from Docker Hub:
```bash
docker pull jameskrw/eai-eval
```

Next, run the Docker container interactively:

```bash
docker run -it jameskrw/eai-eval
```

Test the container:

```bash
eai-eval
```
By default, this starts generating prompts for goal interpretation on the BEHAVIOR dataset.



# BibTeX

If you find our work helpful, please consider citing it:

```bibtex
@inproceedings{li2024embodied,
  title={Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making},
  author={Li, Manling and Zhao, Shiyu and Wang, Qineng and Wang, Kangrui and Zhou, Yu and Srivastava, Sanjana and Gokmen, Cem and Lee, Tony and Li, Li Erran and Zhang, Ruohan and others},
  booktitle={NeurIPS 2024},
  year={2024}
}
```


            
