nett-benchmarks

Name: nett-benchmarks
Version: 0.3.1 (PyPI)
Upload time: 2024-03-15 19:58:43
Summary: A testbed for comparing the learning abilities of newborn animals and autonomous artificial agents.
Requires Python: ==3.10.12
License: MIT (Copyright (c) 2023 Building A Mind Lab)
Keywords: nett, netts, newborn embodied turing test, benchmark, benchmarking, learning, animals, autonomous artificial agents, reinforcement, neuroml, ai, ml, machine learning, artificial intelligence
Requirements: none recorded
<div align="center">
<img src="https://github.com/buildingamind/NewbornEmbodiedTuringTest/raw/main/docs/assets/images/banner.png" alt="Banner" />

# **Newborn Embodied Turing Test**

Benchmarking Virtual Agents in Controlled-Rearing Conditions

![PyPI - Version](https://img.shields.io/pypi/v/nett-benchmarks)
![Python Version from PEP 621 TOML](https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2Fbuildingamind%2FNewbornEmbodiedTuringTest%2Fmain%2Fpyproject.toml)
![GitHub License](https://img.shields.io/github/license/buildingamind/NewbornEmbodiedTuringTest)
![GitHub Issues or Pull Requests](https://img.shields.io/github/issues/buildingamind/NewbornEmbodiedTuringTest)

[Getting Started](#getting-started) •
[Documentation](https://buildingamind.github.io/NewbornEmbodiedTuringTest/) • 
[Lab Website](http://buildingamind.com/)

</div>

The Newborn Embodied Turing Test (NETT) is a cutting-edge toolkit designed to simulate virtual agents in controlled-rearing conditions. This innovative platform enables researchers to create, simulate, and analyze virtual agents, facilitating direct comparisons with real chicks as documented by the **[Building a Mind Lab](http://buildingamind.com/)**. Our comprehensive suite includes all necessary components for the simulation and analysis of embodied models, closely replicating laboratory conditions.

Below is a visual representation of our experimental setup, showcasing the infrastructure for the three primary experiments discussed in this documentation.

<div align="center">

<img src="https://github.com/buildingamind/NewbornEmbodiedTuringTest/raw/main/docs/assets/images/digital_twin.jpg" alt="Digital Twin" width="65%" />
</div>

## How to Use this Repository

The NETT toolkit comprises three key components:

1. **Virtual Environment**: A dynamic environment that serves as the habitat for virtual agents.
2. **Experimental Simulation Programs**: Tools to initiate and conduct experiments within the virtual world.
3. **Data Visualization Programs**: Utilities for analyzing and visualizing experiment outcomes.

## Directory Structure

The directory structure of the code is as follows:

```
├── docs                       # Documentation and guides
├── notebooks
│   └── Getting Started.ipynb  # Introduction and setup notebook
├── src/nett
│   ├── analysis               # Analysis scripts
│   ├── body                   # Agent body configurations
│   ├── brain                  # Neural network models and learning algorithms
│   ├── environment            # Simulation environments
│   ├── utils                  # Utility functions
│   ├── nett.py                # Main library script
│   └── __init__.py            # Package initialization
├── tests                      # Unit tests
├── mkdocs.yml                 # MkDocs configuration
├── pyproject.toml             # Project metadata
└── README.md                  # This README file
```

## Getting Started
<!-- sphinx-start -->
<!-- This comment is here to denote the start of the "Getting Started" page for Sphinx documentation -->
To begin benchmarking your first embodied agent with NETT, please be aware:

**Important**: The `mlagents==1.0.0` dependency is incompatible with Apple Silicon (M1, M2, etc.) chips. Please use a different machine to run this codebase.

### Installation

1. **Virtual Environment Setup (Highly Recommended)**

   Create and activate a virtual environment to avoid dependency conflicts.
   ```bash
   conda create -y -n nett_env python=3.10.12
   conda activate nett_env
   ```
   See [here](https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda "Link for how to set-up a virtual env") for detailed instructions.

2. **Install Prerequisites**

   Install the needed versions of `setuptools` and `pip`:
   ```bash
   pip install setuptools==65.5.0 pip==21 wheel==0.38.4
   ```
   **NOTE:** This is a result of incompatibilities with the sub-dependency `gym==0.21`. More information about this issue can be found [here](https://github.com/openai/gym/issues/3176#issuecomment-1560026649).

3. **Toolkit Installation**

   Install the toolkit using `pip`.
   ```bash
   pip install nett-benchmarks
   ```

   **NOTE:** Installation outside a virtual environment may fail due to conflicting dependencies. Ensure compatibility, especially with `gym==0.21` and `numpy<=1.21.2`.
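   If you want to confirm that the pinned sub-dependencies landed correctly in your environment, a small sketch like the one below can help. The helper functions are hypothetical (not part of the nett API) and assume simple `X.Y.Z` version numbering:
   ```python
   from importlib.metadata import version

   def version_tuple(v: str) -> tuple:
       """Parse a simple 'X.Y.Z' version string into a comparable tuple."""
       return tuple(int(part) for part in v.split("."))

   def check_pin(package: str, maximum: str) -> bool:
       """Return True if the installed version of `package` is <= `maximum`."""
       return version_tuple(version(package)) <= version_tuple(maximum)

   # Example: the toolkit expects numpy<=1.21.2
   # print(check_pin("numpy", "1.21.2"))
   ```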

### Running a NETT

1. **Download or Create the Unity Executable**

   Obtain a pre-made Unity executable from [here](https://origins.luddy.indiana.edu/environments/). The executable is required to run the virtual environment.

2. **Import NETT Components**

   Start by importing the NETT framework components - `Brain`, `Body`, and `Environment`, alongside the main `NETT` class.
   ```python
   from nett import Brain, Body, Environment
   from nett import NETT
   ```

3. **Component Configuration**

- **Brain**

   Configure the learning aspects, including the policy network (e.g. "CnnPolicy"), learning algorithm (e.g. "PPO"), the reward function, and the encoder.
   ```python
   brain = Brain(policy="CnnPolicy", algorithm="PPO")
   ```
   To get a list of all available policies, algorithms, and encoders, the `Brain` class contains the methods `list_policies()`, `list_algorithms()`, and `list_encoders()` respectively.

- **Body** 

   Set up the agent's physical interface with the environment. Optionally, `gym.Wrapper` instances can be applied for data preprocessing.
   ```python
   body = Body(type="basic", dvs=False, wrappers=None)
   ```
   Here, we do not pass any wrappers, letting information from the environment reach the brain "as is". Alternative body types (e.g. `two-eyed`, `rag-doll`) are planned in future updates.
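   As an illustration of the kind of preprocessing a wrapper might perform, the hypothetical transform below collapses an RGB observation to grayscale before it reaches the brain. This helper is not part of the nett API; in practice it would live inside a `gym.ObservationWrapper`:
   ```python
   import numpy as np

   def to_grayscale(obs: np.ndarray) -> np.ndarray:
       """Collapse an HWC uint8 RGB frame to a single luminance channel.
       Hypothetical preprocessing step, shown for illustration only."""
       weights = np.array([0.299, 0.587, 0.114])  # standard luma coefficients
       return (obs @ weights).astype(np.uint8)
   ```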

- **Environment**

   Create the simulation environment using the path to your Unity executable (see Step 1).
   ```python
   environment = Environment(config="identityandview", executable_path="path/to/executable.x86_64")
   ```
   To get a list of all available configurations, run `Environment.list_configs()`.
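   Since a bad executable path is a common failure mode, it can help to validate it before constructing the environment. A minimal sketch (the helper is hypothetical, not part of the nett API):
   ```python
   from pathlib import Path

   def resolve_executable(path_str: str) -> str:
       """Return an absolute path to the Unity executable, or fail early."""
       path = Path(path_str).expanduser().resolve()
       if not path.is_file():
           raise FileNotFoundError(f"Unity executable not found: {path}")
       return str(path)
   ```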

4. **Run the Benchmarking**

   Integrate all components into a NETT instance to facilitate experiment execution.
   ```python
   benchmarks = NETT(brain=brain, body=body, environment=environment)
   ```
   The `NETT` instance has a `.run()` method that initiates the benchmarking process. The method accepts parameters such as the number of brains, training/testing episodes, and the output directory.
   ```python
   job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/", num_brains=5, trains_eps=10, test_eps=5)
   ```
   The `run` function is asynchronous: it returns a job sheet of launched jobs, which may or may not yet be complete. To display the Unity environments while they run, set the `batch_mode` parameter to `False`.
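   Because `run` returns before the jobs finish, a caller that needs the results synchronously has to poll. A minimal polling sketch, assuming only a callable that reports completion (for example, a thin wrapper around `benchmarks.status(job_sheet)`); the helper itself is hypothetical:
   ```python
   import time

   def wait_until_done(is_done, poll_secs: float = 5.0, timeout_secs: float = 3600.0) -> bool:
       """Block until `is_done()` returns True or the timeout elapses."""
       deadline = time.monotonic() + timeout_secs
       while time.monotonic() < deadline:
           if is_done():
               return True
           time.sleep(poll_secs)
       return False
   ```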

5. **Check Status**

   To see the status of the benchmark processes, use the `.status()` method:
   ```python
   benchmarks.status(job_sheet)
   ```

### Running Standard Analysis

After running the experiments, the pipeline will generate a collection of datafiles in the defined output directory. 

1. **Install R and dependencies**

   To run the analyses performed in previous experiments, this toolkit provides a set of analysis scripts. Before running them, you will need R and the packages `tidyverse`, `argparse`, and `scales` installed. To install these packages, run the following command in R:
   ```R
   install.packages(c("tidyverse", "argparse", "scales"))
   ```
   Alternatively, if you have difficulty installing R on your system, you can install R and the packages via conda:
   ```bash
   conda install -y r r-tidyverse r-argparse r-scales
   ```
2. **Run the Analysis** 

   To run the analysis, use the `analyze` method of the `NETT` class. This method will generate a set of plots and tables based on the datafiles in the output directory.
   ```python
   benchmarks.analyze(run_dir="path/to/run/output/directory/", output_dir="path/to/analysis/output/directory/")
   ```
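   Analyses like these are typically dispatched to `Rscript`. The sketch below only assembles such a command line; the script name and flags are hypothetical, shown to illustrate how `argparse`-style R scripts are invoked (`benchmarks.analyze` handles this for you):
   ```python
   def build_rscript_command(script: str, run_dir: str, output_dir: str) -> list:
       """Assemble an Rscript invocation for an argparse-based analysis script.
       Flag names here are illustrative, not the toolkit's actual interface."""
       return [
           "Rscript", script,
           "--run-dir", run_dir,
           "--output-dir", output_dir,
       ]

   # e.g. subprocess.run(build_rscript_command("analysis.R", "runs/", "plots/"), check=True)
   ```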

<!-- sphinx-end -->
<!-- This comment is here to denote the end of the "Getting Started" page for Sphinx documentation -->

## Documentation
The full documentation is available [here](https://buildingamind.github.io/NewbornEmbodiedTuringTest/).

## Experiment Configuration

More details on the experiments can be found on the following pages:

* [**Parsing Experiment**](https://buildingamind.github.io/NewbornEmbodiedTuringTest/papers/Parsing.html)
* [**ViewPoint Experiment**](https://buildingamind.github.io/NewbornEmbodiedTuringTest/papers/ViewInvariant.html)

[🔼 Back to top](#newborn-embodied-turing-test)

            
