nett-benchmarks

Name: nett-benchmarks
Version: 0.4.1
Summary: A testbed for comparing the learning abilities of newborn animals and autonomous artificial agents.
Author email: Bhargav Desai <desabh@iu.edu>, Zachary Laborde <zlaborde@iu.edu>, Manju Garimella <mchivuku@iu.edu>
Upload time: 2024-07-30 14:54:57
Requires Python: ==3.10.12
License: MIT License, Copyright (c) 2023 Building A Mind Lab
Keywords: nett, netts, newborn, embodied, turing test, benchmark, benchmarking, learning, animals, autonomous, artificial, agents, reinforcement, neuroml, AI, ML, machine learning, artificial intelligence
Project URLs: Homepage (https://github.com/buildingamind/NewbornEmbodiedTuringTest), Documentation (https://buildingamind.github.io/NewbornEmbodiedTuringTest/index.html), Bug Tracker (https://github.com/buildingamind/NewbornEmbodiedTuringTest/issues)
Requirements: No requirements were recorded.

<div align="center">
<img src="https://github.com/buildingamind/NewbornEmbodiedTuringTest/raw/main/docs/assets/images/banner.png" alt="Banner" />

# **Newborn Embodied Turing Test**

Benchmarking Virtual Agents in Controlled-Rearing Conditions

![PyPI - Version](https://img.shields.io/pypi/v/nett-benchmarks)
![Python Version from PEP 621 TOML](https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2Fbuildingamind%2FNewbornEmbodiedTuringTest%2Fmain%2Fpyproject.toml)
![GitHub License](https://img.shields.io/github/license/buildingamind/NewbornEmbodiedTuringTest)
![GitHub Issues or Pull Requests](https://img.shields.io/github/issues/buildingamind/NewbornEmbodiedTuringTest)

[Getting Started](#getting-started) •
[Documentation](https://buildingamind.github.io/NewbornEmbodiedTuringTest/) • 
[Lab Website](http://buildingamind.com/)

</div>

The Newborn Embodied Turing Test (NETT) is a cutting-edge toolkit designed to simulate virtual agents in controlled-rearing conditions. This innovative platform enables researchers to create, simulate, and analyze virtual agents, facilitating direct comparisons with real chicks as documented by the **[Building a Mind Lab](http://buildingamind.com/)**. Our comprehensive suite includes all necessary components for the simulation and analysis of embodied models, closely replicating laboratory conditions.

Below is a visual representation of our experimental setup, showcasing the infrastructure for the three primary experiments discussed in this documentation.

<div align="center">

<img src="https://github.com/buildingamind/NewbornEmbodiedTuringTest/raw/main/docs/assets/images/digital_twin.jpg" alt="Digital Twin" width="65%" />
</div>

## How to Use this Repository

The NETT toolkit comprises three key components:

1. **Virtual Environment**: A dynamic environment that serves as the habitat for virtual agents.
2. **Experimental Simulation Programs**: Tools to initiate and conduct experiments within the virtual world.
3. **Data Visualization Programs**: Utilities for analyzing and visualizing experiment outcomes.

## Directory Structure

The directory structure of the code is as follows:

```
├── docs                          # Documentation and guides
├── examples
│   ├── notebooks                 # Jupyter Notebooks for examples
│   │   └── Getting Started.ipynb # Introduction and setup notebook
│   └── run                       # Terminal script example
├── src/nett
│   ├── analysis                  # Analysis scripts
│   ├── body                      # Agent body configurations
│   ├── brain                     # Neural network models and learning algorithms
│   ├── environment               # Simulation environments
│   ├── utils                     # Utility functions
│   ├── nett.py                   # Main library script
│   └── __init__.py               # Package initialization
├── tests                         # Unit tests
├── mkdocs.yml                    # MkDocs configuration
├── pyproject.toml                # Project metadata
└── README.md                     # This README file
```

## Getting Started
<!-- sphinx-start -->
<!-- This comment is here to denote the start of the "Getting Started" page for Sphinx documentation -->
Before benchmarking your first embodied agent with NETT, please note the following:

**Important**: The `mlagents==1.0.0` dependency is incompatible with Apple Silicon (M1, M2, etc.) chips. Please use a different machine to run this codebase.
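
If you are unsure whether your machine is affected, the following one-liner reports the CPU architecture (a minimal check; `arm64` indicates Apple Silicon):
```python
import platform

# Apple Silicon machines report "arm64"; Intel/AMD machines report "x86_64".
print(platform.machine())
```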

### Installation

1. **Virtual Environment Setup (Highly Recommended)**

   Create and activate a virtual environment to avoid dependency conflicts.
   ```bash
   conda create -y -n nett_env python=3.10.12
   conda activate nett_env
   ```
   See [here](https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda "Link for how to set-up a virtual env") for detailed instructions.

2. **Install Prerequisites**

   Install the required versions of `setuptools`, `pip`, and `wheel`:
   ```bash
   pip install setuptools==65.5.0 pip==21 wheel==0.38.4
   ```
   **NOTE:** These pins are required because of incompatibilities with the sub-dependency `gym==0.21`. More information about this issue can be found [here](https://github.com/openai/gym/issues/3176#issuecomment-1560026649).
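
   To double-check that the pinned versions are the ones active in your environment, a quick sketch using only the standard library:
   ```python
   from importlib.metadata import version

   # Should report 65.5.0, 21.0, and 0.38.4 after the pins above.
   for pkg in ("setuptools", "pip", "wheel"):
       print(pkg, version(pkg))
   ```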

3. **Toolkit Installation**

   Install the toolkit using `pip`.
   ```bash
   pip install nett-benchmarks
   ```

   **NOTE:** Installation outside a virtual environment may fail due to conflicting dependencies. Ensure compatibility, especially with `gym==0.21` and `numpy<=1.21.2`.
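
   Once installed, you can confirm which release was resolved from within Python (a minimal check):
   ```python
   from importlib.metadata import version

   # Prints the installed release of the toolkit, e.g. 0.4.1.
   print(version("nett-benchmarks"))
   ```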

### Running a NETT

1. **Download or Create the Unity Executable**

   Obtain a pre-made Unity executable from [here](https://origins.luddy.indiana.edu/environments/). The executable is required to run the virtual environment.
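
   On Linux, the downloaded build may also need its executable bit set before it can be launched; a small sketch (the path below is a placeholder):
   ```python
   import os
   import stat
   from pathlib import Path

   exe = Path("path/to/executable.x86_64")  # placeholder: point this at your download
   if not os.access(exe, os.X_OK):
       # Add the user-executable bit so the virtual environment can be launched.
       exe.chmod(exe.stat().st_mode | stat.S_IXUSR)
   ```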

2. **Import NETT Components**

   Start by importing the NETT framework components - `Brain`, `Body`, and `Environment`, alongside the main `NETT` class.
   ```python
   from nett import Brain, Body, Environment
   from nett import NETT
   ```

3. **Component Configuration**:

- **Brain**

   Configure the learning aspects, including the policy network (e.g. "CnnPolicy"), learning algorithm (e.g. "PPO"), the reward function, and the encoder.
   ```python
   brain = Brain(policy="CnnPolicy", algorithm="PPO")
   ```
   To get a list of all available policies, algorithms, and encoders, run `nett.list_policies()`, `nett.list_algorithms()`, and `nett.list_encoders()` respectively.
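
   For example, a quick way to print the supported options before choosing a configuration:
   ```python
   import nett

   # Inspect the options the toolkit currently supports.
   print(nett.list_policies())
   print(nett.list_algorithms())
   print(nett.list_encoders())
   ```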

- **Body** 

   Set up the agent's physical interface with the environment. Optionally, `gym.Wrapper` classes can be applied for data preprocessing.
   ```python
   body = Body(type="basic", dvs=False, wrappers=None)
   ```
   Here, we do not pass any wrappers, letting information from the environment reach the brain "as is". Alternative body types (e.g. `two-eyed`, `rag-doll`) are planned in future updates.
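
   As a purely hypothetical sketch of wrapper usage (assuming `wrappers` accepts a list of `gym.Wrapper` subclasses; consult the API reference for the exact expected form):
   ```python
   from gym.wrappers import GrayScaleObservation  # shipped with gym==0.21

   # Hypothetical: convert observations to grayscale before they reach the brain.
   body = Body(type="basic", dvs=False, wrappers=[GrayScaleObservation])
   ```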

- **Environment**

   Create the simulation environment using the path to your Unity executable (see Step 1).
   ```python
   environment = Environment(config="identityandview", executable_path="path/to/executable.x86_64")
   ```
   To get a list of all available configurations, run `nett.list_configs()`.
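
   For instance, to confirm that the configuration name is recognized before constructing the environment:
   ```python
   import nett

   # "identityandview" should appear in this list.
   print(nett.list_configs())
   ```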

4. **Run the Benchmarking**

   Integrate all components into a NETT instance to facilitate experiment execution.
   ```python
   benchmarks = NETT(brain=brain, body=body, environment=environment)
   ```
   The `NETT` instance has a `.run()` method that initiates the benchmarking process. The method accepts parameters such as the number of brains, training/testing episodes, and the output directory.
   ```python
   job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/", num_brains=5, trains_eps=10, test_eps=5)
   ```
   The `run` method is asynchronous; it returns the list of jobs, which may or may not have completed yet. If you wish to display the Unity environments while they run, set the `batch_mode` parameter to `False`, as in the example below.
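
   For example, the same call with the Unity windows visible (all values below are placeholders):
   ```python
   # Identical to the call above, but renders the Unity environments on screen.
   job_sheet = benchmarks.run(
       output_dir="path/to/run/output/directory/",
       num_brains=1,
       trains_eps=10,
       test_eps=5,
       batch_mode=False,
   )
   ```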

5. **Check Status**:

   To see the status of the benchmark processes, use the `.status()` method:
   ```python
   benchmarks.status(job_sheet)
   ```
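
   For long runs, a simple (hypothetical) polling loop can keep you updated; the exact format of the status output depends on the toolkit:
   ```python
   import time

   # Re-check the job sheet every minute for up to ten minutes.
   for _ in range(10):
       print(benchmarks.status(job_sheet))
       time.sleep(60)
   ```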

### Running Standard Analysis

After running the experiments, the pipeline generates a collection of data files in the specified output directory.

1. **Install R and dependencies**

   To reproduce the analyses performed in previous experiments, this toolkit provides a set of analysis scripts. Before running them, you will need R and the packages `tidyverse`, `argparse`, and `scales` installed. To install these packages, run the following command in R:
   ```R
   install.packages(c("tidyverse", "argparse", "scales"))
   ```
   Alternatively, if you have difficulty installing R on your system, you can install both R and these packages using conda:
   ```bash
   conda install -y r r-tidyverse r-argparse r-scales
   ```
2. **Run the Analysis** 

   To run the analysis, use the `analyze` method of the `NETT` class. This method generates a set of plots and tables from the data files in the run output directory.
   ```python
   benchmarks.analyze(run_dir="path/to/run/output/directory/", output_dir="path/to/analysis/output/directory/")
   ```
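
   Putting the whole pipeline together, a condensed sketch of the steps described above (paths and episode counts are placeholders, and `analyze` should only be called once the jobs have finished):
   ```python
   from nett import NETT, Brain, Body, Environment

   brain = Brain(policy="CnnPolicy", algorithm="PPO")
   body = Body(type="basic", dvs=False, wrappers=None)
   environment = Environment(config="identityandview",
                             executable_path="path/to/executable.x86_64")

   benchmarks = NETT(brain=brain, body=body, environment=environment)
   job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/",
                              num_brains=5, trains_eps=10, test_eps=5)
   benchmarks.status(job_sheet)  # check progress; run() is asynchronous

   # After the jobs complete:
   benchmarks.analyze(run_dir="path/to/run/output/directory/",
                      output_dir="path/to/analysis/output/directory/")
   ```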

<!-- sphinx-end -->
<!-- This comment is here to denote the end of the "Getting Started" page for Sphinx documentation -->

## Documentation
The full documentation is available [here](https://buildingamind.github.io/NewbornEmbodiedTuringTest/).

## Experiment Configuration

More details on the experiments can be found on the following pages:

* [**Parsing Experiment**](https://buildingamind.github.io/NewbornEmbodiedTuringTest/papers/Parsing.html)
* [**ViewPoint Experiment**](https://buildingamind.github.io/NewbornEmbodiedTuringTest/papers/ViewInvariant.html)

[🔼 Back to top](#newborn-embodied-turing-test)