nasbenchapi

Name: nasbenchapi
Version: 1.0.2
Summary: Lightweight, unified pickle-based NASBench APIs (101/201/301) with downloader
Upload time: 2025-10-29 18:32:33
Author: NASBenchAPI Contributors
Requires Python: >=3.8
License: MIT
Keywords: nas, neural-architecture-search, nasbench, benchmark, automl
            
# NASBenchAPI

[![pypi](https://img.shields.io/badge/pypi%20package-1.0.2-lightgrey.svg)](https://pypi.org/project/nasbenchapi/) [![Platform](https://img.shields.io/badge/python-v3.8+-green)](https://github.com/ThunderStruct/NASBenchAPI) [![License](https://img.shields.io/badge/license-MIT-orange)](https://github.com/ThunderStruct/NASBenchAPI/blob/main/LICENSE) [![Read the Docs](https://readthedocs.org/projects/nasbenchapi/badge/?version=latest)](https://nasbenchapi.readthedocs.io/en/latest/)


A unified, lightweight interface for NASBench-101, 201, and 301 with optimized Pickle-based datasets.

 
------------------------

  

## Getting Started

  

**NASBenchAPI** is a lightweight, unified interface for Neural Architecture Search benchmarks (101, 201, and 301). All NASBench datasets (originally in `.tfrecord`, `.pth`, and `.json` formats) were extracted and saved as Pickle-based files for consistency.


### Related Works


This project is inspired by the holistic NAS Library, [NASLib](https://github.com/automl/NASLib), and the paper by [Mehta et al.](https://openreview.net/forum?id=0DLwqQLmqV).


The primary motivation for NASBenchAPI stems from the need to integrate NASBench datasets (101, 201, 301) into custom frameworks without the significant overhead and extraneous tools introduced by more comprehensive libraries. This API provides a focused, lightweight, and unified interface specifically for that purpose.


### Installation

  

#### PyPI (recommended)

  

The Python package is hosted on the [Python Package Index (PyPI)](https://pypi.org/project/nasbenchapi/).

  

The latest published version of NASBenchAPI can be installed using:

  

```sh
pip install nasbenchapi
```

  

#### Manual Installation

Simply clone the entire repo, copy the files in the `nasbenchapi` folder into your project, and import them as usual.

  

Or use one of the shorthand methods below:

##### GIT

-  `cd` into your project directory

- Use `sparse-checkout` to pull the library files only into your project directory

```sh
git init nasbenchapi
cd nasbenchapi

git remote add -f origin https://github.com/ThunderStruct/NASBenchAPI.git
git config core.sparseCheckout true

echo "nasbenchapi/*" >> .git/info/sparse-checkout

git pull --depth=1 origin main
```

- Import the newly pulled files into your project folder

##### SVN

-  `cd` into your project directory

-  `checkout` the library files

```sh
svn checkout https://github.com/ThunderStruct/NASBenchAPI/trunk/nasbenchapi
```

- Import the newly checked out files into your project folder

  

### Quick Start

  

#### Basic Usage

  
#####  Loading and initializing a benchmark
```python
from nasbenchapi import NASBench101, NASBench201, NASBench301

# Initialize with an explicit path
nb101 = NASBench101('/path/to/nb101.pkl')  # same for 201 and 301

# Or use environment variables
# export NASBENCH201_PATH=/path/to/nb201.pkl
nb201 = NASBench201()
```

##### Sample random architectures

```python
archs = nb101.random_sample(n=5, seed=42)  # randomly sample 5 architectures
print(f"Sampled {len(archs)} architectures")
```

##### Query performance of an architecture

```python
arch = archs[0]

# Tuple result: (info_dict, metrics_by_budget)
info, metrics = nb101.query(arch, dataset='cifar10', split='val')

# Accessing the final run at the 108-epoch budget
final_val = metrics[108][-1]['final_validation_accuracy']
print(f"Validation accuracy @108 epochs: {final_val}")

# Legacy condensed dict (metric / cost / info)
summary = nb101.query(arch, dataset='cifar10', split='val', summary=True)
print(f"Summary metric: {summary['metric']}")

```

##### Iterate over all architectures

```python
for i, arch in enumerate(nb101.iter_all()):
    if i >= 10:
        break
    print(f"Architecture {i}: {nb101.id(arch)}")
```


## Benchmark Reference

### NASBench-101

- **Dataset format**: Converted from the original TensorFlow TFRecord into a Pickle for faster loading (up to 20x faster) and compatibility with modern libraries (does not depend on TF1.x).
- **Budgets**: Validation/test metrics are available at epochs 4, 12, 36, and 108.
- **Query return shape**:
  - Default: tuple ``(info_dict, metrics_by_budget)`` where each budget maps to a list of raw run dictionaries (`halfway_*`, `final_*` keys).
  - ``average=True`` collapses runs per budget; ``summary=True`` restores the legacy dict with ``metric``, ``metric_name``, ``cost``, ``std``, ``info``.

```python
from nasbenchapi import NASBench101

nb101 = NASBench101('/path/to/nasbench101_full.pkl', verbose=False)
arch = nb101.random_sample(n=1, seed=0)[0]

info, metrics = nb101.query(arch, dataset='cifar10', split='val')
avg_metrics = nb101.query(arch, dataset='cifar10', split='val', average=True)[1]
summary = nb101.query(arch, dataset='cifar10', split='val', summary=True)

print(info['module_hash'])
print(metrics[108][-1]['final_test_accuracy'])
print(summary['metric'])
```

### NASBench-201

- **Dataset format**: Official PyTorch checkpoint (`NASBench-201-v1_1-096897.pth`) re-serialized to pickle with cached index ↔ string mappings.
- **Budgets**: Epochs 0–199 (commonly query 12 for early and 199 for final results) across CIFAR-10, CIFAR-100, and ImageNet16-120.
- **Query return shape**: dict with ``metric``, ``metric_name``, ``cost``, ``std``, and ``info`` (contains architecture index, arch string, dataset, split, seed, epoch, params, FLOPs).

```python
from nasbenchapi import NASBench201

nb201 = NASBench201('/path/to/nasbench201.pkl', verbose=False)
arch_str = nb201.random_sample(n=1, seed=7)[0]

result = nb201.query(arch_str, dataset='cifar10', split='val', budget=199)
print(result['metric'])
print(result['info']['arch_str'])
```

### NASBench-301

- **Dataset format**: The original directory of JSON surrogate models has been flattened into a single pickle for faster access; indices map directly to entries.
- **Budgets**: Validation budgets come from learning-curve lengths (typically 1–98 epochs for CIFAR-10/CIFAR-100); test metrics expose the declared training budget.
- **Query return shape**: dict with ``metric``, ``metric_name``, ``cost``, ``std``, and ``info`` (including entry index, dataset, optimizer tag, epochs available/used, JSON source path).

```python
from nasbenchapi import NASBench301

nb301 = NASBench301('/path/to/nasbench301.pkl', verbose=False)
idx = nb301.random_sample(n=1, seed=1)[0]

val_final = nb301.query(idx, dataset='cifar10', split='val')
val_epoch50 = nb301.query(idx, dataset='cifar10', split='val', budget=50)
test_final = nb301.query(idx, dataset='cifar10', split='test')

print(val_final['metric'], val_epoch50['metric'], test_final['metric'])
```

  

### Dataset Management

**Environment Variables (recommended)**

  

Set environment variables to avoid passing paths explicitly and work seamlessly across different projects:

```bash
export NASBENCH101_PATH=/path/to/nb101.pkl
export NASBENCH201_PATH=/path/to/nb201.pkl
export NASBENCH301_PATH=/path/to/nb301.pkl
```
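As a quick sanity check before launching experiments, you can verify which benchmark paths are configured. A minimal sketch using only the standard library (the helper name `configured_benchmarks` is illustrative, not part of the API; the variable names match the ones exported above):

```python
import os

def configured_benchmarks():
    """Return a dict mapping benchmark name -> configured pickle path (or None)."""
    env_vars = {
        '101': 'NASBENCH101_PATH',
        '201': 'NASBENCH201_PATH',
        '301': 'NASBENCH301_PATH',
    }
    return {name: os.environ.get(var) for name, var in env_vars.items()}

# Report which benchmarks are ready to use
for name, path in configured_benchmarks().items():
    print(f"NASBench-{name}: {path if path else '(not set)'}")
```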


**CLI Downloader (recommended)**

Download the Pickle-based benchmark datasets through the CLI:

```bash
nasbench-download
```

You may optionally pass the `--benchmark={101|201|301}` argument; otherwise, the tool will prompt for benchmark selection interactively.
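For scripted, non-interactive setups, the flag can be passed directly (using the `--benchmark` argument described above):

```shell
# Download the NASBench-201 pickle without the interactive prompt
nasbench-download --benchmark=201
```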


**Manual Download**

Alternatively, manually download the Pickle-based benchmarks through the following links:

| Benchmark | Download Link |
|-----------|---------------|
| **NASBench-101** | [Figshare Link](https://figshare.com/ndownloader/files/58862740) |
| **NASBench-201** | [Figshare Link](https://figshare.com/ndownloader/files/58862743) |
| **NASBench-301** | [Figshare Link](https://figshare.com/ndownloader/files/58862737) |


### Documentation

Detailed examples and the full API docs are [hosted on Read the Docs](https://nasbenchapi.readthedocs.io/en/latest/).
  

## Benchmarks at a Glance

  

| Benchmark | Datasets | Metrics | Search Space Size |
|-----------|----------|---------|-------------------|
| **NASBench-101** | CIFAR-10 | train/val/test accuracy, training time | 423,624 |
| **NASBench-201** | CIFAR-10, CIFAR-100, ImageNet16-120 | train/val/test accuracy, losses | 15,625 |
| **NASBench-301** | CIFAR-10, CIFAR-100 | surrogate val/test accuracy | ~10^18 (surrogate) |
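The NASBench-201 figure in the table follows directly from its cell structure: each cell is a small DAG whose 6 edges are each labeled with one of 5 candidate operations, giving 5^6 architectures. A quick arithmetic check:

```python
# NASBench-201 cell: 6 edges, each choosing one of 5 operations
n_edges = 6
n_ops = 5
space_size = n_ops ** n_edges
print(space_size)  # 15625, matching the table above
```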
  


## Cite

If you use this library in your work, please use the following BibTeX entry:

```bibtex
@misc{nasbenchapi-2025, 
  title={NASBenchAPI: A unified interface for NASBench datasets}, 
  author={Shahawy, Mohamed}, 
  year={2025}, 
  publisher={GitHub}, 
  howpublished={\url{https://github.com/ThunderStruct/NASBenchAPI}} 
}
```

## License

This project is licensed under the MIT License - see the [LICENSE](https://github.com/ThunderStruct/NASBenchAPI/blob/main/LICENSE) file for details.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "nasbenchapi",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "nas, neural-architecture-search, nasbench, benchmark, automl",
    "author": "NASBenchAPI Contributors",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/dc/e2/1514bca78db89a168a3a0f9f686b7c0db2ab63c8e36035419983e64448a5/nasbenchapi-1.0.2.tar.gz",
    "platform": null,
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Lightweight, unified pickle-based NASBench APIs (101/201/301) with downloader",
    "version": "1.0.2",
    "project_urls": {
        "Documentation": "https://github.com/ThunderStruct/NASBenchAPI/tree/main/docs",
        "Homepage": "https://github.com/ThunderStruct/NASBenchAPI",
        "Issues": "https://github.com/ThunderStruct/NASBenchAPI/issues",
        "Repository": "https://github.com/ThunderStruct/NASBenchAPI"
    },
    "split_keywords": [
        "nas",
        " neural-architecture-search",
        " nasbench",
        " benchmark",
        " automl"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "1f15d165edc84c58e26d4c69a4743c699b0b25cac79df685a1eb64877f258abd",
                "md5": "7b9c4e9a76ca93f36f8d6adbbf7191c4",
                "sha256": "b5dc2f1c0f204bc4b6a89cce5f377f09ad6d1b86ea0a2cb95e6102ba315493fe"
            },
            "downloads": -1,
            "filename": "nasbenchapi-1.0.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "7b9c4e9a76ca93f36f8d6adbbf7191c4",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 27769,
            "upload_time": "2025-10-29T18:32:32",
            "upload_time_iso_8601": "2025-10-29T18:32:32.150234Z",
            "url": "https://files.pythonhosted.org/packages/1f/15/d165edc84c58e26d4c69a4743c699b0b25cac79df685a1eb64877f258abd/nasbenchapi-1.0.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "dce21514bca78db89a168a3a0f9f686b7c0db2ab63c8e36035419983e64448a5",
                "md5": "7eb3b76cccada80b5741ad8867d0eded",
                "sha256": "7eea73ce1f35b8146362a9c8b5e2f605b5ce5f30de24905b81a8d61ad3bcbf08"
            },
            "downloads": -1,
            "filename": "nasbenchapi-1.0.2.tar.gz",
            "has_sig": false,
            "md5_digest": "7eb3b76cccada80b5741ad8867d0eded",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 26196,
            "upload_time": "2025-10-29T18:32:33",
            "upload_time_iso_8601": "2025-10-29T18:32:33.360649Z",
            "url": "https://files.pythonhosted.org/packages/dc/e2/1514bca78db89a168a3a0f9f686b7c0db2ab63c8e36035419983e64448a5/nasbenchapi-1.0.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-29 18:32:33",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "ThunderStruct",
    "github_project": "NASBenchAPI",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "nasbenchapi"
}
        