apebench

Name: apebench
Version: 0.1.1
Summary: Benchmark suite for Autoregressive Neural Emulators of PDEs in JAX.
Author: Felix Koehler
Requires Python: ~=3.10
Keywords: jax, sciml, deep-learning, pde, neural operator
Upload time: 2024-11-09 08:49:25
            <h4 align="center">A benchmark suite for Autoregressive PDE Emulators in <a href="https://github.com/google/jax" target="_blank">JAX</a>.</h4>

<p align="center">
<a href="https://pypi.org/project/apebench/">
  <img src="https://img.shields.io/pypi/v/apebench.svg" alt="PyPI">
</a>
<a href="https://github.com/ceyron/apebench/actions/workflows/test.yml">
  <img src="https://github.com/ceyron/apebench/actions/workflows/test.yml/badge.svg" alt="Tests">
</a>
<a href="https://tum-pbs.github.io/apebench">
  <img src="https://img.shields.io/badge/docs-latest-green" alt="docs-latest">
</a>
<a href="https://github.com/ceyron/apebench/releases">
  <img src="https://img.shields.io/github/v/release/ceyron/apebench?include_prereleases&label=changelog" alt="Changelog">
</a>
<a href="https://github.com/ceyron/apebench/blob/main/LICENSE.txt">
  <img src="https://img.shields.io/badge/license-MIT-blue" alt="License">
</a>
</p>

<p align="center">
    <a href="https://arxiv.org/abs/2411.00180">
        📄 Paper
    </a> •
    <a href="https://tum-pbs.github.io/apebench-paper/">
        🧵 Project Page
    </a>
</p>

<p align="center">
  <a href="#installation">Installation</a> •
  <a href="#quickstart">Quickstart</a> •
    <a href="#documentation">Documentation</a> •
    <a href="#background">Background</a> •
    <a href="#citation">Citation</a>
</p>

<p align="center">
  <img src="https://github.com/user-attachments/assets/c6b88756-bc35-4e9a-8662-798a16f8302b" width="150">
</p>

APEBench is a JAX-based tool to evaluate autoregressive neural emulators for
PDEs on periodic domains in 1D, 2D, and 3D. It comes with an efficient reference
simulator based on spectral methods that is used for procedural data generation
(no need to download large datasets with APEBench). Since this simulator can
also be embedded into emulator training (e.g., for a "solver-in-the-loop"
correction setting), this is the first benchmark suite to support
**differentiable physics**.



## Installation

```bash
pip install apebench
```

Requires Python 3.10+ and JAX 0.4.12+ 👉 [JAX install guide](https://jax.readthedocs.io/en/latest/installation.html).

Quick setup with a fresh Conda environment and JAX with CUDA 12 support on Linux:

```bash
conda create -n apebench python=3.12 -y
conda activate apebench
pip install -U "jax[cuda12]"
pip install apebench
```

## Quickstart

Train a ConvNet to emulate 1D advection, then display the training loss, the
error metric over the test rollout, and a sample rollout.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SeCuoYaSfIH2J0IdNeFlDrkCypxtvRie?usp=sharing)

```python
import apebench
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np

advection_scenario = apebench.scenarios.difficulty.Advection()

data, trained_nets = advection_scenario(
    task_config="predict",
    network_config="Conv;26;10;relu",
    train_config="one",
    num_seeds=3,
)

data_loss = apebench.melt_loss(data)
data_metrics = apebench.melt_metrics(data)
data_sample_rollout = apebench.melt_sample_rollouts(data)

fig, axs = plt.subplots(1, 3, figsize=(13, 3))

sns.lineplot(data_loss, x="update_step", y="train_loss", ax=axs[0])
axs[0].set_yscale("log")
axs[0].set_title("Training loss")

sns.lineplot(data_metrics, x="time_step", y="mean_nRMSE", ax=axs[1])
axs[1].set_ylim(-0.05, 1.05)
axs[1].set_title("Metric rollout")

axs[2].imshow(
    np.array(data_sample_rollout["sample_rollout"][0])[:, 0, :].T,
    origin="lower",
    aspect="auto",
    vmin=-1,
    vmax=1,
    cmap="RdBu_r",
)
axs[2].set_xlabel("time")
axs[2].set_ylabel("space")
axs[2].set_title("Sample rollout")

plt.show()
```

![](https://github.com/user-attachments/assets/10f968f4-2b30-4972-8753-22b7fad208ed)

You can explore the APEBench scenarios interactively in a Streamlit app by
running

```bash
streamlit run explore_sample_data_streamlit.py
```

[![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://apebench-app-mca2jmqzxmoap6zdm2uvcb.streamlit.app/)

## Documentation

Documentation is available at
[tum-pbs.github.io/apebench/](https://tum-pbs.github.io/apebench/).

## Background

Autoregressive neural emulators can be used to efficiently forecast transient
phenomena, often associated with differential equations. Denote by
$\mathcal{P}_h$ a reference numerical simulator (e.g., the [FTCS
scheme](https://en.wikipedia.org/wiki/FTCS_scheme) for the heat equation). It
advances a state $u_h$ by

$$
u_h^{[t+1]} = \mathcal{P}_h(u_h^{[t]}).
$$
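
A concrete toy instance of such a $\mathcal{P}_h$ can make the autoregressive structure tangible: one FTCS step of the 1D heat equation on a periodic grid, applied repeatedly to its own output to roll out a trajectory. This is only an illustrative sketch; the grid size, diffusivity, and step sizes are arbitrary choices, not APEBench defaults.

```python
import numpy as np

# A toy reference simulator P_h: one FTCS step (forward Euler in time,
# central differences in space) of the heat equation on a periodic 1D grid.
# nu, dx, dt are illustrative values chosen to satisfy the FTCS stability
# condition nu * dt / dx**2 <= 1/2.
def heat_step(u, nu=0.01, dx=0.1, dt=0.01):
    laplacian = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * nu * laplacian

# Autoregressive rollout: feed the stepper its own previous output.
def rollout(step_fn, u0, num_steps):
    trajectory = [u0]
    for _ in range(num_steps):
        trajectory.append(step_fn(trajectory[-1]))
    return np.stack(trajectory)

u0 = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 64, endpoint=False))
traj = rollout(heat_step, u0, num_steps=10)
print(traj.shape)  # (11, 64) -> initial state plus 10 steps
```

An emulator $f_\theta$ trained to mimic `heat_step` would simply replace `step_fn` in the same rollout loop.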

An autoregressive neural emulator $f_\theta$ is trained to mimic $\mathcal{P}_h$, i.e., $f_\theta \approx \mathcal{P}_h$. Doing so requires the following choices:

1. What is the reference simulator $\mathcal{P}_h$?
    1. What is its corresponding continuous transient partial differential
        equation? (advection, diffusion, Burgers, Kuramoto-Sivashinsky,
        Navier-Stokes, etc.)
    2. What consistent numerical scheme is used to discretize the continuous
        transient partial differential equation?
2. What is the architecture of the autoregressive neural emulator $f_\theta$?
3. How do $f_\theta$ and $\mathcal{P}_h$ interact during training (i.e., during
    the optimization of $\theta$)?
    1. For how many steps are their predictions unrolled and compared?
    2. What is the time-level loss function?
    3. How large is the batch size?
    4. What is the optimizer and its learning rate scheduler?
    5. For how many steps is the training run?
4. Additional training and evaluation related choices:
    1. What is the initial condition distribution?
    2. How long is the time horizon seen during training?
    3. What is the evaluation metric? If it is related to an error rollout, for
        how many steps is the rollout?
    4. How many random seeds are used to draw conclusions?
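Choices (3.1) and (3.2) above, i.e., the unrolled comparison of emulator and reference under a time-level loss, can be sketched in a few lines of JAX. The `emulator` and `reference` callables below are placeholders standing in for $f_\theta$ and $\mathcal{P}_h$; this is a generic illustration, not APEBench's training code.

```python
import jax
import jax.numpy as jnp

# Unroll both the emulator f_theta and the reference P_h for
# `rollout_steps` steps, comparing them with an MSE time-level loss.
def unrolled_loss(params, emulator, reference, u0, rollout_steps):
    loss = 0.0
    pred, ref = u0, u0
    for _ in range(rollout_steps):
        pred = emulator(params, pred)  # emulator advances its own prediction
        ref = reference(ref)           # reference advances the ground truth
        loss += jnp.mean((pred - ref) ** 2)
    return loss / rollout_steps

# Toy instantiation: a scalar-parameter "emulator" and a damping reference.
emulator = lambda p, u: p * u
reference = lambda u: 0.9 * u
u0 = jnp.ones(16)

# JAX differentiates through the whole unrolled chain, so the gradient
# accounts for how early errors compound over later steps.
grad = jax.grad(unrolled_loss)(1.0, emulator, reference, u0, rollout_steps=3)
```

With `params = 0.9` the emulator matches the reference exactly and the loss vanishes; unrolled (rollout) training in APEBench generalizes this pattern to neural $f_\theta$ and spectral $\mathcal{P}_h$.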

APEBench is a framework to holistically assess all four ingredients. Component
(1), the discrete reference simulator $\mathcal{P}_h$, is provided by
[`Exponax`](https://github.com/Ceyron/exponax). This is a suite of
[ETDRK](https://www.sciencedirect.com/science/article/abs/pii/S0021999102969950)-based
methods for semi-linear partial differential equations on periodic domains. This
covers a wide range of dynamics. For the most common scenarios, either a unified
interface using normalized (non-dimensionalized) coefficients or a
difficulty-based interface (as described in the APEBench paper) can be used. The
second (2) component is given by
[`PDEquinox`](https://github.com/Ceyron/pdequinox). This library uses
[`Equinox`](https://github.com/patrick-kidger/equinox), a JAX-based
deep-learning framework, to implement many commonly found architectures like
convolutional ResNets, U-Nets, and FNOs. The third (3) component is
[`Trainax`](https://github.com/Ceyron/trainax), an abstract implementation of
"trainers" that provide supervised rollout training and many other features. The
fourth (4) component, provided by this repository, ties the former three
together.
APEBench encapsulates the entire pipeline of training and evaluating an
autoregressive neural emulator in a scenario. A scenario is a callable
dataclass.
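
The "callable dataclass" idea can be sketched in miniature: configuration lives in the dataclass fields, and calling the instance runs the pipeline. The field names and return value below are hypothetical stand-ins, not APEBench's actual API; the Quickstart above shows the real interface.

```python
from dataclasses import dataclass

# A minimal, hypothetical sketch of a scenario as a callable dataclass.
@dataclass
class ToyScenario:
    num_points: int = 64          # illustrative configuration field
    num_train_steps: int = 100    # illustrative configuration field

    def __call__(self, network_config: str = "Conv;26;10;relu"):
        # A real scenario would generate data, train the network, and
        # return (data, trained_nets); here we just echo the configuration.
        return {
            "network_config": network_config,
            "num_points": self.num_points,
            "num_train_steps": self.num_train_steps,
        }

scenario = ToyScenario(num_points=128)  # override a default via the field
result = scenario()                     # calling the instance runs the "pipeline"
```

This design keeps all experiment configuration declarative and inspectable while still giving a one-call entry point to the full train-and-evaluate pipeline.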

## Citation

This package was developed as part of the [APEBench paper
(arxiv.org/abs/2411.00180)](https://arxiv.org/abs/2411.00180) (accepted at
NeurIPS 2024). If you find it useful for your research, please consider citing
it:

```bibtex
@article{koehler2024apebench,
  title={{APEBench}: A Benchmark for Autoregressive Neural Emulators of {PDE}s},
  author={Felix Koehler and Simon Niedermayr and R{\"u}diger Westermann and Nils Thuerey},
  journal={Advances in Neural Information Processing Systems (NeurIPS)},
  volume={38},
  year={2024}
}
```

(Feel free to also give the project a star on GitHub if you like it.)

## Funding

The main author (Felix Koehler) is a PhD student in the group of [Prof. Thuerey at TUM](https://ge.in.tum.de/) and his research is funded by the [Munich Center for Machine Learning](https://mcml.ai/).

## License

MIT, see [here](https://github.com/Ceyron/apebench/blob/main/LICENSE.txt)

---

> [fkoehler.site](https://fkoehler.site/) &nbsp;&middot;&nbsp;
> GitHub [@ceyron](https://github.com/ceyron) &nbsp;&middot;&nbsp;
> X [@felix_m_koehler](https://twitter.com/felix_m_koehler) &nbsp;&middot;&nbsp;
> LinkedIn [Felix Köhler](https://www.linkedin.com/in/felix-koehler)

            
