aenet-gpr

Name: aenet-gpr
Version: 1.7.1
Home page: https://github.com/atomisticnet/aenet-gpr
Summary: Atomistic simulation tools based on Gaussian Process Regression
Upload time: 2025-07-13 17:53:04
Author: In Won Yeu
Requires Python: >=3
License: MPL-2.0
Keywords: machine learning, potential energy surface, aenet, data augmentation
# ænet-gpr
**Efficient Data Augmentation for ANN Potential Training Using GPR Surrogate Models**

`aenet-gpr` is a Python package that enables scalable and cost-efficient training of artificial neural network (ANN) potentials by leveraging Gaussian Process Regression (GPR) as a surrogate model.  
It automates data augmentation to:

- Reduce the number of expensive DFT calculations  
- Lower ANN training overhead, which is particularly critical for complex and heterogeneous interface systems  
- Maintain accuracy comparable to that of demanding direct force training

📄 Reference:  
[In Won Yeu, Alexander Urban, Nongnuch Artrith et al., “Scalable Training of Neural Network Potentials for Complex Interfaces Through Data Augmentation”, *npj Computational Materials* 11, 156 (2025)](https://doi.org/10.1038/s41524-025-01651-0)

📬 Contact:  
- In Won Yeu (iy2185@columbia.edu)  
- Nongnuch Artrith (n.artrith@uu.nl)

## 🔁 Workflow Overview

<p align="center">
<img src="doc/source/images/0_flowchart.png" width="700">
</p>

1. **Data Grouping**  
   - Split the initial DFT database into homogeneous subsets (same composition and number of atoms); a minimal grouping sketch follows this workflow

2. **Train**  
   - Construct local GPR models using structure, energy, and atomic force data of each subset

3. **Test**  
   - Predict and evaluate target properties with the trained GPR models

4. **Augment**  
   - Perturb reference structures and generate new data  
   - Tag with GPR-predicted energies to expand the training dataset

✅ Outputs are saved in [XCrysDen Structure Format (XSF)](http://ann.atomistic.net/documentation/#structural-energy-reference-data), fully compatible with the [ænet package](https://github.com/atomisticnet/aenet-PyTorch) for indirect force training (**GPR-ANN**).
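
As a rough sketch of the **Data Grouping** step (step 1 above), the snippet below groups reference structures by composition and atom count using plain ASE; the paths are illustrative, and this is not aenet-gpr's internal API.

```
from collections import defaultdict
from pathlib import Path

from ase.io import read

groups = defaultdict(list)
for xsf in sorted(Path("./train_set").glob("*.xsf")):
    atoms = read(xsf)  # ASE parses the XSF structure
    # Group key: chemical formula plus atom count, e.g. ("H2", 2)
    key = (atoms.get_chemical_formula(), len(atoms))
    groups[key].append(xsf)

for (formula, natoms), files in groups.items():
    print(f"{formula} ({natoms} atoms): {len(files)} structures")
```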

## 🔑 Key Features

- GPR-based prediction of energies and atomic forces with uncertainty estimates  
- Supports various descriptors including Cartesian and SOAP  
- Applicable to periodic and non-periodic systems  
- Batch-based kernel computation for speed and memory efficiency  
- Accepts multiple input formats (e.g., XSF, VASP OUTCAR)  
- Fully controlled through a single input file (`train.in`)
- Compatible with various GPR applications such as GPR-NEB, GPR-ANN, and ASE-Calculator

## 📦 Installation

**Requirements:**

- Python with PyTorch (to be installed separately, see below)
- Other dependencies (`numpy`, `ASE`, `DScribe`) are automatically installed when installing `aenet-gpr`

### 1. Install PyTorch

Refer to the [official guide](https://pytorch.org/get-started/locally) and install a compatible version depending on the availability of a GPU and CUDA:

   - With CUDA (for GPU support):

     `$ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`

   - CPU-only:
 
     `$ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu`


### 2. Install ænet-gpr
   
   - Installation using pip

     `$ pip install aenet-gpr`

## 📘 Tutorial

Find interactive notebooks `*.ipynb` in the `./tutorial/` folder, or run directly on Google Colab:

### GPR tutorials for various systems

- [GPR Tutorial: H₂](https://colab.research.google.com/github/atomisticnet/aenet-gpr/blob/main/tutorial/tutorial_1_H2.ipynb)  
- [GPR Tutorial: EC–EC](https://colab.research.google.com/github/atomisticnet/aenet-gpr/blob/main/tutorial/tutorial_2_EC-EC.ipynb)  
- [GPR Tutorial: Li–EC](https://colab.research.google.com/github/atomisticnet/aenet-gpr/blob/main/tutorial/tutorial_3_Li-EC.ipynb)

### GPR applications for accelerating atomistic simulations

- [GPR-ANN: accelerating ANN potential training through GPR data augmentation](https://colab.research.google.com/github/atomisticnet/aenet-gpr/blob/main/tutorial/tutorial_python_GPR-ANN.ipynb)
- [GPR-NEB: accelerating NEB through GPR surrogate model](https://colab.research.google.com/github/atomisticnet/aenet-gpr/blob/main/tutorial/tutorial_python_GPR-NEB.ipynb)
- [GPR-ASE: ASE GPRCalculator](https://colab.research.google.com/github/atomisticnet/aenet-gpr/blob/main/tutorial/tutorial_python_ASE-GPRCalculator.ipynb)

The `./example/` directory includes example input and output data files.

## 📂 Input Files

### 1. Structure–Energy–Force Data

By default, input data is provided in `.xsf` format. 

#### Example: aenet XSF format (non-periodic)
The first comment line should specify the **total energy** of the structure. Each line following the keyword `ATOMS` contains the **atomic symbol**, the **three Cartesian coordinates**, and the three components of the **atomic force**. The length, energy, and force units are Å, eV, and eV/Å, respectively.
```
# total energy =  -0.0970905812353288 eV

ATOMS
H    -0.91666666666667    0.00000000000000    0.00000000000000    0.32660398877491    0.00000000000000    0.00000000000000
H    0.91666666666667    0.00000000000000    0.00000000000000    -0.32660398877491    0.00000000000000    0.00000000000000
```
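
As a minimal illustration of this header convention, the sketch below extracts the total energy from the comment line; the helper name is hypothetical and not part of aenet-gpr.

```
import re

def read_xsf_energy(path):
    """Return the total energy (eV) from an aenet-style XSF comment line."""
    pattern = re.compile(r"total energy\s*=\s*(-?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?)")
    with open(path) as fh:
        for line in fh:
            if line.lstrip().startswith("#"):
                match = pattern.search(line)
                if match:
                    return float(match.group(1))
    raise ValueError(f"no 'total energy' comment found in {path}")
```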

#### Example: aenet XSF format (periodic)
```
# total energy = -16688.9969866290994105 eV

CRYSTAL
PRIMVEC
 10.31700000000000 0.00000000000000 0.00000000000000
 0.00000000000000 10.31700000000000 0.00000000000000
 0.00000000000000 0.00000000000000 32.00000000000000
PRIMCOORD
 46 1
Li     -0.02691046000000     0.02680527000000     10.32468480000000     -0.01367780493112     -0.01466501222916     0.08701630310868
Li     -0.04431013000000     3.46713645000000     10.25290534000000     0.06865473174602     -0.00786890285541     0.15426435842600
Li     0.02355300000000     6.82569825000000     10.31803445000000     0.00877419275000     0.03943267659765     0.14805797440506
...
```

Other formats, such as **VASP OUTCAR** (selected with a `File_format vasp-out` line in `train.in`, see below), are also supported as long as they can be read through [ASE](https://wiki.fysik.dtu.dk/ase/ase/io/io.html).
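
As a sketch of what such ASE-based reading looks like (plain ASE usage, not aenet-gpr's internal reader; the path is illustrative):

```
from ase.io import read

# Read all ionic steps from a VASP OUTCAR
frames = read("OUTCAR", index=":")
for atoms in frames:
    energy = atoms.get_potential_energy()  # eV
    forces = atoms.get_forces()            # eV/Å
    print(f"{len(atoms)} atoms, E = {energy:.6f} eV")
```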

### 2. Configuration File

#### Example: `train.in` (comments explain the keyword meanings)
```
# File path
Train_file ./example/3_Li-EC/train_set/file_*.xsf
Test_file ./example/3_Li-EC/test_set/file_*.xsf

# Train model save (default: False)
Train_model_save False  # True -> training data and the trained GPR model are saved to "data_dict.pt" and "calc_dict.pt"

# File format (default: xsf)
File_format xsf  # Other DFT output files, which can be read via ASE such as "vasp-out" "aims-output" "espresso-out", are also supported

# Uncertainty estimation (default: True)
Get_variance True  # False -> only energy and forces are evaluated without uncertainty estimate

# Descriptor (default: cartesian coordinates)
Descriptor cart  # cart or soap

# Kernel parameter (default: Squared exponential)
scale 0.4  # default: 0.4
weight 1.0  # default: 1.0

# Data process (default: batch, 25)
data_process batch  # batch (memory cost up, time cost down) or iterative (no-batch: memory down, time up)
batch_size 25

# Flags for xsf file writing (default: False)
Train_write False  # True -> xsf files for reference training set are written under "./train_xsf/" directory
Test_write False  # True -> xsf files for reference test set are written under "./test_xsf/" directory
Additional_write False  # True -> additional xsf files are written under "./additional_xsf/" directory; False -> Augmentation step is not executed

# Data augmentation parameter (default: 0.055, 25)
Disp_length 0.05
Num_copy 20  # the augmented set contains [Num_copy] times the number of reference training structures
```
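
To make the file's grammar concrete, here is a minimal sketch of a reader for this keyword-value format; aenet-gpr ships its own parser, so this is illustrative only.

```
def parse_train_in(path):
    """Parse 'Keyword value' lines, ignoring blank lines and '#' comments."""
    params = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
            if not line:
                continue
            parts = line.split(None, 1)          # keyword, then rest of line
            params[parts[0]] = parts[1] if len(parts) > 1 else ""
    return params
```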

## 🚀 Usage Example

With the `train.in` file and datasets prepared, simply run:

`$ python -m aenet_gpr ./train.in > train.out`

The **Train–Test–Augment** steps will be executed sequentially. Augmented data will be saved in the `./additional_xsf/` directory.

## 🖥️ Running on an HPC system (SLURM)

To run `aenet_gpr` on an HPC cluster using SLURM, use a batch script like the following:

```
#!/bin/bash
#SBATCH --job-name=aenet-job
#SBATCH --nodes=1
#SBATCH --tasks-per-node=8
#SBATCH --cpus-per-task=4
#SBATCH --time=1:00:00

module load anaconda3
source activate aenet-env

ulimit -s unlimited
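
# Optional (an assumption, not part of the original script): match PyTorch's
# CPU thread count to the allocated cores
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}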
python -m aenet_gpr ./train.in > train.out
```

## ⚙️ Tuning Tips

### 1. Accuracy – Descriptor and Kernel Scale Parameter

- Descriptor: **Cartesian**, **SOAP**, and others supported by [DScribe](https://singroup.github.io/dscribe/latest/index.html)
- Default kernel: **Squared Exponential (sqexp)**
- Kernel parameters: **scale** and **weight**

<p align="center">
<img src="doc/source/images/0_kernel.png" width="300">
</p>
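
In the common squared-exponential parameterization (the exact convention used internally by aenet-gpr is an assumption here), `weight` sets the prefactor and `scale` the length scale:

```
import numpy as np

def sqexp_kernel(x1, x2, scale=0.4, weight=1.0):
    """k(x1, x2) = weight**2 * exp(-||x1 - x2||**2 / (2 * scale**2))"""
    sq_dist = np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)
    return weight**2 * np.exp(-sq_dist / (2.0 * scale**2))
```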

The following figure shows energy prediction errors for the `./example/3_Li-EC/` example with different kernel parameters and descriptors.

<p align="center">
<img src="doc/source/images/3_Li-EC_accuracy.png" width="1000">
</p>

When using the **Cartesian descriptor** (gray circles), the error decreases as the `scale` parameter increases and converges at `scale = 3.0`. When using the **periodic SOAP descriptor** (for details, see the [DScribe documentation](https://singroup.github.io/dscribe/latest/tutorials/descriptors/soap.html)), the error is reduced by one order of magnitude compared to the **Cartesian descriptor**.  

As demonstrated by the `./example/2_EC-EC/` example (results available in the `example` directory), non-periodic systems can be well represented using **non-periodic Cartesian descriptors**, while periodic systems are expected to yield better accuracy with **periodic SOAP descriptors**.  

For the **SOAP descriptor** example here, eight uniformly distributed points inside the rectangular cuboid of the Li slab were used as the `centers` argument of **SOAP**.

The corresponding `train.in` input arguments are:
```
Descriptor soap
soap_r_cut 5.0
soap_n_max 6
soap_l_max 4
soap_centers [[2.20113706670393, 2.328998192856251, 6.952547732109352], [2.20113706670393, 2.328998192856251, 11.895790642109352], [2.20113706670393, 6.760484232856251, 6.952547732109352], [2.20113706670393, 6.760484232856251, 11.895790642109352], [6.63924050670393, 2.328998192856251, 6.952547732109352], [6.63924050670393, 2.328998192856251, 11.895790642109352], [6.63924050670393, 6.760484232856251, 6.952547732109352], [6.63924050670393, 6.760484232856251, 11.895790642109352]]
soap_n_jobs 4  
  
scale 2.0  
weight 1.0
```
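
For reference, these eight centers are simply the corners of a 2 × 2 × 2 grid spanning the cuboid, so they can be generated rather than typed out (coordinates as above):

```
import itertools

xs = (2.20113706670393, 6.63924050670393)
ys = (2.328998192856251, 6.760484232856251)
zs = (6.952547732109352, 11.895790642109352)

# itertools.product reproduces the soap_centers ordering shown above
centers = [list(p) for p in itertools.product(xs, ys, zs)]
```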

### 2. Efficiency – Data Processing Mode

- `data_process iterative`: Computing kernels data-by-data involves `n_data × n_data` sequential kernel evaluations, minimizing the memory overhead but significantly increasing computational time.  

- `data_process batch`: **aenet-gpr** supports batch processing by grouping the data into batches of a specified size (`batch_size 25`), which significantly reduces training and evaluation time while keeping memory usage efficient.
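
As a rough illustration of the batch idea (not aenet-gpr's internal code), the sketch below fills the kernel matrix `batch_size` rows at a time with PyTorch:

```
import torch

def kernel_matrix(X1, X2, scale=0.4, weight=1.0, batch_size=25):
    """Squared-exponential kernel matrix, computed batch_size rows at a time."""
    rows = []
    for start in range(0, X1.shape[0], batch_size):
        chunk = X1[start:start + batch_size]        # (b, d) slice of inputs
        sq_dist = torch.cdist(chunk, X2) ** 2       # (b, n2) pairwise distances
        rows.append(weight**2 * torch.exp(-sq_dist / (2 * scale**2)))
    return torch.cat(rows, dim=0)
```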

Below, we provide a benchmark comparing the required time and memory for different batch sizes on the `./example/3_Li-EC/` example.

<p align="center">
<img src="doc/source/images/3_Li-EC_cost.png" width="1000">
</p>

            
