pantea

Name: pantea
Version: 0.11.0
Home page: https://github.com/hghcomphys/pantea
Summary: A Python package for developing machine learning interatomic potentials, based on JAX.
Upload time: 2024-10-06 21:36:54
Author: Hossein Ghorbanfekr
Requires Python: >=3.8
License: GNU General Public License v3
Keywords: pantea
            
.. .. image:: docs/images/logo.png
.. :alt: logo
        
======
Pantea
======


.. image:: https://img.shields.io/pypi/v/pantea.svg
        :target: https://pypi.python.org/pypi/pantea

.. image:: https://github.com/hghcomphys/pantea/actions/workflows/tests.yml/badge.svg
        :target: https://github.com/hghcomphys/pantea/blob/main/.github/workflows/tests.yml

.. image:: https://readthedocs.org/projects/pantea/badge/?version=latest
        :target: https://pantea.readthedocs.io/en/latest/?version=latest
        :alt: Documentation Status


Description
-----------
Pantea is an optimized Python library based on Google `JAX`_ that enables
the development of machine learning interatomic potentials
for use in computational materials science.
These potentials are particularly necessary for conducting large-scale molecular
dynamics simulations of complex materials with ab initio accuracy.

.. _JAX: https://github.com/google/jax


See `documentation <https://pantea.readthedocs.io/en/latest/readme.html>`_ for more information.



-------------
Key features
-------------
* The design of Pantea is `simple` and `flexible`, which makes it easy to incorporate new atomic descriptors and potentials.
* It uses `automatic differentiation` to make defining new descriptors straightforward (see the short sketch after this list).
* Pantea is written purely in Python and optimized with `just-in-time` (JIT) compilation.
* It also supports `GPU` computing, which can significantly speed up descriptor preprocessing and model training.
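
To illustrate the idea, here is a standalone JAX sketch (not Pantea's API): a toy radial
symmetry function is differentiated and JIT-compiled directly with ``jax.grad`` and ``jax.jit``.

.. code-block:: python

        import jax
        import jax.numpy as jnp

        # Toy radial symmetry function (illustrative only, not Pantea's API):
        # a sum of Gaussians over the distances to neighboring atoms.
        def g2_toy(central, neighbors, eta=0.5, r_shift=0.0):
            rij = jnp.linalg.norm(neighbors - central, axis=1)
            return jnp.sum(jnp.exp(-eta * (rij - r_shift) ** 2))

        # Automatic differentiation gives the gradient with respect to the
        # central atom position; jit compiles it for CPU/GPU execution.
        g2_grad = jax.jit(jax.grad(g2_toy, argnums=0))

        central = jnp.array([0.0, 0.0, 0.0])
        neighbors = jnp.array([[1.0, 0.0, 0.0], [0.0, 1.2, 0.0]])
        print(g2_grad(central, neighbors))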

.. warning::
        This package is under development and the current focus is on the implementation of the high-dimensional
        neural network potential (HDNNP) proposed by Behler et al.
        (`2007 <https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.146401>`_).


Installation
------------
To install Pantea, run this command in your terminal:

.. code-block:: console

    $ pip install pantea

For machines with an NVIDIA **GPU**, please follow the
`installation <https://pantea.readthedocs.io/en/latest/installation.html>`_
instructions in the documentation.
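
As a rough sketch only (the exact steps depend on your CUDA setup and are spelled out in
the linked documentation), enabling GPU support typically amounts to installing the
CUDA-enabled JAX wheels alongside Pantea, for example:

.. code-block:: console

    $ pip install pantea
    $ pip install -U "jax[cuda12]"  # CUDA 12 wheels; see the JAX installation guide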


Examples
--------

--------------------
I. Descriptor (ACSF)
--------------------
The atom-centered symmetry function (`ACSF`_) descriptor captures information about the distribution of neighboring atoms around a
central atom through both radial (two-body) and angular (three-body) symmetry functions.
The resulting values form a fingerprint of the local atomic environment and can be used as input to various machine learning potentials.
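
For orientation (this follows the original `ACSF`_ paper; Pantea's exact conventions, in
particular for the angular terms, may differ in detail), the radial G2 function used below
has the form

.. math::

    G^{2}_{i} = \sum_{j \neq i} e^{-\eta \, (r_{ij} - r_{s})^{2}} \, f_{c}(r_{ij}),

where :math:`f_c` is the cutoff function and :math:`\eta`, :math:`r_s` correspond to the
``eta`` and ``r_shift`` parameters passed to ``G2`` in the script.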

The script below demonstrates how to define multiple symmetry functions
for an element, which can then be used to evaluate the descriptor values for any structure.

.. _ACSF: https://aip.scitation.org/doi/10.1063/1.3553717


.. code-block:: python

        from pantea.datasets import Dataset
        from pantea.descriptors import ACSF
        from pantea.descriptors.acsf import CutoffFunction, NeighborElements, G2, G3

        # Read atomic structure dataset (e.g. water molecules)
        structures = Dataset.from_runner("input.data")
        structure = structures[0]
        print(structure)
        # >> Structure(natoms=12, elements=('H', 'O'), dtype=float64)

        # Define an ACSF descriptor for hydrogen atoms
        # It includes two radial (G2) and angular (G3) symmetry functions
        cfn = CutoffFunction.from_type("tanh", r_cutoff=12.0)
        g2 = G2(cfn, eta=0.5, r_shift=0.0)
        g3 = G3(cfn, eta=0.001, zeta=2.0, lambda0=1.0, r_shift=12.0)

        descriptor = ACSF(
                central_element='H',
                radial_symmetry_functions=(
                        (g2, NeighborElements('H')),
                ),
                angular_symmetry_functions=(
                        (g3, NeighborElements('H', 'O')),
                ),
        )

        print(descriptor)
        # >> ACSF(central_element='H', num_symmetry_functions=2)

        values = descriptor(structure)
        print("Descriptor values:\n", values)
        # >> Descriptor values:
        # [[0.01952943 1.13103234]
        #  [0.01952756 1.04312263]
        # ...
        #  [0.00228752 0.41445455]]

        gradient = descriptor.grad(structure)
        print("Descriptor gradient:\n", gradient)
        # >> Descriptor gradient:
        # [[[ 4.64523585e-02 -5.03786078e-02 -6.14621389e-02]
        #   [-1.04818547e-01 -1.84170755e-02  4.76021411e-02]]
        #  [[-9.67003098e-03 -5.45498827e-02  6.32422634e-03]
        #   [-1.59613454e-01 -5.94085256e-02  1.72978932e-01]]
        # ...
        #  [[-1.36223042e-03 -8.02832759e-03 -6.08306094e-05]
        #   [ 1.29199076e-02 -9.58762344e-03 -9.12714216e-02]]] 


-------------------
II. Potential (NNP)
-------------------
This example illustrates how to quickly create a `high-dimensional neural network
potential` (`HDNNP`_) instance from an input settings file.

.. _HDNNP: https://pubs.acs.org/doi/10.1021/acs.chemrev.0c00868

.. code-block:: python

        from pantea.datasets import Dataset
        from pantea.potentials import NeuralNetworkPotential

        # Dataset: reading structures from RuNNer input data file
        structures = Dataset.from_runner("input.data")
        structure = structures[0]

        # Potential: creating a NNP from the RuNNer potential file
        nnp = NeuralNetworkPotential.from_runner("input.nn")
        nnp.load()  # loads the scaler and model parameter files (these must be available on disk)

        total_energy = nnp(structure)
        print(total_energy)

        forces = nnp.compute_forces(structure)
        print(forces)
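
As a small follow-up (assuming here that ``forces`` is returned as an ``(natoms, 3)`` array
of per-atom force components; that shape is an assumption, not something stated above), the
outputs are ordinary JAX arrays and can be post-processed directly:

.. code-block:: python

        import jax.numpy as jnp

        # Continuing from the snippet above; `forces` is assumed to have
        # shape (natoms, 3).
        force_norms = jnp.linalg.norm(forces, axis=1)  # per-atom force magnitudes
        print("max |F| =", float(jnp.max(force_norms)))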


.. -------------------
.. III. Training (NNP) 
.. -------------------
.. This example shows the process of training a NNP potential on input structures. 
.. The trained potential can then be used to evaluate the energy and force components for new structures.

.. .. code-block:: python

..         from pantea.datasets import Dataset
..         from pantea.potentials import NeuralNetworkPotential
..         from pantea.potentials.nnp import NeuralNetworkPotentialTrainer        

..         # Dataset: reading structures from RuNNer input data file
..         structures = Dataset.from_runner("input.data", persist=True)
..         structures.preload()

..         # Potential: creating a NNP from the RuNNer configuration file
..         nnp = NeuralNetworkPotential.from_runner("input.nn")

..         # Trainer: initializing a trainer from the NNP potential 
..         trainer = NeuralNetworkPotentialTrainer.from_runner(potential=nnp)
..         trainer.fit_scaler(structures)
..         trainer.fit_model(structures)

..         trainer.save()  # this will save scaler and model parameters into files


.. .. warning::
..         Please note that the above examples are just for demonstration. 
..         For training a NNP model in real world we surely need larger samples of data.

Download example input files from `here <https://drive.google.com/drive/folders/1vABOndAia41Bn0v1jPaJZmVGnbjg8UPE?usp=sharing>`_.


License
-------
This project is licensed under the GNU General Public License (GPL) version 3 - 
see the `LICENSE <https://github.com/hghcomphys/pantea/blob/main/LICENSE>`_ file for details.

            
