Name: torch-pme
Version: 0.1.0
Summary: Particle-mesh based calculations of long-range interactions in PyTorch
Author: torch-pme developers
License: BSD-3-Clause
Requires Python: >=3.9
Upload time: 2024-12-05 18:37:47
Keywords: PyTorch, Auto-differentiation, Particle-Mesh Ewald, Electrostatics, Computational Materials Science, Machine Learning, Molecular Dynamics, GPU Acceleration, High-Performance Computing, Fourier Transform, TorchScript, Scientific Computing

torch-pme
=========

.. image:: https://raw.githubusercontent.com/lab-cosmo/torch-pme/refs/heads/main/docs/src/logo/torch-pme.svg
     :width: 200 px
     :align: left

|tests| |codecov| |docs|

.. marker-introduction

``torch-pme`` enables efficient, auto-differentiable computation of long-range
interactions in PyTorch. Auto-differentiation is supported with respect to particle
*positions*, *charges*, and *cell* parameters, allowing not only the automatic
computation of forces but also general applications in machine learning tasks. The
library offers classes for Particle-Particle Particle-Mesh Ewald (``P3M``), Particle
Mesh Ewald (``PME``), standard ``Ewald``, and non-periodic methods, with the
flexibility to compute potentials beyond :math:`1/r` electrostatics, including
arbitrary-order :math:`1/r^p` potentials.
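
As a quick illustration, the sketch below computes per-particle potentials for a toy
two-particle system and obtains forces by back-propagating through the particle
positions. The class names (``EwaldCalculator``, ``CoulombPotential``), constructor
arguments, the calculator call signature, and the energy prefactor convention are
assumptions to be checked against the `documentation`_ for your installed version.

.. code-block:: python

    import torch
    import torchpme

    # Toy two-particle system in a cubic cell; values are illustrative only.
    positions = torch.tensor(
        [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]], requires_grad=True
    )
    charges = torch.tensor([[1.0], [-1.0]])
    cell = torch.eye(3)

    # Assumed API: an Ewald calculator wrapping a Coulomb (1/r) potential.
    calculator = torchpme.EwaldCalculator(
        potential=torchpme.CoulombPotential(smearing=0.25),
        lr_wavelength=0.5,
    )

    # Assumed call signature: per-particle potentials from charges, cell, positions,
    # and a precomputed neighbor list (pair indices and pair distances).
    neighbor_indices = torch.tensor([[0, 1]])
    neighbor_distances = torch.linalg.norm(
        positions[neighbor_indices[:, 1]] - positions[neighbor_indices[:, 0]], dim=-1
    )
    potentials = calculator(
        charges=charges,
        cell=cell,
        positions=positions,
        neighbor_indices=neighbor_indices,
        neighbor_distances=neighbor_distances,
    )

    # Total energy (check the documented convention, e.g. a possible factor 1/2)
    # and forces via autograd with respect to the positions.
    energy = (charges * potentials).sum()
    forces = -torch.autograd.grad(energy, positions)[0]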

Optimized for both CPU and GPU devices, ``torch-pme`` is fully `TorchScriptable`_,
allowing it to be converted into a format that runs independently of Python, such as in
C++, making it ideal for high-performance production environments.

.. _`TorchScriptable`: https://pytorch.org/docs/stable/jit.html
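
Assuming the calculators are regular ``torch.nn.Module`` objects (worth verifying in
the `documentation`_), a minimal export sketch could look like the following; the
resulting file can then be loaded from C++ through ``torch::jit::load``. The
``PMECalculator`` name and its arguments are assumptions, as above.

.. code-block:: python

    import torch
    import torchpme

    # Hypothetical PME calculator; class and argument names are assumptions.
    calculator = torchpme.PMECalculator(
        potential=torchpme.CoulombPotential(smearing=0.25),
        mesh_spacing=0.5,
    )

    # Compile to TorchScript and serialize; the .pt file no longer requires Python.
    scripted = torch.jit.script(calculator)
    scripted.save("pme_calculator.pt")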

.. marker-documentation

For details, tutorials, and examples, please have a look at our `documentation`_.

.. _`documentation`: https://lab-cosmo.github.io/torch-pme

.. marker-installation

Installation
------------

You can install *torch-pme* using pip with

.. code-block:: bash

    pip install torch-pme

and ``import torchpme`` to use it in your projects!
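
A quick sanity check after installation (this assumes the package exposes
``__version__``, as is conventional for Python packages):

.. code-block:: python

    import torchpme

    print(torchpme.__version__)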

We also provide bindings to `metatensor <https://docs.metatensor.org>`_, which can
optionally be installed alongside *torch-pme* and used as ``torchpme.metatensor`` via

.. code-block:: bash

    pip install torch-pme[metatensor]

.. marker-issues

Having problems or ideas?
-------------------------

Having a problem with torch-pme? Please let us know by `submitting an issue
<https://github.com/lab-cosmo/torch-pme/issues>`_.

Submit new features or bug fixes through a `pull request
<https://github.com/lab-cosmo/torch-pme/pulls>`_.

.. marker-cite

Reference
---------

If you use *torch-pme* for your work, please read and cite our preprint available on
`arXiv`_.

.. code-block:: bibtex

   @article{loche_fast_2024,
      title = {Fast and Flexible Range-Separated Models for Atomistic Machine Learning},
      author = {Loche, Philip and {Huguenin-Dumittan}, Kevin K. and Honarmand, Melika and Xu, Qianjun and Rumiantsev, Egor and How, Wei Bin and Langer, Marcel F. and Ceriotti, Michele},
      year = {2024},
      month = dec,
      number = {arXiv:2412.03281},
      eprint = {2412.03281},
      primaryclass = {physics},
      publisher = {arXiv},
      doi = {10.48550/arXiv.2412.03281},
      urldate = {2024-12-05},
      archiveprefix = {arXiv}
      }

.. _`arXiv`: http://arxiv.org/abs/2412.03281

.. marker-contributing

Contributors
------------

Thanks go to all the people who make torch-pme possible:

.. image:: https://contrib.rocks/image?repo=lab-cosmo/torch-pme
   :target: https://github.com/lab-cosmo/torch-pme/graphs/contributors

.. |tests| image:: https://github.com/lab-cosmo/torch-pme/workflows/Tests/badge.svg
   :alt: GitHub Actions Tests Job Status
   :target: https://github.com/lab-cosmo/torch-pme/actions?query=workflow%3ATests

.. |codecov| image:: https://codecov.io/gh/lab-cosmo/torch-pme/graph/badge.svg?token=srVKRy7r6m
   :alt: Code coverage
   :target: https://codecov.io/gh/lab-cosmo/torch-pme

.. |docs| image:: https://img.shields.io/badge/documentation-latest-success
   :alt: Documentation
   :target: `documentation`_

            
