# Torch Autodiff Utility
<table>
<tr>
<td>Compatibility:</td>
<td>
<img src="https://img.shields.io/badge/Python-3.8%20|%203.9%20|%203.10%20|%203.11%20|%203.12-blue.svg" alt="Python Versions"/>
<img src="https://img.shields.io/badge/PyTorch-%3E=1.11.0-blue.svg" alt="PyTorch Versions"/>
</td>
</tr>
<tr>
<td>Availability:</td>
<td>
<a href="https://github.com/tad-mctc/tad-mctc/releases/latest">
<img src="https://img.shields.io/github/v/release/tad-mctc/tad-mctc?color=orange" alt="Release"/>
</a>
<a href="https://pypi.org/project/tad-mctc/">
<img src="https://img.shields.io/pypi/v/tad-mctc?color=orange" alt="PyPI"/>
</a>
<a href="https://anaconda.org/conda-forge/tad-mctc">
<img src="https://img.shields.io/conda/vn/conda-forge/tad-mctc.svg" alt="Conda Version"/>
</a>
<a href="http://www.apache.org/licenses/LICENSE-2.0">
<img src="https://img.shields.io/badge/License-Apache%202.0-orange.svg" alt="Apache-2.0"/>
</a>
</td>
</tr>
<tr>
<td>Status:</td>
<td>
<a href="https://github.com/tad-mctc/tad-mctc/actions/workflows/ubuntu.yaml">
<img src="https://github.com/tad-mctc/tad-mctc/actions/workflows/ubuntu.yaml/badge.svg" alt="Test Status Ubuntu"/>
</a>
<a href="https://github.com/tad-mctc/tad-mctc/actions/workflows/macos-x86.yaml">
<img src="https://github.com/tad-mctc/tad-mctc/actions/workflows/macos-x86.yaml/badge.svg" alt="Test Status macOS (x86)"/>
</a>
<a href="https://github.com/tad-mctc/tad-mctc/actions/workflows/macos-arm.yaml">
<img src="https://github.com/tad-mctc/tad-mctc/actions/workflows/macos-arm.yaml/badge.svg" alt="Test Status macOS (ARM)"/>
</a>
<a href="https://github.com/tad-mctc/tad-mctc/actions/workflows/windows.yaml">
<img src="https://github.com/tad-mctc/tad-mctc/actions/workflows/windows.yaml/badge.svg" alt="Test Status Windows"/>
</a>
<a href="https://github.com/tad-mctc/tad-mctc/actions/workflows/release.yaml">
<img src="https://github.com/tad-mctc/tad-mctc/actions/workflows/release.yaml/badge.svg" alt="Build Status"/>
</a>
<a href="https://tad-mctc.readthedocs.io">
<img src="https://readthedocs.org/projects/tad-mctc/badge/?version=latest" alt="Documentation Status"/>
</a>
<a href="https://results.pre-commit.ci/latest/github/tad-mctc/tad-mctc/main">
<img src="https://results.pre-commit.ci/badge/github/tad-mctc/tad-mctc/main.svg" alt="pre-commit.ci Status"/>
</a>
<a href="https://codecov.io/gh/tad-mctc/tad-mctc">
<img src="https://codecov.io/gh/tad-mctc/tad-mctc/branch/main/graph/badge.svg?token=OGJJnZ6t4G" alt="Coverage"/>
</a>
</td>
</tr>
</table>
<br>
This library is a collection of utility functions that are used in PyTorch (re-)implementations of projects from the [Grimme group](https://github.com/grimme-lab).
In particular, the _tad-mctc_ library provides:
- autograd functions (Jacobian, Hessian)
- atomic data (radii, EN, example molecules, ...)
- batch utility (packing, masks, ...)
- conversion functions (numpy, atomic symbols/numbers, ...)
- coordination numbers (DFT-D3, DFT-D4, EEQ)
- I/O (reading/writing coordinate files)
- molecular properties (bond lengths/orders/angles, moment of inertia, ...)
- safeops (autograd-safe implementations of common functions)
- typing (base class for tensor-like behavior of arbitrary classes)
- units
The name is inspired by the Fortran pendant "modular computation tool chain library" ([mctc-lib](https://github.com/grimme-lab/mctc-lib/)).
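To illustrate the kind of derivatives the autograd utilities deal with, here is a sketch using plain `torch.autograd.functional` (not the `tad_mctc` API itself); the `repulsion` function is a made-up toy energy, not part of the library.

```python
import torch

torch.manual_seed(0)


def repulsion(positions: torch.Tensor) -> torch.Tensor:
    """Toy per-atom energy: sum of inverse distances to all other atoms."""
    n = positions.shape[0]
    # pairwise difference vectors, diagonal (self-distances) excluded via mask
    diff = positions.unsqueeze(0) - positions.unsqueeze(1)
    mask = ~torch.eye(n, dtype=torch.bool)
    dist = diff[mask].pow(2).sum(-1).sqrt().view(n, n - 1)
    return (1.0 / dist).sum(-1)


positions = torch.randn(4, 3, dtype=torch.double)

# Jacobian of the per-atom energies w.r.t. all coordinates: (4, 4, 3)
jac = torch.autograd.functional.jacobian(repulsion, positions)
print(jac.shape)  # torch.Size([4, 4, 3])

# The Hessian needs a scalar, so differentiate the total sum: (4, 3, 4, 3)
hess = torch.autograd.functional.hessian(lambda p: repulsion(p).sum(), positions)
print(hess.shape)  # torch.Size([4, 3, 4, 3])
```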
## Installation
### pip
_tad-mctc_ can easily be installed with `pip`.
```sh
pip install tad-mctc
```
### conda
_tad-mctc_ is also available from the `conda-forge` channel.
```sh
conda install tad-mctc -c conda-forge
```
### From source
This project is hosted on GitHub at [tad-mctc/tad-mctc](https://github.com/tad-mctc/tad-mctc/).
Obtain the source by cloning the repository with
```sh
git clone https://github.com/tad-mctc/tad-mctc
cd tad-mctc
```
We recommend using a [conda](https://conda.io/) environment to install the package.
You can set up the environment manager using a [miniforge](https://github.com/conda-forge/miniforge) installer.
Install the required dependencies from the conda-forge channel.
```sh
mamba env create -n torch -f environment.yaml
mamba activate torch
```
Install this project with `pip` in the environment.
```sh
pip install .
```
The following dependencies are required:
- [numpy](https://numpy.org/)
- [opt_einsum](https://optimized-einsum.readthedocs.io/en/stable/)
- [psutil](https://psutil.readthedocs.io/en/latest/)
- [pytest](https://docs.pytest.org/) (tests only)
- [torch](https://pytorch.org/)
## Compatibility
| PyTorch \ Python | 3.8 | 3.9 | 3.10 | 3.11 | 3.12 |
|------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
| 1.11.0 | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: |
| 1.12.1 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: | :x: |
| 1.13.1 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: |
| 2.0.1 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: |
| 2.1.2 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :x: |
| 2.2.2 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 2.3.1 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 2.4.1 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 2.5.1 | :x: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
Note that only the latest bug fix version of each minor release is listed, but all preceding bug fix versions are supported as well.
For example, although only version 2.2.2 is listed, versions 2.2.0 and 2.2.1 are also supported.
On macOS and Windows, PyTorch < 2.0.0 only supports Python < 3.11.
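The matrix above can be encoded as a small, dependency-free helper for checking a version pair before installation; this is only an illustrative sketch, and `SUPPORT`/`is_supported` are hypothetical names, not part of the package.

```python
# For each supported PyTorch minor release, the inclusive range of
# supported Python minor versions, taken from the table above.
SUPPORT = {
    (1, 11): ((3, 8), (3, 9)),
    (1, 12): ((3, 8), (3, 10)),
    (1, 13): ((3, 8), (3, 11)),
    (2, 0): ((3, 8), (3, 11)),
    (2, 1): ((3, 8), (3, 11)),
    (2, 2): ((3, 8), (3, 12)),
    (2, 3): ((3, 8), (3, 12)),
    (2, 4): ((3, 8), (3, 12)),
    (2, 5): ((3, 9), (3, 12)),
}


def is_supported(torch_version: tuple, python_version: tuple) -> bool:
    """Check a (major, minor) PyTorch/Python pair against the matrix."""
    lo, hi = SUPPORT[torch_version]
    return lo <= python_version <= hi


print(is_supported((1, 11), (3, 9)))  # True
print(is_supported((2, 5), (3, 8)))   # False: 2.5 dropped Python 3.8
```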
## Development
For development, additionally install the following tools in your environment.
```sh
mamba install black covdefaults mypy pre-commit pylint pytest pytest-cov pytest-xdist tox
pip install pytest-random-order
```
With `pip`, add the `-e` option to install in editable (development) mode, and add the `[dev]` extra for the development dependencies.
```sh
pip install -e .[dev]
```
The pre-commit hooks are initialized by running the following command in the root of the repository.
```sh
pre-commit install
```
For testing all Python environments, simply run `tox`.
```sh
tox
```
Note that this randomizes the order of tests but skips "large" tests. To change this behavior, pass explicit _posargs_ to `tox`.
```sh
tox -- test
```
## Examples
The following example shows how to calculate the coordination number used in the EEQ model for a single structure.
```python
import torch
import tad_mctc as mctc
numbers = mctc.convert.symbol_to_number(symbols="C C C C N C S H H H H H".split())
# coordinates in Bohr
positions = torch.tensor(
    [
        [-2.56745685564671, -0.02509985979910, 0.00000000000000],
        [-1.39177582455797, +2.27696188880014, 0.00000000000000],
        [+1.27784995624894, +2.45107479759386, 0.00000000000000],
        [+2.62801937615793, +0.25927727028120, 0.00000000000000],
        [+1.41097033661123, -1.99890996077412, 0.00000000000000],
        [-1.17186102298849, -2.34220576284180, 0.00000000000000],
        [-2.39505990368378, -5.22635838332362, 0.00000000000000],
        [+2.41961980455457, -3.62158019253045, 0.00000000000000],
        [-2.51744374846065, +3.98181713686746, 0.00000000000000],
        [+2.24269048384775, +4.24389473203647, 0.00000000000000],
        [+4.66488984573956, +0.17907568006409, 0.00000000000000],
        [-4.60044244782237, -0.17794734637413, 0.00000000000000],
    ]
)
# calculate EEQ coordination number
cn = mctc.ncoord.cn_eeq(numbers, positions)
torch.set_printoptions(precision=10)
print(cn)
# tensor([3.0519218445, 3.0177774429, 3.0132560730, 3.0197706223,
#         3.0779352188, 3.0095663071, 1.0991339684, 0.9968624115,
#         0.9943327904, 0.9947233200, 0.9945874214, 0.9945726395])
```
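Because the library's functions are built from autograd-safe operations, quantities like the coordination number can be differentiated with respect to the positions. The generic pattern is sketched below with a smooth, coordination-number-like toy function in plain `torch` (`toy_cn` is illustrative, not `tad_mctc` code; with the library installed, `mctc.ncoord.cn_eeq` can be used in its place).

```python
import torch

torch.manual_seed(0)


def toy_cn(positions: torch.Tensor) -> torch.Tensor:
    """Smooth counting function over interatomic distances (illustrative)."""
    n = positions.shape[0]
    diff = positions.unsqueeze(0) - positions.unsqueeze(1)
    mask = ~torch.eye(n, dtype=torch.bool)  # drop self-distances
    dist = diff[mask].pow(2).sum(-1).sqrt()
    return torch.sigmoid(4.0 - dist).view(n, n - 1).sum(-1)


positions = torch.randn(5, 3, dtype=torch.double, requires_grad=True)

cn = toy_cn(positions)
# gradient of the summed coordination numbers w.r.t. all coordinates
(grad,) = torch.autograd.grad(cn.sum(), positions)
print(grad.shape)  # torch.Size([5, 3])
```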
The next example shows the calculation of the coordination number used in DFT-D4 for a batch of structures.
```python
import torch
import tad_mctc as mctc
# S22 system 4: formamide dimer
numbers = mctc.batch.pack((
    mctc.convert.symbol_to_number("C C N N H H H H H H O O".split()),
    mctc.convert.symbol_to_number("C O N H H H".split()),
))
# coordinates in Bohr
positions = mctc.batch.pack((
    torch.tensor([
        [-3.81469488143921, +0.09993441402912, 0.00000000000000],
        [+3.81469488143921, -0.09993441402912, 0.00000000000000],
        [-2.66030049324036, -2.15898251533508, 0.00000000000000],
        [+2.66030049324036, +2.15898251533508, 0.00000000000000],
        [-0.73178529739380, -2.28237795829773, 0.00000000000000],
        [-5.89039325714111, -0.02589114569128, 0.00000000000000],
        [-3.71254944801331, -3.73605775833130, 0.00000000000000],
        [+3.71254944801331, +3.73605775833130, 0.00000000000000],
        [+0.73178529739380, +2.28237795829773, 0.00000000000000],
        [+5.89039325714111, +0.02589114569128, 0.00000000000000],
        [-2.74426102638245, +2.16115570068359, 0.00000000000000],
        [+2.74426102638245, -2.16115570068359, 0.00000000000000],
    ]),
    torch.tensor([
        [-0.55569743203406, +1.09030425468557, 0.00000000000000],
        [+0.51473634678469, +3.15152550263611, 0.00000000000000],
        [+0.59869690244446, -1.16861263789477, 0.00000000000000],
        [-0.45355203669134, -2.74568780438064, 0.00000000000000],
        [+2.52721209544999, -1.29200800956867, 0.00000000000000],
        [-2.63139587595376, +0.96447869452240, 0.00000000000000],
    ]),
))
# calculate coordination number
cn = mctc.ncoord.cn_d4(numbers, positions)
torch.set_printoptions(precision=10)
print(cn)
# tensor([[2.6886456013, 2.6886456013, 2.6314170361, 2.6314167976,
#          0.8594539165, 0.9231414795, 0.8605306745, 0.8605306745,
#          0.8594539165, 0.9231414795, 0.8568341732, 0.8568341732],
#         [2.6886456013, 0.8568335176, 2.6314167976, 0.8605306745,
#          0.8594532013, 0.9231414795, 0.0000000000, 0.0000000000,
#          0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000]])
```
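The zeros in the second row above come from padding: `mctc.batch.pack` pads variable-size structures to a common shape before stacking them. A minimal sketch of the idea in plain `torch` (this is not the actual `tad-mctc` implementation, just the underlying pattern):

```python
import torch


def pack(tensors, value: float = 0.0) -> torch.Tensor:
    """Pad a sequence of tensors along dim 0 with `value` and stack them."""
    n = max(t.shape[0] for t in tensors)
    shape = (len(tensors), n, *tensors[0].shape[1:])
    out = tensors[0].new_full(shape, value)
    for i, t in enumerate(tensors):
        out[i, : t.shape[0]] = t  # trailing rows stay at the pad value
    return out


a = torch.ones(12, 3)  # e.g. positions of a 12-atom structure
b = torch.ones(6, 3)   # e.g. positions of a 6-atom structure
batch = pack((a, b))
print(batch.shape)  # torch.Size([2, 12, 3])
print(batch[1, 6:].abs().sum().item())  # 0.0 -- padded rows are zero
```

A boolean mask (`numbers != 0` for atomic numbers) is the usual way to exclude the padded entries from subsequent computations.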
## Contributing
This is a volunteer open-source project, and contributions are always welcome.
Please take a moment to read the [contributing guidelines](CONTRIBUTING.md).
## License
This project is licensed under the Apache License, Version 2.0 (the "License"); you may not use this project's files except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.