| Field | Value |
|---|---|
| Name | pqm |
| Version | 0.6.2 |
| Summary | Implementation of the PQMass two-sample test from Lemos et al. 2024 |
| Upload time | 2024-11-11 16:54:42 |
| Requires Python | >=3.9 |
| License | MIT |
| Keywords | machine learning, pytorch, statistics |
| Requirements | scipy, numpy, torch |
# PQMass: Probabilistic Assessment of the Quality of Generative Models using Probability Mass Estimation
![PyPI - Version](https://img.shields.io/pypi/v/pqm?style=flat-square)
[![CI](https://github.com/Ciela-Institute/PQM/actions/workflows/ci.yml/badge.svg)](https://github.com/Ciela-Institute/PQM/actions/workflows/ci.yml)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
![PyPI - Downloads](https://img.shields.io/pypi/dm/pqm)
[![codecov](https://codecov.io/gh/Ciela-Institute/PQM/graph/badge.svg?token=wbkUiRkYtg)](https://codecov.io/gh/Ciela-Institute/PQM)
[![arXiv](https://img.shields.io/badge/arXiv-2402.04355-b31b1b.svg)](https://arxiv.org/abs/2402.04355)
[PQMass](https://arxiv.org/abs/2402.04355) is a new sample-based method for evaluating the quality of generative models as well as assessing distribution shifts to determine if two datasets come from the same underlying distribution.
## Install
To install PQMass, run the following:
```bash
pip install pqm
```
## Usage
PQMass takes two datasets, $x$ and $y$, and determines whether they come from the same underlying distribution. For instance, in the case of generative models, $x$ represents the samples generated by your model, while $y$ corresponds to the real data or test set.
![Headline plot showing an example tessellation for PQMass](media/Voronoi.png "")
PQMass partitions the space by taking reference points from $x$ and $y$ and creating Voronoi tessellations around those reference points. On the left is an example of one such region; membership in it follows a binomial distribution, since each sample is either inside or outside the region. On the right, the entire space is partitioned, showing that the region counts follow a multinomial distribution: a given sample can fall in region P or any other region. This is crucial, as it allows two metrics to be defined that can be used to determine whether $x$ and $y$ come from the same underlying distribution. The first is $\chi_{PQM}^2$:
$$\chi_{PQM}^2 \equiv \sum_{i = 1}^{n_R} \left[ \frac{(k({\bf x}, R_i) - \hat{N}_{x, i})^2}{\hat{N}_{x, i}} + \frac{(k({\bf y}, R_i) - \hat{N}_{y, i})^2}{\hat{N}_{y, i}} \right]$$
and the second is the $\text{p-value}(\chi_{PQM}^2)$:

$$\text{p-value}(\chi_{PQM}^2) \equiv \int_{\chi^2_{\rm PQM}}^{\infty} \chi^2_{n_R - 1}(z) dz$$
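For intuition, the statistic can be computed directly from per-region counts. The sketch below is ours, not the library's internal code: `pqm_chi2_from_counts` is a hypothetical helper, the pooled estimate of $\hat{N}$ is one possible convention, and the p-value is the upper-tail (survival-function) form, consistent with the example outputs later in this README.

```python
import numpy as np
from scipy.stats import chi2


def pqm_chi2_from_counts(k_x, k_y, n_refs):
    """Toy computation of the PQMass chi^2 from per-region counts.

    k_x, k_y: number of samples from x (resp. y) falling in each region.
    Expected counts N_hat are estimated from the pooled region occupancy,
    scaled to each sample size (an illustrative choice; assumes every
    region contains at least one sample).
    """
    k_x = np.asarray(k_x, dtype=float)
    k_y = np.asarray(k_y, dtype=float)
    n_x, n_y = k_x.sum(), k_y.sum()
    pooled = (k_x + k_y) / (n_x + n_y)   # pooled region probabilities
    N_hat_x = pooled * n_x               # expected counts for x
    N_hat_y = pooled * n_y               # expected counts for y
    stat = np.sum((k_x - N_hat_x) ** 2 / N_hat_x
                  + (k_y - N_hat_y) ** 2 / N_hat_y)
    pvalue = chi2.sf(stat, df=n_refs - 1)  # upper-tail p-value
    return stat, pvalue
```

When the two count vectors agree exactly, the statistic is 0 and the upper-tail p-value is 1; as the counts diverge, the statistic grows and the p-value shrinks.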
For the $\chi_{PQM}^2$ metric: given your two sets of samples, if they come from the same
distribution, the histogram of your $\chi_{PQM}^2$ values should follow the $\chi^2$
distribution. The degrees of freedom (DoF) will equal `DoF = num_refs - 1`. The
peak of this distribution will be at `DoF - 2`, the mean will equal `DoF`, and
the standard deviation will be `sqrt(2 * DoF)`. If your $\chi_{PQM}^2$ values are too
high (`chi^2 / DoF > 1`), it suggests that the samples are out of distribution.
Conversely, if the values are too low (`chi^2 / DoF < 1`), it indicates
potential duplication of samples between `x` and `y`.
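These reference values can be checked numerically with `scipy.stats.chi2` (a quick sketch; `num_refs = 100` here is an arbitrary example):

```python
from scipy.stats import chi2

num_refs = 100
dof = num_refs - 1               # 99

peak = dof - 2                   # mode of the chi^2 pdf (for dof > 2)
mean = chi2.mean(df=dof)         # equals dof
std = chi2.std(df=dof)           # equals sqrt(2 * dof)
print(peak, mean, round(std, 2))  # 97 99.0 14.07
```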
If your two samples are drawn from the same distribution, then the $\text{p-value}(\chi_{PQM}^2)$
values should be drawn from the uniform $\mathcal{U}(0,1)$ distribution. This means that if
you get a very small value (e.g., 1e-6), then the null hypothesis test has
failed, and the two samples are not drawn from the same distribution.
If you get values approximately equal to 1 every time, that suggests
potential duplication of samples between `x` and `y`.
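One way to sanity-check this uniformity in practice is a Kolmogorov–Smirnov test against $\mathcal{U}(0,1)$. This is our own sketch, not part of the pqm API, and the `pvalues` array below is a synthetic stand-in for the output of `pqm_pvalue`:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
# Stand-in for the p-values returned by pqm_pvalue(..., re_tessellation=1000)
pvalues = rng.uniform(size=1000)

# A small KS statistic (large KS p-value) is consistent with U(0, 1),
# which is the expected behavior under the null hypothesis.
result = kstest(pvalues, "uniform")
print(result.statistic, result.pvalue)
```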
PQMass can work for any two datasets as it measures the distribution shift between the $x$ and $y$, which we show below.
## Example
We are using 100 regions. Thus, the DoF is 99, the expected peak of the $\chi^2$ distribution is 97, the mean is 99, and the standard deviation should be 14.07. With this in mind, we set up our example. For the p-value, we expect it to lie between 0 and 1, and a significantly small p-value (e.g., $< 0.05$ or $< 0.01$) would mean we reject the null hypothesis, concluding that $x$ and $y$ do not come from the same distribution.
The p-values should be centered around 0.5 to pass the null hypothesis test; any significant deviation from this would indicate failure of the null hypothesis test.
Given two datasets, $x$ and $y$, both sampled from a $\mathcal{N}(0, 1)$ in 10 dimensions, the goal is to determine whether they come from the same underlying distribution. This is a null test: we know they come from the same distribution, but we show how one would use PQMass to confirm it.
```python
from pqm import pqm_pvalue, pqm_chi2
import numpy as np
p = np.random.normal(size = (500, 10))
q = np.random.normal(size = (400, 10))
# To get chi^2 from PQMass
chi2_stat = pqm_chi2(p, q, re_tessellation = 1000)
print(np.mean(chi2_stat), np.std(chi2_stat)) # 98.51, 11.334
# To get pvalues from PQMass
pvalues = pqm_pvalue(p, q, re_tessellation = 1000)
print(np.mean(pvalues), np.std(pvalues)) # 0.50, 0.26
```
We see that both $\chi_{PQM}^2$ and $\text{p-value}(\chi_{PQM}^2)$ match their expected distributions, indicating that $x$ and $y$ come from the same underlying distribution.
Another example, in which we do $\textit{not}$ expect $x$ and $y$ to come from the same distribution: $x$ is again sampled from a $\mathcal{N}(0, 1)$ in 10 dimensions, whereas $y$ is sampled from a $\mathcal{U}(0, 1)$ in 10 dimensions.
```python
from pqm import pqm_pvalue, pqm_chi2
import numpy as np
p = np.random.normal(size = (500, 10))
q = np.random.uniform(size = (400, 10))
# To get chi^2 from PQMass
chi2_stat = pqm_chi2(p, q, re_tessellation = 1000)
print(np.mean(chi2_stat), np.std(chi2_stat)) # 577.29, 25.74
# To get pvalues from PQMass
pvalues = pqm_pvalue(p, q, re_tessellation = 1000)
print(np.mean(pvalues), np.std(pvalues)) # 3.53e-56, 8.436e-55
```
Here it is clear that both $\chi_{PQM}^2$ and $\text{p-value}(\chi_{PQM}^2)$ are far from their expected values, showing that $x$ and $y$ do $\textbf{not}$ come from the same underlying distribution.
Thus, PQMass can be used to determine whether any two datasets come from the same underlying distribution, given enough samples. We encourage users to look through the paper to see the varying experiments and use cases for PQMass!
## How to Interpret Results
We have shown what to expect from PQMass when working with $\chi_{PQM}^2$ or $\text{p-value}(\chi_{PQM}^2)$. However, when working with $\chi_{PQM}^2$, there is a case in which it will return 0's. There are a couple of reasons why this can happen:
- For generative models, 0's indicate memorization: the samples are duplicates of the data the model was trained on.
- In non-generative-model scenarios, it is typically due to a lack of samples, especially in high dimensions. Increasing the number of samples should alleviate the issue.
- Another way to get 0's in a non-generative-model case is duplicate samples appearing in both $x$ and $y$.
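As a simple pre-check for the duplication case, you can count exact row matches between the two datasets before running PQMass. This is our own sketch, not part of the pqm API; `count_duplicates` is a hypothetical helper:

```python
import numpy as np


def count_duplicates(x, y, decimals=8):
    """Count rows of y that exactly match some row of x (after rounding)."""
    x_rows = {tuple(row) for row in np.round(x, decimals)}
    return sum(tuple(row) in x_rows for row in np.round(y, decimals))


rng = np.random.default_rng(0)
x = rng.normal(size=(500, 10))
# y reuses the first 50 rows of x, simulating duplicated samples.
y = np.concatenate([x[:50], rng.normal(size=(350, 10))], axis=0)

print(count_duplicates(x, y))  # 50
```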
## Advanced Usage
Depending on the data you are working with, the following parameters adjust how PQMass behaves.
### Z-Score Normalization
If you determine that you need to normalize $x$ and $y$, PQMass has z-score normalization built in; you can enable it by setting `z_score_norm = True`:
```python
chi2_stat = pqm_chi2(p, q, re_tessellation = 1000, z_score_norm = True)
pvalues = pqm_pvalue(p, q, re_tessellation = 1000, z_score_norm = True)
```
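For intuition, z-score normalization standardizes each dimension to zero mean and unit variance. The minimal sketch below uses joint statistics of the combined data so both sets share one transform; whether the library normalizes jointly or per set is an assumption here, not something this README specifies:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(loc=5.0, scale=2.0, size=(500, 10))
q = rng.normal(loc=5.0, scale=2.0, size=(400, 10))

# Standardize with statistics of the combined data so both sets
# share a single transform (one possible convention).
combined = np.concatenate([p, q], axis=0)
mu = combined.mean(axis=0)
sigma = combined.std(axis=0)

p_norm = (p - mu) / sigma
q_norm = (q - mu) / sigma
```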
### Modifying how reference points are selected
By default, reference points are sampled from $x$ and $y$ in proportion to the length of each dataset. However, you can sample the reference points only from $x$ by setting `x_frac = 1.0`:
```python
chi2_stat = pqm_chi2(p, q, re_tessellation = 1000, x_frac = 1.0)
pvalues = pqm_pvalue(p, q, re_tessellation = 1000, x_frac = 1.0)
```
Alternatively, you can sample the reference points only from $y$ by setting `x_frac = 0`:
```python
chi2_stat = pqm_chi2(p, q, re_tessellation = 1000, x_frac = 0)
pvalues = pqm_pvalue(p, q, re_tessellation = 1000, x_frac = 0)
```
Similarly, you can sample reference points equally from both $x$ and $y$ by setting `x_frac = 0.5`:
```python
chi2_stat = pqm_chi2(p, q, re_tessellation = 1000, x_frac = 0.5)
pvalues = pqm_pvalue(p, q, re_tessellation = 1000, x_frac = 0.5)
```
Lastly, you can sample reference points from neither $x$ nor $y$, but instead from a Gaussian, by setting `guass_frac = 1.0`:
```python
chi2_stat = pqm_chi2(p, q, re_tessellation = 1000, guass_frac = 1.0)
pvalues = pqm_pvalue(p, q, re_tessellation = 1000, guass_frac = 1.0)
```
### GPU Compatibility
PQMass works on both CPU and GPU. All that is needed is to pass `x_samples` and `y_samples` as PyTorch tensors on the appropriate device.
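A minimal sketch of preparing inputs for the GPU path; the device-selection fallback is our own convention, and the resulting tensors would then be passed to `pqm_chi2` / `pqm_pvalue` as usual:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Create (or move) the samples on the chosen device; these tensors can
# then be passed directly to pqm_chi2 / pqm_pvalue.
x_samples = torch.randn(500, 10, device=device)
y_samples = torch.randn(400, 10, device=device)
```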
## Developing
If you're a developer, then:
```bash
git clone git@github.com:Ciela-Institute/PQM.git
cd PQM
git checkout -b my-new-branch
pip install -e .
```
But make an issue first so we can discuss implementation ideas.