[pre-commit](https://github.com/pre-commit/pre-commit) [license](https://github.com/embodied-computation-group/metadpy/blob/master/LICENSE) [codecov](https://codecov.io/gh/embodied-computation-group/metadpy) [black](https://github.com/psf/black) [mypy](http://mypy-lang.org/) [isort](https://pycqa.github.io/isort/) [pypi](https://badge.fury.io/py/metadpy)
***
<img src="https://github.com/embodied-computation-group/metadpy/raw/master/docs/source/images/logo.png" align="left" alt="metadpy" height="250" HSPACE=30>
**metadpy** is a Python implementation of standard Bayesian models of behavioural metacognition. It aims to provide simple yet powerful functions to compute standard indices and metrics of signal detection theory (SDT) and metacognitive efficiency (meta-d' and hierarchical meta-d'). The only input required is a data frame encoding task performance and confidence ratings at the trial level.
**metadpy** is written in Python 3. It uses [Numpy](https://numpy.org/), [Scipy](https://www.scipy.org/) and [Pandas](https://pandas.pydata.org/) for most of its operations, including meta-d' estimation via maximum likelihood estimation (MLE). The (hierarchical) Bayesian modelling is implemented in [Aesara](https://github.com/aesara-devs/aesara) (renamed [PyTensor](https://github.com/pymc-devs/pytensor) for versions of [pymc](https://docs.pymc.io/) >=5.0).
* 📖 [Documentation](https://embodied-computation-group.github.io/metadpy/)
* ✏️ [Tutorials](https://embodied-computation-group.github.io/metadpy/tutorials.html)
# Installation
The package can be installed using pip:
```shell
pip install metadpy
```
For most operations, the following packages are required:
* [Numpy](https://numpy.org/) (>=1.15)
* [Scipy](https://www.scipy.org/) (>=1.3.0)
* [Pandas](https://pandas.pydata.org/) (>=0.24)
* [Matplotlib](https://matplotlib.org/) (>=3.0.2)
* [Seaborn](https://seaborn.pydata.org/) (>=0.9.0)
Bayesian models will require:
* [PyTensor](https://github.com/pymc-devs/pytensor)
* [pymc](https://docs.pymc.io/) (>=5.0)
# Why metadpy?
metadpy stands for meta-d' (meta-d prime) in Python. meta-d' is a behavioural metric commonly used in consciousness and metacognition research, designed to reflect metacognitive efficiency (i.e., the relationship between subjective reports about performance and objective behaviour).
metadpy first aims to be the Python equivalent of the [hMeta-d toolbox](https://github.com/metacoglab/HMeta-d) (Matlab and R). It tries to make these models available to a broader open-source ecosystem and to ease their use via cloud-computing interfaces. One notable difference is that while the [hMeta-d toolbox](https://github.com/metacoglab/HMeta-d) relies on JAGS for the Bayesian modelling of confidence data (see [**4**]), metadpy is built on top of [pymc](https://docs.pymc.io/) and uses Hamiltonian Monte Carlo (the NUTS sampler).
For an extensive introduction to metadpy, you can work through the following notebooks, which are Python adaptations of the introduction to the [hMeta-d toolbox](https://github.com/metacoglab/HMeta-d) written in Matlab by Olivia Faul for the [Zurich Computational Psychiatry course](https://github.com/metacoglab/HMeta-d/tree/master/CPC_metacog_tutorial).
# Tutorials
| Notebook | Colab |
| --- | ---|
| What metacognition looks like? | [Open in Colab](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/1-What%20metacognition%20looks%20like.ipynb) |
| Fitting the model (MLE) | [Open in Colab](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/2-Fitting%20the%20model-MLE.ipynb) |
| Comparing with the hmetad toolbox | [Open in Colab](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/3-Comparison%20with%20the%20hmeta-d%20toolbox.ipynb) |
# Examples
| Notebook | Colab |
| --- | ---|
| Subject and group level (MLE) | [Open in Colab](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%201%20-%20Fitting%20MLE%20-%20Subject%20and%20group%20level.ipynb) |
| Subject and group level (Bayesian) | [Open in Colab](https://colab.research.google.com/github/embodied-computation-group/metadpy/blob/master/docs/source/examples/Example%202%20-%20Fitting%20Bayesian%20-%20Subject%20level%20(pymc).ipynb) |
Or just follow the quick tour below.
# Importing data
Classical metacognition experiments contain two phases: task performance and confidence ratings. The task could, for example, require detecting the presence of a dot on the screen. By relating the trials where the stimulus was present or absent to the responses provided by the participant (Can you see the dot: yes/no), it is possible to compute accuracy. The confidence rating is collected after each response and should reflect how certain the participant is about their judgement.
An ideal observer would always associate very high confidence ratings with correct type 1 responses and very low confidence ratings with incorrect type 1 responses, whereas a participant with low metacognitive efficiency will show a more mixed response pattern.
A minimal metacognition dataset will therefore consist of a data frame with five columns:
* `Stimuli`: Which of the two stimuli was presented [0 or 1].
* `Responses`: The response made by the participant [0 or 1].
* `Accuracy`: Was the participant correct? [0 or 1].
* `Confidence`: The confidence level [can be continuous or discrete rating].
* `nTrial`: The trial number.
Due to the logical dependence between the `Stimuli`, `Responses` and `Accuracy` columns, only two of these columns are needed in practice; the third can be deduced from the other two. Most functions in `metadpy` accept data frames containing only two of these columns and automatically infer the missing information. Similarly, because the metacognition models described here do not incorporate a temporal dimension, the trial number is optional.
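For example, a minimal data frame can be assembled by hand with pandas (a sketch; the column names simply mirror those used throughout this tour, and most functions let you point to differently named columns via their `stimuli`, `accuracy` and `confidence` arguments):

```python
import pandas as pd

# A minimal hand-made metacognition dataset (four trials shown for brevity).
df = pd.DataFrame({
    "Stimuli":    [1, 0, 1, 0],
    "Responses":  [1, 1, 1, 0],
    "Confidence": [4, 2, 3, 4],
})

# Accuracy is fully determined by Stimuli and Responses, so it can be derived
# rather than stored explicitly.
df["Accuracy"] = (df["Stimuli"] == df["Responses"]).astype(int)
```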
`metadpy` includes a simulation function that lets you create such a data frame for one or many participants and conditions while controlling a variety of parameters. Here, we simulate 200 trials from a participant with `d=1` and `c=0` (task performance) and `meta-d=1.5` (metacognitive sensitivity). The confidence ratings are given on a 1-to-4 rating scale.
```python
from metadpy.utils import responseSimulation
simulation = responseSimulation(
    d=1, metad=1.5, c=0, nRatings=4, nTrials=200
)
simulation
```
| | Stimuli | Responses | Accuracy | Confidence | nTrial | Subject |
|---:|----------:|------------:|-----------:|-------------:|---------:|----------:|
| 0 | 1 | 1 | 1 | 4 | 0 | 0 |
| 1 | 0 | 0 | 1 | 4 | 1 | 0 |
| 2 | 1 | 1 | 1 | 2 | 2 | 0 |
| 3 | 0 | 1 | 0 | 4 | 3 | 0 |
| 4 | 0 | 0 | 1 | 3 | 4 | 0 |
```python
from metadpy.utils import trials2counts

# Convert the trial-level data frame into response counts for each stimulus class
nR_S1, nR_S2 = trials2counts(
    data=simulation, stimuli="Stimuli", accuracy="Accuracy",
    confidence="Confidence", nRatings=4)
```
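The two arrays follow the response-count format used in the meta-d' literature [1, 2]: for each stimulus class, responses are counted per response type and confidence level. A quick sanity check (a sketch, assuming this standard layout):

```python
import numpy as np

# With nRatings=4, each array has 2 * 4 = 8 bins: one count per combination of
# response ("S1" vs "S2") and confidence level for that stimulus class.
print(len(nR_S1), len(nR_S2))           # 8 8
print(np.sum(nR_S1) + np.sum(nR_S2))    # 200: every trial falls in exactly one bin
```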
## Data visualization
You can easily visualize metacognition results using one of the plotting functions. Here, we use the `plot_confidence` and `plot_roc` functions to visualize the metacognitive performance of our participant.
```python
import matplotlib.pyplot as plt
from metadpy.plotting import plot_confidence, plot_roc
```
```python
fig, axs = plt.subplots(1, 2, figsize=(13, 5))
plot_confidence(nR_S1, nR_S2, ax=axs[0])
plot_roc(nR_S1, nR_S2, ax=axs[1])
```

# Signal detection theory (SDT)
```python
from metadpy.sdt import criterion, dprime, rates, roc_auc, scores
```
All metadpy functions are registered as Pandas flavors (see [pandas-flavor](https://pypi.org/project/pandas-flavor/)), which means they can be called directly as methods of the data frame.
```python
simulation.criterion()
```
5.551115123125783e-17
```python
simulation.dprime()
```
0.9917006946949065
```python
simulation.rates()
```
(0.69, 0.31)
```python
simulation.roc_auc(nRatings=4)
```
0.695689287238583
```python
simulation.scores()
```
(69, 31, 31, 69)
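As a quick illustration of how these outputs relate to each other, the counts returned by `scores()` can be unpacked and turned back into the hit and false alarm rates reported by `rates()` (a minimal sketch, assuming the tuple is ordered as hits, misses, false alarms, correct rejections, which matches the numbers above):

```python
hits, misses, fas, crs = simulation.scores()

hit_rate = hits / (hits + misses)   # 0.69, the first value returned by rates()
fa_rate = fas / (fas + crs)         # 0.31, the second value returned by rates()
```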
# Estimating meta-d' using maximum likelihood estimation (MLE)
```python
from metadpy.mle import metad
metad(
data=simulation, nRatings=4, stimuli='Stimuli', accuracy='Accuracy',
confidence='Confidence', verbose=0
)
```
| | dprime | meta_d | m_ratio | m_diff |
|---:|---------:|---------:|----------:|---------:|
| 0 | 0.970635 | 1.45925 | 1.5034 | 0.488613 |
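The efficiency measures in this table are simple functions of the two sensitivities: `m_ratio = meta_d / dprime` and `m_diff = meta_d - dprime`. They can be recomputed from the fitted values (a sketch, assuming `metad` returns a data frame with the columns shown above):

```python
fit = metad(
    data=simulation, nRatings=4, stimuli='Stimuli', accuracy='Accuracy',
    confidence='Confidence', verbose=0
)

# Metacognitive efficiency relative to first-order performance
m_ratio = fit["meta_d"] / fit["dprime"]   # ~1.50 for the simulation above
m_diff = fit["meta_d"] - fit["dprime"]    # ~0.49 for the simulation above
```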
# Estimating meta-d' using hierarchical Bayesian modelling
## Subject level
```python
import pymc as pm
from metadpy.bayesian import hmetad
```
```python
model, trace = hmetad(
data=simulation, nRatings=4, stimuli='Stimuli',
accuracy='Accuracy', confidence='Confidence'
)
```
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [c1, d1, meta_d, cS1_hn, cS2_hn]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 10 seconds.
```python
import arviz as az
az.plot_trace(trace, var_names=['meta_d', 'cS2', 'cS1']);
```

```python
az.summary(trace)
```
| | mean | sd | hdi_3% | hdi_97% | mcse_mean | mcse_sd | ess_bulk | ess_tail | r_hat |
|:-------|-------:|------:|---------:|----------:|------------:|----------:|-----------:|-----------:|--------:|
| meta_d | 1.384 | 0.254 | 0.909 | 1.86 | 0.004 | 0.003 | 3270 | 2980 | 1 |
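Since the sampler also draws the type 1 sensitivity `d1` (see the list of sampled variables above), a posterior distribution for the M-ratio can be derived directly from the trace (a sketch, assuming `trace` is an ArviZ `InferenceData` object with `meta_d` and `d1` in its posterior group):

```python
# Posterior metacognitive efficiency: meta-d' relative to d'
m_ratio = trace.posterior["meta_d"] / trace.posterior["d1"]

print(float(m_ratio.mean()))                                         # posterior mean M-ratio
print(float(m_ratio.quantile(0.03)), float(m_ratio.quantile(0.97)))  # ~94% credible interval
```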
# References
[1] Maniscalco, B., & Lau, H. (2014). Signal Detection Theory Analysis of Type 1 and Type 2 Data: Meta-d′, Response-Specific Meta-d′, and the Unequal Variance SDT Model. In The Cognitive Neuroscience of Metacognition (pp. 25–66). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-45190-4_3
[2] Maniscalco, B., & Lau, H. (2012). A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430. https://doi.org/10.1016/j.concog.2011.09.021
[3] Fleming, S. M., & Lau, H. C. (2014). How to measure metacognition. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00443
[4] Fleming, S. M. (2017). HMeta-d: Hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. Neuroscience of Consciousness, 3(1), nix007. https://doi.org/10.1093/nc/nix007