# PTED: Permutation Test using the Energy Distance

![PyPI - Version](https://img.shields.io/pypi/v/pted?style=flat-square)
[![CI](https://github.com/ConnorStoneAstro/pted/actions/workflows/ci.yml/badge.svg)](https://github.com/ConnorStoneAstro/pted/actions/workflows/ci.yml)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
![PyPI - Downloads](https://img.shields.io/pypi/dm/pted)
[![codecov](https://codecov.io/gh/ConnorStoneAstro/pted/graph/badge.svg?token=5LISJ5BN17)](https://codecov.io/gh/ConnorStoneAstro/pted)
[![DOI](https://zenodo.org/badge/966938845.svg)](https://doi.org/10.5281/zenodo.15353928)

Think of it like a multi-dimensional KS-test! PTED is used for two-sample
testing and posterior coverage tests. In some cases it is even more sensitive
than the KS-test, though likely not in all cases.

![pted logo](media/pted_logo.png)

## Install

To install PTED, run the following:

```bash
pip install pted
```

If you want to run PTED on GPUs using PyTorch, then also install torch:

```bash
pip install torch
```

The package provides two functions: ``pted.pted`` and ``pted.pted_coverage_test``. For
information about each argument, use ``help(pted.pted)`` or
``help(pted.pted_coverage_test)``.

## What does PTED do?

PTED (pronounced "ted") takes in two datasets, `x` and `y`, and determines
whether they come from the same underlying distribution.

PTED is useful for:

* "were these two samples drawn from the same distribution?" this works even with noise, so long as the noise distribution is also the same for each sample
* Evaluate the coverage of a posterior sampling procedure
* Check for MCMC chain convergence. Split the chain in half or take two chains, that's two samples, if the chain is well mixed then these ought to be drawn from the same distribution
* Evaluate the performance of a generative model. PTED is powerful here as it can detect overfitting to the training sample.
* Evaluate if a simulator generates true "data-like" samples
* PTED can be a distance metric for Approximate Bayesian Computing posteriors
* Check for drift in a time series, comparing samples before/after some cutoff time

And much more!
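
As an illustration of the MCMC convergence check above, here is a hedged
sketch: split a single chain in half and treat the halves as two samples. The
`chain` below is a random stand-in, not real sampler output.

```python
import numpy as np
from pted import pted

# Stand-in for MCMC output with shape (n_steps, n_parameters);
# a real chain would come from your sampler.
chain = np.random.normal(size=(2000, 5))

first_half, second_half = chain[:1000], chain[1000:]
p_value = pted(first_half, second_half)
print(f"p-value: {p_value:.3f}")  # a small p-value suggests poor mixing
```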

## Example: Two-Sample-Test

```python
from pted import pted
import numpy as np

x = np.random.normal(size = (500, 10)) # (n_samples_x, n_dimensions)
y = np.random.normal(size = (400, 10)) # (n_samples_y, n_dimensions)

p_value = pted(x, y)
print(f"p-value: {p_value:.3f}") # expect uniform random from 0-1
```

## Example: Coverage Test

```python
from pted import pted_coverage_test
import numpy as np

g = np.random.normal(size = (100, 10)) # ground truth (n_simulations, n_dimensions)
s = np.random.normal(size = (200, 100, 10)) # posterior samples (n_samples, n_simulations, n_dimensions)

p_value = pted_coverage_test(g, s)
print(f"p-value: {p_value:.3f}") # expect uniform random from 0-1
```

## How it works

### Two sample test

PTED uses the energy distance between the two samples `x` and `y`, which is computed as:

$$d = \frac{2}{n_xn_y}\sum_{i,j}||x_i - y_j|| - \frac{1}{n_x^2}\sum_{i,j}||x_i - x_j|| - \frac{1}{n_y^2}\sum_{i,j}||y_i - y_j||$$
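
As a concrete reference for this formula, here is a minimal sketch using
scipy's pairwise distances. It is illustrative only; the internal pted
implementation may differ.

```python
from scipy.spatial.distance import cdist

def energy_distance(x, y):
    d_xy = cdist(x, y).mean()  # mean ||x_i - y_j|| over all pairs
    d_xx = cdist(x, x).mean()  # mean ||x_i - x_j||
    d_yy = cdist(y, y).mean()  # mean ||y_i - y_j||
    return 2 * d_xy - d_xx - d_yy
```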

The energy distance measures distances between pairs of points. It becomes more
positive if the `x` and `y` samples tend to be further from each other than from
themselves. We demonstrate this in the figure below, where the `x` samples are
drawn from a (thick) circle, while the `y` samples are drawn from a (thick)
line.

![pted demo test](media/test_PTED.png)

In the left figure, we show the two distributions, which by eye are clearly not
drawn from the same distribution (circle and line). In the center figure we show
the individual distance measurements as histograms. To compute the energy
distance, we would sum all the elements in these histograms rather than binning
them. You can also see a schematic of the distance matrix, which represents
every pair of samples and is colour coded the same as the histograms. In the
right figure we show the energy distance as a vertical line; the grey
distribution is explained below.

The next element of PTED is the permutation test. For this we combine the `x`
and `y` samples into a single collection `z`. We then randomly shuffle (permute)
the `z` collection and break it back into `x` and `y`, now with samples randomly
swapped between the two distributions (though each is the same size as before).
If we compute the energy distance again, we will get a very different result.
This time we are sure that the null hypothesis is true: `x` and `y` have been
drawn from the same distribution (`z`), and so the energy distance will be quite
low. If we do this many times and track the permuted energy distances, we get a
distribution; this is the grey distribution in the right figure. Below we show
an example of what this looks like.

![pted demo permute](media/permute_PTED.png)

Here we see the `x` and `y` samples have been scrambled in the left figure. In
the center figure we see the components of the energy distance matrix are much
more consistent because `x` and `y` now follow the same distribution (a mixture
of the original circle and line distribution). In the right figure we now see
that the vertical line is situated well within the grey distribution. Indeed the
grey distribution is just a histogram of many re-runs of this procedure. We
compute a p-value by taking the fraction of the permuted energy distances that
are greater than the observed one.
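
Putting the two pieces together, here is a minimal sketch of the whole
procedure. It is illustrative, not the actual pted implementation;
`energy_distance` is repeated from the sketch above for self-containment.

```python
import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(x, y):
    return 2 * cdist(x, y).mean() - cdist(x, x).mean() - cdist(y, y).mean()

def permutation_p_value(x, y, n_permutations=1000, seed=None):
    rng = np.random.default_rng(seed)
    observed = energy_distance(x, y)
    z = np.concatenate([x, y], axis=0)  # pool both samples
    permuted = np.empty(n_permutations)
    for k in range(n_permutations):
        idx = rng.permutation(len(z))  # random relabelling of the pool
        permuted[k] = energy_distance(z[idx[:len(x)]], z[idx[len(x):]])
    # fraction of permuted distances at least as large as the observed one
    return np.mean(permuted >= observed)
```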

### Coverage test

In the coverage test we have some number of simulations `nsim`, where each has a
true value `g` and some posterior samples `s`. For each simulation separately we
use PTED to compute a p-value, essentially asking the question "was `g` drawn
from the distribution that generated `s`?". Individually, these tests are not
especially informative; however, their p-values must have been drawn from
`U(0,1)` under the null hypothesis. Thus we just need a way to combine their
statistical power. It turns out that for `p ~ U(0,1)`, the quantity
`-2 ln(p)` is chi2 distributed with `dof = 2`. This means that we can sum the
chi2 values for the PTED test on each simulation and compare with a chi2
distribution with `dof = 2 * nsim`. We use a density-based two-tailed p-value
test on this chi2 distribution, meaning that if your posterior is underconfident
or overconfident, you will get a small p-value that can be used to reject the
null.
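
Here is a sketch of just the combination step, i.e. standard Fisher's method
under the assumption of independent per-simulation p-values. Note that pted
itself uses the density-based two-tailed variant described above, not this
one-tailed version.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values):
    p = np.asarray(p_values)
    stat = -2 * np.sum(np.log(p))  # chi2 with dof = 2 * nsim under the null
    dof = 2 * len(p)
    return chi2.sf(stat, dof)  # upper-tail (one-tailed) p-value
```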

## Example: Sensitivity comparison with KS-test

There is no single universally optimal two-sample test, but a widely used method
in 1D is the Kolmogorov-Smirnov (KS) test. The KS-test operates
fundamentally differently from PTED and only really works in 1D. Here I do a
basic comparison of the two methods. We draw two samples of 100
Gaussian-distributed points each, so the null hypothesis is true for these points.
Then we slowly bias one of the samples by increasing its standard deviation up to 2 sigma.
By tracking how the p-value drops we can see which method is more sensitive to
this kind of mismatched sample. If you run this test a hundred times you will
find that PTED is more sensitive to this kind of bias than the KS-test. Observe
that both methods start around p=0.5 in the true null case (scale = 1), since
they are both exact tests that truly sample U(0,1) under the null.

```python
from pted import pted
import numpy as np
from scipy.stats import kstest
import matplotlib.pyplot as plt

np.random.seed(0)

scale = np.linspace(1.0, 2.0, 10)
pted_p = np.zeros((10, 100))
ks_p = np.zeros((10, 100))
for i, s in enumerate(scale):
    for trial in range(100):
        x = np.random.normal(size=(100, 1))
        y = np.random.normal(scale=s, size=(100, 1))
        pted_p[i][trial] = pted(x, y, two_tailed=False)
        ks_p[i][trial] = kstest(x[:, 0], y[:, 0]).pvalue

plt.plot(scale, np.mean(pted_p, axis=1), linewidth=3, c="b", label="PTED")
plt.plot(scale, np.mean(ks_p, axis=1), linewidth=3, c="r", label="KS")
plt.legend()
plt.ylim(0, None)
plt.xlim(1, 2.0)
plt.xlabel("Out of distribution scale [*sigma]")
plt.ylabel("p-value")

plt.savefig("pted_demo.png", bbox_inches="tight")
plt.show()
```

![pted demo KS comparison](media/pted_ks.png)

## Interpreting the results

### Two sample test

This is a null hypothesis test, so we are specifically asking the question:
"if `x` and `y` were drawn from the same distribution, how likely am I to have
observed an energy distance as extreme as this?" This is fundamentally different
from the question "how likely is it that `x` and `y` were drawn from the same
distribution?", which is really what we would like to ask, but I am unaware of
how to do that in a meaningful way. It is also important to note that we
are specifically looking at extreme energy distances, so we are not even really
talking about the probability densities directly. If there were a transformation
between `x` and `y` that the energy distance was insensitive to, then the two
distributions could be arbitrarily different without PTED
identifying it. For example, since the default energy distance is computed with
the Euclidean distance, a single dimension in which the values are orders of
magnitude larger than the others could cause all other dimensions to be
effectively ignored, even if they are very different. For this reason we suggest
using the `mahalanobis` metric if this is a potential issue in your data.
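
Here is a hedged sketch of the scale-domination issue: nine dimensions are
shifted, but one dimension with enormous values dominates the Euclidean
distance and can mask the shift. Consult `help(pted.pted)` for how to select
the `mahalanobis` metric.

```python
import numpy as np
from pted import pted

x = np.random.normal(size=(500, 10))
y = np.random.normal(size=(400, 10))
y[:, 1:] += 0.5   # a clear shift in nine of the ten dimensions...
x[:, 0] *= 1e6    # ...hidden behind one dimension with huge values
y[:, 0] *= 1e6

p_value = pted(x, y)  # may fail to reject, despite the shift
print(f"p-value: {p_value:.3f}")
```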

### Coverage Test

For the coverage test we apply the PTED two-sample test to each simulation
separately. We then combine the resulting p-values into a chi squared statistic
whose degrees of freedom is 2 times the number of simulations. Because of
this, we can detect underconfidence or overconfidence. Specifically, we detect
the *average* over/under-confidence; it is possible to be overconfident in some
parts of the posterior and underconfident in others.

Underconfidence is when the posterior distribution is too broad: it covers the
ground truth by spreading too thin, not fully exploiting the information in the
prior/likelihood of the posterior sampling process. Sometimes this is acceptable
or expected; for example, when using Approximate Bayesian Computation one
expects the posterior to be at least slightly underconfident. Overconfidence is
when the posterior is too narrow, so that the ground truth appears as an outlier
from its perspective. This can occur in two main ways. One is a posterior that
is simply too narrow, which could occur if the measurement uncertainty estimates
were too low or there were sources of error not accounted for in the model. The
other is a biased posterior: you may have an appropriately broad posterior, but
it sits in the wrong part of your parameter space. PTED has no way to
distinguish these modes of overconfidence; however, just knowing
under/over-confidence can be powerful. As such, the PTED coverage test will warn
users about which kind of failure mode they are in whenever the
`warn_confidence` parameter is not `None` (the default is 1e-3).
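
As a quick sanity check of the underconfident failure mode, here is a hedged
sketch where the posterior samples are deliberately too broad relative to the
ground truths, so the coverage test should return a small p-value.

```python
import numpy as np
from pted import pted_coverage_test

g = np.random.normal(size=(100, 10))                  # ground truths
s = np.random.normal(scale=3.0, size=(200, 100, 10))  # posteriors too broad
p_value = pted_coverage_test(g, s)
print(f"p-value: {p_value:.3f}")  # expect small: underconfident posterior
```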

## GPU Compatibility

PTED works on both CPU and GPU. All that is needed is to pass `x` and `y` as
PyTorch tensors on the appropriate device.
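
A hedged sketch of the GPU path, assuming torch is installed (it falls back to
CPU when no CUDA device is available):

```python
import torch
from pted import pted

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(500, 10, device=device)
y = torch.randn(400, 10, device=device)
p_value = pted(x, y)  # the test runs on the tensors' device
print(f"p-value: {float(p_value):.3f}")
```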

## Citation

If you use PTED in your work, please include a citation to the [Zenodo
record](https://doi.org/10.5281/zenodo.15353928), and see below for
references on the underlying method.

## Reference

I didn't invent this test; I just think it's neat. Here is a paper on the subject:

```
@article{szekely2004testing,
  title={Testing for equal distributions in high dimension},
  author={Sz{\'e}kely, G{\'a}bor J and Rizzo, Maria L and others},
  journal={InterStat},
  volume={5},
  number={16.10},
  pages={1249--1272},
  year={2004},
  publisher={Citeseer}
}
```

Permutation tests are a whole class of tests, with much literature. Here are
some starting points:

```
@book{good2013permutation,
  title={Permutation tests: a practical guide to resampling methods for testing hypotheses},
  author={Good, Phillip},
  year={2013},
  publisher={Springer Science \& Business Media}
}
```

```
@book{rizzo2019statistical,
  title={Statistical computing with R},
  author={Rizzo, Maria L},
  year={2019},
  publisher={Chapman and Hall/CRC}
}
```

There is also [the Wikipedia
page](https://en.wikipedia.org/wiki/Permutation_test), the more general
[scipy
implementation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.permutation_test.html),
and other [Python implementations](https://github.com/qbarthelemy/PyPermut).