statmoments 1.0.5 (PyPI)

- Summary: Fast streaming single-pass univariate/bivariate statistics and t-test
- Author: Anton Kochepasov
- License: MIT
- Upload time: 2024-05-21 07:18:36
- Keywords: data-science, univariate, bivariate, statistics, streaming, numpy, vectorization
- Requirements: h5py, numpy, scipy, psutil, cython
- Source code: https://github.com/akochepasov/statmoments/
# statmoments

Fast streaming univariate and bivariate moments and t-statistics.

statmoments is a library for fast one-pass computation of univariate and bivariate moments for batches of waveforms or traces with thousands of sample points. It can compute Welch's t-test statistics for arbitrary data partitioning, helping find relationships and statistical differences among data splits. Using top BLAS implementations, statmoments preprocesses data to maximize computational efficiency on Windows and Linux.

## How is that different?

When input data differences are subtle, millions of waveforms may have to be processed to find a statistically significant difference, which requires efficient algorithms. In addition, conventional high-order moment computation needs multiple passes and may require starting over once new data appears. With thousands of sample points per waveform, the problem becomes even more demanding.

A streaming algorithm processes a sequence of inputs in a single pass, as the data is collected. When it is fast enough, it is suitable for real-time sources such as oscilloscopes, sensors, and financial markets, as well as for large datasets that do not fit in memory. The dense matrix representation reduces memory requirements, and the accumulator can be converted to co-moments and Welch's t-test statistics on demand. Data batches can be processed iteratively, increasing precision, and then discarded. The library handles substantial input streams, processing hundreds of megabytes per second.
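The single-pass idea can be illustrated with a minimal streaming accumulator. This is a generic Welford-style sketch for mean and variance, not statmoments' actual internals, which go up to fourth-order (co-)moments:

```python
import numpy as np

class StreamingMoments:
    """Accumulate mean and variance of waveforms in one pass (Welford's method)."""
    def __init__(self, tr_len):
        self.n = 0
        self.mean = np.zeros(tr_len)
        self.m2 = np.zeros(tr_len)  # running sum of squared deviations

    def update(self, batch):
        # Fold each waveform into the accumulator; the batch can be
        # discarded afterwards, so memory stays O(tr_len).
        for x in batch:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1)  # sample variance
```

Each `update` refines the running statistics, mirroring how batches of waveforms are fed to the engine and then dropped.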

Yet another dimension is added when the data split is unknown in advance, that is, when it is not known which bucket an input waveform belongs to. This library solves that by pre-classifying the input data and computing moments for all the requested data splits.

Some of the benefits of streaming computation include:

- Real-time insights for trend identification and anomaly detection
- Reduced data processing latency, crucial for time-sensitive applications
- Scalability to handle large data volumes, essential for data-intensive research in fields like astrophysics and financial analysis

## Where is this needed?

Univariate statistics analyze and describe a single variable or dataset. Common applications include:

- Descriptive Statistics: Summarizing central tendency, dispersion, and shape of a dataset
- Hypothesis Testing: Determining significant differences or relationships between groups or conditions
- Finance and Economics: Examining asset performance, tracking market trends, and assessing risk in real-time

In summary, univariate statistics are fundamental for data analysis, providing essential insights into individual variables across various fields, aiding in decision-making and further research.

Bivariate statistics help understand relationships between two variables, aiding informed decisions across various fields. They address questions like:

- Is there a statistically significant relationship between variables?
- Which data points are related?
- How strong is the relationship?
- Can we use one variable to predict the other?
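As a minimal offline illustration of these questions (plain NumPy, unrelated to statmoments' streaming engine), the strength of a linear relationship between two variables can be quantified by the Pearson correlation derived from their covariance:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1000)
y = 0.8 * x + rng.normal(scale=0.5, size=1000)  # y partially depends on x

cov_xy = np.cov(x, y)[0, 1]                    # sample covariance of X and Y
r = cov_xy / (x.std(ddof=1) * y.std(ddof=1))   # Pearson correlation in [-1, 1]
```

A value of `r` near +/-1 indicates a strong linear relationship and suggests one variable can predict the other; near 0, no linear relationship.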

These statistical methods are used in medical and bioinformatics research, astrophysics, seismology, market predictions, and other fields, handling input data measured in hundreds of gigabytes.

## Numeric accuracy

The numeric accuracy of results depends on the coefficient of variation (COV) of a sample point in the input waveforms. With a COV of about 5%, the computed (co-)kurtosis has about 10 correct significant digits for 10,000 waveforms, which is sufficient for Welch's t-test. Increasing the data volume by 100x costs about one more significant digit.
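For reference, the coefficient of variation of a sample point is simply the ratio of its standard deviation to its mean across waveforms. A plain NumPy check (not part of the statmoments API) on simulated data with COV around 5%:

```python
import numpy as np

# Simulated input: 10,000 waveforms, 5 sample points, mean 100, std 5
rng = np.random.default_rng(1)
wforms = rng.normal(loc=100.0, scale=5.0, size=(10_000, 5))

# COV per sample point, computed across the waveform axis
cov = wforms.std(axis=0, ddof=1) / np.abs(wforms.mean(axis=0))
# Each entry lands near 0.05, i.e. a 5% coefficient of variation
```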

## Examples

### Performing univariate data analysis

```python
  # Input data parameters
  tr_count = 100   # M input waveforms
  tr_len   = 5     # N features or points in the input waveforms
  cl_len   = 2     # L hypotheses how to split input waveforms

  # Create engine, which can compute up to kurtosis
  uveng = statmoments.Univar(tr_len, cl_len, moment=4)

  # Process input data and split hypotheses
  uveng.update(wforms1, classification1)

  # Process more input data and split hypotheses
  uveng.update(wforms2, classification2)

  # Get statistical moments
  mean       = [cm.copy() for cm in uveng.moments(moments=1)]  # E(X)
  skewness   = [cm.copy() for cm in uveng.moments(moments=3)]  # E(X^3)

  # Detect statistical differences in the first-order t-test
  for i, tt in enumerate(statmoments.stattests.ttests(uveng, moment=1)):
    if np.any(np.abs(tt) > 5):
      print(f"Data split {i} has different means")

  # Process more input data and split hypotheses
  uveng.update(wforms3, classification3)

  # Get updated statistical moments and t-tests
```
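The snippet above assumes the `wforms*` and `classification*` arrays already exist. A hypothetical way to prepare such inputs is sketched below; the exact dtypes statmoments accepts may differ, and binary 0/1 labels per split hypothesis are an assumption:

```python
import numpy as np

# Same parameters as the example above
tr_count, tr_len, cl_len = 100, 5, 2
rng = np.random.default_rng(7)

# M waveforms of N sample points each
wforms1 = rng.normal(size=(tr_count, tr_len))

# For each waveform, one 0/1 label per split hypothesis
classification1 = rng.integers(0, 2, size=(tr_count, cl_len), dtype=np.uint8)
```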

### Performing bivariate data analysis

```python
  # Input data parameters
  tr_count = 100   # M input waveforms
  tr_len = 5       # N features or points in the input waveforms
  cl_len = 2       # L hypotheses how to split input waveforms

  # Create bivariate engine, which can compute up to co-kurtosis
  bveng = statmoments.Bivar(tr_len, cl_len, moment=4)

  # Process input data and split hypotheses
  bveng.update(wforms1, classification1)

  # Process more input data and split hypotheses
  bveng.update(wforms2, classification2)

  # Get bivariate moments
  covariance    = [cm.copy() for cm in bveng.comoments(moments=(1, 1))]  # E(X Y)
  cokurtosis22  = [cm.copy() for cm in bveng.comoments(moments=(2, 2))]  # E(X^2 Y^2)
  cokurtosis13  = [cm.copy() for cm in bveng.comoments(moments=(1, 3))]  # E(X^1 Y^3)

  # Univariate statistical moments can also be obtained
  variance   = [cm.copy() for cm in bveng.moments(moments=2)]  # E(X^2)

  # Detect statistical differences in the second order t-test (covariances)
  for i, tt in enumerate(statmoments.stattests.ttests(bveng, moment=(1,1))):
    if np.any(np.abs(tt) > 5):
      print(f"Found stat diff in the split {i}")

  # Process more input data and split hypotheses
  bveng.update(wforms3, classification3)

  # Get updated statistical moments and t-tests
```

### Performing data analysis from the command line

```shell
# Find univariate t-test statistics of skewness for the first
# 5000 waveform sample points, stored in an HDF5 dataset
python -m statmoments.univar -i data.h5 -m 3 -r 0:5000

# Find bivariate t-test statistics of covariance for the first
# 1000 waveform sample points, stored in an HDF5 dataset
python -m statmoments.bivar -i data.h5 -r 0:1000
```

More examples can be found in the examples and tests directories.

## Implementation notes

Due to RAM limits, results are produced one at a time for each input classifier as a set of statistical moments. Each classifier's output moment has dimensions 2 x M x L, where M is the index of the requested classifier and L is the region length. The co-moments and t-tests are represented by a 1D array for each classifier. **Bivariate moments** are represented by the **upper triangle** of the symmetric matrix.
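Since bivariate results cover the upper triangle of a symmetric N x N matrix, the flat array holds N*(N+1)/2 entries. A sketch of unpacking it with NumPy, assuming row-major upper-triangular order (the exact packing used by statmoments is not specified here):

```python
import numpy as np

def unpack_upper(flat, n):
    """Rebuild a symmetric n x n matrix from its packed upper triangle."""
    m = np.zeros((n, n))
    iu = np.triu_indices(n)   # row/column indices of the upper triangle
    m[iu] = flat              # fill the upper triangle
    m.T[iu] = flat            # mirror the same values into the lower triangle
    return m

n = 4
flat = np.arange(n * (n + 1) // 2, dtype=float)  # 10 packed entries for n = 4
m = unpack_upper(flat, n)
```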

## Installation

```shell
pip install statmoments
```
