extrapolation 2.1.0

- Home page: https://github.com/wellington36/extrapolation_methods
- Summary: Extrapolation methods to real series
- Author: Wellington Silva
- License: MIT
- Keywords: python, series, extrapolation, numerical-analysis
- Upload time: 2023-11-09 21:01:18
Extrapolation Methods
=====================

Let $S_n = \sum_{i=1}^{n} a_i$ be a sequence of partial sums. This repository contains implementations of the following series transformations, which generate a new sequence $T_n$:
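
As a running illustration for the sketches below (not code from this package), the partial sums of the Basel series $\sum_{i=1}^{\infty} 1/i^2 = \pi^2/6$ can be built like this:

```python
# Partial sums S_1..S_N of the Basel series; the sketches below reuse this pattern.
N = 20
a = [1 / i**2 for i in range(1, N + 1)]  # terms a_1..a_N
S, total = [], 0.0
for term in a:
    total += term
    S.append(total)                      # S_n = a_1 + ... + a_n
print(S[-1])                             # ~1.596, converging slowly to pi^2/6 ~ 1.6449
```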


* [Aitken's transformation (or delta-squared process)](https://en.wikipedia.org/wiki/Aitken%27s_delta-squared_process):
  - In `esum`: O($2n\log n$)
  - In `acelsum`: O($n$)

  $$T_n = \frac{S_{n-1} S_{n+1} - S_n^2}{S_{n+1} - 2 S_n + S_{n-1}}.$$
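
  A minimal sketch of this formula (not the package's implementation), applied to the Basel partial sums:

  ```python
  # Aitken: T_n = (S_{n-1}*S_{n+1} - S_n^2) / (S_{n+1} - 2*S_n + S_{n-1})
  S, total = [], 0.0
  for i in range(1, 21):               # S_1..S_20 of sum 1/i^2
      total += 1 / i**2
      S.append(total)
  T = [(S[n - 1] * S[n + 1] - S[n] ** 2) / (S[n + 1] - 2 * S[n] + S[n - 1])
       for n in range(1, len(S) - 1)]  # interior indices of the 0-indexed list
  print(S[-1], T[-1])                  # T[-1] is closer to pi^2/6 ~ 1.6449
  ```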

* [Richardson's transformation (modified, with a given $p$)](https://en.wikipedia.org/wiki/Richardson_extrapolation):
  - In `esum`: O($2(\log n)^2$)
  - In `acelsum`: O($\log n$)

  $$T_n = S_{2n} + \frac{S_{2n} - S_n}{2^p - 1}.$$

  Here, we use $p = 1$ for simplicity.
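
  With $p = 1$ the formula reduces to $T_n = 2 S_{2n} - S_n$; a direct transcription (a sketch, not the library code):

  ```python
  # Richardson with p = 1: T_n = S_{2n} + (S_{2n} - S_n)/(2 - 1) = 2*S_{2n} - S_n
  S, total = [], 0.0
  for i in range(1, 41):               # need S_1..S_40 to form T_1..T_20
      total += 1 / i**2
      S.append(total)
  T = [2 * S[2 * n - 1] - S[n - 1] for n in range(1, 21)]  # lists are 0-indexed
  print(T[-1])                         # markedly closer to pi^2/6 than S[-1]
  ```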

* [Epsilon transformation](https://www.sciencedirect.com/science/article/pii/S0377042700003551):
  - In `esum`: O($2n\log n$)
  - In `acelsum`: O($n$)

  Let the auxiliary sequence $\varepsilon_k^{j}$ be defined by:

  $$\varepsilon_{-1}^{j} = 0\ \text{and}\ \varepsilon_{0}^{j} = S_j,$$

  and inductively:

  $$\varepsilon_{k+1}^{j} = \varepsilon_{k-1}^{j+1} + [\varepsilon_{k}^{j+1} - \varepsilon_{k}^{j}]^{-1}.$$

  Then, $T_n = \varepsilon_{n-1}^{2}$ (the odd-indexed steps are only intermediate values).
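
  A sketch transcribing the recurrence directly; two steps give the first even column, which is the accelerated one:

  ```python
  # Wynn's epsilon algorithm: eps_{k+1}^j = eps_{k-1}^{j+1} + 1/(eps_k^{j+1} - eps_k^j)
  S, total = [], 0.0
  for i in range(1, 21):
      total += 1 / i**2
      S.append(total)
  eps_prev = [0.0] * len(S)            # eps_{-1}^j = 0
  eps_curr = S[:]                      # eps_0^j  = S_j
  for k in range(2):                   # two steps -> the eps_2 column
      eps_next = [eps_prev[j + 1] + 1 / (eps_curr[j + 1] - eps_curr[j])
                  for j in range(len(eps_curr) - 1)]
      eps_prev, eps_curr = eps_curr, eps_next
  print(eps_curr[-1])                  # eps_2 column, equivalent to Aitken's transform
  ```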


* [G transformation](https://www.cambridge.org/core/books/abs/practical-extrapolation-methods/gtransformation-and-its-generalizations/B3A1C6628B6C3E6438C943E25FFA621D):
  - In `esum`: O($4n\log n$)
  - In `acelsum`: O($2n$)

  Let two auxiliary sequences $s_k^{(n)}$ and $r_k^{(n)}$ be defined by:

  $$s^{(n)}_0 = 1,\ r^{(n)}_1 = x_n,\ n=0,1,\ldots,$$

  and inductively:

  $$s^{(n)}_{k+1} = s^{(n+1)}_{k} \left( \frac{r^{(n+1)}_{k+1}}{r^{(n)}_{k+1}} - 1 \right),\ k,n = 0,1,\ldots$$

  and

  $$r^{(n)}_{k+1} = r^{(n+1)}_{k} \left( \frac{s^{(n+1)}_{k}}{s^{(n)}_{k}} - 1 \right),\ k=1,2,\ldots;\ n=0,1,\ldots$$

  Then, $T_n = S_n - \frac{S_{n+1} - S_{n}}{r^{(n+1)}_{1} - r^{(n)}_{1}} r^{(n)}_{1}$.
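
  A sketch of the first-order formula for $T_n$ only, assuming $x_n = a_{n+1}$ (a hypothetical choice of $x_n$; the package may use another):

  ```python
  # First-order G transform: T_n = S_n - (S_{n+1} - S_n)/(r1[n+1] - r1[n]) * r1[n]
  a = [1 / i**2 for i in range(1, 23)]
  S, total = [], 0.0
  for term in a:
      total += term
      S.append(total)
  r1 = a[1:]                           # assumed r_1^{(n)} = x_n = a_{n+1}
  T = [S[n] - (S[n + 1] - S[n]) / (r1[n + 1] - r1[n]) * r1[n]
       for n in range(len(S) - 2)]
  print(T[-1])                         # compare with pi^2/6 ~ 1.6449
  ```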

* [Levin transformation](https://epubs.siam.org/doi/abs/10.1137/0716017):
  - In `esum`: O($4n\log n$)
  - In `acelsum`: O($2n$)

  This method is defined by

  $$W_n^{(k)} = \frac{M_n^{(k)}}{N_n^{(k)}}$$

  where

  $$M_n^{(0)} = \frac{S_n}{g(n)},$$

  $$M_{n}^{(k+1)} = \frac{M_{n+1}^{(k)} - M_{n}^{(k)}}{a_{n + k}^{-1} - a_{n + 1}^{-1}},$$

  and

  $$N_n^{(0)} = \frac{1}{g(n)},$$

  $$N_{n}^{(k+1)} = \frac{N_{n+1}^{(k)} - N_{n}^{(k)}}{a_{n + k}^{-1} - a_{n + 1}^{-1}}.$$

  There are some classic choices for the function $g(n)$:

  - **t-variant**: $g(n) = a_{n+1}$;
  - **u-variant**: $g(n) = n a_n$;
  - **v-variant**: $g(n) = a_n a_{n+1} / (a_{n+1} - a_n)$.
 
  Then, $T_n = \frac{M_n^{(1)}}{N_n^{(1)}}$.
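
  A sketch of the single step $k = 1$ for the u-variant ($g(n) = n a_n$), transcribed from the definitions above:

  ```python
  # Levin u-variant, one step: T_n = M_n^(1) / N_n^(1) with g(n) = n * a_n
  a = [1 / i**2 for i in range(1, 22)]             # a_1..a_21
  S, total = [], 0.0
  for term in a:
      total += term
      S.append(total)
  g = [(n + 1) * a[n] for n in range(len(a))]      # g(n) = n*a_n, n is 1-based
  M0 = [S[n] / g[n] for n in range(len(S))]
  N0 = [1 / g[n] for n in range(len(S))]
  # Recurrence denominator with k = 0: a_n^{-1} - a_{n+1}^{-1}
  M1 = [(M0[n + 1] - M0[n]) / (1 / a[n] - 1 / a[n + 1]) for n in range(len(S) - 1)]
  N1 = [(N0[n + 1] - N0[n]) / (1 / a[n] - 1 / a[n + 1]) for n in range(len(S) - 1)]
  print(M1[-1] / N1[-1])                           # compare with pi^2/6 ~ 1.6449
  ```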


## Installation

Make sure you have the mpmath library installed:

```bash
pip install mpmath
```

To install the package, run the following command:

```bash
pip install extrapolation
```

## Usage

The transformations above are available through the `esum` and `acelsum` functions.

### esum
The `esum` function takes as input:

- *A series*: a function $f: \mathbb{N} \to \mathbb{R}$ returning the terms to be summed.
- *The transformation*: one of "Aitken", "Richardson", "Epsilon", "G", "Levin-t", "Levin-u", "Levin-v", or "None", the latter using the initial series without any transformation.
- *The stopping criterion*: summation stops once the difference between the last two values of the transformed series is smaller than the given error.
- *Return in logarithm scale*: True to receive the result in logarithmic scale together with its sign, False to receive it in normal scale.
- *Precision*: if precision is 53, the default Python float precision is used; otherwise the given precision in bits.

This function determines the minimum value of $n$ for which the difference between the last two partial sums becomes less than the specified error after applying the transformation, and returns that $n$ together with the transformed series. For example:


```python
from extrapolation import esum
import math

# Compare the raw series ('None') with the Richardson transformation on the Basel problem
n0, no_accelerated = esum(lambda x: 1/x**2, 'None', error=1e-12, logarithm=False, precision=100)
n1, accelerated = esum(lambda x: 1/x**2, 'Richardson', error=1e-12, logarithm=False, precision=100)

# Comparison
print(f"True value:           {math.pi ** 2 / 6}")
print(f"Without acceleration: {no_accelerated[-1]}, with {n0} iterations")
print(f"With acceleration:    {accelerated[-1]}, with {n1} iterations")
```

Out:
```
True value:           1.6449340668482264
Without acceleration: 1.6449330668607708753650232828, with 1000012 iterations
With acceleration:    1.6449340611256049164589309217, with 22896 iterations
```

### acelsum
We also have the `acelsum` function, which takes as input:

- *A series*: a function $f: \mathbb{N} \to \mathbb{R}$ returning the terms to be summed.
- *The transformation*: one of "Aitken", "Richardson", "Epsilon", "G", "Levin-t", "Levin-u", "Levin-v", or "None", the latter using the initial series without any transformation.
- *Natural n*: the number of terms to be summed.
- *Return in logarithm scale*: True to receive the result in logarithmic scale together with its sign, False to receive it in normal scale.
- *Precision*: if precision is 53, the default Python float precision is used; otherwise the given precision in bits.

This function calculates the transformed partial sums up to a given natural $n$, returning the result in logarithmic or normal scale. For example:

```python
from extrapolation import acelsum
import math

# Compare the raw series ('None') with the Richardson transformation on the Basel problem
no_accelerated = acelsum(lambda x: 1/x**2, 'None', n=1000, logarithm=False, precision=100)
accelerated = acelsum(lambda x: 1/x**2, 'Richardson', n=1000, logarithm=False, precision=100)

# Comparison
print(f"True value:           {math.pi ** 2 / 6}")
print(f"Without acceleration: {no_accelerated[-1]}")
print(f"With acceleration:    {accelerated[-1]}")
```

Out:
```
True value:           1.6449340668482264
Without acceleration: 1.6439345666815597935850701245
With acceleration:    1.6449310678482254269248263997
```

            
