| Field | Value |
|---|---|
| Name | fdm |
| Version | 0.5.0 |
| Summary | Estimate derivatives with finite differences |
| Upload time | 2024-11-18 20:05:49 |
| Home page | None |
| Maintainer | None |
| Docs URL | None |
| Author | None |
| Requires Python | >=3.6 |
| License | MIT |
| Keywords | finite-difference, python |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |

# [FDM: Finite Difference Methods](http://github.com/wesselb/fdm)
[CI](https://github.com/wesselb/fdm/actions?query=workflow%3ACI)
[Coverage](https://coveralls.io/github/wesselb/fdm?branch=master)
[Docs](https://wesselb.github.io/fdm)
[Code style: black](https://github.com/psf/black)
FDM estimates derivatives with finite differences.
See also [FiniteDifferences.jl](https://github.com/JuliaDiff/FiniteDifferences.jl).
* [Installation](#installation)
* [Multivariate Derivatives](#multivariate-derivatives)
  - [Gradients](#gradients)
  - [Jacobians](#jacobians)
  - [Jacobian-Vector Products (Directional Derivatives)](#jacobian-vector-products-directional-derivatives)
  - [Hessian-Vector Products](#hessian-vector-products)
* [Scalar Derivatives](#scalar-derivatives)
* [Testing Sensitivities in a Reverse-Mode Automatic Differentiation Framework](#testing-sensitivities-in-a-reverse-mode-automatic-differentiation-framework)
## Installation
FDM requires Python 3.6 or higher.
```bash
pip install fdm
```
## Multivariate Derivatives
```python
import numpy as np

from fdm import gradient, jacobian, jvp, hvp
```
For the purpose of illustration, let us consider a quadratic function:
```python
>>> a = np.random.randn(3, 3); a = a @ a.T
>>> a
array([[ 3.57224794,  0.22646662, -1.80432262],
       [ 0.22646662,  4.72596213,  3.46435663],
       [-1.80432262,  3.46435663,  3.70938152]])

>>> def f(x):
...     return 0.5 * x @ a @ x
```
Consider the following input value:
```python
>>> x = np.array([1.0, 2.0, 3.0])
```
### Gradients
```python
>>> grad = gradient(f)
>>> grad(x)
array([-1.38778668, 20.07146076, 16.25253519])
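
>>> # The exact gradient of f(x) = 0.5 * x @ a @ x is a @ x, since a is symmetric.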
>>> a @ x
array([-1.38778668, 20.07146076, 16.25253519])
```
### Jacobians
```python
>>> jac = jacobian(f)
>>> jac(x)
array([[-1.38778668, 20.07146076, 16.25253519]])
>>> a @ x
array([-1.38778668, 20.07146076, 16.25253519])
```
But `jacobian` also works for vector-valued functions.
```python
>>> def f2(x):
...     return a @ x

>>> jac2 = jacobian(f2)
>>> jac2(x)
array([[ 3.57224794,  0.22646662, -1.80432262],
       [ 0.22646662,  4.72596213,  3.46435663],
       [-1.80432262,  3.46435663,  3.70938152]])

>>> a
array([[ 3.57224794,  0.22646662, -1.80432262],
       [ 0.22646662,  4.72596213,  3.46435663],
       [-1.80432262,  3.46435663,  3.70938152]])
```
### Jacobian-Vector Products (Directional Derivatives)
In the scalar case, `jvp` computes directional derivatives:
```python
>>> v = np.array([0.5, 0.6, 0.7]) # A direction
>>> dir_deriv = jvp(f, v)
>>> dir_deriv(x)
22.725757753354657
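
>>> # The directional derivative equals the inner product of the gradient with v: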
>>> np.sum(grad(x) * v)
22.72575775335481
```
In the multivariate case, `jvp` generalises to Jacobian-vector products:
```python
>>> prod = jvp(f2, v)
>>> prod(x)
array([0.65897811, 5.37386023, 3.77301973])
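
>>> # The Jacobian of f2 is a, so the exact Jacobian-vector product is a @ v: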
>>> a @ v
array([0.65897811, 5.37386023, 3.77301973])
```
### Hessian-Vector Products
```python
>>> prod = hvp(f, v)
>>> prod(x)
array([[0.6589781 , 5.37386023, 3.77301973]])
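
>>> # The Hessian of f is 0.5 * (a + a.T), which here equals a since a is symmetric: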
>>> 0.5 * (a + a.T) @ v
array([0.65897811, 5.37386023, 3.77301973])
```
## Scalar Derivatives
```python
>>> from fdm import central_fdm
```
Let's try to estimate the first derivative of `np.sin` at `1` with a
second-order method.
```python
>>> central_fdm(order=2, deriv=1)(np.sin, 1) - np.cos(1)
-1.2914319613699377e-09
```
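For intuition, a plain central difference with a hand-picked step size gives a similar, though typically less accurate, estimate. The snippet below is only a sketch of the underlying idea; `central_fdm` chooses its grid and step size automatically.

```python
import numpy as np

h = 1e-6  # Hand-picked step size; central_fdm adapts this automatically.
estimate = (np.sin(1 + h) - np.sin(1 - h)) / (2 * h)
error = estimate - np.cos(1)  # Truncation error is O(h**2), plus round-off.
```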
And let's try to estimate the second derivative of `np.sin` at `1` with a
third-order method.
```python
>>> central_fdm(order=3, deriv=2)(np.sin, 1) + np.sin(1)
1.6342919018086377e-08
```
Hm.
Let's check the accuracy of this third-order method.
The step size and accuracy of the method are computed upon calling
`FDM.estimate`.
```python
>>> central_fdm(order=3, deriv=2).estimate(np.sin, 1).acc
5.476137293912896e-06
```
We might want a little more accuracy. Let's check the accuracy of a
fifth-order method.
```python
>>> central_fdm(order=5, deriv=2).estimate(np.sin, 1).acc
7.343652562575157e-10
```
And let's estimate the second derivative of `np.sin` at `1` with a
fifth-order method.
```python
>>> central_fdm(order=5, deriv=2)(np.sin, 1) + np.sin(1)
-1.7121615236703747e-10
```
Hooray!
Finally, let us verify that increasing the order generally increases the accuracy.
```python
>>> for i in range(3, 10):
...     print(central_fdm(order=i, deriv=2)(np.sin, 1) + np.sin(1))
1.6342919018086377e-08
8.604865264771888e-09
-1.7121615236703747e-10
8.558931341440257e-12
-2.147615418834903e-12
6.80566714095221e-13
-1.2434497875801753e-14
```
## Testing Sensitivities in a Reverse-Mode Automatic Differentiation Framework
Consider the function
```python
def mul(a, b):
    return a * b
```
and its sensitivity
```python
def s_mul(s_y, y, a, b):
    return s_y * b, a * s_y
```
The sensitivity `s_mul` takes in the sensitivity `s_y` of the output `y`,
the output `y`, and the arguments of the function `mul`, and returns a tuple
containing the sensitivities with respect to `a` and `b`.
The function `check_sensitivity` can then be used to assert that the
implementation of `s_mul` is correct; a hand-rolled sketch of the kind of
numerical comparison it performs is given at the end of this section:
```python
>>> from fdm import check_sensitivity
>>> check_sensitivity(mul, s_mul, (2, 3)) # Test at arguments `2` and `3`.
```
Suppose that the implementation were wrong, for example
```python
def s_mul_wrong(s_y, y, a, b):
    return s_y * b, b * s_y  # Used `b` instead of `a` for the second sensitivity!
```
Then `check_sensitivity` should throw an `AssertionError`:
```python
>>> check_sensitivity(mul, s_mul_wrong, (2, 3))
AssertionError: Sensitivity of argument 2 of function "mul" did not match numerical estimate.
```
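For intuition, the kind of numerical comparison such a check boils down to can be sketched by hand with plain central differences. The sketch below is illustrative only, assumes scalar arguments, and is not fdm's actual implementation.

```python
import numpy as np

def mul(a, b):
    return a * b

def s_mul(s_y, y, a, b):
    return s_y * b, a * s_y

# Evaluate the function and the claimed sensitivities at the test point.
a, b, s_y = 2.0, 3.0, 1.0
y = mul(a, b)
s_a, s_b = s_mul(s_y, y, a, b)

# Compare each claimed sensitivity against s_y times a central-difference
# estimate of the corresponding partial derivative of `mul`.
h = 1e-6  # Hand-picked step size; fdm chooses its steps adaptively.
fd_a = s_y * (mul(a + h, b) - mul(a - h, b)) / (2 * h)
fd_b = s_y * (mul(a, b + h) - mul(a, b - h)) / (2 * h)
assert np.allclose([s_a, s_b], [fd_a, fd_b])
```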
Raw data
{
"_id": null,
"home_page": null,
"name": "fdm",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.6",
"maintainer_email": null,
"keywords": "finite-difference, python",
"author": null,
"author_email": "Wessel Bruinsma <wessel.p.bruinsma@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/3b/07/3200eb57d6904253a84944cbdb8cfb69056fb3318d2ac2ca4722560f961e/fdm-0.5.0.tar.gz",
"platform": null,
"description": "# [FDM: Finite Difference Methods](http://github.com/wesselb/fdm)\n\n[](https://github.com/wesselb/fdm/actions?query=workflow%3ACI)\n[](https://coveralls.io/github/wesselb/fdm?branch=master)\n[](https://wesselb.github.io/fdm)\n[](https://github.com/psf/black)\n\nFDM estimates derivatives with finite differences.\nSee also [FiniteDifferences.jl](https://github.com/JuliaDiff/FiniteDifferences.jl).\n\n* [Installation](#installation)\n* [Multivariate Derivatives](#multivariate-derivatives)\n - [Gradients](#gradients)\n - [Jacobians](#jacobians)\n - [Jacobian-Vector Products (Directional Derivatives)](#jacobian-vector-products-directional-derivatives)\n - [Hessian-Vector Products](#hessian-vector-products)\n* [Scalar Derivatives](#scalar-derivatives)\n* [Testing Sensitivities in a Reverse-Mode Automatic Differentation Framework](#testing-sensitivities-in-a-reverse-mode-automatic-differentation-framework)\n\n## Installation\n\nFDM requires Python 3.6 or higher.\n\n```bash\npip install fdm\n```\n\n## Multivariate Derivatives\n\n```python\nfrom fdm import gradient, jacobian, jvp, hvp\n```\n\nFor the purpose of illustration, let us consider a quadratic function:\n\n```python\n>>> a = np.random.randn(3, 3); a = a @ a.T\n>>> a\narray([[ 3.57224794, 0.22646662, -1.80432262],\n [ 0.22646662, 4.72596213, 3.46435663],\n [-1.80432262, 3.46435663, 3.70938152]])\n\n>>> def f(x):\n... return 0.5 * x @ a @ x\n```\n\nConsider the following input value:\n\n```python\n>>> x = np.array([1.0, 2.0, 3.0])\n```\n\n### Gradients\n\n```python\n>>> grad = gradient(f)\n>>> grad(x)\narray([-1.38778668, 20.07146076, 16.25253519])\n\n>>> a @ x\narray([-1.38778668, 20.07146076, 16.25253519])\n```\n\n### Jacobians\n\n```python\n>>> jac = jacobian(f)\n>>> jac(x)\narray([[-1.38778668, 20.07146076, 16.25253519]])\n\n>>> a @ x\narray([-1.38778668, 20.07146076, 16.25253519])\n```\n\nBut `jacobian` also works for multi-valued functions.\n\n```python\n>>> def f2(x):\n... 
return a @ x\n\n>>> jac2 = jacobian(f2)\n>>> jac2(x)\narray([[ 3.57224794, 0.22646662, -1.80432262],\n [ 0.22646662, 4.72596213, 3.46435663],\n [-1.80432262, 3.46435663, 3.70938152]])\n\n>>> a\narray([[ 3.57224794, 0.22646662, -1.80432262],\n [ 0.22646662, 4.72596213, 3.46435663],\n [-1.80432262, 3.46435663, 3.70938152]])\n```\n\n### Jacobian-Vector Products (Directional Derivatives)\n\nIn the scalar case, `jvp` computes directional derivatives:\n\n```python\n>>> v = np.array([0.5, 0.6, 0.7]) # A direction\n\n>>> dir_deriv = jvp(f, v)\n>>> dir_deriv(x)\n22.725757753354657\n\n>>> np.sum(grad(x) * v)\n22.72575775335481\n```\n\nIn the multivariate case, `jvp` generalises to Jacobian-vector products:\n\n```python\n>>> prod = jvp(f2, v)\n>>> prod(x)\narray([0.65897811, 5.37386023, 3.77301973])\n\n>>> a @ v\narray([0.65897811, 5.37386023, 3.77301973])\n```\n\n### Hessian-Vector Products\n\n```python\n>>> prod = hvp(f, v)\n>>> prod(x)\narray([[0.6589781 , 5.37386023, 3.77301973]])\n\n>>> 0.5 * (a + a.T) @ v\narray([0.65897811, 5.37386023, 3.77301973])\n```\n\n## Scalar Derivatives\n```python\n>>> from fdm import central_fdm\n```\n\nLet's try to estimate the first derivative of `np.sin` at `1` with a\nsecond-order method.\n\n```python\n>>> central_fdm(order=2, deriv=1)(np.sin, 1) - np.cos(1)\n-1.2914319613699377e-09\n```\n\nAnd let's try to estimate the second derivative of `np.sin` at `1` with a\nthird-order method.\n\n```python\n>>> central_fdm(order=3, deriv=2)(np.sin, 1) + np.sin(1)\n1.6342919018086377e-08\n```\n\nHm.\nLet's check the accuracy of this third-order method.\nThe step size and accuracy of the method are computed upon calling\n`FDM.estimate`.\n\n```python\n>>> central_fdm(order=3, deriv=2).estimate(np.sin, 1).acc\n5.476137293912896e-06\n```\n\nWe might want a little more accuracy. Let's check the accuracy of a\nfifth-order method.\n\n```python\n>>> central_fdm(order=5, deriv=2).estimate(np.sin, 1).acc\n7.343652562575157e-10\n```\n\nAnd let's estimate the second derivative of `np.sin` at `1` with a\nfifth-order method.\n\n```python\n>>> central_fdm(order=5, deriv=2)(np.sin, 1) + np.sin(1)\n-1.7121615236703747e-10\n```\n\nHooray!\n\nFinally, let us verify that increasing the order generally increases the accuracy.\n\n```python\n>>> for i in range(3, 10):\n... 
print(central_fdm(order=i, deriv=2)(np.sin, 1) + np.sin(1))\n1.6342919018086377e-08\n8.604865264771888e-09\n-1.7121615236703747e-10\n8.558931341440257e-12\n-2.147615418834903e-12\n6.80566714095221e-13\n-1.2434497875801753e-14\n```\n\n## Testing Sensitivities in a Reverse-Mode Automatic Differentation Framework\n\nConsider the function\n\n```python\ndef mul(a, b):\n return a * b\n```\n\nand its sensitivity\n\n```python\ndef s_mul(s_y, y, a, b):\n return s_y * b, a * s_y\n```\n\nThe sensitivity `s_mul` takes in the sensitivity `s_y` of the output `y`,\nthe output `y`, and the arguments of the function `mul`; and returns a tuple\ncontaining the sensitivities with respect to `a` and `b`.\nThen function `check_sensitivity` can be used to assert that the\nimplementation of `s_mul` is correct:\n\n```python\n>>> from fdm import check_sensitivity\n\n>>> check_sensitivity(mul, s_mul, (2, 3)) # Test at arguments `2` and `3`.\n```\n\nSuppose that the implementation were wrong, for example\n\n```python\ndef s_mul_wrong(s_y, y, a, b):\n return s_y * b, b * s_y # Used `b` instead of `a` for the second sensitivity!\n```\n\nThen `check_sensitivity` should throw an `AssertionError`:\n\n```python\n>>> check_sensitivity(mul, s_mul, (2, 3))\nAssertionError: Sensitivity of argument 2 of function \"mul\" did not match numerical estimate.\n```\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Estimate derivatives with finite differences",
"version": "0.5.0",
"project_urls": {
"repository": "https://github.com/wesselb/fdm"
},
"split_keywords": [
"finite-difference",
" python"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "e535294e2a694f0f29b69dd28c2cc4d6c16524c2c72f0102ab43846c2adf589f",
"md5": "05bcc3b28b920b3a5c3564f929715da9",
"sha256": "cf2ce17abcc439cde3aaa8b59c1bfffa53875156c2d8dcdf1639de1a2b3c166c"
},
"downloads": -1,
"filename": "fdm-0.5.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "05bcc3b28b920b3a5c3564f929715da9",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.6",
"size": 10501,
"upload_time": "2024-11-18T20:05:47",
"upload_time_iso_8601": "2024-11-18T20:05:47.586787Z",
"url": "https://files.pythonhosted.org/packages/e5/35/294e2a694f0f29b69dd28c2cc4d6c16524c2c72f0102ab43846c2adf589f/fdm-0.5.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "3b073200eb57d6904253a84944cbdb8cfb69056fb3318d2ac2ca4722560f961e",
"md5": "f0a580c5948873526cd0e6ee76d701c4",
"sha256": "47876c8fe8aea4b374913594938cbcbd065a648d044fe8e287af7f320341ef39"
},
"downloads": -1,
"filename": "fdm-0.5.0.tar.gz",
"has_sig": false,
"md5_digest": "f0a580c5948873526cd0e6ee76d701c4",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.6",
"size": 9295,
"upload_time": "2024-11-18T20:05:49",
"upload_time_iso_8601": "2024-11-18T20:05:49.186741Z",
"url": "https://files.pythonhosted.org/packages/3b/07/3200eb57d6904253a84944cbdb8cfb69056fb3318d2ac2ca4722560f961e/fdm-0.5.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-11-18 20:05:49",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "wesselb",
"github_project": "fdm",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "fdm"
}