| Field | Value |
|---|---|
| Name | torch-legendre |
| Version | 0.1.1 |
| Summary | A Legendre Polynomial Layer for PyTorch |
| Author email | Philipp Benner <philipp.benner@bam.de> |
| Upload time | 2025-08-14 07:33:44 |
| Requires Python | >=3.9 |
| Keywords | machine learning |
# torch-legendre
A PyTorch layer for expanding input features into **Legendre polynomial bases**.
Useful for building models with polynomial feature expansions while keeping the
workflow compatible with standard PyTorch `nn.Module` layers.
## Features
- Computes **Legendre polynomial** terms P₀(x), P₁(x), …, Pₙ₋₁(x) for each input feature
- Supports arbitrary input dimensionality
- Optional trainable linear projection after expansion
- Drop-in compatible with `torch.nn.Sequential`
- Efficient recurrence-based computation (no loops over batches)
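The recurrence mentioned in the last bullet is presumably Bonnet's recursion, `(n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) - n P_{n-1}(x)`. As a minimal pure-Python sketch (independent of this package) of how the first `degree` terms can be generated:

```python
def legendre_basis(x, degree):
    """Return [P0(x), P1(x), ..., P_{degree-1}(x)] via Bonnet's recursion:
    (n + 1) * P_{n+1}(x) = (2n + 1) * x * P_n(x) - n * P_{n-1}(x)."""
    vals = [1.0, x]  # P0 = 1, P1 = x
    for n in range(1, degree - 1):
        vals.append(((2 * n + 1) * x * vals[n] - n * vals[n - 1]) / (n + 1))
    return vals[:degree]

print(legendre_basis(0.5, 4))  # [1.0, 0.5, -0.125, -0.4375]
```

Because each new term reuses the previous two, the whole basis costs O(degree) operations per input value, which is what makes a vectorized, loop-free batch implementation practical.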
---
## Installation

Install the released package from PyPI:

```bash
pip install torch-legendre
```

Or, for a development install from a source checkout:

```bash
pip install -e .
```
## Usage
You can see a full demonstration of the `LegendreLayer` in the provided [example.ipynb](example.ipynb) notebook.
### 1. Basic Legendre Expansion
```python
import torch
from torch_legendre import LegendreLayer
# Example: 2 input features, expand to degree 4 (P₀...P₃)
layer = LegendreLayer(in_features=2, degree=4)
x = torch.tensor([[0.1, -0.3],
                  [0.5, 0.2]])  # shape (batch=2, in_features=2)
y = layer(x)
print(y.shape) # (2, 2 * 4) = (2, 8)
```
### 2. With Trainable Projection
```python
# Expand then project down to 3 outputs
layer = LegendreLayer(in_features=2, degree=4, out_features=3)
x = torch.rand(5, 2)
y = layer(x)
print(y.shape) # (5, 3)
```
### 3. Stacking in a Sequential Model
```python
import torch.nn as nn
from torch_legendre import LegendreLayer

model = nn.Sequential(
    LegendreLayer(in_features=1, degree=5, out_features=10),
    LegendreLayer(in_features=10, degree=3, out_features=1)
)
```
## API
### `LegendreLayer`
Expands each input feature into its Legendre polynomial basis and optionally applies a trainable linear projection.
**Parameters**
- **in_features** (*int*):
Number of input features.
- **degree** (*int*):
Number of polynomial degrees to compute per input feature.
`degree = 1` means only the constant term `P₀(x) = 1`.
- **out_features** (*int, optional*):
If provided, a final `nn.Linear` layer maps from `(in_features * degree)` → `out_features`.
If `None`, returns the raw expanded features.
- **bias** (*bool, default=True*):
Whether to include a bias term in the optional linear mapping.
**Shapes**
- **Input**: `(batch_size, in_features)`
- **Output**:
- If `out_features is None`: `(batch_size, in_features * degree)`
- Else: `(batch_size, out_features)`
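The shape arithmetic above can be illustrated with a pure-Python sketch that needs neither torch nor this package. Note the per-feature grouping of basis terms and the all-zeros projection matrix `W` are illustrative assumptions, not taken from the library's internals:

```python
def expand(batch, degree):
    """Map a (batch_size, in_features) list-of-lists to
    (batch_size, in_features * degree), grouping P0..P_{degree-1} per feature."""
    out = []
    for row in batch:
        expanded = []
        for x in row:
            p = [1.0, x]  # P0 = 1, P1 = x, then Bonnet's recursion
            for n in range(1, degree - 1):
                p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
            expanded.extend(p[:degree])
        out.append(expanded)
    return out

x = [[0.1, -0.3], [0.5, 0.2]]  # (batch=2, in_features=2)
y = expand(x, degree=4)
print(len(y), len(y[0]))       # 2 8 -> (batch_size, in_features * degree)

# The optional projection is just a linear map from in_features * degree
# columns down to out_features columns:
out_features = 3
W = [[0.0] * out_features for _ in range(len(y[0]))]  # (8, 3), dummy weights
proj = [[sum(r[i] * W[i][j] for i in range(len(r))) for j in range(out_features)]
        for r in y]
print(len(proj), len(proj[0]))  # 2 3 -> (batch_size, out_features)
```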
## License
This project is licensed under the MIT License — see the [LICENSE](LICENSE) file for details.