rotary-spatial-embeddings

Name: rotary-spatial-embeddings
Version: 2025.8.21.2030
Summary: PyTorch implementation of Rotary Spatial Embeddings
Author: Jeff Rhoades <rhoadesj@hhmi.org>
Upload time: 2025-08-21 20:33:18
Requires Python: >=3.10
License: BSD 3-Clause License
Keywords: attention, embeddings, pytorch, rope, rotary, spatial, transformer
Homepage: https://github.com/rhoadesScholar/RoSE

# RoSE: N-dimensional Rotary Spatial Embeddings

## Original implementation of Rotary Spatial Embeddings (in PyTorch)

![GitHub - License](https://img.shields.io/github/license/rhoadesScholar/RoSE)
[![CI/CD Pipeline](https://github.com/rhoadesScholar/RoSE/actions/workflows/ci-cd.yml/badge.svg)](https://github.com/rhoadesScholar/RoSE/actions/workflows/ci-cd.yml)
[![codecov](https://codecov.io/github/rhoadesScholar/RoSE/graph/badge.svg?token=PPT4ZNZZCJ)](https://codecov.io/github/rhoadesScholar/RoSE)
![PyPI - Version](https://img.shields.io/pypi/v/rotary-spatial-embeddings)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/rotary-spatial-embeddings)


Rotary Spatial Embeddings (RoSE) extends [2D Rotary Position Embeddings (RoPE)](https://arxiv.org/abs/2403.13298) and the original [1D RoPE](https://arxiv.org/pdf/2104.09864) by incorporating spatial information into the embeddings as N-dimensional real-world coordinates. This is particularly useful for tasks that require understanding spatial relationships across different scales, such as microscopy.

## Explanation

### 1 Relative phase in 1-D RoPE

Write the 1-D RoPE positional factor for token $t$ as a per-token complex phase:

```math
\phi(t)=e^{\,i\,t\theta},\qquad t\in\mathbb Z .
```

Attaching that phase to query $q_t$ and key $k_t$,

```math
\tilde q_t = q_t\;\phi(t),\qquad
\tilde k_t = k_t\;\phi(t),
```

their Hermitian product inside attention becomes

```math
\tilde q_n\,\tilde k_m^{*}
\;=\; q_n\,k_m^{*}\,
\underbrace{\phi(n)\,\phi(m)^{*}}_{=\,e^{\,i\,(n-m)\theta}},
```

where $^*$ denotes complex conjugation, so the score depends only on the relative offset $n-m$.
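
This identity is easy to verify numerically. The standalone sketch below (plain PyTorch, not the RoSE API) rotates a single complex query/key pair and confirms that their product depends only on $n-m$:

```python
import torch

theta = 0.1
n, m = 7, 3

# Per-token phases: phi(t) = exp(i * t * theta)
phi_n = torch.exp(1j * torch.tensor(n * theta))
phi_m = torch.exp(1j * torch.tensor(m * theta))

# One complex query/key pair
q, k = torch.randn(2, dtype=torch.cfloat)

# Rotate both, then take the Hermitian product:
# (q * phi(n)) * (k * phi(m))^* = q * k^* * exp(i * (n - m) * theta)
lhs = (q * phi_n) * (k * phi_m).conj()
rhs = q * k.conj() * torch.exp(1j * torch.tensor((n - m) * theta))
assert torch.allclose(lhs, rhs)
```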

---

### 2 Extending to N dimensions

Give every token a coordinate vector
$\mathbf{p}=(x,y,z,\dots)\in\mathbb R^{N}.$

Define its phase as

```math
\phi(\mathbf{p}) \;=\;e^{\,i\,\langle\mathbf{p},\,\boldsymbol\theta\rangle},
\qquad
\langle\mathbf{p},\boldsymbol\theta\rangle
=\sum_{a=1}^{N} p_a\,\theta_a .
```

Then

```math
\phi(\mathbf{p}_n)\,\phi(\mathbf{p}_m)^{*}
\;=\;
e^{\,i\,\langle\mathbf{p}_n-\mathbf{p}_m,\;\boldsymbol\theta\rangle},
```

which is the N-D generalisation of the 1-D $e^{\,i\,(n-m)\theta}$.
You still get

```math
A_{nm}\;=\;\mathrm{Re}
\bigl[q_n k_m^{*}\;e^{\,i\,\langle\mathbf{p}_n-\mathbf{p}_m,
\boldsymbol\theta\rangle}\bigr],
```

while keeping the encoding cost at $O(LD)$ for $L$ tokens with embedding dimension $D$.
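
The N-D identity can be checked the same way; a minimal standalone sketch with made-up frequencies and coordinates (again plain PyTorch, not the RoSE API):

```python
import torch

# Illustrative per-axis frequencies and two 3-D coordinates
theta = torch.tensor([0.10, 0.05, 0.02])
p_n = torch.tensor([1.0, 4.0, 2.0])
p_m = torch.tensor([3.0, 1.0, 5.0])

# phi(p) = exp(i * <p, theta>)
phi = lambda p: torch.exp(1j * torch.dot(p, theta))

# The phase product depends only on the coordinate difference p_n - p_m
lhs = phi(p_n) * phi(p_m).conj()
rhs = torch.exp(1j * torch.dot(p_n - p_m, theta))
assert torch.allclose(lhs, rhs)
```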

**Partial Rotation**: RoSE also supports partial rotation via the `rotary_ratio` parameter, where only a fraction of the embedding dimensions are rotated while the rest are passed through unchanged. This provides a balance between spatial awareness and computational efficiency.

---

### 3 Embedding real-world coordinates

In many applications, such as microscopy or 3D point clouds, coordinates are not just indices but real-world positions that carry useful spatial information. RoSE injects these coordinates directly into the rotary embeddings by multiplying each coordinate vector by the coordinate spacing (i.e. the voxel size) before applying the rotation.
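
Concretely, each token's physical coordinate is its grid index scaled per axis by the spacing. A small sketch of that conversion (illustrative only; given `spacing` and `grid_shape`, the library presumably computes the equivalent internally):

```python
import torch

grid_shape = (10, 10)  # 2-D grid of tokens
spacing = (0.5, 2.0)   # anisotropic voxel size, e.g. microns per pixel

# Integer grid indices for every token, flattened to (seq_len, 2)
idx = torch.stack(
    torch.meshgrid(*(torch.arange(s) for s in grid_shape), indexing="ij"),
    dim=-1,
).reshape(-1, len(grid_shape)).float()

# Real-world coordinates: index times voxel size, fed into the phases above
coords = idx * torch.tensor(spacing)
print(coords[:3])  # tensor([[0., 0.], [0., 2.], [0., 4.]])
```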

---

## Installation

### From PyPI

```bash
pip install rotary-spatial-embeddings
```

### From source

```bash
pip install git+https://github.com/rhoadesScholar/RoSE.git
```

## Usage

### Basic Usage - Multi-Head Attention with Spatial Embeddings

```python
import torch
from RoSE import RoSEMultiHeadAttention

# Create RoSE multi-head attention layer
layer = RoSEMultiHeadAttention(
    dim=128,
    num_heads=8,
    spatial_dims=3,
    learnable=True,
    base_theta=1e4,
    rotary_ratio=1.0  # Apply rotation to all dimensions (default)
)

batch_size, seq_len = 2, 1000
q = torch.randn(batch_size, seq_len, 128)
k = torch.randn(batch_size, seq_len, 128)

# Define spatial grid properties
grid_shape = (10, 10, 10)  # 3D grid dimensions
spacing = (1.0, 1.0, 1.0)  # Physical size of each voxel

# Compute attention scores with spatial embeddings
attn_scores = layer(q, k, spacing, grid_shape)  # Shape: (batch_size, num_heads, seq_len, seq_len)
```
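
The layer returns raw attention scores; turning them into an attention output is standard plumbing outside of RoSE. Continuing the example above:

```python
import torch.nn.functional as F

num_heads, head_dim = 8, 128 // 8
v = torch.randn(batch_size, num_heads, seq_len, head_dim)  # per-head values

attn = F.softmax(attn_scores, dim=-1)  # normalize over keys
out = attn @ v                         # (batch, heads, seq, head_dim)
out = out.transpose(1, 2).reshape(batch_size, seq_len, 128)
```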

### Partial Rotation with `rotary_ratio`

The `rotary_ratio` parameter allows you to apply rotary embeddings to only a fraction of the embedding dimensions, which can be beneficial for performance and model capacity:

```python
import torch
from RoSE import RotarySpatialEmbedding

# Apply rotation to only 50% of the embedding dimensions
embedding = RotarySpatialEmbedding(
    dim=128,
    num_heads=8,
    spatial_dims=2,
    rotary_ratio=0.5,  # Only rotate first 50% of dimensions per head
    learnable=False
)

batch_size, seq_len = 2, 100
x = torch.randn(batch_size, seq_len, 128)

# The first 64 dimensions (50% of 128) will be rotated
# The last 64 dimensions will be passed through unchanged
x_embedded = embedding(x, spacing=(0.5, 0.5), grid_shape=(10, 10))
```

**Key benefits of partial rotation:**

- **Performance**: Reduces computational cost for large embeddings
- **Flexibility**: Allows some dimensions to encode non-spatial information
- **Stability**: Can improve training stability in some scenarios
- **Memory**: Lower memory usage for frequency parameters

### Using Just the Embedding Layer

```python
import torch
from RoSE import RotarySpatialEmbedding

# Create just the rotary spatial embedding layer
embedding = RotarySpatialEmbedding(
    dim=128,
    num_heads=8,
    spatial_dims=2,
    learnable=False,
    frequency_scaling="sqrt",
    rotary_ratio=1.0  # Apply rotation to all dimensions (default)
)

batch_size, seq_len = 2, 100
x = torch.randn(batch_size, seq_len, 128)

# Define 2D grid
grid_shape = (10, 10)
spacing = (0.5, 0.5)

# Apply rotary spatial embeddings
x_embedded = embedding(x, spacing, grid_shape)  # Shape: (batch_size, seq_len, 128)
```

## Parameters

### Core Parameters

- **`dim`**: Total embedding dimension (must be even and divisible by `num_heads`)
- **`num_heads`**: Number of attention heads
- **`spatial_dims`**: Number of spatial dimensions (2 for 2D, 3 for 3D, etc.)
- **`rotary_ratio`**: Fraction of embedding dimensions to apply rotation to (0.0 to 1.0, default: 1.0)
  - `1.0`: Apply rotation to all dimensions (full rotation)
  - `0.5`: Apply rotation to 50% of dimensions per head
  - `0.0`: No rotation applied (passthrough)

### Advanced Parameters

- **`base_theta`**: Base frequency for rotary embeddings (default: 10000.0)
- **`learnable`**: Whether frequencies should be learnable parameters (default: True)
- **`init_jitter_std`**: Standard deviation for frequency initialization jitter (default: 0.02)
- **`frequency_scaling`**: Scaling strategy for frequencies (default: "sqrt"; see the sketch after this list)
  - `"none"`: No frequency scaling
  - `"linear"`: Linear scaling with spatial dimensions
  - `"sqrt"`: Square root scaling with spatial dimensions
  - `"adaptive"`: Adaptive scaling based on spatial dims and embedding dim


## License

BSD 3-Clause License. See [LICENSE](LICENSE) for details.

            
