# RoSE N-dimensional Rotary Spatial Embeddings
## Original implementation of Rotary Spatial Embeddings (in PyTorch)



Rotary Spatial Embeddings (RoSE) extends [2D Rotary Position Embeddings (RoPE)](https://arxiv.org/abs/2403.13298) and the original [1D RoPE](https://arxiv.org/pdf/2104.09864) by encoding spatial information as N-dimensional real-world coordinates directly in the embeddings. This is particularly useful for tasks that require understanding spatial relationships across different scales, such as microscopy.
## Explanation
### 1 Relative phase in 1-D RoPE
If you write the 1-D RoPE positional factor for token $t$ as a per-token complex phase
```math
\phi(t)=e^{\,i\,t\theta},\qquad t\in\mathbb Z .
```
After you attach that phase to query $q_t$ and key $k_t$,
```math
\tilde q_t = q_t\;\phi(t),\qquad
\tilde k_t = k_t\;\phi(t),
```
their complex inner product inside attention, where $^*$ denotes complex conjugation, becomes
```math
\tilde q_n\,\tilde k_m^{*}
\;=\; q_n\,k_m^{*}\,
\underbrace{\phi(n)\,\phi(m)^{*}}_{=\,e^{\,i\,(n-m)\theta}} ,
```
which depends only on the relative offset $n-m$.
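As a quick numerical sanity check (illustrative values, not part of the package API), the relative-phase identity can be verified directly:

```python
import cmath

theta = 0.07   # an illustrative rotation frequency
n, m = 12, 5   # two token positions

def phi(t):
    """Per-token complex phase e^{i t theta}."""
    return cmath.exp(1j * t * theta)

# The pairwise phase depends only on the offset n - m:
relative = phi(n) * phi(m).conjugate()
assert cmath.isclose(relative, cmath.exp(1j * (n - m) * theta))
```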
---
### 2 Extending to N dimensions
Give every token a coordinate vector
$\mathbf{p}=(x,y,z,\dots)\in\mathbb R^{N}.$
Define its phase as
```math
\phi(\mathbf{p}) \;=\;e^{\,i\,\langle\mathbf{p},\,\boldsymbol\theta\rangle},
\qquad
\langle\mathbf{p},\boldsymbol\theta\rangle
=\sum_{a=1}^{N} p_a\,\theta_a .
```
Then
```math
\phi(\mathbf{p}_n)\,\phi(\mathbf{p}_m)^{*}
\;=\;
e^{\,i\,\langle\mathbf{p}_n-\mathbf{p}_m,\;\boldsymbol\theta\rangle},
```
which is the ND generalisation of the 1-D $e^{\,i\,(n-m)\theta}$.
You still get
```math
A_{nm}\;=\;\mathrm{Re}
\bigl[q_n k_m^{*}\;e^{\,i\,\langle\mathbf{p}_n-\mathbf{p}_m,
\boldsymbol\theta\rangle}\bigr],
```
while keeping the encoding cost at $O(LD)$ for $L$ tokens of embedding dimension $D$: phases are attached per token, never per token pair.
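A small NumPy sketch (hypothetical values; not the package's implementation) shows both the per-token encoding and how the pairwise product recovers the relative phase without any per-pair work:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 3                      # tokens, spatial dimensions
theta = rng.normal(size=N)       # per-dimension frequencies theta_a
P = rng.normal(size=(L, N))      # token coordinates p_n

# Per-token phases: one inner product <p, theta> per token
phase = np.exp(1j * P @ theta)   # shape (L,)

# Pairwise products reproduce e^{i <p_n - p_m, theta>}
pairwise = phase[:, None] * np.conj(phase)[None, :]
direct = np.exp(1j * (P[:, None, :] - P[None, :, :]) @ theta)
assert np.allclose(pairwise, direct)
```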

---
### 3 Embedding real-world coordinates
In many applications, such as microscopy or 3D point clouds, token positions are not just grid indices but correspond to real-world locations that carry useful spatial information. RoSE injects these locations directly into the rotary embeddings: the coordinate vectors are simply multiplied by the coordinate spacing (i.e. the voxel size) before the rotary embedding is applied, so tokens separated by the same physical distance receive the same relative phase regardless of sampling resolution.
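For example (illustrative frequencies and spacings, not library code), scaling integer grid indices by the voxel size makes the relative phase reflect physical rather than index distance:

```python
import numpy as np

theta = np.array([0.3, 0.5])       # frequencies for a 2-D sketch
voxel_size = np.array([4.0, 4.0])  # e.g. nanometers per pixel

# Two pixels one index apart along the first axis ...
idx_a, idx_b = np.array([0, 0]), np.array([1, 0])

# ... are 4.0 nm apart in physical coordinates
coord_a, coord_b = idx_a * voxel_size, idx_b * voxel_size
rel_phase = np.exp(1j * (coord_a - coord_b) @ theta)

# The same physical offset at double the resolution
# (2 nm voxels, 2 indices apart) yields the same relative phase
rel_phase_hi = np.exp(1j * ((np.array([0, 0]) - np.array([2, 0])) * 2.0) @ theta)
assert np.isclose(rel_phase, rel_phase_hi)
```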

---
## Installation
### From PyPI
```bash
pip install rotary-spatial-embeddings
```
### From source
```bash
pip install git+https://github.com/rhoadesScholar/RoSE.git
```
## Usage
```python
import torch
from RoSE import RoSEMultiheadSelfAttention

# Basic RoSE layer for applying rotary spatial embeddings to q and k
layer = RoSEMultiheadSelfAttention(dim=128, num_heads=8, spatial_dims=3, learnable=True)

batch_size, seq_len = 2, 1000
q = torch.randn(batch_size, seq_len, 128)
k = torch.randn(batch_size, seq_len, 128)

# Define spatial grid properties (10 * 10 * 10 = seq_len tokens)
grid_shape = (10, 10, 10)     # 3D grid dimensions
voxel_size = (1.0, 1.0, 1.0)  # physical size of each voxel

# Apply rotary spatial embeddings
q_rot, k_rot = layer(q, k, grid_shape, voxel_size)
```
## License
BSD 3-Clause License. See [LICENSE](LICENSE) for details.