![Python Version](https://img.shields.io/badge/python-3.9+-blue)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
![tests](https://github.com/danionella/warpfield/actions/workflows/test.yml/badge.svg)
[![PyPI - Version](https://img.shields.io/pypi/v/warpfield)](https://pypi.org/project/warpfield/)
[![Conda Version](https://img.shields.io/conda/v/conda-forge/warpfield)](https://anaconda.org/conda-forge/warpfield)
![GitHub last commit](https://img.shields.io/github/last-commit/danionella/warpfield)

<img width="1000" src="https://github.com/user-attachments/assets/09b2109d-5db6-4f7e-8e3d-000361455a0f"/> <br>
<sup>Example shown: region-level 3D registration across subjects (video slowed down ~5x). Cell-level registration of microscopy data is also possible.</sup>

# warpfield

A GPU-accelerated Python library for non-rigid 3D registration / warping.

Links: [API documentation](http://danionella.github.io/warpfield), [GitHub repository](https://github.com/danionella/warpfield)


### Features

- GPU-accelerated code for high performance ([CuPy](https://cupy.dev/), CUDA kernels & FFT plans)
- Typical speedup compared to CPU-based methods: > 1000x (**seconds vs. hours** for gigavoxel volumes)
- Forward and inverse transform of 3D volumes as well as point coordinates
- Python API and command-line interface (CLI)
- Support for `.h5`, `.zarr`, `.n5`, `.nii`, `.nrrd`, `.tiff` and `.npy` file formats

### General Principle

The registration process aligns a moving 3D volume to a fixed reference volume by estimating and applying a displacement field. The process is typically performed in a multi-resolution (pyramid) fashion, starting with coarse alignment and progressively refining the displacement field at finer resolutions.

The key steps are (see the sketch after this list):

1. **Preprocessing**: Enhance features in the volumes (e.g., using Difference-of-Gaussian or 'DoG' filtering) to improve registration accuracy.
2. **Block Matching**: Divide the volumes into smaller 3D blocks, project them to 2D (along each axis) for memory and compute efficiency, and calculate 2D cross-correlation maps. After smoothing these 2D maps across neighboring blocks, use their maxima to determine the block-level 3D displacement vector.
3. **Displacement Field Estimation**: Combine block-level displacement vectors into a displacement field (and optionally apply a median filter or fit an affine transform).
4. **Warping**: Apply the displacement field to the moving volume to align it with the fixed volume.
5. **Multi-Resolution Refinement**: Repeat the above steps at progressively finer resolutions to refine the alignment.
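
The minimal CPU sketch below (NumPy/SciPy) illustrates steps 1 and 2 for a single pair of blocks. It is only an illustration of the idea, not the library's GPU implementation; the function names are hypothetical, and the smoothing of correlation maps across neighboring blocks (and the multi-resolution loop) is omitted.

```python
# Illustrative CPU sketch of steps 1-2 for one block pair (not library code).
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(vol, low=0.5, high=10.0):
    """Difference-of-Gaussians filter to enhance features (step 1)."""
    v = vol.astype(np.float32)
    return gaussian_filter(v, low) - gaussian_filter(v, high)

def block_shift(ref_block, mov_block):
    """Estimate one block's 3D displacement from its three 2D max projections (step 2)."""
    shift, counts = np.zeros(3), np.zeros(3)
    for axis in range(3):
        p_ref = ref_block.max(axis=axis)   # 3D -> 2D max projection
        p_mov = mov_block.max(axis=axis)
        # FFT-based circular cross-correlation; the peak gives the in-plane shift
        xc = np.fft.ifft2(np.fft.fft2(p_mov) * np.conj(np.fft.fft2(p_ref))).real
        peak = np.unravel_index(np.argmax(xc), xc.shape)
        kept_axes = [a for a in range(3) if a != axis]   # axes retained by this projection
        for k, p, n in zip(kept_axes, peak, xc.shape):
            shift[k] += p - n if p > n // 2 else p       # unwrap circular shift
            counts[k] += 1
    return shift / counts   # each axis is estimated by two of the three projections
```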

## Hardware requirements

- A computer running Linux (recommended) or Windows
- A CUDA-compatible GPU with sufficient GPU memory: ≥ 30 bytes per voxel (30 GB / gigavoxel) of your 3D volume
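
As a back-of-the-envelope check of the rule of thumb above (the volume shape below is just an example):

```python
# Rough GPU-memory estimate using the ~30 bytes/voxel rule of thumb stated above
import numpy as np

shape = (512, 2048, 2048)              # example volume shape (z, y, x)
gigavoxels = np.prod(shape) / 1e9      # ~2.1 gigavoxels
print(f"~{30 * gigavoxels:.0f} GB of GPU memory recommended")  # ~64 GB
```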


## Installation

Using [conda/mamba](https://github.com/conda-forge/miniforge?tab=readme-ov-file#install):

```bash
conda install conda-forge::warpfield
```

Or, to install into a new conda environment (recommended):

```bash
conda create -n warpfield conda-forge::warpfield
conda activate warpfield
``` 

Installation via pip is also possible (but you need to separately ensure [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive) 12.x is installed):

```bash
pip install warpfield
```

For development, fork and clone the repository. Then type:

```bash
conda env create -n warpfield -f environment.yml
conda activate warpfield
pip install --no-deps -e .
```

## Quickstart
```python
import warpfield
import numpy as np

# 1. Load data (note: the two volumes are expected to be of the same array shape and resolution)
vol_ref, _ = warpfield.load_data("reference_volume.npy")
vol_mov, _ = warpfield.load_data("moving_volume.npy")

# 2. Choose registration recipe (here: loaded from a YAML file. See below for alternatives)
recipe = warpfield.Recipe.from_yaml('default.yml')

# 3. Register moving volume
vol_mov_reg, warp_map, _ = warpfield.register_volume(vol_ref, vol_mov, recipe)

# 4. Optional: apply the transformation to another volume (same shape and resolution)
vol_another_reg = warp_map.apply(vol_another)

# 5. Optional: apply inverse transformation to the reference volume
vol_ref_reg = warp_map.invert().apply(vol_ref)

# 6. Optional: apply the warp transformation to a set of coordinates (3-by-n array, in units of volume voxels)
points = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
points_pushed = warp_map.push_coordinates(points)
points_pulled = warp_map.invert().push_coordinates(points) # inverse transformation
``` 

> [!TIP]
> You can test-run warpfield on Google Colab: <a target="_blank" href="https://colab.research.google.com/github/danionella/warpfield/blob/main/notebooks/example.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

> [!IMPORTANT]
> Fixed and moving volumes are expected to be of the same voxel size (which does not have to be isotropic). Physical units, scalings or other metadata that may be present in data files are ignored.
>
> If the moving volume does not already have the same resolution (voxel size) as the fixed volume, you can use the convenience function [`warpfield.ndimage.zoom`](https://danionella.github.io/warpfield/warpfield/ndimage.html#zoom) to match scale:
> 
> ```python
> vol_mov = warpfield.ndimage.zoom(vol_mov, zoom_factors=np.divide(voxel_size_moving, voxel_size_fixed))  # voxel sizes are 3-tuples of floats
> ```

## Command-Line Interface (CLI)

The `warpfield` library provides a command-line interface, allowing you to perform registration directly from the terminal. We recommend starting with the Python interface to develop and optimize your recipes interactively. Once you are satisfied with the results, you can use the CLI for batch processing.

#### Usage

```bash
python -m warpfield --fixed <fixed_image_path> --moving <moving_image_path> --recipe <recipe_path> [options]
# You can use the `--help` flag to see instructions for the CLI:
python -m warpfield --help
```

<details>
  <summary>Further details</summary>

#### Required Arguments

- `--fixed`: Path to the fixed 3D volume file (`.h5`, `.zarr`, `.n5`, `.nii`, `.nrrd`, `.tiff`, `.npy`). For hierarchical storage formats (`.h5`, `.zarr`, `.n5`), specify the dataset name using the format `filename.h5/dataset_name`.
- `--moving`: Path to the moving 3D volume file.
- `--recipe`: Path to the registration recipe YAML file (`.yml`).

#### Optional Arguments

- `--output`: Path to save the registered volume (`.h5`). Defaults to `<moving>_registered.h5` if not provided.
- `--compression`: Compression method for saving the registered volume. Default is `gzip`.
- `--invert`: Additionally compute the inverse registration, i.e. the fixed image registered to the moving image (saved as `/fixed_reg_inv`, see below).

#### Output Structure

The output file is an HDF5 file containing the following datasets:
- `/moving_reg`: The registered moving image.
- `/warp_map`: A group containing the warp field and its metadata:
  - `/warp_field`: The displacement field.
  - `/block_size`: The block size (in voxels).
  - `/block_stride`: The block stride (in voxels).
- `/fixed_reg_inv` (optional): The fixed image registered to the moving image (if `--invert` is used).
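
For example, the output file can be read back with `h5py` (a minimal sketch; the output filename is a placeholder):

```python
# Minimal sketch of reading the CLI output datasets listed above (requires h5py)
import h5py

with h5py.File("moving_registered.h5", "r") as f:       # placeholder filename
    moving_reg = f["moving_reg"][...]                    # registered moving volume
    warp_field = f["warp_map/warp_field"][...]           # displacement field
    block_size = f["warp_map/block_size"][...]           # block size (voxels)
    block_stride = f["warp_map/block_stride"][...]       # block stride (voxels)
```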

</details>

## Recipes

The registration pipeline is defined by a recipe. A recipe consists of a pre-filter that is applied to all volumes (typically a DoG filter to sharpen features) and a list of levels, each of which contains a set of parameters for the registration process. Typically, each level corresponds to a different resolution of the displacement field (the block size), with the first level being the coarsest and the last level being the finest.

### Recipe parameters

| Pre-filter parameter      | Description                                                                 |
|-------------------|-----------------------------------------------------------------------------|
| `clip_thresh`     | Pixel value threshold for clipping each volume. Default is 0. This setting helps remove DC background, which can otherwise cause edges (the shifted volume is 0 outside the FOV).  |
| `dog`             | If True, apply a 3D Difference-of-Gaussians (DoG) pre-filter to each volume. Default is True. If False (and σ<sub>low</sub> > 0), apply a simple Gaussian filter with σ<sub>low</sub> and ignore σ<sub>high</sub>. |
| `low`             | The σ<sub>low</sub> value for the 3D DoG pre-filter. Default is 0.5.           |
| `high`            | The σ<sub>high</sub> value for the 3D DoG pre-filter. Default is 10.0. Note: σ<sub>low</sub> and σ<sub>high</sub> should be smaller and larger than the feature of interest, respectively. A σ of 1 corresponds to a FWHM of ~2.4.             |
| `soft_edge`      | Fade the borders of the volumes to 0 (black). Reduces edge effects for volumes in which the sample extends to the volume border or beyond. Float or list of 3 floats (for each axis): size of the fade in voxels from each border. If required, the fade should be larger than σ<sub>high</sub> (above). Defaults to 0.0 (disabled).|
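
A rough NumPy sketch of how these pre-filter parameters could combine is shown below. It is not the library's implementation: the exact clipping semantics are an assumption, and `soft_edge` is omitted.

```python
# Illustrative sketch of the pre-filter parameters above (clip_thresh, dog, low, high);
# not the library's GPU code, and the clipping behaviour is an assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

def pre_filter(vol, clip_thresh=0.0, dog=True, low=0.5, high=10.0):
    vol = np.maximum(vol.astype(np.float32) - clip_thresh, 0)  # suppress DC background
    if dog:
        return gaussian_filter(vol, low) - gaussian_filter(vol, high)
    if low > 0:
        return gaussian_filter(vol, low)   # plain Gaussian when dog is False
    return vol
```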

| Level parameter      | Description                                                                 |
|-------------------|-----------------------------------------------------------------------------|
| `block_size`      | Shape of the blocks into which the volume is divided. Positive numbers give the block shape in voxels (e.g. [32, 16, 32]), while negative numbers are interpreted as "divide the volume shape by this number" (e.g. [-5, -5, -5] results in 5 blocks along each axis when the block stride is 1.0; see the worked example after this table).|
| `block_stride`    | Determines whether blocks overlap. Either a list of 3 integer values (block center distances, or strides, in voxels) or a scalar float (fraction of `block_size`). Default is 1.0 (no block overlap). Set this to a smaller value (e.g. 0.5) for overlapping blocks and higher precision, but a larger (e.g. 8x) memory footprint.   |
| `project.max`     | If True, apply 3D -> 2D max projections to each volume block. If False, apply mean projections. Default is True.           |
| `project.dog`     | If True, apply a DoG filter to each 2D projection. Default is True           |
| `project.low`     | The σ<sub>low</sub> value for the 2D DoG filter. Default is 0.5 voxels (pixels).                 |
| `project.high`    | The σ<sub>high</sub> value for the 2D DoG filter. Default is 10.0 voxels (pixels).               |
| `project.normalize`    | Whether to normalize the projections before calculating the cross-covariance (which would make it a cross-correlation). Defaults to False or 0.0. Values can range from 0.0 (False) to 1.0 (True). Values in between imply normalisation by `l2_norm**project.normalize`            |
| `project.periodic_smooth` | If True, apply periodic smoothing. Default is False. This can help reduce block-edge effects (affecting FFT-based correlation). |
| `tukey_ref`    | If not None, apply a Tukey window to the reference projections (alpha = `tukey_ref`). Default is 0.5. The Tukey window can reduce block-edge effects. |
| `smooth.sigmas`   | Sigmas for smoothing cross-correlations across blocks. Default is [1.0, 1.0, 1.0] blocks. |
| `smooth.shear`    | Shear parameter (specific to oblique plane wobble – ignore otherwise). Default is None.                      |
| `smooth.long_range_ratio` | Long-range ratio for the double-Gaussian kernel. Default is None. To deal with empty or low-contrast regions, a second smooth with a larger (5x) sigma is applied to the cross-correlation maps and added. Typical values are between 0 (or None) and 0.1. |
| `median_filter`   | If True, apply a median filter to the displacement field. Default is True.                  |
| `affine`        | If True, fit affine transformation to the displacement field. Default is False. The affine fit ignores all edge voxels (to reduce edge effects) and therefore needs at least 4 blocks along each axis |
| `update_rate`          | Update rate for the displacement field. Default is 1.0. This value can be lowered to dampen oscillations, if needed (rarely).|
| `repeats`          | Number of iterations for this level. More repeats allow each block to deviate further from its neighbors, despite smoothing. Typical values range from 1 to 10. Disable a level by setting repeats to 0.|

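As a worked example of the negative `block_size` convention referenced in the table above (the volume shape is hypothetical and the exact rounding is an assumption):

```python
# Resolving negative block_size entries as "divide the volume shape by this number"
# (convention from the table above; exact rounding behaviour is an assumption)
import numpy as np

volume_shape = np.array([500, 400, 300])   # hypothetical volume shape
block_size   = np.array([-5, 16, -5])      # mix of "divide by 5" and absolute voxels
resolved = np.where(block_size < 0, volume_shape // -block_size, block_size)
print(resolved)                            # [100  16  60]
```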

### Defining recipes

Recipes can be loaded from YAML files (either those shipped with this package, such as [default.yml](https://github.com/danionella/warpfield/blob/main/src/warpfield/recipes/default.yml), or your own): 

```python
recipe = warpfield.Recipe.from_yaml("default.yml")
# or your own recipe:
# recipe = warpfield.recipes.from_yaml("path/to/your/recipe.yaml")

# You can then modify recipe parameters programmatically, e.g.:
recipe.pre_filter.clip_thresh = 10
# ... etc.
```

Alternatively, you can define a recipe from scratch using the `Recipe` class. For example:

```python
# create a basic recipe:
recipe = warpfield.Recipe() # initialized with a translation level, followed by an affine registration level
recipe.pre_filter.clip_thresh = 0 # clip DC background, if present

# affine level properties
recipe.levels[-1].repeats = 5

# add non-rigid registration levels:
recipe.add_level(block_size=[128,128,128])
recipe.levels[-1].smooth.sigmas = [1.0,1.0,1.0]
recipe.levels[-1].repeats = 5

recipe.add_level(block_size=[64,64,64])
recipe.levels[-1].block_stride = 0.5
recipe.levels[-1].smooth.sigmas = [1.0,1.0,1.0]
recipe.levels[-1].repeats = 5

recipe.add_level(block_size=[32,32,32])
recipe.levels[-1].block_stride = 0.5
recipe.levels[-1].project.low = 1
recipe.levels[-1].project.high = 2
recipe.levels[-1].smooth.sigmas = [4.0,4.0,4.0]
recipe.levels[-1].smooth.long_range_ratio = 0.1
recipe.levels[-1].repeats = 5
```

> [!TIP]
> The speed of `warpfield` enables rapid iterative optimization of the registration process. Start with a simple recipe, such as the one above. Deactivate all levels after the first affine level by setting their `repeats` to 0. Confirm that the affine registration has converged (increasing repeats should not change the result), then move on to the next level. If voxels are anisotropic, adjust `block_size` to make blocks roughly isotropic in real space. Inspect results as you change the settings and repeats of the second level, then add more fine-grained levels if necessary. Adjust `project.low` and `project.high` to the relevant feature size if needed (which may get smaller in finer levels). If the moving volume warps too much, consider larger blocks / fewer levels. Otherwise, increase `smooth.sigmas`, reduce repeats, or reduce `block_stride` to 0.5 if you can afford the increase in memory footprint and compute time. You may also want to provide `register_volumes` with a callback function (see [`register_volumes`](https://danionella.github.io/warpfield/warpfield/register.html#register_volumes) and the tip below) to observe each level and repeat of the registration process; this is very helpful for troubleshooting and for reducing compute time by trimming the number of levels and repeats to the necessary minimum. A short code sketch of this level-by-level workflow follows below.

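A minimal sketch of the level-by-level workflow described in the tip above (the recipe file name is a placeholder):

```python
import warpfield

# Start with only the coarse levels active; re-enable finer levels one at a
# time (by restoring their repeats) once the coarser alignment has converged.
recipe = warpfield.Recipe.from_yaml("default.yml")
for level in recipe.levels[2:]:   # keep e.g. the first two levels active
    level.repeats = 0             # repeats = 0 disables a level
```
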
> [!TIP]
> Generating videos of the registration process:
> ```python
> video_path = "output.mp4" 
> units_per_voxel = [1,1,1] # voxel aspect ratio or physical dimensions (e.g. µm)
> callback = warpfield.utils.mips_callback(units_per_voxel=units_per_voxel)
> moving_reg, warpmap, _ = warpfield.register_volumes(fixed, moving, recipe, video_path=video_path, callback=callback)
> ```
> See also [`register_volumes`](https://danionella.github.io/warpfield/warpfield/register.html#register_volumes) documentation and [`notebooks/example.ipynb`](https://github.com/danionella/warpfield/blob/main/notebooks/example.ipynb)  <a target="_blank" href="https://colab.research.google.com/github/danionella/warpfield/blob/main/notebooks/example.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>.

## Feedback and contributions

We value your feedback and contributions to improve `warpfield`! Here's how you can help:

- **Bug reports**: If you encounter any issues or unexpected behavior, please [open a bug report](https://github.com/danionella/warpfield/issues/new?assignees=&labels=bug&template=bug_report.md&title=%5BBUG%5D).
- **Feature requests**: Have an idea for a new feature or improvement? Let us know by [submitting a feature request](https://github.com/danionella/warpfield/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=%5BFEATURE%5D).
- **General discussion**: Start a conversation in our [Discussions](https://github.com/danionella/warpfield/discussions). Whether you have questions, ideas, or just want to share how `warpfield` has helped your work, we'd like to hear from you!
- **Questions**: If you have questions about using `warpfield`, post them in [Q&A](https://github.com/danionella/warpfield/discussions/categories/q-a).

#### Contributing code or documentation

Whether it's fixing a bug, adding a feature, or improving documentation, your help is appreciated. To contribute, fork the repository, make your changes in a feature branch, and submit a pull request (PR) for review. Please follow these minimum standards:

- Code Quality: Write clean, readable, and well-documented code.
- Testing: Include tests for new features or bug fixes.
- If your change doesn't concern an existing issue or discussion, please create one first to explain the motivation behind your change.

Thank you for your support!

## Citing our work
If you use `warpfield` in your research, please cite the paper that first described our registration approach:

Hoffmann, M., Henninger, J. et al. Blazed oblique plane microscopy reveals scale-invariant inference of brain-wide population activity. Nature Communications 14, 8019 (2023). [https://doi.org/10.1038/s41467-023-43741-x](https://doi.org/10.1038/s41467-023-43741-x)

```bibtex
@article{Hoffmann2023,
  title={Blazed oblique plane microscopy reveals scale-invariant inference of brain-wide population activity},
  author={Hoffmann, Maximilian and Henninger, Jorg and Veith, Johannes and Richter, Lars and Judkewitz, Benjamin},
  journal={Nature Communications},
  volume={14},
  number={1},
  pages={8019},
  year={2023},
  publisher={Nature Publishing Group},
  doi={10.1038/s41467-023-43741-x}
}
```

## See also

- [Advanced Normalization Tools in Python (ANTsPy)](https://github.com/ANTsX/ANTsPy)
- [Computational Morphometry Toolkit (CMTK)](https://www.nitrc.org/projects/cmtk)
- [Constrained Large Deformation Diffeomorphic Image Registration (CLAIRE)](https://github.com/andreasmang/claire)
