# lsfm_fuse (LSFM Fusion in Python)
[Build Status](https://github.com/peng-lab/lsfm_fuse/actions)
A Python implementation of the LSFM fusion method (formerly BigFUSE).
As part of the Leonardo package, this fusion method fuses LSFM datasets illuminated via opposing illumination lenses and/or detected via opposing detection lenses.
---
## Quick Start
### Fusing two datasets illuminated with opposite light sources
#### Use as Python API
(1) Provide two filenames
Suppose we have two to-be-processed volumes "A.tif" and "B.tif" saved in folder `sample_name` under the path `data_path`.
The fusion result, together with the intermediate results, will be saved in folder `save_folder` under the path `save_path`.
If "A.tif" and "B.tif" are illuminated from the top and bottom respectively (in image space):
```python
from lsfm_fuse import FUSE_illu
exe = FUSE_illu()  # run with default parameters for training
out = exe.train(data_path=data_path,
                sample_name=sample_name,
                top_illu_data="A.tif",
                bottom_illu_data="B.tif",
                save_path=save_path,
                save_folder=save_folder,
                save_separate_results=True,
                sparse_sample=False,
                cam_pos="front",
                )
```
where `save_separate_results` indicates whether to save the separate results before stitching (by default False), `sparse_sample` indicates whether the specimen is sparse (by default False), and `cam_pos` indicates whether the detection device is at the front or back in image space (by default "front"). Two folders, named "A" and "B", will be created under `save_folder` to keep intermediate results, and the fusion result, "illuFusionResult.tif", will be saved in folder "A", i.e., the folder named after the volume illuminated from the top. The fusion result is also returned as `out`.
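Once fusion finishes, the saved result can be inspected like any TIFF stack. Below is a minimal sketch (assuming `tifffile` is available; the path follows the folder layout described above):
```python
import os

import tifffile  # assumption: any TIFF reader would work

# <save_path>/<save_folder>/A/illuFusionResult.tif, per the layout described above
result_path = os.path.join(save_path, save_folder, "A", "illuFusionResult.tif")
fused = tifffile.imread(result_path)
print(fused.shape, fused.dtype)
```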
Otherwise, if "A.tif" and "B.tif" are illuminated from the left and right respectively:
```python
out = exe.train(data_path=data_path,
                sample_name=sample_name,
                left_illu_data="A.tif",
                right_illu_data="B.tif",
                save_path=save_path,
                save_folder=save_folder,
                )
```
Alternatively, `FUSE_illu` can be initialized with user-defined training parameters; the full list of input arguments in `__init__` is:
```
require_precropping: bool = True,
precropping_params: list[int, int, int, int] = [],
resample_ratio: int = 2,
window_size: list[int, int] = [5, 59],
poly_order: list[int, int] = [2, 2],
n_epochs: int = 50,
require_segmentation: bool = True,
skip_illuFusion: bool = True,
destripe_preceded: bool = False,
destripe_params: Dict = None,
device: str = "cuda",
```
(2) Provide two arrays, i.e., the stacks to be processed, plus the necessary parameters (suitable for use in napari)
Suppose we have two to-be-processed volumes `img_arr1` (illuminated from the top) and `img_arr2` (illuminated from the bottom) that have been read in as `np.ndarray` or `dask.array.core.Array`.
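For instance, the arrays might be loaded like this (a sketch for illustration; `tifffile` and `dask` are assumptions, not requirements of lsfm_fuse itself):
```python
import dask.array as da
import tifffile

img_arr1 = tifffile.imread("A.tif")                 # np.ndarray, illuminated from the top
img_arr2 = da.from_array(tifffile.imread("B.tif"))  # a dask array works as well
```
Then run fusion as before: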
```python
from lsfm_fuse import FUSE_illu
exe = FUSE_illu()  # run with default parameters for training
out = exe.train(data_path=data_path,
                sample_name=sample_name,
                top_illu_data=img_arr1,
                bottom_illu_data=img_arr2,
                save_path=save_path,
                save_folder=save_folder,
                save_separate_results=True,
                sparse_sample=False,
                cam_pos="front",
                )
```
To save results, two folders "top_illu" and "bottom_illu" will be created under `save_folder`, and the fusion result "illuFusionResult.tif" will be kept in folder "top_illu".
The same applies for left-right illumination orientations: simply use the arguments `left_illu_data` and `right_illu_data` instead of `top_illu_data` and `bottom_illu_data`.
#### Run from command line for batch processing
Suppose we have two to-be-processed volumes saved in /path/to/my/sample_name/image1.tiff (illuminated from the top) and /path/to/my/sample_name/image2.tiff (illuminated from the bottom), and we'd like to save the result in /save_path/save_folder:
```bash
fuse_illu --data_path /path/to/my \
--sample_name sample_name \
--top_illu_data image1.tiff \
--bottom_illu_data image2.tiff \
--save_path /save_path \
--save_folder save_folder
```
In addition, all training parameters can be changed on the command line:
```bash
fuse_illu --data_path /path/to/my \
--sample_name sample_name \
--top_illu_data image1.tiff \
--bottom_illu_data image2.tiff \
--save_path /save_path \
--save_folder save_folder \
--window_size 5,59 \
--poly_order 2,2
```
A full list of changeable arguments:
```
usage: run_fuse_illu --data_path
--sample_name
--save_path
--save_folder
[--require_precropping "True"]
[--precropping_params []]
[--resample_ratio 2]
[--window_size [5, 59]]
[--poly_order [2, 2]]
[--n_epochs 50]
[--require_segmentation "True"]
[--device "cpu"]
[--top_illu_data None]
[--bottom_illu_data None]
[--left_illu_data None]
[--right_illu_data None]
[--camera_position ""]
[--cam_pos "front"]
[--sparse_sample "False"]
[--save_separate_results "False"]
```
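To batch over many samples, the CLI can be driven from a short script. Below is a sketch using Python's `subprocess`; the sample names and file names are hypothetical:
```python
import subprocess

# hypothetical sample folders under /path/to/my, each containing image1.tiff and image2.tiff
for sample in ["sample_01", "sample_02", "sample_03"]:
    subprocess.run(
        [
            "fuse_illu",
            "--data_path", "/path/to/my",
            "--sample_name", sample,
            "--top_illu_data", "image1.tiff",
            "--bottom_illu_data", "image2.tiff",
            "--save_path", "/save_path",
            "--save_folder", sample,
        ],
        check=True,  # raise if a run fails
    )
```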
### Fusing four datasets with dual-sided illumination and dual-sided detection
#### Use as Python API
(1) Provide four filenames
Suppose we have four to-be-processed volumes "A.tif", "B.tif", "C.tif" and "D.tif" saved in folder `sample_name` under the path `data_path`.
The fusion result, together with the intermediate results, will be saved in folder `save_folder` under the path `save_path`.
If "A.tif", "B.tif", "C.tif" and "D.tif" are top illuminated+ventral detected, bottom illuminated+ventral detected, top illuminated+dorsal detected, and bottom illuminated+dorsal detected (in image space), respectively:
```python
from lsfm_fuse import FUSE_det
exe = FUSE_det()  # run with default parameters for training
out = exe.train(data_path=data_path,
                sample_name=sample_name,
                require_registration=require_registration,
                require_flipping_along_illu_for_dorsaldet=require_flipping_along_illu_for_dorsaldet,
                require_flipping_along_det_for_dorsaldet=require_flipping_along_det_for_dorsaldet,
                top_illu_ventral_det_data="A.tif",
                bottom_illu_ventral_det_data="B.tif",
                top_illu_dorsal_det_data="C.tif",
                bottom_illu_dorsal_det_data="D.tif",
                save_path=save_path,
                save_folder=save_folder,
                save_separate_results=False,
                sparse_sample=False,
                z_spacing=z_spacing,
                xy_spacing=xy_spacing,
                )
```
where `require_registration` indicates whether registration between the two detection devices is needed; by default, we assume datasets with different illumination sources but the same detection device are registered in advance. `require_flipping_along_illu_for_dorsaldet` and `require_flipping_along_det_for_dorsaldet` indicate whether flipping along the illumination and detection directions, respectively, is needed to put the inputs in a common space; flipping only applies to datasets whose detection device is at the back. `save_separate_results` and `sparse_sample` are the same as in FUSE_illu. If `require_registration` is True, `z_spacing` and `xy_spacing` (the axial and lateral resolutions, respectively) are mandatory for registration; otherwise, they are optional.
Four folders "A", "B", "C" and "D" will be created under `save_folder` to keep intermediate results, and the fusion result, "quadrupleFusionResult.tif", will be saved under folder "A".
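As a concrete illustration, these arguments might be set as follows before calling `train` (the spacing values are hypothetical; take them from your acquisition metadata):
```python
require_registration = True  # the two detection views still need registration
require_flipping_along_illu_for_dorsaldet = True   # flip dorsal stacks along the illumination axis
require_flipping_along_det_for_dorsaldet = False   # no flip along the detection axis needed
z_spacing = 4.0    # axial resolution (hypothetical value)
xy_spacing = 0.65  # lateral resolution (hypothetical value)
```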
Otherwise, illumination can be of horizontal orientation (in image space):
```python
out = exe.train(data_path=data_path,
                sample_name=sample_name,
                require_registration=require_registration,
                require_flipping_along_illu_for_dorsaldet=require_flipping_along_illu_for_dorsaldet,
                require_flipping_along_det_for_dorsaldet=require_flipping_along_det_for_dorsaldet,
                left_illu_ventral_det_data="A.tif",
                right_illu_ventral_det_data="B.tif",
                left_illu_dorsal_det_data="C.tif",
                right_illu_dorsal_det_data="D.tif",
                save_path=save_path,
                save_folder=save_folder,
                )
```
Alternatively, `FUSE_det` can be initialized with user-defined training parameters; the full list of input arguments in `__init__` is:
```
require_precropping: bool = True,
precropping_params: list[int, int, int, int] = [],
resample_ratio: int = 2,
window_size: list[int, int] = [5, 59],
poly_order: list[int, int] = [2, 2],
n_epochs: int = 50,
require_segmentation: bool = True,
device: str = "cpu",
```
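As with FUSE_illu, a hedged sketch of overriding some of these defaults (the values are illustrative):
```python
from lsfm_fuse import FUSE_det

exe = FUSE_det(
    require_precropping=True,
    window_size=[5, 59],
    poly_order=[2, 2],
    n_epochs=50,
    require_segmentation=True,
    device="cpu",
)
```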
(2) Provide four arrays, i.e., the stacks to be processed, plus the necessary parameters (suitable for use in napari)
Suppose we have four to-be-processed volumes `img_arr1` (top illuminated+ventral detected), `img_arr2` (bottom illuminated+ventral detected), `img_arr3` (top illuminated+dorsal detected) and `img_arr4` (bottom illuminated+dorsal detected), all read in as `np.ndarray` or `dask.array.core.Array`:
```python
from lsfm_fuse import FUSE_det
exe = FUSE_det()  # run with default parameters for training
out = exe.train(data_path=data_path,
                sample_name=sample_name,
                require_registration=require_registration,
                require_flipping_along_illu_for_dorsaldet=require_flipping_along_illu_for_dorsaldet,
                require_flipping_along_det_for_dorsaldet=require_flipping_along_det_for_dorsaldet,
                top_illu_ventral_det_data=img_arr1,
                bottom_illu_ventral_det_data=img_arr2,
                top_illu_dorsal_det_data=img_arr3,
                bottom_illu_dorsal_det_data=img_arr4,
                save_path=save_path,
                save_folder=save_folder,
                )
```
To save results, four folders "top_illu+ventral_det", "bottom_illu+ventral_det", "top_illu+dorsal_det", and "bottom_illu+dorsal_det" will be created under `save_folder`, and the fusion result "quadrupleFusionResult.tif" will be kept in folder "top_illu+ventral_det".
The same applies for left-right illumination orientations: simply use the arguments `left_illu_ventral_det_data`, `right_illu_ventral_det_data`, `left_illu_dorsal_det_data` and `right_illu_dorsal_det_data`.
#### Run from command line for batch processing
Suppose we have four to-be-processed volumes saved in /path/to/my/sample_name/image1.tiff (top illuminated+ventral detected), /path/to/my/sample_name/image2.tiff (bottom illuminated+ventral detected), /path/to/my/sample_name/image3.tiff (top illuminated+dorsal detected), and /path/to/my/sample_name/image4.tiff (bottom illuminated+dorsal detected), and we'd like to save the result in /save_path/save_folder:
```bash
bigfuse_det --data_path /path/to/my \
--sample_name sample_name \
--require_registration False \
--require_flipping_along_illu_for_dorsaldet True \
--require_flipping_along_det_for_dorsaldet False \
--top_illu_ventral_det_data image1.tiff \
--bottom_illu_ventral_det_data image2.tiff \
--top_illu_dorsal_det_data image3.tiff \
--bottom_illu_dorsal_det_data image4.tiff \
--save_path /save_path \
--save_folder save_folder
```
In addition, all training parameters can be changed on the command line. A full list of changeable arguments:
```
usage: run_fuse_det --data_path
--sample_name
--save_path
--save_folder
--require_registration
--require_flipping_along_illu_for_dorsaldet
--require_flipping_along_det_for_dorsaldet
[--require_precropping "True"]
[--precropping_params []]
[--resample_ratio 2]
[--window_size [5, 59]]
[--poly_order [2, 2]]
[--n_epochs 50]
[--require_segmentation "True"]
[--skip_illuFusion "False"]
[--destripe_preceded "False"]
[--destripe_params None]
[--device "cpu"]
[--sparse_sample "False"]
[--top_illu_ventral_det_data None]
[--bottom_illu_ventral_det_data None]
[--top_illu_dorsal_det_data None]
[--bottom_illu_dorsal_det_data None]
[--left_illu_ventral_det_data None]
[--right_illu_ventral_det_data None]
[--left_illu_dorsal_det_data None]
[--right_illu_dorsal_det_data None]
[--save_separate_results "False"]
[--z_spacing None]
[--xy_spacing None]
```
## Installation
**Stable Release:** `pip install lsfm_fuse`<br>
**Development Head:** `pip install git+https://github.com/peng-lab/lsfm_fuse.git`
## Development
See [CONTRIBUTING.md](CONTRIBUTING.md) for information related to developing the code.
**MIT license**