| Field | Value |
| --- | --- |
| Name | copick-torch |
| Version | 0.2.1 |
| Summary | Torch utilities for copick |
| Upload time | 2025-07-17 00:01:00 |
| Home page | None |
| Maintainer | None |
| Author | None |
| Docs URL | None |
| Requires Python | >=3.9 |
**License:** MIT License

Copyright (c) 2024 Kyle I S Harrington

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
**Keywords:** annotation, copick, cryo-et, cryoet, pytorch, tomography, torch
**Requirements:** none recorded
# copick-torch
[codecov](https://codecov.io/gh/copick/copick-torch)
Torch utilities for [copick](https://github.com/copick/copick)
## Dataset classes
- `SimpleCopickDataset`: Main dataset class with caching and augmentation support
- `MinimalCopickDataset`: Simpler dataset implementation with optional preloading
### MinimalCopickDataset Usage
#### Direct usage in Python
```python
from copick_torch import MinimalCopickDataset
from torch.utils.data import DataLoader
# Create a minimal dataset - no caching, no augmentation
dataset = MinimalCopickDataset(
    dataset_id=10440,             # Dataset ID from CZ portal
    overlay_root='/tmp/test/',    # Overlay root directory
    boxsize=(48, 48, 48),         # Size of the subvolumes
    voxel_spacing=10.012,         # Voxel spacing
    include_background=True,      # Include background samples
    background_ratio=0.2,         # Background ratio
    min_background_distance=48,   # Minimum distance from particles for background
    max_samples=None              # No limit on samples
)

# Print dataset information
print(f"Dataset size: {len(dataset)}")
print(f"Classes: {dataset.keys()}")
print(f"Class distribution: {dataset.get_class_distribution()}")

# Create a DataLoader
dataloader = DataLoader(
    dataset,
    batch_size=8,
    shuffle=True,
    num_workers=4,
    pin_memory=True
)

# Training loop
for volume, label in dataloader:
    # volume shape: [batch_size, 1, depth, height, width]
    # label: [batch_size] class indices
    # Your training code here
    pass
```
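To make the "your training code here" placeholder concrete, here is a minimal, hypothetical training step. The tiny 3D CNN, optimizer, class count, and synthetic batch below are illustrative stand-ins, not part of copick-torch; the batch is shaped like the DataLoader output described above:

```python
import torch
import torch.nn as nn

# Tiny illustrative model: one 3D conv, global pooling, linear head.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 5),  # 5 classes, purely illustrative
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Synthetic batch standing in for one (volume, label) pair from the DataLoader:
# volume: [batch_size, 1, depth, height, width], label: [batch_size]
volume = torch.randn(8, 1, 48, 48, 48)
label = torch.randint(0, 5, (8,))

# One optimization step.
optimizer.zero_grad()
logits = model(volume)
loss = criterion(logits, label)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

In a real loop this step would run once per batch yielded by the DataLoader, with the model and data moved to the same device.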
#### Saving and loading datasets
The `MinimalCopickDataset` supports preloading all subvolumes into memory and saving the actual tensor data to disk, making it easy to share and load datasets without needing access to the original tomogram data:
```python
from copick_torch import MinimalCopickDataset
# Create a dataset with preloading enabled (default)
dataset = MinimalCopickDataset(
    dataset_id=10440,
    overlay_root='/tmp/copick_overlay',
    preload=True  # This preloads all subvolumes into memory
)
# Save the dataset with preloaded tensors
dataset.save('/path/to/save')
# Load the dataset from the saved tensors (no need for original tomogram data)
loaded_dataset = MinimalCopickDataset.load('/path/to/save')
```
You can also use the provided utility script to save a dataset directly from the command line:
```bash
# Save with preloading (default)
python scripts/save_torch_dataset.py --dataset_id 10440 --output_dir /path/to/save
# Save without preloading (not recommended)
python scripts/save_torch_dataset.py --dataset_id 10440 --output_dir /path/to/save --no-preload
```
Options:
```
  --dataset_id DATASET_ID   Dataset ID from the CZ cryoET Data Portal
  --output_dir OUTPUT_DIR   Directory to save the dataset
  --overlay_root OVERLAY_ROOT
                            Root directory for overlay storage (default: /tmp/copick_overlay)
  --boxsize Z Y X           Size of subvolumes to extract (default: 48 48 48)
  --voxel_spacing SPACING   Voxel spacing to use (default: 10.012)
  --include_background      Include background samples in the dataset
  --background_ratio RATIO  Ratio of background to particle samples (default: 0.2)
  --no-preload              Disable preloading tensors (not recommended)
  --verbose                 Enable verbose output
```
#### Inspecting saved datasets
You can display detailed information about a saved dataset using the provided utility script:
```bash
python scripts/info_torch_dataset.py --input_dir /path/to/saved/dataset
```
This will display:
- Basic dataset metadata (dataset ID, box size, voxel spacing, etc.)
- Class mapping information
- Total number of samples
- Class distribution (counts and percentages)
- Tomogram information
- Sample volume shape
The script can also generate visualizations:
```bash
python scripts/info_torch_dataset.py --input_dir /path/to/dataset --output_pdf dataset_report.pdf --samples_per_class 5
```
Options:
```
  --input_dir INPUT_DIR     Directory where the dataset is saved
  --output_pdf OUTPUT_PDF   Path to save visualization PDF (default: input_dir/dataset_overview.pdf)
  --samples_per_class SAMPLES_PER_CLASS
                            Number of sample visualizations per class (default: 3)
  --verbose                 Enable verbose output
```
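The class-distribution view the script prints (counts plus percentages) can be reproduced from any list of per-sample class indices; the labels below are made up for illustration:

```python
from collections import Counter

labels = [0, 0, 1, 2, 2, 2]  # hypothetical per-sample class indices
counts = Counter(labels)
total = sum(counts.values())
for cls, n in sorted(counts.items()):
    print(f"class {cls}: {n} ({100 * n / total:.1f}%)")
# → class 0: 2 (33.3%)
#   class 1: 1 (16.7%)
#   class 2: 3 (50.0%)
```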
## Quick demo
```bash
# Simple training example
uv run examples/simple_training.py
# Fourier augmentation demo
uv run examples/fourier_augmentation_demo.py
# MONAI-based augmentation demo
uv run examples/monai_augmentation_demo.py
# SplicedMixup with Gaussian blur visualization
uv run examples/spliced_mixup_example.py
# SplicedMixup with Fourier augmentation visualization
uv run examples/spliced_mixup_fourier_example.py
# Generate augmentation documentation
python scripts/generate_augmentation_docs.py
# Generate dataset documentation
python scripts/generate_dataset_examples.py
# Save dataset to disk with preloaded tensors
python scripts/save_torch_dataset.py --dataset_id 10440 --output_dir /path/to/save
# Display information about a saved dataset
python scripts/info_torch_dataset.py --input_dir /path/to/save
# Visualize dataset with orthogonal views and projections
python examples/visualize_dataset.py --dataset_dir /path/to/save --output_file report.png
# Create enhanced visual report with sum projections
python examples/visualize_dataset_enhanced.py --dataset_dir /path/to/save --output_file report_enhanced.png
```
## Dataset Visualization
The repository includes two scripts for visualizing datasets:
### Basic Visualization
The `visualize_dataset.py` script creates a simple visualization of dataset samples with orthogonal views and maximum intensity projections:
```bash
python examples/visualize_dataset.py --dataset_dir /path/to/saved/dataset --output_file report.png
```
Options:
```
  --dataset_dir DATASET_DIR   Directory where the dataset was saved
  --output_file OUTPUT_FILE   Output file for the visualization (default: dataset_visualization.png)
  --samples_per_class SAMPLES_PER_CLASS
                              Number of samples to display per class (default: 2)
  --dpi DPI                   DPI for the output image (default: 150)
  --verbose                   Enable verbose output
```
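The underlying idea — central orthogonal slices plus a per-axis maximum intensity projection (MIP) — can be sketched with NumPy and matplotlib. The volume here is synthetic and the layout is a simplification; the script's actual figure differs:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Stand-in volume; in practice this would be a subvolume from the saved dataset.
vol = np.random.randn(48, 48, 48)
cz, cy, cx = (s // 2 for s in vol.shape)

views = {
    "XY slice": vol[cz],            # central slice, axis 0
    "XZ slice": vol[:, cy],         # central slice, axis 1
    "YZ slice": vol[:, :, cx],      # central slice, axis 2
    "MIP z": vol.max(axis=0),       # maximum intensity projection per axis
    "MIP y": vol.max(axis=1),
    "MIP x": vol.max(axis=2),
}

fig, axes = plt.subplots(2, 3, figsize=(9, 6))
for ax, (title, img) in zip(axes.ravel(), views.items()):
    ax.imshow(img, cmap="gray")
    ax.set_title(title)
    ax.axis("off")
fig.tight_layout()
fig.savefig("orthogonal_views.png", dpi=150)
```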
### Enhanced Visualization
The `visualize_dataset_enhanced.py` script creates a more elegant visualization with sum projections and better layout:
```bash
python examples/visualize_dataset_enhanced.py --dataset_dir /path/to/saved/dataset --output_file report_enhanced.png
```
Options:
```
  --dataset_dir DATASET_DIR   Directory where the dataset was saved
  --output_file OUTPUT_FILE   Output file for the visualization (default: dataset_visualization_enhanced.png)
  --samples_per_class SAMPLES_PER_CLASS
                              Number of samples to display per class (default: 2)
  --dpi DPI                   DPI for the output image (default: 150)
  --cmap CMAP                 Colormap to use for visualization (default: viridis)
  --verbose                   Enable verbose output
```
## Features
### Augmentations
`copick-torch` includes various MONAI-based data augmentation techniques for 3D tomographic data:
- **MixupTransform**: MONAI-compatible implementation of the Mixup technique (Zhang et al., 2018), creating virtual training examples by mixing pairs of inputs and their labels with a random proportion.
- **FourierAugment3D**: MONAI-compatible implementation of Fourier-based augmentation that operates in the frequency domain, including random frequency dropout, phase noise injection, and intensity scaling.
Example usage of MONAI-based Fourier augmentation:
```python
from copick_torch.monai_augmentations import FourierAugment3D
# Create the augmenter
fourier_aug = FourierAugment3D(
    freq_mask_prob=0.3,                   # Probability of masking frequency components
    phase_noise_std=0.1,                  # Standard deviation of phase noise
    intensity_scaling_range=(0.8, 1.2),   # Range for random intensity scaling
    prob=1.0                              # Probability of applying the transform
)
# Apply to a 3D volume (with PyTorch tensor)
augmented_volume = fourier_aug(volume_tensor)
```
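For reference, the core operation behind Mixup (Zhang et al., 2018) — mixing each example with a randomly paired partner, x̃ = λ·xᵢ + (1−λ)·xⱼ, and mixing their labels the same way — can be sketched framework-agnostically in NumPy. The function name and Beta parameter below are illustrative, not the library's `MixupTransform` API:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Mix a batch with a shuffled copy of itself."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)     # mixing proportion, drawn once per batch
    perm = rng.permutation(len(x))   # random pairing of examples
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mixed, y_mixed, lam

# Example: a batch of four small 3D volumes with one-hot labels for 3 classes.
x = np.random.randn(4, 1, 8, 8, 8)
y = np.eye(3)[[0, 1, 2, 0]]
x_mixed, y_mixed, lam = mixup_batch(x, y)
```

Because λ is drawn from a symmetric Beta distribution, the mixed labels stay valid probability vectors (each row still sums to 1).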
### Documentation
See the [docs directory](./docs) for documentation and examples:
- [Augmentation Examples](./docs/augmentation_examples): Visualizations of various augmentations applied to different classes from the dataset used in the `spliced_mixup_example.py` example.
- [Dataset Examples](./docs/dataset_examples): Examples of volumes from each class in the dataset used by the CopickDataset classes.
## Citation
If you use `copick-torch` in your research, please cite:
```bibtex
@article{harrington2024open,
  title={Open-source Tools for CryoET Particle Picking Machine Learning Competitions},
  author={Harrington, Kyle I. and Zhao, Zhuowen and Schwartz, Jonathan and Kandel, Saugat and Ermel, Utz and Paraan, Mohammadreza and Potter, Clinton and Carragher, Bridget},
  journal={bioRxiv},
  year={2024},
  doi={10.1101/2024.11.04.621608}
}
```
This software was introduced at the NeurIPS 2024 Workshop on Machine Learning in Structural Biology as "Open-source Tools for CryoET Particle Picking Machine Learning Competitions".
## Development
### Install development dependencies
```bash
pip install ".[test]"
```
### Run tests
```bash
pytest
```
### View coverage report
```bash
# Generate terminal, HTML and XML coverage reports
pytest --cov=copick_torch --cov-report=term --cov-report=html --cov-report=xml
```
Or use the self-contained coverage script:
```bash
# Run tests and generate coverage reports with badge
python scripts/coverage_report.py --term
```
After running the tests with coverage, you can:
1. View the terminal report directly in your console
2. Open `htmlcov/index.html` in a browser to see the detailed HTML report
3. View the generated coverage badge (`coverage-badge.svg`)
4. Check the [Codecov dashboard](https://codecov.io/gh/copick/copick-torch) for the project's coverage metrics
## Code of Conduct
This project adheres to the Contributor Covenant [code of conduct](https://github.com/chanzuckerberg/.github/blob/main/CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to [opensource@chanzuckerberg.com](mailto:opensource@chanzuckerberg.com).
## Reporting Security Issues
If you believe you have found a security issue, please responsibly disclose by contacting us at [security@chanzuckerberg.com](mailto:security@chanzuckerberg.com).
## Raw data
{
    "_id": null,
    "home_page": null,
    "name": "copick-torch",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": null,
    "keywords": "annotation, copick, cryo-et, cryoet, pytorch, tomography, torch",
    "author": null,
    "author_email": "Kyle Harrington <czi@kyleharrington.com>, Jonathan Schwartz <jonathan.schwartz@czii.org>",
    "download_url": "https://files.pythonhosted.org/packages/aa/e7/24dd05e3ab3b2d2f2bcb23c8ff38ee3d5401347f84fdc9b6ec151bd381bd/copick_torch-0.2.1.tar.gz",
    "platform": null,
    "bugtrack_url": null,
    "summary": "Torch utilities for copick",
    "version": "0.2.1",
    "project_urls": {
        "Bug Tracker": "https://github.com/copick/copick-torch/issues",
        "Documentation": "https://github.com/copick/copick-torch#README.md",
        "Issues": "https://github.com/copick/copick-torch/issues",
        "Repository": "https://github.com/copick/copick-torch",
        "Source Code": "https://github.com/copick/copick-torch",
        "User Support": "https://github.com/copick/copick-torch/issues"
    },
    "split_keywords": [
        "annotation",
        " copick",
        " cryo-et",
        " cryoet",
        " pytorch",
        " tomography",
        " torch"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "4d90115c302d2e7f8cd2bbece8ea51b0b8cbae99a7b50c8cc944bef0919abf46",
                "md5": "4a6251b3b095d5c73a93c0734b964c1f",
                "sha256": "eebfebf7a21f8045074623b5a185e43e569091b4ed69bd0e74e295a0000d64b1"
            },
            "downloads": -1,
            "filename": "copick_torch-0.2.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "4a6251b3b095d5c73a93c0734b964c1f",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9",
            "size": 40967,
            "upload_time": "2025-07-17T00:00:59",
            "upload_time_iso_8601": "2025-07-17T00:00:59.407558Z",
            "url": "https://files.pythonhosted.org/packages/4d/90/115c302d2e7f8cd2bbece8ea51b0b8cbae99a7b50c8cc944bef0919abf46/copick_torch-0.2.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "aae724dd05e3ab3b2d2f2bcb23c8ff38ee3d5401347f84fdc9b6ec151bd381bd",
                "md5": "33e391175a77b4230547618856af82e9",
                "sha256": "7743eddfa80158dc19e782dad63ea49e9ce138c7d58c3cc5e89150935010eb74"
            },
            "downloads": -1,
            "filename": "copick_torch-0.2.1.tar.gz",
            "has_sig": false,
            "md5_digest": "33e391175a77b4230547618856af82e9",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 4237095,
            "upload_time": "2025-07-17T00:01:00",
            "upload_time_iso_8601": "2025-07-17T00:01:00.823712Z",
            "url": "https://files.pythonhosted.org/packages/aa/e7/24dd05e3ab3b2d2f2bcb23c8ff38ee3d5401347f84fdc9b6ec151bd381bd/copick_torch-0.2.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-17 00:01:00",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "copick",
    "github_project": "copick-torch",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "copick-torch"
}