<p align="center">
<img width="350" src=https://github.com/softmatterlab/DeepTrack2/blob/develop/assets/logo.png?raw=true>
</p>
<h3 align="center">A comprehensive deep learning framework for digital microscopy.</h3>
<p align="center">
<a href="/LICENSE" alt="licence"><img src="https://img.shields.io/github/license/softmatterlab/DeepTrack-2.0"></a>
<a href="https://badge.fury.io/py/deeptrack"><img src="https://badge.fury.io/py/deeptrack.svg" alt="PyPI version"></a>
<a href="https://softmatterlab.github.io/DeepTrack2/deeptrack.html"><img src="https://img.shields.io/badge/docs-passing-green" alt="Documentation"></a>
<a href="https://badge.fury.io/py/deeptrack"><img src="https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9%20%7C%203.10-blue" alt="Python version"></a>
<a href="https://doi.org/10.1063/5.0034891" alt="DOI">
<img src="https://img.shields.io/badge/DOI-10.1063%2F5.0034891-blue">
</a>
</p>
<p align="center">
<a href="#installation">Installation</a> •
<a href="#examples-of-applications-using-deeptrack">Examples</a> •
<a href="#basics-to-learn-deeptrack-21">Basics</a> •
<a href="#cite-us">Cite us</a> •
<a href="/LICENSE">License</a>
</p>
We provide tools to create physical simulations of optical systems, to generate and train neural network models, and to analyze experimental data.
# Installation
DeepTrack 2.1 requires Python 3.6 or later.
To install DeepTrack 2.1, open a terminal or command prompt and run:
    pip install deeptrack
If you have a very recent version of Python, you may need to install NumPy _before_ DeepTrack. This is a known issue with scikit-image.
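If you run into that, installing in this order (standard `pip` commands) typically resolves it:

```shell
pip install numpy
pip install deeptrack
```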
### Updating to 2.1 from 2.0
If you are already using DeepTrack 2.0 (pypi version 0.x.x), updating to DeepTrack 2.1 (pypi version 1.x.x) is painless. If you have followed deprecation warnings, no change to your code is needed. There are two breaking changes:
- The deprecated operator `+` for chaining features has been removed. Use the `>>` operator instead.
- The deprecated operator `**` for duplicating a feature has been removed. Use the `^` operator instead.
If you notice any other changes in behavior, please report them in the issues tab.
# Examples of applications using DeepTrack
DeepTrack is a general-purpose deep learning framework for microscopy, meaning you can use it for any task you like. Here, we show some common applications!
<br/>
<h3 align="left"> Single particle tracking </h3>
<p align="left">
<img width="300" src=/assets/SPT-ideal.gif?raw=true>
<img width="300" src=/assets/SPT-noisy.gif?raw=true>
<br/>
<a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/paper-examples/2-single_particle_tracking.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> Training a CNN-based single particle tracker using simulated data </a>
<br/>
<a href="https://doi.org/10.1038/s41467-022-35004-y" alt="DOI lodestar">
<img src="https://img.shields.io/badge/DOI-10.1038%2Fs41467--022--35004--y-blue">
</a>
<a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/LodeSTAR/02.%20tracking_particles_of_various_shapes.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> Unsupervised training of a single particle tracker using LodeSTAR </a>
</p>
<br/>
<h3 align="left"> Multi-particle tracking </h3>
<p align="left">
<img width="600" src=/assets/MPT-packed.gif?raw=true>
<br/>
<a href="https://doi.org/10.1038/s41467-022-35004-y" alt="DOI lodestar">
<img src="https://img.shields.io/badge/DOI-10.1038%2Fs41467--022--35004--y-blue">
</a> <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/LodeSTAR/03.track_BF-C2DL-HSC.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> Training LodeSTAR to detect multiple cells from a single image </a>
<br/>
<a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/paper-examples/4-multi-molecule-tracking.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> Training a UNet-based multi-particle tracker using simulated data </a>
</p>
<br/>
<h3 align="left"> Particle tracing </h3>
<p align="left">
<img width="600" src=/assets/Tracing.gif?raw=true>
<br/>
<a href="https://doi.org/10.48550/arXiv.2202.06355" alt="DOI magik">
<img src="https://img.shields.io/badge/DOI-10.48550%2FarXiv.2202.06355-blue">
</a> <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/develop/examples/MAGIK/cell_migration_analysis.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> Training MAGIK to trace migrating cells </a>
</p>
# Basics to learn DeepTrack 2.1
Everybody learns in different ways! Depending on your preferences and what you want to do with DeepTrack, you may want to check out one or more of these resources.
## Getting-started guides
We have two separate series of notebooks that aim to teach you everything you need to know to use DeepTrack to its fullest. The first is a set of six notebooks with a focus on applications.
1. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/get-started/01.%20deeptrack_introduction_tutorial.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> deeptrack_introduction_tutorial </a> gives an overview of how to use DeepTrack 2.1.
2. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/tutorials/tracking_particle_cnn_tutorial.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> tracking_particle_cnn_tutorial </a> demonstrates how to track a point particle with a convolutional neural network (CNN).
3. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/tutorial/tracking_multiple_particles_unet_tutorial.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> tracking_multiple_particles_unet_tutorial </a> demonstrates how to track multiple particles using a U-net.
4. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/tutorials/characterizing_aberrations_tutorial.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> characterizing_aberrations_tutorial </a> demonstrates how to add and characterize aberrations of an optical device.
5. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/tutorials/distinguishing_particles_in_brightfield_tutorial.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> distinguishing_particles_in_brightfield_tutorial </a> demonstrates how to use a U-net to track and distinguish particles of different sizes in brightfield microscopy.
6. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/tutorials/analyzing_video_tutorial.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> analyzing_video_tutorial </a> demonstrates how to create videos and how to train a neural network to analyze them.
The second series focuses on individual topics, introducing them in a natural order.
1. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/get-started/01.%20deeptrack_introduction_tutorial.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> Introducing how to create simulation pipelines and train models. </a>
2. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/get-started/02.%20using_deeptrack_generators.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> Demonstrating data generators. </a>
3. <a href="https://colab.research.google.com/github/softmatterlab/DeepTrack-2.0/blob/master/examples/get-started/03.%20customizing_deeptrack_models.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg"> Demonstrating how to customize models using layer-blocks. </a>
## DeepTrack 2.1 in action
Additionally, we have seven more case studies which are less documented but give additional insight into how to use DeepTrack with real datasets:
1. [MNIST](examples/paper-examples/1-MNIST.ipynb) classifies handwritten digits.
2. [single particle tracking](examples/paper-examples/2-single_particle_tracking.ipynb) tracks experimentally captured videos of a single particle. (Requires opencv-python compiled with ffmpeg to open and read a video.)
3. [single particle sizing](examples/paper-examples/3-particle_sizing.ipynb) extracts the radius and refractive index of particles.
4. [multi-particle tracking](examples/paper-examples/4-multi-molecule-tracking.ipynb) detects quantum dots in a low SNR image.
5. [3-dimensional tracking](examples/paper-examples/5-inline_holography_3d_tracking.ipynb) tracks particles in three dimensions.
6. [cell counting](examples/paper-examples/6-cell_counting.ipynb) counts the number of cells in fluorescence images.
7. [GAN image generation](examples/paper-examples/7-GAN_image_generation.ipynb) uses a GAN to create cell images from masks.
## Model-specific examples
We also have examples that are specific to certain models. These include:
- [*LodeSTAR*](examples/LodeSTAR) for label-free particle tracking.
- [*MAGIK*](deeptrack/models/gnns/) for graph-based particle linking and trace characterization.
## Documentation
The detailed documentation of DeepTrack 2.1 is available at the following link: https://softmatterlab.github.io/DeepTrack2/deeptrack.html
## Video Tutorials
Videos are currently being updated to match the current version of DeepTrack.
## Cite us!
If you use DeepTrack 2.1 in your project, please cite us here:
```
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe.
"Quantitative Digital Microscopy with Deep Learning."
Applied Physics Reviews 8 (2021), 011310.
https://doi.org/10.1063/5.0034891
```
See also:
<https://www.nature.com/articles/s41467-022-35004-y>:
```
Midtvedt, B., Pineda, J., Skärberg, F. et al.
"Single-shot self-supervised object detection in microscopy."
Nat Commun 13, 7492 (2022).
```
<https://arxiv.org/abs/2202.06355>:
```
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, and Carlo Manzo.
"Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion."
arXiv 2202.06355 (2022).
```
<https://doi.org/10.1364/OPTICA.6.000506>:
```
Saga Helgadottir, Aykut Argun, and Giovanni Volpe.
"Digital video microscopy enhanced by deep learning."
Optica 6.4 (2019): 506-513.
```
<https://github.com/softmatterlab/DeepTrack.git>:
```
Saga Helgadottir, Aykut Argun, and Giovanni Volpe.
"DeepTrack." (2019)
```
## Funding
This work was supported by the ERC Starting Grant ComplexSwimmers (Grant No. 677511), the ERC Starting Grant MAPEI (101001267), and the Knut and Alice Wallenberg Foundation.