# Braintracer
Braintracer is a processing pipeline extension for the BrainGlobe API. It enables high-throughput processing with cellfinder, quantifies cell positions and produces figures for visualising cell distributions across datasets.
---
## Installation
First, install Anaconda or Miniconda on your machine.
Open Anaconda Prompt.
Create a Python environment and install braintracer:
`conda create -n env_name python=3.10.6`
`conda activate env_name`
`pip install braintracer`
View your downloaded BrainGlobe atlases with `brainglobe list`
Install the 10um Allen mouse brain atlas: `brainglobe install -a allen_mouse_10um`
Add your data into your working directory as follows:
```
├── WorkingDirectory
│   ├── bt.bat
│   ├── bt_visualiser.ipynb
│   ├── DatasetName1
│   │   ├── SignalChannelName
│   │   │   ├── section_001_01
│   │   │   ├── section_001_02
│   │   ├── BackgroundChannelName
│   ├── DatasetName2
```
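Before running the pipeline, it can help to confirm a dataset folder matches this layout. The sketch below is a hypothetical helper (not part of braintracer); the folder names are the placeholders from the tree above:

```python
from pathlib import Path

# Sketch: check that a working directory matches the layout braintracer
# expects. The dataset and channel names are placeholders, as in the tree
# above; this helper is illustrative, not part of braintracer itself.
def check_dataset_layout(working_dir, dataset, signal_channel, background_channel):
    """Return a list of missing paths; an empty list means the layout looks right."""
    root = Path(working_dir)
    required = [
        root / "bt.bat",
        root / "bt_visualiser.ipynb",
        root / dataset / signal_channel,
        root / dataset / background_channel,
    ]
    return [str(p) for p in required if not p.exists()]

if __name__ == "__main__":
    missing = check_dataset_layout(
        ".", "DatasetName1", "SignalChannelName", "BackgroundChannelName"
    )
    print("Layout OK" if not missing else f"Missing: {missing}")
```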
Note that, for now, `bt.bat` and `bt_visualiser.ipynb` must be copied into the working directory.
On Windows, these files are found here:
`Users/USERNAME/miniconda3/envs/ENV_NAME/Lib/site-packages/braintracer/braintracer`
It is also recommended to install CUDA so that cellfinder can use the GPU:
`conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0`
Then confirm the GPU is detected by TensorFlow:
`python`
`import tensorflow as tf`
`tf.config.list_physical_devices('GPU')`
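The same check can be wrapped in a small standalone script. This is a sketch; `gpu_available` is a hypothetical helper, not part of braintracer or TensorFlow:

```python
# Sketch: wrap the GPU check above in a script. gpu_available is a
# hypothetical helper, not a braintracer or TensorFlow function.
def gpu_available(physical_devices):
    """True if any device in the list reports a GPU device type."""
    return any("GPU" in getattr(d, "device_type", "") for d in physical_devices)

if __name__ == "__main__":
    try:
        import tensorflow as tf  # assumed installed, per the steps above
        devices = tf.config.list_physical_devices("GPU")
    except ImportError:
        devices = []
    print("GPU detected" if gpu_available(devices)
          else "No GPU detected; TensorFlow will fall back to the CPU")
```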
To generate the braintracer directory structure inside `WorkingDirectory`:
• Open Anaconda Prompt
• Activate your environment: `conda activate env_name`
• Navigate to `WorkingDirectory`
• Run the braintracer pre-processing tool: `bt.bat`
• The tool can then be closed - the directories are generated immediately
---
## Usage
Braintracer has two main workflows: pre-processing and visualisation.
### Pre-processing
• Open Anaconda Prompt
• Activate your environment: `conda activate env_name`
• Navigate to `WorkingDirectory`
• Run the braintracer pre-processing tool: `bt.bat`
• Follow the instructions in the terminal
If you already have a .csv from cellfinder containing cell coordinates, follow the above steps but answer `y` when asked `Do results already exist ready for copying?`
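Before answering `y`, it can be worth sanity-checking the .csv. The sketch below is a hypothetical helper; the coordinate column names (`x`, `y`, `z`) are an assumption about the export format, not something braintracer documents:

```python
import csv

# Sketch: count rows and report which coordinate columns a cellfinder-style
# results .csv contains. The column names ("x", "y", "z") are an assumption
# about the export format; this helper is illustrative only.
def summarise_cells_csv(path, coord_columns=("x", "y", "z")):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    present = [c for c in coord_columns if rows and c in rows[0]]
    return {"n_cells": len(rows), "coord_columns_found": present}
```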
### Visualisation
• Open Anaconda Prompt
• Activate your environment: `conda activate env_name`
• Navigate to `WorkingDirectory`
• Open Jupyter with `jupyter-lab`
• In the browser tab that appears, open `bt_visualiser.ipynb`
• Play with your results and save figures all within Jupyter Notebook!
---
## Sample data
If you don't have access to any raw data, you can use the sample data provided.
Move the sample data files into the `WorkingDirectory\braintracer\cellfinder\` directory.
You should then be able to explore this data in the `bt_visualiser.ipynb` notebook via `jupyter notebook` or `jupyter-lab`.
---
## Measure performance with ground truth
To assess the classifier's performance, you will need to generate ground truth data.
Braintracer requires ground truth coordinates in atlas space, so these should be generated in napari with the cellfinder curation plugin.
• Open napari with `napari`
• Navigate to `dataset\cellfinder_[]\registration`
• Load the signal channel `downsampled_standard_channel_0` and background channel `downsampled_standard`
• Open the cellfinder curation plugin and select these layers as the signal and background channels
• Click 'Add training data layers' and select some cells in the cells layer!
• Select both cell layers and go to File... Save selected layer(s)
• Save the file in the following format: `groundtruth_[].xml` (you must type .xml!) within `braintracer\ground_truth`
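To inspect a saved ground-truth file programmatically, the coordinates can be read back out of the XML. A minimal sketch, assuming the CellCounter-style tags (`Marker`, `MarkerX/Y/Z`) that cellfinder's cell XML format uses; the tag names are an assumption here, not a braintracer API:

```python
import xml.etree.ElementTree as ET

# Sketch: extract marker coordinates from a saved ground-truth XML file.
# The tag names (Marker, MarkerX/Y/Z) assume the CellCounter-style format
# used by cellfinder's cell XML files; verify against your own output.
def load_ground_truth(xml_path):
    """Return a list of (x, y, z) tuples for every marker in the file."""
    tree = ET.parse(xml_path)
    return [
        (
            int(marker.findtext("MarkerX")),
            int(marker.findtext("MarkerY")),
            int(marker.findtext("MarkerZ")),
        )
        for marker in tree.getroot().iter("Marker")
    ]
```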
---
## Generate training data to improve the classifier
The classifier requires feedback in order to be improved or retrained.
You can generate training data easily in napari.
• Open napari with `napari`
• Drag the `dataset\cellfinder_[]` folder onto the napari workspace
• Drag the folders containing your signal and background channels
• Move the signal and background channel layers down to the bottom of the layer manager (with signal channel above the background!)
• Make the atlas layer (`allen_mouse_10um`) visible and decrease the opacity to reveal areas during curation
• Go to `Plugins > cellfinder > Curation`
• Set the signal and background image fields to your signal and background layers
• Click `Add training data layers`
• Select the layer you are interested in (`Cells` to mark false positives; `Non cells` for false negatives)
• Select the magnifying glass to move the FOV such that the entire area to be curated is visible but cell markers are still large enough to select
• You can then select the arrow icon, which makes markers selectable without switching back and forth between the two tools
• Begin curation from the caudal end (towards slice #0) and work your way through each slice, switching between the `Cells` and `Non cells` layers depending on the type of false label
• Depending on your strategy, either review every cell (confirming correct classifications too, via `Mark as cell(s)` on the `Cells` layer or `Mark as non cell(s)` on the `Non cells` layer) or review only the subset of cells that appear to be classified incorrectly
• When finished, click `Save training data` and select the output folder
• The plugin will create a file called `training.yml` and folders called `cells` and `non_cells` containing the TIFFs that the classifier will be shown
• Additionally, select both training data layers and go to File... Save selected layer(s)
• Save the file as `name.xml` (you must type .xml!)
The YML file can then be used to retrain the network.
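Before retraining, it can help to confirm the curation output described above is complete. A minimal sketch under the folder layout the plugin produces (`training.yml`, `cells/`, `non_cells/`); the helper itself is hypothetical, not part of braintracer:

```python
from pathlib import Path

# Sketch: confirm the curation output is present and count the example
# TIFF cubes the classifier will be shown. Illustrative helper only;
# assumes the training.yml / cells / non_cells layout described above.
def summarise_training_data(output_dir):
    root = Path(output_dir)
    return {
        "has_yaml": (root / "training.yml").exists(),
        "n_cells": len(list((root / "cells").glob("*.tif*"))),
        "n_non_cells": len(list((root / "non_cells").glob("*.tif*"))),
    }
```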