# aind-dynamic-foraging-basic-analysis
## Template Usage
- To use this template, click the green `Use this template` button and `Create new repository`.
- After GitHub creates the new repository, wait an extra minute for the initialization scripts to finish organizing the repo.
- To enable automatic semantic version increments: in the repository, go to `Settings` and `Collaborators and teams`. Click the green `Add people` button. Add `svc-aindscicomp` as an admin. Modify `.github/workflows/tag_and_publish.yml` and remove the if statement on line 10. The semantic version will then be incremented every time code is committed to the main branch.
- To publish to PyPI, enable semantic versioning and uncomment the publish block in `.github/workflows/tag_and_publish.yml`. The code will then be published to PyPI every time code is committed to the main branch.
- The `.github/workflows/test_and_lint.yml` file runs automated tests and style checks every time a pull request is opened. If the checks are undesired, `test_and_lint.yml` can be deleted. The strictness of the code coverage level and other settings can be adjusted in the `pyproject.toml` and `.flake8` files.
## Installation
To use the software, in the root directory, run
```bash
pip install -e .
```
To develop the code, run
```bash
pip install -e .[dev]
```
## Usage
### Annotate licks
To create a dataframe of licks annotated with licking bout starts/stops, cue-responsive licks, reward-triggered licks, and intertrial choices:
```python
import aind_dynamic_foraging_basic_analysis.licks.annotation as annotation

df_licks = annotation.annotate_licks(nwb)
```
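The returned object is a pandas DataFrame that can be filtered like any other. The column names below are illustrative assumptions, not the library's documented schema; a minimal sketch of isolating bout starts on toy data:

```python
import pandas as pd

# Toy stand-in for the annotated lick DataFrame; the real columns come
# from annotation.annotate_licks and may be named differently (assumed here).
df_licks = pd.DataFrame({
    "timestamps": [1.0, 1.1, 1.2, 5.0, 5.1],  # lick times (seconds)
    "bout_start": [True, False, False, True, False],
})

# Keep only the licks that begin a bout
bout_starts = df_licks[df_licks["bout_start"]]
print(len(bout_starts))
```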
You can then plot interlick interval analyses with:
```python
import aind_dynamic_foraging_basic_analysis.licks.plot_interlick_interval as pii

# Plot interlick interval of all licks
pii.plot_interlick_interval(df_licks)

# Plot interlick interval for left and right licks separately
pii.plot_interlick_interval(df_licks, categories='event')
```
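An interlick interval is just the difference between successive lick times; as a rough sketch of what these plots summarize (toy lick times, not from a real session):

```python
import numpy as np

# Toy lick times in seconds: a three-lick bout, a pause, then two licks
lick_times = np.array([1.00, 1.12, 1.25, 4.80, 4.95])

# Interlick intervals: successive differences of lick times
ili = np.diff(lick_times)
print(ili.round(2))
```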
### Create lick analysis report
To create a figure with several licking pattern analyses:
```python
import aind_dynamic_foraging_basic_analysis.licks.lick_analysis as lick_analysis

lick_analysis.plot_lick_analysis(nwb)
```
### Compute trial-by-trial metrics
To annotate the trials dataframe with trial-by-trial metrics:
```python
import aind_dynamic_foraging_basic_analysis.metrics.trial_metrics as tm

df_trials = tm.compute_all_trial_metrics(nwb)
```
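To give a flavor of what a trial-by-trial metric looks like, here is a sketch of a rolling response rate on toy data. The library's `compute_all_trial_metrics` computes its own set of metrics, which may differ from this assumption:

```python
import pandas as pd

# Toy trials table: 1 = animal responded, 0 = no response
df_trials = pd.DataFrame({"response": [1, 0, 1, 1, 0, 1, 1, 1]})

# Rolling response rate over the last 4 trials (illustrative metric)
df_trials["response_rate"] = (
    df_trials["response"].rolling(window=4, min_periods=1).mean()
)
print(df_trials["response_rate"].iloc[-1])
```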
### Plot interactive session scroller
```python
import aind_dynamic_foraging_basic_analysis.plot.plot_session_scroller as pss

pss.plot_session_scroller(nwb)
```
To disable lick bout and other annotations:
```python
pss.plot_session_scroller(nwb, plot_bouts=False)
```
This function automatically plots FIP data if available. To change which processing method is plotted, use:
```python
pss.plot_session_scroller(nwb, processing="bright")
```
To change which trial-by-trial metrics are plotted:
```python
pss.plot_session_scroller(nwb, metrics=['response_rate'])
```
### Plot FIP PSTH
You can use the `plot_fip` module to compute and plot PSTHs for the FIP data.
To compare one channel to multiple event types:
```python
from aind_dynamic_foraging_basic_analysis.plot import plot_fip as pf

channel = 'G_1_dff-poly'
rewarded_go_cues = nwb.df_trials.query('earned_reward == 1')['goCue_start_time_in_session'].values
unrewarded_go_cues = nwb.df_trials.query('earned_reward == 0')['goCue_start_time_in_session'].values
pf.plot_fip_psth_compare_alignments(
    nwb,
    {'rewarded goCue': rewarded_go_cues, 'unrewarded goCue': unrewarded_go_cues},
    channel,
    censor=True,
)
```
To compare multiple channels to the same event type:
```python
pf.plot_fip_psth(nwb, 'goCue_start_time')
```
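Conceptually, a PSTH averages signal snippets aligned to event times. The library's `plot_fip` functions handle alignment and plotting internally; this is only a minimal NumPy sketch of the underlying computation, on synthetic data:

```python
import numpy as np

dt = 0.05                                    # sample period (20 Hz)
signal = np.random.default_rng(0).normal(size=2000)  # 100 s toy trace
events = np.array([10.0, 30.0, 50.0])        # event times in seconds
pre, post = 1.0, 2.0                         # window around each event
n_pre, n_post = int(pre / dt), int(post / dt)

# Cut out a fixed-length snippet around each event, then average
traces = []
for t in events:
    i = int(round(t / dt))
    traces.append(signal[i - n_pre: i + n_post])
psth = np.stack(traces).mean(axis=0)
print(psth.shape)  # one mean trace spanning the window
```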
## Contributing
### Linters and testing
There are several libraries used to run linters, check documentation, and run tests.
- Please test your changes using the **coverage** library, which will run the tests and log a coverage report:
```bash
coverage run -m unittest discover && coverage report
```
- Use **interrogate** to check that modules, methods, etc. have been documented thoroughly:
```bash
interrogate .
```
- Use **flake8** to check that code is up to standards (no unused imports, etc.):
```bash
flake8 .
```
- Use **black** to automatically format the code into PEP standards:
```bash
black .
```
- Use **isort** to automatically sort import statements:
```bash
isort .
```
### Pull requests
For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use [Angular](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#commit) style for commit messages. Roughly, they should follow the pattern:
```text
<type>(<scope>): <short summary>
```
where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:
- **build**: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)
- **ci**: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)
- **docs**: Documentation only changes
- **feat**: A new feature
- **fix**: A bugfix
- **perf**: A code change that improves performance
- **refactor**: A code change that neither fixes a bug nor adds a feature
- **test**: Adding missing tests or correcting existing tests
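For example, hypothetical commit messages following this pattern (illustrative only, not real commits from this repository):
```text
feat(licks): add interlick interval histogram
fix(metrics): handle sessions with no rewarded trials
docs: clarify installation instructions
```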
### Semantic Release
The table below, from [semantic release](https://github.com/semantic-release/semantic-release), shows which commit message gets you which release type when `semantic-release` runs (using the default configuration):
| Commit message | Release type |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------- |
| `fix(pencil): stop graphite breaking when too much pressure applied` | ~~Patch~~ Fix Release, Default release |
| `feat(pencil): add 'graphiteWidth' option` | ~~Minor~~ Feature Release |
| `perf(pencil): remove graphiteWidth option`<br><br>`BREAKING CHANGE: The graphiteWidth option has been removed.`<br>`The default graphite width of 10mm is always used for performance reasons.` | ~~Major~~ Breaking Release <br /> (Note that the `BREAKING CHANGE: ` token must be in the footer of the commit) |
### Documentation
To generate the rst source files for documentation, run
```bash
sphinx-apidoc -o doc_template/source/ src
```
Then to create the documentation HTML files, run
```bash
sphinx-build -b html doc_template/source/ doc_template/build/html
```
More info on sphinx installation can be found [here](https://www.sphinx-doc.org/en/master/usage/installation.html).