**Package metadata**

- Name: aind-analysis-arch-result-access
- Version: 0.7.3
- Summary: Generated from aind-library-template
- Author: Allen Institute for Neural Dynamics
- Requires Python: >=3.9
- License: MIT
- Uploaded: 2025-08-20 23:14:37

# aind-analysis-arch-result-access

[![License](https://img.shields.io/badge/license-MIT-brightgreen)](LICENSE)
![Code Style](https://img.shields.io/badge/code%20style-black-black)
[![semantic-release: angular](https://img.shields.io/badge/semantic--release-angular-e10079?logo=semantic-release)](https://github.com/semantic-release/semantic-release)
![Interrogate](https://img.shields.io/badge/interrogate-100.0%25-brightgreen)
![Coverage](https://img.shields.io/badge/coverage-94%25-brightgreen?logo=codecov)
![Python](https://img.shields.io/badge/python->=3.9-blue?logo=python)


APIs to access analysis results in the AIND behavior pipeline.

## Installation

```bash
pip install aind-analysis-arch-result-access
```

## Usage

Try the demo: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/14Hph9QuySbgSQBKl8PGi_nCQfoLcLUI-?usp=sharing)

### Access pipeline v1.0 (Han's "temporary" pipeline)
#### Fetch the session master table shown in the [Streamlit app](https://foraging-behavior-browser.allenneuraldynamics-test.org/)
```python
from aind_analysis_arch_result_access.han_pipeline import get_session_table
df_master = get_session_table(if_load_bpod=False)  # if_load_bpod=True also loads 4000+ older sessions from bpod
```
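The returned `df_master` is a plain pandas DataFrame, so the usual filtering applies. A minimal sketch (assuming `session_date` is stored as an ISO-formatted string, as in the examples below):
```python
# df_master behaves like any pandas DataFrame.
# Assumes `session_date` is an ISO-formatted string (as in the examples below).
recent = df_master.query("session_date >= '2025-01-01'")
print(f"{len(recent)} sessions since 2025-01-01")
print(recent[["subject_id", "session_date"]].head())
```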
#### Fetch logistic regression results
- Get logistic regression results from one session
    ```python
    import pandas as pd

    from aind_analysis_arch_result_access.han_pipeline import get_logistic_regression

    df_logistic = get_logistic_regression(
        df_sessions=pd.DataFrame(
            {
                "subject_id": ["769253"],
                "session_date": ["2025-03-12"],
            }
        ),
        model="Su2022",
    )
    ```
- Get logistic regression results in batch (from any dataframe with `subject_id` and `session_date` columns)
    ```python
    df_logistic = get_logistic_regression(
        df_master.query("subject_id == '769253'"),  # All sessions from a single subject (query from the `df_master` above)
        model="Su2022",
        if_download_figures=True,  # Also download fitting plots
        download_path="./tmp",
    )
    ```
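    The same call works across subjects as well, e.g. pooling two subjects (a sketch; both IDs appear elsewhere in this README, reusing `df_master` from above):
    ```python
    # Any DataFrame with `subject_id` and `session_date` columns works.
    df_logistic = get_logistic_regression(
        df_master.query("subject_id in ['769253', '730945']"),
        model="Su2022",
    )
    ```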

#### Fetch trial table (🚧 under development)
#### Fetch analysis figures (🚧 under development)
### Access pipeline v2.0 (AIND analysis architecture)
#### Fetch dynamic foraging MLE model fitting results
- Get all MLE fitting results from one session

    ```python
    from aind_analysis_arch_result_access.han_pipeline import get_mle_model_fitting
    df = get_mle_model_fitting(subject_id="730945", session_date="2024-10-24")

    print(df.columns)
    print(df[["agent_alias", "AIC", "prediction_accuracy_10-CV_test"]])
    ```
    output
    ```
    Query: {'analysis_spec.analysis_name': 'MLE fitting', 'analysis_spec.analysis_ver': 'first version @ 0.10.0', 'subject_id': '730945', 'session_date': '2024-10-24'}
    Found 5 MLE fitting records!
    Found 5 successful MLE fitting!
    Get latent variables from s3: 100%|██████████| 5/5 [00:00<00:00, 58.01it/s]

    Index(['_id', 'nwb_name', 'status', 'agent_alias', 'log_likelihood', 'AIC',
          'BIC', 'LPT', 'LPT_AIC', 'LPT_BIC', 'k_model', 'n_trials',
          'prediction_accuracy', 'prediction_accuracy_test',
          'prediction_accuracy_fit', 'prediction_accuracy_test_bias_only',
          'params', 'prediction_accuracy_10-CV_test',
          'prediction_accuracy_10-CV_test_std', 'prediction_accuracy_10-CV_fit',
          'prediction_accuracy_10-CV_fit_std',
          'prediction_accuracy_10-CV_test_bias_only',
          'prediction_accuracy_10-CV_test_bias_only_std', 'latent_variables'],
          dtype='object')

                      agent_alias          AIC  prediction_accuracy_10-CV_test
    0  QLearning_L1F1_CK1_softmax   239.519051                        0.898151
    1         QLearning_L1F0_epsi   403.621460                        0.762075
    2  QLearning_L2F1_CK1_softmax   236.265381                        0.903280
    3                        WSLS  4051.958064                        0.636196
    4      QLearning_L2F1_softmax   236.512476                        0.888611
    ```
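    Because the result is a plain DataFrame, ranking models is straightforward; for example, picking the lowest-AIC model using the columns shown above:
    ```python
    # Pick the best-fitting model by AIC (lower is better).
    best = df.loc[df["AIC"].idxmin()]
    print(f"Best model: {best['agent_alias']} (AIC = {best['AIC']:.1f})")
    ```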
    Now the latent variables also contain the `rpe` (reward prediction error).
    ```python
    df.latent_variables.iloc[0].keys()
    ```
    output
    ```
    dict_keys(['q_value', 'choice_kernel', 'choice_prob', 'rpe'])
    ```
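    Each entry of `latent_variables` is a dict of per-trial traces, so they can be plotted directly. A minimal sketch (assuming matplotlib is installed and `rpe` is a 1-D per-trial array):
    ```python
    import matplotlib.pyplot as plt

    latent = df.latent_variables.iloc[0]
    plt.plot(latent["rpe"])  # per-trial reward prediction error (assumed 1-D)
    plt.xlabel("Trial")
    plt.ylabel("RPE")
    plt.show()
    ```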

- Also download figures
    ```python
    df = get_mle_model_fitting(
        subject_id="730945",
        session_date="2024-10-24",
        if_download_figures=True,
        download_path="./mle_figures",
    )
    !ls ./mle_figures  # IPython shell magic; in a terminal, just `ls ./mle_figures`
    ```
    output
    ```
    Query: {'analysis_spec.analysis_name': 'MLE fitting', 'analysis_spec.analysis_ver': 'first version @ 0.10.0', 'subject_id': '730945', 'session_date': '2024-10-24'}
    Found 5 MLE fitting records!
    Found 5 successful MLE fitting!
    Get latent variables from s3: 100%|██████████| 5/5 [00:00<00:00, 85.87it/s]
    Download figures from s3: 100%|██████████| 5/5 [00:00<00:00, 86.45it/s]

    730945_2024-10-24_17-38-06_QLearning_L1F0_epsi_58cc5b6f6e.png
    730945_2024-10-24_17-38-06_QLearning_L1F1_CK1_softmax_3ffdf98012.png
    730945_2024-10-24_17-38-06_QLearning_L2F1_CK1_softmax_5ce7f1f816.png
    730945_2024-10-24_17-38-06_QLearning_L2F1_softmax_ec59be40c0.png
    730945_2024-10-24_17-38-06_WSLS_7c61d01e0f.png
    ```
    Example figure:

    <img width="1153" alt="image" src="https://github.com/user-attachments/assets/84ebd7d3-ac49-4b8f-a0a6-41cced555437" />


- Get fittings from all sessions of a mouse for a specific model
    ```python
    df = get_mle_model_fitting(
        subject_id="730945",
        agent_alias="QLearning_L2F1_CK1_softmax",
        if_download_figures=False,
    )
    print(df.iloc[:10][["nwb_name", "agent_alias"]])
    ```
    output
    ```
    Query: {'analysis_spec.analysis_name': 'MLE fitting', 'analysis_spec.analysis_ver': 'first version @ 0.10.0', 'subject_id': '730945', 'analysis_results.fit_settings.agent_alias': 'QLearning_L2F1_CK1_softmax'}
    Found 32 MLE fitting records!
    Found 32 successful MLE fitting!
    Get latent variables from s3: 100%|██████████| 32/32 [00:00<00:00, 80.81it/s]

                            nwb_name                 agent_alias
    0  730945_2024-08-27_16-07-16.nwb  QLearning_L2F1_CK1_softmax
    1  730945_2024-09-05_16-47-58.nwb  QLearning_L2F1_CK1_softmax
    2  730945_2024-10-23_15-33-07.nwb  QLearning_L2F1_CK1_softmax
    3  730945_2024-09-19_17-26-54.nwb  QLearning_L2F1_CK1_softmax
    4  730945_2024-09-04_16-04-38.nwb  QLearning_L2F1_CK1_softmax
    5  730945_2024-08-30_15-55-05.nwb  QLearning_L2F1_CK1_softmax
    6  730945_2024-08-29_15-50-57.nwb  QLearning_L2F1_CK1_softmax
    7  730945_2024-10-24_17-38-06.nwb  QLearning_L2F1_CK1_softmax
    8  730945_2024-09-12_17-21-58.nwb  QLearning_L2F1_CK1_softmax
    9  730945_2024-09-03_15-49-53.nwb  QLearning_L2F1_CK1_softmax
    ```
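    Note that the records are not returned in chronological order. Because the session timestamp is embedded in `nwb_name`, sorting on it recovers the order (a sketch using columns from the output above):
    ```python
    # nwb_name starts with "<subject_id>_<date>_<time>", so a lexical sort is chronological.
    df_sorted = df.sort_values("nwb_name").reset_index(drop=True)
    print(df_sorted[["nwb_name", "prediction_accuracy_10-CV_test"]].head())
    ```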

- (for advanced users) Use your own docDB query
    ```python
    df = get_mle_model_fitting(
        from_custom_query={
            "analysis_results.fit_settings.agent_alias": "QLearning_L2F1_CK1_softmax",
            "analysis_results.n_trials" : {"$gt": 600},
        },
        if_include_latent_variables=False,
        if_download_figures=False,
    )
    ```
    output
    ```
    Query: {'analysis_spec.analysis_name': 'MLE fitting', 'analysis_spec.analysis_ver': 'first version @ 0.10.0', 'analysis_results.fit_settings.agent_alias': 'QLearning_L2F1_CK1_softmax', 'analysis_results.n_trials': {'$gt': 600}}
    Found 807 MLE fitting records!
    Found 807 successful MLE fitting!
    ```
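    The custom query is a standard MongoDB-style filter (note the `$gt` operator above), so other operators should work as well. For example, fetching several agents at once with `$in` (a sketch; the field paths follow the printed queries above):
    ```python
    # A sketch: fetch two agents in one query with the MongoDB `$in` operator.
    df = get_mle_model_fitting(
        from_custom_query={
            "analysis_results.fit_settings.agent_alias": {
                "$in": ["QLearning_L2F1_CK1_softmax", "QLearning_L2F1_softmax"]
            },
        },
        if_include_latent_variables=False,
        if_download_figures=False,
    )
    ```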


## Contributing

### Installation
To install the package for use, run the following from the root directory:
```bash
pip install -e .
```

To develop the code, run
```bash
pip install -e ".[dev]"  # quotes keep shells like zsh from globbing the brackets
```

### Linters and testing

Several libraries are used to run linters, check documentation, and run tests.

- Please test your changes using the **coverage** library, which will run the tests and log a coverage report:

```bash
coverage run -m unittest discover && coverage report
```

- Use **interrogate** to check that modules, methods, etc. have been documented thoroughly:

```bash
interrogate .
```

- Use **flake8** to check that code is up to standards (no unused imports, etc.):
```bash
flake8 .
```

- Use **black** to automatically format the code to PEP 8 standards:
```bash
black .
```

- Use **isort** to automatically sort import statements:
```bash
isort .
```

### Pull requests

For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use [Angular](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#commit) style for commit messages. Roughly, they should follow the pattern:
```text
<type>(<scope>): <short summary>
```

where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:

- **build**: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)
- **ci**: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)
- **docs**: Documentation only changes
- **feat**: A new feature
- **fix**: A bugfix
- **perf**: A code change that improves performance
- **refactor**: A code change that neither fixes a bug nor adds a feature
- **test**: Adding missing tests or correcting existing tests

### Semantic Release

The table below, from [semantic release](https://github.com/semantic-release/semantic-release), shows which commit message gets you which release type when `semantic-release` runs (using the default configuration):

| Commit message                                                                                                                                                                                   | Release type                                                                                                    |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------- |
| `fix(pencil): stop graphite breaking when too much pressure applied`                                                                                                                             | ~~Patch~~ Fix Release, Default release                                                                          |
| `feat(pencil): add 'graphiteWidth' option`                                                                                                                                                       | ~~Minor~~ Feature Release                                                                                       |
| `perf(pencil): remove graphiteWidth option`<br><br>`BREAKING CHANGE: The graphiteWidth option has been removed.`<br>`The default graphite width of 10mm is always used for performance reasons.` | ~~Major~~ Breaking Release <br /> (Note that the `BREAKING CHANGE: ` token must be in the footer of the commit) |

### Documentation
To generate the rst source files for the documentation, run
```bash
sphinx-apidoc -o docs/source/ src
```
Then to create the documentation HTML files, run
```bash
sphinx-build -b html docs/source/ docs/build/html
```
More info on Sphinx installation can be found [here](https://www.sphinx-doc.org/en/master/usage/installation.html).

### Read the Docs Deployment
Note: private repositories require a **Read the Docs for Business** account. The following instructions are for a public repo.

The following are required to import and build documentation on *Read the Docs*:
- A *Read the Docs* user account connected to GitHub. See [here](https://docs.readthedocs.com/platform/stable/guides/connecting-git-account.html) for more details.
- *Read the Docs* needs elevated permissions for certain operations, such as installing webhooks, that keep the workflow smooth. If you are not the owner of the repo, you may have to request elevated permissions from the owner/admin.
- A **.readthedocs.yaml** file in the root directory of the repo. Here is a basic template:
```yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version, and other tools you might need
build:
  os: ubuntu-24.04
  tools:
    python: "3.13"

# Path to a Sphinx configuration file.
sphinx:
  configuration: docs/source/conf.py

# Declare the Python requirements required to build your documentation
python:
  install:
    - method: pip
      path: .
      extra_requirements:
        - dev
```

Here are the steps for building docs in *Read the Docs*. See [here](https://docs.readthedocs.com/platform/stable/intro/add-project.html) for detailed instructions:
- From *Read the Docs* dashboard, click on **Add project**.
- For automatic configuration, select **Configure automatically** and type the name of the repo. A repo with public visibility should appear as you type. Follow the subsequent steps.
- For manual configuration, select **Configure manually** and follow the subsequent steps.

Once a project is created successfully, you will be able to configure/modify the project's settings, such as **Default version** and **Default branch**.

            
