# bambird package
## Unsupervised classification to improve the quality of a bird song recording dataset
Open audio databases such as [Xeno-Canto](https://xeno-canto.org/) are widely used to build datasets to explore bird song repertoire or to train models for automatic bird sound classification by deep learning algorithms. However, such databases suffer from the fact that bird sounds are weakly labelled: a species name is attributed to each audio recording without timestamps that provide the temporal localization of the bird song of interest.
Manual annotations can solve this issue, but they are time consuming, expert-dependent, and do not scale to large datasets. Another solution is to use a labelling function that automatically segments audio recordings before assigning a label to each segmented audio sample. Although labelling functions were introduced to expedite strong label assignment, their classification performance remains mostly unknown.
To address this issue and reduce label noise (wrong label assignment) in large bird song datasets, we introduce a novel, data-centric labelling function composed of three successive steps: 1) time-frequency sound unit segmentation, 2) feature computation for each sound unit, and 3) classification of each sound unit as bird song or noise with either an unsupervised DBSCAN algorithm or the supervised BirdNET neural network.
The labelling function was optimized, validated, and tested on the songs of 44 West-Palearctic common bird species. We first showed that segmentation of bird songs alone resulted in 10% to 83% label noise depending on the species. We also demonstrated that our labelling function was able to significantly reduce the initial label noise present in the dataset, by up to a factor of three. Finally, we discuss different opportunities to design suitable labelling functions to build high-quality animal vocalization datasets with minimum expert annotation effort.
<br/>
<div align="center">
<img src="./docs/figure_workflow_sans_alpha.png" alt="drawing"/>
</div>
<br/>
Based on this work, we propose **bambird**, an open-source Python package that provides a complete workflow to create your own labelling function and build cleaner bird song recording datasets. **bambird** is mostly based on the [scikit-maad](https://github.com/scikit-maad/scikit-maad) package.
[![DOI](https://zenodo.org/badge/xxx.svg)](https://zenodo.org/badge/latestdoi/xxxxx)
## Installation
bambird dependencies:
- scikit-maad >= 1.3.12
- librosa
- scikit-learn
- kneed
- hdbscan
- tqdm
- umap-learn
**bambird** is hosted on PyPI. To install, run the following command in your Python environment:
```bash
$ pip install bambird
```
To install the latest version from source, clone the repository and run the following from the top-level folder:
```bash
$ git clone https://github.com/ear-team/bambird.git && cd bambird
$ pip install -e .
```
## Functions
The functions available in the package are:
from config.py
>- **load_config**: Load the configuration file that sets all the parameters of bambird
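A minimal sketch of loading a configuration, assuming the function is exposed at the package level and takes the path to a YAML configuration file (the exact signature and file format may differ):

```python
import bambird

# Hypothetical configuration file path; the argument and the returned
# structure (assumed to be dictionary-like) are illustrative assumptions.
params = bambird.load_config("config_default.yaml")
print(params)
```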
from dataset.py
>- **query_xc**: Query the Xeno-Canto website for recordings (with audio files) matching the search terms. The metadata of the recordings are grouped and stored in a dataframe.
>- **download_xc**: Download the audio files from Xeno-Canto listed in the input dataframe, creating a directory for each species if needed
>- **grab_audio_to_df**: Create a dataframe with all the recordings found in a directory. The first column contains the full path to each file; the second column contains the filename without its extension
>- **change_path**: Change the path to the audio files in the dataframe. This is useful when the audio files have been moved from their original location
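As an illustration of how the dataset helpers above might chain together, here is a hedged sketch; the keyword arguments, species name, and paths are assumptions made for readability, not the documented signatures:

```python
import bambird

# 1. Query Xeno-Canto for recordings matching the search terms
#    (species name and query parameters are illustrative).
df_query = bambird.query_xc(species_list=["Regulus regulus"], params=params)

# 2. Download the corresponding audio files, one directory per species.
df_dataset = bambird.download_xc(df_query, rootdir="./data")

# 3. Alternatively, index recordings that are already on disk.
df_dataset = bambird.grab_audio_to_df("./data", "mp3")

# 4. Update the stored paths if the audio files were moved.
df_dataset = bambird.change_path(df_dataset, "/mnt/storage/data")
```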
from segmentation.py
>- **extract_rois_core**: Function called by single_file_extract_rois that defines a specific process to extract ROIs. In this case, the function extracts the most energetic part of the songs/calls.
>- **extract_rois_full_sig**: Function called by single_file_extract_rois that defines a specific process to extract ROIs. In this case, the function extracts the full songs/calls.
>- **single_file_extract_rois**: Extract all ROIs from a single audio file
>- **multicpu_extract_rois**: Extract all ROIs from the dataset (multiple audio files)
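A hedged sketch of the segmentation step applied to the whole dataset; the argument names shown are illustrative assumptions, not the documented signature:

```python
import bambird

# Extract the ROIs of every recording in the dataframe, using several CPU cores.
# 'dataset', 'params' and 'save_path' are illustrative argument names.
df_rois = bambird.multicpu_extract_rois(
    dataset=df_dataset,
    params=params,
    save_path="./rois",
)
```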
from features.py
>- **compute_features**: Compute the features of a single ROI, such as shape (wavelets), centroid, and bandwidth
>- **multicpu_compute_features**: Compute the features (shape (wavelets), centroid, and bandwidth) of all ROIs in the dataset (multiple audio files)
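Under the same assumptions, the feature-computation step applied to all extracted ROIs (argument names are illustrative):

```python
import bambird

# Compute shape (wavelet), centroid and bandwidth features for every ROI.
df_features = bambird.multicpu_compute_features(
    dataset=df_rois,
    params=params,
    save_path="./features",
)
```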
from cluster.py
>- **find_cluster**: Cluster the ROIs using the DBSCAN or HDBSCAN clustering method, chosen for two reasons: (1) DBSCAN does not need the number of clusters to be specified in advance, and (2) DBSCAN is able to deal with noise and keeps it outside any cluster. The goal of the clustering is to aggregate similar ROIs that might correspond to the main call or song of a species. If several clusters are found, meaning that the ROIs might correspond to different calls and/or songs of the species, either the cluster with the highest number of ROIs or all the clusters can be kept (see the sketch after this list).
>- **cluster_eval**: Evaluate the clustering (requires annotations or any other files to compare with the result of the clustering)
>- **overlay_rois**: Overlay the ROIs, colored and numbered according to the cluster number or the label
>- **mark_rois**: Add a marker to the audio filename of each ROI depending on the result of the clustering evaluation (TN, FN, TP, FP)
>- **unmark_rois**: Remove the markers added by mark_rois
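Finally, a hedged sketch of the clustering step; argument names are assumptions for illustration (DBSCAN conventionally labels noise points with -1, which is how noise ROIs stay outside any cluster):

```python
import bambird

# Cluster the ROI features so that similar ROIs (likely the species' main
# song/call) are grouped together, while noise ROIs stay outside any cluster.
df_cluster = bambird.find_cluster(dataset=df_features, params=params)

# Optionally overlay the ROIs on the spectrograms, colored by cluster.
bambird.overlay_rois(df_cluster, params=params)
```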
## Examples and documentation
- See the directory "example" to find scripts that run the labelling function on a collection of bird species or on a single file
- More example scripts will be available soon on [colab](https://colab.research.google.com/)
- [workflow single species](https://colab.research.google.com/drive/18tglsE1JciyD1xpTryX3JIenHKGScLSq#scrollTo=bzlosQhqt7vf)
- Full description of the package **scikit-maad**: https://doi.org/10.1111/2041-210X.13711
- Online reference manual and example gallery of **scikit-maad** [here](https://scikit-maad.github.io/).
- In-depth information related to the Multiresolution Analysis of Acoustic Diversity implemented in scikit-maad was published in: Ulloa, J. S., Aubin, T., Llusia, D., Bouveyron, C., & Sueur, J. (2018). [Estimating animal acoustic diversity in tropical environments using unsupervised multiresolution analysis](https://doi.org/10.1016/j.ecolind.2018.03.026). Ecological Indicators, 90, 346–355.
## Citing this work
If you find **bambird** useful for your research, please consider citing it as:
- Michaud, F., Sueur, J., Le Cesne, M., & Haupert, S. (2022). [Unsupervised classification to improve the quality of a bird song recording dataset](https://doi.org/xxx). Ecological Informatics, xx, xxx–xxx
## Contributions and bug report
Improvements and new features are greatly appreciated. If you would like to contribute developing new features or making improvements to the available package, please refer to our [wiki](https://github.com/ear-team/bambird/wiki/How-to-contribute-to-bambird). Bug reports and especially tested patches may be submitted directly to the [bug tracker](https://github.com/ear-team/bambird/issues).