<!-- markdownlint-disable MD041 -->
<div align="center">
<h1>Machine learning challenges for hearing aid processing</h1>
<p align="center">
<img src="docs/images/earfinal_clarity_customColour.png" alt="drawing" width="200" hspace="40"/>
<img src="docs/images/cadenza_logo.png" alt="Cadenza Challenge" width="250" hspace="40"/>
<p>
[![PyPI version](https://badge.fury.io/py/pyclarity.svg)](https://badge.fury.io/py/pyclarity)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pyclarity)](https://pypi.org/project/pyclarity/)
[![codecov.io](https://codecov.io/github/claritychallenge/clarity/coverage.svg?branch=main)](https://app.codecov.io/gh/claritychallenge/clarity)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![linting: pylint](https://img.shields.io/badge/linting-pylint-yellowgreen)](https://github.com/PyCQA/pylint)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/claritychallenge/clarity/main.svg)](https://results.pre-commit.ci/latest/github/claritychallenge/clarity/main)
[![Downloads](https://pepy.tech/badge/pyclarity)](https://pepy.tech/project/pyclarity)
[![PyPI](https://img.shields.io/static/v1?label=CAD2%20Challenge%20-%20pypi&message=v0.6.0&color=orange)](https://pypi.org/project/pyclarity/0.6.0/)
[![PyPI](https://img.shields.io/static/v1?label=CEC3%20Challenge%20-%20pypi&message=v0.5.0&color=orange)](https://pypi.org/project/pyclarity/0.5.0/)
[![PyPI](https://img.shields.io/static/v1?label=ICASSP%202024%20Cadenza%20Challenge%20-%20pypi&message=v0.4.1&color=orange)](https://pypi.org/project/pyclarity/0.4.1/)
[![PyPI](https://img.shields.io/static/v1?label=CAD1%20and%20CPC2%20Challenges%20-%20pypi&message=v0.3.4&color=orange)](https://pypi.org/project/pyclarity/0.3.4/)
[![PyPI](https://img.shields.io/static/v1?label=ICASSP%202023%20Challenge%20-%20pypi&message=v0.2.1&color=orange)](https://pypi.org/project/pyclarity/0.2.1/)
[![PyPI](https://img.shields.io/static/v1?label=CEC2%20Challenge%20-%20pypi&message=v0.1.1&color=orange)](https://pypi.org/project/pyclarity/0.1.1/)
[![ORDA](https://img.shields.io/badge/ORDA--DOI-10.15131%2Fshef.data.23230694.v.1-lightgrey)](https://figshare.shef.ac.uk/articles/software/clarity/23230694/1)
</p>
</div>
---
We are organising a series of machine learning challenges to enhance hearing-aid signal processing and to better predict how people perceive speech-in-noise (Clarity) and speech-in-music (Cadenza). For further details of the Clarity Project visit [the Clarity project website](http://claritychallenge.org/), and for details of our latest Clarity challenges visit our [challenge documentation site](https://claritychallenge.github.io/clarity_CC_doc/). You can contact the Clarity Team by email at [claritychallengecontact@gmail.com](mailto:claritychallengecontact@gmail.com). For further details of the Cadenza Project visit [the Cadenza project website](http://cadenzachallenge.org/), and to find out about the latest Cadenza challenges join the [Cadenza Challenge Group](https://groups.google.com/g/cadenza-challenge).
In this repository, you will find code to support all Clarity and Cadenza Challenges, including baselines, toolkits, and systems from participants. **We encourage you to make your system/model open source and contribute to this repository.**
## Current Events
- The 2nd Cadenza Challenge is now open :fire::fire:
- Visit the [cadenza website](https://cadenzachallenge.org/docs/cadenza2/intro) for more details.
- Join the [Cadenza Challenge Group](https://groups.google.com/g/cadenza-challenge) to keep up-to-date on developments.
- The 3rd Clarity Enhancement Challenge is now open. :fire::fire:
- Visit the [challenge website](https://claritychallenge.org/docs/cec3/cec3_intro) for more details.
- Join the [Clarity Challenge Group](https://groups.google.com/g/clarity-challenge) to keep up-to-date on developments.
- The 2nd Clarity Prediction Challenge (CPC2) is now closed.
- The 4th Clarity Workshop will be held as a satellite event of Interspeech 2023. For details visit the [workshop website](https://claritychallenge.org/clarity2023-workshop/).
## Installation
### PyPI
Clarity is available on the [Python Package Index (PyPI)](https://pypi.org/project/pyclarity). To install, create and/or
activate a virtual environment and then use `pip`:
```bash
conda create --name clarity python=3.9
conda activate clarity
pip install pyclarity
```
### GitHub Cloning
```bash
# First clone the repo
git clone https://github.com/claritychallenge/clarity.git
cd clarity
# Second create & activate environment with conda, see https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html
conda create --name clarity python=3.9
conda activate clarity
# Last install with pip
pip install -e .
```
### GitHub pip install
Alternatively, `pip` can install packages directly from GitHub. The following will install the current
`main` branch:
```bash
pip install git+https://github.com/claritychallenge/clarity.git@main
```
## Challenges
Current challenges
- [The 2nd Cadenza Challenge](./recipes/cad2)
- [The 3rd Clarity Enhancement Challenge](./recipes/cec3)
Previous challenges
- [The ICASSP 2024 Cadenza Challenge](./recipes/cad_icassp_2024)
- [The 1st Cadenza Challenge (CAD1)](./recipes/cad1)
- [The 2nd Clarity Prediction Challenge (CPC2)](./recipes/cpc2)
- [The ICASSP 2023 Clarity Enhancement Challenge](./recipes/icassp_2023)
- [The 2nd Clarity Enhancement Challenge (CEC2)](./recipes/cec2)
- [The 1st Clarity Prediction Challenge (CPC1)](./recipes/cpc1)
- [The 1st Clarity Enhancement Challenge (CEC1)](./recipes/cec1)
## Available tools
We also provide a number of tools in this repository:
- **Hearing loss simulation**
  - [Cambridge MSBG hearing loss simulator](./clarity/evaluator/msbg): descriptions can be found in the [CEC1
    description](./recipes/cec1); a usage example can be found in the [CEC1 baseline](./recipes/cec1/baseline)
    evaluation script `evaluate.py`.
- **Objective intelligibility measurement**
  - [Modified binaural STOI (MBSTOI)](./clarity/evaluator/mbstoi/mbstoi.py): a Python implementation of MBSTOI. It is
    used jointly with the MSBG hearing loss model in the [CEC1 baseline](./recipes/cec1/baseline). The official MATLAB
    implementation can be found here: <http://ah-andersen.net/code/>
  - [Hearing-aid speech perception index (HASPI)](./clarity/evaluator/haspi/haspi.py): a Python implementation of
    HASPI Version 2, and the better-ear HASPI for binaural speech signals. The official MATLAB implementation can be requested here: <https://www.colorado.edu/lab/hearlab/resources>
- [Hearing-aid speech quality index (HASQI)](./clarity/evaluator/hasqi/hasqi.py): a Python implementation of
HASQI Version 2, and the better-ear HASQI for binaural speech signals.
- [Hearing-aid audio quality index (HAAQI)](./clarity/evaluator/haaqi/haaqi.py): a Python implementation of
HAAQI.
- **Hearing aid enhancement**
  - [Cambridge hearing aid fitting (CAMFIT)](./clarity/enhancer/gha/gainrule_camfit.py): a Python implementation of CAMFIT, translated from the [HörTech Open Master Hearing Aid (OpenMHA)](http://www.openmha.org/about/). CAMFIT is used together with OpenMHA enhancement in the [CEC1 baseline](./recipes/cec1/baseline); see `enhance.py`.
  - [NAL-R hearing aid fitting](./clarity/enhancer/nalr.py): a Python implementation of NAL-R prescription fitting. It is used in the [CEC2 baseline](./recipes/cec2/baseline); see `enhance.py`.
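As a rough illustration of what a prescription rule like NAL-R computes, the sketch below applies the NAL-R insertion-gain formula as commonly published: gain = X + 0.31 × hearing level + a per-frequency correction, with X derived from the 500/1000/2000 Hz average. The constants here come from the NAL-R literature, not from this repository's `nalr.py`, so treat this as a conceptual sketch rather than the baseline implementation:

```python
# Conceptual sketch of the NAL-R insertion-gain prescription.
# Formula and constants are taken from the published NAL-R rule,
# NOT from this repository's nalr.py implementation.

AUDIOMETRIC_FREQS = [250, 500, 1000, 2000, 4000, 6000]  # Hz

# Per-frequency corrections k(f) in dB (published NAL-R values).
K_DB = {250: -17.0, 500: -8.0, 1000: 1.0, 2000: -1.0, 4000: -2.0, 6000: -2.0}


def nalr_gains(audiogram):
    """Prescribed insertion gain (dB) per frequency.

    `audiogram` maps frequency in Hz to hearing level in dB HL.
    """
    # X is derived from the three-frequency average at 500/1000/2000 Hz.
    x = 0.05 * (audiogram[500] + audiogram[1000] + audiogram[2000])
    # Negative prescribed gains are clamped to 0 dB for practicality.
    return {f: max(0.0, x + 0.31 * audiogram[f] + K_DB[f]) for f in AUDIOMETRIC_FREQS}


# Example: a flat 40 dB HL loss prescribes roughly 19 dB of gain at 1 kHz,
# tapering off at the low frequencies because of the k(f) corrections.
flat_loss = {f: 40.0 for f in AUDIOMETRIC_FREQS}
gains = nalr_gains(flat_loss)
```

The library version in `nalr.py` additionally builds an FIR filter realising these gains; see the CEC2 baseline `enhance.py` for how it is applied to signals.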
In addition, differentiable approximations of some tools are provided:
- [x] [Differentiable MSBG hearing loss model](./clarity/predictor/torch_msbg.py). See also the BUT implementation:
<https://github.com/BUTSpeechFIT/torch_msbg_mbstoi>
- [ ] Differentiable HASPI (coming)
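The reason differentiable versions are needed is that the original metrics contain hard, non-differentiable operations (thresholds, maxima) that block gradient-based training. A minimal, hypothetical illustration of the usual trick — replacing a hard threshold with a smooth sigmoid surrogate — is shown below; it is generic and does not reproduce the actual `torch_msbg` code:

```python
import math

# Toy illustration of building a differentiable surrogate:
# a hard audibility threshold has zero gradient almost everywhere,
# so it is replaced by a smooth sigmoid through which gradients can flow.
# Hypothetical example, unrelated to the torch_msbg implementation.


def hard_audibility(level_db, threshold_db):
    """Non-differentiable step: 1 if the level exceeds the threshold, else 0."""
    return 1.0 if level_db > threshold_db else 0.0


def soft_audibility(level_db, threshold_db, beta=1.0):
    """Smooth sigmoid surrogate; `beta` controls how closely it approximates the step."""
    return 1.0 / (1.0 + math.exp(-beta * (level_db - threshold_db)))
```

As `beta` grows, the surrogate approaches the hard step, trading gradient smoothness for fidelity to the original metric.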
## Open-source systems
- CPC1:
- [Exploiting Hidden Representations from a DNN-based Speech Recogniser for Speech Intelligibility Prediction in
Hearing-impaired Listeners](./recipes/cpc1/e032_sheffield)
- [Unsupervised Uncertainty Measures of Automatic Speech Recognition for Non-intrusive Speech Intelligibility
Prediction](./recipes/cpc1/e029_sheffield)
- CEC1:
- [A Two-Stage End-to-End System for Speech-in-Noise Hearing Aid Processing](./recipes/cec1/e009_sheffield)