| Field | Value |
|-------|-------|
| Name | rascal-speech |
| Version | 0.1.1 |
| Summary | Resources for Analyzing Speech in Clinical Aphasiology Labs |
| Author | Nick McCloskey |
| Maintainer | None |
| home_page | None |
| docs_url | None |
| requires_python | >=3.9 |
| License | MIT |
| Keywords | None |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
| upload_time | 2025-09-16 14:38:27 |
# RASCAL - Resources for Analyzing Speech in Clinical Aphasiology Labs
RASCAL is a tool designed to facilitate the analysis of speech in clinical aphasiology research. It processes CHAT-formatted (.cha) transcriptions, organizes data into structured tiers, and automates key analytical steps in transcription reliability, CU coding, word counting, and core lexicon analysis.
---
## Analysis Pipeline
### **BU-TU Semi-Automated Monologic Narrative Analysis Overview**
1. **Step 0 (Manual):** Complete transcription for all samples.
2. **Step 1 (RASCAL):**
- **Input:** Transcriptions (`.cha`)
- **Output:** Transcription reliability files, utterance files, CU coding and reliability files
3. **Step 2 (Manual):** CU coding and reliability checks
4. **Step 3 (RASCAL):**
- **Input:** Original & reliability transcriptions, CU coding & reliability files
- **Output:** Reliability reports, coding summaries, word count & reliability files, speaking time file
5. **Step 4 (Manual):** Finalize word counts and record speaking times
6. **Step 5 (RASCAL):**
- **Input:** Utterance file, utterance-level CU summary, speaking times, word counts & reliability
- **Output:** Blind & unblind, utterance- & sample-level CU coding summaries, word count reliability, core lexicon analysis
---
## Try the Web App
You can use RASCAL in your browser with no installation required:
👉 [Launch the RASCAL Web App](https://rascal.streamlit.app/)
---
## Installation
We recommend installing RASCAL into a dedicated virtual environment using Anaconda:
### 1. Create and activate your environment:
```bash
conda create --name rascal_env python=3.9
conda activate rascal_env
```
### 2. Install RASCAL from GitHub:
```bash
pip install git+https://github.com/nmccloskey/rascal.git@main
```
---
## Setup
To prepare for running RASCAL, complete the following steps:
### 1. Create your working directory:
We recommend creating a fresh project directory where you'll run your analysis.
Example structure:
```plaintext
your_project/
├── config.yaml          # Configuration file (see below)
└── data/
    └── input/           # Place your CHAT (.cha) files and/or Excel data here
                         # (RASCAL will make an output directory)
```
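If you prefer to script this setup, here is a minimal sketch using only the standard library (the paths are just the example names shown above):

```python
from pathlib import Path

# Create the example project layout shown above.
project = Path("your_project")
(project / "data" / "input").mkdir(parents=True, exist_ok=True)

# Placeholder config.yaml, to be filled in as described in the next step.
(project / "config.yaml").touch()
```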
### 2. Provide a `config.yaml` file
This file specifies the directories, coders, reliability settings, and tier structure.
You can download the example config file from the repo or create your own like this:
```yaml
input_dir: data/input
output_dir: data/output
reliability_fraction: 0.2
coders:
  - '1'
  - '2'
  - '3'
CU_paradigms:
  - SAE
  - AAE
exclude_participants:
  - INV
strip_clan: true
prefer_correction: true
lowercase: true
tiers:
  site:
    values:
      - AC
      - BU
      - TU
    partition: true
    blind: true
  test:
    values:
      - Pre
      - Post
      - Maint
    blind: true
  study_id:
    values: (AC|BU|TU)\d+
  narrative:
    values:
      - CATGrandpa
      - BrokenWindow
      - RefusedUmbrella
      - CatRescue
      - BirthdayScene
```
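Before running RASCAL, it can help to sanity-check a config like this with PyYAML. The sketch below is illustrative only (it is not RASCAL's internal loader) and mirrors the value types explained in the next section:

```python
import re
import yaml  # PyYAML

# Illustrative sanity check for a config like the one above (not RASCAL's own loader).
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

assert 0 < cfg["reliability_fraction"] <= 1, "reliability_fraction is a proportion"

for tier, spec in cfg["tiers"].items():
    values = spec["values"]
    if isinstance(values, str):
        pattern = values                                  # single value: used as a regex
    else:
        pattern = "|".join(re.escape(v) for v in values)  # list: literal choices
    re.compile(pattern)                                   # raises if the pattern is invalid
    print(f"{tier}: {pattern}")
```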
### Explanation:
- General
- `reliability_fraction` - the proportion of data to subset for reliability (default 20%).
- `coders` - alphanumeric coder identifiers (2 required for function **g** and 3 for **c**, see below).
- `CU_paradigms` - allows users to accommodate multiple dialects if desired. If at least two paradigms are entered, parallel coding columns will be prepared and processed in all CU functions.
- `exclude_participants` - speakers appearing in .cha files to exclude from transcription reliability and CU coding (neutral utterances).
- Transcription Reliability
- `strip_clan` - removes CLAN markup but preserves speech-like content, including filled pauses (e.g., '&um' -> 'um') and partial words (illustrated in the sketch after this list).
- `prefer_correction` - toggles policy for accepted corrections '[: x] [*]': True keeps x, False keeps original.
- `lowercase` - toggles case regularization.
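The snippet below is a simplified, standalone illustration of how these three settings interact on a single utterance. It is not RASCAL's actual cleaning code, and the patterns cover only the markup mentioned above:

```python
import re

def clean_utterance(text, strip_clan=True, prefer_correction=True, lowercase=True):
    """Simplified illustration of the three settings above (not RASCAL's cleaner)."""
    if prefer_correction:
        # 'teh [: the] [*]' -> keep the correction 'the'
        text = re.sub(r"\S+ \[: ([^\]]+)\] \[\*\]", r"\1", text)
    else:
        # keep the original production, drop the correction markup
        text = re.sub(r"(\S+) \[: [^\]]+\] \[\*\]", r"\1", text)
    if strip_clan:
        text = text.replace("&", "")            # filled pauses: '&um' -> 'um'
        text = re.sub(r"\[[^\]]*\]", "", text)  # drop any remaining bracketed codes
    if lowercase:
        text = text.lower()
    return re.sub(r"\s+", " ", text).strip()

print(clean_utterance("&um the boy kicked teh [: the] [*] ball ."))
# -> 'um the boy kicked the ball .'
```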
**Specifying tiers:**
The tier system facilitates tabularization by associating a unit of analysis with its possible values and extracting this information from the file names of individual transcripts.
- **Multiple values**: enter as a comma- or newline-separated list. These are treated as **literal choices** and combined into a regex internally. See the examples below.
- *narrative*: `BrokenWindow, RefusedUmbrella, CatRescue`
- *test*: `PreTx, PostTx`
- **Single value**: treated as a **regular expression** and validated immediately. Examples include:
- Digits only: `\d+`
- Lab site + digits: `(AC|BU|TU)\d+`
- Three uppercase letters + three digits: `[A-Z]{3}\d{3}`
- **Tier attributes**
- **Partition**: creates separate coding files and **separate reliability** subsets by that tier. In this example, separate CU coding files will be generated for each site (AC, BU, TU), but not for each narrative or test value.
- **Blind**: generates blind codes for CU summaries (function **j** below).
***Example: Tier-Based Tabularization from Filenames (according to the above config; a code sketch follows the table).***
Source files:
- `TU88PreTxBrokenWindow.cha`
- `BU77Maintenance_CatRescue.cha`
Tabularization:
| Site | Test | ParticipantID | Narrative |
|------|-------|---------------|---------------|
| TU | Pre | TU88 | BrokenWindow |
| BU | Maint | BU77 | CatRescue |
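For intuition, here is a standalone sketch of this kind of filename-to-tier extraction. The patterns are hand-built from the example config, and this is not RASCAL's internal implementation:

```python
import re

# Tier patterns derived from the example config: lists become literal alternations,
# a single value is used as a regular expression directly.
tiers = {
    "site": "AC|BU|TU",
    "test": "Pre|Post|Maint",
    "study_id": r"(?:AC|BU|TU)\d+",
    "narrative": "CATGrandpa|BrokenWindow|RefusedUmbrella|CatRescue|BirthdayScene",
}

def tabularize(filename):
    """Return the first match for each tier pattern found in the filename."""
    row = {}
    for tier, pattern in tiers.items():
        match = re.search(pattern, filename)
        row[tier] = match.group(0) if match else None
    return row

print(tabularize("TU88PreTxBrokenWindow.cha"))
# {'site': 'TU', 'test': 'Pre', 'study_id': 'TU88', 'narrative': 'BrokenWindow'}
print(tabularize("BU77Maintenance_CatRescue.cha"))
# {'site': 'BU', 'test': 'Maint', 'study_id': 'BU77', 'narrative': 'CatRescue'}
```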
---
## Running the Program
Once installed, RASCAL can be run from any directory using the command-line interface:
```bash
rascal <step or function>
```
For example, to run the CU coding analysis function:
```bash
rascal f
```
### Pipeline Commands
| Command | Step (Python function) | Input | Output (described) |
|---------|----------------------------------------------|----------------------------------------|-----------------------------------------------------|
| a | Select transcription reliability samples (*select_transcription_reliability_samples*) | Raw `.cha` files | Sample list + paired reliability `.cha` files |
| b | Prepare utterance tables (*prepare_utterance_dfs*) | Raw `.cha` files | Utterance spreadsheets |
| c | Create CU coding files (*make_CU_coding_files*) | Utterance tables (from **b**) | CU coding + reliability coding spreadsheets |
| d | Analyze transcription reliability (*analyze_transcription_reliability*) | Reliability `.cha` pairs | Agreement metrics + alignment text reports |
| e | Analyze CU reliability (*analyze_CU_reliability*) | Manually completed CU coding (from **c**) | Reliability summary tables + reports |
| f | Analyze CU coding (*analyze_CU_coding*) | Manually completed CU coding (from **c**) | Sample- and utterance-level CU summaries |
| g | Create word count files (*make_word_count_files*) | CU coding tables (from **f**) | Word count + reliability spreadsheets |
| h | Make timesheets (*make_timesheets*) | Utterance tables (from **b**) | Speaking time entry sheets |
| i | Analyze word count reliability (*analyze_word_count_reliability*) | Manually completed word counts (from **g**) | Reliability summaries + agreement reports |
| j | Unblind samples (*unblind_CUs*) | Utterance tables (**b**), CU coding (**c**), timesheets (**h**), word counts (**i**) | Blind + unblind utterance and sample summaries |
| k | Run CoreLex analysis (*run_corelex*) | Sample summaries (from **j**) | CoreLex coverage and percentile metrics |
| l | Reselect CU reliability (*reselect_CU_reliability*) | Manually completed CU coding (from **c**) | New reliability subsets |
### Step mapping:
| Step | Letters | Description |
|------|----------|--------------------------------------------|
| 1 | a, b, c | Read CHA, select reliability, prepare utterances |
| 3    | d–h      | Analyze transcription & CU, word counts, timesheets |
| 5 | i, j, k | Word count reliability, unblind CUs, CoreLex |
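If you prefer to drive these stages from a script rather than typing commands one by one, here is a minimal sketch, assuming the `rascal` CLI documented above is on your PATH and remembering that the manual coding work happens between stages 1, 3, and 5:

```python
import subprocess

# Run one automated stage at a time; complete the manual steps (CU coding,
# word counts, speaking times) before moving on to the next stage.
stage = "1"  # later "3", then "5" -- or a single function letter such as "f"
subprocess.run(["rascal", stage], check=True)
```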
---
## 📊 RASCAL Workflow Overview
```mermaid
flowchart TD
A[a: Select transcription reliability samples] --> B[b: Prepare utterance tables]
A --> D[d: Analyze transcription reliability]
B --> C[c: Create CU coding files]
C --> E[e: Analyze CU reliability]
C --> F[f: Analyze CU coding]
F --> G[g: Create word count files]
B --> H[h: Make timesheets]
G --> I[i: Analyze word count reliability]
B & F & G & H --> J[j: Unblind samples]
B & H & J --> K[k: Run CoreLex analysis]
C --> L[l: Reselect CU reliability]
L --> E
linkStyle 12 stroke:blue
linkStyle 13 stroke:blue
linkStyle 15 stroke:red
linkStyle 16 stroke:red
```
Black arrows indicate the central analysis pipeline. Red arrows represent the path required if CU reliability coding fails to meet the agreement threshold and needs to be redone. Blue arrows show the alternate inputs to CoreLex analysis: function **b** output is required, and **h** output is optional.
## Notes on Input Transcriptions
- `.cha` files must be formatted correctly according to CHAT conventions.
- Ensure filenames match tier values as specified in `config.yaml`.
- RASCAL matches tier values using exact spelling and capitalization.
## 🧪 Testing
This project uses [pytest](https://docs.pytest.org/) for its testing suite.
All tests are located under the `tests/` directory, organized by module/function.
### Running Tests
To run the full suite:
```bash
pytest
```
Run with verbose output:
```bash
pytest -v
```
Run a specific test file:
```bash
pytest tests/test_samples/test_run_corelex.py
```
### Notes
- Tests stub out heavy dependencies (e.g., `openpyxl`, external web requests) to keep them fast and reproducible.
- Many tests use temporary directories (`tmp_path`) to simulate file I/O without affecting your real data (a minimal example of this pattern follows).
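For example, a minimal test following the `tmp_path` pattern might look like this (the function under test here is a hypothetical stand-in, not part of the RASCAL suite):

```python
# Generic illustration of the tmp_path pattern described above.

def write_report(out_dir):
    """Hypothetical stand-in for a function that writes one output file."""
    report = out_dir / "report.txt"
    report.write_text("ok")
    return report

def test_write_report(tmp_path):
    # pytest injects tmp_path as a pathlib.Path to a fresh temporary directory
    result = write_report(tmp_path)
    assert result.exists()
    assert result.read_text() == "ok"
```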
## Status and Contact
I warmly welcome feedback, feature suggestions, or bug reports. Feel free to reach out by:
- Submitting an issue through the GitHub Issues tab
- Emailing me directly at: nsm [at] temple.edu
Thanks for your interest and collaboration!
## Citation
If using RASCAL in your research, please cite:
> McCloskey, N., et al. (2025, April). *The RASCAL pipeline: User-friendly and time-saving computational resources for coding and analyzing language samples*. Poster presented at the Aphasia Access Leadership Summit, Pittsburgh, PA.
## Acknowledgments
RASCAL builds on and integrates functionality from two excellent open-source tools, which I highly recommend to researchers and clinicians working with language data:
- [**batchalign2**](https://github.com/TalkBank/batchalign2) – Developed by the TalkBank team, batchalign provides a robust backend for automatic speech recognition. RASCAL is designed to function downstream of this system, leveraging its debulletized `.cha` files as input. This integration allows researchers to significantly expedite batch transcription, which without an ASR springboard might bottleneck discourse analysis.
- [**coreLexicon**](https://github.com/rbcavanaugh/coreLexicon) – A web-based interface for Core Lexicon analysis developed by Rob Cavanaugh et al. RASCAL implements its own Core Lexicon analysis that shows high reliability with this web app: ICC(2) values (two-way random, absolute agreement) on the primary metrics are 0.9627 for accuracy (number of core words) and 0.9689 for efficiency (core words per minute), measured on 402 narratives (Broken Window, Cat Rescue, and Refused Umbrella) in our study. RASCAL does not use the web app itself but accesses the normative data associated with this repository (via Google Sheet IDs) to calculate percentiles.