# cldfbench
Tooling to create [CLDF](https://cldf.clld.org) datasets from existing data.
[](https://github.com/cldf/cldfbench/actions?query=workflow%3Atests)
[](https://cldfbench.readthedocs.io/en/latest/?badge=latest)
[](https://pypi.org/project/cldfbench)
## Overview
This package provides tools to curate cross-linguistic data, with the goal of
packaging it as [CLDF](https://cldf.clld.org) datasets.
In particular, it supports a workflow where:
- "raw" source data is downloaded to a `raw/` subdirectory,
- and subsequently converted to one or more CLDF datasets in a `cldf/` subdirectory, with the help of:
  - configuration data in an `etc/` directory and
  - custom Python code (a subclass of [`cldfbench.Dataset`](src/cldfbench/dataset.py) which implements the workflow actions).
This workflow is supported via:
- a commandline interface `cldfbench` which calls the workflow actions as [subcommands](src/cldfbench/commands),
- a `cldfbench.Dataset` base class, which must be subclassed in a custom module
  to hook custom code into the workflow.
With this workflow and the separation of the data into three directories we want
to provide a workbench for transparently deriving CLDF data from data that has been
published before. In particular we want to delineate clearly:
- what forms part of the original or source data (`raw`),
- what kind of information is added by the curators of the CLDF dataset (`etc`)
- and what data was derived using the workbench (`cldf`).
### Further reading
This paper introduces `cldfbench` and uses an extended, real-world example:
> Forkel, R., & List, J.-M. (2020). CLDFBench: Give your cross-linguistic data a lift. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, et al. (Eds.), Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020) (pp. 6995-7002). Paris: European Language Resources Association (ELRA). [[PDF]](https://pure.mpg.de/pubman/item/item_3231858_1/component/file_3231859/shh2600.pdf)
## Installation
`cldfbench` can be installed via `pip` - preferably in a
[virtual environment](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) - by running:
```shell script
pip install cldfbench
```
`cldfbench` provides some functionality that relies on Python packages which are not needed for the core functionality. These optional dependencies are specified as [extras](https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies) and can be installed using syntax like:
```shell
pip install cldfbench[<extras>]
```
where `<extras>` is a comma-separated list of one or more of the following names:
- `excel`: support for reading spreadsheet data.
- `glottolog`: support to access [Glottolog data](https://github.com/glottolog/glottolog).
- `concepticon`: support to access [Concepticon data](https://github.com/concepticon/concepticon-data).
- `clts`: support to access [CLTS data](https://github.com/cldf-clts/clts).
## The command line interface `cldfbench`
Installing the python package will also install a command `cldfbench` available on
the command line:
```shell script
$ cldfbench -h
usage: cldfbench [-h] [--log-level LOG_LEVEL] COMMAND ...

optional arguments:
  -h, --help            show this help message and exit
  --log-level LOG_LEVEL
                        log level [ERROR|WARN|INFO|DEBUG] (default: 20)

available commands:
  Run "COMMAND -h" to get help for a specific command.

  COMMAND
    check               Run generic CLDF checks
    ...
```
As shown above, run `cldfbench -h` to get help, and `cldfbench COMMAND -h` to get
help on individual subcommands, e.g. `cldfbench new -h` to read about the usage
of the `new` subcommand.
### Dataset discovery
Most `cldfbench` commands operate on an existing dataset (unlike `new`, which
creates a new one). Datasets can be discovered in two ways:
1. Via the python module (i.e. the `*.py` file containing the `Dataset` subclass).
   To use this mode of discovery, pass the path to the python module
   as the `DATASET` argument when required by a command.
2. Via [entry point](https://packaging.python.org/specifications/entry-points/) and
dataset ID. To use this mode, specify the name of the entry point as value of
the `--entry-point` option (or use the default name `cldfbench.dataset`) and
the `Dataset.id` as `DATASET` argument.
Discovery via entry point is particularly useful for commands that can operate
on multiple datasets. To select **all** datasets advertising a given entry point,
pass `"_"` (i.e. an underscore) as `DATASET` argument.
## Workflow
For a full example of the `cldfbench` curation workflow, see [the tutorial](doc/tutorial.md).
### Creating a skeleton for a new dataset directory
A directory containing stub entries for a dataset can be created by running
```bash
cldfbench new
```
This will create the following layout (where `<ID>` stands for the chosen dataset ID):
```
<ID>/
├── cldf                 # A stub directory for the CLDF data
│   └── README.md
├── cldfbench_<ID>.py    # The python module, providing the Dataset subclass
├── etc                  # A stub directory for the configuration data
│   └── README.md
├── metadata.json        # The metadata provided to the subcommand, serialized as JSON
├── raw                  # A stub directory for the raw data
│   └── README.md
├── setup.cfg            # Python setup config, providing defaults for test integration
├── setup.py             # Python setup file, making the dataset "installable"
├── test.py              # The python code to run for dataset validation
└── .github              # Integrate the validation with GitHub actions
```
### Implementing CLDF creation
`cldfbench` provides tools to make CLDF creation simple. Still, each dataset is
different, and so each dataset will have to provide its own custom code to do so.
This custom code goes into the `cmd_makecldf` method of the `Dataset` subclass in
the dataset's python module.
(See also the [API documentation of `cldfbench.Dataset`](https://cldfbench.readthedocs.io/en/latest/dataset.html).)
Typically, this code will make use of one or more
[`cldfbench.CLDFSpec`](src/cldfbench/cldf.py) instances, each describing what kind of CLDF dataset to create. A `CLDFSpec` also gives access to a
[`cldfbench.CLDFWriter`](src/cldfbench/cldf.py) instance, which wraps a `pycldf.Dataset`.
The main interfaces to these objects are:
- `cldfbench.Dataset.cldf_specs`: a method returning specifications of all CLDF datasets
that are created by the dataset,
- `cldfbench.Dataset.cldf_writer`: a method returning an initialized `CLDFWriter`
associated with a particular `CLDFSpec`.
`cldfbench` supports several scenarios of CLDF creation:
- The typical use case is turning raw data into a single CLDF dataset. This would
  require instantiating one `CLDFWriter` in the `cmd_makecldf` method, and
  the defaults of `CLDFSpec` will probably be ok. Since this is the most common and
  simplest case, it is supported with some extra "sugar": the initialized `CLDFWriter`
  is available as `args.writer` when `cmd_makecldf` is called.
- But it is also possible to create multiple CLDF datasets:
  - For a dataset containing both lexical and typological data, it may be appropriate
    to create a `Wordlist` and a `StructureDataset`. To do so, one would have to
    call `cldf_writer` twice, passing in an appropriate `CLDFSpec`. Note that if
    both CLDF datasets are created in the same directory, they can share the
    `LanguageTable` - but would have to specify distinct file names for the
    `ParameterTable`, passing distinct values to `CLDFSpec.data_fnames`.
  - When creating multiple datasets of the same CLDF module, e.g. to split a large
    dataset into smaller chunks, care must be taken to also disambiguate the name
    of the metadata file, passing distinct values to `CLDFSpec.metadata_fname`.
When creating CLDF, it is also often useful to have standard reference catalogs
accessible, in particular Glottolog. See the section on [Catalogs](#catalogs) for
a description of how this is supported by `cldfbench`.
### Catalogs
Linking data to reference catalogs is a major goal of CLDF, so `cldfbench`
provides tools to make catalog access and maintenance easier. Catalog data must be
accessible in local clones of the data repositories. `cldfbench` provides three commands:
- `catconfig` to create the clones and make them known through a configuration file,
- `catinfo` to get an overview of the installed catalogs and their versions,
- `catupdate` to update local clones from the upstream repositories.
See https://cldfbench.readthedocs.io/en/latest/catalogs.html
for a list of reference catalogs which are currently supported in `cldfbench`.
**Note:** Cloning [glottolog/glottolog](https://github.com/glottolog/glottolog) - due to the
deeply nested directories of the language classification - results in long path names. On Windows
this may require disabling the
[maximum path length limitation](https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation).
### Curating a dataset on GitHub
One of the design goals of CLDF was to specify a data format that plays well with
version control. Thus, it's natural - and actually recommended - to curate a CLDF
dataset in a version-controlled repository. The most popular way to do this in a
collaborative fashion is by using a [git](https://git-scm.com/) repository hosted on
[GitHub](https://github.com).
The directory layout supported by `cldfbench` caters to this use case in several ways:
- Each directory contains a file `README.md`, which will be rendered as human readable
description when browsing the repository at GitHub.
- The `.github` directory contains the configuration for hooking up a repository with
  [GitHub Actions](https://github.com/features/actions), to provide continuous consistency
  checking of the data.
### Archiving a dataset with Zenodo
Curating a dataset on GitHub also provides a simple way of archiving and publishing
released versions of the data. You can hook up your repository with [Zenodo](https://zenodo.org)
(following [this guide](https://guides.github.com/activities/citable-code/)). Then, Zenodo will
pick up any released package, assign a DOI to it, archive it and make it accessible in the long term.
Some notes:
- Hook-up with Zenodo requires the repository to be public (not private).
- You should consider using an institutional account on GitHub and Zenodo to associate the repository with, since currently only the user account that registered a repository on Zenodo can change the metadata of its releases later on.
- Once released and archived with Zenodo, it's a good idea to add the DOI assigned by Zenodo to the release description on GitHub.
- To make sure a release is picked up by Zenodo, the version number must start with a letter, e.g. "v1.0" - **not** "1.0".
Thus, with a setup as described here, you can make sure you create [FAIR data](https://en.wikipedia.org/wiki/FAIR_data).
## Extending `cldfbench`
`cldfbench` can be extended or built-upon in various ways - typically by customizing core functionality in new python packages. To support particular types of raw data, you might want a custom `Dataset` class, or to support a particular type of CLDF data, you would customize `CLDFWriter`.
In addition to extending `cldfbench` using the standard methods of object-oriented programming, there are two more ways of extending `cldfbench`: commands and dataset templates. Both are implemented using [entry points](https://setuptools.pypa.io/en/latest/userguide/entry_point.html).
Thus, packages which provide custom commands or dataset templates must declare them in metadata that is made known to other Python packages (in particular the `cldfbench` package) **upon installation**.
### Commands
A python package (or a dataset) can provide additional subcommands to be run from `cldfbench`.
For more info see the [`commands.README`](src/cldfbench/commands/README.md).
### Custom dataset templates
A python package can provide alternative dataset templates to be run with `cldfbench new`.
Such templates are implemented by:
- a subclass of `cldfbench.Template`,
- which is advertised using an entry point `cldfbench.scaffold`:
```python
entry_points={
'cldfbench.scaffold': [
'template_name=mypackage.scaffold:DerivedTemplate',
],
},
```