pycldf

Name: pycldf
Version: 1.39.0
Home page: https://github.com/cldf/pycldf
Summary: A python library to read and write CLDF datasets
Upload time: 2024-09-09 13:51:00
Author: Robert Forkel
Requires Python: >=3.8
License: Apache 2.0
Keywords: linguistics
# pycldf

A Python package to read and write [CLDF](http://cldf.clld.org) datasets.

[![Build Status](https://github.com/cldf/pycldf/workflows/tests/badge.svg)](https://github.com/cldf/pycldf/actions?query=workflow%3Atests)
[![Documentation Status](https://readthedocs.org/projects/pycldf/badge/?version=latest)](https://pycldf.readthedocs.io/en/latest/?badge=latest)
[![PyPI](https://img.shields.io/pypi/v/pycldf.svg)](https://pypi.org/project/pycldf)


## Install

Install `pycldf` from [PyPI](https://pypi.org/project/pycldf):
```shell
pip install pycldf
```


## Command line usage

Installing the `pycldf` package will also install a command line interface `cldf`, which provides some sub-commands to manage CLDF datasets.


### Dataset discovery

`cldf` subcommands support dataset discovery as specified in the [standard](https://github.com/cldf/cldf/blob/master/extensions/discovery.md).

So a typical workflow involving a remote dataset could look as follows.

Create a local directory to which to download the dataset (ideally including version info):
```shell
$ mkdir wacl-1.0.0
```

Validating a dataset from Zenodo will implicitly download it, so running
```shell
$ cldf validate https://zenodo.org/record/7322688#rdf:ID=wacl --download-dir wacl-1.0.0/
```
will download the dataset to `wacl-1.0.0`.

Subsequently we can access the data locally for better performance:
```shell
$ cldf stats wacl-1.0.0/#rdf:ID=wacl
<cldf:v1.0:StructureDataset at wacl-1.0.0/cldf>
                          value
------------------------  --------------------------------------------------------------------
dc:bibliographicCitation  Her, One-Soon, Harald Hammarström and Marc Allassonnière-Tang. 2022.
dc:conformsTo             http://cldf.clld.org/v1.0/terms.rdf#StructureDataset
dc:identifier             https://wacl.clld.org
dc:license                https://creativecommons.org/licenses/by/4.0/
dc:source                 sources.bib
dc:title                  World Atlas of Classifier Languages
dcat:accessURL            https://github.com/cldf-datasets/wacl
rdf:ID                    wacl
rdf:type                  http://www.w3.org/ns/dcat#Distribution

                Type              Rows
--------------  --------------  ------
values.csv      ValueTable        3338
parameters.csv  ParameterTable       1
languages.csv   LanguageTable     3338
codes.csv       CodeTable            2
sources.bib     Sources           2000
```

(Note that locating datasets on Zenodo requires installation of [cldfzenodo](https://pypi.org/project/cldfzenodo).)
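
The same discovery mechanism is available from Python. A minimal sketch, assuming the
`get_dataset` helper from the `pycldf.ext.discovery` module (see the API docs):
```python
import pathlib

from pycldf.ext.discovery import get_dataset

# Download (if necessary) and load the dataset identified by the discovery
# URL, storing the downloaded files in wacl-1.0.0/.
dataset = get_dataset(
    'https://zenodo.org/record/7322688#rdf:ID=wacl',
    pathlib.Path('wacl-1.0.0'))
print(dataset)
```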


### Summary statistics

```shell
$ cldf stats mydataset/Wordlist-metadata.json 
<cldf:v1.0:Wordlist at mydataset>

Path                   Type         Rows
---------------------  ---------  ------
forms.csv              FormTable       1
mydataset/sources.bib  Sources         1
```


### Validation

Arguably the most important functionality of `pycldf` is validating CLDF datasets.

By default, data files are read in strict mode, i.e. invalid rows will result in an exception
being raised. To validate a data file, it is read in validating mode, where problems are reported
as warnings.

For example, the following output is generated

```sh
$ cldf validate mydataset/forms.csv
WARNING forms.csv: duplicate primary key: (u'1',)
WARNING forms.csv:4:Source missing source key: Mei2005
```

when reading the file

```
ID,Language_ID,Parameter_ID,Value,Segments,Comment,Source
1,abcd1234,1277,word,,,Meier2005[3-7]
1,stan1295,1277,hand,,,Meier2005[3-7]
2,stan1295,1277,hand,,,Mei2005[3-7]
```
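
Validation can also be run from Python, via the `Dataset.validate` method. A minimal
sketch, assuming a metadata-free dataset read with `Dataset.from_data`:
```python
import logging

from pycldf import Dataset

# Read the metadata-free dataset from its data file ...
dataset = Dataset.from_data('mydataset/forms.csv')

# ... and validate it. With a logger passed in, problems are reported as
# warnings; without one, the first problem raises an exception.
if not dataset.validate(log=logging.getLogger('mydataset')):
    print('invalid dataset!')
```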


### Extracting human readable metadata

The information in a CLDF metadata file can be converted to [markdown](https://en.wikipedia.org/wiki/Markdown)
(a human-readable markup language) by running
```shell
cldf markdown PATH/TO/metadata.json
```
A typical use of this feature is to create a `README.md` for your dataset
(which, when uploaded to e.g. GitHub, will be rendered nicely in the browser).


### Downloading media listed in a dataset's MediaTable

Typically, CLDF datasets only reference media items. The *MediaTable* provides enough information, though,
to download and save an item's content. This can be done by running
```shell
cldf downloadmedia PATH/TO/metadata.json PATH/TO/DOWNLOAD/DIR
```
To minimize bandwidth usage, relevant items can be filtered by passing selection criteria of the form
`COLUMN_NAME=SUBSTRING` as optional arguments. E.g. downloads could be limited to audio files by passing
`Media_Type=audio/` (provided `Media_Type` is the name of the column with `propertyUrl`
http://cldf.clld.org/v1.0/terms.rdf#mediaType).
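
The same filtering can be done programmatically. A rough sketch, assuming the
`MediaTable` wrapper from the `pycldf.media` module (exact attribute names may differ,
see the API docs):
```python
import pathlib

from pycldf import Dataset
from pycldf.media import MediaTable

dataset = Dataset.from_metadata('PATH/TO/metadata.json')
target = pathlib.Path('PATH/TO/DOWNLOAD/DIR')

# Save only the items whose media type looks like audio.
for item in MediaTable(dataset):
    if str(item.mimetype).startswith('audio/'):
        item.save(target)
```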


### Converting a CLDF dataset to an SQLite database

A very useful feature of CSVW in general and CLDF in particular is that it
provides enough metadata for a set of CSV files to load them into a relational
database, including relations between tables. This can be done by running the
`cldf createdb` command:

```shell
$ cldf createdb -h
usage: cldf createdb [-h] [--infer-primary-keys] DATASET SQLITE_DB_PATH

Load a CLDF dataset into a SQLite DB

positional arguments:
  DATASET               Dataset specification (i.e. path to a CLDF metadata
                        file or to the data file)
  SQLITE_DB_PATH        Path to the SQLite db file
```

For a specification of the resulting database schema refer to the documentation in
[`src/pycldf/db.py`](src/pycldf/db.py).
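
The same conversion is available from Python via the `pycldf.db.Database` class
(see also the SQL section below). A minimal sketch:
```python
from pycldf import Dataset
from pycldf.db import Database

dataset = Dataset.from_metadata('mydataset/Wordlist-metadata.json')

# Create the SQLite file and load all tables, including foreign keys.
db = Database(dataset, fname='mydataset.sqlite')
db.write_from_tg()

# The data can now be queried with plain SQL.
print(db.query('select count(*) from FormTable'))
```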


## Python API

For a detailed documentation of the Python API, refer to the
[docs on ReadTheDocs](https://pycldf.readthedocs.io/en/latest/index.html).


### Reading CLDF

As an example, we'll read data from [WALS Online, v2020](https://github.com/cldf-datasets/wals/tree/v2020):

```python
>>> from pycldf import Dataset
>>> wals2020 = Dataset.from_metadata('https://raw.githubusercontent.com/cldf-datasets/wals/v2020/cldf/StructureDataset-metadata.json')
```

For exploratory purposes, accessing a remote dataset over HTTP is fine. But for real analysis, you'd want to download
the datasets first and then access them locally, passing a local file path to `Dataset.from_metadata`.

Let's look at what we got:
```python
>>> print(wals2020)
<cldf:v1.0:StructureDataset at https://raw.githubusercontent.com/cldf-datasets/wals/v2020/cldf/StructureDataset-metadata.json>
>>> for c in wals2020.components:
...     print(c)
...
ValueTable
ParameterTable
CodeTable
LanguageTable
ExampleTable
```
As expected, we got a [StructureDataset](https://github.com/cldf/cldf/tree/master/modules/StructureDataset), and in
addition to the required `ValueTable`, we also have a couple more [components](https://github.com/cldf/cldf#cldf-components).

We can investigate the values using [`pycldf`'s ORM](src/pycldf/orm.py) functionality, i.e. mapping rows in the CLDF
data files to convenient Python objects. (Take note of the limitations described in [orm.py](src/pycldf/orm.py), though.)

```python
>>> for value in wals2020.objects('ValueTable'):
...     break
...
>>> value
<pycldf.orm.Value id="81A-aab">
>>> value.language
<pycldf.orm.Language id="aab">
>>> value.language.cldf
Namespace(glottocode=None, id='aab', iso639P3code=None, latitude=Decimal('-3.45'), longitude=Decimal('142.95'), macroarea=None, name='Arapesh (Abu)')
>>> value.parameter
<pycldf.orm.Parameter id="81A">
>>> value.parameter.cldf
Namespace(description=None, id='81A', name='Order of Subject, Object and Verb')
>>> value.references
(<Reference Nekitel-1985[94]>,)
>>> value.references[0]
<Reference Nekitel-1985[94]>
>>> print(value.references[0].source.bibtex())
@misc{Nekitel-1985,
    olac_field = {syntax; general_linguistics; typology},
    school     = {Australian National University},
    title      = {Sociolinguistic Aspects of Abu', a Papuan Language of the Sepik Area, Papua New Guinea},
    wals_code  = {aab},
    year       = {1985},
    author     = {Nekitel, Otto I. M. S.}
}
```

If performance is important, you can just read rows of data as Python `dict`s, in which case the references between
tables must be resolved "by hand":

```python
>>> params = {r['id']: r for r in wals2020.iter_rows('ParameterTable', 'id', 'name')}
>>> for v in wals2020.iter_rows('ValueTable', 'parameterReference'):
...     print(params[v['parameterReference']]['name'])
...     break
...
Order of Subject, Object and Verb
```

Note that we passed names of CLDF terms (e.g. `id`) to `Dataset.iter_rows`, specifying which columns we want to access
by CLDF term, rather than by the column names they are mapped to in the dataset.
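
For illustration, the returned `dict`s carry both keys, assuming WALS maps the
`parameterReference` term to a column named `Parameter_ID`:

```python
>>> for v in wals2020.iter_rows('ValueTable', 'parameterReference'):
...     break
...
>>> v['parameterReference'] == v['Parameter_ID']
True
```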


### Writing CLDF

**Warning:** Writing CLDF with `pycldf` does not automatically result in valid CLDF!
It does result in data that can be checked via `cldf validate` (see [above](#validation)),
though, so you should always validate after writing (see the sketch below).

```python
from pycldf import Wordlist, Source

dataset = Wordlist.in_dir('mydataset')
dataset.add_sources(Source('book', 'Meier2005', author='Hans Meier', year='2005', title='The Book'))
dataset.write(FormTable=[
    {
        'ID': '1', 
        'Form': 'word', 
        'Language_ID': 'abcd1234', 
        'Parameter_ID': '1277', 
        'Source': ['Meier2005[3-7]'],
    }])
```

results in
```
$ ls -1 mydataset/
forms.csv
sources.bib
Wordlist-metadata.json
```

- `mydataset/forms.csv`
```
ID,Language_ID,Parameter_ID,Value,Segments,Comment,Source
1,abcd1234,1277,word,,,Meier2005[3-7]
```
- `mydataset/sources.bib`
```bibtex
@book{Meier2005,
    author = {Meier, Hans},
    year = {2005},
    title = {The Book}
}

```
- `mydataset/Wordlist-metadata.json`
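
As announced in the warning above, we should now validate what we just wrote, e.g. by
reading the dataset back from its metadata file:
```python
from pycldf import Dataset

# Round-trip check: read the dataset we just wrote and validate it.
# validate() raises on problems (or logs them, if a logger is passed).
written = Dataset.from_metadata('mydataset/Wordlist-metadata.json')
written.validate()
```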


### Advanced writing

To add predefined CLDF components to a dataset, use the `add_component` method:
```python
from pycldf import StructureDataset, term_uri

dataset = StructureDataset.in_dir('mydataset')
dataset.add_component('ParameterTable')
dataset.write(
    ValueTable=[{'ID': '1', 'Language_ID': 'abc', 'Parameter_ID': '1', 'Value': 'x'}],
    ParameterTable=[{'ID': '1', 'Name': 'Grammatical Feature'}])
```

It is also possible to add generic tables:
```python
dataset.add_table('contributors.csv', term_uri('id'), term_uri('name'))
```
which can also be linked to other tables:
```python
dataset.add_columns('ParameterTable', 'Contributor_ID')
dataset.add_foreign_key('ParameterTable', 'Contributor_ID', 'contributors.csv', 'ID')
```
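
With the extra table and foreign key in place, rows for all three tables can be written
in one go. A sketch, assuming `Dataset.write` also accepts table URLs as keys (the
contributor data is made up):
```python
dataset.write(
    ValueTable=[{'ID': '1', 'Language_ID': 'abc', 'Parameter_ID': '1', 'Value': 'x'}],
    ParameterTable=[{'ID': '1', 'Name': 'Grammatical Feature', 'Contributor_ID': 'c1'}],
    **{'contributors.csv': [{'ID': 'c1', 'Name': 'Jane Doe'}]})
```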

### Addressing tables and columns

Tables in a dataset can be referenced using a `Dataset`'s `__getitem__` method,
passing
- a full CLDF Ontology URI for the corresponding component,
- the local name of the component in the CLDF Ontology,
- the `url` of the table.

Columns in a dataset can be referenced using a `Dataset`'s `__getitem__` method,
passing a tuple `(<TABLE>, <COLUMN>)` where `<TABLE>` specifies a table as explained
above and `<COLUMN>` is
- a full CLDF Ontology URI used as `propertyUrl` of the column,
- the `name` property of the column.
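
For example, with the StructureDataset created above, the following all address the
same table or column:
```python
# Three ways to address the ValueTable component:
t1 = dataset['ValueTable']                                      # local name
t2 = dataset['http://cldf.clld.org/v1.0/terms.rdf#ValueTable']  # ontology URI
t3 = dataset['values.csv']                                      # table url

# Two ways to address its language reference column:
c1 = dataset['ValueTable', 'http://cldf.clld.org/v1.0/terms.rdf#languageReference']
c2 = dataset['ValueTable', 'Language_ID']                       # column name
```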

See also https://pycldf.readthedocs.io/en/latest/dataset.html#accessing-schema-objects-components-tables-columns-etc


## Object oriented access to CLDF data

The [`pycldf.orm`](src/pycldf/orm.py) module implements functionality
to access CLDF data via an [ORM](https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping).
See https://pycldf.readthedocs.io/en/latest/orm.html for
details.


## Accessing CLDF data via SQL

The [`pycldf.db`](src/pycldf/db.py) module implements functionality
to load CLDF data into a [SQLite](https://sqlite.org) database. See https://pycldf.readthedocs.io/en/latest/ext_sql.html
for details.


## See also
- https://github.com/frictionlessdata/datapackage-py



            
