lcpcli

Name: lcpcli
Version: 0.2.7
Summary: Helper for converting CONLLU files and uploading the corpus to LiRI Corpus Platform (LCP)
Upload time: 2025-09-08 09:54:44
Requires Python: >=3.10
License: MIT
Keywords: conll, tei, vert, corpora, corpus, linguistics
# LCP CLI module

> Command-line tool for converting CONLLU files and uploading the corpus to LCP

## Installation

Make sure you have Python 3.10 or newer with `pip` installed in your local environment, then run:

```bash
pip install lcpcli==0.2.7
```

## Usage

**Example:**

Corpus conversion:

```bash
lcpcli -i ~/conll_ext/ -o ~/upload/
```

Data upload:

```bash
lcpcli -c ~/upload/ -k $API_KEY -s $API_SECRET -p my_project --live
```

Including `--live` points the upload to the live instance of LCP. Leave it out if you want to add a corpus to an instance of LCP running on `localhost`.

**Help:**

```bash
lcpcli --help
```

`lcpcli` takes a corpus of CoNLL-U (Plus) files and imports it into a project created in an LCP instance, such as _catchphrase_.

Besides the standard token-level CoNLL-U fields (`form`, `lemma`, `upos`, `xpos`, `feats`, `head`, `deprel`, `deps`) one can also provide document- and sentence-level annotations using comment lines in the files (see [the CoNLL-U Format section](#conll-u-format)).

### Example corpus

`lcpcli` ships with an example one-video "corpus": the video is an excerpt from the CC-BY 3.0 "Big Buck Bunny" video (© 2008, Blender Foundation / www.bigbuckbunny.org) and the "transcription" is a sample of the Universal Declaration of Human Rights.

To populate a folder with the example data, use this command:

```bash
lcpcli --example /destination/folder/
```

This will create a subfolder named *free_video_corpus* in */destination/folder*, which itself contains two subfolders: *input* and *output*. The *input* subfolder contains four files:

 - *doc.conllu* is a CoNLL-U Plus file that contains the textual data, with time alignments in seconds at the token level (`start` and `end` in the MISC column), the segment level (`# start = ` and `# end = ` comments) and the document level (`# newdoc start =` and `# newdoc end =` comments)
 - *namedentity.csv* is a comma-separated lookup file with information about the named entities: each row associates an ID reported in the `namedentity` token cells of *doc.conllu* with two attributes, `type` and `form`
 - *shot.csv* is a comma-separated file that defines time-aligned annotations about the shots in the video in its `view` column; the `start` and `end` columns are timestamps, in seconds, relative to the document referenced in the `doc_id` column (hedged sketches of both `.csv` files follow this list)
 - *meta.json* is a JSON file that defines the structure of the corpus, used both for pre-processing the data before upload and when adding the data to the LCP database; read on for details on the definitions in this file
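
For orientation, here is a hedged sketch of what the two `.csv` files could look like. The column layout follows the descriptions above, but the exact header of the ID column and all values are invented for illustration:

```csv
namedentity_id,type,form
1,PER,Eleanor Roosevelt
2,ORG,United Nations
```

```csv
doc_id,start,end,view
1,0.0,4.2,wide
1,4.2,9.75,close-up
```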

### CoNLL-U Format

The CoNLL-U format is documented at: https://universaldependencies.org/format.html

The LCP CLI converter treats all comment lines of the form `# newdoc KEY = VALUE` as document-level attributes.
This means that if a CoNLL-U file contains the line `# newdoc author = Jane Doe`, then in LCP all the sentences from this file will be associated with a document whose `meta` attribute will contain `author: 'Jane Doe'`.

All other comment lines following the format `# key = value` will add an entry to the `meta` attribute of the _segment_ corresponding to the sentence below that line (i.e. not at the document level).

The key-value pairs in the `MISC` column of a token line will go in the `meta` attribute of the corresponding token, with the exception of these key-value combinations:
 - `SpaceAfter=Yes` vs. `SpaceAfter=No` (case-sensitive) controls whether the token will be represented with a trailing space character in the database
 - `start=n.m|end=o.p` (case-sensitive) will align tokens, segments (sentences) and documents along a temporal axis, where `n.m` and `o.p` should be floating-point values in seconds
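
To make these conventions concrete, here is a minimal, hypothetical CoNLL-U fragment (all values invented) that combines document-level comments, segment-level comments and time-aligned MISC fields:

```
# newdoc id = doc1
# newdoc author = Jane Doe
# newdoc start = 0.0
# newdoc end = 2.1
# sent_id = 1
# start = 0.0
# end = 2.1
1	Hello	hello	INTJ	UH	_	0	root	_	start=0.0|end=0.9|SpaceAfter=No
2	,	,	PUNCT	,	_	1	punct	_	start=0.9|end=1.0
3	world	world	NOUN	NN	_	1	vocative	_	start=1.1|end=2.1
```

In LCP, the document built from this file would carry `author: 'Jane Doe'` in its `meta` attribute, and the segment and tokens would be aligned on the time axis by their respective `start`/`end` values.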

See below for how to report all the attributes in the template `.json` file.

#### CoNLL-U Plus

CoNLL-U Plus is an extension of the CoNLL-U format documented at: https://universaldependencies.org/ext-format.html

If your files start with a comment line of the form `# global.columns = ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC`, `lcpcli` will treat them as CoNLL-U Plus files and process the columns according to the names you set in that line.
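
As a hypothetical illustration, a corpus whose tokens carry an extra named-entity column (as in the example corpus above; the exact column name is up to you, as long as it matches your template and lookup files) could start with:

```
# global.columns = ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC NAMEDENTITY
```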


#### Media files

If your corpus includes media files, your `.json` template should declare them under a `mediaSlots` key in `meta`, e.g.:

```json
"meta": {
    "name": "Free Single-Video Corpus",
    "author": "LiRI",
    "date": "2024-06-13",
    "version": 1,
    "corpusDescription": "Single, open-source video with annotated shots and a placeholder text stream from the Universal Declaration of Human Rights annotated with named entities",
    "mediaSlots": {
        "video": {
            "mediaType": "video",
            "isOptional": false
        }
    }
},
```

Your CoNLL-U file(s) should accordingly report the name of each document's media file in a comment, like so:

```
# newdoc video = bunny.mp4
```

The `.json` template should also define a top-level key named `tracks` to control which annotations are represented along the time axis. For example, the following tells the interface to display separate timeline tracks for the shot, named entity and segment annotations, with the segments subdivided into as many tracks as there are distinct values of their `speaker` attribute:

```json
"tracks": {
    "layers": {
        "Shot": {},
        "NamedEntity": {},
        "Segment": {
            "split": [
                "speaker"
            ]
        }
    }
}
```

Finally, your **output** corpus folder should include a subfolder named `media` in which all the referenced media files have been placed.
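
As a sketch, assuming the single media file from the example corpus, the output folder would be laid out along these lines:

```
output/
├── ...            # files produced by the conversion step
└── media/
    └── bunny.mp4
```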


#### Attribute types

The values of each attribute (on tokens, segments, documents or at any other level) have a **type**; the most common types are `text`, `number` and `categorical`. The attributes must be reported in the template `.json` file along with their type (see the example in the **Convert and Upload** section).

 - `text` vs. `categorical`: while both types correspond to alphanumerical values, `categorical` is meant for attributes that have a limited number of possible values (typically fewer than 100 distinct values) of limited length (as a rule of thumb, up to 50 characters each). There are no such limits on values of attributes of type `text`. When a user starts typing a constraint on an attribute of type `categorical`, the DQD editor will offer autocompletion suggestions. Attributes of type `text` have their values listed in a dedicated table (`lcpcli`'s conversion step produces corresponding `.csv` files), so a query that expresses a constraint on an attribute will be slower if that attribute is of type `text` rather than `categorical`.

 - the type `labels` (with an `s` at the end) corresponds to a set of labels that users will be able to constrain in DQD using the `contain` keyword: for example, if an attribute named `genre` is of type `labels`, the user could write a constraint like `genre contain 'drama'` or `genre !contain 'comedy'`. The values of attributes of type `labels` should be one-line strings, with each value separated by a comma (`,`) character (as in, e.g., `# newdoc genre = drama, romance, coming of age, fiction`); as a consequence, no label can contain the character `,`.

 - the type `dict` corresponds to key-value pairs as represented in JSON.

 - the type `date` requires values to be formatted in a way that can be parsed by PostgreSQL.
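
As a hypothetical sketch (attribute names invented, following the declaration shape used for the Token attributes in the template below), attributes of these types could be reported like this:

```json
"attributes": {
    "genre": {
        "isGlobal": false,
        "type": "labels",
        "nullable": true
    },
    "extra_info": {
        "isGlobal": false,
        "type": "dict",
        "nullable": true
    },
    "published": {
        "isGlobal": false,
        "type": "date",
        "nullable": true
    }
}
```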


### Convert and Upload

1. Create a directory in which you have all your properly formatted CoNLL-U files.

2. In the same directory, create a template `.json` file that describes your corpus structure (see above about the `attributes` key on `Document` and `Segment`), for example:

```json
{
    "meta": {
        "name": "Free Single-Video Corpus",
        "author": "LiRI",
        "date": "2024-06-13",
        "version": 1,
        "corpusDescription": "Single, open-source video with annotated shots and a placeholder text stream from the Universal Declaration of Human Rights annotated with named entities",
        "mediaSlots": {
            "video": {
                "mediaType": "video",
                "isOptional": false
            }
        }
    },
    "firstClass": {
        "document": "Document",
        "segment": "Segment",
        "token": "Token"
    },
    "layer": {
        "Token": {
            "abstract": false,
            "layerType": "unit",
            "anchoring": {
                "location": false,
                "stream": true,
                "time": true
            },
            "attributes": {
                "form": {
                    "isGlobal": false,
                    "type": "text",
                    "nullable": true
                },
                "lemma": {
                    "isGlobal": false,
                    "type": "text",
                    "nullable": false
                },
                "upos": {
                    "isGlobal": true,
                    "type": "categorical",
                    "nullable": true
                },
                "xpos": {
                    "isGlobal": false,
                    "type": "categorical",
                    "nullable": true
                },
                "ufeat": {
                    "isGlobal": false,
                    "type": "dict",
                    "nullable": true
                }
            }
        },
        "DepRel": {
            "abstract": true,
            "layerType": "relation",
            "attributes": {
                "udep": {
                    "type": "categorical",
                    "isGlobal": true,
                    "nullable": false
                },
                "source": {
                    "name": "dependent",
                    "entity": "Token",
                    "nullable": false
                },
                "target": {
                    "name": "head",
                    "entity": "Token",
                    "nullable": true
                },
                "left_anchor": {
                    "type": "number",
                    "nullable": false
                },
                "right_anchor": {
                    "type": "number",
                    "nullable": false
                }
            }
        },
        "NamedEntity": {
            "abstract": false,
            "layerType": "span",
            "contains": "Token",
            "anchoring": {
                "location": false,
                "stream": true,
                "time": false
            },
            "attributes": {
                "form": {
                    "isGlobal": false,
                    "type": "text",
                    "nullable": false
                },
                "type": {
                    "isGlobal": false,
                    "type": "categorical",
                    "nullable": true
                }
            }
        },
        "Shot": {
            "abstract": false,
            "layerType": "span",
            "anchoring": {
                "location": false,
                "stream": false,
                "time": true
            },
            "attributes": {
                "view": {
                    "isGlobal": false,
                    "type": "categorical",
                    "nullable": false
                }
            }
        },
        "Segment": {
            "abstract": false,
            "layerType": "span",
            "contains": "Token",
            "attributes": {
                "meta": {
                    "text": {
                        "type": "text"
                    },
                    "start": {
                        "type": "text"
                    },
                    "end": {
                        "type": "text"
                    }
                }
            }
        },
        "Document": {
            "abstract": false,
            "contains": "Segment",
            "layerType": "span",
            "attributes": {
                "meta": {
                    "audio": {
                        "type": "text",
                        "isOptional": true
                    },
                    "video": {
                        "type": "text",
                        "isOptional": true
                    },
                    "start": {
                        "type": "number"
                    },
                    "end": {
                        "type": "number"
                    },
                    "name": {
                        "type": "text"
                    }
                }
            }
        }
    },
    "tracks": {
        "layers": {
            "Shot": {},
            "Segment": {},
            "NamedEntity": {}
        }
    }
}
```

3. If your corpus defines a character-anchored entity type such as named entities, make sure you also include a properly named and formatted CSV file for it in the directory.

4. Visit an LCP instance (e.g. _catchphrase_) and create a new project if you don't already have one for your corpus.

5. Retrieve the API key and secret for your project by clicking the "Create API Key" button.

6. Once you have your API key and secret, you can start converting and uploading your corpus by running the following command:

```bash
lcpcli -i $CONLLU_FOLDER -o $OUTPUT_FOLDER -k $API_KEY -s $API_SECRET -p $PROJECT_NAME --live
```

- `$CONLLU_FOLDER` should point to the folder that contains your CoNLL-U files
- `$OUTPUT_FOLDER` should point to *another* folder that will be used to store the converted files to be uploaded
- `$API_KEY` is the key you copied from your project on LCP (still visible when you visit the page)
- `$API_SECRET` is the secret you copied from your project on LCP (only visible upon API Key creation)
- `$PROJECT_NAME` is the name of the project exactly as displayed on LCP -- it is case-sensitive, and space characters should be escaped
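
For instance, with made-up folders and a project named "My Project" (note the escaped space):

```bash
lcpcli -i ~/corpora/my_corpus/ -o ~/corpora/my_corpus_output/ -k $API_KEY -s $API_SECRET -p My\ Project --live
```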

            
