conllu

- Name: conllu
- Version: 6.0.0
- Summary: CoNLL-U Parser parses a CoNLL-U formatted string into a nested python dictionary
- Upload time: 2024-10-13 21:44:53
- Requires Python: >=3.8
- License: The MIT License (MIT), Copyright (c) 2016 Emil Stenström
- Keywords: conllu, conll, conll-u, parser, nlp
# CoNLL-U Parser

**CoNLL-U Parser** parses a [CoNLL-U formatted](http://universaldependencies.org/format.html) string into a nested python dictionary. CoNLL-U is often the output of natural language processing tasks.

## Why should you use conllu?

- It's simple: ~300 lines of code.
- It has no dependencies.
- Full typing support, so your editor can do autocompletion.
- Nice set of tests with CI setup: [![Build](https://github.com/EmilStenstrom/conllu/workflows/Run%20tests%20for%20all%20supported%20python%20versions/badge.svg)](https://github.com/EmilStenstrom/conllu/actions?query=workflow%3A%22Run+tests+for+all+supported+python+versions%22)
- It has 100% test branch coverage (and has undergone [mutation testing](https://github.com/boxed/mutmut/))
- It has [![lots of downloads](http://pepy.tech/badge/conllu)](http://pepy.tech/project/conllu)

## Installation

Note: As of conllu 5.0, Python 3.8 is required to install conllu. See [Notes on updating from 4.0 to 5.0](#notes-on-updating-from-40-to-50).

```bash
pip install conllu
```

Or, if you are using [conda](https://conda.io/docs/):

```bash
conda install -c conda-forge conllu
```

## Notes on updating from 4.0 to 5.0

Conllu version 5.0 drops support for Python 3.6 and 3.7 and requires Python 3.8 at a minimum. If you need support for older versions of Python, you can always pin your install to an old version of conllu: `pip install conllu==4.5.3`.

## Notes on updating from 3.0 to 4.0

Conllu version 4.0 drops support for Python 2 and all versions earlier than Python 3.6. If you need support for older versions of Python, you can always pin your install to an old version of conllu: `pip install conllu==3.1.1`.

## Notes on updating from 2.0 to 3.0

The Universal Dependencies 2.0 release renamed two of the fields: xpostag -> xpos and upostag -> upos. Version 3.0 of conllu handles this by aliasing the previous names to the new names. This means you can use xpos/upos or xpostag/upostag; they will both return the same thing. This does change the public API slightly, so I've upped the major version to 3.0, but I've taken care to ensure you most likely DO NOT have to update your code when you update to 3.0.
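As a quick illustration (a minimal sketch; any parsed token will do), both names read the same value:

```python
>>> from conllu import parse
>>> token = parse("1  The  the  DET  DT  _  4  det  _  _")[0][0]
>>> token["upos"] == token["upostag"]  # the old name is an alias for the new one
True
```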

## Notes on updating from 0.1 to 1.0

I don't like breaking backwards compatibility, but to be able to add new features I felt I had to. This means that updating from 0.1 to 1.0 *might* require code changes. Here's a guide on [how to upgrade to 1.0](https://github.com/EmilStenstrom/conllu/wiki/Migrating-from-0.1-to-1.0).

## Example usage

At the top level, conllu provides two functions, `parse` and `parse_tree`. The first parses sentences and returns a flat list. The other returns a nested tree structure. Let's go through them one by one.

## Use parse() to parse into a list of sentences

```python
>>> from conllu import parse
>>> 
>>> data = """
... # text = The quick brown fox jumps over the lazy dog.
... 1   The     the    DET    DT   Definite=Def|PronType=Art   4   det     _   _
... 2   quick   quick  ADJ    JJ   Degree=Pos                  4   amod    _   _
... 3   brown   brown  ADJ    JJ   Degree=Pos                  4   amod    _   _
... 4   fox     fox    NOUN   NN   Number=Sing                 5   nsubj   _   _
... 5   jumps   jump   VERB   VBZ  Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin   0   root    _   _
... 6   over    over   ADP    IN   _                           9   case    _   _
... 7   the     the    DET    DT   Definite=Def|PronType=Art   9   det     _   _
... 8   lazy    lazy   ADJ    JJ   Degree=Pos                  9   amod    _   _
... 9   dog     dog    NOUN   NN   Number=Sing                 5   nmod    _   SpaceAfter=No
... 10  .       .      PUNCT  .    _                           5   punct   _   _
...
... """
```

Now you have the data in a variable called `data`. Let's parse it:

```python
>>> sentences = parse(data)
>>> sentences
[TokenList<The, quick, brown, fox, jumps, over, the, lazy, dog, ., metadata={text: "The quick brown fox jumps over the lazy dog."}>]
```

<blockquote>

**Advanced usage**: If you have a lot of sentences (say, over a megabyte of them) to parse at once, you can avoid loading them all into memory by using `parse_incr()` instead of `parse`. It takes an opened file and returns a generator instead of a list, so you need to either iterate over it or call `list()` on it to get the TokenLists out. Here's how you would use it:

```python
from io import open
from conllu import parse_incr

data_file = open("huge_file.conllu", "r", encoding="utf-8")
for tokenlist in parse_incr(data_file):
    print(tokenlist)
```
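Since `parse_incr()` consumes the file lazily, the file has to stay open while you iterate. A `with` block (an equivalent sketch, not something conllu requires) closes it for you when you're done:

```python
from conllu import parse_incr

# Equivalent sketch: the context manager closes the file once we've drained the generator.
with open("huge_file.conllu", "r", encoding="utf-8") as data_file:
    token_lists = list(parse_incr(data_file))
```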

For most files, `parse` works fine.
</blockquote>

Since one CoNLL-U file usually contains multiple sentences, `parse()` always returns a list of sentences. Each sentence is represented by a TokenList.

```python
>>> sentence = sentences[0]
>>> sentence
TokenList<The, quick, brown, fox, jumps, over, the, lazy, dog, ., metadata={text: "The quick brown fox jumps over the lazy dog."}>
```

The TokenList supports indexing, so you can get the first token, represented by an ordered dictionary, like this:

```python
>>> token = sentence[0]
>>> token
{'id': 1,
     'form': 'The',
     'lemma': 'the',
     ...}
>>> token["form"]
'The'
```

### New in conllu 2.0: `filter()` a TokenList

```python
>>> sentence = sentences[0]
>>> sentence
TokenList<The, quick, brown, fox, jumps, over, the, lazy, dog, ., metadata={text: "The quick brown fox jumps over the lazy dog."}>
>>> sentence.filter(form="quick")
TokenList<quick>
```

By using `filter(field1__field2=value)` you can filter based on subelements further down in a parsed token.

```python
>>> sentence.filter(feats__Degree="Pos")
TokenList<quick, brown, lazy>
```

Filters can also be chained (meaning you can do `sentence.filter(...).filter(...)`), and filtering on multiple properties at the same time (`sentence.filter(field1=value1, field2=value2)`) means that ALL properties must match.
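For example, applying both styles to the sentence above (expected output, following the AND semantics just described):

```python
>>> sentence.filter(upos="ADJ", feats__Degree="Pos")  # both conditions must match
TokenList<quick, brown, lazy>
>>> sentence.filter(upos="ADJ").filter(form="lazy")   # chained filters narrow the result
TokenList<lazy>
```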

#### New in conllu 4.3: `filter()` a TokenList by lambda

You can also filter using a lambda function as the value. This is useful if you, for instance, want to keep only the tokens with integer IDs:

```python
>>> from conllu.models import TokenList, Token
>>> sentence2 = TokenList([
...    Token(id=(1, "-", 2), form="It's"),
...    Token(id=1, form="It"),
...    Token(id=2, form="is"),
... ])
>>> sentence2
TokenList<It's, It, is>
>>> sentence2.filter(id=lambda x: type(x) is int)
TokenList<It, is>
```

### Writing data back to a TokenList

If you want to change your CoNLL-U file, there are a couple of convenience methods to know about.

You can add a new token by simply appending a dictionary with the fields you want to a TokenList:

```python
>>> sentence3 = TokenList([
...    {"id": 1, "form": "Lazy"},
...    {"id": 2, "form": "fox"},
... ])
>>> sentence3
TokenList<Lazy, fox>
>>> sentence3.append({"id": 3, "form": "box"})
>>> sentence3
TokenList<Lazy, fox, box>
```

Changing a sentence just means indexing into it and setting the value you want:

```python
>>> sentence4 = TokenList([
...    {"id": 1, "form": "Lazy"},
...    {"id": 2, "form": "fox"},
... ])
>>> sentence4[1]["form"] = "crocodile"
>>> sentence4
TokenList<Lazy, crocodile>
>>> sentence4[1] = {"id": 2, "form": "elephant"}
>>> sentence4
TokenList<Lazy, elephant>
```

If you omit a field when passing in a dict, conllu will fill in a "_" for the missing values.

```python
>>> sentences = parse("1  The")
>>> sentences[0].append({"id": 2})
>>> sentences[0]
TokenList<The, _>
```

### Parse metadata from a CoNLL-U file

Each sentence can also have metadata in the form of comments before the sentence starts. This is available in a property on the TokenList called `metadata`.

```python
>>> sentence.metadata
{'text': 'The quick brown fox jumps over the lazy dog.'}
```

### Turn a TokenList back into CoNLL-U

If you ever want to get your CoNLL-U formatted text back (maybe after changing something?), use the `serialize()` method:

```python
>>> print(sentence.serialize())
# text = The quick brown fox jumps over the lazy dog.
1   The     the     DET    DT   Definite=Def|PronType=Art   4   det    _   _
2   quick   quick   ADJ    JJ   Degree=Pos                  4   amod   _   _
3   brown   brown   ADJ    JJ   Degree=Pos                  4   amod   _   _
4   fox     fox     NOUN   NN   Number=Sing                 5   nsubj  _   _
5   jumps   jump    VERB   VBZ  Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin   0   root   _   _
6   over    over    ADP    IN   _                           9   case   _   _
7   the     the     DET    DT   Definite=Def|PronType=Art   9   det    _   _
8   lazy    lazy    ADJ    JJ   Degree=Pos                  9   amod   _   _
9   dog     dog     NOUN   NN   Number=Sing                 5   nmod   _   SpaceAfter=No
10  .       .       PUNCT  .    _                           5   punct  _   _
```

### Turn a TokenList into a TokenTree (see below)

You can also convert a TokenList to a TokenTree by using `to_tree`:

```python
>>> sentence.to_tree()
TokenTree<token={id=5, form=jumps}, children=[...]>
```

That's it!

## Use parse_tree() to parse into a list of dependency trees

Sometimes you're interested in the tree structure that hides in the `head` column of a CoNLL-U file. When this is the case, use `parse_tree` to get a nested structure representing the sentence.

```python
>>> from conllu import parse_tree
>>> sentences = parse_tree(data)
>>> sentences
[TokenTree<...>]
```

<blockquote>

**Advanced usage**: If you have a lot of sentences (say, over a megabyte of them) to parse at once, you can avoid loading them all into memory by using `parse_tree_incr()` instead of `parse_tree`. It takes an opened file and returns a generator instead of a list, so you need to either iterate over it or call `list()` on it to get the TokenTrees out. Here's how you would use it:

```python
from io import open
from conllu import parse_tree_incr

data_file = open("huge_file.conllu", "r", encoding="utf-8")
for tokentree in parse_tree_incr(data_file):
    print(tokentree)
```

</blockquote>

Since one CoNLL-U file usually contains multiple sentences, `parse_tree()` always returns a list of sentences. Each sentence is represented by a TokenTree.

```python
>>> root = sentences[0]
>>> root
TokenTree<token={id=5, form=jumps}, children=[...]>
```

To quickly visualize the tree structure you can call `print_tree` on a TokenTree.

```python
>>> root.print_tree()
(deprel:root) form:jumps lemma:jump upos:VERB [5]
    (deprel:nsubj) form:fox lemma:fox upos:NOUN [4]
        (deprel:det) form:The lemma:the upos:DET [1]
        (deprel:amod) form:quick lemma:quick upos:ADJ [2]
        (deprel:amod) form:brown lemma:brown upos:ADJ [3]
    (deprel:nmod) form:dog lemma:dog upos:NOUN [9]
        (deprel:case) form:over lemma:over upos:ADP [6]
        (deprel:det) form:the lemma:the upos:DET [7]
        (deprel:amod) form:lazy lemma:lazy upos:ADJ [8]
    (deprel:punct) form:. lemma:. upos:PUNCT [10]
```

To access the token corresponding to the current node in the tree, use `token`:

```python
>>> root.token
{
    'id': 5,
    'form': 'jumps',
    'lemma': 'jump',
    ...
}
```

To start walking down the children of the current node, use the `children` attribute:

```python
>>> children = root.children
>>> children
[
    TokenTree<token={id=4, form=fox}, children=[...]>,
    TokenTree<token={id=9, form=dog}, children=[...]>,
    TokenTree<token={id=10, form=.}, children=None>
]
```

Just like with `parse()`, if a sentence has metadata it is available in a property on the TokenTree root called `metadata`.

```python
>>> root.metadata
{'text': 'The quick brown fox jumps over the lazy dog.'}
```

If you ever want to get your CoNLL-U formatted text back (maybe after changing something?), use the `serialize()` method:

```python
>>> print(root.serialize())
# text = The quick brown fox jumps over the lazy dog.
1   The     the    DET    DT   Definite=Def|PronType=Art   4   det     _   _
2   quick   quick  ADJ    JJ   Degree=Pos                  4   amod    _   _
...
```

If you want to write it back to a file, you can use something like this:

```python
>>> from conllu import parse_tree
>>> sentences = parse_tree(data)
>>> 
>>> # Make some change to sentences here
>>> 
>>> with open('file-to-write-to', 'w') as f:
...     f.writelines([sentence.serialize() + "\n" for sentence in sentences])
```

## Customizing parsing to handle strange variations of CoNLL-U

Far from all CoNLL-U files found in the wild follow the CoNLL-U format specification. conllu tries to parse even files that are malformed according to the specification, but sometimes that doesn't work. For those situations you can change how conllu parses your files.

A normal CoNLL-U file consists of a specific set of fields (id, form, lemma, and so on). Let's walk through how to parse a custom format using the three options `fields`, `field_parsers`, and `metadata_parsers`. Here's the custom format we'll use.

```python
>>> data = """
... # tagset = TAG1|TAG2|TAG3|TAG4
... # sentence-123
... 1   My       TAG1|TAG2
... 2   custom   TAG3
... 3   format   TAG4
...
... """
```

Now, let's parse this with the default settings, and look specifically at the first token to see how it was parsed.

```python
>>> sentences = parse(data)
>>> sentences[0][0]
{'id': 1, 'form': 'My', 'lemma': 'TAG1|TAG2'}
```

The parser has assumed (incorrectly) that the third field must be the default `lemma` field and parsed it as such. Let's customize this so the parser gets the name right, by setting the `fields` parameter when calling `parse`.

```python
>>> sentences = parse(data, fields=["id", "form", "tag"])
>>> sentences[0][0]
{'id': 1, 'form': 'My', 'tag': 'TAG1|TAG2'}
```

The only difference is that you now get the correct field name back when parsing. Now let's say you want those two tags returned as a list instead of as a string. This can be done using the `field_parsers` argument.

```python
>>> split_func = lambda line, i: line[i].split("|")
>>> sentences = parse(data, fields=["id", "form", "tag"], field_parsers={"tag": split_func})
>>> sentences[0][0]
{'id': 1, 'form': 'My', 'tag': ['TAG1', 'TAG2']}
```

That's much better! `field_parsers` specifies a mapping from a field name to a function that can parse that field. In our case, we specify that the field with custom logic is `"tag"` and that the function to handle it is `split_func`. Each field_parser gets passed two parameters:

* `line`: The whole list of values from this line, split on whitespace. The reason you get the full line is so you can merge several tokens into one using a field_parser if you want.
* `i`: The current position in the line. Most often, you'll use `line[i]` to get the current value.

In our case, we return `line[i].split("|")`, which returns a list like we want.
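Because the whole split `line` is passed in, a field parser can also read neighboring columns. Here's a hedged sketch (the `combine_func` name and the form-prefixing behavior are made up, just to show the mechanism):

```python
>>> # Hypothetical parser: prefix each tag with the token's form (column 1 of the line).
>>> combine_func = lambda line, i: [line[1] + "/" + tag for tag in line[i].split("|")]
>>> sentences = parse(data, fields=["id", "form", "tag"], field_parsers={"tag": combine_func})
>>> sentences[0][0]
{'id': 1, 'form': 'My', 'tag': ['My/TAG1', 'My/TAG2']}
```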

Let's look at the metadata in this example.

```text
# tagset = TAG1|TAG2|TAG3|TAG4
# sentence-123
```

None of these comments are standard CoNLL-U metadata, but since the first line follows the key-value format of valid comments, conllu will parse it anyway:

```python
>>> sentences = parse(data)
>>> sentences[0].metadata
{'tagset': 'TAG1|TAG2|TAG3|TAG4'}
```

Let's return this as a list using the `metadata_parsers` parameter.

```python
>>> sentences = parse(data, metadata_parsers={"tagset": lambda key, value: (key, value.split("|"))})
>>> sentences[0].metadata
{'tagset': ['TAG1', 'TAG2', 'TAG3', 'TAG4']}
```

A metadata parser behaves similarly to a field parser, but since most comments you'll see are of the form "key = value", these values are parsed and cleaned first, and then sent to your custom metadata_parser. Here we just take the value, split it on "|", and return a list. And lo and behold, we get what we wanted!

Now, let's deal with the "sentence-123" comment. Specifying another metadata_parser won't work, because this is an ID that will be different for each sentence. Instead, let's use a special metadata parser called `__fallback__`.

```python
>>> sentences = parse(data, metadata_parsers={
...    "tagset": lambda key, value: (key, value.split("|")),
...    "__fallback__": lambda key, value: ("sentence-id", key)
... })
>>> sentences[0].metadata
{
    'tagset': ['TAG1', 'TAG2', 'TAG3', 'TAG4'],
    'sentence-id': 'sentence-123'
}
```

Just what we wanted! `__fallback__` gets called any time none of the other metadata_parsers match, and just like the others, it gets sent the key and value of the current line. In our case, the line contains no "=" to split on, so key will be "sentence-123" and value will be empty. We can return whatever we want here, but let's just say we want to call this field "sentence-id" so we return that as the key, and "sentence-123" as our value.

Finally, consider an even trickier case.

```python
>>> data = """
... # id=1-document_id=36:1047-span=1
... 1   My       TAG1|TAG2
... 2   custom   TAG3
... 3   format   TAG4
...
... """
```

This is actually three different comments, but somehow they are separated by "-" instead of appearing on their own lines. To handle this, we get to use the ability of a metadata_parser to return multiple matches from a single line.

```python
>>> sentences = parse(data, metadata_parsers={
...    "__fallback__": lambda key, value: [pair.split("=") for pair in (key + "=" + value).split("-")]
... })
>>> sentences[0].metadata
{
    'id': '1',
    'document_id': '36:1047',
    'span': '1'
}
```

Our fallback parser returns a **list** of matches, one per key-value pair we find. The `key + "=" + value` trick is needed since by default conllu assumes that this is a valid comment, so `key` is "id" and `value` is everything after the first "=", i.e. `1-document_id=36:1047-span=1` (note the missing "id=" at the beginning). We need to add it back before splitting on "-".

And that's it! Using these tricks you should be able to parse all the strange files you stumble into.

## Develop locally and run the tests

1. Make a fork of the repository to your own GitHub account.

2. Clone the repository locally on your computer:
    ```bash
    git clone git@github.com:YOURUSERNAME/conllu.git conllu
    cd conllu
    ```

3. Install the library used for running the tests:
    ```bash
    pip install tox
    ```

4. Now you can run the tests:
    ```bash
    tox
    ```
    This runs tox across all supported versions of Python, and also runs checks for code-coverage, syntax errors, and how imports are sorted.

4. (Alternative) If you just have one version of Python installed, and don't want to go through the hassle of installing multiple versions of Python (hint: install pyenv and pyenv-tox), **it's fine to run tox with just one version of Python**:

    ```bash
    tox -e py38
    ```

5. Make a pull request. Here's a [good guide on PRs from GitHub](https://help.github.com/articles/creating-a-pull-request-from-a-fork/).

Thanks for helping conllu become a better library!

            
