:Name: tokenizer
:Version: 3.4.3
:Summary: A tokenizer for Icelandic text
:Home page: https://github.com/mideind/Tokenizer
:Author: Miðeind ehf.
:License: MIT
:Keywords: nlp, tokenizer, icelandic
:Upload time: 2023-08-11 15:09:13

-----------------------------------------
Tokenizer: A tokenizer for Icelandic text
-----------------------------------------

.. image:: https://github.com/mideind/Tokenizer/workflows/Python%20package/badge.svg
   :target: https://github.com/mideind/Tokenizer

Overview
--------

Tokenization is a necessary first step in many natural language processing
tasks, such as word counting, parsing, spell checking, corpus generation, and
statistical analysis of text.

**Tokenizer** is a compact pure-Python (>= 3.8) executable
program and module for tokenizing Icelandic text. It converts input text to
streams of *tokens*, where each token is a separate word, punctuation sign,
number/amount, date, e-mail, URL/URI, etc. It also segments the token stream
into sentences, considering corner cases such as abbreviations and dates in
the middle of sentences.

The package contains a dictionary of common Icelandic abbreviations,
in the file ``src/tokenizer/Abbrev.conf``.

Tokenizer is an independent spinoff from the `Greynir project <https://greynir.is>`_
(GitHub repository `here <https://github.com/mideind/Greynir>`_), by the same authors.
The `Greynir natural language parser for Icelandic <https://github.com/mideind/GreynirPackage>`_
uses Tokenizer on its input.

Note that Tokenizer is licensed under the *MIT* license
while Greynir is licensed under *GPLv3*.


Deep vs. shallow tokenization
-----------------------------

Tokenizer can do both *deep* and *shallow* tokenization.

*Shallow* tokenization simply returns each sentence as a string (or as a line
of text in an output file), where the individual tokens are separated
by spaces.

*Deep* tokenization returns token objects that have been annotated with
the token type and further information extracted from the token, for example
a *(year, month, day)* tuple in the case of date tokens.

In shallow tokenization, tokens are in most cases kept intact, although
consecutive white space is always coalesced. The input strings
``"800 MW"``, ``"21. janúar"`` and ``"800 7000"`` thus become
two tokens each, output with a single space between them.

In deep tokenization, the same strings are represented by single token objects,
of type ``TOK.MEASUREMENT``, ``TOK.DATEREL`` and ``TOK.TELNO``, respectively.
The text associated with a single token object may contain spaces,
although consecutive whitespace is always coalesced into a single space ``" "``.
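
For illustration, here is a minimal sketch of the two modes applied to a
sentence containing one of the strings above (the example sentence and the
comments are illustrative assumptions; the functions used are documented
further below):

.. code-block:: python

    from tokenizer import tokenize, split_into_sentences, TOK

    s = "Virkjunin framleiðir 800 MW."

    # Shallow: each sentence is one string of space-separated tokens,
    # so "800" and "MW" remain two separate tokens
    for sentence in split_into_sentences(s):
        print(sentence)

    # Deep: "800 MW" becomes a single TOK.MEASUREMENT token object
    for token in tokenize(s):
        if token.kind == TOK.MEASUREMENT:
            print(token.txt, token.val)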

By default, the command line tool performs shallow tokenization. If you
want deep tokenization with the command line tool, use the ``--json`` or
``--csv`` switches.

From Python code, call ``split_into_sentences()`` for shallow tokenization,
or ``tokenize()`` for deep tokenization. These functions are documented with
examples below.


Installation
------------

To install:

.. code-block:: console

    $ pip install tokenizer


Command line tool
-----------------

After installation, the tokenizer can be invoked directly from
the command line:

.. code-block:: console

    $ tokenize input.txt output.txt

Input and output files are in UTF-8 encoding. If the files are not
given explicitly, ``stdin`` and ``stdout`` are used for input and output,
respectively.

Empty lines in the input are treated as hard sentence boundaries.

By default, the output consists of one sentence per line, where each
line ends with a single newline character (ASCII LF, ``chr(10)``, ``"\n"``).
Within each line, tokens are separated by spaces.

The following (mutually exclusive) options can be specified
on the command line:

+-------------------+---------------------------------------------------+
| | ``--csv``       | Deep tokenization. Output token objects in CSV    |
|                   | format, one per line. Sentences are separated by  |
|                   | lines containing ``0,"",""``                      |
+-------------------+---------------------------------------------------+
| | ``--json``      | Deep tokenization. Output token objects in JSON   |
|                   | format, one per line.                             |
+-------------------+---------------------------------------------------+

Other options can be specified on the command line:

+-----------------------------------+---------------------------------------------------+
| | ``-n``                          | Normalize punctuation, causing e.g. quotes to be  |
| |                                 | output in Icelandic form and hyphens to be        |
| | ``--normalize``                 | regularized. This option is only applicable to    |
|                                   | shallow tokenization.                             |
+-----------------------------------+---------------------------------------------------+
| | ``-s``                          | Input contains strictly one sentence per line,    |
| |                                 | i.e. every newline is a sentence boundary.        |
| | ``--one_sent_per_line``         |                                                   |
+-----------------------------------+---------------------------------------------------+
| | ``-o``                          | Output original token text, i.e. bypass shallow   |
| |                                 | tokenization. This effectively runs the tokenizer |
| | ``--original``                  | as a sentence splitter only.                      |
+-----------------------------------+---------------------------------------------------+
| | ``-m``                          | Degree sign in tokens denoting temperature        |
| | ``--convert_measurements``      | normalized (200° C -> 200 °C)                     |
+-----------------------------------+---------------------------------------------------+
| | ``-p``                          | Numbers combined into one token with the          |
| | ``--coalesce_percent``          | following token denoting percentage word forms    |
|                                   | (*prósent*, *prósentustig*, *hundraðshlutar*)     |
+-----------------------------------+---------------------------------------------------+
| | ``-g``                          | Do not replace composite glyphs using Unicode     |
| | ``--keep_composite_glyphs``     | COMBINING codes with their accented/umlaut        |
|                                   | counterparts                                      |
+-----------------------------------+---------------------------------------------------+
| | ``-e``                          | HTML escape codes replaced by their meaning,      |
| | ``--replace_html_escapes``      | such as ``&aacute;`` -> ``á``                     |
+-----------------------------------+---------------------------------------------------+
| | ``-c``                          | English-style decimal points and thousands        |
| | ``--convert_numbers``           | separators in numbers changed to Icelandic style  |
+-----------------------------------+---------------------------------------------------+
| | ``-k N``                        | Defines how kludgy ordinals are handled:          |
| | ``--handle_kludgy_ordinals N``  | 0: Return the original mixed word form            |
|                                   | 1: Return kludgy ordinals as pure word forms      |
|                                   | 2: Return kludgy ordinals as pure numbers         |
+-----------------------------------+---------------------------------------------------+
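
For instance, a hypothetical invocation that combines measurement
normalization and HTML escape replacement (output not shown):

.. code-block:: console

    $ echo "Hitinn n&aacute;ði 200° C" | tokenize -m -e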


Type ``tokenize -h`` or ``tokenize --help`` to get a short help message.

Example
=======

.. code-block:: console

    $ echo "3.janúar sl. keypti   ég 64kWst rafbíl. Hann kostaði € 30.000." | tokenize
    3. janúar sl. keypti ég 64kWst rafbíl .
    Hann kostaði €30.000 .

    $ echo "3.janúar sl. keypti   ég 64kWst rafbíl. Hann kostaði € 30.000." | tokenize --csv
    19,"3. janúar","0|1|3"
    6,"sl.","síðastliðinn"
    6,"keypti",""
    6,"ég",""
    22,"64kWst","J|230400000.0"
    6,"rafbíl",""
    1,".","."
    0,"",""
    6,"Hann",""
    6,"kostaði",""
    13,"€30.000","30000|EUR"
    1,".","."
    0,"",""

    $ echo "3.janúar sl. keypti   ég 64kWst rafbíl. Hann kostaði € 30.000." | tokenize --json
    {"k":"BEGIN SENT"}
    {"k":"DATEREL","t":"3. janúar","v":[0,1,3]}
    {"k":"WORD","t":"sl.","v":["síðastliðinn"]}
    {"k":"WORD","t":"keypti"}
    {"k":"WORD","t":"ég"}
    {"k":"MEASUREMENT","t":"64kWst","v":["J",230400000.0]}
    {"k":"WORD","t":"rafbíl"}
    {"k":"PUNCTUATION","t":".","v":"."}
    {"k":"END SENT"}
    {"k":"BEGIN SENT"}
    {"k":"WORD","t":"Hann"}
    {"k":"WORD","t":"kostaði"}
    {"k":"AMOUNT","t":"€30.000","v":[30000,"EUR"]}
    {"k":"PUNCTUATION","t":".","v":"."}
    {"k":"END SENT"}

Python module
-------------

Shallow tokenization example
============================

An example of shallow tokenization from Python code goes something like this:

.. code-block:: python

    from tokenizer import split_into_sentences

    # A string to be tokenized, containing two sentences
    s = "3.janúar sl. keypti   ég 64kWst rafbíl. Hann kostaði € 30.000."

    # Obtain a generator of sentence strings
    g = split_into_sentences(s)

    # Loop through the sentences
    for sentence in g:

        # Obtain the individual token strings
        tokens = sentence.split()

        # Print the tokens, comma-separated
        print("|".join(tokens))

The program outputs::

    3.|janúar|sl.|keypti|ég|64kWst|rafbíl|.
    Hann|kostaði|€30.000|.

Deep tokenization example
=========================

To do deep tokenization from within Python code:

.. code-block:: python

    from tokenizer import tokenize, TOK

    text = ("Málinu var vísað til stjórnskipunar- og eftirlitsnefndar "
        "skv. 3. gr. XVII. kafla laga nr. 10/2007 þann 3. janúar 2010.")

    for token in tokenize(text):

        print("{0}: '{1}' {2}".format(
            TOK.descr[token.kind],
            token.txt or "-",
            token.val or ""))

Output::

    BEGIN SENT: '-' (0, None)
    WORD: 'Málinu'
    WORD: 'var'
    WORD: 'vísað'
    WORD: 'til'
    WORD: 'stjórnskipunar- og eftirlitsnefndar'
    WORD: 'skv.' [('samkvæmt', 0, 'fs', 'skst', 'skv.', '-')]
    ORDINAL: '3.' 3
    WORD: 'gr.' [('grein', 0, 'kvk', 'skst', 'gr.', '-')]
    ORDINAL: 'XVII.' 17
    WORD: 'kafla'
    WORD: 'laga'
    WORD: 'nr.' [('númer', 0, 'hk', 'skst', 'nr.', '-')]
    NUMBER: '10' (10, None, None)
    PUNCTUATION: '/' (4, '/')
    YEAR: '2007' 2007
    WORD: 'þann'
    DATEABS: '3. janúar 2010' (2010, 1, 3)
    PUNCTUATION: '.' (3, '.')
    END SENT: '-'

Note the following:

- Sentences are delimited by ``TOK.S_BEGIN`` and ``TOK.S_END`` tokens.
- Composite words, such as *stjórnskipunar- og eftirlitsnefndar*,
  are coalesced into one token.
- Well-known abbreviations are recognized and their full expansion
  is available in the ``token.val`` field.
- Ordinal numbers (*3., XVII.*) are recognized and their value (*3, 17*)
  is available in the ``token.val`` field.
- Dates, years and times, both absolute and relative, are recognized and
  the respective year, month, day, hour, minute and second
  values are included as a tuple in ``token.val``.
- Numbers, both integer and real, are recognized and their value
  is available in the ``token.val`` field.
- Further details of how Tokenizer processes text can be inferred from the
  `test module <https://github.com/mideind/Tokenizer/blob/master/test/test_tokenizer.py>`_
  in the project's `GitHub repository <https://github.com/mideind/Tokenizer>`_.


The ``tokenize()`` function
---------------------------

To deep-tokenize a text string, call ``tokenizer.tokenize(text, **options)``.
The ``text`` parameter can be a string, or an iterable that yields strings
(such as a text file object).

The function returns a Python *generator* of token objects.
Each token object is a simple ``namedtuple`` with three
fields: ``(kind, txt, val)`` (further documented below).

The ``tokenizer.tokenize()`` function is typically called in a ``for`` loop:

.. code-block:: python

    import tokenizer
    for token in tokenizer.tokenize(mystring):
        kind, txt, val = token
        if kind == tokenizer.TOK.WORD:
            # Do something with word tokens
            pass
        else:
            # Do something else
            pass

Alternatively, create a token list from the returned generator::

    token_list = list(tokenizer.tokenize(mystring))

The ``split_into_sentences()`` function
---------------------------------------

To shallow-tokenize a text string, call
``tokenizer.split_into_sentences(text_or_gen, **options)``.
The ``text_or_gen`` parameter can be a string, or an iterable that yields
strings (such as a text file object).

This function returns a Python *generator* of strings, yielding a string
for each sentence in the input. Within a sentence, the tokens are
separated by spaces.

You can pass the option ``normalize=True`` to the function if you want
the normalized form of punctuation tokens. Normalization outputs
Icelandic single and double quotes („these“) instead of English-style
ones ("these"), converts the three-dot ellipsis ... to the single-character
ellipsis …, and turns en-dashes – and em-dashes — into regular hyphens.
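
A minimal sketch contrasting the default and the normalized output
(the input string is the same one used in the examples further below):

.. code-block:: python

    from tokenizer import split_into_sentences

    s = "Hann sagði: \"Þú ert ágæt!\"."

    # Default: tokens keep their original form
    for sentence in split_into_sentences(s):
        print(sentence)

    # normalize=True: English-style quotes are output as Icelandic ones
    for sentence in split_into_sentences(s, normalize=True):
        print(sentence)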

The ``tokenizer.split_into_sentences()`` function is typically called
in a ``for`` loop:

.. code-block:: python

    import tokenizer
    with open("example.txt", "r", encoding="utf-8") as f:
        # You can pass a file object directly to split_into_sentences()
        for sentence in tokenizer.split_into_sentences(f):
            # sentence is a string of space-separated tokens
            tokens = sentence.split()
            # Now, tokens is a list of strings, one for each token
            for t in tokens:
                # Do something with the token t
                pass


The ``correct_spaces()`` function
---------------------------------

The ``tokenizer.correct_spaces(text)`` function splits the input string
into tokens and re-joins them with correct whitespace around punctuation
tokens, returning the resulting string. Example::

    >>> import tokenizer
    >>> tokenizer.correct_spaces(
    ... "Frétt \n  dagsins:Jón\t ,Friðgeir og Páll ! 100  /  2  =   50"
    ... )
    'Frétt dagsins: Jón, Friðgeir og Páll! 100/2 = 50'


The ``detokenize()`` function
---------------------------------

The ``tokenizer.detokenize(tokens, normalize=False)`` function
takes an iterable of token objects and returns a corresponding, correctly
spaced text string, composed from the tokens' text. If the
``normalize`` parameter is set to ``True``,
the function uses the normalized form of any punctuation tokens, such
as proper Icelandic single and double quotes instead of English-type
quotes. Example::

    >>> import tokenizer
    >>> toklist = list(tokenizer.tokenize("Hann sagði: „Þú ert ágæt!“."))
    >>> tokenizer.detokenize(toklist, normalize=True)
    'Hann sagði: „Þú ert ágæt!“.'


The ``normalized_text()`` function
----------------------------------

The ``tokenizer.normalized_text(token)`` function
returns the normalized text for a token. This means that the original
token text is returned except for certain punctuation tokens, where a
normalized form is returned instead. Specifically, English-type quotes
are converted to Icelandic ones, and en- and em-dashes are converted
to regular hyphens.
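
A short usage sketch, reusing the example string from the sections below:

.. code-block:: python

    import tokenizer

    toklist = list(tokenizer.tokenize("Hann sagði: \"Þú ert ágæt!\"."))

    # Print each token's original text next to its normalized form
    for t in toklist:
        if t.txt:  # skip sentence-boundary tokens, which have no text
            print(t.txt, "->", tokenizer.normalized_text(t))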


The ``text_from_tokens()`` function
-----------------------------------

The ``tokenizer.text_from_tokens(tokens)`` function
returns a concatenation of the text contents of the given token list,
with spaces between tokens. Example::

    >>> import tokenizer
    >>> toklist = list(tokenizer.tokenize("Hann sagði: \"Þú ert ágæt!\"."))
    >>> tokenizer.text_from_tokens(toklist)
    'Hann sagði : " Þú ert ágæt ! " .'


The ``normalized_text_from_tokens()`` function
----------------------------------------------

The ``tokenizer.normalized_text_from_tokens(tokens)`` function
returns a concatenation of the normalized text contents of the given
token list, with spaces between tokens. Example (note the double quotes)::

    >>> import tokenizer
    >>> toklist = list(tokenizer.tokenize("Hann sagði: \"Þú ert ágæt!\"."))
    >>> tokenizer.normalized_text_from_tokens(toklist)
    'Hann sagði : „ Þú ert ágæt ! “ .'


Tokenization options
--------------------

You can optionally pass one or more of the following options as
keyword parameters to the ``tokenize()`` and ``split_into_sentences()``
functions:


* ``convert_numbers=[bool]``

  Setting this option to ``True`` causes the tokenizer to convert numbers
  and amounts with
  English-style decimal points (``.``) and thousands separators (``,``)
  to Icelandic format, where the decimal separator is a comma (``,``)
  and the thousands separator is a period (``.``). ``$1,234.56`` is thus
  converted to a token whose text is ``$1.234,56``.

  The default value for the ``convert_numbers`` option is ``False``.

  Note that in versions of Tokenizer prior to 1.4, ``convert_numbers``
  was ``True``.


* ``convert_measurements=[bool]``

  Setting this option to ``True`` causes the tokenizer to convert
  degrees Kelvin, Celsius and Fahrenheit to a regularized form, i.e.
  ``200° C`` becomes ``200 °C``.

  The default value for the ``convert_measurements`` option is ``False``.


* ``replace_composite_glyphs=[bool]``

  Setting this option to ``False`` disables the automatic replacement
  of composite Unicode glyphs with their corresponding Icelandic characters.
  By default, the tokenizer combines vowels with the Unicode
  COMBINING ACUTE ACCENT and COMBINING DIAERESIS glyphs to form single
  character code points, such as 'á' and 'ö'.

  The default value for the ``replace_composite_glyphs`` option is ``True``.


* ``replace_html_escapes=[bool]``

  Setting this option to ``True`` causes the tokenizer to replace common
  HTML escaped character codes, such as ``&aacute;`` with the character being
  escaped, such as ``á``. Note that ``&shy;`` (soft hyphen) is replaced by
  an empty string, and ``&nbsp;`` is replaced by a normal space.
  The ligatures ``&filig;`` and ``&fllig;`` are replaced by ``fi`` and ``fl``,
  respectively.

  The default value for the ``replace_html_escapes`` option is ``False``.


* ``handle_kludgy_ordinals=[value]``

  This option controls the way Tokenizer handles 'kludgy' ordinals, such as
  *1sti*, *4ðu*, or *2ja*. By default, such ordinals are returned unmodified
  ('passed through') as word tokens (``TOK.WORD``).
  However, this can be modified as follows:

  * ``tokenizer.KLUDGY_ORDINALS_MODIFY``: Kludgy ordinals are corrected
    to become 'proper' word tokens, i.e. *1sti* becomes *fyrsti* and
    *2ja* becomes *tveggja*.

  * ``tokenizer.KLUDGY_ORDINALS_TRANSLATE``: Kludgy ordinals that represent
    proper ordinal numbers are translated to ordinal tokens (``TOK.ORDINAL``),
    with their original text and their ordinal value. *1sti* thus
    becomes a ``TOK.ORDINAL`` token with a value of 1, and *3ja* becomes
    a ``TOK.ORDINAL`` with a value of 3.

  * ``tokenizer.KLUDGY_ORDINALS_PASS_THROUGH`` is the default value of
    the option. It causes kludgy ordinals to be returned unmodified as
    word tokens.

  Note that versions of Tokenizer prior to 1.4 behaved as if
  ``handle_kludgy_ordinals`` were set to
  ``tokenizer.KLUDGY_ORDINALS_TRANSLATE``.
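
As a combined illustration, here is a short sketch that passes several of
these options to ``tokenize()`` (the input string is an illustrative
assumption):

.. code-block:: python

    from tokenizer import tokenize, KLUDGY_ORDINALS_MODIFY

    text = "Ég sá 2ja sæta bíl sem kostaði $1,234.56."

    for token in tokenize(
        text,
        convert_numbers=True,                           # $1,234.56 -> $1.234,56
        handle_kludgy_ordinals=KLUDGY_ORDINALS_MODIFY,  # 2ja -> tveggja
    ):
        print(token.kind, token.txt, token.val)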


The token object
----------------

Each token is an instance of the class ``Tok`` that has three main properties:
``kind``, ``txt`` and ``val``.


The ``kind`` property
=====================

The ``kind`` property contains one of the following integer constants,
defined within the ``TOK`` class:

+---------------+---------+---------------------+---------------------------+
| Constant      |  Value  | Explanation         | Examples                  |
+===============+=========+=====================+===========================+
| PUNCTUATION   |    1    | Punctuation         | . ! ; % &                 |
+---------------+---------+---------------------+---------------------------+
| TIME          |    2    | Time (h, m, s)      | | 11:35:40                |
|               |         |                     | | kl. 7:05                |
|               |         |                     | | klukkan 23:35           |
+---------------+---------+---------------------+---------------------------+
| DATE *        |    3    | Date (y, m, d)      | [Unused, see DATEABS and  |
|               |         |                     | DATEREL]                  |
+---------------+---------+---------------------+---------------------------+
| YEAR          |    4    | Year                | | árið 874 e.Kr.          |
|               |         |                     | | 1965                    |
|               |         |                     | | 44 f.Kr.                |
+---------------+---------+---------------------+---------------------------+
| NUMBER        |    5    | Number              | | 100                     |
|               |         |                     | | 1.965                   |
|               |         |                     | | 1.965,34                |
|               |         |                     | | 1,965.34                |
|               |         |                     | | 2⅞                      |
+---------------+---------+---------------------+---------------------------+
| WORD          |    6    | Word                | | kattaeftirlit           |
|               |         |                     | | hunda- og kattaeftirlit |
+---------------+---------+---------------------+---------------------------+
| TELNO         |    7    | Telephone number    | | 5254764                 |
|               |         |                     | | 699-4244                |
|               |         |                     | | 410 4000                |
+---------------+---------+---------------------+---------------------------+
| PERCENT       |    8    | Percentage          | 78%                       |
+---------------+---------+---------------------+---------------------------+
| URL           |    9    | URL                 | | https://greynir.is      |
|               |         |                     | | http://tiny.cc/28695y   |
+---------------+---------+---------------------+---------------------------+
| ORDINAL       |    10   | Ordinal number      | | 30.                     |
|               |         |                     | | XVIII.                  |
+---------------+---------+---------------------+---------------------------+
| TIMESTAMP *   |    11   | Timestamp           | [Unused, see              |
|               |         |                     | TIMESTAMPABS and          |
|               |         |                     | TIMESTAMPREL]             |
+---------------+---------+---------------------+---------------------------+
| CURRENCY *    |    12   | Currency name       | [Unused]                  |
+---------------+---------+---------------------+---------------------------+
| AMOUNT        |    13   | Amount              | | €2.345,67               |
|               |         |                     | | 750 þús.kr.             |
|               |         |                     | | 2,7 mrð. USD            |
|               |         |                     | | kr. 9.900               |
|               |         |                     | | EUR 200                 |
+---------------+---------+---------------------+---------------------------+
| PERSON *      |    14   | Person name         | [Unused]                  |
+---------------+---------+---------------------+---------------------------+
| EMAIL         |    15   | E-mail              | ``fake@news.is``          |
+---------------+---------+---------------------+---------------------------+
| ENTITY *      |    16   | Named entity        | [Unused]                  |
+---------------+---------+---------------------+---------------------------+
| UNKNOWN       |    17   | Unknown token       |                           |
+---------------+---------+---------------------+---------------------------+
| DATEABS       |    18   | Absolute date       | | 30. desember 1965       |
|               |         |                     | | 30/12/1965              |
|               |         |                     | | 1965-12-30              |
|               |         |                     | | 1965/12/30              |
+---------------+---------+---------------------+---------------------------+
| DATEREL       |    19   | Relative date       | | 15. mars                |
|               |         |                     | | 15/3                    |
|               |         |                     | | 15.3.                   |
|               |         |                     | | mars 1911               |
+---------------+---------+---------------------+---------------------------+
| TIMESTAMPABS  |    20   | Absolute timestamp  | | 30. desember 1965 11:34 |
|               |         |                     | | 1965-12-30 kl. 13:00    |
+---------------+---------+---------------------+---------------------------+
| TIMESTAMPREL  |    21   | Relative timestamp  | | 30. desember kl. 13:00  |
+---------------+---------+---------------------+---------------------------+
| MEASUREMENT   |    22   | Value with a        | | 690 MW                  |
|               |         | measurement unit    | | 1.010 hPa               |
|               |         |                     | | 220 m²                  |
|               |         |                     | | 80° C                   |
+---------------+---------+---------------------+---------------------------+
| NUMWLETTER    |    23   | Number followed by  | | 14a                     |
|               |         | a single letter     | | 7B                      |
+---------------+---------+---------------------+---------------------------+
| DOMAIN        |    24   | Domain name         | | greynir.is              |
|               |         |                     | | Reddit.com              |
|               |         |                     | | www.wikipedia.org       |
+---------------+---------+---------------------+---------------------------+
| HASHTAG       |    25   | Hashtag             | | #MeToo                  |
|               |         |                     | | #12stig                 |
+---------------+---------+---------------------+---------------------------+
| MOLECULE      |    26   | Molecular formula   | | H2SO4                   |
|               |         |                     | | CO2                     |
+---------------+---------+---------------------+---------------------------+
| SSN           |    27   | Social security     | | 591213-1480             |
|               |         | number (*kennitala*)|                           |
+---------------+---------+---------------------+---------------------------+
| USERNAME      |    28   | Twitter user handle | | @username_123           |
|               |         |                     |                           |
+---------------+---------+---------------------+---------------------------+
| SERIALNUMBER  |    29   | Serial number       | | 394-5388                |
|               |         |                     | | 12-345-6789             |
+---------------+---------+---------------------+---------------------------+
| COMPANY *     |    30   | Company name        | [Unused]                  |
+---------------+---------+---------------------+---------------------------+
| S_BEGIN       |  11001  | Start of sentence   |                           |
+---------------+---------+---------------------+---------------------------+
| S_END         |  11002  | End of sentence     |                           |
+---------------+---------+---------------------+---------------------------+

(*) The token types marked with an asterisk are reserved for the Greynir package
and not currently returned by the tokenizer.

To obtain a descriptive text for a token kind, use
``TOK.descr[token.kind]`` (see example above).


The ``txt`` property
====================

The ``txt`` property contains the original source text for the token,
with the following exceptions:

* All contiguous whitespace (spaces, tabs, newlines) is coalesced
  into single spaces (``" "``) within the ``txt`` string. A date
  token that is parsed from a source text of ``"29.  \n   janúar"``
  thus has a ``txt`` of ``"29. janúar"``.

* Tokenizer automatically merges Unicode ``COMBINING ACUTE ACCENT``
  (code point 769) and ``COMBINING DIAERESIS`` (code point 776)
  with vowels to form single code points for the Icelandic letters
  á, é, í, ó, ú, ý and ö, in both lower and upper case. (This behavior
  can be disabled; see the ``replace_composite_glyphs`` option described
  above.)

* If the appropriate options are specified (see above), it converts
  kludgy ordinals (*3ja*) to proper ones (*þriðja*), and English-style
  thousand and decimal separators to Icelandic ones
  (*10,345.67* becomes *10.345,67*).

* If the ``replace_html_escapes`` option is set, Tokenizer replaces
  HTML-style escapes (``&aacute;``) with the characters
  being escaped (``á``).
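
A small sketch illustrating the whitespace coalescing described above
(the wrapper sentence is an illustrative assumption):

.. code-block:: python

    from tokenizer import tokenize

    # The date token spanning "29.  \n   janúar" gets a coalesced txt
    for t in tokenize("Fundurinn var 29.  \n   janúar."):
        if t.txt:
            print(repr(t.txt))   # the date prints as '29. janúar'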


The ``val`` property
====================

The ``val`` property contains auxiliary information, corresponding to
the token kind, as follows:

- For ``TOK.PUNCTUATION``, the ``val`` field contains a tuple with
  two items: ``(whitespace, normalform)``. The first item (``token.val[0]``)
  specifies the whitespace normally found around the symbol in question,
  as an integer::

    TP_LEFT = 1   # Whitespace to the left
    TP_CENTER = 2 # Whitespace to the left and right
    TP_RIGHT = 3  # Whitespace to the right
    TP_NONE = 4   # No whitespace

  The second item (``token.val[1]``) contains a normalized representation of the
  punctuation. For instance, various forms of single and double
  quotes are represented as Icelandic ones (i.e. „these“ or ‚these‘) in
  normalized form, and an ellipsis ("...") is represented as the single
  character "…".

- For ``TOK.TIME``, the ``val`` field contains an
  ``(hour, minute, second)`` tuple.

- For ``TOK.DATEABS``, the ``val`` field contains a
  ``(year, month, day)`` tuple (all 1-based).

- For ``TOK.DATEREL``, the ``val`` field contains a
  ``(year, month, day)`` tuple (all 1-based),
  except that at least one of the tuple fields is missing and set to 0.
  Example: *3. júní* becomes ``TOK.DATEREL`` with the fields ``(0, 6, 3)``
  as the year is missing.

- For ``TOK.YEAR``, the ``val`` field contains the year as an integer.
  A negative number indicates that the year is BCE (*fyrir Krist*),
  specified with the suffix *f.Kr.* (e.g. *árið 33 f.Kr.*).

- For ``TOK.NUMBER``, the ``val`` field contains a tuple
  ``(number, None, None)``.
  (The two empty fields are included for compatibility with Greynir.)

- For ``TOK.WORD``, the ``val`` field contains the full expansion
  of an abbreviation, as a list containing a single tuple, or ``None``
  if the word is not abbreviated.

- For ``TOK.PERCENT``, the ``val`` field contains a tuple
  of ``(percentage, None, None)``.

- For ``TOK.ORDINAL``, the ``val`` field contains the ordinal value
  as an integer. The original ordinal may be a decimal number
  or a Roman numeral.

- For ``TOK.TIMESTAMP``, the ``val`` field contains
  a ``(year, month, day, hour, minute, second)`` tuple.

- For ``TOK.AMOUNT``, the ``val`` field contains
  an ``(amount, currency, None, None)`` tuple. The amount is a float, and
  the currency is an ISO currency code, e.g. *USD* for dollars ($ sign),
  *EUR* for euros (€ sign) or *ISK* for Icelandic króna
  (*kr.* abbreviation). (The two empty fields are included for
  compatibility with Greynir.)

- For ``TOK.MEASUREMENT``, the ``val`` field contains a ``(unit, value)``
  tuple, where ``unit`` is a base SI unit (such as ``g``, ``m``,
  ``m²``, ``s``, ``W``, ``Hz``, ``K`` for temperature in Kelvin).

- For ``TOK.TELNO``, the ``val`` field contains a tuple: ``(number, cc)``
  where the first item is the phone number
  in a normalized ``NNN-NNNN`` format, i.e. always including a hyphen,
  and the second item is the country code, optionally prefixed by ``+``.
  The country code defaults to ``354`` (Iceland).
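
The following sketch shows how ``val`` can be inspected for a few of these
token kinds (the input sentence is an illustrative assumption):

.. code-block:: python

    from tokenizer import tokenize, TOK

    text = "Fundurinn hófst kl. 14:30 þann 3. janúar 2010 og 78% mættu."

    for t in tokenize(text):
        if t.kind == TOK.TIME:
            hour, minute, second = t.val
            print("time:", hour, minute, second)
        elif t.kind == TOK.DATEABS:
            year, month, day = t.val
            print("date:", year, month, day)
        elif t.kind == TOK.PERCENT:
            percentage, _, _ = t.val
            print("percent:", percentage)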


Abbreviations
-------------

Abbreviations recognized by Tokenizer are defined in the ``Abbrev.conf``
file, found in the ``src/tokenizer/`` directory. This is a text file with
abbreviations, their definitions and explanatory comments.

When an abbreviation is encountered, it is recognized as a word token
(i.e. having its ``kind`` field equal to ``TOK.WORD``).
Its expansion(s) are included in the token's
``val`` field as a list containing tuples of the format
``(ordmynd, utg, ordfl, fl, stofn, beyging)``.
An example is *o.s.frv.*, which results in a ``val`` field equal to
``[('og svo framvegis', 0, 'ao', 'frasi', 'o.s.frv.', '-')]``.

The tuple format is designed to be compatible with the
*Database of Icelandic Morphology* (*DIM*),
*Beygingarlýsing íslensks nútímamáls*, i.e. the so-called *Sigrúnarsnið*.
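
A brief sketch of retrieving an abbreviation expansion from a word token,
using the *o.s.frv.* example above (the surrounding sentence is an
illustrative assumption):

.. code-block:: python

    from tokenizer import tokenize, TOK

    for t in tokenize("Þar voru bækur, blöð o.s.frv. á borðinu."):
        if t.kind == TOK.WORD and t.val:
            # Each meaning is an (ordmynd, utg, ordfl, fl, stofn, beyging) tuple
            for meaning in t.val:
                print(t.txt, "->", meaning[0])   # "o.s.frv. -> og svo framvegis"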


Development installation
------------------------

To install Tokenizer in development mode, where you can easily
modify the source files (assuming you have ``git`` available):

.. code-block:: console

    $ git clone https://github.com/mideind/Tokenizer
    $ cd Tokenizer
    $ # [ Activate your virtualenv here, if you have one ]
    $ pip install -e .


Test suite
----------

Tokenizer comes with a large test suite.
The file ``test/test_tokenizer.py`` contains built-in tests that
run under ``pytest``.

To run the built-in tests, install `pytest <https://docs.pytest.org/en/latest/>`_,
``cd`` to your ``Tokenizer`` subdirectory (and optionally
activate your virtualenv), then run:

.. code-block:: console

    $ python -m pytest

The file ``test/toktest_large.txt`` contains a test set of 13,075 lines.
The lines test sentence detection, token detection and token classification.
For analysis, ``test/toktest_large_gold_perfect.txt`` contains
the expected output of a perfect shallow tokenization, and
``test/toktest_large_gold_acceptable.txt`` contains the current output of the
shallow tokenization.

The file ``test/Overview.txt`` (only in Icelandic) contains a description
of the test set, including line numbers for each part in both
``test/toktest_large.txt`` and ``test/toktest_large_gold_acceptable.txt``,
and a tag describing what is being tested in each part.

It also contains a description of the perfect shallow tokenization for each
part, the acceptable tokenization, and the current behaviour.
As such, the description is an analysis of which edge cases the tokenizer
can handle and which it cannot.

To test the tokenizer on the large test set, run the following on the
command line:

.. code-block:: console

    $ tokenize test/toktest_large.txt test/toktest_large_out.txt

To compare it to the acceptable behaviour:

.. code-block:: console

    $ diff test/toktest_large_out.txt test/toktest_large_gold_acceptable.txt > diff.txt

The file ``test/toktest_normal.txt`` contains a running text from recent
news articles, containing no edge cases. The gold standard for that file
can be found in the file ``test/toktest_normal_gold_expected.txt``.


Changelog
---------

* Version 3.4.3: Various minor fixes. Now requires Python 3.8 or later.
* Version 3.4.2: Abbreviations and phrases added, ``META_BEGIN`` token added.
* Version 3.4.1: Improved performance on long input chunks.
* Version 3.4.0: Improved handling and normalization of punctuation.
* Version 3.3.2: Internal refactoring; bug fixes in paragraph handling.
* Version 3.3.1: Fixed bug where opening quotes at the start of paragraphs
  were sometimes incorrectly recognized and normalized.
* Version 3.2.0: Numbers and amounts that consist of word tokens only ('sex hundruð')
  are now returned as the original ``TOK.WORD`` tokens ('sex' and 'hundruð'), not as
  single coalesced ``TOK.NUMBER`` / ``TOK.AMOUNT`` / etc. tokens.
* Version 3.1.2: Changed paragraph markers to ``[[`` and ``]]`` (removing spaces).
* Version 3.1.1: Minor fixes; added ``Tok.from_token()``.
* Version 3.1.0: Added ``-o`` switch to the ``tokenize`` command to return original
  token text, enabling the tokenizer to run as a sentence splitter only.
* Version 3.0.0: Added tracking of character offsets for tokens within the
  original source text. Added full type annotations. Dropped Python 2.7 support.
* Version 2.5.0: Added arguments for all tokenizer options to the
  command-line tool. Type annotations enhanced.
* Version 2.4.0: Fixed bug where certain well-known word forms (*fá*, *fær*, *mín*, *sá*...)
  were being interpreted as (wrong) abbreviations. Also fixed bug where certain
  abbreviations were being recognized even in uppercase and at the end
  of a sentence, for instance *Örn.*
* Version 2.3.1: Various bug fixes; fixed type annotations for Python 2.7;
  the token kind ``NUMBER WITH LETTER`` is now ``NUMWLETTER``.
* Version 2.3.0: Added the ``replace_html_escapes`` option to
  the ``tokenize()`` function.
* Version 2.2.0: Fixed ``correct_spaces()`` to handle compounds such as
  *Atvinnu-, nýsköpunar- og ferðamálaráðuneytið* and
  *bensínstöðvar, -dælur og -tankar*.
* Version 2.1.0: Changed handling of periods at end of sentences if they are
  a part of an abbreviation. Now, the period is kept attached to the abbreviation,
  not split off into a separate period token, as before.
* Version 2.0.7: Added ``TOK.COMPANY`` token type; fixed a few abbreviations;
  renamed parameter ``text`` to ``text_or_gen`` in functions that accept a string
  or a string iterator.
* Version 2.0.6: Fixed handling of abbreviations such as *m.v.* (*miðað við*)
  that should not start a new sentence even if the following word is capitalized.
* Version 2.0.5: Fixed bug where single uppercase letters were erroneously
  being recognized as abbreviations, causing prepositions such as 'Í' and 'Á'
  at the beginning of sentences to be misunderstood in GreynirPackage.
* Version 2.0.4: Added imperfect abbreviations (*amk.*, *osfrv.*); recognized
  *klukkan hálf tvö* as a ``TOK.TIME``.
* Version 2.0.3: Fixed bug in ``detokenize()`` where abbreviations, domains
  and e-mails containing periods were wrongly split.
* Version 2.0.2: Spelled-out day ordinals are no longer included as a part of
  ``TOK.DATEREL`` tokens. Thus, *þriðji júní* is now a ``TOK.WORD``
  followed by a ``TOK.DATEREL``. *3. júní* continues to be parsed as
  a single ``TOK.DATEREL``.
* Version 2.0.1: Order of abbreviation meanings within the ``token.val`` field
  made deterministic; fixed bug in measurement unit handling.
* Version 2.0.0: Added command line tool; added ``split_into_sentences()``
  and ``detokenize()`` functions; removed ``convert_telno`` option;
  splitting of coalesced tokens made more robust;
  added ``TOK.SSN``, ``TOK.MOLECULE``, ``TOK.USERNAME`` and
  ``TOK.SERIALNUMBER`` token kinds; abbreviations can now have multiple
  meanings.
* Version 1.4.0: Added the ``**options`` parameter to the
  ``tokenize()`` function, giving control over the handling of numbers,
  telephone numbers, and 'kludgy' ordinals.
* Version 1.3.0: Added ``TOK.DOMAIN`` and ``TOK.HASHTAG`` token types;
  improved handling of capitalized month name *Ágúst*, which is
  now recognized when following an ordinal number; improved recognition
  of telephone numbers; added abbreviations.
* Version 1.2.3: Added abbreviations; updated GitHub URLs.
* Version 1.2.2: Added support for composites with more than two parts, i.e.
  *„dómsmála-, ferðamála-, iðnaðar- og nýsköpunarráðherra“*; added support for
  ``±`` sign; added several abbreviations.
* Version 1.2.1: Fixed bug where the name *Ágúst* was recognized
  as a month name; Unicode nonbreaking and invisible space characters
  are now removed before tokenization.
* Version 1.2.0: Added support for Unicode fraction characters;
  enhanced handing of degrees (°, °C, °F); fixed bug in cubic meter
  measurement unit; more abbreviations.
* Version 1.1.2: Fixed bug in liter (``l`` and ``ltr``) measurement units.
* Version 1.1.1: Added ``mark_paragraphs()`` function.
* Version 1.1.0: All abbreviations in ``Abbrev.conf`` are now
  returned with their meaning in a tuple in ``token.val``;
  handling of 'mbl.is' fixed.
* Version 1.0.9: Added abbreviation 'MAST'; harmonized copyright headers.
* Version 1.0.8: Bug fixes in ``DATEREL``, ``MEASUREMENT`` and ``NUMWLETTER``
  token handling; added 'kWst' and 'MWst' measurement units; blackened.
* Version 1.0.7: Added ``TOK.NUMWLETTER`` token type.
* Version 1.0.6: Automatic merging of Unicode ``COMBINING ACUTE ACCENT`` and
  ``COMBINING DIAERESIS`` code points with vowels.
* Version 1.0.5: Date/time and amount tokens coalesced to a further extent.
* Version 1.0.4: Added ``TOK.DATEABS``, ``TOK.TIMESTAMPABS``,
  ``TOK.MEASUREMENT``.



            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/mideind/Tokenizer",
    "name": "tokenizer",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "nlp,tokenizer,icelandic",
    "author": "Mi\u00f0eind ehf.",
    "author_email": "mideind@mideind.is",
    "download_url": "https://files.pythonhosted.org/packages/25/16/cea62e77428f0722b8adf33ee5a8d4077e726c53d8894bc342e9a944e95e/tokenizer-3.4.3.tar.gz",
    "platform": null,
    "description": "-----------------------------------------\nTokenizer: A tokenizer for Icelandic text\n-----------------------------------------\n\n.. image:: https://github.com/mideind/Tokenizer/workflows/Python%20package/badge.svg\n   :target: https://github.com/mideind/Tokenizer\n\nOverview\n--------\n\nTokenization is a necessary first step in many natural language processing\ntasks, such as word counting, parsing, spell checking, corpus generation, and\nstatistical analysis of text.\n\n**Tokenizer** is a compact pure-Python (>= 3.8) executable\nprogram and module for tokenizing Icelandic text. It converts input text to\nstreams of *tokens*, where each token is a separate word, punctuation sign,\nnumber/amount, date, e-mail, URL/URI, etc. It also segments the token stream\ninto sentences, considering corner cases such as abbreviations and dates in\nthe middle of sentences.\n\nThe package contains a dictionary of common Icelandic abbreviations,\nin the file ``src/tokenizer/Abbrev.conf``.\n\nTokenizer is an independent spinoff from the `Greynir project <https://greynir.is>`_\n(GitHub repository `here <https://github.com/mideind/Greynir>`_), by the same authors.\nThe `Greynir natural language parser for Icelandic <https://github.com/mideind/GreynirPackage>`_\nuses Tokenizer on its input.\n\nNote that Tokenizer is licensed under the *MIT* license\nwhile Greynir is licensed under *GPLv3*.\n\n\nDeep vs. shallow tokenization\n-----------------------------\n\nTokenizer can do both *deep* and *shallow* tokenization.\n\n*Shallow* tokenization simply returns each sentence as a string (or as a line\nof text in an output file), where the individual tokens are separated\nby spaces.\n\n*Deep* tokenization returns token objects that have been annotated with\nthe token type and further information extracted from the token, for example\na *(year, month, day)* tuple in the case of date tokens.\n\nIn shallow tokenization, tokens are in most cases kept intact, although\nconsecutive white space is always coalesced. The input strings\n``\"800 MW\"``, ``\"21. jan\u00faar\"`` and ``\"800 7000\"`` thus become\ntwo tokens each, output with a single space between them.\n\nIn deep tokenization, the same strings are represented by single token objects,\nof type ``TOK.MEASUREMENT``, ``TOK.DATEREL`` and ``TOK.TELNO``, respectively.\nThe text associated with a single token object may contain spaces,\nalthough consecutive whitespace is always coalesced into a single space ``\" \"``.\n\nBy default, the command line tool performs shallow tokenization. If you\nwant deep tokenization with the command line tool, use the ``--json`` or\n``--csv`` switches.\n\nFrom Python code, call ``split_into_sentences()`` for shallow tokenization,\nor ``tokenize()`` for deep tokenization. These functions are documented with\nexamples below.\n\n\nInstallation\n------------\n\nTo install:\n\n.. code-block:: console\n\n    $ pip install tokenizer\n\n\nCommand line tool\n-----------------\n\nAfter installation, the tokenizer can be invoked directly from\nthe command line:\n\n.. code-block:: console\n\n    $ tokenize input.txt output.txt\n\nInput and output files are in UTF-8 encoding. 
If the files are not\ngiven explicitly, ``stdin`` and ``stdout`` are used for input and output,\nrespectively.\n\nEmpty lines in the input are treated as hard sentence boundaries.\n\nBy default, the output consists of one sentence per line, where each\nline ends with a single newline character (ASCII LF, ``chr(10)``, ``\"\\n\"``).\nWithin each line, tokens are separated by spaces.\n\nThe following (mutually exclusive) options can be specified\non the command line:\n\n+-------------------+---------------------------------------------------+\n| | ``--csv``       | Deep tokenization. Output token objects in CSV    |\n|                   | format, one per line. Sentences are separated by  |\n|                   | lines containing ``0,\"\",\"\"``                      |\n+-------------------+---------------------------------------------------+\n| | ``--json``      | Deep tokenization. Output token objects in JSON   |\n|                   | format, one per line.                             |\n+-------------------+---------------------------------------------------+\n\nOther options can be specified on the command line:\n\n+-----------------------------------+---------------------------------------------------+\n| | ``-n``                          | Normalize punctuation, causing e.g. quotes to be  |\n| |                                 | output in Icelandic form and hyphens to be        |\n| | ``--normalize``                 | regularized. This option is only applicable to    |\n|                                   | shallow tokenization.                             |\n+-----------------------------------+---------------------------------------------------+\n| | ``-s``                          | Input contains strictly one sentence per line,    |\n| |                                 | i.e. every newline is a sentence boundary.        |\n| | ``--one_sent_per_line``         |                                                   |\n+-----------------------------------+---------------------------------------------------+\n| | ``-o``                          | Output original token text, i.e. bypass shallow   |\n| |                                 | tokenization. This effectively runs the tokenizer |\n| | ``--original``                  | as a sentence splitter only.                      
|\n+-----------------------------------+---------------------------------------------------+\n| | ``-m``                          | Degree signal in tokens denoting temperature      |\n| | ``--convert_measurements``      | normalized (200\u00b0 C -> 200 \u00b0C)                     |\n+-----------------------------------+---------------------------------------------------+\n| | ``-p``                          | Numbers combined into one token with the          |\n| | ``--coalesce_percent``          | following token denoting percentage word forms    |\n|                                   | (*pr\u00f3sent*, *pr\u00f3sentustig*, *hundra\u00f0shlutar*)     |\n+-----------------------------------+---------------------------------------------------+\n| | ``-g``                          | Do not replace composite glyphs using Unicode     |\n| | ``--keep_composite_glyphs``     | COMBINING codes with their accented/umlaut        |\n|                                   | counterparts                                      |\n+-----------------------------------+---------------------------------------------------+\n| | ``-e``                          | HTML escape codes replaced by their meaning,      |\n| | ``--replace_html_escapes``      | such as ``&aacute;`` -> ``\u00e1``                     |\n+-----------------------------------+---------------------------------------------------+\n| | ``-c``                          | English-style decimal points and thousands        |\n| | ``--convert_numbers``           | separators in numbers changed to Icelandic style  |\n+-----------------------------------+---------------------------------------------------+\n| | ``-k N``                        | Kludgy ordinal handling defined.                  |\n| | ``--handle_kludgy_ordinals N``  | 0: Returns the original mixed word form           |\n|                                   | 1. Kludgy ordinal returned as pure word forms     |\n|                                   | 2: Kludgy ordinals returned as pure numbers       |\n+-----------------------------------+---------------------------------------------------+\n\n\nType ``tokenize -h`` or ``tokenize --help`` to get a short help message.\n\nExample\n=======\n\n.. code-block:: console\n\n    $ echo \"3.jan\u00faar sl. keypti   \u00e9g 64kWst rafb\u00edl. Hann kosta\u00f0i \u20ac 30.000.\" | tokenize\n    3. jan\u00faar sl. keypti \u00e9g 64kWst rafb\u00edl .\n    Hann kosta\u00f0i \u20ac30.000 .\n\n    $ echo \"3.jan\u00faar sl. keypti   \u00e9g 64kWst rafb\u00edl. Hann kosta\u00f0i \u20ac 30.000.\" | tokenize --csv\n    19,\"3. jan\u00faar\",\"0|1|3\"\n    6,\"sl.\",\"s\u00ed\u00f0astli\u00f0inn\"\n    6,\"keypti\",\"\"\n    6,\"\u00e9g\",\"\"\n    22,\"64kWst\",\"J|230400000.0\"\n    6,\"rafb\u00edl\",\"\"\n    1,\".\",\".\"\n    0,\"\",\"\"\n    6,\"Hann\",\"\"\n    6,\"kosta\u00f0i\",\"\"\n    13,\"\u20ac30.000\",\"30000|EUR\"\n    1,\".\",\".\"\n    0,\"\",\"\"\n\n    $ echo \"3.jan\u00faar sl. keypti   \u00e9g 64kWst rafb\u00edl. Hann kosta\u00f0i \u20ac 30.000.\" | tokenize --json\n    {\"k\":\"BEGIN SENT\"}\n    {\"k\":\"DATEREL\",\"t\":\"3. 
jan\u00faar\",\"v\":[0,1,3]}\n    {\"k\":\"WORD\",\"t\":\"sl.\",\"v\":[\"s\u00ed\u00f0astli\u00f0inn\"]}\n    {\"k\":\"WORD\",\"t\":\"keypti\"}\n    {\"k\":\"WORD\",\"t\":\"\u00e9g\"}\n    {\"k\":\"MEASUREMENT\",\"t\":\"64kWst\",\"v\":[\"J\",230400000.0]}\n    {\"k\":\"WORD\",\"t\":\"rafb\u00edl\"}\n    {\"k\":\"PUNCTUATION\",\"t\":\".\",\"v\":\".\"}\n    {\"k\":\"END SENT\"}\n    {\"k\":\"BEGIN SENT\"}\n    {\"k\":\"WORD\",\"t\":\"Hann\"}\n    {\"k\":\"WORD\",\"t\":\"kosta\u00f0i\"}\n    {\"k\":\"AMOUNT\",\"t\":\"\u20ac30.000\",\"v\":[30000,\"EUR\"]}\n    {\"k\":\"PUNCTUATION\",\"t\":\".\",\"v\":\".\"}\n    {\"k\":\"END SENT\"}\n\nPython module\n-------------\n\nShallow tokenization example\n============================\n\nAn example of shallow tokenization from Python code goes something like this:\n\n.. code-block:: python\n\n    from tokenizer import split_into_sentences\n\n    # A string to be tokenized, containing two sentences\n    s = \"3.jan\u00faar sl. keypti   \u00e9g 64kWst rafb\u00edl. Hann kosta\u00f0i \u20ac 30.000.\"\n\n    # Obtain a generator of sentence strings\n    g = split_into_sentences(s)\n\n    # Loop through the sentences\n    for sentence in g:\n\n        # Obtain the individual token strings\n        tokens = sentence.split()\n\n        # Print the tokens, comma-separated\n        print(\"|\".join(tokens))\n\nThe program outputs::\n\n    3.|jan\u00faar|sl.|keypti|\u00e9g|64kWst|rafb\u00edl|.\n    Hann|kosta\u00f0i|\u20ac30.000|.\n\nDeep tokenization example\n=========================\n\nTo do deep tokenization from within Python code:\n\n.. code-block:: python\n\n    from tokenizer import tokenize, TOK\n\n    text = (\"M\u00e1linu var v\u00edsa\u00f0 til stj\u00f3rnskipunar- og eftirlitsnefndar \"\n        \"skv. 3. gr. XVII. kafla laga nr. 10/2007 \u00feann 3. jan\u00faar 2010.\")\n\n    for token in tokenize(text):\n\n        print(\"{0}: '{1}' {2}\".format(\n            TOK.descr[token.kind],\n            token.txt or \"-\",\n            token.val or \"\"))\n\nOutput::\n\n    BEGIN SENT: '-' (0, None)\n    WORD: 'M\u00e1linu'\n    WORD: 'var'\n    WORD: 'v\u00edsa\u00f0'\n    WORD: 'til'\n    WORD: 'stj\u00f3rnskipunar- og eftirlitsnefndar'\n    WORD: 'skv.' [('samkv\u00e6mt', 0, 'fs', 'skst', 'skv.', '-')]\n    ORDINAL: '3.' 3\n    WORD: 'gr.' [('grein', 0, 'kvk', 'skst', 'gr.', '-')]\n    ORDINAL: 'XVII.' 17\n    WORD: 'kafla'\n    WORD: 'laga'\n    WORD: 'nr.' [('n\u00famer', 0, 'hk', 'skst', 'nr.', '-')]\n    NUMBER: '10' (10, None, None)\n    PUNCTUATION: '/' (4, '/')\n    YEAR: '2007' 2007\n    WORD: '\u00feann'\n    DATEABS: '3. jan\u00faar 2010' (2010, 1, 3)\n    PUNCTUATION: '.' 
(3, '.')\n    END SENT: '-'\n\nNote the following:\n\n- Sentences are delimited by ``TOK.S_BEGIN`` and ``TOK.S_END`` tokens.\n- Composite words, such as *stj\u00f3rnskipunar- og eftirlitsnefndar*,\n  are coalesced into one token.\n- Well-known abbreviations are recognized and their full expansion\n  is available in the ``token.val`` field.\n- Ordinal numbers (*3., XVII.*) are recognized and their value (*3, 17*)\n  is available in the ``token.val``  field.\n- Dates, years and times, both absolute and relative, are recognized and\n  the respective year, month, day, hour, minute and second\n  values are included as a tuple in ``token.val``.\n- Numbers, both integer and real, are recognized and their value\n  is available in the ``token.val`` field.\n- Further details of how Tokenizer processes text can be inferred from the\n  `test module <https://github.com/mideind/Tokenizer/blob/master/test/test_tokenizer.py>`_\n  in the project's `GitHub repository <https://github.com/mideind/Tokenizer>`_.\n\n\nThe ``tokenize()`` function\n---------------------------\n\nTo deep-tokenize a text string, call ``tokenizer.tokenize(text, **options)``.\nThe ``text`` parameter can be a string, or an iterable that yields strings\n(such as a text file object).\n\nThe function returns a Python *generator* of token objects.\nEach token object is a simple ``namedtuple`` with three\nfields: ``(kind, txt, val)`` (further documented below).\n\nThe ``tokenizer.tokenize()`` function is typically called in a ``for`` loop:\n\n.. code-block:: python\n\n    import tokenizer\n    for token in tokenizer.tokenize(mystring):\n        kind, txt, val = token\n        if kind == tokenizer.TOK.WORD:\n            # Do something with word tokens\n            pass\n        else:\n            # Do something else\n            pass\n\nAlternatively, create a token list from the returned generator::\n\n    token_list = list(tokenizer.tokenize(mystring))\n\nThe ``split_into_sentences()`` function\n---------------------------------------\n\nTo shallow-tokenize a text string, call\n``tokenizer.split_into_sentences(text_or_gen, **options)``.\nThe ``text_or_gen`` parameter can be a string, or an iterable that yields\nstrings (such as a text file object).\n\nThis function returns a Python *generator* of strings, yielding a string\nfor each sentence in the input. Within a sentence, the tokens are\nseparated by spaces.\n\nYou can pass the option ``normalize=True`` to the function if you want\nthe normalized form of punctuation tokens. Normalization outputs\nIcelandic single and double quotes (\u201ethese\u201c) instead of English-style\nones (\"these\"), converts three-dot ellipsis ... to single character\nellipsis \u2026, and casts en-dashes \u2013 and em-dashes \u2014 to regular hyphens.\n\nThe ``tokenizer.split_into_sentences()`` function is typically called\nin a ``for`` loop:\n\n.. 
code-block:: python\n\n    import tokenizer\n    with open(\"example.txt\", \"r\", encoding=\"utf-8\") as f:\n        # You can pass a file object directly to split_into_sentences()\n        for sentence in tokenizer.split_into_sentences(f):\n            # sentence is a string of space-separated tokens\n            tokens = sentence.split()\n            # Now, tokens is a list of strings, one for each token\n            for t in tokens:\n                # Do something with the token t\n                pass\n\n\nThe ``correct_spaces()`` function\n---------------------------------\n\nThe ``tokenizer.correct_spaces(text)`` function returns a string after\nsplitting it up and re-joining it with correct whitespace around\npunctuation tokens. Example::\n\n    >>> import tokenizer\n    >>> tokenizer.correct_spaces(\n    ... \"Fr\u00e9tt \\n  dagsins:J\u00f3n\\t ,Fri\u00f0geir og P\u00e1ll ! 100  /  2  =   50\"\n    ... )\n    'Fr\u00e9tt dagsins: J\u00f3n, Fri\u00f0geir og P\u00e1ll! 100/2 = 50'\n\n\nThe ``detokenize()`` function\n---------------------------------\n\nThe ``tokenizer.detokenize(tokens, normalize=False)`` function\ntakes an iterable of token objects and returns a corresponding, correctly\nspaced text string, composed from the tokens' text. If the\n``normalize`` parameter is set to ``True``,\nthe function uses the normalized form of any punctuation tokens, such\nas proper Icelandic single and double quotes instead of English-type\nquotes. Example::\n\n    >>> import tokenizer\n    >>> toklist = list(tokenizer.tokenize(\"Hann sag\u00f0i: \u201e\u00de\u00fa ert \u00e1g\u00e6t!\u201c.\"))\n    >>> tokenizer.detokenize(toklist, normalize=True)\n    'Hann sag\u00f0i: \u201e\u00de\u00fa ert \u00e1g\u00e6t!\u201c.'\n\n\nThe ``normalized_text()`` function\n----------------------------------\n\nThe ``tokenizer.normalized_text(token)`` function\nreturns the normalized text for a token. This means that the original\ntoken text is returned except for certain punctuation tokens, where a\nnormalized form is returned instead. Specifically, English-type quotes\nare converted to Icelandic ones, and en- and em-dashes are converted\nto regular hyphens.\n\n\nThe ``text_from_tokens()`` function\n-----------------------------------\n\nThe ``tokenizer.text_from_tokens(tokens)`` function\nreturns a concatenation of the text contents of the given token list,\nwith spaces between tokens. Example::\n\n    >>> import tokenizer\n    >>> toklist = list(tokenizer.tokenize(\"Hann sag\u00f0i: \\\"\u00de\u00fa ert \u00e1g\u00e6t!\\\".\"))\n    >>> tokenizer.text_from_tokens(toklist)\n    'Hann sag\u00f0i : \" \u00de\u00fa ert \u00e1g\u00e6t ! \" .'\n\n\nThe ``normalized_text_from_tokens()`` function\n----------------------------------------------\n\nThe ``tokenizer.normalized_text_from_tokens(tokens)`` function\nreturns a concatenation of the normalized text contents of the given\ntoken list, with spaces between tokens. Example (note the double quotes)::\n\n    >>> import tokenizer\n    >>> toklist = list(tokenizer.tokenize(\"Hann sag\u00f0i: \\\"\u00de\u00fa ert \u00e1g\u00e6t!\\\".\"))\n    >>> tokenizer.normalized_text_from_tokens(toklist)\n    'Hann sag\u00f0i : \u201e \u00de\u00fa ert \u00e1g\u00e6t ! 


The ``text_from_tokens()`` function
-----------------------------------

The ``tokenizer.text_from_tokens(tokens)`` function
returns a concatenation of the text contents of the given token list,
with spaces between tokens. Example::

    >>> import tokenizer
    >>> toklist = list(tokenizer.tokenize("Hann sagði: \"Þú ert ágæt!\"."))
    >>> tokenizer.text_from_tokens(toklist)
    'Hann sagði : " Þú ert ágæt ! " .'


The ``normalized_text_from_tokens()`` function
----------------------------------------------

The ``tokenizer.normalized_text_from_tokens(tokens)`` function
returns a concatenation of the normalized text contents of the given
token list, with spaces between tokens. Example (note the double quotes)::

    >>> import tokenizer
    >>> toklist = list(tokenizer.tokenize("Hann sagði: \"Þú ert ágæt!\"."))
    >>> tokenizer.normalized_text_from_tokens(toklist)
    'Hann sagði : „ Þú ert ágæt ! “ .'


Tokenization options
--------------------

You can optionally pass one or more of the following options as
keyword parameters to the ``tokenize()`` and ``split_into_sentences()``
functions (a combined example is sketched after the list of options):


* ``convert_numbers=[bool]``

  Setting this option to ``True`` causes the tokenizer to convert numbers
  and amounts with
  English-style decimal points (``.``) and thousands separators (``,``)
  to Icelandic format, where the decimal separator is a comma (``,``)
  and the thousands separator is a period (``.``). ``$1,234.56`` is thus
  converted to a token whose text is ``$1.234,56``.

  The default value for the ``convert_numbers`` option is ``False``.

  Note that in versions of Tokenizer prior to 1.4, ``convert_numbers``
  was ``True``.


* ``convert_measurements=[bool]``

  Setting this option to ``True`` causes the tokenizer to convert
  degrees Kelvin, Celsius and Fahrenheit to a regularized form, i.e.
  ``200° C`` becomes ``200 °C``.

  The default value for the ``convert_measurements`` option is ``False``.


* ``replace_composite_glyphs=[bool]``

  Setting this option to ``False`` disables the automatic replacement
  of composite Unicode glyphs with their corresponding Icelandic characters.
  By default, the tokenizer combines vowels with the Unicode
  COMBINING ACUTE ACCENT and COMBINING DIAERESIS glyphs to form single
  character code points, such as 'á' and 'ö'.

  The default value for the ``replace_composite_glyphs`` option is ``True``.


* ``replace_html_escapes=[bool]``

  Setting this option to ``True`` causes the tokenizer to replace common
  HTML escaped character codes, such as ``&aacute;`` with the character being
  escaped, such as ``á``. Note that ``&shy;`` (soft hyphen) is replaced by
  an empty string, and ``&nbsp;`` is replaced by a normal space.
  The ligatures ``&filig;`` and ``&fllig;`` are replaced by ``fi`` and ``fl``,
  respectively.

  The default value for the ``replace_html_escapes`` option is ``False``.


* ``handle_kludgy_ordinals=[value]``

  This option controls the way Tokenizer handles 'kludgy' ordinals, such as
  *1sti*, *4ðu*, or *2ja*. By default, such ordinals are returned unmodified
  ('passed through') as word tokens (``TOK.WORD``).
  However, this can be modified as follows:

  * ``tokenizer.KLUDGY_ORDINALS_MODIFY``: Kludgy ordinals are corrected
    to become 'proper' word tokens, i.e. *1sti* becomes *fyrsti* and
    *2ja* becomes *tveggja*.

  * ``tokenizer.KLUDGY_ORDINALS_TRANSLATE``: Kludgy ordinals that represent
    proper ordinal numbers are translated to ordinal tokens (``TOK.ORDINAL``),
    with their original text and their ordinal value. *1sti* thus
    becomes a ``TOK.ORDINAL`` token with a value of 1, and *3ja* becomes
    a ``TOK.ORDINAL`` with a value of 3.

  * ``tokenizer.KLUDGY_ORDINALS_PASS_THROUGH`` is the default value of
    the option. It causes kludgy ordinals to be returned unmodified as
    word tokens.

  Note that versions of Tokenizer prior to 1.4 behaved as if
  ``handle_kludgy_ordinals`` were set to
  ``tokenizer.KLUDGY_ORDINALS_TRANSLATE``.
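
As a minimal sketch (the sample sentence is merely illustrative), several
of these options can be combined in a single call:

.. code-block:: python

    import tokenizer

    text = "Ég mældi 200° C og greiddi $1,234.56 fyrir 2ja lítra flösku."
    for sentence in tokenizer.split_into_sentences(
        text,
        normalize=True,
        convert_numbers=True,
        convert_measurements=True,
        handle_kludgy_ordinals=tokenizer.KLUDGY_ORDINALS_MODIFY,
    ):
        print(sentence)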


The token object
----------------

Each token is an instance of the class ``Tok`` that has three main properties:
``kind``, ``txt`` and ``val``.


The ``kind`` property
=====================

The ``kind`` property contains one of the following integer constants,
defined within the ``TOK`` class:

+---------------+---------+---------------------+---------------------------+
| Constant      |  Value  | Explanation         | Examples                  |
+===============+=========+=====================+===========================+
| PUNCTUATION   |    1    | Punctuation         | . ! ; % &                 |
+---------------+---------+---------------------+---------------------------+
| TIME          |    2    | Time (h, m, s)      | | 11:35:40                |
|               |         |                     | | kl. 7:05                |
|               |         |                     | | klukkan 23:35           |
+---------------+---------+---------------------+---------------------------+
| DATE *        |    3    | Date (y, m, d)      | [Unused, see DATEABS and  |
|               |         |                     | DATEREL]                  |
+---------------+---------+---------------------+---------------------------+
| YEAR          |    4    | Year                | | árið 874 e.Kr.          |
|               |         |                     | | 1965                    |
|               |         |                     | | 44 f.Kr.                |
+---------------+---------+---------------------+---------------------------+
| NUMBER        |    5    | Number              | | 100                     |
|               |         |                     | | 1.965                   |
|               |         |                     | | 1.965,34                |
|               |         |                     | | 1,965.34                |
|               |         |                     | | 2⅞                      |
+---------------+---------+---------------------+---------------------------+
| WORD          |    6    | Word                | | kattaeftirlit           |
|               |         |                     | | hunda- og kattaeftirlit |
+---------------+---------+---------------------+---------------------------+
| TELNO         |    7    | Telephone number    | | 5254764                 |
|               |         |                     | | 699-4244                |
|               |         |                     | | 410 4000                |
+---------------+---------+---------------------+---------------------------+
| PERCENT       |    8    | Percentage          | 78%                       |
+---------------+---------+---------------------+---------------------------+
| URL           |    9    | URL                 | | https://greynir.is      |
|               |         |                     | | http://tiny.cc/28695y   |
+---------------+---------+---------------------+---------------------------+
| ORDINAL       |    10   | Ordinal number      | | 30.                     |
|               |         |                     | | XVIII.                  |
+---------------+---------+---------------------+---------------------------+
| TIMESTAMP *   |    11   | Timestamp           | [Unused, see              |
|               |         |                     | TIMESTAMPABS and          |
|               |         |                     | TIMESTAMPREL]             |
+---------------+---------+---------------------+---------------------------+
| CURRENCY *    |    12   | Currency name       | [Unused]                  |
+---------------+---------+---------------------+---------------------------+
| AMOUNT        |    13   | Amount              | | €2.345,67               |
|               |         |                     | | 750 þús.kr.             |
|               |         |                     | | 2,7 mrð. USD            |
|               |         |                     | | kr. 9.900               |
|               |         |                     | | EUR 200                 |
+---------------+---------+---------------------+---------------------------+
| PERSON *      |    14   | Person name         | [Unused]                  |
+---------------+---------+---------------------+---------------------------+
| EMAIL         |    15   | E-mail              | ``fake@news.is``          |
+---------------+---------+---------------------+---------------------------+
| ENTITY *      |    16   | Named entity        | [Unused]                  |
+---------------+---------+---------------------+---------------------------+
| UNKNOWN       |    17   | Unknown token       |                           |
+---------------+---------+---------------------+---------------------------+
| DATEABS       |    18   | Absolute date       | | 30. desember 1965       |
|               |         |                     | | 30/12/1965              |
|               |         |                     | | 1965-12-30              |
|               |         |                     | | 1965/12/30              |
+---------------+---------+---------------------+---------------------------+
| DATEREL       |    19   | Relative date       | | 15. mars                |
|               |         |                     | | 15/3                    |
|               |         |                     | | 15.3.                   |
|               |         |                     | | mars 1911               |
+---------------+---------+---------------------+---------------------------+
| TIMESTAMPABS  |    20   | Absolute timestamp  | | 30. desember 1965 11:34 |
|               |         |                     | | 1965-12-30 kl. 13:00    |
+---------------+---------+---------------------+---------------------------+
| TIMESTAMPREL  |    21   | Relative timestamp  | | 30. desember kl. 13:00  |
+---------------+---------+---------------------+---------------------------+
| MEASUREMENT   |    22   | Value with a        | | 690 MW                  |
|               |         | measurement unit    | | 1.010 hPa               |
|               |         |                     | | 220 m²                  |
|               |         |                     | | 80° C                   |
+---------------+---------+---------------------+---------------------------+
| NUMWLETTER    |    23   | Number followed by  | | 14a                     |
|               |         | a single letter     | | 7B                      |
+---------------+---------+---------------------+---------------------------+
| DOMAIN        |    24   | Domain name         | | greynir.is              |
|               |         |                     | | Reddit.com              |
|               |         |                     | | www.wikipedia.org       |
+---------------+---------+---------------------+---------------------------+
| HASHTAG       |    25   | Hashtag             | | #MeToo                  |
|               |         |                     | | #12stig                 |
+---------------+---------+---------------------+---------------------------+
| MOLECULE      |    26   | Molecular formula   | | H2SO4                   |
|               |         |                     | | CO2                     |
+---------------+---------+---------------------+---------------------------+
| SSN           |    27   | Social security     | | 591213-1480             |
|               |         | number (*kennitala*)|                           |
+---------------+---------+---------------------+---------------------------+
| USERNAME      |    28   | Twitter user handle | | @username_123           |
|               |         |                     |                           |
+---------------+---------+---------------------+---------------------------+
| SERIALNUMBER  |    29   | Serial number       | | 394-5388                |
|               |         |                     | | 12-345-6789             |
+---------------+---------+---------------------+---------------------------+
| COMPANY *     |    30   | Company name        | [Unused]                  |
+---------------+---------+---------------------+---------------------------+
| S_BEGIN       |  11001  | Start of sentence   |                           |
+---------------+---------+---------------------+---------------------------+
| S_END         |  11002  | End of sentence     |                           |
+---------------+---------+---------------------+---------------------------+

(*) The token types marked with an asterisk are reserved for the Greynir package
and not currently returned by the tokenizer.

To obtain a descriptive text for a token kind, use
``TOK.descr[token.kind]`` (see example above).
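
For instance, ``TOK.descr`` can be used to label every token in a
sentence (a minimal sketch with an illustrative sentence):

.. code-block:: python

    import tokenizer

    # Print the descriptive kind name next to each token's text
    for t in tokenizer.tokenize("Fundurinn hefst kl. 13:30 þann 1. maí."):
        print(tokenizer.TOK.descr[t.kind], t.txt or "")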


The ``txt`` property
====================

The ``txt`` property contains the original source text for the token,
with the following exceptions:

* All contiguous whitespace (spaces, tabs, newlines) is coalesced
  into single spaces (``" "``) within the ``txt`` string. A date
  token that is parsed from a source text of ``"29.  \n   janúar"``
  thus has a ``txt`` of ``"29. janúar"``.

* Tokenizer automatically merges Unicode ``COMBINING ACUTE ACCENT``
  (code point 769) and ``COMBINING DIAERESIS`` (code point 776)
  with vowels to form single code points for the Icelandic letters
  á, é, í, ó, ú, ý and ö, in both lower and upper case. (This behavior
  can be disabled; see the ``replace_composite_glyphs`` option described
  above.)

* If the appropriate options are specified (see above), it converts
  kludgy ordinals (*3ja*) to proper ones (*þriðja*), and English-style
  thousand and decimal separators to Icelandic ones
  (*10,345.67* becomes *10.345,67*).

* If the ``replace_html_escapes`` option is set, Tokenizer replaces
  HTML-style escapes (``&aacute;``) with the characters
  being escaped (``á``).


The ``val`` property
====================

The ``val`` property contains auxiliary information, corresponding to
the token kind, as follows (a short access sketch follows the list):

- For ``TOK.PUNCTUATION``, the ``val`` field contains a tuple with
  two items: ``(whitespace, normalform)``. The first item (``token.val[0]``)
  specifies the whitespace normally found around the symbol in question,
  as an integer::

    TP_LEFT = 1   # Whitespace to the left
    TP_CENTER = 2 # Whitespace to the left and right
    TP_RIGHT = 3  # Whitespace to the right
    TP_NONE = 4   # No whitespace

  The second item (``token.val[1]``) contains a normalized representation of the
  punctuation. For instance, various forms of single and double
  quotes are represented as Icelandic ones (i.e. „these“ or ‚these‘) in
  normalized form, and an ellipsis ("...") is represented as the single
  character "…".

- For ``TOK.TIME``, the ``val`` field contains an
  ``(hour, minute, second)`` tuple.

- For ``TOK.DATEABS``, the ``val`` field contains a
  ``(year, month, day)`` tuple (all 1-based).

- For ``TOK.DATEREL``, the ``val`` field contains a
  ``(year, month, day)`` tuple (all 1-based),
  except that at least one of the tuple fields is missing and set to 0.
  Example: *3. júní* becomes ``TOK.DATEREL`` with the fields ``(0, 6, 3)``
  as the year is missing.

- For ``TOK.YEAR``, the ``val`` field contains the year as an integer.
  A negative number indicates that the year is BCE (*fyrir Krist*),
  specified with the suffix *f.Kr.* (e.g. *árið 33 f.Kr.*).

- For ``TOK.NUMBER``, the ``val`` field contains a tuple
  ``(number, None, None)``.
  (The two empty fields are included for compatibility with Greynir.)

- For ``TOK.WORD``, the ``val`` field contains the full expansion
  of an abbreviation, as a list containing a single tuple, or ``None``
  if the word is not abbreviated.

- For ``TOK.PERCENT``, the ``val`` field contains a tuple
  of ``(percentage, None, None)``.

- For ``TOK.ORDINAL``, the ``val`` field contains the ordinal value
  as an integer. The original ordinal may be a decimal number
  or a Roman numeral.

- For ``TOK.TIMESTAMP``, the ``val`` field contains
  a ``(year, month, day, hour, minute, second)`` tuple.

- For ``TOK.AMOUNT``, the ``val`` field contains
  an ``(amount, currency, None, None)`` tuple. The amount is a float, and
  the currency is an ISO currency code, e.g. *USD* for dollars ($ sign),
  *EUR* for euros (€ sign) or *ISK* for Icelandic króna
  (*kr.* abbreviation). (The two empty fields are included for
  compatibility with Greynir.)

- For ``TOK.MEASUREMENT``, the ``val`` field contains a ``(unit, value)``
  tuple, where ``unit`` is a base SI unit (such as ``g``, ``m``,
  ``m²``, ``s``, ``W``, ``Hz``, ``K`` for temperature in Kelvin).

- For ``TOK.TELNO``, the ``val`` field contains a tuple: ``(number, cc)``
  where the first item is the phone number
  in a normalized ``NNN-NNNN`` format, i.e. always including a hyphen,
  and the second item is the country code, possibly prefixed by ``+``.
  The country code defaults to ``354`` (Iceland).
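
A short access sketch (the sample sentence is illustrative; the exact
values depend on the input):

.. code-block:: python

    import tokenizer

    for t in tokenizer.tokenize("Fundurinn 3. júní kostaði 2,7 mrð. USD."):
        if t.kind == tokenizer.TOK.DATEREL:
            year, month, day = t.val        # e.g. (0, 6, 3) for '3. júní'
            print("date:", year, month, day)
        elif t.kind == tokenizer.TOK.AMOUNT:
            amount, currency, _, _ = t.val  # float amount and ISO currency code
            print("amount:", amount, currency)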


Abbreviations
-------------

Abbreviations recognized by Tokenizer are defined in the ``Abbrev.conf``
file, found in the ``src/tokenizer/`` directory. This is a text file with
abbreviations, their definitions and explanatory comments.

When an abbreviation is encountered, it is recognized as a word token
(i.e. having its ``kind`` field equal to ``TOK.WORD``).
Its expansion(s) are included in the token's
``val`` field as a list containing tuples of the format
``(ordmynd, utg, ordfl, fl, stofn, beyging)``.
An example is *o.s.frv.*, which results in a ``val`` field equal to
``[('og svo framvegis', 0, 'ao', 'frasi', 'o.s.frv.', '-')]``.

The tuple format is designed to be compatible with the
*Database of Icelandic Morphology* (*DIM*),
*Beygingarlýsing íslensks nútímamáls*, i.e. the so-called *Sigrúnarsnið*.
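
A minimal sketch of looking up an expansion (the sample sentence is
illustrative):

.. code-block:: python

    import tokenizer

    for t in tokenizer.tokenize("Þetta á við um stafsetningu, greinarmerki o.s.frv."):
        if t.kind == tokenizer.TOK.WORD and t.val:
            # t.val is a list of (ordmynd, utg, ordfl, fl, stofn, beyging) tuples
            for meaning in t.val:
                print(t.txt, "->", meaning[0])   # e.g. 'og svo framvegis'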


Development installation
------------------------

To install Tokenizer in development mode, where you can easily
modify the source files (assuming you have ``git`` available):

.. code-block:: console

    $ git clone https://github.com/mideind/Tokenizer
    $ cd Tokenizer
    $ # [ Activate your virtualenv here, if you have one ]
    $ pip install -e .


Test suite
----------

Tokenizer comes with a large test suite.
The file ``test/test_tokenizer.py`` contains built-in tests that
run under ``pytest``.

To run the built-in tests, install `pytest <https://docs.pytest.org/en/latest/>`_,
``cd`` to your ``Tokenizer`` subdirectory (and optionally
activate your virtualenv), then run:

.. code-block:: console

    $ python -m pytest

The file ``test/toktest_large.txt`` contains a test set of 13,075 lines.
The lines test sentence detection, token detection and token classification.
For analysis, ``test/toktest_large_gold_perfect.txt`` contains
the expected output of a perfect shallow tokenization, and
``test/toktest_large_gold_acceptable.txt`` contains the current output of the
shallow tokenization.

The file ``test/Overview.txt`` (only in Icelandic) contains a description
of the test set, including line numbers for each part in both
``test/toktest_large.txt`` and ``test/toktest_large_gold_acceptable.txt``,
and a tag describing what is being tested in each part.

It also describes a perfect shallow tokenization for each part,
an acceptable tokenization, and the current behaviour.
As such, the description is an analysis of which edge cases the tokenizer
can handle and which it cannot.

To test the tokenizer on the large test set, run the following command:

.. code-block:: console

    $ tokenize test/toktest_large.txt test/toktest_large_out.txt

To compare the output to the acceptable behaviour:

.. code-block:: console

    $ diff test/toktest_large_out.txt test/toktest_large_gold_acceptable.txt > diff.txt

The file ``test/toktest_normal.txt`` contains a running text from recent
news articles, containing no edge cases. The gold standard for that file
can be found in the file ``test/toktest_normal_gold_expected.txt``.


Changelog
---------

* Version 3.4.3: Various minor fixes. Now requires Python 3.8 or later.
* Version 3.4.2: Abbreviations and phrases added, ``META_BEGIN`` token added.
* Version 3.4.1: Improved performance on long input chunks.
* Version 3.4.0: Improved handling and normalization of punctuation.
* Version 3.3.2: Internal refactoring; bug fixes in paragraph handling.
* Version 3.3.1: Fixed bug where opening quotes at the start of paragraphs
  were sometimes incorrectly recognized and normalized.
* Version 3.2.0: Numbers and amounts that consist of word tokens only ('sex hundruð')
  are now returned as the original ``TOK.WORD`` tokens ('sex' and 'hundruð'), not as
  single coalesced ``TOK.NUMBER``/``TOK.AMOUNT``/etc. tokens.
* Version 3.1.2: Changed paragraph markers to ``[[`` and ``]]`` (removing spaces).
* Version 3.1.1: Minor fixes; added ``Tok.from_token()``.
* Version 3.1.0: Added ``-o`` switch to the ``tokenize`` command to return original
  token text, enabling the tokenizer to run as a sentence splitter only.
* Version 3.0.0: Added tracking of character offsets for tokens within the
  original source text. Added full type annotations. Dropped Python 2.7 support.
* Version 2.5.0: Added arguments for all tokenizer options to the
  command-line tool. Type annotations enhanced.
* Version 2.4.0: Fixed bug where certain well-known word forms (*fá*, *fær*, *mín*, *sá*...)
  were being interpreted as (wrong) abbreviations. Also fixed bug where certain
  abbreviations were being recognized even in uppercase and at the end
  of a sentence, for instance *Örn.*
* Version 2.3.1: Various bug fixes; fixed type annotations for Python 2.7;
  the token kind ``NUMBER WITH LETTER`` is now ``NUMWLETTER``.
* Version 2.3.0: Added the ``replace_html_escapes`` option to
  the ``tokenize()`` function.
* Version 2.2.0: Fixed ``correct_spaces()`` to handle compounds such as
  *Atvinnu-, nýsköpunar- og ferðamálaráðuneytið* and
  *bensínstöðvar, -dælur og -tankar*.
* Version 2.1.0: Changed handling of periods at the end of sentences if they are
  a part of an abbreviation. Now, the period is kept attached to the
  abbreviation, not split off into a separate period token, as before.
* Version 2.0.7: Added ``TOK.COMPANY`` token type; fixed a few abbreviations;
  renamed parameter ``text`` to ``text_or_gen`` in functions that accept a string
  or a string iterator.
* Version 2.0.6: Fixed handling of abbreviations such as *m.v.* (*miðað við*)
  that should not start a new sentence even if the following word is capitalized.
* Version 2.0.5: Fixed bug where single uppercase letters were erroneously
  being recognized as abbreviations, causing prepositions such as 'Í' and 'Á'
  at the beginning of sentences to be misunderstood in GreynirPackage.
* Version 2.0.4: Added imperfect abbreviations (*amk.*, *osfrv.*); recognized
  *klukkan hálf tvö* as a ``TOK.TIME``.
* Version 2.0.3: Fixed bug in ``detokenize()`` where abbreviations, domains
  and e-mails containing periods were wrongly split.
* Version 2.0.2: Spelled-out day ordinals are no longer included as a part of
  ``TOK.DATEREL`` tokens. Thus, *þriðji júní* is now a ``TOK.WORD``
  followed by a ``TOK.DATEREL``. *3. júní* continues to be parsed as
  a single ``TOK.DATEREL``.
* Version 2.0.1: Order of abbreviation meanings within the ``token.val`` field
  made deterministic; fixed bug in measurement unit handling.
* Version 2.0.0: Added command line tool; added ``split_into_sentences()``
  and ``detokenize()`` functions; removed ``convert_telno`` option;
  splitting of coalesced tokens made more robust;
  added ``TOK.SSN``, ``TOK.MOLECULE``, ``TOK.USERNAME`` and
  ``TOK.SERIALNUMBER`` token kinds; abbreviations can now have multiple
  meanings.
* Version 1.4.0: Added the ``**options`` parameter to the
  ``tokenize()`` function, giving control over the handling of numbers,
  telephone numbers, and 'kludgy' ordinals.
* Version 1.3.0: Added ``TOK.DOMAIN`` and ``TOK.HASHTAG`` token types;
  improved handling of capitalized month name *Ágúst*, which is
  now recognized when following an ordinal number; improved recognition
  of telephone numbers; added abbreviations.
* Version 1.2.3: Added abbreviations; updated GitHub URLs.
* Version 1.2.2: Added support for composites with more than two parts, i.e.
  *„dómsmála-, ferðamála-, iðnaðar- og nýsköpunarráðherra“*; added support for
  the ``±`` sign; added several abbreviations.
* Version 1.2.1: Fixed bug where the name *Ágúst* was recognized
  as a month name; Unicode nonbreaking and invisible space characters
  are now removed before tokenization.
* Version 1.2.0: Added support for Unicode fraction characters;
  enhanced handling of degrees (°, °C, °F); fixed bug in cubic meter
  measurement unit; more abbreviations.
* Version 1.1.2: Fixed bug in liter (``l`` and ``ltr``) measurement units.
* Version 1.1.1: Added ``mark_paragraphs()`` function.
* Version 1.1.0: All abbreviations in ``Abbrev.conf`` are now
  returned with their meaning in a tuple in ``token.val``;
  handling of 'mbl.is' fixed.
* Version 1.0.9: Added abbreviation 'MAST'; harmonized copyright headers.
* Version 1.0.8: Bug fixes in ``DATEREL``, ``MEASUREMENT`` and ``NUMWLETTER``
  token handling; added 'kWst' and 'MWst' measurement units; blackened.
* Version 1.0.7: Added ``TOK.NUMWLETTER`` token type.
* Version 1.0.6: Automatic merging of Unicode ``COMBINING ACUTE ACCENT`` and
  ``COMBINING DIAERESIS`` code points with vowels.
* Version 1.0.5: Date/time and amount tokens coalesced to a further extent.
* Version 1.0.4: Added ``TOK.DATEABS``, ``TOK.TIMESTAMPABS``,
  ``TOK.MEASUREMENT``.
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "A tokenizer for Icelandic text",
    "version": "3.4.3",
    "project_urls": {
        "Homepage": "https://github.com/mideind/Tokenizer"
    },
    "split_keywords": [
        "nlp",
        "tokenizer",
        "icelandic"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "8c9b26a6371ffe49789e8b03837981f46348c5ef3cdf844fd915c842b10c634a",
                "md5": "4a371e014043d0154036edc4514efa8b",
                "sha256": "d9a4065760d63b6e17a914e4ec209608487aa64b0d9726a54b4ab064acb6eae1"
            },
            "downloads": -1,
            "filename": "tokenizer-3.4.3-py2.py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "4a371e014043d0154036edc4514efa8b",
            "packagetype": "bdist_wheel",
            "python_version": "py2.py3",
            "requires_python": null,
            "size": 112279,
            "upload_time": "2023-08-11T15:09:11",
            "upload_time_iso_8601": "2023-08-11T15:09:11.391173Z",
            "url": "https://files.pythonhosted.org/packages/8c/9b/26a6371ffe49789e8b03837981f46348c5ef3cdf844fd915c842b10c634a/tokenizer-3.4.3-py2.py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "2516cea62e77428f0722b8adf33ee5a8d4077e726c53d8894bc342e9a944e95e",
                "md5": "5fab4b6a96dddfaac8070ee8fab0f086",
                "sha256": "e88c662d0cba3d130f0696bf22316c176ff358beace0ee08542273fcfe3e95f8"
            },
            "downloads": -1,
            "filename": "tokenizer-3.4.3.tar.gz",
            "has_sig": false,
            "md5_digest": "5fab4b6a96dddfaac8070ee8fab0f086",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 153056,
            "upload_time": "2023-08-11T15:09:13",
            "upload_time_iso_8601": "2023-08-11T15:09:13.469432Z",
            "url": "https://files.pythonhosted.org/packages/25/16/cea62e77428f0722b8adf33ee5a8d4077e726c53d8894bc342e9a944e95e/tokenizer-3.4.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-08-11 15:09:13",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "mideind",
    "github_project": "Tokenizer",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "tokenizer"
}
        
Elapsed time: 0.10545s