anyks-sc

Name: anyks-sc
Version: 1.2.6
Home page: https://github.com/anyks/asc
Summary: Smart language model
Upload time: 2023-01-05 00:09:51
Author: Yuriy Lobarev
Requires Python: >=3.6
Keywords: nlp, lm, alm, language-model, spellchecker, spell-checker, anyks
            [![ANYKS Smart language model](https://raw.githubusercontent.com/anyks/asc/master/site/img/banner.jpg)](https://anyks.com)

# ANYKS Spell-checker (ASC)

## Project description

There are many typo and text error correction systems out there. Each has its pros and cons, each has the right to exist, and each will find its own user base. I would like to present my own version of a typo correction system with its own unique features.

## List of features

- Correction of mistakes in words with a **Levenshtein distance** of up to 4;
- Correction of different types of typos in words: insertion, deletion, substitution, and rearrangement of characters;
- **Ё**-fication of a word given the context (the letter 'ё' is commonly replaced by the letter 'е' in typed Russian text);
- Context-based word capitalization for proper names and titles;
- Context-based splitting for words that are missing the separating space character;
- Text analysis without correcting the original text;
- Searching the text for errors, typos, incorrect context.
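The distance bound in the first feature refers to the standard Levenshtein edit distance. As a point of reference only (this is the textbook dynamic-programming algorithm, not ASC's internal implementation), it can be sketched like this:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic DP edit distance: insertions, deletions, and substitutions."""
    # prev[j] holds the distance between a[:i-1] and b[:j] for the previous row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion from a
                curr[j - 1] + 1,           # insertion into a
                prev[j - 1] + (ca != cb),  # substitution (free if equal)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("начальнег", "начальник"))  # 2: two substituted letters
```

A correction system that searches within distance 4 therefore covers up to four such single-character edits per word.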

## Requirements

- [Zlib](http://www.zlib.net)
- [Bloom](http://www.partow.net/programming/bloomfilter/index.html)
- [OpenSSL](https://www.openssl.org)
- [hnswlib](https://github.com/nmslib/hnswlib)
- [HandyPack](https://github.com/bakwc/HandyPack)
- [GperfTools](https://github.com/gperftools/gperftools)
- [Python3](https://www.python.org/download/releases/3.0)
- [NLohmann::json](https://github.com/nlohmann/json)
- [BigInteger](http://mattmccutchen.net/bigint)
- [ALM](https://github.com/anyks/alm)

## Install PyBind11

```bash
$ python3 -m pip install pybind11
```

## Ready-to-use dictionaries

| Dictionary name                                                        | Size (GB)   | RAM (GB)                | N-gram order   | Language |
|------------------------------------------------------------------------|-------------|-------------------------|----------------|----------|
| [wittenbell-3-big.asc](https://cloud.mail.ru/public/2b9E/sz9b8YGJ1)    | 1.97        | 15.6                    | 3              | RU       |
| [wittenbell-3-middle.asc](https://cloud.mail.ru/public/5zo5/2F6uav8fz) | 1.24        | 9.7                     | 3              | RU       |
| [mkneserney-3-middle.asc](https://cloud.mail.ru/public/3SBP/35WJFKFyd) | 1.33        | 9.7                     | 3              | RU       |
| [wittenbell-3-single.asc](https://cloud.mail.ru/public/21jt/YEW493XQa) | 0.772       | 5.14                    | 3              | RU       |
| [wittenbell-5-single.asc](https://cloud.mail.ru/public/5Adc/5x5Ea1eA6) | 1.37        | 10.7                    | 5              | RU       |

## Testing

To test the system, we used data from the [2016 "spelling correction" competition organized by Dialog21](http://www.dialog-21.ru/evaluation/2016/spelling_correction).<br>
The trained binary dictionary that was used for testing: [wittenbell-3-middle.asc](https://cloud.mail.ru/public/5zo5/2F6uav8fz)

| Mode                              | Precision | Recall | F-measure |
|-----------------------------------|-----------|--------|----------|
| **Typo correction**               | 76.97     | 62.71  | 69.11    |
| **Error correction**              | 73.72     | 60.53  | 66.48    |

I think it is unnecessary to add any other data. Anyone can repeat the test if they wish (all files used for testing are attached below).
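For reference, the F-measure column is simply the harmonic mean of precision and recall, so the table rows can be checked directly:

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    return 2 * precision * recall / (precision + recall)

# Typo-correction row from the table above
print(round(f_measure(76.97, 62.71), 2))  # 69.11
```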

### Files used for testing

- [test.txt](https://cloud.mail.ru/public/3rzx/2HwqDU9j5) - Text used for testing;
- [correct.txt](https://cloud.mail.ru/public/3dAN/J4EzV4U3W) - Reference file with the corrected text;
- [evaluate.py](https://cloud.mail.ru/public/ZTRf/4sUS1Sc2p) - Python 3 script for evaluating correction results.

---

## Description of Methods

### Methods:
- **idw** - Word ID retrieval method
- **idt** - Token ID retrieval method
- **ids** - Sequence ID retrieval method

### Example:
```python
>>> import asc
>>>
>>> asc.idw("hello")
313191024
>>>
>>> asc.idw("<s>")
1
>>>
>>> asc.idw("</s>")
22
>>>
>>> asc.idw("<unk>")
3
>>>
>>> asc.idt("1424")
2
>>>
>>> asc.idt("hello")
0
>>>
>>> asc.idw("Living")
13268942501
>>>
>>> asc.idw("in")
2047
>>>
>>> asc.idw("the")
83201
>>>
>>> asc.idw("USA")
72549
>>>
>>> asc.ids([13268942501, 2047, 83201, 72549])
16314074810955466382
```
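The IDs above are deterministic: the same word always maps to the same number, and service tokens such as `<s>` get small fixed IDs. As a toy illustration of the idea only (the `base`-positional scheme below is hypothetical and is NOT asc's actual hashing algorithm), a word can be read as a number over the alphabet:

```python
# Hypothetical sketch of deterministic word IDs, not asc's real scheme:
# interpret the word as digits in base len(ALPHABET) + 1.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def word_id(word: str) -> int:
    base = len(ALPHABET) + 1
    wid = 0
    for ch in word.lower():
        wid = wid * base + ALPHABET.index(ch) + 1
    return wid

print(word_id("in"))  # deterministic: the same word always yields the same ID
```

A sequence ID (`ids`) can then be any deterministic hash over the list of word IDs, which is why re-running the example reproduces the same numbers.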

### Description
| Name      | Description                                                                                                                                           |
|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
|〈s〉       | Sentence beginning token                                                                                                                              |
|〈/s〉      | Sentence end token                                                                                                                                    |
|〈url〉     | URL-address token                                                                                                                                     |
|〈num〉     | Number (Arabic or Roman) token                                                                                                                        |
|〈unk〉     | Unknown word token                                                                                                                                    |
|〈time〉    | Time token (15:44:56)                                                                                                                                 |
|〈score〉   | Score count token (4:3 ¦ 01:04)                                                                                                                       |
|〈fract〉   | Fraction token (5/20 ¦ 192/864)                                                                                                                       |
|〈date〉    | Date token (18.07.2004 ¦ 07/18/2004)                                                                                                                  |
|〈abbr〉    | Abbreviation token (1-й ¦ 2-е ¦ 20-я ¦ p.s ¦ p.s.)                                                                                                    |
|〈dimen〉   | Dimensions token (200x300 ¦ 1920x1080)                                                                                                                |
|〈range〉   | Range of numbers token (1-2 ¦ 100-200 ¦ 300-400)                                                                                                      |
|〈aprox〉   | Approximate number token (~93 ¦ ~95.86 ¦ 10~20)                                                                                                       |
|〈anum〉    | Pseudo-number token (combination of numbers and other symbols) (T34 ¦ 895-M-86 ¦ 39km)                                                                |
|〈pcards〉  | Playing-card symbols (♠ ¦ ♣ ¦ ♥ ¦ ♦ )                                                                                                                 |
|〈punct〉   | Punctuation token (. ¦ , ¦ ? ¦ ! ¦ : ¦ ; ¦ … ¦ ¡ ¦ ¿)                                                                                                 |
|〈route〉   | Direction symbols (arrows) (← ¦ ↑ ¦ ↓ ¦ ↔ ¦ ↵ ¦ ⇐ ¦ ⇑ ¦ ⇒ ¦ ⇓ ¦ ⇔ ¦ ◄ ¦ ▲ ¦ ► ¦ ▼)                                                                    |
|〈greek〉   | Symbols of the Greek alphabet (Α ¦ Β ¦ Γ ¦ Δ ¦ Ε ¦ Ζ ¦ Η ¦ Θ ¦ Ι ¦ Κ ¦ Λ ¦ Μ ¦ Ν ¦ Ξ ¦ Ο ¦ Π ¦ Ρ ¦ Σ ¦ Τ ¦ Υ ¦ Φ ¦ Χ ¦ Ψ ¦ Ω)                         |
|〈isolat〉  | Isolation/quotation token (( ¦ ) ¦ [ ¦ ] ¦ { ¦ } ¦ " ¦ « ¦ » ¦ „ ¦ “ ¦ ` ¦ ⌈ ¦ ⌉ ¦ ⌊ ¦ ⌋ ¦ ‹ ¦ › ¦ ‚ ¦ ’ ¦ ′ ¦ ‛ ¦ ″ ¦ ‘ ¦ ” ¦ ‟ ¦ ' ¦〈 ¦ 〉)         |
|〈specl〉   | Special character token (_ ¦ @ ¦ # ¦ № ¦ © ¦ ® ¦ & ¦ § ¦ æ ¦ ø ¦ Þ ¦ – ¦ ‾ ¦ ‑ ¦ — ¦ ¯ ¦ ¶ ¦ ˆ ¦ ˜ ¦ † ¦ ‡ ¦ • ¦ ‰ ¦ ⁄ ¦ ℑ ¦ ℘ ¦ ℜ ¦ ℵ ¦ ◊ ¦ \ )     |
|〈currency〉| Symbols of world currencies ($ ¦ € ¦ ₽ ¦ ¢ ¦ £ ¦ ₤ ¦ ¤ ¦ ¥ ¦ ℳ ¦ ₣ ¦ ₴ ¦ ₸ ¦ ₹ ¦ ₩ ¦ ₦ ¦ ₭ ¦ ₪ ¦ ৳ ¦ ƒ ¦ ₨ ¦ ฿ ¦ ₫ ¦ ៛ ¦ ₮ ¦ ₱ ¦ ﷼ ¦ ₡ ¦ ₲ ¦ ؋ ¦ ₵ ¦ ₺ ¦ ₼ ¦ ₾ ¦ ₠ ¦ ₧ ¦ ₯ ¦ ₢ ¦ ₳ ¦ ₥ ¦ ₰ ¦ ₿ ¦ ұ) |
|〈math〉    | Mathematical operation token (+ ¦ - ¦ = ¦ / ¦ * ¦ ^ ¦ × ¦ ÷ ¦ − ¦ ∕ ¦ ∖ ¦ ∗ ¦ √ ¦ ∝ ¦ ∞ ¦ ∠ ¦ ± ¦ ¹ ¦ ² ¦ ³ ¦ ½ ¦ ⅓ ¦ ¼ ¦ ¾ ¦ % ¦ ~ ¦ · ¦ ⋅ ¦ ° ¦ º ¦ ¬ ¦ ƒ ¦ ∀ ¦ ∂ ¦ ∃ ¦ ∅ ¦ ∇ ¦ ∈ ¦ ∉ ¦ ∋ ¦ ∏ ¦ ∑ ¦ ∧ ¦ ∨ ¦ ∩ ¦ ∪ ¦ ∫ ¦ ∴ ¦ ∼ ¦ ≅ ¦ ≈ ¦ ≠ ¦ ≡ ¦ ≤ ¦ ≥ ¦ ª ¦ ⊂ ¦ ⊃ ¦ ⊄ ¦ ⊆ ¦ ⊇ ¦ ⊕ ¦ ⊗ ¦ ⊥ ¦ ¨) |

---

### Methods:
- **setZone** - Method for setting user domain zones

### Example:
```python
>>> import asc
>>>
>>> asc.setZone("com")
>>> asc.setZone("ru")
>>> asc.setZone("org")
>>> asc.setZone("net")
```

---

### Methods:
- **clear** - Method for clearing all data
- **setAlphabet** - Method for setting the alphabet
- **getAlphabet** - Method for getting the current alphabet

### Example:
```python
>>> import asc
>>>
>>> asc.getAlphabet()
'abcdefghijklmnopqrstuvwxyz'
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.getAlphabet()
'abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя'
>>>
>>> asc.clear()
>>>
>>> asc.getAlphabet()
'abcdefghijklmnopqrstuvwxyz'
```

---

### Methods:
- **setUnknown** - Method for setting the unknown-word token
- **getUnknown** - Method for retrieving the unknown-word token

### Example:
```python
>>> import asc
>>>
>>> asc.setUnknown("word")
>>>
>>> asc.getUnknown()
'word'
```

---

### Methods:
- **infoIndex** - Method for printing information about the dictionary
- **token** - Method for determining the token type of a word
- **addText** - Method for adding text for estimation
- **collectCorpus** - Method for assembling text data for ASC training [corpus = filename or dir, smoothing = wittenBell, modified = False, prepares = False, mod = 0.0, status = None]
- **pruneVocab** - Dictionary pruning method
- **buildArpa** - Method for building the ARPA file
- **writeWords** - Method for writing words to a file
- **writeVocab** - Method for writing dictionary data to a file
- **writeNgrams** - Method for writing data to N-gram files
- **writeMap** - Method for writing the sequence map to a file
- **writeSuffix** - Method for writing data to a suffix file for digital abbreviations
- **writeAbbrs** - Method for writing data to an abbreviation file
- **getSuffixes** - Method for extracting the list of digital-abbreviation suffixes
- **writeArpa** - Method for writing data to an ARPA file
- **setThreads** - Method for setting the number of worker threads (0 - all available threads)
- **setStemmingMethod** - Method for setting an external stemming function
- **loadIndex** - Method for loading a binary index
- **spell** - Method for performing spell-checking
- **analyze** - Method for analyzing text
- **addAlt** - Method for adding a word/letter with an alternative letter
- **setAlphabet** - Method for setting the alphabet
- **setPilots** - Method for setting pilot words
- **setSubstitutes** - Method for setting letter substitutes to correct words typed in mixed alphabets
- **addAbbr** - Method for adding an abbreviation
- **setAbbrs** - Method for setting abbreviations
- **getAbbrs** - Method for extracting the list of abbreviations
- **addGoodword** - Method for adding a good (whitelisted) word
- **addBadword** - Method for adding a bad (blacklisted) word
- **addUWord** - Method for adding a word that always starts with a capital letter
- **setUWords** - Method for adding a list of identifiers for words that always start with a capital letter
- **readArpa** - Method for reading an ARPA language-model file
- **readVocab** - Method for reading the dictionary
- **setEmbedding** - Method for setting the embedding
- **buildIndex** - Method for building the spell-checker index
- **setAdCw** - Method for setting dictionary characteristics (cw - number of words in the dataset, ad - number of documents in the dataset)
- **setCode** - Method for setting the language code
- **addLemma** - Method for adding a lemma to the dictionary
- **setNSWLibCount** - Method for setting the maximum number of candidate options for analysis

### Example:
```python
>>> import asc
>>> 
>>> asc.infoIndex("./wittenbell-3-single.asc")

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

* Code: RU

* Version: 1.0.0

* Dictionary name: Russian - single

* Locale: en_US.UTF-8
* Alphabet: абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz

* Build date: 09/08/2020 15:39:31

* Encrypted: NO

* ALM type: ALMv1

* Allow apostrophe: NO

* Count words: 106912195
* Count documents: 263998

* Only good words: NO
* Mix words in dicts: YES
* Confidence arpa: YES

* Count upper words: 841915
* Count pilots words: 15
* Count bad words: 108790
* Count good words: 124
* Count substitutes: 14
* Count abbreviations: 16532

* Alternatives: е => ё
* Count alternatives words: 58138

* Size embedding: 28

* Length n-gram: 3
* Count n-grams: 6710202

* Author: Yuriy Lobarev

* Contacts: site: https://anyks.com, e-mail: forman@anyks.com

* Copyright ©: Yuriy Lobarev

* License type: GPLv3
* License text:
The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.

URL: https://www.gnu.org/licenses/gpl-3.0.ru.html

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

```

### Example:
```python
>>> import asc
>>> import spacy
>>> import pymorphy2
>>> 
>>> asc.setThreads(0)
>>> asc.setOption(asc.options_t.ascSplit)
>>> asc.setOption(asc.options_t.ascAlter)
>>> asc.setOption(asc.options_t.ascESplit)
>>> asc.setOption(asc.options_t.ascRSplit)
>>> asc.setOption(asc.options_t.ascUppers)
>>> asc.setOption(asc.options_t.ascHyphen)
>>> asc.setOption(asc.options_t.ascWordRep)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setOption(asc.options_t.stemming)
>>> 
>>> morphRu = pymorphy2.MorphAnalyzer()
>>> morphEn = spacy.load('en', disable=['parser', 'ner'])
>>> 
>>> def status(text, status):
...     print(text, status)
... 
>>> 
>>> def eng(word):
...     global morphEn
...     words = morphEn(word)
...     word = ''.join([token.lemma_ for token in words]).strip()
...     if word[0] != '-' and word[len(word) - 1] != '-':
...         return word
...     else:
...         return ""
... 
>>> 
>>> def rus(word):
...     global morphRu
...     if morphRu != None:
...         word = morphRu.parse(word)[0].normal_form
...         return word
...     else:
...         return ""
... 
>>> 
>>> def run(word, lang):
...     if lang == "ru":
...         return rus(word.lower())
...     elif lang == "en":
...         return eng(word.lower())
... 
>>> 
>>> asc.setStemmingMethod(run)
>>> 
>>> asc.loadIndex("./wittenbell-3-single.asc", "", status)
Loading dictionary 1
Loading dictionary 2
Loading dictionary 3
Loading dictionary 4
Loading dictionary 5
Loading dictionary 6
Loading dictionary 7
Loading dictionary 8
...
Loading Bloom filter 100
Loading stemming 0
Loading stemming 1
Loading stemming 2
Loading stemming 3
...
Loading language model 6
Loading language model 12
Loading language model 18
Loading language model 25
Loading language model 31
Loading language model 37
...
Loading alternative words 1
Loading alternative words 2
Loading alternative words 3
Loading alternative words 4
Loading alternative words 5
Loading alternative words 6
Loading alternative words 7
...
Loading substitutes letters 7
Loading substitutes letters 14
Loading substitutes letters 21
Loading substitutes letters 28
Loading substitutes letters 35
Loading substitutes letters 42
...
>>> 
>>> res = asc.spell("начальнег зажог павзрослому", True)
>>> res
('начальник зажёг по-взрослому', [('начальнег', 'начальник'), ('зажог', 'зажёг'), ('павзрослому', 'по-взрослому')])
>>> 
>>> res = asc.analyze("слзы теут на мрозе")
>>> res
[('теут', ['текут']), ('мрозе', ['мозг', 'мороз', 'морозе', 'моё']), ('слзы', ['слезы', 'слёзы'])]
```

### Example:
```python
>>> import asc
>>> 
>>> asc.setThreads(0)
>>> asc.setOption(asc.options_t.ascSplit)
>>> asc.setOption(asc.options_t.ascAlter)
>>> asc.setOption(asc.options_t.ascESplit)
>>> asc.setOption(asc.options_t.ascRSplit)
>>> asc.setOption(asc.options_t.ascUppers)
>>> asc.setOption(asc.options_t.ascHyphen)
>>> asc.setOption(asc.options_t.ascWordRep)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.confidence)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("ежик", "ёжик")
>>> asc.addAlt("Легкий", "Лёгкий")
...
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.idw("Сбербанк")
13236490857
>>> asc.idw("Совкомбанк")
22287680895
>>> 
>>> asc.token("Сбербанк")
'<word>'
>>> asc.token("совкомбанк")
'<word>'
>>> 
>>> asc.setAbbrs({13236490857, 22287680895})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
>>> 
>>> asc.token("Сбербанк")
'<abbr>'
>>> asc.token("совкомбанк")
'<abbr>'
>>> asc.token("сша")
'<abbr>'
>>> asc.token("СБЕР")
'<abbr>'
...
>>> asc.getAbbrs()
{13236490857, 189243, 22287680895, 26938511}
>>> 
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
...
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
...
>>> asc.addUWord("Москва")
>>> asc.addUWord("Санкт-Петербург")
...
>>> def statusArpa(status):
...     print("Read arpa", status)
... 
>>> def statusVocab(status):
...     print("Read vocab", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
Read arpa 7
Read arpa 8
...
>>> asc.readVocab("./words.vocab", statusVocab)
Read vocab 0
Read vocab 1
Read vocab 2
Read vocab 3
Read vocab 4
Read vocab 5
Read vocab 6
...
>>> asc.setEmbedding({
...     "а": 0, "б": 1, "в": 2, "г": 3, "д": 4, "е": 5,
...     "ё": 5, "ж": 6, "з": 7, "и": 8, "й": 8, "к": 9,
...     "л": 10, "м": 11, "н": 12, "о": 0, "п": 13, "р": 14,
...     "с": 15, "т": 16, "у": 17, "ф": 18, "х": 19, "ц": 20,
...     "ч": 21, "ш": 21, "щ": 21, "ъ": 22, "ы": 23, "ь": 22,
...     "э": 5, "ю": 24, "я": 25, "<": 26, ">": 26, "~": 26,
...     "-": 26, "+": 26, "=": 26, "*": 26, "/": 26, ":": 26,
...     "%": 26, "|": 26, "^": 26, "&": 26, "#": 26, "'": 26,
...     "\\": 26, "0": 27, "1": 27, "2": 27, "3": 27, "4": 27,
...     "5": 27, "6": 27, "7": 27, "8": 27, "9": 27, "a": 0,
...     "b": 2, "c": 15, "d": 4, "e": 5, "f": 18, "g": 3,
...     "h": 12, "i": 8, "j": 6, "k": 9, "l": 10, "m": 11,
...     "n": 12, "o": 0, "p": 14, "q": 13, "r": 14, "s": 15,
...     "t": 16, "u": 24, "v": 21, "w": 22, "x": 19, "y": 17, "z": 7
... }, 28)
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
...
>>> res = asc.spell("начальнег зажог павзрослому", True)
>>> res
('начальник зажег по-взрослому', [('начальнег', 'начальник'), ('зажог', 'зажег'), ('павзрослому', 'по-взрослому')])
>>> 
>>> res = asc.analyze("слзы теут на мрозе")
>>> res
[('теут', ['текут']), ('мрозе', ['мозг', 'мороз', 'морозе', 'моё']), ('слзы', ['слезы', 'слёзы'])]
```
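The `setEmbedding` mapping in the example above deliberately puts similar characters into shared buckets: 'е' and 'ё' both map to 5, 'и' and 'й' to 8, and Latin look-alikes share buckets with their Cyrillic counterparts. That way variant spellings collapse to the same code. The bucketing idea can be sketched independently of `asc` (using a fragment of the same mapping):

```python
# Fragment of the embedding from the example above: similar letters share a bucket
EMB = {"с": 15, "л": 10, "е": 5, "ё": 5, "з": 7, "ы": 23, "и": 8, "й": 8}

def encode(word: str) -> list:
    """Map each letter to its embedding bucket."""
    return [EMB[ch] for ch in word]

# 'слезы' and 'слёзы' collapse to the same code, so the index treats them alike
print(encode("слезы") == encode("слёзы"))  # True
```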

### Example:
```python
>>> import asc
>>> 
>>> asc.setThreads(0)
>>> asc.setOption(asc.options_t.ascSplit)
>>> asc.setOption(asc.options_t.ascAlter)
>>> asc.setOption(asc.options_t.ascESplit)
>>> asc.setOption(asc.options_t.ascRSplit)
>>> asc.setOption(asc.options_t.ascUppers)
>>> asc.setOption(asc.options_t.ascHyphen)
>>> asc.setOption(asc.options_t.ascWordRep)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.confidence)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("ежик", "ёжик")
>>> asc.addAlt("зажег", "зажёг")
>>> asc.addAlt("Легкий", "Лёгкий")
...
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
...
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
...
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
...
>>> asc.idw("Москва")
50387419219
>>> asc.idw("Санкт-Петербург")
68256898625
>>> 
>>> asc.setUWords({50387419219: 1, 68256898625: 1})
>>> 
...
>>> def statusArpa(status):
...     print("Read arpa", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
Read arpa 7
Read arpa 8
...
>>> asc.setAdCw(38120, 13)
>>> 
>>> asc.setEmbedding({
...     "а": 0, "б": 1, "в": 2, "г": 3, "д": 4, "е": 5,
...     "ё": 5, "ж": 6, "з": 7, "и": 8, "й": 8, "к": 9,
...     "л": 10, "м": 11, "н": 12, "о": 0, "п": 13, "р": 14,
...     "с": 15, "т": 16, "у": 17, "ф": 18, "х": 19, "ц": 20,
...     "ч": 21, "ш": 21, "щ": 21, "ъ": 22, "ы": 23, "ь": 22,
...     "э": 5, "ю": 24, "я": 25, "<": 26, ">": 26, "~": 26,
...     "-": 26, "+": 26, "=": 26, "*": 26, "/": 26, ":": 26,
...     "%": 26, "|": 26, "^": 26, "&": 26, "#": 26, "'": 26,
...     "\\": 26, "0": 27, "1": 27, "2": 27, "3": 27, "4": 27,
...     "5": 27, "6": 27, "7": 27, "8": 27, "9": 27, "a": 0,
...     "b": 2, "c": 15, "d": 4, "e": 5, "f": 18, "g": 3,
...     "h": 12, "i": 8, "j": 6, "k": 9, "l": 10, "m": 11,
...     "n": 12, "o": 0, "p": 14, "q": 13, "r": 14, "s": 15,
...     "t": 16, "u": 24, "v": 21, "w": 22, "x": 19, "y": 17, "z": 7
... }, 28)
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
...
>>> res = asc.spell("начальнег зажог павзрослому", True)
>>> res
('начальник зажёг по-взрослому', [('начальнег', 'начальник'), ('зажог', 'зажёг'), ('павзрослому', 'по-взрослому')])
>>> 
>>> res = asc.analyze("слзы теут на мрозе")
>>> res
[('теут', ['текут']), ('мрозе', ['мозг', 'мороз', 'морозе', 'моё']), ('слзы', ['слезы', 'слёзы'])]
```

### Example:
```python
>>> import asc
>>> import spacy
>>> import pymorphy2
>>> 
>>> asc.setThreads(0)
>>> asc.setOption(asc.options_t.ascSplit)
>>> asc.setOption(asc.options_t.ascAlter)
>>> asc.setOption(asc.options_t.ascESplit)
>>> asc.setOption(asc.options_t.ascRSplit)
>>> asc.setOption(asc.options_t.ascUppers)
>>> asc.setOption(asc.options_t.ascHyphen)
>>> asc.setOption(asc.options_t.ascWordRep)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setOption(asc.options_t.stemming)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("ежик", "ёжик")
>>> asc.addAlt("зажег", "зажёг")
>>> asc.addAlt("Легкий", "Лёгкий")
...
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
...
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
...
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
...
>>> asc.addUWord("Москва")
>>> asc.addUWord("Санкт-Петербург")
...
>>> morphRu = pymorphy2.MorphAnalyzer()
>>> morphEn = spacy.load('en', disable=['parser', 'ner'])
>>> 
>>> def statusArpa(status):
...     print("Read arpa", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
... 
>>> def statusStemming(status):
...     print("Build stemming", status)
... 
>>> def eng(word):
...     global morphEn
...     words = morphEn(word)
...     word = ''.join([token.lemma_ for token in words]).strip()
...     if word[0] != '-' and word[len(word) - 1] != '-':
...         return word
...     else:
...         return ""
... 
>>> def rus(word):
...     global morphRu
...     if morphRu != None:
...         word = morphRu.parse(word)[0].normal_form
...         return word
...     else:
...         return ""
... 
>>> def run(word, lang):
...     if lang == "ru":
...         return rus(word.lower())
...     elif lang == "en":
...         return eng(word.lower())
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
Read arpa 7
Read arpa 8
...
>>> asc.setAdCw(38120, 13)
>>> 
>>> asc.setEmbedding({
...     "а": 0, "б": 1, "в": 2, "г": 3, "д": 4, "е": 5,
...     "ё": 5, "ж": 6, "з": 7, "и": 8, "й": 8, "к": 9,
...     "л": 10, "м": 11, "н": 12, "о": 0, "п": 13, "р": 14,
...     "с": 15, "т": 16, "у": 17, "ф": 18, "х": 19, "ц": 20,
...     "ч": 21, "ш": 21, "щ": 21, "ъ": 22, "ы": 23, "ь": 22,
...     "э": 5, "ю": 24, "я": 25, "<": 26, ">": 26, "~": 26,
...     "-": 26, "+": 26, "=": 26, "*": 26, "/": 26, ":": 26,
...     "%": 26, "|": 26, "^": 26, "&": 26, "#": 26, "'": 26,
...     "\\": 26, "0": 27, "1": 27, "2": 27, "3": 27, "4": 27,
...     "5": 27, "6": 27, "7": 27, "8": 27, "9": 27, "a": 0,
...     "b": 2, "c": 15, "d": 4, "e": 5, "f": 18, "g": 3,
...     "h": 12, "i": 8, "j": 6, "k": 9, "l": 10, "m": 11,
...     "n": 12, "o": 0, "p": 14, "q": 13, "r": 14, "s": 15,
...     "t": 16, "u": 24, "v": 21, "w": 22, "x": 19, "y": 17, "z": 7
... }, 28)
>>> 
>>> asc.setCode("ru")
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
...
>>> asc.setStemmingMethod(run)
>>>
>>> asc.buildStemming(statusStemming)
Build stemming 0
Build stemming 1
Build stemming 2
Build stemming 3
Build stemming 4
Build stemming 5
...
>>> asc.addLemma("говорил")
>>> asc.addLemma("ходить")
...
>>> asc.setNSWLibCount(50000)
>>> 
>>> res = asc.spell("начальнег зажог павзрослому", True)
>>> res
('начальник зажёг по-взрослому', [('начальнег', 'начальник'), ('зажог', 'зажёг'), ('павзрослому', 'по-взрослому')])
>>> 
>>> res = asc.analyze("слзы теут на мрозе")
>>> res
[('теут', ['текут']), ('мрозе', ['мозг', 'мороз', 'морозе', 'моё']), ('слзы', ['слезы', 'слёзы'])]
```

---

### Methods:
- **setOption** - Method for setting a library option
- **unsetOption** - Method for disabling a library option

### Example:
```python
>>> import asc
>>>
>>> asc.unsetOption(asc.options_t.debug)
>>> asc.unsetOption(asc.options_t.mixDicts)
>>> asc.unsetOption(asc.options_t.onlyGood)
>>> asc.unsetOption(asc.options_t.confidence)
...
```

#### Description
| Options     | Description                                                                              |
|-------------|------------------------------------------------------------------------------------------|
| debug       | Flag debug mode                                                                          |
| bloom       | Flag allowed to use Bloom filter to check words                                          |
| uppers      | Flag that allows you to correct the case of letters                                      |
| stemming    | Flag for stemming activation                                                             |
| onlyGood    | Flag allowing to consider words from the white list only                                 |
| mixDicts    | Flag allowing the use of words consisting of mixed dictionaries                          |
| allowUnk    | Flag allowing to unknown word                                                            |
| resetUnk    | Flag to reset the frequency of an unknown word                                           |
| allGrams    | Flag allowing accounting of all collected n-grams                                        |
| onlyTypos   | Flag to only correct typos                                                               |
| lowerCase   | Flag allowing to case-insensitive                                                        |
| confidence  | Flag arpa file loading without pre-processing the words                                  |
| tokenWords  | Flag that takes into account when assembling N-grams, only those tokens that match words |
| interpolate | Flag allowing to use interpolation in estimating                                         |
| ascSplit    | Flag to allow splitting of merged words                                                  |
| ascAlter    | Flag that allows you to replace alternative letters in words                             |
| ascESplit   | Flag to allow splitting of misspelled concatenated words                                 |
| ascRSplit   | Flag that allows you to combine words separated by a space                               |
| ascUppers   | Flag that allows you to correct the case of letters                                      |
| ascHyphen   | Flag to allow splitting of concatenated words with hyphens                               |
| ascSkipUpp  | Flag to skip uppercase words                                                             |
| ascSkipLat  | Flag to skip words in the Latin alphabet                                                 |
| ascSkipHyp  | Flag to skip hyphenated words                                                            |
| ascWordRep  | Flag that allows you to remove duplicate words                                           |

---

### Methods:
- **erratum** - Method for searching for typos in text
- **token** - Method for determining the token type of a word
- **split** - Method for splitting concatenated words
- **splitByHyphens** - Method for splitting hyphenated words
- **check** - Method for checking whether a word exists in the dictionary

### Example:
```python
>>> import asc
>>> 
>>> asc.setThreads(0)
>>> asc.setOption(asc.options_t.ascSplit)
>>> asc.setOption(asc.options_t.ascAlter)
>>> asc.setOption(asc.options_t.ascESplit)
>>> asc.setOption(asc.options_t.ascRSplit)
>>> asc.setOption(asc.options_t.ascUppers)
>>> asc.setOption(asc.options_t.ascHyphen)
>>> asc.setOption(asc.options_t.ascWordRep)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.confidence)
>>> 
>>> def status(text, status):
...     print(text, status)
... 
>>> 
>>> asc.loadIndex("./wittenbell-3-single.asc", "", status)
Loading dictionary 1
Loading dictionary 2
Loading dictionary 3
Loading dictionary 4
Loading dictionary 5
Loading dictionary 6
Loading dictionary 7
Loading dictionary 8
...
Loading Bloom filter 100
Loading stemming 100
Loading language model 6
Loading language model 12
Loading language model 18
Loading language model 25
Loading language model 31
Loading language model 37
...
Loading alternative words 1
Loading alternative words 2
Loading alternative words 3
Loading alternative words 4
Loading alternative words 5
Loading alternative words 6
Loading alternative words 7
...
Loading substitutes letters 7
Loading substitutes letters 14
Loading substitutes letters 21
Loading substitutes letters 28
Loading substitutes letters 35
Loading substitutes letters 42
...
>>> 
>>> asc.erratum("начальнег зажёг павзрослому")
['начальнег', 'павзрослому']
>>> 
>>> asc.token("word")
'<word>'
>>> asc.token("12")
'<num>'
>>> asc.token("127.0.0.1")
'<url>'
>>> asc.token("14-33")
'<range>'
>>> asc.token("14:44:22")
'<time>'
>>> asc.token("08/02/2020")
'<date>'
>>> 
>>> asc.split("приветкакдела")
'привет как Дела'
>>> asc.split("былмастеромпрятатьсянонемогвоспользоватьсясвоимиталантамипотому")
'был мастером прятаться но не мог воспользоваться своими талантами потому'
>>> asc.split("Ябинатакойсоставбысходилеслиб")
'я б и на такой состав бы сходил если б'
>>> asc.split("летчерезXVIретроспективнопросматриватьэтобудет")
'лет через XVI ретроспективно просматривать это будет'
>>> 
>>> asc.splitByHyphens("привет-как-дела")
'привет как дела'
>>> asc.splitByHyphens("как-то-так")
'как то так'
>>> asc.splitByHyphens("как-то")
'как-то'
>>> 
>>> asc.check("hello")
True
>>> asc.check("Шварценеггер")
True
>>> asc.check("прывет")
False
```

---

### Methods:
- **setSize** - Method for setting the N-gram size
- **setAlmV2** - Method for setting the ALMv2 language model type
- **unsetAlmV2** - Method for unsetting the ALMv2 language model type
- **setLocale** - Method for setting the locale (default: en_US.UTF-8)
- **setCode** - Method for setting the language code
- **setLictype** - Method for setting the dictionary license type
- **setName** - Method for setting the dictionary name
- **setAuthor** - Method for setting the dictionary author
- **setCopyright** - Method for setting the dictionary copyright
- **setLictext** - Method for setting the dictionary license text
- **setContacts** - Method for setting the contact details of the dictionary author
- **pruneArpa** - Method for pruning the language model
- **addWord** - Method for adding a word to the dictionary
- **generateEmbedding** - Method for generating the embedding
- **setSizeEmbedding** - Method for setting the embedding size

#### Description
| Smoothing       |
|-----------------|
| wittenBell      |
| addSmooth       |
| goodTuring      |
| constDiscount   |
| naturalDiscount |
| kneserNey       |
| modKneserNey    |
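
Among these, `wittenBell` is the method used throughout the examples below. As a point of reference, interpolated Witten-Bell smoothing mixes the bigram estimate with the unigram distribution, weighted by the number of distinct word types seen after each history. The sketch below is a plain-Python illustration of the bigram case on a toy corpus, not the library's implementation:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the text passed to collectCorpus().
corpus = "the cat sat on the mat the cat ate".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
followers = defaultdict(set)  # distinct word types seen after each history
for h, w in bigrams:
    followers[h].add(w)

total = sum(unigrams.values())

def p_unigram(w):
    return unigrams[w] / total

def p_witten_bell(w, h):
    """Interpolated Witten-Bell bigram estimate:
    P(w|h) = (c(h,w) + T(h) * P(w)) / (c(h) + T(h)),
    where T(h) is the number of distinct types following h."""
    t = len(followers[h])
    return (bigrams[(h, w)] + t * p_unigram(w)) / (unigrams[h] + t)

# The estimate is a proper distribution: it sums to 1 over the vocabulary.
print(sum(p_witten_bell(w, "the") for w in unigrams))  # ≈ 1.0
```

Unseen bigrams still get probability mass from the unigram term, which is what makes the smoothed model usable for scoring typo candidates.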

### Example:
```python
>>> import asc
>>> 
>>> asc.setSize(3)
>>> asc.setAlmV2()
>>> asc.setThreads(0)
>>> asc.setLocale("en_US.UTF-8")
>>> 
>>> asc.setOption(asc.options_t.allowUnk)
>>> asc.setOption(asc.options_t.resetUnk)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.tokenWords)
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setOption(asc.options_t.interpolate)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("ежик", "ёжик")
>>> asc.addAlt("зажег", "зажёг")
>>> asc.addAlt("Легкий", "Лёгкий")
>>> 
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
>>> 
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
>>> 
>>> asc.addUWord("Москва")
>>> asc.addUWord("Санкт-Петербург")
>>> 
>>> def statusMap(status):
...     print("Write map", status)
... 
>>> def statusArpa1(status):
...     print("Build arpa", status)
... 
>>> def statusArpa2(status):
...     print("Write arpa", status)
... 
>>> def statusWords(status):
...     print("Write words", status)
... 
>>> def statusVocab(status):
...     print("Write vocab", status)
... 
>>> def statusAbbrs(status):
...     print("Write abbrs", status)
... 
>>> def statusPrune(status):
...     print("Prune vocab", status)
... 
>>> def statusNgram(status):
...     print("Write ngram", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
... 
>>> def status(text, status):
...     print(text, status)
... 
>>> asc.addText("The future is now", 0)
>>> 
>>> asc.collectCorpus("./corpus/text.txt", asc.smoothing_t.wittenBell, 0.0, False, False, status)
Read text corpora 0
Read text corpora 1
Read text corpora 2
Read text corpora 3
Read text corpora 4
Read text corpora 5
Read text corpora 6
...
>>> asc.pruneVocab(-15.0, 0, 0, statusPrune)
Prune vocab 0
Prune vocab 1
Prune vocab 2
Prune vocab 3
Prune vocab 4
Prune vocab 5
Prune vocab 6
...
# Alternatively, the ARPA model can be pruned instead of the vocab
>>> asc.pruneArpa(0.015, 3, statusPrune)
Prune arpa 0
Prune arpa 1
Prune arpa 2
Prune arpa 3
Prune arpa 4
Prune arpa 5
Prune arpa 6
...
>>> asc.buildArpa(statusArpa1)
Build arpa 0
Build arpa 1
Build arpa 2
Build arpa 3
Build arpa 4
Build arpa 5
Build arpa 6
...
>>> asc.writeMap("./words.map", statusMap)
Write map 0
Write map 1
Write map 2
Write map 3
Write map 4
Write map 5
Write map 6
...
>>> asc.writeArpa("./words.arpa", statusArpa2)
Write arpa 0
Write arpa 1
Write arpa 2
Write arpa 3
Write arpa 4
Write arpa 5
Write arpa 6
...
>>> asc.writeWords("./words.txt", statusWords)
Write words 0
Write words 1
Write words 2
Write words 3
Write words 4
Write words 5
Write words 6
...
>>> asc.writeVocab("./words.vocab", statusVocab)
Write vocab 0
Write vocab 1
Write vocab 2
Write vocab 3
Write vocab 4
Write vocab 5
Write vocab 6
...
>>> asc.writeAbbrs("./words1.abbr", statusAbbrs)
Write abbrs 50
Write abbrs 100
>>> 
>>> asc.writeSuffix("./words2.abbr", statusAbbrs)
Write abbrs 10
Write abbrs 20
Write abbrs 30
Write abbrs 40
Write abbrs 50
Write abbrs 60
...
>>> asc.writeNgrams("./words.ngram", statusNgram)
Write ngram 0
Write ngram 1
Write ngram 2
Write ngram 3
Write ngram 4
Write ngram 5
Write ngram 6
...
>>> asc.setCode("RU")
>>> asc.setLictype("MIT")
>>> asc.setName("Russian")
>>> asc.setAuthor("You name")
>>> asc.setCopyright("You company LLC")
>>> asc.setLictext("... License text ...")
>>> asc.setContacts("site: https://example.com, e-mail: info@example.com")
>>> 
>>> asc.setEmbedding({
...     "а": 0, "б": 1, "в": 2, "г": 3, "д": 4, "е": 5,
...     "ё": 5, "ж": 6, "з": 7, "и": 8, "й": 8, "к": 9,
...     "л": 10, "м": 11, "н": 12, "о": 0, "п": 13, "р": 14,
...     "с": 15, "т": 16, "у": 17, "ф": 18, "х": 19, "ц": 20,
...     "ч": 21, "ш": 21, "щ": 21, "ъ": 22, "ы": 23, "ь": 22,
...     "э": 5, "ю": 24, "я": 25, "<": 26, ">": 26, "~": 26,
...     "-": 26, "+": 26, "=": 26, "*": 26, "/": 26, ":": 26,
...     "%": 26, "|": 26, "^": 26, "&": 26, "#": 26, "'": 26,
...     "\\": 26, "0": 27, "1": 27, "2": 27, "3": 27, "4": 27,
...     "5": 27, "6": 27, "7": 27, "8": 27, "9": 27, "a": 0,
...     "b": 2, "c": 15, "d": 4, "e": 5, "f": 18, "g": 3,
...     "h": 12, "i": 8, "j": 6, "k": 9, "l": 10, "m": 11,
...     "n": 12, "o": 0, "p": 14, "q": 13, "r": 14, "s": 15,
...     "t": 16, "u": 24, "v": 21, "w": 22, "x": 19, "y": 17, "z": 7
... }, 28)
>>> 
>>> asc.saveIndex("./3-wittenbell.asc", "", 128, status)
Read words 1
Read words 2
Read words 3
Read words 4
Read words 5
Read words 6
...
Train dictionary 0
Train dictionary 1
Train dictionary 2
Train dictionary 3
Train dictionary 4
Train dictionary 5
Train dictionary 6
...
Dump dictionary 0
Dump dictionary 1
Dump dictionary 2
Dump dictionary 3
Dump dictionary 4
Dump dictionary 5
Dump dictionary 6
...
Dump alternative letters 100
Dump alternative letters 100
Dump alternative words 200
Dump alternative words 100
Dump language model 0
Dump language model 100
Dump substitutes letters 9
Dump substitutes letters 18
Dump substitutes letters 27
Dump substitutes letters 36
Dump substitutes letters 45
Dump substitutes letters 54
Dump substitutes letters 63
Dump substitutes letters 72
Dump substitutes letters 81
Dump substitutes letters 90
Dump substitutes letters 100
Dump substitutes letters 100
>>>
>>> asc.infoIndex("./3-wittenbell.asc")

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

* Code: RU

* Version: 1.0.0

* Dictionary name: Russian

* Locale: en_US.UTF-8
* Alphabet: абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz

* Build date: 09/14/2020 01:39:50

* Encrypted: NO

* ALM type: ALMv2

* Allow apostrophe: NO

* Count words: 38120
* Count documents: 13

* Only good words: NO
* Mix words in dicts: YES
* Confidence arpa: YES

* Count upper words: 2
* Count pilots words: 15
* Count bad words: 3
* Count good words: 2
* Count substitutes: 11
* Count abbreviations: 12

* Alternatives: е => ё
* Count alternatives words: 1

* Size embedding: 28

* Length n-gram: 1

* Author: You name

* Contacts: site: https://example.com, e-mail: info@example.com

* Copyright ©: You company LLC

* License type: MIT
* License text:
... License text ...

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

```

### Example:
```python
>>> import asc
>>> 
>>> asc.setSize(3)
>>> asc.setThreads(0)
>>> asc.setLocale("en_US.UTF-8")
>>> 
>>> asc.setOption(asc.options_t.allowUnk)
>>> asc.setOption(asc.options_t.resetUnk)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.tokenWords)
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setOption(asc.options_t.interpolate)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("Легкий", "Лёгкий")
>>> 
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
>>> 
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
>>> 
>>> asc.addUWord("Москва")
>>> asc.addUWord("Санкт-Петербург")
>>> 
>>> def statusArpa(status):
...     print("Read arpa", status)
... 
>>> def statusVocab(status):
...     print("Read vocab", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
...
>>> def status(text, status):
...     print(text, status)
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
...
>>> asc.readVocab("./words.vocab", statusVocab)
Read vocab 0
Read vocab 1
Read vocab 2
Read vocab 3
Read vocab 4
Read vocab 5
Read vocab 6
...
>>> asc.setCode("RU")
>>> asc.setLictype("MIT")
>>> asc.setName("Russian")
>>> asc.setAuthor("You name")
>>> asc.setCopyright("You company LLC")
>>> asc.setLictext("... License text ...")
>>> asc.setContacts("site: https://example.com, e-mail: info@example.com")
>>> 
>>> asc.setEmbedding({
...     "а": 0, "б": 1, "в": 2, "г": 3, "д": 4, "е": 5,
...     "ё": 5, "ж": 6, "з": 7, "и": 8, "й": 8, "к": 9,
...     "л": 10, "м": 11, "н": 12, "о": 0, "п": 13, "р": 14,
...     "с": 15, "т": 16, "у": 17, "ф": 18, "х": 19, "ц": 20,
...     "ч": 21, "ш": 21, "щ": 21, "ъ": 22, "ы": 23, "ь": 22,
...     "э": 5, "ю": 24, "я": 25, "<": 26, ">": 26, "~": 26,
...     "-": 26, "+": 26, "=": 26, "*": 26, "/": 26, ":": 26,
...     "%": 26, "|": 26, "^": 26, "&": 26, "#": 26, "'": 26,
...     "\\": 26, "0": 27, "1": 27, "2": 27, "3": 27, "4": 27,
...     "5": 27, "6": 27, "7": 27, "8": 27, "9": 27, "a": 0,
...     "b": 2, "c": 15, "d": 4, "e": 5, "f": 18, "g": 3,
...     "h": 12, "i": 8, "j": 6, "k": 9, "l": 10, "m": 11,
...     "n": 12, "o": 0, "p": 14, "q": 13, "r": 14, "s": 15,
...     "t": 16, "u": 24, "v": 21, "w": 22, "x": 19, "y": 17, "z": 7
... }, 28)
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
Build index 5
Build index 6
...
>>> asc.saveIndex("./3-wittenbell.asc", "", 128, status)
Dump dictionary 0
Dump dictionary 1
Dump dictionary 2
Dump dictionary 3
Dump dictionary 4
Dump dictionary 5
Dump dictionary 6
...
Dump alternative letters 100
Dump alternative letters 100
Dump alternative words 200
Dump alternative words 100
Dump language model 0
Dump language model 100
Dump substitutes letters 9
Dump substitutes letters 18
Dump substitutes letters 27
Dump substitutes letters 36
Dump substitutes letters 45
Dump substitutes letters 54
Dump substitutes letters 63
Dump substitutes letters 72
Dump substitutes letters 81
Dump substitutes letters 90
Dump substitutes letters 100
Dump substitutes letters 100
>>>
>>> asc.infoIndex("./3-wittenbell.asc")

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

* Code: RU

* Version: 1.0.0

* Dictionary name: Russian

* Locale: en_US.UTF-8
* Alphabet: абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz

* Build date: 09/14/2020 01:58:52

* Encrypted: NO

* ALM type: ALMv1

* Allow apostrophe: NO

* Count words: 38120
* Count documents: 13

* Only good words: NO
* Mix words in dicts: YES
* Confidence arpa: YES

* Count upper words: 2
* Count pilots words: 15
* Count bad words: 3
* Count good words: 2
* Count substitutes: 11
* Count abbreviations: 2

* Alternatives: е => ё
* Count alternatives words: 1

* Size embedding: 28

* Length n-gram: 3
* Count n-grams: 353

* Author: You name

* Contacts: site: https://example.com, e-mail: info@example.com

* Copyright ©: You company LLC

* License type: MIT
* License text:
... License text ...

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

```

### Example:
```python
>>> import asc
>>> 
>>> asc.setSize(3)
>>> asc.setAlmV2()
>>> asc.setThreads(0)
>>> asc.setLocale("en_US.UTF-8")
>>> 
>>> asc.setOption(asc.options_t.allowUnk)
>>> asc.setOption(asc.options_t.resetUnk)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.tokenWords)
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setOption(asc.options_t.interpolate)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("Легкий", "Лёгкий")
>>> 
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
>>> 
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
>>> 
>>> asc.addUWord("Москва")
>>> asc.addUWord("Санкт-Петербург")
>>> 
>>> def statusArpa(status):
...     print("Read arpa", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
... 
>>> def statusPrune(status):
...     print("Prune arpa", status)
... 
>>> def status(text, status):
...     print(text, status)
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
...
>>> asc.setAdCw(38120, 13)
>>> 
>>> asc.addWord("министерство")
>>> asc.addWord("возмездие", 0, 1)
>>> asc.addWord("возражение", asc.idw("возражение"), 2)
...
>>> 
>>> asc.setCode("RU")
>>> asc.setLictype("MIT")
>>> asc.setName("Russian")
>>> asc.setAuthor("You name")
>>> asc.setCopyright("You company LLC")
>>> asc.setLictext("... License text ...")
>>> asc.setContacts("site: https://example.com, e-mail: info@example.com")
>>> 
>>> asc.setEmbedding({
...     "а": 0, "б": 1, "в": 2, "г": 3, "д": 4, "е": 5,
...     "ё": 5, "ж": 6, "з": 7, "и": 8, "й": 8, "к": 9,
...     "л": 10, "м": 11, "н": 12, "о": 0, "п": 13, "р": 14,
...     "с": 15, "т": 16, "у": 17, "ф": 18, "х": 19, "ц": 20,
...     "ч": 21, "ш": 21, "щ": 21, "ъ": 22, "ы": 23, "ь": 22,
...     "э": 5, "ю": 24, "я": 25, "<": 26, ">": 26, "~": 26,
...     "-": 26, "+": 26, "=": 26, "*": 26, "/": 26, ":": 26,
...     "%": 26, "|": 26, "^": 26, "&": 26, "#": 26, "'": 26,
...     "\\": 26, "0": 27, "1": 27, "2": 27, "3": 27, "4": 27,
...     "5": 27, "6": 27, "7": 27, "8": 27, "9": 27, "a": 0,
...     "b": 2, "c": 15, "d": 4, "e": 5, "f": 18, "g": 3,
...     "h": 12, "i": 8, "j": 6, "k": 9, "l": 10, "m": 11,
...     "n": 12, "o": 0, "p": 14, "q": 13, "r": 14, "s": 15,
...     "t": 16, "u": 24, "v": 21, "w": 22, "x": 19, "y": 17, "z": 7
... }, 28)
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
Build index 5
Build index 6
...
>>> asc.saveIndex("./3-wittenbell.asc", "password", 128, status)
Dump dictionary 0
Dump dictionary 1
Dump dictionary 2
Dump dictionary 3
Dump dictionary 4
Dump dictionary 5
Dump dictionary 6
...
Dump alternative letters 100
Dump alternative letters 100
Dump alternative words 200
Dump alternative words 100
Dump language model 0
Dump language model 100
Dump substitutes letters 9
Dump substitutes letters 18
Dump substitutes letters 27
Dump substitutes letters 36
Dump substitutes letters 45
Dump substitutes letters 54
Dump substitutes letters 63
Dump substitutes letters 72
Dump substitutes letters 81
Dump substitutes letters 90
Dump substitutes letters 100
Dump substitutes letters 100
>>>
>>> asc.infoIndex("./3-wittenbell.asc")

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

* Code: RU

* Version: 1.0.0

* Dictionary name: Russian

* Build date: 09/14/2020 02:09:38

* Encrypted: YES

* ALM type: ALMv2

* Allow apostrophe: NO

* Count words: 38120
* Count documents: 13

* Only good words: NO
* Mix words in dicts: YES
* Confidence arpa: YES

* Count upper words: 2
* Count pilots words: 15
* Count bad words: 3
* Count good words: 2
* Count substitutes: 11
* Count abbreviations: 2

* Alternatives: е => ё
* Count alternatives words: 1

* Size embedding: 28

* Length n-gram: 3
* Count n-grams: 353

* Author: You name

* Contacts: site: https://example.com, e-mail: info@example.com

* Copyright ©: You company LLC

* License type: MIT
* License text:
... License text ...

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

```

### Example:
```python
>>> import asc
>>> 
>>> asc.setSize(3)
>>> asc.setAlmV2()
>>> asc.setThreads(0)
>>> asc.setLocale("en_US.UTF-8")
>>> 
>>> asc.setOption(asc.options_t.allowUnk)
>>> asc.setOption(asc.options_t.resetUnk)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.tokenWords)
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setOption(asc.options_t.interpolate)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("Легкий", "Лёгкий")
>>> 
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
>>> 
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
>>> 
>>> asc.addUWord("Москва")
>>> asc.addUWord("Санкт-Петербург")
>>> 
>>> def statusArpa(status):
...     print("Read arpa", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
... 
>>> def statusPrune(status):
...     print("Prune arpa", status)
... 
>>> def status(text, status):
...     print(text, status)
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
...
>>> asc.setAdCw(38120, 13)
>>> 
>>> asc.addWord("министерство")
>>> asc.addWord("возмездие", 0, 1)
>>> asc.addWord("возражение", asc.idw("возражение"), 2)
...
>>> 
>>> asc.setCode("RU")
>>> asc.setLictype("MIT")
>>> asc.setName("Russian")
>>> asc.setAuthor("You name")
>>> asc.setCopyright("You company LLC")
>>> asc.setLictext("... License text ...")
>>> asc.setContacts("site: https://example.com, e-mail: info@example.com")
>>> 
>>> asc.setSizeEmbedding(32)
>>> asc.generateEmbedding()
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
Build index 5
Build index 6
...
>>> asc.saveIndex("./3-wittenbell.asc", "password", 128, status)
Dump dictionary 0
Dump dictionary 1
Dump dictionary 2
Dump dictionary 3
Dump dictionary 4
Dump dictionary 5
Dump dictionary 6
...
Dump alternative letters 100
Dump alternative letters 100
Dump alternative words 200
Dump alternative words 100
Dump language model 0
Dump language model 100
Dump substitutes letters 9
Dump substitutes letters 18
Dump substitutes letters 27
Dump substitutes letters 36
Dump substitutes letters 45
Dump substitutes letters 54
Dump substitutes letters 63
Dump substitutes letters 72
Dump substitutes letters 81
Dump substitutes letters 90
Dump substitutes letters 100
Dump substitutes letters 100
>>>
>>> asc.infoIndex("./3-wittenbell.asc")

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

* Code: RU

* Version: 1.0.0

* Dictionary name: Russian

* Build date: 09/14/2020 02:09:38

* Encrypted: YES

* ALM type: ALMv2

* Allow apostrophe: NO

* Count words: 38120
* Count documents: 13

* Only good words: NO
* Mix words in dicts: YES
* Confidence arpa: YES

* Count upper words: 2
* Count pilots words: 15
* Count bad words: 3
* Count good words: 2
* Count substitutes: 11
* Count abbreviations: 2

* Alternatives: е => ё
* Count alternatives words: 1

* Size embedding: 28

* Length n-gram: 3
* Count n-grams: 353

* Author: You name

* Contacts: site: https://example.com, e-mail: info@example.com

* Copyright ©: You company LLC

* License type: MIT
* License text:
... License text ...

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

```

---

### Methods:
- **size** - Method for obtaining the N-gram size

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.size()
3
```

---

### Methods:
- **damerauLevenshtein** - Method for determining the Damerau-Levenshtein distance between phrases
- **distanceLevenshtein** - Method for determining the Levenshtein distance between phrases
- **mulctLevenshtein** - Method for determining the Levenshtein mulct (weighted edit distance)
- **tanimoto** - Method for determining the Jaccard index (also known as the Tanimoto coefficient)
- **needlemanWunsch** - Method for word stretching (Needleman-Wunsch alignment)

### Example:
```python
>>> import asc
>>> asc.damerauLevenshtein("привет", "приветик")
2
>>> 
>>> asc.damerauLevenshtein("приевтик", "приветик")
1
>>> 
>>> asc.distanceLevenshtein("приевтик", "приветик")
2
>>> 
>>> asc.tanimoto("привет", "приветик")
0.7142857142857143
>>> 
>>> asc.tanimoto("привеитк", "приветик")
0.4
>>> 
>>> asc.needlemanWunsch("привеитк", "приветик")
4
>>> 
>>> asc.needlemanWunsch("привет", "приветик")
2
>>> 
>>> asc.damerauLevenshtein("acre", "car")
2
>>> asc.distanceLevenshtein("acre", "car")
3
>>> 
>>> asc.damerauLevenshtein("anteater", "theatre")
4
>>> asc.distanceLevenshtein("anteater", "theatre")
5
>>> 
>>> asc.damerauLevenshtein("banana", "nanny")
3
>>> asc.distanceLevenshtein("banana", "nanny")
3
>>> 
>>> asc.damerauLevenshtein("cat", "crate")
2
>>> asc.distanceLevenshtein("cat", "crate")
2
>>>
>>> asc.mulctLevenshtein("привет", "приветик")
4
>>>
>>> asc.mulctLevenshtein("приевтик", "приветик")
1
>>>
>>> asc.mulctLevenshtein("acre", "car")
3
>>>
>>> asc.mulctLevenshtein("anteater", "theatre")
5
>>>
>>> asc.mulctLevenshtein("banana", "nanny")
4
>>>
>>> asc.mulctLevenshtein("cat", "crate")
4
```
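
For orientation, the first three metrics can be reproduced in plain Python. The sketch below is an illustration rather than the library's code: it uses the optimal-string-alignment variant of Damerau-Levenshtein, and models `tanimoto` as the Jaccard index over character bigrams, which reproduces the values shown above:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: insertions, deletions, substitutions (rolling row)."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

def damerau_levenshtein(a: str, b: str) -> int:
    """Optimal string alignment: Levenshtein plus adjacent transpositions."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = a[i - 1] != b[j - 1]
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
            # Count a swap of two adjacent characters as a single edit.
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[-1][-1]

def tanimoto(a: str, b: str) -> float:
    """Jaccard index over character bigrams."""
    x = {a[i:i + 2] for i in range(len(a) - 1)}
    y = {b[i:i + 2] for i in range(len(b) - 1)}
    return len(x & y) / len(x | y)

print(levenshtein("acre", "car"))           # → 3
print(damerau_levenshtein("acre", "car"))   # → 2 (transposition + deletion)
print(tanimoto("привет", "приветик"))       # → 0.7142857142857143
```

The transposition case is what makes `damerauLevenshtein` the better fit for typos: swapping two adjacent keys costs one edit instead of two.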

---

### Methods:
- **textToJson** - Method to convert text to JSON
- **isAllowApostrophe** - Method for checking whether the apostrophe is allowed as part of a word
- **switchAllowApostrophe** - Method for allowing or denying the apostrophe as part of a word

### Example:
```python
>>> import asc
>>>
>>> def callbackFn(text):
...     print(text)
... 
>>> asc.isAllowApostrophe()
False
>>> asc.switchAllowApostrophe()
>>>
>>> asc.isAllowApostrophe()
True
>>> asc.textToJson("«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie", callbackFn)
[["«","On","nous","dit","qu'aujourd'hui","c'est","le","cas",",","encore","faudra-t-il","l'évaluer","»","l'astronomie"]]
```

---

### Methods:
- **jsonToText** - Method to convert JSON to text

### Example:
```python
>>> import asc
>>>
>>> def callbackFn(text):
...     print(text)
... 
>>> asc.jsonToText('[["«","On","nous","dit","qu\'aujourd\'hui","c\'est","le","cas",",","encore","faudra-t-il","l\'évaluer","»","l\'astronomie"]]', callbackFn)
«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie
```
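
The JSON handled by these methods is simply a list of sentences, each a list of tokens, so detokenization amounts to joining tokens with spaces while attaching punctuation. A naive plain-Python sketch of the idea (illustration only; the library's own rules also cover apostrophes, hyphens, and other cases):

```python
import json

OPENING = set('(«[')        # no space after these tokens
CLOSING = set(',.!?;:)»]')  # no space before these tokens

def json_to_text(doc: str) -> str:
    """Rejoin tokenized sentences given in the textToJson format."""
    lines = []
    for sentence in json.loads(doc):
        out = ''
        for token in sentence:
            if not out or token in CLOSING or out[-1] in OPENING:
                out += token
            else:
                out += ' ' + token
        lines.append(out)
    return '\n'.join(lines)

print(json_to_text('[["«","Hello",",","world","!","»"]]'))  # → «Hello, world!»
```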

---

### Methods:
- **restore** - Method for restoring text from context

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.uppers)
>>>
>>> asc.restore(["«","On","nous","dit","qu\'aujourd\'hui","c\'est","le","cas",",","encore","faudra-t-il","l\'évaluer","»","l\'astronomie"])
"«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie"
```

---

### Methods:
- **allowStress** - Method for allowing stress marks in words
- **disallowStress** - Method for disallowing stress marks in words

### Example:
```python
>>> import asc
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> def callbackFn(text):
...     print(text)
... 
>>> asc.textToJson('«Бе́лая стрела́» — согласно распространённой в 1990-е годы в России городской легенде, якобы специально организованная и подготовленная законспирированная правительственная спецслужба, сотрудники которой — бывшие и действовавшие милиционеры и спецназовцы, имеющие право на физическую ликвидацию особо опасных уголовных авторитетов и лидеров орудовавших в России ОПГ, относительно которых не представляется возможным привлечения их к уголовной ответственности законными методами[1][2][3]. Несмотря на отсутствие официальных доказательств существования организации и многочисленные опровержения со стороны силовых структур и служб безопасности[4], в российском обществе легенду считают основанной на подлинных фактах громких убийств криминальных авторитетов, совершённых в 1990-е годы, и не исключают существование реальной спецслужбы[5].', callbackFn)
[["«","Белая","стрела","»","—","согласно","распространённой","в","1990-е","годы","в","России","городской","легенде",",","якобы","специально","организованная","и","подготовленная","законспирированная","правительственная","спецслужба",",","сотрудники","которой","—","бывшие","и","действовавшие","милиционеры","и","спецназовцы",",","имеющие","право","на","физическую","ликвидацию","особо","опасных","уголовных","авторитетов","и","лидеров","орудовавших","в","России","ОПГ",",","относительно","которых","не","представляется","возможным","привлечения","их","к","уголовной","ответственности","законными","методами","[","1","]","[","2","]","[","3","]","."],["Несмотря","на","отсутствие","официальных","доказательств","существования","организации","и","многочисленные","опровержения","со","стороны","силовых","структур","и","служб","безопасности","[","4","]",",","в","российском","обществе","легенду","считают","основанной","на","подлинных","фактах","громких","убийств","криминальных","авторитетов",",","совершённых","в","1990-е","годы",",","и","не","исключают","существование","реальной","спецслужбы","[","5","]","."]]
>>>
>>> asc.jsonToText('[["«","Белая","стрела","»","—","согласно","распространённой","в","1990-е","годы","в","России","городской","легенде",",","якобы","специально","организованная","и","подготовленная","законспирированная","правительственная","спецслужба",",","сотрудники","которой","—","бывшие","и","действовавшие","милиционеры","и","спецназовцы",",","имеющие","право","на","физическую","ликвидацию","особо","опасных","уголовных","авторитетов","и","лидеров","орудовавших","в","России","ОПГ",",","относительно","которых","не","представляется","возможным","привлечения","их","к","уголовной","ответственности","законными","методами","[","1","]","[","2","]","[","3","]","."],["Несмотря","на","отсутствие","официальных","доказательств","существования","организации","и","многочисленные","опровержения","со","стороны","силовых","структур","и","служб","безопасности","[","4","]",",","в","российском","обществе","легенду","считают","основанной","на","подлинных","фактах","громких","убийств","криминальных","авторитетов",",","совершённых","в","1990-е","годы",",","и","не","исключают","существование","реальной","спецслужбы","[","5","]","."]]', callbackFn)
«Белая стрела» — согласно распространённой в 1990-е годы в России городской легенде, якобы специально организованная и подготовленная законспирированная правительственная спецслужба, сотрудники которой — бывшие и действовавшие милиционеры и спецназовцы, имеющие право на физическую ликвидацию особо опасных уголовных авторитетов и лидеров орудовавших в России ОПГ, относительно которых не представляется возможным привлечения их к уголовной ответственности законными методами [1] [2] [3].
Несмотря на отсутствие официальных доказательств существования организации и многочисленные опровержения со стороны силовых структур и служб безопасности [4], в российском обществе легенду считают основанной на подлинных фактах громких убийств криминальных авторитетов, совершённых в 1990-е годы, и не исключают существование реальной спецслужбы [5].
>>>
>>> asc.allowStress()
>>> asc.textToJson('«Бе́лая стрела́» — согласно распространённой в 1990-е годы в России городской легенде, якобы специально организованная и подготовленная законспирированная правительственная спецслужба, сотрудники которой — бывшие и действовавшие милиционеры и спецназовцы, имеющие право на физическую ликвидацию особо опасных уголовных авторитетов и лидеров орудовавших в России ОПГ, относительно которых не представляется возможным привлечения их к уголовной ответственности законными методами[1][2][3]. Несмотря на отсутствие официальных доказательств существования организации и многочисленные опровержения со стороны силовых структур и служб безопасности[4], в российском обществе легенду считают основанной на подлинных фактах громких убийств криминальных авторитетов, совершённых в 1990-е годы, и не исключают существование реальной спецслужбы[5].', callbackFn)
[["«","Бе́лая","стрела́","»","—","согласно","распространённой","в","1990-е","годы","в","России","городской","легенде",",","якобы","специально","организованная","и","подготовленная","законспирированная","правительственная","спецслужба",",","сотрудники","которой","—","бывшие","и","действовавшие","милиционеры","и","спецназовцы",",","имеющие","право","на","физическую","ликвидацию","особо","опасных","уголовных","авторитетов","и","лидеров","орудовавших","в","России","ОПГ",",","относительно","которых","не","представляется","возможным","привлечения","их","к","уголовной","ответственности","законными","методами","[","1","]","[","2","]","[","3","]","."],["Несмотря","на","отсутствие","официальных","доказательств","существования","организации","и","многочисленные","опровержения","со","стороны","силовых","структур","и","служб","безопасности","[","4","]",",","в","российском","обществе","легенду","считают","основанной","на","подлинных","фактах","громких","убийств","криминальных","авторитетов",",","совершённых","в","1990-е","годы",",","и","не","исключают","существование","реальной","спецслужбы","[","5","]","."]]
>>>
>>> asc.jsonToText('[["«","Бе́лая","стрела́","»","—","согласно","распространённой","в","1990-е","годы","в","России","городской","легенде",",","якобы","специально","организованная","и","подготовленная","законспирированная","правительственная","спецслужба",",","сотрудники","которой","—","бывшие","и","действовавшие","милиционеры","и","спецназовцы",",","имеющие","право","на","физическую","ликвидацию","особо","опасных","уголовных","авторитетов","и","лидеров","орудовавших","в","России","ОПГ",",","относительно","которых","не","представляется","возможным","привлечения","их","к","уголовной","ответственности","законными","методами","[","1","]","[","2","]","[","3","]","."],["Несмотря","на","отсутствие","официальных","доказательств","существования","организации","и","многочисленные","опровержения","со","стороны","силовых","структур","и","служб","безопасности","[","4","]",",","в","российском","обществе","легенду","считают","основанной","на","подлинных","фактах","громких","убийств","криминальных","авторитетов",",","совершённых","в","1990-е","годы",",","и","не","исключают","существование","реальной","спецслужбы","[","5","]","."]]', callbackFn)
«Бе́лая стрела́» — согласно распространённой в 1990-е годы в России городской легенде, якобы специально организованная и подготовленная законспирированная правительственная спецслужба, сотрудники которой — бывшие и действовавшие милиционеры и спецназовцы, имеющие право на физическую ликвидацию особо опасных уголовных авторитетов и лидеров орудовавших в России ОПГ, относительно которых не представляется возможным привлечения их к уголовной ответственности законными методами [1] [2] [3].
Несмотря на отсутствие официальных доказательств существования организации и многочисленные опровержения со стороны силовых структур и служб безопасности [4], в российском обществе легенду считают основанной на подлинных фактах громких убийств криминальных авторитетов, совершённых в 1990-е годы, и не исключают существование реальной спецслужбы [5].
>>>
>>> asc.disallowStress()
>>> asc.textToJson('«Бе́лая стрела́» — согласно распространённой в 1990-е годы в России городской легенде, якобы специально организованная и подготовленная законспирированная правительственная спецслужба, сотрудники которой — бывшие и действовавшие милиционеры и спецназовцы, имеющие право на физическую ликвидацию особо опасных уголовных авторитетов и лидеров орудовавших в России ОПГ, относительно которых не представляется возможным привлечения их к уголовной ответственности законными методами[1][2][3]. Несмотря на отсутствие официальных доказательств существования организации и многочисленные опровержения со стороны силовых структур и служб безопасности[4], в российском обществе легенду считают основанной на подлинных фактах громких убийств криминальных авторитетов, совершённых в 1990-е годы, и не исключают существование реальной спецслужбы[5].', callbackFn)
[["«","Белая","стрела","»","—","согласно","распространённой","в","1990-е","годы","в","России","городской","легенде",",","якобы","специально","организованная","и","подготовленная","законспирированная","правительственная","спецслужба",",","сотрудники","которой","—","бывшие","и","действовавшие","милиционеры","и","спецназовцы",",","имеющие","право","на","физическую","ликвидацию","особо","опасных","уголовных","авторитетов","и","лидеров","орудовавших","в","России","ОПГ",",","относительно","которых","не","представляется","возможным","привлечения","их","к","уголовной","ответственности","законными","методами","[","1","]","[","2","]","[","3","]","."],["Несмотря","на","отсутствие","официальных","доказательств","существования","организации","и","многочисленные","опровержения","со","стороны","силовых","структур","и","служб","безопасности","[","4","]",",","в","российском","обществе","легенду","считают","основанной","на","подлинных","фактах","громких","убийств","криминальных","авторитетов",",","совершённых","в","1990-е","годы",",","и","не","исключают","существование","реальной","спецслужбы","[","5","]","."]]
>>>
>>> asc.jsonToText('[["«","Белая","стрела","»","—","согласно","распространённой","в","1990-е","годы","в","России","городской","легенде",",","якобы","специально","организованная","и","подготовленная","законспирированная","правительственная","спецслужба",",","сотрудники","которой","—","бывшие","и","действовавшие","милиционеры","и","спецназовцы",",","имеющие","право","на","физическую","ликвидацию","особо","опасных","уголовных","авторитетов","и","лидеров","орудовавших","в","России","ОПГ",",","относительно","которых","не","представляется","возможным","привлечения","их","к","уголовной","ответственности","законными","методами","[","1","]","[","2","]","[","3","]","."],["Несмотря","на","отсутствие","официальных","доказательств","существования","организации","и","многочисленные","опровержения","со","стороны","силовых","структур","и","служб","безопасности","[","4","]",",","в","российском","обществе","легенду","считают","основанной","на","подлинных","фактах","громких","убийств","криминальных","авторитетов",",","совершённых","в","1990-е","годы",",","и","не","исключают","существование","реальной","спецслужбы","[","5","]","."]]', callbackFn)
«Белая стрела» — согласно распространённой в 1990-е годы в России городской легенде, якобы специально организованная и подготовленная законспирированная правительственная спецслужба, сотрудники которой — бывшие и действовавшие милиционеры и спецназовцы, имеющие право на физическую ликвидацию особо опасных уголовных авторитетов и лидеров орудовавших в России ОПГ, относительно которых не представляется возможным привлечения их к уголовной ответственности законными методами [1] [2] [3].
Несмотря на отсутствие официальных доказательств существования организации и многочисленные опровержения со стороны силовых структур и служб безопасности [4], в российском обществе легенду считают основанной на подлинных фактах громких убийств криминальных авторитетов, совершённых в 1990-е годы, и не исключают существование реальной спецслужбы [5].
```

---

### Methods:
- **addBadword** - Method for adding a word to the blacklist
- **setBadwords** - Method for setting the blacklist of words
- **getBadwords** - Method for getting the blacklist of words

### Example:
```python
>>> import asc
>>>
>>> asc.setBadwords(["hello", "world", "test"])
>>>
>>> asc.getBadwords()
{1554834897, 2156498622, 28307030}
>>>
>>> asc.addBadword("test2")
>>>
>>> asc.getBadwords()
{5170183734, 1554834897, 2156498622, 28307030}
```

### Example:
```python
>>> import asc
>>>
>>> asc.setBadwords({24227504, 1219922507, 1794085167})
>>>
>>> asc.getBadwords()
{24227504, 1219922507, 1794085167}
>>>
>>> asc.clear(asc.clear_t.badwords)
>>>
>>> asc.getBadwords()
{}
```
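As the examples show, `getBadwords` returns numeric word identifiers rather than the words themselves, and `setBadwords` accepts either words or identifiers. A minimal sketch of this ID-based storage, using CRC32 as a hypothetical stand-in for the library's `asc.idw` hash (the real hash function differs):

```python
import zlib

def word_id(word: str) -> int:
    # Hypothetical stand-in for asc.idw: map a word to a numeric ID.
    return zlib.crc32(word.encode("utf-8"))

# Storing the blacklist as a set of IDs keeps lookups cheap.
blacklist = {word_id(w) for w in ("hello", "world", "test")}
blacklist.add(word_id("test2"))

print(word_id("test") in blacklist)  # True
```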

---

### Methods:
- **addGoodword** - Method for adding a word to the whitelist
- **setGoodwords** - Method for setting the whitelist of words
- **getGoodwords** - Method for getting the whitelist of words

### Example:
```python
>>> import asc
>>>
>>> asc.setGoodwords(["hello", "world", "test"])
>>>
>>> asc.getGoodwords()
{1554834897, 2156498622, 28307030}
>>>
>>> asc.addGoodword("test2")
>>>
>>> asc.getGoodwords()
{5170183734, 1554834897, 2156498622, 28307030}
>>>
>>> asc.clear(asc.clear_t.goodwords)
>>>
>>> asc.getGoodwords()
{}
```

### Example:
```python
>>> import asc
>>>
>>> asc.setGoodwords({24227504, 1219922507, 1794085167})
>>>
>>> asc.getGoodwords()
{24227504, 1219922507, 1794085167}
```

---

### Methods:
- **setUserToken** - Method for adding a user token
- **getUserTokens** - Method for retrieving the user token list
- **getUserTokenId** - Method for obtaining a user token identifier
- **getUserTokenWord** - Method for obtaining a user token by its identifier

### Example:
```python
>>> import asc
>>>
>>> asc.setUserToken("usa")
>>>
>>> asc.setUserToken("russia")
>>>
>>> asc.getUserTokenId("usa")
5759809081
>>>
>>> asc.getUserTokenId("russia")
9910674734
>>>
>>> asc.getUserTokens()
['usa', 'russia']
>>>
>>> asc.getUserTokenWord(5759809081)
'usa'
>>>
>>> asc.getUserTokenWord(9910674734)
'russia'
>>>
>>> asc.clear(asc.clear_t.utokens)
>>>
>>> asc.getUserTokens()
[]
```

---

### Methods:
- **findNgram** - Method for searching N-grams in text
- **word** - Method for extracting a word by its identifier

### Example:
```python
>>> import asc
>>> 
>>> def callbackFn(text):
...     print(text)
... 
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>> asc.readArpa('./lm.arpa')
>>> 
>>> asc.idw("привет")
2487910648
>>> asc.word(2487910648)
'привет'
>>> 
>>> asc.findNgram("Особое место занимает чудотворная икона Лобзание Христа Иудою", callbackFn)
<s> Особое
Особое место
место занимает
занимает чудотворная
чудотворная икона
икона Лобзание
Лобзание Христа
Христа Иудою
Иудою </s>


>>>
```

---

### Methods:
- **setUserTokenMethod** - Method for setting a custom token processing function

### Example:
```python
>>> import asc
>>>
>>> def fn(token, word):
...     if token and (token == "<usa>"):
...         if word and (word.lower() == "usa"):
...             return True
...     elif token and (token == "<russia>"):
...         if word and (word.lower() == "russia"):
...             return True
...     return False
... 
>>> asc.setUserToken("usa")
>>>
>>> asc.setUserToken("russia")
>>>
>>> asc.setUserTokenMethod("usa", fn)
>>>
>>> asc.setUserTokenMethod("russia", fn)
>>>
>>> asc.idw("usa")
5759809081
>>>
>>> asc.idw("russia")
9910674734
>>>
>>> asc.getUserTokenWord(5759809081)
'usa'
>>>
>>> asc.getUserTokenWord(9910674734)
'russia'
```

---

### Methods:
- **setWordPreprocessingMethod** - Method for setting the word preprocessing function

### Example:
```python
>>> import asc
>>>
>>> def run(word, context):
...     if word == "возле": word = "около"
...     return word
... 
>>> asc.setOption(asc.options_t.debug)
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.setWordPreprocessingMethod(run)
>>>
>>> a = asc.perplexity("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???с лязгом выкатился и остановился возле мальчика....")
info: <s> Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор <punct> <punct> <punct> </s>

info: p( неожиданно | <s> ) 	= [2gram] 0.00038931 [ -3.40969900 ] / 0.99999991
info: p( из | неожиданно ...) 	= [2gram] 0.10110741 [ -0.99521700 ] / 0.99999979
info: p( подворотни | из ...) 	= [2gram] 0.00711798 [ -2.14764300 ] / 1.00000027
info: p( в | подворотни ...) 	= [2gram] 0.51077661 [ -0.29176900 ] / 1.00000021
info: p( олега | в ...) 	= [2gram] 0.00082936 [ -3.08125500 ] / 0.99999974
info: p( ударил | олега ...) 	= [2gram] 0.25002820 [ -0.60201100 ] / 0.99999978
info: p( яркий | ударил ...) 	= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034
info: p( прожектор | яркий ...) 	= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034
info: p( патрульный | прожектор ...) 	= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034
info: p( трактор | патрульный ...) 	= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034
info: p( <punct> | трактор ...) 	= [OOV] 0.00000000 [ -inf ] / 0.99999973
info: p( <punct> | <punct> ...) 	= [OOV] 0.00000000 [ -inf ] / 0.99999993
info: p( <punct> | <punct> ...) 	= [OOV] 0.00000000 [ -inf ] / 0.99999993
info: p( </s> | <punct> ...) 	= [1gram] 0.05693430 [ -1.24462600 ] / 0.99999993

info: 1 sentences, 13 words, 0 OOVs
info: 3 zeroprobs, logprob= -12.97624000 ppl= 8.45034200 ppl1= 9.95800426

info: <s> С лязгом выкатился и остановился около мальчика <punct> <punct> <punct> <punct> </s>

info: p( с | <s> ) 	= [2gram] 0.00642448 [ -2.19216200 ] / 0.99999991
info: p( лязгом | с ...) 	= [2gram] 0.00195917 [ -2.70792700 ] / 0.99999999
info: p( выкатился | лязгом ...) 	= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034
info: p( и | выкатился ...) 	= [2gram] 0.51169951 [ -0.29098500 ] / 1.00000024
info: p( остановился | и ...) 	= [2gram] 0.00143382 [ -2.84350600 ] / 0.99999975
info: p( около | остановился ...) 	= [1gram] 0.00011358 [ -3.94468000 ] / 1.00000003
info: p( мальчика | около ...) 	= [1gram] 0.00003932 [ -4.40541100 ] / 1.00000016
info: p( <punct> | мальчика ...) 	= [OOV] 0.00000000 [ -inf ] / 0.99999990
info: p( <punct> | <punct> ...) 	= [OOV] 0.00000000 [ -inf ] / 0.99999993
info: p( <punct> | <punct> ...) 	= [OOV] 0.00000000 [ -inf ] / 0.99999993
info: p( <punct> | <punct> ...) 	= [OOV] 0.00000000 [ -inf ] / 0.99999993
info: p( </s> | <punct> ...) 	= [1gram] 0.05693430 [ -1.24462600 ] / 0.99999993

info: 1 sentences, 11 words, 0 OOVs
info: 4 zeroprobs, logprob= -17.93030200 ppl= 31.20267541 ppl1= 42.66064865
>>> print(a.logprob)
-30.906542
```

---

### Methods:
- **setLogfile** - Method for setting the log output file
- **setOOvFile** - Method for setting the file for saving OOV words

### Example:
```python
>>> import asc
>>>
>>> asc.setLogfile("./log.txt")
>>>
>>> asc.setOOvFile("./oov.txt")
```

---

### Methods:
- **perplexity** - Method for calculating perplexity
- **pplConcatenate** - Method for combining perplexity results
- **pplByFiles** - Method for calculating perplexity over a file or group of files

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> a = asc.perplexity("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???с лязгом выкатился и остановился возле мальчика....")
>>>
>>> print(a.logprob)
-30.906542
>>>
>>> print(a.oovs)
0
>>>
>>> print(a.words)
24
>>>
>>> print(a.sentences)
2
>>>
>>> print(a.zeroprobs)
7
>>>
>>> print(a.ppl)
17.229063831108224
>>>
>>> print(a.ppl1)
19.398698060810077
>>>
>>> b = asc.pplByFiles("./text.txt")
>>>
>>> c = asc.pplConcatenate(a, b)
>>>
>>> print(c.ppl)
7.384123548831112
```

### Description
| Name      | Description                                                                 |
|-----------|-----------------------------------------------------------------------------|
| ppl       | Perplexity value computed over all tokens, including sentence markers |
| ppl1      | Perplexity value computed over words only, excluding sentence markers |
| oovs      | Count of OOV (out-of-vocabulary) words                                 |
| words     | Count of words in the text                                             |
| logprob   | Total log-probability of the word sequence                             |
| sentences | Count of sentences                                                     |
| zeroprobs | Count of zero-probability words                                        |
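Judging by the debug output in the perplexity example above, `ppl` and `ppl1` follow the usual relation to `logprob`: `ppl` divides by the token count including sentence markers, while `ppl1` divides by the word count alone. A sketch that approximately reproduces the numbers printed for the first sentence (this denominator convention is inferred from the output, not from the library's source, and the printed `logprob` is rounded, so small differences remain):

```python
# Values printed for the first sentence in the example above.
logprob, words, sentences = -12.97624, 13, 1

ppl = 10 ** (-logprob / (words + sentences))  # with sentence markers
ppl1 = 10 ** (-logprob / words)               # words only

print(round(ppl, 3), round(ppl1, 3))  # close to the printed 8.450 and 9.958
```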

---

### Methods:
- **tokenization** - Method for breaking text into tokens

### Example:
```python
>>> import asc
>>>
>>> def tokensFn(word, context, reset, stop):
...     print(word, " => ", context)
...     return True
...
>>> asc.switchAllowApostrophe()
>>>
>>> asc.tokenization("«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie", tokensFn)
«  =>  []
On  =>  ['«']
nous  =>  ['«', 'On']
dit  =>  ['«', 'On', 'nous']
qu'aujourd'hui  =>  ['«', 'On', 'nous', 'dit']
c'est  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui"]
le  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est"]
cas  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le']
,  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas']
encore  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',']
faudra-t-il  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',', 'encore']
l  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l']
'  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l']
évaluer  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', "'"]
»  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', "'", 'évaluer']
l  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', "'", 'évaluer', '»']
'  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', "'", 'évaluer', '»', 'l']
astronomie  =>  ['«', 'On', 'nous', 'dit', "qu'aujourd'hui", "c'est", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', "'", 'évaluer', '»', 'l', "'"]
```

---

### Methods:
- **setTokenizerFn** - Method for setting an external tokenizer function

### Example:
```python
>>> import asc
>>>
>>> def tokenizerFn(text, callback):
...     word = ""
...     context = []
...     for letter in text:
...         if letter == " " and len(word) > 0:
...             if not callback(word, context, False, False): return
...             context.append(word)
...             word = ""
...         elif letter == "." or letter == "!" or letter == "?":
...             if not callback(word, context, True, False): return
...             word = ""
...             context = []
...         else:
...             word += letter
...     if len(word) > 0:
...         if not callback(word, context, False, True): return
...
>>> def tokensFn(word, context, reset, stop):
...     print(word, " => ", context)
...     return True
...
>>> asc.setTokenizerFn(tokenizerFn)
>>>
>>> asc.tokenization("Hello World today!", tokensFn)
Hello  =>  []
World  =>  ['Hello']
today  =>  ['Hello', 'World']
```

---

### Methods:
- **sentences** - Method for generating sentences
- **sentencesToFile** - Method for generating a specified number of sentences and writing them to a file

### Example:
```python
>>> import asc
>>>
>>> def sentencesFn(text):
...     print("Sentences:", text)
...     return True
...
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.sentences(sentencesFn)
Sentences: <s> В общем </s>
Sentences: <s> С лязгом выкатился и остановился возле мальчика </s>
Sentences: <s> У меня нет </s>
Sentences: <s> Я вообще не хочу </s>
Sentences: <s> Да и в общем </s>
Sentences: <s> Не могу </s>
Sentences: <s> Ну в общем </s>
Sentences: <s> Так что я вообще не хочу </s>
Sentences: <s> Потому что я вообще не хочу </s>
Sentences: <s> Продолжение следует </s>
Sentences: <s> Неожиданно из подворотни в олега ударил яркий прожектор патрульный трактор </s>
>>>
>>> asc.sentencesToFile(5, "./result.txt")
```

---

### Methods:
- **fixUppers** - Method for correcting letter case in text
- **fixUppersByFiles** - Method for correcting letter case in a text file

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.fixUppers("неожиданно из подворотни в олега ударил яркий прожектор патрульный трактор???с лязгом выкатился и остановился возле мальчика....")
'Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор??? С лязгом выкатился и остановился возле мальчика....'
>>>
>>> asc.fixUppersByFiles("./corpus", "./result.txt", "txt")
```

---

### Methods:
- **checkHypLat** - Method for checking a word for hyphens and Latin characters

### Example:
```python
>>> import asc
>>>
>>> asc.checkHypLat("Hello-World")
(True, True)
>>>
>>> asc.checkHypLat("Hello")
(False, True)
>>>
>>> asc.checkHypLat("Привет")
(False, False)
>>>
>>> asc.checkHypLat("так-как")
(True, False)
```
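`checkHypLat` returns a pair of booleans: whether the word contains a hyphen and whether it contains Latin letters. A plausible pure-Python equivalent:

```python
def check_hyp_lat(word):
    # Return (contains a hyphen, contains Latin letters).
    has_hyphen = "-" in word
    has_latin = any("a" <= ch.lower() <= "z" for ch in word)
    return (has_hyphen, has_latin)

print(check_hyp_lat("Hello-World"))  # (True, True)
print(check_hyp_lat("так-как"))      # (True, False)
```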

---

### Methods:
- **getUppers** - Method for extracting the letter-case mask of each word
- **countLetter** - Method for counting occurrences of a specific letter in a word

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.idw("Living")
10493385932
>>>
>>> asc.idw("in")
3301
>>>
>>> asc.idw("the")
217280
>>>
>>> asc.idw("USA")
188643
>>>
>>> asc.getUppers([10493385932, 3301, 217280, 188643])
[1, 0, 0, 7]
>>> 
>>> asc.countLetter("hello-world", "-")
1
>>>
>>> asc.countLetter("hello-world", "l")
3
```
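The values returned by `getUppers` look like per-word bitmasks: bit *i* is set when the *i*-th letter is uppercase, so `Living` gives `1` and `USA` gives `7` (binary `111`). A sketch of this presumed encoding:

```python
def upper_mask(word):
    # Set bit i for every uppercase character at position i.
    mask = 0
    for i, ch in enumerate(word):
        if ch.isupper():
            mask |= 1 << i
    return mask

print([upper_mask(w) for w in ("Living", "in", "the", "USA")])  # [1, 0, 0, 7]
```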

---

### Methods:
- **urls** - Method for extracting URL address coordinates in a string

### Example:
```python
>>> import asc
>>>
>>> asc.urls("This website: example.com was designed with ...")
{14: 25}
>>>
>>> asc.urls("This website: https://a.b.c.example.net?id=52#test-1 was designed with ...")
{14: 52}
>>>
>>> asc.urls("This website: https://a.b.c.example.net?id=52#test-1 and 127.0.0.1 was designed with ...")
{14: 52, 57: 66}
```
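The returned dictionary maps the start index of each detected URL to the index just past its last character. A simplified regex sketch of the idea (the library's detector handles far more address forms than this pattern does):

```python
import re

# Simplified pattern: scheme-prefixed URLs, IPv4 addresses, bare domains.
URL_RE = re.compile(
    r"https?://\S+"                          # scheme-prefixed URL
    r"|\b(?:\d{1,3}\.){3}\d{1,3}\b"          # IPv4 address
    r"|\b[\w-]+(?:\.[\w-]+)*\.[a-z]{2,}\b"   # bare domain name
)

def urls(text):
    # Map each match's start offset to its end offset (exclusive).
    return {m.start(): m.end() for m in URL_RE.finditer(text)}

print(urls("This website: example.com was designed with ..."))  # {14: 25}
```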

---

### Methods:
- **roman2Arabic** - Method for translating Roman numerals to Arabic

### Example:
```python
>>> import asc
>>>
>>> asc.roman2Arabic("XVI")
16
```
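Roman-to-Arabic conversion follows the standard subtractive rule: a numeral smaller than its right neighbour is subtracted rather than added. A self-contained sketch:

```python
def roman2arabic(roman):
    values = {"I": 1, "V": 5, "X": 10, "L": 50,
              "C": 100, "D": 500, "M": 1000}
    total = 0
    for cur, nxt in zip(roman.upper(), roman.upper()[1:] + " "):
        v = values[cur]
        # Subtract when a smaller numeral precedes a larger one (IV, IX, ...).
        total += -v if nxt != " " and values[nxt] > v else v
    return total

print(roman2arabic("XVI"))  # 16
```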

---

### Methods:
- **rest** - Method for detecting and correcting words typed with mixed alphabets
- **setSubstitutes** - Method for setting the substitute letters used to correct mixed-alphabet words
- **getSubstitutes** - Method for getting the substitute letters used to correct mixed-alphabet words

### Example:
```python
>>> import asc
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>>
>>> asc.getSubstitutes()
{'a': 'а', 'b': 'в', 'c': 'с', 'e': 'е', 'h': 'н', 'k': 'к', 'm': 'м', 'o': 'о', 'p': 'р', 't': 'т', 'x': 'х'}
>>>
>>> str = "ПPИBETИК"
>>>
>>> str.lower()
'пpиbetик'
>>>
>>> asc.rest(str)
'приветик'
```
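The substitution table maps Latin look-alikes to their Cyrillic counterparts, which is how `rest` repairs words typed with mixed alphabets. A minimal sketch of the replacement step (the library additionally decides when a substitution is actually appropriate):

```python
SUBSTITUTES = {"p": "р", "c": "с", "o": "о", "t": "т", "k": "к",
               "e": "е", "a": "а", "h": "н", "x": "х", "b": "в", "m": "м"}

def rest(word):
    # Lowercase, then swap each Latin look-alike for its Cyrillic twin.
    return "".join(SUBSTITUTES.get(ch, ch) for ch in word.lower())

print(rest("ПPИBETИК"))  # приветик
```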

---

### Methods:
- **setTokensDisable** - Method for setting the list of forbidden tokens (as a set of identifiers)
- **setTokensUnknown** - Method for setting the list of tokens cast to 〈unk〉 (as a set of identifiers)
- **setTokenDisable** - Method for setting the list of unidentifiable tokens (as a pipe-delimited string)
- **setTokenUnknown** - Method for setting the list of tokens to be identified as 〈unk〉 (as a pipe-delimited string)
- **getTokensDisable** - Method for retrieving the list of forbidden tokens
- **getTokensUnknown** - Method for retrieving the list of tokens cast to 〈unk〉
- **setAllTokenDisable** - Method for marking all tokens as unidentifiable
- **setAllTokenUnknown** - Method for marking all tokens as identified as 〈unk〉

### Example:
```python
>>> import asc
>>>
>>> asc.idw("<date>")
6
>>>
>>> asc.idw("<time>")
7
>>>
>>> asc.idw("<abbr>")
5
>>>
>>> asc.idw("<math>")
9
>>>
>>> asc.setTokenDisable("date|time|abbr|math")
>>>
>>> asc.getTokensDisable()
{9, 5, 6, 7}
>>>
>>> asc.setTokensDisable({6, 7, 5, 9})
>>>
>>> asc.setTokenUnknown("date|time|abbr|math")
>>>
>>> asc.getTokensUnknown()
{9, 5, 6, 7}
>>>
>>> asc.setTokensUnknown({6, 7, 5, 9})
>>>
>>> asc.setAllTokenDisable()
>>>
>>> asc.getTokensDisable()
{2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23}
>>>
>>> asc.setAllTokenUnknown()
>>>
>>> asc.getTokensUnknown()
{2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23}
```

---

### Methods:
- **countAlphabet** - Method for getting the number of letters in the alphabet

### Example:
```python
>>> import asc
>>>
>>> asc.getAlphabet()
'abcdefghijklmnopqrstuvwxyz'
>>>
>>> asc.countAlphabet()
26
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.countAlphabet()
59
```

---

### Methods:
- **countBigrams** - Method for counting bigrams in text
- **countTrigrams** - Method for counting trigrams in text
- **countGrams** - Method for counting N-grams using the order of the language model

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.countBigrams("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???с лязгом выкатился и остановился возле мальчика....")
12
>>>
>>> asc.countTrigrams("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???с лязгом выкатился и остановился возле мальчика....")
10
>>>
>>> asc.size()
3
>>>
>>> asc.countGrams("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???с лязгом выкатился и остановился возле мальчика....")
10
>>>
>>> asc.idw("неожиданно")
3263936167
>>>
>>> asc.idw("из")
5134
>>>
>>> asc.idw("подворотни")
12535356101
>>>
>>> asc.idw("в")
53
>>>
>>> asc.idw("Олега")
2824508300
>>>
>>> asc.idw("ударил")
24816796913
>>>
>>> asc.countBigrams([3263936167, 5134, 12535356101, 53, 2824508300, 24816796913])
5
>>>
>>> asc.countTrigrams([3263936167, 5134, 12535356101, 53, 2824508300, 24816796913])
4
>>>
>>> asc.countGrams([3263936167, 5134, 12535356101, 53, 2824508300, 24816796913])
4
```
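For a sequence fully covered by the model, the counts are simply `len(tokens) - N + 1`; N-grams missing from the model are skipped. A sketch of the counting logic, with a hypothetical `known` predicate standing in for the model lookup:

```python
def count_ngrams(tokens, n, known=None):
    # Slide a window of size n over the sequence.
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if known is None:  # assume every n-gram is present in the model
        return len(grams)
    return sum(1 for g in grams if known(g))

ids = [3263936167, 5134, 12535356101, 53, 2824508300, 24816796913]
print(count_ngrams(ids, 2), count_ngrams(ids, 3))  # 5 4
```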

---

### Methods:
- **arabic2Roman** - Method for converting an Arabic number to a Roman numeral

### Example:
```python
>>> import asc
>>>
>>> asc.arabic2Roman(23)
'XXIII'
>>>
>>> asc.arabic2Roman("33")
'XXXIII'
```
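The reverse conversion greedily consumes the largest numeral values first, including the subtractive pairs. A self-contained sketch:

```python
def arabic2roman(number):
    pairs = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    n = int(number)  # accepts ints or numeric strings, like the API above
    out = []
    for value, numeral in pairs:
        while n >= value:  # greedily take the largest value that fits
            out.append(numeral)
            n -= value
    return "".join(out)

print(arabic2roman(23), arabic2roman("33"))  # XXIII XXXIII
```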

---

### Methods:
- **setThreads** - Method for setting the number of threads (0 = all available threads)

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.setThreads(3)
>>>
>>> a = asc.pplByFiles("./text.txt")
>>>
>>> print(a.logprob)
-48201.29481399994
```

---

### Methods:
- **fti** - Method for converting a fractional number to an integer by shifting the decimal point

### Example:
```python
>>> import asc
>>>
>>> asc.fti(5892.4892)
5892489200000
>>>
>>> asc.fti(5892.4892, 4)
58924892
```
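Judging by the examples, `fti` scales the number by a power of ten (nine decimal places by default) and truncates to an integer. A sketch reproducing the outputs above (the default of nine places is inferred from the first call):

```python
def fti(value, digits=9):
    # Shift the decimal point right by `digits` places, then truncate.
    return int(round(value * 10 ** digits))

print(fti(5892.4892, 4))  # 58924892
print(fti(5892.4892))     # 5892489200000
```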

---

### Methods:
- **context** - Method for assembling text context from a sequence

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.idw("неожиданно")
3263936167
>>>
>>> asc.idw("из")
5134
>>>
>>> asc.idw("подворотни")
12535356101
>>>
>>> asc.idw("в")
53
>>>
>>> asc.idw("Олега")
2824508300
>>>
>>> asc.idw("ударил")
24816796913
>>>
>>> asc.context([3263936167, 5134, 12535356101, 53, 2824508300, 24816796913])
'Неожиданно из подворотни в Олега ударил'
```

---

### Methods:
- **isAbbr** - Method for checking whether a word is an abbreviation
- **isSuffix** - Method for checking whether a word is a numeric-abbreviation suffix
- **isToken** - Method for checking whether an identifier corresponds to a token
- **isIdWord** - Method for checking whether an identifier corresponds to a word

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.addAbbr("США")
>>>
>>> asc.isAbbr("сша")
True
>>>
>>> asc.addSuffix("1-я")
>>>
>>> asc.isSuffix("1-я")
True
>>>
>>> asc.isToken(asc.idw("США"))
True
>>>
>>> asc.isToken(asc.idw("1-я"))
True
>>>
>>> asc.isToken(asc.idw("125"))
True
>>>
>>> asc.isToken(asc.idw("<s>"))
True
>>>
>>> asc.isToken(asc.idw("Hello"))
False
>>>
>>> asc.isIdWord(asc.idw("https://anyks.com"))
True
>>>
>>> asc.isIdWord(asc.idw("Hello"))
True
>>>
>>> asc.isIdWord(asc.idw("-"))
False
```

---

### Methods:
- **findByFiles** - Method for searching N-grams in a text file

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.debug)
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.findByFiles("./text.txt", "./result.txt")
info: <s> Кукай
сари кукай
сари японские
японские каллиграфы
каллиграфы я
я постоянно
постоянно навещал
навещал их
их тайно
тайно от
от людей
людей </s>


info: <s> Неожиданно из
Неожиданно из подворотни
из подворотни в
подворотни в Олега
в Олега ударил
Олега ударил яркий
ударил яркий прожектор
яркий прожектор патрульный
прожектор патрульный трактор
патрульный трактор

<s> С лязгом
С лязгом выкатился
лязгом выкатился и
выкатился и остановился
и остановился возле
остановился возле мальчика
возле мальчика
```

---

### Methods:
- **checkSequence** - Method for checking the existence of a sequence
- **existSequence** - Method for checking the existence of a sequence, excluding non-word tokens
- **checkByFiles** - Method for checking whether sequences in a text file exist

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.debug)
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.addAbbr("США")
>>>
>>> asc.isAbbr("сша")
>>>
>>> asc.checkSequence("Неожиданно из подворотни в олега ударил")
True
>>>
>>> asc.checkSequence("Сегодня сыграл и в Олега ударил яркий прожектор патрульный трактор с корпоративным сектором")
True
>>>
>>> asc.checkSequence("Сегодня сыграл и в Олега ударил яркий прожектор патрульный трактор с корпоративным сектором", True)
True
>>>
>>> asc.checkSequence("в Олега ударил яркий")
True
>>>
>>> asc.checkSequence("в Олега ударил яркий", True)
True
>>>
>>> asc.checkSequence("от госсекретаря США")
True
>>>
>>> asc.checkSequence("от госсекретаря США", True)
True
>>>
>>> asc.checkSequence("Неожиданно из подворотни в олега ударил", 2)
True
>>>
>>> asc.checkSequence(["Неожиданно","из","подворотни","в","олега","ударил"], 2)
True
>>>
>>> asc.existSequence("<s> Сегодня сыграл и в, Олега ударил яркий прожектор, патрульный трактор - с корпоративным сектором </s>", 2)
(True, 0)
>>>
>>> asc.existSequence(["<s>","Сегодня","сыграл","и","в",",","Олега","ударил","яркий","прожектор",",","патрульный","трактор","-","с","корпоративным","сектором","</s>"], 2)
(True, 2)
>>>
>>> asc.idw("от")
6086
>>>
>>> asc.idw("госсекретаря")
51273912082
>>>
>>> asc.idw("США")
5
>>>
>>> asc.checkSequence([6086, 51273912082, 5])
True
>>>
>>> asc.checkSequence([6086, 51273912082, 5], True)
True
>>>
>>> asc.checkSequence(["от", "госсекретаря", "США"])
True
>>>
>>> asc.checkSequence(["от", "госсекретаря", "США"], True)
True
>>>
>>> asc.checkByFiles("./text.txt", "./result.txt")
info: 1999 | YES | Какой-то период времени мы вообще не общались

info: 2000 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.С лязгом выкатился и остановился возле мальчика.

info: 2001 | YES | Так как эти яйца жалко есть а хочется все больше любоваться их можно покрыть лаком даже прозрачным лаком для ногтей

info: 2002 | NO | кукай <unk> <unk> сари кукай <unk> <unk> сари японские каллиграфы я постоянно навещал их тайно от людей

info: 2003 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???С лязгом выкатился и остановился возле мальчика....

info: 2004 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор?С лязгом выкатился и остановился возле мальчика.

info: 2005 | YES | Сегодня яичницей никто не завтракал как впрочем и вчера на ближайшем к нам рынке мы ели фруктовый салат со свежевыжатым соком как в старые добрые времена в Бразилии

info: 2006 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор!С лязгом выкатился и остановился возле мальчика.

info: 2007 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.с лязгом выкатился и остановился возле мальчика.

All texts: 2007
Exists texts: 1359
Not exists texts: 648
>>>
>>> asc.checkByFiles("./corpus", "./result.txt", False, "txt")
info: 1999 | YES | Какой-то период времени мы вообще не общались

info: 2000 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.С лязгом выкатился и остановился возле мальчика.

info: 2001 | YES | Так как эти яйца жалко есть а хочется все больше любоваться их можно покрыть лаком даже прозрачным лаком для ногтей

info: 2002 | NO | кукай <unk> <unk> сари кукай <unk> <unk> сари японские каллиграфы я постоянно навещал их тайно от людей

info: 2003 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???С лязгом выкатился и остановился возле мальчика....

info: 2004 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор?С лязгом выкатился и остановился возле мальчика.

info: 2005 | YES | Сегодня яичницей никто не завтракал как впрочем и вчера на ближайшем к нам рынке мы ели фруктовый салат со свежевыжатым соком как в старые добрые времена в Бразилии

info: 2006 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор!С лязгом выкатился и остановился возле мальчика.

info: 2007 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.с лязгом выкатился и остановился возле мальчика.

All texts: 2007
Exists texts: 1359
Not exists texts: 648
>>>
>>> asc.checkByFiles("./corpus", "./result.txt", True, "txt")
info: 2000 | NO | Так как эти яйца жалко есть а хочется все больше любоваться их можно покрыть лаком даже прозрачным лаком для ногтей

info: 2001 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.С лязгом выкатился и остановился возле мальчика.

info: 2002 | NO | Сегодня яичницей никто не завтракал как впрочем и вчера на ближайшем к нам рынке мы ели фруктовый салат со свежевыжатым соком как в старые добрые времена в Бразилии

info: 2003 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор!С лязгом выкатился и остановился возле мальчика.

info: 2004 | NO | кукай <unk> <unk> сари кукай <unk> <unk> сари японские каллиграфы я постоянно навещал их тайно от людей

info: 2005 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор?С лязгом выкатился и остановился возле мальчика.

info: 2006 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???С лязгом выкатился и остановился возле мальчика....

info: 2007 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.с лязгом выкатился и остановился возле мальчика.

All texts: 2007
Exists texts: 0
Not exists texts: 2007
```
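
The `checkByFiles` report lines above follow a simple `info: N | YES/NO | text` layout. As an illustration (pure Python, with the field layout assumed from the output shown, not taken from the library), the report can be post-processed to recompute the `Exists`/`Not exists` tally:

```python
import re

# Matches report lines like: "info: 2005 | YES | Сегодня яичницей никто не завтракал ..."
LINE_RE = re.compile(r"^info:\s*(\d+)\s*\|\s*(YES|NO)\s*\|\s*(.*)$")

def tally_report(lines):
    """Count how many checked texts exist in the language model (YES) vs not (NO)."""
    exists = not_exists = 0
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue  # skip blank lines and summary lines
        if m.group(2) == "YES":
            exists += 1
        else:
            not_exists += 1
    return {"all": exists + not_exists, "exists": exists, "not_exists": not_exists}

report = [
    "info: 2004 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор",
    "info: 2005 | YES | Сегодня яичницей никто не завтракал",
]
print(tally_report(report))  # {'all': 2, 'exists': 1, 'not_exists': 1}
```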

---

### Methods:
- **check** - Checks a string against the given check type
- **match** - Matches a string against the given match type
- **addAbbr** - Adds an abbreviation
- **addSuffix** - Adds a numeric-suffix abbreviation
- **setSuffixes** - Sets the list of numeric-suffix abbreviations
- **readSuffix** - Reads suffixes and abbreviations from a file

### Example:
```python
>>> import asc
>>> 
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.check("Дом-2", asc.check_t.home2)
True
>>> 
>>> asc.check("Дом2", asc.check_t.home2)
False
>>> 
>>> asc.check("Дом-2", asc.check_t.latian)
False
>>> 
>>> asc.check("Hello", asc.check_t.latian)
True
>>> 
>>> asc.check("прiвет", asc.check_t.latian)
True
>>> 
>>> asc.check("Дом-2", asc.check_t.hyphen)
True
>>> 
>>> asc.check("Дом2", asc.check_t.hyphen)
False
>>> 
>>> asc.check("Д", asc.check_t.letter)
True
>>> 
>>> asc.check("$", asc.check_t.letter)
False
>>> 
>>> asc.check("-", asc.check_t.letter)
False
>>> 
>>> asc.check("просtоквaшино", asc.check_t.similars)
True
>>> 
>>> asc.match("my site http://example.ru, it's true", asc.match_t.url)
True
>>> 
>>> asc.match("по вашему ip адресу 46.40.123.12 проводится проверка", asc.match_t.url)
True
>>> 
>>> asc.match("мой адрес в формате IPv6: http://[2001:0db8:11a3:09d7:1f34:8a2e:07a0:765d]/", asc.match_t.url)
True
>>> 
>>> asc.match("13-я", asc.match_t.abbr)
True
>>> 
>>> asc.match("13-я-й", asc.match_t.abbr)
False
>>> 
>>> asc.match("т.д", asc.match_t.abbr)
True
>>> 
>>> asc.match("т.п.", asc.match_t.abbr)
True
>>> 
>>> asc.match("С.Ш.А.", asc.match_t.abbr)
True
>>> 
>>> asc.addAbbr("сша")
>>> asc.match("США", asc.match_t.abbr)
True
>>> 
>>> asc.addSuffix("15-летия")
>>> asc.match("15-летия", asc.match_t.abbr)
True
>>> 
>>> asc.getSuffixes()
{3139900457}
>>> 
>>> asc.idw("лет")
328041
>>> 
>>> asc.idw("тых")
352214
>>> 
>>> asc.setSuffixes({328041, 352214})
>>> 
>>> asc.getSuffixes()
{328041, 352214}
>>> 
>>> def status(status):
...     print(status)
... 
>>> asc.readSuffix("./suffix.abbr", status)
>>> 
>>> asc.match("15-лет", asc.match_t.abbr)
True
>>> 
>>> asc.match("20-тых", asc.match_t.abbr)
True
>>> 
>>> asc.match("15-летия", asc.match_t.abbr)
False
>>> 
>>> asc.match("Hello", asc.match_t.latian)
True
>>> 
>>> asc.match("прiвет", asc.match_t.latian)
False
>>> 
>>> asc.match("23424", asc.match_t.number)
True
>>> 
>>> asc.match("hello", asc.match_t.number)
False
>>> 
>>> asc.match("23424.55", asc.match_t.number)
False
>>> 
>>> asc.match("23424", asc.match_t.decimal)
False
>>> 
>>> asc.match("23424.55", asc.match_t.decimal)
True
>>> 
>>> asc.match("23424,55", asc.match_t.decimal)
True
>>> 
>>> asc.match("-23424.55", asc.match_t.decimal)
True
>>> 
>>> asc.match("+23424.55", asc.match_t.decimal)
True
>>> 
>>> asc.match("+23424.55", asc.match_t.anumber)
True
>>> 
>>> asc.match("15T-34", asc.match_t.anumber)
True
>>> 
>>> asc.match("hello", asc.match_t.anumber)
False
>>> 
>>> asc.match("hello", asc.match_t.allowed)
True
>>> 
>>> asc.match("évaluer", asc.match_t.allowed)
False
>>> 
>>> asc.match("13", asc.match_t.allowed)
True
>>> 
>>> asc.match("Hello-World", asc.match_t.allowed)
True
>>> 
>>> asc.match("Hello", asc.match_t.math)
False
>>> 
>>> asc.match("+", asc.match_t.math)
True
>>> 
>>> asc.match("=", asc.match_t.math)
True
>>> 
>>> asc.match("Hello", asc.match_t.upper)
True
>>> 
>>> asc.match("hello", asc.match_t.upper)
False
>>> 
>>> asc.match("hellO", asc.match_t.upper)
False
>>> 
>>> asc.match("a", asc.match_t.punct)
False
>>> 
>>> asc.match(",", asc.match_t.punct)
True
>>> 
>>> asc.match(" ", asc.match_t.space)
True
>>> 
>>> asc.match("a", asc.match_t.space)
False
>>> 
>>> asc.match("a", asc.match_t.special)
False
>>> 
>>> asc.match("±", asc.match_t.special)
False
>>> 
>>> asc.match("[", asc.match_t.isolation)
True
>>> 
>>> asc.match("a", asc.match_t.isolation)
False
>>> 
>>> asc.match("a", asc.match_t.greek)
False
>>> 
>>> asc.match("Ψ", asc.match_t.greek)
True
>>> 
>>> asc.match("->", asc.match_t.route)
False
>>> 
>>> asc.match("⇔", asc.match_t.route)
True
>>> 
>>> asc.match("a", asc.match_t.letter)
True
>>> 
>>> asc.match("!", asc.match_t.letter)
False
>>> 
>>> asc.match("!", asc.match_t.pcards)
False
>>> 
>>> asc.match("♣", asc.match_t.pcards)
True
>>> 
>>> asc.match("p", asc.match_t.currency)
False
>>> 
>>> asc.match("$", asc.match_t.currency)
True
>>> 
>>> asc.match("€", asc.match_t.currency)
True
>>> 
>>> asc.match("₽", asc.match_t.currency)
True
>>> 
>>> asc.match("₿", asc.match_t.currency)
True
```
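
The `number`/`decimal` results above suggest the pattern being tested: a plain number is digits only, while a decimal has an integer part, a `.` or `,` separator, a fractional part, and an optional leading sign. A minimal pure-Python approximation of that logic (illustrative only, not the library's implementation):

```python
import re

# Illustrative patterns inferred from the match_t.number / match_t.decimal examples above.
NUMBER_RE = re.compile(r"^\d+$")
DECIMAL_RE = re.compile(r"^[+-]?\d+[.,]\d+$")

def is_number(s):
    """Digits only, matching the match_t.number behavior shown above."""
    return bool(NUMBER_RE.match(s))

def is_decimal(s):
    """Signed number with a '.' or ',' fractional separator (match_t.decimal)."""
    return bool(DECIMAL_RE.match(s))

print(is_number("23424"), is_decimal("23424"))        # True False
print(is_number("23424.55"), is_decimal("23424.55"))  # False True
print(is_decimal("23424,55"), is_decimal("-23424.55"))  # True True
```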

---

### Methods:
- **delInText** - Removes characters of a given type from text

### Example:
```python
>>> import asc
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.delInText("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор??? с лязгом выкатился и остановился возле мальчика....", asc.wdel_t.punct)
'неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор с лязгом выкатился и остановился возле мальчика'
>>>
>>> asc.delInText("hello-world-hello-world", asc.wdel_t.hyphen)
'helloworldhelloworld'
>>>
>>> asc.delInText("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор??? с лязгом выкатился и остановился возле мальчика....", asc.wdel_t.broken)
'неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор с лязгом выкатился и остановился возле мальчика'
>>>
>>> asc.delInText("«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie", asc.wdel_t.broken)
"On nous dit qu'aujourd'hui c'est le cas encore faudra-t-il l'valuer l'astronomie"
```
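
The `wdel_t.punct` behavior above can be approximated in plain Python with Unicode character categories. A sketch (illustrative only; hyphens are kept explicitly, since the example shows hyphen removal is a separate `wdel_t.hyphen` mode):

```python
import unicodedata

def del_punct(text):
    """Strip punctuation (Unicode categories P*), keep hyphens, collapse
    the extra whitespace that removal leaves behind."""
    cleaned = "".join(
        ch for ch in text
        if ch == "-" or not unicodedata.category(ch).startswith("P")
    )
    return " ".join(cleaned.split())

print(del_punct("патрульный трактор??? с лязгом выкатился...."))
# патрульный трактор с лязгом выкатился
```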

---

### Methods:
- **countsByFiles** - Counts the number of n-grams in a text file

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.debug)
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.countsByFiles("./text.txt", "./result.txt", 3)
info: 0 | Сегодня яичницей никто не завтракал как впрочем и вчера на ближайшем к нам рынке мы ели фруктовый салат со свежевыжатым соком как в старые добрые времена в Бразилии

info: 10 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор?С лязгом выкатился и остановился возле мальчика.

info: 10 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор!С лязгом выкатился и остановился возле мальчика.

info: 0 | Так как эти яйца жалко есть а хочется все больше любоваться их можно покрыть лаком даже прозрачным лаком для ногтей

info: 10 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???С лязгом выкатился и остановился возле мальчика....

Counts 3grams: 471
>>>
>>> asc.countsByFiles("./corpus", "./result.txt", 2, "txt")
info: 19 | Так как эти яйца жалко есть а хочется все больше любоваться их можно покрыть лаком даже прозрачным лаком для ногтей

info: 12 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.с лязгом выкатился и остановился возле мальчика.

info: 12 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор!С лязгом выкатился и остановился возле мальчика.

info: 10 | кукай <unk> <unk> сари кукай <unk> <unk> сари японские каллиграфы я постоянно навещал их тайно от людей

info: 12 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???С лязгом выкатился и остановился возле мальчика....

info: 12 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор?С лязгом выкатился и остановился возле мальчика.

info: 27 | Сегодня яичницей никто не завтракал как впрочем и вчера на ближайшем к нам рынке мы ели фруктовый салат со свежевыжатым соком как в старые добрые времена в Бразилии

Counts 2grams: 20270
```

### Description
| N-gram size | Description                                  |
|-------------|----------------------------------------------|
| 1           | n-gram size is taken from the language model |
| 2           | bigrams                                      |
| 3           | trigrams                                     |
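
The per-file counts in the report above come from the library itself; for intuition, generic n-gram counting over a token sequence can be sketched in a few lines of pure Python (illustrative only):

```python
from collections import Counter

def count_ngrams(tokens, n):
    """Count all contiguous n-grams of size n in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Sentence taken from the corpus examples above.
tokens = "как в старые добрые времена в Бразилии".split()
bigrams = count_ngrams(tokens, 2)
print(sum(bigrams.values()))        # 6 (seven tokens yield six bigrams)
print(bigrams[("в", "Бразилии")])   # 1
```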

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/anyks/asc",
    "name": "anyks-sc",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.6",
    "maintainer_email": "",
    "keywords": "nlp,lm,alm,language-model,spellchecker,spell-checker,anyks",
    "author": "Yuriy Lobarev",
    "author_email": "forman@anyks.com",
    "download_url": "https://files.pythonhosted.org/packages/fa/0a/e71554f1551e1f68c508437a967f6e808b39e8db213a139dcce750c3e7c5/anyks-sc-1.2.6.tar.gz",
    "platform": null,
    "description": "[![ANYKS Smart language model](https://raw.githubusercontent.com/anyks/asc/master/site/img/banner.jpg)](https://anyks.com)\n\n# ANYKS Spell-checker (ASC)\n\n## Project description\n\nThere are a lot of typo and text error correction systems out there. Each one of those systems has its pros and cons, and each system has the right to live and will find its own user base. I would like to present my own version of the typo correction system with its own unique features.\n\n## List of features\n\n- Correction of mistakes in words with a **Levenshtein distance** of up to 4;\n- Correction of different types of typos in words: insertion, deletion, substitution, rearrangement of character;\n- **\u0401**-fication of a word given the context (letter '\u0451' is commonly replaced by letter '\u0435' in russian typed text);\n- Context-based word capitalization for proper names and titles;\n- Context-based splitting for words that are missing the separating space character;\n- Text analysis without correcting the original text;\n- Searching the text for errors, typos, incorrect context.\n\n## Requirements\n\n- [Zlib](http://www.zlib.net)\n- [Bloom](http://www.partow.net/programming/bloomfilter/index.html)\n- [OpenSSL](https://www.openssl.org)\n- [hnswlib](https://github.com/nmslib/hnswlib)\n- [HandyPack](https://github.com/bakwc/HandyPack)\n- [GperfTools](https://github.com/gperftools/gperftools)\n- [Python3](https://www.python.org/download/releases/3.0)\n- [NLohmann::json](https://github.com/nlohmann/json)\n- [BigInteger](http://mattmccutchen.net/bigint)\n- [ALM](https://github.com/anyks/alm)\n\n## Install PyBind11\n\n```bash\n$ python3 -m pip install pybind11\n```\n\n## Ready-to-use dictionaries\n\n| Dictionary name                                                        | Size (GB)   | RAM (GB)                | N-gram order   | Language 
|\n|------------------------------------------------------------------------|-------------|-------------------------|----------------|----------|\n| [wittenbell-3-big.asc](https://cloud.mail.ru/public/2b9E/sz9b8YGJ1)    | 1.97        | 15.6                    | 3              | RU       |\n| [wittenbell-3-middle.asc](https://cloud.mail.ru/public/5zo5/2F6uav8fz) | 1.24        | 9.7                     | 3              | RU       |\n| [mkneserney-3-middle.asc](https://cloud.mail.ru/public/3SBP/35WJFKFyd) | 1.33        | 9.7                     | 3              | RU       |\n| [wittenbell-3-single.asc](https://cloud.mail.ru/public/21jt/YEW493XQa) | 0.772       | 5.14                    | 3              | RU       |\n| [wittenbell-5-single.asc](https://cloud.mail.ru/public/5Adc/5x5Ea1eA6) | 1.37        | 10.7                    | 5              | RU       |\n\n## Testing\n\nTo test the system, we used data from the [2016 \"spelling correction\" competition organized by Dialog21](http://www.dialog-21.ru/evaluation/2016/spelling_correction).<br>\nThe trained binary dictionary that was used for testing: [wittenbell-3-middle.asc](https://cloud.mail.ru/public/5zo5/2F6uav8fz)\n\n| Mode                              | Precision | Recall | FMeasure |\n|-----------------------------------|-----------|--------|----------|\n| **Typo correction**               | 76.97     | 62.71  | 69.11    |\n| **Error correction**              | 73.72     | 60.53  | 66.48    |\n\nI think it is unnecessary to add any other data. 
Anyone can repeat the test if they wish (all files used for testing are attached below).\n\n### Files used for testing\n\n- [test.txt](https://cloud.mail.ru/public/3rzx/2HwqDU9j5) - Text used for testing;\n- [correct.txt](https://cloud.mail.ru/public/3dAN/J4EzV4U3W) - File with correct text;\n- [evaluate.py](https://cloud.mail.ru/public/ZTRf/4sUS1Sc2p) - Python3 script for correction result evaluation.\n\n---\n\n## Description of Methods\n\n### Methods:\n- **idw** - Word ID retrieval method\n- **idt** - Token ID retrieval method\n- **ids** - Sequence ID retrieval method\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.idw(\"hello\")\n313191024\n>>>\n>>> asc.idw(\"<s>\")\n1\n>>>\n>>> asc.idw(\"</s>\")\n22\n>>>\n>>> asc.idw(\"<unk>\")\n3\n>>>\n>>> asc.idt(\"1424\")\n2\n>>>\n>>> asc.idt(\"hello\")\n0\n>>>\n>>> asc.idw(\"Living\")\n13268942501\n>>>\n>>> asc.idw(\"in\")\n2047\n>>>\n>>> asc.idw(\"the\")\n83201\n>>>\n>>> asc.idw(\"USA\")\n72549\n>>>\n>>> asc.ids([13268942501, 2047, 83201, 72549])\n16314074810955466382\n```\n\n### Description\n| Name      | Description                                                                                                                                           |\n|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------|\n|\u2329s\u232a       | Sentence beginning token                                                                                                                              |\n|\u2329/s\u232a      | Sentence end token                                                                                                                                    |\n|\u2329url\u232a     | URL-address token                                                                                                                                     |\n|\u2329num\u232a     | Number (arabic or roman) token                         
                                                                                               |\n|\u2329unk\u232a     | Unknown word token                                                                                                                                    |\n|\u2329time\u232a    | Time token (15:44:56)                                                                                                                                 |\n|\u2329score\u232a   | Score count token (4:3 \u00a6 01:04)                                                                                                                       |\n|\u2329fract\u232a   | Fraction token (5/20 \u00a6 192/864)                                                                                                                       |\n|\u2329date\u232a    | Date token (18.07.2004 \u00a6 07/18/2004)                                                                                                                  |\n|\u2329abbr\u232a    | Abbreviation token (1-\u0439 \u00a6 2-\u0435 \u00a6 20-\u044f \u00a6 p.s \u00a6 p.s.)                                                                                                    
|\n|\u2329dimen\u232a   | Dimensions token (200x300 \u00a6 1920x1080)                                                                                                                |\n|\u2329range\u232a   | Range of numbers token (1-2 \u00a6 100-200 \u00a6 300-400)                                                                                                      |\n|\u2329aprox\u232a   | Approximate number token (~93 \u00a6 ~95.86 \u00a6 10~20)                                                                                                       |\n|\u2329anum\u232a    | Pseudo-number token (combination of numbers and other symbols) (T34 \u00a6 895-M-86 \u00a6 39km)                                                                |\n|\u2329pcards\u232a  | Symbols of the play cards (\u2660 \u00a6 \u2663 \u00a6 \u2665 \u00a6 \u2666 )                                                                                                            |\n|\u2329punct\u232a   | Punctuation token (. \u00a6 , \u00a6 ? \u00a6 ! 
\u00a6 : \u00a6 ; \u00a6 \u2026 \u00a6 \u00a1 \u00a6 \u00bf)                                                                                                 |\n|\u2329route\u232a   | Direction symbols (arrows) (\u2190 \u00a6 \u2191 \u00a6 \u2193 \u00a6 \u2194 \u00a6 \u21b5 \u00a6 \u21d0 \u00a6 \u21d1 \u00a6 \u21d2 \u00a6 \u21d3 \u00a6 \u21d4 \u00a6 \u25c4 \u00a6 \u25b2 \u00a6 \u25ba \u00a6 \u25bc)                                                                    |\n|\u2329greek\u232a   | Symbols of the Greek alphabet (\u0391 \u00a6 \u0392 \u00a6 \u0393 \u00a6 \u0394 \u00a6 \u0395 \u00a6 \u0396 \u00a6 \u0397 \u00a6 \u0398 \u00a6 \u0399 \u00a6 \u039a \u00a6 \u039b \u00a6 \u039c \u00a6 \u039d \u00a6 \u039e \u00a6 \u039f \u00a6 \u03a0 \u00a6 \u03a1 \u00a6 \u03a3 \u00a6 \u03a4 \u00a6 \u03a5 \u00a6 \u03a6 \u00a6 \u03a7 \u00a6 \u03a8 \u00a6 \u03a9)                         |\n|\u2329isolat\u232a  | Isolation/quotation token (( \u00a6 ) \u00a6 [ \u00a6 ] \u00a6 { \u00a6 } \u00a6 \" \u00a6 \u00ab \u00a6 \u00bb \u00a6 \u201e \u00a6 \u201c \u00a6 ` \u00a6 \u2308 \u00a6 \u2309 \u00a6 \u230a \u00a6 \u230b \u00a6 \u2039 \u00a6 \u203a \u00a6 \u201a \u00a6 \u2019 \u00a6 \u2032 \u00a6 \u201b \u00a6 \u2033 \u00a6 \u2018 \u00a6 \u201d \u00a6 \u201f \u00a6 ' \u00a6\u2329 \u00a6 \u232a)         |\n|\u2329specl\u232a   | Special character token (_ \u00a6 @ \u00a6 # \u00a6 \u2116 \u00a6 \u00a9 \u00a6 \u00ae \u00a6 & \u00a6 \u00a7 \u00a6 \u00e6 \u00a6 \u00f8 \u00a6 \u00de \u00a6 \u2013 \u00a6 \u203e \u00a6 \u2011 \u00a6 \u2014 \u00a6 \u00af \u00a6 \u00b6 \u00a6 \u02c6 \u00a6 \u02dc \u00a6 \u2020 \u00a6 \u2021 \u00a6 \u2022 \u00a6 \u2030 \u00a6 \u2044 \u00a6 \u2111 \u00a6 \u2118 \u00a6 \u211c \u00a6 \u2135 \u00a6 \u25ca \u00a6 \\ )     |\n|\u2329currency\u232a| Symbols of world currencies ($ \u00a6 \u20ac \u00a6 \u20bd \u00a6 \u00a2 \u00a6 \u00a3 \u00a6 \u20a4 \u00a6 \u00a4 \u00a6 \u00a5 \u00a6 \u2133 \u00a6 \u20a3 \u00a6 \u20b4 \u00a6 \u20b8 \u00a6 \u20b9 \u00a6 \u20a9 \u00a6 \u20a6 
\u00a6 \u20ad \u00a6 \u20aa \u00a6 \u09f3 \u00a6 \u0192 \u00a6 \u20a8 \u00a6 \u0e3f \u00a6 \u20ab \u00a6 \u17db \u00a6 \u20ae \u00a6 \u20b1 \u00a6 \ufdfc \u00a6 \u20a1 \u00a6 \u20b2 \u00a6 \u060b \u00a6 \u20b5 \u00a6 \u20ba \u00a6 \u20bc \u00a6 \u20be \u00a6 \u20a0 \u00a6 \u20a7 \u00a6 \u20af \u00a6 \u20a2 \u00a6 \u20b3 \u00a6 \u20a5 \u00a6 \u20b0 \u00a6 \u20bf \u00a6 \u04b1) |\n|\u2329math\u232a    | Mathematical operation token (+ \u00a6 - \u00a6 = \u00a6 / \u00a6 * \u00a6 ^ \u00a6 \u00d7 \u00a6 \u00f7 \u00a6 \u2212 \u00a6 \u2215 \u00a6 \u2216 \u00a6 \u2217 \u00a6 \u221a \u00a6 \u221d \u00a6 \u221e \u00a6 \u2220 \u00a6 \u00b1 \u00a6 \u00b9 \u00a6 \u00b2 \u00a6 \u00b3 \u00a6 \u00bd \u00a6 \u2153 \u00a6 \u00bc \u00a6 \u00be \u00a6 % \u00a6 ~ \u00a6 \u00b7 \u00a6 \u22c5 \u00a6 \u00b0 \u00a6 \u00ba \u00a6 \u00ac \u00a6 \u0192 \u00a6 \u2200 \u00a6 \u2202 \u00a6 \u2203 \u00a6 \u2205 \u00a6 \u2207 \u00a6 \u2208 \u00a6 \u2209 \u00a6 \u220b \u00a6 \u220f \u00a6 \u2211 \u00a6 \u2227 \u00a6 \u2228 \u00a6 \u2229 \u00a6 \u222a \u00a6 \u222b \u00a6 \u2234 \u00a6 \u223c \u00a6 \u2245 \u00a6 \u2248 \u00a6 \u2260 \u00a6 \u2261 \u00a6 \u2264 \u00a6 \u2265 \u00a6 \u00aa \u00a6 \u2282 \u00a6 \u2283 \u00a6 \u2284 \u00a6 \u2286 \u00a6 \u2287 \u00a6 \u2295 \u00a6 \u2297 \u00a6 \u22a5 \u00a6 \u00a8) |\n\n---\n\n### Methods:\n- **setZone** - User zone set method\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setZone(\"com\")\n>>> asc.setZone(\"ru\")\n>>> asc.setZone(\"org\")\n>>> asc.setZone(\"net\")\n```\n\n---\n\n### Methods:\n- **clear** - Method clear all data\n- **setAlphabet** - Method set alphabet\n- **getAlphabet** - Method get alphabet\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.getAlphabet()\n'abcdefghijklmnopqrstuvwxyz'\n>>>\n>>> 
asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.getAlphabet()\n'abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f'\n>>>\n>>> asc.clear()\n>>>\n>>> asc.getAlphabet()\n'abcdefghijklmnopqrstuvwxyz'\n```\n\n---\n\n### Methods:\n- **setUnknown** - Method set unknown word\n- **getUnknown** - Method extraction unknown word\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setUnknown(\"word\")\n>>>\n>>> asc.getUnknown()\n'word'\n```\n\n---\n\n### Methods:\n- **infoIndex** - Method for print information about the dictionary\n- **token** - Method for determining the type of the token words\n- **addText** - Method of adding text for estimate\n- **collectCorpus** - Training method of assembling the text data for ASC [curpus = filename or dir, smoothing = wittenBell, modified = False, prepares = False, mod = 0.0, status = Null]\n- **pruneVocab** - Dictionary pruning method\n- **buildArpa** - Method for build ARPA\n- **writeWords** - Method for writing these words to a file\n- **writeVocab** - Method for writing dictionary data to a file\n- **writeNgrams** - Method of writing data to NGRAMs files\n- **writeMap** - Method of writing sequence map to file\n- **writeSuffix** - Method for writing data to a suffix file for digital abbreviations\n- **writeAbbrs** - Method for writing data to an abbreviation file\n- **getSuffixes** - Method for extracting the list of suffixes of digital abbreviations\n- **writeArpa** - Method of writing data to ARPA file\n- **setThreads** - Method for setting the number of threads used in work (0 - all available threads)\n- **setStemmingMethod** - Method for setting 
external stemming function\n- **loadIndex** - Binary index loading method\n- **spell** - Method for performing spell-checker\n- **analyze** - Method for analyze text\n- **addAlt** - Method for add a word/letter with an alternative letter\n- **setAlphabet** - Method for set Alphabet\n- **setPilots** - Method for set pilot words\n- **setSubstitutes** - Method for set letters to correct words from mixed alphabets\n- **addAbbr** - Method add abbreviation\n- **setAbbrs** - Method set abbreviations\n- **getAbbrs** - Method for extracting the list of abbreviations\n- **addGoodword** - Method add good word\n- **addBadword** - Method add bad word\n- **addUWord** - Method for add a word that always starts with a capital letter\n- **setUWords** - Method for add a list of identifiers for words that always start with a capital letter\n- **readArpa** - Method for reading an ARPA file, language model\n- **readVocab** - Method of reading the dictionary\n- **setEmbedding** - Method for set embedding\n- **buildIndex** - Method for build spell-checker index\n- **setAdCw** - Method for set dictionary characteristics (cw - count all words in dataset, ad - count all documents in dataset)\n- **setCode** - Method for set code language\n- **addLemma** - Method for add a Lemma to the dictionary\n- **setNSWLibCount** - Method for set the maximum number of options for analysis\n\n### Example:\n```python\n>>> import asc\n>>> \n>>> asc.infoIndex(\"./wittenbell-3-single.asc\")\n\n* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\n\n* Code: RU\n\n* Version: 1.0.0\n\n* Dictionary name: Russian - single\n\n* Locale: en_US.UTF-8\n* Alphabet: \u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044fabcdefghijklmnopqrstuvwxyz\n\n* Build date: 09/08/2020 15:39:31\n\n* Encrypted: NO\n\n* ALM type: ALMv1\n\n* Allow apostrophe: NO\n\n* Count words: 
106912195\n* Count documents: 263998\n\n* Only good words: NO\n* Mix words in dicts: YES\n* Confidence arpa: YES\n\n* Count upper words: 841915\n* Count pilots words: 15\n* Count bad words: 108790\n* Count good words: 124\n* Count substitutes: 14\n* Count abbreviations: 16532\n\n* Alternatives: \u0435 => \u0451\n* Count alternatives words: 58138\n\n* Size embedding: 28\n\n* Length n-gram: 3\n* Count n-grams: 6710202\n\n* Author: Yuriy Lobarev\n\n* Contacts: site: https://anyks.com, e-mail: forman@anyks.com\n\n* Copyright \u00a9: Yuriy Lobarev\n\n* License type: GPLv3\n* License text:\nThe GNU General Public License is a free, copyleft license for software and other kinds of works.\n\nThe licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.\n\nWhen we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.\n\nTo protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. 
Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.\n\nFor example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.\n\nDevelopers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.\n\nFor the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.\n\nSome devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.\n\nFinally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. 
To prevent this, the GPL assures that patents cannot be used to render the program non-free.\n\nThe precise terms and conditions for copying, distribution and modification follow.\n\nURL: https://www.gnu.org/licenses/gpl-3.0.ru.html\n\n* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\n\n```\n\n### Example:\n```python\n>>> import asc\n>>> import spacy\n>>> import pymorphy2\n>>> \n>>> asc.setThreads(0)\n>>> asc.setOption(asc.options_t.ascSplit)\n>>> asc.setOption(asc.options_t.ascAlter)\n>>> asc.setOption(asc.options_t.ascESplit)\n>>> asc.setOption(asc.options_t.ascRSplit)\n>>> asc.setOption(asc.options_t.ascUppers)\n>>> asc.setOption(asc.options_t.ascHyphen)\n>>> asc.setOption(asc.options_t.ascWordRep)\n>>> asc.setOption(asc.options_t.mixDicts)\n>>> asc.setOption(asc.options_t.confidence)\n>>> asc.setOption(asc.options_t.stemming)\n>>> \n>>> morphRu = pymorphy2.MorphAnalyzer()\n>>> morphEn = spacy.load('en', disable=['parser', 'ner'])\n>>> \n>>> def status(text, status):\n...     print(text, status)\n... \n>>> \n>>> def eng(word):\n...     global morphEn\n...     words = morphEn(word)\n...     word = ''.join([token.lemma_ for token in words]).strip()\n...     if word[0] != '-' and word[len(word) - 1] != '-':\n...         return word\n...     else:\n...         return \"\"\n... \n>>> \n>>> def rus(word):\n...     global morphRu\n...     if morphRu != None:\n...         word = morphRu.parse(word)[0].normal_form\n...         return word\n...     else:\n...         return \"\"\n... \n>>> \n>>> def run(word, lang):\n...     if lang == \"ru\":\n...         return rus(word.lower())\n...     elif lang == \"en\":\n...         return eng(word.lower())\n... 
\n>>> \n>>> asc.setStemmingMethod(run)\n>>> \n>>> asc.loadIndex(\"./wittenbell-3-single.asc\", \"\", status)\nLoading dictionary 1\nLoading dictionary 2\nLoading dictionary 3\nLoading dictionary 4\nLoading dictionary 5\nLoading dictionary 6\nLoading dictionary 7\nLoading dictionary 8\n...\nLoading Bloom filter 100\nLoading stemming 0\nLoading stemming 1\nLoading stemming 2\nLoading stemming 3\n...\nLoading language model 6\nLoading language model 12\nLoading language model 18\nLoading language model 25\nLoading language model 31\nLoading language model 37\n...\nLoading alternative words 1\nLoading alternative words 2\nLoading alternative words 3\nLoading alternative words 4\nLoading alternative words 5\nLoading alternative words 6\nLoading alternative words 7\n...\nLoading substitutes letters 7\nLoading substitutes letters 14\nLoading substitutes letters 21\nLoading substitutes letters 28\nLoading substitutes letters 35\nLoading substitutes letters 42\n...\n>>> \n>>> res = asc.spell(\"\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433 \u0437\u0430\u0436\u043e\u0433 \u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443\", True)\n>>> res\n('\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a \u0437\u0430\u0436\u0451\u0433 \u043f\u043e-\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443', [('\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433', '\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a'), ('\u0437\u0430\u0436\u043e\u0433', '\u0437\u0430\u0436\u0451\u0433'), ('\u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443', '\u043f\u043e-\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443')])\n>>> \n>>> res = asc.analyze(\"\u0441\u043b\u0437\u044b \u0442\u0435\u0443\u0442 \u043d\u0430 \u043c\u0440\u043e\u0437\u0435\")\n>>> res\n[('\u0442\u0435\u0443\u0442', ['\u0442\u0435\u043a\u0443\u0442']), ('\u043c\u0440\u043e\u0437\u0435', ['\u043c\u043e\u0437\u0433', '\u043c\u043e\u0440\u043e\u0437', 
'\u043c\u043e\u0440\u043e\u0437\u0435', '\u043c\u043e\u0451']), ('\u0441\u043b\u0437\u044b', ['\u0441\u043b\u0435\u0437\u044b', '\u0441\u043b\u0451\u0437\u044b'])]\n```\n\n### Example:\n```python\n>>> import asc\n>>> \n>>> asc.setThreads(0)\n>>> asc.setOption(asc.options_t.ascSplit)\n>>> asc.setOption(asc.options_t.ascAlter)\n>>> asc.setOption(asc.options_t.ascESplit)\n>>> asc.setOption(asc.options_t.ascRSplit)\n>>> asc.setOption(asc.options_t.ascUppers)\n>>> asc.setOption(asc.options_t.ascHyphen)\n>>> asc.setOption(asc.options_t.ascWordRep)\n>>> asc.setOption(asc.options_t.mixDicts)\n>>> asc.setOption(asc.options_t.confidence)\n>>> \n>>> asc.addAlt(\"\u0435\", \"\u0451\")\n>>> asc.addAlt(\"\u0435\u0436\u0438\u043a\", \"\u0451\u0436\u0438\u043a\")\n>>> asc.addAlt(\"\u041b\u0435\u0433\u043a\u0438\u0439\", \"\u041b\u0451\u0433\u043a\u0438\u0439\")\n...\n>>> asc.setAlphabet(\"\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044fabcdefghijklmnopqrstuvwxyz\")\n>>> \n>>> asc.setPilots([\"\u0430\",\"\u0443\",\"\u0432\",\"\u043e\",\"\u0441\",\"\u043a\",\"\u0431\",\"\u0438\",\"\u044f\",\"\u044d\",\"a\",\"i\",\"o\",\"e\",\"g\"])\n>>> asc.setSubstitutes({'p':'\u0440','c':'\u0441','o':'\u043e','t':'\u0442','k':'\u043a','e':'\u0435','a':'\u0430','h':'\u043d','x':'\u0445','b':'\u0432','m':'\u043c'})\n>>> \n>>> asc.idw(\"\u0421\u0431\u0435\u0440\u0431\u0430\u043d\u043a\")\n13236490857\n>>> asc.idw(\"\u0421\u043e\u0432\u043a\u043e\u043c\u0431\u0430\u043d\u043a\")\n22287680895\n>>> \n>>> asc.token(\"\u0421\u0431\u0435\u0440\u0431\u0430\u043d\u043a\")\n'<word>'\n>>> asc.token(\"\u0441\u043e\u0432\u043a\u043e\u043c\u0431\u0430\u043d\u043a\")\n'<word>'\n>>> \n>>> asc.setAbbrs({13236490857, 22287680895})\n>>> \n>>> asc.addAbbr(\"\u0421\u0428\u0410\")\n>>> asc.addAbbr(\"\u0421\u0431\u0435\u0440\")\n>>> \n>>> 
asc.token(\"\u0421\u0431\u0435\u0440\u0431\u0430\u043d\u043a\")\n'<abbr>'\n>>> asc.token(\"\u0441\u043e\u0432\u043a\u043e\u043c\u0431\u0430\u043d\u043a\")\n'<abbr>'\n>>> asc.token(\"\u0441\u0448\u0430\")\n'<abbr>'\n>>> asc.token(\"\u0421\u0411\u0415\u0420\")\n'<abbr>'\n...\n>>> asc.getAbbrs()\n{13236490857, 189243, 22287680895, 26938511}\n>>> \n>>> asc.addGoodword(\"T-34\")\n>>> asc.addGoodword(\"\u0410\u041d-25\")\n...\n>>> asc.addBadword(\"\u0438\u0439\u0442\u0438\")\n>>> asc.addBadword(\"\u0446\u0438\u0433\u0430\u043d\")\n>>> asc.addBadword(\"\u0430\u043f\u0438\u0447\u0430\u0442\u043a\u0430\")\n...\n>>> asc.addUWord(\"\u041c\u043e\u0441\u043a\u0432\u0430\")\n>>> asc.addUWord(\"\u0421\u0430\u043d\u043a\u0442-\u041f\u0435\u0442\u0435\u0440\u0431\u0443\u0440\u0433\")\n...\n>>> def statusArpa(status):\n...     print(\"Read arpa\", status)\n... \n>>> def statusVocab(status):\n...     print(\"Read vocab\", status)\n... \n>>> def statusIndex(status):\n...     print(\"Build index\", status)\n... \n>>> asc.readArpa(\"./words.arpa\", statusArpa)\nRead arpa 0\nRead arpa 1\nRead arpa 2\nRead arpa 3\nRead arpa 4\nRead arpa 5\nRead arpa 6\nRead arpa 7\nRead arpa 8\n...\n>>> asc.readVocab(\"./words.vocab\", statusVocab)\nRead vocab 0\nRead vocab 1\nRead vocab 2\nRead vocab 3\nRead vocab 4\nRead vocab 5\nRead vocab 6\n...\n>>> asc.setEmbedding({\n...     \"\u0430\": 0, \"\u0431\": 1, \"\u0432\": 2, \"\u0433\": 3, \"\u0434\": 4, \"\u0435\": 5,\n...     \"\u0451\": 5, \"\u0436\": 6, \"\u0437\": 7, \"\u0438\": 8, \"\u0439\": 8, \"\u043a\": 9,\n...     \"\u043b\": 10, \"\u043c\": 11, \"\u043d\": 12, \"\u043e\": 0, \"\u043f\": 13, \"\u0440\": 14,\n...     \"\u0441\": 15, \"\u0442\": 16, \"\u0443\": 17, \"\u0444\": 18, \"\u0445\": 19, \"\u0446\": 20,\n...     \"\u0447\": 21, \"\u0448\": 21, \"\u0449\": 21, \"\u044a\": 22, \"\u044b\": 23, \"\u044c\": 22,\n...     \"\u044d\": 5, \"\u044e\": 24, \"\u044f\": 25, \"<\": 26, \">\": 26, \"~\": 26,\n...     
\"-\": 26, \"+\": 26, \"=\": 26, \"*\": 26, \"/\": 26, \":\": 26,\n...     \"%\": 26, \"|\": 26, \"^\": 26, \"&\": 26, \"#\": 26, \"'\": 26,\n...     \"\\\\\": 26, \"0\": 27, \"1\": 27, \"2\": 27, \"3\": 27, \"4\": 27,\n...     \"5\": 27, \"6\": 27, \"7\": 27, \"8\": 27, \"9\": 27, \"a\": 0,\n...     \"b\": 2, \"c\": 15, \"d\": 4, \"e\": 5, \"f\": 18, \"g\": 3,\n...     \"h\": 12, \"i\": 8, \"j\": 6, \"k\": 9, \"l\": 10, \"m\": 11,\n...     \"n\": 12, \"o\": 0, \"p\": 14, \"q\": 13, \"r\": 14, \"s\": 15,\n...     \"t\": 16, \"u\": 24, \"v\": 21, \"w\": 22, \"x\": 19, \"y\": 17, \"z\": 7\n... }, 28)\n>>> \n>>> asc.buildIndex(statusIndex)\nBuild index 0\nBuild index 1\nBuild index 2\nBuild index 3\nBuild index 4\n...\n>>> res = asc.spell(\"\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433 \u0437\u0430\u0436\u043e\u0433 \u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443\", True)\n>>> res\n('\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a \u0437\u0430\u0436\u0435\u0433 \u043f\u043e-\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443', [('\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433', '\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a'), ('\u0437\u0430\u0436\u043e\u0433', '\u0437\u0430\u0436\u0435\u0433'), ('\u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443', '\u043f\u043e-\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443')])\n>>> \n>>> res = asc.analyze(\"\u0441\u043b\u0437\u044b \u0442\u0435\u0443\u0442 \u043d\u0430 \u043c\u0440\u043e\u0437\u0435\")\n>>> res\n[('\u0442\u0435\u0443\u0442', ['\u0442\u0435\u043a\u0443\u0442']), ('\u043c\u0440\u043e\u0437\u0435', ['\u043c\u043e\u0437\u0433', '\u043c\u043e\u0440\u043e\u0437', '\u043c\u043e\u0440\u043e\u0437\u0435', '\u043c\u043e\u0451']), ('\u0441\u043b\u0437\u044b', ['\u0441\u043b\u0435\u0437\u044b', '\u0441\u043b\u0451\u0437\u044b'])]\n```\n\n### Example:\n```python\n>>> import asc\n>>> \n>>> asc.setThreads(0)\n>>> 
asc.setOption(asc.options_t.ascSplit)\n>>> asc.setOption(asc.options_t.ascAlter)\n>>> asc.setOption(asc.options_t.ascESplit)\n>>> asc.setOption(asc.options_t.ascRSplit)\n>>> asc.setOption(asc.options_t.ascUppers)\n>>> asc.setOption(asc.options_t.ascHyphen)\n>>> asc.setOption(asc.options_t.ascWordRep)\n>>> asc.setOption(asc.options_t.mixDicts)\n>>> asc.setOption(asc.options_t.confidence)\n>>> \n>>> asc.addAlt(\"\u0435\", \"\u0451\")\n>>> asc.addAlt(\"\u0435\u0436\u0438\u043a\", \"\u0451\u0436\u0438\u043a\")\n>>> asc.addAlt(\"\u0437\u0430\u0436\u0435\u0433\", \"\u0437\u0430\u0436\u0451\u0433\")\n>>> asc.addAlt(\"\u041b\u0435\u0433\u043a\u0438\u0439\", \"\u041b\u0451\u0433\u043a\u0438\u0439\")\n...\n>>> asc.setAlphabet(\"\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044fabcdefghijklmnopqrstuvwxyz\")\n>>> \n>>> asc.setPilots([\"\u0430\",\"\u0443\",\"\u0432\",\"\u043e\",\"\u0441\",\"\u043a\",\"\u0431\",\"\u0438\",\"\u044f\",\"\u044d\",\"a\",\"i\",\"o\",\"e\",\"g\"])\n>>> asc.setSubstitutes({'p':'\u0440','c':'\u0441','o':'\u043e','t':'\u0442','k':'\u043a','e':'\u0435','a':'\u0430','h':'\u043d','x':'\u0445','b':'\u0432','m':'\u043c'})\n>>> \n>>> asc.addAbbr(\"\u0421\u0428\u0410\")\n>>> asc.addAbbr(\"\u0421\u0431\u0435\u0440\")\n...\n>>> asc.addGoodword(\"T-34\")\n>>> asc.addGoodword(\"\u0410\u041d-25\")\n...\n>>> asc.addBadword(\"\u0438\u0439\u0442\u0438\")\n>>> asc.addBadword(\"\u0446\u0438\u0433\u0430\u043d\")\n>>> asc.addBadword(\"\u0430\u043f\u0438\u0447\u0430\u0442\u043a\u0430\")\n...\n>>> asc.idw(\"\u041c\u043e\u0441\u043a\u0432\u0430\")\n50387419219\n>>> asc.idw(\"\u0421\u0430\u043d\u043a\u0442-\u041f\u0435\u0442\u0435\u0440\u0431\u0443\u0440\u0433\")\n68256898625\n>>> \n>>> asc.setUWords({50387419219: 1, 68256898625: 1})\n>>> \n...\n>>> def statusArpa(status):\n...     print(\"Read arpa\", status)\n... 
\n>>> def statusIndex(status):\n...     print(\"Build index\", status)\n... \n>>> asc.readArpa(\"./words.arpa\", statusArpa)\nRead arpa 0\nRead arpa 1\nRead arpa 2\nRead arpa 3\nRead arpa 4\nRead arpa 5\nRead arpa 6\nRead arpa 7\nRead arpa 8\n...\n>>> asc.setAdCw(38120, 13)\n>>> \n>>> asc.setEmbedding({\n...     \"\u0430\": 0, \"\u0431\": 1, \"\u0432\": 2, \"\u0433\": 3, \"\u0434\": 4, \"\u0435\": 5,\n...     \"\u0451\": 5, \"\u0436\": 6, \"\u0437\": 7, \"\u0438\": 8, \"\u0439\": 8, \"\u043a\": 9,\n...     \"\u043b\": 10, \"\u043c\": 11, \"\u043d\": 12, \"\u043e\": 0, \"\u043f\": 13, \"\u0440\": 14,\n...     \"\u0441\": 15, \"\u0442\": 16, \"\u0443\": 17, \"\u0444\": 18, \"\u0445\": 19, \"\u0446\": 20,\n...     \"\u0447\": 21, \"\u0448\": 21, \"\u0449\": 21, \"\u044a\": 22, \"\u044b\": 23, \"\u044c\": 22,\n...     \"\u044d\": 5, \"\u044e\": 24, \"\u044f\": 25, \"<\": 26, \">\": 26, \"~\": 26,\n...     \"-\": 26, \"+\": 26, \"=\": 26, \"*\": 26, \"/\": 26, \":\": 26,\n...     \"%\": 26, \"|\": 26, \"^\": 26, \"&\": 26, \"#\": 26, \"'\": 26,\n...     \"\\\\\": 26, \"0\": 27, \"1\": 27, \"2\": 27, \"3\": 27, \"4\": 27,\n...     \"5\": 27, \"6\": 27, \"7\": 27, \"8\": 27, \"9\": 27, \"a\": 0,\n...     \"b\": 2, \"c\": 15, \"d\": 4, \"e\": 5, \"f\": 18, \"g\": 3,\n...     \"h\": 12, \"i\": 8, \"j\": 6, \"k\": 9, \"l\": 10, \"m\": 11,\n...     \"n\": 12, \"o\": 0, \"p\": 14, \"q\": 13, \"r\": 14, \"s\": 15,\n...     \"t\": 16, \"u\": 24, \"v\": 21, \"w\": 22, \"x\": 19, \"y\": 17, \"z\": 7\n... 
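# The mapping groups visually and phonetically similar characters under\n... # a single index (\u0435 and \u0451 both map to 5, \u043e and o to 0), so candidate\n... # search tolerates such confusions; this reading is our assumption,\n... # not official documentation.\n... 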
}, 28)\n>>> \n>>> asc.buildIndex(statusIndex)\nBuild index 0\nBuild index 1\nBuild index 2\nBuild index 3\nBuild index 4\n...\n>>> res = asc.spell(\"\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433 \u0437\u0430\u0436\u043e\u0433 \u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443\", True)\n>>> res\n('\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a \u0437\u0430\u0436\u0451\u0433 \u043f\u043e-\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443', [('\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433', '\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a'), ('\u0437\u0430\u0436\u043e\u0433', '\u0437\u0430\u0436\u0451\u0433'), ('\u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443', '\u043f\u043e-\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443')])\n>>> \n>>> res = asc.analyze(\"\u0441\u043b\u0437\u044b \u0442\u0435\u0443\u0442 \u043d\u0430 \u043c\u0440\u043e\u0437\u0435\")\n>>> res\n[('\u0442\u0435\u0443\u0442', ['\u0442\u0435\u043a\u0443\u0442']), ('\u043c\u0440\u043e\u0437\u0435', ['\u043c\u043e\u0437\u0433', '\u043c\u043e\u0440\u043e\u0437', '\u043c\u043e\u0440\u043e\u0437\u0435', '\u043c\u043e\u0451']), ('\u0441\u043b\u0437\u044b', ['\u0441\u043b\u0435\u0437\u044b', '\u0441\u043b\u0451\u0437\u044b'])]\n```\n\n### Example:\n```python\n>>> import asc\n>>> import spacy\n>>> import pymorphy2\n>>> \n>>> asc.setThreads(0)\n>>> asc.setOption(asc.options_t.ascSplit)\n>>> asc.setOption(asc.options_t.ascAlter)\n>>> asc.setOption(asc.options_t.ascESplit)\n>>> asc.setOption(asc.options_t.ascRSplit)\n>>> asc.setOption(asc.options_t.ascUppers)\n>>> asc.setOption(asc.options_t.ascHyphen)\n>>> asc.setOption(asc.options_t.ascWordRep)\n>>> asc.setOption(asc.options_t.mixDicts)\n>>> asc.setOption(asc.options_t.confidence)\n>>> asc.setOption(asc.options_t.stemming)\n>>> \n>>> asc.addAlt(\"\u0435\", \"\u0451\")\n>>> asc.addAlt(\"\u0435\u0436\u0438\u043a\", \"\u0451\u0436\u0438\u043a\")\n>>> 
asc.addAlt(\"\u0437\u0430\u0436\u0435\u0433\", \"\u0437\u0430\u0436\u0451\u0433\")\n>>> asc.addAlt(\"\u041b\u0435\u0433\u043a\u0438\u0439\", \"\u041b\u0451\u0433\u043a\u0438\u0439\")\n...\n>>> asc.setAlphabet(\"\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044fabcdefghijklmnopqrstuvwxyz\")\n>>> \n>>> asc.setPilots([\"\u0430\",\"\u0443\",\"\u0432\",\"\u043e\",\"\u0441\",\"\u043a\",\"\u0431\",\"\u0438\",\"\u044f\",\"\u044d\",\"a\",\"i\",\"o\",\"e\",\"g\"])\n>>> asc.setSubstitutes({'p':'\u0440','c':'\u0441','o':'\u043e','t':'\u0442','k':'\u043a','e':'\u0435','a':'\u0430','h':'\u043d','x':'\u0445','b':'\u0432','m':'\u043c'})\n>>> \n>>> asc.addAbbr(\"\u0421\u0428\u0410\")\n>>> asc.addAbbr(\"\u0421\u0431\u0435\u0440\")\n...\n>>> asc.addGoodword(\"T-34\")\n>>> asc.addGoodword(\"\u0410\u041d-25\")\n...\n>>> asc.addBadword(\"\u0438\u0439\u0442\u0438\")\n>>> asc.addBadword(\"\u0446\u0438\u0433\u0430\u043d\")\n>>> asc.addBadword(\"\u0430\u043f\u0438\u0447\u0430\u0442\u043a\u0430\")\n...\n>>> asc.addUWord(\"\u041c\u043e\u0441\u043a\u0432\u0430\")\n>>> asc.addUWord(\"\u0421\u0430\u043d\u043a\u0442-\u041f\u0435\u0442\u0435\u0440\u0431\u0443\u0440\u0433\")\n...\n>>> morphRu = pymorphy2.MorphAnalyzer()\n>>> morphEn = spacy.load('en', disable=['parser', 'ner'])\n>>> \n>>> def statusArpa(status):\n...     print(\"Read arpa\", status)\n... \n>>> def statusIndex(status):\n...     print(\"Build index\", status)\n... \n>>> def statusStemming(status):\n...    print(\"Build stemming\", status)\n...\n>>> def eng(word):\n...     global morphEn\n...     words = morphEn(word)\n...     word = ''.join([token.lemma_ for token in words]).strip()\n...     if word[0] != '-' and word[len(word) - 1] != '-':\n...         return word\n...     else:\n...         return \"\"\n... \n>>> def rus(word):\n...     global morphRu\n...     if morphRu != None:\n...   
      word = morphRu.parse(word)[0].normal_form\n...         return word\n...     else:\n...         return \"\"\n... \n>>> def run(word, lang):\n...     if lang == \"ru\":\n...         return rus(word.lower())\n...     elif lang == \"en\":\n...         return eng(word.lower())\n... \n>>> asc.readArpa(\"./words.arpa\", statusArpa)\nRead arpa 0\nRead arpa 1\nRead arpa 2\nRead arpa 3\nRead arpa 4\nRead arpa 5\nRead arpa 6\nRead arpa 7\nRead arpa 8\n...\n>>> asc.setAdCw(38120, 13)\n>>> \n>>> asc.setEmbedding({\n...     \"\u0430\": 0, \"\u0431\": 1, \"\u0432\": 2, \"\u0433\": 3, \"\u0434\": 4, \"\u0435\": 5,\n...     \"\u0451\": 5, \"\u0436\": 6, \"\u0437\": 7, \"\u0438\": 8, \"\u0439\": 8, \"\u043a\": 9,\n...     \"\u043b\": 10, \"\u043c\": 11, \"\u043d\": 12, \"\u043e\": 0, \"\u043f\": 13, \"\u0440\": 14,\n...     \"\u0441\": 15, \"\u0442\": 16, \"\u0443\": 17, \"\u0444\": 18, \"\u0445\": 19, \"\u0446\": 20,\n...     \"\u0447\": 21, \"\u0448\": 21, \"\u0449\": 21, \"\u044a\": 22, \"\u044b\": 23, \"\u044c\": 22,\n...     \"\u044d\": 5, \"\u044e\": 24, \"\u044f\": 25, \"<\": 26, \">\": 26, \"~\": 26,\n...     \"-\": 26, \"+\": 26, \"=\": 26, \"*\": 26, \"/\": 26, \":\": 26,\n...     \"%\": 26, \"|\": 26, \"^\": 26, \"&\": 26, \"#\": 26, \"'\": 26,\n...     \"\\\\\": 26, \"0\": 27, \"1\": 27, \"2\": 27, \"3\": 27, \"4\": 27,\n...     \"5\": 27, \"6\": 27, \"7\": 27, \"8\": 27, \"9\": 27, \"a\": 0,\n...     \"b\": 2, \"c\": 15, \"d\": 4, \"e\": 5, \"f\": 18, \"g\": 3,\n...     \"h\": 12, \"i\": 8, \"j\": 6, \"k\": 9, \"l\": 10, \"m\": 11,\n...     \"n\": 12, \"o\": 0, \"p\": 14, \"q\": 13, \"r\": 14, \"s\": 15,\n...     \"t\": 16, \"u\": 24, \"v\": 21, \"w\": 22, \"x\": 19, \"y\": 17, \"z\": 7\n... 
}, 28)\n>>> \n>>> asc.setCode(\"ru\")\n>>> \n>>> asc.buildIndex(statusIndex)\nBuild index 0\nBuild index 1\nBuild index 2\nBuild index 3\nBuild index 4\n...\n>>> asc.setStemmingMethod(run)\n>>>\n>>> asc.buildStemming(statusStemming)\nBuild stemming 0\nBuild stemming 1\nBuild stemming 2\nBuild stemming 3\nBuild stemming 4\nBuild stemming 5\n...\n>>> asc.addLemma(\"\u0433\u043e\u0432\u043e\u0440\u0438\u043b\")\n>>> asc.addLemma(\"\u0445\u043e\u0434\u0438\u0442\u044c\")\n...\n>>> asc.setNSWLibCount(50000)\n>>> \n>>> res = asc.spell(\"\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433 \u0437\u0430\u0436\u043e\u0433 \u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443\", True)\n>>> res\n('\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a \u0437\u0430\u0436\u0451\u0433 \u043f\u043e-\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443', [('\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433', '\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0438\u043a'), ('\u0437\u0430\u0436\u043e\u0433', '\u0437\u0430\u0436\u0451\u0433'), ('\u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443', '\u043f\u043e-\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443')])\n>>> \n>>> res = asc.analyze(\"\u0441\u043b\u0437\u044b \u0442\u0435\u0443\u0442 \u043d\u0430 \u043c\u0440\u043e\u0437\u0435\")\n>>> res\n[('\u0442\u0435\u0443\u0442', ['\u0442\u0435\u043a\u0443\u0442']), ('\u043c\u0440\u043e\u0437\u0435', ['\u043c\u043e\u0437\u0433', '\u043c\u043e\u0440\u043e\u0437', '\u043c\u043e\u0440\u043e\u0437\u0435', '\u043c\u043e\u0451']), ('\u0441\u043b\u0437\u044b', ['\u0441\u043b\u0435\u0437\u044b', '\u0441\u043b\u0451\u0437\u044b'])]\n```\n\n---\n\n### Methods:\n- **setOption** - Library options setting method\n- **unsetOption** - Disable module option method\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.unsetOption(asc.options_t.debug)\n>>> asc.unsetOption(asc.options_t.mixDicts)\n>>> asc.unsetOption(asc.options_t.onlyGood)\n>>> 
asc.unsetOption(asc.options_t.confidence)\n...\n```\n\n#### Description\n| Options     | Description                                                                              |\n|-------------|------------------------------------------------------------------------------------------|\n| debug       | Flag enabling debug mode                                                                 |\n| bloom       | Flag allowing use of a Bloom filter to check words                                       |\n| uppers      | Flag that allows you to correct the case of letters                                      |\n| stemming    | Flag for stemming activation                                                             |\n| onlyGood    | Flag allowing only words from the white list to be considered                            |\n| mixDicts    | Flag allowing the use of words consisting of mixed dictionaries                          |\n| allowUnk    | Flag allowing unknown words                                                              |\n| resetUnk    | Flag to reset the frequency of an unknown word                                           |\n| allGrams    | Flag allowing accounting for all collected N-grams                                       |\n| onlyTypos   | Flag to only correct typos                                                               |\n| lowerCase   | Flag enabling case-insensitive mode                                                      |\n| confidence  | Flag to load the ARPA file without pre-processing the words                              |\n| tokenWords  | Flag to account only for tokens that match words when assembling N-grams                 |\n| interpolate | Flag allowing the use of interpolation in estimation                                     |\n| ascSplit    | Flag to allow splitting of merged words                                                  |\n| ascAlter    | Flag that allows you to replace alternative letters in words               
              |\n| ascESplit   | Flag to allow splitting of misspelled concatenated words                                 |\n| ascRSplit   | Flag that allows you to combine words separated by a space                               |\n| ascUppers   | Flag that allows you to correct the case of letters                                      |\n| ascHyphen   | Flag to allow splitting of concatenated words with hyphens                               |\n| ascSkipUpp  | Flag to skip uppercase words                                                             |\n| ascSkipLat  | Flag allowing words in the Latin alphabet to be skipped                                  |\n| ascSkipHyp  | Flag to skip hyphenated words                                                            |\n| ascWordRep  | Flag that allows you to remove duplicate words                                           |\n\n---\n\n### Methods:\n- **erratum** - Method for searching for typos in text\n- **token** - Method for determining the token type of a word\n- **split** - Method for splitting clumped words\n- **splitByHyphens** - Method for splitting clumped words by hyphens\n- **check** - Method for checking whether a word exists in the dictionary\n\n### Example:\n```python\n>>> import asc\n>>> \n>>> asc.setThreads(0)\n>>> asc.setOption(asc.options_t.ascSplit)\n>>> asc.setOption(asc.options_t.ascAlter)\n>>> asc.setOption(asc.options_t.ascESplit)\n>>> asc.setOption(asc.options_t.ascRSplit)\n>>> asc.setOption(asc.options_t.ascUppers)\n>>> asc.setOption(asc.options_t.ascHyphen)\n>>> asc.setOption(asc.options_t.ascWordRep)\n>>> asc.setOption(asc.options_t.mixDicts)\n>>> asc.setOption(asc.options_t.confidence)\n>>> \n>>> def status(text, status):\n...     print(text, status)\n... 
\n>>> \n>>> asc.loadIndex(\"./wittenbell-3-single.asc\", \"\", status)\nLoading dictionary 1\nLoading dictionary 2\nLoading dictionary 3\nLoading dictionary 4\nLoading dictionary 5\nLoading dictionary 6\nLoading dictionary 7\nLoading dictionary 8\n...\nLoading Bloom filter 100\nLoading stemming 100\nLoading language model 6\nLoading language model 12\nLoading language model 18\nLoading language model 25\nLoading language model 31\nLoading language model 37\n...\nLoading alternative words 1\nLoading alternative words 2\nLoading alternative words 3\nLoading alternative words 4\nLoading alternative words 5\nLoading alternative words 6\nLoading alternative words 7\n...\nLoading substitutes letters 7\nLoading substitutes letters 14\nLoading substitutes letters 21\nLoading substitutes letters 28\nLoading substitutes letters 35\nLoading substitutes letters 42\n...\n>>> asc.erratum(\"\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433 \u0437\u0430\u0436\u0451\u0433 \u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443\")\n['\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u0435\u0433', '\u043f\u0430\u0432\u0437\u0440\u043e\u0441\u043b\u043e\u043c\u0443']\n>>> asc.token(\"word\")\n'<word>'\n>>> asc.token(\"12\")\n'<num>'\n>>> asc.token(\"127.0.0.1\")\n'<url>'\n>>> asc.token(\"14-33\")\n'<range>'\n>>> asc.token(\"14:44:22\")\n'<time>'\n>>> asc.token(\"08/02/2020\")\n'<date>'\n>>> \n>>> asc.split(\"\u043f\u0440\u0438\u0432\u0435\u0442\u043a\u0430\u043a\u0434\u0435\u043b\u0430\")\n'\u043f\u0440\u0438\u0432\u0435\u0442 \u043a\u0430\u043a \u0414\u0435\u043b\u0430'\n>>> 
asc.split(\"\u0431\u044b\u043b\u043c\u0430\u0441\u0442\u0435\u0440\u043e\u043c\u043f\u0440\u044f\u0442\u0430\u0442\u044c\u0441\u044f\u043d\u043e\u043d\u0435\u043c\u043e\u0433\u0432\u043e\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u044c\u0441\u044f\u0441\u0432\u043e\u0438\u043c\u0438\u0442\u0430\u043b\u0430\u043d\u0442\u0430\u043c\u0438\u043f\u043e\u0442\u043e\u043c\u0443\")\n'\u0431\u044b\u043b \u043c\u0430\u0441\u0442\u0435\u0440\u043e\u043c \u043f\u0440\u044f\u0442\u0430\u0442\u044c\u0441\u044f \u043d\u043e \u043d\u0435 \u043c\u043e\u0433 \u0432\u043e\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u044c\u0441\u044f \u0441\u0432\u043e\u0438\u043c\u0438 \u0442\u0430\u043b\u0430\u043d\u0442\u0430\u043c\u0438 \u043f\u043e\u0442\u043e\u043c\u0443'\n>>> asc.split(\"\u042f\u0431\u0438\u043d\u0430\u0442\u0430\u043a\u043e\u0439\u0441\u043e\u0441\u0442\u0430\u0432\u0431\u044b\u0441\u0445\u043e\u0434\u0438\u043b\u0435\u0441\u043b\u0438\u0431\")\n'\u044f \u0431 \u0438 \u043d\u0430 \u0442\u0430\u043a\u043e\u0439 \u0441\u043e\u0441\u0442\u0430\u0432 \u0431\u044b \u0441\u0445\u043e\u0434\u0438\u043b \u0435\u0441\u043b\u0438 \u0431'\n>>> asc.split(\"\u043b\u0435\u0442\u0447\u0435\u0440\u0435\u0437XVI\u0440\u0435\u0442\u0440\u043e\u0441\u043f\u0435\u043a\u0442\u0438\u0432\u043d\u043e\u043f\u0440\u043e\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u0442\u044c\u044d\u0442\u043e\u0431\u0443\u0434\u0435\u0442\")\n'\u043b\u0435\u0442 \u0447\u0435\u0440\u0435\u0437 XVI \u0440\u0435\u0442\u0440\u043e\u0441\u043f\u0435\u043a\u0442\u0438\u0432\u043d\u043e \u043f\u0440\u043e\u0441\u043c\u0430\u0442\u0440\u0438\u0432\u0430\u0442\u044c \u044d\u0442\u043e \u0431\u0443\u0434\u0435\u0442'\n>>> \n>>> asc.splitByHyphens(\"\u043f\u0440\u0438\u0432\u0435\u0442-\u043a\u0430\u043a-\u0434\u0435\u043b\u0430\")\n'\u043f\u0440\u0438\u0432\u0435\u0442 \u043a\u0430\u043a \u0434\u0435\u043b\u0430'\n>>> 
asc.splitByHyphens(\"\u043a\u0430\u043a-\u0442\u043e-\u0442\u0430\u043a\")\n'\u043a\u0430\u043a \u0442\u043e \u0442\u0430\u043a'\n>>> asc.splitByHyphens(\"\u043a\u0430\u043a-\u0442\u043e\")\n'\u043a\u0430\u043a-\u0442\u043e'\n>>> \n>>> asc.check(\"hello\")\nTrue\n>>> asc.check(\"\u0428\u0432\u0430\u0440\u0446\u0435\u043d\u0435\u0433\u0433\u0435\u0440\")\nTrue\n>>> asc.check(\"\u043f\u0440\u044b\u0432\u0435\u0442\")\nFalse\n```\n\n---\n\n### Methods:\n- **setSize** - Method for setting the N-gram size\n- **setAlmV2** - Method for selecting the ALMv2 language model type\n- **unsetAlmV2** - Method for deselecting the ALMv2 language model type\n- **setLocale** - Method for setting the locale (default: en_US.UTF-8)\n- **setCode** - Method for setting the language code\n- **setLictype** - Method for setting the dictionary license type\n- **setName** - Method for setting the dictionary name\n- **setAuthor** - Method for setting the dictionary author\n- **setCopyright** - Method for setting the dictionary copyright\n- **setLictext** - Method for setting the dictionary license text\n- **setContacts** - Method for setting the contact details of the dictionary author\n- **pruneArpa** - Language model pruning method\n- **addWord** - Method for adding a word to the dictionary\n- **generateEmbedding** - Method for generating the embedding\n- **setSizeEmbedding** - Method for setting the embedding size\n\n#### Description\n| Smoothing       |\n|-----------------|\n| wittenBell      |\n| addSmooth       |\n| goodTuring      |\n| constDiscount   |\n| naturalDiscount |\n| kneserNey       |\n| modKneserNey    |\n\n### Example:\n```python\n>>> import asc\n>>> \n>>> asc.setSize(3)\n>>> asc.setAlmV2()\n>>> asc.setThreads(0)\n>>> asc.setLocale(\"en_US.UTF-8\")\n>>> \n>>> asc.setOption(asc.options_t.allowUnk)\n>>> asc.setOption(asc.options_t.resetUnk)\n>>> asc.setOption(asc.options_t.mixDicts)\n>>> asc.setOption(asc.options_t.tokenWords)\n>>> asc.setOption(asc.options_t.confidence)\n>>> asc.setOption(asc.options_t.interpolate)\n>>> \n>>> 
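# addAlt(\"\u0435\", \"\u0451\") registers a letter-level alternative, while a word\n>>> # pair such as addAlt(\"\u041b\u0435\u0433\u043a\u0438\u0439\", \"\u041b\u0451\u0433\u043a\u0438\u0439\") pins a concrete\n>>> # spelling; this split of roles is our interpretation of the API.\n>>> 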
asc.addAlt(\"\u0435\", \"\u0451\")\n>>> asc.addAlt(\"\u0435\u0436\u0438\u043a\", \"\u0451\u0436\u0438\u043a\")\n>>> asc.addAlt(\"\u0437\u0430\u0436\u0435\u0433\", \"\u0437\u0430\u0436\u0451\u0433\")\n>>> asc.addAlt(\"\u041b\u0435\u0433\u043a\u0438\u0439\", \"\u041b\u0451\u0433\u043a\u0438\u0439\")\n>>> \n>>> asc.setAlphabet(\"\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044fabcdefghijklmnopqrstuvwxyz\")\n>>> \n>>> asc.setPilots([\"\u0430\",\"\u0443\",\"\u0432\",\"\u043e\",\"\u0441\",\"\u043a\",\"\u0431\",\"\u0438\",\"\u044f\",\"\u044d\",\"a\",\"i\",\"o\",\"e\",\"g\"])\n>>> asc.setSubstitutes({'p':'\u0440','c':'\u0441','o':'\u043e','t':'\u0442','k':'\u043a','e':'\u0435','a':'\u0430','h':'\u043d','x':'\u0445','b':'\u0432','m':'\u043c'})\n>>> \n>>> asc.addAbbr(\"\u0421\u0428\u0410\")\n>>> asc.addAbbr(\"\u0421\u0431\u0435\u0440\")\n>>> asc.addGoodword(\"T-34\")\n>>> asc.addGoodword(\"\u0410\u041d-25\")\n>>> \n>>> asc.addBadword(\"\u0438\u0439\u0442\u0438\")\n>>> asc.addBadword(\"\u0446\u0438\u0433\u0430\u043d\")\n>>> asc.addBadword(\"\u0430\u043f\u0438\u0447\u0430\u0442\u043a\u0430\")\n>>> \n>>> asc.addUWord(\"\u041c\u043e\u0441\u043a\u0432\u0430\")\n>>> asc.addUWord(\"\u0421\u0430\u043d\u043a\u0442-\u041f\u0435\u0442\u0435\u0440\u0431\u0443\u0440\u0433\")\n>>> \n>>> def statusMap(status):\n...     print(\"Write map\", status)\n... \n>>> def statusArpa1(status):\n...     print(\"Build arpa\", status)\n... \n>>> def statusArpa2(status):\n...     print(\"Write arpa\", status)\n... \n>>> def statusWords(status):\n...     print(\"Write words\", status)\n... \n>>> def statusVocab(status):\n...     print(\"Write vocab\", status)\n... \n>>> def statusAbbrs(status):\n...     print(\"Write abbrs\", status)\n... \n>>> def statusPrune(status):\n...     print(\"Prune vocab\", status)\n... \n>>> def statusNgram(status):\n...     
print(\"Write ngram\", status)\n... \n>>> def statusIndex(status):\n...     print(\"Build index\", status)\n... \n>>> def status(text, status):\n...     print(text, status)\n... \n>>> asc.addText(\"The future is now\", 0)\n>>> \n>>> asc.collectCorpus(\"./corpus/text.txt\", asc.smoothing_t.wittenBell, 0.0, False, False, status)\nRead text corpora 0\nRead text corpora 1\nRead text corpora 2\nRead text corpora 3\nRead text corpora 4\nRead text corpora 5\nRead text corpora 6\n...\n>>> asc.pruneVocab(-15.0, 0, 0, statusPrune)\nPrune vocab 0\nPrune vocab 1\nPrune vocab 2\nPrune vocab 3\nPrune vocab 4\nPrune vocab 5\nPrune vocab 6\n...\n# Prune VOCAB or prune ARPA example\n>>> asc.pruneArpa(0.015, 3, statusPrune)\nPrune arpa 0\nPrune arpa 1\nPrune arpa 2\nPrune arpa 3\nPrune arpa 4\nPrune arpa 5\nPrune arpa 6\n...\n>>> asc.buildArpa(statusArpa1)\nBuild arpa 0\nBuild arpa 1\nBuild arpa 2\nBuild arpa 3\nBuild arpa 4\nBuild arpa 5\nBuild arpa 6\n...\n>>> asc.writeMap(\"./words.map\", statusMap)\nWrite map 0\nWrite map 1\nWrite map 2\nWrite map 3\nWrite map 4\nWrite map 5\nWrite map 6\n...\n>>> asc.writeArpa(\"./words.arpa\", statusArpa2)\nWrite arpa 0\nWrite arpa 1\nWrite arpa 2\nWrite arpa 3\nWrite arpa 4\nWrite arpa 5\nWrite arpa 6\n...\n>>> asc.writeWords(\"./words.txt\", statusWords)\nWrite words 0\nWrite words 1\nWrite words 2\nWrite words 3\nWrite words 4\nWrite words 5\nWrite words 6\n...\n>>> asc.writeVocab(\"./words.vocab\", statusVocab)\nWrite vocab 0\nWrite vocab 1\nWrite vocab 2\nWrite vocab 3\nWrite vocab 4\nWrite vocab 5\nWrite vocab 6\n...\n>>> asc.writeAbbrs(\"./words1.abbr\", statusAbbrs)\nWrite abbrs 50\nWrite abbrs 100\n>>> \n>>> asc.writeSuffix(\"./words2.abbr\", statusAbbrs)\nWrite abbrs 10\nWrite abbrs 20\nWrite abbrs 30\nWrite abbrs 40\nWrite abbrs 50\nWrite abbrs 60\n...\n>>> asc.writeNgrams(\"./words.ngram\", statusNgram)\nWrite ngram 0\nWrite ngram 1\nWrite ngram 2\nWrite ngram 3\nWrite ngram 4\nWrite ngram 5\nWrite ngram 6\n...\n>>> 
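# The metadata set below (code, license, name, author, copyright,\n>>> # contacts) is embedded into the dictionary written by saveIndex and\n>>> # reported back by infoIndex, as the dump further down shows.\n>>> 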
asc.setCode(\"RU\")\n>>> asc.setLictype(\"MIT\")\n>>> asc.setName(\"Russian\")\n>>> asc.setAuthor(\"You name\")\n>>> asc.setCopyright(\"You company LLC\")\n>>> asc.setLictext(\"... License text ...\")\n>>> asc.setContacts(\"site: https://example.com, e-mail: info@example.com\")\n>>> \n>>> asc.setEmbedding({\n...     \"\u0430\": 0, \"\u0431\": 1, \"\u0432\": 2, \"\u0433\": 3, \"\u0434\": 4, \"\u0435\": 5,\n...     \"\u0451\": 5, \"\u0436\": 6, \"\u0437\": 7, \"\u0438\": 8, \"\u0439\": 8, \"\u043a\": 9,\n...     \"\u043b\": 10, \"\u043c\": 11, \"\u043d\": 12, \"\u043e\": 0, \"\u043f\": 13, \"\u0440\": 14,\n...     \"\u0441\": 15, \"\u0442\": 16, \"\u0443\": 17, \"\u0444\": 18, \"\u0445\": 19, \"\u0446\": 20,\n...     \"\u0447\": 21, \"\u0448\": 21, \"\u0449\": 21, \"\u044a\": 22, \"\u044b\": 23, \"\u044c\": 22,\n...     \"\u044d\": 5, \"\u044e\": 24, \"\u044f\": 25, \"<\": 26, \">\": 26, \"~\": 26,\n...     \"-\": 26, \"+\": 26, \"=\": 26, \"*\": 26, \"/\": 26, \":\": 26,\n...     \"%\": 26, \"|\": 26, \"^\": 26, \"&\": 26, \"#\": 26, \"'\": 26,\n...     \"\\\\\": 26, \"0\": 27, \"1\": 27, \"2\": 27, \"3\": 27, \"4\": 27,\n...     \"5\": 27, \"6\": 27, \"7\": 27, \"8\": 27, \"9\": 27, \"a\": 0,\n...     \"b\": 2, \"c\": 15, \"d\": 4, \"e\": 5, \"f\": 18, \"g\": 3,\n...     \"h\": 12, \"i\": 8, \"j\": 6, \"k\": 9, \"l\": 10, \"m\": 11,\n...     \"n\": 12, \"o\": 0, \"p\": 14, \"q\": 13, \"r\": 14, \"s\": 15,\n...     \"t\": 16, \"u\": 24, \"v\": 21, \"w\": 22, \"x\": 19, \"y\": 17, \"z\": 7\n... 
}, 28)\n>>> \n>>> asc.saveIndex(\"./3-wittenbell.asc\", \"\", 128, status)\nRead words 1\nRead words 2\nRead words 3\nRead words 4\nRead words 5\nRead words 6\n...\nTrain dictionary 0\nTrain dictionary 1\nTrain dictionary 2\nTrain dictionary 3\nTrain dictionary 4\nTrain dictionary 5\nTrain dictionary 6\n...\nDump dictionary 0\nDump dictionary 1\nDump dictionary 2\nDump dictionary 3\nDump dictionary 4\nDump dictionary 5\nDump dictionary 6\n...\nDump alternative letters 100\nDump alternative letters 100\nDump alternative words 200\nDump alternative words 100\nDump language model 0\nDump language model 100\nDump substitutes letters 9\nDump substitutes letters 18\nDump substitutes letters 27\nDump substitutes letters 36\nDump substitutes letters 45\nDump substitutes letters 54\nDump substitutes letters 63\nDump substitutes letters 72\nDump substitutes letters 81\nDump substitutes letters 90\nDump substitutes letters 100\nDump substitutes letters 100\n>>>\n>>> asc.infoIndex(\"./3-wittenbell.asc\")\n\n* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\n\n* Code: RU\n\n* Version: 1.0.0\n\n* Dictionary name: Russian\n\n* Locale: en_US.UTF-8\n* Alphabet: \u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044fabcdefghijklmnopqrstuvwxyz\n\n* Build date: 09/14/2020 01:39:50\n\n* Encrypted: NO\n\n* ALM type: ALMv2\n\n* Allow apostrophe: NO\n\n* Count words: 38120\n* Count documents: 13\n\n* Only good words: NO\n* Mix words in dicts: YES\n* Confidence arpa: YES\n\n* Count upper words: 2\n* Count pilots words: 15\n* Count bad words: 3\n* Count good words: 2\n* Count substitutes: 11\n* Count abbreviations: 12\n\n* Alternatives: \u0435 => \u0451\n* Count alternatives words: 1\n\n* Size embedding: 28\n\n* Length n-gram: 1\n\n* Author: You name\n\n* Contacts: site: https://example.com, e-mail: info@example.com\n\n* Copyright 
\u00a9: You company LLC\n\n* License type: MIT\n* License text:\n... License text ...\n\n* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *\n\n```\n\n### Example:\n```python\n>>> import asc\n>>> \n>>> asc.setSize(3)\n>>> asc.setThreads(0)\n>>> asc.setLocale(\"en_US.UTF-8\")\n>>> \n>>> asc.setOption(asc.options_t.allowUnk)\n>>> asc.setOption(asc.options_t.resetUnk)\n>>> asc.setOption(asc.options_t.mixDicts)\n>>> asc.setOption(asc.options_t.tokenWords)\n>>> asc.setOption(asc.options_t.confidence)\n>>> asc.setOption(asc.options_t.interpolate)\n>>> \n>>> asc.addAlt(\"\u0435\", \"\u0451\")\n>>> asc.addAlt(\"\u041b\u0435\u0433\u043a\u0438\u0439\", \"\u041b\u0451\u0433\u043a\u0438\u0439\")\n>>> \n>>> asc.setAlphabet(\"\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044fabcdefghijklmnopqrstuvwxyz\")\n>>> \n>>> asc.setPilots([\"\u0430\",\"\u0443\",\"\u0432\",\"\u043e\",\"\u0441\",\"\u043a\",\"\u0431\",\"\u0438\",\"\u044f\",\"\u044d\",\"a\",\"i\",\"o\",\"e\",\"g\"])\n>>> asc.setSubstitutes({'p':'\u0440','c':'\u0441','o':'\u043e','t':'\u0442','k':'\u043a','e':'\u0435','a':'\u0430','h':'\u043d','x':'\u0445','b':'\u0432','m':'\u043c'})\n>>> \n>>> asc.addAbbr(\"\u0421\u0428\u0410\")\n>>> asc.addAbbr(\"\u0421\u0431\u0435\u0440\")\n>>> asc.addGoodword(\"T-34\")\n>>> asc.addGoodword(\"\u0410\u041d-25\")\n>>> \n>>> asc.addBadword(\"\u0438\u0439\u0442\u0438\")\n>>> asc.addBadword(\"\u0446\u0438\u0433\u0430\u043d\")\n>>> asc.addBadword(\"\u0430\u043f\u0438\u0447\u0430\u0442\u043a\u0430\")\n>>> \n>>> asc.addUWord(\"\u041c\u043e\u0441\u043a\u0432\u0430\")\n>>> asc.addUWord(\"\u0421\u0430\u043d\u043a\u0442-\u041f\u0435\u0442\u0435\u0440\u0431\u0443\u0440\u0433\")\n>>> \n>>> def statusArpa(status):\n...     print(\"Read arpa\", status)\n... \n>>> def statusVocab(status):\n...     print(\"Read vocab\", status)\n... 
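>>> # The status callbacks above and below simply print the integer
>>> # progress value (0-100) that ASC passes to its long-running calls
>>> # (readArpa, readVocab, buildIndex, saveIndex), e.g.:
>>> print("Read vocab", 100)
Read vocab 100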

>>> def statusIndex(status):
...     print("Build index", status)
...
>>> def status(text, status):
...     print(text, status)
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
...
>>> asc.readVocab("./words.vocab", statusVocab)
Read vocab 0
Read vocab 1
Read vocab 2
Read vocab 3
Read vocab 4
Read vocab 5
Read vocab 6
...
>>> asc.setCode("RU")
>>> asc.setLictype("MIT")
>>> asc.setName("Russian")
>>> asc.setAuthor("Your name")
>>> asc.setCopyright("Your company LLC")
>>> asc.setLictext("... License text ...")
>>> asc.setContacts("site: https://example.com, e-mail: info@example.com")
>>> 
>>> asc.setEmbedding({
...     "а": 0, "б": 1, "в": 2, "г": 3, "д": 4, "е": 5,
...     "ё": 5, "ж": 6, "з": 7, "и": 8, "й": 8, "к": 9,
...     "л": 10, "м": 11, "н": 12, "о": 0, "п": 13, "р": 14,
...     "с": 15, "т": 16, "у": 17, "ф": 18, "х": 19, "ц": 20,
...     "ч": 21, "ш": 21, "щ": 21, "ъ": 22, "ы": 23, "ь": 22,
...     "э": 5, "ю": 24, "я": 25, "<": 26, ">": 26, "~": 26,
...     "-": 26, "+": 26, "=": 26, "*": 26, "/": 26, ":": 26,
...     "%": 26, "|": 26, "^": 26, "&": 26, "#": 26, "'": 26,
...     "\\": 26, "0": 27, "1": 27, "2": 27, "3": 27, "4": 27,
...     "5": 27, "6": 27, "7": 27, "8": 27, "9": 27, "a": 0,
...     "b": 2, "c": 15, "d": 4, "e": 5, "f": 18, "g": 3,
...     "h": 12, "i": 8, "j": 6, "k": 9, "l": 10, "m": 11,
...     "n": 12, "o": 0, "p": 14, "q": 13, "r": 14, "s": 15,
...     "t": 16, "u": 24, "v": 21, "w": 22, "x": 19, "y": 17, "z": 7
... }, 28)
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
Build index 5
Build index 6
...
>>> asc.saveIndex("./3-wittenbell.asc", "", 128, status)
Dump dictionary 0
Dump dictionary 1
Dump dictionary 2
Dump dictionary 3
Dump dictionary 4
Dump dictionary 5
Dump dictionary 6
...
Dump alternative letters 100
Dump alternative letters 100
Dump alternative words 200
Dump alternative words 100
Dump language model 0
Dump language model 100
Dump substitutes letters 9
Dump substitutes letters 18
Dump substitutes letters 27
Dump substitutes letters 36
Dump substitutes letters 45
Dump substitutes letters 54
Dump substitutes letters 63
Dump substitutes letters 72
Dump substitutes letters 81
Dump substitutes letters 90
Dump substitutes letters 100
Dump substitutes letters 100
>>>
>>> asc.infoIndex("./3-wittenbell.asc")

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

* Code: RU

* Version: 1.0.0

* Dictionary name: Russian

* Locale: en_US.UTF-8
* Alphabet: абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz

* Build date: 09/14/2020 01:58:52

* Encrypted: NO

* ALM type: ALMv1

* Allow apostrophe: NO

* Count words: 38120
* Count documents: 13

* Only good words: NO
* Mix words in dicts: YES
* Confidence arpa: YES

* Count upper words: 2
* Count pilots words: 15
* Count bad words: 3
* Count good words: 2
* Count substitutes: 11
* Count abbreviations: 2

* Alternatives: е => ё
* Count alternatives words: 1

* Size embedding: 28

* Length n-gram: 3
* Count n-grams: 353

* Author: Your name

* Contacts: site: https://example.com, e-mail: info@example.com

* Copyright ©: Your company LLC

* License type: MIT
* License text:
... License text ...

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

```

### Example:
```python
>>> import asc
>>> 
>>> asc.setSize(3)
>>> asc.setAlmV2()
>>> asc.setThreads(0)
>>> asc.setLocale("en_US.UTF-8")
>>> 
>>> asc.setOption(asc.options_t.allowUnk)
>>> asc.setOption(asc.options_t.resetUnk)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.tokenWords)
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setOption(asc.options_t.interpolate)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("Легкий", "Лёгкий")
>>> 
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
>>> 
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
>>> 
>>> asc.addUWord("Москва")
>>> asc.addUWord("Санкт-Петербург")
>>> 
>>> def statusArpa(status):
...     print("Read arpa", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
... 
>>> def statusPrune(status):
...     print("Prune arpa", status)
... 
>>> def status(text, status):
...     print(text, status)
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
...
>>> asc.setAdCw(38120, 13)
>>> 
>>> asc.addWord("министерство")
>>> asc.addWord("возмездие", 0, 1)
>>> asc.addWord("возражение", asc.idw("возражение"), 2)
...
>>> 
>>> asc.setCode("RU")
>>> asc.setLictype("MIT")
>>> asc.setName("Russian")
>>> asc.setAuthor("Your name")
>>> asc.setCopyright("Your company LLC")
>>> asc.setLictext("... License text ...")
>>> asc.setContacts("site: https://example.com, e-mail: info@example.com")
>>> 
>>> asc.setEmbedding({
...     "а": 0, "б": 1, "в": 2, "г": 3, "д": 4, "е": 5,
...     "ё": 5, "ж": 6, "з": 7, "и": 8, "й": 8, "к": 9,
...     "л": 10, "м": 11, "н": 12, "о": 0, "п": 13, "р": 14,
...     "с": 15, "т": 16, "у": 17, "ф": 18, "х": 19, "ц": 20,
...     "ч": 21, "ш": 21, "щ": 21, "ъ": 22, "ы": 23, "ь": 22,
...     "э": 5, "ю": 24, "я": 25, "<": 26, ">": 26, "~": 26,
...     "-": 26, "+": 26, "=": 26, "*": 26, "/": 26, ":": 26,
...     "%": 26, "|": 26, "^": 26, "&": 26, "#": 26, "'": 26,
...     "\\": 26, "0": 27, "1": 27, "2": 27, "3": 27, "4": 27,
...     "5": 27, "6": 27, "7": 27, "8": 27, "9": 27, "a": 0,
...     "b": 2, "c": 15, "d": 4, "e": 5, "f": 18, "g": 3,
...     "h": 12, "i": 8, "j": 6, "k": 9, "l": 10, "m": 11,
...     "n": 12, "o": 0, "p": 14, "q": 13, "r": 14, "s": 15,
...     "t": 16, "u": 24, "v": 21, "w": 22, "x": 19, "y": 17, "z": 7
... }, 28)
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
Build index 5
Build index 6
...
>>> asc.saveIndex("./3-wittenbell.asc", "password", 128, status)
Dump dictionary 0
Dump dictionary 1
Dump dictionary 2
Dump dictionary 3
Dump dictionary 4
Dump dictionary 5
Dump dictionary 6
...
Dump alternative letters 100
Dump alternative letters 100
Dump alternative words 200
Dump alternative words 100
Dump language model 0
Dump language model 100
Dump substitutes letters 9
Dump substitutes letters 18
Dump substitutes letters 27
Dump substitutes letters 36
Dump substitutes letters 45
Dump substitutes letters 54
Dump substitutes letters 63
Dump substitutes letters 72
Dump substitutes letters 81
Dump substitutes letters 90
Dump substitutes letters 100
Dump substitutes letters 100
>>>
>>> asc.infoIndex("./3-wittenbell.asc")

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

* Code: RU

* Version: 1.0.0

* Dictionary name: Russian

* Build date: 09/14/2020 02:09:38

* Encrypted: YES

* ALM type: ALMv2

* Allow apostrophe: NO

* Count words: 38120
* Count documents: 13

* Only good words: NO
* Mix words in dicts: YES
* Confidence arpa: YES

* Count upper words: 2
* Count pilots words: 15
* Count bad words: 3
* Count good words: 2
* Count substitutes: 11
* Count abbreviations: 2

* Alternatives: е => ё
* Count alternatives words: 1

* Size embedding: 28

* Length n-gram: 3
* Count n-grams: 353

* Author: Your name

* Contacts: site: https://example.com, e-mail: info@example.com

* Copyright ©: Your company LLC

* License type: MIT
* License text:
... License text ...

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

```

### Example:
```python
>>> import asc
>>> 
>>> asc.setSize(3)
>>> asc.setAlmV2()
>>> asc.setThreads(0)
>>> asc.setLocale("en_US.UTF-8")
>>> 
>>> asc.setOption(asc.options_t.allowUnk)
>>> asc.setOption(asc.options_t.resetUnk)
>>> asc.setOption(asc.options_t.mixDicts)
>>> asc.setOption(asc.options_t.tokenWords)
>>> asc.setOption(asc.options_t.confidence)
>>> asc.setOption(asc.options_t.interpolate)
>>> 
>>> asc.addAlt("е", "ё")
>>> asc.addAlt("Легкий", "Лёгкий")
>>> 
>>> asc.setAlphabet("абвгдеёжзийклмнопрстуфхцчшщъыьэюяabcdefghijklmnopqrstuvwxyz")
>>> 
>>> asc.setPilots(["а","у","в","о","с","к","б","и","я","э","a","i","o","e","g"])
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>> 
>>> asc.addAbbr("США")
>>> asc.addAbbr("Сбер")
>>> asc.addGoodword("T-34")
>>> asc.addGoodword("АН-25")
>>> 
>>> asc.addBadword("ийти")
>>> asc.addBadword("циган")
>>> asc.addBadword("апичатка")
>>> 
>>> asc.addUWord("Москва")
>>> asc.addUWord("Санкт-Петербург")
>>> 
>>> def statusArpa(status):
...     print("Read arpa", status)
... 
>>> def statusIndex(status):
...     print("Build index", status)
... 
>>> def statusPrune(status):
...     print("Prune arpa", status)
... 
>>> def status(text, status):
...     print(text, status)
... 
>>> asc.readArpa("./words.arpa", statusArpa)
Read arpa 0
Read arpa 1
Read arpa 2
Read arpa 3
Read arpa 4
Read arpa 5
Read arpa 6
...
>>> asc.setAdCw(38120, 13)
>>> 
>>> asc.addWord("министерство")
>>> asc.addWord("возмездие", 0, 1)
>>> asc.addWord("возражение", asc.idw("возражение"), 2)
...
>>> 
>>> asc.setCode("RU")
>>> asc.setLictype("MIT")
>>> asc.setName("Russian")
>>> asc.setAuthor("Your name")
>>> asc.setCopyright("Your company LLC")
>>> asc.setLictext("... License text ...")
>>> asc.setContacts("site: https://example.com, e-mail: info@example.com")
>>> 
>>> asc.setSizeEmbedding(32)
>>> asc.generateEmbedding()
>>> 
>>> asc.buildIndex(statusIndex)
Build index 0
Build index 1
Build index 2
Build index 3
Build index 4
Build index 5
Build index 6
...
>>> asc.saveIndex("./3-wittenbell.asc", "password", 128, status)
Dump dictionary 0
Dump dictionary 1
Dump dictionary 2
Dump dictionary 3
Dump dictionary 4
Dump dictionary 5
Dump dictionary 6
...
Dump alternative letters 100
Dump alternative letters 100
Dump alternative words 200
Dump alternative words 100
Dump language model 0
Dump language model 100
Dump substitutes letters 9
Dump substitutes letters 18
Dump substitutes letters 27
Dump substitutes letters 36
Dump substitutes letters 45
Dump substitutes letters 54
Dump substitutes letters 63
Dump substitutes letters 72
Dump substitutes letters 81
Dump substitutes letters 90
Dump substitutes letters 100
Dump substitutes letters 100
>>>
>>> asc.infoIndex("./3-wittenbell.asc")

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

* Code: RU

* Version: 1.0.0

* Dictionary name: Russian

* Build date: 09/14/2020 02:09:38

* Encrypted: YES

* ALM type: ALMv2

* Allow apostrophe: NO

* Count words: 38120
* Count documents: 13

* Only good words: NO
* Mix words in dicts: YES
* Confidence arpa: YES

* Count upper words: 2
* Count pilots words: 15
* Count bad words: 3
* Count good words: 2
* Count substitutes: 11
* Count abbreviations: 2

* Alternatives: е => ё
* Count alternatives words: 1

* Size embedding: 28

* Length n-gram: 3
* Count n-grams: 353

* Author: Your name

* Contacts: site: https://example.com, e-mail: info@example.com

* Copyright ©: Your company LLC

* License type: MIT
* License text:
... License text ...

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

```

---

### Methods:
- **size** - Method for obtaining the n-gram size

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.size()
3
```

---

### Methods:
- **damerauLevenshtein** - Determination of the Damerau-Levenshtein distance between phrases
- **distanceLevenshtein** - Determination of the Levenshtein distance between phrases
- **mulctLevenshtein** - Determination of the penalty-weighted Levenshtein distance between phrases
- **tanimoto** - Method for determining the Jaccard index (also known as the Tanimoto coefficient)
- **needlemanWunsch** - Word alignment (stretching) method

### Example:
```python
>>> import asc
>>> asc.damerauLevenshtein("привет", "приветик")
2
>>> 
>>> asc.damerauLevenshtein("приевтик", "приветик")
1
>>> 
>>> asc.distanceLevenshtein("приевтик", "приветик")
2
>>> 
>>> asc.tanimoto("привет", "приветик")
0.7142857142857143
>>> 
>>> asc.tanimoto("привеитк", "приветик")
0.4
>>> 
>>> asc.needlemanWunsch("привеитк", "приветик")
4
>>> 
>>> asc.needlemanWunsch("привет", "приветик")
2
>>> 
>>> asc.damerauLevenshtein("acre", "car")
2
>>> asc.distanceLevenshtein("acre", "car")
3
>>> 
>>> asc.damerauLevenshtein("anteater", "theatre")
4
>>> asc.distanceLevenshtein("anteater", "theatre")
5
>>> 
>>> asc.damerauLevenshtein("banana", "nanny")
3
>>> asc.distanceLevenshtein("banana", "nanny")
3
>>> 
>>> asc.damerauLevenshtein("cat", "crate")
2
>>> asc.distanceLevenshtein("cat", "crate")
2
>>>
>>> asc.mulctLevenshtein("привет", "приветик")
4
>>>
>>> asc.mulctLevenshtein("приевтик", "приветик")
1
>>>
>>> asc.mulctLevenshtein("acre", "car")
3
>>>
>>> asc.mulctLevenshtein("anteater", "theatre")
5
>>>
>>> asc.mulctLevenshtein("banana", "nanny")
4
>>>
>>> asc.mulctLevenshtein("cat", "crate")
4
```

---

### Methods:
- **textToJson** - Method to convert text to JSON
- **isAllowApostrophe** - Apostrophe permission check method
- **switchAllowApostrophe** - Method for permitting or denying an apostrophe as part of a word

### Example:
```python
>>> import asc
>>>
>>> def callbackFn(text):
...     print(text)
... 
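>>> # Once the apostrophe is switched on below, elided French forms such
>>> # as qu'aujourd'hui are kept as single tokens; plain Python confirms
>>> # such a token really does contain two apostrophes:
>>> "qu'aujourd'hui".count("'")
2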

>>> asc.isAllowApostrophe()
False
>>> asc.switchAllowApostrophe()
>>>
>>> asc.isAllowApostrophe()
True
>>> asc.textToJson("«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie", callbackFn)
[["«","On","nous","dit","qu'aujourd'hui","c'est","le","cas",",","encore","faudra-t-il","l'évaluer","»","l'astronomie"]]
```

---

### Methods:
- **jsonToText** - Method to convert JSON to text

### Example:
```python
>>> import asc
>>>
>>> def callbackFn(text):
...     print(text)
... 
>>> asc.jsonToText('[["«","On","nous","dit","qu\'aujourd\'hui","c\'est","le","cas",",","encore","faudra-t-il","l\'évaluer","»","l\'astronomie"]]', callbackFn)
«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie
```

---

### Methods:
- **restore** - Method for restoring text from context

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.uppers)
>>>
>>> asc.restore(["«","On","nous","dit","qu\'aujourd\'hui","c\'est","le","cas",",","encore","faudra-t-il","l\'évaluer","»","l\'astronomie"])
"«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie"
```

---

### Methods:
- **allowStress** - Method to allow the use of stress marks in words
- **disallowStress** - Method to disallow the use of stress marks in words

### Example:
```python
>>> import asc
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> def callbackFn(text):
...     print(text)
... 
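>>> # The stressed words in the next call («Бе́лая», «стрела́») carry the
>>> # combining acute accent U+0301, and the token output below shows them
>>> # without it; the two spellings differ only by that mark:
>>> "Бе\u0301лая" == "Белая"
False
>>> "Бе\u0301лая".replace("\u0301", "") == "Белая"
True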

>>> asc.textToJson('«Бе́лая стрела́» — согласно распространённой в 1990-е годы в России городской легенде, якобы специально организованная и подготовленная законспирированная правительственная спецслужба, сотрудники которой — бывшие и действовавшие милиционеры и спецназовцы, имеющие право на физическую ликвидацию особо опасных уголовных авторитетов и лидеров орудовавших в России ОПГ, относительно которых не представляется возможным привлечения их к уголовной ответственности законными методами[1][2][3]. Несмотря на отсутствие официальных доказательств существования организации и многочисленные опровержения со стороны силовых структур и служб безопасности[4], в российском обществе легенду считают основанной на подлинных фактах громких убийств криминальных авторитетов, совершённых в 1990-е годы, и не исключают существование реальной спецслужбы[5].', callbackFn)
[["«","Белая","стрела","»","—","согласно","распространённой","в","1990-е","годы","в","России","городской","легенде",",","якобы","специально","организованная","и","подготовленная","законспирированная","правительственная","спецслужба",",","сотрудники","которой","—","бывшие","и","действовавшие","милиционеры","и","спецназовцы",",","имеющие","право","на","физическую","ликвидацию","особо","опасных","уголовных","авторитетов","и","лидеров","орудовавших","в","России","ОПГ",",","относительно","которых","не","представляется","возможным","привлечения","их","к","уголовной","ответственности","законными","методами","[","1","]","[","2","]","[","3","]","."],["Несмотря","на","отсутствие","официальных","доказательств","существования","организации","и","многочисленные","опровержения","со","стороны","силовых","структур","и","служб","безопасности","[","4","]",",","в","российском","обществе","легенду","считают","основанной","на","подлинных","фактах","громких","убийств","криминальных","авторитетов",",","совершённых","в","1990-е","годы",",","и","не","исключают","существование","реальной","спецслужбы","[","5","]","."]]
>>>
>>> asc.jsonToText('[["«","Белая","стрела","»","—","согласно","распространённой","в","1990-е","годы","в","России","городской","легенде",",","якобы","специально","организованная","и","подготовленная","законспирированная","правительственная","спецслужба",",","сотрудники","которой","—","бывшие","и","действовавшие","милиционеры","и","спецназовцы",",","имеющие","право","на","физическую","ликвидацию","особо","опасных","уголовных","авторитетов","и","лидеров","орудовавших","в","России","ОПГ",",","относительно","которых","не","представляется","возможным","привлечения","их","к","уголовной","ответственности","законными","методами","[","1","]","[","2","]","[","3","]","."],["Несмотря","на","отсутствие","официальных","доказательств","существования","организации","и","многочисленные","опровержения","со","стороны","силовых","структур","и","служб","безопасности","[","4","]",",","в","российском","обществе","легенду","счит
0430\u044e\u0442\",\"\u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439\",\"\u043d\u0430\",\"\u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445\",\"\u0444\u0430\u043a\u0442\u0430\u0445\",\"\u0433\u0440\u043e\u043c\u043a\u0438\u0445\",\"\u0443\u0431\u0438\u0439\u0441\u0442\u0432\",\"\u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\",\",\"\u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\",\",\"\u0438\",\"\u043d\u0435\",\"\u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435\",\"\u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b\",\"[\",\"5\",\"]\",\".\"]]', callbackFn)\n\u00ab\u0411\u0435\u043b\u0430\u044f \u0441\u0442\u0440\u0435\u043b\u0430\u00bb \u2014 \u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e \u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439 \u043b\u0435\u0433\u0435\u043d\u0434\u0435, \u044f\u043a\u043e\u0431\u044b \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u0438 \u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f \u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430, \u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438 \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u2014 
\u0431\u044b\u0432\u0448\u0438\u0435 \u0438 \u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435 \u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b \u0438 \u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b, \u0438\u043c\u0435\u044e\u0449\u0438\u0435 \u043f\u0440\u0430\u0432\u043e \u043d\u0430 \u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e \u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e \u043e\u0441\u043e\u0431\u043e \u043e\u043f\u0430\u0441\u043d\u044b\u0445 \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432 \u0438 \u043b\u0438\u0434\u0435\u0440\u043e\u0432 \u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445 \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u041e\u041f\u0413, \u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u043d\u0435 \u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c \u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f \u0438\u0445 \u043a \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439 \u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438 \u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438 [1] [2] [3].\n\u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f \u043d\u0430 \u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435 \u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445 \u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u044f \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438 \u0438 \u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435 
\u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f \u0441\u043e \u0441\u0442\u043e\u0440\u043e\u043d\u044b \u0441\u0438\u043b\u043e\u0432\u044b\u0445 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440 \u0438 \u0441\u043b\u0443\u0436\u0431 \u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438 [4], \u0432 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435 \u043b\u0435\u0433\u0435\u043d\u0434\u0443 \u0441\u0447\u0438\u0442\u0430\u044e\u0442 \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439 \u043d\u0430 \u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445 \u0444\u0430\u043a\u0442\u0430\u0445 \u0433\u0440\u043e\u043c\u043a\u0438\u0445 \u0443\u0431\u0438\u0439\u0441\u0442\u0432 \u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432, \u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b, \u0438 \u043d\u0435 \u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435 \u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439 \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b [5].\n>>>\n>>> asc.allowStress()\n>>> asc.textToJson('\u00ab\u0411\u0435\u0301\u043b\u0430\u044f \u0441\u0442\u0440\u0435\u043b\u0430\u0301\u00bb \u2014 \u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e \u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439 \u043b\u0435\u0433\u0435\u043d\u0434\u0435, \u044f\u043a\u043e\u0431\u044b \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u0438 
\u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f \u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430, \u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438 \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u2014 \u0431\u044b\u0432\u0448\u0438\u0435 \u0438 \u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435 \u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b \u0438 \u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b, \u0438\u043c\u0435\u044e\u0449\u0438\u0435 \u043f\u0440\u0430\u0432\u043e \u043d\u0430 \u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e \u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e \u043e\u0441\u043e\u0431\u043e \u043e\u043f\u0430\u0441\u043d\u044b\u0445 \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432 \u0438 \u043b\u0438\u0434\u0435\u0440\u043e\u0432 \u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445 \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u041e\u041f\u0413, \u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u043d\u0435 \u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c \u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f \u0438\u0445 \u043a \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439 \u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438 \u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438[1][2][3]. 
\u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f \u043d\u0430 \u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435 \u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445 \u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u044f \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438 \u0438 \u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435 \u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f \u0441\u043e \u0441\u0442\u043e\u0440\u043e\u043d\u044b \u0441\u0438\u043b\u043e\u0432\u044b\u0445 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440 \u0438 \u0441\u043b\u0443\u0436\u0431 \u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438[4], \u0432 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435 \u043b\u0435\u0433\u0435\u043d\u0434\u0443 \u0441\u0447\u0438\u0442\u0430\u044e\u0442 \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439 \u043d\u0430 \u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445 \u0444\u0430\u043a\u0442\u0430\u0445 \u0433\u0440\u043e\u043c\u043a\u0438\u0445 \u0443\u0431\u0438\u0439\u0441\u0442\u0432 \u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432, \u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b, \u0438 \u043d\u0435 \u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435 \u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439 \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b[5].', 
callbackFn)\n[[\"\u00ab\",\"\u0411\u0435\u0301\u043b\u0430\u044f\",\"\u0441\u0442\u0440\u0435\u043b\u0430\u0301\",\"\u00bb\",\"\u2014\",\"\u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e\",\"\u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\"\u0432\",\"\u0420\u043e\u0441\u0441\u0438\u0438\",\"\u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439\",\"\u043b\u0435\u0433\u0435\u043d\u0434\u0435\",\",\",\"\u044f\u043a\u043e\u0431\u044b\",\"\u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e\",\"\u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f\",\"\u0438\",\"\u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f\",\"\u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f\",\"\u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430\",\",\",\"\u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438\",\"\u043a\u043e\u0442\u043e\u0440\u043e\u0439\",\"\u2014\",\"\u0431\u044b\u0432\u0448\u0438\u0435\",\"\u0438\",\"\u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435\",\"\u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b\",\"\u0438\",\"\u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b\",\",\",\"\u0438\u043c\u0435\u044e\u0449\u0438\u0435\",\"\u043f\u0440\u0430\u0432\u043e\",\"\u043d\u0430\",\"\u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e\",\"\u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e\",\"\u043e\u0441\u043e\u0431\u043e\",\"\u043e\u043f\u0430\u0441\u043d\u044b\u0445\",\"\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\"\u0438\",\"\u043b\u0438\u0434\u0435\u0440\u043e\u0
432\",\"\u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445\",\"\u0432\",\"\u0420\u043e\u0441\u0441\u0438\u0438\",\"\u041e\u041f\u0413\",\",\",\"\u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e\",\"\u043a\u043e\u0442\u043e\u0440\u044b\u0445\",\"\u043d\u0435\",\"\u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f\",\"\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c\",\"\u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f\",\"\u0438\u0445\",\"\u043a\",\"\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439\",\"\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438\",\"\u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438\",\"\u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438\",\"[\",\"1\",\"]\",\"[\",\"2\",\"]\",\"[\",\"3\",\"]\",\".\"],[\"\u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f\",\"\u043d\u0430\",\"\u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435\",\"\u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445\",\"\u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u044f\",\"\u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438\",\"\u0438\",\"\u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435\",\"\u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f\",\"\u0441\u043e\",\"\u0441\u0442\u043e\u0440\u043e\u043d\u044b\",\"\u0441\u0438\u043b\u043e\u0432\u044b\u0445\",\"\u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\",\"\u0438\",\"\u0441\u043b\u0443\u0436\u0431\",\"\u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438\",\"[\",\"4\",\"]\",\",\",\"\u0432\",\"\u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c\",\"\u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435\",\"\u043b\u0435\u0433\u0435\u043d\u0434\u0443\",\"\u0441\u0447\u043
8\u0442\u0430\u044e\u0442\",\"\u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439\",\"\u043d\u0430\",\"\u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445\",\"\u0444\u0430\u043a\u0442\u0430\u0445\",\"\u0433\u0440\u043e\u043c\u043a\u0438\u0445\",\"\u0443\u0431\u0438\u0439\u0441\u0442\u0432\",\"\u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\",\",\"\u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\",\",\"\u0438\",\"\u043d\u0435\",\"\u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435\",\"\u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b\",\"[\",\"5\",\"]\",\".\"]]\n>>>\n>>> asc.jsonToText('[[\"\u00ab\",\"\u0411\u0435\u0301\u043b\u0430\u044f\",\"\u0441\u0442\u0440\u0435\u043b\u0430\u0301\",\"\u00bb\",\"\u2014\",\"\u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e\",\"\u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\"\u0432\",\"\u0420\u043e\u0441\u0441\u0438\u0438\",\"\u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439\",\"\u043b\u0435\u0433\u0435\u043d\u0434\u0435\",\",\",\"\u044f\u043a\u043e\u0431\u044b\",\"\u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e\",\"\u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f\",\"\u0438\",\"\u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f\",\"\u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f\",\"\u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430\
",\",\",\"\u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438\",\"\u043a\u043e\u0442\u043e\u0440\u043e\u0439\",\"\u2014\",\"\u0431\u044b\u0432\u0448\u0438\u0435\",\"\u0438\",\"\u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435\",\"\u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b\",\"\u0438\",\"\u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b\",\",\",\"\u0438\u043c\u0435\u044e\u0449\u0438\u0435\",\"\u043f\u0440\u0430\u0432\u043e\",\"\u043d\u0430\",\"\u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e\",\"\u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e\",\"\u043e\u0441\u043e\u0431\u043e\",\"\u043e\u043f\u0430\u0441\u043d\u044b\u0445\",\"\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\"\u0438\",\"\u043b\u0438\u0434\u0435\u0440\u043e\u0432\",\"\u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445\",\"\u0432\",\"\u0420\u043e\u0441\u0441\u0438\u0438\",\"\u041e\u041f\u0413\",\",\",\"\u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e\",\"\u043a\u043e\u0442\u043e\u0440\u044b\u0445\",\"\u043d\u0435\",\"\u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f\",\"\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c\",\"\u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f\",\"\u0438\u0445\",\"\u043a\",\"\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439\",\"\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438\",\"\u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438\",\"\u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438\",\"[\",\"1\",\"]\",\"[\",\"2\",\"]\",\"[\",\"3\",\"]\",\".\"],[\"\u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f\",\"\u043d\u0430\",\"\u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435\",\"\u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445\",\"
\u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u044f\",\"\u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438\",\"\u0438\",\"\u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435\",\"\u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f\",\"\u0441\u043e\",\"\u0441\u0442\u043e\u0440\u043e\u043d\u044b\",\"\u0441\u0438\u043b\u043e\u0432\u044b\u0445\",\"\u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\",\"\u0438\",\"\u0441\u043b\u0443\u0436\u0431\",\"\u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438\",\"[\",\"4\",\"]\",\",\",\"\u0432\",\"\u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c\",\"\u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435\",\"\u043b\u0435\u0433\u0435\u043d\u0434\u0443\",\"\u0441\u0447\u0438\u0442\u0430\u044e\u0442\",\"\u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439\",\"\u043d\u0430\",\"\u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445\",\"\u0444\u0430\u043a\u0442\u0430\u0445\",\"\u0433\u0440\u043e\u043c\u043a\u0438\u0445\",\"\u0443\u0431\u0438\u0439\u0441\u0442\u0432\",\"\u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\",\",\"\u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\",\",\"\u0438\",\"\u043d\u0435\",\"\u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435\",\"\u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b\",\"[\",\"5\",\"]\",\".\"]]', callbackFn)\n\u00ab\u0411\u0435\u0301\u043b\u0430\u044f \u0441\u0442\u0440\u0435\u043b\u0430\u0301\u00bb \u2014 \u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e 
\u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439 \u043b\u0435\u0433\u0435\u043d\u0434\u0435, \u044f\u043a\u043e\u0431\u044b \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u0438 \u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f \u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430, \u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438 \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u2014 \u0431\u044b\u0432\u0448\u0438\u0435 \u0438 \u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435 \u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b \u0438 \u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b, \u0438\u043c\u0435\u044e\u0449\u0438\u0435 \u043f\u0440\u0430\u0432\u043e \u043d\u0430 \u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e \u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e \u043e\u0441\u043e\u0431\u043e \u043e\u043f\u0430\u0441\u043d\u044b\u0445 \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432 \u0438 \u043b\u0438\u0434\u0435\u0440\u043e\u0432 \u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445 \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u041e\u041f\u0413, \u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u043d\u0435 
\u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c \u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f \u0438\u0445 \u043a \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439 \u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438 \u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438 [1] [2] [3].\n\u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f \u043d\u0430 \u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435 \u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445 \u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u044f \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438 \u0438 \u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435 \u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f \u0441\u043e \u0441\u0442\u043e\u0440\u043e\u043d\u044b \u0441\u0438\u043b\u043e\u0432\u044b\u0445 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440 \u0438 \u0441\u043b\u0443\u0436\u0431 \u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438 [4], \u0432 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435 \u043b\u0435\u0433\u0435\u043d\u0434\u0443 \u0441\u0447\u0438\u0442\u0430\u044e\u0442 \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439 \u043d\u0430 \u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445 \u0444\u0430\u043a\u0442\u0430\u0445 \u0433\u0440\u043e\u043c\u043a\u0438\u0445 \u0443\u0431\u0438\u0439\u0441\u0442\u0432 \u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432, \u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445 
\u0432 1990-\u0435 \u0433\u043e\u0434\u044b, \u0438 \u043d\u0435 \u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435 \u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439 \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b [5].\n>>>\n>>> asc.disallowStress()\n>>> asc.textToJson('\u00ab\u0411\u0435\u0301\u043b\u0430\u044f \u0441\u0442\u0440\u0435\u043b\u0430\u0301\u00bb \u2014 \u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e \u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439 \u043b\u0435\u0433\u0435\u043d\u0434\u0435, \u044f\u043a\u043e\u0431\u044b \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u0438 \u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f \u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430, \u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438 \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u2014 \u0431\u044b\u0432\u0448\u0438\u0435 \u0438 \u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435 \u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b \u0438 \u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b, \u0438\u043c\u0435\u044e\u0449\u0438\u0435 \u043f\u0440\u0430\u0432\u043e \u043d\u0430 \u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e \u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e \u043e\u0441\u043e\u0431\u043e \u043e\u043f\u0430\u0441\u043d\u044b\u0445 
\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432 \u0438 \u043b\u0438\u0434\u0435\u0440\u043e\u0432 \u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445 \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u041e\u041f\u0413, \u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u043d\u0435 \u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c \u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f \u0438\u0445 \u043a \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439 \u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438 \u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438[1][2][3]. \u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f \u043d\u0430 \u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435 \u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445 \u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u044f \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438 \u0438 \u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435 \u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f \u0441\u043e \u0441\u0442\u043e\u0440\u043e\u043d\u044b \u0441\u0438\u043b\u043e\u0432\u044b\u0445 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440 \u0438 \u0441\u043b\u0443\u0436\u0431 \u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438[4], \u0432 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435 \u043b\u0435\u0433\u0435\u043d\u0434\u0443 \u0441\u0447\u0438\u0442\u0430\u044e\u0442 
\u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439 \u043d\u0430 \u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445 \u0444\u0430\u043a\u0442\u0430\u0445 \u0433\u0440\u043e\u043c\u043a\u0438\u0445 \u0443\u0431\u0438\u0439\u0441\u0442\u0432 \u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432, \u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b, \u0438 \u043d\u0435 \u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435 \u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439 \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b[5].', callbackFn)\n[[\"\u00ab\",\"\u0411\u0435\u043b\u0430\u044f\",\"\u0441\u0442\u0440\u0435\u043b\u0430\",\"\u00bb\",\"\u2014\",\"\u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e\",\"\u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\"\u0432\",\"\u0420\u043e\u0441\u0441\u0438\u0438\",\"\u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439\",\"\u043b\u0435\u0433\u0435\u043d\u0434\u0435\",\",\",\"\u044f\u043a\u043e\u0431\u044b\",\"\u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e\",\"\u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f\",\"\u0438\",\"\u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f\",\"\u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f\",\"\u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430\",\",\",\"\u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438\",\"\u043a\u043e\u0442\u043e\u0440\u043e\u0439\",\"\u2014\",\"\u0431\u044b\u0432\u04
48\u0438\u0435\",\"\u0438\",\"\u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435\",\"\u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b\",\"\u0438\",\"\u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b\",\",\",\"\u0438\u043c\u0435\u044e\u0449\u0438\u0435\",\"\u043f\u0440\u0430\u0432\u043e\",\"\u043d\u0430\",\"\u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e\",\"\u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e\",\"\u043e\u0441\u043e\u0431\u043e\",\"\u043e\u043f\u0430\u0441\u043d\u044b\u0445\",\"\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\"\u0438\",\"\u043b\u0438\u0434\u0435\u0440\u043e\u0432\",\"\u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445\",\"\u0432\",\"\u0420\u043e\u0441\u0441\u0438\u0438\",\"\u041e\u041f\u0413\",\",\",\"\u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e\",\"\u043a\u043e\u0442\u043e\u0440\u044b\u0445\",\"\u043d\u0435\",\"\u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f\",\"\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c\",\"\u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f\",\"\u0438\u0445\",\"\u043a\",\"\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439\",\"\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438\",\"\u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438\",\"\u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438\",\"[\",\"1\",\"]\",\"[\",\"2\",\"]\",\"[\",\"3\",\"]\",\".\"],[\"\u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f\",\"\u043d\u0430\",\"\u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435\",\"\u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445\",\"\u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438
\u044f\",\"\u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438\",\"\u0438\",\"\u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435\",\"\u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f\",\"\u0441\u043e\",\"\u0441\u0442\u043e\u0440\u043e\u043d\u044b\",\"\u0441\u0438\u043b\u043e\u0432\u044b\u0445\",\"\u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\",\"\u0438\",\"\u0441\u043b\u0443\u0436\u0431\",\"\u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438\",\"[\",\"4\",\"]\",\",\",\"\u0432\",\"\u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c\",\"\u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435\",\"\u043b\u0435\u0433\u0435\u043d\u0434\u0443\",\"\u0441\u0447\u0438\u0442\u0430\u044e\u0442\",\"\u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439\",\"\u043d\u0430\",\"\u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445\",\"\u0444\u0430\u043a\u0442\u0430\u0445\",\"\u0433\u0440\u043e\u043c\u043a\u0438\u0445\",\"\u0443\u0431\u0438\u0439\u0441\u0442\u0432\",\"\u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\",\",\"\u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\",\",\"\u0438\",\"\u043d\u0435\",\"\u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435\",\"\u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b\",\"[\",\"5\",\"]\",\".\"]]\n>>>\n>>> 
asc.jsonToText('[[\"\u00ab\",\"\u0411\u0435\u043b\u0430\u044f\",\"\u0441\u0442\u0440\u0435\u043b\u0430\",\"\u00bb\",\"\u2014\",\"\u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e\",\"\u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\"\u0432\",\"\u0420\u043e\u0441\u0441\u0438\u0438\",\"\u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439\",\"\u043b\u0435\u0433\u0435\u043d\u0434\u0435\",\",\",\"\u044f\u043a\u043e\u0431\u044b\",\"\u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e\",\"\u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f\",\"\u0438\",\"\u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f\",\"\u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f\",\"\u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430\",\",\",\"\u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438\",\"\u043a\u043e\u0442\u043e\u0440\u043e\u0439\",\"\u2014\",\"\u0431\u044b\u0432\u0448\u0438\u0435\",\"\u0438\",\"\u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435\",\"\u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b\",\"\u0438\",\"\u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b\",\",\",\"\u0438\u043c\u0435\u044e\u0449\u0438\u0435\",\"\u043f\u0440\u0430\u0432\u043e\",\"\u043d\u0430\",\"\u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e\",\"\u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e\",\"\u043e\u0441\u043e\u0431\u043e\",\"\u043e\u043f\u0430\u0441\u043d\u044b\u0445\",\"\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\"\u0438\",\"\u043b\u0438\u0434\u0435\u0440\u043e\u0432\",\"\
u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445\",\"\u0432\",\"\u0420\u043e\u0441\u0441\u0438\u0438\",\"\u041e\u041f\u0413\",\",\",\"\u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e\",\"\u043a\u043e\u0442\u043e\u0440\u044b\u0445\",\"\u043d\u0435\",\"\u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f\",\"\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c\",\"\u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f\",\"\u0438\u0445\",\"\u043a\",\"\u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439\",\"\u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438\",\"\u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438\",\"\u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438\",\"[\",\"1\",\"]\",\"[\",\"2\",\"]\",\"[\",\"3\",\"]\",\".\"],[\"\u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f\",\"\u043d\u0430\",\"\u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435\",\"\u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445\",\"\u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u044f\",\"\u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438\",\"\u0438\",\"\u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435\",\"\u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f\",\"\u0441\u043e\",\"\u0441\u0442\u043e\u0440\u043e\u043d\u044b\",\"\u0441\u0438\u043b\u043e\u0432\u044b\u0445\",\"\u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\",\"\u0438\",\"\u0441\u043b\u0443\u0436\u0431\",\"\u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438\",\"[\",\"4\",\"]\",\",\",\"\u0432\",\"\u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c\",\"\u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435\",\"\u043b\u0435\u0433\u0435\u043d\u0434\u0443\",\"\u0441\u0447\u0438\u0442\u
0430\u044e\u0442\",\"\u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439\",\"\u043d\u0430\",\"\u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445\",\"\u0444\u0430\u043a\u0442\u0430\u0445\",\"\u0433\u0440\u043e\u043c\u043a\u0438\u0445\",\"\u0443\u0431\u0438\u0439\u0441\u0442\u0432\",\"\u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445\",\"\u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432\",\",\",\"\u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445\",\"\u0432\",\"1990-\u0435\",\"\u0433\u043e\u0434\u044b\",\",\",\"\u0438\",\"\u043d\u0435\",\"\u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442\",\"\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435\",\"\u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439\",\"\u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b\",\"[\",\"5\",\"]\",\".\"]]', callbackFn)\n\u00ab\u0411\u0435\u043b\u0430\u044f \u0441\u0442\u0440\u0435\u043b\u0430\u00bb \u2014 \u0441\u043e\u0433\u043b\u0430\u0441\u043d\u043e \u0440\u0430\u0441\u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0451\u043d\u043d\u043e\u0439 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u0433\u043e\u0440\u043e\u0434\u0441\u043a\u043e\u0439 \u043b\u0435\u0433\u0435\u043d\u0434\u0435, \u044f\u043a\u043e\u0431\u044b \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u043e \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u0438 \u043f\u043e\u0434\u0433\u043e\u0442\u043e\u0432\u043b\u0435\u043d\u043d\u0430\u044f \u0437\u0430\u043a\u043e\u043d\u0441\u043f\u0438\u0440\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u0430\u044f \u043f\u0440\u0430\u0432\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0435\u043d\u043d\u0430\u044f \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u0430, \u0441\u043e\u0442\u0440\u0443\u0434\u043d\u0438\u043a\u0438 \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u2014 
\u0431\u044b\u0432\u0448\u0438\u0435 \u0438 \u0434\u0435\u0439\u0441\u0442\u0432\u043e\u0432\u0430\u0432\u0448\u0438\u0435 \u043c\u0438\u043b\u0438\u0446\u0438\u043e\u043d\u0435\u0440\u044b \u0438 \u0441\u043f\u0435\u0446\u043d\u0430\u0437\u043e\u0432\u0446\u044b, \u0438\u043c\u0435\u044e\u0449\u0438\u0435 \u043f\u0440\u0430\u0432\u043e \u043d\u0430 \u0444\u0438\u0437\u0438\u0447\u0435\u0441\u043a\u0443\u044e \u043b\u0438\u043a\u0432\u0438\u0434\u0430\u0446\u0438\u044e \u043e\u0441\u043e\u0431\u043e \u043e\u043f\u0430\u0441\u043d\u044b\u0445 \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432 \u0438 \u043b\u0438\u0434\u0435\u0440\u043e\u0432 \u043e\u0440\u0443\u0434\u043e\u0432\u0430\u0432\u0448\u0438\u0445 \u0432 \u0420\u043e\u0441\u0441\u0438\u0438 \u041e\u041f\u0413, \u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u043e \u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u043d\u0435 \u043f\u0440\u0435\u0434\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u043c \u043f\u0440\u0438\u0432\u043b\u0435\u0447\u0435\u043d\u0438\u044f \u0438\u0445 \u043a \u0443\u0433\u043e\u043b\u043e\u0432\u043d\u043e\u0439 \u043e\u0442\u0432\u0435\u0442\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u0438 \u0437\u0430\u043a\u043e\u043d\u043d\u044b\u043c\u0438 \u043c\u0435\u0442\u043e\u0434\u0430\u043c\u0438 [1] [2] [3].\n\u041d\u0435\u0441\u043c\u043e\u0442\u0440\u044f \u043d\u0430 \u043e\u0442\u0441\u0443\u0442\u0441\u0442\u0432\u0438\u0435 \u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0445 \u0434\u043e\u043a\u0430\u0437\u0430\u0442\u0435\u043b\u044c\u0441\u0442\u0432 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u044f \u043e\u0440\u0433\u0430\u043d\u0438\u0437\u0430\u0446\u0438\u0438 \u0438 \u043c\u043d\u043e\u0433\u043e\u0447\u0438\u0441\u043b\u0435\u043d\u043d\u044b\u0435 
\u043e\u043f\u0440\u043e\u0432\u0435\u0440\u0436\u0435\u043d\u0438\u044f \u0441\u043e \u0441\u0442\u043e\u0440\u043e\u043d\u044b \u0441\u0438\u043b\u043e\u0432\u044b\u0445 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440 \u0438 \u0441\u043b\u0443\u0436\u0431 \u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438 [4], \u0432 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u0435 \u043b\u0435\u0433\u0435\u043d\u0434\u0443 \u0441\u0447\u0438\u0442\u0430\u044e\u0442 \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u043d\u043e\u0439 \u043d\u0430 \u043f\u043e\u0434\u043b\u0438\u043d\u043d\u044b\u0445 \u0444\u0430\u043a\u0442\u0430\u0445 \u0433\u0440\u043e\u043c\u043a\u0438\u0445 \u0443\u0431\u0438\u0439\u0441\u0442\u0432 \u043a\u0440\u0438\u043c\u0438\u043d\u0430\u043b\u044c\u043d\u044b\u0445 \u0430\u0432\u0442\u043e\u0440\u0438\u0442\u0435\u0442\u043e\u0432, \u0441\u043e\u0432\u0435\u0440\u0448\u0451\u043d\u043d\u044b\u0445 \u0432 1990-\u0435 \u0433\u043e\u0434\u044b, \u0438 \u043d\u0435 \u0438\u0441\u043a\u043b\u044e\u0447\u0430\u044e\u0442 \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043e\u0432\u0430\u043d\u0438\u0435 \u0440\u0435\u0430\u043b\u044c\u043d\u043e\u0439 \u0441\u043f\u0435\u0446\u0441\u043b\u0443\u0436\u0431\u044b [5].\n```\n\n---\n\n### Methods:\n- **addBadword** - Method for adding a word to the blacklist\n- **setBadwords** - Method for setting a list of words in the blacklist\n- **getBadwords** - Method for getting the list of words in the blacklist\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setBadwords([\"hello\", \"world\", \"test\"])\n>>>\n>>> asc.getBadwords()\n{1554834897, 2156498622, 28307030}\n>>>\n>>> asc.addBadword(\"test2\")\n>>>\n>>> asc.getBadwords()\n{5170183734, 1554834897, 2156498622, 28307030}\n```\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setBadwords({24227504, 1219922507, 1794085167})\n>>>\n>>> asc.getBadwords()\n{24227504, 1219922507, 1794085167}\n>>>\n>>> asc.clear(asc.clear_t.badwords)\n>>>\n>>> 
asc.getBadwords()\n{}\n```\n\n---\n\n### Methods:\n- **addGoodword** - Method for adding a word to the whitelist\n- **setGoodwords** - Method for setting a list of words in the whitelist\n- **getGoodwords** - Method for getting the list of words in the whitelist\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setGoodwords([\"hello\", \"world\", \"test\"])\n>>>\n>>> asc.getGoodwords()\n{1554834897, 2156498622, 28307030}\n>>>\n>>> asc.addGoodword(\"test2\")\n>>>\n>>> asc.getGoodwords()\n{5170183734, 1554834897, 2156498622, 28307030}\n>>>\n>>> asc.clear(asc.clear_t.goodwords)\n>>>\n>>> asc.getGoodwords()\n{}\n```\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setGoodwords({24227504, 1219922507, 1794085167})\n>>>\n>>> asc.getGoodwords()\n{24227504, 1219922507, 1794085167}\n```\n\n---\n\n### Methods:\n- **setUserToken** - Method for adding a user token\n- **getUserTokens** - Method for retrieving the user token list\n- **getUserTokenId** - Method for obtaining a user token identifier\n- **getUserTokenWord** - Method for obtaining a custom token by its identifier\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setUserToken(\"usa\")\n>>>\n>>> asc.setUserToken(\"russia\")\n>>>\n>>> asc.getUserTokenId(\"usa\")\n5759809081\n>>>\n>>> asc.getUserTokenId(\"russia\")\n9910674734\n>>>\n>>> asc.getUserTokens()\n['usa', 'russia']\n>>>\n>>> asc.getUserTokenWord(5759809081)\n'usa'\n>>>\n>>> asc.getUserTokenWord(9910674734)\n'russia'\n>>>\n>>> asc.clear(asc.clear_t.utokens)\n>>>\n>>> asc.getUserTokens()\n[]\n```\n\n---\n\n### Methods:\n- **findNgram** - Method for searching N-grams in the text\n- **word** - Method for extracting a word by its identifier\n\n### Example:\n```python\n>>> import asc\n>>> \n>>> def callbackFn(text):\n...     print(text)\n... 
\n>>> asc.setOption(asc.options_t.confidence)\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>> asc.readArpa('./lm.arpa')\n>>> \n>>> asc.idw(\"\u043f\u0440\u0438\u0432\u0435\u0442\")\n2487910648\n>>> asc.word(2487910648)\n'\u043f\u0440\u0438\u0432\u0435\u0442'\n>>> \n>>> asc.findNgram(\"\u041e\u0441\u043e\u0431\u043e\u0435 \u043c\u0435\u0441\u0442\u043e \u0437\u0430\u043d\u0438\u043c\u0430\u0435\u0442 \u0447\u0443\u0434\u043e\u0442\u0432\u043e\u0440\u043d\u0430\u044f \u0438\u043a\u043e\u043d\u0430 \u041b\u043e\u0431\u0437\u0430\u043d\u0438\u0435 \u0425\u0440\u0438\u0441\u0442\u0430 \u0418\u0443\u0434\u043e\u044e\", callbackFn)\n<s> \u041e\u0441\u043e\u0431\u043e\u0435\n\u041e\u0441\u043e\u0431\u043e\u0435 \u043c\u0435\u0441\u0442\u043e\n\u043c\u0435\u0441\u0442\u043e \u0437\u0430\u043d\u0438\u043c\u0430\u0435\u0442\n\u0437\u0430\u043d\u0438\u043c\u0430\u0435\u0442 \u0447\u0443\u0434\u043e\u0442\u0432\u043e\u0440\u043d\u0430\u044f\n\u0447\u0443\u0434\u043e\u0442\u0432\u043e\u0440\u043d\u0430\u044f \u0438\u043a\u043e\u043d\u0430\n\u0438\u043a\u043e\u043d\u0430 \u041b\u043e\u0431\u0437\u0430\u043d\u0438\u0435\n\u041b\u043e\u0431\u0437\u0430\u043d\u0438\u0435 \u0425\u0440\u0438\u0441\u0442\u0430\n\u0425\u0440\u0438\u0441\u0442\u0430 \u0418\u0443\u0434\u043e\u044e\n\u0418\u0443\u0434\u043e\u044e </s>\n\n\n>>>\n```\n\n---\n\n### Methods:\n- **setUserTokenMethod** - Method for setting a custom token processing function\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> def fn(token, word):\n...     if token and (token == \"<usa>\"):\n...         if word and (word.lower() == \"usa\"):\n...             return True\n...     elif token and (token == \"<russia>\"):\n...         if word and (word.lower() == \"russia\"):\n...             return True\n...     return False\n... 
\n>>> asc.setUserToken(\"usa\")\n>>>\n>>> asc.setUserToken(\"russia\")\n>>>\n>>> asc.setUserTokenMethod(\"usa\", fn)\n>>>\n>>> asc.setUserTokenMethod(\"russia\", fn)\n>>>\n>>> asc.idw(\"usa\")\n5759809081\n>>>\n>>> asc.idw(\"russia\")\n9910674734\n>>>\n>>> asc.getUserTokenWord(5759809081)\n'usa'\n>>>\n>>> asc.getUserTokenWord(9910674734)\n'russia'\n```\n\n---\n\n### Methods:\n- **setWordPreprocessingMethod** - Method for setting the word preprocessing function\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> def run(word, context):\n...     if word == \"\u0432\u043e\u0437\u043b\u0435\": word = \"\u043e\u043a\u043e\u043b\u043e\"\n...     return word\n... \n>>> asc.setOption(asc.options_t.debug)\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.setWordPreprocessingMethod(run)\n>>>\n>>> a = asc.perplexity(\"\u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440???\u0441 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....\")\ninfo: <s> \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 
\u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440 <punct> <punct> <punct> </s>\n\ninfo: p( \u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e | <s> ) \t= [2gram] 0.00038931 [ -3.40969900 ] / 0.99999991\ninfo: p( \u0438\u0437 | \u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e ...) \t= [2gram] 0.10110741 [ -0.99521700 ] / 0.99999979\ninfo: p( \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 | \u0438\u0437 ...) \t= [2gram] 0.00711798 [ -2.14764300 ] / 1.00000027\ninfo: p( \u0432 | \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 ...) \t= [2gram] 0.51077661 [ -0.29176900 ] / 1.00000021\ninfo: p( \u043e\u043b\u0435\u0433\u0430 | \u0432 ...) \t= [2gram] 0.00082936 [ -3.08125500 ] / 0.99999974\ninfo: p( \u0443\u0434\u0430\u0440\u0438\u043b | \u043e\u043b\u0435\u0433\u0430 ...) \t= [2gram] 0.25002820 [ -0.60201100 ] / 0.99999978\ninfo: p( \u044f\u0440\u043a\u0438\u0439 | \u0443\u0434\u0430\u0440\u0438\u043b ...) \t= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034\ninfo: p( \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 | \u044f\u0440\u043a\u0438\u0439 ...) \t= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034\ninfo: p( \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 | \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 ...) \t= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034\ninfo: p( \u0442\u0440\u0430\u043a\u0442\u043e\u0440 | \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 ...) \t= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034\ninfo: p( <punct> | \u0442\u0440\u0430\u043a\u0442\u043e\u0440 ...) \t= [OOV] 0.00000000 [ -inf ] / 0.99999973\ninfo: p( <punct> | <punct> ...) \t= [OOV] 0.00000000 [ -inf ] / 0.99999993\ninfo: p( <punct> | <punct> ...) \t= [OOV] 0.00000000 [ -inf ] / 0.99999993\ninfo: p( </s> | <punct> ...) 
\t= [1gram] 0.05693430 [ -1.24462600 ] / 0.99999993\n\ninfo: 1 sentences, 13 words, 0 OOVs\ninfo: 3 zeroprobs, logprob= -12.97624000 ppl= 8.45034200 ppl1= 9.95800426\n\ninfo: <s> \u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u043e\u043a\u043e\u043b\u043e \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430 <punct> <punct> <punct> <punct> </s>\n\ninfo: p( \u0441 | <s> ) \t= [2gram] 0.00642448 [ -2.19216200 ] / 0.99999991\ninfo: p( \u043b\u044f\u0437\u0433\u043e\u043c | \u0441 ...) \t= [2gram] 0.00195917 [ -2.70792700 ] / 0.99999999\ninfo: p( \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f | \u043b\u044f\u0437\u0433\u043e\u043c ...) \t= [2gram] 0.50002878 [ -0.30100500 ] / 1.00000034\ninfo: p( \u0438 | \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f ...) \t= [2gram] 0.51169951 [ -0.29098500 ] / 1.00000024\ninfo: p( \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f | \u0438 ...) \t= [2gram] 0.00143382 [ -2.84350600 ] / 0.99999975\ninfo: p( \u043e\u043a\u043e\u043b\u043e | \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f ...) \t= [1gram] 0.00011358 [ -3.94468000 ] / 1.00000003\ninfo: p( \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430 | \u043e\u043a\u043e\u043b\u043e ...) \t= [1gram] 0.00003932 [ -4.40541100 ] / 1.00000016\ninfo: p( <punct> | \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430 ...) \t= [OOV] 0.00000000 [ -inf ] / 0.99999990\ninfo: p( <punct> | <punct> ...) \t= [OOV] 0.00000000 [ -inf ] / 0.99999993\ninfo: p( <punct> | <punct> ...) \t= [OOV] 0.00000000 [ -inf ] / 0.99999993\ninfo: p( <punct> | <punct> ...) \t= [OOV] 0.00000000 [ -inf ] / 0.99999993\ninfo: p( </s> | <punct> ...) 
\t= [1gram] 0.05693430 [ -1.24462600 ] / 0.99999993\n\ninfo: 1 sentences, 11 words, 0 OOVs\ninfo: 4 zeroprobs, logprob= -17.93030200 ppl= 31.20267541 ppl1= 42.66064865\n>>> print(a.logprob)\n-30.906542\n```\n\n---\n\n### Methods:\n- **setLogfile** - Method for setting the log output file\n- **setOOvFile** - Method for setting the file for saving OOV words\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setLogfile(\"./log.txt\")\n>>>\n>>> asc.setOOvFile(\"./oov.txt\")\n```\n\n---\n\n### Methods:\n- **perplexity** - Method for calculating perplexity\n- **pplConcatenate** - Method for combining perplexity results\n- **pplByFiles** - Method for calculating perplexity over a file or a group of files\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> a = asc.perplexity(\"\u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440???\u0441 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....\")\n>>>\n>>> print(a.logprob)\n-30.906542\n>>>\n>>> print(a.oovs)\n0\n>>>\n>>> print(a.words)\n24\n>>>\n>>> print(a.sentences)\n2\n>>>\n>>> print(a.zeroprobs)\n7\n>>>\n>>> print(a.ppl)\n17.229063831108224\n>>>\n>>> 
print(a.ppl1)\n19.398698060810077\n>>>\n>>> b = asc.pplByFiles(\"./text.txt\")\n>>>\n>>> c = asc.pplConcatenate(a, b)\n>>>\n>>> print(c.ppl)\n7.384123548831112\n```\n\n### Description\n| Name      | Description                                                        |\n|-----------|--------------------------------------------------------------------|\n| ppl       | Perplexity value without considering the beginning of the sentence |\n| ppl1      | Perplexity value taking into account the beginning of the sentence |\n| oovs      | Count of OOV words                                                 |\n| words     | Count of words in the sentence                                     |\n| logprob   | Log probability of the word sequence                               |\n| sentences | Count of sentences                                                 |\n| zeroprobs | Count of zero probabilities                                        |\n\n---\n\n### Methods:\n- **tokenization** - Method for breaking text into tokens\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> def tokensFn(word, context, reset, stop):\n...     print(word, \" => \", context)\n...     
return True\n...\n>>> asc.switchAllowApostrophe()\n>>>\n>>> asc.tokenization(\"\u00abOn nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'\u00e9valuer\u00bb l'astronomie\", tokensFn)\n\u00ab  =>  []\nOn  =>  ['\u00ab']\nnous  =>  ['\u00ab', 'On']\ndit  =>  ['\u00ab', 'On', 'nous']\nqu'aujourd'hui  =>  ['\u00ab', 'On', 'nous', 'dit']\nc'est  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\"]\nle  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\"]\ncas  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le']\n,  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas']\nencore  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',']\nfaudra-t-il  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',', 'encore']\nl  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l']\n'  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l']\n\u00e9valuer  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', \"'\"]\n\u00bb  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', \"'\", '\u00e9valuer']\nl  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', \"'\", '\u00e9valuer', '\u00bb']\n'  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', \"'\", '\u00e9valuer', '\u00bb', 'l']\nastronomie  =>  ['\u00ab', 'On', 'nous', 'dit', \"qu'aujourd'hui\", \"c'est\", 'le', 'cas', ',', 'encore', 'faudra-t-il', 'l', \"'\", '\u00e9valuer', '\u00bb', 'l', \"'\"]\n```\n\n---\n\n### Methods:\n- **setTokenizerFn** - Method for setting an external tokenizer function\n\n### Example:\n```python\n>>> import 
asc\n>>>\n>>> def tokenizerFn(text, callback):\n...     word = \"\"\n...     context = []\n...     for letter in text:\n...         if letter == \" \" and len(word) > 0:\n...             if not callback(word, context, False, False): return\n...             context.append(word)\n...             word = \"\"\n...         elif letter == \".\" or letter == \"!\" or letter == \"?\":\n...             if not callback(word, context, True, False): return\n...             word = \"\"\n...             context = []\n...         else:\n...             word += letter\n...     if len(word) > 0:\n...         if not callback(word, context, False, True): return\n...\n>>> def tokensFn(word, context, reset, stop):\n...     print(word, \" => \", context)\n...     return True\n...\n>>> asc.setTokenizerFn(tokenizerFn)\n>>>\n>>> asc.tokenization(\"Hello World today!\", tokensFn)\nHello  =>  []\nWorld  =>  ['Hello']\ntoday  =>  ['Hello', 'World']\n```\n\n---\n\n### Methods:\n- **sentences** - Sentences generation method\n- **sentencesToFile** - Method for assembling a specified number of sentences and writing to a file\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> def sentencesFn(text):\n...     print(\"Sentences:\", text)\n...     
return True\n...\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.sentences(sentencesFn)\nSentences: <s> \u0412 \u043e\u0431\u0449\u0435\u043c </s>\nSentences: <s> \u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430 </s>\nSentences: <s> \u0423 \u043c\u0435\u043d\u044f \u043d\u0435\u0442 </s>\nSentences: <s> \u042f \u0432\u043e\u043e\u0431\u0449\u0435 \u043d\u0435 \u0445\u043e\u0447\u0443 </s>\nSentences: <s> \u0414\u0430 \u0438 \u0432 \u043e\u0431\u0449\u0435\u043c </s>\nSentences: <s> \u041d\u0435 \u043c\u043e\u0433\u0443 </s>\nSentences: <s> \u041d\u0443 \u0432 \u043e\u0431\u0449\u0435\u043c </s>\nSentences: <s> \u0422\u0430\u043a \u0447\u0442\u043e \u044f \u0432\u043e\u043e\u0431\u0449\u0435 \u043d\u0435 \u0445\u043e\u0447\u0443 </s>\nSentences: <s> \u041f\u043e\u0442\u043e\u043c\u0443 \u0447\u0442\u043e \u044f \u0432\u043e\u043e\u0431\u0449\u0435 \u043d\u0435 \u0445\u043e\u0447\u0443 </s>\nSentences: <s> \u041f\u0440\u043e\u0434\u043e\u043b\u0436\u0435\u043d\u0438\u0435 \u0441\u043b\u0435\u0434\u0443\u0435\u0442 </s>\nSentences: <s> \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u043e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440 </s>\n>>>\n>>> asc.sentencesToFile(5, 
\"./result.txt\")\n```\n\n---\n\n### Methods:\n- **fixUppers** - Method for correcting registers in the text\n- **fixUppersByFiles** - Method for correcting text registers in a text file\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.fixUppers(\"\u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u043e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440???\u0441 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....\")\n'\u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440??? 
\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....'\n>>>\n>>> asc.fixUppersByFiles(\"./corpus\", \"./result.txt\", \"txt\")\n```\n\n---\n\n### Methods:\n- **checkHypLat** - Method for detecting hyphens and Latin characters in a word\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.checkHypLat(\"Hello-World\")\n(True, True)\n>>>\n>>> asc.checkHypLat(\"Hello\")\n(False, True)\n>>>\n>>> asc.checkHypLat(\"\u041f\u0440\u0438\u0432\u0435\u0442\")\n(False, False)\n>>>\n>>> asc.checkHypLat(\"\u0442\u0430\u043a-\u043a\u0430\u043a\")\n(True, False)\n```\n\n---\n\n### Methods:\n- **getUppers** - Method for extracting letter-case information for each word\n- **countLetter** - Method for counting the occurrences of a specific letter in a word\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.idw(\"Living\")\n10493385932\n>>>\n>>> asc.idw(\"in\")\n3301\n>>>\n>>> asc.idw(\"the\")\n217280\n>>>\n>>> asc.idw(\"USA\")\n188643\n>>>\n>>> asc.getUppers([10493385932, 3301, 217280, 188643])\n[1, 0, 0, 7]\n>>> \n>>> asc.countLetter(\"hello-world\", \"-\")\n1\n>>>\n>>> asc.countLetter(\"hello-world\", \"l\")\n3\n```\n\n---\n\n### Methods:\n- **urls** - Method for extracting the positions of URL addresses in a string\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.urls(\"This website: example.com was designed with ...\")\n{14: 25}\n>>>\n>>> asc.urls(\"This website: https://a.b.c.example.net?id=52#test-1 was designed with ...\")\n{14: 52}\n>>>\n>>> asc.urls(\"This website: https://a.b.c.example.net?id=52#test-1 and 127.0.0.1 was designed with ...\")\n{14: 52, 57: 66}\n```\n\n---\n\n### Methods:\n- **roman2Arabic** - Method for translating Roman numerals to Arabic\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> 
asc.roman2Arabic(\"XVI\")\n16\n```\n\n---\n\n### Methods:\n- **rest** - Method for correction and detection of words with mixed alphabets\n- **setSubstitutes** - Method for set letters to correct words from mixed alphabets\n- **getSubstitutes** - Method of extracting letters to correct words from mixed alphabets\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.setSubstitutes({'p':'\u0440','c':'\u0441','o':'\u043e','t':'\u0442','k':'\u043a','e':'\u0435','a':'\u0430','h':'\u043d','x':'\u0445','b':'\u0432','m':'\u043c'})\n>>>\n>>> asc.getSubstitutes()\n{'a': '\u0430', 'b': '\u0432', 'c': '\u0441', 'e': '\u0435', 'h': '\u043d', 'k': '\u043a', 'm': '\u043c', 'o': '\u043e', 'p': '\u0440', 't': '\u0442', 'x': '\u0445'}\n>>>\n>>> str = \"\u041fP\u0418BET\u0418\u041a\"\n>>>\n>>> str.lower()\n'\u043fp\u0438bet\u0438\u043a'\n>>>\n>>> asc.rest(str)\n'\u043f\u0440\u0438\u0432\u0435\u0442\u0438\u043a'\n```\n\n---\n\n### Methods:\n- **setTokensDisable** - Method for set the list of forbidden tokens\n- **setTokensUnknown** - Method for set the list of tokens cast to \u2329unk\u232a\n- **setTokenDisable** - Method for set the list of unidentifiable tokens\n- **setTokenUnknown** - Method of set the list of tokens that need to be identified as \u2329unk\u232a\n- **getTokensDisable** - Method for retrieving the list of forbidden tokens\n- **getTokensUnknown** - Method for extracting a list of tokens reducible to \u2329unk\u232a\n- **setAllTokenDisable** - Method for set all tokens as unidentifiable\n- **setAllTokenUnknown** - The method of set all tokens identified as \u2329unk\u232a\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.idw(\"<date>\")\n6\n>>>\n>>> asc.idw(\"<time>\")\n7\n>>>\n>>> 
asc.idw(\"<abbr>\")\n5\n>>>\n>>> asc.idw(\"<math>\")\n9\n>>>\n>>> asc.setTokenDisable(\"date|time|abbr|math\")\n>>>\n>>> asc.getTokensDisable()\n{9, 5, 6, 7}\n>>>\n>>> asc.setTokensDisable({6, 7, 5, 9})\n>>>\n>>> asc.setTokenUnknown(\"date|time|abbr|math\")\n>>>\n>>> asc.getTokensUnknown()\n{9, 5, 6, 7}\n>>>\n>>> asc.setTokensUnknown({6, 7, 5, 9})\n>>>\n>>> asc.setAllTokenDisable()\n>>>\n>>> asc.getTokensDisable()\n{2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23}\n>>>\n>>> asc.setAllTokenUnknown()\n>>>\n>>> asc.getTokensUnknown()\n{2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23}\n```\n\n---\n\n### Methods:\n- **countAlphabet** - Method of obtaining the number of letters in the dictionary\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.getAlphabet()\n'abcdefghijklmnopqrstuvwxyz'\n>>>\n>>> asc.countAlphabet()\n26\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.countAlphabet()\n59\n```\n\n---\n\n### Methods:\n- **countBigrams** - Method get count bigrams\n- **countTrigrams** - Method get count trigrams\n- **countGrams** - Method get count N-gram by lm size\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.countBigrams(\"\u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b 
\u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440???\u0441 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....\")\n12\n>>>\n>>> asc.countTrigrams(\"\u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440???\u0441 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....\")\n10\n>>>\n>>> asc.size()\n3\n>>>\n>>> asc.countGrams(\"\u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440???\u0441 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....\")\n10\n>>>\n>>> asc.idw(\"\u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e\")\n3263936167\n>>>\n>>> 
asc.idw(\"\u0438\u0437\")\n5134\n>>>\n>>> asc.idw(\"\u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438\")\n12535356101\n>>>\n>>> asc.idw(\"\u0432\")\n53\n>>>\n>>> asc.idw(\"\u041e\u043b\u0435\u0433\u0430\")\n2824508300\n>>>\n>>> asc.idw(\"\u0443\u0434\u0430\u0440\u0438\u043b\")\n24816796913\n>>>\n>>> asc.countBigrams([3263936167, 5134, 12535356101, 53, 2824508300, 24816796913])\n5\n>>>\n>>> asc.countTrigrams([3263936167, 5134, 12535356101, 53, 2824508300, 24816796913])\n4\n>>>\n>>> asc.countGrams([3263936167, 5134, 12535356101, 53, 2824508300, 24816796913])\n4\n```\n\n---\n\n### Methods:\n- **arabic2Roman** - Convert arabic number to roman number\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.arabic2Roman(23)\n'XXIII'\n>>>\n>>> asc.arabic2Roman(\"33\")\n'XXXIII'\n```\n\n---\n\n### Methods:\n- **setThreads** - Method for set the number of threads (0 - all threads)\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.setThreads(3)\n>>>\n>>> a = asc.pplByFiles(\"./text.txt\")\n>>>\n>>> print(a.logprob)\n-48201.29481399994\n```\n\n---\n\n### Methods:\n- **fti** - Method for removing the fractional part of a number\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.fti(5892.4892)\n5892489200000\n>>>\n>>> asc.fti(5892.4892, 4)\n58924892\n```\n\n---\n\n### Methods:\n- **context** - Method for assembling text context from a sequence\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> 
asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.idw(\"\u043d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e\")\n3263936167\n>>>\n>>> asc.idw(\"\u0438\u0437\")\n5134\n>>>\n>>> asc.idw(\"\u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438\")\n12535356101\n>>>\n>>> asc.idw(\"\u0432\")\n53\n>>>\n>>> asc.idw(\"\u041e\u043b\u0435\u0433\u0430\")\n2824508300\n>>>\n>>> asc.idw(\"\u0443\u0434\u0430\u0440\u0438\u043b\")\n24816796913\n>>>\n>>> asc.context([3263936167, 5134, 12535356101, 53, 2824508300, 24816796913])\n'\u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b'\n```\n\n---\n\n### Methods:\n- **isAbbr** - Method of checking a word for compliance with an abbreviation\n- **isSuffix** - Method for checking a word for a suffix of a numeric abbreviation\n- **isToken** - Method for checking if an identifier matches a token\n- **isIdWord** - Method for checking if an identifier matches a word\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.addAbbr(\"\u0421\u0428\u0410\")\n>>>\n>>> asc.isAbbr(\"\u0441\u0448\u0430\")\nTrue\n>>>\n>>> asc.addSuffix(\"1-\u044f\")\n>>>\n>>> asc.isSuffix(\"1-\u044f\")\nTrue\n>>>\n>>> asc.isToken(asc.idw(\"\u0421\u0428\u0410\"))\nTrue\n>>>\n>>> 
asc.isToken(asc.idw(\"1-\u044f\"))\nTrue\n>>>\n>>> asc.isToken(asc.idw(\"125\"))\nTrue\n>>>\n>>> asc.isToken(asc.idw(\"<s>\"))\nTrue\n>>>\n>>> asc.isToken(asc.idw(\"Hello\"))\nFalse\n>>>\n>>> asc.isIdWord(asc.idw(\"https://anyks.com\"))\nTrue\n>>>\n>>> asc.isIdWord(asc.idw(\"Hello\"))\nTrue\n>>>\n>>> asc.isIdWord(asc.idw(\"-\"))\nFalse\n```\n\n---\n\n### Methods:\n- **findByFiles** - Method search N-grams in a text file\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.debug)\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.findByFiles(\"./text.txt\", \"./result.txt\")\ninfo: <s> \u041a\u0443\u043a\u0430\u0439\n\u0441\u0430\u0440\u0438 \u043a\u0443\u043a\u0430\u0439\n\u0441\u0430\u0440\u0438 \u044f\u043f\u043e\u043d\u0441\u043a\u0438\u0435\n\u044f\u043f\u043e\u043d\u0441\u043a\u0438\u0435 \u043a\u0430\u043b\u043b\u0438\u0433\u0440\u0430\u0444\u044b\n\u043a\u0430\u043b\u043b\u0438\u0433\u0440\u0430\u0444\u044b \u044f\n\u044f \u043f\u043e\u0441\u0442\u043e\u044f\u043d\u043d\u043e\n\u043f\u043e\u0441\u0442\u043e\u044f\u043d\u043d\u043e \u043d\u0430\u0432\u0435\u0449\u0430\u043b\n\u043d\u0430\u0432\u0435\u0449\u0430\u043b \u0438\u0445\n\u0438\u0445 \u0442\u0430\u0439\u043d\u043e\n\u0442\u0430\u0439\u043d\u043e \u043e\u0442\n\u043e\u0442 \u043b\u044e\u0434\u0435\u0439\n\u043b\u044e\u0434\u0435\u0439 </s>\n\n\ninfo: <s> \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437\n\u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438\n\u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 
\u0432\n\u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430\n\u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b\n\u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439\n\u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440\n\u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439\n\u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440\n\u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440\n\n<s> \u0421 \u043b\u044f\u0437\u0433\u043e\u043c\n\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f\n\u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438\n\u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f\n\u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435\n\u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430\n\u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430\n```\n\n---\n\n### Methods:\n- **checkSequence** - Sequence Existence Method\n- **existSequence** - Method for checking the existence of a sequence, excluding non-word tokens\n- **checkByFiles** - Method for checking if a sequence exists in a text file\n\n### Example:\n```python\n>>> import asc\n>>>\n>>> asc.setOption(asc.options_t.debug)\n>>>\n>>> asc.setOption(asc.options_t.confidence)\n>>>\n>>> 
asc.setAlphabet(\"abcdefghijklmnopqrstuvwxyz\u0430\u0431\u0432\u0433\u0434\u0435\u0451\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\")\n>>>\n>>> asc.readArpa('./lm.arpa')\n>>>\n>>> asc.addAbbr(\"\u0421\u0428\u0410\")\n>>>\n>>> asc.isAbbr(\"\u0441\u0448\u0430\")\n>>>\n>>> asc.checkSequence(\"\u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u043e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b\")\nTrue\n>>>\n>>> asc.checkSequence(\"\u0421\u0435\u0433\u043e\u0434\u043d\u044f \u0441\u044b\u0433\u0440\u0430\u043b \u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440 \u0441 \u043a\u043e\u0440\u043f\u043e\u0440\u0430\u0442\u0438\u0432\u043d\u044b\u043c \u0441\u0435\u043a\u0442\u043e\u0440\u043e\u043c\")\nTrue\n>>>\n>>> asc.checkSequence(\"\u0421\u0435\u0433\u043e\u0434\u043d\u044f \u0441\u044b\u0433\u0440\u0430\u043b \u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440 \u0441 \u043a\u043e\u0440\u043f\u043e\u0440\u0430\u0442\u0438\u0432\u043d\u044b\u043c \u0441\u0435\u043a\u0442\u043e\u0440\u043e\u043c\", True)\nTrue\n>>>\n>>> asc.checkSequence(\"\u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439\")\nTrue\n>>>\n>>> asc.checkSequence(\"\u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439\", True)\nTrue\n>>>\n>>> 
asc.checkSequence(\"\u043e\u0442 \u0433\u043e\u0441\u0441\u0435\u043a\u0440\u0435\u0442\u0430\u0440\u044f \u0421\u0428\u0410\")\nTrue\n>>>\n>>> asc.checkSequence(\"\u043e\u0442 \u0433\u043e\u0441\u0441\u0435\u043a\u0440\u0435\u0442\u0430\u0440\u044f \u0421\u0428\u0410\", True)\nTrue\n>>>\n>>> asc.checkSequence(\"\u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u043e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b\", 2)\nTrue\n>>>\n>>> asc.checkSequence([\"\u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e\",\"\u0438\u0437\",\"\u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438\",\"\u0432\",\"\u043e\u043b\u0435\u0433\u0430\",\"\u0443\u0434\u0430\u0440\u0438\u043b\"], 2)\nTrue\n>>>\n>>> asc.existSequence(\"<s> \u0421\u0435\u0433\u043e\u0434\u043d\u044f \u0441\u044b\u0433\u0440\u0430\u043b \u0438 \u0432, \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440, \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440 - \u0441 \u043a\u043e\u0440\u043f\u043e\u0440\u0430\u0442\u0438\u0432\u043d\u044b\u043c \u0441\u0435\u043a\u0442\u043e\u0440\u043e\u043c </s>\", 2)\n(True, 0)\n>>>\n>>> asc.existSequence([\"<s>\",\"\u0421\u0435\u0433\u043e\u0434\u043d\u044f\",\"\u0441\u044b\u0433\u0440\u0430\u043b\",\"\u0438\",\"\u0432\",\",\",\"\u041e\u043b\u0435\u0433\u0430\",\"\u0443\u0434\u0430\u0440\u0438\u043b\",\"\u044f\u0440\u043a\u0438\u0439\",\"\u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440\",\",\",\"\u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439\",\"\u0442\u0440\u0430\u043a\u0442\u043e\u0440\",\"-\",\"\u0441\",\"\u043a\u043e\u0440\u043f\u043e\u0440\u0430\u0442\u0438\u0432\u043d\u044b\u043c\",\"\u0441\u0435\u043a\u0442\u043e\u0440\u043e\u043c\",\"</s>\"], 2)\n(True, 2)\n>>>\n>>> 
asc.idw(\"\u043e\u0442\")\n6086\n>>>\n>>> asc.idw(\"\u0433\u043e\u0441\u0441\u0435\u043a\u0440\u0435\u0442\u0430\u0440\u044f\")\n51273912082\n>>>\n>>> asc.idw(\"\u0421\u0428\u0410\")\n5\n>>>\n>>> asc.checkSequence([6086, 51273912082, 5])\nTrue\n>>>\n>>> asc.checkSequence([6086, 51273912082, 5], True)\nTrue\n>>>\n>>> asc.checkSequence([\"\u043e\u0442\", \"\u0433\u043e\u0441\u0441\u0435\u043a\u0440\u0435\u0442\u0430\u0440\u044f\", \"\u0421\u0428\u0410\"])\nTrue\n>>>\n>>> asc.checkSequence([\"\u043e\u0442\", \"\u0433\u043e\u0441\u0441\u0435\u043a\u0440\u0435\u0442\u0430\u0440\u044f\", \"\u0421\u0428\u0410\"], True)\nTrue\n>>>\n>>> asc.checkByFiles(\"./text.txt\", \"./result.txt\")\ninfo: 1999 | YES | \u041a\u0430\u043a\u043e\u0439-\u0442\u043e \u043f\u0435\u0440\u0438\u043e\u0434 \u0432\u0440\u0435\u043c\u0435\u043d\u0438 \u043c\u044b \u0432\u043e\u043e\u0431\u0449\u0435 \u043d\u0435 \u043e\u0431\u0449\u0430\u043b\u0438\u0441\u044c\n\ninfo: 2000 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440.\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430.\n\ninfo: 2001 | YES | \u0422\u0430\u043a \u043a\u0430\u043a \u044d\u0442\u0438 \u044f\u0439\u0446\u0430 \u0436\u0430\u043b\u043a\u043e \u0435\u0441\u0442\u044c \u0430 \u0445\u043e\u0447\u0435\u0442\u0441\u044f \u0432\u0441\u0435 \u0431\u043e\u043b\u044c\u0448\u0435 \u043b\u044e\u0431\u043e\u0432\u0430\u0442\u044c\u0441\u044f \u0438\u0445 \u043c\u043e\u0436\u043d\u043e 
\u043f\u043e\u043a\u0440\u044b\u0442\u044c \u043b\u0430\u043a\u043e\u043c \u0434\u0430\u0436\u0435 \u043f\u0440\u043e\u0437\u0440\u0430\u0447\u043d\u044b\u043c \u043b\u0430\u043a\u043e\u043c \u0434\u043b\u044f \u043d\u043e\u0433\u0442\u0435\u0439\n\ninfo: 2002 | NO | \u043a\u0443\u043a\u0430\u0439 <unk> <unk> \u0441\u0430\u0440\u0438 \u043a\u0443\u043a\u0430\u0439 <unk> <unk> \u0441\u0430\u0440\u0438 \u044f\u043f\u043e\u043d\u0441\u043a\u0438\u0435 \u043a\u0430\u043b\u043b\u0438\u0433\u0440\u0430\u0444\u044b \u044f \u043f\u043e\u0441\u0442\u043e\u044f\u043d\u043d\u043e \u043d\u0430\u0432\u0435\u0449\u0430\u043b \u0438\u0445 \u0442\u0430\u0439\u043d\u043e \u043e\u0442 \u043b\u044e\u0434\u0435\u0439\n\ninfo: 2003 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440???\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....\n\ninfo: 2004 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440?\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f 
\u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430.\n\ninfo: 2005 | YES | \u0421\u0435\u0433\u043e\u0434\u043d\u044f \u044f\u0438\u0447\u043d\u0438\u0446\u0435\u0439 \u043d\u0438\u043a\u0442\u043e \u043d\u0435 \u0437\u0430\u0432\u0442\u0440\u0430\u043a\u0430\u043b \u043a\u0430\u043a \u0432\u043f\u0440\u043e\u0447\u0435\u043c \u0438 \u0432\u0447\u0435\u0440\u0430 \u043d\u0430 \u0431\u043b\u0438\u0436\u0430\u0439\u0448\u0435\u043c \u043a \u043d\u0430\u043c \u0440\u044b\u043d\u043a\u0435 \u043c\u044b \u0435\u043b\u0438 \u0444\u0440\u0443\u043a\u0442\u043e\u0432\u044b\u0439 \u0441\u0430\u043b\u0430\u0442 \u0441\u043e \u0441\u0432\u0435\u0436\u0435\u0432\u044b\u0436\u0430\u0442\u044b\u043c \u0441\u043e\u043a\u043e\u043c \u043a\u0430\u043a \u0432 \u0441\u0442\u0430\u0440\u044b\u0435 \u0434\u043e\u0431\u0440\u044b\u0435 \u0432\u0440\u0435\u043c\u0435\u043d\u0430 \u0432 \u0411\u0440\u0430\u0437\u0438\u043b\u0438\u0438\n\ninfo: 2006 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440!\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430.\n\ninfo: 2007 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 
\u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440.\u0441 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430.\n\nAll texts: 2007\nExists texts: 1359\nNot exists texts: 648\n>>>\n>>> asc.checkByFiles(\"./corpus\", \"./result.txt\", False, \"txt\")\ninfo: 1999 | YES | \u041a\u0430\u043a\u043e\u0439-\u0442\u043e \u043f\u0435\u0440\u0438\u043e\u0434 \u0432\u0440\u0435\u043c\u0435\u043d\u0438 \u043c\u044b \u0432\u043e\u043e\u0431\u0449\u0435 \u043d\u0435 \u043e\u0431\u0449\u0430\u043b\u0438\u0441\u044c\n\ninfo: 2000 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440.\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430.\n\ninfo: 2001 | YES | \u0422\u0430\u043a \u043a\u0430\u043a \u044d\u0442\u0438 \u044f\u0439\u0446\u0430 \u0436\u0430\u043b\u043a\u043e \u0435\u0441\u0442\u044c \u0430 \u0445\u043e\u0447\u0435\u0442\u0441\u044f \u0432\u0441\u0435 \u0431\u043e\u043b\u044c\u0448\u0435 \u043b\u044e\u0431\u043e\u0432\u0430\u0442\u044c\u0441\u044f \u0438\u0445 \u043c\u043e\u0436\u043d\u043e \u043f\u043e\u043a\u0440\u044b\u0442\u044c \u043b\u0430\u043a\u043e\u043c \u0434\u0430\u0436\u0435 \u043f\u0440\u043e\u0437\u0440\u0430\u0447\u043d\u044b\u043c \u043b\u0430\u043a\u043e\u043c 
\u0434\u043b\u044f \u043d\u043e\u0433\u0442\u0435\u0439\n\ninfo: 2002 | NO | \u043a\u0443\u043a\u0430\u0439 <unk> <unk> \u0441\u0430\u0440\u0438 \u043a\u0443\u043a\u0430\u0439 <unk> <unk> \u0441\u0430\u0440\u0438 \u044f\u043f\u043e\u043d\u0441\u043a\u0438\u0435 \u043a\u0430\u043b\u043b\u0438\u0433\u0440\u0430\u0444\u044b \u044f \u043f\u043e\u0441\u0442\u043e\u044f\u043d\u043d\u043e \u043d\u0430\u0432\u0435\u0449\u0430\u043b \u0438\u0445 \u0442\u0430\u0439\u043d\u043e \u043e\u0442 \u043b\u044e\u0434\u0435\u0439\n\ninfo: 2003 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440???\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430....\n\ninfo: 2004 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440?\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430.\n\ninfo: 2005 | YES | \u0421\u0435\u0433\u043e\u0434\u043d\u044f \u044f\u0438\u0447\u043d\u0438\u0446\u0435\u0439 
\u043d\u0438\u043a\u0442\u043e \u043d\u0435 \u0437\u0430\u0432\u0442\u0440\u0430\u043a\u0430\u043b \u043a\u0430\u043a \u0432\u043f\u0440\u043e\u0447\u0435\u043c \u0438 \u0432\u0447\u0435\u0440\u0430 \u043d\u0430 \u0431\u043b\u0438\u0436\u0430\u0439\u0448\u0435\u043c \u043a \u043d\u0430\u043c \u0440\u044b\u043d\u043a\u0435 \u043c\u044b \u0435\u043b\u0438 \u0444\u0440\u0443\u043a\u0442\u043e\u0432\u044b\u0439 \u0441\u0430\u043b\u0430\u0442 \u0441\u043e \u0441\u0432\u0435\u0436\u0435\u0432\u044b\u0436\u0430\u0442\u044b\u043c \u0441\u043e\u043a\u043e\u043c \u043a\u0430\u043a \u0432 \u0441\u0442\u0430\u0440\u044b\u0435 \u0434\u043e\u0431\u0440\u044b\u0435 \u0432\u0440\u0435\u043c\u0435\u043d\u0430 \u0432 \u0411\u0440\u0430\u0437\u0438\u043b\u0438\u0438\n\ninfo: 2006 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440!\u0421 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u043b\u0441\u044f \u0432\u043e\u0437\u043b\u0435 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430.\n\ninfo: 2007 | NO | \u041d\u0435\u043e\u0436\u0438\u0434\u0430\u043d\u043d\u043e \u0438\u0437 \u043f\u043e\u0434\u0432\u043e\u0440\u043e\u0442\u043d\u0438 \u0432 \u041e\u043b\u0435\u0433\u0430 \u0443\u0434\u0430\u0440\u0438\u043b \u044f\u0440\u043a\u0438\u0439 \u043f\u0440\u043e\u0436\u0435\u043a\u0442\u043e\u0440 \u043f\u0430\u0442\u0440\u0443\u043b\u044c\u043d\u044b\u0439 \u0442\u0440\u0430\u043a\u0442\u043e\u0440.\u0441 \u043b\u044f\u0437\u0433\u043e\u043c \u0432\u044b\u043a\u0430\u0442\u0438\u043b\u0441\u044f \u0438 
остановился возле мальчика.

All texts: 2007
Exists texts: 1359
Not exists texts: 648
>>>
>>> asc.checkByFiles("./corpus", "./result.txt", True, "txt")
info: 2000 | NO | Так как эти яйца жалко есть а хочется все больше любоваться их можно покрыть лаком даже прозрачным лаком для ногтей

info: 2001 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.С лязгом выкатился и остановился возле мальчика.

info: 2002 | NO | Сегодня яичницей никто не завтракал как впрочем и вчера на ближайшем к нам рынке мы ели фруктовый салат со свежевыжатым соком как в старые добрые времена в Бразилии

info: 2003 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор!С лязгом выкатился и остановился возле мальчика.

info: 2004 | NO | кукай <unk> <unk> сари кукай <unk> <unk> сари японские каллиграфы я постоянно навещал их тайно от людей

info: 2005 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор?С лязгом выкатился и остановился возле мальчика.

info: 2006 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???С лязгом выкатился и остановился возле мальчика....

info: 2007 | NO | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.с лязгом выкатился и остановился возле мальчика.

All texts: 2007
Exists texts: 0
Not exists texts: 2007
```

---

### Methods:
- **check** - String check method
- **match** - String matching method
- **addAbbr** - Method for adding an abbreviation
- **addSuffix** - Method for adding a numeric suffix abbreviation
- **setSuffixes** - Method for setting numeric suffix abbreviations
- **readSuffix** - Method for reading suffixes and abbreviations from a file

### Example:
```python
>>> import asc
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>> asc.setSubstitutes({'p':'р','c':'с','o':'о','t':'т','k':'к','e':'е','a':'а','h':'н','x':'х','b':'в','m':'м'})
>>>
>>> asc.check("Дом-2", asc.check_t.home2)
True
>>>
>>> asc.check("Дом2", asc.check_t.home2)
False
>>>
>>> asc.check("Дом-2", asc.check_t.latian)
False
>>>
>>> asc.check("Hello", asc.check_t.latian)
True
>>>
>>> asc.check("прiвет", asc.check_t.latian)
True
>>>
>>> asc.check("Дом-2", asc.check_t.hyphen)
True
>>>
>>> asc.check("Дом2", asc.check_t.hyphen)
False
>>>
>>> asc.check("Д", asc.check_t.letter)
True
>>>
>>> asc.check("$", asc.check_t.letter)
False
>>>
>>> asc.check("-", asc.check_t.letter)
False
>>>
>>> asc.check("просtоквaшино", asc.check_t.similars)
True
>>>
>>> asc.match("my site http://example.ru, it's true", asc.match_t.url)
True
>>>
>>> asc.match("по вашему ip адресу 46.40.123.12 проводится проверка", asc.match_t.url)
True
>>>
>>> asc.match("мой адрес в формате IPv6: http://[2001:0db8:11a3:09d7:1f34:8a2e:07a0:765d]/", asc.match_t.url)
True
>>>
>>> asc.match("13-я", asc.match_t.abbr)
True
>>>
>>> asc.match("13-я-й", asc.match_t.abbr)
False
>>>
>>> asc.match("т.д", asc.match_t.abbr)
True
>>>
>>> asc.match("т.п.", asc.match_t.abbr)
True
>>>
>>> asc.match("С.Ш.А.", asc.match_t.abbr)
True
>>>
>>> asc.addAbbr("сша")
>>> asc.match("США", asc.match_t.abbr)
True
>>>
>>> asc.addSuffix("15-летия")
>>> asc.match("15-летия", asc.match_t.abbr)
True
>>>
>>> asc.getSuffixes()
{3139900457}
>>>
>>> asc.idw("лет")
328041
>>>
>>> asc.idw("тых")
352214
>>>
>>> asc.setSuffixes({328041, 352214})
>>>
>>> asc.getSuffixes()
{328041, 352214}
>>>
>>> def status(status):
...     print(status)
...
>>> asc.readSuffix("./suffix.abbr", status)
>>>
>>> asc.match("15-лет", asc.match_t.abbr)
True
>>>
>>> asc.match("20-тых", asc.match_t.abbr)
True
>>>
>>> asc.match("15-летия", asc.match_t.abbr)
False
>>>
>>> asc.match("Hello", asc.match_t.latian)
True
>>>
>>> asc.match("прiвет", asc.match_t.latian)
False
>>>
>>> asc.match("23424", asc.match_t.number)
True
>>>
>>> asc.match("hello", asc.match_t.number)
False
>>>
>>> asc.match("23424.55", asc.match_t.number)
False
>>>
>>> asc.match("23424", asc.match_t.decimal)
False
>>>
>>> asc.match("23424.55", asc.match_t.decimal)
True
>>>
>>> asc.match("23424,55", asc.match_t.decimal)
True
>>>
>>> asc.match("-23424.55", asc.match_t.decimal)
True
>>>
>>> asc.match("+23424.55", asc.match_t.decimal)
True
>>>
>>> asc.match("+23424.55", asc.match_t.anumber)
True
>>>
>>> asc.match("15T-34", asc.match_t.anumber)
True
>>>
>>> asc.match("hello", asc.match_t.anumber)
False
>>>
>>> asc.match("hello", asc.match_t.allowed)
True
>>>
>>> asc.match("évaluer", asc.match_t.allowed)
False
>>>
>>> asc.match("13", asc.match_t.allowed)
True
>>>
>>> asc.match("Hello-World", asc.match_t.allowed)
True
>>>
>>> asc.match("Hello", asc.match_t.math)
False
>>>
>>> asc.match("+", asc.match_t.math)
True
>>>
>>> asc.match("=", asc.match_t.math)
True
>>>
>>> asc.match("Hello", asc.match_t.upper)
True
>>>
>>> asc.match("hello", asc.match_t.upper)
False
>>>
>>> asc.match("hellO", asc.match_t.upper)
False
>>>
>>> asc.match("a", asc.match_t.punct)
False
>>>
>>> asc.match(",", asc.match_t.punct)
True
>>>
>>> asc.match(" ", asc.match_t.space)
True
>>>
>>> asc.match("a", asc.match_t.space)
False
>>>
>>> asc.match("a", asc.match_t.special)
False
>>>
>>> asc.match("±", asc.match_t.special)
False
>>>
>>> asc.match("[", asc.match_t.isolation)
True
>>>
>>> asc.match("a", asc.match_t.isolation)
False
>>>
>>> asc.match("a", asc.match_t.greek)
False
>>>
>>> asc.match("Ψ", asc.match_t.greek)
True
>>>
>>> asc.match("->", asc.match_t.route)
False
>>>
>>> asc.match("⇔", asc.match_t.route)
True
>>>
>>> asc.match("a", asc.match_t.letter)
True
>>>
>>> asc.match("!", asc.match_t.letter)
False
>>>
>>> asc.match("!", asc.match_t.pcards)
False
>>>
>>> asc.match("♣", asc.match_t.pcards)
True
>>>
>>> asc.match("p", asc.match_t.currency)
False
>>>
>>> asc.match("$", asc.match_t.currency)
True
>>>
>>> asc.match("€", asc.match_t.currency)
True
>>>
>>> asc.match("₽", asc.match_t.currency)
True
>>>
>>> asc.match("₿", asc.match_t.currency)
True
```

---

### Methods:
- **delInText** - Method for deleting characters from text

### Example:
```python
>>> import asc
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.delInText("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор??? с лязгом выкатился и остановился возле мальчика....", asc.wdel_t.punct)
'неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор с лязгом выкатился и остановился возле мальчика'
>>>
>>> asc.delInText("hello-world-hello-world", asc.wdel_t.hyphen)
'helloworldhelloworld'
>>>
>>> asc.delInText("неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор??? с лязгом выкатился и остановился возле мальчика....", asc.wdel_t.broken)
'неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор с лязгом выкатился и остановился возле мальчика'
>>>
>>> asc.delInText("«On nous dit qu'aujourd'hui c'est le cas, encore faudra-t-il l'évaluer» l'astronomie", asc.wdel_t.broken)
"On nous dit qu'aujourd'hui c'est le cas encore faudra-t-il l'valuer l'astronomie"
```

---

### Methods:
- **countsByFiles** - Method for counting the number of n-grams in a text file

### Example:
```python
>>> import asc
>>>
>>> asc.setOption(asc.options_t.debug)
>>>
>>> asc.setOption(asc.options_t.confidence)
>>>
>>> asc.setAlphabet("abcdefghijklmnopqrstuvwxyzабвгдеёжзийклмнопрстуфхцчшщъыьэюя")
>>>
>>> asc.readArpa('./lm.arpa')
>>>
>>> asc.countsByFiles("./text.txt", "./result.txt", 3)
info: 0 | Сегодня яичницей никто не завтракал как впрочем и вчера на ближайшем к нам рынке мы ели фруктовый салат со свежевыжатым соком как в старые добрые времена в Бразилии

info: 10 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор?С лязгом выкатился и остановился возле мальчика.

info: 10 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор!С лязгом выкатился и остановился возле мальчика.

info: 0 | Так как эти яйца жалко есть а хочется все больше любоваться их можно покрыть лаком даже прозрачным лаком для ногтей

info: 10 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???С лязгом выкатился и остановился возле мальчика....

Counts 3grams: 471
>>>
>>> asc.countsByFiles("./corpus", "./result.txt", 2, "txt")
info: 19 | Так как эти яйца жалко есть а хочется все больше любоваться их можно покрыть лаком даже прозрачным лаком для ногтей

info: 12 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор.с лязгом выкатился и остановился возле мальчика.

info: 12 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор!С лязгом выкатился и остановился возле мальчика.

info: 10 | кукай <unk> <unk> сари кукай <unk> <unk> сари японские каллиграфы я постоянно навещал их тайно от людей

info: 12 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор???С лязгом выкатился и остановился возле мальчика....

info: 12 | Неожиданно из подворотни в Олега ударил яркий прожектор патрульный трактор?С лязгом выкатился и остановился возле мальчика.

info: 27 | Сегодня яичницей никто не завтракал как впрочем и вчера на ближайшем к нам рынке мы ели фруктовый салат со свежевыжатым соком как в старые добрые времена в Бразилии

Counts 2grams: 20270
```

### Description
| N-gram size | Description         |
|-------------|---------------------|
| 1           | language model size |
| 2           | bigram              |
| 3           | trigram             |
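The n-gram sizes in the table above correspond to counting contiguous token windows over a text. As a rough illustration of what `countsByFiles` tallies (the `count_ngrams` helper below is hypothetical, not part of the `asc` API), a minimal pure-Python n-gram counter looks like this:

```python
from collections import Counter

def count_ngrams(tokens, n):
    """Count contiguous n-grams (tuples of n adjacent tokens)."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# 7 tokens yield 6 bigrams and 5 trigrams
tokens = "как в старые добрые времена в Бразилии".split()
print(sum(count_ngrams(tokens, 2).values()))  # 6
print(count_ngrams(tokens, 1)[("в",)])        # 2
```

A real language model additionally deduplicates and smooths these counts; this sketch only shows where the "Counts 2grams / 3grams" totals come from.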
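The `setSubstitutes` call in the examples above maps Latin lookalike letters to their Cyrillic counterparts, which is what lets `check_t.similars` flag a mixed-script word such as "просtоквaшино". The snippet below is a minimal self-contained sketch of that idea; the helper names `has_similars` and `normalize` are hypothetical and not part of the `asc` API:

```python
# Latin letters that are visually identical to Cyrillic ones
SUBSTITUTES = {'p': 'р', 'c': 'с', 'o': 'о', 't': 'т', 'k': 'к', 'e': 'е',
               'a': 'а', 'h': 'н', 'x': 'х', 'b': 'в', 'm': 'м'}

def has_similars(word):
    """True if the word mixes Cyrillic letters with Latin lookalikes."""
    has_cyr = any('а' <= ch.lower() <= 'я' or ch.lower() == 'ё' for ch in word)
    has_lat = any(ch.lower() in SUBSTITUTES for ch in word)
    return has_cyr and has_lat

def normalize(word):
    """Replace Latin lookalikes with their Cyrillic counterparts."""
    return ''.join(SUBSTITUTES.get(ch, ch) for ch in word)

print(has_similars("просtоквaшино"))  # True: Latin 't' and 'a' in a Cyrillic word
print(normalize("просtоквaшино"))     # простоквашино
```

The real checker works the other way around as well (it can keep or restore either script); this sketch only demonstrates the lookalike-detection principle.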
    "bugtrack_url": null,
    "license": "",
    "summary": "Smart language model",
    "version": "1.2.6",
    "split_keywords": [
        "nlp",
        "lm",
        "alm",
        "language-model",
        "spellchecker",
        "spell-checker",
        "anyks"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "dafc92d6fcd5b26f630b33d2d9956719becf167e8fe15da926af4f9d04a2cd93",
                "md5": "27b4a046f71ce9cfc7dbb827c27ada11",
                "sha256": "aa6a3b66678f4b7d962d0789444d6244de4c927e8086dce619004107ae8a69b3"
            },
            "downloads": -1,
            "filename": "anyks_sc-1.2.6-cp39-cp39-macosx_10_9_universal2.whl",
            "has_sig": false,
            "md5_digest": "27b4a046f71ce9cfc7dbb827c27ada11",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.6",
            "size": 2499872,
            "upload_time": "2023-01-05T00:09:46",
            "upload_time_iso_8601": "2023-01-05T00:09:46.799493Z",
            "url": "https://files.pythonhosted.org/packages/da/fc/92d6fcd5b26f630b33d2d9956719becf167e8fe15da926af4f9d04a2cd93/anyks_sc-1.2.6-cp39-cp39-macosx_10_9_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "fa0ae71554f1551e1f68c508437a967f6e808b39e8db213a139dcce750c3e7c5",
                "md5": "6fee3b2549e53bc33d66eedc085c927b",
                "sha256": "15fa598fe35ba870806c699b228bcf378fc570c6362abf9286c12d00eb59ce63"
            },
            "downloads": -1,
            "filename": "anyks-sc-1.2.6.tar.gz",
            "has_sig": false,
            "md5_digest": "6fee3b2549e53bc33d66eedc085c927b",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.6",
            "size": 548211,
            "upload_time": "2023-01-05T00:09:51",
            "upload_time_iso_8601": "2023-01-05T00:09:51.280896Z",
            "url": "https://files.pythonhosted.org/packages/fa/0a/e71554f1551e1f68c508437a967f6e808b39e8db213a139dcce750c3e7c5/anyks-sc-1.2.6.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-01-05 00:09:51",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "anyks",
    "github_project": "asc",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "anyks-sc"
}
        