# PyTextRust
- **main**:
- [![pipeline status](https://gitlab.com/g6313/pytextrust/badges/main/pipeline.svg)](https://gitlab.com/g6313/pytextrust/-/commits/main)
- [![coverage report](https://gitlab.com/g6313/pytextrust/badges/main/coverage.svg)](https://gitlab.com/g6313/pytextrust/-/commits/main)
- **develop**:
- [![pipeline status](https://gitlab.com/g6313/pytextrust/badges/develop/pipeline.svg)](https://gitlab.com/g6313/pytextrust/-/commits/develop)
- [![coverage report](https://gitlab.com/g6313/pytextrust/badges/develop/coverage.svg)](https://gitlab.com/g6313/pytextrust/-/commits/develop)
A library for easily achieving high performance on regex and text processing inside Python, built as a direct wrapper of Rust regex and text crates.
On short texts, sparsity of found elements is the common denominator. This library focuses on algorithms that acknowledge this sparsity and efficiently achieve good performance, from simple Python API calls down to optimized Rust logic.
[Give some happiness](https://www.paypal.com/donate/?business=V4NHA93BU6WPA&no_recurring=0&item_name=The+children+need+to+eat+but+I+am+too+busy&currency_code=EUR)
# Features
## Special case
This lib has special treatment for texts that only contain `[a-zA-Z0-9ñç ]` plus accented vowels, allowing non-Unicode matching over those texts. This is particularly convenient for some Automatic Speech Recognition outputs.
Wherever these options can be provided:
- `unicode`: `False` -> removes Unicode handling from matching, making it much more efficient (a x6 - x12 speedup is easily achieved).
- `substitute_bound`: `True` -> substitutes `r"\b"` in patterns for `r"(?-u:\b)"`, as recommended [here](https://github.com/rust-lang/regex/blob/master/PERFORMANCE.md#unicode-word-boundaries-may-prevent-the-dfa-from-being-used)
- `substitute_latin_char`: `True` -> substitutes `pkg::constants::LATIN_CHARS_TO_REPLACE` in patterns **for** `pkg::constants::LATIN_CHARS_REPLACEMENT`, allowing the non-Unicode variant to be used without losing the ability to match texts and patterns that contain those Latin chars (beware: **it projects them into `pkg::constants::LATIN_CHARS_REPLACEMENT` in both patterns and texts**).
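As a rough sketch of what these substitutions do, the following Python snippet mimics them. The accent table here is an assumed small subset of `pkg::constants::LATIN_CHARS_TO_REPLACE`; the real substitution happens in Rust:

```python
# Assumed subset of the latin-char projection table (the actual constants
# live in the Rust crate as pkg::constants::LATIN_CHARS_TO_REPLACE and
# pkg::constants::LATIN_CHARS_REPLACEMENT).
ACCENT_MAP = str.maketrans("áéíóú", "aeiou")


def prepare_pattern(pattern: str, substitute_bound: bool = True,
                    substitute_latin_char: bool = True) -> str:
    if substitute_bound:
        # \b -> (?-u:\b) so the Rust regex engine can keep using the fast DFA
        pattern = pattern.replace(r"\b", r"(?-u:\b)")
    if substitute_latin_char:
        # project accented chars so the non-Unicode engine still matches them
        pattern = pattern.translate(ACCENT_MAP)
    return pattern


def prepare_text(text: str) -> str:
    # texts are projected with the same table so patterns and texts stay aligned
    return text.translate(ACCENT_MAP)
```

Note that because both patterns and texts go through the same projection, matching behavior is preserved for the projected alphabet.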
## Find
Find patterns in texts, possibly parallelizing by chunks of either patterns or texts.
It uses the efficient [`regex::RegexSet`](https://docs.rs/regex/latest/regex/struct.RegexSet.html), which reduces the cardinality of the pattern set during the matching phase.
The structure of the find function is:
- Rust phase:
  1. Try to compile each pattern with `regex::Regex`. Split the list into valid and invalid patterns.
  2. Compile a `regex::RegexSet` with the valid patterns and apply it over the list of texts. This identifies which patterns match which texts.
  3. Run the compiled `regex::Regex` instances over the texts, but only for the (pattern, text) pairs that matched in the `regex::RegexSet`.
  4. Try to compile the invalid patterns with `fancy_regex::Regex` and find their matches over the texts. This reduces the final invalid pattern list that is given back to Python.
  5. Give the matches of valid and invalid patterns back to Python.
- Python phase:
1. Try to apply all remaining failed patterns, finding them over all the texts. This uses the [`regex`](https://github.com/mrabarnett/mrab-regex) package,
  which has expanded pattern support over the built-in `re` package.
2. Return the final result.
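The Rust phase above can be sketched in pure Python, using the stdlib `re` as a stand-in for both `regex::Regex` and `regex::RegexSet` (the function name and return shapes here are illustrative, not the library's API):

```python
import re


def find_patterns(patterns, texts):
    # 1. Try to compile each pattern; split into valid and invalid.
    valid, invalid = {}, []
    for i, pat in enumerate(patterns):
        try:
            valid[i] = re.compile(pat)
        except re.error:
            invalid.append(i)  # retried later (fancy_regex, then Python's regex pkg)

    matches = {}
    for j, text in enumerate(texts):
        # 2. Cheap membership test per (pattern, text) pair, standing in for
        #    the single-pass regex::RegexSet check.
        hits = [i for i, rx in valid.items() if rx.search(text)]
        # 3. Full find only for the pairs that matched the set.
        for i in hits:
            matches[(i, j)] = [m.span() for m in valid[i].finditer(text)]
    return matches, invalid
```

The point of the two-stage design is that the set check is much cheaper than running every individual regex over every text, which pays off when matches are sparse.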
### Calling examples
## Literal replacer
This is a very concrete function to perform high-performance literal replacement using the Rust `aho_corasick` implementation. It accepts parallelization by chunks of texts.
It uses Rust [`aho_corasick`](https://docs.rs/aho-corasick/latest/aho_corasick/) to perform replacements, adding a layer of boundary checking around the literals to replace through the `is_bounded` parameter.
- If `is_bounded` is `True`, then before replacing a found literal it is checked that no char in `[A-Za-z0-9_]` (expanded with accents and special word chars, see `pkg::unicode::check_if_word_bytes`) is adjacent to the literal.
- The matching type can be chosen from those available in [`aho_corasick::MatchKind`](https://docs.rs/aho-corasick/latest/aho_corasick/enum.MatchKind.html#variants), the default being `aho_corasick::MatchKind::LeftmostLongest`.
More at [`notebook/doc/literal_replacer.ipynb`](https://gitlab.com/g6313/pytextrust/-/blob/main/notebook/doc/literal_replacer.ipynb) in the repository.
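As a minimal illustration of the `is_bounded` check, here is a Python sketch restricted to ASCII word chars (the real check in `pkg::unicode::check_if_word_bytes` also covers accented and other special word chars):

```python
import string

# ASCII approximation of the word-char set; the Rust implementation expands
# this with accented chars.
WORD_CHARS = set(string.ascii_letters + string.digits + "_")


def is_bounded_match(text: str, start: int, end: int) -> bool:
    """Accept a literal match at [start, end) only if no word char is glued
    to either side, emulating \\b boundaries around the literal."""
    before_ok = start == 0 or text[start - 1] not in WORD_CHARS
    after_ok = end == len(text) or text[end] not in WORD_CHARS
    return before_ok and after_ok
```

This is why, in the example below, `uno` is replaced as a standalone word but the `uno` inside `veintiuno` is left untouched.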
### Calling examples
```python
from pytextrust.replacer import replace_literal_patterns, MatchKind
replace_literal_patterns(
literal_patterns=["uno", "dos"],
replacements=["1", "2"],
text_to_replace=["es el numero uno o el Dos yo soy el veintiuno"],
is_bounded=True,
case_insensitive=True,
match_kind=MatchKind.LeftmostLongest)
```
which returns the replaced texts and the number of replacements performed:
```
(['es el numero 1 o el 2 yo soy el veintiuno'], 2)
```
## Entities
Entities are found with overlapping allowed and have a hierarchical folder structure.
- **Literal entities**: fast, literal-only entities. They are built from a list of strings and behave like matching the alternation `(lit_1|...|lit_N)`. They can be:
  - Private: only used by regex entities through composition. Their only interest is composition, so they are only matched, not reported.
  - Public: calculated and reported. The reports enforce that match boundaries are `\b`, just as if the literal matching were `\b(lit_1|...|lit_N)\b`. *Tech note: positions reported by Aho-Corasick must be mapped from byte to char positions.*
- **Regex entities**: a list of regex patterns, possibly containing literal entity calls via a template language. For example, if `month` is a literal entity,
then `\d+ of \d+ of {{month}}` is a possible entity. Regex entities that depend positively on a literal entity (no negative lookbehind or lookahead) are only searched on the texts where the literal entity has been found, minimizing computational weight.
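To make the template mechanism concrete, here is a hedged Python sketch of how a `{{month}}` call could expand into a literal alternation before compilation (the actual expansion lives in the Rust entity system; the names and pool here are illustrative):

```python
import re

# Toy literal entity pool; in the library this comes from the entity definitions.
LITERAL_ENTITIES = {"month": ["january", "february", "march"]}


def expand_template(pattern: str) -> str:
    """Replace each {{name}} call with a non-capturing alternation of the
    corresponding literal entity's strings."""
    def sub(m: re.Match) -> str:
        literals = LITERAL_ENTITIES[m.group(1)]
        return "(?:" + "|".join(map(re.escape, literals)) + ")"
    return re.sub(r"\{\{(\w+)\}\}", sub, pattern)
```

Because the literals are also fed to the Aho-Corasick pass, the expanded regex only needs to run on texts where the literal entity already matched.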
Entity definitions can be fed:
- From a Python list of objects, where each object is equivalent to a loaded JSON file. Each object contains a field `kind` with one of two values: `re` or `lit`.
- From a local folder tree, structured hierarchically.
Steps of entity recognition:
1. Load the entity system:
   - Deserialize all defined entities.
   - Build the `LiteralEntityPool`. There are public and private literal entities:
     - **Private literal entities** are not reported, only used internally by regex entities.
     - **Public literal entities** are reported as entities.
       **NOTE: the boundary check of public literal entities is computed afterwards, since Aho-Corasick does not support word boundaries.**
   - Build the `RegexEntityPool` using literals from the `LiteralEntityPool`. There are two kinds of regex entities:
     - Those that use some literal entity.
     - Those that do not use any literal entity.
2. Process the texts and get entities:
   - Get raw literal entity index matches.
   - Literal-based regex entities only perform a find when their required ordered set of literal entity matches is satisfied by the literal entity results.
   - The find for non-literal-based regex entities is performed using `regex::RegexSet`.
3. Assemble public literal entities, literal-based regex entities and non-literal-based regex entities together and return the output.
A pattern in a regex entity falls into one of these categories:
- Patterns that can be compiled by the `regex` crate:
  - Patterns with at least one positive capture group related to a literal entity. Their match is decided by Aho-Corasick and literal entity order. For these, `entities::extract_required_template_structure` returns a non-empty vector.
  - Patterns that do not fit the previous case; these are matched through `RegexSet`. For these, `entities::extract_required_template_structure` returns an empty vector.
- Patterns that cannot be compiled by the `regex` crate receive a direct find from the `fancy_regex` crate. For these, `entities::extract_required_template_structure` returns an Error.
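That three-way classification can be sketched in Python as follows (an illustrative mirror of what `entities::extract_required_template_structure` is described to do, using the stdlib `re` as a stand-in for the `regex` crate compile; this is not the library's API):

```python
import re


def classify_pattern(pattern: str):
    """Return one of:
    ('literal_driven', names) -- non-empty vector: decided by Aho-Corasick
    ('regex_set', [])         -- empty vector: matched through RegexSet
    ('fancy', None)           -- Error: direct find with fancy_regex
    """
    names = re.findall(r"\{\{(\w+)\}\}", pattern)
    if names:
        return ("literal_driven", names)
    try:
        re.compile(pattern)  # stand-in for the regex crate compile
        return ("regex_set", [])
    except re.error:
        return ("fancy", None)
```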
Naming convention for entity files is:
### Calling examples
# CICD
This repository aims to be a solid CICD example for a Python+Rust lib based on `pyo3`. For any suggestions (caching, badges, anything else), just let me know by opening an issue :)
# Useful doc
## Learning doc
- [The Rust Programming Language](https://doc.rust-lang.org/book/title-page.html)
- [Rust CookBook](https://rust-lang-nursery.github.io/rust-cookbook/intro.html)
- [Rust by example](https://doc.rust-lang.org/rust-by-example/)
- [PyO3](https://pyo3.rs/v0.16.3/index.html)
- [Maturin](https://maturin.rs/)
- [A comparison of regex engines](https://rust-leipzig.github.io/regex/2017/03/28/comparison-of-regex-engines/)
## Reference Rust pattern matching packages
- <https://docs.rs/fst/latest/fst/>, particularly <https://docs.rs/fst/latest/fst/#example-searching-multiple-sets-efficiently> for entities
- <https://docs.rs/regex-automata/latest/regex_automata/>
- <https://docs.rs/aho-corasick/0.7.18/aho_corasick/>
- <https://docs.rs/regex-syntax/latest/regex_syntax/>
## Performance advice
- <https://github.com/rust-lang/regex/blob/master/PERFORMANCE.md#unicode-word-boundaries-may-prevent-the-dfa-from-being-used>
- *there is no problem with using non-greedy matching or having lots of alternations in your regex* <https://github.com/rust-lang/regex/blob/master/PERFORMANCE.md#resist-the-temptation-to-optimize-regexes>
[**Benchmark by Rust regex author**](https://github.com/BurntSushi/rebar)