# Matcher Rust Implementation with PyO3 Binding
A high-performance, multi-functional word matcher implemented in Rust.
Designed to solve **AND OR NOT** and **TEXT VARIATIONS** problems in word/word_list matching. For detailed implementation, see the [Design Document](../DESIGN.md).
## Features
- **Multiple Matching Methods**:
- Simple Word Matching
- Regex-Based Matching
- Similarity-Based Matching
- **Text Normalization**:
- **Fanjian**: Simplify traditional Chinese characters to simplified ones.
Example: `蟲艸` -> `虫艹`
- **Delete**: Remove specific characters.
Example: `*Fu&*iii&^%%*&kkkk` -> `Fuiiikkkk`
- **Normalize**: Normalize special characters to identifiable characters.
Example: `𝜢𝕰𝕃𝙻Ϙ 𝙒ⓞƦℒ𝒟!` -> `hello world`
- **PinYin**: Convert Chinese characters to Pinyin for fuzzy matching.
Example: `西安` -> `/xi//an/`, matches `洗按` -> `/xi//an/`, but not `先` -> `/xian/`
- **PinYinChar**: Convert Chinese characters to Pinyin.
Example: `西安` -> `xian`, matches `洗按` and `先` -> `xian`
- **Combination and Repeated Word Matching**:
- Takes into account the number of repetitions of words.
- Example: `hello,world` matches `hello world` and `world,hello`
- Example: `无,法,无,天` matches `无无法天` (because `无` is repeated twice), but not `无法天`
- **Customizable Exemption Lists**: Exclude specific words from matching.
- **Efficient Handling of Large Word Lists**: Optimized for performance.
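The repetition-sensitive combination matching described above can be sketched in a few lines. This is a simplified illustration of the semantics only (substring counting), not the library's actual multi-pattern algorithm:

```python
from collections import Counter

def combination_matches(pattern, text):
    # A text matches a comma-delimited combination pattern only if it
    # contains each word at least as many times as the pattern repeats it.
    needed = Counter(pattern.split(","))
    return all(text.count(word) >= count for word, count in needed.items())

print(combination_matches("无,法,无,天", "无无法天"))  # True: 无 appears twice
print(combination_matches("无,法,无,天", "无法天"))    # False: 无 appears only once
```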
## Installation
### Use pip
```shell
pip install matcher_py
```
### Install pre-built binary
Visit the [release page](https://github.com/Lips7/Matcher/releases) to download the pre-built binary.
## Usage
The `msgspec` library is recommended for serializing the matcher configuration due to its performance benefits. You can also use other msgpack serialization libraries like `ormsgpack`. All relevant types are defined in [extension_types.py](./matcher_py/extension_types.py).
### Explanation of the configuration
* `Matcher`'s configuration is defined by the `MatchTableMap = Dict[int, List[MatchTable]]` type. The key of `MatchTableMap` is called `match_id`; for each `match_id`, the `table_id`s inside **should be unique, though this is not enforced**.
* `SimpleMatcher`'s configuration is defined by the `SimpleMatchTableMap = Dict[SimpleMatchType, Dict[int, str]]` type. The key of the inner `Dict[int, str]` is called `word_id`, and **`word_id` must be globally unique**.
#### MatchTable
* `table_id`: The unique ID of the match table.
* `match_table_type`: The type of the match table.
* `word_list`: The word list of the match table.
* `exemption_simple_match_type`: The type of the exemption simple match.
* `exemption_word_list`: The exemption word list of the match table.
For each match table, word matching is performed over the `word_list`, and exemption word matching is performed over the `exemption_word_list`. If exemption word matching succeeds, the word matching result is suppressed (reported as False).
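The exemption logic can be illustrated with a minimal sketch. It uses plain substring containment in place of the real normalized multi-pattern search, so treat it as a semantic illustration only:

```python
def table_matches(text, word_list, exemption_word_list):
    # A table reports a match only when a word hits and no exemption word does.
    word_hit = any(w in text for w in word_list)
    exemption_hit = any(w in text for w in exemption_word_list)
    return word_hit and not exemption_hit

print(table_matches("hello", ["hello", "world"], ["word"]))        # True
print(table_matches("hello, word", ["hello", "world"], ["word"]))  # False: exempted
```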
#### MatchTableType
* `Simple`: Supports simple multiple patterns matching with text normalization defined by `simple_match_type`.
* We offer transformation methods for text normalization, including `Fanjian`, `Normalize`, `PinYin` ···.
 * It can handle combination patterns and repetition-sensitive matching, delimited by `,`. For example, `hello,world,hello` will match `hellohelloworld` and `worldhellohello`, but not `helloworld`, because `hello` must appear twice.
* `Regex`: Supports regex patterns matching.
* `SimilarChar`: Supports similar character matching using regex.
* `["hello,hallo,hollo,hi", "word,world,wrd,🌍", "!,?,~"]` will match `helloworld`, `hollowrd`, `hi🌍` ··· any combinations of the words split by `,` in the list.
* `Acrostic`: Supports acrostic matching using regex **(currently only supports Chinese and simple English sentences)**.
* `["h,e,l,l,o", "你,好"]` will match `hope, endures, love, lasts, onward.` and `你的笑容温暖, 好心情常伴。`.
* `Regex`: Supports regex matching.
* `["h[aeiou]llo", "w[aeiou]rd"]` will match `hello`, `world`, `hillo`, `wurld` ··· any text that matches the regex in the list.
* `Similar`: Supports similar text matching based on distance and threshold.
* `Levenshtein`: Supports similar text matching based on Levenshtein distance.
* `DamerauLevenshtein`: Supports similar text matching based on Damerau-Levenshtein distance.
* `Indel`: Supports similar text matching based on Indel distance.
* `Jaro`: Supports similar text matching based on Jaro distance.
* `JaroWinkler`: Supports similar text matching based on Jaro-Winkler distance.
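One way to picture `SimilarChar` semantics is as a product of alternation groups: a text matches if it is a concatenation of one word from each comma-separated group. A hypothetical two-group sketch (the library's real implementation may differ):

```python
import re

# Each entry is a group of interchangeable words; build one alternation per group.
groups = [g.split(",") for g in ["hello,hallo,hollo,hi", "word,world,wrd"]]
pattern = re.compile("".join("(?:%s)" % "|".join(map(re.escape, g)) for g in groups))

print(bool(pattern.fullmatch("helloworld")))  # True
print(bool(pattern.fullmatch("hollowrd")))    # True
print(bool(pattern.fullmatch("hello")))       # False: second group missing
```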
#### SimpleMatchType
* `None`: No transformation.
* `Fanjian`: Traditional Chinese to simplified Chinese transformation. Based on [FANJIAN](../matcher_rs/str_conv_map/FANJIAN.txt) and [UNICODE](../matcher_rs/str_conv_map/UNICODE.txt).
* `妳好` -> `你好`
* `現⾝` -> `现身`
* `Delete`: Delete all punctuation, special characters and white spaces.
* `hello, world!` -> `helloworld`
* `《你∷好》` -> `你好`
* `Normalize`: Normalize all English character variations and number variations to basic characters. Based on [UPPER_LOWER](../matcher_rs/str_conv_map/UPPER-LOWER.txt), [EN_VARIATION](../matcher_rs/str_conv_map/EN-VARIATION.txt) and [NUM_NORM](../matcher_rs/str_conv_map/NUM-NORM.txt).
* `ℋЀ⒈㈠ϕ` -> `he11o`
* `⒈Ƨ㊂` -> `123`
* `PinYin`: Convert all unicode Chinese characters to pinyin with boundaries. Based on [PINYIN](../matcher_rs/str_conv_map/PINYIN.txt).
* `你好` -> `␀ni␀␀hao␀`
* `西安` -> `␀xi␀␀an␀`
* `PinYinChar`: Convert all unicode Chinese characters to pinyin without boundaries. Based on [PINYIN_CHAR](../matcher_rs/str_conv_map/PINYIN-CHAR.txt).
* `你好` -> `nihao`
* `西安` -> `xian`
You can combine these transformations as needed. Pre-defined combinations like `DeleteNormalize` and `FanjianDeleteNormalize` are provided for convenience.
Avoid combining `PinYin` and `PinYinChar`: `PinYin` is a more restrictive version of `PinYinChar`. In some cases, such as `xian`, the text can be treated either as two words (`xi` and `an`) or as a single word (`xian`).
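The `xian` ambiguity is exactly what the boundary markers resolve. A hypothetical mini-mapping (not the library's actual tables) shows why `PinYin` distinguishes `西安` from `先` while `PinYinChar` cannot:

```python
# Toy pinyin table with U+2400 boundary markers, for illustration only.
PINYIN = {"西": "\u2400xi\u2400", "安": "\u2400an\u2400", "先": "\u2400xian\u2400"}

def pin_yin(text):
    # With boundaries: each character's pinyin is wrapped in markers.
    return "".join(PINYIN.get(c, c) for c in text)

def pin_yin_char(text):
    # Without boundaries: same conversion, markers stripped.
    return pin_yin(text).replace("\u2400", "")

print(pin_yin("西安") == pin_yin("先"))            # False: ␀xi␀␀an␀ vs ␀xian␀
print(pin_yin_char("西安") == pin_yin_char("先"))  # True: both "xian"
```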
`Delete` is technically a combination of `TextDelete` and `WordDelete`; different delete methods are implemented for text and for words, because `CN_SPECIAL` and `EN_SPECIAL` characters are considered part of a word but not part of plain text. For the `text_process` and `reduce_text_process` functions, use `TextDelete` rather than `WordDelete`.
* `WordDelete`: Delete all patterns in [PUNCTUATION_SPECIAL](../matcher_rs/str_conv_map/PUNCTUATION-SPECIAL.txt).
* `TextDelete`: Delete all patterns in [PUNCTUATION_SPECIAL](../matcher_rs/str_conv_map/PUNCTUATION-SPECIAL.txt), [CN_SPECIAL](../matcher_rs/str_conv_map/CN-SPECIAL.txt), [EN_SPECIAL](../matcher_rs/str_conv_map/EN-SPECIAL.txt).
### Text Process Usage
Here’s an example of how to use the `reduce_text_process` and `text_process` functions:
```python
from matcher_py import reduce_text_process, text_process
from matcher_py.extension_types import SimpleMatchType
print(reduce_text_process(SimpleMatchType.MatchTextDelete | SimpleMatchType.MatchNormalize, "hello, world!"))
print(text_process(SimpleMatchType.MatchTextDelete, "hello, world!"))
```
### Matcher Basic Usage
Here’s an example of how to use the `Matcher`:
```python
import msgspec
import numpy as np
from matcher_py import Matcher
from matcher_py.extension_types import MatchTable, MatchTableType, SimpleMatchType
msgpack_encoder = msgspec.msgpack.Encoder()
matcher = Matcher(
    msgpack_encoder.encode({
        1: [
            MatchTable(
                table_id=1,
                match_table_type=MatchTableType.Simple(simple_match_type=SimpleMatchType.MatchFanjianDeleteNormalize),
                word_list=["hello", "world"],
                exemption_simple_match_type=SimpleMatchType.MatchNone,
                exemption_word_list=["word"],
            )
        ]
    })
)
# Check if a text matches
assert matcher.is_match("hello")
assert not matcher.is_match("hello, word")
# Perform word matching as a dict
assert matcher.word_match("hello, world")[1]
# Perform word matching as a string
result = matcher.word_match_as_string("hello")
assert result == """{1:[{\"table_id\":1,\"word\":\"hello\"}]}"""
# Perform batch processing as a dict using a list
text_list = ["hello", "world", "hello,word"]
batch_results = matcher.batch_word_match(text_list)
print(batch_results)
# Perform batch processing as a string using a list
text_list = ["hello", "world", "hello,word"]
batch_results = matcher.batch_word_match_as_string(text_list)
print(batch_results)
# Perform batch processing as a dict using a numpy array
text_array = np.array(["hello", "world", "hello,word"], dtype=np.dtype("object"))
numpy_results = matcher.numpy_word_match(text_array)
print(numpy_results)
# Perform batch processing as a string using a numpy array
text_array = np.array(["hello", "world", "hello,word"], dtype=np.dtype("object"))
numpy_results = matcher.numpy_word_match_as_string(text_array)
print(numpy_results)
```
### Simple Matcher Basic Usage
Here’s an example of how to use the `SimpleMatcher`:
```python
import msgspec
import numpy as np
from matcher_py import SimpleMatcher
from matcher_py.extension_types import SimpleMatchType
msgpack_encoder = msgspec.msgpack.Encoder()
simple_matcher = SimpleMatcher(
    msgpack_encoder.encode({SimpleMatchType.MatchNone: {1: "example"}})
)
# Check if a text matches
assert simple_matcher.is_match("example")
# Perform simple processing
results = simple_matcher.simple_process("example")
print(results)
# Perform batch processing using a list
text_list = ["example", "test", "example test"]
batch_results = simple_matcher.batch_simple_process(text_list)
print(batch_results)
# Perform batch processing using a NumPy array
text_array = np.array(["example", "test", "example test"], dtype=np.dtype("object"))
numpy_results = simple_matcher.numpy_simple_process(text_array)
print(numpy_results)
```
## Contributing
Contributions to `matcher_py` are welcome! If you find a bug or have a feature request, please open an issue on the [GitHub repository](https://github.com/Lips7/Matcher). If you would like to contribute code, please fork the repository and submit a pull request.
## License
`matcher_py` is licensed under the MIT OR Apache-2.0 license.
## More Information
For more details, visit the [GitHub repository](https://github.com/Lips7/Matcher).