# Lexpy
[![lexpy](https://github.com/aosingh/lexpy/actions/workflows/lexpy_build.yaml/badge.svg)](https://github.com/aosingh/lexpy/actions)
[![Downloads](https://pepy.tech/badge/lexpy)](https://pepy.tech/project/lexpy)
[![PyPI version](https://badge.fury.io/py/lexpy.svg)](https://pypi.python.org/pypi/lexpy)
[![Python 3.7](https://img.shields.io/badge/python-3.7-blue.svg)](https://www.python.org/downloads/release/python-370/)
[![Python 3.8](https://img.shields.io/badge/python-3.8-blue.svg)](https://www.python.org/downloads/release/python-380/)
[![Python 3.9](https://img.shields.io/badge/python-3.9-blue.svg)](https://www.python.org/downloads/release/python-390/)
[![Python 3.10](https://img.shields.io/badge/python-3.10-blue.svg)](https://www.python.org/downloads/release/python-3100/)
[![Python 3.11](https://img.shields.io/badge/python-3.11-blue.svg)](https://www.python.org/downloads/release/python-3110/)
[![Python 3.12](https://img.shields.io/badge/python-3.12-blue.svg)](https://www.python.org/downloads/release/python-3120/)
[![PyPy3.7](https://img.shields.io/badge/python-PyPy3.7-blue.svg)](https://www.pypy.org/download.html)
[![PyPy3.8](https://img.shields.io/badge/python-PyPy3.8-blue.svg)](https://www.pypy.org/download.html)
[![PyPy3.9](https://img.shields.io/badge/python-PyPy3.9-blue.svg)](https://www.pypy.org/download.html)
- A lexicon is a data structure which stores a set of words. The difference between
a dictionary and a lexicon is that in a lexicon there are **no values** associated with the words.
- A lexicon is similar to a list or a set of words, but the internal representation is different and optimized
for faster searches of words, prefixes, and wildcard patterns.
- For a word of length W, the search time is O(W).
- Two important lexicon data structures are **_Trie_** and **_Directed Acyclic Word Graph (DAWG)_**.
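To make the O(W) membership check concrete, here is a minimal, illustrative trie built from plain dicts. This is a sketch of the idea only, not lexpy's internal representation:

```python
# Minimal illustrative trie: nested dicts, with a sentinel key marking word ends.
END = '$'

def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:              # one dict hop per character
            node = node.setdefault(ch, {})
        node[END] = True             # mark the end of a complete word
    return root

def contains(trie, word):
    node = trie
    for ch in word:                  # O(W): one lookup per character
        if ch not in node:
            return False
        node = node[ch]
    return END in node

trie = build_trie(['ampyx', 'abuzz', 'abhor'])
print(contains(trie, 'abuzz'))       # True
print(contains(trie, 'ab'))          # False: 'ab' is only a prefix
```

Note that the lookup cost depends only on the length of the query word, not on how many words are stored.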
# Install
`lexpy` can be installed from the Python Package Index (PyPI) using `pip`. The only requirement is Python 3.7 or higher.
```commandline
pip install lexpy
```
# Interface
| **Interface Description** | **Trie** | **DAWG** |
|------------------------------------------------------------------------------------------------------------------------------- |------------------------------------------ |------------------------------------------ |
| Add a single word | `add('apple', count=2)` | `add('apple', count=2)` |
| Add multiple words | `add_all(['advantage', 'courage'])` | `add_all(['advantage', 'courage'])` |
| Check if exists? | `in` operator | `in` operator |
| Search using wildcard expression | `search('a?b*', with_count=True)` | `search('a?b*', with_count=True)` |
| Search for prefix matches | `search_with_prefix('bar', with_count=True)` | `search_with_prefix('bar', with_count=True)` |
| Search for similar words within a given edit distance. Here, the notion of edit distance is the same as Levenshtein distance | `search_within_distance('apble', dist=1, with_count=True)` | `search_within_distance('apble', dist=1, with_count=True)` |
| Get the number of nodes in the automaton | `len(trie)` | `len(dawg)` |
# Examples
## Trie
### Build from an input list, set, or tuple of words.
```python
from lexpy import Trie
trie = Trie()
input_words = ['ampyx', 'abuzz', 'athie', 'athie', 'athie', 'amato', 'amato', 'aneto', 'aneto', 'aruba',
'arrow', 'agony', 'altai', 'alisa', 'acorn', 'abhor', 'aurum', 'albay', 'arbil', 'albin',
'almug', 'artha', 'algin', 'auric', 'sore', 'quilt', 'psychotic', 'eyes', 'cap', 'suit',
               'tank', 'common', 'lonely', 'likeable', 'language', 'shock', 'look', 'pet', 'dime', 'small',
               'dusty', 'accept', 'nasty', 'thrill', 'foot', 'steel', 'steel', 'steel', 'steel', 'abuzz']
trie.add_all(input_words) # You can pass any sequence types or a file-like object here
print(trie.get_word_count())
>>> 48
```
### Build from a file or file path.
In the file, words should be newline-separated.
```python
from lexpy import Trie
# Either
trie = Trie()
trie.add_all('/path/to/file.txt')
# Or
with open('/path/to/file.txt', 'r') as infile:
    trie.add_all(infile)
```
### Check if exists using the `in` operator
```python
print('ampyx' in trie)
>>> True
```
### Prefix search
```python
print(trie.search_with_prefix('ab'))
>>> ['abhor', 'abuzz']
```
```python
print(trie.search_with_prefix('ab', with_count=True))
>>> [('abuzz', 2), ('abhor', 1)]
```
### Wildcard search using `?` and `*`
- `?` = 0 or 1 occurrence of any character
- `*` = 0 or more occurrences of any character
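For intuition, the wildcard semantics stated above can be mimicked with the standard `re` module: `?` behaves like regex `.?` and `*` like `.*`, anchored over the full word. This is only an equivalent sketch, not how lexpy matches internally:

```python
import re

def wildcard_to_regex(pattern):
    # Escape everything, then restore the two wildcards:
    # '?' -> '.?' (0 or 1 of any char), '*' -> '.*' (0 or more of any char)
    escaped = re.escape(pattern)
    regex = escaped.replace(r'\?', '.?').replace(r'\*', '.*')
    return re.compile('^' + regex + '$')

words = ['amato', 'abhor', 'aneto', 'arrow', 'suit', 'quilt']
print([w for w in words if wildcard_to_regex('a*o*').match(w)])
# ['amato', 'abhor', 'aneto', 'arrow']
print([w for w in words if wildcard_to_regex('su?t').match(w)])
# ['suit']
```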
```python
print(trie.search('a*o*'))
>>> ['amato', 'abhor', 'aneto', 'arrow', 'agony', 'acorn']
print(trie.search('a*o*', with_count=True))
>>> [('amato', 2), ('abhor', 1), ('aneto', 2), ('arrow', 1), ('agony', 1), ('acorn', 1)]
print(trie.search('su?t'))
>>> ['suit']
print(trie.search('su?t', with_count=True))
>>> [('suit', 1)]
```
### Search for similar words using the notion of Levenshtein distance
```python
print(trie.search_within_distance('arie', dist=2))
>>> ['athie', 'arbil', 'auric']
print(trie.search_within_distance('arie', dist=2, with_count=True))
>>> [('athie', 3), ('arbil', 1), ('auric', 1)]
```
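As a sanity check on the results above, here is a standard Levenshtein distance implementation (an illustrative sketch, independent of lexpy):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance
    # (insert / delete / substitute, each with cost 1).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

for word in ['athie', 'arbil', 'auric']:
    print(word, levenshtein('arie', word))   # each result is within distance 2
```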
### Increment word count
- You can either add a new word or increment the counter for an existing word.
```python
trie.add('athie', count=1000)
print(trie.search_within_distance('arie', dist=2, with_count=True))
>>> [('athie', 1003), ('arbil', 1), ('auric', 1)]
```
# Directed Acyclic Word Graph (DAWG)
- A DAWG supports the same set of operations as a Trie. The difference is that the number of nodes in a DAWG is always
less than or equal to the number of nodes in a Trie.
- Both are Deterministic Finite State Automata. However, a DAWG is a minimized version of the Trie DFA.
- In a Trie, prefix redundancy is removed. In a DAWG, both prefix and suffix redundancies are removed.
- In the current implementation of DAWG, words must be inserted in **alphabetical** order.
- The implementation idea of DAWG is borrowed from http://stevehanov.ca/blog/?id=115
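To illustrate the suffix redundancy a DAWG removes, this small helper (not part of lexpy) computes the common suffix of two words:

```python
def common_suffix(a, b):
    # Walk backwards from the ends of both strings while characters match.
    i = 0
    while i < min(len(a), len(b)) and a[-1 - i] == b[-1 - i]:
        i += 1
    return a[len(a) - i:] if i else ''

print(common_suffix('advantageous', 'courageous'))  # 'ageous'
```

A DAWG stores a shared suffix like `'ageous'` once, which is why it ends up with fewer nodes than the corresponding Trie.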
```python
from lexpy import Trie, DAWG
trie = Trie()
trie.add_all(['advantageous', 'courageous'])
dawg = DAWG()
dawg.add_all(['advantageous', 'courageous'])
len(trie) # Number of nodes in the Trie
23
dawg.reduce() # Perform DFA minimization. Call this after each batch of words is added to the DAWG.
len(dawg) # Number of nodes in the DAWG
21
```
## DAWG
The APIs are exactly the same as the Trie APIs.
### Build a DAWG
```python
from lexpy import DAWG
dawg = DAWG()
input_words = ['ampyx', 'abuzz', 'athie', 'athie', 'athie', 'amato', 'amato', 'aneto', 'aneto', 'aruba',
'arrow', 'agony', 'altai', 'alisa', 'acorn', 'abhor', 'aurum', 'albay', 'arbil', 'albin',
'almug', 'artha', 'algin', 'auric', 'sore', 'quilt', 'psychotic', 'eyes', 'cap', 'suit',
               'tank', 'common', 'lonely', 'likeable', 'language', 'shock', 'look', 'pet', 'dime', 'small',
               'dusty', 'accept', 'nasty', 'thrill', 'foot', 'steel', 'steel', 'steel', 'steel', 'abuzz']
dawg.add_all(input_words)
dawg.reduce()
print(dawg.get_word_count())
>>> 48
```
### Check if exists using the `in` operator
```python
print('ampyx' in dawg)
>>> True
```
### Prefix search
```python
print(dawg.search_with_prefix('ab'))
>>> ['abhor', 'abuzz']
```
```python
print(dawg.search_with_prefix('ab', with_count=True))
>>> [('abuzz', 2), ('abhor', 1)]
```
### Wildcard search using `?` and `*`
- `?` = 0 or 1 occurrence of any character
- `*` = 0 or more occurrences of any character
```python
print(dawg.search('a*o*'))
>>> ['amato', 'abhor', 'aneto', 'arrow', 'agony', 'acorn']
print(dawg.search('a*o*', with_count=True))
>>> [('amato', 2), ('abhor', 1), ('aneto', 2), ('arrow', 1), ('agony', 1), ('acorn', 1)]
print(dawg.search('su?t'))
>>> ['suit']
print(dawg.search('su?t', with_count=True))
>>> [('suit', 1)]
```
### Search for similar words using the notion of Levenshtein distance
```python
print(dawg.search_within_distance('arie', dist=2))
>>> ['athie', 'arbil', 'auric']
print(dawg.search_within_distance('arie', dist=2, with_count=True))
>>> [('athie', 3), ('arbil', 1), ('auric', 1)]
```
### Alphabetical order insertion
If you insert a word which is lexicographically out of order, a `ValueError` is raised.
```python
dawg.add('athie', count=1000)
```
```text
ValueError: Words should be inserted in Alphabetical order. <Previous word - thrill>, <Current word - athie>
```
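A simple workaround (assuming your input is not already ordered) is to sort the words before inserting them. The commented `dawg.add_all` call below is the lexpy API shown earlier:

```python
words = ['thrill', 'athie', 'nasty']
ordered = sorted(words)        # ['athie', 'nasty', 'thrill']
print(ordered)
# dawg.add_all(ordered)        # safe: words now arrive in alphabetical order
```

Duplicates can stay in the sorted list; adding a word again simply increments its count.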
### Increment the word count
- You can either add an alphabetically greater word with a specific count or increment the count of the previously added word.
```python
dawg.add_all(['thrill']*20000) # or dawg.add('thrill', count=20000)
print(dawg.search('thrill', with_count=True))
>>> [('thrill', 20001)]
```
## Special Characters
Special characters, except `?` and `*`, are matched literally.
```python
from lexpy import Trie
t = Trie()
t.add('a©')
```
```python
t.search('a©')
>>> ['a©']
```
```python
t.search('a?')
>>> ['a©']
```
```python
t.search('?©')
>>> ['a©']
```
## Trie vs DAWG
![Number of nodes comparison](https://github.com/aosingh/lexpy/blob/main/lexpy_trie_dawg_nodes.png)
![Build time comparison](https://github.com/aosingh/lexpy/blob/main/lexpy_trie_dawg_time.png)
# Future Work
Here are some ideas I would love to work on next, in that order. Pull requests and discussions are welcome.
- Merge trie and DAWG features in one data structure
- Support all functionalities and still be as compressed as possible.
- Serialization / Deserialization
- Pickle is definitely an option.
- Server (TCP or HTTP) to serve queries over the network.
# Fun Facts
1. The 45-letter word pneumonoultramicroscopicsilicovolcanoconiosis is the longest English word that appears in a major dictionary.
So, for all English words, the search time is bounded by O(45).
2. The longest technical word (not in a dictionary) is the name of a protein called [titin](https://en.wikipedia.org/wiki/Titin). It has 189,819
letters, and it is disputed whether it is a word.