# Arabic Stop words
![ghalatawi logo](doc/arabicStopWordsheader.png "Arabic Stop Words logo")
![PyPI - Downloads](https://img.shields.io/pypi/dm/Arabic-Stopwords)
Developers: Taha Zerrouki: http://tahadz.com
taha dot zerrouki at gmail dot com
Features | value
---------|---------------------------------------------------------------------------------
Authors | [Authors.md](https://github.com/linuxscout/arabicstopwords/blob/main/AUTHORS.md)
Release | 0.9
License |[GPL](https://github.com/linuxscout/arabicstopwords/blob/main/LICENSE)
Tracker |[linuxscout/arabicstopwords/Issues](https://github.com/linuxscout/arabicstopwords/issues)
Source |[Github](http://github.com/linuxscout/arabicstopwords)
Website |[ArabicStopwords on SourceForge](https://arabicstopwords.sf.net)
Doc |[Package documentation](https://arabicstopwords.readthedocs.io/)
Download |[Python Library](https://pypi.org/project/Arabic-Stopwords/)
Download | Data set [CSV/SQL/Python](https://github.com/linuxscout/arabicstopwords/releases/latest)
Feedbacks |[Comments](https://github.com/linuxscout/arabicstopwords/)
Accounts |[@Twitter](https://twitter.com/linuxscout)
## Citation
If you cite this work in academic publications, please use the following citation:
```
T. Zerrouki, Arabic Stop Words, https://github.com/linuxscout/arabicstopwords/, 2010
```
Another Citation:
```
Zerrouki, Taha. "Towards An Open Platform For Arabic Language Processing." (2020).
```
or in BibTeX format:
```bibtex
@misc{zerrouki2010arabicstopwords,
title={Arabic Stop Words},
author={Zerrouki, Taha},
url={https://github.com/linuxscout/arabicstopwords},
year={2010}
}
@thesis{zerrouki2020towards,
title={Towards An Open Platform For Arabic Language Processing},
author={Zerrouki, Taha},
year={2020}
}
```
## Description
Determining stop words is not easy, and the stop word list varies from one use case to another.
For this purpose, we propose a classified list that developers can parameterize.
The word list contains only words in their common (lemma) forms; all inflected forms are generated by a script.
It can also be used as a library (see the section "Arabic Stopwords Library").
## Files
* data/ : stopword data
* data/classified/stopwords.ods: classified stopwords in LibreOffice format, with additional valuable information
* docs: documentation files
* scripts: scripts used to generate all forms and file formats
## Data
This project contains two parts:
- Data part, which contains the classified stopwords or all generated forms, in multiple formats
- CSV
- Python
- SQL / Sqlite
- Python library for handling stopwords.
### Data Structure
Two formats of data are given:
- classified words (lemmas) with features used to generate inflected forms
- forms generated from the lemmas by adding affixes
![Stopwords Example](doc/images/stopwords.png "Stopwords Example")
Minimal classified data .ODS/CSV file:
- 1st field : unvocalised word (e.g. في)
- 2nd field : type of the word, e.g. حرف
- 3rd field : class of the word, e.g. preposition

Affixation information in the remaining fields:
- 4th field : AIN in Arabic if the word accepts conjunction 'العطف', '*' otherwise
- 5th field : TEH in Arabic if the word accepts the definite article 'ال التعريف', '*' otherwise
- 6th field : JEEM in Arabic if the word accepts attached prepositions 'حروف الجر المتصلة', '*' otherwise
- 7th field : DAD in Arabic if the word accepts IDAFA pronouns 'الضمائر المتصلة', '*' otherwise
- 8th field : SAD in Arabic if the word accepts verb conjugation affixes 'التصريف', '*' otherwise
- 9th field : LAM in Arabic if the word accepts LAM QASAM 'لام القسم', '*' otherwise
- 10th field : MEEM in Arabic if the word takes ALEF LAM as a definite article 'معرف', '*' otherwise
All forms data CSV file:
- 1st field : unvocalised word (e.g. بأنك)
- 2nd field : vocalised inflected word, e.g. ف-ب-خمسين-ي
- 3rd field : word type (super class): noun, verb, tool حرف
- 4th field : word type (sub class): إنّ وأخواتها
- 5th field : original word or lemma: إن
- 6th field : procletic: ب
- 7th field : stem: أن
- 8th field : encletic: ك
- 9th field : tags: جر:مضاف
```csv
word vocalized type category original procletic stem encletic tags
بأنك بِأَنّكَ حرف إن و أخواتها أن ب- -ك جر:مضاف
بأنكما بِأَنّكُمَا حرف إن و أخواتها أن ب- -كما جر:مضاف
```
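As an illustration (not part of the package), rows like the ones above can be read with Python's standard `csv` module. The sketch below rebuilds one sample row as a tab-separated string; the tab delimiter is an assumption about the released file format, so adjust it to match the actual files:

```python
import csv
import io

# One sample row from the generated all-forms data, rebuilt here as a
# tab-separated string for illustration (the real release files may use
# a different delimiter).
sample = (
    "word\tvocalized\ttype\tcategory\toriginal\tprocletic\tstem\tencletic\ttags\n"
    "بأنك\tبِأَنّكَ\tحرف\tإن و أخواتها\tأن\tب-\tأن\t-ك\tجر:مضاف\n"
)

# DictReader maps each row to a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
for row in rows:
    print(row["word"], row["original"], row["procletic"], row["encletic"])
```

To read a release file instead, replace `io.StringIO(sample)` with `open("stopwords_allforms.csv", encoding="utf-8")` (hypothetical filename).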
## How to customize stop word list
* check the minimal form data file (stopwords.csv)
* comment out, with "#", all the words you don't need
* run
```
make
```
* collect the script output from the releases folder.
## How to update data
* check that the word doesn't already exist in the minimal form data file (classified/stopwords.ods)
* add its affixation information
* run
```
make
```
* collect the script output from the releases folder.
## Arabic Stopwords Library
### install
``` shell
pip install arabicstopwords
```
### usage
* test whether a word is a stopword
``` python
>>> import arabicstopwords.arabicstopwords as stp
>>> # test if a word is a stop
... stp.is_stop(u'ممكن')
False
>>> stp.is_stop(u'منكم')
True
```
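A typical use of `is_stop` is filtering stopwords out of a tokenized text. The sketch below runs without installing the package by using a tiny hard-coded stand-in set; with the package installed, you would pass `stp.is_stop` instead of the stand-in function:

```python
# Stand-in for the library: a tiny hypothetical stopword set,
# NOT the package's full 13629-form list.
SAMPLE_STOPWORDS = {"منكم", "لعلهم", "على"}

def is_stop(word: str) -> bool:
    """Stand-in for stp.is_stop(): membership test in the sample set."""
    return word in SAMPLE_STOPWORDS

def remove_stopwords(tokens):
    """Keep only the tokens that are not stopwords."""
    return [t for t in tokens if not is_stop(t)]

tokens = ["سمعنا", "منكم", "كلاما", "على", "عجل"]
print(remove_stopwords(tokens))  # the non-stopword tokens only
```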
* stem a stopword
```python
>>> word = u"لعلهم"
>>> stp.stop_stem(word)
u'لعل'
```
* list all stop words
```
>>> stp.stopwords_list()
......
>>> len(stp.stopwords_list())
13629
>>> len(stp.classed_stopwords_list())
507
```
* give all forms of a stopword
```python
>>> stp.stopword_forms(u"على")
....
>>> len(stp.stopword_forms(u"على"))
144
```
* get a stopword's features as a list of dictionaries
``` python
>>> from arabicstopwords.stopwords_lexicon import stopwords_lexicon
>>> lexicon = stopwords_lexicon()
>>> # test if a word is a stop
... lexicon.is_stop(u'ممكن')
False
>>> lexicon.is_stop(u'منكم')
True
>>> lexicon.get_features_dict(u'منكم')
[{'vocalized': 'منكم', 'procletic': '', 'tags': 'حرف;حرف جر;ضمير', 'stem': 'من', 'type': 'حرف', 'original': 'من', 'encletic': '-كم'}]
```
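The returned feature entries are plain Python dicts, so standard dict operations apply. The sketch below works on a copy of the entry shown above (no package install needed) to pull out the lemma, the attached pronoun, and the tag list:

```python
# The entry returned for 'منكم', copied here as a literal for illustration.
features = [{
    "vocalized": "منكم",
    "procletic": "",
    "tags": "حرف;حرف جر;ضمير",
    "stem": "من",
    "type": "حرف",
    "original": "من",
    "encletic": "-كم",
}]

entry = features[0]
lemma = entry["original"]                  # the lemma: من
enclitic = entry["encletic"].lstrip("-")   # the attached pronoun: كم
tags = entry["tags"].split(";")            # the semicolon-separated tags
print(lemma, enclitic, tags)
```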
* get a stopword as a tuple object
``` python
>>> from arabicstopwords.stopwords_lexicon import stopwords_lexicon
>>> lexicon = stopwords_lexicon()
>>> tuples = lexicon.get_stopwordtuples(u'منكم')
>>> tuples
[<stopwordtuple.stopwordTuple object at 0x7fd93b3d12b0>]
>>> for tup in tuples:
... print(tup)
...
{'vocalized': 'منكم', 'procletic': '', 'tags': 'حرف;حرف جر;ضمير', 'stem': 'من', 'type': 'حرف', 'original': 'من', 'encletic': '-كم'}
>>> for tup in tuples:
... dir(tup)
...
['accept_conjuction', 'accept_conjugation', 'accept_definition', 'accept_inflection', 'accept_interrog', 'accept_preposition', 'accept_pronoun', 'accept_qasam', 'accept_tanwin', 'get_action', 'get_enclitic', 'get_feature', 'get_features_dict', 'get_lemma', 'get_need', 'get_object_type', 'get_procletic', 'get_stem', 'get_tags', 'get_vocalized', 'get_wordclass', 'get_wordtype', 'is_defined', 'stop_dict']
>>>
```
* get stopwords by category
``` python
>>> from arabicstopwords.stopwords_lexicon import stopwords_lexicon
>>> lexicon = stopwords_lexicon()
>>> lexicon.get_categories()
['حرف', 'ضمير', 'فعل', 'اسم', 'اسم فعل', 'حرف ابجدي']
>>> lexicon.get_by_category("اسم فعل", lemma=True, vocalized=True)
['آهاً', 'بَسّْ', 'بَسْ', 'حَايْ', 'صَهْ', 'صَهٍ', 'طَاقْ', 'طَقْ', 'عَدَسْ', 'كِخْ', 'نَخْ', 'هَجْ', 'وَا', 'وَا', 'وَاهاً', 'وَيْ', 'آمِينَ', 'آهٍ', 'أُفٍّ', 'أُفٍّ', 'أَمَامَكَ', 'أَوَّهْ', 'إِلَيْكَ', 'إِلَيْكُمْ', 'إِلَيْكُمَا', 'إِلَيْكُنَّ', 'إيهِ', 'بخٍ', 'بُطْآنَ', 'بَلْهَ', 'حَذَارِ', 'حَيَّ', 'دُونَكَ', 'رُوَيْدَكَ', 'سُرْعَانَ', 'شَتَّانَ', 'عَلَيْكَ', 'مَكَانَكَ', 'مَكَانَكِ', 'مَكَانَكُمْ', 'مَكَانَكُمَا', 'مَكَانَكُنَّ', 'مَهْ', 'هَا', 'هَاؤُمُ', 'هَاكَ', 'هَلُمَّ', 'هَيَّا', 'هِيتَ', 'هَيْهَاتَ', 'وَرَاءَكَ', 'وَرَاءَكِ', 'وُشْكَانَ', 'وَيْكَأَنَّ', 'وَرَاءَكُما', 'وَرَاءَكُمْ', 'وَرَاءَكُنَّ', 'بِئْسَمَا']
```