moralstrength

- Name: moralstrength
- Version: 0.2.13 (PyPI)
- Home page: https://github.com/oaraque/moral-foundations/
- Summary: A package to predict the Moral Foundations for a tweet or text
- Upload time: 2022-12-01 11:07:49
- Author: Oscar Araque, Lorenzo Gatti and Kyriaki Kalimeri
- License: LGPLv3
- Keywords: moral foundations, NLP, moralstrength, machine learning
- Requirements: spacy, pandas, gsitk, numpy, scikit_learn

# Moral Foundations Theory predictor and lexicon

<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->
**Table of Contents**

- [Moral Foundations Theory predictor and lexicon](#moral-foundations-theory-predictor-and-lexicon)
    - [NEW! Liberty lexicon 2nd version - LibertyMFD](#new-liberty-lexicon-2nd-version---libertymfd)
    - [Liberty lexicon 1st version](#liberty-lexicon-1st-version)
    - [Install](#install)
    - [GUI](#gui)
- [MoralStrength lexicon](#moralstrength-lexicon)
    - [MoralStrength processed lexicon](#moralstrength-processed-lexicon)
    - [MoralStrength presence](#moralstrength-presence)
    - [Unsupervised text prediction using MoralStrength](#unsupervised-text-prediction-using-moralstrength)
    - [Changing lexicon version](#changing-lexicon-version)
    - [List of methods to use](#list-of-methods-to-use)
    - [MoralStrength raw lexicon](#moralstrength-raw-lexicon)
    - [MoralStrength annotation task descriptions](#moralstrength-annotation-task-descriptions)

<!-- markdown-toc end -->


This repository contains code and trained models corresponding to the paper "MoralStrength: Exploiting a Moral Lexicon and Embedding Similarity for Moral Foundations Prediction".
Run `Predictor.ipynb` to see a working version of the moral foundations predictor, or keep reading for usage examples.

## NEW! Liberty lexicon 2nd version - LibertyMFD

In a new work, we have generated two new versions of the *Liberty/oppression* moral foundation lexicon: the _LibertyMFD_ lexicon.
The lexicons are accessible in this repository, in the `liberty/2nd_version` folder ([link here](https://github.com/oaraque/moral-foundations/tree/master/liberty/2nd_version)).
We expect to update this lexicon soon.

If you use this lexicon, please cite the [following publication](https://doi.org/10.1145/3524458.3547264):
```
@inproceedings{10.1145/3524458.3547264,
author = {Araque, Oscar and Gatti, Lorenzo and Kalimeri, Kyriaki},
title = {LibertyMFD: A Lexicon to Assess the Moral Foundation of Liberty.},
year = {2022},
isbn = {9781450392846},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3524458.3547264},
doi = {10.1145/3524458.3547264},
booktitle = {Proceedings of the 2022 ACM Conference on Information Technology for Social Good},
pages = {154–160},
numpages = {7},
keywords = {lexicon, natural language processing, word embeddings, liberty, moral foundations theory, moral values},
location = {Limassol, Cyprus},
series = {GoodIT '22}
}
```

## Liberty lexicon 1st version

We have generated a new lexicon that contains the *Liberty/oppression* moral foundation.
To access the lexicon, see the `liberty/1st_version` folder ([link here](https://github.com/oaraque/moral-foundations/tree/master/liberty/1st_version)).
This lexicon will be updated regularly.

If you use the **liberty lexicon**, please cite the following paper:
```
DOI: https://doi.org/10.1145/3442442.3452351
```

## Install

The software is written in Python 3. To install it, use `pip`:

```
pip install moralstrength
```

## GUI

This repository is intended for users who want to use the software through Python.
Alternatively, we have published a Graphical Interface that works on Linux, MacOS, and Windows. Please visit [this repository](https://github.com/oaraque/moral-foundations-gui).

# MoralStrength lexicon

## MoralStrength processed lexicon

This repository contains the MoralStrength lexicon, which enables researchers to extract the moral valence from a variety of lemmas.
It is available under the directory `moralstrength_annotations`.
An example of use of the lexicon with Python follows; a value of -1 means the word is not in the lexicon for that trait:

```python
>>> import moralstrength

>>> moralstrength.word_moral_annotations('care')
{'care': 8.8, 'fairness': -1, 'loyalty': -1, 'authority': -1, 'purity': -1}
```

## MoralStrength presence

This repository also contains several pre-trained models that predict the presence of a certain moral trait,
that is, whether the analyzed text is relevant to that trait or not.
A minimal example of use:

```python
import moralstrength

text = "PLS help #HASHTAG's family. No one prepares for this. They are in need of any assistance you can offer"  

moralstrength.string_moral_value(text, moral='care')
```
         
You can check the available moral traits using the `moralstrength.lexicon_morals` method.
The complete list of methods that can be used is shown in the next section.
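For instance, a quick way to inspect the available traits (a minimal sketch, assuming `lexicon_morals` takes no arguments and returns the trait names; if it is exposed as a plain list instead, iterate over it directly):

```python
import moralstrength

# Print the moral traits covered by the lexicon
# (assumed to be names such as 'care', 'fairness', ...)
for moral in moralstrength.lexicon_morals():
    print(moral)
```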
        
## Unsupervised text prediction using MoralStrength

This package offers a function to perform unsupervised prediction over a list of texts, returning the predictions in an organized fashion.
For example:

```python
from moralstrength.moralstrength import estimate_morals

texts = '''My dog is very loyal to me.
My cat is not loyal, but understands my authority.
He did not want to break the router, he was fixing it.
It is not fair! She cheated on the exams.
Are you pure of heart? Because I am sure not.
Will you take care of me? I am sad.'''

texts = texts.split('\n')

result = estimate_morals(texts, process=True)  # set to False if the text is already pre-processed
print(result)
```

The result of this short script would be as follows.
The estimation is given in a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) format.
```
   care  fairness  loyalty  authority  purity
0   NaN       NaN    8.875     5.1250     NaN
1   NaN       NaN    8.875     6.9625     NaN
2   NaN       NaN      NaN        NaN     NaN
3   NaN       9.0      NaN        NaN     NaN
4   NaN       NaN      NaN        NaN     9.0
5   8.8       NaN      NaN        NaN     NaN

```
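Since the result is a regular DataFrame, standard pandas operations apply. For example, a short sketch (pure pandas, not part of the package API) to summarize the estimates:

```python
# Number of texts with an estimate for each moral foundation
print(result.notna().sum())

# Keep only the texts for which at least one moral foundation was detected
detected = result.dropna(how="all")
print(detected)
```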

## Changing lexicon version

The original version of the MoralStrength lexicon is described here:

```
Oscar Araque, Lorenzo Gatti, Kyriaki Kalimeri,
MoralStrength: Exploiting a moral lexicon and embedding similarity for moral foundations prediction,
Knowledge-Based Systems,
Volume 191,
2020,
105184,
ISSN 0950-7051,
https://doi.org/10.1016/j.knosys.2019.105184.
(http://www.sciencedirect.com/science/article/pii/S095070511930526X)
```

which is also openly available on arXiv: [https://arxiv.org/abs/1904.08314](https://arxiv.org/abs/1904.08314).

A new, improved version of the lexicon can be used to predict moral values.
By default, the software uses the latest version.
To use the original version instead, do:

```python
from moralstrength import lexicon_use

lexicon_use.select_version("original")
# predict here moral values using the original MoralStrength
```

If at any moment you want to use the new version of the lexicon again, just do:

```python
lexicon_use.select_version("latest")
```
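Since the version switch appears to act globally on subsequent predictions (as the placeholder comment above suggests; this is an assumption, not documented behavior), a pattern for comparing the two lexicon versions might be:

```python
from moralstrength import lexicon_use
from moralstrength.moralstrength import estimate_morals

texts = ["Will you take care of me? I am sad."]

# Score the same text under both lexicon versions
lexicon_use.select_version("original")
original_scores = estimate_morals(texts, process=True)

lexicon_use.select_version("latest")
latest_scores = estimate_morals(texts, process=True)

print(original_scores)
print(latest_scores)
```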

        
## List of methods to use

The methods that are under `moralstrength.moralstrength` are the following:
```
get_available_lexicon_traits()
    Returns a list of traits that were annotated and can be queried
    by word_moral_value().
    care: Care/Harm
    fairness: Fairness/Cheating
    loyalty: Loyalty/Betrayal
    authority: Authority/Subversion
    purity: Purity/Degradation

get_available_models()
    Returns a list of available models for predicting texts.
    Short explanation of names:
    unigram: simple unigram-based model
    count: number of words that are rated as closer to moral extremes
    freq: distribution of moral ratings across the text
    simon: SIMilarity-based sentiment projectiON
    or a combination of these.
    For a comprehensive explanation of what each model does and how it performs on
    different datasets, see https://arxiv.org/abs/1904.08314
    (published in Knowledge-Based Systems).

get_available_prediction_traits()
    Returns a list of traits that can be predicted by string_moral_value()
    or file_moral_value().
    care: Care/Harm
    fairness: Fairness/Cheating
    loyalty: Loyalty/Betrayal
    authority: Authority/Subversion
    purity: Purity/Degradation
    non-moral: Tweet/text is non-moral

string_average_moral(text, moral)
    Returns the average of the annotations for the words in the sentence (for one moral).
    If no word is recognized/found in the lexicon, returns -1.
    Words are lemmatized using spacy.

string_moral_value(text, moral, model='unigram+freq')
    Returns the estimated probability that the text is relevant to either a vice or
    virtue of the corresponding moral trait.
    The default model is unigram+freq, the best performing (on average) across all
    datasets, according to our work.
    For a list of available models, see get_available_models().
    For a list of traits, get_available_prediction_traits().

string_moral_values(text, model='unigram+freq')
    Returns the estimated probability that the text is relevant to vices or virtues
    of all moral traits, as a dict.
    The default model is unigram+freq, the best performing (on average) across all
    datasets, according to our work.
    For a list of available models, see get_available_models().
    For a list of traits, get_available_prediction_traits().

word_moral_value(word, moral)
    Returns the association strength between word and moral trait,
    as rated by annotators. Value ranges from 1 to 9.
    1: words closely associated with harm, cheating, betrayal, subversion, degradation
    9: words closely associated with care, fairness, loyalty, authority, sanctity
    If the word is not in the lexicon of that moral trait, returns -1.
    For a list of available traits, get_available_lexicon_traits()

word_moral_values(word)
    Returns a dict that gives the association strength between word and every
    moral trait, as rated by annotators. Value ranges from 1 to 9.
    1: words closely associated with harm, cheating, betrayal, subversion, degradation
    9: words closely associated with care, fairness, loyalty, authority, purity/sanctity
    If the word is not in the lexicon of that moral trait, returns -1.
```
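As an illustration, these methods could be combined as follows (a minimal sketch; the example words are arbitrary and the printed values are not guaranteed outputs):

```python
from moralstrength.moralstrength import (
    word_moral_value,
    word_moral_values,
    string_moral_values,
)

# Lexicon lookup for one trait: a rating in [1, 9], or -1 if the word is absent
print(word_moral_value("help", "care"))

# Lexicon lookup across all annotated traits
print(word_moral_values("help"))

# Model-based relevance estimates for a whole sentence, one per trait
print(string_moral_values("She cheated on the exams."))
```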
## MoralStrength raw lexicon

The `moralstrength_raw` folder contains the raw annotations collected from Figure Eight.
The folder `all_annotators_except_failed` contains all the annotations collected, except for the annotators that failed the task (see the paper for details on the control questions, which were based on valence ratings from Warriner et al.).
The folder `filtered_annotators` contains the annotations after the annotators with low inter-annotator agreement were removed.

The filename is `RAW_ANNOTATIONS_[MORAL]`, where MORAL is the moral trait considered and can either be AUTHORITY, CARE, FAIRNESS, LOYALTY or PURITY.

The fields in each file are:
- WORD	the word to be annotated
- ANNOTATOR_ID	the unique ID of each annotator
- VALENCE	the valence rating of WORD, on a scale from 1 (low) to 9 (high)
- AROUSAL	the arousal rating of WORD, on a scale from 1 (low) to 9 (high)
- RELEVANCE	whether WORD is related to the MORAL
- EXPRESSED_MORAL	the moral strength of WORD, i.e. whether it is closer to one or the other extreme pertaining to the MORAL trait.

The numbers for EXPRESSED_MORAL range from 1 to 9, and the extremes of the scales are:
- 1=Subversion, 9=Authority for AUTHORITY
- 1=Harm, 9=Care for CARE
- 1=Proportionality, 9=Fairness for FAIRNESS
- 1=Disloyalty, 9=Loyalty for LOYALTY
- 1=Degradation, 9=Purity for PURITY

For privacy reasons, the annotator IDs have been salted and hashed, so that recovering the original annotator ID is not possible, but it is still possible to track each annotator's ratings across the different morals.
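To explore these raw files programmatically, a minimal pandas sketch (the tab-separated layout and the exact path are assumptions; adjust them to the actual files):

```python
import pandas as pd

# Load the raw CARE annotations; columns per the field list above:
# WORD, ANNOTATOR_ID, VALENCE, AROUSAL, RELEVANCE, EXPRESSED_MORAL
df = pd.read_csv(
    "moralstrength_raw/filtered_annotators/RAW_ANNOTATIONS_CARE", sep="\t"
)

# Average expressed moral per word: a rough per-word score on the 1-9 scale
word_scores = df.groupby("WORD")["EXPRESSED_MORAL"].mean()
print(word_scores.head())

# All ratings by one (hashed) annotator within this moral
first_id = df["ANNOTATOR_ID"].iloc[0]
print(df[df["ANNOTATOR_ID"] == first_id])
```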

## MoralStrength annotation task descriptions

In the folder `moralstrength/tasks` we also include the original description of the annotation tasks for the crowdsourcing process.
The interested reader can consult the instructions given to the human annotators.



            
