# Classy Classification
Have you ever struggled with needing a [spaCy TextCategorizer](https://spacy.io/api/textcategorizer) but didn't have the time to train one from scratch? Classy Classification is the way to go! For few-shot classification using [sentence-transformers](https://github.com/UKPLab/sentence-transformers) or [spaCy models](https://spacy.io/usage/models), provide a dictionary with labels and examples, or just provide a list of labels for zero-shot classification with [Hugging Face zero-shot classifiers](https://huggingface.co/models?pipeline_tag=zero-shot-classification).
[![Current Release Version](https://img.shields.io/github/release/pandora-intelligence/classy-classification.svg?style=flat-square&logo=github)](https://github.com/pandora-intelligence/classy-classification/releases)
[![pypi Version](https://img.shields.io/pypi/v/classy-classification.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/classy-classification/)
[![PyPi downloads](https://static.pepy.tech/personalized-badge/classy-classification?period=total&units=international_system&left_color=grey&right_color=orange&left_text=pip%20downloads)](https://pypi.org/project/classy-classification/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg?style=flat-square)](https://github.com/ambv/black)
# Install
```
pip install classy-classification
```
Or install with ONNX support for faster inference:
```
pip install classy-classification[onnx]
```
## SetFit support
I got a lot of requests for SetFit support, but I decided to create a [separate package](https://github.com/davidberenstein1957/spacy-setfit) for this. Feel free to check it out. ❤️
## ONNX issues
### pickling
The ONNX backend can run into issues when pickling the data.
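If you run into this, a possible workaround (a sketch outside the library, assuming you only need to persist the training data) is to pickle the raw data and rebuild the classifier on load, rather than pickling the ONNX-backed object itself:
```python
import pickle

from classy_classification import ClassyClassifier

data = {
    "furniture": ["This text is about chairs."],
    "kitchen": ["There also exist things like fridges."],
}

# persist only the plain training data, which pickles without issue
with open("training_data.pkl", "wb") as f:
    pickle.dump(data, f)

# rebuild the classifier from the stored data at load time
with open("training_data.pkl", "rb") as f:
    classifier = ClassyClassifier(data=pickle.load(f))
```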
### M1
Some [installation issues](https://github.com/onnx/onnx/issues/3129) might occur on Apple Silicon, which can be fixed with the following commands.
```
brew install cmake
brew install protobuf
pip3 install onnx --no-use-pep517
```
# Quickstart
## SpaCy embeddings
```python
import spacy
# or import standalone
# from classy_classification import ClassyClassifier
data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens."]
}

nlp = spacy.load("en_core_web_trf")
nlp.add_pipe(
    "classy_classification",
    config={
        "data": data,
        "model": "spacy"
    }
)

print(nlp("I am looking for kitchen appliances.")._.cats)

# Output:
#
# [{"furniture": 0.21}, {"kitchen": 0.79}]
```
### Sentence level classification
```python
import spacy
data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens."]
}

nlp = spacy.load("en_core_web_md")
nlp.add_pipe(
    "classy_classification",
    config={
        "data": data,
        "model": "spacy",
        "include_sent": True
    }
)

doc = nlp("I am looking for kitchen appliances. And I love doing so.")

# doc.sents is a generator, so convert it to a list before indexing
print(list(doc.sents)[0]._.cats)

# Output:
#
# [{"furniture": 0.21}, {"kitchen": 0.79}]
```
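To collect predictions for every sentence in a document, you can iterate over `doc.sents` instead of indexing a single sentence (a short sketch that reuses the pipeline configured above):
```python
doc = nlp("I am looking for kitchen appliances. And I love doing so.")

# with include_sent enabled, each sentence span carries its own scores
for sent in doc.sents:
    print(sent.text, sent._.cats)
```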
### Define random seed and verbosity
```python
nlp.add_pipe(
    "classy_classification",
    config={
        "data": data,
        "verbose": True,
        "config": {"seed": 42}
    }
)
```
### Multi-label classification
Sometimes multiple labels are necessary to fully describe the contents of a text. In that case, you can use the **multi-label** implementation, where the sum of label scores is not limited to 1. Overlapping examples can simply be passed to multiple keys.
```python
import spacy
data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa.",
                  "We have a new dinner table.",
                  "There also exist things like fridges.",
                  "I hope to be getting a new stove today.",
                  "Do you also have some ovens.",
                  "We have a new dinner table."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens.",
                "We have a new dinner table.",
                "There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens.",
                "We have a new dinner table."]
}

nlp = spacy.load("en_core_web_md")
nlp.add_pipe(
    "classy_classification",
    config={
        "data": data,
        "model": "spacy",
        "multi_label": True,
    }
)

print(nlp("I am looking for furniture and kitchen equipment.")._.cats)

# Output:
#
# [{"furniture": 0.92}, {"kitchen": 0.91}]
```
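Since the scores are independent in the multi-label setting, a simple way to turn them into a set of predicted labels is to apply a threshold (a minimal sketch, assuming the list-of-dicts output shown above; the 0.5 cut-off is an arbitrary choice, not a library default):
```python
doc = nlp("I am looking for furniture and kitchen equipment.")

# keep every label whose independent score clears the threshold
predicted = [
    label
    for entry in doc._.cats
    for label, score in entry.items()
    if score >= 0.5
]
print(predicted)  # e.g. ["furniture", "kitchen"]
```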
### Outlier detection
Sometimes it is useful to do outlier detection or binary classification. This can be approached with a binary training dataset (approach 1), but I have also implemented support for a `OneClassSVM` for [outlier detection using a single label](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html) (approach 2). Note that this method does not return probabilities; the data is merely formatted as label-score pairs to keep the output uniform.
Approach 1:
```python
import spacy
data_binary = {
    "inlier": ["This text is about chairs.",
               "Couches, benches and televisions.",
               "I really need to get a new sofa."],
    "outlier": ["Text about kitchen equipment",
                "This text is about politics",
                "Comments about AI and stuff."]
}

nlp = spacy.load("en_core_web_md")
nlp.add_pipe(
    "classy_classification",
    config={
        "data": data_binary,
    }
)

print(nlp("This text is a random text")._.cats)

# Output:
#
# [{'inlier': 0.2926672385488411, 'outlier': 0.707332761451159}]
```
Approach 2:
```python
import spacy
data_singular = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa.",
                  "We have a new dinner table."]
}

nlp = spacy.load("en_core_web_md")
nlp.add_pipe(
    "classy_classification",
    config={
        "data": data_singular,
    }
)

print(nlp("This text is a random text")._.cats)

# Output:
#
# [{'furniture': 0, 'not_furniture': 1}]
```
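Because the `OneClassSVM` route returns hard 0/1 values rather than probabilities, the result can be read as a simple inlier flag (a sketch, again assuming the label-score output shown above):
```python
doc = nlp("This text is a random text")

# flatten the label-score pairs and treat a score of 1 for the trained label as "inlier"
scores = {label: score for entry in doc._.cats for label, score in entry.items()}
is_furniture = scores.get("furniture", 0) == 1
print(is_furniture)  # False for this unrelated text
```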
## Sentence-transformer embeddings
```python
import spacy
data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens."]
}

nlp = spacy.blank("en")
nlp.add_pipe(
    "classy_classification",
    config={
        "data": data,
        "model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
        "device": "gpu"
    }
)

print(nlp("I am looking for kitchen appliances.")._.cats)

# Output:
#
# [{"furniture": 0.21}, {"kitchen": 0.79}]
```
## Hugging Face zero-shot classifiers
```python
import spacy
data = ["furniture", "kitchen"]
nlp = spacy.blank("en")
nlp.add_pipe(
"classy_classification",
config={
"data": data,
"model": "typeform/distilbert-base-uncased-mnli",
"cat_type": "zero",
"device": "gpu"
}
)
print(nlp("I am looking for kitchen appliances.")._.cats)
# Output:
#
# [{"furniture": 0.21}, {"kitchen": 0.79}]
```
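For larger batches of text, spaCy's regular `nlp.pipe` API also works with the classifier in the pipeline (a short sketch; the example texts are placeholders):
```python
texts = [
    "I am looking for kitchen appliances.",
    "Couches and benches are on sale.",
]

# nlp.pipe streams the texts through the pipeline, yielding one Doc per input
for doc in nlp.pipe(texts):
    print(doc.text, doc._.cats)
```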
# Credits
## Inspiration Drawn From
[Hugging Face](https://huggingface.co/) does offer some nice models for few-/zero-shot classification, but these are not tailored to multi-lingual approaches. Rasa NLU has [a nice approach](https://rasa.com/blog/rasa-nlu-in-depth-part-1-intent-classification/) for this, but it's too embedded in their codebase for easy usage outside of Rasa/chatbots. Additionally, it made sense to integrate [sentence-transformers](https://github.com/UKPLab/sentence-transformers) and [Hugging Face zero-shot](https://huggingface.co/models?pipeline_tag=zero-shot-classification) models, instead of default [word embeddings](https://arxiv.org/abs/1301.3781). Finally, I decided to integrate with spaCy, since training a custom [spaCy TextCategorizer](https://spacy.io/api/textcategorizer) seems like a lot of hassle if you want something quick and dirty.
- [Scikit-learn](https://github.com/scikit-learn/scikit-learn)
- [Rasa NLU](https://github.com/RasaHQ/rasa)
- [Sentence Transformers](https://github.com/UKPLab/sentence-transformers)
- [Spacy](https://github.com/explosion/spaCy)
## Or buy me a coffee
[!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/98kf2552674)
# Standalone usage without spaCy
```python
from classy_classification import ClassyClassifier
data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens."]
}

classifier = ClassyClassifier(data=data)
classifier("I am looking for kitchen appliances.")
classifier.pipe(["I am looking for kitchen appliances."])

# overwrite training data
classifier.set_training_data(data=data)
classifier("I am looking for kitchen appliances.")

# overwrite embedding model (see https://www.sbert.net/docs/pretrained_models.html for options)
classifier.set_embedding_model(model="paraphrase-MiniLM-L3-v2")
classifier("I am looking for kitchen appliances.")

# overwrite SVC config
classifier.set_classification_model(
    config={
        "C": [1, 2, 5, 10, 20, 100],
        "kernel": ["linear"],
        "max_cross_validation_folds": 5
    }
)
classifier("I am looking for kitchen appliances.")
```
## Save and load models
```python
import pickle

from classy_classification import ClassyClassifier

data = {
    "furniture": ["This text is about chairs.",
                  "Couches, benches and televisions.",
                  "I really need to get a new sofa."],
    "kitchen": ["There also exist things like fridges.",
                "I hope to be getting a new stove today.",
                "Do you also have some ovens."]
}
classifier = ClassyClassifier(data=data)

with open("./classifier.pkl", "wb") as f:
    pickle.dump(classifier, f)

with open("./classifier.pkl", "rb") as f:
    classifier = pickle.load(f)

classifier("I am looking for kitchen appliances.")
```