# Spark NLP: State-of-the-Art Natural Language Processing & LLMs Library
<p align="center">
<a href="https://github.com/JohnSnowLabs/spark-nlp/actions" alt="build">
<img src="https://github.com/JohnSnowLabs/spark-nlp/workflows/build/badge.svg" /></a>
<a href="https://github.com/JohnSnowLabs/spark-nlp/releases" alt="Current Release Version">
<img src="https://img.shields.io/github/v/release/JohnSnowLabs/spark-nlp.svg?style=flat-square&logo=github" /></a>
<a href="https://search.maven.org/artifact/com.johnsnowlabs.nlp/spark-nlp_2.12" alt="Maven Central">
<img src="https://maven-badges.herokuapp.com/maven-central/com.johnsnowlabs.nlp/spark-nlp_2.12/badge.svg" /></a>
<a href="https://badge.fury.io/py/spark-nlp" alt="PyPI version">
<img src="https://badge.fury.io/py/spark-nlp.svg" /></a>
<a href="https://anaconda.org/JohnSnowLabs/spark-nlp" alt="Anaconda-Cloud">
<img src="https://anaconda.org/johnsnowlabs/spark-nlp/badges/version.svg" /></a>
<a href="https://github.com/JohnSnowLabs/spark-nlp/blob/master/LICENSE" alt="License">
<img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" /></a>
<a href="https://pypi.org/project/spark-nlp/" alt="PyPi downloads">
<img src="https://static.pepy.tech/personalized-badge/spark-nlp?period=total&units=international_system&left_color=grey&right_color=orange&left_text=pip%20downloads" /></a>
</p>
Spark NLP is a state-of-the-art Natural Language Processing library built on top of Apache Spark. It provides **simple**, **performant** & **accurate** NLP annotations for machine learning pipelines that **scale** easily in a distributed environment.
Spark NLP comes with **83000+** pretrained **pipelines** and **models** in more than **200** languages.
It also offers tasks such as **Tokenization**, **Word Segmentation**, **Part-of-Speech Tagging**, Word and Sentence **Embeddings**, **Named Entity Recognition**, **Dependency Parsing**, **Spell Checking**, **Text Classification**, **Sentiment Analysis**, **Token Classification**, **Machine Translation** (180+ languages), **Summarization**, **Question Answering**, **Table Question Answering**, **Text Generation**, **Image Classification**, **Image to Text (captioning)**, **Automatic Speech Recognition**, **Zero-Shot Learning**, and many more [NLP tasks](#features).
**Spark NLP** is the only open-source NLP library in **production** that offers state-of-the-art transformers such as **BERT**, **CamemBERT**, **ALBERT**, **ELECTRA**, **XLNet**, **DistilBERT**, **RoBERTa**, **DeBERTa**, **XLM-RoBERTa**, **Longformer**, **ELMO**, **Universal Sentence Encoder**, **Llama-2**, **M2M100**, **BART**, **Instructor**, **E5**, **Google T5**, **MarianMT**, **OpenAI GPT2**, **Vision Transformers (ViT)**, **OpenAI Whisper**, **Llama**, **Mistral**, **Phi**, **Qwen2**, and many more, not only to **Python** and **R**, but also to the **JVM** ecosystem (**Java**, **Scala**, and **Kotlin**) at **scale** by extending **Apache Spark** natively.
## Model Importing Support
Spark NLP provides easy support for importing models from various popular frameworks:
- **TensorFlow**
- **ONNX**
- **OpenVINO**
- **Llama.cpp (GGUF)**
This wide range of support allows you to seamlessly integrate models from different sources into your Spark NLP workflows, enhancing flexibility and compatibility with existing machine learning ecosystems.
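For example, a model exported from one of these frameworks can be loaded into a Spark NLP annotator and then saved as a native Spark NLP model. The sketch below is a minimal, hedged example: `exported_model_path` is a hypothetical directory containing a BERT model you have already exported, and the exact export steps depend on the source framework (see the import notebooks in the examples repository).

```python
# A minimal sketch of importing an externally exported BERT model into Spark NLP.
# `exported_model_path` is a hypothetical folder holding a model already exported
# in a format Spark NLP understands (e.g. TensorFlow SavedModel or ONNX); the
# export itself is done beforehand with the source framework's tooling.
import sparknlp
from sparknlp.annotator import BertEmbeddings

spark = sparknlp.start()

exported_model_path = "/tmp/exported_bert"  # hypothetical path

bert = (
    BertEmbeddings.loadSavedModel(exported_model_path, spark)
    .setInputCols(["document", "token"])
    .setOutputCol("embeddings")
)

# Persist it as a Spark NLP model so it can later be reloaded with BertEmbeddings.load(...)
bert.write().overwrite().save("/tmp/spark_nlp_bert")
```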
## Project's website
Take a look at our official Spark NLP page, [https://sparknlp.org/](https://sparknlp.org/), for user
documentation and examples.
## Features
- [Text Preprocessing](https://sparknlp.org/docs/en/features#text-preproccesing)
- [Parsing and Analysis](https://sparknlp.org/docs/en/features#parsing-and-analysis)
- [Sentiment and Classification](https://sparknlp.org/docs/en/features#sentiment-and-classification)
- [Embeddings](https://sparknlp.org/docs/en/features#embeddings)
- [Classification and Question Answering](https://sparknlp.org/docs/en/features#classification-and-question-answering-models)
- [Machine Translation and Generation](https://sparknlp.org/docs/en/features#machine-translation-and-generation)
- [Image and Speech](https://sparknlp.org/docs/en/features#image-and-speech)
- [Integration and Interoperability (ONNX, OpenVINO)](https://sparknlp.org/docs/en/features#integration-and-interoperability)
- [Pre-trained Models (36000+ in 200+ languages)](https://sparknlp.org/docs/en/features#pre-trained-models)
- [Multi-lingual Support](https://sparknlp.org/docs/en/features#multi-lingual-support)
## Quick Start
This is a quick example of how to use a Spark NLP pre-trained pipeline in Python and PySpark:
```sh
$ java -version
# should be Java 8 or 11 (Oracle or OpenJDK)
$ conda create -n sparknlp python=3.7 -y
$ conda activate sparknlp
# spark-nlp by default is based on pyspark 3.x
$ pip install spark-nlp==5.5.1 pyspark==3.3.1
```
In a Python console or Jupyter `Python3` kernel:
```python
# Import Spark NLP
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.pretrained import PretrainedPipeline
import sparknlp
# Start SparkSession with Spark NLP
# The start() function has 3 parameters: gpu, apple_silicon, and memory
# sparknlp.start(gpu=True) will start the session with GPU support
# sparknlp.start(apple_silicon=True) will start the session with macOS M1 & M2 support
# sparknlp.start(memory="16G") to change the default driver memory in SparkSession
spark = sparknlp.start()
# Download a pre-trained pipeline
pipeline = PretrainedPipeline('explain_document_dl', lang='en')
# Your testing dataset
text = """
The Mona Lisa is a 16th century oil painting created by Leonardo.
It's held at the Louvre in Paris.
"""
# Annotate your testing dataset
result = pipeline.annotate(text)
# What's in the pipeline
list(result.keys())
# Output: ['entities', 'stem', 'checked', 'lemma', 'document',
#          'pos', 'token', 'ner', 'embeddings', 'sentence']

# Check the results
result['entities']
# Output: ['Mona Lisa', 'Leonardo', 'Louvre', 'Paris']
```
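The same pipeline can also be applied to a Spark DataFrame with `transform()`, which is how you scale annotation to larger datasets. A minimal sketch, continuing from the snippet above (pre-trained pipelines expect the input column to be named `text`):

```python
# Apply the pipeline above to a Spark DataFrame instead of a plain string.
# Pre-trained pipelines expect an input column named "text".
df = spark.createDataFrame(
    [["The Mona Lisa is a 16th century oil painting created by Leonardo."]]
).toDF("text")

annotated_df = pipeline.transform(df)

# Each annotator's output is a column of annotations; `result` holds the values.
annotated_df.select("entities.result").show(truncate=False)
```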
For more examples, visit our dedicated [examples](https://github.com/JohnSnowLabs/spark-nlp/tree/master/examples) repository, which showcases all Spark NLP use cases!
### Packages Cheatsheet
This cheatsheet maps each Spark NLP Maven package to its corresponding Apache Spark / PySpark major version:
| Apache Spark | Spark NLP on CPU | Spark NLP on GPU | Spark NLP on AArch64 (linux) | Spark NLP on Apple Silicon |
|-------------------------|--------------------|----------------------------|--------------------------------|--------------------------------------|
| 3.0/3.1/3.2/3.3/3.4/3.5 | `spark-nlp` | `spark-nlp-gpu` | `spark-nlp-aarch64` | `spark-nlp-silicon` |
| Start Function | `sparknlp.start()` | `sparknlp.start(gpu=True)` | `sparknlp.start(aarch64=True)` | `sparknlp.start(apple_silicon=True)` |
NOTE: `M1/M2` and `AArch64` are under `experimental` support. Community access to these architectures is limited, and we
had to build most of the dependencies ourselves to make them compatible. We support these two architectures, but they
may not work in some environments.
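If you prefer to manage the SparkSession yourself rather than calling `sparknlp.start()`, you can pass the Maven coordinate from the table above directly to Spark. Below is a minimal sketch for the CPU package (swap the artifact for the GPU, AArch64, or Apple Silicon variants as needed); the memory and Kryo settings mirror values suggested in the installation documentation and should be tuned for your cluster.

```python
# A minimal sketch of starting the SparkSession manually with the Spark NLP Maven
# package, instead of sparknlp.start(). Tune memory/serializer settings as needed.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("Spark NLP")
    .master("local[*]")
    .config("spark.driver.memory", "16G")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .config("spark.kryoserializer.buffer.max", "2000M")
    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.1")
    .getOrCreate()
)
```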
## Pipelines and Models
For a quick example of using pipelines and models, take a look at our official [documentation](https://sparknlp.org/docs/en/install#pipelines-and-models).
#### Please check out our Models Hub for the full list of [pre-trained models](https://sparknlp.org/models) with examples, demos, benchmarks, and more
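As an illustration, individual pre-trained models from the Models Hub can also be composed into a regular Spark ML `Pipeline`, as an alternative to downloading a whole pre-trained pipeline. A minimal sketch; `small_bert_L2_768` is one of the English BERT models on the Models Hub.

```python
# A minimal sketch of composing individual pre-trained models into a Spark ML Pipeline.
import sparknlp
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings

spark = sparknlp.start()

document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")

# Downloads the model from the Models Hub on first use
embeddings = (
    BertEmbeddings.pretrained("small_bert_L2_768", "en")
    .setInputCols(["document", "token"])
    .setOutputCol("embeddings")
)

data = spark.createDataFrame([["Spark NLP annotators run at scale."]]).toDF("text")
model = Pipeline(stages=[document_assembler, tokenizer, embeddings]).fit(data)
result = model.transform(data)
result.select("embeddings.embeddings").show(1)
```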
## Platform and Ecosystem Support
### Apache Spark Support
Spark NLP *5.5.1* has been built on top of Apache Spark 3.4 while fully supporting Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, 3.4.x, and 3.5.x.
| Spark NLP | Apache Spark 3.5.x | Apache Spark 3.4.x | Apache Spark 3.3.x | Apache Spark 3.2.x | Apache Spark 3.1.x | Apache Spark 3.0.x | Apache Spark 2.4.x | Apache Spark 2.3.x |
|-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
| 5.5.x | YES | YES | YES | YES | YES | YES | NO | NO |
| 5.4.x | YES | YES | YES | YES | YES | YES | NO | NO |
| 5.3.x | YES | YES | YES | YES | YES | YES | NO | NO |
| 5.2.x | YES | YES | YES | YES | YES | YES | NO | NO |
| 5.1.x | Partially | YES | YES | YES | YES | YES | NO | NO |
| 5.0.x | YES | YES | YES | YES | YES | YES | NO | NO |
Find out more about `Spark NLP` versions from our [release notes](https://github.com/JohnSnowLabs/spark-nlp/releases).
### Scala and Python Support
| Spark NLP | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10| Scala 2.11 | Scala 2.12 |
|-----------|------------|------------|------------|------------|------------|------------|------------|
| 5.5.x | NO | YES | YES | YES | YES | NO | YES |
| 5.4.x | NO | YES | YES | YES | YES | NO | YES |
| 5.3.x | NO | YES | YES | YES | YES | NO | YES |
| 5.2.x | NO | YES | YES | YES | YES | NO | YES |
| 5.1.x | NO | YES | YES | YES | YES | NO | YES |
| 5.0.x | NO | YES | YES | YES | YES | NO | YES |
Find out more about 4.x `Spark NLP` versions in our official [documentation](https://sparknlp.org/docs/en/install#apache-spark-support).
### Databricks Support
Spark NLP 5.5.1 has been tested and is compatible with the following runtimes:
| **CPU** | **GPU** |
|--------------------|--------------------|
| 14.1 / 14.1 ML | 14.1 ML & GPU |
| 14.2 / 14.2 ML | 14.2 ML & GPU |
| 14.3 / 14.3 ML | 14.3 ML & GPU |
| 15.0 / 15.0 ML | 15.0 ML & GPU |
| 15.1 / 15.1 ML | 15.1 ML & GPU |
| 15.2 / 15.2 ML | 15.2 ML & GPU |
| 15.3 / 15.3 ML | 15.3 ML & GPU |
| 15.4 / 15.4 ML | 15.4 ML & GPU |
Spark NLP is also compatible with older runtimes. For a full list, check Databricks support in our official [documentation](https://sparknlp.org/docs/en/install#databricks-support).
### EMR Support
Spark NLP 5.5.1 has been tested and is compatible with the following EMR releases:
| **EMR Release** |
|--------------------|
| emr-6.13.0 |
| emr-6.14.0 |
| emr-6.15.0 |
| emr-7.0.0 |
| emr-7.1.0 |
| emr-7.2.0 |
Spark NLP is also compatible with older EMR releases. For a full list, check EMR support in our official [documentation](https://sparknlp.org/docs/en/install#emr-support).
Full list of [Amazon EMR 6.x releases](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-6x.html)
Full list of [Amazon EMR 7.x releases](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-7x.html)
NOTE: EMR 6.1.0 and 6.1.1 are not supported.
## Installation
### Command line (requires internet connection)
To install spark-nlp packages through the command line, follow [these instructions](https://sparknlp.org/docs/en/install#command-line) from our official documentation.
### Scala
Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x versions. Our packages are
deployed to Maven Central. To add any of our packages as a dependency in your application, follow [these instructions](https://sparknlp.org/docs/en/install#scala-and-java)
from our official documentation.
If you are interested, there is a simple SBT starter project that shows how to use Spark NLP in your own
projects: [Spark NLP SBT Starter](https://github.com/maziyarpanahi/spark-nlp-starter)
### Python
Spark NLP supports Python 3.7.x and above depending on your major PySpark version.
Check all available installations for Python in our official [documentation](https://sparknlp.org/docs/en/install#python)
### Compiled JARs
To compile the JARs from source, follow [these instructions](https://sparknlp.org/docs/en/compiled#jars) from our official documentation.
## Platform-Specific Instructions
For detailed instructions on how to use Spark NLP on supported platforms, please refer to our official documentation:
| Platform | Supported Language(s) |
|-------------------------|-----------------------|
| [Apache Zeppelin](https://sparknlp.org/docs/en/install#apache-zeppelin) | Scala, Python |
| [Jupyter Notebook](https://sparknlp.org/docs/en/install#jupter-notebook) | Python |
| [Google Colab Notebook](https://sparknlp.org/docs/en/install#google-colab-notebook) | Python |
| [Kaggle Kernel](https://sparknlp.org/docs/en/install#kaggle-kernel) | Python |
| [Databricks Cluster](https://sparknlp.org/docs/en/install#databricks-cluster) | Scala, Python |
| [EMR Cluster](https://sparknlp.org/docs/en/install#emr-cluster) | Scala, Python |
| [GCP Dataproc Cluster](https://sparknlp.org/docs/en/install#gcp-dataproc) | Scala, Python |
### Offline
The Spark NLP library and all pre-trained models/pipelines can be used entirely offline, with no Internet access.
Please check [these instructions](https://sparknlp.org/docs/en/install#s3-integration) from our official documentation
on using Spark NLP offline.
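Once a model or pipeline archive has been downloaded and extracted onto the cluster, it can be loaded from a local path with the standard `load()` methods, without any network access. A minimal sketch; the paths below are hypothetical placeholders for wherever you extracted the archives.

```python
# A minimal sketch of loading already-downloaded artifacts from disk (fully offline).
# The paths are hypothetical; point them at your extracted Models Hub archives.
import sparknlp
from pyspark.ml import PipelineModel
from sparknlp.annotator import NerDLModel

spark = sparknlp.start()

# A full pre-trained pipeline extracted locally
offline_pipeline = PipelineModel.load("/models/explain_document_dl_en")

# A single pre-trained model extracted locally
ner_model = (
    NerDLModel.load("/models/ner_dl_en")
    .setInputCols(["sentence", "token", "embeddings"])
    .setOutputCol("ner")
)
```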
## Advanced Settings
You can change Spark NLP configurations via Spark properties.
Please check [these instructions](https://sparknlp.org/docs/en/install#sparknlp-properties) from our official documentation.
### S3 Integration
In Spark NLP we can define S3 locations to:
- Export log files from model training
- Store TensorFlow graphs used in `NerDLApproach`
Please check [these instructions](https://sparknlp.org/docs/en/install#s3-integration) from our official documentation.
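For instance, Spark NLP properties (including an S3 location for `NerDLApproach` training logs) can be passed when the session is created. A minimal, hedged sketch: the property follows the `spark.jsl.settings.*` convention from the installation guide, the bucket path is a placeholder, and the AWS credential settings you need depend on your environment.

```python
# A minimal sketch of passing Spark NLP properties at session creation time,
# e.g. redirecting NerDLApproach training logs to S3. The bucket path is a
# placeholder; credentials/region settings depend on your Hadoop/AWS setup.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("Spark NLP with custom settings")
    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.1")
    .config("spark.jsl.settings.annotator.log_folder", "s3://my-bucket/nlp-training-logs")
    .getOrCreate()
)
```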
## Documentation
### Examples
Need more **examples**? Check out our dedicated [Spark NLP Examples](https://github.com/JohnSnowLabs/spark-nlp/tree/master/examples)
repository, which showcases all Spark NLP use cases!
Also, don't forget to check out [Spark NLP in Action](https://sparknlp.org/demo), built with Streamlit.
#### All examples: [spark-nlp/examples](https://github.com/JohnSnowLabs/spark-nlp/tree/master/examples)
### FAQ
[Check our Articles and Videos page here](https://sparknlp.org/learn)
### Citation
We have published a [paper](https://www.sciencedirect.com/science/article/pii/S2665963821000063) that you can cite for
the Spark NLP library:
```bibtex
@article{KOCAMAN2021100058,
title = {Spark NLP: Natural language understanding at scale},
journal = {Software Impacts},
pages = {100058},
year = {2021},
issn = {2665-9638},
doi = {https://doi.org/10.1016/j.simpa.2021.100058},
url = {https://www.sciencedirect.com/science/article/pii/S2665963821000063},
author = {Veysel Kocaman and David Talby},
keywords = {Spark, Natural language processing, Deep learning, Tensorflow, Cluster},
abstract = {Spark NLP is a Natural Language Processing (NLP) library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines that can scale easily in a distributed environment. Spark NLP comes with 1100+ pretrained pipelines and models in more than 192+ languages. It supports nearly all the NLP tasks and modules that can be used seamlessly in a cluster. Downloaded more than 2.7 million times and experiencing 9x growth since January 2020, Spark NLP is used by 54% of healthcare organizations as the world’s most widely used NLP library in the enterprise.}
}
```
## Community support
- [Slack](https://join.slack.com/t/spark-nlp/shared_invite/zt-198dipu77-L3UWNe_AJ8xqDk0ivmih5Q) For live discussion with the Spark NLP community and the team
- [GitHub](https://github.com/JohnSnowLabs/spark-nlp) Bug reports, feature requests, and contributions
- [Discussions](https://github.com/JohnSnowLabs/spark-nlp/discussions) Engage with other community members, share ideas,
and show off how you use Spark NLP!
- [Medium](https://medium.com/spark-nlp) Spark NLP articles
- [YouTube](https://www.youtube.com/channel/UCmFOjlpYEhxf_wJUDuz6xxQ/videos) Spark NLP video tutorials
## Contributing
We appreciate any sort of contribution:
- ideas
- feedback
- documentation
- bug reports
- NLP training and testing corpora
- development and testing
Clone the repo and submit your pull requests! Or directly create issues in this repo.
## John Snow Labs
[http://johnsnowlabs.com](http://johnsnowlabs.com)