
Releases: JohnSnowLabs/spark-nlp

6.2.3

03 Dec 11:22

📢 Spark NLP 6.2.3: Further Improvements for NerDL

Spark NLP 6.2.3 introduces targeted improvements to training performance and stability of NerDLApproach and bug fixes for CamemBertForTokenClassification.

NerDLApproach now uses new internal data-loading behavior, improving training speed and preventing out-of-memory errors.

🔥 Highlights

Enhanced NerDLApproach training performance through threaded data loading and optimized partitioning.

🚀 New Features & Enhancements

NerDLApproach Training Optimizations

Significant performance improvements for training of NerDLApproach:

Threaded Data Loading: When the memory optimizer is enabled (setEnableMemoryOptimizer(true)), data can now be pre-fetched through a threaded data loader. Prefetching is disabled by default and can be tuned via:

.setPrefetchBatches(int)

By tuning this parameter (for example, setting it to 20 batches), you can achieve training time reductions of about 10%.

Optimized Partitioning Strategy: NerDLApproach now applies optimized dataframe partitioning when using the memory optimizer (setEnableMemoryOptimizer(true)) by default, improving parallelization efficiency during training and preventing out-of-memory errors.

For manual tuning of the input data frames, this behavior can be disabled with:

.setOptimizePartitioning(false)
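Taken together, a hedged training configuration sketch (the input columns and epoch count are illustrative assumptions; the setters are those documented above):

```python
# Illustrative NerDLApproach training setup; the column names and epoch
# count are assumptions for this sketch.
from sparknlp.annotator import NerDLApproach

ner = NerDLApproach() \
    .setInputCols(["sentence", "token", "embeddings"]) \
    .setLabelColumn("label") \
    .setOutputCol("ner") \
    .setMaxEpochs(10) \
    .setEnableMemoryOptimizer(True) \
    .setPrefetchBatches(20) \
    .setOptimizePartitioning(True)  # default when the memory optimizer is enabled
```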

🐛 Bug Fixes

  • CamemBertForTokenClassification: Fixed an issue with expected input types during inference.

❤️ Community Support

  • Slack - real-time discussion with the Spark NLP community and team
  • GitHub - issue tracking, feature requests, and contributions
  • Discussions - community ideas and showcases
  • Medium - latest Spark NLP articles and tutorials
  • YouTube - educational videos and demos

💻 Installation

Python

pip install spark-nlp==6.2.3

Spark Packages

CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.3

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.3

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.3

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.3

Maven

<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.3</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.2.2...6.2.3

6.2.2

13 Nov 16:19

📢 Spark NLP 6.2.2: Bugfix Release

Spark NLP 6.2.2 brings bug fixes to WordEmbeddings and NerDLApproach logging.

🐛 Bug Fixes

  • WordEmbeddings: Fixed a bug where WordEmbeddings would duplicate input tokens in the output.
  • NerDLApproach: Fixed a logging bug that reported inaccurate training/validation counts.

❤️ Community Support

  • Slack - real-time discussion with the Spark NLP community and team
  • GitHub - issue tracking, feature requests, and contributions
  • Discussions - community ideas and showcases
  • Medium - latest Spark NLP articles and tutorials
  • YouTube - educational videos and demos

💻 Installation

Python

pip install spark-nlp==6.2.2

Spark Packages

CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.2

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.2

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.2

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.2

Maven

<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.2</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.2.0...6.2.2

6.2.1

07 Nov 16:51

📢 Spark NLP 6.2.1: Enhanced hierarchical document processing and training optimizations

Spark NLP 6.2.1 brings significant improvements to document ingestion with expanded hierarchical support, XML processing enhancements, and optimizations for NerDL training. This release builds on the foundation of 6.2.0, continuing to focus on structure-awareness, flexibility, and performance for production NLP pipelines.

🔥 Highlights

  • Hierarchical Document Processing: Extended support for PDF, Word, and Markdown with parent-child element relationships
  • NerDLApproach Training Optimizations: Reduced memory footprint and improved training performance with BERT based embeddings
  • Improved Document Output Format: Single document annotations by default for more intuitive behavior with large documents
  • Enhanced XML Reading: Attribute extraction and improved tag handling in Reader2Doc

🚀 New Features & Enhancements

Hierarchical Support for Multiple Document Formats

Building on the HTMLReader hierarchical features introduced in 6.2.0, this release extends structured element tracking to additional document formats:

  • Reader2Doc now supports hierarchical processing for PDF, Microsoft Word, and Markdown files

  • Each extracted element includes:

    • element_id: Unique UUID identifier per element
    • parent_id: References the parent element's ID for logical document structure
  • Enables tree-like navigation and contextual understanding of document hierarchy:

    Chapter 1
     ├── Narrative Text A
     ├── Narrative Text B
    Chapter 2
     ├── Paragraph C
    
  • Supports advanced use cases including hierarchical retrieval, graph-based indexing, and multi-level document analysis

  • Metadata propagation ensures downstream annotators maintain structural relationships
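Once elements carry element_id and parent_id, reconstructing the document tree is straightforward. A minimal, self-contained sketch (the element triples below are made up for illustration; real IDs are UUIDs):

```python
# Rebuild a parent -> children map from (element_id, parent_id, text) triples,
# as produced by hierarchical readers. The sample data is illustrative only.
from collections import defaultdict

elements = [
    ("e1", None, "Chapter 1"),
    ("e2", "e1", "Narrative Text A"),
    ("e3", "e1", "Narrative Text B"),
    ("e4", None, "Chapter 2"),
    ("e5", "e4", "Paragraph C"),
]

children = defaultdict(list)
for element_id, parent_id, text in elements:
    children[parent_id].append((element_id, text))

def print_tree(parent=None, depth=0):
    # Depth-first walk over the parent -> children map.
    for element_id, text in children.get(parent, []):
        print("  " * depth + text)
        print_tree(element_id, depth + 1)

print_tree()
```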

NerDLApproach Training Optimizations

Significant performance improvements for training of NerDLApproach:

  • Reduced Memory Usage with BERT based embeddings: Optimized output embeddings allocations, lowering peak memory footprint during training
  • Automatic Dataset Caching: When using setEnableMemoryOptimizer(true) with maxEpoch > 1, input datasets are automatically cached to improve training speed
  • Graph Metadata Reuse: NerDLGraphChecker now populates TensorFlow graph metadata that NerDLApproach can reuse, reducing redundant computations during training initialization

With all these improvements, you can expect up to half the memory consumption and training time in RAM-constrained environments (when using setEnableMemoryOptimizer(true)). For larger distributed datasets, the effect will be more pronounced.

XML Reader and Reader2Doc Enhancements

  • Single Document Output by Default: Reader2Doc now creates a single document annotation per file by default, providing more intuitive behavior when processing large documents

    • Lines are joined by the newline character \n by default, configurable via the new setJoinString(string) parameter for custom separators
  • Improved Tag Handling: The XML reader now ignores empty tags without text content, reducing noise in parsed output

  • Enhanced content type handling for application/xml documents

  • XML Tag Attribute Extraction: The new setExtractTagAttributes(attributes: list[str]) parameter enables extraction of XML attribute values, automatically including the specified attribute values in the document output. Example:

    <bookstore>
        <book category="children">
            <title lang="en">Harry Potter</title>
            <author>J K. Rowling</author>
            <year>2005</year>
            <price>29.99</price>
        </book>
        <book category="web">
            <title lang="en">Learning XML</title>
            <author>Erik T. Ray</author>
            <year>2003</year>
            <price>39.95</price>
        </book>
    </bookstore>

    We can extract the category and lang attribute values with the following Reader2Doc configuration:

    reader2doc = Reader2Doc() \
        .setContentType("application/xml") \
        .setContentPath("../src/test/resources/reader/xml/test.xml") \
        .setOutputCol("document") \
        .setExtractTagAttributes(["category", "lang"])

    Resulting in

    children
    en
    Harry Potter
    J K. Rowling
    2005
    29.99
    web
    en
    Learning XML
    Erik T. Ray
    2003
    39.95
    

🐛 Bug Fixes

  • Colab Environment Setup: Added Java installation to Colab setup script for improved out-of-the-box compatibility

❤️ Community Support

  • Slack - real-time discussion with the Spark NLP community and team
  • GitHub - issue tracking, feature requests, and contributions
  • Discussions - community ideas and showcases
  • Medium - latest Spark NLP articles and tutorials
  • YouTube - educational videos and demos

💻 Installation

Python

pip install spark-nlp==6.2.1

Spark Packages

CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.1

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.1

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.1

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.1

Maven

<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.1</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.2.0...6.2.1

6.2.0

22 Oct 15:38

📢 Spark NLP 6.2.0: A new stage for unstructured document ingestion and processing at scale

Spark NLP 6.2.0 introduces key upgrades across entity extraction, document normalization, HTML reading, and GGUF-based models. To recap, since the release of Spark NLP 6.1, you can:

  • Infer quantized cutting-edge LLMs and VLMs such as Gemma 3, Phi-4, Llama 3.1, Qwen 2.5
  • Rerank documents using llama.cpp with AutoGGUFReranker
  • Ingest unstructured documents of diverse formats
    • Reader2Doc: streamlines the process of loading and integrating diverse file formats (PDFs, Word, Excel, PowerPoint, HTML, Text, Email, Markdown) directly into Spark NLP pipelines with a unified and flexible interface.
    • Reader2Table: streamlines tabular data extraction from multiple document formats with seamless pipeline integration.
    • Reader2Image: extracts structured image content from various document types

Spark NLP release 6.2.0 further focuses on automation, structure-awareness, and resource efficiency, making pipelines easier to configure, manage, and extend.

🔥 Highlights

  • Auto Modes for EntityRuler and DocumentNormalizer: automatic regex and text-cleaning presets for faster setup.
  • Hierarchical Element Tracking in HTMLReader: adds element and parent identifiers for structure-aware document processing.
  • Resource Management for AutoGGUF Annotators: improved control and cleanup of llama.cpp-based models.

🚀 New Features & Enhancements

EntityRulerModel and DocumentNormalizer Auto Modes

EntityRulerModel

  • Added autoMode parameter to enable predefined regex entity groups ("network_entities", "communication_entities", "media_entities", "email_entities", "all_entities").
  • Added extractEntities parameter to filter entities within auto modes.
  • Automatically applies case-insensitive regex presets and falls back to manual mode if not specified.
  • Retains full backward compatibility with JSON or RocksDB-based rules.
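A hedged configuration sketch for auto mode (the setAutoMode and setExtractEntities setter names are assumptions derived from the autoMode and extractEntities parameters above):

```python
# Hypothetical EntityRulerModel auto-mode setup; the setter names are assumed
# from the autoMode / extractEntities parameters described in this release.
entity_ruler = EntityRulerModel() \
    .setInputCols(["document", "token"]) \
    .setOutputCol("entities") \
    .setAutoMode("email_entities") \
    .setExtractEntities(["email"])
```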

DocumentNormalizer

  • Added presetPattern and autoMode parameters to apply built-in text cleaning patterns.
  • New modes include "light_clean", "document_clean", "social_clean", "html_clean", and "full_auto".
  • Enables quick application of multiple cleaning operations without manual configuration.

Together, these additions significantly reduce boilerplate setup for common text extraction and normalization workflows.
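As an illustration of the DocumentNormalizer presets, a minimal sketch (the setAutoMode setter name is an assumption based on the autoMode parameter above):

```python
# Hypothetical DocumentNormalizer auto-mode configuration; the setter name
# is assumed from the autoMode parameter introduced in this release.
from sparknlp.annotator import DocumentNormalizer

normalizer = DocumentNormalizer() \
    .setInputCols(["document"]) \
    .setOutputCol("normalized") \
    .setAutoMode("html_clean")  # one of the built-in preset modes
```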

Hierarchical Element Identification in HTMLReader

  • Introduced element_id and parent_id metadata fields for each parsed HTML element.
  • Enables explicit structural relationships (e.g., title → paragraph → link) for hierarchical retrieval and contextual reasoning.
  • Supports graph-based indexing, hybrid search, and multi-level document analysis.
  • Metadata propagation improvements ensure Sentence Detector outputs also retain upstream hierarchy information.

AutoGGUF Annotator Enhancements

For AutoGGUFModel, AutoGGUFVision, AutoGGUFEmbeddings, AutoGGUFReranker

  • Added close() method to explicitly release llama.cpp model resources, preventing memory retention in long-running sessions.
  • Introduced setRemoveThinkingTag(tag: String) parameter to remove internal <think>...</think> sections from model outputs.
    • Regex pattern: (?s)<$tag>.+?</$tag>
    • Simplifies downstream processing for chat and reasoning models.
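The documented regex pattern can be reproduced outside Spark NLP to see its effect. A minimal Python illustration (the input string is made up):

```python
import re

def remove_thinking_tag(text: str, tag: str = "think") -> str:
    # (?s) lets '.' match newlines, so multi-line reasoning blocks are removed;
    # the non-greedy .+? stops at the first closing tag.
    pattern = rf"(?s)<{tag}>.+?</{tag}>"
    return re.sub(pattern, "", text)

raw = "<think>step 1\nstep 2</think>The answer is 42."
print(remove_thinking_tag(raw))  # The answer is 42.
```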

🐛 Bug Fixes

  • RobertaEmbeddings Warmup Test: Fixed a token sequence bug where unknown tokens caused initialization errors.

❤️ Community Support

  • Slack - real-time discussion with the Spark NLP community and team
  • GitHub - issue tracking, feature requests, and contributions
  • Discussions - community ideas and showcases
  • Medium - latest Spark NLP articles and tutorials
  • YouTube - educational videos and demos

💻 Installation

Python

pip install spark-nlp==6.2.0

Spark Packages

CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.0

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.0

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.0

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.0

Maven

<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.0</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.5...6.2.0

6.1.5

09 Oct 11:46

📢 Spark NLP 6.1.5: Smarter Readers and More Resilient Pipelines

Spark NLP 6.1.5 focuses on improving data ingestion reliability and pipeline flexibility. This release enhances reader components with better fault tolerance, broader input support, and introduces a new ReaderAssembler annotator for streamlined integration. Several key fixes also improve model loading and stability in distributed environments.

🔥 Highlights

  • New ReaderAssembler Annotator: Unify multiple reader annotators into one configurable component for simpler and cleaner ingestion pipelines.

🚀 New Features & Enhancements

Reader Pipeline Enhancements

  • ReaderAssembler Annotator
    A new meta-annotator that unifies Reader2X components (e.g., Reader2Doc, Reader2Image, Reader2Table) under a single interface.

    • Automatically selects the right reader(s) based on configuration.
    • Supports declarative assembly of reading stages.
    • Provides parameters for reader selection, fallback rules, and error handling.
      This simplifies pipeline construction and improves maintainability for multi-format ingestion workflows. (Link to notebook)
  • Support for String Input Columns in Readers (SPARKNLP-1291)
    Previously, Spark NLP readers only supported input via file paths. That meant if you already had a DataFrame with text content (say, from another pipeline or a preliminary load), you had to write it to disk just to let the reader ingest it, adding friction and overhead, especially in streaming or in-memory pipelines.

    With this change, you can:

    • Feed raw text stored in a DataFrame column directly into Spark NLP readers — zero I/O overhead when not needed.
    • Simplify workflows and pipelines (no need for temporary file staging just to “read” back data).
    • Improve performance and resource usage in scenarios where input is already available as strings (e.g. generated, preprocessed, or coming from another system).
    • Make the reader APIs more flexible and general-purpose.
  • Fault-Tolerant XML Reader
    The XML reader now skips malformed XML fragments (e.g., mismatched tags, missing closures, invalid characters) instead of failing the job.
    Enhanced error handling ensures more resilient ingestion of imperfect real-world data.

🐛 Bug Fixes

  • GGUF Model Loading Duplication
    Fixed an issue in FeaturesFallbackReader that caused duplicate loading or missing model files when calling .pretrained() on GGUF-based annotators such as AutoGGUFModel and rerankers, especially in Databricks environments.

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

💻 Installation

Python

pip install spark-nlp==6.1.5

Spark Packages

CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.5

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.5

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.5

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.5

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.5</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.5</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.5</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.5</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.4...6.1.5

6.1.4

23 Sep 14:21


📢 Spark NLP 6.1.4: Advancing Multimodal Workflows with Reader2Image

We are excited to announce the release of Spark NLP 6.1.4!
This version introduces a powerful new annotator, Reader2Image, which extends Spark NLP’s universal ingestion capabilities to embedded images across a wide range of document formats. With this release, Spark NLP users can now seamlessly integrate text and image processing in the same pipeline, unlocking new opportunities for vision-language modeling (VLM), multimodal search, and document understanding.


🔥 Highlights

  • New Reader2Image Annotator: Extract and structure image content directly from documents like PDFs, Word, PowerPoint, Excel, HTML, Markdown, and Email files.
  • Multimodal Pipeline Expansion: Build workflows that combine text, tables, and now images for comprehensive document AI applications.
  • Consistent Structured Output: Access image metadata (filename, dimensions, channels, mode) alongside binary image data in Spark DataFrames, fully compatible with other visual annotators.

🚀 New Features & Enhancements

Document Ingestion

  • Reader2Image Annotator
    A new multimodal annotator designed to parse image content embedded in structured documents. Supported formats include:

    • PDFs
    • Word (.doc/.docx)
    • Excel (.xls/.xlsx)
    • PowerPoint (.ppt/.pptx)
    • HTML & Markdown (.md)
    • Email files (.eml, .msg)

    Output Fields:

    • File name
    • Image dimensions (height, width)
    • Number of channels
    • Mode
    • Binary image data
    • Metadata

    This enables seamless integration with vision-language models (VLMs), multimodal embeddings, and downstream Spark NLP annotators, all within the same distributed pipeline.


🐛 Bug Fixes

  • None

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

pip install spark-nlp==6.1.4

Spark Packages

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.4

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.4

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.4

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.4

Maven

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.4</version>
</dependency>
  • GPU: spark-nlp-gpu_2.12:6.1.4
  • Apple Silicon: spark-nlp-silicon_2.12:6.1.4
  • AArch64: spark-nlp-aarch64_2.12:6.1.4

FAT JARs


📊 What’s Changed

Full Changelog: 6.1.3...6.1.4


6.1.3

01 Sep 13:57


📢 Spark NLP 6.1.3: NerDL Graph Checker, Reader2Doc Enhancements, Ranking Finisher

We are pleased to announce Spark NLP 6.1.3, introducing a new graph validation annotator for NER training, enhancements to Reader2Doc for flexible document handling, and a new ranking finisher for AutoGGUFReranker outputs. This release focuses on improving training robustness, document processing flexibility, and retrieval ranking capabilities.

🔥 Highlights

  • New NerDLGraphChecker annotator to validate NER training graphs before training starts.
  • Reader2Doc enhancements with options for consolidated output and filtering.
  • New AutoGGUFRerankerFinisher for ranking, filtering, and normalizing reranker outputs.

🚀 New Features & Enhancements

Named Entity Recognition (NER)

NerDLGraphChecker:
A new annotator that validates whether a suitable NerDL graph is available for a given training dataset before embeddings or training start. This helps avoid wasted computation in custom training scenarios. (Link to notebook)

  • Must be placed before embedding or NerDLApproach annotators.
  • Requires token and label columns in the dataset.
  • Automatically extracts embedding dimensions from the pipeline to validate graph compatibility.
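A hedged pipeline-order sketch showing the checker ahead of the embedding and training stages (the setter and column names are assumptions based on the requirements above):

```python
# Hypothetical pipeline placement; NerDLGraphChecker runs before the embedding
# stage and NerDLApproach, so graph compatibility is validated before any
# heavy computation starts. Setter and column names are assumed.
from pyspark.ml import Pipeline
from sparknlp.annotator import NerDLGraphChecker, BertEmbeddings, NerDLApproach

graph_checker = NerDLGraphChecker() \
    .setInputCols(["sentence", "token"]) \
    .setLabelColumn("label")

embeddings = BertEmbeddings.pretrained() \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

ner = NerDLApproach() \
    .setInputCols(["sentence", "token", "embeddings"]) \
    .setLabelColumn("label") \
    .setOutputCol("ner")

pipeline = Pipeline(stages=[graph_checker, embeddings, ner])
```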

Document Processing

Reader2Doc Enhancements:
New configuration options provide more control over output formatting:

  • outputAsDocument: Concatenates all sentences into a single document.
  • excludeNonText: Filters out non-textual elements (e.g., tables, images) from the document.

Ranking & Retrieval

AutoGGUFRerankerFinisher:
A finisher for processing AutoGGUFReranker outputs, adding advanced ranking and filtering capabilities (Link to notebook):

  • Top-k document selection.
  • Score threshold filtering.
  • Min-max score normalization (0–1 range).
  • Sorting by relevance score.
  • Rank assignment in metadata while preserving document structure.
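The min-max normalization mentioned above maps raw relevance scores into the 0–1 range via (s − min) / (max − min). A small standalone illustration (the scores are made up):

```python
def min_max_normalize(scores):
    # Map scores to [0, 1]; a constant score list maps to all zeros to avoid
    # division by zero (a simplifying assumption for this sketch).
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

print(min_max_normalize([2.5, 4.0, 1.0]))  # [0.5, 1.0, 0.0]
```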

🐛 Bug Fixes

None.

❤️ Community Support

  • Slack Live discussion with the Spark NLP community and team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Share ideas and engage with other community members
  • Medium Spark NLP technical articles
  • JohnSnowLabs Medium Official blog
  • YouTube Spark NLP tutorials and demos

Installation

Python

pip install spark-nlp==6.1.3

Spark Packages

spark-nlp on Apache Spark 3.0.x–3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.3

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.3

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.3

Maven

spark-nlp:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.3</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.3</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.3</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.3</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.2...6.1.3

6.1.2

20 Aug 13:05

📢 Spark NLP 6.1.2: AutoGGUFReranker and AutoGGUF improvements

We are excited to announce Spark NLP 6.1.2, which enhances AutoGGUF model support and introduces a brand-new reranking annotator based on llama.cpp LLMs. This release also brings fixes for the AutoGGUFVisionModel and improved CUDA compatibility for AutoGGUF models.

🔥 Highlights

New AutoGGUFReranker annotator for advanced LLM-based reranking in information retrieval and retrieval-augmented generation (RAG) pipelines.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • AutoGGUFReranker
    A new annotator for reranking candidate results using AutoGGUF-based LLM embeddings. This enables more accurate ranking in retrieval pipelines, benefiting applications such as search, RAG, and question answering. (Link to notebook)

🐛 Bug Fixes

  • Fixed Python initialization errors in AutoGGUFVisionModel.
  • Saving AutoGGUF models now supports more file protocols.
  • Ensured better GPU support for AutoGGUF annotators on a broader range of CUDA devices.

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

Installation

Python

pip install spark-nlp==6.1.2

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.1...6.1.2

6.1.1

05 Aug 12:24

📢 Spark NLP 6.1.1: Enhanced LLM Performance and Expanded Data Ingestion Capabilities

We are thrilled to announce Spark NLP 6.1.1, a focused release that delivers significant performance improvements and enhanced functionality for large language models and universal data ingestion. This release continues our commitment to providing state-of-the-art AI capabilities within the native Spark ecosystem, with optimized inference performance and expanded multimodal support.

🔥 Highlights

  • Performance Boost for llama.cpp models: Inference optimizations in AutoGGUFModel and AutoGGUFEmbeddings deliver improvements for large language model workflows on GPU.
  • Multimodal Vision Models Restored: The AutoGGUFVisionModel annotator is back with full functionality and latest SOTA VLMs, enabling sophisticated vision-language processing capabilities.
  • Enhanced Table Processing: New Reader2Table annotator streamlines tabular data extraction from multiple document formats with seamless pipeline integration.
  • Upgraded OpenVINO backend: We upgraded our OpenVINO backend to 2025.02 and added hyperthreading configuration options to maximize performance on multi-core systems.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • Optimized AutoGGUFModel Performance: We improved the inference of llama.cpp models and achieved a 10% performance increase for AutoGGUFModel on GPU.
  • Restored AutoGGUFVisionModel: The multimodal vision model annotator is fully operational again, enabling powerful vision-language processing capabilities. Users can now process images alongside text for comprehensive multimodal AI applications while using the latest SOTA vision-language models.
  • Enhanced Model Compatibility: AutoGGUFModel can now seamlessly load the language model components from pretrained AutoGGUFVisionModel instances, providing greater flexibility in model deployment and usage. (Link to notebook)
  • Robust Model Loading: Pretrained AutoGGUF-based annotators now load despite the inclusion of deprecated parameters, ensuring broader compatibility.
  • Updated Default Models: All AutoGGUF annotators now use more recent and capable pretrained models:
    Annotator            Default pretrained model
    AutoGGUFModel        Phi_4_mini_instruct_Q4_K_M_gguf
    AutoGGUFEmbeddings   Qwen3_Embedding_0.6B_Q8_0_gguf
    AutoGGUFVisionModel  Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf

Document Ingestion

  • Reader2Table Annotator: This powerful new annotator provides a streamlined interface for extracting and processing tabular data from various document formats (Link to notebook). It offers:
    • Unified API for interacting with Spark NLP readers
    • Enhanced flexibility through reader-specific configurations
    • Improved maintainability and scalability for data loading workflows
    • Support for multiple formats including HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), Markdown (.md), and CSV (.csv)
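Mirroring the Reader2Doc configuration pattern shown elsewhere in these notes, a minimal Reader2Table sketch (the content path is a placeholder and the setter names are assumed to match the Reader2X family):

```python
# Hypothetical Reader2Table configuration; setter names mirror Reader2Doc
# and the content path is a placeholder.
reader2table = Reader2Table() \
    .setContentType("text/html") \
    .setContentPath("/path/to/reports") \
    .setOutputCol("tables")
```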

Performance Optimizations

  • OpenVINO Upgrade: We upgraded the backend to 2025.02 and added comprehensive hyperthreading configuration options for the OpenVINO backend, enabling users to optimize performance on multi-core systems by fine-tuning thread allocation and CPU utilization.

🐛 Bug Fixes

None

❤️ Community Support

  • Slack: For live discussion with the Spark NLP community and the team.
  • GitHub: Bug reports, feature requests, and contributions.
  • Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium: Spark NLP articles on the official John Snow Labs publication.
  • YouTube: Spark NLP video tutorials.

Installation

Python

pip install spark-nlp==6.1.1

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.0...6.1.1

6.1.0

23 Jul 16:10
6.1.0

Choose a tag to compare

📢 Spark NLP 6.1.0: State-of-the-art LLM Capabilities and Advancing Universal Ingestion

We are excited to announce Spark NLP 6.1.0, another milestone for building scalable, distributed AI pipelines! This major release significantly enhances our capabilities for state-of-the-art multimodal and large language models and universal data ingestion. Upgrade Spark NLP to 6.1.0 to improve both usability and performance across ingestion, inference, and multimodal processing pipelines, all within the native Spark ecosystem.

🔥 Highlights

  • Upgraded llama.cpp Integration: We've updated our llama.cpp backend to tag b5932 which supports inference with the latest generation of LLMs.
  • Unified Document Ingestion with Reader2Doc: Introducing a new annotator that streamlines the process of loading and integrating diverse file formats (PDFs, Word, Excel, PowerPoint, HTML, Text, Email, Markdown) directly into Spark NLP pipelines with a unified and flexible interface.
  • Support for Phi-4: Spark NLP now natively supports the Phi-4 model, allowing users to leverage its advanced reasoning capabilities.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • llama.cpp Upgrade: Our llama.cpp backend has been upgraded to version b5932. This update enables native inference for the newest LLMs, such as Gemma 3 and Phi-4, ensuring broader model compatibility and improved performance.
    • NOTE: We are still in the process of upgrading our multimodal AutoGGUFVisionModel annotator to the latest backend. This means that this annotator will not be available in this version. As a workaround, please use version 6.0.5 of Spark NLP.
  • Phi-4 Model Support: Spark NLP now integrates the Phi-4 model, an advanced open model trained on a blend of synthetic data, filtered public domain content, and academic Q&A datasets. This integration enables sophisticated reasoning capabilities directly within Spark NLP. (Link to notebook)

Document Ingestion

  • Reader2Doc Annotator: This new annotator provides a simplified, unified interface for integrating various Spark NLP readers. It supports a wide range of formats, including PDFs, plain text, HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), email files (.eml, .msg), and Markdown (.md). With it, you can read all of these formats into Spark NLP documents, making them directly accessible in your Spark NLP pipelines. This significantly reduces boilerplate code, enhances flexibility in data loading workflows, and makes it easier to scale and switch between data sources.

Let's use a code example to see how easy it is to use:

import sparknlp
from sparknlp.reader import Reader2Doc
from pyspark.ml import Pipeline

spark = sparknlp.start()

reader2doc = Reader2Doc() \
    .setContentType("application/pdf") \
    .setContentPath("./pdf-files") \
    .setOutputCol("document")

# other NLP stages in `nlp_stages`

# An empty DataFrame is enough to fit the pipeline; the reader
# pulls its input from the configured content path at transform time.
empty_df = spark.createDataFrame([], "string").toDF("text")

pipeline = Pipeline(stages=[reader2doc] + nlp_stages)
model = pipeline.fit(empty_df)
result_df = model.transform(empty_df)

Check out our full example notebook to see it in action.

🐛 Bug Fixes

  • HuggingFace OpenVINO Notebook for Qwen2VL: Addressed and fixed issues in the notebook related to the OpenVINO conversion of the Qwen2VL model, ensuring smoother functionality.

❤️ Community Support

  • Slack: For live discussion with the Spark NLP community and the team.
  • GitHub: Bug reports, feature requests, and contributions.
  • Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium: Spark NLP articles on the official John Snow Labs publication.
  • YouTube: Spark NLP video tutorials.

Installation

Python

pip install spark-nlp==6.1.0

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.5...6.1.0