Releases: JohnSnowLabs/spark-nlp
6.2.3
📢 Spark NLP 6.2.3: Further Improvements for NerDL
Spark NLP 6.2.3 introduces targeted improvements to the training performance and stability of NerDLApproach, along with a bug fix for CamemBertForTokenClassification.
NerDLApproach now uses new internal data-loading behavior, improving training speed and preventing out-of-memory errors.
🔥 Highlights
Enhanced NerDLApproach training performance through threaded data loading and optimized partitioning.
🚀 New Features & Enhancements
NerDLApproach Training Optimizations
Significant performance improvements for training of NerDLApproach:
Threaded Data Loading: When the memory optimizer is enabled (setEnableMemoryOptimizer(true)), data can now be pre-fetched through a threaded data loader. Prefetching is disabled by default and can be tuned with:
.setPrefetchBatches(int)
Tuning this parameter (for example, to 20 batches) can reduce training time by about 10% (see the configuration sketch below).
Optimized Partitioning Strategy: NerDLApproach now applies optimized dataframe partitioning when using the memory optimizer (setEnableMemoryOptimizer(true)) by default, improving parallelization efficiency during training and preventing out-of-memory errors.
For manual tuning of the input data frames, this behavior can be disabled with:
.setOptimizePartitioning(false)
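To see how these options combine, here is a minimal training sketch in Python. It assumes the usual NerDLApproach setup; the embeddings stage, column names, and training DataFrame are placeholders, and the tuning values are examples rather than recommendations.
from pyspark.ml import Pipeline
from sparknlp.annotator import NerDLApproach

# Example values only; `embeddings_stage` and `training_df` are assumed to exist elsewhere
ner_approach = NerDLApproach() \
    .setInputCols(["sentence", "token", "embeddings"]) \
    .setOutputCol("ner") \
    .setLabelColumn("label") \
    .setMaxEpochs(5) \
    .setEnableMemoryOptimizer(True) \
    .setPrefetchBatches(20) \
    .setOptimizePartitioning(True)  # already the default when the memory optimizer is on

pipeline = Pipeline(stages=[embeddings_stage, ner_approach])
ner_model = pipeline.fit(training_df)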
🐛 Bug Fixes
- CamemBertForTokenClassification: Fixed an issue with expected input types during inference.
❤️ Community Support
- Slack - real-time discussion with the Spark NLP community and team
- GitHub - issue tracking, feature requests, and contributions
- Discussions - community ideas and showcases
- Medium - latest Spark NLP articles and tutorials
- YouTube - educational videos and demos
💻 Installation
Python
pip install spark-nlp==6.2.3
Spark Packages
CPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.3
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.3
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.3
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.3
Maven
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.3</version>
</dependency>
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.2.3.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.2.3.jar
- Apple Silicon: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.2.3.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.2.3.jar
What's Changed
- #14701 by @ahmedlone127
- #14699 by @DevinTDHa
Full Changelog: 6.2.2...6.2.3
6.2.2
📢 Spark NLP 6.2.2: Bugfix Release
Spark NLP 6.2.2 brings bug fixes to WordEmbeddings and NerDLApproach logging.
🐛 Bug Fixes
- WordEmbeddings: Fixed a bug where WordEmbeddings would duplicate input tokens in the output.
- NerDLApproach: Fixed a bug during logging that would show inaccurate training/validation counts.
❤️ Community Support
- Slack - real-time discussion with the Spark NLP community and team
- GitHub - issue tracking, feature requests, and contributions
- Discussions - community ideas and showcases
- Medium - latest Spark NLP articles and tutorials
- YouTube - educational videos and demos
💻 Installation
Python
pip install spark-nlp==6.2.2
Spark Packages
CPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.2
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.2
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.2
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.2
Maven
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.2</version>
</dependency>
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.2.2.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.2.2.jar
- Apple Silicon: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.2.2.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.2.2.jar
What's Changed
Full Changelog: 6.2.0...6.2.2
6.2.1
📢 Spark NLP 6.2.1: Enhanced hierarchical document processing and training optimizations
Spark NLP 6.2.1 brings significant improvements to document ingestion with expanded hierarchical support, XML processing enhancements, and optimizations for NerDL training. This release builds on the foundation of 6.2.0, continuing to focus on structure-awareness, flexibility, and performance for production NLP pipelines.
🔥 Highlights
- Hierarchical Document Processing: Extended support for PDF, Word, and Markdown with parent-child element relationships
- NerDLApproach Training Optimizations: Reduced memory footprint and improved training performance with BERT-based embeddings
- Improved Document Output Format: Single document annotations by default for more intuitive behavior with large documents
- Enhanced XML Reading: Attribute extraction and improved tag handling in Reader2Doc
🚀 New Features & Enhancements
Hierarchical Support for Multiple Document Formats
Building on the HTMLReader hierarchical features introduced in 6.2.0, this release extends structured element tracking to additional document formats:
- Reader2Doc now supports hierarchical processing for PDF, Microsoft Word, and Markdown files
- Each extracted element includes:
  - element_id: a unique UUID identifier per element
  - parent_id: references the parent element's ID for logical document structure
- Enables tree-like navigation and contextual understanding of document hierarchy:
Chapter 1
├── Narrative Text A
├── Narrative Text B
Chapter 2
├── Paragraph C
- Supports advanced use cases including hierarchical retrieval, graph-based indexing, and multi-level document analysis
- Metadata propagation ensures downstream annotators maintain structural relationships (see the sketch below)
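As a rough illustration of how the new metadata can be consumed downstream, the sketch below explodes the Reader2Doc output and reads element_id and parent_id from each annotation's metadata map; result_df stands in for the output of a fitted pipeline whose Reader2Doc stage wrote to a "document" column.
from pyspark.sql import functions as F

# result_df is assumed to be the output of a pipeline containing Reader2Doc
hierarchy_df = result_df \
    .select(F.explode("document").alias("doc")) \
    .select(
        F.col("doc.metadata").getItem("element_id").alias("element_id"),
        F.col("doc.metadata").getItem("parent_id").alias("parent_id"),
        F.col("doc.result").alias("text"))

hierarchy_df.show(truncate=False)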
NerDLApproach Training Optimizations
Significant performance improvements for training of NerDLApproach:
- Reduced Memory Usage with BERT-based Embeddings: Optimized output embedding allocations, lowering the peak memory footprint during training
- Automatic Dataset Caching: When using setEnableMemoryOptimizer(true) with maxEpoch > 1, input datasets are automatically cached to improve training speed
- Graph Metadata Reuse: NerDLGraphChecker now populates TensorFlow graph metadata that NerDLApproach can reuse, reducing redundant computations during training initialization
With all these improvements, you can expect up to half the memory consumption and training time in RAM-constrained environments (when using setEnableMemoryOptimizer(true)). For larger distributed datasets, the effect will be more pronounced.
XML Reader and Reader2Doc Enhancements
- Single Document Output by Default: Reader2Doc now creates a single document annotation per file by default, providing more intuitive behavior when processing large documents
  - Lines are joined by the newline character \n by default, configurable via the new setJoinString(string) parameter for custom separators
  - Automatically includes specified attribute values in the document output
- Improved Tag Handling: The XML reader now ignores empty tags without text content, reducing noise in parsed output
- Enhanced content type handling for application/xml documents
- XML Tag Attribute Extraction: The new setExtractTagAttributes(attributes: list[str]) parameter enables extraction of XML attribute values. Example:
<bookstore>
  <book category="children">
    <title lang="en">Harry Potter</title>
    <author>J K. Rowling</author>
    <year>2005</year>
    <price>29.99</price>
  </book>
  <book category="web">
    <title lang="en">Learning XML</title>
    <author>Erik T. Ray</author>
    <year>2003</year>
    <price>39.95</price>
  </book>
</bookstore>
We can extract the category and lang values with the following Reader2Doc config:
reader2doc = Reader2Doc() \
    .setContentType("application/xml") \
    .setContentPath("../src/test/resources/reader/xml/test.xml") \
    .setOutputCol("document") \
    .setExtractTagAttributes(["category", "lang"])
Resulting in:
children en Harry Potter J K. Rowling 2005 29.99
web en Learning XML Erik T. Ray 2003 39.95
🐛 Bug Fixes
- Colab Environment Setup: Added Java installation to Colab setup script for improved out-of-the-box compatibility
❤️ Community Support
- Slack - real-time discussion with the Spark NLP community and team
- GitHub - issue tracking, feature requests, and contributions
- Discussions - community ideas and showcases
- Medium - latest Spark NLP articles and tutorials
- YouTube - educational videos and demos
💻 Installation
Python
pip install spark-nlp==6.2.1
Spark Packages
CPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.1
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.1
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.1
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.1
Maven
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.1</version>
</dependency>
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.2.1.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.2.1.jar
- Apple Silicon: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.2.1.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.2.1.jar
What's Changed
- #14681 by @danilojsl
- #14682 by @AbdullahMubeenAnwar
- #14685 by @DevinTDHa
- #14691 by @DevinTDHa
Full Changelog: 6.2.0...6.2.1
6.2.0
📢 Spark NLP 6.2.0: A new stage for unstructured document ingestion and processing at scale
Spark NLP 6.2.0 introduces key upgrades across entity extraction, document normalization, HTML reading, and GGUF-based models. To recap, since the releases of Spark NLP 6.1 you can:
- Infer quantized cutting-edge LLMs and VLMs such as Gemma 3, Phi-4, Llama 3.1, Qwen 2.5
- Rerank documents using llama.cpp with AutoGGUFReranker
- Ingest unstructured documents of diverse formats:
  - Reader2Doc: streamlines the process of loading and integrating diverse file formats (PDFs, Word, Excel, PowerPoint, HTML, Text, Email, Markdown) directly into Spark NLP pipelines with a unified and flexible interface
  - Reader2Table: streamlines tabular data extraction from multiple document formats with seamless pipeline integration
  - Reader2Image: extracts structured image content from various document types
Spark NLP release 6.2.0 further focuses on automation, structure-awareness, and resource efficiency, making pipelines easier to configure, manage, and extend.
🔥 Highlights
- Auto Modes for EntityRuler and DocumentNormalizer: automatic regex and text-cleaning presets for faster setup.
- Hierarchical Element Tracking in HTMLReader: adds element and parent identifiers for structure-aware document processing.
- Resource Management for AutoGGUF Annotators: improved control and cleanup of llama.cpp-based models.
🚀 New Features & Enhancements
EntityRulerModel and DocumentNormalizer Auto Modes
EntityRulerModel
- Added autoMode parameter to enable predefined regex entity groups ("network_entities", "communication_entities", "media_entities", "email_entities", "all_entities").
- Added extractEntities parameter to filter entities within auto modes.
- Automatically applies case-insensitive regex presets and falls back to manual mode if not specified.
- Retains full backward compatibility with JSON or RocksDB-based rules.
DocumentNormalizer
- Added presetPattern and autoMode parameters to apply built-in text cleaning patterns.
- New modes include "light_clean", "document_clean", "social_clean", "html_clean", and "full_auto".
- Enables quick application of multiple cleaning operations without manual configuration.
Together, these additions significantly reduce boilerplate setup for common text extraction and normalization workflows.
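As a hedged configuration sketch (the Python setter names below are assumed to follow the usual setX convention for the autoMode, extractEntities, and presetPattern parameters, and the filtered entity names are placeholders; check the API reference for the exact signatures):
from sparknlp.annotator import EntityRulerModel, DocumentNormalizer

# Assumed setter names derived from the parameter names above
entity_ruler = EntityRulerModel() \
    .setInputCols(["document", "token"]) \
    .setOutputCol("entities") \
    .setAutoMode("network_entities") \
    .setExtractEntities(["ip_address", "url"])  # hypothetical entity names for illustration

doc_normalizer = DocumentNormalizer() \
    .setInputCols(["document"]) \
    .setOutputCol("normalized_document") \
    .setAutoMode("full_auto")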
Hierarchical Element Identification in HTMLReader
- Introduced element_id and parent_id metadata fields for each parsed HTML element.
- Enables explicit structural relationships (e.g., title → paragraph → link) for hierarchical retrieval and contextual reasoning.
- Supports graph-based indexing, hybrid search, and multi-level document analysis.
- Metadata propagation improvements ensure Sentence Detector outputs also retain upstream hierarchy information.
AutoGGUF Annotator Enhancements
For AutoGGUFModel, AutoGGUFVisionModel, AutoGGUFEmbeddings, and AutoGGUFReranker:
- Added a close() method to explicitly release llama.cpp model resources, preventing memory retention in long-running sessions.
- Introduced the setRemoveThinkingTag(tag: String) parameter to remove internal <think>...</think> sections from model outputs.
  - Regex pattern: (?s)<$tag>.+?</$tag>
  - Simplifies downstream processing for chat and reasoning models (see the usage sketch below).
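For example, a usage sketch (the default pretrained model is used and the pipeline wiring is omitted; close() is called once inference is finished):
from sparknlp.annotator import AutoGGUFModel

llm = AutoGGUFModel.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("completions") \
    .setRemoveThinkingTag("think")  # strips <think>...</think> blocks from the output

# ... run the model inside a Pipeline as usual ...

# Explicitly release llama.cpp resources once the model is no longer needed
llm.close()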
🐛 Bug Fixes
- RobertaEmbeddings Warmup Test: Fixed a token sequence bug where unknown tokens caused initialization errors.
❤️ Community Support
- Slack - real-time discussion with the Spark NLP community and team
- GitHub - issue tracking, feature requests, and contributions
- Discussions - community ideas and showcases
- Medium - latest Spark NLP articles and tutorials
- YouTube - educational videos and demos
💻 Installation
Python
pip install spark-nlp==6.2.0
Spark Packages
CPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.0
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.0
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.0
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.0
Maven
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.0</version>
</dependency>
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.2.0.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.2.0.jar
- Apple Silicon: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.2.0.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.2.0.jar
What's Changed
- #14671 by @DevinTDHa
- #14672 by @DevinTDHa
- #14674 by @danilojsl
- #14675 by @danilojsl
- #14677 by @ahmedlone127
- #14673 by @AbdullahMubeenAnwar
Full Changelog: 6.1.5...6.2.0
6.1.5
📢 Spark NLP 6.1.5: Smarter Readers and More Resilient Pipelines
Spark NLP 6.1.5 focuses on improving data ingestion reliability and pipeline flexibility. This release enhances reader components with better fault tolerance, broader input support, and introduces a new ReaderAssembler annotator for streamlined integration. Several key fixes also improve model loading and stability in distributed environments.
🔥 Highlights
- New ReaderAssembler Annotator: Unifies multiple reader annotators into one configurable component for simpler and cleaner ingestion pipelines.
🚀 New Features & Enhancements
Reader Pipeline Enhancements
- ReaderAssembler Annotator
  A new meta-annotator that unifies Reader2X components (e.g., Reader2Doc, Reader2Image, Reader2Table) under a single interface.
  - Automatically selects the right reader(s) based on configuration.
  - Supports declarative assembly of reading stages.
  - Provides parameters for reader selection, fallback rules, and error handling.
This simplifies pipeline construction and improves maintainability for multi-format ingestion workflows. (Link to notebook)
- Support for String Input Columns in Readers (SPARKNLP-1291)
  Previously, Spark NLP readers only supported inputs via file paths. That meant if you already had a DataFrame with text content (say, from another pipeline or a preliminary load), you had to write it to disk just to let the reader ingest it, adding friction and overhead, especially in streaming or in-memory pipelines. With this change, you can:
- Feed raw text stored in a DataFrame column directly into Spark NLP readers — zero I/O overhead when not needed.
- Simplify workflows and pipelines (no need for temporary file staging just to “read” back data).
- Improve performance and resource usage in scenarios where input is already available as strings (e.g. generated, preprocessed, or coming from another system).
- Make the reader APIs more flexible and general-purpose.
- Fault-Tolerant XML Reader
The XML reader now skips malformed XML fragments (e.g., mismatched tags, missing closures, invalid characters) instead of failing the job.
Enhanced error handling ensures more resilient ingestion of imperfect real-world data.
🐛 Bug Fixes
- GGUF Model Loading Duplication
  Fixed an issue in FeaturesFallbackReader that caused duplicate loading or missing model files when calling .pretrained() on GGUF-based annotators such as AutoGGUFModel and rerankers, especially in Databricks environments.
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
💻 Installation
Python
pip install spark-nlp==6.1.5
Spark Packages
CPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.5
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.5
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.5
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.5
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.1.5</version>
</dependency>
spark-nlp-gpu:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-gpu_2.12</artifactId>
  <version>6.1.5</version>
</dependency>
spark-nlp-silicon:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-silicon_2.12</artifactId>
  <version>6.1.5</version>
</dependency>
spark-nlp-aarch64:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-aarch64_2.12</artifactId>
  <version>6.1.5</version>
</dependency>
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.5.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.5.jar
- Apple Silicon: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.5.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.5.jar
What's Changed
- #14665 by @danilojsl
- #14666 by @danilojsl
- #14668 by @danilojsl
- #14667 by @C-K-Loan, @DevinTDHa
Full Changelog: 6.1.4...6.1.5
6.1.4
📢 Spark NLP 6.1.4: Advancing Multimodal Workflows with Reader2Image
We are excited to announce the release of Spark NLP 6.1.4!
This version introduces a powerful new annotator, Reader2Image, which extends Spark NLP’s universal ingestion capabilities to embedded images across a wide range of document formats. With this release, Spark NLP users can now seamlessly integrate text and image processing in the same pipeline, unlocking new opportunities for vision-language modeling (VLM), multimodal search, and document understanding.
🔥 Highlights
- New Reader2Image Annotator: Extract and structure image content directly from documents like PDFs, Word, PowerPoint, Excel, HTML, Markdown, and Email files.
- Multimodal Pipeline Expansion: Build workflows that combine text, tables, and now images for comprehensive document AI applications.
- Consistent Structured Output: Access image metadata (filename, dimensions, channels, mode) alongside binary image data in Spark DataFrames, fully compatible with other visual annotators.
🚀 New Features & Enhancements
Document Ingestion
- Reader2Image Annotator
A new multimodal annotator designed to parse image content embedded in structured documents. Supported formats include:
- PDFs
- Word (.doc/.docx)
- Excel (.xls/.xlsx)
- PowerPoint (.ppt/.pptx)
- HTML & Markdown (.md)
- Email files (.eml, .msg)
Output Fields:
- File name
- Image dimensions (height, width)
- Number of channels
- Mode
- Binary image data
- Metadata
This enables seamless integration with vision-language models (VLMs), multimodal embeddings, and downstream Spark NLP annotators, all within the same distributed pipeline.
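A rough sketch of wiring Reader2Image into a pipeline, assuming it follows the same configuration pattern as Reader2Doc (the import is omitted as in the Reader2Doc example, the content type and path are placeholders, and empty_df is an empty DataFrame used only to trigger the transform):
from pyspark.ml import Pipeline

reader2image = Reader2Image() \
    .setContentType("application/pdf") \
    .setContentPath("./pdf-files") \
    .setOutputCol("image")

pipeline = Pipeline(stages=[reader2image])
image_df = pipeline.fit(empty_df).transform(empty_df)
# Each output row carries the fields listed above: file name, dimensions,
# number of channels, mode, binary image data, and metadata.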
🐛 Bug Fixes
- None
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
⚙️ Installation
Python
pip install spark-nlp==6.1.4
Spark Packages
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.4
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.4
Apple Silicon (M1 & M2)
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.4
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.4
Maven
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.1.4</version>
</dependency>
- GPU: spark-nlp-gpu_2.12:6.1.4
- Apple Silicon: spark-nlp-silicon_2.12:6.1.4
- AArch64: spark-nlp-aarch64_2.12:6.1.4
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.4.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.4.jar
- M1/M2: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.4.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.4.jar
📊 What’s Changed
- [SPARKNLP-1261] Introducing Reader2Image Annotator (#14658) by @danilojsl
Full Changelog: 6.1.3...6.1.4
6.1.3
📢 Spark NLP 6.1.3: NerDL Graph Checker, Reader2Doc Enhancements, Ranking Finisher
We are pleased to announce Spark NLP 6.1.3, introducing a new graph validation annotator for NER training, enhancements to Reader2Doc for flexible document handling, and a new ranking finisher for AutoGGUFReranker outputs. This release focuses on improving training robustness, document processing flexibility, and retrieval ranking capabilities.
🔥 Highlights
- New NerDLGraphChecker annotator to validate NER training graphs before training starts.
- Reader2Doc enhancements with options for consolidated output and filtering.
- New AutoGGUFRerankerFinisher for ranking, filtering, and normalizing reranker outputs.
🚀 New Features & Enhancements
Named Entity Recognition (NER)
NerDLGraphChecker:
A new annotator that validates whether a suitable NerDL graph is available for a given training dataset before embeddings or training start. This helps avoid wasted computation in custom training scenarios. (Link to notebook)
- Must be placed before embedding or NerDLApproach annotators.
- Requires token and label columns in the dataset.
- Automatically extracts embedding dimensions from the pipeline to validate graph compatibility.
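A hedged pipeline sketch of where the checker sits; the NerDLGraphChecker setters shown here are assumptions modeled on NerDLApproach, and its import is omitted (see the linked notebook for the exact API):
from pyspark.ml import Pipeline
from sparknlp.annotator import BertEmbeddings, NerDLApproach

# Hypothetical setters mirroring NerDLApproach's configuration
graph_checker = NerDLGraphChecker() \
    .setInputCols(["sentence", "token"]) \
    .setLabelColumn("label")

embeddings = BertEmbeddings.pretrained() \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

ner = NerDLApproach() \
    .setInputCols(["sentence", "token", "embeddings"]) \
    .setOutputCol("ner") \
    .setLabelColumn("label")

# The checker must be placed before the embeddings and NerDLApproach stages
pipeline = Pipeline(stages=[graph_checker, embeddings, ner])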
Document Processing
Reader2Doc Enhancements:
New configuration options provide more control over output formatting:
- outputAsDocument: Concatenates all sentences into a single document.
- excludeNonText: Filters out non-textual elements (e.g., tables, images) from the document (see the sketch below).
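A brief sketch, assuming the Python setters mirror the parameter names above (setOutputAsDocument, setExcludeNonText); the content type and path are placeholders:
reader2doc = Reader2Doc() \
    .setContentType("application/pdf") \
    .setContentPath("./pdf-files") \
    .setOutputCol("document") \
    .setOutputAsDocument(True) \
    .setExcludeNonText(True)  # drops tables, images, and other non-textual elements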
Ranking & Retrieval
AutoGGUFRerankerFinisher:
A finisher for processing AutoGGUFReranker outputs, adding advanced ranking and filtering capabilities (Link to notebook):
- Top-k document selection.
- Score threshold filtering.
- Min-max score normalization (0–1 range).
- Sorting by relevance score.
- Rank assignment in metadata while preserving document structure.
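A hedged configuration sketch; the setter names for top-k selection, threshold filtering, and min-max normalization are assumptions based on the capabilities listed above, and the import is omitted, so consult the linked notebook for the actual parameters:
# Hypothetical parameter names for illustration only
reranker_finisher = AutoGGUFRerankerFinisher() \
    .setInputCols(["reranked_documents"]) \
    .setTopK(5) \
    .setThreshold(0.3) \
    .setMinMaxScaling(True)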
🐛 Bug Fixes
None.
❤️ Community Support
- Slack Live discussion with the Spark NLP community and team
- GitHub Bug reports, feature requests, and contributions
- Discussions Share ideas and engage with other community members
- Medium Spark NLP technical articles
- JohnSnowLabs Medium Official blog
- YouTube Spark NLP tutorials and demos
Installation
Python
pip install spark-nlp==6.1.3
Spark Packages
spark-nlp on Apache Spark 3.0.x–3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.3
Apple Silicon (M1 & M2)
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.3
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.3
Maven
spark-nlp:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.1.3</version>
</dependency>
spark-nlp-gpu:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-gpu_2.12</artifactId>
  <version>6.1.3</version>
</dependency>
spark-nlp-silicon:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-silicon_2.12</artifactId>
  <version>6.1.3</version>
</dependency>
spark-nlp-aarch64:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-aarch64_2.12</artifactId>
  <version>6.1.3</version>
</dependency>
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.3.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.3.jar
- M1: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.3.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.3.jar
What's Changed
Full Changelog: 6.1.2...6.1.3
6.1.2
📢 Spark NLP 6.1.2: AutoGGUFReranker and AutoGGUF improvements
We are excited to announce Spark NLP 6.1.2, which enhances AutoGGUF model support and introduces a brand-new reranking annotator based on llama.cpp LLMs. This release also brings fixes for AutoGGUFVisionModel and improvements to CUDA compatibility for AutoGGUF models.
🔥 Highlights
New AutoGGUFReranker annotator for advanced LLM-based reranking in information retrieval and retrieval-augmented generation (RAG) pipelines.
🚀 New Features & Enhancements
Large Language Models (LLMs)
AutoGGUFReranker
A new annotator for reranking candidate results using AutoGGUF-based LLM embeddings. This enables more accurate ranking in retrieval pipelines, benefiting applications such as search, RAG, and question answering. (Link to notebook)
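A minimal sketch of plugging the reranker into a pipeline; the query-setting parameter is an assumption, so refer to the linked notebook for the exact API:
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import AutoGGUFReranker

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Candidate passages arrive in the "text" column; setQuery is a hypothetical parameter
reranker = AutoGGUFReranker.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("reranked_documents") \
    .setQuery("Which planets are in the solar system?")

pipeline = Pipeline(stages=[document_assembler, reranker])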
🐛 Bug Fixes
- Fixed Python initialization errors in AutoGGUFVisionModel.
- Using save for AutoGGUF models now supports more file protocols.
- Ensured better GPU support for AutoGGUF annotators on a broader range of CUDA devices.
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
Installation
Python
pip install spark-nlp==6.1.2
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2
Apple Silicon (M1 & M2)
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.1.2</version>
</dependency>
spark-nlp-gpu:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-gpu_2.12</artifactId>
  <version>6.1.2</version>
</dependency>
spark-nlp-silicon:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-silicon_2.12</artifactId>
  <version>6.1.2</version>
</dependency>
spark-nlp-aarch64:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-aarch64_2.12</artifactId>
  <version>6.1.2</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.2.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.2.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.2.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.2.jar
What's Changed
- #14649 by @prabod
- #14650 by @DevinTDHa
Full Changelog: 6.1.1...6.1.2
6.1.1
📢 Spark NLP 6.1.1: Enhanced LLM Performance and Expanded Data Ingestion Capabilities
We are thrilled to announce Spark NLP 6.1.1, a focused release that delivers significant performance improvements and enhanced functionality for large language models and universal data ingestion. This release continues our commitment to providing state-of-the-art AI capabilities within the native Spark ecosystem, with optimized inference performance and expanded multimodal support.
🔥 Highlights
- Performance Boost for llama.cpp Models: Inference optimizations in AutoGGUFModel and AutoGGUFEmbeddings deliver improvements for large language model workflows on GPU.
- Multimodal Vision Models Restored: The AutoGGUFVisionModel annotator is back with full functionality and the latest SOTA VLMs, enabling sophisticated vision-language processing capabilities.
- Enhanced Table Processing: The new Reader2Table annotator streamlines tabular data extraction from multiple document formats with seamless pipeline integration.
- Upgraded OpenVINO Backend: We upgraded our OpenVINO backend to 2025.02 and added hyperthreading configuration options to maximize performance on multi-core systems.
🚀 New Features & Enhancements
Large Language Models (LLMs)
- Optimized AutoGGUFModel Performance: We improved the inference of llama.cpp models and achieved a 10% performance increase for AutoGGUFModel on GPU.
- Restored AutoGGUFVisionModel: The multimodal vision model annotator is fully operational again, enabling powerful vision-language processing capabilities. Users can now process images alongside text for comprehensive multimodal AI applications while using the latest SOTA vision-language models.
- Enhanced Model Compatibility: AutoGGUFModel can now seamlessly load the language model components from pretrained AutoGGUFVisionModel instances, providing greater flexibility in model deployment and usage. (Link to notebook)
- Robust Model Loading: Pretrained AutoGGUF-based annotators now load despite the inclusion of deprecated parameters, ensuring broader compatibility.
- Updated Default Models: All AutoGGUF annotators now use more recent and capable pretrained models:
| Annotator | Default pretrained model |
|---|---|
| AutoGGUFModel | Phi_4_mini_instruct_Q4_K_M_gguf |
| AutoGGUFEmbeddings | Qwen3_Embedding_0.6B_Q8_0_gguf |
| AutoGGUFVisionModel | Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf |
Document Ingestion
Reader2Table Annotator: This powerful new annotator provides a streamlined interface for extracting and processing tabular data from various document formats (Link to notebook). It offers:
- Unified API for interacting with Spark NLP readers
- Enhanced flexibility through reader-specific configurations
- Improved maintainability and scalability for data loading workflows
- Support for multiple formats including HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), Markdown (.md), and CSV (.csv)
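A rough sketch, assuming Reader2Table follows the same configuration pattern as Reader2Doc (the import is omitted as in the Reader2Doc example, the content type and path are placeholders, and empty_df is an empty DataFrame used only to trigger the transform):
from pyspark.ml import Pipeline

reader2table = Reader2Table() \
    .setContentType("text/csv") \
    .setContentPath("./csv-files") \
    .setOutputCol("table")

pipeline = Pipeline(stages=[reader2table])
table_df = pipeline.fit(empty_df).transform(empty_df)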
Performance Optimizations
- OpenVINO Upgrade: We upgraded the backend to 2025.02 and added comprehensive hyperthreading configuration options for the OpenVINO backend, enabling users to optimize performance on multi-core systems by fine-tuning thread allocation and CPU utilization.
🐛 Bug Fixes
None
❤️ Community Support
- Slack: For live discussion with the Spark NLP community and the team.
- GitHub: Bug reports, feature requests, and contributions.
- Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium: Spark NLP articles.
- JohnSnowLabs official Medium
- YouTube: Spark NLP video tutorials.
Installation
Python
pip install spark-nlp==6.1.1
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.1.1</version>
</dependency>
spark-nlp-gpu:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-gpu_2.12</artifactId>
  <version>6.1.1</version>
</dependency>
spark-nlp-silicon:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-silicon_2.12</artifactId>
  <version>6.1.1</version>
</dependency>
spark-nlp-aarch64:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-aarch64_2.12</artifactId>
  <version>6.1.1</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.1.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.1.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.1.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.1.jar
What's Changed
- #14641 by @prabod
- #14640 by @danilojsl
- #14644 by @DevinTDHa and @C-K-Loan
Full Changelog: 6.1.0...6.1.1
6.1.0
📢 Spark NLP 6.1.0: State-of-the-art LLM Capabilities and Advancing Universal Ingestion
We are excited to announce Spark NLP 6.1.0, another milestone for building scalable, distributed AI pipelines! This major release significantly enhances our capabilities for state-of-the-art multimodal and large language models and universal data ingestion. Upgrade Spark NLP to 6.1.0 to improve both usability and performance across ingestion, inference, and multimodal processing pipelines, all within the native Spark ecosystem.
🔥 Highlights
- Upgraded llama.cpp Integration: We've updated our llama.cpp backend to tag b5932, which supports inference with the latest generation of LLMs.
- Unified Document Ingestion with Reader2Doc: Introducing a new annotator that streamlines the process of loading and integrating diverse file formats (PDFs, Word, Excel, PowerPoint, HTML, Text, Email, Markdown) directly into Spark NLP pipelines with a unified and flexible interface.
- Support for Phi-4: Spark NLP now natively supports the Phi-4 model, allowing users to leverage its advanced reasoning capabilities.
🚀 New Features & Enhancements
Large Language Models (LLMs)
- llama.cpp Upgrade: Our llama.cpp backend has been upgraded to version b5932. This update enables native inference for the newest LLMs, such as Gemma 3 and Phi-4, ensuring broader model compatibility and improved performance.
  - NOTE: We are still in the process of upgrading our multimodal AutoGGUFVisionModel annotator to the latest backend. This means that this annotator will not be available in this version. As a workaround, please use version 6.0.5 of Spark NLP.
- Phi-4 Model Support: Spark NLP now integrates the Phi-4 model, an advanced open model trained on a blend of synthetic data, filtered public domain content, and academic Q&A datasets. This integration enables sophisticated reasoning capabilities directly within Spark NLP. (Link to notebook)
Document Ingestion
- Reader2Doc Annotator: This new annotator provides a simplified, unified interface for integrating various Spark NLP readers. It supports a wide range of formats, including PDFs, plain text, HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), email files (.eml, .msg), and Markdown (.md).
  - Using this annotator, you can read all these different formats into Spark NLP documents, making them directly accessible in all your Spark NLP pipelines. This significantly reduces boilerplate code and enhances flexibility in data loading workflows, making it easier to scale and switch between data sources.
Let's use a code example to see how easy it is to use:
reader2doc = Reader2Doc() \
.setContentType("application/pdf") \
.setContentPath("./pdf-files") \
.setOutputCol("document")
# other NLP stages in `nlp_stages`
pipeline = Pipeline(stages=[reader2doc] + nlp_stages)
model = pipeline.fit(empty_df)
result_df = model.transform(empty_df)
Check out our full example notebook to see it in action.
🐛 Bug Fixes
- HuggingFace OpenVINO Notebook for Qwen2VL: Addressed and fixed issues in the notebook related to the OpenVINO conversion of the Qwen2VL model, ensuring smoother functionality.
❤️ Community Support
- Slack: For live discussion with the Spark NLP community and the team.
- GitHub: Bug reports, feature requests, and contributions.
- Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium: Spark NLP articles.
- JohnSnowLabs official Medium
- YouTube: Spark NLP video tutorials.
Installation
Python
pip install spark-nlp==6.1.0
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0
Apple Silicon (M1 & M2)
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.1.0</version>
</dependency>
spark-nlp-gpu:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-gpu_2.12</artifactId>
  <version>6.1.0</version>
</dependency>
spark-nlp-silicon:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-silicon_2.12</artifactId>
  <version>6.1.0</version>
</dependency>
spark-nlp-aarch64:
<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp-aarch64_2.12</artifactId>
  <version>6.1.0</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.0.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.0.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.0.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.0.jar
What's Changed
- Update HuggingFace_OpenVINO_in_Spark_NLP_Qwen2VL.ipynb #14631 by @AbdullahMubeenAnwar
- Sparknlp 1189 Introducing Phi4 #14606 by @prabod
- SPARKNLP-1259 Introducing Reader2Doc Annotator #14632 by @danilojsl
- [SPARKNLP-1194] Upgrade jsl-llamacpp to newest version #14633 by @DevinTDHa
- Add telemetry to github actions [skip-test] #14568 by @KshitizGIT
Full Changelog: 6.0.5...6.1.0