Merge pull request #3012 from krisfreedain/2024-06-edit2
Minor blog edits
krisfreedain authored Jun 25, 2024
2 parents 44fbe21 + 64f62c6 commit bb3e794
Showing 2 changed files with 3 additions and 3 deletions.
@@ -10,7 +10,7 @@ categories:
 meta_keywords: opensearchcon north america, opensearchcon na, opesearchcon call for papers, register for opensearchcon, opensearch community
 meta_description: Join the OpenSearch Project in San Francisco for it’s third annual OpenSearchCon North America 2024 taking place September 24-26 at the Hilton Union Square. Register today.
 
-excerpt: The OpenSearch Project invites the OpenSearch community to explore the future of search, analytics, and generative AI at the first OpenSearch user conference in Europe. Join us in Berlin on May 6 & 7 and learn how to build powerful applications and get the most out of your OpenSearch deployments.
+excerpt: The OpenSearch Project invites the OpenSearch community to explore the future of search, analytics, and generative AI at the first OpenSearch user conference in North America. Join us in San Francisco September 24-26 and learn how to build powerful applications and get the most out of your OpenSearch deployments.
 featured_blog_post: true
 featured_image: /assets/media/opensearchcon/2024/OSC2024_NASF_Social-Graphic1_1200x627.png
 ---
4 changes: 2 additions & 2 deletions _posts/2024-06-25-diving-into-opensearch-2.15.md
@@ -25,7 +25,7 @@ Many modern applications require significant data processing at the time of inge

**Accelerate hybrid search with parallel processing**

-This release also brings parallel processing to hybrid search for significant performance improvements. Introduced in OpenSearch 2.10, [hybrid search](https://opensearch.org/blog/hybrid-search/) combines lexical (BM25) or neural sparse search with semantic vector search to provide higher-quality results than when using either technique alone, and is a best practice for text search. OpenSearch 2.15 lowers hybrid search latency by running the two [subsearches in parallel](https://opensearch.org/docs/latest/search-plugins/neural-sparse-search/#step-5-create-and-enable-the-two-phase-processor-optional)at various stages of the process. The result is a latency reduction of up to 25%.
+This release also brings parallel processing to hybrid search for significant performance improvements. Introduced in OpenSearch 2.10, [hybrid search](https://opensearch.org/blog/hybrid-search/) combines lexical (BM25) or neural sparse search with semantic vector search to provide higher-quality results than when using either technique alone, and is a best practice for text search. OpenSearch 2.15 lowers hybrid search latency by running the two [subsearches in parallel](https://opensearch.org/docs/latest/search-plugins/neural-sparse-search/#step-5-create-and-enable-the-two-phase-processor-optional) at various stages of the process. The result is a latency reduction of up to 25%.

Check failure on line 28 in _posts/2024-06-25-diving-into-opensearch-2.15.md

GitHub Actions / vale: [OpenSearch.Spelling] Error: subsearches. If you are referencing a setting, variable, format, function, or repository, surround it with tic marks.
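As background for the hybrid search change above, a hybrid query runs a lexical and a neural subsearch and combines their scores through a search pipeline. The following is a minimal sketch based on the OpenSearch hybrid search documentation; the index name, field names, query text, and model ID are placeholders, not taken from this PR:

```json
PUT /_search/pipeline/hybrid-pipeline
{
  "phase_results_processors": [
    {
      "normalization-processor": {
        "normalization": { "technique": "min_max" },
        "combination": { "technique": "arithmetic_mean" }
      }
    }
  ]
}

GET /my-index/_search?search_pipeline=hybrid-pipeline
{
  "query": {
    "hybrid": {
      "queries": [
        { "match": { "text": { "query": "shoes for trail running" } } },
        {
          "neural": {
            "text_embedding": {
              "query_text": "shoes for trail running",
              "model_id": "<model-id>",
              "k": 10
            }
          }
        }
      ]
    }
  }
}
```

In 2.15 the two subqueries in the `hybrid` clause can execute in parallel, which is where the cited latency reduction of up to 25% comes from.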

**Advance search performance with SIMD support for exact search**

@@ -77,7 +77,7 @@ Previously, OpenSearch users could only create regex-based guardrails to detect

**Enable local models for ML inference processing**

-The [ML inference processor](https://opensearch.org/docs/latest/ingest-pipelines/processors/ml-inference/)enables users to enrich ingest pipelines using inferences from any integrated ML model. Previously, the processor only supported remote models, which connect to model provider APIs like Amazon SageMaker, OpenAI, Cohere, and Amazon Bedrock. In OpenSearch 2.15, the processor is compatible with local models, which are models hosted on the search cluster's infrastructure.
+The [ML inference processor](https://opensearch.org/docs/latest/ingest-pipelines/processors/ml-inference/) enables users to enrich ingest pipelines using inferences from any integrated ML model. Previously, the processor only supported remote models, which connect to model provider APIs like Amazon SageMaker, OpenAI, Cohere, and Amazon Bedrock. In OpenSearch 2.15, the processor is compatible with local models, which are models hosted on the search cluster's infrastructure.
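To illustrate the feature described above, an ingest pipeline with an `ml_inference` processor maps document fields to model inputs and model outputs back to document fields. This is a hedged sketch following the shape shown in the linked processor documentation; the pipeline name, model ID, and field names (`book_description`, `description_embedding`) are illustrative placeholders only:

```json
PUT /_ingest/pipeline/ml-inference-pipeline
{
  "processors": [
    {
      "ml_inference": {
        "model_id": "<local-model-id>",
        "input_map": [
          { "input": "book_description" }
        ],
        "output_map": [
          { "description_embedding": "output" }
        ]
      }
    }
  ]
}
```

With 2.15, the `model_id` here can reference a local model deployed on the cluster rather than only a remote connector.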


### ***Ease of use***

