From 7a3e0df9c4a0a8bed43c72aef1800c94f2f83ca2 Mon Sep 17 00:00:00 2001
From: Kris Freedain
Date: Tue, 25 Jun 2024 16:04:04 -0700
Subject: [PATCH 1/2] Minor blog edits

Signed-off-by: Kris Freedain
---
 ...-opensearchcon-san-francisco-schedule-and-cfp-updates.md | 2 +-
 _posts/2024-06-25-diving-into-opensearch-2.15.md            | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/_posts/2024-06-21-opensearchcon-san-francisco-schedule-and-cfp-updates.md b/_posts/2024-06-21-opensearchcon-san-francisco-schedule-and-cfp-updates.md
index 0fe5d9f19d..2d42eff940 100644
--- a/_posts/2024-06-21-opensearchcon-san-francisco-schedule-and-cfp-updates.md
+++ b/_posts/2024-06-21-opensearchcon-san-francisco-schedule-and-cfp-updates.md
@@ -10,7 +10,7 @@ categories:
 meta_keywords: opensearchcon north america, opensearchcon na, opesearchcon call for papers, register for opensearchcon, opensearch community
 meta_description: Join the OpenSearch Project in San Francisco for it’s third annual OpenSearchCon North America 2024 taking place September 24-26 at the Hilton Union Square. Register today.
 
-excerpt: The OpenSearch Project invites the OpenSearch community to explore the future of search, analytics, and generative AI at the first OpenSearch user conference in Europe. Join us in Berlin on May 6 & 7 and learn how to build powerful applications and get the most out of your OpenSearch deployments.
+excerpt: The OpenSearch Project invites the OpenSearch community to explore the future of search, analytics, and generative AI at the first OpenSearch user conference in North America. Join us in San Francisco September 24-26 and learn how to build powerful applications and get the most out of your OpenSearch deployments.
 featured_blog_post: true
 featured_image: /assets/media/opensearchcon/2024/OSC2024_NASF_Social-Graphic1_1200x627.png
 ---
diff --git a/_posts/2024-06-25-diving-into-opensearch-2.15.md b/_posts/2024-06-25-diving-into-opensearch-2.15.md
index fb62bc98dc..2fbab25438 100644
--- a/_posts/2024-06-25-diving-into-opensearch-2.15.md
+++ b/_posts/2024-06-25-diving-into-opensearch-2.15.md
@@ -25,7 +25,7 @@ Many modern applications require significant data processing at the time of inge
 
 **Accelerate hybrid search with parallel processing**
 
-This release also brings parallel processing to hybrid search for significant performance improvements. Introduced in OpenSearch 2.10, [hybrid search](https://opensearch.org/blog/hybrid-search/) combines lexical (BM25) or neural sparse search with semantic vector search to provide higher-quality results than when using either technique alone, and is a best practice for text search. OpenSearch 2.15 lowers hybrid search latency by running the two [subsearches in parallel](https://opensearch.org/docs/latest/search-plugins/neural-sparse-search/#step-5-create-and-enable-the-two-phase-processor-optional)at various stages of the process. The result is a latency reduction of up to 25%.
+This release also brings parallel processing to hybrid search for significant performance improvements. Introduced in OpenSearch 2.10, [hybrid search](https://opensearch.org/blog/hybrid-search/) combines lexical (BM25) or neural sparse search with semantic vector search to provide higher-quality results than when using either technique alone, and is a best practice for text search. OpenSearch 2.15 lowers hybrid search latency by running the two [subsearches in parallel](https://opensearch.org/docs/latest/search-plugins/neural-sparse-search/#step-5-create-and-enable-the-two-phase-processor-optional) at various stages of the process. The result is a latency reduction of up to 25%.
 
 **Advance search performance with SIMD support for exact search**
 
@@ -33,7 +33,7 @@ OpenSearch 2.12 introduced support for JDK21, enabling users to run OpenSearch c
 
 **Save vector search storage capacity**
 
-OpenSearch 2.15 introduces the ability to disable document values for the `k-nn` field when using the Lucene engine for vector search. This does not impact k-NN search functionality; for example, you can continue to perform both approximate nearest neighbor and exact search with the Lucene engine, similarly to previous versions of OpenSearch. In our tests, after disabling document values, we observed a ~16% reduction in shard size. We plan to extend this optimization to the NMSLIB and Faiss engines in future releases.
+OpenSearch 2.15 introduces the ability to [disable document values](https://opensearch.org/docs/latest/search-plugins/knn/performance-tuning/) for the `k-nn` field when using the Lucene engine for vector search. This does not impact k-NN search functionality; for example, you can continue to perform both approximate nearest neighbor and exact search with the Lucene engine, similarly to previous versions of OpenSearch. In our tests, after disabling document values, we observed a ~16% reduction in shard size. We plan to extend this optimization to the NMSLIB and Faiss engines in future releases.
 
 **Query certain data more efficiently with wildcard fields**
 
@@ -77,7 +77,7 @@ Previously, OpenSearch users could only create regex-based guardrails to detect
 
 **Enable local models for ML inference processing**
 
-The [ML inference processor](https://opensearch.org/docs/latest/ingest-pipelines/processors/ml-inference/)enables users to enrich ingest pipelines using inferences from any integrated ML model. Previously, the processor only supported remote models, which connect to model provider APIs like Amazon SageMaker, OpenAI, Cohere, and Amazon Bedrock. In OpenSearch 2.15, the processor is compatible with local models, which are models hosted on the search cluster's infrastructure.
+The [ML inference processor](https://opensearch.org/docs/latest/ingest-pipelines/processors/ml-inference/) enables users to enrich ingest pipelines using inferences from any integrated ML model. Previously, the processor only supported remote models, which connect to model provider APIs like Amazon SageMaker, OpenAI, Cohere, and Amazon Bedrock. In OpenSearch 2.15, the processor is compatible with local models, which are models hosted on the search cluster's infrastructure.
 
 ### ***Ease of use***
 

From 64f62c650c534ac0fc2d5a5c869db0a6864fc6b9 Mon Sep 17 00:00:00 2001
From: Kris Freedain
Date: Tue, 25 Jun 2024 16:12:25 -0700
Subject: [PATCH 2/2] Update 2024-06-25-diving-into-opensearch-2.15.md

---
 _posts/2024-06-25-diving-into-opensearch-2.15.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2024-06-25-diving-into-opensearch-2.15.md b/_posts/2024-06-25-diving-into-opensearch-2.15.md
index 2fbab25438..afc6f03e4b 100644
--- a/_posts/2024-06-25-diving-into-opensearch-2.15.md
+++ b/_posts/2024-06-25-diving-into-opensearch-2.15.md
@@ -33,7 +33,7 @@ OpenSearch 2.12 introduced support for JDK21, enabling users to run OpenSearch c
 
 **Save vector search storage capacity**
 
-OpenSearch 2.15 introduces the ability to [disable document values](https://opensearch.org/docs/latest/search-plugins/knn/performance-tuning/) for the `k-nn` field when using the Lucene engine for vector search. This does not impact k-NN search functionality; for example, you can continue to perform both approximate nearest neighbor and exact search with the Lucene engine, similarly to previous versions of OpenSearch. In our tests, after disabling document values, we observed a ~16% reduction in shard size. We plan to extend this optimization to the NMSLIB and Faiss engines in future releases.
+OpenSearch 2.15 introduces the ability to disable document values for the `k-nn` field when using the Lucene engine for vector search. This does not impact k-NN search functionality; for example, you can continue to perform both approximate nearest neighbor and exact search with the Lucene engine, similarly to previous versions of OpenSearch. In our tests, after disabling document values, we observed a ~16% reduction in shard size. We plan to extend this optimization to the NMSLIB and Faiss engines in future releases.
 
 **Query certain data more efficiently with wildcard fields**
 