Use new instance of Decompressor on channel initialization #3583
Conversation
Codecov Report
```
@@             Coverage Diff              @@
##               main    #3583      +/-   ##
============================================
+ Coverage     64.90%   64.92%   +0.01%
- Complexity     3637     3640       +3
============================================
  Files           284      284
  Lines         20618    20616       -2
  Branches       3390     3390
============================================
+ Hits          13383    13384       +1
+ Misses         5555     5551       -4
- Partials       1680     1681       +1
```
Thanks @cwperks - how do we know that the Header Verifier is safe to keep as is?
Resolved review threads on:
- src/integrationTest/java/org/opensearch/security/rest/GzipTests.java
- src/integrationTest/java/org/opensearch/security/ResourceFocusedTests.java
- src/main/java/org/opensearch/security/ssl/http/netty/Netty4ConditionalDecompressor.java
Looks good to me
Thank you @cwperks & @peternied for fixing this!
### Description

Resolves an issue with decompression that can lead to concurrent gzipped requests failing. This removes the `@Sharable` annotation from `Netty4ConditionalDecompressor` and creates a new instance of the decompressor on channel initialization. `Netty4ConditionalDecompressor` is an `HttpContentDecompressor`, which is a subclass of `HttpContentDecoder`, a stateful handler.

Netty docs on the `@Sharable` annotation: https://netty.io/4.0/api/io/netty/channel/ChannelHandler.Sharable.html

* Category (Enhancement, New feature, Bug fix, Test fix, Refactoring, Maintenance, Documentation)

Bug fix

### Issues Resolved
- opensearch-project/OpenSearch#10802

### Testing

Tested by running OpenSearch with Fluent Bit and `Merge_Log` on. See the files below, which reproduce the issue from the linked error. I opened this PR as a draft pending an integration test to validate the behavior.

`docker-compose.yml`
```
version: '3'
services:
  opensearch: # This is also the hostname of the container within the Docker network (i.e. https://opensearch-node1/)
    image: opensearchproject/opensearch:latest # Specifying the latest available image - modify if you want a specific version
    container_name: opensearch
    environment:
      - cluster.name=opensearch-cluster # Name the cluster
      - node.name=opensearch # Name the node that will run in this container
      - discovery.type=single-node
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data # Creates volume called opensearch-data1 and mounts it to the container
      # - /Users/craigperkins/Projects/OpenSearch/security/build/distributions/opensearch-security-2.11.0.0-SNAPSHOT.jar:/usr/share/opensearch/plugins/opensearch-security/opensearch-security-2.11.0.0.jar
    ports:
      - 9200:9200 # REST API
      - 9600:9600 # Performance Analyzer
    networks:
      - opensearch-net # All of the containers will join the same Docker bridge network
  fluent-bit:
    image: fluent/fluent-bit
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    depends_on:
      - opensearch
    networks:
      - opensearch-net

volumes:
  opensearch-data1:
  opensearch-data2:

networks:
  opensearch-net:
```

`fluent-bit.conf`
```
[INPUT]
    Name  dummy
    Dummy {"top": {".dotted": "value"}}

[OUTPUT]
    Name  es
    Host  opensearch
    Port  9200
    HTTP_User admin
    HTTP_Passwd admin
    Replace_Dots On
    Suppress_Type_Name On
    Compress gzip
    tls On
    tls.verify Off
    net.keepalive Off

[FILTER]
    Name kubernetes
    Match kube.*
    Buffer_Size 256KB
    Merge_Log On
    Keep_Log On
```

### Check List
- [ ] New functionality includes testing
- [ ] New functionality has been documented
- [x] Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check [here](https://github.com/opensearch-project/OpenSearch/blob/main/CONTRIBUTING.md#developer-certificate-of-origin).
---------
Signed-off-by: Craig Perkins <[email protected]>
Signed-off-by: Craig Perkins <[email protected]>
Signed-off-by: Peter Nied <[email protected]>
Co-authored-by: Peter Nied <[email protected]>
(cherry picked from commit 499db78)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
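For context on why dropping `@Sharable` matters, here is a minimal sketch of per-channel handler construction in Netty, assuming a hypothetical initializer and the stock `HttpContentDecompressor` rather than the plugin's actual `Netty4ConditionalDecompressor`; it illustrates the pattern described above, not the plugin's real pipeline.

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.handler.codec.http.HttpContentDecompressor;
import io.netty.handler.codec.http.HttpServerCodec;

// Hypothetical initializer illustrating per-channel handler construction.
public class ExampleHttpChannelInitializer extends ChannelInitializer<Channel> {

    // Anti-pattern (the bug): one decoder instance shared by every channel.
    // HttpContentDecoder subclasses keep per-request state, so concurrent
    // gzipped requests can corrupt each other's decompression state.
    // private static final HttpContentDecompressor SHARED = new HttpContentDecompressor();

    @Override
    protected void initChannel(Channel ch) {
        ch.pipeline().addLast("codec", new HttpServerCodec());
        // Fix: build a fresh, non-@Sharable decompressor for every channel.
        ch.pipeline().addLast("decompressor", new HttpContentDecompressor());
    }
}
```

Constructing the handler inside `initChannel` is the standard Netty pattern for any handler that is not safe to share, which is exactly what removing `@Sharable` signals.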
…tion (#3598)

Backport 499db78 from #3583.

---------
Signed-off-by: Craig Perkins <[email protected]>
Signed-off-by: Craig Perkins <[email protected]>
Signed-off-by: Peter Nied <[email protected]>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Signed-off-by: Darshit Chanpura <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Peter Nied <[email protected]>
Co-authored-by: Darshit Chanpura <[email protected]>
…ation (#3599)

Backport 499db78 from #3583.

---------
Signed-off-by: Craig Perkins <[email protected]>
Signed-off-by: Craig Perkins <[email protected]>
Signed-off-by: Peter Nied <[email protected]>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Signed-off-by: Darshit Chanpura <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Peter Nied <[email protected]>
Co-authored-by: Darshit Chanpura <[email protected]>
Is backporting this to 2.11 a violation of semver as outlined in OpenSearch's patch version policy, which says "Small improvements and features will be backported to a new minor version.... Security fixes will be backported to a new patch version"? Maybe this should only go into 2.12, so as not to violate the expectation that 2.11.1 would only contain security fixes.
@wbeckler 2.11.1 is being created for this fix: opensearch-project/opensearch-build#4161 (comment)
As per the official SemVer guidelines, Patch version Z (x.y.Z | x > 0) MUST be incremented if only backward compatible bug fixes are introduced. A bug fix is defined as an internal change that fixes incorrect behavior.
@bbarani that assumes that we make x.y.z.N for security fixes, doesn’t it?
…requests (#3418) (#3675)

### Description
Includes:
- Backport f7c47af of #3418
- Backport 2dab119 of #3717
- Backport f27dee2 of #3583

---

Previously, unauthorized requests were fully processed and only rejected once they reached the RestHandler. This allocates more memory and resources for requests that are ultimately not useful if they are already detected as unauthorized. Using the headerVerifier and decompressor customization from [1], perform an early authorization check when only the headers are available, save an 'early response' for transmission, and do not perform decompression on the request, to speed up closing out the connection.

```mermaid
graph TD
  oA["Receive Request Headers<br>(Original)"] --> oB[Decompress Request]
  oB --> oC[RestHandler]
  oC --> osrf[Intercept Request]
  subgraph sp[Security Plugin]
    osrf --> oD[Check Authorization]
    oD --> oE{Authorized?}
    oE -->|Yes| oF[Process and Respond]
    oE -->|No| oG[Reject Request]
  end
  oF --> oH[Forward to Request Handler]

  H["Receive Request Headers<br>(Updated)"] --> I[HeaderVerifier]
  subgraph nsp[Security Plugin]
    I --> J{Authorized?}
    J -->|Yes| K[Decompress Request]
    J -->|No| N[Save Early Response]
  end
  K --> L[RestHandler]
  N --> L
  L --> M[Intercept Request]
  subgraph n2sp[Security Plugin]
    M --> n2D["Check Authorization<br>(Cached)"]
    n2D --> nE{Authorized?}
    nE -->|Yes| nF[Process and Respond]
    nE -->|No| nG[Reject Request]
  end
  nF --> nH[Forward to Request Handler]

  class oA,oB old;
  class H,I,K,N,n2D new;
  classDef old fill:#f9d0c4,stroke:#f28b82;
  classDef new fill:#cfe8fc,stroke:#68a9ef;
```

### Issues Resolved
- Related #3559

### Check List
- [X] New functionality includes testing
- [ ] ~New functionality has been documented~
- [X] Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check [here](https://github.com/opensearch-project/OpenSearch/blob/main/CONTRIBUTING.md#developer-certificate-of-origin).

---------
Signed-off-by: Peter Nied <[email protected]>
Signed-off-by: Craig Perkins <[email protected]>
Signed-off-by: Craig Perkins <[email protected]>
Signed-off-by: Peter Nied <[email protected]>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Signed-off-by: Darshit Chanpura <[email protected]>
Co-authored-by: Craig Perkins <[email protected]>
Co-authored-by: opensearch-trigger-bot[bot] <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Darshit Chanpura <[email protected]>
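To make the early-rejection flow above more concrete, here is a rough sketch of a header-verifier style handler in Netty. It is not the plugin's actual HeaderVerifier implementation; the class name, attribute key, and header check are all hypothetical and chosen only to illustrate checking authorization before any body decompression happens.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.util.AttributeKey;

// Hypothetical header verifier sketch: decides from headers alone whether the
// request looks authorized, before the body is ever decompressed.
public class ExampleHeaderVerifier extends ChannelInboundHandlerAdapter {

    // Hypothetical channel attribute that later handlers could read to
    // short-circuit decompression and send a saved early response.
    public static final AttributeKey<Boolean> EARLY_REJECT =
            AttributeKey.valueOf("example.early_reject");

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof HttpRequest) {
            HttpRequest request = (HttpRequest) msg;
            // Placeholder check: a real verifier would validate credentials or tokens.
            boolean authorized = request.headers().contains("Authorization");
            if (!authorized) {
                // Flag the channel so downstream handlers can skip heavy work.
                ctx.channel().attr(EARLY_REJECT).set(Boolean.TRUE);
            }
        }
        super.channelRead(ctx, msg);
    }
}
```

Deciding from headers alone means an unauthorized gzipped body never has to be decompressed just to be rejected, which is the resource saving this backport describes.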