[awsfirehose] Missing data when API permission is set to restrict writing to specific data stream #11768

Status: Open
kaiyan-sheng (Contributor) opened this issue Nov 19, 2024 · 0 comments
Labels: bug, Integration:awsfirehose (Amazon Data Firehose), Team:obs-ds-hosted-services

Problem

When the API key's permissions do not allow documents to be written to a given data stream, Firehose does not recognize the permission failure and continues to report 100% HTTP endpoint delivery success. No failure is surfaced, and no data is backed up to the S3 bucket.

How to reproduce this issue

Step 1: When creating an API key in Kibana, change indices.names from the default * to, for example, logs-awsfirehose-*.
See below for the full role descriptor from the Control security privileges section:

{
  "superuser": {
    "cluster": [
      "all"
    ],
    "indices": [
      {
        "names": [
          "logs-awsfirehose-*"
        ],
        "privileges": [
          "all"
        ],
        "allow_restricted_indices": false
      },
      {
        "names": [
          "*"
        ],
        "privileges": [
          "monitor",
          "read",
          "view_index_metadata",
          "read_cross_cluster"
        ],
        "allow_restricted_indices": true
      }
    ],
    "applications": [],
    "run_as": [
      "*"
    ],
    "metadata": {},
    "transient_metadata": {
      "enabled": true
    },
    "remote_indices": [
      {
        "names": [
          "*"
        ],
        "privileges": [
          "all"
        ],
        "allow_restricted_indices": false,
        "clusters": [
          "*"
        ]
      },
      {
        "names": [
          "*"
        ],
        "privileges": [
          "monitor",
          "read",
          "view_index_metadata",
          "read_cross_cluster"
        ],
        "allow_restricted_indices": true,
        "clusters": [
          "*"
        ]
      }
    ],
    "remote_cluster": [
      {
        "privileges": [
          "monitor_enrich"
        ],
        "clusters": [
          "*"
        ]
      }
    ]
  }
}
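The same restriction can be reproduced without Kibana by calling the Elasticsearch create-API-key endpoint directly. The sketch below builds the request body; the key name, role name, and exact privilege list are illustrative, not taken from the issue (the curl call is commented out because it needs a live cluster and credentials):

```shell
# Request body for POST /_security/api_key: the key may only write to
# indices matching logs-awsfirehose-*, mirroring the role descriptor above.
cat > api_key_request.json <<'EOF'
{
  "name": "firehose-restricted-key",
  "role_descriptors": {
    "firehose_writer": {
      "cluster": ["monitor"],
      "indices": [
        {
          "names": ["logs-awsfirehose-*"],
          "privileges": ["auto_configure", "create_doc"]
        }
      ]
    }
  }
}
EOF

# Create the key (requires the manage_api_key privilege; ES_URL and the
# elastic password are placeholders):
# curl -u "elastic:$ES_PASSWORD" -H 'Content-Type: application/json' \
#      -X POST "$ES_URL/_security/api_key" -d @api_key_request.json
cat api_key_request.json
```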

Step 2: Use this API key to create a Firehose stream in AWS without specifying an es_datastream_name parameter, and send logs to this Firehose stream.
At this point you should see logs arriving in ES through Firehose and stored in the default logs-aws.firehose-default data stream:
(Screenshots omitted: Kibana data stream view; Firehose delivery metrics.)

Step 3: Change the Firehose stream settings by adding the es_datastream_name parameter with a value that does not match "logs-awsfirehose-*". In this case I used logs-awsinput-default.
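For reference, Step 3's change can also be made with the AWS CLI: for the Elastic HTTP-endpoint destination, es_datastream_name travels as a Firehose "common attribute". The stream name, version ID, and destination ID below are placeholders, so the aws call is shown commented out:

```shell
# Partial HttpEndpointDestinationUpdate payload that adds the
# es_datastream_name common attribute used in Step 3.
cat > http_endpoint_update.json <<'EOF'
{
  "RequestConfiguration": {
    "CommonAttributes": [
      {
        "AttributeName": "es_datastream_name",
        "AttributeValue": "logs-awsinput-default"
      }
    ]
  }
}
EOF

# aws firehose update-destination \
#   --delivery-stream-name my-firehose-stream \
#   --current-delivery-stream-version-id 1 \
#   --destination-id destinationId-000000000001 \
#   --http-endpoint-destination-update file://http_endpoint_update.json
cat http_endpoint_update.json
```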

Step 4: Documents now stop arriving in ES, and no logs-awsinput-default data stream is created. Yet in the AWS console, Firehose still reports 100% delivery success with no failures, and nothing is backed up to the S3 bucket.
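To see the rejection that Firehose swallows, one can write to the same data stream directly with the restricted key; Elasticsearch should refuse the write. This is a sketch: ES_URL and FIREHOSE_KEY are placeholder environment variables, and the script must be run against a live cluster:

```shell
# Probe script: POST one document to logs-awsinput-default using the
# restricted API key and print only the HTTP status code. Because the key
# is limited to logs-awsfirehose-*, a 403 security_exception is expected.
cat > direct_write_check.sh <<'EOF'
#!/bin/sh
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: ApiKey $FIREHOSE_KEY" \
  -H 'Content-Type: application/json' \
  -X POST "$ES_URL/logs-awsinput-default/_doc" \
  -d '{"@timestamp":"2024-11-19T00:00:00Z","message":"permission probe"}'
EOF
chmod +x direct_write_check.sh
```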

kaiyan-sheng added the bug, Integration:awsfirehose, and Team:obs-ds-hosted-services labels on Nov 19, 2024
kaiyan-sheng self-assigned this on Nov 19, 2024