We're running a multi-tenant platform on Amazon EKS where each deployment has its own EKS cluster and each tenant has its own set of pods in their own namespace.
We've deployed the "centralized-logging-with-opensearch" solution, creating an EKS application pipeline for each EKS cluster and, for each tenant, a separate log source within the relevant pipeline.
This is working very well.
However, we have a question around data deletion.
If a tenant decides to leave, we naturally undeploy the tenant from the relevant EKS cluster and clean up any remaining resources such as backups.
Regarding log data that has been moved into the "centralized-logging-with-opensearch" solution, how would we go about removing just that tenant's data?
I can't think of a simple way of doing it.
It seems simple enough to delete an EKS cluster's data in its entirety, since the cluster name is part of the S3 key prefix in the S3 buckets, so the process would be to delete all objects under that prefix.
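For reference, the cluster-level cleanup I have in mind would be something like the sketch below; the bucket name and prefix layout are placeholders, not the solution's actual values:

```python
import boto3

# Hypothetical names for illustration; substitute the buffer bucket
# and the actual cluster prefix your pipeline writes to.
BUCKET = "centralized-logging-buffer-bucket"
CLUSTER_PREFIX = "AppLogs/my-eks-cluster/"

s3 = boto3.resource("s3")
bucket = s3.Bucket(BUCKET)

# Deletes every object under the cluster's prefix (batched internally).
bucket.objects.filter(Prefix=CLUSTER_PREFIX).delete()
```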
However, the tenant data would be nested inside all the objects within that cluster's S3 prefix.
Are there any obvious ideas I haven't thought of, or is it as painful as having to fetch, unpack, filter, and re-upload each and every object under the EKS cluster's S3 prefix?
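To make concrete what I mean by the painful approach, here is a rough sketch of that rewrite loop. It assumes (hypothetically) gzipped JSON-lines objects where each record carries a `kubernetes.namespace_name` field, as Fluent Bit typically emits for EKS; bucket, prefix, and namespace names are placeholders:

```python
import gzip
import json
import boto3

BUCKET = "centralized-logging-buffer-bucket"  # placeholder
CLUSTER_PREFIX = "AppLogs/my-eks-cluster/"    # placeholder
TENANT_NAMESPACE = "tenant-a"                 # departing tenant's namespace

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET, Prefix=CLUSTER_PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        # Assumes one JSON document per line, gzip-compressed.
        lines = gzip.decompress(body).decode("utf-8").splitlines()
        kept = [
            line for line in lines
            if json.loads(line).get("kubernetes", {}).get("namespace_name")
            != TENANT_NAMESPACE
        ]
        if len(kept) == len(lines):
            continue  # no tenant records in this object; skip the rewrite
        if kept:
            s3.put_object(
                Bucket=BUCKET,
                Key=key,
                Body=gzip.compress(("\n".join(kept) + "\n").encode("utf-8")),
            )
        else:
            s3.delete_object(Bucket=BUCKET, Key=key)
```

Doing this for every object under the cluster prefix is exactly the cost I'd like to avoid, hence the question.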