
Kubearmor Relay not configured (enable log) when using helm or KubeArmorConfig #1866

Open
henrikrexed opened this issue Sep 25, 2024 · 4 comments · May be fixed by #1893
Labels: bug Something isn't working

Comments

henrikrexed commented Sep 25, 2024

Bug Report

When installing kubearmor-operator, I'm trying to enable the log output controlled by ENABLE_STDOUT_LOGS on kubearmor-relay.
My intention is to use fluentbit or the OpenTelemetry Collector to collect the various KubeArmor events.

I have used a modified values.yaml file with the following settings enabled:

kubearmorConfig:
  defaultCapabilitiesPosture: audit
  defaultFilePosture: audit
  defaultNetworkPosture: audit
  defaultVisibility: process,network,file,capability
  enableStdOutLogs: true
  enableStdOutAlerts: true
  enableStdOutMsgs: true
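
Once ENABLE_STDOUT_LOGS is true, the events should show up on the relay pod's stdout, which is what I want fluentbit / the collector to scrape. A quick way to eyeball that (assuming the operator's default kubearmor-relay deployment name) would be:

# tail the relay server's stdout, where the JSON events should appear
kubectl -n kubearmor logs deploy/kubearmor-relay -f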

But the expected configuration is not applied.

I have also tried deploying a KubeArmorConfig with similar settings, but kubearmor-relay still has ENABLE_STDOUT_LOGS set to false.

General Information

  • Environment description: latest GKE cluster
  • Orchestration system version in use (e.g. kubectl version, ...): 1.27

To Reproduce

  1. Install the operator with the modified values.yaml:
helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator -n kubearmor --create-namespace -f kubearmor/values.yaml

or

helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator -n kubearmor --create-namespace 
kubectl apply -f kubearmor/kubeArmorConfig.yaml

Here is the KubeArmorConfig:

apiVersion: operator.kubearmor.com/v1
kind: KubeArmorConfig
metadata:
  labels:
    app.kubernetes.io/name: kubearmorconfig
    app.kubernetes.io/instance: kubearmorconfig-sample
    app.kubernetes.io/part-of: kubearmoroperator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: kubearmoroperator
  name: kubearmorconfig-default
  namespace: kubearmor
spec:
  defaultCapabilitiesPosture: audit
  defaultFilePosture: audit
  defaultNetworkPosture: audit
  defaultVisibility: process,network,capabilities
  enableStdOutLogs: true
  enableStdOutAlerts: true
  enableStdOutMsgs: true
  seccompEnabled: false
  alertThrottling: false
  maxAlertPerSec: 10
  throttleSec: 30
  kubearmorImage:
    image: kubearmor/kubearmor:stable
    imagePullPolicy: Always
  kubearmorInitImage:
    image: kubearmor/kubearmor-init:stable
    imagePullPolicy: Always
  kubearmorRelayImage:
    image: kubearmor/kubearmor-relay-server
    imagePullPolicy: Always
  kubearmorControllerImage:
    image: kubearmor/kubearmor-controller
    imagePullPolicy: Always


Expected behavior

The kubearmor-relay deployment should have ENABLE_STDOUT_LOGS set to true.
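
For completeness, this is how I check what actually landed on the relay (the kubearmor-relay deployment name is assumed to be the operator default):

# print the relay container's env entries as NAME=value lines
kubectl -n kubearmor get deploy kubearmor-relay \
  -o jsonpath='{range .spec.template.spec.containers[0].env[*]}{.name}={.value}{"\n"}{end}'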

henrikrexed added the bug label Sep 25, 2024
@kareem-DA

I am seeing this problem as well; it feels like a race condition. I have this scripted for deployment. I am also using fluxcd to deploy kubearmor, followed by:

apiVersion: v1
kind: Namespace
metadata:
  name: kubearmor
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: kubearmor
  namespace: kubearmor
spec:
  interval: 5m
  url: https://kubearmor.github.io/charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kubearmor
  namespace: kubearmor
spec:
  chart:
    spec:
      chart: kubearmor-operator
      interval: 5m
      sourceRef:
        kind: HelmRepository
        name: kubearmor
      version: v1.4.0
  driftDetection:
    mode: enabled
  install:
    remediation:
      retries: 3
  interval: 10m
  releaseName: kubearmor-operator
  timeout: 5m
  upgrade:
    remediation:
      retries: 3
---
apiVersion: operator.kubearmor.com/v1
kind: KubeArmorConfig
metadata:
  labels:
    app.kubernetes.io/created-by: kubearmoroperator
    app.kubernetes.io/instance: kubearmorconfig-sample
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/name: kubearmorconfig
    app.kubernetes.io/part-of: kubearmoroperator
  name: kubearmorconfig-default
  namespace: kubearmor
spec:
  alertThrottling: true
  defaultCapabilitiesPosture: audit
  defaultFilePosture: audit
  defaultNetworkPosture: audit
  defaultVisibility: process,network
  enableStdOutAlerts: true
  enableStdOutLogs: false
  enableStdOutMsgs: false
  kubearmorControllerImage:
    image: kubearmor/kubearmor-controller
    imagePullPolicy: Always
  kubearmorImage:
    image: kubearmor/kubearmor:stable
    imagePullPolicy: Always
  kubearmorInitImage:
    image: kubearmor/kubearmor-init:stable
    imagePullPolicy: Always
  kubearmorRelayImage:
    image: kubearmor/kubearmor-relay-server
    imagePullPolicy: Always
  maxAlertPerSec: 10
  seccompEnabled: false
  throttleSec: 30

After the initial deployment, the environment variables on the relay pod are always all false. If I make a change to the config (either by pushing an updated config, or by editing it on the cluster with 'kubectl edit ...') after the entire deployment is up, the environment variables on the relay pod do get updated.
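
For example, a spec change like the sketch below (the patched field and value are only illustrative, and equivalent to doing it through kubectl edit) is enough to make the operator roll the new setting out to the relay pod:

# touch the KubeArmorConfig spec so the operator reconciles the relay deployment
kubectl -n kubearmor patch kubearmorconfig kubearmorconfig-default \
  --type merge -p '{"spec":{"enableStdOutMsgs":true}}'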

@rksharma95
Collaborator

@henrikrexed I'm not able to reproduce the issue with the current stable release. Can you please check again?

@rksharma95
Collaborator

rksharma95 commented Nov 7, 2024

@kareem-DA With flux or any GitOps tool, the initial configuration is the desired state, right? So any manual change to KubeArmorConfig would be reverted in the next reconciliation. Let me know if I'm missing anything here.

@rksharma95
Collaborator

@kareem-DA Thanks for the clarification in the Slack discussion. I will try again to reproduce the issue and report back here.

@rksharma95 rksharma95 linked a pull request Nov 12, 2024 that will close this issue