IDS reconfiguration
NS2 + IDS artifacts:
- Wiki page on IDS deployment and triggering an alarm
- K8s descriptors
- 5GTANGO descriptors and project
- CNF Docker image implementations
After successful deployment, an alarm triggered by the IDS should lead to a reconfiguration of the MDC container to connect to the quarantine instance of NS1. This requires several 5GTANGO components working together.
Alternatively, the reconfiguration may also be triggered manually from the FMP/SMPortal:
- Tested & confirmed: The IDS triggers an alarm iff an intrusion (wrong host or user) is detected. The alarm leads to a corresponding log entry in Elasticsearch (visible in Kibana).
- Tested & confirmed: The HTTP server (H) exposes this alarm via a REST interface to the monitoring framework. This leads to a monitoring metric being set: when an alarm is triggered, the metric `ip0` changes from 0 to some positive number for around 20s.
- Tested & not working consistently: The policy manager picks up the change in the specified custom metric and triggers a reconfiguration request towards the SLM. This request also contains the service ID, since the policy is bound to the corresponding service instance.
Open issues:
- Policy manager needs more testing and probably fixes for higher robustness: reconfiguration requests are sometimes not sent out after an alert, sometimes sent multiple times, and sometimes not formatted correctly. Example message: http://logs.sonata-nfv.eu/messages/graylog_65/c849a8c0-eff4-11e9-a4e4-0242ac130004
- Tested & confirmed: The SLM contacts the SSM, which sends back the reconfiguration payload; this payload is sent via the FLM to the FSM of the MDC container.
- Tested & confirmed: The FSM requests the MANO to restart the MDC container with a new environment variable, which should reconfigure the MDC's connection from the old NS1 to the quarantine NS1 instance.
- WIP: After the FSM's response, the MDC pod is restarted with the new env var, and the IMMS traffic should stop arriving in the old NS1 and start arriving in the quarantine NS1 (see the sketch after this list). Open issues:
- Reconfiguration and restart of the pod work. The env var is set correctly. Traffic stops arriving in the old NS1.
- But: traffic doesn't arrive correctly in the quarantine NS1 either.
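For reference, a minimal sketch of what this restart-with-new-env step amounts to in Kubernetes terms, using the official `kubernetes` Python client. The deployment/container name (`mdc`), namespace, and the variable name/value are assumptions for illustration; in the actual setup the FSM asks the MANO to perform this step.

```python
from kubernetes import client, config

# Sketch: patch the MDC deployment's pod template with a new env var so
# Kubernetes rolls the pod, mirroring what the FSM asks the MANO to do.
config.load_kube_config()
apps = client.AppsV1Api()

patch = {"spec": {"template": {"spec": {"containers": [{
    "name": "mdc",                                  # assumed container name
    "env": [{"name": "NS1_HOST",                    # assumed variable name
             "value": "quarantine-ns1"}],           # assumed quarantine endpoint
}]}}}}

apps.patch_namespaced_deployment(name="mdc", namespace="default", body=patch)
```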
All policy APIs are available at the Swagger API.
To create a new policy and check it for the industrial NS, follow these steps:
- Create the policy. Policy creation should eventually be done within the portal, but the UI part is not ready yet; for now, you can create a new policy by POSTing the policy (available here) to the REST API: http://pre-int-sp-ath.5gtango.eu:8081/api/v1 (a request sketch follows at the end of these steps)
- Define the policy as default.
- Activate the policy (you should then see the corresponding Prometheus rule activated). The policy is activated automatically upon NS deployment. Alternatively, steps 1 to 3 can be executed by running the following Robot test.
- Then trigger the `ip0` metric to a value greater than 0. This can be done by connecting to the msf-vnf1 external IP: `smbclient -L <external-IP>`
- The Prometheus rule will fire and the monitoring manager will send the alert to the pub/sub broker.
- The policy manager reads the alert and creates the corresponding alert action. You should be able to see the policy alert action in the portal. You can also check the payload in the pub/sub logs to confirm that the triggering worked:
```
2019-10-14 11:32:41:519: Message published
Node: rabbit@1beebaffb872
Connection: 172.18.0.39:49626 -> 172.18.0.7:5672
Virtual host: /
User: guest
Channel: 6
Exchange: son-kernel
Routing keys: [<<"service.instance.reconfigure">>]
Routed queues: [<<"service.instance.reconfigure">>]
Properties: [{<<"app_id">>,longstr,<<"tng-policy-mngr">>},
             {<<"reply_to">>,longstr,<<"service.instance.reconfigure">>},
             {<<"correlation_id">>,longstr,<<"5da45cd976e1730001b7e2b9">>},
             {<<"priority">>,signedint,0},
             {<<"delivery_mode">>,signedint,2},
             {<<"headers">>,table,[]},
             {<<"content_encoding">>,longstr,<<"UTF-8">>},
             {<<"content_type">>,longstr,<<"text/plain">>}]
Payload:
service_instance_uuid: f3a9af69-e42e-4b13-9b02-b6e900d7beb4
reconfiguration_payload: {vnf_name: lhc-vnf2, vnfd_uuid: 602c67d2-4080-436b-95e7-5828a57f0f85,
  log_message: intrusion, value: '1'}
```
You can check the Graylog logs of the tng-policy-mngr at http://logs.sonata-nfv.eu using the following search query:
source:pre-int-sp-ath AND container_name:tng-policy-mngr AND message:reconfiguration*
- Alternatively/Additionally, start creating a trace log of the broker messages on http://int-sp-ath.5gtango.eu:15672/#/traces
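For step 1 (creating the policy), a minimal request sketch with Python `requests`. The base URL comes from the text above; the `/policies` resource path and the YAML content type are assumptions, and the policy descriptor itself is the one referenced in the steps.

```python
import requests

BASE = "http://pre-int-sp-ath.5gtango.eu:8081/api/v1"

# Assumption: the policy descriptor referenced above has been saved locally.
with open("policy.yml") as f:
    policy = f.read()

# Assumption: policies live under /policies and are accepted as YAML.
resp = requests.post(f"{BASE}/policies", data=policy,
                     headers={"Content-Type": "application/x-yaml"})
print(resp.status_code, resp.text)
```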
- The policy manager triggers the `reconfiguration_event` of the SSM, and the SSM triggers the `reconfiguration_event` of the FSM.
- You need to override the corresponding functions in the SSM/FSM code to return the correct response.
- The incoming `content` argument looks like this: https://github.com/sonata-nfv/tng-sdk-sm/blob/master/src/tngsdksm/examples/payloads/fsm/configure_event.yml
- The incoming `content` argument, which can be used to extract the VNFR IDs, is of this format: https://github.com/sonata-nfv/tng-sdk-sm/blob/master/src/tngsdksm/examples/payloads/ssm/configure_event.yml#L101
- The `response` dict to return should have the following format, but as a Python dict:

```yaml
---
vnf:
- configure:
    payload:
      message: 'alert 1'
    trigger: True
    id: <uuid of the vnf instance that the fsm is associated to>
- configure:
    trigger: False
    id: <uuid of the vnf instance that doesn't need reconfiguration>
- configure:
    trigger: False
    id: <uuid of the vnf instance that doesn't need reconfiguration>
```
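A minimal Python sketch of building that response. The function and argument names are illustrative; the VNF instance UUIDs would come from the incoming `content` payload linked above.

```python
def build_reconfiguration_response(mdc_id, other_ids):
    """Build the response dict in the YAML format above, as a Python dict.

    mdc_id    -- UUID of the VNF instance the FSM is associated to
    other_ids -- UUIDs of instances that don't need reconfiguration
    (Both assumed to be extracted from the incoming `content` argument.)
    """
    vnf = [{"configure": {"payload": {"message": "alert 1"},
                          "trigger": True,
                          "id": mdc_id}}]
    for vnf_id in other_ids:
        vnf.append({"configure": {"trigger": False, "id": vnf_id}})
    return {"vnf": vnf}
```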
- First test locally using `tng-sdk-sm`: https://github.com/sonata-nfv/tng-sdk-sm/wiki/Usage#subcommand-execute
- Afterwards, when the SSM/FSM image is built and on Docker Hub and the NSD/VNFD is adjusted to deploy the SSM/FSM on instantiation, instantiate the service. Trigger the reconfiguration either via the policy manager or manually via the MANO API. Check the logs to see if it worked.
- To trigger the reconfiguration via the MANO, use the broker GUI: http://pre-int-sp-ath.5gtango.eu:15672/#/exchanges/%2F/son-kernel
Set the routing key and the `reply_to` property to `service.instance.reconfigure`, set the `correlation_id` to any valid UUID, and set the payload as in the screenshot below.
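The same trigger can be scripted over AMQP instead of the GUI; a sketch with `pika`, assuming the broker's default AMQP port and guest credentials, with exchange, routing key, properties, and payload mirroring the trace log above:

```python
import uuid
import pika

# Assumptions: AMQP reachable on the default port 5672 with guest/guest.
params = pika.ConnectionParameters(
    host="pre-int-sp-ath.5gtango.eu",
    credentials=pika.PlainCredentials("guest", "guest"))
conn = pika.BlockingConnection(params)
ch = conn.channel()

# Payload mirrors the trace log above; substitute your own instance UUIDs.
payload = ("service_instance_uuid: f3a9af69-e42e-4b13-9b02-b6e900d7beb4\n"
           "reconfiguration_payload: {vnf_name: lhc-vnf2, "
           "vnfd_uuid: 602c67d2-4080-436b-95e7-5828a57f0f85, "
           "log_message: intrusion, value: '1'}\n")

ch.basic_publish(
    exchange="son-kernel",
    routing_key="service.instance.reconfigure",
    body=payload,
    properties=pika.BasicProperties(
        reply_to="service.instance.reconfigure",
        correlation_id=str(uuid.uuid4()),   # any valid UUID
        content_type="text/plain",
        content_encoding="UTF-8"))
conn.close()
```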
- Prometheus with monitoring metric `ip0`: http://pre-int-sp-ath.5gtango.eu:9090/graph?g0.range_input=1h&g0.expr=ip0&g0.tab=0
When an alarm is triggered, the metric `ip0` changes from 0 to some positive number, here 183762988, for around 20s.
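The same value can be polled from the standard Prometheus HTTP query API, e.g. when scripting a check:

```python
import requests

PROM = "http://pre-int-sp-ath.5gtango.eu:9090"

# /api/v1/query is the standard Prometheus instant-query endpoint.
resp = requests.get(f"{PROM}/api/v1/query", params={"query": "ip0"})
for result in resp.json()["data"]["result"]:
    # Each result carries the metric's labels and a [timestamp, value] pair.
    print(result["metric"], result["value"])
```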
- List all container names in Kubernetes to identify the correct container to look out for in Prometheus:

```
kubectl get pods --all-namespaces -o=custom-columns=NameSpace:.metadata.namespace,NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
```
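The same inventory can be produced from Python with the official `kubernetes` client, e.g. when combining it with the metric check above:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Equivalent of the kubectl one-liner above: namespace, pod, container names.
for pod in core.list_pod_for_all_namespaces().items:
    containers = ",".join(c.name for c in pod.spec.containers)
    print(pod.metadata.namespace, pod.metadata.name, containers)
```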