IDS reconfiguration
NS2 + IDS artifacts:
- Wiki page on IDS deployment and triggering an alarm
- K8s descriptors
- 5GTANGO descriptors and project
- CNF Docker image implementations
After successful deployment, an alarm triggered by the IDS should lead to a reconfiguration of the MDC container to connect to the quarantine instance of NS1. This requires a series of 5GTANGO components working together.
Alternatively, the reconfiguration may also be triggered manually from the FMP/SMPortal:
- Tested & confirmed: The IDS triggers an alarm iff an intrusion (wrong host or user) is detected. The alarm leads to a corresponding log entry in Elasticsearch (visible in Kibana).
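To verify the first step without opening Kibana, you can query Elasticsearch directly. This is a minimal sketch; the Elasticsearch base URL, the `logstash-*` index pattern, and the `message:intrusion` query are assumptions, so adjust them to your deployment:

```python
# Sketch: look up IDS alarm log entries via Elasticsearch's URI search.
# Assumed values (not from this page): host, index pattern, query string.
import json
import urllib.parse
import urllib.request


def build_search_url(es_base, index, query):
    """Build a URI-search URL for Elasticsearch's `_search` endpoint."""
    qs = urllib.parse.urlencode({"q": query, "size": 10})
    return f"{es_base}/{index}/_search?{qs}"


def fetch_alarm_logs(es_base, index="logstash-*", query="message:intrusion"):
    """Return matching log documents (network call, needs a live Elasticsearch)."""
    with urllib.request.urlopen(build_search_url(es_base, index, query)) as resp:
        body = json.load(resp)
    return [hit["_source"] for hit in body["hits"]["hits"]]
```

An empty `hits` list means no alarm has been logged for the queried time range.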
- Tested & confirmed: The HTTP server (H) exposes this alarm via a REST interface to the monitoring framework. This leads to a monitoring metric being set: when an alarm is triggered, the metric `ip0` changes from 0 to some positive number for around 20 s.
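The metric can also be checked programmatically via Prometheus' standard instant-query endpoint (`/api/v1/query`). A small sketch, assuming the Prometheus base URL of your deployment:

```python
# Sketch: poll the Prometheus HTTP API for the custom metric `ip0`.
# /api/v1/query is the standard Prometheus instant-query endpoint; the
# metric name `ip0` is the one described above.
import json
import urllib.parse
import urllib.request


def parse_instant_query(body):
    """Extract (labels, value) pairs from a Prometheus instant-query response."""
    if body.get("status") != "success":
        return []
    return [(r["metric"], float(r["value"][1])) for r in body["data"]["result"]]


def alarm_active(prom_base, metric="ip0"):
    """True if any `ip0` sample is positive, i.e. the IDS alarm is raised."""
    url = f"{prom_base}/api/v1/query?" + urllib.parse.urlencode({"query": metric})
    with urllib.request.urlopen(url) as resp:
        samples = parse_instant_query(json.load(resp))
    return any(value > 0 for _, value in samples)
```

Since the metric only stays positive for around 20 s, poll it shortly after triggering the intrusion.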
- Tested & confirmed: The policy manager picks up the change in the specified custom metric and triggers a reconfiguration request to the SLM. This request also contains the service ID, since the policy is bound to the corresponding service instance.
- WIP: The SLM contacts the SSM, which sends back the reconfiguration payload; this payload is sent via the FLM to the FSM of the MDC container. The SSM still needs to be implemented; see details at https://github.com/sonata-nfv/tng-industrial-pilot/wiki/FSM-SSM-Development
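The shape of the reconfiguration payload the SSM has to produce can be inferred from the pub/sub message reproduced further down this page. The following is only an illustrative sketch of assembling that structure, not the actual SSM interface:

```python
# Sketch: assemble a reconfiguration payload with the same fields as the
# `service.instance.reconfigure` message shown in the pub/sub log below.
# The function name and defaults are illustrative, not the real SSM code.
def build_reconfiguration_payload(service_instance_uuid, vnf_name, vnfd_uuid,
                                  log_message="intrusion", value="1"):
    """Assemble the reconfiguration message the SLM forwards via the FLM."""
    return {
        "service_instance_uuid": service_instance_uuid,
        "reconfiguration_payload": {
            "vnf_name": vnf_name,
            "vnfd_uuid": vnfd_uuid,
            "log_message": log_message,
            "value": value,
        },
    }
```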
- WIP: The FSM requests the MANO to restart the MDC container with a new environment variable, which reconfigures the MDC's connection from the old NS1 to the quarantine NS1 instance. The FSM still needs to be tested.
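On Kubernetes, the restart-with-new-environment step essentially amounts to a `kubectl set env` on the MDC deployment, which triggers a rolling restart of its pods. A sketch under assumed names (the deployment name `ns2-mdc` and the variable `MDC_TARGET_HOST` are placeholders, not the real descriptors):

```python
# Sketch: set a new environment variable on the MDC deployment so Kubernetes
# restarts its pods with the quarantine NS1 address.
# Deployment name and variable name are assumptions for illustration.
import subprocess


def set_env_command(deployment, namespace, **env):
    """Build the `kubectl set env` command line."""
    cmd = ["kubectl", "set", "env", f"deployment/{deployment}", "-n", namespace]
    cmd += [f"{key}={value}" for key, value in env.items()]
    return cmd


def reconfigure_mdc(quarantine_host):
    """Apply the reconfiguration against the current kubectl context."""
    cmd = set_env_command("ns2-mdc", "default", MDC_TARGET_HOST=quarantine_host)
    subprocess.run(cmd, check=True)
```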
All policy APIs are documented in the Swagger API.
To create a new policy and verify it for the industrial NS, follow these steps:
- Create the policy. Policy creation should eventually be done within the portal, but it is not ready on the UI side yet. Instead, you can create a new policy by POSTing the policy available here to the REST API: http://pre-int-sp-ath.5gtango.eu:8081/api/v1
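  The POST can be done with any HTTP client; the sketch below uses Python's standard library. The `/policies` path and the descriptor fields are assumptions, so check the Swagger API for the real schema:

```python
# Sketch: create a policy by POSTing its descriptor to the policy manager.
# The base URL comes from the text above; the `/policies` path is an
# assumption -- consult the Swagger API for the actual resource path.
import json
import urllib.request

POLICY_API = "http://pre-int-sp-ath.5gtango.eu:8081/api/v1"


def build_policy_request(policy_descriptor, base=POLICY_API, path="/policies"):
    """Build the POST request object (separate so it can be inspected/tested)."""
    return urllib.request.Request(
        base + path,
        data=json.dumps(policy_descriptor).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def create_policy(policy_descriptor):
    """Send the request and return the decoded JSON response."""
    with urllib.request.urlopen(build_policy_request(policy_descriptor)) as resp:
        return json.load(resp)
```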
- Define the policy as default.
- Activate the policy (you will then see the corresponding Prometheus rule activated). The policy is activated automatically upon NS deployment. Alternatively, steps 1 to 3 can be executed by running the following Robot test.
- Then trigger the `ip0` metric to a value greater than 0. This can be done by connecting to the external IP of msf-vnf1: `smbclient -L <external-IP>`
- The Prometheus rule fires, and the monitoring manager sends the alert to the pub/sub broker.
- The policy manager reads the alert and creates the following alert action. You should be able to see the policy alert action in the portal. You can also check the payload in the pub/sub logs to confirm that the triggering worked:
```
2019-10-14 11:32:41:519: Message published
Node: rabbit@1beebaffb872
Connection: 172.18.0.39:49626 -> 172.18.0.7:5672
Virtual host: /
User: guest
Channel: 6
Exchange: son-kernel
Routing keys: [<<"service.instance.reconfigure">>]
Routed queues: [<<"service.instance.reconfigure">>]
Properties: [{<<"app_id">>,longstr,<<"tng-policy-mngr">>},
             {<<"reply_to">>,longstr,<<"service.instance.reconfigure">>},
             {<<"correlation_id">>,longstr,<<"5da45cd976e1730001b7e2b9">>},
             {<<"priority">>,signedint,0},
             {<<"delivery_mode">>,signedint,2},
             {<<"headers">>,table,[]},
             {<<"content_encoding">>,longstr,<<"UTF-8">>},
             {<<"content_type">>,longstr,<<"text/plain">>}]
Payload:
service_instance_uuid: f3a9af69-e42e-4b13-9b02-b6e900d7beb4
reconfiguration_payload: {vnf_name: lhc-vnf2, vnfd_uuid: 602c67d2-4080-436b-95e7-5828a57f0f85,
  log_message: intrusion, value: '1'}
```
You can check the Graylog logs of tng-policy-mngr at http://logs.sonata-nfv.eu using the following search options:

```
source:pre-int-sp-ath AND container_name:tng-policy-mngr AND message:reconfiguration*
```
- Prometheus with the monitoring metric `ip0`: http://pre-int-sp-ath.5gtango.eu:9090/graph?g0.range_input=1h&g0.expr=ip0&g0.tab=0 When an alarm is triggered, the metric `ip0` changes from 0 to some positive number, here 183762988, for around 20 s.
- List all container names in Kubernetes to identify the correct container to look for in Prometheus:

```
kubectl get pods --all-namespaces -o=custom-columns=NameSpace:.metadata.namespace,NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
```