This repository contains Nemea system modules for threat detection in IoT networks. The modules and their functionality/purposes are:
- ble-conn-detector -- detect usage of a BLE device
- ble-conn-guard -- detect unexpected BLE connections (filters usage data from ble-conn-detector)
- ble-pairing -- detect unexpected BLE pairing
- lora-airtime -- detect an unexpected frequency of LoRa messages
- lora-replay -- detect replay attacks in LoRa networks
- lora-distance -- detect unexpected sensor location changes
- wsn-anomaly -- universal anomaly detector for wireless sensor networks
- zwave-detector -- detect network scanning and attacks on routing
To enable more advanced threat detection, we create specialized collectors that use dedicated HW interfaces to collect more detailed data. These modules provide the collected data to NEMEA detectors. The modules and their functionality/purposes are:
- SIoTpot -- universal honeypot for IoT networks
- ble-adv-collector -- report BLE devices that are in proximity
- hci-collector -- export BLE packet data from the HCI interface
- zwave-sdr-sniffer -- sniff and export Z-Wave frames using RTL-SDR
- zwave-collector -- parse frames from zwave-sdr-sniffer and export them
- zwave-stats-creator -- create statistics of the Z-Wave network (from sniffed frames)
Also, there are datasets of IoT device communication.
You can install all available modules with the following steps:

```
./bootstrap.sh
./configure
make
sudo make install
```
You can also install each module separately by running the above commands inside the appropriate directory.
This folder includes IoT datasets for threat detection and ML training purposes. The extended flow data was created with the Joy tool. The initial parameters used can be seen in the bash scripts inside the datasets folder.
The script `AutoTest.py` serves for module unit testing.
When launched with no arguments, it scans the repository and determines all directories containing modules. The criteria for this determination are:
- A directory that contains a file named `bootstrap.sh` is considered to be a module written in the C language.
When the modules are determined, the compilation phase follows. During this phase, the commands `./bootstrap.sh`, `./configure`, and `make` are executed in each C module directory. If they succeed, the script checks for the presence of an executable file with the expected name. The expected name of a module binary consists of the `siot-` prefix followed by the module name, and the module name is considered identical to the name of the module directory.
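The compilation phase could look roughly like this (a minimal sketch; `compile_module` and `expected_binary` are hypothetical helpers, not `AutoTest.py`'s real internals):

```python
import os
import subprocess

def compile_module(module_dir):
    """Run the three build commands in module_dir; True only if all succeed."""
    for cmd in (["./bootstrap.sh"], ["./configure"], ["make"]):
        if subprocess.run(cmd, cwd=module_dir).returncode != 0:
            return False
    return True

def expected_binary(module_dir):
    """Expected binary name: the siot- prefix followed by the directory name."""
    return "siot-" + os.path.basename(os.path.normpath(module_dir))
```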
The last phase of the script is testing the module's data processing. The script considers a module suitable for testing if a `tests` directory is present in its module directory. By convention, the `tests` directory contains pairs of `.csv` and `.out` files, where the `.csv` file contains test data suitable for demonstrating the module's work, and the `.out` file contains the expected output of the module after processing the test data. Each testable module is tested as follows:
- A module is launched in its directory, and after one second it is checked whether the module is still running.
- If so, the testing script injects the test data into the module's input IFC using NEMEA logreplay and captures the module's output using NEMEA logger.
- The output is then compared to the expected output, and if they are identical, the test succeeds.
- Moreover, the testing script can also handle unexpected situations such as a module segmentation fault while processing the data.
- `-m M [M...]` -- Specify modules to be tested.
- `-n {T,C}` -- Do not Compile / Test.
- `-L path` -- Path to NEMEA Logger (if not specified, it is considered installed).
- `-R path` -- Path to NEMEA Logreplay (if not specified, it is considered installed).
```
# Compile and test all modules
python3 AutoTest.py

# Compile and test wsn-anomaly and lora-replay
python3 AutoTest.py -m wsn-anomaly lora-replay

# Just test wsn-anomaly
python3 AutoTest.py -m wsn-anomaly -n C

# Do nothing
python3 AutoTest.py -n C -n T
```
The script `IntegrationTest.py` serves for integration testing on Turris Omnia.
The integration test script works much like the auto test script. The main difference is that the integration test launches all modules at once. For this purpose, the NEMEA Supervisor is used; for proper testing, the Supervisor must be installed and turned off.
The script first creates a backup of any existing Supervisor configuration file and replaces it with the one for testing. This backup is restored when the script shuts down.
Modules are tested the same way as by AutoTest: the testing script injects the test data into the module's input IFC using NEMEA logreplay and captures the module's output using NEMEA logger. The output is then compared to the expected output, and if they are identical, the test succeeds.
- `-m M [M...]` -- Specify modules to be tested.
- `-L path` -- Path to NEMEA Logger (if not specified, it is considered installed).
- `-R path` -- Path to NEMEA Logreplay (if not specified, it is considered installed).
- `-p` -- Pause the script before and after the test so that it is possible to verify whether the modules are running correctly.
```
# Test all modules
python3 IntegrationTest.py

# Test ble-pairing and lora-replay
python3 IntegrationTest.py -m ble-pairing lora-replay
```