This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Commit
adding ZooKeeper autocreation documentation (#427)
removing creating ZooKeeper nodes from quickstart install
adding underscore char into allowed upload name in enrichment store
mariannovotny authored Nov 18, 2021
1 parent 664b698 commit e6f47e9
Showing 4 changed files with 28 additions and 40 deletions.
2 changes: 1 addition & 1 deletion deployment/helm-k8s/resources/upload.php
```diff
@@ -22,7 +22,7 @@
 if (isset($_POST['directory_path'])) {
     $user_dir = $_POST['directory_path'];
     //the allowed characters, i.e. we do not accept e.g.: ../ . %2e%2e%2f etc.
-    if (!preg_match("/^(\/[a-zA-Z0-9]{1,}){1,}$/", $user_dir)) {
+    if (!preg_match("/^(\/[a-zA-Z0-9_]{1,}){1,}$/", $user_dir)) {
         logs(LOG_WARNING, "Warning: Not a valid directory path");
         closelog();
         http_response_code(422);
```
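The tightened check above can be mirrored outside PHP for quick testing. A minimal Python sketch of the same pattern (the function name is illustrative, not part of the commit):

```python
import re

# Same pattern as the updated preg_match in upload.php: one or more
# "/segment" groups, where each segment is one or more of [a-zA-Z0-9_].
# Traversal attempts such as "../" or "%2e%2e%2f" contain characters
# outside that class and are rejected.
_ALLOWED_DIR = re.compile(r"(/[a-zA-Z0-9_]+)+")

def is_valid_directory_path(user_dir: str) -> bool:
    # fullmatch anchors the pattern at both ends, like ^...$ in the PHP code
    return _ALLOWED_DIR.fullmatch(user_dir) is not None
```

With the added underscore, a path such as `/siembol/parser_configs` now passes validation, which the previous pattern rejected.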
9 changes: 3 additions & 6 deletions deployment/quickstart_install/ps-scripts/demoInstall.ps1
```diff
@@ -50,12 +50,9 @@ $zookeeper_status=$(kubectl get pods --namespace $NAMESPACE -l "app.kubernetes.i
 if ($zookeeper_status -eq 'True') {
     Git-Details
     Write-Output "************************************************************"
-    Init-Zookeeper-Nodes
+    Write-Output "****** You can now deploy siembol from helm charts ******"
+    Write-Output "************************************************************"
 } else {
     Write-Output "Zookeeper pod is not running yet, please try again in a few seconds"
     exit 1
 }
-
-Write-Output "************************************************************"
-Write-Output "****** You can now deploy siembol from helm charts ******"
-Write-Output "************************************************************"
-}
```
23 changes: 3 additions & 20 deletions deployment/quickstart_install/sh-scripts/demoInstall.sh
```diff
@@ -32,33 +32,16 @@ git_details () {
     fi
 }
 
-init_zookeeper_nodes () {
-    declare -a ZookeeperNodes=("/siembol/synchronise" "/siembol/alerts" "/siembol/correlation_alerts" "/siembol/parser_configs" "/siembol/cache")
-    echo "Creating Zookeeper nodes "
-    POD_NAME=$(kubectl get pods --namespace $NAMESPACE -l "app.kubernetes.io/component=zookeeper,app.kubernetes.io/instance=storm,app.kubernetes.io/name=zookeeper" -o jsonpath="{.items[0].metadata.name}")
-    kubectl exec -it $POD_NAME -n $NAMESPACE -- zkCli.sh create /siembol 1> /dev/null
-    for node in "${ZookeeperNodes[@]}"; do
-        kubectl exec -it $POD_NAME -n $NAMESPACE -- zkCli.sh create $node 1> /dev/null
-        kubectl exec -it $POD_NAME -n $NAMESPACE -- zkCli.sh set $node '{}' 1> /dev/null
-        echo "$node node initialised with empty JSON object"
-    done
-
-}
-
 echo "************** Install Script For Demo **************"
 echo "*****************************************************"
 
 zookeeper_status=$(kubectl get pods --namespace $NAMESPACE -l "app.kubernetes.io/component=zookeeper,app.kubernetes.io/instance=storm,app.kubernetes.io/name=zookeeper" -o jsonpath="{.items[0].status.containerStatuses[0].ready}")
-if [ "$zookeeper_status" = true ]; then
+if [ "$zookeeper_status" = true ]; then
     git_details
     echo "************************************************************"
-    init_zookeeper_nodes
+    echo "****** You can now deploy siembol from helm charts ******"
+    echo "************************************************************"
 else
     echo "Zookeeper pod is not running yet, please try again in a few seconds"
     exit 1
 fi
-
-echo "************************************************************"
-echo "****** You can now deploy siembol from helm charts ******"
-echo "************************************************************"
-
```
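The deleted `init_zookeeper_nodes` helper created a fixed set of nodes and initialised each with an empty JSON object via `zkCli.sh`. The bookkeeping it performed can be sketched in plain Python, with a dict standing in for the ZooKeeper ensemble (the function and the dict are illustrative stand-ins, not Siembol code):

```python
# Node paths taken from the removed shell function.
ZOOKEEPER_NODES = [
    "/siembol/synchronise",
    "/siembol/alerts",
    "/siembol/correlation_alerts",
    "/siembol/parser_configs",
    "/siembol/cache",
]

def init_zookeeper_nodes(store: dict) -> None:
    """Create each node and set it to an empty JSON object,
    mirroring the removed `zkCli.sh create` / `set` calls."""
    store.setdefault("/siembol", "")  # parent node created first
    for node in ZOOKEEPER_NODES:
        store[node] = "{}"
        print(f"{node} node initialised with empty JSON object")
```

After this commit, the quickstart scripts no longer need this step: the nodes are autocreated by the ZooKeeper connector, as documented in the markdown change below.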
34 changes: 21 additions & 13 deletions docs/deployment/how-tos/how_to_set_up_zookeeper_nodes.md
````diff
@@ -1,9 +1,17 @@
-How to set-up Zookeeper nodes for Siembol configuration
+How to set-up ZooKeeper nodes for Siembol configuration
 =======================================================
 
-Siembol services watch Zookeeper nodes for configuration. Zookeeper nodes notify the storm topologies when updates occur allowing for configuration changes without the need to restart components.
+Siembol configurations are stored in git repositories and cached in ZooKeeper. ZooKeeper notifies the storm topologies when updates occur allowing for configuration changes without the need to restart components.
 
-You will need to create the Zookeeper nodes prior to running siembol services. To create a node connect to a Zookeeper server and run a command like the following:
+You can create the ZooKeeper nodes manually prior to running siembol services. However, you can use autocreation of ZooKeeper nodes by defining the default value of the node in the ZooKeeper connector property `init-value-if-not-exists`, for example:
 
+```properties
+config-editor.storm-topologies-zookeeper.zk-path=/siembol/synchronise
+config-editor.storm-topologies-zookeeper.zk-url=siembol-zookeeper:2181
+config-editor.storm-topologies-zookeeper.init-value-if-not-exists={}
+```
+
+Alternatively you can create a node by connecting to a ZooKeeper server and running a command like the following:
+
 ```shell
 zookeeper@siembol-storm-zookeeper-0:/apache-zookeeper-3.5.5-bin$ bin/zkCli.sh
@@ -22,41 +30,41 @@ Connecting to localhost:2181
 ```
 
 
-Zookeeper nodes for configuration deployments
+ZooKeeper nodes for configuration deployments
 ---------------------------------------------
 
 ### Admin configuration settings
 
-When siembol services are launched in Storm, they are given the Zookeeper node to watch for configuration updates. If we take an alerting component as an example, we can navigate to the Admin interface in the siembol UI for the service and view Zookeeper settings.
+When siembol services are launched in Storm, they are given the ZooKeeper node to watch for configuration updates. If we take an alerting component as an example, we can navigate to the Admin interface in the siembol UI for the service and view ZooKeeper settings.
 
-`Siembol UI -> Alerts -> Admin -> Zookeeper Attributes`
+`Siembol UI -> Alerts -> Admin -> ZooKeeper Attributes`
 
 ![](images/alerts-zookeeper.jpg)
 
 ### Config editor rest application properties
 
-The [config editor rest service](../../services/how-tos/how_to_set_up_service_in_config_editor_rest.md) needs to have the same Zookeeper node mentioned above added into it's `application.properties`. This is added with the following entry:
+The [config editor rest service](../../services/how-tos/how_to_set_up_service_in_config_editor_rest.md) needs to have the same ZooKeeper node mentioned above added into it's `application.properties`. This is added with the following entry:
 
 ```properties
 config-editor.services.alert.release-zookeeper.zk-path=/siembol/alerts/rules
 ```
 
-These Zookeeper nodes now ensure that any change to alerting rules in the Siembol UI will be deployed to alerting instances running in Storm.
+These ZooKeeper nodes now ensure that any change to alerting rules in the Siembol UI will be deployed to alerting instances running in Storm.
 
 Zookeper nodes for storm topology manager
 -----------------------------------------
 
-The storm topology manager is responsible for the orchestration of Storm topologies Siembol requires. It does this by listening to a Zookeeper synchronisation node, which the configuration rest service publishes a desired state to. The service will use an internal cache node to persists state, and continually try to reconcile any differences.
+The storm topology manager is responsible for the orchestration of Storm topologies Siembol requires. It does this by listening to a ZooKeeper synchronisation node, which the configuration rest service publishes a desired state to. The service will use an internal cache node to persists state, and continually try to reconcile any differences.
 
-Therefore, it is required to have two Zookeeper nodes for this to work: at least one synchronization node and one cache node. The configuration rest service only requires access to write to the sync node, and it is set in the [configuration rest services](../../services/how-tos/how_to_set_up_service_in_config_editor_rest.md) `application.properties` file:
+Therefore, it is required to have two ZooKeeper nodes for this to work: at least one synchronization node and one cache node. The configuration rest service only requires access to write to the sync node, and it is set in the [configuration rest services](../../services/how-tos/how_to_set_up_service_in_config_editor_rest.md) `application.properties` file:
 
 ```properties
-config-editor.storm-topologies-zookeeper.zk-path=/siembol/topologies/synchronise
+config-editor.storm-topologies-zookeeper.zk-path=/siembol/synchronise
 ```
 
 The storm topology manager service requires read access to the synchronise node and read/write access to its internal cache node. Both nodes can be configured in the storm-topology-manager's `application.properties` file:
 
 ```properties
-topology-manager.desired-state.zk-path=/siembol/topologies/synchronise
-topology-manager.saved-state.zk-path=/siembol/topologies/cache
+topology-manager.desired-state.zk-path=/siembol/synchronise
+topology-manager.saved-state.zk-path=/siembol/cache
 ```
````
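The `init-value-if-not-exists` behaviour documented above amounts to create-if-absent: the connector writes the default value only when the node does not yet exist, and never overwrites data already cached there. A small sketch of that semantics, using a dict in place of ZooKeeper (names are illustrative, not Siembol's implementation):

```python
def ensure_node(store: dict, zk_path: str, init_value: str = "{}") -> str:
    """Create zk_path with init_value when absent; leave existing data intact."""
    if zk_path not in store:
        store[zk_path] = init_value
    return store[zk_path]
```

On first start-up, `ensure_node(store, "/siembol/synchronise")` creates the node with the default `{}`; on later restarts the call returns whatever state is already stored, so existing configuration is never clobbered.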
