Properly monitoring a fleet of devices is an evolving art. One of the current leaders in the server world for application and hardware monitoring is Prometheus, both on bare metal and as a first-class citizen in the Kubernetes world. To reduce the friction between the edge and the cloud, this project deploys a Prometheus stack to monitor an entire fleet of balenaCloud devices (from a balena device, no less!).
We have showcased Prometheus a few other times, and this tutorial expands on those to provide a fair bit more functionality.
This demo is the first part of a series on how to monitor your stack & fleet with Prometheus, covering everything from service discovery to instrumentation to alerting. Stay tuned for future updates!
Here are our goals with this tutorial:
- A Prometheus monitoring stack that monitors a discovered fleet (from the fleet itself, no less!)
- Integrated Grafana for visualization, also deployed to balena device(s)
- Service discovery mechanism to automatically detect new devices
- Basic machine monitoring deployed to a device using one open source exporter to expose service metrics
- Note: for this tutorial, we will limit ourselves to one open source exporter (exporters are services that expose metrics for Prometheus to ingest) for simplicity's sake. Stay tuned for later installments where we will dive into running multiple exporters and instrumenting custom code!
- Two applications, one to do the monitoring (let us call this `monitor`) and one to be monitored (call this application `thingy`). This design is especially powerful if the application is multicontainer, though it need not be.
- The monitoring stack can be deployed to many different locations. In this example, we will deploy it to balenaCloud and run it on a device within the fleet. Sign up for free if you don’t already have an account.
- Start by creating an application to deploy to. For the sake of the demo, let us call it `monitor`.
- Since the service discovery is configured purely via environment variables, we will want to preset a few to ensure our monitoring starts up without a hitch.
- Generate an API key and save the key in your application as an environment variable named `API_KEY`.
- If you plan to monitor remotely (i.e. via the public URLs), set an environment variable `USE_PUBLIC_URLS` to `true`. (Both variables can also be set from the balena CLI; see the sketch after this list.)
- Next, clone the example repository here and push it to your newly-created application using `balena push` or `git` (read more).
- If you now enable the public URL of the device(s) running the `monitor` application and navigate to that URL, you should be able to view and access your very own Grafana instance.
  - Note: the default username/password is admin/admin; we recommend changing it as soon as you log in for the first time (Grafana will prompt you).
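For reference, the environment variables mentioned above can also be set from a terminal with the balena CLI instead of the dashboard. Treat this as a sketch: the key value is a placeholder, and the `--application` flag has been renamed in newer CLI releases (to `--fleet`), so check your CLI version.

```bash
# Hypothetical example: set application-wide variables for the monitor application.
# Replace <your-api-key> with the API key generated earlier.
balena env add API_KEY "<your-api-key>" --application monitor
balena env add USE_PUBLIC_URLS true --application monitor
```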
- If you do not already have an application running that you would like to instrument, you can create a new application for demo purposes. Let us call this application `thingy`.
- At a bare minimum, to get the most from your device you will want to run `node_exporter`, which exports machine metrics like packet counters and memory usage. We will use this exporter to show how to configure and scrape a device, but there are many other useful exporters that may interest you as well:
- MQTT exporter
- Redis exporter
- OpenVPN exporter
- PostgreSQL exporter
- Scan this list for any other open-source code you may be running. If an exporter exists for your preferred database/message queue/application, it is always a good practice to track it. Since there are many pre-baked exporters and dashboards, you can monitor almost everything you did not write with minimal setup. The real power comes when instrumenting your own code, more on that in another post!
- Using our `node_exporter` example, add your exporter to your `docker-compose.yml` to configure the on-device scraping process (a minimal sketch follows this list). If you are not using multicontainer mode, you can just daemonize the `node_exporter` process as part of your single-container application.
- If using public URLs, ensure that the public URLs are enabled for the devices you want to monitor.
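Below is a minimal sketch of what that `docker-compose.yml` addition could look like. The `prom/node-exporter` image, its tag, and the use of host networking are assumptions for illustration; the example repository may wire this up differently, so adapt it to your own application.

```yaml
# Hypothetical service entry for the application being monitored (thingy).
version: "2.1"
services:
  node_exporter:
    image: prom/node-exporter:latest  # multi-arch image; pin a specific version for production
    network_mode: host                # serves metrics on the device's port 9100 by default
    restart: always
```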
- Find or create a dashboard in Grafana to visualize what you need from the data you are now collecting (make sure the datasource type is Prometheus!).
  - If you are following along with the `node_exporter` example, we recommend using this sample dashboard.
- Drop the dashboard json blob into the `grafana/dashboards` directory, following the `node_exporter` example here (a typical provisioning file is sketched below).
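The example repository already wires the provisioning up for you; for orientation, a typical Grafana dashboard provisioning file (the piece that tells Grafana to load JSON dashboards from a directory) looks roughly like the following. The `path` value is an assumption and should match wherever the dashboards are mounted in the Grafana container.

```yaml
# Hypothetical grafana/provisioning/dashboards/dashboards.yml
apiVersion: 1
providers:
  - name: default
    type: file
    options:
      path: /var/lib/grafana/dashboards   # directory the dashboard JSON files are copied into
```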
Upon completion, `baletheus` should log a message letting you know that it is updating the registry of devices.
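If, as is common for custom service discovery, `baletheus` hands Prometheus its targets through file-based service discovery, the generated target file and the matching scrape configuration could look roughly like this. The file path and label names here are assumptions; the labels actually emitted may differ.

```yaml
# Hypothetical Prometheus scrape job using file-based service discovery.
# The discovery process would write JSON target files such as:
#   [{"targets": ["<device-address>:9100"], "labels": {"device_name": "my-device"}}]
scrape_configs:
  - job_name: balena-fleet
    file_sd_configs:
      - files:
          - /targets/*.json   # assumed path where the discovery output is written
```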
The real power of PromQL (Prometheus' query language) comes when filtering by labels (tags), the metadata attached to
different time series. Since `baletheus` by default exposes a bevy of labels to Prometheus, it is trivial to begin
dissecting your data by commit, OS version, or device type. This feature will allow you to track changes side by side and be
more confident than ever when promoting a new OS version or code release to production.
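As a hypothetical example (the exact label names depend on what `baletheus` exposes in your deployment), a query like the following breaks CPU usage out per device, so two OS versions or code releases can be compared side by side:

```promql
# Average non-idle CPU over the last 5 minutes, grouped by an assumed device_name label
avg by (device_name) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))
```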
At this point, you should be able to monitor any number of (single) exporters and create beautiful graphs and visualizations for those devices/exporters/applications. This tutorial is just the tip of the iceberg; Grafana and Prometheus both have incredibly active communities that are evolving every day. Some other things potentially worth investigating (though mostly outside the scope of this tutorial):
- Instrument your own code and export only the metrics you see fit
- Configure and manage alerting via Alertmanager
- Monitor usage of various cloud providers via Grafana (Hetzner Cloud, AWS CloudWatch, GitHub)
- Lock down Grafana to authenticate via a third-party authentication provider
- Connect Grafana to other datasources via plugins (PagerDuty, Datadog, Sensu)
Grafana and Prometheus are both fairly robust, resource-intensive applications. While it is possible to deploy a full monitoring stack following the instructions above, if you have any data retention requirements we recommend either streaming the time series to a persistent backend or deploying the stack directly in the cloud for any production deployment. Prometheus makes use of persistent storage (which can shorten the life of some media like SD cards), whereas Grafana should be fully configurable up front via provisioning.
This tutorial has been adjusted to make Grafana as lightweight as possible to run on an edge device. Since this tutorial attempts to minimize disk writes, upon every subsequent deploy the admin password will need to be reset.
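If resetting the admin password on every deploy becomes tiresome, one common workaround (not specific to this project) is to set Grafana's admin credentials through its standard environment variables, for example via the compose file or a balena service variable:

```yaml
# Hypothetical snippet: configure the Grafana admin credentials via environment
# variables in docker-compose.yml instead of relying on state persisted on disk.
services:
  grafana:
    environment:
      GF_SECURITY_ADMIN_USER: admin
      GF_SECURITY_ADMIN_PASSWORD: change-me   # set this via a balena service variable in practice
```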
Alternatively, feel free to configure a more persistent storage medium. One of the niceties of a pull-based monitoring system is that you can redeploy the same stack in multiple places without reconfiguring the clients, saving the headache of changing the whole fleet. Tell us how you monitor your own stack & fleet in the forums!
- Pull-based monitoring system and time series database
- Visualization platform for time series data
- Prometheus-project secondary component that handles alerting
- Supported mechanism to add new scrape targets to Prometheus backend
- Process that runs alongside an application, aggregating data and exporting when scraped by Prometheus
- Sidecar process that runs alongside an application and returns metrics describing the state of the application