Introduction
Once you have your first Flows deployed, you may want to integrate the Crosser Node and its Flows into your existing monitoring solution.
In this article we describe the available options and how to use the provided interfaces to integrate with your monitoring solution.
Local log files
The Node's logs are stored locally and roll over once per day, provided they do not reach the size limit first. A timestamp is added to the log file name to make each file easy to identify.
You can find the logs in the installation folder of your Node under ./data/logs.
There is one log per process running on the device: one log for the Node itself (./data/logs/host) and separate logs for each Flow (./data/logs/runtime/<Flow ID>). If you have run Remote Sessions on that Node at some point, you will also see a folder ./data/logs/runtime/remotesession/<Flow ID>.
Since you most likely do not want to integrate these logs into your monitoring solution, we leave this folder out of scope.
Once you have located the logs, you can use third-party tools such as Datadog or Filebeat to collect and ship them to your monitoring solution.
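If you prefer a quick custom solution instead of a full agent, a small script can do the same job. Below is a minimal sketch in Python that tails all log files under ./data/logs and hands new lines to a placeholder forward() function. The .log extension, the polling interval and the forward() target are assumptions; adjust them to match the actual files on your Node and your monitoring solution's ingestion interface.

```python
# Minimal sketch: tail the Node and Flow logs under ./data/logs
# and forward new lines to a monitoring solution.
import time
from pathlib import Path

LOG_ROOT = Path("./data/logs")   # Node installation folder (see above)
offsets = {}                     # remembers how far each file has been read

def forward(source: str, line: str) -> None:
    # Placeholder: replace with an upload to your monitoring solution
    # (e.g. an HTTP ingestion endpoint or a syslog forwarder).
    print(f"[{source}] {line}")

while True:
    # "*.log" is an assumption; adjust the pattern to the actual file names.
    for log_file in LOG_ROOT.rglob("*.log"):
        pos = offsets.get(log_file, 0)
        with open(log_file, "rb") as fh:   # binary mode keeps tell() reliable
            fh.seek(pos)
            for raw in fh:
                forward(str(log_file), raw.decode("utf-8", errors="replace").rstrip("\r\n"))
            offsets[log_file] = fh.tell()
    time.sleep(5)                          # poll interval; tune as needed
```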
Node's API
The Node offers a local API (version 2.5.x and above) that you can use to extract information about the Node, its Flows and Modules.
Documentation for this API is built into the Node. The API is exposed on port 9191 (default) for both Windows and Docker installations.
Assuming the port is exposed and not blocked by a firewall, you should be able to access your Node's local API and its documentation on port 9191, e.g. http://<Your Node IP>:9191/swagger/index.html
You should be able to access this Web UI with any browser:

Once you have confirmed that the API is accessible, review the endpoints and decide which information you would like to extract.
You can run API calls directly from the Web UI to see how each endpoint works.
From there, it is up to you how you utilize the API.
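As a starting point, here is a minimal Python sketch that queries the local API and prints the response. The endpoint path /api/flows and the IP address are hypothetical placeholders; look up the real endpoint paths in the Swagger documentation mentioned above.

```python
# Minimal sketch: query the Node's local API and inspect the result.
import json
import urllib.request

NODE_IP = "192.168.1.10"                        # your Node's IP address
# "/api/flows" is a hypothetical path; check the Swagger UI for real endpoints.
ENDPOINT = f"http://{NODE_IP}:9191/api/flows"

with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
    data = json.loads(resp.read().decode("utf-8"))

print(json.dumps(data, indent=2))               # see what the Node reports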
Consume the API from external systems
If your monitoring solution has network access to the Node's local API, it can call the endpoints directly (provided your monitoring system supports consuming REST APIs).
This approach is common in environments where customers run their monitoring solution on-premises, or deploy monitoring agents that are capable of consuming APIs.
Create a monitoring flow
Another way to consume the local API is to create a Flow that calls the API itself. Simply use the HTTP Request module to call the API and extract the relevant information with the low-code approach. You can then react to certain events and send notifications to external brokers, APIs or HTTP endpoints, or notify a user with the extracted, relevant information.
With this setup you push the data out to external systems. This is useful if your monitoring solution cannot reach the Node's local API, but the monitoring system's own endpoint is exposed, so the HTTP Request module can push data to it. You can deploy this Flow on the same Node that runs your other Flows.
The downside of this implementation is that if the entire Node fails for some reason, your monitoring Flow dies with it.
To get around that, consider setting up a dedicated Node that only runs monitoring Flows in a separate environment.
This setup can also help bridge network gaps between the Nodes (production network) and a monitoring system (office network/cloud).
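To make the pattern concrete, here is the same logic expressed as a minimal Python sketch rather than a low-code Flow: poll the local API, check a condition, and push a notification to an external endpoint. All URLs, the /api/flows path and the status field are hypothetical placeholders; in Crosser you would build this with HTTP Request modules instead.

```python
# Minimal sketch of the monitoring-flow pattern: poll the Node's local API,
# evaluate a condition, and push a notification to an external system.
import json
import time
import urllib.request

NODE_API = "http://localhost:9191/api/flows"              # hypothetical; see Swagger UI
MONITORING_URL = "https://monitoring.example.com/ingest"  # your monitoring system's endpoint

def push(payload: dict) -> None:
    # POST the event as JSON to the monitoring system.
    req = urllib.request.Request(
        MONITORING_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)

while True:
    with urllib.request.urlopen(NODE_API, timeout=10) as resp:
        flows = json.loads(resp.read().decode("utf-8"))
    for flow in flows:                          # assumes the API returns a list of flows
        if flow.get("status") != "Running":     # "status" is a hypothetical field name
            push({"event": "flow_not_running", "flow": flow})
    time.sleep(60)                              # poll once per minute
```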
MQTT (upcoming in Node Version 2.6.x)
In the upcoming release, the Node will also publish events on its local MQTT broker, enabling an event-driven integration with monitoring systems.
Instead of polling the API every now and then, you will be notified whenever something happens on the Node or its Flows.
More details will be published with the new release.
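As a preview of what an event-driven consumer could look like, here is a minimal Python sketch using the paho-mqtt client (1.x callback style). Since the topic structure has not been published yet, the sketch subscribes to a wildcard placeholder; check the 2.6.x release notes for the actual topics and broker port.

```python
# Minimal sketch: subscribe to events on the Node's local MQTT broker.
# Topic names and broker port are placeholders until the 2.6.x release
# documents them. Uses the paho-mqtt 1.x callback API.
import paho.mqtt.client as mqtt

NODE_IP = "192.168.1.10"   # your Node's IP address
TOPIC = "#"                # placeholder: subscribe to everything until topics are documented

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Forward the event to your monitoring solution here.
    print(f"{msg.topic}: {msg.payload.decode('utf-8', errors='replace')}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(NODE_IP, 1883)   # default MQTT port; confirm for your installation
client.loop_forever()
```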