I have put together a simple demo of an analysis and monitoring engine for capturing and analyzing data from reputation-simulations. It is adapted from the singnet/offernet project and is based on the Elastic Stack. To make it work, one has to change the way logging is done (e.g. add a real timestamp and the simulation's UUID to each line of the log, so that entries can be discriminated when querying the database). All changes are hosted on a private fork of this repo (https://github.com/kabirkbr/reputation-simulation/tree/master/monitoring-engine).
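A minimal sketch of the logging change described above, assuming Python's standard logging module; the log file name, logger name, and message are illustrative, and the per-run UUID is generated once at startup:

```python
import logging
import uuid

# Illustrative: tag every log line with an ISO-8601 timestamp and a
# per-run simulation UUID so runs can be discriminated in the database.
SIM_ID = uuid.uuid4().hex  # generated once per simulation run

formatter = logging.Formatter(
    fmt="%(asctime)s " + SIM_ID + " %(levelname)s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S%z",
)
handler = logging.FileHandler("simulation.log")
handler.setFormatter(formatter)

log = logging.getLogger("reputation-simulation")
log.setLevel(logging.INFO)
log.addHandler(handler)

# Hypothetical event; the real simulation would log its own event types.
log.info("agent %s rated agent %s with score %s", "a1", "a2", 0.8)
```

Every line then carries both a parseable timestamp and the run's UUID, which is exactly what the downstream pipeline keys on.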
In short, the engine works as follows:
- Each new line of a log file produced by a running simulation is turned into an event by the Elastic Filebeat module, installed and properly configured on the computer that runs the simulation. In principle there is no limit on how many Filebeats and separate servers can be connected into one monitoring cluster; for the demo we of course use one.
- Events are then sent over the network to the Elastic Logstash engine for collecting and parsing. One has to write pipelines to transform lines from log files into properly formatted events, which can then be indexed into the database.
- After being processed by Logstash pipelines, events are streamed into the Elasticsearch database for indexing and storage.
- It is then possible to connect to the running Elasticsearch instance via its REST API, for which clients are available for the major programming languages as well as for monitoring and visualization products. I am using Elastic Kibana here and used R in the offernet project, but there is no limit on that.
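The Filebeat-to-Logstash-to-Elasticsearch steps above could look, schematically, like the following pipeline configuration; the field names, port, index name, and line format are assumptions for illustration, not the actual setup in the fork:

```
# Hypothetical Logstash pipeline sketch: receive Filebeat events, parse
# each log line into fields, and index the result into Elasticsearch.
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    # assumes lines like: 2019-05-01T12:00:00+0000 <sim_uuid> INFO <message>
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{NOTSPACE:sim_id} %{LOGLEVEL:level} %{GREEDYDATA:event}" }
  }
  date {
    match => [ "ts", "ISO8601" ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "reputation-simulation-%{+YYYY.MM.dd}"
  }
}
```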
E.g. the graphs here are produced by directly querying the Elasticsearch database from Rmd scripts.
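In the same spirit, here is a minimal Python sketch of such a direct query over the REST API; the index name (`reputation-simulation`) and the field names (`sim_id`, `@timestamp`) are assumptions, not the actual schema:

```python
import json

def build_events_query(sim_id, size=100):
    """Build a search body selecting all events of one simulation run,
    oldest first. Field names (sim_id, @timestamp) are illustrative."""
    return {
        "query": {"term": {"sim_id": sim_id}},
        "sort": [{"@timestamp": {"order": "asc"}}],
        "size": size,
    }

# Against a running instance this body would be POSTed to the _search
# endpoint, e.g.:
#   import requests
#   r = requests.post("http://localhost:9200/reputation-simulation/_search",
#                     json=build_events_query("demo-sim"))
print(json.dumps(build_events_query("demo-sim"), indent=2))
```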
Here is a quick video for a visual impression.
Note that in order to leverage this analysis and monitoring system, the logic of how logging is done in the reputation system has to be changed a bit. The idea is that log files should contain data at the most granular level: every event that happens in the system, together with its associated information. Since events are streamed into the database, aggregation and analysis happen there. It is perfectly possible to also connect the logging of the reputation system itself (whatever logs Aigents.jar or the Python version produces). Having everything in one database makes it possible to combine events from different subsystems of a distributed/decentralized system, aggregate across simulations, etc.
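For instance, a cross-simulation aggregation could be expressed as a single Elasticsearch query. The sketch below only builds the request body; the field names (`sim_id`, `level`) are assumptions carried over from the illustrative log format above:

```python
def build_per_simulation_aggs(max_runs=50):
    """Aggregation body: event counts per simulation run, split by log
    level. Field names are illustrative; size=0 skips individual hits."""
    return {
        "size": 0,
        "aggs": {
            "by_simulation": {
                "terms": {"field": "sim_id", "size": max_runs},
                "aggs": {
                    "by_level": {"terms": {"field": "level"}},
                },
            }
        },
    }

print(build_per_simulation_aggs())
```

Posting this to the `_search` endpoint would return, in one round trip, a bucket per simulation run with nested per-level counts, which is the kind of cross-run aggregation described above.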