EAR includes several report plugins that are used to send data to various services. Among them is a CSV plugin that writes data in the same format as `eacct`'s CSV option (see `eacct`), with an added column for the timestamp.

The Prometheus plugin has only one dependency, microhttpd. To be able to compile it, make sure that it is in your `LD_LIBRARY_PATH`.
Currently, to compile and install the Prometheus plugin one has to run the following command.
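The invocation is along these lines (the target name is an assumption; check `src/report/Makefile` for the actual target):

```
# Build and install the Prometheus report plugin (target name assumed).
make -C src/report prometheus install
```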
With that, the plugin will be correctly placed in the usual folder.
Due to the way Prometheus works, this plugin is designed to be used by the EAR Daemons (EARDs), although the EARDBD should not have many issues running it either.
To have it running in the daemons, simply add it to the corresponding line in the [configuration file](Configuration).
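A minimal sketch of the relevant `ear.conf` line (the plugin file name `prometheus.so` is an assumption):

```
EARDReportPlugins=prometheus.so
```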
This will expose the metrics on each node through a small HTTP server. You can access them normally through a browser at port 9011 (fixed for now).
In Prometheus, simply add the nodes you want to scrape to `prometheus.yml` with port 9011. Make sure that the scrape interval is equal to or shorter than the insertion time (`NodeDaemonPowermonFreq` in `ear.conf`), since metrics only stay on the page for that duration.
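A minimal `prometheus.yml` scrape job might look like this (node names are placeholders):

```
scrape_configs:
  - job_name: 'ear'
    scrape_interval: 30s   # keep this <= NodeDaemonPowermonFreq in ear.conf
    static_configs:
      - targets: ['node001:9011', 'node002:9011']
```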
ExaMon (Exascale Monitoring) is a lightweight monitoring framework for supporting accurate monitoring of power/energy/thermal and architectural parameters in distributed and large-scale high-performance computing installations.
To compile the ExaMon plugin you need a working ExaMon installation. Modify the main `Makefile` and set `FEAT_EXAMON=1`. In `src/report/Makefile`, update `EXAMON_BASE` with the path to the current ExaMon installation. Finally, place an `examon.conf` file somewhere in your installation, and modify `src/report/examon.c` (line 83, variable `char* conffile = "/hpc/opt/ear/etc/ear/examon.conf"`) to point to the new `examon.conf` file.
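Concretely, the two Makefile settings look like this (the ExaMon path below is just an example):

```
# Top-level Makefile: enable the ExaMon report plugin
FEAT_EXAMON = 1

# src/report/Makefile: path to your ExaMon installation (example path)
EXAMON_BASE = /opt/examon
```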
The file should look like this (a minimal sketch; the exact keys are assumptions and may differ in your version):
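```
# Sketch of examon.conf -- key names are assumptions; the plugin needs to
# reach ExaMon's MQTT data broker running on the node.
brokerhost = hostip
brokerport = 1883
```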
where `hostip` is the actual IP of the node.
Once that is set up, you can compile EAR normally and the plugin will be installed in the `lib/plugins/report` folder inside EAR's installation. To activate it, set it as one of the values in the `EARDReportPlugins` field of `ear.conf` and restart the EARD.
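Since `EARDReportPlugins` takes a comma-separated list, the plugin is appended to whatever is already configured (the plugin file names below are assumptions):

```
EARDReportPlugins=eard.so,examon.so
```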
The plugin is designed to be used locally in each node (EARD level) together with ExaMon's data broker.
The Data Center Data Base (DCDB) is a modular, continuous, and holistic monitoring framework targeted at HPC environments.
This plugin implements the functions to report periodic metrics, report loops, and report events.
When the DCDB plugin is loaded, the EAR data collected for each report type are stored in a shared memory region. This region is read by the DCDB `ear` sensor (a report plugin implemented on the DCDB side), which collects the data and pushes them into the database using MQTT messages.
This plugin is automatically installed with the default EAR installation. To activate it, set it as one of the values in the `EARDReportPlugins` field of `ear.conf` and restart the EARD.
The plugin is designed to be used locally in each node (EARD level) with the DCDB collect agent.
This is a new report plugin that writes the data collected by EAR into files. A single file is generated per metric, per jobID and stepID, per node, per island, per cluster. Only the most recently collected value of each metric is stored in the files: every time the report runs, it overwrites the previous data with the currently collected values.
The metric files are created following a schema along these lines, inferred from the per-GPU variant shown in the note at the end of this section:
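```
/root_directory/cluster/island/jobs/jobID/stepID/nodename/current/metricFile
/root_directory/cluster/island/jobs/jobID/stepID/nodename/avg/metricFile
```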
The `root_directory` is the default path under which all the metric files are generated. The `cluster`, `island`, and `nodename` components are replaced by the cluster name, island number, and node name, respectively, and `metricFile` is replaced by the name of the metric collected by EAR.
The naming format used for the metric files follows the standard sysfs interface format. The commonly used file-naming schema is:
<type>_<component>_<metric-name>_<unit>
Numbering is used in some metric files when the component has more than one instance, such as FLOPS counters or GPU data.
Examples of generated metric file names (the names below are hypothetical illustrations of the schema; the real names depend on the metrics EAR collects):
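```
gauge_cpu_frequency_khz    # <type>_<component>_<metric-name>_<unit>
gauge_gpu0_power_watts     # numbering for multi-instance components (GPU 0)
counter_flops0_total_ops   # FLOPS counter instance 0
```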
The following are the reported values for each type of metric recorded by EAR:
Note: If the cluster contains GPUs, both `report_loops` and `report_applications` will generate new schema files per GPU, which contain all the collected data for each GPU, with the paths below:

- `/root_directory/cluster/island/jobs/jobID/stepID/nodename/current/GPU-ID/metricFile`
- `/root_directory/cluster/island/jobs/jobID/stepID/nodename/avg/GPU-ID/metricFile`