Metrics
AMD Inference Server exposes metrics using Prometheus, allowing users to monitor the server's state in real time.
Quickstart
The easiest way to view the collected metrics is to run the Prometheus binary inside the container while an instrumented application is running. In the container:
$ cd /tmp
$ wget https://github.com/prometheus/prometheus/releases/download/v2.30.1/prometheus-2.30.1.linux-amd64.tar.gz
$ tar -xzf prometheus-2.30.1.linux-amd64.tar.gz
$ cd /tmp/prometheus-2.30.1.linux-amd64
$ cp $PROTEUS_ROOT/src/proteus/observation/prometheus.yml .
$ ./prometheus
By default, Prometheus uses a configuration file named prometheus.yml in the current directory. A sample prometheus.yml is provided with AMD Inference Server and can be used as-is or modified as needed. Documentation about the additional options for this file is available online.
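For orientation, a minimal Prometheus configuration for scraping a single server might look like the sketch below. The job name and the scrape target address are assumptions for illustration; the provided sample file and your server's actual metrics port should be treated as authoritative.

```yaml
# Minimal Prometheus configuration sketch (values are assumptions, not defaults).
global:
  scrape_interval: 15s            # how often Prometheus scrapes each target

scrape_configs:
  - job_name: "amd-inference-server"   # hypothetical job name
    static_configs:
      # Hypothetical address; replace with the host:port where your
      # instrumented application actually exposes its metrics.
      - targets: ["localhost:8998"]
```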
Once the prometheus executable is running, start your instrumented application. The collected metrics can then be viewed, queried, and graphed in Prometheus's browser interface, available by default at localhost:9090.
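As a starting point in the expression browser, a couple of generic queries are sketched below. `up` is a standard metric Prometheus records for every scrape target, and `prometheus_http_requests_total` is one of Prometheus's own built-in metrics; the server-specific metric names exposed by AMD Inference Server are not listed here and would need to be discovered from the running server's metrics endpoint.

```promql
# 1 if the scrape target is reachable, 0 otherwise
up

# per-second rate of HTTP requests to Prometheus itself over the last 5 minutes
rate(prometheus_http_requests_total[5m])
```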