Container Monitor with Prometheus
Container Monitor is a Prometheus-compatible interface to performance metrics for all your instances on Triton.
Container Monitor allows you to use the Prometheus-compatible ecosystem of monitoring solutions to visualize the status of your applications and alert when your performance thresholds are crossed. Learn more about Prometheus, an open source application that can read Container Monitor metrics, at prometheus.io.
Any solution that can read a Prometheus-compatible metrics endpoint can use Container Monitor, but the following configuration documentation is written for Prometheus itself.
If you haven't already, add an SSH key to your Triton account. You can choose to upload a key, or we'll make one for you. This key is used to authenticate you with all of your account's containers and Joyent's APIs; Container Monitor uses it to identify you and authorize your access to the Container Monitor interfaces.
Container Monitor exposes a metrics endpoint for every instance in your account. Rather than manually configuring (and reconfiguring) Prometheus for every instance, you can use Prometheus's built-in Triton service discovery (triton_sd_config) to automate it.
The Triton configuration block in the prometheus.yml file looks like the following:
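Below is a minimal sketch of that block. The triton_sd_configs section is Prometheus's built-in Triton service discovery; the account name, certificate and key paths, and the us-east-1 CMON hostname are placeholders to substitute with values for your account and data center.

```yaml
scrape_configs:
  - job_name: triton
    scheme: https
    tls_config:
      # Client certificate and key generated from your account's SSH key
      # (paths are placeholders)
      cert_file: /etc/prometheus/prometheus_cert.pem
      key_file: /etc/prometheus/prometheus_key.pem
    triton_sd_configs:
      - account: 'your_triton_login'              # your Triton account name
        dns_suffix: 'cmon.us-east-1.triton.zone'  # placeholder data center
        endpoint: 'cmon.us-east-1.triton.zone'
        version: 1
```

With this in place, Prometheus asks Container Monitor for the list of your instances and scrapes each one automatically.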
You must also enable Triton CNS in order to use Container Monitor. All new containers automatically get a Container Monitor CNAME record, and CNAME records for proxies are added and removed as the proxies come and go.
Each CNAME record represents a virtual Prometheus endpoint backed by a proxy.
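If CNS is not yet enabled for your account, a minimal sketch using the triton CLI (assuming the node-triton tools are installed and configured against your account):

```sh
# Enable Triton CNS account-wide; new instances then receive
# Container Monitor CNAME records automatically.
triton account update triton_cns_enabled=true
```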
To retrieve the list of containers that a Prometheus server can scrape for metrics, query the discover endpoint:
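A minimal sketch using curl, assuming the us-east-1 CMON endpoint and a client certificate/key pair generated from your account's SSH key (paths and region are placeholders):

```sh
# List the instances available for scraping via CMON's discover endpoint
curl --cert ./prometheus_cert.pem --key ./prometheus_key.pem \
    "https://cmon.us-east-1.triton.zone:9163/v1/discover"
```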
The following metrics are all available from a single endpoint per instance, to help you learn more about your containers (an example scrape follows the list):
cpu_user_usage: User CPU utilization in nanoseconds
cpu_sys_usage: System CPU usage in nanoseconds
cpu_wait_time: CPU wait time in nanoseconds
load_average: Load average
mem_agg_usage: Aggregate memory usage in bytes
mem_limit: Memory limit in bytes
mem_swap: Swap in bytes
mem_swap_limit: Swap limit in bytes
mem_anon_alloc_fail: Anonymous allocation failure count
net_agg_packets_in: Aggregate inbound packets
net_agg_packets_out: Aggregate outbound packets
net_agg_bytes_in: Aggregate inbound bytes
net_agg_bytes_out: Aggregate outbound bytes
tcp_failed_connection_attempt_count: Failed TCP connection attempts
tcp_retransmitted_segment_count: Retransmitted TCP segments
tcp_duplicate_ack_count: Duplicate TCP ACKs
tcp_listen_drop_count: TCP listen drops (connections refused because the backlog is full)
tcp_listen_drop_Qzero_count: Connections refused because the half-open queue (q0) is full
tcp_half_open_drop_count: TCP connections dropped from a full half-open queue
tcp_retransmit_timeout_drop_count: TCP connections dropped due to retransmit timeout
tcp_active_open_count: TCP active open connections
tcp_passive_open_count: TCP passive open connections
tcp_current_established_connections_total: Total established TCP connections
vfs_bytes_read_count: VFS bytes read
vfs_bytes_written_count: VFS bytes written
vfs_read_operation_count: VFS read operations
vfs_write_operation_count: VFS write operations
vfs_wait_time_count: VFS cumulative wait (pre-service) time
vfs_wait_length_time_count: VFS cumulative wait length*time product
vfs_run_time_count: VFS cumulative run (service) time
vfs_run_length_time_count: VFS cumulative run length*time product
vfs_elements_wait_state: VFS elements in wait state
vfs_elements_run_state: VFS elements in run state
zfs_used: ZFS space used in bytes
zfs_available: ZFS space available in bytes
time_of_day: System time in seconds since the epoch
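To see these metrics for a single instance, you can scrape its endpoint directly. A sketch, again assuming the us-east-1 data center and placeholder certificate paths; the instance UUID is hypothetical:

```sh
# Fetch one instance's metrics in the Prometheus exposition format
curl --cert ./prometheus_cert.pem --key ./prometheus_key.pem \
    "https://<instance_uuid>.cmon.us-east-1.triton.zone:9163/metrics"
```

Each metric is returned as HELP/TYPE comment lines followed by the current value, which any Prometheus-compatible scraper can parse.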
For users running pre-existing Prometheus servers, the suggested service discovery mechanism is to leverage Prometheus's file-based service discovery (file_sd_config) in conjunction with our discover endpoint. That way the servers will have an equivalent experience to CloudAPI-based discovery without having to upgrade their Prometheus installation.
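A minimal sketch of that arrangement, assuming a script (not shown) periodically polls the discover endpoint and writes the results to a target file at a placeholder path:

```yaml
scrape_configs:
  - job_name: triton
    scheme: https
    tls_config:
      cert_file: /etc/prometheus/prometheus_cert.pem  # placeholder paths
      key_file: /etc/prometheus/prometheus_key.pem
    file_sd_configs:
      - files:
          - /etc/prometheus/triton-targets.yml  # kept current by your discovery script
        refresh_interval: 60s
# triton-targets.yml would contain entries such as:
# - targets:
#     - '<instance_uuid>.cmon.us-east-1.triton.zone:9163'
```

Prometheus re-reads the target file whenever it changes, so older installations get the same effect as built-in Triton discovery.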