Alertmanager

The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.

Install

There are various ways of installing Alertmanager.

Precompiled binaries

Precompiled binaries for released versions are available in the download section on prometheus.io. Using the latest production release binary is the recommended way of installing Alertmanager.
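For example, to download and run a release binary (the v0.13.0 Linux/amd64 tarball is shown; the asset name follows the release naming on GitHub, so adjust the version and platform to your needs):

$ wget https://github.com/prometheus/alertmanager/releases/download/v0.13.0/alertmanager-0.13.0.linux-amd64.tar.gz
$ tar xvfz alertmanager-0.13.0.linux-amd64.tar.gz
$ cd alertmanager-0.13.0.linux-amd64
$ ./alertmanager --config.file=<your_file>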

Docker images

Docker images are available on Quay.io.
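As a minimal sketch of running the image (the in-container config path is an assumption; check the image's entrypoint for the exact default):

$ docker run -p 9093:9093 \
    -v $(pwd)/alertmanager.yml:/etc/alertmanager/config.yml \
    quay.io/prometheus/alertmanager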

Compiling the binary

You can either go get it:

$ GO15VENDOREXPERIMENT=1 go get github.com/prometheus/alertmanager/cmd/...
# cd $GOPATH/src/github.com/prometheus/alertmanager
$ alertmanager --config.file=<your_file>

Or checkout the source code and build manually:

$ mkdir -p $GOPATH/src/github.com/prometheus
$ cd $GOPATH/src/github.com/prometheus
$ git clone https://github.com/prometheus/alertmanager.git
$ cd alertmanager
$ make build
$ ./alertmanager --config.file=<your_file>

You can also build just one of the binaries in this repo by passing a name to the build function:

$ make build BINARIES=amtool

Example

This is an example configuration that should cover most relevant aspects of the new YAML configuration format. The full configuration documentation can be found on prometheus.io.

global:
  # The smarthost and SMTP sender used for mail notifications.
  smtp_smarthost: 'localhost:25'
  smtp_from: 'alertmanager@example.org'

# The root route on which each incoming alert enters.
route:
  # The root route must not have any matchers as it is the entry point for
  # all alerts. It needs to have a receiver configured so alerts that do not
  # match any of the sub-routes are sent to someone.
  receiver: 'team-X-mails'

  # The labels by which incoming alerts are grouped together. For example,
  # multiple alerts coming in for cluster=A and alertname=LatencyHigh would
  # be batched into a single group.
  group_by: ['alertname', 'cluster']

  # When a new group of alerts is created by an incoming alert, wait at
  # least 'group_wait' to send the initial notification.
  # This ensures that multiple alerts for the same group that start firing
  # shortly after one another are batched together in the first
  # notification.
  group_wait: 30s

  # Once the first notification has been sent, wait 'group_interval' before
  # sending a batch of new alerts that started firing for that group.
  group_interval: 5m

  # If an alert has successfully been sent, wait 'repeat_interval' before
  # resending it.
  repeat_interval: 3h

  # All the above attributes are inherited by all child routes and can be
  # overwritten on each.

  # The child route trees.
  routes:
  # This route performs a regular expression match on alert labels to
  # catch alerts that are related to a list of services.
  - match_re:
      service: ^(foo1|foo2|baz)$
    receiver: team-X-mails

    # The service has a sub-route for critical alerts; any alerts that do
    # not match it, i.e. severity != critical, fall back to the parent node
    # and are sent to 'team-X-mails'.
    routes:
    - match:
        severity: critical
      receiver: team-X-pager

  - match:
      service: files
    receiver: team-Y-mails

    routes:
    - match:
        severity: critical
      receiver: team-Y-pager

  # This route handles all alerts coming from a database service. If there's
  # no team to handle it, it defaults to the DB team.
  - match:
      service: database

    receiver: team-DB-pager
    # Also group alerts by affected database.
    group_by: [alertname, cluster, database]

    routes:
    - match:
        owner: team-X
      receiver: team-X-pager

    - match:
        owner: team-Y
      receiver: team-Y-pager


# Inhibition rules allow muting a set of alerts when another alert is
# firing.
# We use this to mute any warning-level notifications if the same alert is
# already critical.
inhibit_rules:
- source_match:
    severity: 'critical'
  target_match:
    severity: 'warning'
  # Apply inhibition if the alertname is the same.
  equal: ['alertname']
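  # For example, with this rule, a firing alert with labels
  # alertname="LatencyHigh" and severity="critical" mutes notifications for
  # an alert with alertname="LatencyHigh" and severity="warning".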


receivers:
- name: 'team-X-mails'
  email_configs:
  - to: 'team-X+alerts@example.org, team-Y+alerts@example.org'

- name: 'team-X-pager'
  email_configs:
  - to: 'team-X+alerts-critical@example.org'
  pagerduty_configs:
  - routing_key: <team-X-key>

- name: 'team-Y-mails'
  email_configs:
  - to: 'team-Y+alerts@example.org'

- name: 'team-Y-pager'
  pagerduty_configs:
  - routing_key: <team-Y-key>

- name: 'team-DB-pager'
  pagerduty_configs:
  - routing_key: <team-DB-key>

Amtool

amtool is a CLI tool for interacting with the Alertmanager API. It is bundled with all releases of Alertmanager.

Install

Alternatively, you can install it with:

go get github.com/prometheus/alertmanager/cmd/amtool

Examples

View all currently firing alerts

$ amtool alert
Alertname        Starts At                Summary
Test_Alert       2017-08-02 18:30:18 UTC  This is a testing alert!
Test_Alert       2017-08-02 18:30:18 UTC  This is a testing alert!
Check_Foo_Fails  2017-08-02 18:30:18 UTC  This is a testing alert!
Check_Foo_Fails  2017-08-02 18:30:18 UTC  This is a testing alert!

View all currently firing alerts with extended output

$ amtool -o extended alert
Labels                                        Annotations                                                    Starts At                Ends At                  Generator URL
alertname="Test_Alert" instance="node0"       link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local
alertname="Test_Alert" instance="node1"       link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local
alertname="Check_Foo_Fails" instance="node0"  link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local
alertname="Check_Foo_Fails" instance="node1"  link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local

In addition to viewing alerts, you can use the rich query syntax provided by Alertmanager:

$ amtool -o extended alert query alertname="Test_Alert"
Labels                                   Annotations                                                    Starts At                Ends At                  Generator URL
alertname="Test_Alert" instance="node0"  link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local
alertname="Test_Alert" instance="node1"  link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local

$ amtool -o extended alert query instance=~".+1"
Labels                                        Annotations                                                    Starts At                Ends At                  Generator URL
alertname="Test_Alert" instance="node1"       link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local
alertname="Check_Foo_Fails" instance="node1"  link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local

$ amtool -o extended alert query alertname=~"Test.*" instance=~".+1"
Labels                                   Annotations                                                    Starts At                Ends At                  Generator URL
alertname="Test_Alert" instance="node1"  link="https://example.com" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  http://my.testing.script.local

Silence an alert

$ amtool silence add alertname=Test_Alert
b3ede22e-ca14-4aa0-932c-ca2f3445f926

$ amtool silence add alertname="Test_Alert" instance=~".+0"
e48cb58a-0b17-49ba-b734-3585139b1d25
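A comment and author can be attached when creating a silence. The flag names below are an assumption for this version; check amtool silence add --help for the exact spelling:

# NOTE: flag names are illustrative; verify with `amtool silence add --help`
$ amtool silence add --author="kellel" --comment="Scheduled maintenance" alertname=Test_Alert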

View silences

$ amtool silence query
ID                                    Matchers              Ends At                  Created By  Comment
b3ede22e-ca14-4aa0-932c-ca2f3445f926  alertname=Test_Alert  2017-08-02 19:54:50 UTC  kellel

$ amtool silence query instance=~".+0"
ID                                    Matchers                            Ends At                  Created By  Comment
e48cb58a-0b17-49ba-b734-3585139b1d25  alertname=Test_Alert instance=~.+0  2017-08-02 22:41:39 UTC  kellel

Expire a silence

$ amtool silence expire b3ede22e-ca14-4aa0-932c-ca2f3445f926

Expire all silences matching a query

$ amtool silence query instance=~".+0"
ID                                    Matchers                            Ends At                  Created By  Comment
e48cb58a-0b17-49ba-b734-3585139b1d25  alertname=Test_Alert instance=~.+0  2017-08-02 22:41:39 UTC  kellel

$ amtool silence expire $(amtool silence -q query instance=~".+0")

$ amtool silence query instance=~".+0"

Expire all silences

$ amtool silence expire $(amtool silence query -q)

Config

Amtool allows a config file to specify some options for convenience. The default config file paths are $HOME/.config/amtool/config.yml or /etc/amtool/config.yml.

An example config file might look like the following:

# Define the address at which amtool can reach your `alertmanager` instance
alertmanager.url: "http://localhost:9093"

# Override the default author. (unset defaults to your username)
author: me@example.com

# Force amtool to give you an error if you don't include a comment on a silence
comment_required: true

# Set a default output format. (unset defaults to simple)
output: extended

High Availability

Warning: High Availability is under active development

To create a highly available cluster of Alertmanagers, the instances need to be configured to communicate with each other. This is done using the --mesh.* flags.

  • --mesh.peer-id string: mesh peer ID (default "<hardware-mac-address>")
  • --mesh.listen-address string: mesh listen address (default "0.0.0.0:6783")
  • --mesh.nickname string: mesh peer nickname (default "<machine-hostname>")
  • --mesh.peer value: initial peers (repeat flag for each additional peer)

The --mesh.peer-id flag is used as a unique ID among the peers. It defaults to the MAC address, so the default value is typically a good choice.

The same applies to the --mesh.nickname flag, which defaults to the hostname.

The port chosen in the --mesh.listen-address flag is the port that the other peers must specify in their --mesh.peer flags.
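As a minimal sketch, two peers on one machine could be wired together like this (the ports, storage paths, and web listen addresses are illustrative):

# First peer (web UI on :9093, mesh on :6783)
$ alertmanager --config.file=config.yml --storage.path=/tmp/am1 \
    --web.listen-address=:9093 --mesh.listen-address=0.0.0.0:6783

# Second peer, pointing at the first via --mesh.peer
$ alertmanager --config.file=config.yml --storage.path=/tmp/am2 \
    --web.listen-address=:9094 --mesh.listen-address=0.0.0.0:6784 \
    --mesh.peer=127.0.0.1:6783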

To start a cluster of three peers on your local machine, use goreman and the Procfile within this repository.

goreman start

To point your Prometheus (1.4 or later) instance to multiple Alertmanagers, configure them in your prometheus.yml configuration file, for example:

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - alertmanager1:9093
      - alertmanager2:9093
      - alertmanager3:9093

Important: Do not load balance traffic between Prometheus and its Alertmanagers, but instead point Prometheus to a list of all Alertmanagers. The Alertmanager implementation expects all alerts to be sent to all Alertmanagers to ensure high availability.

Contributing to the Front-End

Refer to ui/app/CONTRIBUTING.md.

Architecture