Integrate changes from prometheus/docs

Tobias Schmidt 2017-10-26 15:42:07 +02:00
parent 41281aff81
commit 299802dfd0
3 changed files with 38 additions and 33 deletions


@@ -1,6 +1,5 @@
---
title: Configuration
sort_rank: 20
---
# Configuration
@@ -549,6 +548,8 @@ project: <string>
zone: <string>
# Filter can be used optionally to filter the instance list by other criteria
# Syntax of this filter string is described here in the filter query parameter section:
# https://cloud.google.com/compute/docs/reference/latest/instances/list
[ filter: <string> ]
# Refresh interval to re-read the instance list
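For illustration, a minimal `gce_sd_config` using the optional filter might look like the sketch below; the project, zone, and label filter values are placeholder assumptions, not from this commit:

```yaml
scrape_configs:
  - job_name: 'gce-instances'
    gce_sd_configs:
      - project: my-project          # placeholder
        zone: us-central1-a          # placeholder
        # Only discover instances matching a filter (assumed example syntax;
        # see the filter query parameter documentation linked above):
        filter: 'labels.env = "production"'
```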
@@ -770,8 +771,8 @@ in the configuration file), which can also be changed using relabeling.
### `<nerve_sd_config>`
Nerve SD configurations allow retrieving scrape targets from
[AirBnB's Nerve](https://github.com/airbnb/nerve) which are stored in
[Zookeeper](https://zookeeper.apache.org/).
The following meta labels are available on targets during [relabeling](#relabel_config):
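The meta-label list is truncated in this diff view. For orientation, a minimal `nerve_sd_config` sketch might look like the following; the Zookeeper address and path are placeholder assumptions:

```yaml
scrape_configs:
  - job_name: 'nerve-example'
    nerve_sd_configs:
      - servers:
          - 'zk1.example.com:2181'   # Zookeeper server (placeholder)
        paths:
          - '/nerve/services'        # Zookeeper path to watch (placeholder)
```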
@@ -793,10 +794,10 @@ paths:
### `<serverset_sd_config>`
Serverset SD configurations allow retrieving scrape targets from
[Serversets](https://github.com/twitter/finagle/tree/master/finagle-serversets)
which are stored in [Zookeeper](https://zookeeper.apache.org/). Serversets are
commonly used by [Finagle](https://twitter.github.io/finagle/) and
[Aurora](http://aurora.apache.org/).
The following meta labels are available on targets during relabeling:
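As with Nerve, a minimal `serverset_sd_config` sketch might look like the following; the Zookeeper address and serverset path are placeholder assumptions:

```yaml
scrape_configs:
  - job_name: 'serverset-example'
    serverset_sd_configs:
      - servers:
          - 'zk1.example.com:2181'   # Zookeeper server (placeholder)
        paths:
          - '/aurora/jobs'           # serverset path (placeholder)
```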


@@ -60,8 +60,8 @@ For a complete specification of configuration options, see the
## Starting Prometheus
To start Prometheus with your newly created configuration file, change to the
directory containing the Prometheus binary and run:
@@ -69,9 +69,9 @@ Prometheus build directory and run:
```bash
# Start Prometheus.
./prometheus -config.file=prometheus.yml
```
Prometheus should start up. You should also be able to browse to a status page
about itself at [localhost:9090](http://localhost:9090). Give it a couple of
seconds to collect data about itself from its own HTTP metrics endpoint.
You can also verify that Prometheus is serving metrics about itself by
navigating to its metrics endpoint:
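For example, you could fetch the raw metrics exposition with a tool like `curl` (this assumes Prometheus is running locally on the default port):

```bash
curl http://localhost:9090/metrics
```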
@@ -81,11 +81,10 @@ The number of OS threads executed by Prometheus is controlled by the
`GOMAXPROCS` environment variable. As of Go 1.5 the default value is
the number of cores available.
Blindly setting `GOMAXPROCS` to a high value can be counterproductive. See the
relevant [Go FAQs](http://golang.org/doc/faq#Why_no_multi_CPU).
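For instance, to cap Prometheus at four OS threads, you could set the variable when starting the binary (a sketch; the value of 4 is arbitrary and should be tuned to your hardware):

```bash
GOMAXPROCS=4 ./prometheus -config.file=prometheus.yml
```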
Prometheus by default uses around 3GB in memory. If you have a
smaller machine, you can tune Prometheus to use less memory. For details,
see the [memory usage documentation](storage.md#memory-usage).
@@ -96,8 +95,8 @@ use Prometheus's built-in expression browser, navigate to
http://localhost:9090/graph and choose the "Console" view within the "Graph"
tab.
As you can gather from [localhost:9090/metrics](http://localhost:9090/metrics),
one metric that Prometheus exports about itself is called
`prometheus_target_interval_length_seconds` (the actual amount of time between
target scrapes). Go ahead and enter this into the expression console:
@@ -105,7 +104,7 @@ target scrapes). Go ahead and enter this into the expression console:
```
prometheus_target_interval_length_seconds
```
This should return a number of different time series (along with the latest value
recorded for each), all with the metric name
`prometheus_target_interval_length_seconds`, but with different labels. These
labels designate different latency percentiles and target group intervals.
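To collapse those series into a single value, you could aggregate over all labels with an expression like this in the console (a sketch using standard PromQL aggregation):

```
avg(prometheus_target_interval_length_seconds)
```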
@@ -186,7 +185,7 @@ section in your `prometheus.yml` and restart your Prometheus instance:
```yaml
scrape_configs:
  - job_name: 'example-random'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
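    # The remainder of this scrape configuration is truncated in the diff view;
    # a typical completion (the target addresses are assumptions) would be:
    static_configs:
      - targets: ['localhost:8080', 'localhost:8081', 'localhost:8082']
```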
@@ -231,7 +230,7 @@ job_service:rpc_durations_seconds_count:avg_rate5m = avg(rate(rpc_durations_seco
```
To make Prometheus pick up this new rule, add a `rule_files` statement to the
`global` configuration section in your `prometheus.yml`. The config should now
look like this:
```yaml
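# This block is truncated in the diff view; a typical completion (the values
# are assumptions based on the defaults used earlier in this guide) would be:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - 'prometheus.rules'
```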


@@ -15,6 +15,11 @@ available versions.
For building Prometheus components from source, see the `Makefile` targets in
the respective repository.
NOTE: **Note:** The documentation on this website refers to the latest stable
release (excluding pre-releases). The branch
[next-release](https://github.com/prometheus/docs/compare/next-release) refers
to unreleased changes that are in the master branches of the source repositories.
## Using Docker
All Prometheus services are available as Docker images under the
@@ -26,7 +31,7 @@ exposes it on port 9090.
The Prometheus image uses a volume to store the actual metrics. For
production deployments it is highly recommended to use the
[Data Volume Container](https://docs.docker.com/engine/admin/volumes/volumes/)
pattern to ease managing the data on Prometheus upgrades.
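As a sketch of that pattern (the container name here is an assumption, not from the linked guide):

```bash
# Create a container that only holds the metrics volume:
docker create -v /prometheus --name prometheus-data prom/prometheus
# Run Prometheus using that container's volume:
docker run -p 9090:9090 --volumes-from prometheus-data prom/prometheus
```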
To provide your own configuration, there are several options. Here are
@@ -34,16 +39,16 @@ two examples.
### Volumes & bind-mount
Bind-mount your `prometheus.yml` from the host by running:
```bash
docker run -p 9090:9090 -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
```
Or use an additional volume for the config:
```bash
docker run -p 9090:9090 -v /prometheus-data \
prom/prometheus -config.file=/prometheus-data/prometheus.yml
```
@@ -56,21 +61,21 @@ configuration itself is rather static and the same across all
environments.
For this, create a new directory with a Prometheus configuration and a
`Dockerfile` like this:
```Dockerfile
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
```
Now build and run it:
```bash
docker build -t my-prometheus .
docker run -p 9090:9090 my-prometheus
```
A more advanced option is to render the configuration dynamically on start
with some tooling or even have a daemon update it periodically.
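One minimal way to sketch this (the template name and placeholder convention are assumptions) is to substitute environment variables into a template when the container starts:

```bash
# Write a config template with a placeholder, then render it with sed;
# the TARGET environment variable overrides the default address.
printf 'scrape_configs:\n  - job_name: app\n    static_configs:\n      - targets: ["@@TARGET@@"]\n' > prometheus.yml.tmpl
sed "s/@@TARGET@@/${TARGET:-localhost:8080}/" prometheus.yml.tmpl > prometheus.yml
```

In a real image, the rendering step would run in an entrypoint script before launching Prometheus against the generated file.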
## Using configuration management systems
@@ -78,19 +83,19 @@ with some tooling or even have a daemon update it periodically.
If you prefer using configuration management systems you might be interested in
the following third-party contributions:
### Ansible
* [griggheo/ansible-prometheus](https://github.com/griggheo/ansible-prometheus)
* [William-Yeh/ansible-prometheus](https://github.com/William-Yeh/ansible-prometheus)
### Chef
* [rayrod2030/chef-prometheus](https://github.com/rayrod2030/chef-prometheus)
### Puppet
* [puppet/prometheus](https://forge.puppet.com/puppet/prometheus)
### SaltStack
* [bechtoldt/saltstack-prometheus-formula](https://github.com/bechtoldt/saltstack-prometheus-formula)