Merge changes I76203973,I38646c2b

* changes:
  More updates for first time users.
  Update example config file from json to new protobuf format.
Julius Volz 2013-10-24 12:45:55 +02:00 committed by Gerrit Code Review
commit b70d5ca143
3 changed files with 35 additions and 26 deletions

.gitignore
@@ -9,8 +9,13 @@
*.pyc
*.rej
*.so
# Editor files #
################
*~
.*.swp
.*.swo
.DS_Store
._*
.nfs.*
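The swap-file entries above are shell-style globs. A quick sketch of how a pattern like ``.*.swp`` matches (the file names are made up for illustration, and ``case`` globs only approximate gitignore semantics, e.g. they also match across ``/``):

```shell
# Demonstrate the ".*.swp" glob: a leading dot, any characters, then ".swp".
for f in .README.md.swp README.md; do
  case "$f" in
    .*.swp) echo "$f: ignored" ;;
    *)      echo "$f: kept" ;;
  esac
done
# prints ".README.md.swp: ignored" then "README.md: kept"
```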

@@ -39,6 +39,11 @@ For basic help how to get started:
For first-time users, simply run the following:
$ make
$ ARGUMENTS="-configFile=documentation/examples/prometheus.conf" make run
``${ARGUMENTS}`` is passed verbatim into the makefile and thus into Prometheus as
``$(ARGUMENTS)``. This is useful for quick one-off invocations and smoke
testing.
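The pass-through described above can be sketched as follows; the ``run`` target shown in the comment is an assumption about the makefile's shape, not its actual contents:

```shell
# Hedged sketch: suppose the makefile has a target roughly like
#
#   run:
#   	./prometheus $(ARGUMENTS)
#
# Then the invocation from the README:
ARGUMENTS="-configFile=documentation/examples/prometheus.conf"
# would effectively execute:
cmd="./prometheus ${ARGUMENTS}"
echo "${cmd}"
# prints: ./prometheus -configFile=documentation/examples/prometheus.conf
```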
If you run into problems, try the following:
@@ -56,13 +61,6 @@ statically link against C dependency libraries, so including the ``lib``
directory is paramount. Providing ``LD_LIBRARY_PATH`` or
``DYLD_LIBRARY_PATH`` in a scaffolding shell script is advised.
Executing the following target will start up Prometheus for lazy users:
$ ARGUMENTS="-foo -bar -baz" make run
``${ARGUMENTS}`` is passed verbatim into the makefile and thus into Prometheus as
``$(ARGUMENTS)``. This is useful for quick one-off invocations and smoke
testing.
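A scaffolding script of the kind suggested above might look like this; the ``lib`` directory and the two environment variables come from the text, everything else is an assumption:

```shell
#!/usr/bin/env bash
# Hypothetical scaffolding: prepend the checkout's lib directory to the
# dynamic-linker search path (LD_LIBRARY_PATH on Linux, DYLD_LIBRARY_PATH
# on macOS) before delegating to the build system.
LIB_DIR="${PWD}/lib"
export LD_LIBRARY_PATH="${LIB_DIR}${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
export DYLD_LIBRARY_PATH="${LIB_DIR}${DYLD_LIBRARY_PATH:+:${DYLD_LIBRARY_PATH}}"
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}"
# exec make run   # delegation to the real target, commented out in this sketch
```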
### Problems
If at any point you run into an error with the ``make`` build system in terms of

documentation/examples/prometheus.conf

@@ -1,24 +1,30 @@
 # Global default settings.
 global {
-  scrape_interval = "1s"
-  evaluation_interval = "1s"
-  labels {
-    monitor = "test"
-  }
-  rule_files = [
-    "prometheus.rules"
-  ]
-}
+  scrape_interval: "15s"     # By default, scrape targets every 15 seconds.
+  evaluation_interval: "15s" # By default, evaluate rules every 15 seconds.
-job {
-  name = "prometheus"
-  scrape_interval = "5s"
-  targets {
-    endpoints = [
-      "http://localhost:9090/metrics.json"
-    ]
-    labels {
-      group = "canary"
+
+  # Attach these extra labels to all timeseries collected by this Prometheus instance.
+  labels: {
+    label: {
+      name: "monitor"
+      value: "codelab-monitor"
     }
   }
+
+  # Load and evaluate rules in this file every 'evaluation_interval' seconds. This field may be repeated.
+  #rule_file: "prometheus.rules"
 }
+
+# A job definition containing exactly one endpoint to scrape: Here it's prometheus itself.
+job: {
+  # The job name is added as a label `job={job-name}` to any timeseries scraped from this job.
+  name: "prometheus"
+
+  # Override the global default and scrape targets from this job every 5 seconds.
+  scrape_interval: "5s"
+
+  # Let's define a group of targets to scrape for this job. In this case, only one.
+  target_group: {
+    # These endpoints are scraped via HTTP.
+    target: "http://localhost:9090/metrics.json"
+  }
+}
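In the new protobuf text format shown above, scalar fields use ``name: value`` and nested messages use braces; since ``job`` is a repeated field, a second scrape job could be appended along these lines (the job name and target URL below are made up for illustration):

```
job: {
  name: "example-exporter"
  target_group: {
    target: "http://localhost:8080/metrics.json"
  }
}
```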