This roughly comprises the following changes:
- index target pools by job instead of scrape interval
- make targets within a pool exchangeable while preserving their
  existing health state (see the sketch after this list)
- allow exchanging targets via HTTP API (PUT)
- show target lists in /status (experimental, for our own debugging use)
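
In rough Go terms, the per-job indexing and the state-preserving
exchange might look as follows. This is a minimal sketch; every name
here is an illustrative stand-in, not the actual implementation:

    package main

    import (
        "fmt"
        "sync"
    )

    // Illustrative stand-ins; the real types carry more state.
    type Target struct {
        Address string
        Healthy bool
    }

    type TargetPool struct {
        mu      sync.Mutex
        targets map[string]*Target // keyed by target address
    }

    // ReplaceTargets installs a new target set, carrying over the
    // existing health state for any address already in the pool.
    func (p *TargetPool) ReplaceTargets(newTargets []*Target) {
        p.mu.Lock()
        defer p.mu.Unlock()

        merged := make(map[string]*Target, len(newTargets))
        for _, t := range newTargets {
            if old, ok := p.targets[t.Address]; ok {
                merged[t.Address] = old // preserve state across the swap
            } else {
                merged[t.Address] = t
            }
        }
        p.targets = merged
    }

    // Pools indexed by job name instead of scrape interval.
    var poolsByJob = map[string]*TargetPool{}

    func main() {
        poolsByJob["api-server"] = &TargetPool{targets: map[string]*Target{}}
        poolsByJob["api-server"].ReplaceTargets([]*Target{{Address: "host-a:9090"}})
        fmt.Println(len(poolsByJob["api-server"].targets)) // 1
    }

A PUT against the HTTP API would then reduce to parsing the request
body into a ``[]*Target`` and calling ``ReplaceTargets`` on the
matching job's pool.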
An open question is how long it takes for each target pool to
retrieve state from all participating elements. This commit starts by
providing insight into this.
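
As a first step, a timing measurement of the kind below can expose
that duration. The function names are invented for illustration, and
the real code would export the figure as telemetry rather than print
it:

    package main

    import (
        "fmt"
        "time"
    )

    // runRetrieval stands in for scraping every target in one pool.
    func runRetrieval(targets []string) {
        for range targets {
            time.Sleep(10 * time.Millisecond) // simulate one scrape
        }
    }

    func main() {
        targets := []string{"a:9090", "b:9090", "c:9090"}

        // Measure one full pass over the pool.
        start := time.Now()
        runRetrieval(targets)
        fmt.Printf("pool retrieval took %v\n", time.Since(start))
    }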
client_golang was updated to support full label-oriented telemetry,
which introduced interface incompatibilities with the previous
version of Prometheus. To alleviate this, a general fetching and
dispatching system has been created that discriminates between input
versions and processes each accordingly.
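
Schematically, the dispatching could take a shape like the following
sketch, in which the media type string and processor names are
assumptions made for illustration:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "strings"
    )

    // Processor handles one version of the telemetry exposition format.
    type Processor interface {
        Process(r io.Reader) error
    }

    type processor001 struct{}

    func (processor001) Process(r io.Reader) error {
        // ... decode a version 0.0.1 payload here ...
        return nil
    }

    // processorFor discriminates on the input's declared version and
    // returns the processor that understands it.
    func processorFor(contentType string) (Processor, error) {
        switch {
        case strings.Contains(contentType, "version=0.0.1"):
            return processor001{}, nil
        default:
            return nil, errors.New("unrecognized telemetry version")
        }
    }

    func main() {
        p, err := processorFor("application/json; version=0.0.1")
        fmt.Println(p, err)
    }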
Future tests around ``TargetPool``, ``TargetManager``, and friends
will be a lot easier once the concrete behaviors of ``Target`` are
extracted. Plus, I suspect each ``Target`` will have its own
resolution and query strategy.
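
One possible shape for that extraction, with the method set guessed
for illustration rather than taken from the real interface:

    package main

    import (
        "fmt"
        "time"
    )

    // Target is the extracted behavior that TargetPool and
    // TargetManager would program against.
    type Target interface {
        Address() string
        Scrape(deadline time.Duration) error
    }

    // With the interface in place, tests can use a trivial double.
    type fakeTarget struct {
        addr string
        err  error
    }

    func (t fakeTarget) Address() string            { return t.addr }
    func (t fakeTarget) Scrape(time.Duration) error { return t.err }

    func main() {
        var t Target = fakeTarget{addr: "fake:9090"}
        fmt.Println(t.Address(), t.Scrape(time.Second))
    }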
``Target`` will be refactored down the road to support various
nuanced endpoint types, so incorporating the scheduling behavior
within it would be problematic. To that end, the scheduling behavior
has been moved into a separate helper type to improve conciseness
and testability.
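
A minimal sketch of such a helper type, assuming (purely for
illustration) that it tracks an interval and backs off on failure:

    package main

    import (
        "fmt"
        "time"
    )

    // scheduler decides when its owner is next due for a scrape, so
    // the logic is testable without constructing a full Target.
    type scheduler struct {
        interval time.Duration
        next     time.Time
    }

    // Reschedule derives the next scrape time from the outcome of
    // the previous attempt: failures back off, successes stay regular.
    func (s *scheduler) Reschedule(now time.Time, lastErr error) {
        delay := s.interval
        if lastErr != nil {
            delay *= 2 // crude backoff, for illustration only
        }
        s.next = now.Add(delay)
    }

    func (s *scheduler) ScheduledFor() time.Time { return s.next }

    func main() {
        s := &scheduler{interval: 15 * time.Second}
        s.Reschedule(time.Now(), nil)
        fmt.Println(s.ScheduledFor())
    }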
``make format`` was also run.
``TargetPool`` is a pool of targets pending scraping. For now, it
uses the ``heap.Interface`` from ``container/heap`` to provide a
priority queue from which the system draws the next target to scrape.
It is my supposition that we'll use a model whereby we create a
``TargetPool`` for each scrape interval, into which ``Target``
instances are registered.
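
A minimal sketch of such a pool built on ``heap.Interface``, with the
field names and ordering key assumed for illustration:

    package main

    import (
        "container/heap"
        "fmt"
        "time"
    )

    type Target struct {
        Address      string
        scheduledFor time.Time
    }

    // TargetPool implements heap.Interface so that the target due
    // soonest surfaces first.
    type TargetPool []*Target

    func (p TargetPool) Len() int           { return len(p) }
    func (p TargetPool) Less(i, j int) bool { return p[i].scheduledFor.Before(p[j].scheduledFor) }
    func (p TargetPool) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }

    func (p *TargetPool) Push(x interface{}) { *p = append(*p, x.(*Target)) }
    func (p *TargetPool) Pop() interface{} {
        old := *p
        n := len(old)
        t := old[n-1]
        *p = old[:n-1]
        return t
    }

    func main() {
        now := time.Now()
        pool := &TargetPool{}
        heap.Push(pool, &Target{Address: "b:9090", scheduledFor: now.Add(30 * time.Second)})
        heap.Push(pool, &Target{Address: "a:9090", scheduledFor: now.Add(10 * time.Second)})

        next := heap.Pop(pool).(*Target)
        fmt.Println(next.Address) // a:9090 is due first
    }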