mirror of
https://github.com/prometheus-community/postgres_exporter
synced 2025-05-05 17:38:01 +00:00
PMM-12154 pull upstream changes (#153)
* Dashboard linting improvements for mixin. Signed-off-by: Ryan J. Geyer <me@ryangeyer.com>
* WIP: Add prelim multi-target support
  - Remove multi server support from new collector package
  - Add http handler for multi-target support
  Signed-off-by: Joe Adams <github@joeadams.io>
* Add config module. The config module supports adding configuration to the exporter via a config file. This supports adding authentication details in a config file so that /probe requests can specify authentication for endpoints. Signed-off-by: Joe Adams <github@joeadams.io>
* cleanup and README. Signed-off-by: Joe Adams <github@joeadams.io> Co-authored-by: Ben Kochie <superq@gmail.com>
* Update cmd/postgres_exporter/main.go. Signed-off-by: Joe Adams <github@joeadams.io>
* Fix for exporter issue 633 (https://github.com/prometheus-community/postgres_exporter/issues/633): "Scan error on column index 2, name \"checkpoint_write_time\": converting driver.Value type float64 (\"6.594096e+06\") to a int: invalid syntax". Signed-off-by: bravosierrasierra <bravosierrasierra@users.noreply.github.com>
* Fix checkpoint_sync_time value type. Error: sql: Scan error on column index 3, name \"checkpoint_sync_time\": converting driver.Value type float64 (\"1.876469e+06\") to a int: invalid syntax. See also: https://github.com/prometheus-community/postgres_exporter/issues/633 and https://github.com/prometheus-community/postgres_exporter/pull/666. Signed-off-by: Nicolas Rodriguez <nico@nicoladmin.fr>
* Bump github.com/prometheus/common from 0.35.0 to 0.37.0 ([Release notes](https://github.com/prometheus/common/releases), [Commits](https://github.com/prometheus/common/compare/v0.35.0...v0.37.0)). Signed-off-by: dependabot[bot] <support@github.com>
* Correct minor typos in README.md. Signed-off-by: Luckz <224748+Luckz@users.noreply.github.com>
* Release 0.11.1
  * [BUGFIX] Fix checkpoint_write_time value type #666
  * [BUGFIX] Fix checkpoint_sync_time value type #667
  Signed-off-by: SuperQ <superq@gmail.com>
* Add dsn type for handling datasources. dsn is designed to replace the other uses of dsn as a string in the long term. dsn is designed to be safe to log, properly redacting passwords. The goal is to eventually always parse datasource information into a dsn type object which can safely be passed around and logged without worrying about wrapping calls in a redaction function (today this function is loggableDSN()). This should solve the root issue in #648, #677, and #643, although the full fix will require more changes to update all code references over to use the dsn type. Signed-off-by: Joe Adams <github@joeadams.io>
* Release 0.12.0-rc.0. BREAKING CHANGES: This release changes support for multiple postgres servers to use the multi-target exporter pattern. This makes it much easier to monitor multiple PostgreSQL servers from a single exporter by passing the target via URL params. See the Multi-Target Support section of the README.
  * [CHANGE] Add multi-target support #618
  * [BUGFIX] Add dsn type for handling datasources #678
  Signed-off-by: SuperQ <superq@gmail.com>
* fix: typo. Signed-off-by: Yoan Blanc <yoan@dosimple.ch>
* Update multi-target handler to use new DSN type
  - Moves new dsn type to config.DSN. This will prevent circular dependencies.
  - Change DSN.query to be url.Values. This allows the multi-target functionality to merge values without re-parsing the query string.
  - Change NewProbeCollector to use the new config.DSN type.
  - Add DSN.GetConnectionString to return a string formatted for the sql driver to use during connection.
  Signed-off-by: Joe Adams <github@joeadams.io>
* Add missing license header. Signed-off-by: Joe Adams <github@joeadams.io>
* Convert pg_stat_database to new collector model. Signed-off-by: Joe Adams <github@joeadams.io>
* extended /probe path metrics. Signed-off-by: Ildar Valiullin <preved.911@gmail.com>
* Bump github.com/lib/pq from 1.10.6 to 1.10.7 ([Release notes](https://github.com/lib/pq/releases), [Commits](https://github.com/lib/pq/compare/v1.10.6...v1.10.7)). Signed-off-by: dependabot[bot] <support@github.com>
* Capture usename and application_name for pg_stat_activity. It is necessary to be able to exclude backups from long-running transaction alerts, as they are to be expected. With the current pg_stat_activity metric there is no ability to filter out specific users or application names. Resolves #668. Signed-off-by: cezmunsta <github@incoming-email.co.uk>
* Fixed formatting. Signed-off-by: cezmunsta <github@incoming-email.co.uk>
* Update common Prometheus files. Signed-off-by: prombot <prometheus-team@googlegroups.com>
* 4kB size added for postgres with 4kB block_size. Signed-off-by: Sergey Morozov <38383507+ken3122@users.noreply.github.com>
* Correct additional typo in README.md. Signed-off-by: Luckz <224748+Luckz@users.noreply.github.com>
* Set gauge to 1 when collector is successful. Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* Update common Prometheus files. Signed-off-by: prombot <prometheus-team@googlegroups.com>
* probe: clean-up database connection after probe to prevent connection leak. Signed-off-by: Kurtis Bass <kurtis.bass@hinge.co>
* Set gauge to 1 when collector is successful. Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu> Signed-off-by: Khiem Doan <doankhiem.crazy@gmail.com>
* Add postgres 15 for CI test. Signed-off-by: Khiem Doan <doankhiem.crazy@gmail.com>
* Add postgres 15 for CI test. Signed-off-by: Khiem Doan <doankhiem.crazy@gmail.com>
* New unit value 64kB. Signed-off-by: Oleksandr Mysyura <olexandr.mysyura@pragmaticplay.com>
* Update common Prometheus files. Signed-off-by: prombot <prometheus-team@googlegroups.com>
* Update exporter-toolkit. Update to the latest exporter-toolkit:
  * Enables multi-listener and systemd socket activation.
  * Bump Go to 1.19.
  * Remove `PG_EXPORTER_WEB_LISTEN_ADDRESS` env var because this is now a repeatable flag.
  Signed-off-by: SuperQ <superq@gmail.com>
* go fmt. Signed-off-by: SuperQ <superq@gmail.com>
* adding codified functionality for logical replication metrics. Signed-off-by: Zachary Caldarola <zachary.caldarola@reddit.com>
* Bump github.com/prometheus/client_golang from 1.13.0 to 1.14.0 ([Release notes](https://github.com/prometheus/client_golang/releases), [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md), [Commits](https://github.com/prometheus/client_golang/compare/v1.13.0...v1.14.0)). Signed-off-by: dependabot[bot] <support@github.com>
* Bump github.com/prometheus/common from 0.37.0 to 0.39.0 ([Release notes](https://github.com/prometheus/common/releases), [Commits](https://github.com/prometheus/common/compare/v0.37.0...v0.39.0)). Signed-off-by: dependabot[bot] <support@github.com>
* addressing comments. Signed-off-by: Zachary Caldarola <zachary.caldarola@reddit.com>
* more comments. Signed-off-by: Zachary Caldarola <zmc2005@gmail.com>
* fmt. Signed-off-by: Zachary Caldarola <zmc2005@gmail.com>
* typing. Signed-off-by: Zachary Caldarola <zmc2005@gmail.com>
* fmt. Signed-off-by: Zachary Caldarola <zmc2005@gmail.com>
* send stdout/stderr to syslog. Signed-off-by: Mike <gavrikster@gmail.com>
* Update common Prometheus files. Signed-off-by: prombot <prometheus-team@googlegroups.com>
* Fix exclude-databases for collector package. The pg_database collector was not respecting the --exclude-databases flag and causing problems where databases were not accessible. This now respects the list of databases to exclude.
  - Adjusts the Collector create func to take a config struct instead of a logger. This allows more changes like this in the future. I figured we would need to do this at some point but I wasn't sure if we could hold off.
  - Split the database size collection to a separate query when database is not excluded.
  - Comment some probe code that was not useful/accurate.
  Signed-off-by: Joe Adams <github@joeadams.io>
* Remove commented code. Signed-off-by: Joe Adams <github@joeadams.io>
* Remove more dead code. Signed-off-by: Joe Adams <github@joeadams.io>
* Update build
  * Update Go to 1.20.
  * Update golangci-lint.
  * Bump modules.
  * Update CI orb.
  * Fix up use of deprecated ioutil.
  Signed-off-by: SuperQ <superq@gmail.com>
* Reduce cardinality of pg_stat_statements. Make the example queries.yaml `pg_stat_statements` query safer.
  * Select the top 10% of queries by total query time.
  * Only expose the top 100 queries by total query time.
  * Keep only the most useful metrics.
  * Comment out the example by default.
  Fixes: https://github.com/prometheus-community/postgres_exporter/issues/549 Signed-off-by: SuperQ <superq@gmail.com>
* Update changelog and version for v0.12.0 release. Signed-off-by: Joe Adams <github@joeadams.io>
* Update exporter-toolkit. Updates the exporter-toolkit to the latest version:
  * Adds new landing page feature.
  * Allow metrics path to be on `/`.
  Signed-off-by: SuperQ <superq@gmail.com>
* Update common Prometheus files. Signed-off-by: prombot <prometheus-team@googlegroups.com>
* Fix column type for pg_replication_slots. Change the data type of `active` from int64 to bool. The documentation confirms that this is a boolean field: https://www.postgresql.org/docs/current/view-pg-replication-slots.html. Fixes #769. Signed-off-by: Joe Adams <github@joeadams.io>
* Update versions listed in the README. Update the supported versions based on what we actually test in CI. Signed-off-by: SuperQ <superq@gmail.com>
* Update README cli flags. These have not been kept up to date. Signed-off-by: Joe Adams <github@joeadams.io>
* Adjust log level for collector startup. Since we support both multi-target and typical direct scrapes, either of these can fail and it is no longer an error. Signed-off-by: Joe Adams <github@joeadams.io>
* Fix pg_setting different help values. Signed-off-by: GitHub <noreply@github.com>
* Supports alternate postgres:// prefix in URLs. Adds support for the alternate postgres:// prefix in URLs. It's maybe not the cleanest approach, but works. Hoping I can either get some pointers on a more appropriate patch, or that we could use this in the interim to unblock this use-case. Signed-off-by: Jack Wink <57678801+mothershipper@users.noreply.github.com>
* Bump github.com/lib/pq from 1.10.7 to 1.10.9 ([Release notes](https://github.com/lib/pq/releases), [Commits](https://github.com/lib/pq/compare/v1.10.7...v1.10.9)). Signed-off-by: dependabot[bot] <support@github.com>
* Refactor collector descriptors. Use individual collector metric descriptor vars to help avoid mis-mapped or unused metrics. Signed-off-by: SuperQ <superq@gmail.com>
* Bump github.com/prometheus/common from 0.42.0 to 0.44.0 ([Release notes](https://github.com/prometheus/common/releases), [Commits](https://github.com/prometheus/common/compare/v0.42.0...v0.44.0)). Signed-off-by: dependabot[bot] <support@github.com>
* Update linting
  * Move errcheck exclude list to config file.
  * Enable revive linter.
  * Fix up revive linting issues.
  Signed-off-by: SuperQ <superq@gmail.com>
* Bump github.com/prometheus/exporter-toolkit from 0.9.1 to 0.10.0 ([Release notes](https://github.com/prometheus/exporter-toolkit/releases), [Changelog](https://github.com/prometheus/exporter-toolkit/blob/master/CHANGELOG.md), [Commits](https://github.com/prometheus/exporter-toolkit/compare/v0.9.1...v0.10.0)). Signed-off-by: dependabot[bot] <support@github.com>
* Move queries from queries.yaml to collectors (#801). Signed-off-by: Ben Kochie <superq@gmail.com>
* Fix pg_stat_database collector. The signature for creating a collector changed and CI didn't retrigger. Move metrics out of map and into individual vars. Signed-off-by: Joe Adams <github@joeadams.io>
* Fix up collector registration (#812). Use const definitions to make collector registration consistent.
  * Use collector subsystem name consistently.
  * Fix up replication metric name unit.
  Signed-off-by: SuperQ <superq@gmail.com>
* Update release info for v0.12.1. Signed-off-by: Joe Adams <github@joeadams.io>
* Deprecate extend queries feature (#811). Mark the extend queries feature as deprecated in favor of recommending the sql_exporter. Signed-off-by: SuperQ <superq@gmail.com>
* Update common Prometheus files. Signed-off-by: prombot <prometheus-team@googlegroups.com>
* Deprecate additional database features. Now that we have deprecated extended queries we can deprecate related database features.
  * Deprecate flags/functions around auto discover databases.
  * Deprecate flags/functions for additional constant labels.
  Signed-off-by: SuperQ <superq@gmail.com>
* Release v0.13.0. BREAKING CHANGES: Please note, the following features are deprecated and may be removed in a future release:
  - `auto-discover-databases`
  - `extend.query-path`
  - `constantLabels`
  - `exclude-databases`
  - `include-databases`
  This exporter is meant to monitor PostgreSQL servers, not the user data/databases. If you need a generic SQL report exporter, https://github.com/burningalchemist/sql_exporter is recommended.
  * [CHANGE] Adjust log level for collector startup #784
  * [CHANGE] Move queries from queries.yaml to collectors #801
  * [CHANGE] Deprecate extend queries feature #811
  * [CHANGE] Deprecate additional database features #815
  * [CHANGE] Convert pg_stat_database to new collector #685
  * [ENHANCEMENT] Supports alternate postgres:// prefix in URLs #787
  * [BUGFIX] Fix pg_setting different help values #771
  * [BUGFIX] Fix column type for pg_replication_slots #777
  * [BUGFIX] Fix pg_stat_database collector #809
  Signed-off-by: SuperQ <superq@gmail.com>
* Add the instance struct to handle connections. The intent is to use the instance struct to hold the connection to the database as well as metadata about the instance. Currently this metadata only includes the version of postgres for the instance, which can be used in the collectors to decide what query to run. In the future this could hold more metadata, but for now it keeps the Collector interface arguments to a reasonable number. Signed-off-by: Joe Adams <github@joeadams.io>
* chore: fix a few typos. Signed-off-by: Alex Tymchuk <alexander.tymchuk@percona.com>
* Bug fix: Make collector not fail on null values (#823). Make all values nullable. Signed-off-by: Felix Yuan <felix.yuan@reddit.com> Co-authored-by: Ben Kochie <superq@gmail.com>
* Release 0.13.1 (#824)
  * [BUGFIX] Make collectors not fail on null values #823
  Signed-off-by: SuperQ <superq@gmail.com>
* Fixed replication pgReplicationSlotQuery; now it works correctly for replica and primary (#825). Signed-off-by: Vadim Voitenko <vadim.voitenko@exness.com> Co-authored-by: Vadim Voitenko <vadim.voitenko@exness.com>
* Migrate pg_locks to collector package (#817). Migrate the `pg_locks_count` query from `main` to the `collector` package. Signed-off-by: SuperQ <superq@gmail.com>
* Cleanup collectors (#826). Fix up `replication` and `process_idle`. Update input params to match the rest of the collectors. Signed-off-by: SuperQ <superq@gmail.com>
* Bug Fix: Fix lingering type issues (#828)
  * Fix postmaster type issue
  * Disable postmaster collector by default
  Signed-off-by: Felix Yuan <felix.yuan@reddit.com>
* Update common Prometheus files (#829). Signed-off-by: prombot <prometheus-team@googlegroups.com>
* Fix replication collector. Signed-off-by: Tom Hughes <tom@compton.nu>
* Add some more escapes to the query sanitizer. Signed-off-by: Tom Hughes <tom@compton.nu>
* Add a collector to gather metrics on WAL size. Signed-off-by: Tom Hughes <tom@compton.nu>
* Bump github.com/prometheus/client_golang from 1.15.1 to 1.16.0 (#853) ([Release notes](https://github.com/prometheus/client_golang/releases), [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md), [Commits](https://github.com/prometheus/client_golang/compare/v1.15.1...v1.16.0)). Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Fix untyped integer overflows on 32-bit archs (#857). go-sqlmock's Rows.AddRow() takes values which have a type alias of "any", and appear to default to untyped ints if not explicitly cast. When large values are passed which would overflow int32, tests fail. Signed-off-by: Daniel Swarbrick <daniel.swarbrick@gmail.com>
* Bump github.com/smartystreets/goconvey from 1.8.0 to 1.8.1 (#852) ([Release notes](https://github.com/smartystreets/goconvey/releases), [Commits](https://github.com/smartystreets/goconvey/compare/v1.8.0...v1.8.1)). Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Unpack postgres arrays for process idle times correctly (#855). Signed-off-by: Ben Kochie <superq@gmail.com>
* Include all idle processes in the process idle metrics. Signed-off-by: Tom Hughes <tom@compton.nu>
* Improve linting (#861)
  * Disable unused-parameter check due to false positives on Collect() calls.
  * Enable misspell.
  * Simplify error returns.
  Signed-off-by: SuperQ <superq@gmail.com>
* Update common Prometheus files (#860). Signed-off-by: prombot <prometheus-team@googlegroups.com> Co-authored-by: Ben Kochie <superq@gmail.com>
* Update common Prometheus files. Signed-off-by: prombot <prometheus-team@googlegroups.com>
* Gitlab collector: Database wraparound collector and test (#834). Signed-off-by: Felix Yuan <felix.yuan@reddit.com> Co-authored-by: Joe Adams <github@joeadams.io>
* Add a logger to stat_database collector to get a better handle on errors (also clean up some metric validity checks). Signed-off-by: Felix Yuan <felix.yuan@reddit.com>
* Update changelog for release 0.13.2 (#872). Signed-off-by: Joe Adams <github@joeadams.io>
* Gitlab Collector: Autovacuum collector and test (#840). Includes review fixups: Update collector/pg_stat_activity_autovacuum.go (twice), Use timestamp seconds, query formatting, SQL format, Loosen autovacuum query. Signed-off-by: Felix Yuan <felix.yuan@reddit.com> Co-authored-by: Joe Adams <github@joeadams.io>
* Gitlab Collector: Wal Receiver Collector and Test (#844). Includes review fixups: Add more escapes, Corrections to wal_receiver, Continue on null labels, Skip nulls and log a message, Redundant breaks, Fix up walreceiver, Remove extra label, Update collector/pg_stat_walreceiver.go (twice), Clean up the extra assignments. Signed-off-by: Felix Yuan <felix.yuan@reddit.com> Co-authored-by: Ben Kochie <superq@gmail.com> Co-authored-by: Joe Adams <github@joeadams.io>
* Gitlab collector: Xlog location collector and test (#849). Includes review fixups: Add more escapes, Change to Gauge. Signed-off-by: Felix Yuan <felix.yuan@reddit.com>
* Handle new pg_stat_statements column names (#874). Update pg_stat_statements collector to handle the new column names in PostgreSQL 13. Fixes: https://github.com/prometheus-community/postgres_exporter/issues/502 Signed-off-by: SuperQ <superq@gmail.com>
* Fixup new pg_stats_statements query (#876). Fix all renames of `total_time` to `total_exec_time`. Fixes: https://github.com/prometheus-community/postgres_exporter/issues/502 Signed-off-by: SuperQ <superq@gmail.com>
* Add a multi-target example config (#890). Add an example Prometheus scrape config, similar to the blackbox_exporter's example config. Fixes: https://github.com/prometheus-community/postgres_exporter/issues/888 Signed-off-by: SuperQ <superq@gmail.com>
* Delay database connection until scrape (#882). This no longer returns an error when creating a collector.instance when the database cannot be reached for the version query. This will resolve the entire postgresCollector not being registered for metrics collection when a database is not available. If the version query fails, the scrape will fail. Resolves #880. Signed-off-by: Joe Adams <github@joeadams.io>
* Bugfix: Make statsreset nullable (#877). Stats_reset as null seems to actually be legitimate for new databases, so don't fail for it. Signed-off-by: Felix Yuan <felix.yuan@reddit.com> Co-authored-by: Ben Kochie <superq@gmail.com>
* Gitlab Collector: User Index io stats collector and test (#845). Signed-off-by: Felix Yuan <felix.yuan@reddit.com>
* Update README to reflect changes made in #828 (#894). Signed-off-by: Mathis Raguin <mathis.raguin@gitguardian.com>
* Gitlab Collector: Long running transactions collector and test (#836). Signed-off-by: Felix Yuan <felix.yuan@reddit.com> Co-authored-by: Ben Kochie <superq@gmail.com>
* Update common Prometheus files (#900). Signed-off-by: prombot <prometheus-team@googlegroups.com>
* Fix a connection leak (#902). The leak was introduced in PR #882. Signed-off-by: Christian Albrecht <cal@albix.de> Co-authored-by: Christian Albrecht <christian.albrecht@akquinet.de>
* Fix cross-compilation command in README.md (#903). Signed-off-by: David Cook <dcook@divviup.org>
* fix pg_replication_lag_seconds (#895). Signed-off-by: Vladimir Luksha <waldemarluksha@gmail.com> Co-authored-by: Vladimir Luksha <luksha@limcore.io>
* stat_user_tables: Add total size metric (#904). Signed-off-by: David Cook <dcook@divviup.org>
* Fix bugs mentioned in #908 (#910). These collectors are disabled by default, so unless enabled, they are not tested regularly. Signed-off-by: Joe Adams <github@joeadams.io>
* Update common Prometheus files (#913). Signed-off-by: prombot <prometheus-team@googlegroups.com>
* Add changelog for v0.14 (#906)
  - Add changelog entries since v0.13.2
  - Update README with new options
  - Bump version file
  Also adds a changelog entry for #904. Signed-off-by: Joe Adams <github@joeadams.io>
* PMM-12154 pull upstream changes.
* PMM-12154 Fix go mod.
* PMM-12154 Remove some built-in queries, they were moved to collectors.
* PMM-12154 compatibility improvements.
* PMM-12154 compatibility improvements.
* PMM-12154 performance improvement.
* revert pg_lock_conflicts

---------

Signed-off-by: Ryan J. Geyer <me@ryangeyer.com>
Signed-off-by: Joe Adams <github@joeadams.io>
Signed-off-by: bravosierrasierra <bravosierrasierra@users.noreply.github.com>
Signed-off-by: Nicolas Rodriguez <nico@nicoladmin.fr>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Luckz <224748+Luckz@users.noreply.github.com>
Signed-off-by: SuperQ <superq@gmail.com>
Signed-off-by: Yoan Blanc <yoan@dosimple.ch>
Signed-off-by: Ildar Valiullin <preved.911@gmail.com>
Signed-off-by: cezmunsta <github@incoming-email.co.uk>
Signed-off-by: prombot <prometheus-team@googlegroups.com>
Signed-off-by: Sergey Morozov <38383507+ken3122@users.noreply.github.com>
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
Signed-off-by: Kurtis Bass <kurtis.bass@hinge.co>
Signed-off-by: Khiem Doan <doankhiem.crazy@gmail.com>
Signed-off-by: Oleksandr Mysyura <olexandr.mysyura@pragmaticplay.com>
Signed-off-by: Zachary Caldarola <zachary.caldarola@reddit.com>
Signed-off-by: Zachary Caldarola <zmc2005@gmail.com>
Signed-off-by: Mike <gavrikster@gmail.com>
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Jack Wink <57678801+mothershipper@users.noreply.github.com>
Signed-off-by: Ben Kochie <superq@gmail.com>
Signed-off-by: Alex Tymchuk <alexander.tymchuk@percona.com>
Signed-off-by: Felix Yuan <felix.yuan@reddit.com>
Signed-off-by: Vadim Voitenko <vadim.voitenko@exness.com>
Signed-off-by: Tom Hughes <tom@compton.nu>
Signed-off-by: Daniel Swarbrick <daniel.swarbrick@gmail.com>
Signed-off-by: Mathis Raguin <mathis.raguin@gitguardian.com>
Signed-off-by: Christian Albrecht <cal@albix.de>
Signed-off-by: David Cook <dcook@divviup.org>
Signed-off-by: Vladimir Luksha <waldemarluksha@gmail.com>
Co-authored-by: Ryan J. Geyer <me@ryangeyer.com>
Co-authored-by: Joe Adams <github@joeadams.io>
Co-authored-by: Ben Kochie <superq@gmail.com>
Co-authored-by: Joe Adams <adams10301@gmail.com>
Co-authored-by: bravosierrasierra <bravosierrasierra@users.noreply.github.com>
Co-authored-by: Nicolas Rodriguez <nico@nicoladmin.fr>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Luckz <224748+Luckz@users.noreply.github.com>
Co-authored-by: Yoan Blanc <yoan@dosimple.ch>
Co-authored-by: Ildar Valiullin <preved.911@gmail.com>
Co-authored-by: cezmunsta <github@incoming-email.co.uk>
Co-authored-by: prombot <prometheus-team@googlegroups.com>
Co-authored-by: Sergey Morozov <38383507+ken3122@users.noreply.github.com>
Co-authored-by: Julien Pivotto <roidelapluie@o11y.eu>
Co-authored-by: Kurtis Bass <kurtis.bass@hinge.co>
Co-authored-by: Khiem Doan <doankhiem.crazy@gmail.com>
Co-authored-by: Oleksandr Mysyura <olexandr.mysyura@pragmaticplay.com>
Co-authored-by: Zachary Caldarola <zachary.caldarola@reddit.com>
Co-authored-by: Zachary Caldarola <zmc2005@gmail.com>
Co-authored-by: Mike <gavrikster@gmail.com>
Co-authored-by: Khaled Khalifa <33331600+khkhalifa@users.noreply.github.com>
Co-authored-by: Jack Wink <57678801+mothershipper@users.noreply.github.com>
Co-authored-by: Felix Yuan <felix.yuan@reddit.com>
Co-authored-by: Alex Tymchuk <alexander.tymchuk@percona.com>
Co-authored-by: Vadim Voitenko <74241416+wwoytenko@users.noreply.github.com>
Co-authored-by: Vadim Voitenko <vadim.voitenko@exness.com>
Co-authored-by: Tom Hughes <tom@compton.nu>
Co-authored-by: Daniel Swarbrick <daniel.swarbrick@gmail.com>
Co-authored-by: Mathis Raguin <evaelis.market@gmail.com>
Co-authored-by: Christian Albrecht <cal@albix.de>
Co-authored-by: Christian Albrecht <christian.albrecht@akquinet.de>
Co-authored-by: David Cook <divergentdave@gmail.com>
Co-authored-by: Vladimir Luksha <waldemarluksha@gmail.com>
Co-authored-by: Vladimir Luksha <luksha@limcore.io>
Co-authored-by: David Cook <dcook@divviup.org>
This commit is contained in:
parent 085bdfed5f
commit 82694ff65e
.circleci/config.yml
@@ -2,13 +2,13 @@
 version: 2.1
 
 orbs:
-  prometheus: prometheus/prometheus@0.16.0
+  prometheus: prometheus/prometheus@0.17.1
 
 executors:
   # This must match .promu.yml.
   golang:
     docker:
-      - image: cimg/go:1.18
+      - image: cimg/go:1.20
 
 jobs:
   test:
@@ -22,7 +22,7 @@ jobs:
 
   integration:
     docker:
-      - image: cimg/go:1.18
+      - image: cimg/go:1.20
       - image: << parameters.postgres_image >>
         environment:
           POSTGRES_DB: circle_test
@@ -61,6 +61,7 @@ workflows:
             - circleci/postgres:12
            - circleci/postgres:13
            - cimg/postgres:14.1
+           - cimg/postgres:15.1
       - prometheus/build:
           name: build
          parallelism: 3
.github/workflows/golangci-lint.yml (vendored, 11 lines changed)
@@ -1,3 +1,5 @@
+---
+# This action is synced from https://github.com/prometheus/prometheus
 name: golangci-lint
 on:
   push:
@@ -16,10 +18,9 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v4.0.0
       - name: install Go
-        uses: actions/setup-go@v4
+        uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3.5.0
         with:
           go-version-file: ${{ github.workspace }}/go.mod
@@ -28,6 +29,6 @@ jobs:
     if: github.repository == 'prometheus/snmp_exporter'
 
       - name: Lint
-        uses: golangci/golangci-lint-action@v3.6.0
+        uses: golangci/golangci-lint-action@3a919529898de77ec3da873e3063ca4b10e7f5cc # v3.7.0
         with:
-          version: v1.45.2
+          version: v1.54.2
.gitignore (vendored, 5 lines changed)
@@ -22,7 +22,4 @@
 /vendor
 /percona_tests/assets/postgres_exporter
 /percona_tests/assets/postgres_exporter_percona
-/percona_tests/assets/metrics.new.txt
-/percona_tests/assets/metrics.old.txt
-/percona_tests/assets/metrics.names.new.txt
-/percona_tests/assets/metrics.names.old.txt
+/percona_tests/assets/metrics.*
.golangci.yml
@@ -1,4 +1,9 @@
 ---
+linters:
+  enable:
+    - misspell
+    - revive
+
 issues:
   exclude-rules:
     - path: _test.go
@@ -7,4 +12,12 @@ issues:
 
 linters-settings:
   errcheck:
-    exclude: scripts/errcheck_excludes.txt
+    exclude-functions:
+      # Never check for logger errors.
+      - (github.com/go-kit/log.Logger).Log
+  revive:
+    rules:
+      # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unused-parameter
+      - name: unused-parameter
+        severity: warning
+        disabled: true
.promu.yml
@@ -1,6 +1,6 @@
 go:
   # This must match .circle/config.yml.
-  version: 1.18
+  version: 1.20
 repository:
   path: github.com/prometheus-community/postgres_exporter
 build:
.yamllint
@@ -20,9 +20,4 @@ rules:
       config/testdata/section_key_dup.bad.yml
   line-length: disable
   truthy:
-    ignore: |
-      .github/workflows/codeql-analysis.yml
-      .github/workflows/funcbench.yml
-      .github/workflows/fuzzing.yml
-      .github/workflows/prombench.yml
-      .github/workflows/golangci-lint.yml
+    check-keys: false
CHANGELOG.md (76 lines changed)
@@ -1,4 +1,78 @@
 ## master / unreleased
 
+## 0.14.0 / 2023-09-11
+
+* [CHANGE] Add `state` label to pg_process_idle_seconds #862
+* [CHANGE] Change database connections to one per scrape #882 #902
+* [ENHANCEMENT] Add wal collector #858
+* [ENHANCEMENT] Add database_wraparound collector #834
+* [ENHANCEMENT] Add stat_activity_autovacuum collector #840
+* [ENHANCEMENT] Add stat_wal_receiver collector #844
+* [ENHANCEMENT] Add xlog_location collector #849
+* [ENHANCEMENT] Add statio_user_indexes collector #845
+* [ENHANCEMENT] Add long_running_transactions collector #836
+* [ENHANCEMENT] Add pg_stat_user_tables_size_bytes metric #904
+* [BUGFIX] Fix tests on 32-bit systems #857
+* [BUGFIX] Fix pg_stat_statements metrics on Postgres 13+ #874 #876
+* [BUGFIX] Fix pg_stat_database metrics for NULL stats_reset #877
+* [BUGFIX] Fix pg_replication_lag_seconds on Postgres 10+ when master is idle #895
+
+## 0.13.2 / 2023-07-21
+
+* [BUGFIX] Fix type issues on pg_postmaster metrics #828
+* [BUGFIX] Fix pg_replication collector instantiation #854
+* [BUGFIX] Fix pg_process_idle metrics #855
+
+## 0.13.1 / 2023-06-27
+
+* [BUGFIX] Make collectors not fail on null values #823
+
+## 0.13.0 / 2023-06-21
+
+BREAKING CHANGES:
+
+Please note, the following features are deprecated and may be removed in a future release:
+- `auto-discover-databases`
+- `extend.query-path`
+- `constantLabels`
+- `exclude-databases`
+- `include-databases`
+
+This exporter is meant to monitor PostgresSQL servers, not the user data/databases. If
+you need a generic SQL report exporter https://github.com/burningalchemist/sql_exporter
+is recommended.
+
+* [CHANGE] Adjust log level for collector startup #784
+* [CHANGE] Move queries from queries.yaml to collectors #801
+* [CHANGE] Deprecate extend queries feature #811
+* [CHANGE] Deprecate additional database features #815
+* [CHANGE] Convert pg_stat_database to new collector #685
+* [ENHANCEMENT] Supports alternate postgres:// prefix in URLs #787
+* [BUGFIX] Fix pg_setting different help values #771
+* [BUGFIX] Fix column type for pg_replication_slots #777
+* [BUGFIX] Fix pg_stat_database collector #809
+
+## 0.12.1 / 2023-06-12
+
+* [BUGFIX] Fix column type for pg_replication_slots #777
+
+## 0.12.0 / 2023-03-21
+
+BREAKING CHANGES:
+
+This release changes support for multiple postgres servers to use the
+multi-target exporter pattern. This makes it much easier to monitor multiple
+PostgreSQL servers from a single exporter by passing the target via URL
+params. See the Multi-Target Support section of the README.
+
+* [CHANGE] Add multi-target support #618
+* [CHANGE] Add usename and application_name to pg_stat_activity metrics #673
+* [FEATURE] Add replication metrics from pg_replication_slots #747
+* [BUGFIX] Add dsn type for handling datasources #678
+* [BUGFIX] Add 64kB unit for postgres 15 #740
+* [BUGFIX] Add 4kB unit for postgres compiled with small blocks #699
+
+## 0.11.1 / 2022-08-01
+
+* [BUGFIX] Fix checkpoint_write_time value type #666
+* [BUGFIX] Fix checkpoint_sync_time value type #667
+
 ## 0.11.1 / 2022-08-01
Makefile.common
@@ -49,19 +49,19 @@ endif
 GOTEST := $(GO) test
 GOTEST_DIR :=
 ifneq ($(CIRCLE_JOB),)
-ifneq ($(shell which gotestsum),)
+ifneq ($(shell command -v gotestsum > /dev/null),)
 	GOTEST_DIR := test-results
 	GOTEST := gotestsum --junitfile $(GOTEST_DIR)/unit-tests.xml --
 endif
 endif
 
-PROMU_VERSION ?= 0.13.0
+PROMU_VERSION ?= 0.15.0
 PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_VERSION)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM).tar.gz
 
 SKIP_GOLANGCI_LINT ?= $(CI)
 GOLANGCI_LINT :=
 GOLANGCI_LINT_OPTS ?=
-GOLANGCI_LINT_VERSION ?= v1.45.2
+GOLANGCI_LINT_VERSION ?= v1.54.2
 # golangci-lint only supports linux, darwin and windows platforms on i386/amd64.
 # windows isn't included here because of the path separator being different.
 ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin))
@@ -91,6 +91,8 @@ BUILD_DOCKER_ARCHS = $(addprefix common-docker-,$(DOCKER_ARCHS))
 PUBLISH_DOCKER_ARCHS = $(addprefix common-docker-publish-,$(DOCKER_ARCHS))
 TAG_DOCKER_ARCHS = $(addprefix common-docker-tag-latest-,$(DOCKER_ARCHS))
 
+SANITIZED_DOCKER_IMAGE_TAG := $(subst +,-,$(DOCKER_IMAGE_TAG))
+
 ifeq ($(GOHOSTARCH),amd64)
 	ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux freebsd darwin windows))
 		# Only supported on amd64
@@ -165,7 +167,7 @@ endif
 .PHONY: common-yamllint
 common-yamllint:
 	@echo ">> running yamllint on all YAML files in the repository"
-ifeq (, $(shell which yamllint))
+ifeq (, $(shell command -v yamllint > /dev/null))
 	@echo "yamllint not installed so skipping"
 else
 	yamllint .
@@ -194,7 +196,7 @@ common-tarball: promu
 .PHONY: common-docker $(BUILD_DOCKER_ARCHS)
 common-docker: $(BUILD_DOCKER_ARCHS)
 $(BUILD_DOCKER_ARCHS): common-docker-%:
-	docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" \
+	docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" \
 		-f $(DOCKERFILE_PATH) \
 		--build-arg ARCH="$*" \
 		--build-arg OS="linux" \
@@ -203,19 +205,19 @@ $(BUILD_DOCKER_ARCHS): common-docker-%:
 .PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS)
 common-docker-publish: $(PUBLISH_DOCKER_ARCHS)
 $(PUBLISH_DOCKER_ARCHS): common-docker-publish-%:
-	docker push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)"
+	docker push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)"
 
 DOCKER_MAJOR_VERSION_TAG = $(firstword $(subst ., ,$(shell cat VERSION)))
 .PHONY: common-docker-tag-latest $(TAG_DOCKER_ARCHS)
 common-docker-tag-latest: $(TAG_DOCKER_ARCHS)
 $(TAG_DOCKER_ARCHS): common-docker-tag-latest-%:
-	docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest"
-	docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:v$(DOCKER_MAJOR_VERSION_TAG)"
+	docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest"
+	docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:v$(DOCKER_MAJOR_VERSION_TAG)"
 
 .PHONY: common-docker-manifest
 common-docker-manifest:
-	DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(DOCKER_IMAGE_TAG))
-	DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)"
+	DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(SANITIZED_DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(SANITIZED_DOCKER_IMAGE_TAG))
+	DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(SANITIZED_DOCKER_IMAGE_TAG)"
 
 .PHONY: promu
 promu: $(PROMU)
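The `SANITIZED_DOCKER_IMAGE_TAG` introduced above replaces `+` with `-` because Docker image tags may not contain `+`, which shows up in semver build-metadata suffixes. A minimal shell sketch of the same substitution, using a hypothetical tag value:

```sh
# Hypothetical tag carrying semver build metadata; Docker tags disallow '+'.
tag='v0.14.0+pmm'
# Bash pattern substitution mirrors the Makefile's $(subst +,-,$(DOCKER_IMAGE_TAG)).
echo "${tag//+/-}"   # prints: v0.14.0-pmm
```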
README.md (152 lines changed)
@@ -7,7 +7,7 @@
 
 Prometheus exporter for PostgreSQL server metrics.
 
-CI Tested PostgreSQL versions: `9.4`, `9.5`, `9.6`, `10`, `11`, `12`, `13`, `14`
+CI Tested PostgreSQL versions: `10`, `11`, `12`, `13`, `14`, `15`
 
 ## Quick Start
 This package is available for Docker:
@@ -21,6 +21,56 @@ docker run \
   quay.io/prometheuscommunity/postgres-exporter
 ```
 
+## Multi-Target Support (BETA)
+**This Feature is in beta and may require changes in future releases. Feedback is welcome.**
+
+This exporter supports the [multi-target pattern](https://prometheus.io/docs/guides/multi-target-exporter/). This allows running a single instance of this exporter for multiple postgres targets. Using the multi-target functionality of this exporter is **optional** and meant for cases where it is impossible to install the exporter as a sidecar, for example SaaS-managed services.
+
+To use the multi-target functionality, send an http request to the endpoint `/probe?target=foo:5432` where target is set to the DSN of the postgres instance to scrape metrics from.
+
+To avoid putting sensitive information like username and password in the URL, preconfigured auth modules are supported via the [auth_modules](#auth_modules) section of the config file. auth_modules for DSNs can be used with the `/probe` endpoint by specifying the `?auth_module=foo` http parameter.
+
+Example Prometheus config:
+```yaml
+scrape_configs:
+  - job_name: 'postgres'
+    static_configs:
+      - targets:
+        - server1:5432
+        - server2:5432
+    metrics_path: /probe
+    params:
+      auth_module: [foo]
+    relabel_configs:
+      - source_labels: [__address__]
+        target_label: __param_target
+      - source_labels: [__param_target]
+        target_label: instance
+      - target_label: __address__
+        replacement: 127.0.0.1:9116 # The postgres exporter's real hostname:port.
+```
+
+## Configuration File
+
+The configuration file controls the behavior of the exporter. It can be set using the `--config.file` command line flag and defaults to `postgres_exporter.yml`.
+
+### auth_modules
+This section defines preset authentication and connection parameters for use in the [multi-target endpoint](#multi-target-support-beta). `auth_modules` is a map of modules with the key being the identifier which can be used in the `/probe` endpoint.
+Currently only the `userpass` type is supported.
+
+Example:
+```yaml
+auth_modules:
+  foo1: # Set this to any name you want
+    type: userpass
+    userpass:
+      username: first
+      password: firstpass
+    options:
+      # options become key=value parameters of the DSN
+      sslmode: disable
+```
 
 ## Building and running
 
     git clone https://github.com/prometheus-community/postgres_exporter.git
@@ -31,7 +81,7 @@ docker run \
 To build the Docker image:
 
     make promu
-    promu crossbuild -p linux/amd64 -p linux/armv7 -p linux/amd64 -p linux/ppc64le
+    promu crossbuild -p linux/amd64 -p linux/armv7 -p linux/arm64 -p linux/ppc64le
     make docker
 
 This will build the docker image as `prometheuscommunity/postgres_exporter:${branch}`.
@@ -41,15 +91,74 @@ This will build the docker image as `prometheuscommunity/postgres_exporter:${branch}`.
 * `help`
   Show context-sensitive help (also try --help-long and --help-man).
 
-* `collector.database`
-  Enable the pg_database collector. Default is `enabled`
-
-* `collector.bgwriter`
-  Enable the pg_stat_bgwriter collector. Default is `enabled`
+* `[no-]collector.database`
+  Enable the `database` collector (default: enabled).
+
+* `[no-]collector.database_wraparound`
+  Enable the `database_wraparound` collector (default: disabled).
+
+* `[no-]collector.locks`
+  Enable the `locks` collector (default: enabled).
+
+* `[no-]collector.long_running_transactions`
+  Enable the `long_running_transactions` collector (default: disabled).
+
+* `[no-]collector.postmaster`
+  Enable the `postmaster` collector (default: disabled).
+
+* `[no-]collector.process_idle`
+  Enable the `process_idle` collector (default: disabled).
+
+* `[no-]collector.replication`
+  Enable the `replication` collector (default: enabled).
+
+* `[no-]collector.replication_slot`
+  Enable the `replication_slot` collector (default: enabled).
+
+* `[no-]collector.stat_activity_autovacuum`
+  Enable the `stat_activity_autovacuum` collector (default: disabled).
+
+* `[no-]collector.stat_bgwriter`
+  Enable the `stat_bgwriter` collector (default: enabled).
+
+* `[no-]collector.stat_database`
+  Enable the `stat_database` collector (default: enabled).
+
+* `[no-]collector.stat_statements`
+  Enable the `stat_statements` collector (default: disabled).
+
+* `[no-]collector.stat_user_tables`
+  Enable the `stat_user_tables` collector (default: enabled).
+
+* `[no-]collector.stat_wal_receiver`
+  Enable the `stat_wal_receiver` collector (default: disabled).
+
+* `[no-]collector.statio_user_indexes`
+  Enable the `statio_user_indexes` collector (default: disabled).
+
+* `[no-]collector.statio_user_tables`
+  Enable the `statio_user_tables` collector (default: enabled).
+
+* `[no-]collector.wal`
+  Enable the `wal` collector (default: enabled).
+
+* `[no-]collector.xlog_location`
+  Enable the `xlog_location` collector (default: disabled).
+
+* `config.file`
+  Set the config file path. Default is `postgres_exporter.yml`
+
+* `web.systemd-socket`
+  Use systemd socket activation listeners instead of port listeners (Linux only). Default is `false`
 
 * `web.listen-address`
   Address to listen on for web interface and telemetry. Default is `:9187`.
 
+* `web.config.file`
+  Configuration file to use TLS and/or basic authentication. The format of the
+  file is described [in the exporter-toolkit repository](https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md).
+
 * `web.telemetry-path`
   Path under which to expose metrics. Default is `/metrics`.
 
@@ -59,10 +168,10 @@ This will build the docker image as `prometheuscommunity/postgres_exporter:${branch}`.
 * `disable-settings-metrics`
   Use the flag if you don't want to scrape `pg_settings`. Default is `false`.
 
-* `auto-discover-databases`
+* `auto-discover-databases` (DEPRECATED)
   Whether to discover the databases on a server dynamically. Default is `false`.
 
-* `extend.query-path`
+* `extend.query-path` (DEPRECATED)
   Path to a YAML file containing custom queries to run. Check out [`queries.yaml`](queries.yaml)
   for examples of the format.
 
@@ -70,16 +179,16 @@ This will build the docker image as `prometheuscommunity/postgres_exporter:${branch}`.
   Do not run - print the internal representation of the metric maps. Useful when debugging a custom
   queries file.
 
-* `constantLabels`
+* `constantLabels` (DEPRECATED)
   Labels to set in all metrics. A list of `label=value` pairs, separated by commas.
 
 * `version`
   Show application version.
 
-* `exclude-databases`
+* `exclude-databases` (DEPRECATED)
  A list of databases to remove when autoDiscoverDatabases is enabled.
 
-* `include-databases`
+* `include-databases` (DEPRECATED)
  A list of databases to only include when autoDiscoverDatabases is enabled.
 
 * `log.level`
@@ -88,10 +197,6 @@ This will build the docker image as `prometheuscommunity/postgres_exporter:${branch}`.
 * `log.format`
   Set the log format: one of `logfmt`, `json`.
 
-* `web.config.file`
-  Configuration file to use TLS and/or basic authentication. The format of the
-  file is described [in the exporter-toolkit repository](https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md).
-
 ### Environment Variables
 
 The following environment variables configure the exporter:
@@ -122,9 +227,6 @@ The following environment variables configure the exporter:
 * `DATA_SOURCE_PASS_FILE`
   The same as above but reads the password from a file.
 
-* `PG_EXPORTER_WEB_LISTEN_ADDRESS`
-  Address to listen on for web interface and telemetry. Default is `:9187`.
-
 * `PG_EXPORTER_WEB_TELEMETRY_PATH`
   Path under which to expose metrics. Default is `/metrics`.
 
@@ -134,20 +236,20 @@ The following environment variables configure the exporter:
 * `PG_EXPORTER_DISABLE_SETTINGS_METRICS`
   Use the flag if you don't want to scrape `pg_settings`. Value can be `true` or `false`. Default is `false`.
 
-* `PG_EXPORTER_AUTO_DISCOVER_DATABASES`
+* `PG_EXPORTER_AUTO_DISCOVER_DATABASES` (DEPRECATED)
   Whether to discover the databases on a server dynamically. Value can be `true` or `false`. Default is `false`.
 
 * `PG_EXPORTER_EXTEND_QUERY_PATH`
   Path to a YAML file containing custom queries to run. Check out [`queries.yaml`](queries.yaml)
   for examples of the format.
 
-* `PG_EXPORTER_CONSTANT_LABELS`
+* `PG_EXPORTER_CONSTANT_LABELS` (DEPRECATED)
   Labels to set in all metrics. A list of `label=value` pairs, separated by commas.
 
-* `PG_EXPORTER_EXCLUDE_DATABASES`
+* `PG_EXPORTER_EXCLUDE_DATABASES` (DEPRECATED)
   A comma-separated list of databases to remove when autoDiscoverDatabases is enabled. Default is empty string.
 
-* `PG_EXPORTER_INCLUDE_DATABASES`
+* `PG_EXPORTER_INCLUDE_DATABASES` (DEPRECATED)
   A comma-separated list of databases to only include when autoDiscoverDatabases is enabled. Default is empty string,
   means allow all.
 
@@ -186,7 +288,9 @@ for l in StringIO(x):
 Adjust the value of the resultant prometheus value type appropriately. This helps build
 rich self-documenting metrics for the exporter.
 
-### Adding new metrics via a config file
+### Adding new metrics via a config file (DEPRECATED)
+
+This feature is deprecated in favor of built-in collector functions. For generic SQL database monitoring see the [sql_exporter](https://github.com/burningalchemist/sql_exporter).
 
 The -extend.query-path command-line argument specifies a YAML file containing additional queries to run.
 Some examples are provided in [queries.yaml](queries.yaml).
@@ -197,7 +301,7 @@ or variants of postgres (e.g. Greenplum), you can disable the default metrics with the --disable-default-metrics
 flag. This removes all built-in metrics, and uses only metrics defined by queries in the `queries.yaml` file you supply
 (so you must supply one, otherwise the exporter will return nothing but internal statuses and not your database).
 
-### Automatically discover databases
+### Automatically discover databases (DEPRECATED)
 To scrape metrics from all databases on a database server, the database DSN's can be dynamically discovered via the
 `--auto-discover-databases` flag. When true, `SELECT datname FROM pg_database WHERE datallowconn = true AND datistemplate = false and datname != current_database()` is run for all configured DSN's. From the
 result a new set of DSN's is created for which the metrics are scraped.
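The Multi-Target section added to the README above can also be exercised by hand. A minimal sketch, assuming the exporter listens on its default `:9187` and reusing the hypothetical `server1:5432` target and `foo1` auth module from the README examples:

```sh
# Scrape one target ad hoc; 'target' is the DSN of the postgres instance.
curl 'http://localhost:9187/probe?target=server1:5432'

# Same request, resolving credentials via a preconfigured auth module
# instead of embedding username/password in the URL.
curl 'http://localhost:9187/probe?target=server1:5432&auth_module=foo1'
```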
cmd/postgres_exporter/datasource.go
@@ -15,7 +15,6 @@ package main
 
 import (
 	"fmt"
-	"io/ioutil"
 	"net/url"
 	"os"
 	"regexp"
@@ -50,7 +49,7 @@ func (e *Exporter) discoverDatabaseDSNs() []string {
 			continue
 		}
 
-		server, err := e.servers.GetServer(dsn)
+		server, err := e.servers.GetServer(dsn, e.resolutionEnabled)
 		if err != nil {
 			level.Error(logger).Log("msg", "Error opening connection to database", "dsn", loggableDSN(dsn), "err", err)
 			continue
@@ -101,7 +100,7 @@ func (e *Exporter) discoverDatabaseDSNs() []string {
 }
 
 func (e *Exporter) scrapeDSN(ch chan<- prometheus.Metric, dsn string) error {
-	server, err := e.servers.GetServer(dsn)
+	server, err := e.servers.GetServer(dsn, e.resolutionEnabled)
 
 	if err != nil {
 		return &ErrorConnectToServer{fmt.Sprintf("Error opening connection to database (%s): %s", loggableDSN(dsn), err.Error())}
@@ -134,7 +133,7 @@ func getDataSources() ([]string, error) {
 
 	dataSourceUserFile := os.Getenv("DATA_SOURCE_USER_FILE")
 	if len(dataSourceUserFile) != 0 {
-		fileContents, err := ioutil.ReadFile(dataSourceUserFile)
+		fileContents, err := os.ReadFile(dataSourceUserFile)
 		if err != nil {
 			return nil, fmt.Errorf("failed loading data source user file %s: %s", dataSourceUserFile, err.Error())
 		}
@@ -145,7 +144,7 @@ func getDataSources() ([]string, error) {
 
 	dataSourcePassFile := os.Getenv("DATA_SOURCE_PASS_FILE")
 	if len(dataSourcePassFile) != 0 {
-		fileContents, err := ioutil.ReadFile(dataSourcePassFile)
+		fileContents, err := os.ReadFile(dataSourcePassFile)
 		if err != nil {
 			return nil, fmt.Errorf("failed loading data source pass file %s: %s", dataSourcePassFile, err.Error())
 		}
@@ -157,7 +156,7 @@ func getDataSources() ([]string, error) {
 	ui := url.UserPassword(user, pass).String()
 	dataSrouceURIFile := os.Getenv("DATA_SOURCE_URI_FILE")
 	if len(dataSrouceURIFile) != 0 {
-		fileContents, err := ioutil.ReadFile(dataSrouceURIFile)
+		fileContents, err := os.ReadFile(dataSrouceURIFile)
 		if err != nil {
 			return nil, fmt.Errorf("failed loading data source URI file %s: %s", dataSrouceURIFile, err.Error())
 		}
@@ -166,6 +165,12 @@ func getDataSources() ([]string, error) {
 		uri = os.Getenv("DATA_SOURCE_URI")
 	}
 
+	// No datasources found. This allows us to support the multi-target pattern
+	// without an explicit datasource.
+	if uri == "" {
+		return []string{}, nil
+	}
+
 	dsn = "postgresql://" + ui + "@" + uri
 
 	return []string{dsn}, nil
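The early return added to `getDataSources` above makes an empty datasource list valid rather than an error, which is what allows running the exporter purely in multi-target mode. A minimal sketch of that mode, assuming a config file that defines auth modules and no `DATA_SOURCE_*` variables in the environment:

```sh
# With no DATA_SOURCE_NAME/DATA_SOURCE_URI set, getDataSources() returns an
# empty list and the exporter starts anyway, serving only /probe targets.
./postgres_exporter --config.file=postgres_exporter.yml
```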
cmd/postgres_exporter/main.go
@@ -17,12 +17,15 @@ import (
 	"fmt"
 	"net/http"
 	"os"
 	"strings"
 
 	_ "net/http/pprof"
 
+	"github.com/alecthomas/kingpin/v2"
 	"github.com/go-kit/log"
 	"github.com/go-kit/log/level"
 	"github.com/prometheus-community/postgres_exporter/collector"
+	"github.com/prometheus-community/postgres_exporter/config"
 	"github.com/prometheus/client_golang/prometheus"
+	"github.com/prometheus/client_golang/prometheus/collectors"
 	"github.com/prometheus/client_golang/prometheus/promhttp"
@@ -30,27 +33,31 @@ import (
 	"github.com/prometheus/common/promlog/flag"
 	"github.com/prometheus/common/version"
 	"github.com/prometheus/exporter-toolkit/web"
-	webflag "github.com/prometheus/exporter-toolkit/web/kingpinflag"
-	"gopkg.in/alecthomas/kingpin.v2"
+	"github.com/prometheus/exporter-toolkit/web/kingpinflag"
 )
 
 var (
-	listenAddress = kingpin.Flag("web.listen-address", "Address to listen on for web interface and telemetry.").Default(":9187").Envar("PG_EXPORTER_WEB_LISTEN_ADDRESS").String()
-	webConfig     = webflag.AddFlags(kingpin.CommandLine)
+	c = config.Handler{
+		Config: &config.Config{},
+	}
+
+	configFile = kingpin.Flag("config.file", "Postgres exporter configuration file.").Default("postgres_exporter.yml").String()
+	webConfig  = kingpinflag.AddFlags(kingpin.CommandLine, ":9187")
 	webConfigFile = kingpin.Flag(
 		"web.config",
 		"[EXPERIMENTAL] Path to config yaml file that can enable TLS or authentication.",
-	).Default("").String()
-	metricPath = kingpin.Flag("web.telemetry-path", "Path under which to expose metrics.").Default("/metrics").Envar("PG_EXPORTER_WEB_TELEMETRY_PATH").String()
+	).Default("").String() // added for compatibility reasons to not break it in PMM 2.
+	metricsPath = kingpin.Flag("web.telemetry-path", "Path under which to expose metrics.").Default("/metrics").Envar("PG_EXPORTER_WEB_TELEMETRY_PATH").String()
 	disableDefaultMetrics  = kingpin.Flag("disable-default-metrics", "Do not include default metrics.").Default("false").Envar("PG_EXPORTER_DISABLE_DEFAULT_METRICS").Bool()
 	disableSettingsMetrics = kingpin.Flag("disable-settings-metrics", "Do not include pg_settings metrics.").Default("false").Envar("PG_EXPORTER_DISABLE_SETTINGS_METRICS").Bool()
-	autoDiscoverDatabases  = kingpin.Flag("auto-discover-databases", "Whether to discover the databases on a server dynamically.").Default("false").Envar("PG_EXPORTER_AUTO_DISCOVER_DATABASES").Bool()
-	onlyDumpMaps           = kingpin.Flag("dumpmaps", "Do not run, simply dump the maps.").Bool()
-	constantLabelsList     = kingpin.Flag("constantLabels", "A list of label=value separated by comma(,).").Default("").Envar("PG_EXPORTER_CONSTANT_LABELS").String()
-	excludeDatabases       = kingpin.Flag("exclude-databases", "A list of databases to remove when autoDiscoverDatabases is enabled").Default("").Envar("PG_EXPORTER_EXCLUDE_DATABASES").String()
-	includeDatabases       = kingpin.Flag("include-databases", "A list of databases to include when autoDiscoverDatabases is enabled").Default("").Envar("PG_EXPORTER_INCLUDE_DATABASES").String()
-	metricPrefix           = kingpin.Flag("metric-prefix", "A metric prefix can be used to have non-default (not \"pg\") prefixes for each of the metrics").Default("pg").Envar("PG_EXPORTER_METRIC_PREFIX").String()
-	logger                 = log.NewNopLogger()
+	autoDiscoverDatabases = kingpin.Flag("auto-discover-databases", "Whether to discover the databases on a server dynamically. (DEPRECATED)").Default("false").Envar("PG_EXPORTER_AUTO_DISCOVER_DATABASES").Bool()
+	//queriesPath = kingpin.Flag("extend.query-path", "Path to custom queries to run. (DEPRECATED)").Default("").Envar("PG_EXPORTER_EXTEND_QUERY_PATH").String()
+	onlyDumpMaps       = kingpin.Flag("dumpmaps", "Do not run, simply dump the maps.").Bool()
+	constantLabelsList = kingpin.Flag("constantLabels", "A list of label=value separated by comma(,). (DEPRECATED)").Default("").Envar("PG_EXPORTER_CONSTANT_LABELS").String()
+	excludeDatabases   = kingpin.Flag("exclude-databases", "A list of databases to remove when autoDiscoverDatabases is enabled (DEPRECATED)").Default("").Envar("PG_EXPORTER_EXCLUDE_DATABASES").String()
+	includeDatabases   = kingpin.Flag("include-databases", "A list of databases to include when autoDiscoverDatabases is enabled (DEPRECATED)").Default("").Envar("PG_EXPORTER_INCLUDE_DATABASES").String()
+	metricPrefix       = kingpin.Flag("metric-prefix", "A metric prefix can be used to have non-default (not \"pg\") prefixes for each of the metrics").Default("pg").Envar("PG_EXPORTER_METRIC_PREFIX").String()
|
||||
logger = log.NewNopLogger()
|
||||
)
|
||||
|
||||
// Metric name parts.
|
||||
@ -73,46 +80,55 @@ func main() {
|
||||
promlogConfig := &promlog.Config{}
|
||||
flag.AddFlags(kingpin.CommandLine, promlogConfig)
|
||||
kingpin.HelpFlag.Short('h')
|
||||
webConfig.WebConfigFile = webConfigFile
|
||||
kingpin.Parse()
|
||||
logger = promlog.New(promlogConfig)
|
||||
|
||||
// landingPage contains the HTML served at '/'.
|
||||
// TODO: Make this nicer and more informative.
|
||||
var landingPage = []byte(`<html>
|
||||
<head><title>Postgres exporter</title></head>
|
||||
<body>
|
||||
<h1>Postgres exporter</h1>
|
||||
<p><a href='` + *metricPath + `'>Metrics</a></p>
|
||||
</body>
|
||||
</html>
|
||||
`)
|
||||
|
||||
if *onlyDumpMaps {
|
||||
dumpMaps()
|
||||
return
|
||||
}
|
||||
|
||||
dsn, err := getDataSources()
|
||||
if err := c.ReloadConfig(*configFile, logger); err != nil {
|
||||
// This is not fatal, but it means that auth must be provided for every dsn.
|
||||
level.Warn(logger).Log("msg", "Error loading config", "err", err)
|
||||
}
|
||||
|
||||
dsns, err := getDataSources()
|
||||
if err != nil {
|
||||
level.Error(logger).Log("msg", "Failed reading data sources", "err", err.Error())
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
if len(dsn) == 0 {
|
||||
level.Error(logger).Log("msg", "Couldn't find environment variables describing the datasource to use")
|
||||
os.Exit(1)
|
||||
excludedDatabases := strings.Split(*excludeDatabases, ",")
|
||||
logger.Log("msg", "Excluded databases", "databases", fmt.Sprintf("%v", excludedDatabases))
|
||||
|
||||
//if *queriesPath != "" {
|
||||
// level.Warn(logger).Log("msg", "The extended queries.yaml config is DEPRECATED", "file", *queriesPath)
|
||||
//}
|
||||
|
||||
if *autoDiscoverDatabases || *excludeDatabases != "" || *includeDatabases != "" {
|
||||
level.Warn(logger).Log("msg", "Scraping additional databases via auto discovery is DEPRECATED")
|
||||
}
|
||||
|
||||
if *constantLabelsList != "" {
|
||||
level.Warn(logger).Log("msg", "Constant labels on all metrics is DEPRECATED")
|
||||
}
|
||||
|
||||
servers := NewServers(ServerWithLabels(parseConstLabels(*constantLabelsList)))
|
||||
|
||||
opts := []ExporterOpt{
|
||||
CollectorName("exporter"),
|
||||
DisableDefaultMetrics(*disableDefaultMetrics),
|
||||
DisableSettingsMetrics(*disableSettingsMetrics),
|
||||
AutoDiscoverDatabases(*autoDiscoverDatabases),
|
||||
WithConstantLabels(*constantLabelsList),
|
||||
ExcludeDatabases(*excludeDatabases),
|
||||
WithServers(servers),
|
||||
ExcludeDatabases(excludedDatabases),
|
||||
IncludeDatabases(*includeDatabases),
|
||||
}
|
||||
|
||||
exporter := NewExporter(dsn, opts...)
|
||||
exporter := NewExporter(dsns, opts...)
|
||||
defer func() {
|
||||
exporter.servers.Close()
|
||||
}()
|
||||
@ -122,19 +138,26 @@ func main() {
|
||||
|
||||
prometheus.MustRegister(exporter)
|
||||
|
||||
cleanup, hr, mr, lr := initializePerconaExporters(dsn, opts)
|
||||
// TODO(@sysadmind): Remove this with multi-target support. We are removing multiple DSN support
|
||||
dsn := ""
|
||||
if len(dsns) > 0 {
|
||||
dsn = dsns[0]
|
||||
}
|
||||
|
||||
cleanup, hr, mr, lr := initializePerconaExporters(dsns, servers)
|
||||
defer cleanup()
|
||||
|
||||
pe, err := collector.NewPostgresCollector(
|
||||
logger,
|
||||
excludedDatabases,
|
||||
dsn,
|
||||
[]string{},
|
||||
)
|
||||
if err != nil {
|
||||
level.Error(logger).Log("msg", "Failed to create PostgresCollector", "err", err.Error())
|
||||
os.Exit(1)
|
||||
level.Warn(logger).Log("msg", "Failed to create PostgresCollector", "err", err.Error())
|
||||
} else {
|
||||
prometheus.MustRegister(pe)
|
||||
}
|
||||
prometheus.MustRegister(pe)
|
||||
|
||||
psCollector := collectors.NewProcessCollector(collectors.ProcessCollectorOpts{})
|
||||
goCollector := collectors.NewGoCollector()
|
||||
@ -150,24 +173,33 @@ func main() {
|
||||
"postgres": pe,
|
||||
})
|
||||
|
||||
http.Handle(*metricPath, promHandler)
|
||||
http.Handle(*metricsPath, promHandler)
|
||||
|
||||
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "text/html; charset=UTF-8") // nolint: errcheck
|
||||
w.Write(landingPage) // nolint: errcheck
|
||||
})
|
||||
|
||||
var webCfg string
|
||||
if *webConfigFile != "" {
|
||||
webCfg = *webConfigFile
|
||||
}
|
||||
if *webConfig != "" {
|
||||
webCfg = *webConfig
|
||||
if *metricsPath != "/" && *metricsPath != "" {
|
||||
landingConfig := web.LandingConfig{
|
||||
Name: "Postgres Exporter",
|
||||
Description: "Prometheus PostgreSQL server Exporter",
|
||||
Version: version.Info(),
|
||||
Links: []web.LandingLinks{
|
||||
{
|
||||
Address: *metricsPath,
|
||||
Text: "Metrics",
|
||||
},
|
||||
},
|
||||
}
|
||||
landingPage, err := web.NewLandingPage(landingConfig)
|
||||
if err != nil {
|
||||
level.Error(logger).Log("err", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
http.Handle("/", landingPage)
|
||||
}
|
||||
|
||||
level.Info(logger).Log("msg", "Listening on address", "address", *listenAddress)
|
||||
srv := &http.Server{Addr: *listenAddress}
|
||||
if err := web.ListenAndServe(srv, webCfg, logger); err != nil {
|
||||
http.HandleFunc("/probe", handleProbe(logger, excludedDatabases))
|
||||
|
||||
level.Info(logger).Log("msg", "Listening on address", "address", *webConfig.WebListenAddresses)
|
||||
srv := &http.Server{}
|
||||
if err := web.ListenAndServe(srv, webConfig, logger); err != nil {
|
||||
level.Error(logger).Log("msg", "Error running HTTP server", "err", err)
|
||||
os.Exit(1)
|
||||
}
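The new /probe handler registered above follows the usual Prometheus multi-target pattern. A sketch of a matching scrape configuration (hostnames and the auth_module name are placeholders, and the relabeling mirrors the standard blackbox-exporter setup rather than anything specific to this diff):

scrape_configs:
  - job_name: postgres
    metrics_path: /probe
    params:
      auth_module: [foo]   # must match an entry under auth_modules in --config.file
    static_configs:
      - targets: [db1.example.com:5432/postgres]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9187   # the exporter's own address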
@@ -19,7 +19,7 @@ import (
    "fmt"
    "time"

    "github.com/blang/semver"
    "github.com/blang/semver/v4"
    "github.com/go-kit/log/level"
    "github.com/lib/pq"
    "github.com/prometheus/client_golang/prometheus"
@@ -2,21 +2,25 @@ package main

import (
    "crypto/sha256"
    "database/sql"
    "fmt"
    "github.com/blang/semver"
    "github.com/go-kit/log/level"
    "github.com/prometheus/client_golang/prometheus"
    "gopkg.in/alecthomas/kingpin.v2"
    "io/ioutil"
    "path/filepath"
    "strings"

    "github.com/alecthomas/kingpin/v2"
    "github.com/blang/semver/v4"
    "github.com/go-kit/log/level"
    "github.com/prometheus/client_golang/prometheus"
)

type MetricResolution string

const (
    LR       MetricResolution = "lr"
    MR       MetricResolution = "mr"
    HR       MetricResolution = "hr"
    DISABLED MetricResolution = ""
    LR MetricResolution = "lr"
    MR MetricResolution = "mr"
    HR MetricResolution = "hr"
)
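The lr/mr/hr values select which custom-query directory an exporter instance loads (low, medium, and high resolution, matching PMM's scrape-interval tiers). A hedged sketch of building one resolution-specific exporter with the options this diff introduces (the DSN and directory path are placeholders):

// Sketch, within package main of this repo.
dsns := []string{"postgresql://user:pass@localhost:5432/postgres"} // placeholder
hrExporter := NewExporter(dsns,
    CollectorName("custom_query.hr"),
    DisableDefaultMetrics(true),
    DisableSettingsMetrics(true),
    WithUserQueriesResolutionEnabled(HR), // load only the HR query directory
    WithEnabled(true),                    // Collect() is a no-op when false
    WithUserQueriesPath(map[MetricResolution]string{HR: "/etc/pg_exporter/queries-hr"}),
)
prometheus.MustRegister(hrExporter)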
var (
@@ -28,71 +32,53 @@ var (
    collectCustomQueryHrDirectory = kingpin.Flag("collect.custom_query.hr.directory", "Path to custom queries with high resolution directory.").Envar("PG_EXPORTER_EXTEND_QUERY_HR_PATH").String()
)

func initializePerconaExporters(dsn []string, opts []ExporterOpt) (func(), *Exporter, *Exporter, *Exporter) {
func initializePerconaExporters(dsn []string, servers *Servers) (func(), *Exporter, *Exporter, *Exporter) {
    queriesPath := map[MetricResolution]string{
        HR: *collectCustomQueryHrDirectory,
        MR: *collectCustomQueryMrDirectory,
        LR: *collectCustomQueryLrDirectory,
    }

    defaultOpts := []ExporterOpt{CollectorName("exporter")}
    defaultOpts = append(defaultOpts, opts...)
    defaultExporter := NewExporter(
        dsn,
        defaultOpts...,
    )
    prometheus.MustRegister(defaultExporter)

    hrExporter := NewExporter(dsn,
        CollectorName("custom_query.hr"),
    excludedDatabases := strings.Split(*excludeDatabases, ",")
    opts := []ExporterOpt{
        DisableDefaultMetrics(true),
        DisableSettingsMetrics(true),
        AutoDiscoverDatabases(*autoDiscoverDatabases),
        WithUserQueriesEnabled(map[MetricResolution]bool{
            HR: *collectCustomQueryHr,
            MR: false,
            LR: false,
        }),
        WithServers(servers),
        WithUserQueriesPath(queriesPath),
        WithConstantLabels(*constantLabelsList),
        ExcludeDatabases(*excludeDatabases),
        ExcludeDatabases(excludedDatabases),
    }
    hrExporter := NewExporter(dsn,
        append(opts,
            CollectorName("custom_query.hr"),
            WithUserQueriesResolutionEnabled(HR),
            WithEnabled(*collectCustomQueryHr),
            WithConstantLabels(*constantLabelsList),
        )...,
    )
    prometheus.MustRegister(hrExporter)

    mrExporter := NewExporter(dsn,
        CollectorName("custom_query.mr"),
        DisableDefaultMetrics(true),
        DisableSettingsMetrics(true),
        AutoDiscoverDatabases(*autoDiscoverDatabases),
        WithUserQueriesEnabled(map[MetricResolution]bool{
            HR: false,
            MR: *collectCustomQueryMr,
            LR: false,
        }),
        WithUserQueriesPath(queriesPath),
        WithConstantLabels(*constantLabelsList),
        ExcludeDatabases(*excludeDatabases),
        append(opts,
            CollectorName("custom_query.mr"),
            WithUserQueriesResolutionEnabled(MR),
            WithEnabled(*collectCustomQueryMr),
            WithConstantLabels(*constantLabelsList),
        )...,
    )
    prometheus.MustRegister(mrExporter)

    lrExporter := NewExporter(dsn,
        CollectorName("custom_query.lr"),
        DisableDefaultMetrics(true),
        DisableSettingsMetrics(true),
        AutoDiscoverDatabases(*autoDiscoverDatabases),
        WithUserQueriesEnabled(map[MetricResolution]bool{
            HR: false,
            MR: false,
            LR: *collectCustomQueryLr,
        }),
        WithUserQueriesPath(queriesPath),
        WithConstantLabels(*constantLabelsList),
        ExcludeDatabases(*excludeDatabases),
        append(opts,
            CollectorName("custom_query.lr"),
            WithUserQueriesResolutionEnabled(LR),
            WithEnabled(*collectCustomQueryLr),
            WithConstantLabels(*constantLabelsList),
        )...,
    )
    prometheus.MustRegister(lrExporter)

    return func() {
        defaultExporter.servers.Close()
        hrExporter.servers.Close()
        mrExporter.servers.Close()
        lrExporter.servers.Close()
@@ -107,6 +93,7 @@ func (e *Exporter) loadCustomQueries(res MetricResolution, version semver.Versio
            "err", err)
        return
    }
    level.Debug(logger).Log("msg", fmt.Sprintf("reading dir %q for custom query", e.userQueriesPath[res]))

    for _, v := range fi {
        if v.IsDir() {
@@ -141,3 +128,21 @@ func (e *Exporter) addCustomQueriesFromFile(path string, version semver.Version,
    // Mark user queries as successfully loaded
    e.userQueriesError.WithLabelValues(path, hashsumStr).Set(0)
}

// NewDB establishes a new connection using DSN.
func NewDB(dsn string) (*sql.DB, error) {
    fingerprint, err := parseFingerprint(dsn)
    if err != nil {
        return nil, err
    }

    db, err := sql.Open("postgres", dsn)
    if err != nil {
        return nil, err
    }
    db.SetMaxOpenConns(1)
    db.SetMaxIdleConns(1)

    level.Info(logger).Log("msg", "Established new database connection", "fingerprint", fingerprint)
    return db, nil
}
@@ -70,7 +70,7 @@ func (s *pgSetting) metric(labels prometheus.Labels) prometheus.Metric {
        err       error
        name      = strings.Replace(s.name, ".", "_", -1)
        unit      = s.unit // nolint: ineffassign
        shortDesc = s.shortDesc
        shortDesc = fmt.Sprintf("Server Parameter: %s", s.name)
        subsystem = "settings"
        val       float64
    )
@@ -129,7 +129,7 @@ func (s *pgSetting) normaliseUnit() (val float64, unit string, err error) {
        return
    case "ms", "s", "min", "h", "d":
        unit = "seconds"
    case "B", "kB", "MB", "GB", "TB", "8kB", "16kB", "32kB", "16MB", "32MB", "64MB":
    case "B", "kB", "MB", "GB", "TB", "4kB", "8kB", "16kB", "32kB", "64kB", "16MB", "32MB", "64MB":
        unit = "bytes"
    default:
        err = fmt.Errorf("Unknown unit for runtime variable: %q", s.unit)
@@ -158,12 +158,16 @@ func (s *pgSetting) normaliseUnit() (val float64, unit string, err error) {
        val *= math.Pow(2, 30)
    case "TB":
        val *= math.Pow(2, 40)
    case "4kB":
        val *= math.Pow(2, 12)
    case "8kB":
        val *= math.Pow(2, 13)
    case "16kB":
        val *= math.Pow(2, 14)
    case "32kB":
        val *= math.Pow(2, 15)
    case "64kB":
        val *= math.Pow(2, 16)
    case "16MB":
        val *= math.Pow(2, 24)
    case "32MB":
@@ -40,7 +40,7 @@ var fixtures = []fixture{
            unit: "seconds",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_seconds_fixture_metric_seconds", help: "Foo foo foo [Units converted to seconds.]", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_seconds_fixture_metric_seconds", help: "Server Parameter: seconds_fixture_metric [Units converted to seconds.]", constLabels: {}, variableLabels: []}`,
        v: 5,
    },
    {
@@ -56,7 +56,7 @@ var fixtures = []fixture{
            unit: "seconds",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_milliseconds_fixture_metric_seconds", help: "Foo foo foo [Units converted to seconds.]", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_milliseconds_fixture_metric_seconds", help: "Server Parameter: milliseconds_fixture_metric [Units converted to seconds.]", constLabels: {}, variableLabels: []}`,
        v: 5,
    },
    {
@@ -72,7 +72,7 @@ var fixtures = []fixture{
            unit: "bytes",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_eight_kb_fixture_metric_bytes", help: "Foo foo foo [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_eight_kb_fixture_metric_bytes", help: "Server Parameter: eight_kb_fixture_metric [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        v: 139264,
    },
    {
@@ -88,7 +88,7 @@ var fixtures = []fixture{
            unit: "bytes",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_16_kb_real_fixture_metric_bytes", help: "Foo foo foo [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_16_kb_real_fixture_metric_bytes", help: "Server Parameter: 16_kb_real_fixture_metric [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        v: 49152,
    },
    {
@@ -104,7 +104,7 @@ var fixtures = []fixture{
            unit: "bytes",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_16_mb_real_fixture_metric_bytes", help: "Foo foo foo [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_16_mb_real_fixture_metric_bytes", help: "Server Parameter: 16_mb_real_fixture_metric [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        v: 5.0331648e+07,
    },
    {
@@ -120,7 +120,7 @@ var fixtures = []fixture{
            unit: "bytes",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_32_mb_real_fixture_metric_bytes", help: "Foo foo foo [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_32_mb_real_fixture_metric_bytes", help: "Server Parameter: 32_mb_real_fixture_metric [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        v: 1.00663296e+08,
    },
    {
@@ -136,7 +136,7 @@ var fixtures = []fixture{
            unit: "bytes",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_64_mb_real_fixture_metric_bytes", help: "Foo foo foo [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_64_mb_real_fixture_metric_bytes", help: "Server Parameter: 64_mb_real_fixture_metric [Units converted to bytes.]", constLabels: {}, variableLabels: []}`,
        v: 2.01326592e+08,
    },
    {
@@ -152,7 +152,7 @@ var fixtures = []fixture{
            unit: "",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_bool_on_fixture_metric", help: "Foo foo foo", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_bool_on_fixture_metric", help: "Server Parameter: bool_on_fixture_metric", constLabels: {}, variableLabels: []}`,
        v: 1,
    },
    {
@@ -168,7 +168,7 @@ var fixtures = []fixture{
            unit: "",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_bool_off_fixture_metric", help: "Foo foo foo", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_bool_off_fixture_metric", help: "Server Parameter: bool_off_fixture_metric", constLabels: {}, variableLabels: []}`,
        v: 0,
    },
    {
@@ -184,7 +184,7 @@ var fixtures = []fixture{
            unit: "seconds",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_special_minus_one_value_seconds", help: "foo foo foo [Units converted to seconds.]", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_special_minus_one_value_seconds", help: "Server Parameter: special_minus_one_value [Units converted to seconds.]", constLabels: {}, variableLabels: []}`,
        v: -1,
    },
    {
@@ -200,7 +200,7 @@ var fixtures = []fixture{
            unit: "",
            err:  "",
        },
        d: `Desc{fqName: "pg_settings_rds_rds_superuser_reserved_connections", help: "Sets the number of connection slots reserved for rds_superusers.", constLabels: {}, variableLabels: []}`,
        d: `Desc{fqName: "pg_settings_rds_rds_superuser_reserved_connections", help: "Server Parameter: rds.rds_superuser_reserved_connections", constLabels: {}, variableLabels: []}`,
        v: 2,
    },
    {
@@ -22,7 +22,7 @@ import (
    "strings"
    "time"

    "github.com/blang/semver"
    "github.com/blang/semver/v4"
    "github.com/go-kit/log/level"
    "github.com/prometheus/client_golang/prometheus"
)
@@ -161,48 +161,6 @@ func dumpMaps() {
}

var builtinMetricMaps = map[string]intermediateMetricMap{
    "pg_stat_bgwriter": {
        map[string]ColumnMapping{
            "checkpoints_timed":     {COUNTER, "Number of scheduled checkpoints that have been performed", nil, nil},
            "checkpoints_req":       {COUNTER, "Number of requested checkpoints that have been performed", nil, nil},
            "checkpoint_write_time": {COUNTER, "Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds", nil, nil},
            "checkpoint_sync_time":  {COUNTER, "Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds", nil, nil},
            "buffers_checkpoint":    {COUNTER, "Number of buffers written during checkpoints", nil, nil},
            "buffers_clean":         {COUNTER, "Number of buffers written by the background writer", nil, nil},
            "maxwritten_clean":      {COUNTER, "Number of times the background writer stopped a cleaning scan because it had written too many buffers", nil, nil},
            "buffers_backend":       {COUNTER, "Number of buffers written directly by a backend", nil, nil},
            "buffers_backend_fsync": {COUNTER, "Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write)", nil, nil},
            "buffers_alloc":         {COUNTER, "Number of buffers allocated", nil, nil},
            "stats_reset":           {COUNTER, "Time at which these statistics were last reset", nil, nil},
        },
        true,
        0,
    },
    "pg_stat_database": {
        map[string]ColumnMapping{
            "datid":          {LABEL, "OID of a database", nil, nil},
            "datname":        {LABEL, "Name of this database", nil, nil},
            "numbackends":    {GAUGE, "Number of backends currently connected to this database. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset.", nil, nil},
            "xact_commit":    {COUNTER, "Number of transactions in this database that have been committed", nil, nil},
            "xact_rollback":  {COUNTER, "Number of transactions in this database that have been rolled back", nil, nil},
            "blks_read":      {COUNTER, "Number of disk blocks read in this database", nil, nil},
            "blks_hit":       {COUNTER, "Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache)", nil, nil},
            "tup_returned":   {COUNTER, "Number of rows returned by queries in this database", nil, nil},
            "tup_fetched":    {COUNTER, "Number of rows fetched by queries in this database", nil, nil},
            "tup_inserted":   {COUNTER, "Number of rows inserted by queries in this database", nil, nil},
            "tup_updated":    {COUNTER, "Number of rows updated by queries in this database", nil, nil},
            "tup_deleted":    {COUNTER, "Number of rows deleted by queries in this database", nil, nil},
            "conflicts":      {COUNTER, "Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see pg_stat_database_conflicts for details.)", nil, nil},
            "temp_files":     {COUNTER, "Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the log_temp_files setting.", nil, nil},
            "temp_bytes":     {COUNTER, "Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.", nil, nil},
            "deadlocks":      {COUNTER, "Number of deadlocks detected in this database", nil, nil},
            "blk_read_time":  {COUNTER, "Time spent reading data file blocks by backends in this database, in milliseconds", nil, nil},
            "blk_write_time": {COUNTER, "Time spent writing data file blocks by backends in this database, in milliseconds", nil, nil},
            "stats_reset":    {COUNTER, "Time at which these statistics were last reset", nil, nil},
        },
        true,
        0,
    },
    "pg_stat_database_conflicts": {
        map[string]ColumnMapping{
            "datid": {LABEL, "OID of a database", nil, nil},
@@ -216,15 +174,6 @@ var builtinMetricMaps = map[string]intermediateMetricMap{
        true,
        0,
    },
    "pg_locks": {
        map[string]ColumnMapping{
            "datname": {LABEL, "Name of this database", nil, nil},
            "mode":    {LABEL, "Type of Lock", nil, nil},
            "count":   {GAUGE, "Number of locks", nil, nil},
        },
        true,
        0,
    },
    "pg_lock_conflicts": {
        map[string]ColumnMapping{
            "blocking_pid": {LABEL, "PID of blocking session", nil, nil},
@@ -460,9 +409,10 @@ type cachedMetrics struct {

// Exporter collects Postgres metrics. It implements prometheus.Collector.
type Exporter struct {
    collectorName      string
    userQueriesPath    map[MetricResolution]string
    userQueriesEnabled map[MetricResolution]bool
    collectorName     string
    userQueriesPath   map[MetricResolution]string
    resolutionEnabled MetricResolution
    enabled           bool

    // Holds a reference to the built-in column mappings. Currently this is for testing purposes
    // only, since it just points to the global.
@@ -502,10 +452,17 @@ func CollectorName(name string) ExporterOpt {
    }
}

// WithUserQueriesEnabled enables user's queries.
func WithUserQueriesEnabled(p map[MetricResolution]bool) ExporterOpt {
// WithUserQueriesResolutionEnabled sets the resolution of user queries this exporter loads.
func WithUserQueriesResolutionEnabled(p MetricResolution) ExporterOpt {
    return func(e *Exporter) {
        e.userQueriesEnabled = p
        e.resolutionEnabled = p
    }
}

// WithEnabled toggles whether this exporter collects at all.
func WithEnabled(p bool) ExporterOpt {
    return func(e *Exporter) {
        e.enabled = p
    }
}

@@ -524,9 +481,9 @@ func AutoDiscoverDatabases(b bool) ExporterOpt {
}

// ExcludeDatabases allows filtering out results from AutoDiscoverDatabases
func ExcludeDatabases(s string) ExporterOpt {
func ExcludeDatabases(s []string) ExporterOpt {
    return func(e *Exporter) {
        e.excludeDatabases = strings.Split(s, ",")
        e.excludeDatabases = s
    }
}

@@ -558,6 +515,13 @@ func WithConstantLabels(s string) ExporterOpt {
    }
}

// WithServers sets the server collection shared by the exporters.
func WithServers(s *Servers) ExporterOpt {
    return func(e *Exporter) {
        e.servers = s
    }
}

func parseConstLabels(s string) prometheus.Labels {
    labels := make(prometheus.Labels)

@@ -589,6 +553,7 @@ func NewExporter(dsn []string, opts ...ExporterOpt) *Exporter {
    e := &Exporter{
        dsn:               dsn,
        builtinMetricMaps: builtinMetricMaps,
        enabled:           true,
    }

    for _, opt := range opts {
@@ -596,7 +561,6 @@ func NewExporter(dsn []string, opts ...ExporterOpt) *Exporter {
    }

    e.setupInternalMetrics()
    e.servers = NewServers(ServerWithLabels(e.constantLabels))

    return e
}
@@ -644,6 +608,9 @@ func (e *Exporter) Describe(ch chan<- *prometheus.Desc) {

// Collect implements prometheus.Collector.
func (e *Exporter) Collect(ch chan<- prometheus.Metric) {
    if !e.enabled {
        return
    }
    e.scrape(ch)

    ch <- e.duration
@@ -703,16 +670,12 @@ func (e *Exporter) checkMapVersions(ch chan<- prometheus.Metric, server *Server)

    server.lastMapVersion = semanticVersion

    if e.userQueriesPath[HR] != "" || e.userQueriesPath[MR] != "" || e.userQueriesPath[LR] != "" {
    if e.userQueriesPath[e.resolutionEnabled] != "" {
        // Clear the metric while reloading
        e.userQueriesError.Reset()
    }

    for res := range e.userQueriesPath {
        if e.userQueriesEnabled[res] {
            e.loadCustomQueries(res, semanticVersion, server)
        }
    }
    e.loadCustomQueries(e.resolutionEnabled, semanticVersion, server)

    server.mappingMtx.Unlock()
}
@@ -62,7 +62,11 @@ func (s *IntegrationSuite) TestAllNamespacesReturnResults(c *C) {

    for _, dsn := range s.e.dsn {
        // Open a database connection
        server, err := NewServer(dsn)
        db, err := NewDB(dsn)
        c.Assert(db, NotNil)
        c.Assert(err, IsNil)

        server, err := NewServer(dsn, db)
        c.Assert(server, NotNil)
        c.Assert(err, IsNil)

@@ -156,7 +160,11 @@ func (s *IntegrationSuite) TestExtendQueriesDoesntCrash(c *C) {

    exporter := NewExporter(
        strings.Split(dsn, ","),
        WithUserQueriesPath("../user_queries_test.yaml"),
        WithUserQueriesPath(map[MetricResolution]string{
            HR: "../user_queries_test.yaml",
            MR: "../user_queries_test.yaml",
            LR: "../user_queries_test.yaml",
        }),
    )
    c.Assert(exporter, NotNil)

@@ -17,14 +17,13 @@
package main

import (
    "io/ioutil"
    "math"
    "os"
    "reflect"
    "testing"
    "time"

    "github.com/blang/semver"
    "github.com/blang/semver/v4"
    "github.com/prometheus/client_golang/prometheus"
    . "gopkg.in/check.v1"
)
@@ -409,7 +408,7 @@ func (s *FunctionalSuite) TestBooleanConversionToValueAndString(c *C) {
}

func (s *FunctionalSuite) TestParseUserQueries(c *C) {
    userQueriesData, err := ioutil.ReadFile("./tests/user_queries_ok.yaml")
    userQueriesData, err := os.ReadFile("./tests/user_queries_ok.yaml")
    if err == nil {
        metricMaps, newQueryOverrides, err := parseUserQueries(userQueriesData)
        c.Assert(err, Equals, nil)
cmd/postgres_exporter/probe.go (new file, 107 lines)
@@ -0,0 +1,107 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
    "fmt"
    "net/http"

    "github.com/go-kit/log"
    "github.com/go-kit/log/level"
    "github.com/prometheus-community/postgres_exporter/collector"
    "github.com/prometheus-community/postgres_exporter/config"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func handleProbe(logger log.Logger, excludeDatabases []string) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        ctx := r.Context()
        conf := c.GetConfig()
        params := r.URL.Query()
        target := params.Get("target")
        if target == "" {
            http.Error(w, "target is required", http.StatusBadRequest)
            return
        }
        var authModule config.AuthModule
        authModuleName := params.Get("auth_module")
        if authModuleName == "" {
            level.Info(logger).Log("msg", "no auth_module specified, using default")
        } else {
            var ok bool
            authModule, ok = conf.AuthModules[authModuleName]
            if !ok {
                http.Error(w, fmt.Sprintf("auth_module %s not found", authModuleName), http.StatusBadRequest)
                return
            }
            if authModule.UserPass.Username == "" || authModule.UserPass.Password == "" {
                http.Error(w, fmt.Sprintf("auth_module %s has no username or password", authModuleName), http.StatusBadRequest)
                return
            }
        }

        dsn, err := authModule.ConfigureTarget(target)
        if err != nil {
            level.Error(logger).Log("msg", "failed to configure target", "err", err)
            http.Error(w, fmt.Sprintf("could not configure dsn for target: %v", err), http.StatusBadRequest)
            return
        }

        // TODO(@sysadmind): Timeout

        tl := log.With(logger, "target", target)

        registry := prometheus.NewRegistry()

        opts := []ExporterOpt{
            DisableDefaultMetrics(*disableDefaultMetrics),
            DisableSettingsMetrics(*disableSettingsMetrics),
            AutoDiscoverDatabases(*autoDiscoverDatabases),
            //WithUserQueriesPath(*queriesPath),
            WithConstantLabels(*constantLabelsList),
            ExcludeDatabases(excludeDatabases),
            IncludeDatabases(*includeDatabases),
        }

        dsns := []string{dsn.GetConnectionString()}
        exporter := NewExporter(dsns, opts...)
        defer func() {
            exporter.servers.Close()
        }()
        registry.MustRegister(exporter)

        // Run the probe
        pc, err := collector.NewProbeCollector(tl, excludeDatabases, registry, dsn)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Cleanup underlying connections to prevent connection leaks
        defer pc.Close()

        // TODO(@sysadmind): Remove the registry.MustRegister() call below and instead handle the collection here. That will allow
        // for the passing of context, handling of timeouts, and more control over the collection.
        // The current NewProbeCollector() implementation relies on the MustNewConstMetric() call to create the metrics which is not
        // ideal to use without the registry.MustRegister() call.
        _ = ctx

        registry.MustRegister(pc)

        // TODO check success, etc
        h := promhttp.HandlerFor(registry, promhttp.HandlerOpts{})
        h.ServeHTTP(w, r)
    }
}
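The auth_module lookup above reads the file given by --config.file. A sketch of what a userpass entry can look like (module name and credentials are placeholders; the upstream README's multi-target section is the authoritative reference for this format):

auth_modules:
  foo:
    type: userpass
    userpass:
      username: exporter_user
      password: secret
    options:
      sslmode: disable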
@@ -17,7 +17,7 @@ import (
    "errors"
    "fmt"

    "github.com/blang/semver"
    "github.com/blang/semver/v4"
    "github.com/go-kit/log/level"
    "gopkg.in/yaml.v2"
)
@@ -46,30 +46,6 @@ type OverrideQuery struct {
// Overriding queries for namespaces above.
// TODO: validate this is a closed set in tests, and there are no overlaps
var queryOverrides = map[string][]OverrideQuery{
    "pg_locks": {
        {
            semver.MustParseRange(">0.0.0"),
            `SELECT pg_database.datname,tmp.mode,COALESCE(count,0) as count
            FROM
                (
                  VALUES ('accesssharelock'),
                         ('rowsharelock'),
                         ('rowexclusivelock'),
                         ('shareupdateexclusivelock'),
                         ('sharelock'),
                         ('sharerowexclusivelock'),
                         ('exclusivelock'),
                         ('accessexclusivelock'),
                         ('sireadlock')
                ) AS tmp(mode) CROSS JOIN pg_database
            LEFT JOIN
                (SELECT database, lower(mode) AS mode,count(*) AS count
                 FROM pg_locks WHERE database IS NOT NULL
                 GROUP BY database, lower(mode)
                ) AS tmp2
            ON tmp.mode=tmp2.mode and pg_database.oid = tmp2.database ORDER BY 1`,
        },
    },
    "pg_lock_conflicts": {
        {
            semver.MustParseRange(">0.0.0"),
@@ -19,7 +19,7 @@ import (
    "sync"
    "time"

    "github.com/blang/semver"
    "github.com/blang/semver/v4"
    "github.com/go-kit/log/level"
    "github.com/prometheus/client_golang/prometheus"
)
@@ -54,25 +54,17 @@ func ServerWithLabels(labels prometheus.Labels) ServerOpt {
        for k, v := range labels {
            s.labels[k] = v
        }
        s.labels["collector"] = "exporter"
    }
}

// NewServer establishes a new connection using DSN.
func NewServer(dsn string, opts ...ServerOpt) (*Server, error) {
func NewServer(dsn string, db *sql.DB, opts ...ServerOpt) (*Server, error) {
    fingerprint, err := parseFingerprint(dsn)
    if err != nil {
        return nil, err
    }

    db, err := sql.Open("postgres", dsn)
    if err != nil {
        return nil, err
    }
    db.SetMaxOpenConns(1)
    db.SetMaxIdleConns(1)

    level.Info(logger).Log("msg", "Established new database connection", "fingerprint", fingerprint)

    s := &Server{
        db:     db,
        master: false,
@@ -139,6 +131,7 @@ func (s *Server) Scrape(ch chan<- prometheus.Metric, disableSettingsMetrics bool
type Servers struct {
    m       sync.Mutex
    servers map[string]*Server
    dbs     map[string]*sql.DB
    opts    []ServerOpt
}

@@ -146,34 +139,47 @@ type Servers struct {
func NewServers(opts ...ServerOpt) *Servers {
    return &Servers{
        servers: make(map[string]*Server),
        dbs:     make(map[string]*sql.DB),
        opts:    opts,
    }
}

// GetServer returns an established connection from the collection.
func (s *Servers) GetServer(dsn string) (*Server, error) {
func (s *Servers) GetServer(dsn string, res MetricResolution) (*Server, error) {
    s.m.Lock()
    defer s.m.Unlock()
    var err error
    var ok bool
    errCount := 0 // start at zero because we increment before doing work
    retries := 1
    var db *sql.DB
    var server *Server
    for {
        if errCount++; errCount > retries {
            return nil, err
        }
        server, ok = s.servers[dsn]
        db, ok = s.dbs[dsn]
        if !ok {
            server, err = NewServer(dsn, s.opts...)
            db, err = NewDB(dsn)
            if err != nil {
                time.Sleep(time.Duration(errCount) * time.Second)
                continue
            }
            s.servers[dsn] = server
            s.dbs[dsn] = db
        }
        key := dsn + ":" + string(res)
        server, ok = s.servers[key]
        if !ok {
            server, err = NewServer(dsn, db, s.opts...)
            if err != nil {
                time.Sleep(time.Duration(errCount) * time.Second)
                continue
            }
            s.servers[key] = server
        }
        if err = server.Ping(); err != nil {
            delete(s.servers, dsn)
            delete(s.servers, key)
            delete(s.dbs, dsn)
            time.Sleep(time.Duration(errCount) * time.Second)
            continue
        }
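One consequence of this change worth spelling out: the *sql.DB handles are cached per DSN, while Server wrappers are cached per DSN-and-resolution, so the hr/mr/lr exporters share one connection but keep separate metric-map state. A usage sketch (the DSN value is a placeholder):

// Sketch, within package main of this repo.
servers := NewServers(ServerWithLabels(prometheus.Labels{"env": "demo"}))
dsn := "postgresql://user:pass@localhost:5432/postgres" // placeholder

// Both calls reuse the same underlying *sql.DB, but are cached
// under the keys dsn+":hr" and dsn+":lr" respectively.
hrServer, err := servers.GetServer(dsn, HR)
if err != nil {
    // handle error
}
lrServer, err := servers.GetServer(dsn, LR)
if err != nil {
    // handle error
}
_, _ = hrServer, lrServer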
@@ -20,14 +20,14 @@ import (
    "sync"
    "time"

    "github.com/alecthomas/kingpin/v2"
    "github.com/go-kit/log"
    "github.com/go-kit/log/level"
    "github.com/prometheus/client_golang/prometheus"
    "gopkg.in/alecthomas/kingpin.v2"
)

var (
    factories = make(map[string]func(logger log.Logger) (Collector, error))
    factories = make(map[string]func(collectorConfig) (Collector, error))
    initiatedCollectorsMtx = sync.Mutex{}
    initiatedCollectors    = make(map[string]Collector)
    collectorState         = make(map[string]*bool)
@@ -38,8 +38,8 @@ const (
    // Namespace for all metrics.
    namespace = "pg"

    defaultEnabled = true
    // defaultDisabled = false
    defaultEnabled  = true
    defaultDisabled = false
)

var (
@@ -58,10 +58,15 @@ var (
)

type Collector interface {
    Update(ctx context.Context, server *server, ch chan<- prometheus.Metric) error
    Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error
}

func registerCollector(name string, isDefaultEnabled bool, createFunc func(logger log.Logger) (Collector, error)) {
type collectorConfig struct {
    logger           log.Logger
    excludeDatabases []string
}

func registerCollector(name string, isDefaultEnabled bool, createFunc func(collectorConfig) (Collector, error)) {
    var helpDefaultState string
    if isDefaultEnabled {
        helpDefaultState = "enabled"
@@ -86,13 +91,13 @@ type PostgresCollector struct {
    Collectors map[string]Collector
    logger     log.Logger

    servers map[string]*server
    instance *instance
}

type Option func(*PostgresCollector) error

// NewPostgresCollector creates a new PostgresCollector.
func NewPostgresCollector(logger log.Logger, dsns []string, filters []string, options ...Option) (*PostgresCollector, error) {
func NewPostgresCollector(logger log.Logger, excludeDatabases []string, dsn string, filters []string, options ...Option) (*PostgresCollector, error) {
    p := &PostgresCollector{
        logger: logger,
    }
@@ -125,7 +130,10 @@ func NewPostgresCollector(logger log.Logger, dsns []string, filters []string, op
        if collector, ok := initiatedCollectors[key]; ok {
            collectors[key] = collector
        } else {
            collector, err := factories[key](log.With(logger, "collector", key))
            collector, err := factories[key](collectorConfig{
                logger:           log.With(logger, "collector", key),
                excludeDatabases: excludeDatabases,
            })
            if err != nil {
                return nil, err
            }
@@ -136,17 +144,15 @@ func NewPostgresCollector(logger log.Logger, dsns []string, filters []string, op

    p.Collectors = collectors

    servers := make(map[string]*server)
    for _, dsn := range dsns {
        s, err := makeServer(dsn)
        if err != nil {
            return nil, err
        }

        servers[dsn] = s
    if dsn == "" {
        return nil, errors.New("empty dsn")
    }

    p.servers = servers
    instance, err := newInstance(dsn)
    if err != nil {
        return nil, err
    }
    p.instance = instance

    return p, nil
}
@@ -160,32 +166,29 @@ func (p PostgresCollector) Describe(ch chan<- *prometheus.Desc) {
// Collect implements the prometheus.Collector interface.
func (p PostgresCollector) Collect(ch chan<- prometheus.Metric) {
    ctx := context.TODO()
    wg := sync.WaitGroup{}
    wg.Add(len(p.servers))
    for _, s := range p.servers {
        go func(s *server) {
            p.subCollect(ctx, s, ch)
            wg.Done()
        }(s)
    }
    wg.Wait()
}

func (p PostgresCollector) subCollect(ctx context.Context, server *server, ch chan<- prometheus.Metric) {
    // Set up the database connection for the collector.
    err := p.instance.setup()
    if err != nil {
        level.Error(p.logger).Log("msg", "Error opening connection to database", "err", err)
        return
    }
    defer p.instance.Close()

    wg := sync.WaitGroup{}
    wg.Add(len(p.Collectors))
    for name, c := range p.Collectors {
        go func(name string, c Collector) {
            execute(ctx, name, c, server, ch, p.logger)
            execute(ctx, name, c, p.instance, ch, p.logger)
            wg.Done()
        }(name, c)
    }
    wg.Wait()
}

func execute(ctx context.Context, name string, c Collector, s *server, ch chan<- prometheus.Metric, logger log.Logger) {
func execute(ctx context.Context, name string, c Collector, instance *instance, ch chan<- prometheus.Metric, logger log.Logger) {
    begin := time.Now()
    err := c.Update(ctx, s, ch)
    err := c.Update(ctx, instance, ch)
    duration := time.Since(begin)
    var success float64
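For a sense of how the new collectorConfig/instance plumbing is consumed, here is a hedged sketch of a minimal collector in the new style. The metric name and query are invented for illustration; pg_database.go below is the real example in this diff.

package collector

import (
    "context"

    "github.com/prometheus/client_golang/prometheus"
)

func init() {
    // Hypothetical collector, registered the same way as the real ones.
    registerCollector("up_check", defaultEnabled, NewPGUpCheckCollector)
}

type PGUpCheckCollector struct{}

func NewPGUpCheckCollector(config collectorConfig) (Collector, error) {
    return &PGUpCheckCollector{}, nil
}

var pgUpCheckDesc = prometheus.NewDesc(
    prometheus.BuildFQName(namespace, "up_check", "success"),
    "1 if SELECT 1 succeeded against the instance",
    nil, nil,
)

func (c PGUpCheckCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
    var one int
    // instance.setup() has already opened the connection by the time Update runs.
    if err := instance.getDB().QueryRowContext(ctx, "SELECT 1").Scan(&one); err != nil {
        return err
    }
    ch <- prometheus.MustNewConstMetric(pgUpCheckDesc, prometheus.GaugeValue, 1)
    return nil
}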
collector/collector_test.go (new file, 62 lines)
@@ -0,0 +1,62 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
    "strings"

    "github.com/prometheus/client_golang/prometheus"
    dto "github.com/prometheus/client_model/go"
)

type labelMap map[string]string

type MetricResult struct {
    labels     labelMap
    value      float64
    metricType dto.MetricType
}

func readMetric(m prometheus.Metric) MetricResult {
    pb := &dto.Metric{}
    m.Write(pb)
    labels := make(labelMap, len(pb.Label))
    for _, v := range pb.Label {
        labels[v.GetName()] = v.GetValue()
    }
    if pb.Gauge != nil {
        return MetricResult{labels: labels, value: pb.GetGauge().GetValue(), metricType: dto.MetricType_GAUGE}
    }
    if pb.Counter != nil {
        return MetricResult{labels: labels, value: pb.GetCounter().GetValue(), metricType: dto.MetricType_COUNTER}
    }
    if pb.Untyped != nil {
        return MetricResult{labels: labels, value: pb.GetUntyped().GetValue(), metricType: dto.MetricType_UNTYPED}
    }
    panic("Unsupported metric type")
}

func sanitizeQuery(q string) string {
    q = strings.Join(strings.Fields(q), " ")
    q = strings.Replace(q, "(", "\\(", -1)
    q = strings.Replace(q, "?", "\\?", -1)
    q = strings.Replace(q, ")", "\\)", -1)
    q = strings.Replace(q, "[", "\\[", -1)
    q = strings.Replace(q, "]", "\\]", -1)
    q = strings.Replace(q, "{", "\\{", -1)
    q = strings.Replace(q, "}", "\\}", -1)
    q = strings.Replace(q, "*", "\\*", -1)
    q = strings.Replace(q, "^", "\\^", -1)
    q = strings.Replace(q, "$", "\\$", -1)
    return q
}
collector/instance.go (new file, 142 lines)
@@ -0,0 +1,142 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
    "database/sql"
    "fmt"
    "regexp"
    "strings"

    "github.com/blang/semver/v4"
    "github.com/lib/pq"
)

type instance struct {
    dsn     string
    name    string
    db      *sql.DB
    version semver.Version
}

func newInstance(dsn string) (*instance, error) {
    i := &instance{
        dsn: dsn,
    }

    // "Create" a database handle to verify the DSN provided is valid.
    // Open is not guaranteed to create a connection.
    db, err := sql.Open("postgres", dsn)
    if err != nil {
        return nil, err
    }
    db.Close()

    i.name, err = parseServerName(dsn)
    if err != nil {
        return nil, err
    }
    return i, nil
}

func (i *instance) setup() error {
    db, err := sql.Open("postgres", i.dsn)
    if err != nil {
        return err
    }
    db.SetMaxOpenConns(1)
    db.SetMaxIdleConns(1)
    i.db = db

    version, err := queryVersion(i.db)
    if err != nil {
        return fmt.Errorf("error querying postgresql version: %w", err)
    } else {
        i.version = version
    }
    return nil
}

func (i *instance) getDB() *sql.DB {
    return i.db
}

func (i *instance) Close() error {
    return i.db.Close()
}

// Regex used to get the "short-version" from the postgres version field.
// The result of SELECT version() is something like "PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 6.2.1 20160830, 64-bit"
var versionRegex = regexp.MustCompile(`^\w+ ((\d+)(\.\d+)?(\.\d+)?)`)
var serverVersionRegex = regexp.MustCompile(`^((\d+)(\.\d+)?(\.\d+)?)`)

func queryVersion(db *sql.DB) (semver.Version, error) {
    var version string
    err := db.QueryRow("SELECT version();").Scan(&version)
    if err != nil {
        return semver.Version{}, err
    }
    submatches := versionRegex.FindStringSubmatch(version)
    if len(submatches) > 1 {
        return semver.ParseTolerant(submatches[1])
    }

    // We could also try to parse the version from the server_version field.
    // This is of the format 13.3 (Debian 13.3-1.pgdg100+1)
    err = db.QueryRow("SHOW server_version;").Scan(&version)
    if err != nil {
        return semver.Version{}, err
    }
    submatches = serverVersionRegex.FindStringSubmatch(version)
    if len(submatches) > 1 {
        return semver.ParseTolerant(submatches[1])
    }
    return semver.Version{}, fmt.Errorf("could not parse version from %q", version)
}

func parseServerName(url string) (string, error) {
    dsn, err := pq.ParseURL(url)
    if err != nil {
        dsn = url
    }

    pairs := strings.Split(dsn, " ")
    kv := make(map[string]string, len(pairs))
    for _, pair := range pairs {
        splitted := strings.SplitN(pair, "=", 2)
        if len(splitted) != 2 {
            return "", fmt.Errorf("malformed dsn %q", dsn)
        }
        // Newer versions of pq.ParseURL quote values so trim them off if they exist
        key := strings.Trim(splitted[0], "'\"")
        value := strings.Trim(splitted[1], "'\"")
        kv[key] = value
    }

    var fingerprint string

    if host, ok := kv["host"]; ok {
        fingerprint += host
    } else {
        fingerprint += "localhost"
    }

    if port, ok := kv["port"]; ok {
        fingerprint += ":" + port
    } else {
        fingerprint += ":5432"
    }

    return fingerprint, nil
}
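A quick illustration of what parseServerName produces for typical DSNs (the inputs are invented; the outputs follow from the code above):

// Sketch, within package collector of this repo.
// postgres://user:pass@db1.example.com:6432/postgres?sslmode=disable
//   -> "db1.example.com:6432"  (the URL form is expanded by pq.ParseURL first)
// "host=10.0.0.5 user=postgres" -> "10.0.0.5:5432"   (port defaults)
// "user=postgres"               -> "localhost:5432"  (host and port default)
name, err := parseServerName("host=10.0.0.5 user=postgres")
if err == nil {
    fmt.Println(name) // 10.0.0.5:5432
}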
|
@ -15,59 +15,114 @@ package collector
|
||||
|
||||
import (
|
||||
"context"
|
||||
"database/sql"
|
||||
|
||||
"github.com/go-kit/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
)
|
||||
|
||||
const databaseSubsystem = "database"
|
||||
|
||||
func init() {
|
||||
registerCollector("database", defaultEnabled, NewPGDatabaseCollector)
|
||||
registerCollector(databaseSubsystem, defaultEnabled, NewPGDatabaseCollector)
|
||||
}
|
||||
|
||||
type PGDatabaseCollector struct {
|
||||
log log.Logger
|
||||
log log.Logger
|
||||
excludedDatabases []string
|
||||
}
|
||||
|
||||
func NewPGDatabaseCollector(logger log.Logger) (Collector, error) {
|
||||
return &PGDatabaseCollector{log: logger}, nil
|
||||
}
|
||||
|
||||
var pgDatabase = map[string]*prometheus.Desc{
|
||||
"size_bytes": prometheus.NewDesc(
|
||||
"pg_database_size_bytes",
|
||||
"Disk space used by the database",
|
||||
[]string{"datname", "server"}, nil,
|
||||
),
|
||||
}
|
||||
|
||||
func (PGDatabaseCollector) Update(ctx context.Context, server *server, ch chan<- prometheus.Metric) error {
|
||||
db, err := server.GetDB()
|
||||
if err != nil {
|
||||
return err
|
||||
func NewPGDatabaseCollector(config collectorConfig) (Collector, error) {
|
||||
exclude := config.excludeDatabases
|
||||
if exclude == nil {
|
||||
exclude = []string{}
|
||||
}
|
||||
return &PGDatabaseCollector{
|
||||
log: config.logger,
|
||||
excludedDatabases: exclude,
|
||||
}, nil
|
||||
}
|
||||
|
||||
var (
|
||||
pgDatabaseSizeDesc = prometheus.NewDesc(
|
||||
prometheus.BuildFQName(
|
||||
namespace,
|
||||
databaseSubsystem,
|
||||
"size_bytes",
|
||||
),
|
||||
"Disk space used by the database",
|
||||
[]string{"datname"}, nil,
|
||||
)
|
||||
|
||||
pgDatabaseQuery = "SELECT pg_database.datname FROM pg_database;"
|
||||
pgDatabaseSizeQuery = "SELECT pg_database_size($1)"
|
||||
)
|
||||
|
||||
// Update implements Collector and exposes database size.
|
||||
// It is called by the Prometheus registry when collecting metrics.
|
||||
// The list of databases is retrieved from pg_database and filtered
|
||||
// by the excludeDatabase config parameter. The tradeoff here is that
|
||||
// we have to query the list of databases and then query the size of
|
||||
// each database individually. This is because we can't filter the
|
||||
// list of databases in the query because the list of excluded
|
||||
// databases is dynamic.
|
||||
func (c PGDatabaseCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
|
||||
db := instance.getDB()
|
||||
// Query the list of databases
|
||||
rows, err := db.QueryContext(ctx,
|
||||
`SELECT pg_database.datname
|
||||
,pg_database_size(pg_database.datname)
|
||||
FROM pg_database;`)
|
||||
pgDatabaseQuery,
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
var databases []string
|
||||
|
||||
for rows.Next() {
|
||||
var datname string
|
||||
var size int64
|
||||
if err := rows.Scan(&datname, &size); err != nil {
|
||||
var datname sql.NullString
|
||||
if err := rows.Scan(&datname); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if !datname.Valid {
|
||||
continue
|
||||
}
|
||||
// Ignore excluded databases
|
||||
// Filtering is done here instead of in the query to avoid
|
||||
// a complicated NOT IN query with a variable number of parameters
|
||||
if sliceContains(c.excludedDatabases, datname.String) {
|
||||
continue
|
||||
}
|
||||
|
||||
databases = append(databases, datname.String)
|
||||
}
|
||||
|
||||
// Query the size of the databases
|
||||
for _, datname := range databases {
|
||||
var size sql.NullFloat64
|
||||
err = db.QueryRowContext(ctx, pgDatabaseSizeQuery, datname).Scan(&size)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
sizeMetric := 0.0
|
||||
if size.Valid {
|
||||
sizeMetric = size.Float64
|
||||
}
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
pgDatabase["size_bytes"],
|
||||
prometheus.GaugeValue, float64(size), datname, server.GetName(),
|
||||
pgDatabaseSizeDesc,
|
||||
prometheus.GaugeValue, sizeMetric, datname,
|
||||
)
|
||||
}
|
||||
if err := rows.Err(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
return rows.Err()
|
||||
}
|
||||
|
||||
func sliceContains(slice []string, s string) bool {
|
||||
for _, item := range slice {
|
||||
if item == s {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
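
The two-pass pattern above (list the databases, then size each one with a parameterized query) can be exercised outside the collector. Below is a minimal standalone sketch of the same idea against database/sql; the DSN and the lib/pq driver registration are assumptions for illustration, not exporter configuration.

package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // assumed driver, for illustration only
)

func main() {
	// Hypothetical DSN; replace with a real connection string.
	db, err := sql.Open("postgres", "postgres://localhost/postgres?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx := context.Background()
	excluded := map[string]bool{"template0": true, "template1": true}

	// Pass 1: list database names. Exclusion happens client-side, so the
	// SQL never needs a variable-length NOT IN clause.
	rows, err := db.QueryContext(ctx, "SELECT datname FROM pg_database")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	var names []string
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			log.Fatal(err)
		}
		if !excluded[name] {
			names = append(names, name)
		}
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}

	// Pass 2: one parameterized size query per remaining database.
	for _, name := range names {
		var size int64
		if err := db.QueryRowContext(ctx, "SELECT pg_database_size($1)", name).Scan(&size); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d bytes\n", name, size)
	}
}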

collector/pg_database_test.go (new file, 101 lines)
@@ -0,0 +1,101 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGDatabaseCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	mock.ExpectQuery(sanitizeQuery(pgDatabaseQuery)).WillReturnRows(sqlmock.NewRows([]string{"datname"}).
		AddRow("postgres"))

	mock.ExpectQuery(sanitizeQuery(pgDatabaseSizeQuery)).WithArgs("postgres").WillReturnRows(sqlmock.NewRows([]string{"pg_database_size"}).
		AddRow(1024))

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGDatabaseCollector{}
		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGDatabaseCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datname": "postgres"}, value: 1024, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

// TODO add a null db test

func TestPGDatabaseCollectorNullMetric(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	mock.ExpectQuery(sanitizeQuery(pgDatabaseQuery)).WillReturnRows(sqlmock.NewRows([]string{"datname"}).
		AddRow("postgres"))

	mock.ExpectQuery(sanitizeQuery(pgDatabaseSizeQuery)).WithArgs("postgres").WillReturnRows(sqlmock.NewRows([]string{"pg_database_size"}).
		AddRow(nil))

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGDatabaseCollector{}
		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGDatabaseCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datname": "postgres"}, value: 0, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
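
These tests (and every test file that follows) pass each query through a sanitizeQuery helper before handing it to sqlmock, because sqlmock treats the expected query as a regular expression by default. The helper is defined elsewhere in the package; a plausible minimal sketch — an assumption, not the package's actual implementation — would be:

import (
	"regexp"
	"strings"
)

// sanitizeQuery collapses whitespace and escapes regexp metacharacters so a
// literal SQL string can be used as an sqlmock expected-query pattern.
func sanitizeQuery(q string) string {
	q = strings.Join(strings.Fields(q), " ")
	return regexp.QuoteMeta(q)
}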

collector/pg_database_wraparound.go (new file, 115 lines)
@@ -0,0 +1,115 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/prometheus/client_golang/prometheus"
)

const databaseWraparoundSubsystem = "database_wraparound"

func init() {
	registerCollector(databaseWraparoundSubsystem, defaultDisabled, NewPGDatabaseWraparoundCollector)
}

type PGDatabaseWraparoundCollector struct {
	log log.Logger
}

func NewPGDatabaseWraparoundCollector(config collectorConfig) (Collector, error) {
	return &PGDatabaseWraparoundCollector{log: config.logger}, nil
}

var (
	databaseWraparoundAgeDatfrozenxid = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, databaseWraparoundSubsystem, "age_datfrozenxid_seconds"),
		"Age of the oldest transaction ID that has not been frozen.",
		[]string{"datname"},
		prometheus.Labels{},
	)
	databaseWraparoundAgeDatminmxid = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, databaseWraparoundSubsystem, "age_datminmxid_seconds"),
		"Age of the oldest multi-transaction ID that has been replaced with a transaction ID.",
		[]string{"datname"},
		prometheus.Labels{},
	)

	databaseWraparoundQuery = `
	SELECT
		datname,
		age(d.datfrozenxid) as age_datfrozenxid,
		mxid_age(d.datminmxid) as age_datminmxid
	FROM
		pg_catalog.pg_database d
	WHERE
		d.datallowconn
	`
)

func (c *PGDatabaseWraparoundCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	rows, err := db.QueryContext(ctx,
		databaseWraparoundQuery)

	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var datname sql.NullString
		var ageDatfrozenxid, ageDatminmxid sql.NullFloat64

		if err := rows.Scan(&datname, &ageDatfrozenxid, &ageDatminmxid); err != nil {
			return err
		}

		if !datname.Valid {
			level.Debug(c.log).Log("msg", "Skipping database with NULL name")
			continue
		}
		if !ageDatfrozenxid.Valid {
			level.Debug(c.log).Log("msg", "Skipping stat emission with NULL age_datfrozenxid")
			continue
		}
		if !ageDatminmxid.Valid {
			level.Debug(c.log).Log("msg", "Skipping stat emission with NULL age_datminmxid")
			continue
		}

		ageDatfrozenxidMetric := ageDatfrozenxid.Float64

		ch <- prometheus.MustNewConstMetric(
			databaseWraparoundAgeDatfrozenxid,
			prometheus.GaugeValue,
			ageDatfrozenxidMetric, datname.String,
		)

		ageDatminmxidMetric := ageDatminmxid.Float64
		ch <- prometheus.MustNewConstMetric(
			databaseWraparoundAgeDatminmxid,
			prometheus.GaugeValue,
			ageDatminmxidMetric, datname.String,
		)
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return nil
}
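
Note that age(datfrozenxid) is a transaction-ID count, not a duration, despite the _seconds suffix in the metric names above. A common way to consume the gauge is as a fraction of the roughly 2^31 XIDs available before wraparound; a small illustrative helper, not part of the exporter:

package main

import "fmt"

// wraparoundProgress reports how far along the ~2^31 transaction-ID budget a
// database is, given the age(datfrozenxid) value exported above.
func wraparoundProgress(ageDatfrozenxid float64) float64 {
	const xidLimit = 1 << 31 // XIDs are 32-bit; wraparound danger at ~2^31
	return ageDatfrozenxid / xidLimit
}

func main() {
	// Sample value taken from the test file below.
	fmt.Printf("%.1f%% of the way to wraparound\n", 100*wraparoundProgress(87126426))
}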

collector/pg_database_wraparound_test.go (new file, 64 lines)
@@ -0,0 +1,64 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGDatabaseWraparoundCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()
	inst := &instance{db: db}
	columns := []string{
		"datname",
		"age_datfrozenxid",
		"age_datminmxid",
	}
	rows := sqlmock.NewRows(columns).
		AddRow("newreddit", 87126426, 0)

	mock.ExpectQuery(sanitizeQuery(databaseWraparoundQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGDatabaseWraparoundCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGDatabaseWraparoundCollector.Update: %s", err)
		}
	}()
	expected := []MetricResult{
		{labels: labelMap{"datname": "newreddit"}, value: 87126426, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"datname": "newreddit"}, value: 0, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

@@ -39,22 +39,17 @@ type ExtensionsCollector struct {
 	logger log.Logger
 }
 
-func NewExtensionsCollector(logger log.Logger) (Collector, error) {
-	return &ExtensionsCollector{logger: logger}, nil
+func NewExtensionsCollector(collectorConfig collectorConfig) (Collector, error) {
+	return &ExtensionsCollector{logger: collectorConfig.logger}, nil
 }
 
-func (e *ExtensionsCollector) Update(ctx context.Context, server *server, ch chan<- prometheus.Metric) error {
-	db, err := server.GetDB()
-	if err != nil {
-		return err
-	}
-
-	err = e.scrapeAvailableExtensions(ctx, db, ch)
+func (e *ExtensionsCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
+	err := e.scrapeAvailableExtensions(ctx, instance.db, ch)
 	if err != nil {
 		return err
 	}
 
-	err = e.scrapeInstalledExtensions(ctx, db, ch)
+	err = e.scrapeInstalledExtensions(ctx, instance.db, ch)
 	if err != nil {
 		return err
 	}
 
collector/pg_locks.go (new file, 129 lines)
@@ -0,0 +1,129 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
)

const locksSubsystem = "locks"

func init() {
	registerCollector(locksSubsystem, defaultEnabled, NewPGLocksCollector)
}

type PGLocksCollector struct {
	log log.Logger
}

func NewPGLocksCollector(config collectorConfig) (Collector, error) {
	return &PGLocksCollector{
		log: config.logger,
	}, nil
}

var (
	pgLocksDesc = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			locksSubsystem,
			"count",
		),
		"Number of locks",
		[]string{"datname", "mode"}, nil,
	)

	pgLocksQuery = `
	SELECT
		pg_database.datname as datname,
		tmp.mode as mode,
		COALESCE(count, 0) as count
	FROM
		(
			VALUES
				('accesssharelock'),
				('rowsharelock'),
				('rowexclusivelock'),
				('shareupdateexclusivelock'),
				('sharelock'),
				('sharerowexclusivelock'),
				('exclusivelock'),
				('accessexclusivelock'),
				('sireadlock')
		) AS tmp(mode)
		CROSS JOIN pg_database
		LEFT JOIN (
			SELECT
				database,
				lower(mode) AS mode,
				count(*) AS count
			FROM
				pg_locks
			WHERE
				database IS NOT NULL
			GROUP BY
				database,
				lower(mode)
		) AS tmp2 ON tmp.mode = tmp2.mode
			and pg_database.oid = tmp2.database
	ORDER BY
		1
	`
)

// Update implements Collector and exposes database locks.
// It is called by the Prometheus registry when collecting metrics.
func (c PGLocksCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	// Query the list of databases
	rows, err := db.QueryContext(ctx,
		pgLocksQuery,
	)
	if err != nil {
		return err
	}
	defer rows.Close()

	var datname, mode sql.NullString
	var count sql.NullInt64

	for rows.Next() {
		if err := rows.Scan(&datname, &mode, &count); err != nil {
			return err
		}

		if !datname.Valid || !mode.Valid {
			continue
		}

		countMetric := 0.0
		if count.Valid {
			countMetric = float64(count.Int64)
		}

		ch <- prometheus.MustNewConstMetric(
			pgLocksDesc,
			prometheus.GaugeValue, countMetric,
			datname.String, mode.String,
		)
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return nil
}
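
The VALUES ... CROSS JOIN in pgLocksQuery exists to zero-fill lock modes that currently have no rows in pg_locks, so every mode always reports a value and the time series never disappears. The same shape can be produced client-side; a hypothetical sketch of that alternative, shown only for comparison with the SQL approach:

package main

import "fmt"

var lockModes = []string{
	"accesssharelock", "rowsharelock", "rowexclusivelock",
	"shareupdateexclusivelock", "sharelock", "sharerowexclusivelock",
	"exclusivelock", "accessexclusivelock", "sireadlock",
}

// zeroFill overlays observed counts onto the full set of lock modes so every
// mode reports a value, mirroring what the SQL CROSS JOIN achieves.
func zeroFill(observed map[string]float64) map[string]float64 {
	out := make(map[string]float64, len(lockModes))
	for _, mode := range lockModes {
		out[mode] = observed[mode] // missing keys read as 0
	}
	return out
}

func main() {
	counts := zeroFill(map[string]float64{"exclusivelock": 42})
	fmt.Println(counts["exclusivelock"], counts["sireadlock"]) // 42 0
}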

collector/pg_locks_test.go (new file, 60 lines)
@@ -0,0 +1,60 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGLocksCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	rows := sqlmock.NewRows([]string{"datname", "mode", "count"}).
		AddRow("test", "exclusivelock", 42)

	mock.ExpectQuery(sanitizeQuery(pgLocksQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGLocksCollector{}
		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGLocksCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datname": "test", "mode": "exclusivelock"}, value: 42, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

collector/pg_long_running_transactions.go (new file, 93 lines)
@@ -0,0 +1,93 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
)

const longRunningTransactionsSubsystem = "long_running_transactions"

func init() {
	registerCollector(longRunningTransactionsSubsystem, defaultDisabled, NewPGLongRunningTransactionsCollector)
}

type PGLongRunningTransactionsCollector struct {
	log log.Logger
}

func NewPGLongRunningTransactionsCollector(config collectorConfig) (Collector, error) {
	return &PGLongRunningTransactionsCollector{log: config.logger}, nil
}

var (
	longRunningTransactionsCount = prometheus.NewDesc(
		"pg_long_running_transactions",
		"Current number of long running transactions",
		[]string{},
		prometheus.Labels{},
	)

	longRunningTransactionsAgeInSeconds = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, longRunningTransactionsSubsystem, "oldest_timestamp_seconds"),
		"The current maximum transaction age in seconds",
		[]string{},
		prometheus.Labels{},
	)

	longRunningTransactionsQuery = `
	SELECT
		COUNT(*) as transactions,
		MAX(EXTRACT(EPOCH FROM clock_timestamp())) AS oldest_timestamp_seconds
	FROM pg_catalog.pg_stat_activity
	WHERE state is distinct from 'idle' AND query not like 'autovacuum:%'
	`
)

func (PGLongRunningTransactionsCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	rows, err := db.QueryContext(ctx,
		longRunningTransactionsQuery)

	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var transactions, ageInSeconds float64

		if err := rows.Scan(&transactions, &ageInSeconds); err != nil {
			return err
		}

		ch <- prometheus.MustNewConstMetric(
			longRunningTransactionsCount,
			prometheus.GaugeValue,
			transactions,
		)
		ch <- prometheus.MustNewConstMetric(
			longRunningTransactionsAgeInSeconds,
			prometheus.GaugeValue,
			ageInSeconds,
		)
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return nil
}
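
Every file in this change registers itself through registerCollector(name, enabledByDefault, factory) from an init function; the registry itself is defined elsewhere in the package. Purely as an assumption to make those init calls readable, a minimal registry could look like the sketch below — the real implementation also wires each collector to a per-collector CLI flag.

// Hypothetical sketch of the package-level registry behind registerCollector.
// Collector and collectorConfig are the package's own types; the constant
// names mirror the defaultEnabled/defaultDisabled arguments used above.
type factoryFunc func(collectorConfig) (Collector, error)

const (
	defaultEnabled  = true
	defaultDisabled = false
)

var factories = make(map[string]factoryFunc)

func registerCollector(name string, isDefaultEnabled bool, factory factoryFunc) {
	factories[name] = factory
	// The real implementation would also record isDefaultEnabled and expose
	// a flag to toggle the collector at runtime.
}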

collector/pg_long_running_transactions_test.go (new file, 63 lines)
@@ -0,0 +1,63 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGLongRunningTransactionsCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()
	inst := &instance{db: db}
	columns := []string{
		"transactions",
		"age_in_seconds",
	}
	rows := sqlmock.NewRows(columns).
		AddRow(20, 1200)

	mock.ExpectQuery(sanitizeQuery(longRunningTransactionsQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGLongRunningTransactionsCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGLongRunningTransactionsCollector.Update: %s", err)
		}
	}()
	expected := []MetricResult{
		{labels: labelMap{}, value: 20, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{}, value: 1200, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

collector/pg_postmaster.go (new file, 69 lines)
@@ -0,0 +1,69 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/prometheus/client_golang/prometheus"
)

const postmasterSubsystem = "postmaster"

func init() {
	registerCollector(postmasterSubsystem, defaultDisabled, NewPGPostmasterCollector)
}

type PGPostmasterCollector struct {
}

func NewPGPostmasterCollector(collectorConfig) (Collector, error) {
	return &PGPostmasterCollector{}, nil
}

var (
	pgPostMasterStartTimeSeconds = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			postmasterSubsystem,
			"start_time_seconds",
		),
		"Time at which postmaster started",
		[]string{}, nil,
	)

	pgPostmasterQuery = "SELECT extract(epoch from pg_postmaster_start_time) from pg_postmaster_start_time();"
)

func (c *PGPostmasterCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	row := db.QueryRowContext(ctx,
		pgPostmasterQuery)

	var startTimeSeconds sql.NullFloat64
	err := row.Scan(&startTimeSeconds)
	if err != nil {
		return err
	}
	startTimeSecondsMetric := 0.0
	if startTimeSeconds.Valid {
		startTimeSecondsMetric = startTimeSeconds.Float64
	}
	ch <- prometheus.MustNewConstMetric(
		pgPostMasterStartTimeSeconds,
		prometheus.GaugeValue, startTimeSecondsMetric,
	)
	return nil
}
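
The gauge above is a Unix timestamp (epoch seconds), so uptime is derived at query time rather than exported directly. Translating the sample value back into a time in Go, purely as an illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Sample value from the test below.
	startTimeSeconds := 1685739904.0

	start := time.Unix(int64(startTimeSeconds), 0)
	fmt.Println("postmaster started:", start.UTC())
	fmt.Println("uptime:", time.Since(start).Round(time.Second))
}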

collector/pg_postmaster_test.go (new file, 95 lines)
@@ -0,0 +1,95 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPgPostmasterCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	mock.ExpectQuery(sanitizeQuery(pgPostmasterQuery)).WillReturnRows(sqlmock.NewRows([]string{"pg_postmaster_start_time"}).
		AddRow(1685739904))

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGPostmasterCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGPostmasterCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{}, value: 1685739904, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPgPostmasterCollectorNullTime(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	mock.ExpectQuery(sanitizeQuery(pgPostmasterQuery)).WillReturnRows(sqlmock.NewRows([]string{"pg_postmaster_start_time"}).
		AddRow(nil))

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGPostmasterCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGPostmasterCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{}, value: 0, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

collector/pg_process_idle.go (new file, 132 lines)
@@ -0,0 +1,132 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/go-kit/log"
	"github.com/lib/pq"
	"github.com/prometheus/client_golang/prometheus"
)

func init() {
	// Making this default disabled because we have no tests for it
	registerCollector(processIdleSubsystem, defaultDisabled, NewPGProcessIdleCollector)
}

type PGProcessIdleCollector struct {
	log log.Logger
}

const processIdleSubsystem = "process_idle"

func NewPGProcessIdleCollector(config collectorConfig) (Collector, error) {
	return &PGProcessIdleCollector{log: config.logger}, nil
}

var pgProcessIdleSeconds = prometheus.NewDesc(
	prometheus.BuildFQName(namespace, processIdleSubsystem, "seconds"),
	"Idle time of server processes",
	[]string{"state", "application_name"},
	prometheus.Labels{},
)

func (PGProcessIdleCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	row := db.QueryRowContext(ctx,
		`WITH
			metrics AS (
				SELECT
					state,
					application_name,
					SUM(EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change))::bigint)::float AS process_idle_seconds_sum,
					COUNT(*) AS process_idle_seconds_count
				FROM pg_stat_activity
				WHERE state ~ '^idle'
				GROUP BY state, application_name
			),
			buckets AS (
				SELECT
					state,
					application_name,
					le,
					SUM(
						CASE WHEN EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change)) <= le
							THEN 1
							ELSE 0
						END
					)::bigint AS bucket
				FROM
					pg_stat_activity,
					UNNEST(ARRAY[1, 2, 5, 15, 30, 60, 90, 120, 300]) AS le
				GROUP BY state, application_name, le
				ORDER BY state, application_name, le
			)
		SELECT
			state,
			application_name,
			process_idle_seconds_sum as seconds_sum,
			process_idle_seconds_count as seconds_count,
			ARRAY_AGG(le) AS seconds,
			ARRAY_AGG(bucket) AS seconds_bucket
		FROM metrics JOIN buckets USING (state, application_name)
		GROUP BY 1, 2, 3, 4;`)

	var state sql.NullString
	var applicationName sql.NullString
	var secondsSum sql.NullFloat64
	var secondsCount sql.NullInt64
	var seconds []float64
	var secondsBucket []int64

	err := row.Scan(&state, &applicationName, &secondsSum, &secondsCount, pq.Array(&seconds), pq.Array(&secondsBucket))
	if err != nil {
		return err
	}

	var buckets = make(map[float64]uint64, len(seconds))
	for i, second := range seconds {
		if i >= len(secondsBucket) {
			break
		}
		buckets[second] = uint64(secondsBucket[i])
	}

	stateLabel := "unknown"
	if state.Valid {
		stateLabel = state.String
	}

	applicationNameLabel := "unknown"
	if applicationName.Valid {
		applicationNameLabel = applicationName.String
	}

	var secondsCountMetric uint64
	if secondsCount.Valid {
		secondsCountMetric = uint64(secondsCount.Int64)
	}
	secondsSumMetric := 0.0
	if secondsSum.Valid {
		secondsSumMetric = secondsSum.Float64
	}
	ch <- prometheus.MustNewConstHistogram(
		pgProcessIdleSeconds,
		secondsCountMetric, secondsSumMetric, buckets,
		stateLabel, applicationNameLabel,
	)
	return nil
}
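
This collector is unusual in that it assembles a histogram by hand from two parallel SQL arrays (the le boundaries and the cumulative bucket counts, scanned via pq.Array) instead of using a client-side prometheus.Histogram. A minimal self-contained example of the same MustNewConstHistogram call, with made-up numbers:

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	desc := prometheus.NewDesc("demo_idle_seconds", "Idle time demo histogram",
		[]string{"state"}, nil)

	// Cumulative counts per upper bound, as the SQL above produces them.
	buckets := map[float64]uint64{1: 3, 5: 7, 30: 9}

	// count and sum cover all observations, including those above the last le.
	m := prometheus.MustNewConstHistogram(desc, 10, 142.5, buckets, "idle")
	fmt.Println(m.Desc())
}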

collector/pg_replication.go (new file, 88 lines)
@@ -0,0 +1,88 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"

	"github.com/prometheus/client_golang/prometheus"
)

const replicationSubsystem = "replication"

func init() {
	registerCollector(replicationSubsystem, defaultEnabled, NewPGReplicationCollector)
}

type PGReplicationCollector struct {
}

func NewPGReplicationCollector(collectorConfig) (Collector, error) {
	return &PGReplicationCollector{}, nil
}

var (
	pgReplicationLag = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			replicationSubsystem,
			"lag_seconds",
		),
		"Replication lag behind master in seconds",
		[]string{}, nil,
	)
	pgReplicationIsReplica = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			replicationSubsystem,
			"is_replica",
		),
		"Indicates if the server is a replica",
		[]string{}, nil,
	)

	pgReplicationQuery = `SELECT
	CASE
		WHEN NOT pg_is_in_recovery() THEN 0
		WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0
		ELSE GREATEST(0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
	END AS lag,
	CASE
		WHEN pg_is_in_recovery() THEN 1
		ELSE 0
	END as is_replica`
)

func (c *PGReplicationCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	row := db.QueryRowContext(ctx,
		pgReplicationQuery,
	)

	var lag float64
	var isReplica int64
	err := row.Scan(&lag, &isReplica)
	if err != nil {
		return err
	}
	ch <- prometheus.MustNewConstMetric(
		pgReplicationLag,
		prometheus.GaugeValue, lag,
	)
	ch <- prometheus.MustNewConstMetric(
		pgReplicationIsReplica,
		prometheus.GaugeValue, float64(isReplica),
	)
	return nil
}
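
The CASE expressions fold three situations into one lag value: a primary reports 0, a caught-up replica (receive LSN equals replay LSN) reports 0, and anything else reports the clamped replay delay. The same decision table in Go, as an illustration of the query's intent only (the function and its parameters are hypothetical):

package main

import "fmt"

// replicationLag mirrors the CASE logic in pgReplicationQuery.
func replicationLag(inRecovery bool, receiveLSN, replayLSN uint64, replayDelaySeconds float64) float64 {
	switch {
	case !inRecovery: // primary: no lag by definition
		return 0
	case receiveLSN == replayLSN: // replica fully caught up
		return 0
	default:
		if replayDelaySeconds < 0 { // the GREATEST(0, ...) clamp
			return 0
		}
		return replayDelaySeconds
	}
}

func main() {
	fmt.Println(replicationLag(true, 100, 90, 2.5)) // 2.5
}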

collector/pg_replication_slot.go (new file, 130 lines)
@@ -0,0 +1,130 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
)

const replicationSlotSubsystem = "replication_slot"

func init() {
	registerCollector(replicationSlotSubsystem, defaultEnabled, NewPGReplicationSlotCollector)
}

type PGReplicationSlotCollector struct {
	log log.Logger
}

func NewPGReplicationSlotCollector(config collectorConfig) (Collector, error) {
	return &PGReplicationSlotCollector{log: config.logger}, nil
}

var (
	pgReplicationSlotCurrentWalDesc = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			replicationSlotSubsystem,
			"slot_current_wal_lsn",
		),
		"current wal lsn value",
		[]string{"slot_name"}, nil,
	)
	pgReplicationSlotCurrentFlushDesc = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			replicationSlotSubsystem,
			"slot_confirmed_flush_lsn",
		),
		"last lsn confirmed flushed to the replication slot",
		[]string{"slot_name"}, nil,
	)
	pgReplicationSlotIsActiveDesc = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			replicationSlotSubsystem,
			"slot_is_active",
		),
		"whether the replication slot is active or not",
		[]string{"slot_name"}, nil,
	)

	pgReplicationSlotQuery = `SELECT
		slot_name,
		CASE WHEN pg_is_in_recovery() THEN
			pg_last_wal_receive_lsn() - '0/0'
		ELSE
			pg_current_wal_lsn() - '0/0'
		END AS current_wal_lsn,
		COALESCE(confirmed_flush_lsn, '0/0') - '0/0',
		active
	FROM pg_replication_slots;`
)

func (PGReplicationSlotCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	rows, err := db.QueryContext(ctx,
		pgReplicationSlotQuery)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var slotName sql.NullString
		var walLSN sql.NullFloat64
		var flushLSN sql.NullFloat64
		var isActive sql.NullBool
		if err := rows.Scan(&slotName, &walLSN, &flushLSN, &isActive); err != nil {
			return err
		}

		isActiveValue := 0.0
		if isActive.Valid && isActive.Bool {
			isActiveValue = 1.0
		}
		slotNameLabel := "unknown"
		if slotName.Valid {
			slotNameLabel = slotName.String
		}

		var walLSNMetric float64
		if walLSN.Valid {
			walLSNMetric = walLSN.Float64
		}
		ch <- prometheus.MustNewConstMetric(
			pgReplicationSlotCurrentWalDesc,
			prometheus.GaugeValue, walLSNMetric, slotNameLabel,
		)
		if isActive.Valid && isActive.Bool {
			var flushLSNMetric float64
			if flushLSN.Valid {
				flushLSNMetric = flushLSN.Float64
			}
			ch <- prometheus.MustNewConstMetric(
				pgReplicationSlotCurrentFlushDesc,
				prometheus.GaugeValue, flushLSNMetric, slotNameLabel,
			)
		}
		ch <- prometheus.MustNewConstMetric(
			pgReplicationSlotIsActiveDesc,
			prometheus.GaugeValue, isActiveValue, slotNameLabel,
		)
	}
	return rows.Err()
}
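
Subtracting '0/0' from an LSN in SQL converts it to an absolute byte position in the WAL stream, which is why the slot metrics come back as plain numbers. The equivalent conversion in Go, for readers decoding raw pg_lsn strings; a hypothetical helper, not exporter code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLSN converts a pg_lsn string such as "16/B374D848" into the absolute
// byte offset that `lsn - '0/0'` yields in SQL: high<<32 | low.
func parseLSN(lsn string) (uint64, error) {
	parts := strings.Split(lsn, "/")
	if len(parts) != 2 {
		return 0, fmt.Errorf("malformed LSN %q", lsn)
	}
	high, err := strconv.ParseUint(parts[0], 16, 32)
	if err != nil {
		return 0, err
	}
	low, err := strconv.ParseUint(parts[1], 16, 32)
	if err != nil {
		return 0, err
	}
	return high<<32 | low, nil
}

func main() {
	offset, err := parseLSN("16/B374D848")
	if err != nil {
		panic(err)
	}
	fmt.Println(offset)
}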

collector/pg_replication_slot_test.go (new file, 186 lines)
@@ -0,0 +1,186 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPgReplicationSlotCollectorActive(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{"slot_name", "current_wal_lsn", "confirmed_flush_lsn", "active"}
	rows := sqlmock.NewRows(columns).
		AddRow("test_slot", 5, 3, true)
	mock.ExpectQuery(sanitizeQuery(pgReplicationSlotQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGReplicationSlotCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGReplicationSlotCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"slot_name": "test_slot"}, value: 5, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"slot_name": "test_slot"}, value: 3, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"slot_name": "test_slot"}, value: 1, metricType: dto.MetricType_GAUGE},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPgReplicationSlotCollectorInActive(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{"slot_name", "current_wal_lsn", "confirmed_flush_lsn", "active"}
	rows := sqlmock.NewRows(columns).
		AddRow("test_slot", 6, 12, false)
	mock.ExpectQuery(sanitizeQuery(pgReplicationSlotQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGReplicationSlotCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGReplicationSlotCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"slot_name": "test_slot"}, value: 6, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"slot_name": "test_slot"}, value: 0, metricType: dto.MetricType_GAUGE},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPgReplicationSlotCollectorActiveNil(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{"slot_name", "current_wal_lsn", "confirmed_flush_lsn", "active"}
	rows := sqlmock.NewRows(columns).
		AddRow("test_slot", 6, 12, nil)
	mock.ExpectQuery(sanitizeQuery(pgReplicationSlotQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGReplicationSlotCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGReplicationSlotCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"slot_name": "test_slot"}, value: 6, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"slot_name": "test_slot"}, value: 0, metricType: dto.MetricType_GAUGE},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPgReplicationSlotCollectorTestNilValues(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{"slot_name", "current_wal_lsn", "confirmed_flush_lsn", "active"}
	rows := sqlmock.NewRows(columns).
		AddRow(nil, nil, nil, true)
	mock.ExpectQuery(sanitizeQuery(pgReplicationSlotQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGReplicationSlotCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGReplicationSlotCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"slot_name": "unknown"}, value: 0, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"slot_name": "unknown"}, value: 0, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"slot_name": "unknown"}, value: 1, metricType: dto.MetricType_GAUGE},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

collector/pg_replication_test.go (new file, 63 lines)
@@ -0,0 +1,63 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPgReplicationCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{"lag", "is_replica"}
	rows := sqlmock.NewRows(columns).
		AddRow(1000, 1)
	mock.ExpectQuery(sanitizeQuery(pgReplicationQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGReplicationCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGReplicationCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{}, value: 1000, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{}, value: 1, metricType: dto.MetricType_GAUGE},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

collector/pg_stat_activity_autovacuum.go (new file, 84 lines)
@@ -0,0 +1,84 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
)

const statActivityAutovacuumSubsystem = "stat_activity_autovacuum"

func init() {
	registerCollector(statActivityAutovacuumSubsystem, defaultDisabled, NewPGStatActivityAutovacuumCollector)
}

type PGStatActivityAutovacuumCollector struct {
	log log.Logger
}

func NewPGStatActivityAutovacuumCollector(config collectorConfig) (Collector, error) {
	return &PGStatActivityAutovacuumCollector{log: config.logger}, nil
}

var (
	statActivityAutovacuumAgeInSeconds = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statActivityAutovacuumSubsystem, "timestamp_seconds"),
		"Start timestamp of the vacuum process in seconds",
		[]string{"relname"},
		prometheus.Labels{},
	)

	statActivityAutovacuumQuery = `
	SELECT
		SPLIT_PART(query, '.', 2) AS relname,
		EXTRACT(EPOCH FROM xact_start) AS timestamp_seconds
	FROM
		pg_catalog.pg_stat_activity
	WHERE
		query LIKE 'autovacuum:%'
	`
)

func (PGStatActivityAutovacuumCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	rows, err := db.QueryContext(ctx,
		statActivityAutovacuumQuery)

	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var relname string
		var ageInSeconds float64

		if err := rows.Scan(&relname, &ageInSeconds); err != nil {
			return err
		}

		ch <- prometheus.MustNewConstMetric(
			statActivityAutovacuumAgeInSeconds,
			prometheus.GaugeValue,
			ageInSeconds, relname,
		)
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return nil
}
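
SPLIT_PART(query, '.', 2) recovers the relation name from autovacuum's activity text, which has the shape "autovacuum: VACUUM schema.table". The same extraction in Go, to make the query's string surgery concrete (the sample string is an assumption about autovacuum's formatting):

package main

import (
	"fmt"
	"strings"
)

// relnameFromAutovacuum mirrors SPLIT_PART(query, '.', 2): the second
// dot-delimited field of the autovacuum activity string.
func relnameFromAutovacuum(query string) string {
	parts := strings.SplitN(query, ".", 3)
	if len(parts) < 2 {
		return ""
	}
	return parts[1]
}

func main() {
	fmt.Println(relnameFromAutovacuum("autovacuum: VACUUM public.mytable"))
	// prints "mytable"
}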

collector/pg_stat_activity_autovacuum_test.go (new file, 62 lines)
@@ -0,0 +1,62 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGStatActivityAutovacuumCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()
	inst := &instance{db: db}
	columns := []string{
		"relname",
		"timestamp_seconds",
	}
	rows := sqlmock.NewRows(columns).
		AddRow("test", 3600)

	mock.ExpectQuery(sanitizeQuery(statActivityAutovacuumQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatActivityAutovacuumCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatActivityAutovacuumCollector.Update: %s", err)
		}
	}()
	expected := []MetricResult{
		{labels: labelMap{"relname": "test"}, value: 3600, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
@ -15,92 +15,93 @@ package collector

 import (
 	"context"
-	"time"
+	"database/sql"

 	"github.com/go-kit/log"
 	"github.com/prometheus/client_golang/prometheus"
 )

+const bgWriterSubsystem = "stat_bgwriter"
+
 func init() {
-	registerCollector("bgwriter", defaultEnabled, NewPGStatBGWriterCollector)
+	registerCollector(bgWriterSubsystem, defaultEnabled, NewPGStatBGWriterCollector)
 }

 type PGStatBGWriterCollector struct {
 }

-func NewPGStatBGWriterCollector(logger log.Logger) (Collector, error) {
+func NewPGStatBGWriterCollector(collectorConfig) (Collector, error) {
 	return &PGStatBGWriterCollector{}, nil
 }

-const bgWriterSubsystem = "stat_bgwriter"
-
-var statBGWriter = map[string]*prometheus.Desc{
-	"checkpoints_timed": prometheus.NewDesc(
+var (
+	statBGWriterCheckpointsTimedDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "checkpoints_timed_total"),
 		"Number of scheduled checkpoints that have been performed",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"checkpoints_req": prometheus.NewDesc(
+	)
+	statBGWriterCheckpointsReqDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "checkpoints_req_total"),
 		"Number of requested checkpoints that have been performed",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"checkpoint_write_time": prometheus.NewDesc(
+	)
+	statBGWriterCheckpointsReqTimeDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "checkpoint_write_time_total"),
 		"Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"checkpoint_sync_time": prometheus.NewDesc(
+	)
+	statBGWriterCheckpointsSyncTimeDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "checkpoint_sync_time_total"),
 		"Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"buffers_checkpoint": prometheus.NewDesc(
+	)
+	statBGWriterBuffersCheckpointDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "buffers_checkpoint_total"),
 		"Number of buffers written during checkpoints",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"buffers_clean": prometheus.NewDesc(
+	)
+	statBGWriterBuffersCleanDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "buffers_clean_total"),
 		"Number of buffers written by the background writer",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"maxwritten_clean": prometheus.NewDesc(
+	)
+	statBGWriterMaxwrittenCleanDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "maxwritten_clean_total"),
 		"Number of times the background writer stopped a cleaning scan because it had written too many buffers",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"buffers_backend": prometheus.NewDesc(
+	)
+	statBGWriterBuffersBackendDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "buffers_backend_total"),
 		"Number of buffers written directly by a backend",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"buffers_backend_fsync": prometheus.NewDesc(
+	)
+	statBGWriterBuffersBackendFsyncDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "buffers_backend_fsync_total"),
 		"Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write)",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"buffers_alloc": prometheus.NewDesc(
+	)
+	statBGWriterBuffersAllocDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "buffers_alloc_total"),
 		"Number of buffers allocated",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
-	"stats_reset": prometheus.NewDesc(
+	)
+	statBGWriterStatsResetDesc = prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "stats_reset_total"),
 		"Time at which these statistics were last reset",
 		[]string{"collector", "server"},
 		prometheus.Labels{},
-	),
+	)
+)
+
+var statBGWriter = map[string]*prometheus.Desc{
 	"percona_checkpoints_timed": prometheus.NewDesc(
 		prometheus.BuildFQName(namespace, bgWriterSubsystem, "checkpoints_timed"),
 		"Number of scheduled checkpoints that have been performed",
@ -169,198 +170,235 @@ var statBGWriter = map[string]*prometheus.Desc{
 	),
 }

-func (PGStatBGWriterCollector) Update(ctx context.Context, server *server, ch chan<- prometheus.Metric) error {
-	db, err := server.GetDB()
-	if err != nil {
-		return err
-	}
+const statBGWriterQuery = `SELECT
+		checkpoints_timed
+		,checkpoints_req
+		,checkpoint_write_time
+		,checkpoint_sync_time
+		,buffers_checkpoint
+		,buffers_clean
+		,maxwritten_clean
+		,buffers_backend
+		,buffers_backend_fsync
+		,buffers_alloc
+		,stats_reset
+	FROM pg_stat_bgwriter;`
+
+func (PGStatBGWriterCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
+	db := instance.getDB()
 	row := db.QueryRowContext(ctx,
-		`SELECT
-			checkpoints_timed
-			,checkpoints_req
-			,checkpoint_write_time
-			,checkpoint_sync_time
-			,buffers_checkpoint
-			,buffers_clean
-			,maxwritten_clean
-			,buffers_backend
-			,buffers_backend_fsync
-			,buffers_alloc
-			,stats_reset
-		FROM pg_stat_bgwriter;`)
+		statBGWriterQuery)

-	var cpt int
-	var cpr int
-	var cpwt float64
-	var cpst float64
-	var bcp int
-	var bc int
-	var mwc int
-	var bb int
-	var bbf int
-	var ba int
-	var sr time.Time
+	var cpt, cpr, bcp, bc, mwc, bb, bbf, ba sql.NullInt64
+	var cpwt, cpst sql.NullFloat64
+	var sr sql.NullTime

-	err = row.Scan(&cpt, &cpr, &cpwt, &cpst, &bcp, &bc, &mwc, &bb, &bbf, &ba, &sr)
+	err := row.Scan(&cpt, &cpr, &cpwt, &cpst, &bcp, &bc, &mwc, &bb, &bbf, &ba, &sr)
 	if err != nil {
 		return err
 	}

+	cptMetric := 0.0
+	if cpt.Valid {
+		cptMetric = float64(cpt.Int64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["checkpoints_timed"],
+		statBGWriterCheckpointsTimedDesc,
 		prometheus.CounterValue,
-		float64(cpt),
+		cptMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	cprMetric := 0.0
+	if cpr.Valid {
+		cprMetric = float64(cpr.Int64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["checkpoints_req"],
+		statBGWriterCheckpointsReqDesc,
 		prometheus.CounterValue,
-		float64(cpr),
+		cprMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	cpwtMetric := 0.0
+	if cpwt.Valid {
+		cpwtMetric = float64(cpwt.Float64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["checkpoint_write_time"],
+		statBGWriterCheckpointsReqTimeDesc,
 		prometheus.CounterValue,
-		float64(cpwt),
+		cpwtMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	cpstMetric := 0.0
+	if cpst.Valid {
+		cpstMetric = float64(cpst.Float64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["checkpoint_sync_time"],
+		statBGWriterCheckpointsSyncTimeDesc,
 		prometheus.CounterValue,
-		float64(cpst),
+		cpstMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	bcpMetric := 0.0
+	if bcp.Valid {
+		bcpMetric = float64(bcp.Int64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["buffers_checkpoint"],
+		statBGWriterBuffersCheckpointDesc,
 		prometheus.CounterValue,
-		float64(bcp),
+		bcpMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	bcMetric := 0.0
+	if bc.Valid {
+		bcMetric = float64(bc.Int64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["buffers_clean"],
+		statBGWriterBuffersCleanDesc,
 		prometheus.CounterValue,
-		float64(bc),
+		bcMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	mwcMetric := 0.0
+	if mwc.Valid {
+		mwcMetric = float64(mwc.Int64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["maxwritten_clean"],
+		statBGWriterMaxwrittenCleanDesc,
 		prometheus.CounterValue,
-		float64(mwc),
+		mwcMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	bbMetric := 0.0
+	if bb.Valid {
+		bbMetric = float64(bb.Int64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["buffers_backend"],
+		statBGWriterBuffersBackendDesc,
 		prometheus.CounterValue,
-		float64(bb),
+		bbMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	bbfMetric := 0.0
+	if bbf.Valid {
+		bbfMetric = float64(bbf.Int64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["buffers_backend_fsync"],
+		statBGWriterBuffersBackendFsyncDesc,
 		prometheus.CounterValue,
-		float64(bbf),
+		bbfMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	baMetric := 0.0
+	if ba.Valid {
+		baMetric = float64(ba.Int64)
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["buffers_alloc"],
+		statBGWriterBuffersAllocDesc,
 		prometheus.CounterValue,
-		float64(ba),
+		baMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
+	srMetric := 0.0
+	if sr.Valid {
+		srMetric = float64(sr.Time.Unix())
+	}
 	ch <- prometheus.MustNewConstMetric(
-		statBGWriter["stats_reset"],
+		statBGWriterStatsResetDesc,
 		prometheus.CounterValue,
-		float64(sr.Unix()),
+		srMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)

 	// TODO: analyze metrics below, why do we duplicate them?

 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_checkpoints_timed"],
 		prometheus.CounterValue,
-		float64(cpt),
+		cptMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_checkpoints_req"],
 		prometheus.CounterValue,
-		float64(cpr),
+		cprMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_checkpoint_write_time"],
 		prometheus.CounterValue,
-		float64(cpwt),
+		cpwtMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_checkpoint_sync_time"],
 		prometheus.CounterValue,
-		float64(cpst),
+		cpstMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_buffers_checkpoint"],
 		prometheus.CounterValue,
-		float64(bcp),
+		bcpMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_buffers_clean"],
 		prometheus.CounterValue,
-		float64(bc),
+		bcMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_maxwritten_clean"],
 		prometheus.CounterValue,
-		float64(mwc),
+		mwcMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_buffers_backend"],
 		prometheus.CounterValue,
-		float64(bb),
+		bbMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_buffers_backend_fsync"],
 		prometheus.CounterValue,
-		float64(bbf),
+		bbfMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_buffers_alloc"],
 		prometheus.CounterValue,
-		float64(ba),
+		baMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)
 	ch <- prometheus.MustNewConstMetric(
 		statBGWriter["percona_stats_reset"],
 		prometheus.CounterValue,
-		float64(sr.Unix()),
+		srMetric,
 		"exporter",
-		server.GetName(),
+		instance.name,
 	)

 	return nil
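
Note: the type changes in this hunk are the substance of the fix. The old code scanned the pg_stat_bgwriter row into plain int/float64/time.Time, which fails on NULL columns (for example right after a stats reset), while the new code scans into database/sql null types and falls back to 0. A minimal standalone sketch of that pattern, with hypothetical variable names and the Scan call elided so the zero values stand in for an all-NULL row:

package main

import (
	"database/sql"
	"fmt"
)

func main() {
	// After row.Scan, a SQL NULL leaves Valid == false instead of returning an error.
	var checkpointWriteTime sql.NullFloat64
	var checkpointsTimed sql.NullInt64

	writeTime, timed := 0.0, 0.0
	if checkpointWriteTime.Valid {
		writeTime = checkpointWriteTime.Float64
	}
	if checkpointsTimed.Valid {
		timed = float64(checkpointsTimed.Int64)
	}
	fmt.Println(writeTime, timed) // 0 0 — the collector emits zeros rather than failing
}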

collector/pg_stat_bgwriter_test.go (new file, 153 lines)
@ -0,0 +1,153 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"
	"time"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGStatBGWriterCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db, name: "test"}

	columns := []string{
		"checkpoints_timed",
		"checkpoints_req",
		"checkpoint_write_time",
		"checkpoint_sync_time",
		"buffers_checkpoint",
		"buffers_clean",
		"maxwritten_clean",
		"buffers_backend",
		"buffers_backend_fsync",
		"buffers_alloc",
		"stats_reset"}

	srT, err := time.Parse("2006-01-02 15:04:05.00000-07", "2023-05-25 17:10:42.81132-07")
	if err != nil {
		t.Fatalf("Error parsing time: %s", err)
	}

	rows := sqlmock.NewRows(columns).
		AddRow(354, 4945, 289097744, 1242257, int64(3275602074), 89320867, 450139, 2034563757, 0, int64(2725688749), srT)
	mock.ExpectQuery(sanitizeQuery(statBGWriterQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatBGWriterCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatBGWriterCollector.Update: %s", err)
		}
	}()

	labels := labelMap{"collector": "exporter", "server": "test"}
	expected := []MetricResult{
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 354},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 4945},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 289097744},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 1242257},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 3275602074},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 89320867},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 450139},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 2034563757},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 2725688749},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 1685059842},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatBGWriterCollectorNullValues(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db, name: "test"}

	columns := []string{
		"checkpoints_timed",
		"checkpoints_req",
		"checkpoint_write_time",
		"checkpoint_sync_time",
		"buffers_checkpoint",
		"buffers_clean",
		"maxwritten_clean",
		"buffers_backend",
		"buffers_backend_fsync",
		"buffers_alloc",
		"stats_reset"}

	rows := sqlmock.NewRows(columns).
		AddRow(nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil)
	mock.ExpectQuery(sanitizeQuery(statBGWriterQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatBGWriterCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatBGWriterCollector.Update: %s", err)
		}
	}()

	labels := labelMap{"collector": "exporter", "server": "test"}
	expected := []MetricResult{
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labels, metricType: dto.MetricType_COUNTER, value: 0},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
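
Note: these stub-driver tests need no live PostgreSQL server; sqlmock supplies the rows. Something like "go test ./collector -run TestPGStatBGWriter -v" (standard Go tooling; adjust the package path to your checkout) exercises both the happy-path and the all-NULL cases above.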

collector/pg_stat_database.go (new file, 478 lines)
@ -0,0 +1,478 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/prometheus/client_golang/prometheus"
)

const statDatabaseSubsystem = "stat_database"

func init() {
	registerCollector(statDatabaseSubsystem, defaultEnabled, NewPGStatDatabaseCollector)
}

type PGStatDatabaseCollector struct {
	log log.Logger
}

func NewPGStatDatabaseCollector(config collectorConfig) (Collector, error) {
	return &PGStatDatabaseCollector{log: config.logger}, nil
}

var (
	statDatabaseNumbackends = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"numbackends",
		),
		"Number of backends currently connected to this database. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset.",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseXactCommit = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"xact_commit",
		),
		"Number of transactions in this database that have been committed",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseXactRollback = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"xact_rollback",
		),
		"Number of transactions in this database that have been rolled back",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseBlksRead = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"blks_read",
		),
		"Number of disk blocks read in this database",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseBlksHit = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"blks_hit",
		),
		"Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache)",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseTupReturned = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"tup_returned",
		),
		"Number of rows returned by queries in this database",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseTupFetched = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"tup_fetched",
		),
		"Number of rows fetched by queries in this database",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseTupInserted = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"tup_inserted",
		),
		"Number of rows inserted by queries in this database",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseTupUpdated = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"tup_updated",
		),
		"Number of rows updated by queries in this database",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseTupDeleted = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"tup_deleted",
		),
		"Number of rows deleted by queries in this database",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseConflicts = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"conflicts",
		),
		"Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see pg_stat_database_conflicts for details.)",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseTempFiles = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"temp_files",
		),
		"Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the log_temp_files setting.",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseTempBytes = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"temp_bytes",
		),
		"Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseDeadlocks = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"deadlocks",
		),
		"Number of deadlocks detected in this database",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseBlkReadTime = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"blk_read_time",
		),
		"Time spent reading data file blocks by backends in this database, in milliseconds",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseBlkWriteTime = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"blk_write_time",
		),
		"Time spent writing data file blocks by backends in this database, in milliseconds",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)
	statDatabaseStatsReset = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			statDatabaseSubsystem,
			"stats_reset",
		),
		"Time at which these statistics were last reset",
		[]string{"datid", "datname"},
		prometheus.Labels{},
	)

	statDatabaseQuery = `
		SELECT
			datid
			,datname
			,numbackends
			,xact_commit
			,xact_rollback
			,blks_read
			,blks_hit
			,tup_returned
			,tup_fetched
			,tup_inserted
			,tup_updated
			,tup_deleted
			,conflicts
			,temp_files
			,temp_bytes
			,deadlocks
			,blk_read_time
			,blk_write_time
			,stats_reset
		FROM pg_stat_database;
	`
)

func (c *PGStatDatabaseCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	rows, err := db.QueryContext(ctx,
		statDatabaseQuery,
	)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var datid, datname sql.NullString
		var numBackends, xactCommit, xactRollback, blksRead, blksHit, tupReturned, tupFetched, tupInserted, tupUpdated, tupDeleted, conflicts, tempFiles, tempBytes, deadlocks, blkReadTime, blkWriteTime sql.NullFloat64
		var statsReset sql.NullTime

		err := rows.Scan(
			&datid,
			&datname,
			&numBackends,
			&xactCommit,
			&xactRollback,
			&blksRead,
			&blksHit,
			&tupReturned,
			&tupFetched,
			&tupInserted,
			&tupUpdated,
			&tupDeleted,
			&conflicts,
			&tempFiles,
			&tempBytes,
			&deadlocks,
			&blkReadTime,
			&blkWriteTime,
			&statsReset,
		)
		if err != nil {
			return err
		}

		if !datid.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no datid")
			continue
		}
		if !datname.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no datname")
			continue
		}
		if !numBackends.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no numbackends")
			continue
		}
		if !xactCommit.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no xact_commit")
			continue
		}
		if !xactRollback.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no xact_rollback")
			continue
		}
		if !blksRead.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no blks_read")
			continue
		}
		if !blksHit.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no blks_hit")
			continue
		}
		if !tupReturned.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no tup_returned")
			continue
		}
		if !tupFetched.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no tup_fetched")
			continue
		}
		if !tupInserted.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no tup_inserted")
			continue
		}
		if !tupUpdated.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no tup_updated")
			continue
		}
		if !tupDeleted.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no tup_deleted")
			continue
		}
		if !conflicts.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no conflicts")
			continue
		}
		if !tempFiles.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no temp_files")
			continue
		}
		if !tempBytes.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no temp_bytes")
			continue
		}
		if !deadlocks.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no deadlocks")
			continue
		}
		if !blkReadTime.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no blk_read_time")
			continue
		}
		if !blkWriteTime.Valid {
			level.Debug(c.log).Log("msg", "Skipping collecting metric because it has no blk_write_time")
			continue
		}

		statsResetMetric := 0.0
		if statsReset.Valid {
			statsResetMetric = float64(statsReset.Time.Unix())
		} else {
			level.Debug(c.log).Log("msg", "No metric for stats_reset, will collect 0 instead")
		}

		labels := []string{datid.String, datname.String}

		ch <- prometheus.MustNewConstMetric(
			statDatabaseNumbackends,
			prometheus.GaugeValue,
			numBackends.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseXactCommit,
			prometheus.CounterValue,
			xactCommit.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseXactRollback,
			prometheus.CounterValue,
			xactRollback.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseBlksRead,
			prometheus.CounterValue,
			blksRead.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseBlksHit,
			prometheus.CounterValue,
			blksHit.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseTupReturned,
			prometheus.CounterValue,
			tupReturned.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseTupFetched,
			prometheus.CounterValue,
			tupFetched.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseTupInserted,
			prometheus.CounterValue,
			tupInserted.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseTupUpdated,
			prometheus.CounterValue,
			tupUpdated.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseTupDeleted,
			prometheus.CounterValue,
			tupDeleted.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseConflicts,
			prometheus.CounterValue,
			conflicts.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseTempFiles,
			prometheus.CounterValue,
			tempFiles.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseTempBytes,
			prometheus.CounterValue,
			tempBytes.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseDeadlocks,
			prometheus.CounterValue,
			deadlocks.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseBlkReadTime,
			prometheus.CounterValue,
			blkReadTime.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseBlkWriteTime,
			prometheus.CounterValue,
			blkWriteTime.Float64,
			labels...,
		)

		ch <- prometheus.MustNewConstMetric(
			statDatabaseStatsReset,
			prometheus.CounterValue,
			statsResetMetric,
			labels...,
		)
	}
	return nil
}
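
Note: every description above funnels through prometheus.BuildFQName(namespace, statDatabaseSubsystem, name). A quick sketch of the resulting series names, assuming the exporter's usual "pg" namespace constant (the constant itself is defined elsewhere in this package, not in this diff):

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// "pg" is an assumption here; substitute the package's real namespace constant.
	fmt.Println(prometheus.BuildFQName("pg", "stat_database", "blks_hit"))
	// Output: pg_stat_database_blks_hit
}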

collector/pg_stat_database_test.go (new file, 506 lines)
@ -0,0 +1,506 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"
	"time"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGStatDatabaseCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{
		"datid",
		"datname",
		"numbackends",
		"xact_commit",
		"xact_rollback",
		"blks_read",
		"blks_hit",
		"tup_returned",
		"tup_fetched",
		"tup_inserted",
		"tup_updated",
		"tup_deleted",
		"conflicts",
		"temp_files",
		"temp_bytes",
		"deadlocks",
		"blk_read_time",
		"blk_write_time",
		"stats_reset",
	}

	srT, err := time.Parse("2006-01-02 15:04:05.00000-07", "2023-05-25 17:10:42.81132-07")
	if err != nil {
		t.Fatalf("Error parsing time: %s", err)
	}

	rows := sqlmock.NewRows(columns).
		AddRow(
			"pid",
			"postgres",
			354,
			4945,
			289097744,
			1242257,
			int64(3275602074),
			89320867,
			450139,
			2034563757,
			0,
			int64(2725688749),
			23,
			52,
			74,
			925,
			16,
			823,
			srT)

	mock.ExpectQuery(sanitizeQuery(statDatabaseQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatDatabaseCollector{
			log: log.With(log.NewNopLogger(), "collector", "pg_stat_database"),
		}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatDatabaseCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_GAUGE, value: 354},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 4945},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 289097744},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1242257},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 3275602074},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 89320867},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 450139},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2034563757},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2725688749},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 23},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 52},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 74},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 925},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 16},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 823},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1685059842},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatDatabaseCollectorNullValues(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	srT, err := time.Parse("2006-01-02 15:04:05.00000-07", "2023-05-25 17:10:42.81132-07")
	if err != nil {
		t.Fatalf("Error parsing time: %s", err)
	}
	inst := &instance{db: db}

	columns := []string{
		"datid",
		"datname",
		"numbackends",
		"xact_commit",
		"xact_rollback",
		"blks_read",
		"blks_hit",
		"tup_returned",
		"tup_fetched",
		"tup_inserted",
		"tup_updated",
		"tup_deleted",
		"conflicts",
		"temp_files",
		"temp_bytes",
		"deadlocks",
		"blk_read_time",
		"blk_write_time",
		"stats_reset",
	}

	rows := sqlmock.NewRows(columns).
		AddRow(
			nil,
			"postgres",
			354,
			4945,
			289097744,
			1242257,
			int64(3275602074),
			89320867,
			450139,
			2034563757,
			0,
			int64(2725688749),
			23,
			52,
			74,
			925,
			16,
			823,
			srT).
		AddRow(
			"pid",
			"postgres",
			354,
			4945,
			289097744,
			1242257,
			int64(3275602074),
			89320867,
			450139,
			2034563757,
			0,
			int64(2725688749),
			23,
			52,
			74,
			925,
			16,
			823,
			srT)
	mock.ExpectQuery(sanitizeQuery(statDatabaseQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatDatabaseCollector{
			log: log.With(log.NewNopLogger(), "collector", "pg_stat_database"),
		}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatDatabaseCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_GAUGE, value: 354},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 4945},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 289097744},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1242257},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 3275602074},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 89320867},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 450139},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2034563757},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2725688749},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 23},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 52},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 74},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 925},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 16},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 823},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1685059842},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatDatabaseCollectorRowLeakTest(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{
		"datid",
		"datname",
		"numbackends",
		"xact_commit",
		"xact_rollback",
		"blks_read",
		"blks_hit",
		"tup_returned",
		"tup_fetched",
		"tup_inserted",
		"tup_updated",
		"tup_deleted",
		"conflicts",
		"temp_files",
		"temp_bytes",
		"deadlocks",
		"blk_read_time",
		"blk_write_time",
		"stats_reset",
	}

	srT, err := time.Parse("2006-01-02 15:04:05.00000-07", "2023-05-25 17:10:42.81132-07")
	if err != nil {
		t.Fatalf("Error parsing time: %s", err)
	}

	rows := sqlmock.NewRows(columns).
		AddRow(
			"pid",
			"postgres",
			354,
			4945,
			289097744,
			1242257,
			int64(3275602074),
			89320867,
			450139,
			2034563757,
			0,
			int64(2725688749),
			23,
			52,
			74,
			925,
			16,
			823,
			srT).
		AddRow(
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
		).
		AddRow(
			"pid",
			"postgres",
			355,
			4946,
			289097745,
			1242258,
			int64(3275602075),
			89320868,
			450140,
			2034563758,
			1,
			int64(2725688750),
			24,
			53,
			75,
			926,
			17,
			824,
			srT)
	mock.ExpectQuery(sanitizeQuery(statDatabaseQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatDatabaseCollector{
			log: log.With(log.NewNopLogger(), "collector", "pg_stat_database"),
		}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatDatabaseCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_GAUGE, value: 354},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 4945},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 289097744},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1242257},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 3275602074},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 89320867},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 450139},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2034563757},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2725688749},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 23},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 52},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 74},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 925},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 16},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 823},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1685059842},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_GAUGE, value: 355},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 4946},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 289097745},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1242258},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 3275602075},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 89320868},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 450140},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2034563758},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2725688750},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 24},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 53},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 75},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 926},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 17},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 824},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1685059842},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatDatabaseCollectorTestNilStatReset(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{
		"datid",
		"datname",
		"numbackends",
		"xact_commit",
		"xact_rollback",
		"blks_read",
		"blks_hit",
		"tup_returned",
		"tup_fetched",
		"tup_inserted",
		"tup_updated",
		"tup_deleted",
		"conflicts",
		"temp_files",
		"temp_bytes",
		"deadlocks",
		"blk_read_time",
		"blk_write_time",
		"stats_reset",
	}

	rows := sqlmock.NewRows(columns).
		AddRow(
			"pid",
			"postgres",
			354,
			4945,
			289097744,
			1242257,
			int64(3275602074),
			89320867,
			450139,
			2034563757,
			0,
			int64(2725688749),
			23,
			52,
			74,
			925,
			16,
			823,
			nil)

	mock.ExpectQuery(sanitizeQuery(statDatabaseQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatDatabaseCollector{
			log: log.With(log.NewNopLogger(), "collector", "pg_stat_database"),
		}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatDatabaseCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_GAUGE, value: 354},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 4945},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 289097744},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 1242257},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 3275602074},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 89320867},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 450139},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2034563757},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 2725688749},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 23},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 52},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 74},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 925},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 16},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 823},
		{labels: labelMap{"datid": "pid", "datname": "postgres"}, metricType: dto.MetricType_COUNTER, value: 0},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

collector/pg_stat_statements.go (new file, 211 lines)
@ -0,0 +1,211 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/blang/semver/v4"
	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
)

const statStatementsSubsystem = "stat_statements"

func init() {
	// WARNING:
	// Disabled by default because this set of metrics can be quite expensive on a busy server.
	// Every unique query will cause a new timeseries to be created.
	registerCollector(statStatementsSubsystem, defaultDisabled, NewPGStatStatementsCollector)
}

type PGStatStatementsCollector struct {
	log log.Logger
}

func NewPGStatStatementsCollector(config collectorConfig) (Collector, error) {
	return &PGStatStatementsCollector{log: config.logger}, nil
}

var (
	statStatementsCallsTotal = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statStatementsSubsystem, "calls_total"),
		"Number of times executed",
		[]string{"user", "datname", "queryid"},
		prometheus.Labels{},
	)
	statStatementsSecondsTotal = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statStatementsSubsystem, "seconds_total"),
		"Total time spent in the statement, in seconds",
		[]string{"user", "datname", "queryid"},
		prometheus.Labels{},
	)
	statStatementsRowsTotal = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statStatementsSubsystem, "rows_total"),
		"Total number of rows retrieved or affected by the statement",
		[]string{"user", "datname", "queryid"},
		prometheus.Labels{},
	)
	statStatementsBlockReadSecondsTotal = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statStatementsSubsystem, "block_read_seconds_total"),
		"Total time the statement spent reading blocks, in seconds",
		[]string{"user", "datname", "queryid"},
		prometheus.Labels{},
	)
	statStatementsBlockWriteSecondsTotal = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statStatementsSubsystem, "block_write_seconds_total"),
		"Total time the statement spent writing blocks, in seconds",
		[]string{"user", "datname", "queryid"},
		prometheus.Labels{},
	)

	pgStatStatementsQuery = `SELECT
		pg_get_userbyid(userid) as user,
		pg_database.datname,
		pg_stat_statements.queryid,
		pg_stat_statements.calls as calls_total,
		pg_stat_statements.total_time / 1000.0 as seconds_total,
		pg_stat_statements.rows as rows_total,
		pg_stat_statements.blk_read_time / 1000.0 as block_read_seconds_total,
		pg_stat_statements.blk_write_time / 1000.0 as block_write_seconds_total
	FROM pg_stat_statements
	JOIN pg_database
		ON pg_database.oid = pg_stat_statements.dbid
	WHERE
		total_time > (
			SELECT percentile_cont(0.1)
			WITHIN GROUP (ORDER BY total_time)
			FROM pg_stat_statements
		)
	ORDER BY seconds_total DESC
	LIMIT 100;`

	pgStatStatementsNewQuery = `SELECT
		pg_get_userbyid(userid) as user,
		pg_database.datname,
		pg_stat_statements.queryid,
		pg_stat_statements.calls as calls_total,
		pg_stat_statements.total_exec_time / 1000.0 as seconds_total,
		pg_stat_statements.rows as rows_total,
		pg_stat_statements.blk_read_time / 1000.0 as block_read_seconds_total,
		pg_stat_statements.blk_write_time / 1000.0 as block_write_seconds_total
	FROM pg_stat_statements
	JOIN pg_database
		ON pg_database.oid = pg_stat_statements.dbid
	WHERE
		total_exec_time > (
			SELECT percentile_cont(0.1)
			WITHIN GROUP (ORDER BY total_exec_time)
			FROM pg_stat_statements
		)
	ORDER BY seconds_total DESC
	LIMIT 100;`
)

func (PGStatStatementsCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	query := pgStatStatementsQuery
	if instance.version.GE(semver.MustParse("13.0.0")) {
		query = pgStatStatementsNewQuery
	}

	db := instance.getDB()
	rows, err := db.QueryContext(ctx, query)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var user, datname, queryid sql.NullString
		var callsTotal, rowsTotal sql.NullInt64
		var secondsTotal, blockReadSecondsTotal, blockWriteSecondsTotal sql.NullFloat64

		if err := rows.Scan(&user, &datname, &queryid, &callsTotal, &secondsTotal, &rowsTotal, &blockReadSecondsTotal, &blockWriteSecondsTotal); err != nil {
			return err
		}

		userLabel := "unknown"
		if user.Valid {
			userLabel = user.String
		}
		datnameLabel := "unknown"
		if datname.Valid {
			datnameLabel = datname.String
		}
		queryidLabel := "unknown"
		if queryid.Valid {
			queryidLabel = queryid.String
		}

		callsTotalMetric := 0.0
		if callsTotal.Valid {
			callsTotalMetric = float64(callsTotal.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statStatementsCallsTotal,
			prometheus.CounterValue,
			callsTotalMetric,
			userLabel, datnameLabel, queryidLabel,
		)

		secondsTotalMetric := 0.0
		if secondsTotal.Valid {
			secondsTotalMetric = secondsTotal.Float64
		}
		ch <- prometheus.MustNewConstMetric(
			statStatementsSecondsTotal,
			prometheus.CounterValue,
			secondsTotalMetric,
			userLabel, datnameLabel, queryidLabel,
		)

		rowsTotalMetric := 0.0
		if rowsTotal.Valid {
			rowsTotalMetric = float64(rowsTotal.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statStatementsRowsTotal,
			prometheus.CounterValue,
			rowsTotalMetric,
			userLabel, datnameLabel, queryidLabel,
		)

		blockReadSecondsTotalMetric := 0.0
		if blockReadSecondsTotal.Valid {
			blockReadSecondsTotalMetric = blockReadSecondsTotal.Float64
		}
		ch <- prometheus.MustNewConstMetric(
			statStatementsBlockReadSecondsTotal,
			prometheus.CounterValue,
			blockReadSecondsTotalMetric,
			userLabel, datnameLabel, queryidLabel,
		)

		blockWriteSecondsTotalMetric := 0.0
		if blockWriteSecondsTotal.Valid {
			blockWriteSecondsTotalMetric = blockWriteSecondsTotal.Float64
		}
		ch <- prometheus.MustNewConstMetric(
			statStatementsBlockWriteSecondsTotal,
			prometheus.CounterValue,
			blockWriteSecondsTotalMetric,
			userLabel, datnameLabel, queryidLabel,
|
||||
)
|
||||
}
|
||||
if err := rows.Err(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
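Note on the registration machinery: init() above leans on helpers that live elsewhere in the collector package and are not part of this hunk. A minimal sketch, inferred only from the call sites in this diff (the names defaultEnabled, defaultDisabled, collectorConfig, Collector, and registerCollector are assumptions, as is the flag naming); in the real exporter, registration also wires up kingpin-style --[no-]collector.<name> flags, which is how a default-disabled collector such as stat_statements would be switched on. With the exporter's pg namespace, a typical derived signal is average statement latency, rate(pg_stat_statements_seconds_total[5m]) / rate(pg_stat_statements_calls_total[5m]).

	// Sketch only; the real definitions also register CLI flags and track
	// per-collector enabled state.
	const (
		defaultEnabled  = true
		defaultDisabled = false
	)

	type collectorConfig struct {
		logger log.Logger
	}

	// Collector is the interface each pg_* collector in this diff satisfies.
	type Collector interface {
		Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error
	}

	var factories = map[string]func(collectorConfig) (Collector, error){}

	func registerCollector(name string, isDefaultEnabled bool, factory func(collectorConfig) (Collector, error)) {
		factories[name] = factory // flag wiring omitted in this sketch
	}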
153 collector/pg_stat_statements_test.go Normal file
@@ -0,0 +1,153 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/blang/semver/v4"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGStatStatementsCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db, version: semver.MustParse("12.0.0")}

	columns := []string{"user", "datname", "queryid", "calls_total", "seconds_total", "rows_total", "block_read_seconds_total", "block_write_seconds_total"}
	rows := sqlmock.NewRows(columns).
		AddRow("postgres", "postgres", 1500, 5, 0.4, 100, 0.1, 0.2)
	mock.ExpectQuery(sanitizeQuery(pgStatStatementsQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatStatementsCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatStatementsCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 5},
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 0.4},
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 100},
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 0.1},
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 0.2},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatStatementsCollectorNull(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db, version: semver.MustParse("13.3.7")}

	columns := []string{"user", "datname", "queryid", "calls_total", "seconds_total", "rows_total", "block_read_seconds_total", "block_write_seconds_total"}
	rows := sqlmock.NewRows(columns).
		AddRow(nil, nil, nil, nil, nil, nil, nil, nil)
	mock.ExpectQuery(sanitizeQuery(pgStatStatementsNewQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatStatementsCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatStatementsCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"user": "unknown", "datname": "unknown", "queryid": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"user": "unknown", "datname": "unknown", "queryid": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"user": "unknown", "datname": "unknown", "queryid": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"user": "unknown", "datname": "unknown", "queryid": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"user": "unknown", "datname": "unknown", "queryid": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatStatementsCollectorNewPG(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db, version: semver.MustParse("13.3.7")}

	columns := []string{"user", "datname", "queryid", "calls_total", "seconds_total", "rows_total", "block_read_seconds_total", "block_write_seconds_total"}
	rows := sqlmock.NewRows(columns).
		AddRow("postgres", "postgres", 1500, 5, 0.4, 100, 0.1, 0.2)
	mock.ExpectQuery(sanitizeQuery(pgStatStatementsNewQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatStatementsCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatStatementsCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 5},
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 0.4},
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 100},
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 0.1},
		{labels: labelMap{"user": "postgres", "datname": "postgres", "queryid": "1500"}, metricType: dto.MetricType_COUNTER, value: 0.2},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
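These tests pass raw SQL through a sanitizeQuery helper before handing it to mock.ExpectQuery, because go-sqlmock treats the expectation string as a regular expression. The helper is defined elsewhere in the package; a plausible minimal sketch, assuming it only normalizes whitespace and escapes regexp metacharacters (the real helper may differ in detail):

	import (
		"regexp"
		"strings"
	)

	// sanitizeQuery collapses runs of whitespace and quotes regexp
	// metacharacters so a literal SQL string can serve as a sqlmock
	// expectation pattern. Sketch only.
	func sanitizeQuery(q string) string {
		q = strings.Join(strings.Fields(q), " ")
		return regexp.QuoteMeta(q)
	}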
446 collector/pg_stat_user_tables.go Normal file
@@ -0,0 +1,446 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
)

const userTableSubsystem = "stat_user_tables"

func init() {
	registerCollector(userTableSubsystem, defaultEnabled, NewPGStatUserTablesCollector)
}

type PGStatUserTablesCollector struct {
	log log.Logger
}

func NewPGStatUserTablesCollector(config collectorConfig) (Collector, error) {
	return &PGStatUserTablesCollector{log: config.logger}, nil
}

var (
	statUserTablesSeqScan = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "seq_scan"),
		"Number of sequential scans initiated on this table",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesSeqTupRead = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "seq_tup_read"),
		"Number of live rows fetched by sequential scans",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesIdxScan = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "idx_scan"),
		"Number of index scans initiated on this table",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesIdxTupFetch = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "idx_tup_fetch"),
		"Number of live rows fetched by index scans",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesNTupIns = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "n_tup_ins"),
		"Number of rows inserted",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesNTupUpd = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "n_tup_upd"),
		"Number of rows updated",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesNTupDel = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "n_tup_del"),
		"Number of rows deleted",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesNTupHotUpd = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "n_tup_hot_upd"),
		"Number of rows HOT updated (i.e., with no separate index update required)",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesNLiveTup = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "n_live_tup"),
		"Estimated number of live rows",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesNDeadTup = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "n_dead_tup"),
		"Estimated number of dead rows",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesNModSinceAnalyze = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "n_mod_since_analyze"),
		"Estimated number of rows changed since last analyze",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesLastVacuum = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "last_vacuum"),
		"Last time at which this table was manually vacuumed (not counting VACUUM FULL)",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesLastAutovacuum = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "last_autovacuum"),
		"Last time at which this table was vacuumed by the autovacuum daemon",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesLastAnalyze = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "last_analyze"),
		"Last time at which this table was manually analyzed",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesLastAutoanalyze = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "last_autoanalyze"),
		"Last time at which this table was analyzed by the autovacuum daemon",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesVacuumCount = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "vacuum_count"),
		"Number of times this table has been manually vacuumed (not counting VACUUM FULL)",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesAutovacuumCount = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "autovacuum_count"),
		"Number of times this table has been vacuumed by the autovacuum daemon",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesAnalyzeCount = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "analyze_count"),
		"Number of times this table has been manually analyzed",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesAutoanalyzeCount = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "autoanalyze_count"),
		"Number of times this table has been analyzed by the autovacuum daemon",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statUserTablesTotalSize = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, userTableSubsystem, "size_bytes"),
		"Total disk space used by this table, in bytes, including all indexes and TOAST data",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)

	statUserTablesQuery = `SELECT
		current_database() datname,
		schemaname,
		relname,
		seq_scan,
		seq_tup_read,
		idx_scan,
		idx_tup_fetch,
		n_tup_ins,
		n_tup_upd,
		n_tup_del,
		n_tup_hot_upd,
		n_live_tup,
		n_dead_tup,
		n_mod_since_analyze,
		COALESCE(last_vacuum, '1970-01-01Z') as last_vacuum,
		COALESCE(last_autovacuum, '1970-01-01Z') as last_autovacuum,
		COALESCE(last_analyze, '1970-01-01Z') as last_analyze,
		COALESCE(last_autoanalyze, '1970-01-01Z') as last_autoanalyze,
		vacuum_count,
		autovacuum_count,
		analyze_count,
		autoanalyze_count,
		pg_total_relation_size(relid) as total_size
	FROM
		pg_stat_user_tables`
)

func (c *PGStatUserTablesCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	rows, err := db.QueryContext(ctx, statUserTablesQuery)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var datname, schemaname, relname sql.NullString
		var seqScan, seqTupRead, idxScan, idxTupFetch, nTupIns, nTupUpd, nTupDel, nTupHotUpd, nLiveTup, nDeadTup,
			nModSinceAnalyze, vacuumCount, autovacuumCount, analyzeCount, autoanalyzeCount, totalSize sql.NullInt64
		var lastVacuum, lastAutovacuum, lastAnalyze, lastAutoanalyze sql.NullTime

		if err := rows.Scan(&datname, &schemaname, &relname, &seqScan, &seqTupRead, &idxScan, &idxTupFetch, &nTupIns, &nTupUpd, &nTupDel, &nTupHotUpd, &nLiveTup, &nDeadTup, &nModSinceAnalyze, &lastVacuum, &lastAutovacuum, &lastAnalyze, &lastAutoanalyze, &vacuumCount, &autovacuumCount, &analyzeCount, &autoanalyzeCount, &totalSize); err != nil {
			return err
		}

		datnameLabel := "unknown"
		if datname.Valid {
			datnameLabel = datname.String
		}
		schemanameLabel := "unknown"
		if schemaname.Valid {
			schemanameLabel = schemaname.String
		}
		relnameLabel := "unknown"
		if relname.Valid {
			relnameLabel = relname.String
		}

		seqScanMetric := 0.0
		if seqScan.Valid {
			seqScanMetric = float64(seqScan.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesSeqScan,
			prometheus.CounterValue,
			seqScanMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		seqTupReadMetric := 0.0
		if seqTupRead.Valid {
			seqTupReadMetric = float64(seqTupRead.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesSeqTupRead,
			prometheus.CounterValue,
			seqTupReadMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		idxScanMetric := 0.0
		if idxScan.Valid {
			idxScanMetric = float64(idxScan.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesIdxScan,
			prometheus.CounterValue,
			idxScanMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		idxTupFetchMetric := 0.0
		if idxTupFetch.Valid {
			idxTupFetchMetric = float64(idxTupFetch.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesIdxTupFetch,
			prometheus.CounterValue,
			idxTupFetchMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		nTupInsMetric := 0.0
		if nTupIns.Valid {
			nTupInsMetric = float64(nTupIns.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesNTupIns,
			prometheus.CounterValue,
			nTupInsMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		nTupUpdMetric := 0.0
		if nTupUpd.Valid {
			nTupUpdMetric = float64(nTupUpd.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesNTupUpd,
			prometheus.CounterValue,
			nTupUpdMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		nTupDelMetric := 0.0
		if nTupDel.Valid {
			nTupDelMetric = float64(nTupDel.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesNTupDel,
			prometheus.CounterValue,
			nTupDelMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		nTupHotUpdMetric := 0.0
		if nTupHotUpd.Valid {
			nTupHotUpdMetric = float64(nTupHotUpd.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesNTupHotUpd,
			prometheus.CounterValue,
			nTupHotUpdMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		nLiveTupMetric := 0.0
		if nLiveTup.Valid {
			nLiveTupMetric = float64(nLiveTup.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesNLiveTup,
			prometheus.GaugeValue,
			nLiveTupMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		nDeadTupMetric := 0.0
		if nDeadTup.Valid {
			nDeadTupMetric = float64(nDeadTup.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesNDeadTup,
			prometheus.GaugeValue,
			nDeadTupMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		nModSinceAnalyzeMetric := 0.0
		if nModSinceAnalyze.Valid {
			nModSinceAnalyzeMetric = float64(nModSinceAnalyze.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesNModSinceAnalyze,
			prometheus.GaugeValue,
			nModSinceAnalyzeMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		lastVacuumMetric := 0.0
		if lastVacuum.Valid {
			lastVacuumMetric = float64(lastVacuum.Time.Unix())
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesLastVacuum,
			prometheus.GaugeValue,
			lastVacuumMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		lastAutovacuumMetric := 0.0
		if lastAutovacuum.Valid {
			lastAutovacuumMetric = float64(lastAutovacuum.Time.Unix())
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesLastAutovacuum,
			prometheus.GaugeValue,
			lastAutovacuumMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		lastAnalyzeMetric := 0.0
		if lastAnalyze.Valid {
			lastAnalyzeMetric = float64(lastAnalyze.Time.Unix())
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesLastAnalyze,
			prometheus.GaugeValue,
			lastAnalyzeMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		lastAutoanalyzeMetric := 0.0
		if lastAutoanalyze.Valid {
			lastAutoanalyzeMetric = float64(lastAutoanalyze.Time.Unix())
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesLastAutoanalyze,
			prometheus.GaugeValue,
			lastAutoanalyzeMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		vacuumCountMetric := 0.0
		if vacuumCount.Valid {
			vacuumCountMetric = float64(vacuumCount.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesVacuumCount,
			prometheus.CounterValue,
			vacuumCountMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		autovacuumCountMetric := 0.0
		if autovacuumCount.Valid {
			autovacuumCountMetric = float64(autovacuumCount.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesAutovacuumCount,
			prometheus.CounterValue,
			autovacuumCountMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		analyzeCountMetric := 0.0
		if analyzeCount.Valid {
			analyzeCountMetric = float64(analyzeCount.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesAnalyzeCount,
			prometheus.CounterValue,
			analyzeCountMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		autoanalyzeCountMetric := 0.0
		if autoanalyzeCount.Valid {
			autoanalyzeCountMetric = float64(autoanalyzeCount.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesAutoanalyzeCount,
			prometheus.CounterValue,
			autoanalyzeCountMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		totalSizeMetric := 0.0
		if totalSize.Valid {
			totalSizeMetric = float64(totalSize.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statUserTablesTotalSize,
			prometheus.GaugeValue,
			totalSizeMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)
	}

	if err := rows.Err(); err != nil {
		return err
	}
	return nil
}
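A note on the timestamp convention above: the COALESCE(..., '1970-01-01Z') calls pin NULL vacuum/analyze timestamps to the Unix epoch, so the corresponding gauges read 0 for "never" and otherwise export seconds since 1970-01-01 UTC; a dashboard can then compute staleness as time() - pg_stat_user_tables_last_autovacuum (metric name assumes the exporter's pg namespace). A small sketch of the resulting value mapping:

	// Sketch: NULL is COALESCEd to the epoch in SQL, scans as a valid
	// sql.NullTime, and exports as exactly 0 rather than being skipped.
	func lastVacuumGaugeValue(lastVacuum sql.NullTime) float64 {
		if !lastVacuum.Valid {
			return 0 // defensive: NULL should not survive the COALESCE
		}
		return float64(lastVacuum.Time.Unix()) // epoch sentinel -> 0
	}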
243 collector/pg_stat_user_tables_test.go Normal file
@@ -0,0 +1,243 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"
	"time"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGStatUserTablesCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	lastVacuumTime, err := time.Parse("2006-01-02Z", "2023-06-02Z")
	if err != nil {
		t.Fatalf("Error parsing vacuum time: %s", err)
	}
	lastAutoVacuumTime, err := time.Parse("2006-01-02Z", "2023-06-03Z")
	if err != nil {
		t.Fatalf("Error parsing autovacuum time: %s", err)
	}
	lastAnalyzeTime, err := time.Parse("2006-01-02Z", "2023-06-04Z")
	if err != nil {
		t.Fatalf("Error parsing analyze time: %s", err)
	}
	lastAutoAnalyzeTime, err := time.Parse("2006-01-02Z", "2023-06-05Z")
	if err != nil {
		t.Fatalf("Error parsing autoanalyze time: %s", err)
	}

	columns := []string{
		"datname",
		"schemaname",
		"relname",
		"seq_scan",
		"seq_tup_read",
		"idx_scan",
		"idx_tup_fetch",
		"n_tup_ins",
		"n_tup_upd",
		"n_tup_del",
		"n_tup_hot_upd",
		"n_live_tup",
		"n_dead_tup",
		"n_mod_since_analyze",
		"last_vacuum",
		"last_autovacuum",
		"last_analyze",
		"last_autoanalyze",
		"vacuum_count",
		"autovacuum_count",
		"analyze_count",
		"autoanalyze_count",
		"total_size"}
	rows := sqlmock.NewRows(columns).
		AddRow("postgres",
			"public",
			"a_table",
			1,
			2,
			3,
			4,
			5,
			6,
			7,
			8,
			9,
			10,
			0,
			lastVacuumTime,
			lastAutoVacuumTime,
			lastAnalyzeTime,
			lastAutoAnalyzeTime,
			11,
			12,
			13,
			14,
			15)
	mock.ExpectQuery(sanitizeQuery(statUserTablesQuery)).WillReturnRows(rows)
	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatUserTablesCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatUserTablesCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 1},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 2},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 3},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 4},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 5},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 6},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 7},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 8},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_GAUGE, value: 9},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_GAUGE, value: 10},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_GAUGE, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_GAUGE, value: 1685664000},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_GAUGE, value: 1685750400},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_GAUGE, value: 1685836800},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_GAUGE, value: 1685923200},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 11},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 12},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 13},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 14},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_GAUGE, value: 15},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatUserTablesCollectorNullValues(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{
		"datname",
		"schemaname",
		"relname",
		"seq_scan",
		"seq_tup_read",
		"idx_scan",
		"idx_tup_fetch",
		"n_tup_ins",
		"n_tup_upd",
		"n_tup_del",
		"n_tup_hot_upd",
		"n_live_tup",
		"n_dead_tup",
		"n_mod_since_analyze",
		"last_vacuum",
		"last_autovacuum",
		"last_analyze",
		"last_autoanalyze",
		"vacuum_count",
		"autovacuum_count",
		"analyze_count",
		"autoanalyze_count",
		"total_size"}
	rows := sqlmock.NewRows(columns).
		AddRow("postgres",
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil,
			nil)
	mock.ExpectQuery(sanitizeQuery(statUserTablesQuery)).WillReturnRows(rows)
	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatUserTablesCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatUserTablesCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_GAUGE, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_GAUGE, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_GAUGE, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_GAUGE, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_GAUGE, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_GAUGE, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_GAUGE, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "postgres", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_GAUGE, value: 0},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
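All of these tests share labelMap, MetricResult, and readMetric helpers defined elsewhere in the package; the shapes below are inferred from usage here, not copied from the repo. Note also that the metrics channel is unbuffered, so the expected slice must enumerate every metric the collector emits; otherwise the collector goroutine stays blocked on its next send and the trailing metric goes unverified (which is why the size_bytes entries appear at the end of both expected lists above).

	// Hedged sketch of the shared test helpers.
	type labelMap map[string]string

	type MetricResult struct {
		labels     labelMap
		value      float64
		metricType dto.MetricType
	}

	// readMetric flattens a prometheus.Metric into its protobuf form and
	// extracts labels, value, and type for comparison.
	func readMetric(m prometheus.Metric) MetricResult {
		pb := &dto.Metric{}
		if err := m.Write(pb); err != nil {
			panic(err)
		}
		labels := make(labelMap, len(pb.Label))
		for _, l := range pb.Label {
			labels[l.GetName()] = l.GetValue()
		}
		if pb.Gauge != nil {
			return MetricResult{labels: labels, value: pb.GetGauge().GetValue(), metricType: dto.MetricType_GAUGE}
		}
		if pb.Counter != nil {
			return MetricResult{labels: labels, value: pb.GetCounter().GetValue(), metricType: dto.MetricType_COUNTER}
		}
		panic("unsupported metric type in test helper sketch")
	}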
271 collector/pg_stat_walreceiver.go Normal file
@@ -0,0 +1,271 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"database/sql"
	"fmt"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/prometheus/client_golang/prometheus"
)

func init() {
	registerCollector(statWalReceiverSubsystem, defaultDisabled, NewPGStatWalReceiverCollector)
}

type PGStatWalReceiverCollector struct {
	log log.Logger
}

const statWalReceiverSubsystem = "stat_wal_receiver"

func NewPGStatWalReceiverCollector(config collectorConfig) (Collector, error) {
	return &PGStatWalReceiverCollector{log: config.logger}, nil
}

var (
	labelCats                      = []string{"upstream_host", "slot_name", "status"}
	statWalReceiverReceiveStartLsn = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "receive_start_lsn"),
		"First write-ahead log location used when WAL receiver is started, represented as a decimal",
		labelCats,
		prometheus.Labels{},
	)
	statWalReceiverReceiveStartTli = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "receive_start_tli"),
		"First timeline number used when WAL receiver is started",
		labelCats,
		prometheus.Labels{},
	)
	statWalReceiverFlushedLSN = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "flushed_lsn"),
		"Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started, represented as a decimal",
		labelCats,
		prometheus.Labels{},
	)
	statWalReceiverReceivedTli = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "received_tli"),
		"Timeline number of last write-ahead log location received and flushed to disk",
		labelCats,
		prometheus.Labels{},
	)
	statWalReceiverLastMsgSendTime = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "last_msg_send_time"),
		"Send time of last message received from origin WAL sender",
		labelCats,
		prometheus.Labels{},
	)
	statWalReceiverLastMsgReceiptTime = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "last_msg_receipt_time"),
		"Receipt time of last message received from origin WAL sender",
		labelCats,
		prometheus.Labels{},
	)
	statWalReceiverLatestEndLsn = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "latest_end_lsn"),
		"Last write-ahead log location reported to origin WAL sender, as an integer",
		labelCats,
		prometheus.Labels{},
	)
	statWalReceiverLatestEndTime = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "latest_end_time"),
		"Time of last write-ahead log location reported to origin WAL sender",
		labelCats,
		prometheus.Labels{},
	)
	statWalReceiverUpstreamNode = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statWalReceiverSubsystem, "upstream_node"),
		"Node ID of the upstream node",
		labelCats,
		prometheus.Labels{},
	)

	pgStatWalColumnQuery = `
	SELECT
		column_name
	FROM information_schema.columns
	WHERE
		table_name = 'pg_stat_wal_receiver' and
		column_name = 'flushed_lsn'
	`

	pgStatWalReceiverQueryTemplate = `
	SELECT
		trim(both '''' from substring(conninfo from 'host=([^ ]*)')) as upstream_host,
		slot_name,
		status,
		(receive_start_lsn - '0/0') %% (2^52)::bigint as receive_start_lsn,
	%s
		receive_start_tli,
		received_tli,
		extract(epoch from last_msg_send_time) as last_msg_send_time,
		extract(epoch from last_msg_receipt_time) as last_msg_receipt_time,
		(latest_end_lsn - '0/0') %% (2^52)::bigint as latest_end_lsn,
		extract(epoch from latest_end_time) as latest_end_time,
		substring(slot_name from 'repmgr_slot_([0-9]*)') as upstream_node
	FROM pg_catalog.pg_stat_wal_receiver
	`
)

func (c *PGStatWalReceiverCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	hasFlushedLSNRows, err := db.QueryContext(ctx, pgStatWalColumnQuery)
	if err != nil {
		return err
	}

	hasFlushedLSN := hasFlushedLSNRows.Next()
	var query string
	if hasFlushedLSN {
		query = fmt.Sprintf(pgStatWalReceiverQueryTemplate, "(flushed_lsn - '0/0') % (2^52)::bigint as flushed_lsn,\n")
	} else {
		query = fmt.Sprintf(pgStatWalReceiverQueryTemplate, "")
	}

	hasFlushedLSNRows.Close()

	rows, err := db.QueryContext(ctx, query)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var upstreamHost, slotName, status sql.NullString
		var receiveStartLsn, receiveStartTli, flushedLsn, receivedTli, latestEndLsn, upstreamNode sql.NullInt64
		var lastMsgSendTime, lastMsgReceiptTime, latestEndTime sql.NullFloat64

		if hasFlushedLSN {
			if err := rows.Scan(&upstreamHost, &slotName, &status, &receiveStartLsn, &receiveStartTli, &flushedLsn, &receivedTli, &lastMsgSendTime, &lastMsgReceiptTime, &latestEndLsn, &latestEndTime, &upstreamNode); err != nil {
				return err
			}
		} else {
			if err := rows.Scan(&upstreamHost, &slotName, &status, &receiveStartLsn, &receiveStartTli, &receivedTli, &lastMsgSendTime, &lastMsgReceiptTime, &latestEndLsn, &latestEndTime, &upstreamNode); err != nil {
				return err
			}
		}
		if !upstreamHost.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because upstream_host is null")
			continue
		}

		if !slotName.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because slot_name is null")
			continue
		}

		if !status.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because status is null")
			continue
		}
		labels := []string{upstreamHost.String, slotName.String, status.String}

		if !receiveStartLsn.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because receive_start_lsn is null")
			continue
		}
		if !receiveStartTli.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because receive_start_tli is null")
			continue
		}
		if hasFlushedLSN && !flushedLsn.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because flushed_lsn is null")
			continue
		}
		if !receivedTli.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because received_tli is null")
			continue
		}
		if !lastMsgSendTime.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because last_msg_send_time is null")
			continue
		}
		if !lastMsgReceiptTime.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because last_msg_receipt_time is null")
			continue
		}
		if !latestEndLsn.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because latest_end_lsn is null")
			continue
		}
		if !latestEndTime.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because latest_end_time is null")
			continue
		}
		if !upstreamNode.Valid {
			level.Debug(c.log).Log("msg", "Skipping wal receiver stats because upstream_node is null")
			continue
		}
		ch <- prometheus.MustNewConstMetric(
			statWalReceiverReceiveStartLsn,
			prometheus.CounterValue,
			float64(receiveStartLsn.Int64),
			labels...)

		ch <- prometheus.MustNewConstMetric(
			statWalReceiverReceiveStartTli,
			prometheus.GaugeValue,
			float64(receiveStartTli.Int64),
			labels...)

		if hasFlushedLSN {
			ch <- prometheus.MustNewConstMetric(
				statWalReceiverFlushedLSN,
				prometheus.CounterValue,
				float64(flushedLsn.Int64),
				labels...)
		}

		ch <- prometheus.MustNewConstMetric(
			statWalReceiverReceivedTli,
			prometheus.GaugeValue,
			float64(receivedTli.Int64),
			labels...)

		ch <- prometheus.MustNewConstMetric(
			statWalReceiverLastMsgSendTime,
			prometheus.CounterValue,
			lastMsgSendTime.Float64,
			labels...)

		ch <- prometheus.MustNewConstMetric(
			statWalReceiverLastMsgReceiptTime,
			prometheus.CounterValue,
			lastMsgReceiptTime.Float64,
			labels...)

		ch <- prometheus.MustNewConstMetric(
			statWalReceiverLatestEndLsn,
			prometheus.CounterValue,
			float64(latestEndLsn.Int64),
			labels...)

		ch <- prometheus.MustNewConstMetric(
			statWalReceiverLatestEndTime,
			prometheus.CounterValue,
			latestEndTime.Float64,
			labels...)

		ch <- prometheus.MustNewConstMetric(
			statWalReceiverUpstreamNode,
			prometheus.GaugeValue,
			float64(upstreamNode.Int64),
			labels...)
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return nil
}
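Two details in this collector are worth spelling out. First, the information_schema probe exists because the flushed_lsn column only appears on newer servers (PostgreSQL 13 renamed pg_stat_wal_receiver.received_lsn to flushed_lsn), so the SELECT is assembled from a template. Second, the template doubles its modulo operators as %% because it passes through fmt.Sprintf, which consumes the %s and collapses each %% to a literal %; the resulting "% (2^52)::bigint" arithmetic keeps the LSN byte offset within the range a float64 metric value can represent exactly. A minimal, standalone illustration of the escaping (sketch only, mirroring how the template is expanded above):

	package main

	import "fmt"

	func main() {
		// %s is substituted, %% survives as a literal %.
		const tmpl = "(receive_start_lsn - '0/0') %% (2^52)::bigint,%s received_tli"
		fmt.Println(fmt.Sprintf(tmpl, " flushed_lsn,"))
		// Output: (receive_start_lsn - '0/0') % (2^52)::bigint, flushed_lsn, received_tli
	}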
186 collector/pg_stat_walreceiver_test.go Normal file
@@ -0,0 +1,186 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"fmt"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

var queryWithFlushedLSN = fmt.Sprintf(pgStatWalReceiverQueryTemplate, "(flushed_lsn - '0/0') % (2^52)::bigint as flushed_lsn,\n")
var queryWithNoFlushedLSN = fmt.Sprintf(pgStatWalReceiverQueryTemplate, "")

func TestPGStatWalReceiverCollectorWithFlushedLSN(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}
	infoSchemaColumns := []string{
		"column_name",
	}

	infoSchemaRows := sqlmock.NewRows(infoSchemaColumns).
		AddRow(
			"flushed_lsn",
		)

	mock.ExpectQuery(sanitizeQuery(pgStatWalColumnQuery)).WillReturnRows(infoSchemaRows)

	columns := []string{
		"upstream_host",
		"slot_name",
		"status",
		"receive_start_lsn",
		"receive_start_tli",
		"flushed_lsn",
		"received_tli",
		"last_msg_send_time",
		"last_msg_receipt_time",
		"latest_end_lsn",
		"latest_end_time",
		"upstream_node",
	}
	rows := sqlmock.NewRows(columns).
		AddRow(
			"foo",
			"bar",
			"stopping",
			1200668684563608,
			1687321285,
			1200668684563609,
			1687321280,
			1687321275,
			1687321276,
			1200668684563610,
			1687321277,
			5,
		)

	mock.ExpectQuery(sanitizeQuery(queryWithFlushedLSN)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatWalReceiverCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatWalReceiverCollector.Update: %s", err)
		}
	}()
	expected := []MetricResult{
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 1200668684563608, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 1687321285, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 1200668684563609, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 1687321280, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 1687321275, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 1687321276, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 1200668684563610, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 1687321277, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "stopping"}, value: 5, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatWalReceiverCollectorWithNoFlushedLSN(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}
	infoSchemaColumns := []string{
		"column_name",
	}

	infoSchemaRows := sqlmock.NewRows(infoSchemaColumns)

	mock.ExpectQuery(sanitizeQuery(pgStatWalColumnQuery)).WillReturnRows(infoSchemaRows)

	columns := []string{
		"upstream_host",
		"slot_name",
		"status",
		"receive_start_lsn",
		"receive_start_tli",
		"received_tli",
		"last_msg_send_time",
		"last_msg_receipt_time",
		"latest_end_lsn",
		"latest_end_time",
		"upstream_node",
	}
	rows := sqlmock.NewRows(columns).
		AddRow(
			"foo",
			"bar",
			"starting",
			1200668684563608,
			1687321285,
			1687321280,
			1687321275,
			1687321276,
			1200668684563610,
			1687321277,
			5,
		)
	mock.ExpectQuery(sanitizeQuery(queryWithNoFlushedLSN)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatWalReceiverCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatWalReceiverCollector.Update: %s", err)
		}
	}()
	expected := []MetricResult{
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "starting"}, value: 1200668684563608, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "starting"}, value: 1687321285, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "starting"}, value: 1687321280, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "starting"}, value: 1687321275, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "starting"}, value: 1687321276, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "starting"}, value: 1200668684563610, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "starting"}, value: 1687321277, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"upstream_host": "foo", "slot_name": "bar", "status": "starting"}, value: 5, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
118 collector/pg_statio_user_indexes.go Normal file
@@ -0,0 +1,118 @@
|
||||
// Copyright 2023 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
package collector
|
||||
|
||||
import (
|
||||
"context"
|
||||
"database/sql"
|
||||
|
||||
"github.com/go-kit/log"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
)
|
||||
|
||||
func init() {
|
||||
registerCollector(statioUserIndexesSubsystem, defaultDisabled, NewPGStatioUserIndexesCollector)
|
||||
}
|
||||
|
||||
type PGStatioUserIndexesCollector struct {
|
||||
log log.Logger
|
||||
}
|
||||
|
||||
const statioUserIndexesSubsystem = "statio_user_indexes"
|
||||
|
||||
func NewPGStatioUserIndexesCollector(config collectorConfig) (Collector, error) {
|
||||
return &PGStatioUserIndexesCollector{log: config.logger}, nil
|
||||
}
|
||||
|
||||
var (
|
||||
statioUserIndexesIdxBlksRead = prometheus.NewDesc(
|
||||
prometheus.BuildFQName(namespace, statioUserIndexesSubsystem, "idx_blks_read_total"),
|
||||
"Number of disk blocks read from this index",
|
||||
[]string{"schemaname", "relname", "indexrelname"},
|
||||
prometheus.Labels{},
|
||||
)
|
||||
statioUserIndexesIdxBlksHit = prometheus.NewDesc(
|
||||
prometheus.BuildFQName(namespace, statioUserIndexesSubsystem, "idx_blks_hit_total"),
|
||||
"Number of buffer hits in this index",
|
||||
[]string{"schemaname", "relname", "indexrelname"},
|
||||
prometheus.Labels{},
|
||||
)
|
||||
|
||||
statioUserIndexesQuery = `
|
||||
SELECT
|
||||
schemaname,
|
||||
relname,
|
||||
indexrelname,
|
||||
idx_blks_read,
|
||||
idx_blks_hit
|
||||
FROM pg_statio_user_indexes
|
||||
`
|
||||
)
|
||||
|
||||
func (c *PGStatioUserIndexesCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
|
||||
db := instance.getDB()
|
||||
rows, err := db.QueryContext(ctx,
|
||||
statioUserIndexesQuery)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var schemaname, relname, indexrelname sql.NullString
|
||||
var idxBlksRead, idxBlksHit sql.NullFloat64
|
||||
|
||||
if err := rows.Scan(&schemaname, &relname, &indexrelname, &idxBlksRead, &idxBlksHit); err != nil {
|
||||
return err
|
||||
}
|
||||
schemanameLabel := "unknown"
|
||||
if schemaname.Valid {
|
||||
schemanameLabel = schemaname.String
|
||||
}
|
||||
relnameLabel := "unknown"
|
||||
if relname.Valid {
|
||||
relnameLabel = relname.String
|
||||
}
|
||||
indexrelnameLabel := "unknown"
|
||||
if indexrelname.Valid {
|
||||
indexrelnameLabel = indexrelname.String
|
||||
}
|
||||
labels := []string{schemanameLabel, relnameLabel, indexrelnameLabel}
|
||||
|
||||
idxBlksReadMetric := 0.0
|
||||
if idxBlksRead.Valid {
|
||||
idxBlksReadMetric = idxBlksRead.Float64
|
||||
}
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
statioUserIndexesIdxBlksRead,
|
||||
prometheus.CounterValue,
|
||||
idxBlksReadMetric,
|
||||
labels...,
|
||||
)
|
||||
|
||||
idxBlksHitMetric := 0.0
|
||||
if idxBlksHit.Valid {
|
||||
idxBlksHitMetric = idxBlksHit.Float64
|
||||
}
|
||||
ch <- prometheus.MustNewConstMetric(
|
||||
statioUserIndexesIdxBlksHit,
|
||||
prometheus.CounterValue,
|
||||
idxBlksHitMetric,
|
||||
labels...,
|
||||
)
|
||||
}
|
||||
if err := rows.Err(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
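For a single index row, a scrape of this collector would render roughly as follows in the Prometheus exposition format. This is a sketch: it assumes the package's namespace constant is "pg" and borrows the fixture values from the test below.

# HELP pg_statio_user_indexes_idx_blks_read_total Number of disk blocks read from this index
# TYPE pg_statio_user_indexes_idx_blks_read_total counter
pg_statio_user_indexes_idx_blks_read_total{indexrelname="pgtest_accounts_pkey",relname="pgtest_accounts",schemaname="public"} 8
# HELP pg_statio_user_indexes_idx_blks_hit_total Number of buffer hits in this index
# TYPE pg_statio_user_indexes_idx_blks_hit_total counter
pg_statio_user_indexes_idx_blks_hit_total{indexrelname="pgtest_accounts_pkey",relname="pgtest_accounts",schemaname="public"} 9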
109 collector/pg_statio_user_indexes_test.go Normal file
@@ -0,0 +1,109 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPgStatioUserIndexesCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()
	inst := &instance{db: db}
	columns := []string{
		"schemaname",
		"relname",
		"indexrelname",
		"idx_blks_read",
		"idx_blks_hit",
	}
	rows := sqlmock.NewRows(columns).
		AddRow("public", "pgtest_accounts", "pgtest_accounts_pkey", 8, 9)

	mock.ExpectQuery(sanitizeQuery(statioUserIndexesQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatioUserIndexesCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatioUserIndexesCollector.Update: %s", err)
		}
	}()
	expected := []MetricResult{
		{labels: labelMap{"schemaname": "public", "relname": "pgtest_accounts", "indexrelname": "pgtest_accounts_pkey"}, value: 8, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"schemaname": "public", "relname": "pgtest_accounts", "indexrelname": "pgtest_accounts_pkey"}, value: 9, metricType: dto.MetricType_COUNTER},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPgStatioUserIndexesCollectorNull(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()
	inst := &instance{db: db}
	columns := []string{
		"schemaname",
		"relname",
		"indexrelname",
		"idx_blks_read",
		"idx_blks_hit",
	}
	rows := sqlmock.NewRows(columns).
		AddRow(nil, nil, nil, nil, nil)

	mock.ExpectQuery(sanitizeQuery(statioUserIndexesQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatioUserIndexesCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatioUserIndexesCollector.Update: %s", err)
		}
	}()
	expected := []MetricResult{
		{labels: labelMap{"schemaname": "unknown", "relname": "unknown", "indexrelname": "unknown"}, value: 0, metricType: dto.MetricType_COUNTER},
		{labels: labelMap{"schemaname": "unknown", "relname": "unknown", "indexrelname": "unknown"}, value: 0, metricType: dto.MetricType_COUNTER},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
222 collector/pg_statio_user_tables.go Normal file
@@ -0,0 +1,222 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"database/sql"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
)

const statioUserTableSubsystem = "statio_user_tables"

func init() {
	registerCollector(statioUserTableSubsystem, defaultEnabled, NewPGStatIOUserTablesCollector)
}

type PGStatIOUserTablesCollector struct {
	log log.Logger
}

func NewPGStatIOUserTablesCollector(config collectorConfig) (Collector, error) {
	return &PGStatIOUserTablesCollector{log: config.logger}, nil
}

var (
	statioUserTablesHeapBlksRead = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statioUserTableSubsystem, "heap_blocks_read"),
		"Number of disk blocks read from this table",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statioUserTablesHeapBlksHit = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statioUserTableSubsystem, "heap_blocks_hit"),
		"Number of buffer hits in this table",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statioUserTablesIdxBlksRead = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statioUserTableSubsystem, "idx_blocks_read"),
		"Number of disk blocks read from all indexes on this table",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statioUserTablesIdxBlksHit = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statioUserTableSubsystem, "idx_blocks_hit"),
		"Number of buffer hits in all indexes on this table",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statioUserTablesToastBlksRead = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statioUserTableSubsystem, "toast_blocks_read"),
		"Number of disk blocks read from this table's TOAST table (if any)",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statioUserTablesToastBlksHit = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statioUserTableSubsystem, "toast_blocks_hit"),
		"Number of buffer hits in this table's TOAST table (if any)",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statioUserTablesTidxBlksRead = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statioUserTableSubsystem, "tidx_blocks_read"),
		"Number of disk blocks read from this table's TOAST table indexes (if any)",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)
	statioUserTablesTidxBlksHit = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, statioUserTableSubsystem, "tidx_blocks_hit"),
		"Number of buffer hits in this table's TOAST table indexes (if any)",
		[]string{"datname", "schemaname", "relname"},
		prometheus.Labels{},
	)

	statioUserTablesQuery = `SELECT
		current_database() datname,
		schemaname,
		relname,
		heap_blks_read,
		heap_blks_hit,
		idx_blks_read,
		idx_blks_hit,
		toast_blks_read,
		toast_blks_hit,
		tidx_blks_read,
		tidx_blks_hit
	FROM pg_statio_user_tables`
)

func (PGStatIOUserTablesCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	rows, err := db.QueryContext(ctx,
		statioUserTablesQuery)

	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var datname, schemaname, relname sql.NullString
		var heapBlksRead, heapBlksHit, idxBlksRead, idxBlksHit, toastBlksRead, toastBlksHit, tidxBlksRead, tidxBlksHit sql.NullInt64

		if err := rows.Scan(&datname, &schemaname, &relname, &heapBlksRead, &heapBlksHit, &idxBlksRead, &idxBlksHit, &toastBlksRead, &toastBlksHit, &tidxBlksRead, &tidxBlksHit); err != nil {
			return err
		}
		datnameLabel := "unknown"
		if datname.Valid {
			datnameLabel = datname.String
		}
		schemanameLabel := "unknown"
		if schemaname.Valid {
			schemanameLabel = schemaname.String
		}
		relnameLabel := "unknown"
		if relname.Valid {
			relnameLabel = relname.String
		}

		heapBlksReadMetric := 0.0
		if heapBlksRead.Valid {
			heapBlksReadMetric = float64(heapBlksRead.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statioUserTablesHeapBlksRead,
			prometheus.CounterValue,
			heapBlksReadMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		heapBlksHitMetric := 0.0
		if heapBlksHit.Valid {
			heapBlksHitMetric = float64(heapBlksHit.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statioUserTablesHeapBlksHit,
			prometheus.CounterValue,
			heapBlksHitMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		idxBlksReadMetric := 0.0
		if idxBlksRead.Valid {
			idxBlksReadMetric = float64(idxBlksRead.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statioUserTablesIdxBlksRead,
			prometheus.CounterValue,
			idxBlksReadMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		idxBlksHitMetric := 0.0
		if idxBlksHit.Valid {
			idxBlksHitMetric = float64(idxBlksHit.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statioUserTablesIdxBlksHit,
			prometheus.CounterValue,
			idxBlksHitMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		toastBlksReadMetric := 0.0
		if toastBlksRead.Valid {
			toastBlksReadMetric = float64(toastBlksRead.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statioUserTablesToastBlksRead,
			prometheus.CounterValue,
			toastBlksReadMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		toastBlksHitMetric := 0.0
		if toastBlksHit.Valid {
			toastBlksHitMetric = float64(toastBlksHit.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statioUserTablesToastBlksHit,
			prometheus.CounterValue,
			toastBlksHitMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		tidxBlksReadMetric := 0.0
		if tidxBlksRead.Valid {
			tidxBlksReadMetric = float64(tidxBlksRead.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statioUserTablesTidxBlksRead,
			prometheus.CounterValue,
			tidxBlksReadMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)

		tidxBlksHitMetric := 0.0
		if tidxBlksHit.Valid {
			tidxBlksHitMetric = float64(tidxBlksHit.Int64)
		}
		ch <- prometheus.MustNewConstMetric(
			statioUserTablesTidxBlksHit,
			prometheus.CounterValue,
			tidxBlksHitMetric,
			datnameLabel, schemanameLabel, relnameLabel,
		)
	}
	return rows.Err()
}
157 collector/pg_statio_user_tables_test.go Normal file
@@ -0,0 +1,157 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGStatIOUserTablesCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{
		"datname",
		"schemaname",
		"relname",
		"heap_blks_read",
		"heap_blks_hit",
		"idx_blks_read",
		"idx_blks_hit",
		"toast_blks_read",
		"toast_blks_hit",
		"tidx_blks_read",
		"tidx_blks_hit",
	}
	rows := sqlmock.NewRows(columns).
		AddRow("postgres", "public", "a_table", 1, 2, 3, 4, 5, 6, 7, 8)
	mock.ExpectQuery(sanitizeQuery(statioUserTablesQuery)).WillReturnRows(rows)
	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatIOUserTablesCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatIOUserTablesCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 1},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 2},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 3},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 4},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 5},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 6},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 7},
		{labels: labelMap{"datname": "postgres", "schemaname": "public", "relname": "a_table"}, metricType: dto.MetricType_COUNTER, value: 8},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

func TestPGStatIOUserTablesCollectorNullValues(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{
		"datname",
		"schemaname",
		"relname",
		"heap_blks_read",
		"heap_blks_hit",
		"idx_blks_read",
		"idx_blks_hit",
		"toast_blks_read",
		"toast_blks_hit",
		"tidx_blks_read",
		"tidx_blks_hit",
	}
	rows := sqlmock.NewRows(columns).
		AddRow(nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil)
	mock.ExpectQuery(sanitizeQuery(statioUserTablesQuery)).WillReturnRows(rows)
	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGStatIOUserTablesCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGStatIOUserTablesCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{"datname": "unknown", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "unknown", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "unknown", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "unknown", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "unknown", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "unknown", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "unknown", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
		{labels: labelMap{"datname": "unknown", "schemaname": "unknown", "relname": "unknown"}, metricType: dto.MetricType_COUNTER, value: 0},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
84 collector/pg_wal.go Normal file
@@ -0,0 +1,84 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"

	"github.com/prometheus/client_golang/prometheus"
)

const walSubsystem = "wal"

func init() {
	registerCollector(walSubsystem, defaultEnabled, NewPGWALCollector)
}

type PGWALCollector struct{}

func NewPGWALCollector(config collectorConfig) (Collector, error) {
	return &PGWALCollector{}, nil
}

var (
	pgWALSegments = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			walSubsystem,
			"segments",
		),
		"Number of WAL segments",
		[]string{}, nil,
	)
	pgWALSize = prometheus.NewDesc(
		prometheus.BuildFQName(
			namespace,
			walSubsystem,
			"size_bytes",
		),
		"Total size of WAL segments",
		[]string{}, nil,
	)

	pgWALQuery = `
	SELECT
		COUNT(*) AS segments,
		SUM(size) AS size
	FROM pg_ls_waldir()
	WHERE name ~ '^[0-9A-F]{24}$'`
)

func (c PGWALCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()
	row := db.QueryRowContext(ctx,
		pgWALQuery,
	)

	var segments uint64
	var size uint64
	err := row.Scan(&segments, &size)
	if err != nil {
		return err
	}
	ch <- prometheus.MustNewConstMetric(
		pgWALSegments,
		prometheus.GaugeValue, float64(segments),
	)
	ch <- prometheus.MustNewConstMetric(
		pgWALSize,
		prometheus.GaugeValue, float64(size),
	)
	return nil
}
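Using the fixture values from the test below (47 segments totalling 788529152 bytes), the two gauges would be exposed roughly as follows; a sketch that assumes the namespace constant is "pg" and the usual exposition float formatting:

# HELP pg_wal_segments Number of WAL segments
# TYPE pg_wal_segments gauge
pg_wal_segments 47
# HELP pg_wal_size_bytes Total size of WAL segments
# TYPE pg_wal_size_bytes gauge
pg_wal_size_bytes 7.88529152e+08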
63 collector/pg_wal_test.go Normal file
@@ -0,0 +1,63 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPgWALCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()

	inst := &instance{db: db}

	columns := []string{"segments", "size"}
	rows := sqlmock.NewRows(columns).
		AddRow(47, 788529152)
	mock.ExpectQuery(sanitizeQuery(pgWALQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGWALCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGWALCollector.Update: %s", err)
		}
	}()

	expected := []MetricResult{
		{labels: labelMap{}, value: 47, metricType: dto.MetricType_GAUGE},
		{labels: labelMap{}, value: 788529152, metricType: dto.MetricType_GAUGE},
	}

	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
91 collector/pg_xlog_location.go Normal file
@@ -0,0 +1,91 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"

	"github.com/blang/semver/v4"
	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/prometheus/client_golang/prometheus"
)

const xlogLocationSubsystem = "xlog_location"

func init() {
	registerCollector(xlogLocationSubsystem, defaultDisabled, NewPGXlogLocationCollector)
}

type PGXlogLocationCollector struct {
	log log.Logger
}

func NewPGXlogLocationCollector(config collectorConfig) (Collector, error) {
	return &PGXlogLocationCollector{log: config.logger}, nil
}

var (
	xlogLocationBytes = prometheus.NewDesc(
		prometheus.BuildFQName(namespace, xlogLocationSubsystem, "bytes"),
		"Postgres LSN (log sequence number) being generated on primary or replayed on replica (truncated to low 52 bits)",
		[]string{},
		prometheus.Labels{},
	)

	xlogLocationQuery = `
	SELECT CASE
		WHEN pg_is_in_recovery() THEN (pg_last_xlog_replay_location() - '0/0') % (2^52)::bigint
		ELSE (pg_current_xlog_location() - '0/0') % (2^52)::bigint
	END AS bytes
	`
)

func (c PGXlogLocationCollector) Update(ctx context.Context, instance *instance, ch chan<- prometheus.Metric) error {
	db := instance.getDB()

	// xlog was renamed to WAL in PostgreSQL 10
	// https://wiki.postgresql.org/wiki/New_in_postgres_10#Renaming_of_.22xlog.22_to_.22wal.22_Globally_.28and_location.2Flsn.29
	after10 := instance.version.Compare(semver.MustParse("10.0.0"))
	if after10 >= 0 {
		level.Warn(c.log).Log("msg", "xlog_location collector is not available on PostgreSQL >= 10.0.0, skipping")
		return nil
	}

	rows, err := db.QueryContext(ctx,
		xlogLocationQuery)

	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var bytes float64

		if err := rows.Scan(&bytes); err != nil {
			return err
		}

		ch <- prometheus.MustNewConstMetric(
			xlogLocationBytes,
			prometheus.GaugeValue,
			bytes,
		)
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return nil
}
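The version gate reads slightly inverted: Compare returns a non-negative value when the server version is at or above 10.0.0, which is exactly when the xlog-era functions no longer exist. A minimal standalone sketch of the same check, assuming only the blang/semver/v4 module:

package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

func main() {
	for _, raw := range []string{"9.6.24", "10.0.0", "15.3.0"} {
		v := semver.MustParse(raw)
		// Compare returns -1, 0, or 1; >= 0 means 10.x or later, where
		// pg_current_xlog_location() has been renamed, so the collector skips.
		if v.Compare(semver.MustParse("10.0.0")) >= 0 {
			fmt.Printf("%s: skip xlog_location collector\n", raw)
		} else {
			fmt.Printf("%s: run xlog_location collector\n", raw)
		}
	}
}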
61 collector/pg_xlog_location_test.go Normal file
@@ -0,0 +1,61 @@
// Copyright 2023 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector

import (
	"context"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"github.com/smartystreets/goconvey/convey"
)

func TestPGXlogLocationCollector(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error opening a stub db connection: %s", err)
	}
	defer db.Close()
	inst := &instance{db: db}
	columns := []string{
		"bytes",
	}
	rows := sqlmock.NewRows(columns).
		AddRow(53401)

	mock.ExpectQuery(sanitizeQuery(xlogLocationQuery)).WillReturnRows(rows)

	ch := make(chan prometheus.Metric)
	go func() {
		defer close(ch)
		c := PGXlogLocationCollector{}

		if err := c.Update(context.Background(), inst, ch); err != nil {
			t.Errorf("Error calling PGXlogLocationCollector.Update: %s", err)
		}
	}()
	expected := []MetricResult{
		{labels: labelMap{}, value: 53401, metricType: dto.MetricType_GAUGE},
	}
	convey.Convey("Metrics comparison", t, func() {
		for _, expect := range expected {
			m := readMetric(<-ch)
			convey.So(expect, convey.ShouldResemble, m)
		}
	})
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
99 collector/probe.go Normal file
@@ -0,0 +1,99 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"context"
	"sync"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/prometheus-community/postgres_exporter/config"
	"github.com/prometheus/client_golang/prometheus"
)

type ProbeCollector struct {
	registry   *prometheus.Registry
	collectors map[string]Collector
	logger     log.Logger
	instance   *instance
}

func NewProbeCollector(logger log.Logger, excludeDatabases []string, registry *prometheus.Registry, dsn config.DSN) (*ProbeCollector, error) {
	collectors := make(map[string]Collector)
	initiatedCollectorsMtx.Lock()
	defer initiatedCollectorsMtx.Unlock()
	for key, enabled := range collectorState {
		// TODO: Handle filters
		// if !*enabled || (len(f) > 0 && !f[key]) {
		// 	continue
		// }
		if !*enabled {
			continue
		}
		if collector, ok := initiatedCollectors[key]; ok {
			collectors[key] = collector
		} else {
			collector, err := factories[key](
				collectorConfig{
					logger:           log.With(logger, "collector", key),
					excludeDatabases: excludeDatabases,
				})
			if err != nil {
				return nil, err
			}
			collectors[key] = collector
			initiatedCollectors[key] = collector
		}
	}

	instance, err := newInstance(dsn.GetConnectionString())
	if err != nil {
		return nil, err
	}

	return &ProbeCollector{
		registry:   registry,
		collectors: collectors,
		logger:     logger,
		instance:   instance,
	}, nil
}

func (pc *ProbeCollector) Describe(ch chan<- *prometheus.Desc) {
}

func (pc *ProbeCollector) Collect(ch chan<- prometheus.Metric) {
	// Set up the database connection for the collector.
	err := pc.instance.setup()
	if err != nil {
		level.Error(pc.logger).Log("msg", "Error opening connection to database", "err", err)
		return
	}
	defer pc.instance.Close()

	wg := sync.WaitGroup{}
	wg.Add(len(pc.collectors))
	for name, c := range pc.collectors {
		go func(name string, c Collector) {
			execute(context.TODO(), name, c, pc.instance, ch, pc.logger)
			wg.Done()
		}(name, c)
	}
	wg.Wait()
}

func (pc *ProbeCollector) Close() error {
	return pc.instance.Close()
}
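A ProbeCollector is built per request on the multi-target /probe endpoint. The following is a hypothetical sketch of how a handler might wire it up; the real wiring lives in cmd/postgres_exporter, and the DSN here is assumed to come from the request's target parameter plus an auth_module:

package main

import (
	"net/http"

	"github.com/go-kit/log"
	"github.com/prometheus-community/postgres_exporter/collector"
	"github.com/prometheus-community/postgres_exporter/config"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// probeHandler is a sketch, not the exporter's actual handler: build a fresh
// registry per request, register a ProbeCollector for the one target, and
// serve whatever that single scrape gathers.
func probeHandler(w http.ResponseWriter, r *http.Request, logger log.Logger, dsn config.DSN) {
	registry := prometheus.NewRegistry()
	pc, err := collector.NewProbeCollector(logger, nil, registry, dsn)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	registry.MustRegister(pc)
	promhttp.HandlerFor(registry, promhttp.HandlerOpts{}).ServeHTTP(w, r)
}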
@@ -1,100 +0,0 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package collector

import (
	"database/sql"
	"fmt"
	"strings"

	"github.com/lib/pq"
)

type server struct {
	dsn  string
	name string
	db   *sql.DB
}

func makeServer(dsn string) (*server, error) {
	name, err := parseServerName(dsn)
	if err != nil {
		return nil, err
	}
	return &server{
		dsn:  dsn,
		name: name,
	}, nil
}

func (s *server) GetDB() (*sql.DB, error) {
	if s.db != nil {
		return s.db, nil
	}

	db, err := sql.Open("postgres", s.dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(1)
	db.SetMaxIdleConns(1)

	s.db = db

	return s.db, nil
}

func (s *server) GetName() string {
	return s.name
}

func (s *server) String() string {
	return s.name
}

func parseServerName(url string) (string, error) {
	dsn, err := pq.ParseURL(url)
	if err != nil {
		dsn = url
	}

	pairs := strings.Split(dsn, " ")
	kv := make(map[string]string, len(pairs))
	for _, pair := range pairs {
		splitted := strings.SplitN(pair, "=", 2)
		if len(splitted) != 2 {
			return "", fmt.Errorf("malformed dsn %q", dsn)
		}
		// Newer versions of pq.ParseURL quote values so trim them off if they exist
		key := strings.Trim(splitted[0], "'\"")
		value := strings.Trim(splitted[1], "'\"")
		kv[key] = value
	}

	var fingerprint string

	if host, ok := kv["host"]; ok {
		fingerprint += host
	} else {
		fingerprint += "localhost"
	}

	if port, ok := kv["port"]; ok {
		fingerprint += ":" + port
	} else {
		fingerprint += ":5432"
	}

	return fingerprint, nil
}
120 config/config.go Normal file
@@ -0,0 +1,120 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

import (
	"fmt"
	"os"
	"sync"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"gopkg.in/yaml.v3"
)

var (
	configReloadSuccess = promauto.NewGauge(prometheus.GaugeOpts{
		Namespace: "postgres_exporter",
		Name:      "config_last_reload_successful",
		Help:      "Postgres exporter config loaded successfully.",
	})

	configReloadSeconds = promauto.NewGauge(prometheus.GaugeOpts{
		Namespace: "postgres_exporter",
		Name:      "config_last_reload_success_timestamp_seconds",
		Help:      "Timestamp of the last successful configuration reload.",
	})
)

type Config struct {
	AuthModules map[string]AuthModule `yaml:"auth_modules"`
}

type AuthModule struct {
	Type     string   `yaml:"type"`
	UserPass UserPass `yaml:"userpass,omitempty"`
	// Add alternative auth modules here
	Options map[string]string `yaml:"options"`
}

type UserPass struct {
	Username string `yaml:"username"`
	Password string `yaml:"password"`
}

type Handler struct {
	sync.RWMutex
	Config *Config
}

func (ch *Handler) GetConfig() *Config {
	ch.RLock()
	defer ch.RUnlock()
	return ch.Config
}

func (ch *Handler) ReloadConfig(f string, logger log.Logger) error {
	config := &Config{}
	var err error
	defer func() {
		if err != nil {
			configReloadSuccess.Set(0)
		} else {
			configReloadSuccess.Set(1)
			configReloadSeconds.SetToCurrentTime()
		}
	}()

	yamlReader, err := os.Open(f)
	if err != nil {
		return fmt.Errorf("Error opening config file %q: %s", f, err)
	}
	defer yamlReader.Close()
	decoder := yaml.NewDecoder(yamlReader)
	decoder.KnownFields(true)

	if err = decoder.Decode(config); err != nil {
		return fmt.Errorf("Error parsing config file %q: %s", f, err)
	}

	ch.Lock()
	ch.Config = config
	ch.Unlock()
	return nil
}

func (m AuthModule) ConfigureTarget(target string) (DSN, error) {
	dsn, err := dsnFromString(target)
	if err != nil {
		return DSN{}, err
	}

	// Set the credentials from the authentication module
	// TODO(@sysadmind): What should the order of precedence be?
	if m.Type == "userpass" {
		if m.UserPass.Username != "" {
			dsn.username = m.UserPass.Username
		}
		if m.UserPass.Password != "" {
			dsn.password = m.UserPass.Password
		}
	}

	for k, v := range m.Options {
		dsn.query.Set(k, v)
	}

	return dsn, nil
}
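Tying the pieces together, a minimal usage sketch: load the YAML config, look up an auth module, and apply it to a bare probe target. The file name, module name ("first", matching config-good.yaml below), and target are illustrative assumptions.

package main

import (
	"fmt"

	"github.com/go-kit/log"
	"github.com/prometheus-community/postgres_exporter/config"
)

func main() {
	ch := &config.Handler{Config: &config.Config{}}
	// Hypothetical config path; the exporter takes this from a flag.
	if err := ch.ReloadConfig("postgres_exporter.yaml", log.NewNopLogger()); err != nil {
		panic(err)
	}

	module, ok := ch.GetConfig().AuthModules["first"]
	if !ok {
		panic("auth module not found")
	}

	// Merge the module's credentials and options into the probe target.
	dsn, err := module.ConfigureTarget("host.example.com:5432/postgres")
	if err != nil {
		panic(err)
	}
	// DSN.String() redacts the password, so this is safe to log.
	fmt.Println(dsn)
}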
58 config/config_test.go Normal file
@@ -0,0 +1,58 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

import (
	"testing"
)

func TestLoadConfig(t *testing.T) {
	ch := &Handler{
		Config: &Config{},
	}

	err := ch.ReloadConfig("testdata/config-good.yaml", nil)
	if err != nil {
		t.Errorf("Error loading config: %s", err)
	}
}

func TestLoadBadConfigs(t *testing.T) {
	ch := &Handler{
		Config: &Config{},
	}

	tests := []struct {
		input string
		want  string
	}{
		{
			input: "testdata/config-bad-auth-module.yaml",
			want:  "Error parsing config file \"testdata/config-bad-auth-module.yaml\": yaml: unmarshal errors:\n  line 3: field pretendauth not found in type config.AuthModule",
		},
		{
			input: "testdata/config-bad-extra-field.yaml",
			want:  "Error parsing config file \"testdata/config-bad-extra-field.yaml\": yaml: unmarshal errors:\n  line 8: field doesNotExist not found in type config.AuthModule",
		},
	}

	for _, test := range tests {
		t.Run(test.input, func(t *testing.T) {
			got := ch.ReloadConfig(test.input, nil)
			if got == nil || got.Error() != test.want {
				t.Fatalf("ReloadConfig(%q) = %v, want %s", test.input, got, test.want)
			}
		})
	}
}
238 config/dsn.go Normal file
@@ -0,0 +1,238 @@
// Copyright 2022 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

import (
	"fmt"
	"net/url"
	"regexp"
	"strings"
	"unicode"
)

// DSN represents a parsed datasource. It contains fields for the individual connection components.
type DSN struct {
	scheme   string
	username string
	password string
	host     string
	path     string
	query    url.Values
}

// String makes a dsn safe to print by excluding any passwords. This allows dsn to be used in
// strings and log messages without needing to call a redaction function first.
func (d DSN) String() string {
	if d.password != "" {
		return fmt.Sprintf("%s://%s:******@%s%s?%s", d.scheme, d.username, d.host, d.path, d.query.Encode())
	}

	if d.username != "" {
		return fmt.Sprintf("%s://%s@%s%s?%s", d.scheme, d.username, d.host, d.path, d.query.Encode())
	}

	return fmt.Sprintf("%s://%s%s?%s", d.scheme, d.host, d.path, d.query.Encode())
}

// GetConnectionString returns the URL to pass to the driver for database connections. This value should not be logged.
func (d DSN) GetConnectionString() string {
	u := url.URL{
		Scheme:   d.scheme,
		Host:     d.host,
		Path:     d.path,
		RawQuery: d.query.Encode(),
	}

	// Username and Password
	if d.username != "" {
		u.User = url.UserPassword(d.username, d.password)
	}

	return u.String()
}

// dsnFromString parses a connection string into a dsn. It will attempt to parse the string as
// a URL and as a set of key=value pairs. If both attempts fail, dsnFromString will return an error.
func dsnFromString(in string) (DSN, error) {
	if strings.HasPrefix(in, "postgresql://") || strings.HasPrefix(in, "postgres://") {
		return dsnFromURL(in)
	}

	// Try to parse as key=value pairs
	d, err := dsnFromKeyValue(in)
	if err == nil {
		return d, nil
	}

	// Parse the string as a URL, with the scheme prefixed
	d, err = dsnFromURL(fmt.Sprintf("postgresql://%s", in))
	if err == nil {
		return d, nil
	}

	return DSN{}, fmt.Errorf("could not understand DSN")
}

// dsnFromURL parses the input as a URL and returns the dsn representation.
func dsnFromURL(in string) (DSN, error) {
	u, err := url.Parse(in)
	if err != nil {
		return DSN{}, err
	}
	pass, _ := u.User.Password()
	user := u.User.Username()

	query := u.Query()

	if queryPass := query.Get("password"); queryPass != "" {
		if pass == "" {
			pass = queryPass
		}
	}
	query.Del("password")

	if queryUser := query.Get("user"); queryUser != "" {
		if user == "" {
			user = queryUser
		}
	}
	query.Del("user")

	d := DSN{
		scheme:   u.Scheme,
		username: user,
		password: pass,
		host:     u.Host,
		path:     u.Path,
		query:    query,
	}

	return d, nil
}

// dsnFromKeyValue parses the input as a set of key=value pairs and returns the dsn representation.
func dsnFromKeyValue(in string) (DSN, error) {
	// Attempt to confirm at least one key=value pair before starting the rune parser
	connstringRe := regexp.MustCompile(`^ *[a-zA-Z0-9]+ *= *[^= ]+`)
	if !connstringRe.MatchString(in) {
		return DSN{}, fmt.Errorf("input is not a key-value DSN")
	}

	// Anything other than known fields should be part of the querystring
	query := url.Values{}

	pairs, err := parseKeyValue(in)
	if err != nil {
		return DSN{}, fmt.Errorf("failed to parse key-value DSN: %v", err)
	}

	// Build the dsn from the key=value pairs
	d := DSN{
		scheme: "postgresql",
	}

	hostname := ""
	port := ""

	for k, v := range pairs {
		switch k {
		case "host":
			hostname = v
		case "port":
			port = v
		case "user":
			d.username = v
		case "password":
			d.password = v
		default:
			query.Set(k, v)
		}
	}

	if hostname == "" {
		hostname = "localhost"
	}

	if port == "" {
		d.host = hostname
	} else {
		d.host = fmt.Sprintf("%s:%s", hostname, port)
	}

	d.query = query

	return d, nil
}

// parseKeyValue is a key=value parser. It loops over each rune to split out keys and values,
// attempting to honor quoted values. parseKeyValue will return an error if it is unable
// to properly parse the input.
func parseKeyValue(in string) (map[string]string, error) {
	out := map[string]string{}

	inPart := false
	inQuote := false
	part := []rune{}
	key := ""
	for _, c := range in {
		switch {
		case unicode.In(c, unicode.Quotation_Mark):
			if inQuote {
				inQuote = false
			} else {
				inQuote = true
			}
		case unicode.In(c, unicode.White_Space):
			if inPart {
				if inQuote {
					part = append(part, c)
				} else {
					// Are we finishing a key=value?
					if key == "" {
						return out, fmt.Errorf("invalid input")
					}
					out[key] = string(part)
					inPart = false
					part = []rune{}
				}
			} else {
				// Are we finishing a key=value?
				if key == "" {
					return out, fmt.Errorf("invalid input")
				}
				out[key] = string(part)
				inPart = false
				part = []rune{}
				// Do something with the value
			}
		case c == '=':
			if inPart {
				inPart = false
				key = string(part)
				part = []rune{}
			} else {
				return out, fmt.Errorf("invalid input")
			}
		default:
			inPart = true
			part = append(part, c)
		}
	}

	if key != "" && len(part) > 0 {
		out[key] = string(part)
	}

	return out, nil
}
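To make the parsing and redaction rules concrete, here is a scratch test sketch. It would live in a _test.go file inside package config, since dsnFromString and the DSN fields are unexported; the outputs shown are what the String and GetConnectionString implementations above produce for this input.

func TestDSNParsingSketch(t *testing.T) {
	d, err := dsnFromString("host=host.example.com user=postgres port=5432 password=s3cr3t")
	if err != nil {
		t.Fatal(err)
	}
	// String() redacts the password and is safe to log:
	// postgresql://postgres:******@host.example.com:5432?
	t.Log(d.String())
	// GetConnectionString() keeps the secret for the driver and must not be logged:
	// postgresql://postgres:s3cr3t@host.example.com:5432
	t.Log(d.GetConnectionString())
}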
228
config/dsn_test.go
Normal file
228
config/dsn_test.go
Normal file
@ -0,0 +1,228 @@
|
||||
// Copyright 2022 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"net/url"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
// Test_dsn_String is designed to test different dsn combinations for their string representation.
|
||||
// dsn.String() is designed to be safe to print, redacting any password information and these test
|
||||
// cases are intended to cover known cases.
|
||||
func Test_dsn_String(t *testing.T) {
|
||||
type fields struct {
|
||||
scheme string
|
||||
username string
|
||||
password string
|
||||
host string
|
||||
path string
|
||||
query url.Values
|
||||
}
|
||||
tests := []struct {
|
||||
name string
|
||||
fields fields
|
||||
want string
|
||||
}{
|
||||
{
|
||||
name: "Without Password",
|
||||
fields: fields{
|
||||
scheme: "postgresql",
|
||||
username: "test",
|
||||
host: "localhost:5432",
|
||||
query: url.Values{},
|
||||
},
|
||||
want: "postgresql://test@localhost:5432?",
|
||||
},
|
||||
{
|
||||
name: "With Password",
|
||||
fields: fields{
|
||||
scheme: "postgresql",
|
||||
username: "test",
|
||||
password: "supersecret",
|
||||
host: "localhost:5432",
|
||||
query: url.Values{},
|
||||
},
|
||||
want: "postgresql://test:******@localhost:5432?",
|
||||
},
|
||||
{
|
||||
name: "With Password and Query String",
|
||||
fields: fields{
|
||||
scheme: "postgresql",
|
||||
username: "test",
|
||||
password: "supersecret",
|
||||
host: "localhost:5432",
|
||||
query: url.Values{
|
||||
"ssldisable": []string{"true"},
|
||||
},
|
||||
},
|
||||
want: "postgresql://test:******@localhost:5432?ssldisable=true",
|
||||
},
|
||||
{
|
||||
name: "With Password, Path, and Query String",
|
||||
fields: fields{
|
||||
scheme: "postgresql",
|
||||
username: "test",
|
||||
password: "supersecret",
|
||||
host: "localhost:5432",
|
||||
path: "/somevalue",
|
||||
query: url.Values{
|
||||
"ssldisable": []string{"true"},
|
||||
},
|
||||
},
|
||||
want: "postgresql://test:******@localhost:5432/somevalue?ssldisable=true",
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
d := DSN{
|
||||
scheme: tt.fields.scheme,
|
||||
username: tt.fields.username,
|
||||
password: tt.fields.password,
|
||||
host: tt.fields.host,
|
||||
path: tt.fields.path,
|
||||
query: tt.fields.query,
|
||||
}
|
||||
if got := d.String(); got != tt.want {
|
||||
t.Errorf("dsn.String() = %v, want %v", got, tt.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
// Test_dsnFromString tests the dsnFromString function with known variations
// of connection string inputs to ensure that it properly parses the input into
// a dsn.
func Test_dsnFromString(t *testing.T) {

    tests := []struct {
        name    string
        input   string
        want    DSN
        wantErr bool
    }{
        {
            name:  "Key value with password",
            input: "host=host.example.com user=postgres port=5432 password=s3cr3t",
            want: DSN{
                scheme:   "postgresql",
                host:     "host.example.com:5432",
                username: "postgres",
                password: "s3cr3t",
                query:    url.Values{},
            },
            wantErr: false,
        },
        {
            name:  "Key value with quoted password and space",
            input: "host=host.example.com user=postgres port=5432 password=\"s3cr 3t\"",
            want: DSN{
                scheme:   "postgresql",
                host:     "host.example.com:5432",
                username: "postgres",
                password: "s3cr 3t",
                query:    url.Values{},
            },
            wantErr: false,
        },
        {
            name:  "Key value with different order",
            input: "password=abcde host=host.example.com user=postgres port=5432",
            want: DSN{
                scheme:   "postgresql",
                host:     "host.example.com:5432",
                username: "postgres",
                password: "abcde",
                query:    url.Values{},
            },
            wantErr: false,
        },
        {
            name:  "Key value with different order, quoted password, duplicate password",
            input: "password=abcde host=host.example.com user=postgres port=5432 password=\"s3cr 3t\"",
            want: DSN{
                scheme:   "postgresql",
                host:     "host.example.com:5432",
                username: "postgres",
                password: "s3cr 3t",
                query:    url.Values{},
            },
            wantErr: false,
        },
        {
            name:  "URL with user in query string",
            input: "postgresql://host.example.com:5432/tsdb?user=postgres",
            want: DSN{
                scheme:   "postgresql",
                host:     "host.example.com:5432",
                path:     "/tsdb",
                query:    url.Values{},
                username: "postgres",
            },
            wantErr: false,
        },
        {
            name:  "URL with user and password",
            input: "postgresql://user:s3cret@host.example.com:5432/tsdb?user=postgres",
            want: DSN{
                scheme:   "postgresql",
                host:     "host.example.com:5432",
                path:     "/tsdb",
                query:    url.Values{},
                username: "user",
                password: "s3cret",
            },
            wantErr: false,
        },
        {
            name:  "Alternative URL prefix",
            input: "postgres://user:s3cret@host.example.com:5432/tsdb?user=postgres",
            want: DSN{
                scheme:   "postgres",
                host:     "host.example.com:5432",
                path:     "/tsdb",
                query:    url.Values{},
                username: "user",
                password: "s3cret",
            },
            wantErr: false,
        },
        {
            name:  "URL with user and password in query string",
            input: "postgresql://host.example.com:5432/tsdb?user=postgres&password=s3cr3t",
            want: DSN{
                scheme:   "postgresql",
                host:     "host.example.com:5432",
                path:     "/tsdb",
                query:    url.Values{},
                username: "postgres",
                password: "s3cr3t",
            },
            wantErr: false,
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := dsnFromString(tt.input)
            if (err != nil) != tt.wantErr {
                t.Errorf("dsnFromString() error = %v, wantErr %v", err, tt.wantErr)
                return
            }
            if !reflect.DeepEqual(got, tt.want) {
                t.Errorf("dsnFromString() = %+v, want %+v", got, tt.want)
            }
        })
    }
}
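Taken together, the cases above show both accepted syntaxes (key/value pairs and URLs) normalizing into the same DSN struct. A small hedged sketch of calling dsnFromString from other code in the same package (the helper name and the fmt/log imports are illustrative, not part of the change):

func demoDSNParsing() {
    inputs := []string{
        "host=host.example.com user=postgres port=5432 password=s3cr3t",
        "postgresql://postgres:s3cr3t@host.example.com:5432/tsdb",
    }
    for _, in := range inputs {
        d, err := dsnFromString(in)
        if err != nil {
            log.Fatalf("parse %q: %v", in, err)
        }
        // Safe to print: String() masks the password as ******, per the
        // redaction behavior tested earlier.
        fmt.Println(d.String())
    }
}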
config/testdata/config-bad-auth-module.yaml (vendored, new file)
@@ -0,0 +1,7 @@
auth_modules:
  foo:
    pretendauth:
      username: test
      password: pass
    options:
      extra: "1"
config/testdata/config-bad-extra-field.yaml (vendored, new file)
@@ -0,0 +1,8 @@
auth_modules:
  foo:
    userpass:
      username: test
      password: pass
    options:
      extra: "1"
    doesNotExist: test
config/testdata/config-good.yaml (vendored, new file)
@@ -0,0 +1,8 @@
auth_modules:
  first:
    type: userpass
    userpass:
      username: first
      password: firstpass
    options:
      sslmode: disable
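For orientation, here is a hedged sketch of decoding this testdata shape with gopkg.in/yaml.v3 (which the go.mod diff below pulls in). The struct and field names are illustrative stand-ins inferred from the YAML layout, not the config package's actual types:

package main

import (
    "fmt"
    "os"

    "gopkg.in/yaml.v3"
)

// Illustrative types inferred from the testdata above; the real config
// package may shape and validate these differently (for example, rejecting
// the unknown module type and the extra field in the two "bad" files).
type userPass struct {
    Username string `yaml:"username"`
    Password string `yaml:"password"`
}

type authModule struct {
    Type     string            `yaml:"type"`
    UserPass userPass          `yaml:"userpass"`
    Options  map[string]string `yaml:"options"`
}

type config struct {
    AuthModules map[string]authModule `yaml:"auth_modules"`
}

func main() {
    raw, err := os.ReadFile("config/testdata/config-good.yaml")
    if err != nil {
        panic(err)
    }
    var c config
    if err := yaml.Unmarshal(raw, &c); err != nil {
        panic(err)
    }
    m := c.AuthModules["first"]
    fmt.Println(m.Type, m.UserPass.Username, m.Options["sslmode"]) // userpass first disable
}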
go.mod
@@ -3,47 +3,52 @@ module github.com/prometheus-community/postgres_exporter
go 1.19

require (
    github.com/blang/semver v3.5.1+incompatible
    github.com/DATA-DOG/go-sqlmock v1.5.0
    github.com/alecthomas/kingpin/v2 v2.3.2
    github.com/blang/semver/v4 v4.0.0
    github.com/go-kit/log v0.2.1
    github.com/lib/pq v1.10.6
    github.com/montanaflynn/stats v0.6.6
    github.com/lib/pq v1.10.9
    github.com/montanaflynn/stats v0.7.1
    github.com/pkg/errors v0.9.1
    github.com/prometheus/client_golang v1.12.2
    github.com/prometheus/client_golang v1.16.0
    github.com/prometheus/client_model v0.4.0
    github.com/prometheus/common v0.35.0
    github.com/prometheus/exporter-toolkit v0.7.2
    github.com/stretchr/testify v1.8.4
    github.com/tklauser/go-sysconf v0.3.11
    golang.org/x/sys v0.5.0
    gopkg.in/alecthomas/kingpin.v2 v2.2.6
    github.com/prometheus/common v0.44.0
    github.com/prometheus/exporter-toolkit v0.10.0
    github.com/smartystreets/goconvey v1.8.1
    github.com/stretchr/testify v1.8.2
    github.com/tklauser/go-sysconf v0.3.12
    golang.org/x/sys v0.11.0
    gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
    gopkg.in/yaml.v2 v2.4.0
    gopkg.in/yaml.v3 v3.0.1
)

require (
    github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 // indirect
    github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d // indirect
    github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
    github.com/beorn7/perks v1.0.1 // indirect
    github.com/cespare/xxhash/v2 v2.1.2 // indirect
    github.com/cespare/xxhash/v2 v2.2.0 // indirect
    github.com/coreos/go-systemd/v22 v22.5.0 // indirect
    github.com/davecgh/go-spew v1.1.1 // indirect
    github.com/go-logfmt/logfmt v0.5.1 // indirect
    github.com/golang/protobuf v1.5.2 // indirect
    github.com/golang/protobuf v1.5.3 // indirect
    github.com/gopherjs/gopherjs v1.17.2 // indirect
    github.com/jpillora/backoff v1.0.0 // indirect
    github.com/kr/pretty v0.2.1 // indirect
    github.com/kr/text v0.1.0 // indirect
    github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
    github.com/jtolds/gls v4.20.0+incompatible // indirect
    github.com/kr/pretty v0.3.1 // indirect
    github.com/kr/text v0.2.0 // indirect
    github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
    github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
    github.com/prometheus/procfs v0.7.3 // indirect
    golang.org/x/crypto v0.1.0 // indirect
    golang.org/x/net v0.7.0 // indirect
    golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b // indirect
    golang.org/x/text v0.7.0 // indirect
    google.golang.org/appengine v1.6.6 // indirect
    github.com/pmezard/go-difflib v1.0.0 // indirect
    github.com/prometheus/procfs v0.10.1 // indirect
    github.com/rogpeppe/go-internal v1.10.0 // indirect
    github.com/smarty/assertions v1.15.0 // indirect
    github.com/tklauser/numcpus v0.6.1 // indirect
    github.com/xhit/go-str2duration/v2 v2.1.0 // indirect
    golang.org/x/crypto v0.8.0 // indirect
    golang.org/x/net v0.10.0 // indirect
    golang.org/x/oauth2 v0.8.0 // indirect
    golang.org/x/sync v0.2.0 // indirect
    golang.org/x/text v0.9.0 // indirect
    google.golang.org/appengine v1.6.7 // indirect
    google.golang.org/protobuf v1.30.0 // indirect
)

require (
    github.com/davecgh/go-spew v1.1.1 // indirect
    github.com/pmezard/go-difflib v1.0.0 // indirect
    github.com/tklauser/numcpus v0.6.0 // indirect
    gopkg.in/yaml.v3 v3.0.1 // indirect
)
go.sum
@@ -1,522 +1,123 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d h1:UQZhZ2O0vMHr2cI+DC1Mbh0TJxzA3RcLoMsFw+aXw7E=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60=
github.com/DATA-DOG/go-sqlmock v1.5.0/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/alecthomas/kingpin/v2 v2.3.2 h1:H0aULhgmSzN8xQ3nX1uxtdlTHYoPLu5AhHxWrKI6ocU=
github.com/alecthomas/kingpin/v2 v2.3.2/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/blang/semver v3.5.1+incompatible h1:cQNTCjp13qL8KC3Nbxr/y2Bqb63oX6wdnnjpJbkM4JQ=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/gopherjs/gopherjs v1.17.2 h1:fQnZVsXk8uxXIStYb0N4bGk7jeyTalG/wsZjQ25dO0g=
github.com/gopherjs/gopherjs v1.17.2/go.mod h1:pRRIvn/QzFLrKfvEz3qUuEhtE/zLCWfreZ6J5gM2i+k=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lib/pq v1.10.6 h1:jbk+ZieJ0D7EVGJYpL9QTz7/YW6UHbmdnZWYyK5cdBs=
github.com/lib/pq v1.10.6/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/montanaflynn/stats v0.6.6 h1:Duep6KMIDpY4Yo11iFsvyqJDyfzLF9+sndUKT+v64GQ=
github.com/montanaflynn/stats v0.6.6/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.12.2 h1:51L9cDoUHVrXx4zWYlcLQIZ+d+VXHgqnYKkIuq4g/34=
github.com/prometheus/client_golang v1.12.2/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_golang v1.16.0 h1:yk/hx9hDbrGHovbci4BY+pRMfSuuat626eFsHb7tmT8=
github.com/prometheus/client_golang v1.16.0/go.mod h1:Zsulrv/L9oM40tJ7T815tM89lFEugiJ9HzIqaAx4LKc=
github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY=
github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.29.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.35.0 h1:Eyr+Pw2VymWejHqCugNaQXkAi6KayVNxaHeu6khmFBE=
github.com/prometheus/common v0.35.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/exporter-toolkit v0.7.2 h1:O7dcXagEAkXNSU6f3uXYqrhIjHArvxVeGAm0YGctino=
github.com/prometheus/exporter-toolkit v0.7.2/go.mod h1:ZUBIj498ePooX9t/2xtDjeQYwvRpiPP2lh5u4iblj2g=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0VU=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/exporter-toolkit v0.10.0 h1:yOAzZTi4M22ZzVxD+fhy1URTuNRj/36uQJJ5S8IPza8=
github.com/prometheus/exporter-toolkit v0.10.0/go.mod h1:+sVFzuvV5JDyw+Ih6p3zFxZNVnKQa3x5qPmDSiPu4ZY=
github.com/prometheus/procfs v0.10.1 h1:kYK1Va/YMlutzCGazswoHKo//tZVlFpKYh+PymziUAg=
github.com/prometheus/procfs v0.10.1/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/smarty/assertions v1.15.0 h1:cR//PqUBUiQRakZWqBiFFQ9wb8emQGDb0HeGdqGByCY=
github.com/smarty/assertions v1.15.0/go.mod h1:yABtdzeQs6l1brC900WlRNwj6ZR55d7B+E8C6HtKdec=
github.com/smartystreets/goconvey v1.8.1 h1:qGjIddxOk4grTu9JPOU31tVfq3cNdBlNa5sSznIX1xY=
github.com/smartystreets/goconvey v1.8.1/go.mod h1:+/u4qLyY6x1jReYOp7GOM2FSt8aP9CzCZL03bI28W60=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/tklauser/go-sysconf v0.3.11 h1:89WgdJhk5SNwJfu+GKyYveZ4IaJ7xAkecBo+KdJV0CM=
github.com/tklauser/go-sysconf v0.3.11/go.mod h1:GqXfhXY3kiPa0nAXPDIQIWzJbMCB7AmcWpGR8lSZfqI=
github.com/tklauser/numcpus v0.6.0 h1:kebhY2Qt+3U6RNK7UqpYNA+tJ23IBEGKkB7JQBfDYms=
github.com/tklauser/numcpus v0.6.0/go.mod h1:FEZLMke0lhOUG6w2JadTzp0a+Nl8PF/GFkQ5UVIcaL4=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc=
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b h1:clP8eMhB30EHdc0bd2Twtq6kgU7yl5ub2cQLSdrv1Dg=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8=
golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sync v0.2.0 h1:PUR+T4wwASmuSTYdKjYHI5TD22Wy5ogLU5qZCOLxBrI=
golang.org/x/sync v0.2.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6 h1:lMO5rYAqUxkmaj76jAkRUvt5JZgFymx/+Q5Mzfivuhc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
|
||||
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
|
||||
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
|
||||
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
|
||||
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
|
||||
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
||||
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
||||
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
||||
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
|
||||
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
|
||||
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
|
||||
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
|
||||
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
|
||||
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
|
||||
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
|
||||
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
|
||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
||||
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
|
||||
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
|
||||
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
|
||||
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
|
||||
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
|
||||
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
|
||||
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
|
||||
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
|
||||
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
|
||||
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
|
||||
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
|
||||
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
|
||||
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
|
||||
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
|
||||
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
|
||||
|
@@ -38,12 +38,12 @@ prepare-exporter-from-repo:
 	make -C ../ build && cp ../postgres_exporter assets/postgres_exporter
 
 prepare-base-exporter:
-	tar -xf assets/postgres_exporter_percona.tar.xz -C assets/
+	tar -xf assets/postgres_exporter_percona.tar.gz -C assets/
 
 start-postgres-db:
-	docker-compose -f assets/postgres-compose.yml up -d --force-recreate --renew-anon-volumes --remove-orphans
+	docker-compose up -d --force-recreate --renew-anon-volumes --remove-orphans
 
 stop-postgres-db:
-	docker-compose -f assets/postgres-compose.yml down
+	docker-compose down
 
 prepare-env-from-repo: prepare-exporter-from-repo prepare-base-exporter start-postgres-db
percona_tests/assets/postgres_exporter.yml (new file, 1 line)
@@ -0,0 +1 @@
+auth_modules:
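Note: the new config file is committed as just the auth_modules: stub above. For reference, a populated auth module in this file format might look like the sketch below; the module name and credentials are illustrative placeholders rather than part of this change, and the exact schema should be checked against the exporter's multi-target documentation:

auth_modules:
  test_module:          # hypothetical module name, not part of this change
    type: userpass
    userpass:
      username: testuser # placeholder credentials
      password: testpass # placeholder credentials
    options:
      sslmode: disable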
percona_tests/assets/test.new-flags.txt (new file, 7 lines)
@@ -0,0 +1,7 @@
+--auto-discover-databases
+--collect.custom_query.hr
+--collect.custom_query.lr
+--collect.custom_query.mr
+--exclude-databases=template0,template1,postgres,cloudsqladmin,pmm-managed-dev,azure_maintenance,rdsadmin
+--log.level=warn
+--config.file=assets/postgres_exporter.yml
@@ -23,7 +23,22 @@ services:
     networks:
       - postgres-test-srv-net
 
+  golang:
+    image: golang:1.21
+    container_name: golang-test
+    command: >
+      tail -f ./assets/test.new-flags.txt
+    volumes:
+      - ../:/usr/src/myapp
+      - go-modules:/go/pkg/mod # Put modules cache into a separate volume
+    working_dir: /usr/src/myapp/percona_tests
+    depends_on:
+      - postgres
+    networks:
+      - postgres-test-srv-net
+
 volumes:
+  go-modules: # Define the volume
   postgres-test-srv-vol:
 
 networks:
@@ -22,7 +22,7 @@ const lowResolutionEndpoint = "metrics?collect%5B%5D=custom_query.lr"
 
 // that metric is disabled by default in new exporters, so will trigger test
 // however we don't use it at all in our dashboards, so for now - safe to skip it
-const skipMetricName = "go_memstats_gc_cpu_fraction"
+var skipMetricNames = []string{"go_memstats_gc_cpu_fraction", "go_info"}
 
 type Metric struct {
 	name string
@@ -44,18 +44,28 @@ func TestMissingMetrics(t *testing.T) {
 		return
 	}
 
-	newMetrics, err := getMetrics(updatedExporterFileName)
+	endpoint := "metrics?collect[]=exporter&collect[]=postgres&collect[]=custom_query.mr"
+	newMetrics, err := getMetricsFrom(updatedExporterFileName, updatedExporterArgs, endpoint)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
-	oldMetrics, err := getMetrics(oldExporterFileName)
+	oldMetrics, err := getMetricsFrom(oldExporterFileName, oldExporterArgs, endpoint)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
+	err = os.WriteFile(updatedExporterMetrics, []byte(newMetrics), os.ModePerm)
+	if err != nil {
+		t.Fatal(err)
+	}
+	err = os.WriteFile(oldExporterMetrics, []byte(oldMetrics), os.ModePerm)
+	if err != nil {
+		t.Fatal(err)
+	}
+
 	oldMetricsCollection := parseMetricsCollection(oldMetrics)
 	newMetricsCollection := parseMetricsCollection(newMetrics)
 
@@ -70,18 +80,27 @@ func TestMissingLabels(t *testing.T) {
 		return
 	}
 
-	newMetrics, err := getMetrics(updatedExporterFileName)
+	newMetrics, err := getMetrics(updatedExporterFileName, updatedExporterArgs)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
-	oldMetrics, err := getMetrics(oldExporterFileName)
+	oldMetrics, err := getMetrics(oldExporterFileName, oldExporterArgs)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
+	err = os.WriteFile(updatedExporterMetrics+"-labels", []byte(newMetrics), os.ModePerm)
+	if err != nil {
+		t.Fatal(err)
+	}
+	err = os.WriteFile(oldExporterMetrics+"-labels", []byte(oldMetrics), os.ModePerm)
+	if err != nil {
+		t.Fatal(err)
+	}
+
 	oldMetricsCollection := parseMetricsCollection(oldMetrics)
 	newMetricsCollection := parseMetricsCollection(newMetrics)
 
@@ -108,13 +127,13 @@ func TestDumpMetrics(t *testing.T) {
 		ep = "metrics"
 	}
 
-	newMetrics, err := getMetricsFrom(updatedExporterFileName, ep)
+	newMetrics, err := getMetricsFrom(updatedExporterFileName, updatedExporterArgs, ep)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
-	oldMetrics, err := getMetricsFrom(oldExporterFileName, ep)
+	oldMetrics, err := getMetricsFrom(oldExporterFileName, oldExporterArgs, ep)
 	if err != nil {
 		t.Error(err)
 		return
@@ -132,19 +151,19 @@ func TestResolutionsMetricDuplicates(t *testing.T) {
 		return
 	}
 
-	hrMetrics, err := getMetricsFrom(updatedExporterFileName, highResolutionEndpoint)
+	hrMetrics, err := getMetricsFrom(updatedExporterFileName, updatedExporterArgs, highResolutionEndpoint)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
-	mrMetrics, err := getMetricsFrom(updatedExporterFileName, medResolutionEndpoint)
+	mrMetrics, err := getMetricsFrom(updatedExporterFileName, updatedExporterArgs, medResolutionEndpoint)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
-	lrMetrics, err := getMetricsFrom(updatedExporterFileName, lowResolutionEndpoint)
+	lrMetrics, err := getMetricsFrom(updatedExporterFileName, updatedExporterArgs, lowResolutionEndpoint)
 	if err != nil {
 		t.Error(err)
 		return
@@ -203,18 +222,27 @@ func TestResolutions(t *testing.T) {
 }
 
 func testResolution(t *testing.T, resolutionEp, resolutionName string) {
-	newMetrics, err := getMetricsFrom(updatedExporterFileName, resolutionEp)
+	newMetrics, err := getMetricsFrom(updatedExporterFileName, updatedExporterArgs, resolutionEp)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
-	oldMetrics, err := getMetricsFrom(oldExporterFileName, resolutionEp)
+	oldMetrics, err := getMetricsFrom(oldExporterFileName, oldExporterArgs, resolutionEp)
 	if err != nil {
 		t.Error(err)
 		return
 	}
 
+	err = os.WriteFile(fmt.Sprintf("%s-%s", updatedExporterMetrics, resolutionName), []byte(newMetrics), os.ModePerm)
+	if err != nil {
+		t.Fatal(err)
+	}
+	err = os.WriteFile(fmt.Sprintf("%s-%s", oldExporterMetrics, resolutionName), []byte(oldMetrics), os.ModePerm)
+	if err != nil {
+		t.Fatal(err)
+	}
+
 	oldMetricsCollection := parseMetricsCollection(oldMetrics)
 	newMetricsCollection := parseMetricsCollection(newMetrics)
 
@@ -224,7 +252,10 @@ func testResolution(t *testing.T, resolutionEp, resolutionName string) {
 	missingLabels := ""
 	for _, oldMetric := range oldMetricsCollection.MetricsData {
 		// skip empty lines, comments and redundant metrics
-		if oldMetric.name == "" || strings.HasPrefix(oldMetric.name, "# ") || oldMetric.name == skipMetricName {
+		if oldMetric.name == "" || strings.HasPrefix(oldMetric.name, "# ") {
 			continue
 		}
+		if skipMetric(oldMetric.name) {
+			continue
+		}
 
@@ -250,10 +281,10 @@ func testResolution(t *testing.T, resolutionEp, resolutionName string) {
 
 		if !metricFound {
 			missingCount++
-			missingMetrics += fmt.Sprintf("%s\n", oldMetric.name)
+			missingMetrics += fmt.Sprintf("%s\n", oldMetric)
 		} else if !labelsMatch {
 			missingLabelsCount++
-			missingLabels += fmt.Sprintf("%s\n", oldMetric.name)
+			missingLabels += fmt.Sprintf("%s\n", oldMetric)
 		}
 	}
 
@@ -262,7 +293,7 @@ func testResolution(t *testing.T, resolutionEp, resolutionName string) {
 	}
 
 	if missingLabelsCount > 0 {
-		t.Errorf("%d metrics's labels missing in new exporter for %s resolution:\n%s", missingCount, resolutionName, missingLabels)
+		t.Errorf("%d metrics's labels missing in new exporter for %s resolution:\n%s", missingLabelsCount, resolutionName, missingLabels)
 	}
 
 	extraCount := 0
@@ -282,6 +313,16 @@ func testResolution(t *testing.T, resolutionEp, resolutionName string) {
 	}
 }
 
+func skipMetric(oldMetricName string) bool {
+	skip := false
+	for _, name := range skipMetricNames {
+		if name == oldMetricName {
+			skip = true
+		}
+	}
+	return skip
+}
+
 func dumpMetricsInfo(oldMetricsCollection, newMetricsCollection MetricsCollection) {
 	if getBool(dumpMetricsFlag) {
 		dumpMetrics(oldMetricsCollection, newMetricsCollection)
@@ -331,7 +372,7 @@ func testForMissingMetricsLabels(oldMetricsCollection, newMetricsCollection Metr
 func testForMissingMetrics(oldMetricsCollection, newMetricsCollection MetricsCollection) (bool, string) {
 	missingMetrics := make([]string, 0)
 	for metricName := range oldMetricsCollection.LabelsByMetric {
-		if metricName == skipMetricName {
+		if skipMetric(metricName) {
 			continue
 		}
 
@@ -541,12 +582,12 @@ func getMetricNames(metrics []string) []string {
 	return ret
 }
 
-func getMetrics(fileName string) (string, error) {
-	return getMetricsFrom(fileName, "metrics")
+func getMetrics(fileName, argsFile string) (string, error) {
+	return getMetricsFrom(fileName, argsFile, "metrics")
 }
 
-func getMetricsFrom(fileName, endpoint string) (string, error) {
-	cmd, port, collectOutput, err := launchExporter(fileName)
+func getMetricsFrom(fileName, argsFile, endpoint string) (string, error) {
+	cmd, port, collectOutput, err := launchExporter(fileName, argsFile)
 	if err != nil {
 		return "", errors.Wrap(err, "Failed to launch exporter")
 	}
@@ -47,11 +47,11 @@ func TestPerformance(t *testing.T) {
 
 	var updated, original *StatsData
 	t.Run("upstream exporter", func(t *testing.T) {
-		updated = doTestStats(t, repeatCount, scrapesCount, updatedExporterFileName)
+		updated = doTestStats(t, repeatCount, scrapesCount, updatedExporterFileName, updatedExporterArgs)
 	})
 
 	t.Run("percona exporter", func(t *testing.T) {
-		original = doTestStats(t, repeatCount, scrapesCount, oldExporterFileName)
+		original = doTestStats(t, repeatCount, scrapesCount, oldExporterFileName, oldExporterArgs)
 	})
 
 	printStats(original, updated)
@@ -65,13 +65,13 @@ func calculatePerc(base, updated float64) float64 {
 	return diffPerc
 }
 
-func doTestStats(t *testing.T, cnt int, size int, fileName string) *StatsData {
+func doTestStats(t *testing.T, cnt, size int, fileName, argsFile string) *StatsData {
 	var durations []float64
 	var hwms []float64
 	var datas []float64
 
 	for i := 0; i < cnt; i++ {
-		d, hwm, data, err := doTest(size, fileName)
+		d, hwm, data, err := doTest(size, fileName, argsFile)
 		if !assert.NoError(t, err) {
 			return nil
 		}
@@ -124,8 +124,8 @@ func doTestStats(t *testing.T, cnt int, size int, fileName string) *StatsData {
 	return &st
 }
 
-func doTest(iterations int, fileName string) (cpu, hwm, data int64, _ error) {
-	cmd, port, collectOutput, err := launchExporter(fileName)
+func doTest(iterations int, fileName, argsFile string) (cpu, hwm, data int64, _ error) {
+	cmd, port, collectOutput, err := launchExporter(fileName, argsFile)
 	if err != nil {
 		return 0, 0, 0, err
 	}
@@ -18,7 +18,7 @@ import (
 )
 
 const (
-	postgresHost     = "127.0.0.1"
+	postgresHost     = "postgres"
 	postgresPort     = 5432
 	postgresUser     = "postgres"
 	postgresPassword = "postgres"
@@ -28,16 +28,20 @@ const (
 
 	exporterWaitTimeoutMs = 3000 // time to wait for exporter process start
 
-	updatedExporterFileName = "assets/postgres_exporter"
-	oldExporterFileName     = "assets/postgres_exporter_percona"
+	updatedExporterFileName = "/usr/src/myapp/percona_tests/assets/postgres_exporter"
+	oldExporterFileName     = "/usr/src/myapp/percona_tests/assets/postgres_exporter_percona"
+	updatedExporterArgs     = "/usr/src/myapp/percona_tests/assets/test.new-flags.txt"
+	oldExporterArgs         = "/usr/src/myapp/percona_tests/assets/test.old-flags.txt"
+	updatedExporterMetrics  = "/usr/src/myapp/percona_tests/assets/metrics.new"
+	oldExporterMetrics      = "/usr/src/myapp/percona_tests/assets/metrics.old"
 )
 
 func getBool(val *bool) bool {
 	return val != nil && *val
 }
 
-func launchExporter(fileName string) (cmd *exec.Cmd, port int, collectOutput func() string, _ error) {
-	lines, err := os.ReadFile("assets/test.exporter-flags.txt")
+func launchExporter(fileName string, argsFile string) (cmd *exec.Cmd, port int, collectOutput func() string, _ error) {
+	lines, err := os.ReadFile(argsFile)
 	if err != nil {
 		return nil, 0, nil, errors.Wrapf(err, "Unable to read exporter args file")
 	}
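launchExporter now reads its flags from the argsFile parameter instead of the hard-coded assets/test.exporter-flags.txt; how the file's contents are turned into process arguments is not shown in this hunk. A minimal sketch of that kind of args-file handling, under the assumption that the file holds one flag per line (readArgs and the main wrapper are illustrative, not helpers from this repo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// readArgs loads one command-line flag per line from an args file
// such as assets/test.new-flags.txt, skipping blank lines.
func readArgs(path string) ([]string, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var args []string
	for _, line := range strings.Split(string(raw), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			args = append(args, line)
		}
	}
	return args, nil
}

func main() {
	// The resulting slice would be handed to exec.Command together with
	// per-run flags such as --web.listen-address.
	args, err := readArgs("assets/test.new-flags.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(args)
}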
@@ -116,6 +120,8 @@ func stopExporter(cmd *exec.Cmd, collectOutput func() string) error {
 		return errors.Wrapf(err, "Failed to wait for exporter process termination.%s\n", collectOutput())
 	}
 
+	fmt.Println(collectOutput())
+
 	return nil
 }
 func tryGetMetrics(port int) (string, error) {
@@ -56,7 +56,8 @@ postgres_exporter_data_source_name="postgresql://${postgres_exporter_pg_user}:${
 pidfile=/var/run/postgres_exporter.pid
 command="/usr/sbin/daemon"
 procname="/usr/local/bin/postgres_exporter"
-command_args="-p ${pidfile} /usr/bin/env DATA_SOURCE_NAME="${postgres_exporter_data_source_name}" ${procname} \
+command_args="-f -p ${pidfile} -T ${name} \
+	/usr/bin/env DATA_SOURCE_NAME="${postgres_exporter_data_source_name}" ${procname} \
 	--web.listen-address=${postgres_exporter_listen_address} \
 	${postgres_exporter_args}"
 
@@ -25,7 +25,7 @@
       "bars": false,
       "dashLength": 10,
       "dashes": false,
-      "datasource": "Postgres Overview",
+      "datasource": "$datasource",
      "editable": true,
       "error": false,
       "fieldConfig": {
@@ -77,7 +77,7 @@
         {
           "alias": "fetched",
           "dsType": "prometheus",
-          "expr": "sum(irate(pg_stat_database_tup_fetched{datname=~\"$db\",instance=~\"$instance\"}[5m]))",
+          "expr": "sum(irate(pg_stat_database_tup_fetched{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval]))",
           "format": "time_series",
           "groupBy": [
             {
@@ -131,7 +131,7 @@
         {
           "alias": "fetched",
           "dsType": "prometheus",
-          "expr": "sum(irate(pg_stat_database_tup_returned{datname=~\"$db\",instance=~\"$instance\"}[5m]))",
+          "expr": "sum(irate(pg_stat_database_tup_returned{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval]))",
           "format": "time_series",
           "groupBy": [
             {
@@ -185,7 +185,7 @@
         {
           "alias": "fetched",
           "dsType": "prometheus",
-          "expr": "sum(irate(pg_stat_database_tup_inserted{datname=~\"$db\",instance=~\"$instance\"}[5m]))",
+          "expr": "sum(irate(pg_stat_database_tup_inserted{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval]))",
           "format": "time_series",
           "groupBy": [
             {
@@ -239,7 +239,7 @@
         {
           "alias": "fetched",
           "dsType": "prometheus",
-          "expr": "sum(irate(pg_stat_database_tup_updated{datname=~\"$db\",instance=~\"$instance\"}[5m]))",
+          "expr": "sum(irate(pg_stat_database_tup_updated{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval]))",
           "format": "time_series",
           "groupBy": [
             {
@@ -293,7 +293,7 @@
         {
           "alias": "fetched",
           "dsType": "prometheus",
-          "expr": "sum(irate(pg_stat_database_tup_deleted{datname=~\"$db\",instance=~\"$instance\"}[5m]))",
+          "expr": "sum(irate(pg_stat_database_tup_deleted{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval]))",
           "format": "time_series",
           "groupBy": [
             {
@@ -396,7 +396,7 @@
         "rgba(237, 129, 40, 0.89)",
         "rgba(50, 172, 45, 0.97)"
       ],
-      "datasource": "Postgres Overview",
+      "datasource": "$datasource",
       "decimals": 0,
       "editable": true,
       "error": false,
@@ -460,7 +460,7 @@
       "targets": [
         {
           "dsType": "prometheus",
-          "expr": "sum(irate(pg_stat_database_xact_commit{datname=~\"$db\",instance=~\"$instance\"}[5m])) + sum(irate(pg_stat_database_xact_rollback{datname=~\"$db\",instance=~\"$instance\"}[5m]))",
+          "expr": "sum(irate(pg_stat_database_xact_commit{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])) + sum(irate(pg_stat_database_xact_rollback{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval]))",
           "format": "time_series",
           "groupBy": [
             {
@@ -530,7 +530,7 @@
       "bars": false,
       "dashLength": 10,
       "dashes": false,
-      "datasource": "Postgres Overview",
+      "datasource": "$datasource",
       "decimals": 1,
       "editable": true,
       "error": false,
@@ -584,7 +584,7 @@
         {
           "alias": "Buffers Allocated",
           "dsType": "prometheus",
-          "expr": "irate(pg_stat_bgwriter_buffers_alloc_total{instance='$instance'}[5m])",
+          "expr": "irate(pg_stat_bgwriter_buffers_alloc{job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])",
           "format": "time_series",
           "groupBy": [
             {
@@ -636,7 +636,7 @@
         {
           "alias": "Buffers Allocated",
           "dsType": "prometheus",
-          "expr": "irate(pg_stat_bgwriter_buffers_backend_fsync_total{instance='$instance'}[5m])",
+          "expr": "irate(pg_stat_bgwriter_buffers_backend_fsync{job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])",
           "format": "time_series",
           "groupBy": [
             {
@@ -688,7 +688,7 @@
         {
           "alias": "Buffers Allocated",
           "dsType": "prometheus",
-          "expr": "irate(pg_stat_bgwriter_buffers_backend_total{instance='$instance'}[5m])",
+          "expr": "irate(pg_stat_bgwriter_buffers_backend{job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])",
           "format": "time_series",
           "groupBy": [
             {
@@ -740,7 +740,7 @@
         {
           "alias": "Buffers Allocated",
           "dsType": "prometheus",
-          "expr": "irate(pg_stat_bgwriter_buffers_clean_total{instance='$instance'}[5m])",
+          "expr": "irate(pg_stat_bgwriter_buffers_clean{job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])",
           "format": "time_series",
           "groupBy": [
             {
@@ -792,7 +792,7 @@
         {
           "alias": "Buffers Allocated",
           "dsType": "prometheus",
-          "expr": "irate(pg_stat_bgwriter_buffers_checkpoint_total{instance='$instance'}[5m])",
+          "expr": "irate(pg_stat_bgwriter_buffers_checkpoint{job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])",
          "format": "time_series",
           "groupBy": [
             {
@@ -889,7 +889,7 @@
       "bars": false,
       "dashLength": 10,
       "dashes": false,
-      "datasource": "Postgres Overview",
+      "datasource": "$datasource",
       "editable": true,
       "error": false,
       "fieldConfig": {
@@ -939,7 +939,7 @@
         {
           "alias": "conflicts",
           "dsType": "prometheus",
-          "expr": "sum(rate(pg_stat_database_deadlocks{datname=~\"$db\",instance=~\"$instance\"}[5m]))",
+          "expr": "sum(rate(pg_stat_database_deadlocks{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval]))",
           "format": "time_series",
           "groupBy": [
             {
@@ -991,7 +991,7 @@
         {
           "alias": "deadlocks",
           "dsType": "prometheus",
-          "expr": "sum(rate(pg_stat_database_conflicts{datname=~\"$db\",instance=~\"$instance\"}[5m]))",
+          "expr": "sum(rate(pg_stat_database_conflicts{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval]))",
           "format": "time_series",
           "groupBy": [
             {
@@ -1088,7 +1088,7 @@
       "bars": false,
       "dashLength": 10,
       "dashes": false,
-      "datasource": "Postgres Overview",
+      "datasource": "$datasource",
       "editable": true,
       "error": false,
       "fieldConfig": {
@@ -1136,10 +1136,10 @@
       "steppedLine": false,
       "targets": [
         {
-          "expr": "sum(pg_stat_database_blks_hit{datname=~\"$db\",instance=~\"$instance\"}) / (sum(pg_stat_database_blks_hit{datname=~\"$db\",instance=~\"$instance\"}) + sum(pg_stat_database_blks_read{datname=~\"$db\",instance=~\"$instance\"}))",
+          "expr": "sum by (datname) (rate(pg_stat_database_blks_hit{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])) / (sum by (datname)(rate(pg_stat_database_blks_hit{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])) + sum by (datname)(rate(pg_stat_database_blks_read{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}[$__rate_interval])))",
           "format": "time_series",
           "intervalFactor": 2,
-          "legendFormat": "cache hit rate",
+          "legendFormat": "{{datname}} - cache hit rate",
           "refId": "A",
           "step": 240
         }
@@ -1191,7 +1191,7 @@
       "bars": false,
       "dashLength": 10,
       "dashes": false,
-      "datasource": "Postgres Overview",
+      "datasource": "$datasource",
       "editable": true,
       "error": false,
       "fieldConfig": {
@@ -1239,10 +1239,10 @@
       "steppedLine": false,
       "targets": [
         {
-          "expr": "pg_stat_database_numbackends{datname=~\"$db\",instance=~\"$instance\"}",
+          "expr": "pg_stat_database_numbackends{datname=~\"$db\",job=~\"$job\",instance=~\"$instance\"}",
           "format": "time_series",
           "intervalFactor": 2,
-          "legendFormat": "{{__name__}}",
+          "legendFormat": "{{datname}} - {{__name__}}",
           "refId": "A",
           "step": 240
         }
@@ -1299,21 +1299,50 @@
   "templating": {
     "list": [
       {
-        "allValue": ".*",
-        "current": {
-          "selected": false,
-          "text": "All",
-          "value": "$__all"
-        },
-        "datasource": "Postgres Overview",
+        "hide": 0,
+        "includeAll": false,
+        "label": "Data Source",
+        "multi": false,
+        "name": "datasource",
+        "options": [],
+        "query": "prometheus",
+        "refresh": 1,
+        "regex": "",
+        "skipUrlSync": false,
+        "type": "datasource"
+      },
+      {
+        "allValue": ".+",
+        "datasource": "$datasource",
+        "definition": "label_values(pg_up, job)",
+        "hide": 0,
+        "includeAll": true,
+        "label": "job",
+        "multi": true,
+        "name": "job",
+        "options": [],
+        "query": "label_values(pg_up, job)",
+        "refresh": 0,
+        "regex": "",
+        "skipUrlSync": false,
+        "sort": 0,
+        "tagValuesQuery": "",
+        "tags": [],
+        "tagsQuery": "",
+        "type": "query",
+        "useTags": false
+      },
+      {
+        "allValue": ".+",
+        "datasource": "$datasource",
+        "definition": "",
         "hide": 0,
         "includeAll": true,
-        "label": null,
-        "multi": false,
+        "label": "instance",
+        "multi": true,
         "name": "instance",
         "options": [],
-        "query": "label_values(up{job=~\"postgres.*\"},instance)",
+        "query": "label_values(up{job=~\"$job\"},instance)",
         "refresh": 1,
         "regex": "",
         "skipUrlSync": false,
@@ -1325,13 +1354,8 @@
         "useTags": false
       },
       {
-        "allValue": ".*",
-        "current": {
-          "selected": false,
-          "text": "All",
-          "value": "$__all"
-        },
-        "datasource": "Postgres Overview",
+        "allValue": ".+",
+        "datasource": "$datasource",
+        "definition": "label_values(pg_stat_database_tup_fetched{instance=~\"$instance\",datname!~\"template.*|postgres\"},datname)",
         "hide": 0,
         "includeAll": true,
@@ -1349,56 +1373,6 @@
         "tagsQuery": "",
         "type": "query",
         "useTags": false
       },
-      {
-        "current": {
-          "selected": false,
-          "text": "Postgres Overview",
-          "value": "Postgres Overview"
-        },
-        "hide": 0,
-        "includeAll": false,
-        "label": "datasource",
-        "multi": false,
-        "name": "datasource",
-        "options": [],
-        "query": "prometheus",
-        "refresh": 1,
-        "regex": "",
-        "skipUrlSync": false,
-        "type": "datasource"
-      },
-      {
-        "allValue": null,
-        "current": {
-          "selected": true,
-          "text": "postgres",
-          "value": "postgres"
-        },
-        "datasource": "$datasource",
-        "definition": "label_values(pg_up, job)",
-        "hide": 0,
-        "includeAll": false,
-        "label": "job",
-        "multi": false,
-        "name": "job",
-        "options": [
-          {
-            "selected": true,
-            "text": "postgres",
-            "value": "postgres"
-          }
-        ],
-        "query": "label_values(pg_up, job)",
-        "refresh": 0,
-        "regex": "",
-        "skipUrlSync": false,
-        "sort": 0,
-        "tagValuesQuery": "",
-        "tags": [],
-        "tagsQuery": "",
-        "type": "query",
-        "useTags": false
-      }
     ]
   },
@@ -1,2 +0,0 @@
-// Never check for logger errors.
-(github.com/go-kit/log.Logger).Log