Prepare the stats function to handle a new format labelled "stats-file". Its
purpose is to generate a statistics dump with a format close to the
CSV output. Such output will then be used to preload haproxy internal
counters on process startup.
stats-file output differs from a standard CSV on several points. First,
only an excerpt of all statistics is outputted. All values that do not
make sense to preload are excluded. For the moment, stats-file only lists
stats fully defined via the "struct stat_col" method. Contrary to a CSV, all
columns of a stats-file will be filled. As such, an empty field value is
used to mark stats which should not be outputted.
Some adaptations specific to stats-file are necessary in
me_generate_field(). First, stats-file will output values from the
frontend and backend sides separately, each with its own respective set of
columns. As such, an empty field value is returned if a stat is not
defined for either frontend/listener, or backend/server, when outputting
the other side. Also, as stats-file does not support empty columns,
stcol_hide() is not used for it (see the sketch below).
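To illustrate the rule, here is a minimal self-contained sketch; all
identifiers below are assumptions for illustration, not the real names
from src/stats.c:

    #include <stdint.h>

    /* sketch: field formats and sides, mirroring the described behavior */
    enum field_format { FF_EMPTY = 0, FF_U64 };
    struct field { enum field_format type; uint64_t u64; };

    #define SIDE_FE 0x1   /* frontend/listener side */
    #define SIDE_BE 0x2   /* backend/server side */

    struct stat_col { uint8_t caps; /* sides the stat is defined for */ };

    /* stats-file: a stat undefined for the side being dumped is emitted
     * as an empty field instead of being hidden via stcol_hide(), since
     * every column of a stats-file must be present. */
    static struct field gen_stats_file_field(const struct stat_col *col,
                                             uint8_t side, uint64_t value)
    {
            if (!(col->caps & side))
                    return (struct field){ .type = FF_EMPTY };
            return (struct field){ .type = FF_U64, .u64 = value };
    }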
A minor adjustment was necessary for stats_fill_fe_line() to pass
context flags. This is necessary to detect the stat output format. All other
listener/server/backend corresponding functions already have it.
The name "metrics" was chosen to represent the various list of haproxy
exposed statistics. However, it is deemed as ambiguous as some stats are
indeed metric in the true sense, but some are not, as highlighted by
various "enum field_origin" values.
Replace it by the new name "stat_cols" for statistic columns. Along with
the already existing notion of stat lines it should better reflect its
purpose.
The same generic names were used for different purposes across the
statistics implementation, which made the code difficult to understand.
* the stat/stats name is removed when a more specific name can be used
* "field" usage is restricted to purely refer to <struct field>, which
  represents a raw stat value
* the "line" naming is used to represent an array of <struct field>
Info is used to expose haproxy global metrics. It is similar to proxy
statistics and any other module. As such, rename the info indexes using the
ST_I_INF_* prefix. The info variable is also renamed stat_line_info.
Thanks to this, naming is now consistent between info and other
statistics. It will help to integrate it as a "global" statistics
module.
Statistics were extended with the introduction of the stats module. This
mechanism allows exposing various metrics for several haproxy
components. As a consequence, some static variables were turned into
dynamic ones to be able to regroup all statistics definitions.
Rename these variables with more explicit naming:
* stat_lines can be used to generate one line of statistics for any
  module, using struct field as the value type
* metrics and metrics_len are used to store the descriptions of metrics,
  indexed by module (see the sketch below)
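As an illustration, a self-contained sketch of the resulting layout (the
domain count and element types here are assumptions):

    #include <stddef.h>

    #define STATS_DOMAIN_COUNT 2   /* e.g. proxy and resolvers domains */

    struct field;       /* a raw stat value */
    struct name_desc;   /* a metric name and its description */

    /* one line of output values, allocated per statistics domain */
    static struct field *stat_lines[STATS_DOMAIN_COUNT];

    /* descriptions of metrics, indexed by module/domain */
    static const struct name_desc *metrics[STATS_DOMAIN_COUNT];
    static size_t metrics_len[STATS_DOMAIN_COUNT];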
Note that info is not integrated into the statistics module mechanism.
However, it could be done in the future to better reflect its purpose.
This commit is the first one of a series which adjusts the naming convention
for the stats module. The objective is to remove ambiguity and better
reflect how stats are implemented, especially since the introduction of
the stats module.
This patch renames elements related to proxy statistics. One of the
main changes is to rename the ST_F_* statistics index prefix to the new
name ST_I_PX_*. This removes the reference to "field", which represents
another concept in the stats module. In the same vein, the global
stat_fields variable is renamed metrics_px.
This is the 39th iteration of typo fixes.
The naming issue on the argument called "unsued" instead of "unused"
in two functions from resolvers and stick-tables was put into a second
patch so that it can be omitted if it were to cause backport issues.
The promex applet is used to dump many metrics. Some of them are related to
a server instance. The applet can be interrupted in the middle of a
dump, for example while waiting for output buffer space. In this case, its
context is saved so that the dump can resume on the correct instance.
A crash can occur if the dump is interrupted during the servers loop. If the
server instance is deleted between two schedulings of the promex applet,
its context will still reference the deleted server on resume.
To fix this, use the server refcount to prevent its deletion during parsing.
No backport is needed, despite all stable releases being affected. This
is because the promex applet context has been recently rewritten to use
generic pointers. As such, a specific commit will be applied for earlier
releases.
It is now possible to filter the metrics by their name, either by explicitly
listing the metrics to dump or, on the opposite, by excluding some metrics
from the dump. To do so, a comma-separated list of metrics must be
specified. If a name is preceded by a minus (-), the metric is excluded from
the dump. If at least one metric is specified to be explicitly dumped, all
metrics are no longer dumped, but only those explicitly listed.
The list is specified via one or more "metrics" parameters in the URI
query-string. For instance:
# Dump all metrics, except "haproxy_server_check_status"
/metrics?metrics=-haproxy_server_check_status
# Only dump the frontend, backend and server status
/metrics?metrics=haproxy_frontend_status,haproxy_backend_status,haproxy_server_status
Included and excluded metrics can be mixed. Only the intersection will be
dumped.
This patch should fix the issue #770.
It is easier this way, especially for promex modules. And because the name
and description are now explicitly passed to this function, there is no
reason to still pass the metric; its type is enough. The function is easier
to read this way.
In Prometheus, a time series is a stream of timestamped values belonging to
the same metric and the same set of labeled dimensions. Thus the exporter
dumps time series and not metrics.
The promex_dump_metric(), promex_dump_metric_header() and
promex_metric_to_str() functions were therefore renamed to replace "metric"
with "ts".
Just like for stick-tables, this patch adds a promex module to dump
resolvers metrics. It adds the "resolver" scope and, for now, dumps the
following metrics:
* haproxy_resolver_sent
* haproxy_resolver_send_error
* haproxy_resolver_valid
* haproxy_resolver_update
* haproxy_resolver_cname
* haproxy_resolver_cname_error
* haproxy_resolver_any_err
* haproxy_resolver_nx
* haproxy_resolver_timeout
* haproxy_resolver_refused
* haproxy_resolver_other
* haproxy_resolver_invalid
* haproxy_resolver_too_big
* haproxy_resolver_outdated
This patch adds a dump loop over the registered modules. It is very similar
to the other dump loops. When a module is registered, an implicit scope is
created with the module's name. It means a module name must be unique. It
also means the metrics dump of modules can be filtered via the "scope"
parameter.
In this patch we add a registration mechanism for modules. To do so, a
module must define the "promex_module" structure. The dump itself will be
based on 2 contexts: one for the whole dump and another one for each metric
time series. These contexts are used as restart points when the dump is
interrupted.
Modules must also implement 6 callback functions:
* start_metric_dump(): It is an optional callback function. If defined, it
                       is responsible for initializing the dump context,
                       used as the first restart point.
* stop_metric_dump(): It is an optional callback function. If defined, it
                      is responsible for deinitializing the dump context.
* metric_info(): This one is mandatory. It returns the info about the
                 metric: name, type, flags and description.
* start_ts(): This one is mandatory. It initializes the context for the
              time series of a given metric. This context is the second
              restart point.
* next_ts(): This one is mandatory. It iterates over the time series of a
             given metric. It is also responsible for handling the end of a
             time series and deinitializing the context.
* fill_ts(): It fills the info of the time series for a given metric: the
             labels and the value.
In addition, a module must set its name and declare the number of metrics it
exposes (see the sketch below).
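To make the contract concrete, here is a self-contained sketch of what a
module declaration could look like. Every type, field and function name
below is an assumption for illustration, not the actual promex API:

    #include <stdlib.h>

    /* hypothetical mirror of the registration structure described above */
    struct promex_module_sketch {
            const char *name;                     /* unique, used as implicit scope */
            size_t nb_metrics;                    /* number of exposed metrics */
            void *(*start_metric_dump)(void);     /* optional: 1st restart point */
            void  (*stop_metric_dump)(void *ctx); /* optional: deinit dump ctx */
            int   (*metric_info)(unsigned int id, const char **name,
                                 const char **desc, int *type, int *flags);
            void *(*start_ts)(void *ctx, unsigned int id);          /* 2nd restart point */
            void *(*next_ts)(void *ctx, void *ts, unsigned int id); /* iterate + end of ts */
            int   (*fill_ts)(void *ctx, void *ts, unsigned int id,
                             const char **labels, double *value);
    };

    struct my_dump_ctx { unsigned int cur; };

    /* optional: allocate the dump context used as the first restart point */
    static void *my_start_metric_dump(void)
    {
            return calloc(1, sizeof(struct my_dump_ctx));
    }

    /* optional: release the dump context */
    static void my_stop_metric_dump(void *ctx)
    {
            free(ctx);
    }

    static const struct promex_module_sketch my_module = {
            .name              = "my_module",  /* must be unique */
            .nb_metrics        = 3,
            .start_metric_dump = my_start_metric_dump,
            .stop_metric_dump  = my_stop_metric_dump,
            /* the mandatory metric_info/start_ts/next_ts/fill_ts callbacks
             * would be set here in the same way */
    };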
Instead of using typed pointers to save the restart points, we now use
generic pointers: 4 such pointers can now be saved, replacing the 5 typed
pointers used before. So we save 8 bytes, but it is also more generic, and
this will be used by the promex modules.
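A hedged sketch of the resulting context layout (the field name "p" is an
assumption):

    /* promex applet context: restart points are now generic pointers that
     * each dump loop casts back to the proper type on resume */
    struct promex_ctx_sketch {
            void *p[4];  /* generic restart pointers (replace 5 typed ones) */
            /* ... other dump state (flags, field number, ...) ... */
    };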
It was not an issue until now, but a way to register modules on promex will
be added, so it is important to add some extra checks. Here, we take care
to never dump more than the maximum allowed number of labels.
The context of the promex applet was extended to support the dump of extra
counters. These counters are not dumped yet, but the info needed to
interrupt and restart the dump is required. The stats module and the
relative field number for this module can now be saved.
In addition, support for an "extra-counters" parameter was added on the
query-string to dump these counters. Otherwise, no extra counters are
dumped.
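For instance (hedged: the exact value syntax should be checked against the
promex README; "on" is an assumption here):

    # Dump all metrics, including the extra counters
    /metrics?extra-counters=on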
When a metric is dumped, it is now possible to specify a custom
description. We will add support for extra counters, whose list is
retrieved dynamically; thus the description must be dynamic too. Note it
was already possible to customize the metric name.
These were previously used for the "dadwsch" utility that's no longer
part of the addon, so we should not move the CFLAGS/LDFLAGS to the
global ones as this adds an undesired dependency on the libcurl and
libzip libs.
No backport is needed.
- Reflecting the changes done in addons/deviceatlas/Makefile.inc,
  enabling the cache feature and its disabling option as well.
- Now that the `dadwsch` application is part of the API's package for more
  general purposes, we remove it.
- Minor changes to da.c's workflow that are transparent to the user, also
  making some notices more noticeable with appropriate logging levels.
- Adding support for the new `deviceatlas-cache-size` config keyword,
  a no-op when the cache support is disabled.
- Adding missing compilation units and relevant API updates to
  the dummy library version.
- Removing the legacy v2 support, which in turn suppresses the need to set
  a regex engine.
- Moving the options and addon into their distinct build unit, cleaning up
  the main one in the process.
- Adding a new option to disable the cache if desired or if
  having a C++ toolchain is not a possibility.
This is an addition to commit 18da35c ("MEDIUM: tree-wide: logsrv struct
becomes logger"): when the OpenTracing filter is compiled in debug mode
(using OT_DEBUG=1), logsrv should be changed to logger here as well.
This patch should be backported to branch 2.9.
The numbers of active and backup servers per backend were exported, but the
info was not exported per server. The main obstacle was that we were unable
to have a different name for the same metric in a different scope. Thanks to
the previous patch ("MINOR: promex: Add support for specialized
front/back/li/srv metric names"), it is now possible. So the info is now
exported per server.
This patch should fix the issue #2271.
Depending on the scope, metrics can have different names. For instance, the
numbers of active/backup servers are reported with the
"haproxy_backend_active_servers"/"haproxy_backend_backup_servers" metric
names in the backend scope, while it should be
"haproxy_server_active"/"haproxy_server_backup" in the server scope.
To be able to support different names depending on the scope for the same
metric, arrays of ISTs were added, one per scope (front, back, listen,
server). These arrays only contain names overriding the default ones (see
the sketch below).
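A self-contained sketch of the idea, using the active/backup example above
(array and index layout are assumptions; an empty entry means "keep the
default name"):

    #include <stddef.h>

    struct ist { const char *ptr; size_t len; };

    enum { ST_F_ACT, ST_F_BCK, ST_F_MAX };

    /* server scope: override the backend-oriented default names so the
     * metrics are exported as haproxy_server_active/haproxy_server_backup */
    static const struct ist srv_metric_names[ST_F_MAX] = {
            [ST_F_ACT] = { "active", 6 },
            [ST_F_BCK] = { "backup", 6 },
    };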
Note: the example above is not supported for now and is the reason for this
commit.
When the 'log' directive was implemented, the internal representation was
named 'struct logsrv', because the 'log' directive would directly point
to the log target, which used to be a (UDP) log server exclusively at
that time, hence the name.
But things have become more complex, since today a 'log' directive can point
to ring targets (implicit or named), for example.
Indeed, a 'log' directive no longer references the "final" server to
which the log will be sent; instead it describes which log API and
parameters to use for transporting the log messages to the proper log
destination.
So now the term 'logsrv' is rather confusing and prevents us from
introducing a new level of abstraction, because new concepts would be mixed
with logsrv.
So in order to better designate this 'log' directive, and make it more
generic, we chose the word 'logger' which now replaces logsrv everywhere
it was used in the code (including related comments).
This is internal rewording, so no functional change should be expected
on the user side.
When a server is in maintenance, the check.status is no longer updated.
Therefore, we shouldn't consider check.status if the checks are not active. This
check was properly implemented in the haproxy_server_check_status metric, but
wasn't carried over to backend_agg_check_status, which introduced
inconsistencies between the two metrics.
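A self-contained sketch of the rule (CHK_ST_ENABLED is the real flag name in
haproxy; the types, value and the aggregation itself are illustrative):

    #include <stdint.h>

    #define CHK_ST_ENABLED 0x01  /* value illustrative */

    struct check_sketch { uint16_t state; int status; };

    /* only account for check.status when checks are active, so that a
     * stale status from a server in maintenance is not aggregated */
    static int agg_check_status(const struct check_sketch *check, int acc)
    {
            if (check->state & CHK_ST_ENABLED)
                    acc++;  /* e.g. count servers in this check status */
            return acc;
    }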
[cf: This patch must be backported as far as 2.4]
Now that we have the free_acl_cond(cond) function, which prunes then
frees cond, replace all occurrences of this pattern:
| prune_acl_cond(cond)
| free(cond)
with:
| free_acl_cond(cond)
sc_need_room() now takes the required free space to receive more data as a
parameter. All calls to this function are updated accordingly. For now, this
value is set but not used. When we are waiting for a buffer, 0 is used, so
we expect to be unblocked ASAP. However, this must be reviewed because
SC_FL_NEED_BUF is probably enough in this case and this flag is already set
if the input buffer allocation fails.
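A usage sketch based on the description above (sc being the stream
connector at hand; frame_len is a hypothetical required size):

    /* ask for enough free space to receive a full frame */
    sc_need_room(sc, frame_len);

    /* waiting for a buffer: no room requirement, unblock ASAP */
    sc_need_room(sc, 0);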
This reverts commit aadcfc9ea6.
The parts affecting the DeviceAtlas addon were actually wrong: the
"now" variable was a local time_t in a file that's not compiled into
the haproxy binary (dadwsch). Only the fix to the calltrace is correct,
so better to revert and fix that single one in a separate commit. No
backport is needed.
The filter was using "now" in visible output in debug mode; that's
not correct, we should rather use "date" since it's visible. No
backport is needed, as the issue was mostly emphasized by commit 28360dc
("MEDIUM: clock: force internal time to wrap early after boot")
in 2.8.
Since commit 28360dc ("MEDIUM: clock: force internal time to wrap early
after boot") we have a much clearer distinction between 'now' (the internal,
drifting clock) and 'date' (the wall clock time). There were still a few
places where 'now' was being used for human consumption.
No backport is needed.
In flt_ot_sample_to_str() there were two occurrences of strncat(), which
is used to copy N chars from the source and append a zero. For the sake
of definitely getting rid of this nasty function, let's replace them with
memcpy() instead. It's worth noting that the length test there appeared
to be incorrect, as it didn't make provisions for the trailing zero,
unless the size argument doesn't take it into account (which seems
unlikely). Nothing was changed regarding this: if the code was good, it
still is; if it was bad, it still is. At least this is more obvious now
than when using a function that needs n+1 chars to work.
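For reference, the replacement pattern looks like this (dst_len standing
for the current length of the destination string; names illustrative):

    /* before: copies at most n chars from src, then appends a '\0' */
    strncat(dst, src, n);

    /* after: same effect, but the byte count is explicit, making it
     * obvious that n+1 bytes of room are needed */
    memcpy(dst + dst_len, src, n);
    dst[dst_len + n] = '\0';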
Thanks to the previous patch, it is now possible for applets to not set the
CF_EOI flag on the channels. On this point, the applets get closer to the
muxes.
It was done by hand by callers when a shutdown for read or write was
performed. It is now always handled by the functions performing the
shutdown, so the callers no longer need to take care of it. This will avoid
some bugs.
When the promex applet triggers an error, for instance because the URI is
invalid, we must still take care to consume the request. Otherwise, the
error will be handled by HTTP analyzers as a server abort.
This patch must be backported as far as 2.0.
The CF_READ_NULL flag is not really useful nor used. It is a transient event
used to wake up the stream. As we will see, all read events on a channel may
be reduced to only one, and all are used to wake up the stream.
In this patch, we introduce the CF_READ_EVENT flag as a replacement for
CF_READ_NULL. There is no breaking change for now, it is just a
rename. Gradually, other read events will be merged with this one.
It happens that a few "if USE_foo" were placed too low in the makefile,
and would mostly work by luck thanks to not using variables that were
already referenced before. The opentracing include is even trickier
because it extends OPTIONS_CFLAGS that was last read a few lines before
being included, but it only works because COPTS is defined as a macro and
not a variable, so it will be evaluated later. At least now it doesn't
touch OPTIONS_* anymore and since it's cleanly arranged, it will work by
default via the flags collector.
Let's just move these late USE_* handlers higher up and place a visible
delimiter after them, as a reminder not to add any after it.
With gcc-11.2 and binutils-2.37 I'm getting link errors due to multiply
defined symbols when enabling USE_51DEGREES_V4. This is caused by two
variables being present in hash.h instead of hash.c, hence they're
defined twice.
This patch just moves them to hash.c and turns their declarations in the
header into extern ones.
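The fix follows the usual pattern for sharing a table between compilation
units (variable name illustrative):

    /* hash.h: declaration only, no storage allocated in includers */
    extern const uint32_t hash_tbl[256];

    /* hash.c: the single definition of the object */
    const uint32_t hash_tbl[256] = { /* ... */ };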
No backport is needed since this was introduced in 2.8-dev.
"haproxy_backend_agg_server_status" and "haproxy_backend_agg_check_status"
were not referenced in promex README.
"haproxy_backend_agg_server_check_status" is also missing but it is a
deprecated metric. Thus, it is better to not reference it.