Lua: Architecture and first steps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
version 1.9
author: Thierry FOURNIER
contact: tfournier at arpalert dot org
HAProxy is a powerful load balancer. It embeds many options and many
configuration styles in order to give a solution to many load balancing
problems. However, HAProxy is not universal and some special or specific
problems have no solution with the native software.
This text is not a full explanation of the Lua syntax.
This text is not a replacement for the HAProxy Lua API documentation. The API
documentation can be found at the project root, in the documentation directory.
The goal of this text is to explain how Lua is integrated in HAProxy and how to
use it efficiently.
However, it can be read by Lua beginners. Some examples are detailed.
Why a scripting language in HAProxy
===================================
HAProxy 1.5 makes it possible to do many things using samples, but some people
want more: combining the results of sample fetches, programming conditions and
loops, which is not possible. Sometimes people implement these functionalities
in patches which have no meaning outside their network. These people must
maintain these patches, or worse, we must integrate them in the HAProxy
mainstream.
Their need is to have an embedded programming language so that they no longer
modify the HAProxy source code, but write their own control code instead. Lua is
encountered very often in the software industry, and in some open source
projects. It is easy to understand, efficient, light, without external
dependencies, and leaves the resource control to the implementation. Its design
is close to the HAProxy philosophy, which uses components for what they do
perfectly.
The HAProxy control block allows one to take a decision based on the comparison
between samples and patterns. The samples are extracted using easily extensible
fetch functions, and are used by actions which are also extensible. It seems
natural to allow Lua to provide samples, modify them, and to be an action
target. So, Lua uses the same entities as the configuration language. This is
the most natural and reliable way for the Lua integration. So, the Lua engine
allows one to add new sample fetch functions, new converter functions and new
actions. These new entities can access the existing sample fetches and
converters, allowing them to be extended without rewriting them.
The writing of the first Lua functions shows that implementing complex concepts
like protocol analysers is easy and can be extended to full services. It appears
that these services are not easy to implement with the HAProxy configuration
model which is based on four steps: fetch, convert, compare and action. HAProxy
is extended with a notion of services which are a formalisation of the existing
services like stats, cli and peers. The service is an autonomous entity with a
behaviour pattern close to that of an external client or server. The Lua engine
inherits from this new service and offers new possibilities for writing
services.
This scripting language is useful for testing new features as a proof of
concept. Later, if there is general interest, the proof of concept could be
rewritten in C and integrated in the HAProxy core.
The HAProxy Lua integration also provides a simple way for distributing Lua
packages. The end user only needs to install the Lua file, load it in HAProxy
and follow the attached documentation.
Design and technical things
===========================
Lua is integrated into the HAProxy event driven core. We want to preserve the
fast processing of HAProxy. To ensure this, we implement some technical concepts
between HAProxy and the Lua library.
The following paragraphs describe the interactions between Lua and HAProxy
from a technical point of view.
Prerequisite
------------
Reading the following documentation links is required to understand the
current paragraph:
HAProxy doc: http://cbonte.github.io/haproxy-dconv/
Lua API: http://www.lua.org/manual/5.3/
HAProxy API: http://www.arpalert.org/src/haproxy-lua-api/1.9dev/index.html
Lua guide: http://www.lua.org/pil/
More about the Lua choice
-------------------------
The Lua language is very simple to extend. It is easy to add new functions
written in C to the core language. It does not require embedding very intrusive
libraries, and we do not change the compilation process.
The amount of memory consumed can be controlled, and the issues due to a lack
of memory are properly caught. The maximum amount of memory allowed for the Lua
processes is configurable. If some memory is missing, the current Lua action
fails, and the HAProxy processing flow continues.
Lua provides a way for implementing event driven design. When the Lua code
wants to do a blocking action, the action is started, it executes non blocking
operations, and returns control to the HAProxy scheduler when it needs to wait
for some external event.
The Lua process can be interrupted after a given number of instructions have
been executed. The Lua execution will resume later. This is a useful way for
controlling the
execution time. This system also keeps HAProxy responsive. When the Lua
execution is interrupted, HAProxy accepts some connections or transfers pending
data. The Lua execution does not block the main HAProxy processing, except in
some cases which we will see later.
Lua function integration
------------------------
The Lua actions, sample fetches, converters and services are integrated in
HAProxy with "register_*" functions. The register system was chosen to make it
easy to provide HAProxy Lua packages. The register system adds new sample
fetches, converters, actions or services usable in the HAProxy configuration
file.
The register system is defined in the "core" functions collection. This
collection is provided by HAProxy and is always available. Below is the list of
these functions:
- core.register_action()
- core.register_converters()
- core.register_fetches()
- core.register_init()
- core.register_service()
- core.register_task()
These functions are the execution entry points.
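Below is a minimal sketch (not taken from HAProxy's own sources) of how these
entry points are typically used from a Lua file loaded with the "lua-load"
global directive; the names "my-rand" and "my-upper" are arbitrary:

   -- Registered fetches and converters become usable in the configuration
   -- under the "lua." prefix, e.g. "lua.my-rand" and "lua.my-upper".
   core.register_fetches("my-rand", function(txn)
      return math.random(1, 100)
   end)

   core.register_converters("my-upper", function(str)
      return string.upper(str)
   end)

   core.register_init(function()
      core.log(core.info, "Lua file loaded")
   end)
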
The HTTP actions must be used for manipulating HTTP request headers. These
actions cannot manipulate HTTP content. It is dangerous to use the channel
manipulation object with an HTTP request in an HTTP action. The channel
manipulation can transform a valid request into an invalid one. In this case,
the action will never resume and the processing will be frozen. HAProxy
discards the request after the reception timeout.
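As an illustration, here is a hedged sketch of an HTTP action that stays within
these constraints by only touching request headers (the action name
"add-debug-header" and the header are arbitrary):

   core.register_action("add-debug-header", { "http-req" }, function(txn)
      -- The HTTP class only manipulates headers, which is safe in an
      -- HTTP action; the Channel object is deliberately not used here.
      txn.http:req_set_header("x-debug", "1")
   end)

   -- It would then be referenced in the configuration as:
   --    http-request lua.add-debug-header
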
Non blocking design
-------------------
HAProxy is event driven software, so blocking system calls are absolutely
forbidden. However, Lua allows blocking actions. When an action blocks, HAProxy
waits and does nothing, so basic functionalities like accepting connections or
forwarding data are blocked until the end of the system call. In this case
HAProxy will be less responsive.
This is very insidious because when developers test their Lua code with only
one stream, HAProxy seems to run fine. When the code is used with production
traffic, HAProxy encounters slow processing and cannot hold the load.
However, during the initialisation phase, you can obviously use blocking
functions. These are typically used for loading files.
The list of standard Lua functions prohibited during the runtime contains all
those that do filesystem access:
- os.remove()
- os.rename()
- os.tmpname()
- package.*()
- io.*()
- file.*()
Some other functions are prohibited:
- os.execute(), waits for the end of the requested execution, blocking HAProxy.
- os.exit(), is not really dangerous for the process, but it is not the right
  way for exiting the HAProxy process.
- print(), writes data on stdout. In some cases these writes are blocking; the
  best practice is to reserve this call for debugging and to prefer core.log()
  or TXN.log() for sending messages (see the sketch below).
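A minimal sketch of the preferred logging calls:

   -- Prefer core.log():
   core.log(core.info, "processing request")
   -- Inside an action or sample fetch, where a TXN object is available:
   -- txn:log(core.info, "processing request")
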
Some HAProxy functions have a blocking behaviour pattern in the Lua code, but
they are compatible with the non blocking design (see the example below). These
functions are:
- the whole Socket class
- core.sleep()
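For example, a background task may freely combine them; this is only a sketch,
and the address, port and payload below are assumptions:

   core.register_task(function()
      while true do
         local sock = core.tcp()
         sock:settimeout(3)
         if sock:connect("127.0.0.1", 8125) then
            sock:send("haproxy.lua.heartbeat:1|c\n")
            sock:close()
         end
         core.sleep(10)   -- yields to the scheduler; HAProxy keeps running
      end
   end)
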
Responsive design
-----------------
HAProxy must accept connections, forward data and process timeouts as soon as
possible. One might think that a Lua script with a long execution time would
impact this expected responsive behaviour.
It is not the case: the Lua script execution is regularly interrupted, and
HAProxy can process other things. These interruptions are expressed in number
of Lua instructions. The number of instructions between two forced
interruptions is configured with the following "tune" option:

   tune.lua.forced-yield <nb>

The default value is 10 000. To determine it, I ran a benchmark on my laptop.
I executed a Lua loop for 10 seconds with different values for the
"tune.lua.forced-yield" option, and I noted the results:
    configured    | Number of
    instructions  | loops executed
    between two   | in millions
    forced yields |
   ---------------+---------------
         10       |      160
        500       |      670
       1000       |      680
       5000       |      700
       7000       |      700
       8000       |      700
       9000       |      710   <- ceil
      10000       |      710
     100000       |      710
    1000000       |      710
The results showed that starting at 9000 instructions between two
interruptions, we reach a ceiling, so the default parameter is 10 000.
When HAProxy interrupts the Lua processing, there are two possible states:
- Lua is resumable, and it returns control to the HAProxy scheduler,
- Lua is not resumable, and we just check the execution timeout.
The second case occurs if it is required by the HAProxy core. This state is
forced if the Lua code is processed in a non resumable HAProxy part, like
sample fetches or converters.
It also occurs if the Lua code itself is non resumable. For example, if some
code is executed through the Lua pcall() function, the execution is not
resumable. This is explained later.
So, the Lua code must be fast and simple when it is executed as sample fetches
and converters, while it can be slower and more complex when it is executed as
actions and services.
Execution time
--------------
The Lua execution time is measured and limited. Each group of functions has its
own timeout configured. The time measured is the real Lua execution time, and
not the difference between the end time and the start time. The groups are:
- main code and init are not subject to the timeout,
- fetches, converters and actions have a default timeout of 4s,
- tasks, by default, do not have a timeout,
- services have a default timeout of 4s.
The corresponding tune options are:
- tune.lua.session-timeout (fetches, converters and actions)
- tune.lua.task-timeout (tasks)
- tune.lua.service-timeout (services)
Tasks do not have a timeout because they run in the background for the whole
life of the HAProxy process.
For example, if a Lua script runs for 1.1s and the script executes a sleep of
1 second, the effective measured running time is 0.1s.
This timeout is useful for preventing infinite loops. During the runtime, it
should never be triggered.
The stack and the coprocess
---------------------------
The Lua execution is organized around a stack. Each Lua action, even outside of
the effective execution, affects the stack. The HAProxy integration uses one
main stack, which is common to the whole process, and a secondary one used as a
coprocess.
After the initialization, the main stack is no longer used by HAProxy, except
for global storage. The second type of stack is used by all the Lua functions
called from the different Lua actions declared in HAProxy. The main stack is
used to store coroutine pointers and some global variables.
Do you want to see an example of what Lua C development around a stack looks
like? Some examples follow. This first one is a simple addition:
   lua_pushnumber(L, 1)
   lua_pushnumber(L, 2)
   lua_arith(L, LUA_OPADD)
It's easy: we push 1 on the stack, then we push 2, and finally we perform an
addition. The two top entries of the stack are added, popped, and the result is
pushed. It is the classic way of working with a stack.
Now an example for constructing arrays and objects. It's a little bit more
complicated. The difficulty consists in keeping in mind the state of the stack
while we write the code. The goal is to create the entity described below. Note
that the notation "*1" is a metatable reference. The metatable will be
explained later.
   name*1 = {
       [0] = <userdata>,
   }

   *1 = {
       "__index" = {
           "method1" = <function>,
           "method2" = <function>
       }
       "__gc" = <function>
   }
Let's go:
lua_newtable()          // The "name" table
lua_newtable()          // The metatable *1
lua_pushstring("__index")
lua_newtable()          // The "__index" table
lua_pushstring("method1")
lua_pushfunction(function)
lua_settable(-3)        // -3 is an index in the stack. insert method1
lua_pushstring("method2")
lua_pushfunction(function)
lua_settable(-3)        // insert method2
lua_settable(-3)        // insert "__index"
lua_pushstring("__gc")
lua_pushfunction(function)
lua_settable(-3)        // insert "__gc"
lua_setmetatable(-2)    // attach the metatable *1 to the "name" table
lua_pushnumber(0)
lua_pushuserdata(userdata)
lua_settable(-3)        // insert the userdata at index [0]
lua_setglobal("name")   // store the "name" table as a global variable
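For comparison, the same structure written directly in Lua looks roughly like
this (the userdata value and the functions are only placeholders):
-- rough Lua equivalent of the C sequence above; the methods and the
-- userdata value are placeholders
local mt = {
   __index = {
      method1 = function() end,
      method2 = function() end,
   },
   __gc = function() end,
}
name = setmetatable({ [0] = some_userdata }, mt)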
So, coding in C for Lua is not complex, but it requires some mental gymnastics.
The object concept and the HAProxy format
-----------------------------------------
Objects are not a native concept in Lua: a Lua object is a table. We can note
that the table notation accepts three forms:
1. mytable["entry"](mytable, "param")
2. mytable.entry(mytable, "param")
3. mytable:entry("param")
These three notations have the same behaviour: a function is executed with the
table itself as first parameter and the string "param" as second parameter.
The notation with [] is commonly used for storing data in a hash table, and the
dotted notation is used for objects. The notation with ":" indicates that the
first parameter is the element at the left of the symbol ":".
So, an object is a table and each entry of the table is a variable. A variable
can be a function. These are the first concepts of the object notation in Lua,
but that is not all.
With objects, we usually expect classes and inheritance. This is the role of
the metatable. A metatable is a table with predefined entries. These entries
modify the default behaviour of the table. The simplest example is the
"__index" entry. If this entry exists, it is used when a value is requested in
the table. The behaviour is the following:
 1 - look in the table for the requested entry; if it exists, return it
 2 - look if a metatable exists and if it contains an "__index" entry
 3 - if "__index" is a function, execute it with the key as parameter, and
     return the result of the function
 4 - if "__index" is a table, look in it for the requested entry and, if it
     exists, return it
 5 - if it does not exist, return to step 2
The behaviour of point 5 is what implements inheritance.
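To illustrate, here is a minimal sketch in plain Lua of a "class" built with a
metatable and its "__index" entry (all the names are only illustrative):
local MyClass = {}
MyClass.__index = MyClass           -- missing keys are looked up in MyClass
function MyClass.new(value)
   return setmetatable({ value = value }, MyClass)
end
function MyClass:get()              -- method stored in the "class" table
   return self.value
end
local obj = MyClass.new(42)
print(obj:get())                    -- "get" is found through "__index": 42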
In HAProxy, all the provided objects are tables; the entry "[0]" contains
private data, which is often userdata or lightuserdata. The metatable is
registered in the global part of the main Lua stack, under the case-sensitive
class name. Most of these classes must not be instantiated directly because
they require an initialisation using the HAProxy internal structs.
The HAProxy objects use unified conventions. A Lua object is always a table.
In most cases, an HAProxy Lua object needs some private data. This data is
always stored at index [0] of the array. The metatable entry "__tostring"
returns the object name.
The Lua developer can add entries to an HAProxy object; they just need to work
carefully and avoid modifying index [0].
Common HAProxy objects are:
 - TXN : manipulates the transaction between the client and the server
 - Channel : manipulates proxified data between the client and the server
 - HTTP : manipulates HTTP exchanges between the client and the server
 - Map : manipulates HAProxy maps
 - Fetches : accesses all the HAProxy sample fetches
 - Converters : accesses all the HAProxy sample converters
 - AppletTCP : processes a client request like a TCP server
 - AppletHTTP : processes a client request like an HTTP server
 - Socket : establishes a TCP connection to a server (ipv4/ipv6/socket/ssl/...)
The garbage collector and the memory allocation
-----------------------------------------------
Lua doesn't really have a global memory limit, but HAProxy implements one. This
permits controlling the amount of memory dedicated to the Lua processing. It is
especially useful in embedded environments.
When the memory limit is reached, HAProxy refuses to give more memory to the
Lua scripts. The current Lua execution is terminated with an error and HAProxy
continues its processing.
The maximum amount of memory is configured with the option:
   tune.lua.maxmem
Like many other scripting languages, Lua uses a garbage collector for reusing
its memory. The Lua developer can work without worrying about memory. Usually,
the garbage collector is controlled by the Lua core, but sometimes it is useful
to run it on demand, so the garbage collector can be called from the C side or
from the Lua side.
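From the Lua side, for example, a script can request a full collection cycle
with the standard collectgarbage() function:
collectgarbage("collect")   -- run a full garbage collection cycle
-- report the memory currently used by Lua, in kilobytes
core.Info("Lua memory in use: " .. collectgarbage("count") .. " kB")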
Sometimes, objects using lightuserdata or userdata require freeing some memory
blocks or closing file descriptors that are not controlled by Lua. A dedicated
garbage collection function is provided through the metatable. It is referenced
with the special entry "__gc".
Generally, in HAProxy, the garbage collector does this job without any
intervention. However, some objects use a large amount of memory, and we want
to release them as quickly as possible. The problem is that only the GC knows
whether the object is still in use or not. The reason is simple: variables
containing objects can be shared between coroutines and the main thread, so an
object can be used anywhere in HAProxy.
The main example is the HAProxy socket. It is explained later; just for
understanding the GC issues, a quick overview of the socket follows. The
HAProxy socket uses an internal session and stream. The session uses resources
like memory and file descriptors and, in some cases, keeps a socket open while
it is no longer used by Lua.
If the HAProxy socket is used, we force a garbage collector cycle after the end
of each function using it. The reason is simple: if the socket is no longer
used, we want to close the connection quickly.
A special flag is used in HAProxy to indicate that an HAProxy socket was
created. If this flag is set, a full GC cycle is started after each Lua action.
This is not free: we lose about 10% of performance, but it is the only way to
close sockets quickly.
The yield concept / longjmp issues
----------------------------------
The "yield" is an action which does some Lua processing in pause and give back
the hand to the HAProxy core. This action is do when the Lua needs to wait about
data or other things. The most basically example is the sleep() function. In an
event driven software the code must not process blocking systems call, so the
sleep blocks the software between a lot of time. In HAProxy, an Lua sleep does a
yield, and ask to the scheduler to be woken up in a required sleep time.
Meanwhile, the HAProxy scheduler does other things, like accepting new
connection or forwarding data.
A yield is also executed regularly, after a lot of Lua instructions processed.
This yield permits to control the effective execution time, and also give back
the hand to the HAProxy core. When HAProxy finishes to process the pending jobs,
the Lua execution continues.
This special "yield" uses the Lua "debug" functions. Lua provides a debug method
called "lua_sethook()" which permits to interrupt the execution after some
configured condition and call a function. This condition used in HAProxy is
a number of instructions processed and when a function returns. The function
called controls the effective execution time, and if it is possible to send a
"yield".
The yield system is based on a couple setjmp/longjmp. In brief, the setjmp()
stores a stack state, and the longjmp restores the stack in its state which had
before the last Lua execution.
Lua can immediately stop its execution if an error occurs. This system uses also
the longjmp system. In HAProxy, we try to use this sytem only for unrecoverable
errors. Maybe some trivial errors target an exception, but we try to remove it.
It seems that Lua uses the longjmp system for having a behaviour like the java
try / catch. We can use the function pcall() to execute some code. The function
pcall() run a setjmp(). So, if any error occurs while the Lua code execution,
the flow immediately returns from the pcall() with an error.
The big issue of this behaviour is that we cannot do a yield. So if some Lua code
executes a library using pcall for catching errors, HAProxy must be wait for the
end of execution without processing any accept or any stream. The cause is the
yield must be jump to the root of execution. The intermediate setjmp() avoids
this behaviour.
   HAProxy starts the Lua execution
    + Lua puts a setjmp()
      + Lua executes code
        + Some code is executed in a pcall()
          + pcall() puts a setjmp()
            + Lua executes code
              + A yield is required for a sleep function:
                it cannot jump back to the Lua root execution.
Another issue with the processing of strong errors is the manipulation of the
Lua stack outside of a Lua execution. If one of the called functions raises a
strong error, the default behaviour is an abort(). This is not acceptable while
HAProxy is running. The Lua documentation proposes using another setjmp/longjmp
pair to avoid the abort(): the idea is to put a setjmp() around the Lua stack
manipulation and to install an alternative "panic" function which jumps back to
the setjmp() in case of error.
All of these behaviours are very dangerous for stability, and the internal
HAProxy code must be modified with many precautions.
To preserve a good behaviour of HAProxy, the yield is mandatory.
Unfortunately, some HAProxy parts cannot resume an execution after a yield:
these parts are the sample fetches and the sample converters. So, Lua code
written for these parts of HAProxy must execute quickly and cannot perform
actions which require a yield, like a TCP connection or a simple sleep.
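For example, an action registered with core.register_action() may sleep,
because actions are resumable, while the same call cannot be used from a
sample fetch or a converter (the action name "wait-a-bit" below is only an
illustration):
core.register_action("wait-a-bit", { "http-req" }, function(txn)
   core.msleep(100)   -- yielding here is fine: actions are resumable
end)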
HAProxy socket object
---------------------
The HAProxy design is optimized for the data transfers between a client and a
server, and for processing the many errors which can occur during these
exchanges. HAProxy is not designed to establish a third connection to a
third-party server.
The solution consists in pausing the main stream while waiting for the end of
the exchanges with the third connection. This is done with a signal exchanged
between internal tasks. The following graph shows the HAProxy Lua socket:
                      +------------------------+
                      | Lua processing         |
 ------------------\  | creates socket         |  ------------------\
  incoming request  > | and puts the           |   Outgoing request  >
 ------------------/  | current processing     |  ------------------/
                      | in pause waiting       |
                      | for TCP applet         |
                      +--------------------+---+
                              ^            |
                              |            |
                       signal |            | read / write
                              |            |      data
                              |            |
             +----------------+-------+    v
             | HAProxy internal       +------------------+
             | applet sends signals   |                  |
             | when data is received  |                  |  -------------------\
             | or some room is        |   Attached I/O   |   Client TCP stream  >
             | available              |   Buffers        |  -------------------/
             +--------------------+---+                  |
                                  |                      |
                                  +----------------------+
A more detailed graph is available in the "doc/internals" directory.
The HAProxy Lua socket uses a full HAProxy session / stream for establishing
the connection. This mechanism provides all the facilities and features of
HAProxy, like the SSL stack, the many socket types, and the support for
namespaces. Technically it supports the PROXY protocol, but there is no way to
enable it.
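As an illustration, a minimal use of the Socket class from a task or a service
could look like this (the address, port and payload are arbitrary examples):
local s = core.tcp()                   -- create a Socket object
if s:connect("127.0.0.1", 8080) then   -- establish the connection
   s:send("ping\r\n")                  -- write to the third party server
   local answer = s:receive("*l")      -- read one line of response
   s:close()
end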
How to compile HAProxy with Lua
===============================
HAProxy 1.6 requires Lua 5.3. Lua 5.3 offers some features which make the
integration easy. Lua 5.3 is young, and some distros do not distribute it yet.
Luckily, Lua is a great product because it does not require exotic
dependencies, and its build process is really easy.
The compilation process for Linux is easy:
 - download the source tarball
      wget http://www.lua.org/ftp/lua-5.3.1.tar.gz
 - untar it
      tar xf lua-5.3.1.tar.gz
 - enter the directory
      cd lua-5.3.1
 - build the library for Linux
      make linux
 - install it:
      sudo make INSTALL_TOP=/opt/lua-5.3.1 install
HAProxy builds with your favourite options, plus the following options for
embedding the Lua script language:
 - download the source tarball
      wget http://www.haproxy.org/download/1.6/src/haproxy-1.6.2.tar.gz
 - untar it
      tar xf haproxy-1.6.2.tar.gz
 - enter the directory
      cd haproxy-1.6.2
 - build HAProxy:
      make TARGET=linux \
           USE_DL=1 \
           USE_LUA=1 \
           LUA_LIB=/opt/lua-5.3.1/lib \
           LUA_INC=/opt/lua-5.3.1/include
 - install it:
      sudo make PREFIX=/opt/haproxy-1.6.2 install
First steps with Lua
====================
Now, it's time to use Lua in HAProxy.
Start point
-----------
The HAProxy global directive "lua-load <file>" allows loading a Lua file. This
is the entry point. This load happens during the configuration parsing, and the
Lua file is immediately executed.
All the register_*() functions must be called at this time because they are
used just after the processing of the global section, in the
frontend/backend/listen sections.
The simplest "Hello world !" is the following line in a loaded Lua file:
core.Alert("Hello World !");
It displays a log during the HAProxy startup:
[alert] 285/083533 (14465) : Hello World !
Default path and libraries
--------------------------
Lua can embed some libraries. These libraries can be included from different
paths. It seems that Lua doesn't like subdirectories. In the following example,
I try to load a compiled library: the first line is Lua code, the second line
is an 'strace' extract proving that the library was opened, and the next lines
are the associated error.
require("luac/concat")
open("./luac/concat.so", O_RDONLY|O_CLOEXEC) = 4
[ALERT] 293/175822 (22806) : parsing [commonstats.conf:15] : lua runtime
error: error loading module 'luac/concat' from file './luac/concat.so':
./luac/concat.so: undefined symbol: luaopen_luac/concat
Lua tries to load the C symbol 'luaopen_luac/concat'. When Lua tries to open a
library, it tries to execute the function associated with the symbol
"luaopen_<libname>".
The variable "<libname>" is built using the content of the variable
"package.cpath" and/or "package.path". The default definition of the
"package.cpath" variable (on my computer) is:
/usr/local/lib/lua/5.3/?.so;/usr/local/lib/lua/5.3/loadall.so;./?.so
The "<libname>" is the content which replaces the symbol "<?>". In the previous
example, its "luac/concat", and obviously the Lua core try to load the function
associated with the symbol "luaopen_luac/concat".
My conclusion is that Lua doesn't support subdirectories. So, for loading
libraries in subdirectory, it must fill the variable with the name of this
subdirectory. The extension .so must disappear, otherwise Lua try to execute the
function associated with the symbol "luaopen_concat.so". The following syntax is
correct:
package.cpath = package.cpath .. ";./luac/?.so"
require("concat")
First useful example
--------------------
core.register_fetches("my-hash", function(txn, salt)
return txn.sc:sdbm(salt .. txn.sf:req_fhdr("host") .. txn.sf:path() .. txn.sf:src(), 1)
end)
You will see that these 3 lines can generate a lot of explanations :)
core.register_fetches() is executed during the processing of the global section
by the HAProxy configuration parser. A new sample fetch is declared with the
name "my-hash"; this name is always prefixed with "lua.". So this newly
declared sample fetch will be used by calling "lua.my-hash" in the HAProxy
configuration file.
The second parameter is an anonymous function declared inline. Note the closing
parenthesis after the keyword "end" which ends the function declaration. The
first parameter of this anonymous function is "txn". It is an object of class
TXN which provides access functions. The second parameter is an arbitrary value
provided by the HAProxy configuration file. This parameter is optional; the
developer must check whether it is present.
The registered anonymous function is executed when the HAProxy backend or
frontend configuration references the sample fetch "lua.my-hash".
This example can be written with another style, like below:
function my_hash(txn, salt)
   return txn.sc:sdbm(salt .. txn.sf:req_fhdr("host") .. txn.sf:path() .. txn.sf:src(), 1)
end
core.register_fetches("my-hash", my_hash)
This second form is clearer, but the first one is more compact.
The operator ".." is the string concatenation operator. If one of the two
operands is neither a string nor a number, an error occurs and the execution is
immediately stopped. This is important to keep in mind for what follows.
Now I write the example on more than one line. It is an easier way to comment
the code:
 1. function my_hash(txn, salt)
 2.    local str = ""
 3.    str = str .. salt
 4.    str = str .. txn.sf:req_fhdr("host")
 5.    str = str .. txn.sf:path()
 6.    str = str .. txn.sf:src()
 7.    local result = txn.sc:sdbm(str, 1)
 8.    return result
 9. end
10.
11. core.register_fetches("my-hash", my_hash)
local
~~~~~
The first keyword is "local". This is a really important keyword. You must
understand that the function "my_hash" will be called for each HAProxy request
using the declared sample fetch. So, this function can be executed many times
in parallel.
By default, Lua uses global variables. So in this example, if the variable
"str" is declared without the keyword "local", it will be shared by all the
parallel executions of the function and, obviously, the data of the different
requests will be mixed.
This warning is very important. I tried to write useful Lua code, like a
rewrite of the statistics page, and it is a very hard thing to remember to
declare each variable as "local".
I guess that this behaviour will be the cause of many troubles reported on the
mailing list.
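A contrived sketch of the difference (the function name is only an
illustration):
function my_fetch(txn)
   str = txn.sf:path()         -- global: shared by all concurrent executions
   local str2 = txn.sf:path()  -- local: private to this call, always safe
   return str2
end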
str = str ..
~~~~~~~~~~~~
Now a short digression about the form "str = str ..". This form performs string
concatenation. Remember that Lua uses a garbage collector, so what happens when
we do "str = str .. 'another string'" ?
   str = str .. "another string"
   ^     ^   ^      ^
   1     2   3      4
Lua first executes the concatenation operator (3): it allocates memory for the
resulting string and fills this memory with the concatenation of operands 2 and
4. Next, it releases variable 1, whose old content can now be garbage
collected. Finally, the new content of 1 is the concatenation result.
What is the matter? When we do this operation many times, we consume a lot of
memory, and the string data is duplicated and moved many times. So, this
practice is expensive in execution time and memory consumption.
There are easy ways to prevent this behaviour. I guess that a C binding for
concatenation with chunks will be available ASAP (it is already written). I did
some benchmarks, comparing the execution time of 1 000 runs of 1 000
concatenations of 10 bytes each, written in pure Lua and with a C library. The
result is 10 times faster in C (1s in Lua, and 0.1s in C).
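In pure Lua, a common way to avoid the repeated reallocations is to collect the
chunks in a table and call the standard table.concat() once at the end; a
minimal sketch:
local parts = {}
for i = 1, 1000 do
   parts[#parts + 1] = "0123456789"   -- store each chunk, no string copy yet
end
local str = table.concat(parts)       -- build the final string in one pass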
txn
~~~
txn is an HAProxy object of class TXN. The documentation is available in the
HAProxy Lua API reference. This class allows access to the native HAProxy
sample fetches and converters. The object txn contains 2 members dedicated to
the sample fetches and 2 members dedicated to the converters.
The sample fetch members are "f" (as sample-Fetch) and "sf" (as String
sample-Fetch). These two members contain exactly the same functions. All the
native HAProxy sample fetches are available; obviously, the Lua-registered
sample fetches are not. Unfortunately, HAProxy sample fetch names are not
compatible with Lua function names, so they are renamed. The renaming
convention is simple: we replace all the '.', '+' and '-' by '_'. The '.' is
the object member separator, and '-' and '+' are math operators.
Now that I'm writing this article, I know Lua better than when I wrote the
sample-fetch wrapper. The original HAProxy sample-fetch names could be kept by
using the alternative manner of calling an object member, so the sample fetch
"req.fhdr" (actually renamed "req_fhdr") could be used like this:
txn.f["req.fhdr"](txn.f, ...)
However, I think that this form is not elegant.
The "s" collection return a data with a type near to the original returned type.
A string returns an Lua string, an integer returns an Lua integer and an IP
address returns an Lua string. Sometime the data is not or not yet available, in
this case it returns the Lua nil value.
The "sf" collection guarantees that a string will be always returned. If the data
is not available, an empty string is returned. The main usage of these collection
is to concatenate the returned sample-fetches without testing each function.
The parameters of the sample-fetches are according with the HAProxy
documentation.
The converters run exactly with the same manner as the sample fetches. The
only one difference is that the first parameter is the converter entry element.
The "c" collection returns a precise result, and the "sc" collection returns
always a string.
The sample-fetches used in the example function are "txn.sf:req_fhdr()",
"txn.sf:path()" and "txn.sf:src()". The converter is "txn.sc:sdbm()". The same
function with the "s" collection of sample-fetches and the "c" collection of
converter should be written like this:
 1. function my_hash(txn, salt)
 2.    local str = ""
 3.    str = str .. salt
 4.    str = str .. tostring(txn.f:req_fhdr("host"))
 5.    str = str .. tostring(txn.f:path())
 6.    str = str .. tostring(txn.f:src())
 7.    local result = tostring(txn.c:sdbm(str, 1))
 8.    return result
 9. end
10.
11. core.register_fetches("my-hash", my_hash)
tostring
~~~~~~~~
The function tostring ensures that its parameter is returned as a string. If
the parameter is a table or a thread or anything that does not have a natural
string representation, a form like the type name followed by a pointer is
returned. For example:
t = {}
print(tostring(t))
returns:
table: 0x15facc0
For objects, if the special function __tostring() is registered in the attached
metatable, it will be called with the table itself as first argument. The
HAProxy objects return their own type this way.
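For example, a table can advertise a name of its own through the "__tostring"
metatable entry:
local obj = setmetatable({}, { __tostring = function() return "MyClass" end })
print(tostring(obj))   -- prints "MyClass" instead of "table: 0x..."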
About the converters entry point
--------------------------------
In HAProxy, a converter is a stateless function that takes data as input and
returns a transformation of this data as output. In Lua, the behaviour is
exactly the same.
So, the registered Lua function doesn't have any special parameters, just a
variable as input which contains the value to convert, and it must return data.
The data given as input to the Lua converter is a string, so HAProxy will
always provide a string. If the native sample is not a string, it is converted
on a best-effort basis.
The returned value can have any type; it will be converted into a sample of the
nearest HAProxy type. The conversion rules from Lua values to HAProxy samples
are:
Lua        | HAProxy sample types
-----------+---------------------
"number"   | "sint"
"boolean"  | "bool"
"string"   | "str"
"userdata" | "bool" (false)
"nil"      | "bool" (false)
"table"    | "bool" (false)
"function" | "bool" (false)
"thread"   | "bool" (false)
The function used for registering a converter is:
core.register_converters()
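As a small sketch, a converter which uppercases its input could be registered
like this (the name "my-upper" is arbitrary; it becomes "lua.my-upper" in the
HAProxy configuration):
core.register_converters("my-upper", function(value)
   return string.upper(value)   -- input is a string, the result maps to "str"
end)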
The task entry point
--------------------
The function "core.register_task(fcn)" executes once the function "fcn" when the
scheduler starts. This way is used for executing background task. For example,
you can use this functionnality for periodically checking the health of another
service, and giving the result to each proxy needing it.
The task is started once, if you want periodic actions, you can use the
"core.sleep()" or "core.msleep()" for waiting the next runtime.
Storing Lua variables between functions in the same session
-----------------------------------------------------------
All the functions registered as actions or sample fetches can share a Lua
context. This context is a memory zone in the stack; sample fetches and actions
use the same stack, so both can access the context.
The context is accessible via the functions get_priv() and set_priv() provided
by an object of class TXN. The value given to set_priv() replaces the currently
stored value. This value can be a table, which is useful when a lot of data
needs to be shared. If the stored value is a table, you can add or remove
entries from the table without storing the whole table again. Maybe an example
will be clearer:
local t = {}
txn:set_priv(t)
t["entry1"] = "foo"
t["entry2"] = "bar"
-- this will display "foo"
print(txn:get_priv()["entry1"])
HTTP actions
============
... coming soon ...
Lua is fast, but my service requires more execution speed
=========================================================
We can write C modules for Lua. These modules can run with HAProxy as long as
they are compliant with the HAProxy Lua version. A simple example is the
"concat" module.
It is very easy to write and compile a C Lua library; however, I haven't found
documentation about this process. So the current chapter is a quick howto.
The entry point
---------------
The entry point is called "luaopen_<name>", where <name> is the name of the
".so" file. A hello world looks like this:
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
int luaopen_mymod(lua_State *L)
{
printf("Hello world\n");
return 0;
}
The build
---------
The compilation of the source file requires the Lua "include" directory. The
compilation and the link of the object file require the -fPIC option. That's
all.
cc -I/opt/lua/include -fPIC -shared -o mymod.so mymod.c
Usage
-----
You can load this module with the following Lua syntax:
require("mymod")
When you start HAProxy, this module just prints "Hello world" when it is
loaded. Please remember that HAProxy doesn't allow blocking methods, so if you
write a function doing filesystem access or synchronous network access, the
whole HAProxy process will be blocked while it executes.