This generates the tables and indexes which will be used by the HPACK
encoder. The headers are sorted by length, then by statistical frequency,
then by direction (with a preference for responses), then by name, then by
index. The purpose of this ordering is to speed up lookups.
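
For illustration, this ordering can be expressed as a qsort() comparator
such as the sketch below; the structure, the field names and the sort
directions for length and frequency are assumptions, not the generator's
actual code.

#include <string.h>

/* hypothetical header descriptor, for illustration only */
struct hdr {
    const char *name;   /* header name */
    int len;            /* name length */
    int freq;           /* statistical frequency of use */
    int dir;            /* 1 if mostly seen in responses, 0 otherwise */
    int idx;            /* HPACK static table index */
};

static int hdr_cmp(const void *a, const void *b)
{
    const struct hdr *h1 = a, *h2 = b;
    int ret;

    if (h1->len != h2->len)
        return h1->len - h2->len;   /* by length (ascending assumed) */
    if (h1->freq != h2->freq)
        return h2->freq - h1->freq; /* most frequent first (assumed) */
    if (h1->dir != h2->dir)
        return h2->dir - h1->dir;   /* responses preferred */
    ret = strcmp(h1->name, h2->name);
    if (ret)
        return ret;                 /* then by name */
    return h1->idx - h2->idx;       /* finally by index */
}

/* typical use : qsort(hdrs, nb_hdrs, sizeof(*hdrs), hdr_cmp); */
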
Now all the code used to manipulate chunks uses a struct buffer instead.
The functions are still called "chunk*", and some of them will progressively
move to the generic buffer handling code as they are cleaned up.
Chunks are only a subset of a buffer (a non-wrapping version with no head
offset). Despite this we still carry a lot of duplicated code between
buffers and chunks. Replacing chunks with buffers would significantly
reduce the maintenance effort. This first patch renames the chunk's
fields to match the names and types used by struct buffer, with the goal
of isolating the code changes from the declaration changes.
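
As a rough sketch of what the renaming amounts to (the exact declarations
live in the tree; the "before" fields below are the usual str/size/len
triplet and are only recalled here for illustration) :

/* before the rename (simplified) */
struct chunk {
    char *str;    /* the string itself */
    int size;     /* allocated size */
    int len;      /* current length */
};

/* after the rename : buffer-style names and types, same purpose */
struct chunk {
    char *area;   /* points to <size> bytes */
    size_t size;  /* buffer size in bytes */
    size_t data;  /* amount of data present */
};
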
Most of the changes were made with spatch using this coccinelle script :
@rule_d1@
typedef chunk;
struct chunk chunk;
@@
- chunk.str
+ chunk.area

@rule_d2@
typedef chunk;
struct chunk chunk;
@@
- chunk.len
+ chunk.data

@rule_i1@
typedef chunk;
struct chunk *chunk;
@@
- chunk->str
+ chunk->area

@rule_i2@
typedef chunk;
struct chunk *chunk;
@@
- chunk->len
+ chunk->data
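
Applied to a hypothetical call site (variable names are illustrative only,
<out> being a struct chunk and <in> a struct chunk pointer), the rules
above rewrite :

memcpy(out.str, in->str, in->len);
out.len = in->len;

into :

memcpy(out.area, in->area, in->data);
out.data = in->data;

The script can be run tree-wide with a command along the lines of
"spatch --sp-file rename.cocci --in-place src/*.c" (illustrative
invocation, the exact one is not recorded here).
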
Some minor updates to 3 http functions had to be performed so that they
take size_t arguments instead of ints, in order to match the unsigned
length used here.
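
The shape of that change is simply the following (hypothetical prototype,
not one of the three functions actually touched) :

/* before : the length is a signed int */
int http_parse_value(const char *value, int vlen);

/* after : the length becomes a size_t */
int http_parse_value(const char *value, size_t vlen);
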
This one was built by studying the HPACK Huffman table (RFC7541
appendix B). It creates 5 small tables (4*512 bytes, 1*64 bytes) to
map one byte at a time from the input stream based on the following
observations :
* rht_bit31_24[256] is indexed on bits 31..24 when < 0xfe
* rht_bit24_17[256] is indexed on bits 24..17 when 31..24 >= 0xfe
* rht_bit15_11_fe[32] is indexed on bits 15..11 when 24..17 == 0xfe
* rht_bit15_8[256] is indexed on bits 15..8 when 24..17 == 0xff
* rht_bit11_4[256] is indexed on bits 11..4 when 15..8 == 0xff
* when 11..4 == 0xff, 3..2 provide the following mapping :
  * 00 => 0x0a, 01 => 0x0d, 10 => 0x16, 11 => EOS
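
The observations above translate into a lookup cascade along these lines.
This is only a sketch : it assumes each table entry carries the decoded
byte and its code length in bits, which is not necessarily how the real
tables are laid out, and bit consumption from the input stream is left
out.

#include <stdint.h>

struct rht {
    uint8_t c;  /* decoded byte (assumed entry layout) */
    uint8_t l;  /* code length in bits (assumed entry layout) */
};

extern const struct rht rht_bit31_24[256];
extern const struct rht rht_bit24_17[256];
extern const struct rht rht_bit15_11_fe[32];
extern const struct rht rht_bit15_8[256];
extern const struct rht rht_bit11_4[256];

/* <code> holds the next input bits, MSB-aligned in a 32-bit word.
 * Returns the matching entry ; l == 0 stands for EOS in this sketch.
 */
static inline struct rht rht_lookup(uint32_t code)
{
    uint8_t b31_24 = code >> 24;
    uint8_t b24_17 = (code >> 17) & 0xff;
    uint8_t b15_8  = (code >>  8) & 0xff;
    uint8_t b11_4  = (code >>  4) & 0xff;

    if (b31_24 < 0xfe)
        return rht_bit31_24[b31_24];
    if (b24_17 == 0xfe)
        return rht_bit15_11_fe[(code >> 11) & 0x1f];
    if (b24_17 != 0xff)
        return rht_bit24_17[b24_17];
    if (b15_8 != 0xff)
        return rht_bit15_8[b15_8];
    if (b11_4 != 0xff)
        return rht_bit11_4[b11_4];

    /* bits 11..4 == 0xff : bits 3..2 select the last four 30-bit codes */
    switch ((code >> 2) & 0x3) {
    case 0:  return (struct rht){ .c = 0x0a, .l = 30 };
    case 1:  return (struct rht){ .c = 0x0d, .l = 30 };
    case 2:  return (struct rht){ .c = 0x16, .l = 30 };
    default: return (struct rht){ .c = 0,    .l = 0 }; /* EOS */
    }
}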