the casts of the argument to unsigned int suppressed diagnosis of
errors like passing a pointer instead of a character. putting the
actual function call in an unreachable branch restores any diagnostics
that would be present if the macros didn't exist and functions were
used.
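a minimal sketch of the pattern, assuming a ctype-style macro such as
isdigit (the exact macro body is illustrative):

  #define isdigit(a) (0 ? isdigit(a) : ((unsigned)(a)-'0') < 10)

the call in the never-taken branch is type-checked like a real function
call, so passing a pointer is diagnosed, while only the arithmetic
branch is ever evaluated.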
the C standard specifies that setjmp is a macro, but longjmp is a
normal function. a macro version of it would be permitted (albeit
useless) for C (not C++), but would have to be a function-like macro,
not an object-like one.
while it's the same for all presently supported archs, it differs at
least on sparc, and conceptually it's no less arch-specific than the
other O_* macros. O_SEARCH and O_EXEC are still defined in terms of
O_PATH in the main fcntl.h.
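for illustration, the relationship in the main fcntl.h looks roughly
like this, with O_PATH itself now coming from the arch's bits header
(the octal value shown is the common one, not sparc's):

  #define O_PATH   010000000
  #define O_SEARCH O_PATH
  #define O_EXEC   O_PATH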
commit 559de8f5f0 redefined FLT_ROUNDS
to use an external function that can report the actual current
rounding mode, rather than always reporting round-to-nearest. however,
float.h did not include 'extern "C"' wrapping for C++, so C++ programs
using FLT_ROUNDS ended up with an unresolved reference to a
name-mangled C++ function __flt_rounds.
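a sketch of the fix, assuming the float.h declaration takes roughly
this form:

  #ifdef __cplusplus
  extern "C" {
  #endif
  int __flt_rounds(void);
  #ifdef __cplusplus
  }
  #endif
  #define FLT_ROUNDS (__flt_rounds())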
the previous values (2k min and 8k default) were too small for some
archs. aarch64 reserves 4k in the signal context for future extensions
and requires about 4.5k total, and powerpc reportedly uses over 2k.
the new minimums are chosen to fit the saved context and also allow a
minimal signal handler to run.
since the default (SIGSTKSZ) has always been 6k larger than the
minimum, it is also increased to maintain the 6k usable by the signal
handler. this happens to be able to store one pathname buffer and
should be sufficient for calling any function in libc that doesn't
involve conversion between floating point and decimal representations.
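usage sketch: an alternate signal stack sized with the new default,
leaving roughly 6k usable above the saved context:

  #include <signal.h>

  static char altstack[SIGSTKSZ];

  int setup_altstack(void)
  {
      stack_t ss = { .ss_sp = altstack, .ss_flags = 0, .ss_size = sizeof altstack };
      return sigaltstack(&ss, 0);
  }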
x86 (both 32-bit and 64-bit variants) may also need a larger minimum
(around 2.5k) in the future to support avx-512, but the values on
these archs are left alone for now pending further analysis.
the value for PTHREAD_STACK_MIN is not increased to match MINSIGSTKSZ
at this time. this is so as not to preclude applications from using
extremely small thread stacks when they know they will not be handling
signals. unfortunately cancellation and multi-threaded set*id() use
signals as an implementation detail and therefore require a stack
large enough for a signal context, so applications which use extremely
small thread stacks may still need to avoid using these features.
normally time.h would provide a definition for this struct, but
depending on the feature test macros in use, it may not be exposed,
leading to warnings when it's used in the function prototypes.
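one way to avoid the warning, sketched here under the assumption that
the struct in question is struct timespec (the prototype shown is
hypothetical), is an incomplete tag declaration before the prototypes:

  struct timespec;
  int some_function(int, const struct timespec *);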
these macros have the same distinct definition on blackfin, frv, m68k,
mips, sparc and xtensa kernels. POLLMSG and POLLRDHUP additionally
differ on sparc.
the memory model we use internally for atomics permits plain loads of
values which may be subject to concurrent modification without
requiring that a special load function be used. since a compiler is
free to make transformations that alter the number of loads or the way
in which loads are performed, the compiler is theoretically free to
break this usage. the most obvious concern is with atomic cas
constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be
transformed to a_cas(p,*p,f(*p)); where the latter is intended to show
multiple loads of *p whose resulting values might fail to be equal;
this would break the atomicity of the whole operation. but even more
fundamental breakage is possible.
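as a concrete sketch of the construct at stake (a_cas is a stand-in
declaration for the libc-internal compare-and-swap, assumed to return
the old value of *p):

  int a_cas(volatile int *p, int t, int s);

  void atomic_inc(volatile int *p)
  {
      int old;
      do old = *p;                       /* volatile load: not re-fetched by the compiler */
      while (a_cas(p, old, old+1) != old);
  }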
with the changes being made now, objects that may be modified by
atomics are modeled as volatile, and the atomic operations performed
on them by other threads are modeled as asynchronous stores by
hardware which happens to be acting on the request of another thread.
such modeling of course does not itself address memory synchronization
between cores/cpus, but that aspect was already handled. this all
seems less than ideal, but it's the best we can do without mandating a
C11 compiler and using the C11 model for atomics.
in the case of pthread_once_t, the ABI type of the underlying object
is not volatile-qualified. so we are assuming that accessing the
object through a volatile-qualified lvalue via casts yields volatile
access semantics. the language of the C standard is somewhat unclear
on this matter, but this is an assumption the linux kernel also makes,
and seems to be the correct interpretation of the standard.
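a minimal sketch of the cast idiom, assuming the underlying object is a
plain int:

  #include <pthread.h>

  static int once_state(pthread_once_t *control)
  {
      /* the ABI type is not volatile-qualified, but the access goes
         through a volatile-qualified lvalue and is assumed to get
         volatile access semantics */
      return *(volatile int *)control;
  }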
this is a new extension which is presently intended only for
experimental and internal libc use. interface and behavior details may
change subject to feedback and experience from using it internally.
the basic concept for the new PTHREAD_CANCEL_MASKED state is that the
first cancellation point to observe the cancellation request fails
with an errno value of ECANCELED rather than acting on cancellation,
allowing the caller to process the status and choose whether/how to
act upon it.
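usage sketch, assuming the masked state is installed with
pthread_setcancelstate like the existing states and exposed under
_GNU_SOURCE:

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <unistd.h>
  #include <errno.h>

  ssize_t read_or_report_cancel(int fd, void *buf, size_t n)
  {
      int old;
      pthread_setcancelstate(PTHREAD_CANCEL_MASKED, &old);
      ssize_t r = read(fd, buf, n);          /* a cancellation point */
      if (r < 0 && errno == ECANCELED) {
          /* cancellation was observed; the caller decides whether/how to act */
      }
      pthread_setcancelstate(old, 0);
      return r;
  }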
these socket options are new in linux v3.19, introduced in commit
2c8c56e15df3d4c2af3d656e44feb18789f75837 and commit
89aa075832b0da4402acebd698d0411dcc82d03e
SO_INCOMING_CPU allows querying the cpu on which a socket is managed
inside the kernel, so that polling of a large number of sockets can be
optimized accordingly.
SO_ATTACH_BPF lets eBPF programs (created by the bpf syscall) be
attached to sockets.
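usage sketch for both options (prog_fd is an fd returned by the bpf
syscall):

  #include <sys/socket.h>

  int incoming_cpu(int fd)
  {
      int cpu;
      socklen_t len = sizeof cpu;
      return getsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, &len) ? -1 : cpu;
  }

  int attach_bpf(int fd, int prog_fd)
  {
      return setsockopt(fd, SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof prog_fd);
  }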
the definitions are generic for all kernel archs. exposure of these
macros now only occurs on the same feature test as for the function
accepting them, which is believed to be more correct.
PR_SET_MM_MAP was introduced as a subcommand for PR_SET_MM in
linux v3.18 commit f606b77f1a9e362451aca8f81d8f36a3a112139e
the associated struct type is replicated in sys/prctl.h using
libc types.
example usage:
struct prctl_mm_map *p;
...
prctl(PR_SET_MM, PR_SET_MM_MAP, p, sizeof *p);
the kernel side supported struct size may be queried with
the PR_SET_MM_MAP_SIZE subcommand.
per the rules for hexadecimal integer constants, the previous
definitions were correctly treated as having unsigned type except
possibly when used in preprocessor conditionals, where all arithmetic
takes place as intmax_t or uintmax_t. the explicit 'u' suffix ensures
that they are treated as unsigned in all contexts.
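for illustration (the constant here is hypothetical, not one of the
affected macros):

  #if -1 < 0x80000000     /* true: in #if, the unsuffixed constant is (signed) intmax_t */
  #endif
  #if -1 < 0x80000000u    /* false: the u suffix keeps it unsigned here too, as in ordinary code */
  #endif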
it's unclear whether compilers which provide pure imaginary types
might produce a pure imaginary expression for 1.0fi. using 0.0f+1.0fi
ensures that the result is explicitly complex and makes this obvious
to human readers too.
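a sketch, assuming the definition in question is complex.h's
_Complex_I:

  #define _Complex_I (0.0f+1.0fi)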
based on patches by Jens Gustedt. these macros need to be usable in
static initializers, and the old definitions were not.
there is no portable way to provide correct definitions for these
macros unless the compiler supports pure imaginary types. a portable
definition is provided for this case even though there are presently
no compilers that can use it. gcc and compatible compilers provide a
builtin function that can be used, but clang fails to support this and
instead requires a construct which is a constraint violation and which
is only a constant expression as a clang-specific extension.
since these macros are a namespace violation in pre-C11 profiles, and
since no known pre-C11 compilers provide any way to define them
correctly anyway, the definitions have been made conditional on C11.
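a rough sketch of the two compiler-specific cases described, shown for
the double-precision CMPLX only:

  #if __STDC_VERSION__ >= 201112L
  #ifdef __clang__
  /* compound literal: a constraint violation, constant only as a clang extension */
  #define CMPLX(x, y) (+(_Complex double){ (double)(x), (double)(y) })
  #else
  /* gcc and compatible compilers: builtin usable in static initializers */
  #define CMPLX(x, y) (__builtin_complex((double)(x), (double)(y)))
  #endif
  #endif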
based on patch by Timo Teräs, with some corrections to bounds checking
code and other minor changes.
while they are borderline scope creep, the functions added are fairly
small and are roughly the minimum code needed to use the results of
the res_query API without re-implementing error-prone DNS packet
parsing, and they are used in practice by some kerberos related
software and possibly other things. at this time there is no intent to
implement further nameser.h API functions.
C++ programmers typically expect something like "::function(x,y)" to work
and may be surprised to find that "(::function)(x,y)" is actually required
due to the headers declaring a macro version of some standard functions.
We already omit function-like macros for C++ in most cases where there is
a real function available. This commit extends this to the remaining
function-like macros which have a real function version.
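A header-style sketch of the convention (func and its macro body are
hypothetical stand-ins for one of the affected functions):

  int func(int);              /* real function: ::func(x) works in C++ */
  #ifndef __cplusplus
  #define func(x) ((x) + 1)   /* function-like macro, now exposed only to C */
  #endif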
new in linux v3.17 commit 40e041a2c858b3caefc757e26cb85bfceae5062b
sealing allows some operations to be blocked on a file, which makes
file access safer when fds are shared between processes (currently
only supported for shared memory fds)
flags:
F_SEAL_SEAL prevents further sealing
F_SEAL_SHRINK prevents file from shrinking
F_SEAL_GROW prevents file from growing
F_SEAL_WRITE prevents writes
fcntl commands:
F_GET_SEALS get the current seal flags
F_ADD_SEALS add new seal flags
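usage sketch, assuming the new macros are exposed under _GNU_SOURCE:

  #define _GNU_SOURCE
  #include <fcntl.h>

  int seal_read_only(int memfd)
  {
      if (fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK|F_SEAL_GROW|F_SEAL_WRITE) < 0)
          return -1;
      return fcntl(memfd, F_GET_SEALS);   /* the seals now in effect */
  }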
as a result of commit ab8f6a6e42, this
definition is now equivalent to the actual "default profile" which
appears immediately below in features.h, and which defines both
_BSD_SOURCE and _XOPEN_SOURCE.
the intent of providing a _DEFAULT_SOURCE, which glibc also now
provides, is to give applications a way to "get back" the default
feature profile when it was lost either by compiler flags that inhibit
it (such as -std=c99) or by library-provided predefined macros (such
as -D_POSIX_C_SOURCE=200809L) which may inhibit exposure of features
that were otherwise visible by default and which the application may
need. without _DEFAULT_SOURCE, the application had to encode knowledge of
a particular libc's defaults, and such knowledge was fragile and
subject to bitrot.
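for example, a source file built with -std=c99 or with
-D_POSIX_C_SOURCE=200809L can ask for the default profile back
explicitly:

  #define _DEFAULT_SOURCE 1
  #include <unistd.h>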
eventually the names _GNU_SOURCE and _BSD_SOURCE should be phased out
in favor of the more-descriptive and more-accurate _ALL_SOURCE and
_DEFAULT_SOURCE, leaving the old names as aliases but using the new
ones internally. however this is a more invasive change that would
require extensive regression testing, so it is deferred.
the vast majority of these failures seem to have been oversights at
the time _BSD_SOURCE was added, or perhaps shortly afterward. the one
which may have had some reason behind it is omission of setpgrp from
the _BSD_SOURCE feature profile, since the standard setpgrp interface
conflicts with a legacy (pre-POSIX) BSD interface by the same name.
however, such omission is not aligned with our general policy in this
area (for example, handling of similar _GNU_SOURCE cases) and should
not be preserved.
open file description locks are inherited across fork and only
automatically dropped after the last fd of the file description is
closed; they can be used to synchronize between threads that open
separate file descriptions for the same file.
new in linux 3.15 commit 0d3f7a2dd2f5cf9642982515e020c1aee2cf7af6
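usage sketch, assuming the new commands are exposed under _GNU_SOURCE:

  #define _GNU_SOURCE
  #include <fcntl.h>

  int ofd_lock_whole_file(int fd)
  {
      struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET, .l_start = 0, .l_len = 0 };
      return fcntl(fd, F_OFD_SETLKW, &fl);   /* blocks until the whole-file write lock is held */
  }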
based on patch by Jens Gustedt.
mtx_t and cnd_t are defined in such a way that they are formally
"compatible types" with pthread_mutex_t and pthread_cond_t,
respectively, when accessed from a different translation unit. this
makes it possible to implement the C11 functions using the pthread
functions (which will dereference them with the pthread types) without
having to use the same types, which would necessitate either namespace
violations (exposing pthread type names in threads.h) or incompatible
changes to the C++ name mangling ABI for the pthread types.
for the rest of the types, things are much simpler; using identical
types is possible without any namespace considerations.
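a rough sketch of the idea (the member layout shown is illustrative
only, not the real ABI layout):

  /* threads.h gives mtx_t the same tagless, member-for-member struct
     definition that pthread.h gives pthread_mutex_t: */
  typedef struct { union { int __i[10]; volatile void *volatile __p[5]; } __u; } mtx_t;
  /* because the definitions match and neither has a tag, the two
     typedefs name compatible types when seen from different
     translation units */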
there is no blksize64_t (blksize_t is always long), but there are
fsblkcnt64_t and fsfilcnt64_t types in sys/stat.h and sys/types.h,
and glob.h was missing glob64_t.
this function is needed for some important practical applications of
ABI compatibility, and may be useful for supporting some non-portable
software at the source level too.
I was hesitant to add a function which imposes any constraints on
malloc internals; however, it turns out that any malloc implementation
which has realloc must already have an efficient way to determine the
size of existing allocations, so no additional constraint is imposed.
for now, some internal malloc definitions are duplicated in the new
source file. if/when malloc is refactored to put them in a shared
internal header file, these could be removed.
since malloc_usable_size is conventionally declared in malloc.h, the
empty stub version of this file was no longer suitable. it's updated
to provide the standard allocator functions, nonstandard ones (even if
stdlib.h would not expose them based on the feature test macros in
effect), and any malloc-extension functions provided (currently, only
malloc_usable_size).
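usage sketch:

  #include <malloc.h>
  #include <stdlib.h>

  size_t usable(void)
  {
      void *p = malloc(100);
      size_t n = p ? malloc_usable_size(p) : 0;   /* at least 100 on success */
      free(p);
      return n;
  }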
unfortunately this needs to be able to vary by arch, because of a huge
mess GCC made: the GCC definition, which became the ABI, depends on
quirks in GCC's definition of __alignof__, which does not match the
formal alignment of the type.
GCC's __alignof__ unexpectedly exposes an implementation detail,
its "preferred alignment" for the type, rather than the formal/ABI
alignment of the type, which it only actually uses in structures. on
most archs the two values are the same, but on some (at least i386)
the preferred alignment is greater than the ABI alignment.
I considered using _Alignas(8) unconditionally, but on at least one
arch (or1k), the alignment of max_align_t with GCC's definition is
only 4 (even the "preferred alignment" for these types is only 4).
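for illustration, the definition used on the affected archs takes
roughly this shape, with members chosen so that GCC's __alignof__ of
the struct matches the intended value (the real per-arch definitions
live in the bits headers):

  typedef struct { long long __ll; long double __ld; } max_align_t;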
ETH_P_80221 is the ethertype for IEEE Std 802.21 - Media Independent Handover Protocol
introduced in linux 3.15 commit b62faf3cdc875a1ac5a10696cf6ea0b12bab1596
ETH_P_LOOPBACK is the correct packet type for loopback in IEEE 802.3*
introduced in linux 3.15 commit 61ccbb684421d374fdcd7cf5d6b024b06f03ce4e
some defines were shuffled to be in ascending order and match the kernel header
this function provides a way for third-party library code to use the
same logic that's used internally in libc for suppressing untrusted
input/state (e.g. the environment) when the application is running
with privileges elevated by the setuid or setgid bit or some other
mechanism. its semantics are intended to match the openbsd function by
the same name.
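usage sketch following that intent:

  #define _GNU_SOURCE
  #include <stdlib.h>
  #include <unistd.h>

  const char *trusted_getenv(const char *name)
  {
      if (issetugid()) return 0;   /* privileges are elevated: ignore the environment */
      return getenv(name);
  }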
there was some question as to whether this function is necessary:
getauxval(AT_SECURE) was proposed as an alternative. however, this has
several drawbacks. the most obvious is that it asks programmers to be
aware of an implementation detail of ELF-based systems (the aux
vector) rather than simply the semantic predicate to be checked. and
trying to write a safe, reliable version of issetugid in terms of
getauxval is difficult. for example, early versions of the glibc
getauxval did not report ENOENT, which could lead to false negatives
if AT_SECURE was not present in the aux vector (this could probably
only happen when running on non-linux kernels under linux emulation,
since glibc does not support linux versions old enough to lack
AT_SECURE). as for musl, getauxval has always properly reported
errors, but prior to commit 7bece9c209,
the musl implementation did not emulate AT_SECURE if missing, which
would result in a false positive. since musl actually does partially
support kernels that lack AT_SECURE, this was problematic.
the intent is that library authors will use issetugid if its
availability is detected at build time, and only fall back to the
unreliable alternatives on systems that lack it.
patch by Brent Cook. commit message/rationale by Rich Felker.
With the exception of a fenv implementation, the port is fully featured.
The port has been tested in or1ksim, the golden reference functional
simulator for OpenRISC 1000.
It passes all libc-test tests (except the math tests that
require a fenv implementation).
The port assumes an or1k implementation that has support for
atomic instructions (l.lwa/l.swa).
Although it passes all the libc-test tests, the port is still
in an experimental state, and has so far seen very little
'real-world' use.