2011-02-12 05:22:29 +00:00
#ifndef _STDDEF_H
#define _STDDEF_H
2013-11-25 02:42:55 +00:00
#ifdef __cplusplus
use a common definition of NULL as 0L for C and C++
the historical mess of having different definitions for C and C++
comes from the traditional C definition as (void *)0 and the fact that
(void *)0 can't be used in C++ because it does not convert to other
pointer types implicitly. however, using plain 0 in C++ exposed bugs
in C++ programs that call variadic functions with NULL as an argument
and (wrongly; this is UB) expect it to arrive as a null pointer. on
64-bit machines, the high bits end up containing junk. glibc dodges
the issue by using a GCC extension, __null, to define NULL; this is
observably non-conforming, because a conforming application could
observe the definition of NULL via stringizing and see that it is
neither an integer constant expression with value zero nor such an
expression cast to void *.
switching to 0L eliminates the issue and provides compatibility with
broken applications, since on all musl targets, long and pointers have
the same size, representation, and argument-passing convention. we
could maintain separate C and C++ definitions of NULL (i.e. use 0L in
C++ and (void *)0 in C), but after careful analysis, it seems
extremely difficult for a C program even to determine whether NULL has
integer or pointer type, much less depend, in subtle and unintentional
ways, on whether it does. C89 seems to have no way to make the
distinction. in C99, the fact that (int)(void *)0 is not an integer
constant expression, along with subtle VLA/sizeof semantics, can be
used to make the distinction, but many compilers are non-conforming
and give the wrong result for this test anyway. in C11, _Generic can
trivially make the distinction, but it seems unlikely that code
targeting C11 would be so backwards as to care which definition of
NULL an implementation uses. (an illustrative _Generic probe follows
the NULL definitions below.)
as such, the simplest path of using the same definition of NULL in
both C and C++ was chosen. the #undef directive was also removed, so
that the compiler can catch and warn or error on redefinition if buggy
programs have defined their own versions of NULL before including
standard headers.
2013-01-19 01:35:26 +00:00
#define NULL 0L
2013-11-25 02:42:55 +00:00
#else
#define NULL ((void*)0)
#endif
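For illustration only (this is not part of the header): a minimal C11 probe of the kind the commit message refers to. _Generic dispatches on the type of NULL itself, so it can report whether NULL expands to a pointer or an integer constant without invoking any undefined behavior. The program and its output strings are my own.

#include <stddef.h>
#include <stdio.h>

int main(void)
{
	/* selects the void * branch if NULL is ((void*)0), the default
	 * branch if NULL is an integer constant such as 0 or 0L */
	puts(_Generic(NULL,
	              void *: "NULL has pointer type, e.g. ((void*)0)",
	              default: "NULL has integer type, e.g. 0 or 0L"));

	/* the variadic-call bug described in the commit message is the
	 * classic execl("/bin/true", "true", NULL) pattern: passing NULL
	 * through "..." is only safe when it is pointer-sized, as 0L is
	 * on all musl targets. */
	return 0;
}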
2011-02-12 05:22:29 +00:00
#define __NEED_ptrdiff_t
#define __NEED_size_t
#define __NEED_wchar_t
add max_align_t definition for C11 and C++11
unfortunately this needs to be able to vary by arch, because of a huge
mess GCC made: the GCC definition, which became the ABI, depends on
quirks in GCC's definition of __alignof__, which does not match the
formal alignment of the type.
GCC's __alignof__ unexpectedly exposes an implementation detail, its
"preferred alignment" for the type, rather than the formal/ABI
alignment of the type, which GCC only actually applies to struct
members. on most archs the two values are the same, but on some (at
least i386) the preferred alignment is greater than the ABI alignment.
(an illustrative probe of this mismatch follows the max_align_t block
below.)
I considered using _Alignas(8) unconditionally, but on at least one
arch (or1k), the alignment of max_align_t with GCC's definition is
only 4 (even the "preferred alignment" for these types is only 4).
2014-08-20 21:20:14 +00:00
#if __STDC_VERSION__ >= 201112L || __cplusplus >= 201103L
#define __NEED_max_align_t
#endif
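For illustration only (not musl code): a small probe, assuming a GCC-compatible compiler and C11, that makes the mismatch described in the commit message visible. On the i386 SysV psABI a double struct member is placed at offset 4 even though GCC's __alignof__(double) reports 8; on most 64-bit targets the numbers agree. The struct name "probe" is invented for the example.

#include <stddef.h>
#include <stdio.h>

struct probe { char c; double d; };

int main(void)
{
	/* GCC's "preferred" alignment for double */
	printf("__alignof__(double)       = %zu\n", (size_t)__alignof__(double));
	/* the formal C11 alignment */
	printf("_Alignof(double)          = %zu\n", (size_t)_Alignof(double));
	/* the alignment actually used for struct layout (the ABI alignment) */
	printf("offsetof(struct probe, d) = %zu\n", offsetof(struct probe, d));
	/* what this header ends up exposing via bits/alltypes.h */
	printf("_Alignof(max_align_t)     = %zu\n", (size_t)_Alignof(max_align_t));
	return 0;
}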
2011-02-12 05:22:29 +00:00
#include <bits/alltypes.h>
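The __NEED_* macros defined above drive the generated bits/alltypes.h, which emits only the typedefs that were requested and marks each one as defined so later headers do not repeat it. Roughly, each entry in the generated header follows the pattern sketched below; the sketch is illustrative only, and the underlying type shown for size_t is an assumption, since the real type varies by arch.

/* sketch of the bits/alltypes.h pattern, not the real generated file */
#if defined(__NEED_size_t) && !defined(__DEFINED_size_t)
typedef unsigned long size_t;   /* illustrative; the real type is per-arch */
#define __DEFINED_size_t
#endif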
2012-12-05 05:00:42 +00:00
#if __GNUC__ > 3
#define offsetof(type, member) __builtin_offsetof(type, member)
#else
2011-02-12 05:22:29 +00:00
#define offsetof(type, member) ((size_t)( (char *)&(((type *)0)->member) - (char *)0 ))
2012-12-05 05:00:42 +00:00
#endif
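A brief usage note, not part of the header: whichever branch is taken above, offsetof(type, member) evaluates to the byte offset of a member within its struct, which is what makes container_of-style code work. The struct and macro names below are invented for the example.

#include <stddef.h>
#include <stdio.h>

struct node { int key; double value; };

/* recover a pointer to the enclosing struct from a pointer to a member */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

int main(void)
{
	struct node n = { .key = 7, .value = 3.5 };
	double *vp = &n.value;
	printf("offsetof(struct node, value) = %zu\n",
	       offsetof(struct node, value));
	printf("recovered key = %d\n",
	       container_of(vp, struct node, value)->key);
	return 0;
}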
2011-02-12 05:22:29 +00:00
#endif