fix signedness of UINT32_MAX and UINT64_MAX at the preprocessor level

per the rules for hexadecimal integer constants, the previous
definitions were correctly treated as having unsigned type except
possibly when used in preprocessor conditionals, where all arithmetic
takes place as intmax_t or uintmax_t. the explicit 'u' suffix ensures
that they are treated as unsigned in all contexts.
This commit is contained in:
Rich Felker 2014-12-21 02:30:29 -05:00
parent 814aae2009
commit dac4fc49ae
1 changed file with 2 additions and 2 deletions


@@ -47,8 +47,8 @@ typedef uint64_t uint_least64_t;
 #define UINT8_MAX (0xff)
 #define UINT16_MAX (0xffff)
-#define UINT32_MAX (0xffffffff)
-#define UINT64_MAX (0xffffffffffffffff)
+#define UINT32_MAX (0xffffffffu)
+#define UINT64_MAX (0xffffffffffffffffu)
 #define INT_FAST8_MIN INT8_MIN
 #define INT_FAST64_MIN INT64_MIN