Linux and macOS already use nanosecond resolution for their sleep
functions. It was just being converted from microseconds before. Since
we have mp_time_ns now, go ahead and bump the precision here. The timer
for Windows relies on timeBeginPeriod(), which operates in milliseconds,
so keep converting the units to ms there as before. There is no reason
to keep the mp_sleep_us() helper around: a multiplication by 1000 is
trivial, and the underlying OS clocks
have nanosecond precision.
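As a rough sketch of the POSIX-side nanosecond sleep described above
(the exact name mp_sleep_ns and the error handling in mpv may differ;
this only illustrates the unit change and the trivial conversion former
mp_sleep_us callers now do inline):

    #include <stdint.h>
    #include <time.h>

    // Sleep for the given number of nanoseconds (POSIX path).
    void mp_sleep_ns(int64_t ns)
    {
        if (ns <= 0)
            return;
        struct timespec ts = {
            .tv_sec  = ns / INT64_C(1000000000),
            .tv_nsec = ns % INT64_C(1000000000),
        };
        nanosleep(&ts, NULL);
    }

    // Former mp_sleep_us(us) call sites simply become:
    //     mp_sleep_ns(us * 1000);
    // The Windows path converts back down to milliseconds, since
    // timeBeginPeriod()/Sleep() work in ms.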
Make the OS-specific timer code export an mp_raw_time_us() function, and
add generic implementations of GetTimer()/GetTimerMS() using this
function. New mpv code is supposed to call mp_time_us() in situations
where precision is absolutely needed, or mp_time_s() otherwise.
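A minimal sketch of the generic wrappers built on top of the
OS-specific function (the real definitions live in timer.c and may
differ in detail; the backend file names are only examples):

    #include <stdint.h>

    // Implemented by each platform backend (e.g. timer-linux.c,
    // timer-darwin.c): absolute time in microseconds.
    uint64_t mp_raw_time_us(void);

    // Generic legacy wrappers, now shared instead of per-platform.
    unsigned int GetTimer(void)
    {
        return (unsigned int)mp_raw_time_us();          // microseconds
    }

    unsigned int GetTimerMS(void)
    {
        return (unsigned int)(mp_raw_time_us() / 1000); // milliseconds
    }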
Make it so that mp_time_us() will return a value near program start.
We don't set it to 0, though, to avoid confusion between relative and
absolute time. Instead, pick an arbitrary offset.
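One way to implement that offset (the 10-second base below is an
arbitrary value picked for this example, not necessarily what mpv
uses):

    #include <stdint.h>

    uint64_t mp_raw_time_us(void);    // from the platform backend

    static uint64_t time_offset;

    // Run once during init: mp_time_us() then starts near, but not at,
    // zero, so 0 can never be mistaken for a valid timestamp.
    void mp_time_init(void)
    {
        time_offset = mp_raw_time_us() - 10000000;  // ~10 s base
    }

    int64_t mp_time_us(void)
    {
        return (int64_t)(mp_raw_time_us() - time_offset);
    }

    double mp_time_s(void)
    {
        return mp_time_us() / 1e6;
    }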
Move the test program in timer-darwin.c to timer.c, and modify it to
work with the generic timer functions.
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.
The two commits are separate, because git is bad at tracking renames
and content changes at the same time.
Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
Move the code that calculates the time delta since the last query out of the
platform-specific drivers and into mplayer.c. The platform-specific
drivers now return absolute values only.
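For illustration, the caller-side delta computation now looks roughly
like this (the helper and variable names below are made up for the
example; only GetTimer() returning an absolute value comes from the
text above):

    unsigned int GetTimer(void);      // absolute time in microseconds

    static unsigned int last_time;

    // Delta since the previous query, now computed in mplayer.c rather
    // than inside each platform driver. Unsigned arithmetic keeps the
    // subtraction well defined across timer wraparound.
    static unsigned int get_time_delta_us(void)
    {
        unsigned int now = GetTimer();
        unsigned int delta = now - last_time;
        last_time = now;
        return delta;
    }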
The way the code in timer-darwin.c uses doubles in wrapping arithmetic
looks questionable, and this change might make any problems there more
visible.