mirror of
https://github.com/mpv-player/mpv
synced 2024-12-23 15:22:09 +00:00
timer: switch to CLOCK_MONOTONIC
Apparently, this is always _really_ monotonic, despite what the Linux man pages say, so it should be much better than gettimeofday(). (At times there were kernel bugs which broke the monotonic property.) From the perspective of the player, time can still be discontinuous (you could just stop the process with ^Z), but at least it's guaranteed to be monotonic without further hacks required.

Also note that clock_gettime() returns the time in nanoseconds, while we want microseconds only, because that's the unit we chose internally. Another problem is that nanoseconds wrap pretty quickly (in under 300 years with 63 bits), so it's just better to use microseconds. The division won't make the code much slower (compilers can avoid a real division).

Note: this expects that the system provides clock_gettime() as well as CLOCK_MONOTONIC. Both are optional according to POSIX. The only system I know of which doesn't have them, OSX, has separate timer code anyway, but I still don't know whether more obscure (yet supported) platforms have a problem with this, so I'm playing it safe. This still expects that CLOCK_MONOTONIC always works at runtime if it is defined.
parent f50c1d2c26
commit 3620cf97ad
@@ -40,12 +40,22 @@ void mp_sleep_us(int64_t us)
 #endif
 }
 
+#if defined(_POSIX_TIMERS) && _POSIX_TIMERS > 0 && defined(CLOCK_MONOTONIC)
+uint64_t mp_raw_time_us(void)
+{
+    struct timespec ts;
+    if (clock_gettime(CLOCK_MONOTONIC, &ts))
+        abort();
+    return ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
+}
+#else
 uint64_t mp_raw_time_us(void)
 {
     struct timeval tv;
     gettimeofday(&tv,NULL);
     return tv.tv_sec * 1000000LL + tv.tv_usec;
 }
+#endif
 
 void mp_raw_time_init(void)
 {