It turns out the glibc people are very clever and return an error
(ERANGE) if the thread name exceeds the maximum length supported by the
kernel, instead of truncating the name. So everyone has to hardcode the
currently allowed Linux kernel name length limit (16 bytes, including
the terminating NUL), even if it gets extended later.
Also the Lua script filenames could get too long; use the client name
instead.
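
A minimal sketch of the resulting workaround; the helper name and the
use of snprintf() for truncation are illustrative, not mpv's actual
code:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>

    // glibc's pthread_setname_np() fails with ERANGE instead of
    // truncating, so truncate to the hardcoded kernel limit ourselves.
    static void set_thread_name(const char *name)
    {
        char tname[16]; // current Linux limit, including terminating NUL
        snprintf(tname, sizeof(tname), "%s", name);
        pthread_setname_np(pthread_self(), tname);
    }
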
Another strange thing is that on Linux, unrelated threads "inherit" the
name of the thread that created them. This leads to effectively random
thread names, because there's not necessarily a strong relation between
these threads (e.g. a script command triggers filter recreation -> the
filter's threads get tagged with the script's thread name). Unfortunate.
Especially with other components in play (libavcodec, OSX stuff), the
thread list can get quite crowded. Setting the thread name helps when
debugging.
Since this is not portable, we check the OS variants in waf configure.
old-configure just gets a special case for glibc, since doing a full
check there would probably be a waste of effort.
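
Inside a helper like the one sketched above, the dispatch then looks
something like this; the HAVE_* macros are hypothetical stand-ins for
whatever names the configure checks actually define:

    #if HAVE_GLIBC_THREAD_NAME
        pthread_setname_np(pthread_self(), tname);
    #elif HAVE_OSX_THREAD_NAME
        pthread_setname_np(tname);      // OSX names the calling thread only
    #elif HAVE_BSD_THREAD_NAME
        pthread_set_name_np(pthread_self(), tname);
    #endif
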
Use the absolute time as returned by mp_time_us() for
mpthread_cond_timedwait(), instead of calculating the struct timespec
value from a relative timeout. This (probably) makes it easier to wait
for a specific deadline.
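
A sketch of the conversion this implies, assuming the deadline is an
absolute time in microseconds on the same clock the condition variable
waits on (the helper name is made up):

    #include <stdint.h>
    #include <time.h>

    // Convert an absolute time in microseconds into the struct timespec
    // that pthread_cond_timedwait() expects.
    static struct timespec timespec_from_us(int64_t abstime_us)
    {
        struct timespec ts;
        ts.tv_sec = abstime_us / 1000000;
        ts.tv_nsec = (abstime_us % 1000000) * 1000;
        return ts;
    }
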
This was part of osdep/threads.c out of laziness. But it doesn't contain
anything OS dependent. Note that the rest of threads.c actually isn't
all that OS dependent either (just some minor ifdeffery to work around
the lack of clock_gettime() on OSX).
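
The ifdeffery amounts to roughly this, with HAVE_CLOCK_GETTIME standing
in for whatever the build system actually defines:

    #include <sys/time.h>
    #include <time.h>

    static struct timespec get_realtime(void)
    {
        struct timespec ts;
    #if HAVE_CLOCK_GETTIME
        clock_gettime(CLOCK_REALTIME, &ts);
    #else
        // OSX has no clock_gettime(); fall back to gettimeofday().
        struct timeval tv;
        gettimeofday(&tv, NULL);
        ts.tv_sec = tv.tv_sec;
        ts.tv_nsec = tv.tv_usec * 1000;
    #endif
        return ts;
    }
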
When passing a very large timeout to mpthread_cond_timedwait(), the
calculations could overflow, setting tv_sec to a negative value and
making the pthread_cond_timedwait() call return immediately. This
accidentally made the Lua support poll and burn CPU for no reason.
The existing overflow check was ineffective on 32-bit systems. tv_sec
is usually a long, so adding INT_MAX to it will usually not overflow on
64-bit systems, but on 32-bit systems it's guaranteed to overflow.
Simply fix this by clamping against a relatively high value. This will
work until 1 week before the UNIX time wraps around in 32 bits.
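
A sketch of such a clamp, assuming a nonnegative seconds value and one
week (604800 seconds) of headroom; the helper name is made up:

    #include <limits.h>
    #include <stdint.h>
    #include <time.h>

    // Add whole seconds to ts->tv_sec without overflowing a 32-bit
    // time_t, clamping 1 week before the 32-bit wraparound.
    static void add_seconds_clamped(struct timespec *ts, int64_t secs)
    {
        const int64_t max_sec = INT_MAX - 604800;
        if (secs > max_sec)
            secs = max_sec;
        if (ts->tv_sec >= max_sec - secs)
            ts->tv_sec = max_sec;
        else
            ts->tv_sec += secs;
    }
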
It's quite possible to overflow the calculation by setting the timeout
to a very high value. Limit it to INT_MAX, which should be safe. The
issue is mainly the secs variable.
timespec.tv_sec will normally be 64-bit on sane systems, and we assume
adding INT_MAX to it can't overflow.
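
The guard described here might look like this (variable and helper
names assumed); as the message above notes, it is only safe when tv_sec
really is 64-bit:

    #include <limits.h>
    #include <stdint.h>
    #include <time.h>

    // Cap the seconds part of the relative timeout at INT_MAX; with a
    // 64-bit tv_sec, adding INT_MAX to the current time cannot overflow.
    static void add_timeout(struct timespec *ts, int64_t timeout_us)
    {
        int64_t secs = timeout_us / 1000000;
        if (secs > INT_MAX)
            secs = INT_MAX;
        ts->tv_sec += secs;
    }
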
Usually, you have to call pthread_cond_timedwait() in a loop (because
it can wake up spuriously). If this function is used by another,
higher-level function that itself takes a relative timeout, we actually
have to reduce the timeout on each iteration - or, simpler, compute the
"deadline" once at the beginning of the function, and always pass the
same absolute time to the waiting function.
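
A sketch of that pattern, assuming an mpthread_cond_timedwait() that
takes an absolute deadline in mp_time_us() microseconds (the flag
predicate and helper are illustrative):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    // Caller holds *lock. The deadline is computed once, so spurious
    // wakeups re-enter the wait with the same absolute time and the
    // total wait never exceeds the requested timeout.
    static bool wait_for_flag(pthread_cond_t *cond, pthread_mutex_t *lock,
                              bool *flag, int64_t timeout_us)
    {
        int64_t deadline = mp_time_us() + timeout_us;
        while (!*flag) {
            if (mpthread_cond_timedwait(cond, lock, deadline) == ETIMEDOUT)
                break;
        }
        return *flag;
    }
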
Might be unsafe if the system time is changed. On the other hand, this
is a fundamental race condition with these APIs.