I don't think this is really a good idea, because it is conceptually
incorrect. But other prominent multimedia programs use this approach
(VLC and xbmc), and it seems to make the conversion more robust in
certain cases.
For example, it has been reported that configuring a receiver capable
of 7.1 output to output only 5.1 makes CoreAudio report 8 channel
descriptions, with the last 2 descriptions tagged
kAudioChannelLabel_Unknown.
Fixes #737
The code was falling back to the full waveext chmap_sel when fewer
than 2 channels were detected. The new code is slightly more correct,
since it only fills the chmap_sel with the stereo or mono chmap in the
fallback case.
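A minimal sketch of the new fallback, assuming the usual chmap_sel
helpers (num_channels stands in for the detected channel count):

    struct mp_chmap_sel sel = {0};
    struct mp_chmap fallback;
    // Offer only mono or stereo, instead of every waveext chmap:
    mp_chmap_from_channels(&fallback, num_channels >= 2 ? 2 : 1);
    mp_chmap_sel_add_map(&sel, &fallback);
    // previously: mp_chmap_sel_add_waveext(&sel);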
CoreAudio supports 3 kinds of layouts: bitmap-based, tag-based, and
speaker-description-based (using either channel labels or positional
data).
Previously, we tried to convert everything to bitmap-based channel
layouts, but it turns out description-based ones are the most generic,
and there are built-in CoreAudio APIs to perform the conversion in
this direction. Moreover, description-based layouts support waveext
extensions (like SDL and SDR) and are easier to map to mp_chmaps.
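For reference, a minimal sketch of the tag-to-description direction
using the built-in API (a bitmap-based layout converts the same way
via kAudioFormatProperty_ChannelLayoutForBitmap):

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdlib.h>

    // Expand a tag-based layout into a description-based one.
    static AudioChannelLayout *layout_from_tag(AudioChannelLayoutTag tag)
    {
        UInt32 size = 0;
        if (AudioFormatGetPropertyInfo(kAudioFormatProperty_ChannelLayoutForTag,
                                       sizeof(tag), &tag, &size) != noErr)
            return NULL;
        AudioChannelLayout *l = calloc(1, size);
        if (!l || AudioFormatGetProperty(kAudioFormatProperty_ChannelLayoutForTag,
                                         sizeof(tag), &tag, &size, l) != noErr) {
            free(l);
            return NULL;
        }
        // l->mChannelDescriptions[0..mNumberChannelDescriptions-1]
        // now carry per-speaker labels/positions.
        return l;
    }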
This might shift bits into the sign bit, which is undefined behavior.
Making the right operand unsigned was supposed to help with this, but
it seems it did nothing: C99 makes the result type depend on the
(promoted) left operand only.
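A hypothetical illustration (the actual shift in the code differs):

    int x = 0xff;
    int a = x << 24;    // UB in C99: bits are shifted into the sign bit
    int b = x << 24u;   // still UB: the unsigned right operand does not
                        // change the result type (C99 6.5.7)
    unsigned c = (unsigned)x << 24;  // well-defined: left operand is
                                     // unsigned, so the result is too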
Use the exact floating point value instead of a broken integer
constant. The expression calculating the constant probably relied on
undefined behavior, because it left-shifts a negative value.
This also changes the type of the constant to double, which is
perfectly fine, and maybe even better than an integer constant.
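A hypothetical illustration of the pattern (the actual constant in the
code differs):

    #define BROKEN  (-1 << 30)       // UB: left-shifts a negative value
    #define FIXED   (-1073741824.0)  // same value, exact, as a double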
Preparation so that various things related to video can run in
different threads. One part of this is making the video surface pool
thread-safe.
Another issue is the preemption mechanism, which continues to give us
endless pain. In theory, it's probably impossible to handle preemption
100% correctly and race-condition free, unless _every_ API user in the
same process uses a central, shared mutex to protect every vdpau API
call. Otherwise, it could happen that one thread recovering from
preemption allocates a vdpau object, and then another thread (which
hasn't recovered yet) happens to free the object for some reason. This
is because objects are referenced by integer IDs, and after a
preemption event vdpau will reuse the IDs it invalidated.
Since this is unreasonable, we're as lazy as possible when it comes to
handling preemption. We don't do any locking around the mp_vdpau_ctx
fields that are normally immutable, and can only change when recovering
from preemption. In practice, this will work, because it doesn't matter
whether not-yet-recovered components use the old or new vdpau function
pointers or device ID. Code calls mp_vdpau_handle_preemption() anyway to
check for the preemption event and possibly to recover, and that
function acquires the lock protecting the preemption state.
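To sketch the resulting calling convention (the exact signature of
mp_vdpau_handle_preemption() and the ctx field names are assumed
here):

    // Hypothetical caller; ctx is the shared mp_vdpau_ctx.
    static bool do_vdpau_work(struct mp_vdpau_ctx *ctx)
    {
        // Check for the preemption event and possibly recover; this
        // call acquires the lock protecting the preemption state.
        if (mp_vdpau_handle_preemption(ctx, NULL) < 0)
            return false; // preempted, and recovery failed
        // The "normally immutable" fields are read without locking; it
        // doesn't matter if we still see pre-recovery values here.
        const struct vdp_functions *vdp = &ctx->vdp;
        VdpOutputSurface surface;
        return vdp->output_surface_create(ctx->vdp_device,
                                          VDP_RGBA_FORMAT_B8G8R8A8,
                                          640, 480, &surface)
               == VDP_STATUS_OK;
    }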
Another potential source of grandiose fuckup is the fact that
the vdpau library is in fact only a tiny wrapper, and the real driver
lives in a shared object dlopen()ed by the wrapper. The wrapper also
calls dlclose() on the loaded shared object in some situations. One
possible danger is that failing to recreate a vdpau device could trigger
a dlclose() call, and that glibc might unload it. Currently, glibc
implements full unloading of shared objects on the last dlclose() call,
and if that happens, calls to function pointers pointing into the shared
object would obviously crash. Fortunately, it seems the existing vdpau
wrapper won't trigger this case and never unloads the driver once it's
successfully loaded.
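An illustration of the failure mode (generic dlopen() code, not actual
mpv or wrapper code; the symbol name is a placeholder):

    #include <dlfcn.h>

    int main(void)
    {
        void *h = dlopen("libvdpau_nvidia.so.1", RTLD_NOW);
        void (*fn)(void) = (void (*)(void))dlsym(h, "some_symbol");
        dlclose(h); // last reference: glibc actually unmaps the object
        fn();       // would crash: points into unmapped memory
        return 0;
    }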
To make it short, vdpau preemption opens up endless depths of WTFs.
Another issue is that any participating thread might do the preemption
recovery (whichever comes first). This is easier to implement. The
implication is that we need threadsafe xlib. We just hope and pray that
this will actually work. This also means that once vdpau code is
actually involved in a multithreaded scenario, we have to add
XInitThreads() to the X11 code.
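For reference, XInitThreads() has to be the first Xlib call in the
process for thread safety to take effect:

    #include <stdlib.h>
    #include <X11/Xlib.h>

    static Display *open_display_threadsafe(void)
    {
        // Must run before any other Xlib function in the process.
        if (!XInitThreads())
            abort();
        return XOpenDisplay(NULL);
    }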
Use the newly provided mp_vdpau_handle_preemption() function, instead of
accessing mp_vdpau_ctx fields directly. Will probably make multithreaded
access to the vdpau context easier.
Mostly unrelated to the actual changes, I've noticed that using hw
decoding with vo_opengl sometimes leads to segfaults inside of nvidia's
libGL when doing the following:
1. use hw decoding + vo_opengl
2. switch to console (will preempt on nvidia systems)
3. switch back to X (mpv will recover, switches to sw decoding)
4. enable hw decoding again
5. exit mpv
Then it segfaults when mpv finally calls exit(). I'll just blame nvidia,
although it seems likely that something in the gl_hwdec_vdpau.c
preemption handling triggers corner cases in nvidia's code.
This was broken for some time, and it didn't recover correctly.
Redo decoder display preemption. Instead of trying to reinitialize the
hw decoder, simply fall back to software decoding. I consider display
preemption a bug in the vdpau API, so being able to _somehow_ recover
playback is good enough.
The approach taken here will probably also make it easier to handle
multithreading.
Move the code that copies the dylibs to the bundle into a new script
(dylib-unhell.py), which is called by osxbundle.py.
dylib-unhell is about 20x faster than the previous implementation. This
is accomplished by removing superfluous shell-out operations: the
needed dependencies are instead tracked in an in-memory tree. The
remaining shell-outs have been further optimized by not spawning a
complete shell for every operation and just using subprocess.call (a
thin wrapper around Popen).
The many "boxes" in this entry were causing rst2pdf to fail because it
couldn't figure out where to break the page. Make the boxes smaller by
removing semi-redundant examples. Also try to make the surrounding
text a little shorter by rewording.
This is done after filters, so things like framerate-doubling
deinterlacing are accounted for.
Unfortunately, framedropping can cause inaccuracies (especially after
precise seeks), and we can't really know when that happens. Even though
we know that the decoder might drop a frame if we request it to do so,
we don't know when the dropped frame will start or stop affecting the
video filter chain. Video filters can have frames buffered, and we
can't tell at which point the dropped frame would have been output.
It's not even possible to mark a discontinuity after seek, because
again we don't know if the filter chain still has the discontinuity
within its buffers.
So we have to live with the fact that the output of this property can
be completely broken after seek, unless --no-hr-seek-framedrop is used.
This allows disabling decoder framedropping during hr-seek.
It's basically another useless option, but it will help explore
whether this framedropping really makes seeking faster, or whether
disabling it helps with precise seeking (especially frame backstepping).
Until recently, the VO was an unavoidable part of the seeking code path.
This was because vdpau deinterlacing could double the framerate, and
hr-seek and framestepping etc. all had to "see" the additional frames.
But
we've removed the frame doubling from the vdpau VO and moved it into a
video filter (vf_vdpaupp), and there's no reason left why the VO should
participate in seeking.
Instead of queuing frames to the VO during seek and skipping them
afterwards, drop the frames early.
This actually might make seeking with vo_vdpau and software decoding
faster, although I haven't measured it.
Now we avoid calling update_video() twice on reconfig (once to check
whether there are still new frames, and again to actually do the
reconfig). Instead, we check whether there's still something going on
before calling update_video() at all, and depending on that
update_video() will be allowed to reconfig or not.
This will simplify some things later.
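A hypothetical sketch of the new control flow (all names here are
invented for illustration):

    // Decide up front whether a reconfig may happen, instead of
    // calling update_video() once to probe and once to reconfig:
    bool busy = video_has_pending_frames(mpctx); // hypothetical helper
    update_video(mpctx, /* allow_reconfig = */ !busy);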
It doesn't look like vo_wayland_config() necessarily sets this flag, so
it seems safer to trigger an explicit resize. This accounts for the
case of playing a new file with a different size than the one before.
Currently, vo_reconfig() calculates the requested window size and sets
the vo->dwidth/dheight fields _if_ VOCTRL_UPDATE_SCREENINFO is
implemented by the VO or the windowing backend. The window size can be
different from the display size if e.g. the --geometry option is used.
It will also set the vo->dx/dy fields and read vo->xinerama_x/y.
It turned out that this is very backwards and actually requires the
windowing backends to work around these things. There's also
MPOpts.screenwidth/screenheight, which used to map to actual options,
but is now used only to communicate the screen size to the vo.c code
calculating the window size and position.
Change this by making the window geometry calculations available as
separate functions. This commit doesn't change any VO code yet, and just
emulates the old way using the new functions. VO code will remove its
usage of VOCTRL_UPDATE_SCREENINFO and use the new functions directly.
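A sketch of how a windowing backend might call the new helpers (the
function names and struct layout are assumed):

    struct mp_rect screen = { .x0 = 0, .y0 = 0, .x1 = 1920, .y1 = 1080 };
    struct vo_win_geometry geo;
    vo_calc_window_geometry(vo, &screen, &geo); // applies --geometry etc.
    vo_apply_window_geometry(vo, &geo);         // sets vo->dwidth/dheight
    // geo.win now holds the requested window rect in screen coordinates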
Because the http playlist URL I had for testing claimed to be m3u by
file extension and mime type, but didn't have the header.
Note that this actually changes behavior only when the format is
detected by mime type. Then p->force will be set before calling the
parser, and the header becomes optional.
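A hypothetical sketch of what the check amounts to in the parser
(names assumed; 'line' is the first input line):

    // The "#EXTM3U" header is required only when the format was merely
    // guessed; p->force is set when extension/mime type decided it.
    if (!p->force && !bstr_startswith0(line, "#EXTM3U"))
        return -1; // reject: not an m3u playlist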
Changing --softvol-max and then resuming would change the volume level
on resume to something different from the original volume. This is
because the user volume setting is always in the range 0-100, and 100
corresponds to --softvol-max gain.
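For example, assuming the mapping is linear: with --softvol-max=200, a
stored user volume of 50 corresponds to 100% gain; restoring that same
50 after switching to --softvol-max=400 would produce 200% gain, twice
as loud as intended.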
Avoid that changing --softvol-max and then resuming an older file
leads to a too-loud volume level: refuse to restore the volume if
--softvol-max changed.