Basically, we need to make sure to allocate enough data for the pretty
dumb copy_nv12 function. (It could be avoided by making the function
less dumb, but this fix is simpler.)
ao_coreaudio (using AudioUnit) accounted only for part of the latency -
move the code in ao_coreaudio_exclusive to utils, and use that for the
AudioUnit code.
(There's still the question why CoreAudio and AudioUnit require you to
jump through hoops this much, but apparently that's how it is.)
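For reference, this is roughly what the shared helper has to add up (a
sketch; the CoreAudio property constants are real, but the helper name
and the plumbing are illustrative, and error handling is omitted):

    #include <CoreAudio/CoreAudio.h>
    #include <stdint.h>

    // Sum the latency components CoreAudio reports for device and stream.
    static int64_t ca_get_latency_us(AudioDeviceID device,
                                     AudioStreamID stream, int samplerate)
    {
        UInt32 dev_latency = 0, safety = 0, stream_latency = 0;
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioDevicePropertyLatency,
            .mScope    = kAudioObjectPropertyScopeOutput,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        UInt32 size = sizeof(UInt32);
        AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &dev_latency);
        addr.mSelector = kAudioDevicePropertySafetyOffset;
        size = sizeof(UInt32);
        AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &safety);
        addr.mSelector = kAudioStreamPropertyLatency;
        addr.mScope    = kAudioObjectPropertyScopeGlobal;
        size = sizeof(UInt32);
        AudioObjectGetPropertyData(stream, &addr, 0, NULL, &size, &stream_latency);
        int64_t frames = (int64_t)dev_latency + safety + stream_latency;
        return frames * 1000000LL / samplerate;
    }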
mpv had refcounted frames before libav*, so we were not using
libavutil's facilities. Change this and drop our own code.
Since AVFrames themselves are not refcounted, and only the image data
they reference is, the semantics change a bit. This affects mainly
mp_image_pool, which was operating on whole images instead of buffers.
While we could work on AVBufferRefs instead (and use AVBufferPool),
this doesn't work for use with hardware decoding, which doesn't
map cleanly to FFmpeg's reference counting. But it worked out. One
weird consequence is that we still need our custom image data
allocation function (for normal image data), because AVFrame's own
allocation uses multiple buffers.
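To illustrate the semantics with plain libavutil API (nothing
mpv-specific here): av_frame_ref() copies the frame's fields and adds
references to the underlying buffers, so the two AVFrame structs are
independent while the pixel data is shared:

    #include <libavutil/frame.h>

    AVFrame *a = av_frame_alloc();
    a->format = AV_PIX_FMT_YUV420P;
    a->width  = 1280;
    a->height = 720;
    av_frame_get_buffer(a, 0);  // allocates refcounted AVBufferRefs

    AVFrame *b = av_frame_alloc();
    av_frame_ref(b, a);         // b references a's buffers; the struct is copied
    av_frame_unref(a);          // a is now blank, data stays alive through b
    av_frame_free(&a);
    av_frame_free(&b);          // last reference dropped, data is freed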
There also seems to be a timing-dependent problem with vaapi (the
pool appears to be "leaking" surfaces). I don't know if this is a new
problem, or whether the code changes just happened to cause it more
often. Raising the number of reserved surfaces seemed to fix it, but
since it appears to be timing dependent, and I couldn't find anything
wrong with the code, I'm just going to assume it's not a new bug.
This caused issues with hardware decoding. The VOs by definition dictate
the lifetime of the hardware context, so surface allocations must not
outlive the VO. Fixes assertions on exit with vdpau.
It's conceivable that the OS time source is subject to clock changes.
The time could jump back to before when mpv was started, which would
cause mp_time_us() to return values smaller than 1. This is unexpected
by the code and could trigger assertions. If there's no monotonic time
source there's not much we can do anyway, so just sanitize the return
value. It will cause strange behavior until the "lost" time offset has
passed, but if you make such huge changes to the system clock while
everything is running, you're asking for trouble anyway.
(Normally we try to get a monotonic time source, though. This problem
sometimes happened on Windows when compiled without winpthreads, when
the code was falling back to gettimeofday(). This was already fixed by
always using another method.)
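The sanitization itself is just a clamp. A minimal sketch, assuming a
raw time source and a start offset (mp_start_time_us is a hypothetical
name for the offset taken at program start):

    // Never return a value below 1, even if the clock jumped backwards.
    int64_t mp_time_us(void)
    {
        int64_t now = mp_raw_time_us() - mp_start_time_us;
        if (now < 1)
            now = 1;  // code elsewhere assumes timestamps are >= 1
        return now;
    }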
clock_gettime is implemented in winpthreads, so it's unavailable when
mpv is compiled with its internal pthreads implementation. This makes
mp_raw_time_us fall back to gettimeofday(), which can cause an assert
failure in mp_add_timeout() when the system clock is changed. Use
QueryPerformanceCounter instead.
The clock_gettime(CLOCK_MONOTONIC) implementation in winpthreads uses
QueryPerformanceCounter anyway, so there shouldn't be any change in
behaviour.
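A minimal sketch of what a QueryPerformanceCounter-based
mp_raw_time_us looks like (the Win32 calls are real; the lazy frequency
caching is just one way to do it):

    #include <windows.h>
    #include <stdint.h>

    int64_t mp_raw_time_us(void)
    {
        static LARGE_INTEGER freq;  // ticks per second, constant at runtime
        if (!freq.QuadPart)
            QueryPerformanceFrequency(&freq);
        LARGE_INTEGER c;
        QueryPerformanceCounter(&c);
        // Split the conversion to avoid 64 bit overflow on long uptimes.
        return c.QuadPart / freq.QuadPart * 1000000LL +
               c.QuadPart % freq.QuadPart * 1000000LL / freq.QuadPart;
    }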
If the request contains a "request_id", copy it back into the
response. There is no interpretation of the request_id value by mpv; the
only purpose is to make it easier on the requester by providing an
ability to match up responses with requests.
Because the IPC mechanism sends events continuously, it's possible for
the response to a request to arrive several events after the request was
made. This can make it very difficult on the requester to determine
which response goes to which request.
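For illustration (the command and values here are just examples), a
request and its matching response on the JSON IPC protocol look like:

    { "command": ["get_property", "volume"], "request_id": 100 }
    { "error": "success", "data": 50.0, "request_id": 100 }

A client that uses unique request_id values can route every incoming
response by that field, no matter how many asynchronous events arrive
in between.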
Until now, this was for AC3 only. For PCM, we used AudioUnit in
ao_coreaudio, and the only reason ao_coreaudio_exclusive exists
is that there is no other way to passthrough AC3.
PCM support is actually rather simple. The most complicated issue is
that modern OS X versions do not support simply passing the PCM data
through; instead, everything must go through float. So we have to deal
with the virtual and physical formats being different, which causes
some complications.
This possibly also doesn't support some other things correctly.
For one, if the device allows non-interleaved output only, we
will probably fail. (I couldn't test it, so I don't even know
what is required. Supporting it would probably be rather
simple, and we already do it with AudioUnit.)
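Dealing with the split boils down to setting the physical format on
the stream and then reading back the virtual format the OS actually
gives us. A sketch using the real CoreAudio properties (helper name
illustrative, error handling omitted):

    #include <CoreAudio/CoreAudio.h>

    static void ca_change_format(AudioStreamID stream,
                                 AudioStreamBasicDescription want)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioStreamPropertyPhysicalFormat,
            .mScope    = kAudioObjectPropertyScopeGlobal,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        AudioObjectSetPropertyData(stream, &addr, 0, NULL,
                                   sizeof(want), &want);

        // What we must feed the device is the virtual format, which on
        // modern OS X is usually float32 even for integer physical formats.
        AudioStreamBasicDescription got;
        UInt32 size = sizeof(got);
        addr.mSelector = kAudioStreamPropertyVirtualFormat;
        AudioObjectGetPropertyData(stream, &addr, 0, NULL, &size, &got);
    }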
Mapping of spdif formats was imperfect. Since the first format on the
list is somehow AAC, it was returned first, which is confusing, because
CoreAudio calls all spdif formats AC3. Since the spdif formats have
rather arbitrary values, reverse mapping the formats didn't actually
work either. Fix by explicitly ignoring these when spdif is used.
Also, don't forget to set the samplerate in ca_asbd_to_mpformat(), or it
will work only in some cases.
This is basically a hack for drivers which prevent the mpv DXVA2 decoder
glue from working if OpenGL is in fullscreen mode.
Since it doesn't add any "hard" new API to the client API, since some
of the code would be required for a true zero-copy hw decoding pipeline
anyway, and since it isn't too much code after all, this is probably
acceptable.
For the sake of removing the separate stream/demuxer loading code.
This could probably be reimplemented in some other way, but I have no
DVB hardware for testing. The preferred way would be to make DVB not
quit, and just rerun the stream selection.
The final goal is making opening the demuxer and opening the stream the
same operation.
Stream dumping is a rather uninteresting feature, but has a small
number of vocal users, and it's easy to keep.
When seeking to a different position, and seeking takes long, the OSD
might get redrawn. This means that the VO will receive a request to
redraw an old frame using whatever the previous PTS was. This breaks the
interpolation logic: the old frame will be added to the queue, and then
the next frames (with lower PTS if you seeked backwards) are not drawn
as the logic assumes they're past frames.
Fix this by using the non-interpolation code path when redrawing after a
seek reset, and no "real" frame has been drawn yet.
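Conceptually the fix is a guard like this (field and function names
made up for illustration, not the actual mpv code):

    // On a redraw with nothing drawn since the last seek reset, bypass
    // the interpolation queue and show the frame directly.
    if (frame->redraw && !p->frame_drawn_since_reset) {
        draw_plain(p, frame->current);
    } else {
        draw_interpolated(p, frame);
    }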
It's a recent regression caused by the redrawing code simplification.
The old code simply sent a VOCTRL for redrawing the frame, and the VO
had to deal with retaining the old frame on its own.
This is a hack as in there's probably a better solution.
Fixes #2097.
Less code, and avoids a black flash on start.
In theory it could happen that we map the window, and then don't have a
frame to draw - but mapping the window is done in the exact moment we
have a new frame to display.
This flags stuff tried to be too clever: if there are overlapping flags
(e.g. exclusive or combined flags), the one matching the most bits has
to be chosen.
This fixes logging of the seek command. E.g. "relative" and "absolute"
overlap to make them exclusive, but "relative" was always printed as it
happened to match first.
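In other words, instead of printing the first name whose bits are a
subset of the value, pick the candidate matching the most bits. A
sketch (the table type is illustrative; __builtin_popcount is the
GCC/Clang builtin):

    struct flag_name { const char *name; int flags; };

    // Return the name whose flag bits match the value best, or NULL.
    static const char *best_flag_name(const struct flag_name *tab, int value)
    {
        const char *best = NULL;
        int best_bits = -1;
        for (int i = 0; tab[i].name; i++) {
            int f = tab[i].flags;
            if ((value & f) == f && __builtin_popcount(f) > best_bits) {
                best_bits = __builtin_popcount(f);
                best = tab[i].name;
            }
        }
        return best;
    }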
This is not the most theoretically perfect solution; ideally, we could
check to see if the frame in question has already been rendered
somewhere in the queue and then avoid re-rendering it, at the cost of a
few extra lines of code. But I don't think the performance trade-off is
dramatic enough here.
draw_image_timed is renamed to draw_frame. struct frame_timing is
renamed to vo_frame. flip_page_timed is merged into draw_frame (the
additional parameters are part of struct vo_frame). draw_frame also
deprecates VOCTRL_REDRAW_FRAME, and replaces it with a method that
works for both VOs which can cache the current frame, and VOs which
need to redraw it anyway.
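Roughly, the new entry point looks like this (a condensed sketch; the
real struct carries more timing and queue fields):

    struct vo_frame {
        int64_t pts;              // presentation time of the current frame
        int64_t duration;         // estimated duration, -1 if unknown
        struct mp_image *current; // image to display
        bool redraw;              // repeat of an already displayed frame
        // ...
    };

    // Replaces draw_image_timed + flip_page_timed and VOCTRL_REDRAW_FRAME:
    void (*draw_frame)(struct vo *vo, struct vo_frame *frame);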
This is preparation to making the interpolation and (work in progress)
display sync code saner.
Lots of other refactoring, and also some simplifications.
This should make interpolation work much better in general, although
there still might be some side effects for unusual framerates (e.g.
35 Hz or 48 Hz). Most of the common framerates (24 Hz, 30 Hz, 60 Hz)
are tested and working fine.
The new code doesn't have support for oversample yet, so it's been
removed (and will most likely be reimplemented in a cleaner way if
there's enough demand). I would recommend using something like robidoux
or mitchell instead of oversample, though - they're much
smoother for the common cases.
For now, this is trivial (and actually redundant). The future display
sync code will make better use of it. The main point is that the new
internal API pretty much makes this transparent to the vo_opengl
interpolation code.
Now the VO can request a number of future frames with the last parameter
of vo_set_queue_params(). This will be helpful to fix the interpolation
code.
Note that the first frame (after playback start or seeking) will usually
not have any future frames (to make seeking fast). Near the end of the
file, the number of future frames will become lower as well.
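Usage from a VO might look like this; the function name is from this
change, but the exact parameter list shown is only illustrative:

    // Ask the core to queue the current frame plus up to 2 future frames.
    vo_set_queue_params(vo, /*offset_us=*/0, /*num_future_frames=*/2);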