Normally, vdpau decoded frames are passed directly to a suitable
vo (vo_vdpau or vo_opengl) without ever touching system memory. This
is efficient for output purposes, but prevents any of the regular
filters from being used with such frames.
This new filter implements a read-back step to pull the frames back
into system memory where they can be acted on by other filters.
Eventually the frames will be sent to the vo as if they were normal
software-decoded frames.
Note that a vdpau-compatible vo must still be used to ensure that
the decoder is properly initialised.
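The essential call is VdpVideoSurfaceGetBitsYCbCr() (a real VDPAU entry
point, normally reached through a function pointer table); the
surrounding mp_image plumbing here is only a sketch:

    // Download the decoded surface into planar system-memory buffers
    // that regular filters can process.
    void *planes[3] = {img->planes[0], img->planes[1], img->planes[2]};
    uint32_t pitches[3] = {img->stride[0], img->stride[1], img->stride[2]};
    VdpStatus st = vdp->video_surface_get_bits_y_cb_cr(
            surface, VDP_YCBCR_FORMAT_YV12, planes, pitches);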
Signed-off-by: wm4 <wm4@nowhere>
mp_seek_chapter() had only 1 caller. Also the code was rather
roundabout; the entire function can be compressed to 5 lines of code.
(The new code is functionally the same - "mpctx->last_chapter_seek =
-2;" was effectively a dead assignment.)
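A hypothetical sketch of what such a compressed body might look like
(chapter_start_time() and queue_seek() are assumed helper names, not
verified against the actual code):

    bool mp_seek_chapter(struct MPContext *mpctx, int chapter)
    {
        double pts = chapter_start_time(mpctx, chapter); // chapter start
        if (pts == MP_NOPTS_VALUE)
            return false;                                // unknown chapter
        queue_seek(mpctx, MPSEEK_ABSOLUTE, pts, 0, true);
        return true;
    }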
Extend the --demuxer-mkv-probe-video-duration behavior to work with
files that are partial and are missing an index. Do this by finding a
cluster 10MB before the end of the file, and if that fails, just read
the entire file. This is actually pretty trivial to do and requires only
5 lines of code.
Also add a mode that always reads the entire file to estimate the video
duration.
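A minimal sketch of the cluster search (the stream_* calls loosely
follow mpv's stream API and are assumptions; the Matroska cluster ID is
real):

    #define MKV_CLUSTER_ID 0x1F43B675u  // EBML ID of a Matroska cluster

    // Return the file position of the first cluster header found after
    // (size - 10MB), or -1 if none was found (the caller then falls
    // back to reading the whole file).
    static int64_t find_cluster_near_end(stream_t *s, int64_t size)
    {
        int64_t start = size - 10 * 1024 * 1024;
        if (start < 0)
            start = 0;
        stream_seek(s, start);
        uint32_t window = 0;
        for (int64_t pos = start; pos < size; pos++) {
            int c = stream_read_char(s);
            if (c < 0)
                break;
            window = (window << 8) | (unsigned)c; // sliding 4-byte window
            if (window == MKV_CLUSTER_ID)
                return pos - 3;                   // start of the ID
        }
        return -1;
    }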
Until now, if a stream wasn't seekable, but the stream cache was enabled
(--cache), we enabled seeking anyway. The idea was that at least
short seeks would typically fall within the cache. And if not, the user
was out of luck and terrible things happened. In other words, it was
unreliable.
Be stricter about it and remove this behavior. Effectively, this will
disable seeking in piped data, for example.
Instead of trying to be clever, add an --force-seekable option, which
will always enable seeking if the user really wants it.
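Example: make an otherwise unseekable pipe seekable (within the limits
of the cache):

    cat file.mkv | mpv --force-seekable=yes -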
This is a real pain: if a quit command is received, it's set to PT_QUIT.
And then other code could overwrite it, making it not quit. The annoying
bit is that stop_play is written and read in many places. Just not
overwriting it unconditionally seems to be the best course of action.
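A minimal sketch of the idea, with names taken from the commit text
(the surrounding logic is assumed):

    // Never downgrade a pending quit to some other playback state.
    if (mpctx->stop_play != PT_QUIT)
        mpctx->stop_play = new_stop_play_value;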
Some code called by vf_vdpaupp.c calls mp_image_new_custom_ref(), but
out of convenience doesn't reset the buffers. Make this behavior ok.
(The assert() was there to catch usage errors, but the same error could
already happen before the refcount changes were made, so the check is
not overly helpful.)
Fixes #2115.
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changed its signature - it actually did
so in 0.34.0 or so, and <va/va_compat.h> defined a macro to make it use
the old signature.
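For reference, a typical call with the post-0.34.0 signature (check
<va/va.h> for the authoritative prototype):

    VASurfaceID surfaces[4];
    VAStatus st = vaCreateSurfaces(display, VA_RT_FORMAT_YUV420,
                                   width, height,
                                   surfaces, 4,
                                   NULL, 0); // no surface attributes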
Sometime recently, hardware decoding started to fail if h264 with full
reference frames was decoded, and --vo=vaapi was used. VAAPI requires
registering all surfaces that the decoder will ever use in advance, so
if the playback chain uses more surfaces than originally allocated, we
fail and drop back to software decoding.
I'm not really sure why or when this started happening. Commit 7b9d7265
for one is not the cause - it can be reproduced with earlier commits. It
also seems to be timing dependent. Possibly it has to do with the way
vo.c retains previous surfaces, and the way they can be queued/unqueued
asynchronously.
Increasing the number of reserved additional surfaces by 1 fixes it.
(Though I have no idea where exactly all these surfaces are being used.
Or rather, _when_.)
See manpage additions. This is mainly useful for vo_opengl_cb, but can
also be applied to vo_opengl.
On a side note, gl_hwdec_load_api() should stop using a name string, and
instead always use the IDs. This should be cleaned up another time.
Now there's a "canonical" table for mapping the names, that other code
can use, without having to rely too much on option code magic.
Also, use the central HWDEC constants, instead of magic values. (There
used to be semi-ok reasons to do this, but now it makes no sense
anymore.)
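A sketch of what such a table looks like (the HWDEC_* constants exist
in mpv; the exact struct layout is an assumption):

    static const struct { const char *name; int api; } hwdec_names[] = {
        {"no",         HWDEC_NONE},
        {"auto",       HWDEC_AUTO},
        {"vdpau",      HWDEC_VDPAU},
        {"vaapi",      HWDEC_VAAPI},
        {"vaapi-copy", HWDEC_VAAPI_COPY},
        {0}
    };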
Some filter chains require a huge number of auto-inserted conversion
filters. There is an overly stupid safeguard against infinite filter
insertions, which counts the number of conversion filters inserted. This
triggered accidentally in this case. Fix by resetting this counter after
a non-conversion filter was successfully configured.
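Sketch of the fix (field names assumed):

    // After a real (non-conversion) filter configures successfully,
    // forget how many conversion filters were auto-inserted so far,
    // so legitimate long chains don't trip the safeguard.
    if (!vf->autoinserted)
        c->auto_insert_count = 0;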
Each subtitle track gets its own decoder instance (sd_ass). But they use
a shared ASS_Renderer. This is done mainly because of fontconfig.
Initializing fontconfig is very slow when using it with memory fonts, so
there's a practical need to cache this memory font state, which is done
by not creating separate ASS_Renderers. This is very dirty and very
evil, but we probably can't get rid of it any time soon.
The shared ASS_Renderer was not properly synchronized. While the program
logic guarantees that only one sd_ass instance is visible at a time,
there are other interactions that require synchronization. In
particular, I suspect concurrent execution of mp_ass_configure_fonts()
and sd_ass.get_bitmaps causes issues in a newer libass development
branch.
So here's a shitty hack that hopefully fixes things, hopefully only
until libass becomes less dependent on fontconfig.
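The hack boils down to serializing all access to the shared renderer
with one global lock (the pthread calls are real; where to lock follows
the commit text, and the parameter list of mp_ass_configure_fonts() is
abbreviated):

    static pthread_mutex_t ass_lock = PTHREAD_MUTEX_INITIALIZER;

    // Wrap every call that touches the shared ASS_Renderer:
    pthread_mutex_lock(&ass_lock);
    mp_ass_configure_fonts(shared_renderer, opts);
    pthread_mutex_unlock(&ass_lock);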
Basically, we need to make sure to allocate enough data for the pretty
dumb copy_nv12 function. (It could be avoided by making the function
less dumb, but this fix is simpler.)
ao_coreaudio (using AudioUnit) accounted only for part of the latency -
move the code in ao_coreaudio_exclusive to utils, and use that for the
AudioUnit code.
(There's still the question why CoreAudio and AudioUnit require you to
jump through hoops this much, but apparently that's how it is.)
mpv had refcounted frames before libav*, so we were not using
libavutil's facilities. Change this and drop our own code.
Since AVFrames are not actually refcounted - only the image data
they reference is - the semantics change a bit. This affects mainly
mp_image_pool, which was operating on whole images instead of buffers.
While we could work on AVBufferRefs instead (and use AVBufferPool),
this doesn't work for use with hardware decoding, which doesn't
map cleanly to FFmpeg's reference counting. But it worked out. One
weird consequence is that we still need our custom image data
allocation function (for normal image data), because AVFrame's uses
multiple buffers.
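Illustration of those semantics with plain libavutil calls (this is the
standard API, not mpv code):

    AVFrame *a = av_frame_alloc();
    a->format = AV_PIX_FMT_YUV420P;
    a->width  = 640;
    a->height = 480;
    av_frame_get_buffer(a, 0); // allocate refcounted data buffers
    AVFrame *b = av_frame_alloc();
    av_frame_ref(b, a);        // b shares a's buffers; no pixel copy
    av_frame_free(&a);         // data stays alive - b still references it
    av_frame_free(&b);         // last reference gone, data is freed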
There also seems to be a timing-dependent problem with vaapi (the
pool appears to be "leaking" surfaces). I don't know if this is a new
problem, or whether the code changes just happened to cause it more
often. Raising the number of reserved surfaces seemed to fix it, but
since it appears to be timing dependent, and I couldn't find anything
wrong with the code, I'm just going to assume it's not a new bug.
This caused issues with hardware decoding. The VOs by definition dictate
the lifetime of the hardware context, so no surface allocations must
survive the VO. Fixes assertions on exit with vdpau.
It's conceivable that the OS time source is subject to clock changes.
The time could jump back to before when mpv was started, which would
cause mp_time_us() to return values smaller than 1. This is unexpected
by the code and could trigger assertions. If there's no monotonic time
source there's not much we can do anyway, so just sanitize the return
value. It will cause strange behavior until the "lost" time offset has
passed, but if you make such huge changes to the system clock while
everything is running, you're asking for trouble anyway.
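Sketch of the sanitization (the internals are assumed; the point is
only the clamp):

    int64_t mp_time_us(void)
    {
        int64_t now = mp_raw_time_us() - init_time_us;
        if (now < 1)
            now = 1; // clock jumped backwards: clamp instead of asserting
        return now;
    }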
(Normally we try to get a monotonic time source, though. This problem
sometimes happened on Windows when compiled without winpthreads, when
the code was falling back to gettimeofday(). This was already fixed by
always using another method.)
clock_gettime is implemented in winpthreads, so it's unavailable when
mpv is compiled with its internal pthreads implementation. This makes
mp_raw_time_us fall back to gettimeofday(), which can cause an assert
failure in mp_add_timeout() when the system clock is changed. Use
QueryPerformanceCounter instead.
The clock_gettime(CLOCK_MONOTONIC) implementation in winpthreads uses
QueryPerformanceCounter anyway, so there shouldn't be any change in
behaviour.
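Roughly what the QueryPerformanceCounter-based source looks like (the
Windows calls are real; the wrapper itself is a sketch):

    uint64_t mp_raw_time_us(void)
    {
        LARGE_INTEGER freq, count;
        QueryPerformanceFrequency(&freq); // ticks per second (constant)
        QueryPerformanceCounter(&count);  // monotonic tick count
        // Split the conversion to avoid 64-bit overflow on long uptimes.
        return (uint64_t)(count.QuadPart / freq.QuadPart) * 1000000 +
               (uint64_t)(count.QuadPart % freq.QuadPart) * 1000000
                   / (uint64_t)freq.QuadPart;
    }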
If the request contains a "request_id", copy it back into the
response. There is no interpretation of the request_id value by mpv; the
only purpose is to make it easier on the requester by providing an
ability to match up responses with requests.
Because the IPC mechanism sends events continuously, it's possible for
the response to a request to arrive several events after the request was
made. This can make it very difficult on the requester to determine
which response goes to which request.
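An example exchange (property name and values illustrative):

    request:  { "command": ["get_property", "playback-time"], "request_id": 100 }
    response: { "error": "success", "data": 32.54, "request_id": 100 }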
Until now, this was for AC3 only. For PCM, we used AudioUnit in
ao_coreaudio, and the only reason ao_coreaudio_exclusive exists
is that there is no other way to passthrough AC3.
PCM support is actually rather simple. The most complicated
issue is that modern OS X versions actually do not support
copying the data through untouched; instead everything must go through
float. So we have to deal with virtual and physical format
being different, which causes some complications.
This possibly also doesn't support some other things correctly.
For one, if the device allows non-interleaved output only, we
will probably fail. (I couldn't test it, so I don't even know
what is required. Supporting it would probably be rather
simple, and we already do it with AudioUnit.)
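The core of the virtual/physical split, as a sketch (the CoreAudio
property calls are real; setup and error handling are omitted):

    // Ask the hardware stream to switch to the desired physical format;
    // the virtual (application-facing) format stays float.
    AudioObjectPropertyAddress addr = {
        .mSelector = kAudioStreamPropertyPhysicalFormat,
        .mScope    = kAudioObjectPropertyScopeGlobal,
        .mElement  = kAudioObjectPropertyElementMaster,
    };
    AudioStreamBasicDescription asbd = {0}; // fill with target format
    AudioObjectSetPropertyData(stream_id, &addr, 0, NULL,
                               sizeof(asbd), &asbd);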
Mapping of spdif formats was imperfect. Since the first format on the
list is somehow AAC, it was returned first, which is confusing, because
CoreAudio calls all spdif formats AC3. Since the mapping of spdif
formats is rather arbitrary anyway, reverse mapping the formats didn't
actually work either. Fix by explicitly ignoring these when spdif is
used.
Also, don't forget to set the samplerate in ca_asbd_to_mpformat(), or it
will work only in some cases.