Outputting the detected OpenGL features was useless and redundant with
the extension loading output.
Also, remove MPGL_CAP_3D_TEX from OpenGL(ES) 3.0. This block didn't
include the glTexImage3D function, so that was pointless and couldn't
have worked. The OpenGL 2.1 block does it correctly.
VDPAU has explicit support for rotating surfaces, and this is far less
expensive than using the normal rotation filter (which would require
reading video frames back into system memory), so it is desirable to
implement the VO rotation capability.
To do this, we need to render the video frames to an output surface,
without rotation, and then render from that surface to the final
output surface to apply the rotation. It is important that the
intermediate surface is the same size as the final one (only not
rotated) so that hqscaling can be applied if requested by the user.
(hqscaling is a mixer capability, and so takes effect when the video
surface is rendered to an output surface.)
Finally, we must remember to explicitly clear the final output
surface as VDPAU only auto-clears output surfaces when rendering video
surfaces.
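The second pass boils down to a single VDPAU render call with a rotation
flag. A minimal sketch, assuming the function pointer, surfaces and rects
come from the usual VDPAU setup (error handling omitted; not the actual
vo_vdpau code):

    #include <vdpau/vdpau.h>

    // Sketch: blit the unrotated intermediate surface onto the final output
    // surface, rotated by 90 degrees, using a plain "copy" blend.
    static VdpStatus blit_rotated(VdpOutputSurfaceRenderOutputSurface *render,
                                  VdpOutputSurface final_surface,
                                  VdpOutputSurface intermediate_surface,
                                  const VdpRect *dst_rect)
    {
        const VdpOutputSurfaceRenderBlendState blend = {
            .struct_version = VDP_OUTPUT_SURFACE_RENDER_BLEND_STATE_VERSION,
            .blend_factor_source_color = VDP_OUTPUT_SURFACE_RENDER_BLEND_FACTOR_ONE,
            .blend_factor_destination_color = VDP_OUTPUT_SURFACE_RENDER_BLEND_FACTOR_ZERO,
            .blend_factor_source_alpha = VDP_OUTPUT_SURFACE_RENDER_BLEND_FACTOR_ONE,
            .blend_factor_destination_alpha = VDP_OUTPUT_SURFACE_RENDER_BLEND_FACTOR_ZERO,
            .blend_equation_color = VDP_OUTPUT_SURFACE_RENDER_BLEND_EQUATION_ADD,
            .blend_equation_alpha = VDP_OUTPUT_SURFACE_RENDER_BLEND_EQUATION_ADD,
        };
        return render(final_surface, dst_rect,
                      intermediate_surface, NULL /* whole source */,
                      NULL /* no color modulation */, &blend,
                      VDP_OUTPUT_SURFACE_RENDER_ROTATE_90);
    }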
Normally, vdpau decoded frames are passed directly to a suitable
vo (vo_vdpau or vo_opengl) without ever touching system memory. This
is efficient for output purposes, but prevents any of the regular
filters from being used with such frames.
This new filter implements a read-back step to pull the frames back
into system memory where they can be acted on by other filters.
Eventually the frames will be sent to the vo as if they were normal
software-decoded frames.
Note that a vdpau compatible vo must still be used to ensure that
the decoder is properly initialised.
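Per frame, the read-back itself is essentially one VDPAU call. A sketch,
assuming NV12 output into pre-allocated destination planes (the actual
filter may differ in details):

    #include <vdpau/vdpau.h>

    // Sketch: copy a decoded video surface back into system memory.
    // planes[0]/pitches[0] describe the luma plane, planes[1]/pitches[1]
    // the interleaved chroma plane.
    static VdpStatus read_back_nv12(VdpVideoSurfaceGetBitsYCbCr *get_bits,
                                    VdpVideoSurface surface,
                                    void *planes[2], uint32_t pitches[2])
    {
        return get_bits(surface, VDP_YCBCR_FORMAT_NV12, planes, pitches);
    }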
Signed-off-by: wm4 <wm4@nowhere>
mp_seek_chapter() had only 1 caller. Also the code was rather
roundabout; the entire function can be compressed to 5 lines of code.
(The new code is functionally the same - "mpctx->last_chapter_seek =
-2;" was effectively a dead assignment.)
Extend the --demuxer-mkv-probe-video-duration behavior to work with
files that are partial and are missing an index. Do this by finding a
cluster 10MB before the end of the file, and if that fails, just read
the entire file. This is actually pretty trivial to do and requires only
5 lines of code.
Also add a mode that always reads the entire file to estimate the video
duration.
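Roughly, the partial-file case comes down to seeking near the end and
scanning for the next Cluster element ID. A standalone sketch of the idea
(the real demuxer works through its own stream layer, so this is
illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    #define MKV_CLUSTER_ID 0x1F43B675u  // EBML ID of a Matroska Cluster

    // Scan the last 10 MB of the file for a Cluster start; return the file
    // offset of the ID, or -1 if none was found (then read the whole file).
    static long find_last_cluster_region(FILE *f, long file_size)
    {
        long start = file_size > 10 * 1024 * 1024
                     ? file_size - 10 * 1024 * 1024 : 0;
        if (fseek(f, start, SEEK_SET) != 0)
            return -1;
        uint32_t window = 0;
        for (long pos = start; pos < file_size; pos++) {
            int c = fgetc(f);
            if (c == EOF)
                break;
            window = (window << 8) | (uint32_t)c;
            if (window == MKV_CLUSTER_ID)
                return pos - 3;  // offset of the first byte of the ID
        }
        return -1;
    }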
Until now, if a stream wasn't seekable, but the stream cache was enabled
(--cache), we enabled seeking anyway. The idea was that at least
short seeks would typically fall within the cache. And if not, the user
was out of luck and terrible things happened. In other words, it was
unreliable.
Be stricter about it and remove this behavior. Effectively, this will
for example disable seeking in piped data.
Instead of trying to be clever, add an --force-seekable option, which
will always enable seeking if the user really wants it.
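For example, to keep seeking available when playing from a pipe, where the
stream itself is not seekable (short seeks then rely on the cache, at the
user's own risk):

    cat file.mkv | mpv --force-seekable=yes -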
This is a real pain: if a quit command is received, stop_play is set to
PT_QUIT, and then other code could overwrite it, making playback not quit.
The annoying bit is that stop_play is written and read in many places.
Just not overwriting it unconditionally seems to be the best course of
action.
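In code terms, the change amounts to guarding the places that used to
assign unconditionally, along the lines of this fragment (stop_play and
PT_QUIT are mpv's names; the surrounding context is only illustrative):

    // Don't clobber an already-requested quit with a weaker stop reason:
    if (mpctx->stop_play != PT_QUIT)
        mpctx->stop_play = new_stop_reason;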
Some code called by vf_vdpaupp.c calls mp_image_new_custom_ref(), but
out of convenience doesn't reset the buffers. Make this behavior ok.
(The assert() was there to catch usage errors, but the same error could
already happen before the refcount changes were made, so the check is
not overly helpful.)
Fixes #2115.
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changes its signature - actually it did
in 0.34.0 or so, and <va/va_compat.h> defined a macro to make it use the
old signature.
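For reference, a call against the newer (>= 0.34.0) signature looks
roughly like this; the old 6-argument form was what <va/va_compat.h> used
to emulate (sketch, error handling omitted):

    #include <va/va.h>

    // Allocate a set of YUV 4:2:0 surfaces with the post-0.34.0 API.
    static VAStatus alloc_surfaces(VADisplay dpy, unsigned w, unsigned h,
                                   VASurfaceID *out, unsigned num)
    {
        return vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, w, h,
                                out, num,
                                NULL, 0 /* no extra surface attributes */);
    }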
Sometime recently, hardware decoding started to fail if h264 with full
reference frames was decoded, and --vo=vaapi was used. VAAPI requires
registering all surfaces that the decoder will ever use in advance, so
if the playback chain uses more surfaces than originally allocated, we
fail and drop back to software decoding.
I'm not really sure why or when this started happening. Commit 7b9d7265
for one is not the cause - it can be reproduced with earlier commits. It
also seems to be timing dependent. Possibly it has to do with the way
vo.c retains previous surfaces, and the way they can be queued/unqueued
asynchronously.
Increasing the number of reserved additional surfaces by 1 fixes it.
(Though I have no idea where exactly all these surfaces are being used.
Or rather, _when_.)
See manpage additions. This is mainly useful for vo_opengl_cb, but can
also be applied to vo_opengl.
On a side note, gl_hwdec_load_api() should stop using a name string, and
instead always use the IDs. This should be cleaned up another time.
Now there's a "canonical" table for mapping the names, that other code
can use, without having to rely too much on option code magic.
Also, use the central HWDEC constants, instead of magic values. (There
used to be semi-ok reasons to do this, but now it makes no sense
anymore.)
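Such a table is basically just name/ID pairs; a hypothetical sketch (the
struct, field names and entry set here are made up - only the idea of
keying everything on the central HWDEC_* constants is from this change):

    struct hwdec_name {
        const char *name;   // option/API string
        int id;             // one of mpv's HWDEC_* constants
    };

    static const struct hwdec_name hwdec_names[] = {
        {"vdpau", HWDEC_VDPAU},
        {"vaapi", HWDEC_VAAPI},
        // ... one entry per supported backend ...
        {0}
    };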
Some filter chains require a huge number of auto-inserted conversion
filters. There is an overly stupid safeguard against infinite filter
insertions, which counts the number of conversion filters inserted. This
triggered accidentally in this case. Fix by resetting this counter after
a non-conversion filter was successfully configured.
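Schematically, the fix is just a counter reset at the right point
(illustrative names, not the actual vf.c code):

    struct chain_state { int num_auto_inserted_convert_filters; };

    // Once a non-conversion filter has been configured successfully, the
    // chain has made real progress, so the safeguard counter can restart.
    static void note_filter_configured(struct chain_state *st, int is_conversion)
    {
        if (!is_conversion)
            st->num_auto_inserted_convert_filters = 0;
    }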
Each subtitle track gets its own decoder instance (sd_ass). But they use
a shared ASS_Renderer. This is done mainly because of fontconfig.
Initializing fontconfig is very slow when using it with memory fonts, so
there's a practical need to cache this memory font state, which is done
by not creating separate ASS_Renderers. This is very dirty and very
evil, but we probably can't get rid of it any time soon.
The shared ASS_Renderer was not properly synchronized. While the program
logic guarantees that only one sd_ass instance is visible at a time,
there are other interactions that require synchronization. In
particular, I suspect concurrent execution of mp_ass_configure_fonts()
and sd_ass.get_bitmaps cause issues in a newer libass development
branch.
So here's a shitty hack that hopefully fixes things, hopefully only
until libass becomes less dependent on fontconfig.
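The hack boils down to serializing all uses of the shared ASS_Renderer
behind one lock; a pthread-based sketch of the idea (mpv's actual lock
placement and names differ):

    #include <pthread.h>
    #include <ass/ass.h>

    static pthread_mutex_t shared_ass_lock = PTHREAD_MUTEX_INITIALIZER;

    // Every access to the shared renderer (configuration and rendering)
    // goes through the same mutex, so concurrent sd_ass instances can't
    // touch it at the same time.
    static ASS_Image *render_frame_locked(ASS_Renderer *renderer,
                                          ASS_Track *track,
                                          long long now_ms, int *changed)
    {
        pthread_mutex_lock(&shared_ass_lock);
        ASS_Image *img = ass_render_frame(renderer, track, now_ms, changed);
        pthread_mutex_unlock(&shared_ass_lock);
        return img;
    }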
Basically, we need to make sure to allocate enough data for the pretty
dumb copy_nv12 function. (It could be avoided by making the function
less dumb, but this fix is simpler.)
ao_coreaudio (using AudioUnit) accounted only for part of the latency -
move the code in ao_coreaudio_exclusive to utils, and use that for the
AudioUnit code.
(There's still the question why CoreAudio and AudioUnit require you to
jump through hoops this much, but apparently that's how it is.)
mpv had refcounted frames before libav*, so we were not using
libavutil's facilities. Change this and drop our own code.
Since AVFrames are not actually refcounted - only the image data they
reference is - the semantics change a bit. This affects mainly
mp_image_pool, which was operating on whole images instead of buffers.
While we could work on AVBufferRefs instead (and use AVBufferPool),
this doesn't work for use with hardware decoding, which doesn't
map cleanly to FFmpeg's reference counting. But it worked out. One
weird consequence is that we still need our custom image data
allocation function (for normal image data), because AVFrame's uses
multiple buffers.
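For reference, FFmpeg's model looks like this: the AVFrame struct itself
is plain data, and only the AVBufferRefs it carries are refcounted
(sketch; error paths abbreviated):

    #include <libavutil/frame.h>

    // Copying a frame "reference" copies the struct and takes new references
    // on the underlying buffers; the pixel data itself is not duplicated.
    static AVFrame *share_frame(AVFrame *src)
    {
        AVFrame *dst = av_frame_alloc();
        if (!dst)
            return NULL;
        if (av_frame_ref(dst, src) < 0) {
            av_frame_free(&dst);
            return NULL;
        }
        av_frame_unref(src);  // the data stays alive through dst's references
        return dst;
    }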
There also seems to be a timing-dependent problem with vaapi (the
pool appears to be "leaking" surfaces). I don't know if this is a new
problem, or whether the code changes just happened to cause it more
often. Raising the number of reserved surfaces seemed to fix it, but
since it appears to be timing dependent, and I couldn't find anything
wrong with the code, I'm just going to assume it's not a new bug.
This caused issues with hardware decoding. The VOs by definition dictate
the lifetime of the hardware context, so no surface allocations must
survive the VO. Fixes assertions on exit with vdpau.
It's conceivable that the OS time source is subject to clock changes.
The time could jump back to before when mpv was started, which would
cause mp_time_us() to return values smaller than 1. This is unexpected
by the code and could trigger assertions. If there's no monotonic time
source there's not much we can do anyway, so just sanitize the return
value. It will cause strange behavior until the "lost" time offset has
passed, but if you make such huge changes to the system clock while
everything is running, you're asking for trouble anyway.
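The sanitization itself is just a clamp on the derived value; roughly
(how the start offset is tracked is an assumption about the surrounding
code):

    #include <stdint.h>

    // Never return a value below 1, even if the raw clock jumped backwards
    // past the recorded start time.
    static int64_t sanitized_time_us(int64_t raw_us, int64_t start_us)
    {
        int64_t t = raw_us - start_us;
        return t < 1 ? 1 : t;
    }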
(Normally we try to get a monotonic time source, though. This problem
sometimes happened on Windows when compiled without winpthreads, when
the code was falling back to gettimeofday(). This was already fixed by
always using another method.)
clock_gettime is implemented in winpthreads, so it's unavailable when
mpv is compiled with its internal pthreads implementation. This makes
mp_raw_time_us fall back to gettimeofday(), which can cause an assert
failure in mp_add_timeout() when the system clock is changed. Use
QueryPerformanceCounter instead.
The clock_gettime(CLOCK_MONOTONIC) implementation in winpthreads uses
QueryPerformanceCounter anyway, so there shouldn't be any change in
behaviour.
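A QueryPerformanceCounter-based microsecond clock is straightforward; a
sketch (the real mp_raw_time_us() may cache the frequency and differ in
details):

    #include <windows.h>
    #include <stdint.h>

    static uint64_t qpc_time_us(void)
    {
        LARGE_INTEGER freq, count;
        QueryPerformanceFrequency(&freq);  // ticks per second
        QueryPerformanceCounter(&count);   // current tick count
        return (uint64_t)count.QuadPart * UINT64_C(1000000)
               / (uint64_t)freq.QuadPart;
    }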
If the request contains a "request_id", copy it back into the
response. There is no interpretation of the request_id value by mpv; the
only purpose is to make it easier on the requester by providing an
ability to match up responses with requests.
Because the IPC mechanism sends events continuously, it's possible for
the response to a request to arrive several events after the request was
made. This can make it very difficult on the requester to determine
which response goes to which request.
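For example (the property queried here is illustrative; request_id can be
any client-chosen integer):

    request:  {"command": ["get_property", "time-pos"], "request_id": 100}
    reply:    {"error": "success", "data": 12.34, "request_id": 100}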