The interlaced frame test needs to be aware that the input mpi might be
NULL - this happens at the end of a stream when the input frames have
all been submitted but frames still need to be drained from the
decoder.
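Roughly, the guard amounts to this (the struct and flag here are
illustrative stand-ins for the real image types, not the actual
definitions):

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative stand-ins for the real image type and field flag. */
    struct mp_image { unsigned fields; };
    #define MP_IMGFIELD_INTERLACED 0x20

    /* mpi == NULL means EOF: all input was submitted, but frames are
     * still being drained from the decoder, so don't dereference. */
    static bool frame_is_interlaced(const struct mp_image *mpi)
    {
        return mpi && (mpi->fields & MP_IMGFIELD_INTERLACED);
    }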
ass_set_fonts() is called by mp_ass_configure_fonts(), which was called
every time a subtitle renderer was initialized. I'm not sure why this
was done - I can't find a good reason, and most likely there's none.
However, it did cause problems with an experimental libass branch,
crashing some time after switching to a second subtitle track. The
branch will hopefully be merged soon, and it seems unlikely that libass
will fix the problems with its ridiculous API (rather, it should
normalize the API so that the issue can't happen in the first place),
so just apply this change. It makes our code simpler too.
Outputting the detected OpenGL features was useless and redundant with
the extension loading output.
Also, remove MPGL_CAP_3D_TEX from the OpenGL(ES) 3.0 block. That block
didn't load the glTexImage3D function, so the capability flag was
pointless and couldn't have worked. The OpenGL 2.1 block does it
correctly.
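A sketch of the invariant involved - a capability bit is only
meaningful if the same block also loads the functions it depends on
(all names here are illustrative, not the real loader's):

    /* Each section grants capability bits and lists the functions
     * those bits depend on; granting a cap without loading its
     * functions (the MPGL_CAP_3D_TEX mistake) breaks the invariant. */
    struct gl_function { const char *name; void **dest; };
    struct gl_section {
        int gl_version;                       /* e.g. 210 or 300 */
        unsigned caps;                        /* capability bits granted */
        const struct gl_function *functions;  /* must all be loaded */
    };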
VDPAU has explicit support for rotating surfaces, and as this is far
less expensive than using the normal rotation filter (which would
require reading video frames back into system memory), it is desirable
to implement the VO rotation capability.
To do this, we need to render the video frames to an output surface,
without rotation, and then render from that surface to the final
output surface to apply the rotation. It is important that the
intermediate surface is the same size as the final one (only not
rotated) so that hqscaling can be applied if requested by the user.
(hqscaling is a mixer capability, and so takes effect when the video
surface is rendered to an output surface.)
Finally, we must remember to explicitly clear the final output
surface as VDPAU only auto-clears output surfaces when rendering video
surfaces.
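In terms of the VDPAU API, the two passes look roughly like this
(function pointers are assumed to come from vdp_get_proc_address();
error handling and rect setup are trimmed):

    #include <vdpau/vdpau.h>

    static VdpStatus render_rotated(VdpVideoMixer mixer,
                                    VdpVideoSurface video,
                                    VdpOutputSurface intermediate,
                                    VdpOutputSurface final_surf,
                                    VdpVideoMixerRender *mixer_render,
                                    VdpOutputSurfaceRenderOutputSurface *blit)
    {
        /* Pass 1: render the video unrotated to an intermediate output
         * surface of the same size as the final one, so hqscaling
         * (a mixer capability) still takes effect. */
        VdpStatus st = mixer_render(mixer, VDP_INVALID_HANDLE, NULL,
                                    VDP_VIDEO_MIXER_PICTURE_STRUCTURE_FRAME,
                                    0, NULL, video, 0, NULL, NULL,
                                    intermediate, NULL, NULL, 0, NULL);
        if (st != VDP_STATUS_OK)
            return st;
        /* Pass 2: blit to the final surface with a rotation flag. The
         * final surface must have been cleared beforehand, since VDPAU
         * only auto-clears when rendering *video* surfaces. */
        return blit(final_surf, NULL, intermediate, NULL, NULL, NULL,
                    VDP_OUTPUT_SURFACE_RENDER_ROTATE_90);
    }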
Normally, vdpau decoded frames are passed directly to a suitable
vo (vo_vdpau or vo_opengl) without ever touching system memory. This
is efficient for output purposes, but prevents any of the regular
filters from being used with such frames.
This new filter implements a read-back step to pull the frames back
into system memory where they can be acted on by other filters.
Eventually the frames will be sent to the vo as if they were normal
software-decoded frames.
Note that a vdpau-compatible vo must still be used to ensure that
the decoder is properly initialized.
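The core of the read-back is a single VDPAU call; a rough sketch
(plane pointers and pitches come from the destination image, and the
real filter also has to handle the surface's actual chroma format):

    #include <vdpau/vdpau.h>

    /* Copy a decoded video surface back into system memory as NV12.
     * get_bits is the VdpVideoSurfaceGetBitsYCbCr function pointer. */
    static VdpStatus read_back_nv12(VdpVideoSurfaceGetBitsYCbCr *get_bits,
                                    VdpVideoSurface surface,
                                    void *y_plane, uint32_t y_pitch,
                                    void *uv_plane, uint32_t uv_pitch)
    {
        void *planes[2] = { y_plane, uv_plane };
        uint32_t pitches[2] = { y_pitch, uv_pitch };
        return get_bits(surface, VDP_YCBCR_FORMAT_NV12, planes, pitches);
    }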
Signed-off-by: wm4 <wm4@nowhere>
mp_seek_chapter() had only one caller, and the code was rather
roundabout; the entire function can be compressed to 5 lines of code.
(The new code is functionally the same - "mpctx->last_chapter_seek =
-2;" was effectively a dead assignment.)
Extend the --demuxer-mkv-probe-video-duration behavior to work with
files that are partial and are missing an index. Do this by finding a
cluster 10MB before the end of the file, and if that fails, just read
the entire file. This is actually pretty trivial to do and requires only
5 lines of code.
Also add a mode that always reads the entire file to estimate the video
duration.
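The offset logic is essentially just this (a hypothetical helper; the
real code then resyncs to the next cluster from the returned position):

    #include <stdbool.h>
    #include <stdint.h>

    /* Where to start scanning for a cluster when the index is missing:
     * 10 MB before the end, or the start of the file if it's smaller
     * than that (or if the always-read-everything mode is enabled). */
    static int64_t probe_start_offset(int64_t file_size, bool read_all)
    {
        const int64_t tail = 10 * 1024 * 1024;
        if (read_all || file_size <= tail)
            return 0;
        return file_size - tail;
    }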
Until now, if a stream wasn't seekable, but the stream cache was enabled
(--cache), we've enabled seeking anyway. The idea was that at least
short seeks would typically fall within the cache. And if not, the user
was out of luck and terrible things happened. In other words, it was
unreliable.
Be stricter about it and remove this behavior. Effectively, this will,
for example, disable seeking in piped data.
Instead of trying to be clever, add an --force-seekable option, which
will always enable seeking if the user really wants it.
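Usage is then explicit; something like:

    cat file.mkv | mpv --force-seekable=yes -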
This is a real pain: if a quit command is received, stop_play is set
to PT_QUIT - and then other code could overwrite it, making the player
not quit. The annoying
bit is that stop_play is written and read in many places. Just not
overwriting it unconditionally seems to be the best course of action.
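The shape of the fix, as a self-contained sketch (names are
illustrative):

    /* Treat stop_play as write-once: a queued PT_QUIT must never be
     * downgraded by a later, weaker stop reason. */
    enum stop_reason { KEEP_PLAYING, AT_END_OF_FILE, PT_QUIT };

    static void request_stop(enum stop_reason *stop_play,
                             enum stop_reason reason)
    {
        if (*stop_play == KEEP_PLAYING)
            *stop_play = reason;
    }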
Some code called by vf_vdpaupp.c calls mp_image_new_custom_ref(), but
out of convenience doesn't reset the buffers. Make this behavior ok.
(The assert() was there to catch usage errors, but the same error could
already happen before the refcount changes were made, so the check is
not overly helpful.)
Fixes #2115.
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changed its signature - actually it did
so in 0.34.0 or so, and <va/va_compat.h> defined a macro to make it use
the old signature.
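For reference, the post-0.34.0 call looks like this (format first, with
an optional attribute list appended):

    #include <va/va.h>

    static VAStatus alloc_surfaces(VADisplay dpy, unsigned int w,
                                   unsigned int h, VASurfaceID *out,
                                   unsigned int num)
    {
        /* New signature: no more argument-order juggling via the
         * va_compat.h macro. */
        return vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, w, h,
                                out, num, NULL, 0);
    }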
Sometime recently, hardware decoding started to fail when h264 with
the full number of reference frames was decoded and --vo=vaapi was
used. VAAPI requires
registering all surfaces that the decoder will ever use in advance, so
if the playback chain uses more surfaces than originally allocated, we
fail and drop back to software decoding.
I'm not really sure why or when this started happening. Commit 7b9d7265
for one is not the cause - it can be reproduced with earlier commits. It
also seems to be timing dependent. Possibly it has to do with the way
vo.c retains previous surfaces, and the way they can be queued/unqueued
asynchronously.
Increasing the number of reserved additional surfaces by 1 fixes it.
(Though I have no idea where exactly all these surfaces are being used.
Or rather, _when_.)
See manpage additions. This is mainly useful for vo_opengl_cb, but can
also be applied to vo_opengl.
On a side note, gl_hwdec_load_api() should stop using a name string, and
instead always use the IDs. This should be cleaned up another time.
Now there's a "canonical" table for mapping the names that other code
can use, without having to rely too much on option code magic.
Also, use the central HWDEC constants, instead of magic values. (There
used to be semi-ok reasons to do this, but now it makes no sense
anymore.)
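Such a table is just a name-to-ID map, along these lines (the entries
shown are examples only):

    /* Canonical name <-> ID mapping usable by any code, instead of
     * relying on option parser magic. */
    struct hwdec_name { const char *name; int id; };

    static const struct hwdec_name hwdec_names[] = {
        {"vaapi", HWDEC_VAAPI},
        {"vdpau", HWDEC_VDPAU},
        {0}
    };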
Some filter chains require a huge number of auto-inserted conversion
filters. There is an overly stupid safeguard against infinite filter
insertions, which counts the number of conversion filters inserted. This
triggered accidentally in this case. Fix by resetting this counter after
a non-conversion filter was successfully configured.
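The fix boils down to this (a self-contained sketch with illustrative
names):

    #include <stdbool.h>

    /* The safeguard counts auto-inserted conversion filters; a
     * successfully configured non-conversion filter means real
     * progress, so the counter starts over. */
    struct chain { int num_convs; };

    static void on_filter_configured(struct chain *c, bool is_conversion)
    {
        if (is_conversion)
            c->num_convs++;     /* too many in a row => give up */
        else
            c->num_convs = 0;
    }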
Each subtitle track gets its own decoder instance (sd_ass). But they use
a shared ASS_Renderer. This is done mainly because of fontconfig.
Initializing fontconfig is very slow when using it with memory fonts, so
there's a practical need to cache this memory font state, which is done
by not creating separate ASS_Renderers. This is very dirty and very
evil, but we probably can't get rid of it any time soon.
The shared ASS_Renderer was not properly synchronized. While the program
logic guarantees that only one sd_ass instance is visible at a time,
there are other interactions that require synchronization. In
particular, I suspect concurrent execution of mp_ass_configure_fonts()
and sd_ass.get_bitmaps causes issues in a newer libass development
branch.
So here's a shitty hack that hopefully fixes things - ideally only
until libass becomes less dependent on fontconfig.
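Concretely, the hack boils down to serializing all use of the shared
renderer behind a single lock, roughly:

    #include <pthread.h>
    #include <ass/ass.h>

    /* One lock guards the shared ASS_Renderer: font configuration and
     * rendering must never run concurrently from two sd_ass users. */
    static pthread_mutex_t ass_lock = PTHREAD_MUTEX_INITIALIZER;

    static ASS_Image *render_frame_locked(ASS_Renderer *r, ASS_Track *t,
                                          long long now, int *changed)
    {
        pthread_mutex_lock(&ass_lock);
        ASS_Image *img = ass_render_frame(r, t, now, changed);
        pthread_mutex_unlock(&ass_lock);
        return img;
    }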
Basically, we need to make sure to allocate enough data for the pretty
dumb copy_nv12 function. (It could be avoided by making the function
less dumb, but this fix is simpler.)
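For reference, the amount copy_nv12 needs is the usual NV12 total -
a full-size luma plane plus a half-height interleaved chroma plane:

    #include <stddef.h>

    /* 1.5 * stride * height in total; stride and height are assumed
     * to be already padded to the required alignment. */
    static size_t nv12_buffer_size(size_t stride, size_t height)
    {
        return stride * height          /* Y plane  */
             + stride * (height / 2);   /* UV plane */
    }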
ao_coreaudio (using AudioUnit) accounted only for part of the latency -
move the latency calculation from ao_coreaudio_exclusive to utils, and
use that for the AudioUnit code.
(There's still the question why CoreAudio and AudioUnit require you to
jump through hoops this much, but apparently that's how it is.)
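As a sketch of one of those hoops - querying just the device-side
latency with the property API (the full accounting also adds the
stream latency, safety offset, and buffer size; statuses unchecked):

    #include <CoreAudio/CoreAudio.h>

    /* One component of the total output latency, in frames. */
    static UInt32 device_latency_frames(AudioDeviceID dev)
    {
        UInt32 latency = 0, size = sizeof(latency);
        AudioObjectPropertyAddress addr = {
            kAudioDevicePropertyLatency,
            kAudioObjectPropertyScopeOutput,
            kAudioObjectPropertyElementMaster,
        };
        AudioObjectGetPropertyData(dev, &addr, 0, NULL, &size, &latency);
        return latency;
    }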