vf_fix_img_params() takes care of overwriting image parameters that are
normally not set correctly by filters. But this makes no sense for input
images. So instead, check that the input is correct.
It still has to be done for the first input image, because that's used
to handle some overrides (see video_reconfig_filters()).
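A minimal sketch of the resulting logic (the helper and the field names below are assumptions for illustration, not mpv's actual code):

    // Sketch only: "expected_params" and the helper itself are hypothetical.
    static void check_or_fix_input(struct vf_chain *c, struct mp_image *img,
                                   bool first_image)
    {
        if (first_image) {
            // The first image still gets fixed up, because its parameters
            // are used to apply overrides (see video_reconfig_filters()).
            vf_fix_img_params(img, &c->expected_params);
        } else if (!mp_image_params_equal(&img->params, &c->expected_params)) {
            // Later input images are only checked; a mismatch indicates a
            // bug upstream, not something to silently "fix".
            MP_WARN(c, "Unexpected change of input image parameters.\n");
        }
    }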
These replace vf_read_output_frame(), although we still emulate that
function. This change is preparation for another commit (and is
basically just meant to reduce the diff size and improve the
signal/noise ratio of that commit).
Basically, if we feed the filter a new image even after the EOF state
has been reached (e.g. because the input stream "recovered"), we want
the filter to restart, instead of returning an error forever.
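Roughly, the intended behavior looks like this (a sketch with hypothetical names, not the actual code):

    // "eof_returned" and the helpers are invented names for illustration.
    static int feed_filter(struct vf_chain *c, struct mp_image *img)
    {
        if (img && c->eof_returned) {
            // New input after EOF (the stream "recovered"): clear the EOF
            // state so filtering restarts instead of erroring out forever.
            c->eof_returned = false;
        }
        if (!img)
            c->eof_returned = true;    // NULL input signals EOF
        return filter_frame(c, img);   // hypothetical worker function
    }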
Some non-deinterlacing filters (potentially denoising) also use
additional frames for filtering. The vdpau docs suggest providing at
least 1 future and 2 past _fields_, which means we need to provide 1
past frame (the future field is just the other field of the current
frame, i.e. both fields are in the same frame).
We can easily achieve this by buffering an additional frame in the non-
deint case.
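The extra buffering itself is trivial; roughly (a sketch with made-up names):

    // Keep the current frame plus one past frame even without
    // deinterlacing, so the mixer can always be handed 2 past fields
    // (= 1 past frame); the 1 future field is the other field of the
    // current frame.
    struct frame_buffer {
        struct mp_image *current;
        struct mp_image *past;
    };

    static void push_frame(struct frame_buffer *b, struct mp_image *new_frame)
    {
        talloc_free(b->past);     // drop the oldest frame
        b->past = b->current;     // the current frame becomes the past frame
        b->current = new_frame;
    }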
Remove the special casing of vo_vdpau vs. other VOs. Replace the
complicated interaction between vo.c and vo_vdpau.c with a simple queue
in vo.c. VOs other than vdpau are handled by setting the length of the
queue to 1 (this is essentially what waiting_mpi was).
Note that vo_vdpau.c seems to have buffered only 1 or 2 frames into the
future, while the remaining 3 or 4 frames were past frames. So the new
code buffers 2 frames (vo_vdpau.c requests this queue length by setting
vo->max_video_queue to 2). It should probably be investigated why
vo_vdpau.c kept so many past frames.
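Conceptually, the new queue in vo.c amounts to something like this (illustration only, not the real code):

    #define MAX_VIDEO_QUEUE 2    // vo_vdpau requests 2, everything else 1

    struct video_queue {
        struct mp_image *frames[MAX_VIDEO_QUEUE];
        int num_frames;
        int max_frames;          // set from vo->max_video_queue
    };

    static bool queue_has_room(struct video_queue *q)
    {
        return q->num_frames < q->max_frames;
    }

    static struct mp_image *queue_pop(struct video_queue *q)
    {
        if (!q->num_frames)
            return NULL;
        struct mp_image *img = q->frames[0];
        q->num_frames -= 1;
        for (int n = 0; n < q->num_frames; n++)
            q->frames[n] = q->frames[n + 1];
        return img;
    }

With max_frames == 1 this degenerates into the old single-slot waiting_mpi behavior.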
The field vo->redrawing is removed. I'm not really sure what that would
be needed for; it seems pointless.
Future directions include making the interface between playloop and VO
simpler, as well as making rendering a frame a single operation, as
opposed to the weird 3-step sequence of rendering, drawing OSD, and
flipping.
Give up on the deint_filters[] array, and probe using explicit code
instead. Add additional checks to test the pixel format to avoid
annoying warnings when a hardware deinterlacer is inserted when the
current video chain is obviously incompatible.
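The explicit probing reads roughly like this (a sketch; the helper is hypothetical, while the filter names and IMGFMT_* constants are mpv's):

    // Pick a deinterlacer based on the current pixel format, so we never
    // try to insert a hardware deinterlacer into an obviously incompatible
    // chain (which would only produce warnings).
    static const char *probe_deint_filter(int imgfmt)
    {
        if (imgfmt == IMGFMT_VDPAU)
            return "vdpaupp";     // vdpau hardware deinterlacing
        if (imgfmt == IMGFMT_VAAPI)
            return "vavpp";       // vaapi hardware deinterlacing
        return "yadif";           // software deinterlacing for normal formats
    }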
The previous commits changed vo_vdpau so that these options are set by
vf_vdpaupp, and the corresponding vo_vdpau suboptions were ignored. But for
compatibility, keep the "old" options working.
The value of this is questionable - maybe the vo_vdpau options should
just be removed. For now, at least demonstrate that it's possible.
The "deint" suboption still doesn't work, because the framerate doubling
logic required for some deint modes was moved to vf_vdpaupp. This
requires more elaborate workarounds.
This is slightly incomplete: the mixer options, such as sharpen and
especially deinterlacing, are ignored. This also breaks automatic
enabling of deinterlacing with 'D' or --deinterlace. These issues will
be fixed in the following commits.
Note that we keep all the custom vdpau queue stuff. This will also be
simplified later.
This uses mp_vdpau_mixer_render(). The benefit is that it makes vdpau
deinterlacing just work. One additional minor advantage is that the
video mixer creation code is factored out (although that is a double-
edged sword).
This factors out some code from vo_vdpau.c, especially deinterlacing
handling. The intention is to use this for vo_vdpau.c to make the logic
significantly easier, and to use it for vo_opengl (gl_hwdec_vdpau.c) to
allow selecting deinterlace and postprocessing modes.
As of this commit, the filter actually does nothing, since both vo_vdpau
and vo_opengl treat the generated images as normal vdpau images. This
will change in the following commits.
It now inserts no filters and does nothing until the hot-key is pressed.
This makes it more suitable to be put in ~/.mpv/lua.
When the hot-key is pressed, it now inserts the cropdetect filter and
waits 1 second (or a --lua-opts specified duration) before gathering
the cropdetect metadata and inserting the appropriate crop filter. A
second press of the hot-key removes the crop.
It might have been nice not to do this so that metadata could
accumulate across seeks, but it seems libavfilter loses its copy
anyway on recreate_graph.
lavfi would segfault due to a NULL dereference if it was asked for its
metadata and none had been allocated (oops). This happens with Libav,
which has no concept of filter metadata.
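The fix boils down to a NULL check along these lines (simplified sketch; the names are approximate and the copy helper is hypothetical):

    // Report "no metadata" instead of dereferencing a NULL pointer when
    // nothing was ever allocated (always the case with Libav).
    static bool get_filter_metadata(struct vf_priv_s *p, struct mp_tags *out)
    {
        if (!p->metadata)
            return false;              // Libav: filter metadata doesn't exist
        copy_tags(out, p->metadata);   // hypothetical copy helper
        return true;
    }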
Commit 5e4e248 added a mp_image_params field to mp_image, and moved many
parameters to that struct. display_w/h was left redundant with
mp_image_params.d_w/d_h. These fields were supposed to be always in
sync, but it seems some code forgot to do this correctly, such as
vf_fix_img_params() or mp_image_copy_attributes(). This led to the
problem in github issue #756, because display_w/_h could become
incorrect.
It turns out that most code didn't use the old fields anyway. Just
remove them. Note that mp_image_params.d_w/d_h are supposed to be always
valid, so the additional checks for 0 shouldn't be needed. Remove these
checks as well.
Fixes #756.
In theory, returning the screenshot with original pixel aspect would
allow avoiding scaling them with image formats that support non-square
pixels, but in practice this isn't used anyway (nothing seems to
understand e.g. jpeg aspect ratio tags).
We only support them for input. The frame properties of output frames
are ignored (except frame durations).
Properties not set for now: _ChromaLocation, _Field, _FieldBased
Set _DurationNum/_DurationDen on each VS frame, instead of
_AbsoluteTime. The duration is the difference between the timestamp of
the frame and the next frame, and when receiving filtered VS frames, we
convert them back to an absolute PTS by summing them.
We pass the timestamps with microsecond resolution. mpv uses double for
timestamps internally, so we don't know the "real" timebase or FPS. VS
on the other hand uses fractions for frame durations. We can't pass
through the numbers exactly, but microseconds ought to be enough to be
even safe from accumulating rounding errors.
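A self-contained sketch of the conversion described above (the helper names are made up):

    #include <stdint.h>

    #define US_DENOM 1000000     // microsecond resolution for durations

    // _DurationNum/_DurationDen for a frame, computed from the PTS (in
    // seconds, as a double) of this frame and of the next one.
    static void pts_to_duration(double pts, double next_pts,
                                int64_t *num, int64_t *den)
    {
        *num = (int64_t)((next_pts - pts) * US_DENOM + 0.5);
        *den = US_DENOM;
    }

    // The reverse direction when receiving filtered frames: sum the
    // durations to reconstruct an absolute PTS.
    static double advance_pts(double prev_pts, int64_t num, int64_t den)
    {
        return prev_pts + (double)num / (double)den;
    }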
Since this leaks video images, and the player keeps feeding new images
to the filter even if it fails, this would probably have disastrous
consequences.
Or in other words, add support for properly draining remaining frames
from video filters. vf_yadif is buffering at least one frame, and the
buffered frame was not retrieved on EOF.
For most filters, ignore this for now, and just adjust them to the
changed semantics of filter_ext. But for vf_lavfi (used by vf_yadif),
real support is implemented. libavfilter handles this simply by passing
a NULL frame to av_buffersrc_add_frame(), so we just have to make
mp_to_av() handle NULL arguments.
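A minimal sketch of how that looks with libavfilter (error handling shortened; the wrapper and callback are hypothetical, the libavfilter calls are real):

    #include <errno.h>
    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>
    #include <libavutil/error.h>
    #include <libavutil/frame.h>

    // Passing NULL to av_buffersrc_add_frame() signals EOF to the graph;
    // av_buffersink_get_frame() then returns the remaining buffered frames
    // until it reports AVERROR_EOF.
    static int drain_graph(AVFilterContext *src, AVFilterContext *sink,
                           void (*output_frame)(AVFrame *f))
    {
        int ret = av_buffersrc_add_frame(src, NULL);  // NULL frame == EOF
        if (ret < 0)
            return ret;
        for (;;) {
            AVFrame *frame = av_frame_alloc();
            if (!frame)
                return AVERROR(ENOMEM);
            ret = av_buffersink_get_frame(sink, frame);
            if (ret < 0) {
                av_frame_free(&frame);
                return ret == AVERROR_EOF ? 0 : ret;
            }
            output_frame(frame);  // callback takes ownership of the frame
        }
    }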
In load_next_vo_frame(), we first try to output a frame buffered in the
VO, then the filter, and then (if EOF is reached and there's still no
new frame) the VO again, with draining enabled. I guess this was
implemented slightly incorrectly before, because the filter chain still
could have had remaining output frames.
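In pseudo-C, that order looks roughly like this (a sketch with a simplified signature and invented helper names):

    // Try, in order: 1. a frame already buffered in the VO, 2. new output
    // from the filter chain, 3. only once EOF is reached, the VO again
    // with draining enabled.
    static struct mp_image *next_vo_frame(struct player *p, bool eof)
    {
        struct mp_image *img = vo_peek_buffered_frame(p->vo, false);
        if (!img)
            img = vf_chain_output_frame(p->vf_chain);   // may return NULL
        if (!img && eof)
            img = vo_peek_buffered_frame(p->vo, true);  // drain the VO
        return img;
    }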
This extracts the scheduling logic into a single function, which makes
it easier to keep consistent.
Additionally, make sure we don't schedule sync operations from within a
sync operation itself, since that could cause deadlocks (even if this
should not happen with the current code).
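As a rough illustration of that guard (plain C with invented names; the real code may use a different mechanism):

    #include <assert.h>
    #include <stdbool.h>

    struct sched_ctx {
        bool in_sync_op;    // true while a sync operation is running
    };

    static void run_sync(struct sched_ctx *ctx, void (*op)(void *), void *arg)
    {
        // Scheduling a sync operation from within a sync operation would
        // wait on ourselves and deadlock; assert the invariant explicitly
        // even though current callers should never do this.
        assert(!ctx->in_sync_op);
        ctx->in_sync_op = true;
        op(arg);
        ctx->in_sync_op = false;
    }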