This was a minor code duplication between vf_vdpaupp.c and vo_vdpau.c.
(In theory, we could always require using vf_vdpaupp with vo_vdpau, but
I think it's better if vo_vdpau can work standalone.)
vf_fix_img_params() takes care of overwriting image parameters that are
normally not set correctly by filters. But this makes no sense for input
images. So instead, check that the input is correct.
It still has to be done for the first input image, because that's used
to handle some overrides (see video_reconfig_filters()).
These replace vf_read_output_frame(), although we still emulate that
function. This change is preparation for another commit (and this is
basically just to reduce the diff size and the noise in that
commit).
Basically, if we feed the filter a new image even after the EOF state
has been reached (e.g. because the input stream "recovered"), we want
the filter to restart, instead of returning an error forever.
Some non-deinterlacing filters (potentially denoising) also use
additional frames for filtering. The vdpau docs suggest providing at
least 1 future and 2 past _fields_, which means we need to provide 1
past frame (the future field is simply the other field of the current
frame, so both of these fields already come with the current frame).
We can easily achieve this by buffering an additional frame in the non-
deint case.
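For illustration, the buffering amounts to keeping the previous frame
around; a minimal sketch (struct and function names are made up, not the
actual vf_vdpaupp code; mp_image/talloc_free are mpv's existing helpers):

    struct frame_history {
        struct mp_image *prev;  // supplies the 2 past fields
        struct mp_image *cur;   // supplies the current and the future field
    };

    static void push_frame(struct frame_history *h, struct mp_image *new_img)
    {
        talloc_free(h->prev);   // drop the oldest buffered frame
        h->prev = h->cur;       // the current frame becomes the past frame
        h->cur = new_img;       // buffer the newly arrived input frame
    }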
This factors out some code from vo_vdpau.c, especially deinterlacing
handling. The intention is to use this for vo_vdpau.c to make the logic
significantly easier, and to use it for vo_opengl (gl_hwdec_vdpau.c) to
allow selecting deinterlace and postprocessing modes.
As of this commit, the filter actually does nothing, since both vo_vdpau
and vo_opengl treat the generated images as normal vdpau images. This
will change in the following commits.
It might have been nice not to do this so that metadata could
accumulate across seeks, but it seems libavfilter loses its copy
anyway on recreate_graph.
lavfi would segfault due to a NULL dereference if it was asked for its
metadata and none had been allocated (oops). This happens with Libav,
which has no concept of filter metadata.
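The fix boils down to a simple guard before touching the metadata (a
sketch; the struct/field names and the return value are placeholders, not
the actual vf_lavfi code):

    if (!p->metadata)            // nothing allocated, e.g. on a Libav build
        return CONTROL_FALSE;    // report "no metadata" instead of crashing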
We only support them for input. The frame properties of output frames
are ignored (except frame durations).
Properties not set for now: _ChromaLocation, _Field, _FieldBased
Set _DurationNum/_DurationDen on each VS frame, instead of
_AbsoluteTime. The duration is the difference between the timestamp of
a frame and that of the next frame, and when receiving filtered VS
frames, we convert the durations back to an absolute PTS by summing them.
We pass the timestamps with microsecond resolution. mpv uses double for
timestamps internally, so we don't know the "real" timebase or FPS. VS
on the other hand uses fractions for frame durations. We can't pass
through the numbers exactly, but microsecond precision ought to be
enough, and should even be safe from accumulating rounding errors.
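As a rough sketch of how this maps onto the VapourSynth C API (variable
names like pts/next_pts/out_pts are made up; error handling omitted):

    // attach the duration as a microsecond fraction to an outgoing frame
    int64_t dur_us = (int64_t)((next_pts - pts) * 1e6 + 0.5);
    VSMap *props = vsapi->getFramePropsRW(frame);
    vsapi->propSetInt(props, "_DurationNum", dur_us, paReplace);
    vsapi->propSetInt(props, "_DurationDen", 1000000, paReplace);

    // on the way back, reconstruct an absolute PTS by summing the durations
    int err = 0;
    int64_t num = vsapi->propGetInt(out_props, "_DurationNum", 0, &err);
    int64_t den = vsapi->propGetInt(out_props, "_DurationDen", 0, &err);
    if (!err && den > 0)
        out_pts += num / (double)den;   // mpv timestamps are doubles in seconds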
Since this leaks video images, and the player keeps feeding new images
to the filter even if it fails, this would probably have disastrous
consequences.
Or in other words, add support for properly draining remaining frames
from video filters. vf_yadif is buffering at least one frame, and the
buffered frame was not retrieved on EOF.
For most filters, ignore this for now, and just adjust them to the
changed semantics of filter_ext. But for vf_lavfi (used by vf_yadif),
real support is implemented. libavfilter handles this simply by passing
a NULL frame to av_buffersrc_add_frame(), so we just have to make
mp_to_av() handle NULL arguments.
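For reference, the libavfilter side of the draining looks roughly like
this (a sketch; only the av_* calls are the actual libavfilter API, the
context variables are assumptions):

    av_buffersrc_add_frame(buffersrc_ctx, NULL);   // NULL frame = EOF for the graph

    // afterwards, frames still buffered inside the graph can be pulled out
    AVFrame *frame = av_frame_alloc();
    while (av_buffersink_get_frame(buffersink_ctx, frame) >= 0) {
        // ... hand the drained frame to the next filter / the VO ...
        av_frame_unref(frame);
    }
    av_frame_free(&frame);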
In load_next_vo_frame(), we first try to output a frame buffered in the
VO, then the filter, and then (if EOF is reached and there's still no
new frame) the VO again, with draining enabled. I guess this was
implemented slightly incorrectly before, because the filter chain still
could have had remaining output frames.
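The resulting order of attempts is roughly the following (the function
names are invented for illustration; they are not the real ones):

    if (vo_get_buffered_frame(vo, false))      // 1. frame already queued in the VO
        return true;
    if (filter_output_frame(vf_chain, eof))    // 2. new frame from the filter chain
        return true;                           //    (this now also drains the filters)
    return eof && vo_get_buffered_frame(vo, true);  // 3. on EOF, drain the VO itself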
I thought the "_" in "_AbsoluteTime" was part of the documentation
markup.
This still doesn't help us with VS filters that change timing;
apparently you must use frame durations instead.
Make the filter apply the pixel aspect ratio of the input to the output.
This is more useful than forcing 1:1 PAR when playing anamorphic video
such as DVDs.
VapourSynth itself actually allows passing through the aspect ratio, but
it comes in a form that is not very useful for us: it's per video frame
instead of constant (i.e. in VSVideoInfo). As long as we don't have a way to allow a
filter to spontaneously change output parameters, we can't use this.
(And I don't really feel like making this possible.)
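The change itself is conceptually just copying the PAR from the input to
the output parameters (a sketch; the field names are an assumption):

    // in the filter's reconfig: keep the input's pixel aspect ratio
    out_params.p_w = in_params.p_w;    // PAR numerator
    out_params.p_h = in_params.p_h;    // PAR denominator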
When using rotation with hw decoding, and the VO does not support
rotation, the player attempts to insert vf_rotate. This goes wrong, and
after that it can't recover, because a vf_scale filter was autoinserted.
Just removing all autoinserted filters before reconfig fixes this.
This couldn't rotate by 180°. Add this, and also make the parameter
take degrees instead of magic numbers.
For now, drop the flipping stuff. You can still flip with --vf=flip or
--vf=mirror. Drop the landscape/portrait stuff - I think this is
something almost nobody will use. If it turns out that we need some of
these things, they can be readded later.
Make it use libavfilter. Its vf_transpose implementation looks pretty
simple, except that it uses slice threading and should be much faster.
Fix all include statements of the form:
#include "libav.../..."
These come from MPlayer times, when FFmpeg was somehow part of the
MPlayer build tree, and this form was needed to prefer the local files
over system FFmpeg.
In some cases, the include statement wasn't needed or could be replaced
with mpv defined symbols.
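A representative example of the change (the particular header is only an
illustration; presumably the quoted form becomes a regular system
include):

    /* before: MPlayer-style include, preferring in-tree FFmpeg sources */
    #include "libavutil/common.h"

    /* after: regular system include */
    #include <libavutil/common.h>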
Not needed anymore. I'm not opposed to having asm, but inline asm is too
much of a pain, and it was planned long ago to eventually get rid of all
inline asm uses.
For the record, the inline asm use that was removed with the previous
commits was almost worthless. It was confined to video filters, and most
video filtering is now done with libavfilter. Some mpv filters (like
vf_pullup) actually redirect to libavfilter if possible.
If asm is added in the future, it should happen in the form of external
files.
No change in speed (or even slightly faster, though I tested with
progressive solid color video only), and normally we use libavfilter's
vf_pullup anyway.
I didn't test the speed, but by default, this filter diverts to
libavfilter already. So this would help only if libavfilter is disabled,
or libavfilter doesn't have vf_noise (like on Libav). For these cases,
we still provide the (possibly but not necessarily) slower C
implementation of vf_noise.
This makes it multiple times slower. However, the output format (packed
YUV) isn't handled efficiently by anything to begin with, and I have no
clue why we even have this filter. I guess it's one of those filters
that find some use sometimes, but are not of high importance, which
justifies removing the faster inline asm.
We were relying on vsscript_freeScript() to take care of proper
termination. But it doesn't do that: it doesn't wait for the filters to
finish and exit at all. Instead, it just destroys all objects, which
causes the worker threads to crash sometimes.
Also, we're supposed to wait for the frame callback to finish before
freeing the associated node.
Handle this by explicitly waiting as far as we can. Probably fixes
crashes on seeking, although VapourSynth itself might also need some
work to make this case completely stable.
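Roughly, the teardown order becomes the following (a sketch; the locking
and field names are assumptions, only the VapourSynth calls are real API):

    pthread_mutex_lock(&p->lock);
    while (p->frames_in_flight)                    // getFrameAsync() requests whose
        pthread_cond_wait(&p->wakeup, &p->lock);   // callbacks haven't run yet
    pthread_mutex_unlock(&p->lock);

    p->vsapi->freeNode(p->out_node);   // safe only after the callbacks finished
    vsscript_freeScript(p->se);        // then tear down the script state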
Before this commit, the filter attempted to keep the vsscript state
(p->se) even when the script was reloaded. Change it to destroy the
script state too on reloading. Now no workaround for LoadPlugin is
necessary, and this also fixes a weird theoretical race condition when
destroying and recreating the mpv source filter.
I hate tabs.
This replaces all tabs in all source files with spaces. The only
exception is old-makefile. The replacement was made by running the
GNU coreutils "expand" command on every file. Since the replacement was
automatic, it's possible that some formatting was destroyed (but
probably only where the formatting assumed that tabs do not align to
multiples of 8 columns).