While I'm not very fond of "const", it's important for declarations
(it decides whether a symbol is emitted in a read-only or read/write
section). Fix all these cases, so that we have writable global data
only when we really need it.
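For illustration, a generic C sketch (not code from this change) of
the section placement at stake:

    static int table_rw[3]       = {1, 2, 3}; /* typically lands in .data (read/write) */
    static const int table_ro[3] = {1, 2, 3}; /* typically lands in .rodata (read-only) */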
In addition to removing the global variables, this makes the options
more uniform: --ssf-... becomes --sws-..., and --sws becomes
--sws-scaler. For --sws-scaler, use choices instead of magic integer
values.
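For example (the old value is libswscale's SWS_BILINEAR flag; the
exact choice name is assumed):

    old: --sws=2
    new: --sws-scaler=bilinear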
In theory, the filter loop could enter a blocking wait even though it
could make progress by emptying the list of already-filtered images.
I'm not quite sure whether this could actually cause a real issue -
probably not.
VapourSynth won't just filter multiple frames at once on its own. You
have to request multiple frames at once manually. This is what this
commit introduces: a sub-option controls how many frames will be
requested at once. This also changes the semantics of the maxbuffer sub-
option, now renamed to buffered-frames.
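A rough sketch of the request loop, using the VapourSynth C API (the
priv field names are hypothetical):

    // Keep opts->buffered_frames requests in flight instead of one.
    for (int n = 0; n < p->opts->buffered_frames; n++)
        vsapi->getFrameAsync(p->requested + n, p->out_node, frame_done, p);

getFrameAsync() queues the request and invokes the callback when the
frame is done, so having several requests in flight lets VS filter
frames in parallel.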
This removes a minor code duplication between vf_vdpaupp.c and
vo_vdpau.c. (In theory, we could always require using vf_vdpaupp with
vo_vdpau, but I think it's better if vo_vdpau can work standalone.)
vf_fix_img_params() takes care of overwriting image parameters that are
normally not set correctly by filters. But this makes no sense for input
images. So instead, check that the input is correct.
It still has to be done for the first input image, because that's used
to handle some overrides (see video_reconfig_filters()).
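A minimal sketch of the check (mp_image_params_equal() is mpv's
existing comparison helper; the struct field name is hypothetical):

    // Input images must already carry correct parameters; verify
    // instead of overwriting them.
    assert(mp_image_params_equal(&img->params, &c->input_params));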
These replace vf_read_output_frame(), although we still emulate that
function. This change is preparation for another commit (and is
basically just meant to reduce the diff size and the noise in that
commit).
Basically, if we feed the filter a new image even after the EOF state
has been reached (e.g. because the input stream "recovered"), we want
the filter to restart, instead of returning an error forever.
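A hypothetical sketch of the state transition:

    // New input after EOF means the stream "recovered": clear the EOF
    // state, so filtering restarts instead of erroring forever.
    if (p->eof && new_img)
        p->eof = false;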
Some non-deinterlacing filters (potentially denoising) also use
additional frames for filtering. The vdpau docs suggest providing at
least 1 future and 2 past _fields_, which means we need to provide 1
past frame: the future field is already the other field of the current
frame (both fields live in the same frame), and the 2 past fields are
covered by a single buffered past frame.
We can easily achieve this by buffering an additional frame in the
non-deint case as well.
This factors out some code from vo_vdpau.c, especially the
deinterlacing handling. The intention is to use it in vo_vdpau.c to
make that logic significantly simpler, and in vo_opengl
(gl_hwdec_vdpau.c) to allow selecting deinterlacing and postprocessing
modes.
As of this commit, the filter actually does nothing, since both vo_vdpau
and vo_opengl treat the generated images as normal vdpau images. This
will change in the following commits.
It might have been nice not to do this, so that metadata could
accumulate across seeks, but it seems libavfilter loses its copy
anyway on recreate_graph.
lavfi would segfault due to a NULL dereference if it was asked for its
metadata and none had been allocated (oops). This happens with Libav,
which has no concept of filter metadata.
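A sketch of the fix (names hypothetical): bail out instead of
dereferencing NULL:

    if (!p->metadata)
        return CONTROL_FALSE;  // no metadata was ever allocated (e.g. Libav)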
We only support them for input. The frame properties of output frames
are ignored (except frame durations).
Properties not set for now: _ChromaLocation, _Field, _FieldBased
Set _DurationNum/_DurationDen on each VS frame, instead of
_AbsoluteTime. The duration is the difference between the timestamps
of the frame and the next frame, and when receiving filtered VS
frames, we convert back to an absolute PTS by summing the durations.
We pass the timestamps with microsecond resolution. mpv uses double
for timestamps internally, so we don't know the "real" timebase or
FPS. VS, on the other hand, uses fractions for frame durations. We
can't pass through the numbers exactly, but microseconds ought to be
precise enough, even to avoid accumulating rounding errors.
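A sketch of the conversion, using the VapourSynth C API (pts/next_pts
are assumed to be mpv's double timestamps in seconds):

    // Store the frame duration as a fraction with microsecond resolution.
    int64_t dur = llrint((next_pts - pts) * 1e6);
    VSMap *props = vsapi->getFramePropsRW(frame);
    vsapi->propSetInt(props, "_DurationNum", dur, paReplace);
    vsapi->propSetInt(props, "_DurationDen", 1000000, paReplace);

On the way back, the absolute PTS is recovered by summing the
durations of the received frames.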
Since this leaks video images, and the player keeps feeding new images
to the filter even if it fails, this would probably have disastrous
consequences.
Or in other words, add support for properly draining remaining frames
from video filters. vf_yadif is buffering at least one frame, and the
buffered frame was not retrieved on EOF.
For most filters, ignore this for now, and just adjust them to the
changed semantics of filter_ext. But for vf_lavfi (used by vf_yadif),
real support is implemented. libavfilter handles this simply by passing
a NULL frame to av_buffersrc_add_frame(), so we just have to make
mp_to_av() handle NULL arguments.
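A sketch of the resulting EOF path in vf_lavfi (priv names
hypothetical; mp_to_av() is the conversion helper mentioned above):

    // A NULL AVFrame signals EOF to the buffer source, which makes
    // libavfilter drain frames still buffered in filters like yadif.
    AVFrame *frame = mpi ? mp_to_av(vf, mpi) : NULL;
    if (av_buffersrc_add_frame(p->in, frame) < 0)
        return -1;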
In load_next_vo_frame(), we first try to output a frame buffered in
the VO, then the filter, and then (if EOF is reached and there's still
no new frame) the VO again, with draining enabled. I guess this was
implemented slightly incorrectly before, because the filter chain
could still have had remaining output frames.
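In pseudocode, the resulting order of attempts (all helper names
hypothetical):

    if (vo_has_buffered_frame(vo))     // 1. frame already buffered in the VO
        return true;
    if (filter_output_frame(vf))       // 2. new output from the filter chain
        return true;
    if (eof_reached)                   // 3. VO again, with draining enabled
        return vo_get_buffered_frame(vo, true);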
I thought the "_" in "_AbsoluteTime" was part of the documentation
markup.
This still doesn't help us with VS filters that change timing;
apparently you must use frame durations instead.
Make the filter apply the pixel aspect ratio of the input to the output.
This is more useful than forcing 1:1 PAR when playing anamorphic video
such as DVDs.
VapourSynth itself actually allows passing through the aspect ratio,
but in a form that's not very useful for us: it's per video frame
instead of constant (i.e. in VSVideoInfo). As long as we don't have a
way to allow a filter to spontaneously change its output parameters,
we can't use this. (And I don't really feel like making this
possible.)
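For reference, a sketch of how VS exposes it per frame (VapourSynth C
API; _SARNum/_SARDen are the documented frame properties):

    int err = 0;
    int64_t sar_num = vsapi->propGetInt(props, "_SARNum", 0, &err);
    int64_t sar_den = vsapi->propGetInt(props, "_SARDen", 0, &err);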
When using rotation with hw decoding, and the VO does not support
rotation, the player attempts to insert vf_rotate. This goes wrong,
and afterwards the chain can't recover, because a vf_scale filter was
autoinserted. Just removing all autoinserted filters before reconfig
fixes this.
This couldn't rotate by 180°. Add that, and also make the parameter
take degrees instead of magic numbers (see the usage example below).
For now, drop the flipping stuff. You can still flip with --vf=flip or
--vf=mirror. Drop the landscape/portrait stuff - I think this is
something almost nobody will use. If it turns out that we need some of
these things, they can be readded later.
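With the degree parameter, usage looks like this (a sketch; the values
follow from this change):

    --vf=rotate=90
    --vf=rotate=180   (not expressible with the old magic numbers)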
Make it use libavfilter. libavfilter's vf_transpose implementation
looks pretty simple, but it uses slice threading, so it should be much
faster.
Fix all include statements of the form:
#include "libav.../..."
These come from MPlayer times, when FFmpeg was somehow part of the
MPlayer build tree, and this form was needed to prefer the local files
over system FFmpeg.
In some cases, the include statement wasn't needed or could be
replaced with mpv-defined symbols.
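For example (assuming the fix is switching to the system-style include
form; the header name is just an illustration):

#include "libavutil/common.h"   ->   #include <libavutil/common.h>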
Not needed anymore. I'm not opposed to having asm, but inline asm is
too much of a pain, and it was planned long ago to eventually get rid
of all inline asm uses.
For the record, the inline asm use that was removed in the previous
commits was almost worthless. It was confined to video filters, and
most video filtering is now done with libavfilter. Some mpv filters
(like vf_pullup) actually redirect to libavfilter if possible.
If asm is added in the future, it should happen in the form of external
files.
No change in speed (or even slightly faster, though I tested with
progressive solid-color video only), and normally we use libavfilter's
vf_pullup anyway.
I didn't test the speed, but by default, this filter diverts to
libavfilter already. So this would help only if libavfilter is disabled,
or libavfilter doesn't have vf_noise (like on Libav). For these cases,
we still provide the (possibly but not necessarily) slower C
implementation of vf_noise.