We inserted these filters with fixed parameters, which was ok. But this
also left the image parameters unchanged for the filters further down
the chain and for the VO. For example, if the file requested rotation by
90°, we would insert a filter and rotate the video, but the VO would
still receive image parameters requesting rotation by 90°.
This wasn't a problem yet, but it could become one.
Fix this by letting the filters automatically pick up the image params.
The image params are reset on application. (We could probably also
always try to apply and reset image params in a filter, instead of
having special "auto" parameters. This would probably work, and video.c
would insert a "rotate=0" filter. But I'm afraid this would be confusing
and the current solution is cosmetically slightly nicer.)
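To illustrate the idea, here is a minimal, made-up sketch (not mpv's
actual structs or reconfig signature) of a filter that picks up the
rotation from its input params and resets it on its output params:

    /* Toy model only: the filter applies the rotation requested by the
     * incoming image params and clears the field on its output params,
     * so downstream filters and the VO see the rotation as already done. */
    struct img_params { int w, h, rotate; };

    static int reconfig(struct img_params *in, struct img_params *out,
                        int *applied_rotation)
    {
        *out = *in;
        *applied_rotation = in->rotate;  // pick up the requested rotation
        out->rotate = 0;                 // reset it on application
        return 0;
    }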
Unfortunately, the vf_stereo3d.c change turned out to be a big mess, but
once the "internal" filter is fully replaced with libavfilter, most of
this can be radically simplified.
Some filters exist only to create a specific lavfi graph. Allow these
filters to reset the graph exactly on reconfig, and allow them to modify
some image parameters too. Also make vf_lw_update_graph() behave like
vf_lw_set_graph() - they had a subtle difference when filter==NULL.
Useful for the following commit.
This was once central, but now it's almost unused. Only vf_divtc still
uses it for extremely weird and incomprehensible reasons. The use in
stream.c is trivial. Replace these, and remove mpbswap.h.
Until now, we always required the playback core to decode a new frame to
get more output from the filter. That seems to be completely
unnecessary, because filtered results may arrive before that.
Add a filter_out callback, and restructure the code such that it can
return any filtered frames, or block if it hasn't read at least one
frame.
In the worst case, bursts of input requests and output requests can
still happen. (This commit tries to reduce burst-like behavior, but it's
not entirely avoidable due to the nondeterministic nature of VS
threading.)
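A rough, self-contained sketch of the new pull logic (all names are
illustrative, not the actual mpv API):

    struct frame;
    struct filter {
        // May produce frames without new input; returns <0 on error.
        int (*filter_out)(struct filter *f);
        // Returns an already-filtered frame, or NULL if none is queued.
        struct frame *(*read_output)(struct filter *f);
    };

    static struct frame *next_output(struct filter *f)
    {
        struct frame *img = f->read_output(f);
        if (!img && f->filter_out && f->filter_out(f) >= 0)
            img = f->read_output(f);
        return img; // still NULL => the caller must decode and feed a frame
    }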
This is a change similar to 95bb0bb6.
Uses the new mechanism introduced in the previous commit.
Depending on the actual filter, this distributes CPU load more evenly
over time, although it probably doesn't matter.
Consider a filter which turns 1 frame into 2 frames (such as a
deinterlacer). Until now, we forced filters to produce all output frames
at once. This was done for simplicity.
Change the filter API such that a filter can produce frames
incrementally.
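As a toy illustration of what "incrementally" means here (the types and
helpers are made up), a 1-in/2-out filter can hold back its second frame
and hand it out on a later output call:

    struct frame;
    struct deint { struct frame *pending; };

    // Feed one input frame; emit the first output frame right away.
    static struct frame *deint_filter(struct deint *d, struct frame *in,
                                      struct frame *(*split)(struct frame *, int))
    {
        d->pending = split(in, 1);   // keep the second frame for later
        return split(in, 0);
    }

    // Polled by the core for more output; returns NULL when drained.
    static struct frame *deint_output(struct deint *d)
    {
        struct frame *out = d->pending;
        d->pending = NULL;
        return out;
    }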
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska modes to vf_stereo3d modes are
probably wrong, and some are missing.
Assuming that Matroska modes, vf_stereo3d input modes, and vf_stereo3d
output modes are all the same might be an oversimplification - we'll see.
See issue #1045.
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
Add two new script environment variables 'video_in_dw' and
'video_in_dh', representing the display resolution of the video.
Together with the video resolution, the sample aspect ratio can be
calculated in scripts.
Currently it's impossible to change the sample aspect ratio with a
single VapourSynth filter, since the '_SARNum' and '_SARDen' frame
properties from the output clip will be ignored. A following 'dsize'
filter is necessary for this purpose.
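For reference, the arithmetic a script would do (illustrative C, not mpv
code): with display size dw x dh and coded size w x h, the sample aspect
ratio is (dw * h) : (dh * w).

    #include <stdint.h>

    // Illustrative only: derive the sample aspect ratio as a fraction from
    // the display resolution (video_in_dw/video_in_dh) and the coded size.
    static void calc_sar(int dw, int dh, int w, int h,
                         int64_t *sar_num, int64_t *sar_den)
    {
        *sar_num = (int64_t)dw * h;
        *sar_den = (int64_t)dh * w;
    }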
Use OPT_KEYVALUELIST() for all places where AVOptions are directly set
from mpv command line options. This allows escaping values, better
diagnostics (also no more "pal"), and somehow reduces code size.
Remove the old crappy option parser (av_opts.c).
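The core of the new approach looks roughly like this (the iteration
helper is a sketch, not mpv's actual function; av_opt_set() and
AV_OPT_SEARCH_CHILDREN are real libavutil API):

    #include <libavutil/opt.h>

    // Apply a NULL-terminated key/value list to an AVOptions-enabled object.
    // libavutil itself reports unknown keys and invalid values.
    static int apply_avopts(void *av_obj, char **kv)
    {
        for (int n = 0; kv && kv[n]; n += 2) {
            int r = av_opt_set(av_obj, kv[n], kv[n + 1], AV_OPT_SEARCH_CHILDREN);
            if (r < 0)
                return r;   // caller logs which key/value was rejected
        }
        return 0;
    }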
The final goal is that all mp_msg calls produce complete lines. We want
this because otherwise, race conditions could corrupt the terminal
output, and it's inconvenient for the client API too. This commit works
towards this goal. There's still code where this isn't fixed yet, though.
When seeking, we violently destroy the filter, because vapoursynth has
no proper API for terminating a video with unknown frame count. This
looks like an error to vapoursynth, and the error is returned via the
frame callbacks. The bug is that we remember this error state across
reinitialization, so on the first filter call after reinitialization, we
thought filtering the current frame failed. This caused a shift by 1
frame on each seek.
CC: @mpv-player/stable
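A hypothetical illustration of the fix described above (the struct and
function names are made up): the error flag set by the frame callbacks
during the forced teardown must not survive reinitialization.

    #include <stdbool.h>

    struct vs_state { bool failed; /* set when a frame callback errors */ };

    static void reinit_after_seek(struct vs_state *s,
                                  void (*destroy)(struct vs_state *),
                                  int (*init)(struct vs_state *))
    {
        destroy(s);         // VapourSynth reports the abrupt end as an error
        s->failed = false;  // the fix: don't carry it into the new run
        init(s);
    }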
Make sure every video filter has valid parameters for input and output.
(This also ensures we don't take possibly invalid decoder output, or
feed invalid decoder/filter output to VOs.)
Also, the updated image size check now (almost) works like the
corresponding check in FFmpeg.
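A sketch of the kind of validity check meant here (the exact bounds are
illustrative; FFmpeg's equivalent is av_image_check_size()):

    #include <limits.h>
    #include <stdbool.h>

    static bool image_params_valid(int imgfmt, int w, int h)
    {
        if (imgfmt <= 0 || w <= 0 || h <= 0)
            return false;                   // unset format or empty size
        if (w > INT_MAX / 128 || h > INT_MAX / 128)
            return false;                   // guard against overflow in
                                            // stride/plane calculations
        return true;
    }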
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
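The resulting call-site pattern, as a made-up example around
mp_image_alloc() (one of the listed functions):

    #include "video/mp_image.h"

    // Allocate an output image and give up gracefully (instead of
    // aborting) if the allocation fails.
    static struct mp_image *process(struct mp_image *in)
    {
        struct mp_image *out = mp_image_alloc(in->imgfmt, in->w, in->h);
        if (!out)
            return NULL;   // frame is skipped; the core treats it like EOF
        mp_image_copy(out, in);
        mp_image_copy_attributes(out, in);
        return out;
    }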
This affects packed RGB formats up to 16 bits per pixel. The old mplayer
names used LSB-to-MSB order, while FFmpeg (and some other libraries) use
MSB-to-LSB.
Nothing should change with this commit, i.e. no bit order or endian bugs
should be added or fixed. In some cases, the name stays the same, even
though the byte order changes, e.g. RGB8->BGR8 and BGR8->RGB8, and this
affects the user-visible names too; this might cause confusion.
These suffixes are annoying when they're redundant, so strip them
automatically. On little endian machines, always strip the "le" suffix,
and on big endian machines vice versa (although I don't think anyone
ever tried to run mpv on a big endian machine).
Since pixel format strings are returned by a certain function and we
can't just change static strings, use a trick to pass a stack buffer
transparently. But this also means the string can't be permanently
stored by the caller, so vf_dlopen.c has to be updated. There seems
to be no other case where this is done, though.
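The trick looks roughly like this (names are illustrative, not the exact
mpv declarations): a macro passes a compound literal as the buffer, so
the returned pointer only lives until the end of the enclosing block.

    #include <stddef.h>

    // The function formats the name into a caller-provided buffer...
    char *imgfmt_to_name_buf(char *buf, size_t buf_size, int fmt);

    // ...and the macro supplies a stack buffer (a C99 compound literal),
    // so callers can use the result like a temporary string, but must not
    // store the pointer permanently.
    #define imgfmt_to_name(fmt) imgfmt_to_name_buf((char[32]){0}, 32, (fmt))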
While I'm not very fond of "const", it's important for declarations
(it decides whether a symbol is emitted in a read-only or read/write
section). Fix all these cases, so we have writeable global data only
when we really need it.
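A trivial example of the difference (the data is made up):

    // Without const this table lands in a writable data section; with
    // const the compiler can place it in read-only memory (.rodata).
    static const char *const codec_names[] = {"h264", "hevc", "vp9"};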
In addition to removing the global variables, this makes the options
more uniform. --ssf-... becomes --sws-..., and --sws becomes
--sws-scaler. For --sws-scaler, use choices instead of magic integer
values.
It could in theory happen that the filter loop will enter a blocking
wait, even though it could make progress by emptying the list of
already-filtered images. I'm not quite sure if this could actually cause
a real issue - probably not.
VapourSynth won't just filter multiple frames at once on its own. You
have to request multiple frames at once manually. This is what this
commit introduces: a sub-option controls how many frames will be
requested at once. This also changes the semantics of the maxbuffer sub-
option, now renamed to buffered-frames.
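Sketch of the batched requests (the surrounding context is made up;
getFrameAsync() is the VapourSynth API call for asynchronous frame
requests):

    #include <VapourSynth.h>

    // Request 'buffered_frames' frames up front so VapourSynth's worker
    // threads can filter in parallel; results arrive via 'frame_done'.
    static void request_batch(const VSAPI *vsapi, VSNodeRef *node,
                              int next_frame, int buffered_frames,
                              VSFrameDoneCallback frame_done, void *ctx)
    {
        for (int n = next_frame; n < next_frame + buffered_frames; n++)
            vsapi->getFrameAsync(n, node, frame_done, ctx);
    }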
This was a minor code duplication between vf_vdpaupp.c and vo_vdpau.c.
(In theory, we could always require using vf_vdpaupp with vo_vdpau, but
I think it's better if vo_vdpau can work standalone.)
vf_fix_img_params() takes care of overwriting image parameters that are
normally not set correctly by filters. But this makes no sense for input
images. So instead, check that the input is correct.
It still has to be done for the first input image, because that's used
to handle some overrides (see video_reconfig_filters()).
These replace vf_read_output_frame(), although we still emulate that
function. This change is preparation for another commit (and is
basically just there to reduce the diff and the noise in that commit).
Basically, if we feed the filter a new image even after the EOF state
has been reached (e.g. because the input stream "recovered"), we want
the filter to restart, instead of returning an error forever.
Some non-deinterlacing filters (potentially denoising) also use
additional frames for filtering. The vdpau docs suggest providing at
least 1 future and 2 past _fields_, which means we need to provide 1
past frame (the future field is already the other field of the current
frame, i.e. both fields are in the same frame).
We can easily achieve this by buffering an additional frame in the non-
deint case.
This factors out some code from vo_vdpau.c, especially deinterlacing
handling. The intention is to use this for vo_vdpau.c to make the logic
significantly easier, and to use it for vo_opengl (gl_hwdec_vdpau.c) to
allow selecting deinterlace and postprocessing modes.
As of this commit, the filter actually does nothing, since both vo_vdpau
and vo_opengl treat the generated images as normal vdpau images. This
will change in the following commits.