Commit Graph

18 Commits

Author SHA1 Message Date
Ashyni d32f1aac3f af/vf-command: add ability to target a specific lavfi filter
fixes: #11180
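For illustration, the new target argument can be used through the
client API roughly like this (a minimal libmpv sketch, not part of the
commit; the label "f", the lavfi filter "overlay" and the parameter
values are placeholders):

```
/* Hypothetical example: address one lavfi filter ("overlay") inside
 * the user filter labeled "f". The trailing target argument is what
 * this commit adds. Assumes an already initialized mpv_handle. */
#include <mpv/client.h>

static int send_vf_command(mpv_handle *mpv)
{
    /* vf-command <label> <command> <argument> [<target>] */
    const char *args[] = {
        "vf-command", "f", "x", "100", "overlay", NULL
    };
    return mpv_command(mpv, args);
}
```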
2023-10-05 11:41:09 +02:00
Dudemanguy a177fb6188 vf_vapoursynth: save display resolution as a variable
mpv has a generic method for getting the display resolution, so we can
save it in vf_vapoursynth without too much trouble. Unfortunately, the
resolution won't actually be available in many cases (like my own)
because the windowing backend doesn't actually know it yet. It looks
like at least Windows always returns the default monitor (maybe we
should do something similar for x11 and wayland), so there's at least
some value. Of course, this still has a bunch of pitfalls, like not
being able to cope with multi-monitor setups at all, but so does
display_fps.
As an aside, the vapoursynth API this uses apparently requires R26 (an
ancient version anyway), so bump the build to compensate for this.
Fixes #11510
2023-08-13 19:58:20 +00:00
Harri Nieminen 292a5868cb various: fix typos
Found by codespell
2023-03-28 19:29:44 +00:00
Philip Langdale 989d873d6e filters: lavfi: allow hwdec_interop selection for filters
Today, lavfi filters are provided a hw_device from the first
hwdec_interop that was loaded, regardless of whether it's the right one
or not. In most situations where a hardware-based filter is used, we
need more control over the device.

In this change, a `hwdec_interop` option is added to the lavfi wrapper
filter configuration and this is used to pick the correct hw_device to
inject into the filter or graph (in the case of a graph, all filters
get the same device).

Note that this requires the use of the explicit lavfi syntax to allow
for the extra configuration.

e.g.:

```
mpv --vf=hwupload
```

becomes

```
mpv --vf=lavfi=[hwupload]:hwdec_interop=cuda-nvdec
```

or

```
mpv --vf=lavfi-bridge=[hwupload]:hwdec_interop=cuda-nvdec
```
2022-09-21 09:39:34 -07:00
Philip Langdale e50db42927 vo: hwdec: do hwdec interop lookup by image format
It turns out that it's generally more useful to look up hwdecs by image
format, rather than device type. In the situations where we need to
find one, we generally know the image format we're dealing with. Doing
this avoids us having to create mappings from image format to device
type.

The most significant part of this change is filling in the image format
for the various hw interops. There is a hw_imgfmt field today, but
only a couple of the interops fill it in, and that seems to be because
we've never actually used this piece of metadata before. Well, now we
have a good use for it.
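The lookup itself boils down to something like this (an illustrative
C sketch with made-up names, not mpv's actual structs):

```
#include <stddef.h>

enum imgfmt { IMGFMT_NONE, IMGFMT_CUDA, IMGFMT_VAAPI, IMGFMT_D3D11 };

struct hwdec_interop {
    const char *name;
    enum imgfmt hw_imgfmt;  /* hw image format this interop handles */
};

static const struct hwdec_interop interops[] = {
    { "cuda",  IMGFMT_CUDA  },
    { "vaapi", IMGFMT_VAAPI },
    { "d3d11", IMGFMT_D3D11 },
};

/* Keyed on the image format we already have in hand; no separate
 * image-format-to-device-type mapping needed. */
static const struct hwdec_interop *find_interop(enum imgfmt fmt)
{
    for (size_t i = 0; i < sizeof(interops) / sizeof(interops[0]); i++) {
        if (interops[i].hw_imgfmt == fmt)
            return &interops[i];
    }
    return NULL;
}
```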
2022-09-21 09:39:34 -07:00
wm4 2c7139753d filter: add filter priority thing
This is a kind of bad hack (with bad implementation) to paint over other
problems of the filter system. The main problem is that some filters
might be left with pending frames if the filter runner is "paused",
which we don't want. To be used in a later commit.
2020-08-28 19:57:23 +02:00
wm4 ab4e0c42fb audio: redo video-sync=display-adrop
This mode drops or repeats audio data to adapt to video speed, instead
of resampling it or such. It was added to deal with SPDIF. The
implementation was part of fill_audio_out_buffers() - the entire
function is something whose complexity exploded in my face, and which I
want to clean up, and this is hopefully a first step.
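The core decision is simple enough to sketch (illustrative C only,
not the actual filter logic):

```
/* drift: audio minus video position in seconds; frame_dur: duration
 * of one audio frame in seconds. Dropping skips content, so the audio
 * timeline catches up; repeating replays content, so it falls back. */
enum adrop_action { ADROP_PASS, ADROP_DROP, ADROP_REPEAT };

static enum adrop_action adrop_decide(double drift, double frame_dur)
{
    if (drift < -frame_dur)
        return ADROP_DROP;    /* audio behind video: skip a frame */
    if (drift > frame_dur)
        return ADROP_REPEAT;  /* audio ahead of video: repeat a frame */
    return ADROP_PASS;
}
```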

Put it in a filter, and mess with the shitty glue code. It's all sort of
roundabout and illogical, but that can be rectified later. The important
part is that it works much like the resample or scaletempo filters.

For PCM audio, this does not work on samples anymore. This makes it much
worse. But for PCM you can use saner mechanisms that sound better. Also,
something about PTS tracking is wrong. But not wasting more time on
this.
2020-05-23 04:04:46 +02:00
wm4 3b4641a5a9 filter: minor cosmetic naming issue
Just putting some more lipstick on the pig, maybe it looks a bit nicer
now.
2020-03-08 19:38:10 +01:00
wm4 8a1bd15216 filter: add functions to suspend filtering temporarily
Filtering is integrated into an event loop, which is something the
filter API user provides. To make interacting with the event loop
easier, and in particular to keep filtering from blocking event
handling, add functions the event loop code can use to suspend
filtering.

While we cannot actually suspend a single filter, it's pretty easy to
suspend the filter graph run loop itself, which is responsible for
selecting which filter to run next.
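In sketch form, the mechanism is just a flag that the run loop checks
before dispatching the next filter (illustrative C, names made up):

```
#include <pthread.h>
#include <stdbool.h>

struct graph {
    pthread_mutex_t lock;
    pthread_cond_t wakeup;
    bool suspended;
};

static void graph_suspend(struct graph *g)
{
    pthread_mutex_lock(&g->lock);
    g->suspended = true;
    pthread_mutex_unlock(&g->lock);
}

static void graph_resume(struct graph *g)
{
    pthread_mutex_lock(&g->lock);
    g->suspended = false;
    pthread_cond_signal(&g->wakeup);
    pthread_mutex_unlock(&g->lock);
}

/* Called by the run loop before it picks the next filter to run. */
static void graph_wait_if_suspended(struct graph *g)
{
    pthread_mutex_lock(&g->lock);
    while (g->suspended)
        pthread_cond_wait(&g->wakeup, &g->lock);
    pthread_mutex_unlock(&g->lock);
}
```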

This commit shouldn't change behavior at all, but the functions will be
used in later commits.
2020-03-05 22:00:50 +01:00
wm4 485f335b69 filter: add async queue filter
This is supposed to enable communication between filter graphs on
separate threads. Having multiple threads only makes sense if they can
run concurrently with each other, which requires such an asynchronous
queue as a building block. (Probably.)

The basic idea is that you have two independent filters, which can
each be part of separate filter graphs, but which communicate in one
direction through an explicit queue. This is rather similar to unix
pipes. Just like unix pipes, the queue is limited in size, so that
some data flow control is still enforced, and runaway memory usage is
avoided.

This implementation is pretty dumb. In theory, you could avoid
waking up the filter graphs in quite a lot of situations. For example,
you don't need to wake up the consumer filter if there are already
frames queued. Also, you could add "watermarks" that set a threshold at
which producer or consumer should be woken up to produce/consume more
frames (this would generally serve to "batch" multiple frames at once,
instead of performing high-frequency wakeups). But this is hard, so the
code is dumb. (I just deleted all related code when I still got
situations where wakeups were lost.)
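For illustration, the dumb version amounts to a classic bounded ring
buffer with unconditional wakeups (sketch only, not the actual
f_async_queue):

```
#include <pthread.h>

#define QUEUE_SIZE 8

struct frame;  /* opaque payload */

struct async_queue {
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
    struct frame *ring[QUEUE_SIZE];
    int head, count;
};

/* Producer side: blocks while the queue is full, like write() on a
 * full pipe. */
static void queue_push(struct async_queue *q, struct frame *f)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_SIZE)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->ring[(q->head + q->count) % QUEUE_SIZE] = f;
    q->count++;
    pthread_cond_signal(&q->not_empty);  /* dumb: always wake consumer */
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: blocks while the queue is empty, like read() on an
 * empty pipe. */
static struct frame *queue_pop(struct async_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    struct frame *f = q->ring[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    pthread_cond_signal(&q->not_full);   /* dumb: always wake producer */
    pthread_mutex_unlock(&q->lock);
    return f;
}
```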

This is actually salvaged and modified from a much older branch I had
lying around. It will be used in the next commit.
2020-02-29 21:52:00 +01:00
wm4 f29623786b filter: decide how multi-threading is supposed to work
Instead of vague ideas about making different filter graphs on
different threads interact directly, there is no direct support for
this. Instead, helpers are required (such as the one added with the
next commit).

Document it. Different root filters (i.e. separate filter graphs) are
now considered to be part of separate threads, so assert() if they're
found to accidentally interact.
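The check can be as simple as this (illustrative sketch):

```
#include <assert.h>
#include <pthread.h>

struct root_filter {
    pthread_t owner;  /* thread this filter graph belongs to */
};

static void root_check_thread(struct root_filter *root)
{
    assert(pthread_equal(root->owner, pthread_self()));
}
```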
2020-02-29 21:49:14 +01:00
wm4 4b32d51b96 f_output_chain: log status of auto filters
Just so users don't think the auto filters do anything when they don't
actually insert any filters.
2018-04-29 02:21:32 +03:00
wm4 ff24285eb1 video: pass through container fps to filters
This means vf_vapoursynth doesn't need a hack to work around the filter
code, and libavfilter filters now actually get the frame_rate field on
input pads set.

The libavfilter doxygen says the frame_rate field is only to be set if
the frame rate is known to be constant, and uses the word "must" (which
probably means they really mean it?) - but ffmpeg.c sets the field to
mere guesses anyway, and it looks like this normally won't lead to
problems.
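On the libavfilter side, handing the rate to a buffer source looks
roughly like this (a sketch assuming an already-created buffersrc
AVFilterContext):

```
#include <libavfilter/buffersrc.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>

static int set_frame_rate(AVFilterContext *buffersrc, AVRational fps)
{
    AVBufferSrcParameters *par = av_buffersrc_parameters_alloc();
    if (!par)
        return AVERROR(ENOMEM);
    par->frame_rate = fps;  /* meaningful only if the rate is constant */
    int ret = av_buffersrc_parameters_set(buffersrc, par);
    av_free(par);
    return ret;
}
```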
2018-04-19 23:22:48 +02:00
wm4 3ffa70e2da filter: extend documentation comments
Add more explanations, and also fix some blatantly wrong things.
2018-02-13 17:45:29 -08:00
wm4 debc17663d filter: add/use a convenience function
I guess this is generally useful for filters which buffer data
internally.
2018-02-03 05:01:28 -08:00
wm4 6d36fad83c video: make decoder wrapper a filter
Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes
a source filter. vd.h mostly disappears, because mp_filter takes care of
the dataflow, but its remains are in struct mp_decoder_fns.

One goal is to simplify dataflow by letting the filter framework handle
it (or more accurately, using its conventions). One result is that the
decode calls disappear from video.c, because we simply connect the
decoder wrapper and the filter chain with mp_pin_connect().

Another goal is to eventually remove the code duplication between the
audio and video paths for this. This commit prepares for this by trying
to make f_decoder_wrapper.c extensible, so it can be used for audio as
well later.

Decoder framedropping changes a bit. It doesn't seem to be worse than
before, and it's an obscure feature, so I'm content with its new state.
Some special code that was apparently meant to avoid dropping too many
frames in a row is removed, though.

I'm not sure how the source code tree should be organized. For one,
video/decode/vd_lavc.c is the only file in its directory, which is a bit
annoying.
2018-01-30 03:10:27 -08:00
wm4 b9f804b566 audio: rewrite filtering glue code
Use the new filtering code for audio too.
2018-01-30 03:10:27 -08:00
wm4 76276c9210 video: rewrite filtering glue code
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.

This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.

vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that cannot be inserted by the user manually.

f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c (which is
conceptually easy, but a big mess due to the data flow changes).

The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
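To illustrate the EOF-as-a-frame convention (hypothetical types, not
mpv's actual ones):

```
enum frame_type { FRAME_NONE, FRAME_VIDEO, FRAME_EOF };

struct frame {
    enum frame_type type;
    void *data;  /* payload for FRAME_VIDEO */
};

/* One filter step: EOF arrives as a frame, triggers draining of any
 * internal buffers, and is then forwarded downstream exactly once. */
static struct frame filter_process(struct frame in)
{
    if (in.type == FRAME_EOF) {
        /* flush internally buffered frames first (omitted), then
         * propagate the EOF frame itself */
        return in;
    }
    /* ... normal per-frame processing ... */
    return in;
}
```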

Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)

In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.

vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.

The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.

Exporting VO capabilities is still a big mess (mp_stream_info stuff).

The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.

f_async_queue is unused.
2018-01-30 03:10:27 -08:00