Commit Graph

167 Commits

Author SHA1 Message Date
wm4 8d965a1bfb options: change how option range min/max is handled
Before this commit, option declarations used M_OPT_MIN/M_OPT_MAX (and
some other identifiers based on these) to signal whether an option had
min/max values. Remove these flags, and instead make options use a range
implicitly on the condition that min<max is true.

This requires care in all cases when only M_OPT_MIN or M_OPT_MAX were
set (instead of both). Generally, the commit replaces all these
instances with using DBL_MAX/DBL_MIN for the "unset" part of the range.

This also happens to fix some cases where you could pass over-large
values to integer options, which were silently truncated, but now cause
an error.

This commit has some higher potential for regressions.
2020-03-13 17:34:46 +01:00
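
As an illustration of the implicit-range rule described in the commit above, a
minimal C sketch (the struct and helpers are hypothetical, not mpv's actual
m_option code):

    #include <float.h>
    #include <stdbool.h>

    // Hypothetical option descriptor; real m_option structs have more fields.
    struct num_opt {
        double min, max;   // "unset" sides use sentinel extremes (per the commit)
    };

    // The range is considered active only if min < max.
    static bool range_active(const struct num_opt *o)
    {
        return o->min < o->max;
    }

    // Over-large values are now rejected instead of silently truncated.
    static bool value_ok(const struct num_opt *o, double v)
    {
        return !range_active(o) || (v >= o->min && v <= o->max);
    }
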
wm4 eb381cbd4b options: split m_config.c/h
Move the "old" mostly command line parsing and option management related
code to m_config_frontend.c/h. Move the the code that enables other part
of the player to access options to m_config_core.c/h. "frontend" is out
of lack of creativity for a better name.

Unfortunately, the separation isn't quite clean yet. m_config_frontend.c
still references some m_config_core.c implementation details, and
m_config_new() is even left in m_config_core.c for now. There are some odd
functions that should be removed as well (marked as "Bad functions").
Fixing these things requires more changes and will be done separately.

struct m_config is left with the current name to reduce diff noise.
Also, since there are a _lot_ source files that include m_config.h, add
a replacement m_config.h that "redirects" to m_config_core.h.
2020-03-13 16:50:27 +01:00
wm4 3006c4ba5d options: remove min/max support from strings and string lists
We don't really use this anymore. Only --playlist and vf_lavfi filter
names did (to error on empty parameters), but it doesn't really matter.
2020-03-13 16:50:27 +01:00
wm4 3b4641a5a9 filter: minor cosmetic naming issue
Just putting some more lipstick on the pig, maybe it looks a bit nicer
now.
2020-03-08 19:38:10 +01:00
wm4 c5eb2f2ac4 f_decoder_wrapper: make decoder thread responsive while filling queue
The mp_filter_run() invocation blocks as long as the demuxer provides
packets and the queue can be filled. That means it may block quite a
long time if the decoder queue size is large (since we use libavcodec in
a blocking manner; it regrettably does not have an async. API).

This made the main thread freeze in certain situations, because it has
to wait on the decoder thread.

Contrary to what I suspected (I wrote that code, but that doesn't mean I know
how the hell it works), this did not freeze during seeking: seek resets
flushed the queue, which also prevented the decoder thread from adding
more frames to it, thus stopping decoding and responding to the main
thread in time. But it does fix the issue that exiting the player waited
for the decoder to finish filling the queue when stopping playback.
(This happened because it called mp_decoder_wrapper_set_play_dir()
before any resets. Related to the somewhat messy way play_dir is
generally set. But it affects all "synchronous" decoder wrapper API
calls.)

This uses pretty weird mechanisms in filter.h and dispatch.h. The
resulting durr hurr interactions are probably hard to follow, and this
whole thing is a sin. On the other hand, this is a _very_ user visible
issue, and I'm happy that it can be fixed in such an unintrusive way.
2020-03-05 22:00:50 +01:00
wm4 12375f67b4 f_decoder_wrapper: use proper log prefix for all involved filters
p->log has a prefix set that gives some context and distinguishes audio
and video decoders. The "public" wrapper filter didn't use it, which is
a regression since commit a3823ce0e0.
2020-03-05 22:00:50 +01:00
wm4 8a1bd15216 filter: add functions to suspend filtering temporarily
Filtering is integrated into an event loop, which is something the
filter API user provides. To make interacting with the event loop
easier, and in particular to avoid filtering blocking event handling,
add functions with which the event loop code can suspend filtering.

While we cannot actually suspend a single filter, it's pretty easy to
suspend the filter graph run loop itself, which is responsible for
selecting which filter to run next.

This commit shouldn't change behavior at all, but the functions will be
used in later commits.
2020-03-05 22:00:50 +01:00
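
A minimal sketch of the idea of suspending the run loop rather than individual
filters (names, locking scheme, and helpers are made up; this is not the
actual API added by the commit):

    #include <pthread.h>
    #include <stdbool.h>

    struct mp_filter;
    // Hypothetical helpers provided by the graph implementation.
    struct mp_filter *pick_next_ready_filter(void *graph);
    void run_one_filter(struct mp_filter *f);

    struct graph {
        pthread_mutex_t lock;
        bool suspended;       // set by the event loop to pause filtering
    };

    // The run loop selects filters only while not suspended; the event loop
    // can therefore do its own work without being blocked by filtering.
    void graph_run(struct graph *g)
    {
        pthread_mutex_lock(&g->lock);
        while (!g->suspended) {
            struct mp_filter *f = pick_next_ready_filter(g);
            if (!f)
                break;
            pthread_mutex_unlock(&g->lock);
            run_one_filter(f);              // actual filtering runs unlocked
            pthread_mutex_lock(&g->lock);
        }
        pthread_mutex_unlock(&g->lock);
    }
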
wm4 c1ff54e2e4 f_decoder_wrapper: enable DR and hwdec with --vd-queue-enable
This was forgotten.

Hardware decoding typically breaks immediately, because many hw decoding
APIs require allocating all surfaces in advance (and/or libavcodec was
not made flexible enough to add new surfaces later). If the queue is
large enough, it will run out of surfaces, fail decoding, and fall back
to software decoding. We consider this the user's fault.

--hwdec-extra-frames can be used to avoid this, if you have enough GPU
memory, and the needed number of frames is lower than the arbitrary
mpv-set maximum limit of that option.
2020-03-05 22:00:50 +01:00
wm4 872794be90 f_decoder_wrapper: make most queue options runtime changeable
Why not.
2020-03-01 00:38:34 +01:00
wm4 ae1aeab7aa options: make decoder options local to decoder wrapper
Instead of having f_decoder_wrapper create its own copy of the entire
mpv option tree, create a struct local to that file and move all used
options to there.

movie_aspect is used by the "video-aspect" deprecated property code. I
think it's probably better not to remove the property yet, but
fortunately it's easy to work around without needing special handling
for this option or so.

correct_pts is used to prevent use of hr-seek in playloop.c. Ignore
that, if you use --no-correct-pts you're asking for trouble anyway. This
is the only behavior change.
2020-03-01 00:28:09 +01:00
wm4 a3823ce0e0 player: add optional separate video decoding thread
See manpage additions. This has been a topic in MPlayer/mplayer2/mpv
since forever. But since libavcodec multi-threaded decoding was added,
I've always considered this pointless. libavcodec requires you to
"preload" it with packets, and then you can pretty much avoid blocking
on it, if decoding is fast enough.

But in some cases, a decoupled decoder thread _might_ help. Users have
for example come up with cases where decoding video in a separate
process and piping it as raw video to mpv helped. (Or my memory is
false, and it was about vapoursynth filtering, who knows.) So let's just
see whether this helps with anything.

Note that this would have been _much_ easier if libavcodec had an
asynchronous (or rather, non-blocking) API. It could probably have
easily gained that with a small change to its multi-threading code and a
small extension to its API, but I guess not.

Unfortunately, this uglifies f_decoder_wrapper quite a lot. Part of this
is due to annoying corner cases like legacy frame dropping and hardware
decoder state. These could probably be prettified later on.

There is also a change in playloop.c: this is because there is a need to
coordinate playback resets between demuxer thread, decoder thread, and
playback logic. I think this SEEK_BLOCK idea worked out reasonably well.

There are still a number of problems. For example, if the demuxer cache
is full, the decoder thread will simply block hard until the output
queue is full, which interferes with seeking. Could also be improved
later. Hardware decoding will probably die in a fire, because it will
run out of surfaces quickly. We could reduce the queue to size 1...
maybe later. We could update the queue options at runtime easily, but
currently I'm not going to bother.

I could only have put the lavc wrapper itself on a separate thread. But
there is some annoying interaction with EDL and backward playback shit,
and also you would have had to loop demuxer packets through the
playloop, so this sounded less annoying.

The food my mother made for us today was delicious.

Because audio uses the same code, this is also enabled for audio (even if
completely pointless).

Fixes: #6926
2020-02-29 21:52:00 +01:00
wm4 485f335b69 filter: add async queue filter
This is supposed to enable communication between filter graphs on
separate threads. Having multiple threads makes only sense if they can
run concurrently with each other, which requires such an asynchronous
queue as a building block. (Probably.)

The basic idea is that you have two independent filters, which can be
each part of separate filter graphs, but which communicate into one
direction with an explicit queue. This is rather similar to unix pipes.
Just like unix pipes, the queue is limited in size, so that some data flow
control is still enforced, and runaway memory usage is avoided.

This implementation is pretty dumb. In theory, you could avoid
waking up the filter graphs in quite a lot of situations. For example,
you don't need to wake up the consumer filter if there are already
frames queued. Also, you could add "watermarks" that set a threshold at
which producer or consumer should be woken up to produce/consume more
frames (this would generally serve to "batch" multiple frames at once,
instead of performing high-frequency wakeups). But this is hard, so the
code is dumb. (I just deleted all related code when I still got
situations where wakeups were lost.)

This is actually salvaged and modified from a much older branch I had
lying around. It will be used in the next commit.
2020-02-29 21:52:00 +01:00
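
To illustrate the pipe-like flow control, here is a heavily simplified bounded
queue in C. The real async queue integrates with filter wakeups and never
blocks a filter thread; this blocking variant only shows the size limit and
the producer/consumer hand-off:

    #include <pthread.h>

    #define QUEUE_MAX 16

    struct frame;   // opaque payload

    struct async_queue {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        struct frame *items[QUEUE_MAX];
        int num;
    };

    // Producer: waits while the queue is full, which is what bounds memory
    // usage and enforces data flow control.
    void queue_push(struct async_queue *q, struct frame *f)
    {
        pthread_mutex_lock(&q->lock);
        while (q->num == QUEUE_MAX)
            pthread_cond_wait(&q->wakeup, &q->lock);
        q->items[q->num++] = f;
        pthread_cond_broadcast(&q->wakeup);   // dumb: always wake the other side
        pthread_mutex_unlock(&q->lock);
    }

    // Consumer: waits until at least one frame is queued.
    struct frame *queue_pop(struct async_queue *q)
    {
        pthread_mutex_lock(&q->lock);
        while (q->num == 0)
            pthread_cond_wait(&q->wakeup, &q->lock);
        struct frame *f = q->items[0];
        for (int n = 1; n < q->num; n++)
            q->items[n - 1] = q->items[n];
        q->num--;
        pthread_cond_broadcast(&q->wakeup);
        pthread_mutex_unlock(&q->lock);
        return f;
    }
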
wm4 f29623786b filter: decide how multi-threading is supposed to work
Instead of vague ideas about making different filter graphs on different
threads interact directly, there is now no direct support for this. Instead,
helpers are required (such as the one added with the next commit).

Document it. Different root filters (i.e. separate filter graphs) are
now considered to be part of separate threads, so assert() if they're
found to accidentally interact.
2020-02-29 21:49:14 +01:00
wm4 84fe9c2a47 filter: fix possibly lost async wakeups
mp_filter_mark_async_progress() can asynchronously mark a filter for
processing, without waking up the filter thread. (It's some sort of
idiotic micro-optimization I guess?) But since it sets async_pending
without doing the wakeup, a mp_filter_wakeup() after this will do
nothing, and the wakeup is lost. Fix it by checking for the needed
wakeup separately.

Fortunately, nothing used this function yet, so there is no impact.
2020-02-29 20:12:31 +01:00
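
A small model of the bug and one possible shape of the fix (field names are
invented; locking is omitted for brevity):

    #include <stdbool.h>

    struct filt {
        bool async_pending;   // "this filter needs processing"
        bool wakeup_sent;     // hypothetical: was a wakeup actually delivered?
    };

    void deliver_wakeup(struct filt *f);   // e.g. signals the filter thread

    // Buggy pattern: mark_async_progress() sets async_pending without waking
    // the thread, so this later wakeup() sees the flag already set and skips
    // the wakeup entirely -- the wakeup is lost.
    void wakeup_buggy(struct filt *f)
    {
        if (!f->async_pending) {
            f->async_pending = true;
            deliver_wakeup(f);
        }
    }

    // Fixed pattern: whether a wakeup is needed is tracked separately from
    // async_pending, so a pending-but-unnotified filter still gets woken up.
    void wakeup_fixed(struct filt *f)
    {
        f->async_pending = true;
        if (!f->wakeup_sent) {
            f->wakeup_sent = true;
            deliver_wakeup(f);
        }
    }
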
wm4 b0b5de3063 f_decoder_wrapper: replace most public fields with setters/getters
I may (optionally) move decoding to a separate thread in a future
change. It's a bit attractive to move the entire decoder wrapper to
there, so if the demuxer has a new packet, it doesn't have to wake up
the main thread, and can directly wake up the decoder. (Although that's
bullshit, since there's a queue in between, and libavcodec's
multi-threaded decoding plays cross-threads ping pong with packets
anyway. On the other hand, the main thread would still have to shuffle
the packets around, so whatever, just seems like better design.)

As preparation, there shouldn't be any mutable state exposed by the
wrapper. But there's still a lot of corner-caseish crap, so
just use setters/getters for them. This recorder thing will inherently
not work, so it'll have to be disabled if threads are used.

This is a bit painful, but probably still the right thing. Like
speculatively pulling teeth.
2020-02-29 01:23:20 +01:00
wm4 a4b12c54b6 f_lavfi: don't propagate filter failure if creation fails
Filters that fail to create are not supposed to do this. Generally it
should happen in process() only.

This fixes the previous commit. If a filter could not be created, it
"trashed" the wrapper filter with the failure. (Even if the wrapper were
to handle were to handle failures of sub-filter, it couldn't handle init
failure because it cannot call mp_filter_set_error_handler() before the
newly created filter is actually returned.)

Fixes: #7465 (attempt 2)
2020-02-16 16:21:03 +01:00
wm4 a19d918816 f_auto_filters: always fall back to hw-download+yadif if no hw deint filter
If hw decoding is used, but no hw deinterlacer is available, even though
we expect that it is present, fall back to using hw-download and yadif
anyway. Until now, it was over if the hw filter was somehow missing; for
example, yadif_cuda apparently requires a full Cuda SDK, so it can be
missing, even if nvdec is available. (Whether this particular case works
was not tested with this commit.)

Fixes: #7465
2020-02-16 15:28:57 +01:00
wm4 7d11eda72e Remove remains of Libav compatibility
Libav seems rather dead: no release for 2 years, no new git commits in
master for almost a year (with one exception ~6 months ago). From what I
can tell, some developers resigned themselves to the horrifying idea to
post patches to ffmpeg-devel instead, while the rest of the developers
went on to greener pastures.

Libav was a better project than FFmpeg. Unfortunately, FFmpeg won,
because it managed to keep the name and website. Libav was pushed more
and more into obscurity: while there was initially a big push for Libav,
FFmpeg just remained "in place" and visible for most people. FFmpeg was
slowly draining all manpower and energy from Libav. A big part of this
was that FFmpeg stole code from Libav (regular merges of the entire
Libav git tree), making it some sort of Frankenstein mirror of Libav,
think decaying zombie with additional legs ("features") nailed to it.
"Stealing" surely is the wrong word; I'm just aping the language that
some of the FFmpeg members used to use. All that is in the past now, I'm
probably the only person left who is annoyed by this, and with this
commit I'm putting this decade long problem finally to an end. I just
thought I'd express my annoyance about this fucking shitshow one last
time.

The most intrusive change in this commit is the resample filter, which
originally used libavresample. Since the FFmpeg developer refused to
enable libavresample by default for drama reasons, and the API was
slightly different, the filter used some big preprocessor mess to
make it compatible with libswresample. All that falls away now. The
simplification to the build system is also significant.
2020-02-16 15:14:55 +01:00
wm4 e54ebaec52 f_decoder_wrapper, sd_add: accept "null" codec
This is for easier use with the "delay_open" feature added in the
previous commit. The "null" codec is reported if the codec is unknown
(because the stream was not opened yet at the time the tracks were added).
The rest of the timeline mechanism will set the correct codec at
runtime. But this means every time a delay-loaded track is selected, it
wants to initialize a decoder for the "null" codec.

Accept a "null" decoder. But since FFmpeg has no such codec, and out of
my own laziness, just let it fall back to "common" codecs that need no
other initialization data.
2020-02-15 18:30:42 +01:00
wm4 ad1ae64251 f_autoconvert: remove subfmt conversion BS
This was used to convert e.g. P010 to NV12 within the filter chain,
which is hopefully a thing that is not needed anymore. (And it has been dead
code since the ANGLE "RGB" interop code was removed.)
2020-01-17 15:19:05 +01:00
wm4 044996e112 f_hwtransfer: extend vaapi whitelist with some working formats
Notably, BGR0, which is the only additional format listed as supported
by the texture mapper, results in broken colors. This is most likely not
a mpv issue, so the whitelist fulfils its purpose.
2020-01-17 15:13:23 +01:00
wm4 a500608845 f_hwtransfer: minor debug logging improvement 2020-01-17 15:10:36 +01:00
wm4 63bf362d11 f_hwtransfer: move format fields to private struct
A user can barely do anything useful with it, and there are no users
which access these anyway, so private is better.
2020-01-12 01:47:42 +01:00
wm4 68292a2780 f_hwtransfer: restructure and error properly on broken cases
I think the previous code didn't consider the situation if the input
format was not any of the upload formats. It then could have possibly
tried to upload the wrong format (and not sure what the underlying APIs
do with it).

Take care of this, also improve logging, and change it such that
mp_hwupload_find_upload_format() does not unnecessarily change the state
(although it doesn't really matter).
2020-01-12 01:43:21 +01:00
wm4 a3f220ba44 f_autoconvert: use f_hwtransfer properly
With the recent change to how f_hwtransfer selects formats, it's
possible that the upload_fmts list contains formats that are never
selected, and the filter would have failed.

The way it works now is that f_hwtransfer gets to select the format
(which honestly doesn't make sense, but whatever), and f_autoconvert
gets only a single format.

It would be more ideal if f_hwtransfer provided a list of possible input
formats, but that's absurdly too complex for now. Maybe I'll change it
back at some later point, but I expect this shit to be in a perpetual
state of complexity and brokenness.
2020-01-12 01:33:02 +01:00
wm4 b947bfcf1f f_hwtransfer: slightly better logging 2020-01-11 22:46:04 +01:00
wm4 7555e2a42a f_hwtransfer: whitelist vaapi formats that actually appear to work
(Oh yes, we could have skipped all the complexity, and hardcoded the
cases that work in the first place. This wouldn't be an issue if FFmpeg
or libva exported correct information. Also possible that FFmpeg's
filter chain does not allow to do this correctly in the first place,
since filters expose next to no meta information about what hw formats
etc. they need.)

Note that uploading yuv420p to a nv12 vaapi surface actually works, but
the blacklist excludes it. So this might get a bit slower. I'm not
bothering with this case because it's rarely needed, and the blacklist
specification would become a bit more complex if you had to specify
sw/upload format pairs.

Fixes: #7350
2020-01-11 22:45:42 +01:00
wm4 a009b57c77 f_hwtransfer: change order in which hwdec upload formats are considered
Basically, instead of trusting the upload format, and picking the first
sw format that has a desired upload format, trust the sw format. So now
we pick the sw format first, and then select from the set of upload
formats supported by it.

This is probably more straightforward, and works around a crash with
vaapi:

  mpv 8bit.mkv --vf=format=vaapi --gpu-hwdec-interop=all

(Forces vaapi upload if hw decoding is not enabled.)

Unfortunately, this still does not work, because vaapi, FFmpeg, the VO
interop code, or all of them are doing something stupid. In particular,
this picks the yuv420p sw format, which doesn't really exist despite being
advertised (???????????????????????????????????????), and simply breaks.

See: #7350
2020-01-11 22:29:53 +01:00
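
The reordered selection can be sketched as two nested loops in C (hypothetical
data layout and callbacks; the real code works on FFmpeg hw frame
constraints):

    #include <stdbool.h>

    // For each sw (surface) format, the list of upload formats it supports,
    // terminated by 0.
    struct sw_entry {
        int sw_fmt;
        const int *upload_fmts;
    };

    // Old order: pick the first sw format that has a desired upload format.
    // New order (shown here): trust the sw format first, then pick an upload
    // format from the set supported by it.
    int pick_upload_format(const struct sw_entry *entries, int num_entries,
                           bool (*sw_usable)(int fmt),
                           bool (*upload_usable)(int fmt))
    {
        for (int i = 0; i < num_entries; i++) {
            if (!sw_usable(entries[i].sw_fmt))
                continue;
            for (const int *u = entries[i].upload_fmts; *u; u++) {
                if (upload_usable(*u))
                    return *u;
            }
        }
        return 0;   // no working combination found
    }
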
wm4 9b9307ea9f options: fix filter list comparison (again)
This was completely broken: it compared the first item of the filter
list only. Apparently I forgot that this is a list. This probably broke
aspects of runtime filter changing, probably since commit b16cea750f.

Fix this, and remove some redundant code from obj_settings_equals().
Which is not the same as m_obj_settings_equal(), so rename it to make
confusing them harder. (obj_setting_match() has these very weird label
semantics that should probably just be killed. Or not.)
2019-12-18 06:49:48 +01:00
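
A reduced sketch of comparing the whole list instead of only its first entry
(the struct is trimmed down; the real m_obj_settings also carries options and
labels):

    #include <stdbool.h>
    #include <string.h>

    // Entry of a NULL-name-terminated filter list.
    struct obj_setting {
        const char *name;
    };

    bool obj_settings_list_equal(const struct obj_setting *a,
                                 const struct obj_setting *b)
    {
        int n = 0;
        for (; a[n].name && b[n].name; n++) {
            if (strcmp(a[n].name, b[n].name) != 0)
                return false;   // mismatching entry
        }
        // Both lists must also end at the same position.
        return !a[n].name && !b[n].name;
    }
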
wm4 06c9c38199 f_lavfi: add gross workaround for af_dynaudnorm bug
Better do this here than deal with the moronic project we unfortunately
depend on.

The workaround is generic; unknown whether it works correctly with
multi-input/output filters or filter graphs. It assumes that if all
inputs are EOF, and all outputs are EAGAIN, the bug happened.

This is pretty tricky, because anything could happen. Any time some form
of progress is made, the got_eagain state needs to be reset, because the
filter pad's state could have changed.
2019-12-18 01:44:20 +01:00
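
The detection heuristic can be sketched like this (struct and field names are
invented; the point is that the bug is assumed only in the all-EOF/all-EAGAIN
state, and that any progress clears the recorded EAGAIN state):

    #include <stdbool.h>

    struct pad { bool eof; bool got_eagain; };

    struct wrapper_state {
        struct pad *in;   int num_in;
        struct pad *out;  int num_out;
    };

    // The workaround triggers only if every input reached EOF while every
    // output still reports EAGAIN.
    bool stuck_in_eof_bug(const struct wrapper_state *w)
    {
        for (int n = 0; n < w->num_in; n++) {
            if (!w->in[n].eof)
                return false;
        }
        for (int n = 0; n < w->num_out; n++) {
            if (!w->out[n].got_eagain)
                return false;
        }
        return true;
    }

    // Any form of progress invalidates the EAGAIN markers, because the pads'
    // state could have changed since they were recorded.
    void note_progress(struct wrapper_state *w)
    {
        for (int n = 0; n < w->num_out; n++)
            w->out[n].got_eagain = false;
    }
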
wm4 31eb2f9f33 build: downgrade EGL requirement from 1.5 to 1.4
With the previous commit, there's no need for 1.5 anymore. And in fact,
it's just too dangerous to rely on 1.5 because of all the EGL craziness.
For example, you might get a 1.5 EGL system library, but a driver might
still give you 1.4 at runtime. If you assume that you can call 1.5
functions, you will probably get random crashes in this case. What a
cursed API. (The same problem exists with EGL 1.3, but fortunately
nothing seems to use that anymore. We can just ignore that problem.)
2019-12-16 00:37:18 +01:00
wm4 855a4779ae filters: move prefix check from f_lavfi.c to user_filters.c
It's user_filters.c which allows the "lavfi-" prefix to distinguish
libavfilter filters from mpv builtin filters. f_lavfi.c is a layer
below, and strictly passes anything it gets to libavfilter. So the
correct place for this is in user_filters.c, which also has the code for
stripping the prefix in the normal filter instantiation code.
2019-12-07 14:19:11 +01:00
ekisu 02d8d4e3ef f_lavfi: mp_lavfi_is_usable: check for "lavfi-" prefix
Without this, adding filters with the prefix would fail, and some
filters that have conflicting names with mpv ones were unusable.
2019-12-06 19:12:57 +01:00
wm4 cb7048700f filters: fix incorrect #if for vf_gpu
This didn't match what is in wscript_build.py. Also, it should work on
non-X11 platforms... probably. (The condition is convoluted and almost
nonsensical, but the offscreen context creation needs to be cleaned up
anyway as soon as other backends, e.g. for win32, are added.)
2019-11-30 01:13:37 +01:00
wm4 90df6c79c9 vf_gpu: add video filter using vo_gpu's renderer
Probably pretty useless in this form (see: the wall of warnings), but
someone wanted this.

I think this should be useful to perform some automated tests, maybe.

Fixes: #7194
2019-11-29 20:37:11 +01:00
wm4 1cb085a82e options: get rid of GLOBAL_CONFIG hack
Just an implementation detail that can be cleaned up now. Internally,
m_config maintains a tree of m_sub_options structs, except for the root
it was not defined explicitly. GLOBAL_CONFIG was a hack to get access to
it anyway. Define it explicitly instead.
2019-11-29 12:14:43 +01:00
wm4 f95cdb2e97 f_output_chain: use m_option_equal()
This is used to detect whether any filters were changed. This code was
essentially ported to m_option.c.

One possible difference is how the old code did name comparison. It did
not actually compare the name (!?!?), so this might change behavior,
hopefully for the better.
2019-11-29 12:14:43 +01:00
wm4 37ac43847e options: pre-check filter names when using vf/af libavfilter bridge
Until now, using a filter not in mpv's builtin filter list would assume
it's a libavfilter filter. If it wasn't, the option value was still
accepted, but creating the filter simply failed. But since this happens
after option parsing, so the result is confusing.

Improve this slightly by checking filter names. This will reject truly
unknown filters at option parsing time. Unfortunately, this still does
not check filter arguments. This would be much more complex, because
you'd have to create a dummy filter graph and allocate the filter. Maybe
another time.
2019-11-25 20:29:42 +01:00
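
Roughly, the check at option-parsing time looks like this. avfilter_get_by_name()
is the real libavfilter lookup; the builtin-name list is a stand-in for mpv's
internal filter table, and filter arguments are still not validated:

    #include <stdbool.h>
    #include <string.h>
    #include <libavfilter/avfilter.h>

    bool filter_name_known(const char *name, const char *const *builtin_names)
    {
        // 1. Builtin mpv filters (stand-in for the real lookup).
        for (int n = 0; builtin_names && builtin_names[n]; n++) {
            if (strcmp(builtin_names[n], name) == 0)
                return true;
        }
        // 2. Otherwise it must exist as a libavfilter filter.
        return avfilter_get_by_name(name) != NULL;
    }
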
wm4 5af9fefff6 f_output_chain: fix possible crash when changing filters
When changing filters at runtime (vf/af commands/properties), this could
crash due to a NULL pointer access. The code for comparing the old and
new option values (to detect changes) was simply buggy.
2019-11-25 01:04:38 +01:00
wm4 ba6ba32825 f_decoder_wrapper: put coverart through image output logic
This wasn't done, probably a regression from one of the last dozen
times this special code path was touched. This meant coverart images
ignored the user-set aspect ratio completely, and some other things.
2019-11-17 02:11:45 +01:00
wm4 1bb726dedc vd_lavc: simplify fallback handling for full stream hw decoder
Shovel the code around to make the data flow slightly simpler (?). At
least there's only one send_packet function now. The old code had the
problem that send_packet() could be called even if there were queued
packets; due to sending the queued packets in the receive_frame
function, this should not happen anymore (the code checking for this
case in send_packet should normally never be called).

Untested with actual full stream hw decoders (none available here); I
created a test case by making hwaccel decoding fail.
2019-11-02 22:37:14 +01:00
wm4 e58e650a97 video: mess with the filter chain to enable zimg IMGFMT_RGB30 output
This was too hardcoded to libswscale. In particular, IMGFMT_RGB30 output
is only possible with the zimg wrapper, so the context needs to be taken
into account (since this depends on the --sws-allow-zimg option
dynamically). This is still slightly risky, because zimg currently will
still fall back to swscale in some cases, such as when it refuses to
initialize the particular color conversion that is requested.
f_autoconvert.c could actually handle this better, but I'm too fucking
lazy right now, and nobody cares anyway, so go away, OK?
2019-11-02 17:50:32 +01:00
wm4 2ce4568018 f_decoder_wrapper: reduce uninit message log level
For vd/ad.
2019-11-01 01:55:06 +01:00
wm4 835586513d sws_utils: shuffle around some shit
Purpose uncertain. I guess it's slightly better, maybe.

The move of the sws/zimg options from VO opts (vo_opt_list) to the
top-level option list is tricky. VO opts have some helper code in vo.c,
that sends VOCTRL_SET_PANSCAN to the VO on every VO opts change. That's
because updating certain VO options used to be this way (and not just
the panscan option). This isn't needed anymore for sws/zimg options, so
explicitly move them away.
2019-10-31 15:26:03 +01:00
wm4 52536aaa7b f_decoder_wrapper: trust frame return over error code
lavc_process() calls the receive/send callbacks, which mirror
libavcodec's send/receive API. The receive API in particular can return
both a status code and a frame. Normally, libavcodec is pretty explicit
that it can't return both a frame and an error. But the receive callback
does more stuff in addition (vd_lavc does hardware decoding fallbacks
etc.). The previous commit shows an instance where this happened, and
where it leaked a frame in this case.

For robustness, check whether the frame is set first, i.e. trust it over
the status code. Before this, it checked for an EOF status first.

Hopefully this is of no consequence otherwise. I made this change after
testing everything (can someone implement a test suite which tests this
exhaustively?).
2019-10-25 21:50:10 +02:00
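
The reordered check amounts to something like this (a sketch, not the actual
lavc_process() callback handling; types and callbacks are stand-ins):

    struct mp_frame_opaque;   // stand-in for the decoded frame type

    // Both a frame and a status code can come back from the receive callback.
    // The frame now takes precedence, so an (error, frame) pair is output
    // instead of being leaked while EOF is signaled too early.
    void handle_receive_result(struct mp_frame_opaque *frame, int status,
                               void (*output_frame)(struct mp_frame_opaque *),
                               void (*signal_eof)(void))
    {
        if (frame) {
            output_frame(frame);
            return;
        }
        if (status < 0)
            signal_eof();
    }
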
wm4 5d5fdb77e9 ad_lavc, vd_lavc: return full error codes to shared decoder loop
ad_lavc and vd_lavc use the lavc_process() helper to translate the
FFmpeg push/pull API to the internal filter API (which completely
mismatch, even though I'm responsible for both, just fucking kill me).

This interface was "slightly" too tight. It returned only a bool
indicating "progress", which was not enough to handle some cases (see
following commit).

While we're at it, move all state into a struct. This is only a single
bool, but we get the chance to add more if needed.

This fixes mpv falling asleep if decoding returns an error during
draining. If decoding fails when we already sent EOF, the state machine
stopped making progress. This left mpv just sitting around and doing
nothing.

A test case can be created with: echo $RANDOM >> image.png

This makes libavformat read a proper packet plus a packet of garbage.
libavcodec will decode a frame, and then return an error code. The
lavc_process() wrapper could not deal with this, because there was no
way to differentiate between "retry" and "send new packet". Normally, it
would send a new packet, so decoding would make progress anyway. If
there was "progress", we couldn't just retry, because it'd retry
forever.

This is made worse by the fact that it tries to decode at least two
frames before starting display, meaning it will "sit around and do
nothing" before the picture is displayed.

Change it so that on error return, "receiving" a frame is retried. This
will make it return the EOF, so everything works properly.

This is a high-risk change, because all these funny bullshit exceptions
for hardware decoding are in the way, and I didn't retest them. For
example, if hardware decoding is enabled, it keeps a list of packets,
that are fed into the decoder again if hardware decoding fails, and a
software fallback is performed. Another case of horrifying accidental
complexity.

Fixes: #6618
2019-10-24 18:50:28 +02:00
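
In terms of the raw libavcodec calls, the changed mapping can be sketched like
this (a simplified model, not mpv's lavc_process(); the enum is invented):

    #include <libavcodec/avcodec.h>

    enum pull_result { PULL_GOT_FRAME, PULL_EOF, PULL_NEED_PACKET, PULL_RETRY };

    // A decode error from avcodec_receive_frame() now maps to "retry
    // receiving" instead of "progress, send a new packet", so draining after
    // an error eventually reaches AVERROR_EOF instead of stalling.
    enum pull_result pull_frame(AVCodecContext *avctx, AVFrame *frame)
    {
        int r = avcodec_receive_frame(avctx, frame);
        if (r >= 0)
            return PULL_GOT_FRAME;
        if (r == AVERROR_EOF)
            return PULL_EOF;
        if (r == AVERROR(EAGAIN))
            return PULL_NEED_PACKET;
        return PULL_RETRY;
    }
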
wm4 5dba244c22 filters: extend vf_format so that it can convert color parameters
For some reason (and because of my fault), vf_format converts image
formats, but nothing else. For example, setting the "colormatrix"
sub-parameter would not convert it to the new value, but instead
overwrite the metadata (basically "reinterpreting" the image data
without changing it).

Make the historical mistake worse, and go all the way and extend it such
that it can perform a conversion. For compatibility reasons, this needs
to be requested explicitly. (Maybe this would deserve a separate filter
to begin with, but things are messed up anyway. Feel free to suggest an
elegant and simple solution.)

This demonstrates how zimg can properly perform some conversions which
swscale cannot (see examples added to vf.rst).

Stupidly this requires 2 code paths, one for conversion, and one for
overriding the parameters.

Due to the filter bullshit (what was I thinking), this requires quite
some acrobatics that would not be necessary without these abstractions.
On the other hand, it'd definitely be more of a mess without it. Oh
whatever.
2019-10-21 01:38:25 +02:00
wm4 1c96152db3 f_swscale: enable use of zimg
The usual opt-in mechanism.
2019-10-21 01:38:25 +02:00
wm4 60ab82df32 video, demux: rip out unused spherical metadata code
This was preparation into something that never happened.

Spherical video is a shit idea anyway.
2019-10-17 22:49:26 +02:00
wm4 bd0af9a761 vf_d3d11vpp: remove RGB conversion hack
With the previous commit, this is dead code.

This also makes the f_autoconvert.c code for this dead code
(fortunately). Will probably remove this later.
2019-10-16 23:41:06 +02:00
wm4 8c58375dbd f_auto_filters: use f_autoconvert for hw download
Instead of using custom code.

Now if only f_lavfi knew what formats FFmpeg's vf_yadif accepts, this
could look much nicer, and wouldn't require the additional converter
filter setup.
2019-10-02 23:13:26 +02:00
wm4 9b55009e8f f_autoconvert: provide a function to determine if conversion works
This adds the function as seen in the f_autoconvert.h part of the patch.
It's pretty simple, but goes along with an intrusive code move. I guess
the resulting code is slightly nicer anyway.
2019-10-02 23:12:50 +02:00
wm4 55424c29d3 f_autoconvert: add hw->sw download path
For some reason it could do sw->sw and sw->hw (and, in some ways, even
do hw->hw in special cases), but not hw->sw. Add this.
2019-10-02 22:30:25 +02:00
wm4 3e02f39087 f_auto_filters: use software conversion if hw deint is not possible
Before this commit, enabling hardware deinterlacing via the
"deinterlace" option/property just failed if no hardware deinterlacing
was available. An error message was logged, and playback continued
without deinterlacing.

Change this, and try to copy the hardware surface to memory, and then
use yadif. This will have approximately the same effect as
--hwdec=auto-copy. Technically it's implemented differently, because
changing the hwdec mode is much more convoluted than just inserting a
filter for performing the "download". But the low level code for
actually performing the download is the same again.

Although performance won't be as good as with a hardware deinterlacer
(probably), it's more convenient than forcing the user to switch hwdec
modes separately. The "deinterlace" property is supposed to be a
convenience thing after all.

As far as the code architecture goes, it would make sense to auto-insert
such a download filter for all software-only filters that need it.
However, libavfilter does not tell us what formats a filter supports
(isn't that fucking crazy?), so all attempts to work towards this are
kind of hopeless. In this case, we merely have hardcoded knowledge that
vf_yadif definitely does not support hardware formats. But yes, this
sucks ass.
2019-10-02 21:27:07 +02:00
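
The fallback decision boils down to something like this (helper names and the
filter insertion call are hypothetical; the real code goes through
f_autoconvert and the lavfi bridge):

    #include <stdbool.h>

    struct chain;
    // Hypothetical helpers.
    bool hw_deint_filter_usable(struct chain *c, int hw_imgfmt);
    void insert_filter(struct chain *c, const char *name);

    void enable_deinterlace(struct chain *c, int imgfmt, bool is_hw_format)
    {
        if (is_hw_format && hw_deint_filter_usable(c, imgfmt)) {
            insert_filter(c, "hw-deint");    // e.g. vavpp or yadif_cuda
            return;
        }
        if (is_hw_format)
            insert_filter(c, "hwdownload");  // copy the surface to system memory
        insert_filter(c, "yadif");           // then deinterlace in software
    }
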
wm4 49f9146fe4 f_hwtransfer: add a mp_hwdownload filter
This just wraps the mp_image_hw_download() function as a filter and adds
some minor caching/error logging. (Shame that it needs so much
boilerplate, I guess.)

Will be used by the following commit. Wrapping it as filter seemed more
convenient than other choices.
2019-10-02 21:14:58 +02:00
wm4 61961d03f6 filters: add another dumb helper
Can be used with mp_chain_filters() to combine multiple filters into a
single one. This is a bit silly, but whatever. I'm making it an explicit
separate filter, because it lets the user access mp_filter.ppins against
all conventions.
2019-10-02 21:09:30 +02:00
wm4 25e70f4743 video: remove vf_vavpp from automatic deinterlace property
This reverts commit 6385a5fd1b, and in
addition removes the code that automatically inserts the vavpp filter.

The reason is the same as the commit that is being reverted: this
filter seems to trigger driver bugs. It can cause GPU freezes or
just doesn't work.

This variant of disabling the filter is better. There was no way to
add the "force" parameter to the automatically inserted filter, so
the old approach just made manual filter insertion (with the --vf
option or "vf" command) more cumbersome.
2019-10-02 19:21:42 +02:00
wm4 380033f4a2 f_swscale: fix a typo 2019-09-19 20:37:05 +02:00
wm4 9cfeafa89e video: add vf_fingerprint and a skip-logo script
skip-logo.lua is just what I wanted to have. Explanations are on the top
of that file. As usual, all documentation threatens to remove this stuff
all the time, since this stuff is just for me, and unlike a normal user
I can afford the luxury of hacking the shit directly into the player.

vf_fingerprint is needed to support this script. It needs to scale down
video frames as part of its operation. For that, it uses zimg. zimg is
much faster than libswscale and generates more correct output. (The
filter includes a runtime fallback, but it doesn't even work because
libswscale fucks up and can't do YUV->Gray with range adjustment.)

Note on the algorithm: seems almost too simple, but was suggested to me.
It seems to be pretty effective, although long time experience with
false positives is missing. At first I wanted to use dHash [1][2], which
is also pretty simple and effective, but might actually be worse than
the implemented mechanism. dHash has the advantage that the fingerprint
is smaller. But exact matching is too unreliable, and you'd still need
to determine the number of different bits for fuzzier comparison. So
there wasn't really a reason to use it.

[1] https://pypi.org/project/dhash/
[2] http://www.hackerfactor.com/blog/index.php?/archives/529-Kind-of-Like-That.html
2019-09-19 20:37:05 +02:00
wm4 fb8d240c4d vf_vapoursynth: remove Lua backend
I once created this because someone wanted to use vapoursynth without
the Python dependency. No idea if anyone ever actually used it. It's
sort of icky (it calls itself "lazy" to preempt complaints about how
much it sucks), and complicates the build process. Kill it.

It seems much more promising to have something like this:

https://github.com/vapoursynth/vapoursynth/issues/386

This would either solve the build distribution problem by relaxing the
Python dependency, and/or allow a Lua backend to be included without
pain.
2019-09-19 20:37:05 +02:00
wm4 aacc868bdd f_decoder_wrapper: fix initialization state
Some state wasn't reset when decoding was started without a seek reset
before it. The code used to rely on reset_decoder() resetting this
state, but since the commit referenced below, reset_decoder() does less
than reset().

Fix this by explicitly calling reset() on initialization.

Fixes: "f_decoder_wrapper: avoid full reset on timeline switch etc."
2019-09-19 20:37:05 +02:00
wm4 e8a051b3cb f_decoder_wrapper: reorganize, fix EDL/ordered chapters backward playback
Before this commit, there was a single process_decoded_frame() function.
It handled various aspects of dealing with a newly decoded frame. Move
some of these to a separate process_output_frame() function.

This new function is called in the order the frames are returned to the
playback core. Some correct_audio_pts() (was process_audio_frame())
becomes slightly less awkward due to this, and the timestamp smoothing
can actually work in backward playback mode now (thus moving p->pts out
of reset_decoder()).

Behavior for normal playback also changes subtly. This shouldn't matter
in sane cases, but if you mix broken files, --no-correct-pts, and
timeline stuff, differences in behavior might be visible.

Timeline clipping (EDL/ordered chapters) works now, because it's done
before "transforming" the timestamps. Audio timestamp smoothing happens
after it, which is a behavior change, but should be more correct. This
still runs crazy_video_pts_stuff() before everything else. On the other
hand, --no-correct-pts or missing timestamp processing is done last. But
these things didn't really work with timeline before.
2019-09-19 20:37:05 +02:00
wm4 d55c2a1243 f_decoder_wrapper: avoid full reset on timeline switch etc.
Slightly cleaner. We don't need to awkwardly backup "some" state on
backwards playback. Due to not resetting last_format, normal timeline
switches don't unconditionally trigger recomputing of certain image
parameters. Also probably doesn't reset framedrop parameters, although I
don't care about that part.
2019-09-19 20:37:05 +02:00
wm4 ea5c15874a f_decoder_wrapper: fully reset timestamp fixup logic on seeks
This could lead to nonsense when backward playback is involved. Better
reduce the possible interactions. Besides, it's better to fully reset
things on seeks in general.

The only exception is has_broken_packet_pts, which enables hr-seek if
everything looks good. It's intended to trigger at the second hr-seek or
so if the file is normal, and to disable it if the file is broken. It
tries to avoid enabling the hr-seek logic before it can know about
whether things are "good", so resetting it on seeks would obviously
never enable it. Document it as explicit exception.
2019-09-19 20:37:05 +02:00
wm4 e86a0df52e f_decoder_wrapper: move option update to a common entrypoint
process() calls these functions. It's a much better place to potentially
copy changed option values into the cache struct.
2019-09-19 20:37:05 +02:00
wm4 da6e862c4f f_decoder_wrapper: hack for discarding preroll in backward playback mode
Some audio codecs will discard or cut the first frames when starting
decoding. While some of that works through well-defined mechanisms (like
initial padding), it's in general very codec/decoder specific, and not
really predictable. In addition, libavcodec isn't very good with
reporting "dropped" frames (and our internal interface reflects this).
It seems our only chance to handle this is through timestamps.

In theory, it would be best to discard frames that have timestamps
before the "resume" position. But since video has reordered timestamps,
we'd need to put some effort into finding this position. The first video
packet doesn't necessarily contain this timestamp. (In theory, we could
just do this in the demuxer with some trivial additional work, and set
it on the packet's kf_seek_pts field. Although this field is supposed to
contain just this value, the field is considered demuxer-internal, and I
didn't want to make matters worse by reusing it for the interface to the
decoder. With some more effort and buffering, we could calculate this
value within the decoder, but fuck that.)

The approach chosen in this commit is setting the timestamp to NOPTS.
This will break in some obscure situations, but backward playback is a
pretty obscure feature to begin with, so I considered this a reasonable
implementation choice.

Before passing a preroll packet to the decoder, its timestamps are set
to NOPTS. Frames that are returned from the decoder and have the NOPTS
timestamp are considered preroll and are discarded. This happens only
during "preroll" mode (preroll_discard==true), so it doesn't affect
normal forward playback. It's disabled on the first packet with a
timestamp, so it can tolerate some crap even in backward playback mode.
We don't check the dts fields out of laziness (decoded audio frames
don't even have this field).

I considered using an approach using the EDL clipping infrastructure (as
mentioned in the last two paragraphs in the commit message of commit
" demux_lavf: implement bad hack for backward playback of wav"). This
didn't work, and I blamed timestamp rounding within mpv for it. But the
problem was actually due to Matroska-rounded timestamps. Since the audio
frame size isn't exactly aligned to 1ms, there will be an overlap (or
gap) in the timestamps. This overlap is much smaller than 1ms, since
it's just the sub-millisecond remainder part of the audio frame size.
This makes the timestamps discontinuous and unreliable for the purpose
we wanted to use it. We can't just smooth the timestamps in the demuxer
either.
2019-09-19 20:37:05 +02:00
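
A reduced sketch of the NOPTS trick (the struct layout, the NOPTS stand-in,
and the is_preroll flag are all simplifications of the real packet/frame
types):

    #include <math.h>
    #include <stdbool.h>

    #define NOPTS NAN   // stand-in for mpv's "no timestamp" sentinel

    struct pkt   { double pts, dts; };
    struct frame { double pts; };

    // Feeding side: preroll packets get their timestamps wiped before they
    // reach the decoder; the first non-preroll packet carrying a timestamp
    // ends preroll mode, which keeps the hack tolerant of broken input.
    void feed_packet(struct pkt *p, bool is_preroll, bool *preroll_discard)
    {
        if (is_preroll) {
            p->pts = NOPTS;
            p->dts = NOPTS;
        } else if (!isnan(p->pts)) {
            *preroll_discard = false;
        }
    }

    // Output side: while in preroll mode, frames that come back without a
    // timestamp are considered preroll output and are dropped.
    bool discard_frame(const struct frame *f, bool preroll_discard)
    {
        return preroll_discard && isnan(f->pts);
    }
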
wm4 af60a45fdf f_decoder_wrapper: remove stale/duplicated comment 2019-09-19 20:37:05 +02:00
wm4 2c3c6aae66 demux, f_decoder_wrapper: fix coverart in backward mode
Shitty ancient hack that wastes my time all the time.

demux.c: always return the coverart packet as soon as possible, and
don't let the backward demux state machine possibly stop it.

f_decoder_wrapper.c: mess with some shit until it somehow starts to
work. I think the old code tried to let it cleverly fall through so the
packet was processed "normally"; just make it run the "usual" code
instead.
2019-09-19 20:37:04 +02:00
wm4 b9d351f02a Implement backwards playback
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally crash.
That would be a bug. It possibly _could_ start a fire by entering some
sort of endless loop, not a literal one, just something where it tries
to do work without making progress.)

(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)

How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.

The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a bug/mess
breaks correct slicing of segments, so EDL/ordered chapter playback is
broken in backward direction).

Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).

The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.

Backward display is the most weird (and potentially buggy) part. To
avoid that we need to touch a LOT of timing code, we negate all
timestamps. The basic idea is that due to the negation, all
comparisons and subtractions of timestamps keep working, and you don't
need to touch every single of them to "reverse" them.

E.g.:

    bool before = pts_a < pts_b;

would need to be:

    bool before = forward
        ? pts_a < pts_b
        : pts_a > pts_b;

or:

    bool before = pts_a * dir < pts_b * dir;

or if you, as it's implemented now, just do this after decoding:

    pts_a *= dir;
    pts_b *= dir;

and then in the normal timing/renderer code:

    bool before = pts_a < pts_b;

Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.

Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there really likely
are subtle problems of all sorts.

As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)

VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.

FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
2019-09-19 20:37:04 +02:00
wm4 d9cc13f311 f_decoder_wrapper: move cover art retrieval
This is basically a refactor in preparation for future changes and
shouldn't have much influence on actual behavior.
2019-09-19 20:37:04 +02:00
Jan Ekström 199aabddcc Merge branch 'master' into pr6360
Manual changes done:
  * Merged the interface-changes under the already master'd changes.
  * Moved the hwdec-related option changes to video/decode/vd_lavc.c.
2019-03-11 01:00:27 +02:00
zc62 a127912345 audio: fix segfault caused by incorrect number of planes
Use `mp_aframe_get_planes` to properly get the number of planes, instead
of assuming it to be the number of channels.

Fixes #6092
2019-02-23 00:21:54 +02:00
Anton Kindestam 8b83c89966 Merge commit '559a400ac36e75a8d73ba263fd7fa6736df1c2da' into wm4-commits--merge-edition
This bumps libmpv version to 1.103
2018-12-05 19:19:24 +01:00
Philip Langdale ce2253b358 filters: Add cuda/nvdec deinterlacing auto-filter using vf_yadif_cuda
Historically, there's been no way to offer deinterlacing with nvdec,
and for cuviddec, it required a command line flag, with no way to
toggle while playing.

Now that we have a cuda deinterlacing filter in ffmpeg, we can hook
it up as the cuda auto-deinterlacer. In practice, this
isn't going to be present very often, due to the licensing mess with
the cuda sdk, but we can support it when it is there.
2018-11-19 00:02:41 +02:00
wm4 f8ab59eacd player: get rid of mpv_global.opts
This was always a legacy thing. Remove it by applying an orgy of
mp_get_config_group() calls, and sometimes m_config_cache_alloc() or
mp_read_option_raw().

win32 changes untested.
2018-05-24 19:56:35 +02:00
wm4 2a28712b44 f_lavfi: support setting common filter options like "threads"
AVFilterContext instances support some additional AVOptions over the
actual filter. This includes useful options like "threads". We didn't
support setting those for the "direct" wrapper (--vf=yadif:threads=1
failed). Change this. It requires setting options on the AVFilterContext
directly, except the code for positional parameters still needs to
access the actual filter's AVOptions.
2018-04-29 02:21:32 +03:00
wm4 ba19ffe19a f_decoder_wrapper: fix a typo in log message 2018-04-29 02:21:32 +03:00
wm4 0e340ce804 filter: hide warning when disconnecting pins drops frames
Sometimes this hints that there's a bug, but sometimes it's normal.

Since the code for --end/--frames puts frames that should not be shown
anymore back into the pin, using those options will show this warning
when playback ends. This is a minor annoyance. We could change how it's
done (e.g. set an explicit flag somewhere), but that seems bothersome,
so just change the message from warning to verbose.
2018-04-29 02:21:32 +03:00
wm4 d2349cb833 f_output_chain: remove a redundant variable 2018-04-29 02:21:32 +03:00
wm4 c6b9288465 video: remove internal stereo_out flag
Also rename stereo3d to stereo_in. The only real change is that the
vo_gpu OSD code now uses the actual stereo 3D mode, instead of the
--video-stereo-mode value. (Why does this vo_gpu code even exist?)
2018-04-29 02:21:32 +03:00
wm4 4b32d51b96 f_output_chain: log status of auto filters
Just so users don't think the filters do anything when they don't insert
any filters.
2018-04-29 02:21:32 +03:00
wm4 d8807ca833 f_output_chain: log input instead of output format
I think this is more intuitive. This requires a dedicated "out" dummy
filter. But keep the "in" dummy filter for symmetry, like in the old
filter code. (We could remove the "in" dummy filter, because the first
actual filter would still show the real input format.)
2018-04-29 02:21:32 +03:00
wm4 ff24285eb1 video: pass through container fps to filters
This means vf_vapoursynth doesn't need a hack to work around the filter
code, and libavfilter filters now actually get the frame_rate field on
input pads set.

The libavfilter doxygen says the frame_rate field is only to be set if
the frame rate is known to be constant, and uses the word "must" (which
probably means they really mean it?) - but ffmpeg.c sets the field to
mere guesses anyway, and it looks like this normally won't lead to
problems.
2018-04-19 23:22:48 +02:00
wm4 7bfb240309 f_lavfi: add an option to use old audio PTS handling for af_lavfi
The fix-pts option basically uses the old af_lavfi's (before filter
rewrite) timestamp logic. The rest is explained in the manpage.
2018-04-15 23:11:33 +03:00
wm4 67b36c66d3 audio: do not try to resample spdif data
Normally we don't even try this, but in corner cases it can happen. For
example when inserting lavcac3enc at runtime, and display-sync-resample
was active.
2018-04-15 23:11:33 +03:00
wm4 bd62d78854 f_output_chain: fix typo 2018-04-15 23:11:33 +03:00
wm4 3c123281a7 audio: change format negotiation, remove channel remix fudging
The audio format negotiation code was pretty complicated, although the
idea was simple: when the format changes (or on the first audio frame),
filter only the new frame through the entire filter chain, discard the
resulting frame, but use the format to initialize the AO.

This was useful for "fudging" the channel remix behavior (upmix or
downmix), and moving it before other filters. Apparently this was useful
for things like DRC filters, which might work better in stereo, and
which also can only achieve the desired volume levels by doing it before
a downmix, which would modify the volume. This mechanism was introduced
in commit 60048b7eb9 (which the commit message also describes as
"idiotic heuristic"). Knowing the output format is inherently necessary
for this, because otherwise we can't know what the hell the user defined
filters will do.

There were problems with robustness. Some filters needed more than one
frame. Resampling in particular would discard initial audio at high
resampling ratios. Some filters might drop audio intentionally (like
clipping data on timestamp ranges). There were also allegations that
some decoders output 0 length frames (although that is invalid in
libavcodec). The state machine was excessively complex and hard to
understand too.

There are 3 things that could have been done:

1. Fix robustness problems by doing more heuristics, like repeating
   audio frames or simply decoding several frames. Since filters can
   behave differently, this would have added lots of complexity.
2. Make use of libavfilter's format negotiation, and add the same to
   mpv builtin filters. This is sort of annoying, because the format
   negotiation in libavfilter changes the state of the filters. It also
   reports only some parameters (mostly all for audio, but a lot of
   holes for video). It would remove some of the state machine, but not
   all.
3. Drop the channel remix fudging, and do the same as the video chain.
   This would not require format negotiation, but instead you can just
   filter the audio frames, and look what comes out of it. If nothing
   comes out, simply never create an AO.

This commit selects option 3. It removes the remix fudging, which means
the loss of a feature. Users can instead add "--af=format=channels=2"
before their DRC filter, or something. I'm also considering changing the
default for --audio-channels back to stereo, and downmix in the decoder
or at the start of the filter chain, which would give the same results,
except requiring more configuration.

Implementation-wise, this is still a bit different from the video path.
The VO always remains the same instance, while the AO might have to be
recreated on configuration changes. This still requires explicit format
change handling + draining old data, but by putting it into
f_autoconvert, not much new code is needed.
2018-04-15 23:11:33 +03:00
wm4 4b48966d87 f_autoconvert: be less clever about running specific codepaths
This tried to avoid running the audio/video functions depending on
whether any of the audio or video related format restrictions were
called (so the filter would show an error if a mismatching media type
was passed in). It was a shit idea anyway, so fuck it.
2018-04-15 23:11:33 +03:00
wm4 428fc1cbef f_lavfi: use new libavfilter iteration API 2018-04-03 20:08:15 +03:00
wm4 0b4120919a f_decoder_wrapper: retry decoding if libavcodec returns invalid state
At least the libavcodec MediaCodec wrapper sometimes seems to break the
libavcodec API, and does the following sequence:

  send_packet() -> EAGAIN
  receive_frame() -> EAGAIN
  send_packet() -> OK

The libavcodec API never allows returning EAGAIN from both functions, so
we discarded packets in this case. Change it to retrying decoding, for
the sake of MediaCodec. Note that it could also happen due to internal
bugs in the vd_lavc.c hw fallback code, but if there are any remaining,
they should be fixed properly instead.

Requested.
2018-03-26 19:47:08 +02:00
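
The tolerated sequence can be handled roughly like this (a sketch around the
real avcodec_send_packet()/avcodec_receive_frame() calls, not the actual
vd_lavc.c logic):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    // Returns true if a frame was produced; *packet_consumed says whether the
    // caller may advance to the next packet or must retry with the same one.
    bool decode_step(AVCodecContext *avctx, AVPacket *pkt, AVFrame *frame,
                     bool *packet_consumed)
    {
        int s = avcodec_send_packet(avctx, pkt);
        int r = avcodec_receive_frame(avctx, frame);
        if (s == AVERROR(EAGAIN) && r == AVERROR(EAGAIN)) {
            // Forbidden by the API, but seen with the MediaCodec wrapper:
            // keep the packet and retry instead of discarding it.
            *packet_consumed = false;
            return false;
        }
        *packet_consumed = (s >= 0);
        return r >= 0;
    }
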
wm4 9b0102dd8b f_hwtransfer: more detailed logging
This also switches the order, because that makes more sense.
2018-03-15 23:13:53 -07:00
wm4 4aa1be44c2 f_hwtransfer: fix a logic error
Jesus Christ, how did I get this wrong, or never verified proper
function. This fixes --vf=vdpaupp not working with yuv420p input.
2018-03-15 23:13:53 -07:00
wm4 4527409c8d audio: improve behavior if filters output nothing during probing
Just bail out immediately (and disable audio) if format probing has no
result, instead of doing nothing and then apparently freezing.

This can happen with bogus filters, cases where the first audio frame is
essentially dropped by filters (can happen with large resampling
factors), and if the audio track contains no packets at all, or all
packets fail to decode.
2018-02-21 22:35:24 -08:00
wm4 13b90bcf91 video: fix --video-rotate in some cases
Which idiot wrote this code? [Yeah, me.]
2018-02-18 16:21:56 +01:00
wm4 fca64d913b filter: fix potential NULL pointer deref
The rest of the function should be executed only if both are set. It
seems like in practice this didn't happen yet with only one of them
unset, but in theory it's possible. Found by Coverity.
2018-02-16 22:04:15 -08:00
wm4 ceca1602e9 f_lavfi: extend filter help output
Also print type and help string. Also print AV_OPT_TYPE_CONST, which are
like the mpv choice option type, except different. Print them as
separate lines because FFmpeg usually has help strings for them too.
2018-02-13 17:45:29 -08:00
wm4 3ffa70e2da filter: extend documentation comments
Add more explanations, and also fix some blatantly wrong things.
2018-02-13 17:45:29 -08:00
wm4 251f4e5d77 filter: simplify/fix external filter graph usage
There was the following problem: if a filter graph had asynchronous
filters, and the filter graph user did not call mp_filter_run() (and
only accessed the mp_pins), then filtering could stall, because using
mp_pin_out_request_data() only recursively invoked filtering if the
data_requested flag wasn't already set. The latter can happen if a
request was tried earlier but failed, and then an asynchronous filter
actually produced output that would satisfy the request. Obviously, it
has to invoke filtering again to get the requested frame.

Fix this by organizing the code differently, and making sure to invoke
mp_filter_run() on every request (if there's nothing to do, it doesn't
do anything anyway). Simplify it a bit by removing things which are not
needed, like connecting filter graphs with different root filters.
2018-02-13 17:45:29 -08:00
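
Condensed, the fix means every external request unconditionally runs the graph
(a sketch; the pin struct and the mp_filter_run() declaration are simplified):

    #include <stdbool.h>

    struct mp_filter;
    void mp_filter_run(struct mp_filter *root);   // simplified declaration

    struct pin {
        struct mp_filter *root;   // root filter of the graph this pin belongs to
        bool data_requested;
    };

    // Always run the graph on a request: if there is nothing to do, the call
    // returns quickly, and asynchronously produced output is picked up even
    // when data_requested was already set by an earlier, failed request.
    void pin_out_request_data(struct pin *p)
    {
        p->data_requested = true;
        mp_filter_run(p->root);
    }
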
wm4 76947798ea f_lavfi: fix typo in comment 2018-02-13 17:45:29 -08:00
wm4 2cce782527 filter: adjust root log prefix
Avoids that the audio decoder shows up with a "[root/ad]" log prefix.

This is an annoying consequence of mp_log_new(): if a log parent doesn't
have a prefix with "!", it'll add the prefix to all mp_logs created from
it. This should probably be fixed in the mp_log code itself, but doing
so would be a big deal as we'd have to make sure all the other log
prefixes are what we want. So work it around for now.
2018-02-13 17:45:29 -08:00