Make the existing "not found" messages debug only, and add a new verbose
message if a config file was opened. The idea is that logging should
make it apparent whether or not config files are loaded, and it's more
common to use scripts without config files, leading to fewer log
messages in verbose mode.
demux_mkv has lots of logging that shows information about the file. It
sort of reminds one of mkvinfo output. While this is sometimes interesting,
it's too much for verbose mode, and should be in debug log level.
In 2017, we lowered this to debug level. But I think setting options is
important enough that it should be logged even in verbose, at least
compared to all the other dumb noise.
This might be reduced again if verbose logging becomes much cleaner.
This probably covers all packed formats which have byte-aligned
components, no alpha, and no subsampling. Everything else needs more
imgfmt metadata, or something even more complicated. Alpha is not
supported, primarily because zimg requires a second scaler instance for
it, and handling packing/unpacking with it is an unacceptable mess.
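As a rough standalone illustration of the constraint (hypothetical
helper and types, not mpv's imgfmt API):

    #include <stdbool.h>

    struct comp { int bits, offset_bits; };

    // A packed format is "simple" here only if every component starts and
    // ends on a byte boundary, there is no alpha, and no subsampling.
    static bool is_simple_packed(const struct comp *c, int n,
                                 bool has_alpha, int sub_x, int sub_y)
    {
        if (has_alpha || sub_x || sub_y)
            return false;
        for (int i = 0; i < n; i++) {
            if (c[i].bits % 8 || c[i].offset_bits % 8)
                return false;
        }
        return true;
    }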
Raise swscale and zimg default parameters. This restores screenshot
quality settings (maybe) unset in the commit before. Also expose some
more libswscale and zimg options.
Since these options are also used for VOs like x11 and drm, this will
make x11/drm/etc. much slower. For compensation, provide a profile that
sets the old option values: sw-fast. I'm also enabling zimg here, just
as an experiment.
The core problem is that we have a single set of command line options
which control the settings used for most swscale/zimg uses. This was
done in the previous commit. It cannot differentiate between the VOs,
which need to be realtime and may accept/require lower quality options,
and things like screenshots or vo_image, which can be slower, but should
not sacrifice quality by default.
Should this have two sets of options or something similar to do the
right thing depending on the code which calls libswscale? Maybe. Or
should I just ignore the problem, make it someone else's problem (users
who want to use software conversion VOs), provide a sub-optimal
solution, and call it a day? Definitely, sounds good, pushing to master,
goodbye.
Lots of dumb crap to do... something. Instead of adding yet another dumb
helper, just use the main sws_utils API in both callers. (Which,
unfortunately, has been duplicated for glorious webp screenshots,
despite the fact that webp is crap.)
Good part: can enable zimg for screenshots (as far as needed).
Bad part: uses "default" swscale parameters instead of HQ now.
Purpose uncertain. I guess it's slightly better, maybe.
The move of the sws/zimg options from VO opts (vo_opt_list) to the
top-level option list is tricky. VO opts have some helper code in vo.c,
that sends VOCTRL_SET_PANSCAN to the VO on every VO opts change. That's
because updating certain VO options used to be handled this way (and not just
the panscan option). This isn't needed anymore for sws/zimg options, so
explicitly move them away.
To be used in the next commit.
According to compiler explorer, __builtin_clz is very widely available,
and it barely makes sense to provide a fallback. clang also eats this
(and identifies itself as at least GCC 4). Actually, there's doubt that a fast
log2 implementation is needed at all (I guess UTF-8 parsing needs it,
but something UTF-8-specific would probably make it faster than using
log2). So the fallback is just something naive.
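For illustration, a minimal sketch of the approach (made-up helper, not
the actual mpv code):

    #include <stdint.h>

    static inline int my_log2(uint32_t v)
    {
    #if defined(__GNUC__)
        // __builtin_clz is undefined for 0, so handle that case separately.
        return v ? 31 - __builtin_clz(v) : 0;
    #else
        // Naive fallback: count how often v can be halved.
        int n = 0;
        while (v >>= 1)
            n++;
        return n;
    #endif
    }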
By default, the color space of the desktop on which the swap chain is
located is utilized. If a specific value is defined, it will be
utilized instead.
Enables configuration of the PQ color space (BT.2020 primaries,
PQ transfer function) for HDR.
Additionally, signals the swap chain color space to the renderer,
so that the render looks correct without having to specify
target-trc or target-prim manually.
Since all of the APIs involved are Win10+ only, this will only work
starting with Windows 10.
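In DXGI terms, the PQ case boils down to roughly this (a sketch that
assumes an IDXGISwapChain3 and C COM macros; not the actual mpv code):

    #define COBJMACROS
    #include <windows.h>
    #include <dxgi1_4.h>

    // Enable BT.2020 primaries + PQ transfer on the swap chain, if the
    // output supports presenting in that color space.
    static void try_enable_pq(IDXGISwapChain3 *sc)
    {
        const DXGI_COLOR_SPACE_TYPE pq =
            DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020;
        UINT support = 0;
        if (SUCCEEDED(IDXGISwapChain3_CheckColorSpaceSupport(sc, pq, &support)) &&
            (support & DXGI_SWAP_CHAIN_COLOR_SPACE_SUPPORT_FLAG_PRESENT))
            IDXGISwapChain3_SetColorSpace1(sc, pq);
    }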
This lets us set primaries, transfer function and the target peak
based on what the presenting layer would want us to have.
Now that this mechanism is available, warn if the user has
overridden values such as primaries or transfer function.
This seems to have been missed when the storable flag was added, since
all the other flags were logged here. It can be useful to know if an RA
format is storable, so log it as well.
Using e.g. --vf=format=0bgr showed obviously wrong colors with --vo=gpu.
The reason is that leading padding wasn't handled correctly.
Try to hack fix it. While the code in copy_image() is somewhat
reasonable, I can't tell what the fuck is going on with that HOOKED
shit. For some reason this HOOKED shit doesn't use copy_image() (???),
or uses it incorrectly. It affects debanding. --deband=no works
correctly. If it's enabled, the crap in hook_prelude() is needed.
I bet there are many more bugs with this. For example, the deband shader
will try to deband the alpha channel if the format abgr is used (because
the correct component order is only established later). This can be
tested by inserting a "color.x = 0;" at the end of the deband shader,
and using --vf=format=rgba vs. abgr.
I cannot comprehend why it doesn't just store explicitly which
components a texture contains, and why it doesn't just always read the
components in a uniform way.
There's a big chance this fix works only by coincidence. This shouldn't
have been so hard either. Time for a complete rewrite?
With hwdec copy modes, mp_image_copy_attributes() is used to transfer
metadata other than the image data when copying the image from the
hardware surface. It didn't copy the closed caption data.
Fix this. This makes closed captions in copy mode work.
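The gist, as a simplified sketch (the struct and the a53_cc field name
here are stand-ins for mpv's actual definitions):

    #include <libavutil/buffer.h>

    struct image_attrs {
        // ...colorimetry, timestamps, etc...
        AVBufferRef *a53_cc;   // closed caption side data (assumed name)
    };

    static void copy_attributes(struct image_attrs *dst,
                                const struct image_attrs *src)
    {
        // Previously the CC buffer was not carried over, so captions were
        // silently dropped in hwdec copy modes.
        if (src->a53_cc && !dst->a53_cc)
            dst->a53_cc = av_buffer_ref(src->a53_cc);
    }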
Fixes: #6376
sdl_gamepad.c and vo_sdl.c both have their own event loops and run in
separate threads. They don't know of each other (and shouldn't). Since
SDL only has one global event loop (why didn't they fix this in SDL2?),
these obviously clash. The actual behavior is relatively subtle, with
events being randomly dispatched to either of the threads.
This is very regrettable. Very.
Work this around. "Fortunately" SDL exposes its global state to some
degree. SDL_WasInit() returns whether a "subsystem" was initialized, and
you could say the one who initialized it owns it. Both SDL_INIT_VIDEO
and SDL_INIT_GAMECONTROLLER implicitly enable SDL_INIT_EVENTS, and the
event loop is indeed the resource that cannot be shared.
Unfortunately, this is still racy, since SDL_InitSubSystem is a second
call, and succeeds if the subsystem is already initialized (increases a
refcount I think). But good enough. Blame SDL for everything.
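The check amounts to roughly this (a sketch of the idea, not the actual
code in sdl_gamepad.c):

    #include <SDL.h>
    #include <stdbool.h>

    static bool try_init_gamepad_events(void)
    {
        // If the events subsystem is already up, someone else (most likely
        // vo_sdl) owns the single global event loop; back off.
        if (SDL_WasInit(SDL_INIT_EVENTS))
            return false;
        // Still racy: another thread could initialize it between the check
        // and this call (which then just bumps a refcount), but good enough.
        return SDL_InitSubSystem(SDL_INIT_GAMECONTROLLER) == 0;
    }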
(I think I made this commit message too long. Nobody cares even.)
Fixes: #7085
These are inherently incompatible. As far as I'm aware, SDL must be used
from the main thread on OSX.
(Technically, this condition is wrong: the problem happens on OSX in
general, or more precisely, when SDL uses Cocoa. I didn't find the waf
OSX dependency name after 5 seconds of searching, so I'm just using
cocoa, without which mpv is useless on OSX anyway.)
It seems some users try to use it (!). This VO was always an experiment,
and intended for low power devices. Whether this experiment succeeded or
not, it's a rather obscure VO. Recently I've seen a regrettable user,
who seemed to use this only because mpv was built without x11 support
(!). Add this warning, like other fallback VOs have it. (The message was
copied from vo_x11.)
Enabling this by default probably causes a number of issues, such as
breaking vo_sdl, or reacting to various input devices while the window
is not focused. It's also pretty obscure, or at least new. Disable it by
default.
lavc_process() calls the receive/send callbacks, which mirror
libavcodec's send/receive API. The receive API in particular can return
both a status code and a frame. Normally, libavcodec is pretty explicit
that it can't return both a frame and an error. But the receive callback
does more stuff in addition (vd_lavc does hardware decoding fallbacks
etc.). The previous commit shows an instance where this happened, and
where it leaked a frame as a result.
For robustness, check whether the frame is set first, i.e. trust it over
the status code. Before this, it checked for an EOF status first.
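Schematically, the reordered check looks like this (names are
illustrative, not lavc_process()'s actual variables):

    #include <libavutil/error.h>
    #include <libavutil/frame.h>

    enum action { USE_FRAME, IS_EOF, IS_ERROR, TRY_AGAIN };

    // Decide what to act on when the receive callback may hand back both
    // a frame and a status code.
    static enum action classify(const AVFrame *frame, int status)
    {
        if (frame)                  // trust the frame over the status code
            return USE_FRAME;
        if (status == AVERROR_EOF)  // only then treat EOF as EOF
            return IS_EOF;
        if (status < 0)
            return IS_ERROR;
        return TRY_AGAIN;
    }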
Hopefully this is of no consequence otherwise. I made this change after
testing everything (can someone implement a test suite which tests this
exhaustively?).
Commit 5d5fdb77e9 changed details of the decoding control flow, and
called it a "high-risk" change. It turns out that this broke with hwdec
copy mode, where there is some sort of delay queue (supposedly
increases efficiency, but more likely worthless cargo-cult).
It simply used the wrong (basically inverted) condition for the draining
case.
This was the only case that did not work properly. Other tests,
including video/audio decoding errors, software decoding fallbacks,
etc., seemed to work well. Might still not be exhaustive, as there are
so many corner cases.
Also change two error code returns. These don't/shouldn't really matter,
though the second error code led it to return both a frame and
AVERROR_EOF, which is unexpected, and makes lavc_process() leak a frame.
But also see next commit.
Fixes: 5d5fdb77e9
If expected_sync_pc is greater than submit_count, the unsigned
subtraction will wrap around, which breaks playback. This bug was found
while experimenting with bit-blt model present, but it might be possible
to trigger it with the flip model as well, if there was a dropped frame.
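The shape of the fix, roughly (the variable names come from this
message; the helper itself is hypothetical):

    #include <stdint.h>

    static uint64_t frames_in_flight(uint64_t submit_count,
                                     uint64_t expected_sync_pc)
    {
        // Guard the unsigned subtraction so it can't wrap around when
        // expected_sync_pc runs ahead of submit_count.
        return submit_count >= expected_sync_pc
                   ? submit_count - expected_sync_pc : 0;
    }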