Debanding is an inherently destructive process. It is not needed for
most high-quality sources, and when applied to fine-detailed content it
only produces an adverse smoothing effect that removes detail. It
should only be applied when necessary, either manually with the `b`
keybind or with an automatic profile.
Additionally, it is quite computationally heavy with no real benefit for
high-quality content.
By default, and especially in the high-quality profile, mpv should
preserve source detail and quality as much as possible. Additional
processing should be opt-in.
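A minimal mpv.conf sketch of that opt-in approach (the profile name and
its contents are just an example; deband and profile-desc are
documented options):
deband=no                   # the default: leave sources untouched
[banded-source]             # hypothetical user profile, applied on demand
profile-desc=opt-in debanding for sources with visible banding
deband=yes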
Peak detection greatly improves the HDR experience. The performance hit
of non-delayed detection is not significant and is in line with the
current default settings.
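For reference, the options being discussed (both documented in mpv; a
sketch, not a statement of the new defaults):
hdr-compute-peak=yes            # enable HDR peak detection
allow-delayed-peak-detect=no    # non-delayed detection, as discussed above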
ac725764ec originally added these lines to prevent those profiles from
having auto as a value. However, the auto value was later removed in
53d032374d, so keeping these lines here no longer serves any purpose:
the default value for slang is NULL once again (vlang/alang were never
needed here).
The goal is to provide simple-to-understand quality/performance
profiles for users.
Instead of the default and gpu-hq profiles, three main profiles were
added:
- fast: can run on any hardware
- default: balanced profile between quality and performance
- high-quality: out-of-the-box high-quality experience. Intended
mostly for dGPUs.
Summary of three profiles, including default one:
[fast]
scale=bilinear
cscale=bilinear (implicit)
dscale=bilinear
dither=no
correct-downscaling=no
linear-downscaling=no
sigmoid-upscaling=no
hdr-compute-peak=no
[default] (implicit mpv defaults)
scale=lanczos
cscale=lanczos
dscale=mitchell
dither-depth=auto
correct-downscaling=yes
linear-downscaling=yes
sigmoid-upscaling=yes
hdr-compute-peak=yes
[high-quality] (inherits default options)
scale=ewa_lanczossharp
cscale=ewa_lanczossharp (implicit)
hdr-peak-percentile=99.995
hdr-contrast-recovery=0.30
allow-delayed-peak-detect=no
deband=yes
scaler-lut-size=8
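To pick one of these profiles, set it in mpv.conf or on the command
line (standard profile selection, shown here for completeness):
profile=high-quality            # in mpv.conf
# or: mpv --profile=fast video.mkv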
Prevents transient brightness spikes on scene transitions at the cost of
sometimes forcing an extra indirect pass (in particular, when
downscaling). But on GPUs powerful enough to run gpu-hq, the extra
indirect pass shouldn't matter too much. This option mostly exists for
weak iGPUs.
This is higher quality but comes with a slight performance hit,
especially for weaker iGPUs, so I don't want to enable it out of the box
even when --hdr-compute-peak=auto.
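Users who do want the extra quality without switching to the
high-quality profile can set the relevant options themselves (values
mirror the profile summary above; illustrative):
hdr-compute-peak=auto
hdr-contrast-recovery=0.30
hdr-peak-percentile=99.995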
The Apple Remote has long been deprecated and abandoned by Apple.
Current Macs don't come with support for it anymore. Support might be
re-added with the next commit.
Some stream inputs may have higher latency with larger buffer sizes,
for example network filesystems accessed via the normal OS filesystem
interface (these have to wait until the full buffer is read, which adds
latency).
Probably doesn't matter in practice, but why take chances.
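Assuming this is about the stream buffer size option
(--stream-buffer-size in current mpv), keeping it small for such
sources is a one-liner (value illustrative):
stream-buffer-size=32KiB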
This is mostly just because of the odd RGB default gamma issue, which
shouldn't have any real impact. This also sets allow_approximate_gamma,
which I hope is fine for normal use cases.
Raise the swscale and zimg default parameters. This restores the
screenshot quality settings that were (maybe) unset in the previous
commit. Also expose some more libswscale and zimg options.
Since these options are also used for VOs like x11 and drm, this will
make x11/drm/etc. much slower. For compensation, provide a profile that
sets the old option values: sw-fast. I'm also enabling zimg here, just
as an experiment.
The core problem is that we have a single set of command line options
which control the settings used for most swscale/zimg uses. This was
done in the previous commit. It cannot differentiate between the VOs,
which need to be realtime and may accept/require lower quality options,
and things like screenshots or vo_image, which can be slower, but should
not sacrifice quality by default.
Should this have two sets of options or something similar to do the
right thing depending on the code which calls libswscale? Maybe. Or
should I just ignore the problem, make it someone else's problem (users
who want to use software conversion VOs), provide a sub-optimal
solution, and call it a day? Definitely, sounds good, pushing to master,
goodbye.
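For the record, the escape hatch mentioned above (profile name as in
mpv; the comment refers to the documented --sws-allow-zimg option):
profile=sw-fast                 # restore the old, faster swscale defaults
# the experimental zimg path is controlled by sws-allow-zimg=yes/no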
Enabling this by default probably causes a number of issues, such as
breaking vo_sdl, or reacting to various input devices while the window
is not focused. It's also pretty obscure, or at least new. Disable it by
default.
The code is very basic:
- only handles gamepads, could be extended for generic joysticks in the
future.
- only has button mappings for controllers natively supported by SDL2.
I heard more can be added through env vars, and there are also ways to
load mappings from text files, but I'd rather not go there yet. Common
ones like DualShock are supported natively.
- analog buttons (TRIGGER and AXIS) are mapped to discrete buttons using an
activation threshold.
- only supports one gamepad at a time. The feature is intended for using
gamepads as evolved remote controls, not for playing multiplayer games in mpv :)
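Since it is now disabled by default, enabling it is a single documented
option:
input-gamepad=yes               # opt in to SDL2 gamepad input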
Since linear downscaling makes sense to handle independently from
linear/sigmoid upscaling, we split this option up. Now,
linear-downscaling is its own option that only controls linearization
when downscaling and nothing more. Likewise, linear-upscaling /
sigmoid-upscaling are two mutually exclusive options (the latter
overriding the former) that apply only to upscaling and no longer
implicitly enable linear light downscaling as well.
The old behavior was very confusing, as evidenced by issues such
as #6213. The current behavior should make much more sense, and only
minimally breaks backwards compatibility (since using linear-scaling
directly was very uncommon - most users got this for free as part of
gpu-hq and relied only on that).
Closes #6213.
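After the split, the two directions are configured independently, e.g.
(documented options; a sketch):
linear-downscaling=yes          # linearize only when downscaling
sigmoid-upscaling=yes           # upscaling only; overrides linear-upscaling
# linear-upscaling=yes          # the mutually exclusive alternative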
The playback start logic explicitly waits until the first frame has been
displayed. Usually this will introduce a wait of 1 vsync. For normal
playback this doesn't matter, but with respect to low latency needs,
this only leads to additional data getting queued up in the demuxer or
network buffers.
Another thing is that the timing logic decodes 1 frame ahead (= 1 frame
extra latency) to determine the exact duration of a frame.
To be fair, there doesn't really seem to be a hard reason why this is
needed. With the current code, enabling the option does lead to A/V
desync sometimes (if the demuxer FPS is too inaccurate), and also frame
drops at playback start in some situations. But this all seems to be
avoidable, if the timing logic were to be rewritten completely, which
should probably happen in the future. Thus the new option comes with the
warning that it can be removed any time. This is also why the option has
"hack" in the name.
Well I guess it doesn't help that much.
Also add some stuff that might help to the manpage.
The fundamental problem with some "live" sources (e.g. x11grab) is
actually that the player gets behind initially, and never thinks it has
to catch up. This is also why --untimed can help.
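An illustrative combination for such live sources, using documented
options (not a recommendation for normal playback):
profile=low-latency             # built-in low-latency profile
untimed=yes                     # display frames as soon as possible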
With the recent changes to the script, it no longer incurs a startup
delay by default from starting youtube-dl and waiting for it. That
delay was the main reason for making libmpv have a different default.
Starting sub processes from a library can still be a bit fishy, but I
think it's ok. Still mention it in the libmpv header. There were already
other cases where libmpv would start its own processes, such as the X11
backend calling xdg-screensaver. (The reason why this is fishy is
because UNIX process management sucks: SIGCHLD and the wait() syscall
make sub processes non-transparent and could potentially introduce
conflicts with code trying to use them.)
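Applications that don't want the script (or the sub process) can still
opt out with the standard option, e.g. via a config line or the client
API:
ytdl=no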
This was off for mpv CLI, but on for libmpv. The motivation behind this
was that it would be confusing for applications if libmpv continued
playback in a severely "degraded" way (without either audio or video),
and that it would be better to fail early.
In reality the behavior was just a confusing difference to mpv CLI, and
has confused actual users as well. Get rid of it.
Not bothering with a version bump, since this is so minor, and it's easy
to ensure compatibility in affected applications by just setting the
option explicitly.
(Also adding the missing next-release-marker in client-api-changes.rst.)
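Presumably the option is --stop-playback-on-init-failure (the excerpt
does not name it, so treat the name as an assumption); affected
applications restore the old libmpv behavior by setting it explicitly:
stop-playback-on-init-failure=yes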
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved
to ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all Linux platforms, while his version enables support for all
non-Linux platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
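After the rename (steps 3, 4 and 6), a typical configuration looks
roughly like this (a sketch; available gpu-api values depend on the
platform and mpv version):
vo=gpu                          # formerly vo=opengl
gpu-api=opengl                  # generalized API selection, per step 3
gpu-context=auto                # window/context backend selection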
For a reason I can only guess at, some key events can vanish from the
event chain, making mpv seem unresponsive.
After quite some testing I could confirm that the events are present at
the first entry point of the event chain, the sendEvent method of the
Application, and that they vanish at some point afterwards. Now we use
that entry point to grab keyDown and keyUp events. We also stop
propagating those key events to prevent the 'no key input' error sound.
If we ever need the key events somewhere further down the event chain,
we need to start propagating them again, though this is not necessary
currently.
This should still allow user-set default options to override built-in
pseudo-gui while respecting user-set pseudo-gui options.
Pros:
- user option in default profile overrides built-in pseudo-gui's options
Ex: screenshot-directory overrides built-in pseudo-gui's
- user can "fix" pseudo-gui if some option like "force-window=no" is set
in default by setting "force-window=yes" in [pseudo-gui]
- `mpv --profile=pseudo-gui` will work as before
Cons:
- --show-profile=pseudo-gui won't display the built-in's options
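To make the pros concrete, an illustrative user mpv.conf (the
screenshot path is just an example):
# default profile: also applies on top of the built-in pseudo-gui
screenshot-directory=~/Pictures/mpv
force-window=no
[pseudo-gui]
force-window=yes                # user addition wins over the built-in profile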
Original idea from wm4.
Documentation edits mostly by wm4.
Signed-off-by: wm4 <wm4@nowhere>
The previous commit merely copied the profile string to a file (plus
changing how RPI-specific defaults are initialized); now make some
changes on top of it. In particular, remove the --input-lirc option,
which was dropped from mpv a long time ago but was forgotten in the
libmpv profile.
Move the embedded string with the builtin profiles to a separate
builtin.conf file. This makes it easier to read and edit, and you can
also check it for errors with --include=etc/builtin.conf. (Normally
errors are hidden intentionally, because there's no way to output error
messages this early, and because some options might not be present on
all platforms or with all configurations.)