This follows the (only slowly progressing) plan to replace all internal
video filters with libavfilter.
All that's left in vf_gradfun.c is the weird wrapper around vf_lavfi.c.
This "sometimes" crashed when seeking. The fault apparently lies in
libavcodec: the decoder returns an unreferenced frame! This is
completely insane, but somehow I'm apparently still expected to
work around it. As a reaction, I will drop Libav 9 support in the
next commit. (While this commit will go into release/0.3.)
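For reference, a hedged sketch of the kind of workaround this forces (the
frame variable comes from the assumed decode loop; the copy path uses
plain libavutil calls):

    // if the decoder output owns no reference (frame->buf[0] == NULL),
    // copy it into a properly allocated frame instead of trusting its
    // data pointers to stay valid
    if (!frame->buf[0]) {
        AVFrame *copy = av_frame_alloc();
        if (copy) {
            copy->format = frame->format;
            copy->width  = frame->width;
            copy->height = frame->height;
            if (av_frame_get_buffer(copy, 32) >= 0 &&
                av_frame_copy(copy, frame) >= 0 &&
                av_frame_copy_props(copy, frame) >= 0)
                frame = copy;  // use the safe copy from here on
        }
    }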
The window doesn't receive a WM_LBUTTONUP message after it's dragged,
probably because it's swallowed by the modal loop. To stop the button
from sticking, release it manually when the drag is complete.
Mouse buttons can get stuck down if the button is pressed inside the
video window and released outside. Avoid this by capturing mouse input
when a button is pressed.
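A hedged Win32 sketch of both fixes, inside the window procedure's
message switch (the put_mouse_button() helper and the button constant
are hypothetical):

    case WM_LBUTTONDOWN:
        SetCapture(hwnd);  // keep receiving input outside the window
        put_mouse_button(MOUSE_BTN_LEFT, true);
        break;
    case WM_LBUTTONUP:
        ReleaseCapture();
        put_mouse_button(MOUSE_BTN_LEFT, false);
        break;
    case WM_EXITSIZEMOVE:
        // the modal drag loop swallowed WM_LBUTTONUP; release manually
        put_mouse_button(MOUSE_BTN_LEFT, false);
        break;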
Apparently the "right" place to initialize the hardware decoder is in
the libavcodec get_format callback.
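A minimal sketch of what that looks like, assuming the usual libavcodec
headers (the init_hw_decoder() helper is hypothetical; the callback
signature is libavcodec's):

    static enum AVPixelFormat get_format(struct AVCodecContext *avctx,
                                         const enum AVPixelFormat *fmt)
    {
        for (const enum AVPixelFormat *p = fmt; *p != AV_PIX_FMT_NONE; p++) {
            if (*p == AV_PIX_FMT_VDPAU) {
                // hardware surfaces are negotiated here, so this is the
                // spot where the hw decoder state can be (re)initialized
                if (init_hw_decoder(avctx) < 0)
                    break;
                return *p;
            }
        }
        return avcodec_default_get_format(avctx, fmt);
    }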
This doesn't change vda.c and vdpau_old.c, because I don't have OSX, and
vdpau_old.c is probably going to be removed soon (if Libav ever manages
to release Libav 10). So for now the init_decoder callback added with
this commit is optional.
This also means vdpau.c and vaapi.c don't have to manage and check the
image parameters anymore.
This change is probably needed for when libavcodec VDA support gets a
new iteration of its API.
This updates the logic for the new, somewhat unified behavior of SRGB
and 3DLUT since 34bf9be (not that it was particularly correct even
after that change) and checks for the presence of the corresponding
extensions only in
the cases in which they're needed.
This commit:
- Changes some of the #define and variable names for clarification and
adds comments where appropriate.
- Unifies :srgb and :icc-profile, making them fit into the same step of
the decoding process and removing the weird interactions between both
of them.
- Makes :icc-profile take precedence over :srgb (to significantly reduce
the number of confusing and useless special cases).
- Moves BT709 decompanding (approximate or actual) to the shader in all
cases, making it happen before upscaling (instead of the old 0.45
gamma function); see the sketch after this list. This is the simpler
and more proper way to do it.
- Enables the approx gamma function to work with :srgb as well due to
this (since they now share the gamma expansion code).
- Renames :icc-approx-gamma to :approx-gamma since it is no longer tied
to the ICC options or LittleCMS.
- Uses gamma 2.4 as input space for the actual 3DLUT. This is now a
pretty arbitrary factor, but I picked 2.4 mainly because a higher pure
power value here seems to produce visually better results with wide
gamut profiles than the previous 1.95 or BT.709.
- Adds the input gamma space to the 3dlut cache header in case we change
it more in the future, or even make it user customizable (though I
don't see why the latter would really be necessary).
- Fixes the OSD's gamma when using :srgb, which was previously still
using the old (0.45) approximation in all cases.
- Updates documentation on :srgb, it was still mentioning the old
behavior from circa a year ago.
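For reference, a hedged C sketch of the decompanding step mentioned in
the list above; the shader does the equivalent per channel (the
constants are the standard BT.709 ones, the function name is made up):

    // exact BT.709 decompanding (inverse OETF), applied before upscaling
    static float bt709_expand(float v)
    {
        return v < 0.081f ? v / 4.5f
                          : powf((v + 0.099f) / 1.099f, 1.0f / 0.45f);
    }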
This commit should both make the CMS/shader code much more accessible
and less confusing/error-prone, and at the same time improve the
performance of 3DLUTs with wide gamut color spaces.
I would have liked to make it more modular, but almost all of these
changes are interdependent, save for the documentation updates.
Note: Right now, the "3DLUT takes precedence over SRGB" logic is just
coded into gl_lcms.c's compile_shaders function. Ideally, this should be
done earlier, when parsing the options (by overriding the actual
opts.srgb flag) and outputting a warning to the user.
Note: I'm not sure how well this works together with real-world
subtitles that may need to be color corrected as well, or whether
:approx-gamma needs to apply to subtitles too. I'll need to test this
on proper files later.
Note: As of now, linear light scaling is still intrinsically tied to
either :srgb or :icc-profile. It would be thinkable to have this as an
extra option, :linear-scaling or similar, that could be used with or
without the two color management options.
Since the AO will run in a thread, and there's lots of shared state with
encoding, we have to add locking.
One case this doesn't handle correctly is the encode_lavc_available()
calls in ao_lavc.c and vo_lavc.c. They don't do much (usually they only
protect against doing --ao=lavc with normal playback), and changing
it would be a bit messy. So just leave them.
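As a hedged illustration (the struct and field names are hypothetical),
every AO-side touch of the shared encoding state now looks roughly like
this:

    #include <pthread.h>

    struct encode_ctx {            // hypothetical shared state
        pthread_mutex_t lock;
        int64_t samples_written;
    };

    static void account_samples(struct encode_ctx *ctx, int64_t n)
    {
        pthread_mutex_lock(&ctx->lock);   // AO thread vs. encoder
        ctx->samples_written += n;
        pthread_mutex_unlock(&ctx->lock);
    }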
The previous version of the gamma suboption was pretty useless. It could
be used to disable delayed gamma enabling, which is a mechanism to avoid
having to adjust gamma in the shader by default.
Repurpose the suboption and allow setting an exact gamma value with it.
You can already override gamma with the --gamma option as well as the
gamma input property, but these use a weird curve to create the
impression of a linear perceived brightness change when changing the
value. This suboption now allows setting an exact gamma value.
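A hedged sketch of the difference (exponent direction assumed, function
name made up): the suboption feeds a raw exponent to the shader, with no
perceptual remapping of the value first.

    // exact: apply the user-supplied gamma value directly
    static float apply_exact_gamma(float v, float user_gamma)
    {
        return powf(v, 1.0f / user_gamma);
    }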
This used to be absolute colorimetric, but relative colorimetric is a
saner default due to the arguments presented in issue #595.
A short summary: In general it doesn't affect much because our eyes
adapt to the white point either way, but if running in windowed mode it
would make the whites seem inconsistent/tinted. For fullscreen
projection it's also undesirable since it reduces the dynamic range
without much benefit (again, since our eyes adapt either way) and it
also breaks calibration against ambient lighting.
This shouldn't change much, since most profile types that aren't 3DLUTs
aren't capable of either of those transforms, and most displays are
calibrated against D65 (same as BT.709 source) either way.
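In lcms2 terms, a hedged sketch of the change is just the intent
argument when the transform for the 3DLUT is built (the profile handles
are assumed to come from the surrounding code):

    // relative colorimetric instead of the old absolute colorimetric
    cmsHTRANSFORM t = cmsCreateTransform(src_profile, TYPE_RGB_16,
                                         dst_profile, TYPE_RGB_16,
                                         INTENT_RELATIVE_COLORIMETRIC, 0);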
This uses the value of 1.95 as an approximation for the exact gamma
curve, which replicates the behavior of popular video software including
anything in the Apple ecosystem, as per issue #534.
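I.e., a hedged one-liner (function name made up): the approximation
treats the source as a pure power curve.

    // approx: pure power 1.95 in place of the exact BT.709 curve
    static float approx_expand(float v)
    {
        return powf(v, 1.95f);
    }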
This is the same issue as addressed by 257d9f1, except this time for
the :srgb option as well. (257d9f1 only addressed :icc-profile)
The conditions of the srgb_compand mix() call are also flipped to
prevent an off-by-one error.
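For reference, a hedged C equivalent of the companding with the intended
condition (standard sRGB constants; the shader expresses the same
comparison through mix()):

    static float srgb_compand(float v)
    {
        // the threshold compares the *linear* value; getting this side
        // wrong is the off-by-one this fixes
        return v <= 0.0031308f ? v * 12.92f
                               : 1.055f * powf(v, 1.0f / 2.4f) - 0.055f;
    }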
I was unhappy with the old way of handling buffers, especially resizing. But my
original plan to use wl_shm_pool_resize wasn't as good as I initially thought.
I might get back to it.
With the new buffer pools it is now possible to select triple buffering.
The buffer pools are also needed for the upcoming subsurfaces for osd
and subtitles.
I hope this change was worth it.
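A hedged sketch of the pool idea (NUM_BUFFERS, the shm fd, and the
dimensions are assumptions): one wl_shm pool is carved into several
buffers, so the compositor can hold one while we draw into another.

    // one pool, NUM_BUFFERS slices: double or triple buffering
    struct wl_shm_pool *pool =
        wl_shm_create_pool(shm, fd, NUM_BUFFERS * stride * height);
    for (int i = 0; i < NUM_BUFFERS; i++)
        buffers[i] = wl_shm_pool_create_buffer(pool,
                        i * stride * height, width, height,
                        stride, WL_SHM_FORMAT_ARGB8888);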
I could not see any difference whatsoever, but with a 3DLUT there's
zero performance difference, so we might as well follow the spec to the
letter.
Legacy GL context creation (glXCreateContext) explicitly requires an X
visual, while the modern one (glXCreateContextAttribsARB) does not, for
some reason. So fail only on the legacy code path if we don't find a
visual. Note that vo_x11_config_vo_window() will select a default visual
if a NULL visual is passed to it.
This fixes issue #504. For some reason, glXChooseFBConfig() will return
a fbconfig with no associated visual. (I'm not sure if this is allowed.
They don't always have a visual, but since GLX_X_RENDERABLE is set
and GLX_DRAWABLE_TYPE is (implicitly) set to GLX_WINDOW_BIT, why would
there be no visual?)
Even worse, a test program seems to show that a 16 bit fbconfig is
selected (instead of 24/32 bit), which doesn't sound nice at all. Since
there _are_ better fbconfigs available, glXChooseFBConfig() should
normally sort them by quality, and return the better ones first. It's
worth noting that this function should also prefer GLX_TRUE_COLOR
over anything else, although this comes last in the sort order.
Whatever is going on, requesting GLX_X_VISUAL_TYPE with GLX_TRUE_COLOR
seems to fix it.
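A hedged sketch of the request (attribute list only; display and screen
come from the usual X11 setup):

    // explicitly ask for a true-color visual so glXChooseFBConfig()
    // doesn't hand back a visual-less (or 16 bit) fbconfig first
    int attribs[] = {
        GLX_X_RENDERABLE, True,
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_X_VISUAL_TYPE, GLX_TRUE_COLOR,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        None
    };
    int n;
    GLXFBConfig *cfgs = glXChooseFBConfig(display, screen, attribs, &n);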
This was done incorrectly in the previous commit: the fallback size used
the window size as requested with the first config call, which is the
size of the hidden window in the vo_opengl case. (That damn hidden
window again...)
This code essentially does nothing. As far as I could find out, this
actually used to do something. Then it was removed with commit efe7c39f,
leaving some leftover code that didn't do anything useful. This happened
12 years ago!
Also remove a commented debug printf.
vo_opengl creates a hidden X11 window to probe the OpenGL context. It
must do that before creating a visible window, because VO creation and
VO config are separate phases.
There's a race condition involving the hidden window: when starting with
--fs, and then leaving fullscreen, the unfullscreened window is
sometimes set to the aspect ratio of the hidden window. I'm not sure
why the window itself gets the correct size (only corrupted by the
wrong aspect), but that's perhaps because the window manager is free to
ignore the size hint while honoring the aspect, or something equally
messed up.
It turns out this happens because x11_common.c thinks the size of the
hidden window is the size of the unfullscreened window. This in turn
happens because vo_x11_update_geometry() reads the size of the hidden
window when called in vo_x11_fullscreen() (called from
vo_x11_config_vo_window()) when mapping the fullscreen window. At that
point, the window could be mapped, but not necessarily. If it's not
mapped, it will get the size of the unfullscreened window... I think.
One could fix this by actively waiting until the window is mapped. Try
to pick a less hacky approach instead, and never read the window size
until MapNotify is received.
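In sketch form (the window_mapped flag is hypothetical), the X11 event
loop only trusts geometry once the map has actually happened:

    case MapNotify:
        x11->window_mapped = true;     // hypothetical flag
        vo_x11_update_geometry(vo);    // first trusted size read
        break;
    case ConfigureNotify:
        if (x11->window_mapped)        // ignore sizes of unmapped windows
            vo_x11_update_geometry(vo);
        break;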
vo_x11_create_window() needs a hack, because we'd possibly set the VO's
size to 0, causing e.g. vdpau to fail initialization. (It'll print
error messages until a proper resize is received.)
Larger sizes can introduce overflows, depending on the image format. In
the worst case, something larger than 16000x16000 with 8 bytes per pixel
will overflow 31 bits.
Maybe there should be a proper failure path instead of a hard crash, but
not yet. I imagine anything that sets a higher image size than a known
working size should be forced to call a function to check the size (much
like in ffmpeg/libavutil).
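A hedged sketch of such a check, using the limits described above
(av_image_check_size() is the libavutil analogue; the function name here
is made up):

    #include <stdint.h>
    #include <stdbool.h>

    // reject dimensions whose worst-case byte count (8 bytes per pixel)
    // would not fit into 31 bits
    static bool image_size_ok(int w, int h)
    {
        return w > 0 && h > 0 &&
               (int64_t)w * h * 8 < ((int64_t)1 << 31);
    }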
RGB565 is one of the fastest and most supported formats on low end consumer
devices, but ffmpeg spams warnings when using it. Make it opt-in instead of
opt-out.
The problem seems to have solved itself. I guess the previous changes to
resizing and commit ba101ab made this possible. Consider me happy for removing
that crap.
Still untested, because now it crashes inside of libSDL for unknown
reasons. (This also happens with mpv git from yesterday - probably an
installation problem, or SDL doing weird things it shouldn't be doing.)