This is the same issue as addressed by 257d9f1, except this time for
the :srgb option as well. (257d9f1 only addressed :icc-profile)
The conditions of the srgb_compand mix() call are also flipped to
prevent an off-by-one error.
I could not see any difference whatsoever, but for usage with a 3DLUT
there's zero performance difference so we might as well follow the spec to
the letter.
Legacy GL context creation (glXCreateContext) explicitly requires an X
visual, while the modern one (glXCreateContextAttribsARB) does not for
some reason. So fail only on the legacy code path if we don't find a
visual. Note that vo_x11_config_vo_window() will select a default visual
if a NULL visual is passed to it.
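Roughly, the intended logic is something like this (a sketch only; the
have_arb_create_context flag and the error handling are made-up names,
not the actual code):

    XVisualInfo *vinfo = glXGetVisualFromFBConfig(display, fbconfig);
    if (!vinfo && !have_arb_create_context) {
        // Only legacy glXCreateContext() strictly needs a visual.
        fprintf(stderr, "[gl] no GLX visual found\n");
        return false;
    }
    // Modern path continues; vo_x11_config_vo_window() picks a
    // default visual if vinfo is NULL.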
This fixes issue #504. For some reason, glXChooseFBConfig() will return
a fbconfig with no associated visual. (I'm not sure if this is allowed.
They don't always have a visual, but since GLX_X_RENDERABLE is set
and GLX_DRAWABLE_TYPE is (implicitly) set to GLX_WINDOW_BIT, why would
there be no visual?)
Even worse, a test program seems to show that a 16 bit fbconfig is
selected (instead of 24/32 bit), which doesn't sound nice at all. Since
there _are_ better fbconfigs available, glXChooseFBConfig() should
normally sort them by quality, and return the better ones first. It's
worth noting that this function should also prefer GLX_TRUE_COLOR
over anything else, although this comes last in the sort order.
Whatever is going on, requesting GLX_X_VISUAL_TYPE with GLX_TRUE_COLOR
seems to fix it.
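For illustration, the attribute list then looks roughly like this
(a sketch, not the exact code):

    int attribs[] = {
        GLX_X_RENDERABLE, True,
        GLX_X_VISUAL_TYPE, GLX_TRUE_COLOR,  // the workaround
        GLX_RED_SIZE, 8,
        GLX_GREEN_SIZE, 8,
        GLX_BLUE_SIZE, 8,
        None
    };
    int num;
    GLXFBConfig *fbc = glXChooseFBConfig(display, screen, attribs, &num);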
This was done incorrectly in the previous commit: the fallback size used
the window size as requested with the first config call, which is the
size of the hidden window in the vo_opengl case. (That damn hidden
window again...)
Conflicts:
video/out/x11_common.c
This code essentially does nothing. As far as I could find out, this
actually used to do something. Then it was removed with commit efe7c39f,
leaving some leftover code that didn't do anything useful. This happened
12 years ago!
Also remove a commented debug printf.
vo_opengl creates a hidden X11 window to probe the OpenGL context. It
must do that before creating a visible window, because VO creation and
VO config are separate phases.
There's a race condition involving the hidden window: when starting with
--fs, and then leaving fullscreen, the unfullscreened window is
sometimes set to the aspect ratio of the hidden window. I'm not sure why
the window size itself comes out correct (though corrupted by the wrong
aspect); perhaps the window manager is free to ignore the size hint
while honoring the aspect, or something equally messed up.
It turns out this happens because x11_common.c thinks the size of the
hidden window is the size of the unfullscreened window. This in turn
happens because vo_x11_update_geometry() reads the size of the hidden
window when called in vo_x11_fullscreen() (called from
vo_x11_config_vo_window()) when mapping the fullscreen window. At that
point, the window could be mapped, but not necessarily. If it's not
mapped, it will get the size of the unfullscreened window... I think.
One could fix this by actively waiting until the window is mapped. Try
to pick a less hacky approach instead, and never read the window size
until MapNotify is received.
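In the event loop this boils down to something like (sketch; the
window_hidden flag is an assumed name):

    case MapNotify:
        x11->window_hidden = false;
        // Only now is the reported window size meaningful:
        vo_x11_update_geometry(vo);
        break;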
vo_x11_create_window() needs a hack, because we'd possibly set the VO's
size to 0, causing e.g. vdpau to fail initialization. (It'll print
error messages until a proper resize is received.)
Conflicts:
video/out/x11_common.c
Larger sizes can introduce overflows, depending on the image format. In
the worst case, something larger than 16000x16000 with 8 bytes per pixel
will overflow 31 bits.
Maybe there should be a proper failure path instead of a hard crash, but
not yet. I imagine anything that sets a higher image size than a known
working size should be forced to call a function to check the size (much
like in ffmpeg/libavutil).
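Such a helper could look roughly like this (hypothetical, modeled on
av_image_check_size()):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool image_size_ok(int w, int h)
    {
        // Assume the worst case of 8 bytes per pixel; the total
        // byte size must fit in 31 bits.
        return w > 0 && h > 0 && (int64_t)w * h * 8 <= INT_MAX;
    }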
RGB565 is one of the fastest and most supported formats on low end consumer
devices, but ffmpeg spams warnings when using it. Make it opt-in instead of
opt-out.
The problem seems to have solved itself. I guess the previous changes to
resizing and commit ba101ab made this possible. Consider me happy for removing
that crap.
For a long time the cocoa backend set the xinerama_x/y and used dx/dy from the
VO instance. This somewhat worked with some workarounds but wasn't really
what was supposed to be happening. Moreover 27e4360, which touched this
workaround, introduced a regression.
New code doesn't set the xinerama_x/y values so that dx/dy are offsets in the
current screen (not a virtual screen composed of all the screens). The screen
reference detected during VOCTRL_UPDATE_SCREENINFO is also passed down to the
window initialization code.
Fixes #472
On X11, if no wayland compositor is running, wl_list_init() will never
be called. This will cause destroy_display() to segfault when trying to
iterate over the list.
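The fix amounts to initializing the list unconditionally, so that
destroy_display() can always iterate it (sketch; the field names here
are assumptions):

    // in the init path, before any wayland connection attempt:
    wl_list_init(&display->output_list);

    // in destroy_display(), iteration is then always safe:
    struct vo_wayland_output *output, *tmp;
    wl_list_for_each_safe(output, tmp, &display->output_list, link)
        free(output);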
There are still some leaks from wayland-cursor stuff, but there is no way to
free the memory as a user of the cursor library.
Conflicts:
video/out/wayland_common.c
The user_data is passed on add_listener and can later be changed with
set_user_data. But since we don't want to change it later, and since it
is the same object anyway, remove the set_user_data call.
This might be a copy&paste leftover from the initial draft for the wayland
backend.
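Using the pointer listener as an example (sketch):

    // user_data ("wl" here) is already bound when adding the listener:
    wl_pointer_add_listener(pointer, &pointer_listener, wl);
    // ...so a later wl_pointer_set_user_data(pointer, wl) is redundant.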
I added enough logic to never set ontop or fullscreen twice.
This commit also keeps the size of the video if multiple videos are
played. If the aspect ratio differs, the width is kept the same and
only the height changes.
libwayland-client contains the following code [1]:
    runtime_dir = getenv("XDG_RUNTIME_DIR");
    if (!runtime_dir) {
        fprintf(stderr,
                "error: XDG_RUNTIME_DIR not set in the environment.\n");
This means this message will unconditionally and unavoidably be printed
if XDG_RUNTIME_DIR is not set. Since mpv is a terminal program, and we
want to avoid unnecessary output, work around this by not attempting to
use wayland if this environment variable is not set.
[1] http://cgit.freedesktop.org/wayland/wayland/tree/src/wayland-client.c#n636
(cd0dccd01e16fa404e03974d30ded3aebdb1c4bc)
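The workaround itself is trivial (sketch):

    if (!getenv("XDG_RUNTIME_DIR"))
        return false;  // skip wayland entirely, no connect attempt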
This commonly happens when initializing vo_opengl on an X11-only system.
Unfortunately, most wl_*_destroy() functions appear not to accept NULL
pointers, making partial deinitialization a pain: you have to add your
own NULL checks everywhere to avoid crashes.
xkb.context is uninitialized separately, because you can initialize it
just fine, even if the rest of input initialization fails.
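So partial deinitialization ends up looking like this (sketch with
assumed field names):

    if (d->shm)
        wl_shm_destroy(d->shm);
    if (d->compositor)
        wl_compositor_destroy(d->compositor);
    if (d->display) {
        wl_display_flush(d->display);
        wl_display_disconnect(d->display);
    }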
Because of this commit there were problems displaying the frames in
their right order.
This reverts commit 96e75d234a.
Conflicts:
video/out/gl_wayland.c
video/out/wayland_common.h
The changes in the vo_wayland_ontop function have no effect on the workaround.
Somehow the problem just disappeared. I guess it is because of the new control
function in gl_wayland.c where the resize happens immediately after the event
dispatch/flush.
This solves the issue where we would not receive any frame events. The
difference to my earlier tests is that now it looks like eglSwapBuffers
uses its own event queue, or something similar along those lines,
because the performance is the same as without any redraw callback.
At the moment there are visual glitches when we resize the window. This happens
because in wayland there is a special function for resizing EGL windows.
To prevent the glitches move the egl_context to the wayland state in
wayland_common.h and add a new control function to gl_wayland.c to wrap the
vo_wayland_control function to check for resize events.
With the new control wrapper the glitches are gone and the resizing is fluid.
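The wrapper is roughly this shape (sketch; the real signatures and
event flag names in mpv may differ):

    static int control(struct vo *vo, int *events, int request, void *data)
    {
        int r = vo_wayland_control(vo, events, request, data);
        if (*events & VO_EVENT_RESIZE)
            // wayland needs its own resize call for EGL windows
            wl_egl_window_resize(egl_window, width, height, 0, 0);
        return r;
    }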
The reason a segmentation fault happened here was that we couldn't get
the requested minor version. The major version is enough for differentiating
between OpenGL 3 and OpenGL 2. If it fails there is still a fallback to any
version available.
Also add a warning if we use the fallback.
Set the flag CODEC_FLAG_OUTPUT_CORRUPT by default. Note that there is
also CODEC_FLAG2_SHOW_ALL, which is older, but this seems to be ffmpeg
only.
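With the lavc API names above, enabling it is a one-liner (sketch):

    avctx->flags |= CODEC_FLAG_OUTPUT_CORRUPT;  // also output damaged frames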
Note that whether you want this enabled depends on the user. Some might
prefer that only good frames are output, while others want the decoder
to try as hard as possible to output _anything_. Since mplayer/mpv is
rather the kind of player that tries hard instead of being "clever", set
the new default to override libavcodec's default.
A nice way to test this is switching video tracks. Since mpv doesn't
wait for the next key frame, it'll start feeding the decoder with a
packet from the middle of the stream.