(Not sure why it worked without this when I tested the previous
changes.)
Untested, but should be fine. This is equivalent to what is done on
e.g. panscan changes.
I think this used to be quite important, because the ancient VfW support
in MPlayer used to output flipped frames. This code has been dead in mpv
for quite some time (because VfW decoders were removed, and the --flip
option was dropped too), so get rid of it.
Currently, the wayland backend needs extra work to avoid drawing more
often than the wayland frame callback allows. (This is not ideal, but
will be fixed at a later time.)
Unify this with the start_frame callback added for cocoa. Some details
change for the better. For example, if a frame is dropped, and a redraw
is done afterwards, the actually correct frame is redrawn, instead of
whatever was in the textures from before the dropped frame.
With --idle --force-window, or when started from the bundle, the cocoa
code dropped the first frame. This resulted in a black frame on start
sometimes.
The reason was that the live resizing/redrawing code was invoked, which
simply set the skip_swap_buffer flag, blocking the buffer swap for
whatever was going to be rendered next. Normally this is done so that
the following
works:
1. vo_opengl draws a frame, releases the GL lock
2. live resizing kicks in, redraws the frame
3. vo_opengl wants to call SwapBuffers, which would present a stale
   buffer overwritten by the live resizing code
This is solved by setting skip_swap_buffer in 2., and querying it in 3.
Fix this by resetting the skip_swap_buffer flag at a known good point:
when vo_opengl starts drawing a new frame.
The start_frame function returns bool, so that it can be merged with
is_active in a following commit.
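A minimal sketch of this flow, assuming the flag polarity described
above (redraw_current_frame() and SwapBuffers() stand in for the real
cocoa/vo_opengl calls):

    void redraw_current_frame(void); // stands in for the real redraw
    void SwapBuffers(void);          // stands in for the real swap

    static bool skip_swap_buffer; // pending swap would be stale

    // 2. live resizing redraws the current frame; the next
    //    SwapBuffers would present a stale buffer, so mark it
    static void live_resize_redraw(void)
    {
        redraw_current_frame();
        skip_swap_buffer = true;
    }

    // the fix: reset the flag at a known good point, i.e. when
    // vo_opengl starts drawing a new frame
    static bool start_frame(void)
    {
        skip_swap_buffer = false;
        return true; // bool, so it can be merged with is_active later
    }

    // 3. swap only if live resizing didn't already overwrite the
    //    buffer rendered in 1.
    static void swap_buffers(void)
    {
        if (!skip_swap_buffer)
            SwapBuffers();
    }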
Commit f1746741de changed the drop
logic to have more slack (drop more frames, but less frequently) to
prevent drops due to timing jitter when the clip and screen have
similar rates.
However, if the clip has higher rate than the screen (or just higher
playback rate), then that policy hurts smoothness since these "chunked
drops" look worse than one frame drop at a time.
This patch restores the old drop logic when the playback frame rate is
more than ~5% above the screen refresh rate, which solves this issue.
Fixes #1897
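A sketch of the resulting policy (variable names are illustrative, not
the actual mpv code):

    // effective playback rate vs. display rate: above ~5% over the
    // refresh rate, fall back to dropping one frame at a time, since
    // "chunked" drops look worse there
    double rate = container_fps * playback_speed;
    bool use_chunked_drops = rate <= display_fps * 1.05;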
Also factor the display size initialization into a separate function.
For some reason this seems to work, although setting the background
color using this 1x1 pixel bitmap does not work. I blame the RPI for
being a terrible piece of hardware with even worse drivers.
Right now, the default behavior is to pick the numerically lowest
screen ID that overlaps the window in any way - but this means that mpv
will pick an ICC profile in a pretty arbitrary way, even if the window
only overlaps another screen by a single pixel.
The new behavior is to query it based on the center of the window
instead.
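A sketch of the center-based lookup (the helper and the screen list are
hypothetical; the rectangle is an mpv-style mp_rect):

    struct mp_rect { int x0, y0, x1, y1; }; // as in mpv

    // hypothetical helper: find the screen containing the window's
    // center, rather than the lowest-numbered overlapping screen
    static int screen_for_window(struct mp_rect win,
                                 struct mp_rect *screens, int num_screens)
    {
        int cx = win.x0 + (win.x1 - win.x0) / 2;
        int cy = win.y0 + (win.y1 - win.y0) / 2;
        for (int n = 0; n < num_screens; n++) {
            struct mp_rect *s = &screens[n];
            if (cx >= s->x0 && cx < s->x1 && cy >= s->y0 && cy < s->y1)
                return n; // use this screen's ICC profile
        }
        return 0; // fallback: first screen
    }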
If the user switched terminals frantically, mpv could get SIGUSR1 twice
in a row, which, up until now, resulted in destroying the CRTC twice.
This caused it to segfault. After this fix, a double SIGUSR1 is
ignored.
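Conceptually, the fix amounts to a guard like the following (names are
illustrative):

    void destroy_crtc(void); // stands in for the real cleanup

    static bool have_crtc; // do we currently own the CRTC?

    static void release_vt(void) // invoked on SIGUSR1
    {
        if (!have_crtc)
            return; // second SIGUSR1 in a row: nothing to release
        destroy_crtc();
        have_crtc = false;
    }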
This prevents the machine from going to sleep while a video-only stream
is playing. When audio is playing, the audio stack should make this
request for us.
Logging was meant to be silenced only when the user uses the connector
auto-detection feature. If the user supplies a connector ID manually,
they should see the exact reason why connecting to that specific
connector failed.
We already use two screensaver APIs when attempting to disable the
screensaver: XResetScreenSaver() (from xlib) and XScreenSaverSuspend()
(from the X11 Screen Saver extension). Neither of these actually works.
On modern desktop Linux, we are expected to make dbus calls using some
freedesktop-defined protocol (and possibly we'd have to fall back to a
Gnome-specific one). At least xscreensaver doesn't respect the "old"
APIs either.
Solve this by running the xdg-screensaver script. It's a terrible, ugly
piece of shit (just read the script if you disagree), but at least it
appears to work everywhere. It's also simpler than involving various
dbus client libraries.
I hope this can replace the --heartbeat-cmd option, and maybe we could
remove our own DPMS/XSS code too.
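The invocation amounts to something like this sketch (the real code
differs, and the child is not reaped here):

    #include <stdio.h>
    #include <unistd.h>

    // suspend the screensaver for our X11 window by spawning the
    // xdg-screensaver script; "resume" undoes it on exit
    static void screensaver_suspend(unsigned long xid)
    {
        char wid[32];
        snprintf(wid, sizeof(wid), "0x%lx", xid);
        if (fork() == 0) {
            execlp("xdg-screensaver", "xdg-screensaver", "suspend",
                   wid, (char *)NULL);
            _exit(1); // exec failed
        }
    }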
This is optional, but ensures that linking with -Wl,--as-needed does
not drop the MMAL VC driver. The driver normally "registers" itself
in the library constructor, but since no symbols are explicitly
referenced, the linker could remove it with as-needed enabled.
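One way to force such a reference is a dummy symbol reference; the
choice of mmal_vc_init() here is an assumption (any symbol exported by
the VC client library would do):

    #include <interface/mmal/vc/mmal_vc_api.h>

    // any explicit reference to a symbol from the VC client library
    // keeps -Wl,--as-needed from discarding it; nothing is actually
    // called through this pointer
    MMAL_STATUS_T (*const mmal_vc_driver_ref)(void) = mmal_vc_init;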
Not sure why this was so roundabout; probably to keep spam down if the
user's OpenGL drivers are crap (but then just don't enable extended
features), or because the "Disabling..." text was so repetitious.
But there doesn't seem to be a good reason after all. Also, this could
already overflow the fixed-size disabled[] array. Just print the
messages directly.
This could help in cases where the DWM (Windows desktop compositor)
adds another layer of buffering, which could mess up the SwapBuffers
timing.
Signed-off-by: wm4 <wm4@nowhere>
On my Windows system this allows smoothmotion to work perfectly also in
windowed mode. There's no real right or wrong here, the only goal being
to always hit the next vsync. However, in cases where vsync timing is
jittery (as can happen with the DWM), this patch aims at the middle of
the vsync cycle to be affected as little as possible by such jitter.
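In code terms, the aim point changes roughly like this (names are
illustrative):

    // aim at the middle of the vsync cycle rather than the vsync
    // edge: jitter of up to half an interval in either direction then
    // stays within the same cycle
    int64_t target_time = next_vsync + vsync_interval / 2;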
Adds one vsync interval of "slack" before deciding to drop the first
frame. This should help in cases of timing jitter (sleep duration,
container timestamps, compositor vsync timing, etc.). Once the drop
threshold has been crossed, it keeps dropping until perfect timing
alignment is reached. This prevents crossing the drop threshold back
and forth repeatedly, and is therefore more resilient to frame drops.
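A sketch of this hysteresis (names illustrative):

    // one vsync interval of slack before the first drop; afterwards,
    // keep dropping until timing is perfectly aligned again, so we
    // don't oscillate around the threshold
    if (!dropping)
        dropping = lateness > vsync_interval;
    else
        dropping = lateness > 0;
    if (dropping)
        drop_frame();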
Increase the default queue size. This helps with "missed" frames due to
the asynchronous nature of the API. All the other VOs are synchronous,
so if rendering and displaying takes a while, the common code in vo.c
will be blocked until it can continue. But with opengl-cb, vo.c might
immediately push the next ready frame, which causes the current frame
to be dropped _if_ it wasn't rendered yet.
One could fix this by making vo.c wait a while (until the API user calls
the render function, which pulls the frame). But setting the default
queue size to 2 seems much simpler: instead of dropping the frame, it
will be pushed to the API user once the next renderer call finishes.
(This is still a bit strange, and will hopefully be cleaned up when
video scheduling is redone, but for now this appears to deliver
relatively good results.)
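Simplified, the push side behaves like this (illustrative; not the real
opengl-cb code):

    #include <stdbool.h>

    #define QUEUE_SIZE 2 // previously 1

    struct vo_frame; // opaque here

    static struct vo_frame *queue[QUEUE_SIZE];
    static int num_queued;

    // vo.c pushes the next ready frame as soon as it has one; with a
    // size-1 queue this forced a drop if the API user hadn't rendered
    // the current frame yet, with size 2 the frame is simply queued
    static bool push_frame(struct vo_frame *f)
    {
        if (num_queued == QUEUE_SIZE)
            return false; // caller has to drop the frame
        queue[num_queued++] = f;
        return true;
    }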
Use texture-from-pixmap instead of vaapi's "native" GLX support.
Apparently the latter is unused by other projects. Possibly it's broken
due to that, and due to Intel's inability to provide anything
non-broken in relation to video.
The new code basically uses the X11 output method on an in-memory
pixmap, and maps this pixmap as a texture using standard GLX
mechanisms. This
requires a lot of X11 and GLX boilerplate, so the code grows. (I don't
know why libva's GLX interop doesn't just do the same under the hood,
instead of bothering the world with their broken/unmaintained "old"
method, whatever it did. I suspect that Intel programmers are just
genuine sadists.)
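The GLX side roughly looks as follows (a sketch: FBConfig selection and
error handling are omitted, and the EXT functions are resolved via
glXGetProcAddress() in real code):

    #include <GL/glx.h>

    const int attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
        None,
    };
    Pixmap pixmap = XCreatePixmap(display, root, w, h, depth);
    GLXPixmap glx_pixmap =
        glXCreatePixmap(display, fbconfig, pixmap, attribs);

    // per frame: vaPutSurface() renders into the pixmap, then the
    // pixmap contents are bound as a GL texture
    glBindTexture(GL_TEXTURE_2D, texture);
    glXBindTexImageEXT(display, glx_pixmap, GLX_FRONT_EXT, NULL);
    // ... draw using the texture ...
    glXReleaseTexImageEXT(display, glx_pixmap, GLX_FRONT_EXT);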
This change was suggested in issue #1765.
The old GLX support is removed, as it's redundant and broken anyway.
One remaining issue is that the first vaPutSurface() call fails with an
unknown error. It returns -1, which is pretty strange, because vaapi
error codes are normally positive. It happened with the old GLX code
too, but does not happen with vo_vaapi. I couldn't find out why.