Instead of requiring a complicated mechanism to share the entire OpenGL
and renderer state between the VO and Cocoa threads just to do the redrawing
during live-resize on the Cocoa thread, let the Cocoa thread wait on the
VO thread. This will allow some major simplifications and cleanups in the
future.
One problem with this is that it can enter a deadlock whenever the VO
tries to sync with the Cocoa thread. To deal with this, the Cocoa thread
waits with a timeout. This can probably be improved later, though in
general this situation can always happen, unless the Cocoa thread waits
in a reentrant way.
Some other details aren't completely clean either. For example,
pending_events should be accessed atomically. This will also be fixed
later.
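Roughly, the waiting side looks like the following sketch (illustrative
names only, not the actual code): the Cocoa thread blocks on a condition
variable until the VO thread signals that the redraw is done, but gives
up after a deadline instead of deadlocking.

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    // Sketch only: the Cocoa thread waits for the VO thread's redraw with
    // a timeout, so a VO that is simultaneously trying to sync with the
    // Cocoa thread cannot hang both sides forever.
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t wakeup = PTHREAD_COND_INITIALIZER;
    static bool redraw_done;

    static void wait_for_vo_redraw(const struct timespec *deadline)
    {
        pthread_mutex_lock(&lock);
        while (!redraw_done) {
            if (pthread_cond_timedwait(&wakeup, &lock, deadline))
                break; // timed out; give up instead of deadlocking
        }
        pthread_mutex_unlock(&lock);
    }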
Will be used to make video waiting interruptible with Cocoa (see the
following commit).
One worry was that this could cause hangs if the system clock jumps
backwards. Normally we don't support such behavior, because it's
almost impossible to handle it reasonably. E.g. we would have to
change the default clock type for condition variables, which in turn
would require a custom function for creating condition variables, or
something like that, assuming the OS even supports different clocks.
But it turns out that this is not an issue, because other events seem
to wake up the wait call anyway, and mpv's internal absolute times use
a monotonic clock.
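For reference, the kind of custom creation function alluded to above
would look roughly like this (sketch only; pthread_condattr_setclock()
is not available on every OS, which is exactly the problem):

    #include <pthread.h>
    #include <time.h>

    // Hypothetical wrapper that binds condition variables to the
    // monotonic clock, so realtime clock jumps can't affect timed waits.
    static int cond_init_monotonic(pthread_cond_t *cond)
    {
        pthread_condattr_t attr;
        pthread_condattr_init(&attr);
        pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
        int r = pthread_cond_init(cond, &attr);
        pthread_condattr_destroy(&attr);
        return r;
    }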
This uses the OpenGL frame interpolation code, which could previously be
used only by vo_opengl.
Some effort was made to make it behave like vo_opengl, for better or
worse. As a consequence, there is a minor duplication of code and
mechanism. Hopefully this can all be wiped as soon as the VO frame
queue/timing mechanism is cleaned up.
This also attempts to use mpv_opengl_cb_report_flip() (as called by the
API user) to determine the vsync interval. This might need refinement as
well.
(In general, we simply expect the API user to work in a vsync-blocking
manner.)
(I have no idea why there are different modes.)
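The assumed usage on the API user's side is roughly the following sketch
(swap_buffers() is a placeholder for whatever presents the frame, and
the exact draw entrypoint depends on the libmpv version):

    #include <mpv/opengl_cb.h>

    extern void swap_buffers(void);   // placeholder: presents the frame

    // Sketch: draw, present (assumed to block until vsync), then report
    // the flip so mpv can estimate the vsync interval. 0 = "about now".
    static void render_one_frame(mpv_opengl_cb_context *ctx, int fbo,
                                 int w, int h)
    {
        mpv_opengl_cb_draw(ctx, fbo, w, h);
        swap_buffers();
        mpv_opengl_cb_report_flip(ctx, 0);
    }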
Instead of risking dropping frames too early, give it some margin. Since
there are situations in which this could deadlock, wait with a timeout. This can
happen if e.g. the API user is refusing to render anything, or if
uninitialization is happening.
There is not much of a reason to have these wrappers around. Use POSIX
standard functions directly, and use a separate utility function to take
care of the timespec calculations. (Curse POSIX for using this weird
format for time values.)
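The utility function amounts to roughly the following (a sketch; names
are illustrative, not the actual helper):

    #include <time.h>

    // Turn a relative timeout in seconds into the absolute struct
    // timespec that pthread_cond_timedwait() expects. CLOCK_REALTIME is
    // the default clock for POSIX condition variables.
    static struct timespec abs_timeout(double timeout_sec)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        time_t whole = (time_t)timeout_sec;
        ts.tv_sec += whole;
        ts.tv_nsec += (long)((timeout_sec - (double)whole) * 1e9);
        if (ts.tv_nsec >= 1000000000L) {
            ts.tv_nsec -= 1000000000L;
            ts.tv_sec += 1;
        }
        return ts;
    }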
Now, among other things, panscan can be changed during playback.
Unfortunately, it flickers. The issue is that reconfig() clears the
framebuffer. Removing the clearing shows that the "unused" parts of
the picture are not cleared - even though OSD could render there. As
such, this is a separate issue.
When running with --panscan=1, this could crash - because the current
frame was reduced in size each time the image was redrawn, which would
result in a failed assertion the second time it's drawn.
This should fix some crashes due to dangling pointers.
The problem was that with_cocoa_lock_on_main_thread() is asynchronous:
it does not wait until the dispatched code has finished. In the uninit
case, this means the VO could be deallocated and destroyed while cocoa
was still running uninit code.
So simply wait until it is done by using dispatch_sync(). There were
concerns that this could introduce a deadlock by the main thread trying
to wait for something on the VO thread. But from what I can see, this
never happens, and even if it does, it would crash anyway since the VO
is already gone.
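The essence of the change, sketched with GCD (uses clang's blocks
extension, as the Cocoa code does; the wrapper name is made up):

    #include <dispatch/dispatch.h>

    // Run the uninit work on the main thread and, unlike
    // dispatch_async(), wait for it to finish before the caller
    // continues and the VO gets deallocated.
    static void run_on_main_thread_and_wait(void (^block)(void))
    {
        dispatch_sync(dispatch_get_main_queue(), block);
    }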
One remaining worry is the video_resize_redraw_callback. From what I can
see, it still can mess things up, and will need a more elaborate fix.
Reduces (but likely does not remove) the danger of rounding intermediate
values down to 8 bit. This is important for cscale, or any other
processing that might store raw YUV values in framebuffers.
Fixes #1918.
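For illustration only (a sketch of the general idea, not the actual
change): back intermediate FBOs with a texture format wider than 8 bits
per component, e.g.:

    #include <GL/gl.h>

    // Allocate an FBO backing texture with 16 bits per component, so
    // values written by scaler passes (cscale etc.) are not truncated to
    // 8 bit between passes. Names and sizes are placeholders.
    static void alloc_fbo_texture(GLuint tex, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_SHORT, NULL);
        glBindTexture(GL_TEXTURE_2D, 0);
    }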
Path expansion (like "~/dir/" in config file) was used inconsistently,
so the cache directory wasn't always created correctly. Fix this by
moving the path expansion from load_file() to its callers.
This unbreaks compiling the command line player and libmpv at the same
time. The problem was that doing so silently disabled the OSX
application thing - but the command line player cannot use the
vo_opengl Cocoa backend without it.
The OSX application code is basically dead in libmpv, but it's not
that much code anyway.
If you want an mpv binary that does not create an OSX application
singleton (and a menu etc.), you must disable cocoa
completely, as cocoa can't be used anyway in this case.
This now stores caches for multiple ICC profiles, potentially all the
user has ever used. The big use case for this is for users with multiple
monitors. The old logic would mandate recomputing the LUT and discarding
the cache whenever dragging mpv from one screen to another.
This also avoids having to save and check the ICC profile itself, since
the file name already uniquely determines it.
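Hypothetical sketch of the naming idea (hash and naming scheme made up
for illustration): derive the cache file name from the profile contents,
so every distinct profile maps to its own cache file.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    // FNV-1a, used here only as an example of a content-derived key.
    static uint64_t hash_bytes(const void *data, size_t size)
    {
        const unsigned char *p = data;
        uint64_t h = 0xcbf29ce484222325ULL;
        for (size_t i = 0; i < size; i++)
            h = (h ^ p[i]) * 0x100000001b3ULL;
        return h;
    }

    // Build a cache file name that uniquely identifies the ICC profile.
    static void icc_cache_name(char *buf, size_t bufsize,
                               const void *profile, size_t profile_size)
    {
        snprintf(buf, bufsize, "%016llx.cache",
                 (unsigned long long)hash_bytes(profile, profile_size));
    }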
This will essentially make screenshot-tag-colorspace also affect the
"screenshot window" command, where possible.
Unfortunately, it's completely incompatible with icc-profile, due to API
limitations of ffmpeg (we can only give it an enum of well-known
primaries, rather than an actual ICC profile or primaries).
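From the FFmpeg side the limitation looks roughly like this (BT.709 is
used purely as an example value): all that can be attached are the
predefined colorimetry enums, not an ICC profile.

    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    // Tag a frame with well-known colorimetry enums; an actual ICC
    // profile cannot be expressed through these fields.
    static void tag_frame_bt709(AVFrame *frame)
    {
        frame->color_primaries = AVCOL_PRI_BT709;
        frame->color_trc       = AVCOL_TRC_BT709;
        frame->colorspace      = AVCOL_SPC_BT709;
    }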
(Not sure why it worked without this when I tested the previous
changes.)
Untested, but should be fine. This is equivalent to what is done on
e.g. panscan changes.
I think this used to be quite important, because the ancient VfW support
in MPlayer used to output flipped frames. This code has been dead in mpv
for quite some time (because VfW decoders were removed, and the --flip
option was dropped too), so get rid of it.
Currently, the wayland backend needs extra work to avoid drawing more
often than the wayland frame callback allows. (This is not ideal, but
will be fixed at a later time.)
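The extra work amounts to the usual frame-callback throttle; a sketch
(names besides the wl_* API are illustrative):

    #include <stdbool.h>
    #include <wayland-client.h>

    static bool can_draw = true;

    static void frame_done(void *data, struct wl_callback *cb,
                           uint32_t time)
    {
        (void)time;
        wl_callback_destroy(cb);
        *(bool *)data = true;   // compositor says we may draw again
    }

    static const struct wl_callback_listener frame_listener = {
        .done = frame_done,
    };

    // Only draw when the previous frame callback has fired.
    static void maybe_draw(struct wl_surface *surface)
    {
        if (!can_draw)
            return;
        can_draw = false;
        // ... render and wl_surface_attach() the new buffer here ...
        struct wl_callback *cb = wl_surface_frame(surface);
        wl_callback_add_listener(cb, &frame_listener, &can_draw);
        wl_surface_commit(surface);
    }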
Unify this with the start_frame callback added for cocoa. Some details
change for the better. For example, if a frame is dropped, and a redraw
is done afterwards, the actually correct frame is redrawn, instead of
whatever was in the textures from before the dropped frame.
With --idle --force-window, or when started from the bundle, the cocoa
code dropped the first frame. This resulted in a black frame on start
sometimes.
The reason was that the live resizing/redrawing code was invoked, which
simply set skip_swap_buffer, suppressing the buffer swap for whatever
was going to be rendered next. Normally this is done so that the following
works:
1. vo_opengl draws a frame and releases the GL lock
2. live resizing kicks in and redraws the frame
3. vo_opengl wants to call SwapBuffers, which would present a stale
buffer on top of what the live resizing code just drew
This is solved by setting skip_swap_buffer in 2., and querying it in 3.
Fix this by resetting the skip_swap_buffer at a known good point: when
vo_opengl starts drawing a new frame.
The start_frame function returns bool, so that it can be merged with
is_active in a following commit.
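Schematically, the handshake is the following (names follow this commit
message, not the actual code):

    #include <stdbool.h>

    static bool skip_swap_buffer;

    // Step 2: live resizing redraws and presents the frame itself, so
    // the swap that vo_opengl has already queued up must be suppressed.
    static void live_resize_redraw(void)
    {
        // ... redraw the current frame and present it ...
        skip_swap_buffer = true;
    }

    // The fix: a new frame is about to be rendered, so stale "skip"
    // state from a previous live resize must not suppress it.
    static bool start_frame(void)
    {
        skip_swap_buffer = false;
        return true;    // later merged with is_active
    }

    // Step 3: only present if live resizing did not already do it.
    static void finish_frame(void)
    {
        if (!skip_swap_buffer) {
            // ... SwapBuffers ...
        }
    }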
Commit f1746741de changed the drop
logic to have more slack (drop more frames, but less frequently) to prevent
drops due to timing jitter when the clip and screen have similar rates.
However, if the clip has a higher rate than the screen (or just a higher
playback rate), then that policy hurts smoothness, since these "chunked
drops" look worse than one frame drop at a time.
This patch restores the old drop logic when the playback frame rate is
higher than ~5% above the screen refresh rate, and solves this issue.
Fixes #1897
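In code terms, the condition is roughly this (hypothetical sketch, not
the actual variable names):

    #include <stdbool.h>

    // Keep the "slack" (chunked) drop behavior only when the playback
    // frame rate is at most ~5% above the display refresh rate.
    static bool use_chunked_drops(double playback_fps, double display_fps)
    {
        return playback_fps <= display_fps * 1.05;
    }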
Also factor the display size initialization into a separate function.
For some reason this seems to work, although setting the background
color using this 1x1 pixel bitmap does not work. I blame the RPI for
being a terrible piece of hardware with even worse drivers.
It's weird that this basically adjusts the contrast between luma and
chroma, and not blackness.
This is more in line with the behavior of libswscale, the vdpau
"procamp" (which mpv doesn't use), and Xv.
Right now, the default behavior is to pick the numerically lowest screen
ID that overlaps the window in any way - but this means that mpv will
decide to pick an ICC profile in a pretty arbitrary way even if the
window only overlaps another screen by a single pixel.
The new behavior is to query it based on the center of the window
instead.
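A sketch of the selection logic (hypothetical types and names):

    struct rect { int x0, y0, x1, y1; };

    // Return the screen containing the window's center point, instead of
    // the lowest-numbered screen that merely overlaps the window.
    static int screen_for_window(struct rect win,
                                 const struct rect *screens, int n)
    {
        int cx = (win.x0 + win.x1) / 2;
        int cy = (win.y0 + win.y1) / 2;
        for (int i = 0; i < n; i++) {
            if (cx >= screens[i].x0 && cx < screens[i].x1 &&
                cy >= screens[i].y0 && cy < screens[i].y1)
                return i;
        }
        return 0; // center is off-screen; fall back to the first screen
    }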
vo_opengl (or gl_hwdec_vdpau.c to be specific) calls
mp_vdpau_mixer_render() with video_rect=NULL, which means to use the
full surface. This is incorrect if the surface is actually cropped, as
can happen with h264. In this case, it was rendering the parts
outside of the image.
Fix it by making this case use the cropped size instead.
Alternative fix for PR #1863.
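The fix amounts to something like this sketch (parameter names are
illustrative; VdpRect is the real VDPAU type):

    #include <vdpau/vdpau.h>

    // If the caller passes no source rectangle, derive one from the
    // image's cropped width/height instead of the full (aligned) surface.
    static VdpRect mixer_src_rect(const VdpRect *video_rect, int w, int h)
    {
        if (video_rect)
            return *video_rect;
        return (VdpRect){.x0 = 0, .y0 = 0,
                         .x1 = (uint32_t)w, .y1 = (uint32_t)h};
    }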
MP_IMGFIELD_TOP/MP_IMGFIELD_BOTTOM were completely unused, and
MP_IMGFIELD_ORDERED was always set (even though vf_vdpaupp.c strangely
checked for the latter).