This is just temporary code but is a good base for future work (and baby
steps are required for these changes). The 'final destination' is embedding
the video view into any NSView, but that requires some more work (the
mechanism will be the same: pass the view's pointer, cast to int64_t, through
--wid).
For instance, we will need to remove as much usage of the window instance
as possible, and use nil guards where that is not possible. For this reason I
will remove things like the Mission Control fullscreen feature (it's a cute
feature but annoying to support and quite limited; go make your own GUIs), and
add a way to look up the current screen directly from the NSView's absolute
coordinates (this is needed mostly for ICC detection, and for reporting the
screen back to mpv's core).
Moreover, the current view.m will need to be separated into 2 views: the
actual video view that will be embedded, and a parent view that will not be
embedded and will be responsible for tracking events.
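For illustration, a minimal sketch of what a host application could do with
the libmpv client API once this lands (hypothetical embedder code; assumes an
NSView created elsewhere):

    #include <stdint.h>
    #include <mpv/client.h>

    // Pass the view's pointer, cast to int64_t, as the "wid" option.
    static void embed_into_view(mpv_handle *mpv, void *nsview)
    {
        int64_t wid = (int64_t)(intptr_t)nsview;
        mpv_set_option(mpv, "wid", MPV_FORMAT_INT64, &wid);
    }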
This could be dangerous because we initialize the window asynchronously and
return immediately from config, but since the OpenGL context is already
created, this seems to work correctly and doesn't cause weird deadlock cases.
This doesn't appear to be needed anymore. Fullscreening with both the NSView
and the NSWindow API works correctly. I guess this was left over from older
code which changed presentation options directly when going fullscreen.
--x11-netwm=yes now forces NetWM fullscreen, while --x11-netwm=auto
(detect whether NetWM fullscreen support is available) is the old
behavior and still the default.
See #888.
Unfortunately, using dispatch_sync for synchronization turned out to be
really bad for us. It caused a wide array of race conditions, deadlocks, etc.
Moving to a very simple mutex. It's not clear to me how to do live resizing
with this; for now it just flickers, which is unacceptable (maybe I'll draw
black instead).
This should fix all the threading cocoa bugs. Reopen if that's not the case!
Fixes #751. Fixes #1129.
When the VO was moved to its own thread, responsibility for redrawing
was given to the VO thread itself. So if there was a condition that
indicated that redrawing was required, like expose events or certain
VOCTRLs, the VO thread was redrawing itself.
This worked fine, but there are some corner cases where this works
rather badly. E.g. if I fullscreen the player and hit panscan controls
with mpv's default autorepeat rate, playback stops. This happens because
the VO redraws itself after every panscan change command. Running each
(repeated) command takes so long due to redrawing and (involuntary)
waiting on vsync, that it never leaves the input processing loop while
the key is held down. I suspect that in my case, redrawing in fullscreen
mode just gets slow enough that it takes 2 vsyncs instead of 1 on
average, and the processing time gets larger than the autorepeat delay.
Fix this by taking redraw control from the VO, and instead let the
playloop issue a "real" redraw command to the VO if needed. This
basically reverts redraw handling to what it was before moving the VO to
a thread.
CC: @mpv-player/stable
Another piece of fallout from the changes to whether or not we wait for
the window to be mapped. In this case, it obviously makes no sense to wait
for mapping, because the root window is always mapped. Mapping will
never happen, and it would wait forever.
Fixes #1139.
CC: @mpv-player/stable
At least on kwin, we decide to proceed without waiting for the window
to be mapped (due to the frame extents hack, see commit 8c002b79). But that
leaves us with a window size of 0x0, which causes VdpOutputSurfaceCreate
to fail. This prints some warnings, although vo_vdpau recovers later and
this has no other bad consequences.
Do the following things to deal with this:
- set the "known" window size to the suggested window size before the
window is even created
- allow calling XGetGeometry on the window even if the window is not
mapped yet (this should work just fine)
- make the output surface minimum size 1x1
Strictly speaking, only one of these would be required to make the
warning disappear, but they're all valid changes and increase robustness
and correctness. At no point do we use a window size of 0x0 as a magic
value for "unset" or unknown size, so keeping it unset has no purpose
anyway.
CC: @mpv-player/stable
When embedding, if the parent window is destroyed, it will cause mpv's
window to be destroyed as well. Since WM_USER wakeups are sent to the
window, destroying the window will prevent wakeups and cause uninit to
hang.
Fix this by quitting the event loop on WM_DESTROY. Events should only be
processed for the lifetime of the window, from CreateWindowEx to
WM_DESTROY. After the event loop is finished, mp_dispatch_queue_process
can handle any remaining requests.
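A sketch of the resulting flow, in plain win32 terms (simplified, not the
actual mpv code):

    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_DESTROY) {
            PostQuitMessage(0); // makes GetMessageW() below return 0
            return 0;
        }
        return DefWindowProcW(hwnd, msg, wp, lp);
    }

    static void run_message_loop(void)
    {
        MSG msg;
        while (GetMessageW(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        // The window is gone at this point; remaining uninit requests are
        // handled by mp_dispatch_queue_process instead of WM_USER wakeups.
    }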
Commit 64b7811c tried to do the "right thing" with respect to whether
keyboard input should be enabled or not. It turns out that X11 does
something stupid by design. All modern toolkits work around this native
X11 behavior, but embedding breaks these workarounds.
The only way to handle this correctly is the XEmbed protocol. It needs
to be supported by the toolkit, and probably also some mpv support. But
Qt has inconsistent support for it. In Qt 4, an X11-specific embedding
widget was needed. Qt 5.0 doesn't support it at all. Qt 5.1 apparently
supports it via QWindow, but if it really does, I couldn't get it to
work.
So add a hack instead. The new --input-x11-keyboard option controls
whether mpv should enable keyboard input on the X11 window or not. In
the command line player, it's enabled by default, but in libmpv it's
disabled.
This hack has the same problem as all previous embedding had: move the
mouse outside of the window, and you don't get keyboard input anymore.
Likewise, mpv will steal all keyboard input from the parent application
as long as the mouse is inside of the mpv window.
Also see issue #1090.
We inserted these filters with fixed parameters, which was ok. But this
also didn't update the image parameters for the filters further down the
filter chain, or for the VO. For example, if rotation by 90° was requested
by the file, we would insert a filter and rotate the video, but the VO
would still receive image parameters requesting rotation by 90°.
This wasn't a problem, but it could become one.
Fix this by letting the filters automatically pick up the image params.
The image params are reset on application. (We could probably also
always try to apply and reset image params in a filter, instead of
having special "auto" parameters. This would probably work, and video.c
would insert a "rotate=0" filter. But I'm afraid this would be confusing
and the current solution is cosmetically slightly nicer.)
Unfortunately, the vf_stereo3d.c change turned out to be a big mess, but
once the "internal" filter is fully replaced with libavfilter, most of
this can be radically simplified.
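The pickup itself is conceptually simple; a sketch with simplified struct
names (not the actual vf code):

    struct img_params { int rotate; /* ...other parameters... */ };

    // An "auto" rotate filter consumes the requested rotation and resets it,
    // so filters further down the chain and the VO see it as already applied.
    static void pick_up_rotation(int *filter_angle, struct img_params *p)
    {
        *filter_angle = p->rotate;
        p->rotate = 0;
    }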
Some filters exist only to create a specific lavfi graph. Allow these
filters to reset the graph exactly on reconfig, and allow them to modify
some image parameters too. Also make vf_lw_update_graph() behave like
vf_lw_set_graph() - they had a subtle difference with filter==NULL.
Useful for the following commit.
This was once central, but now it's almost unused. Only vf_divtc still
uses it for extremely weird and incomprehensible reasons. The use in
stream.c is trivial. Replace these, and remove mpbswap.h.
MPlayer traditionally did this because it made sense: the most important
formats (avi, asf/wmv) used Microsoft formats, and many important
decoders (win32 binary codecs) also did. But the world has changed, and
I've always wanted to get rid of this thing from the codebase.
demux_mkv.c internally still uses it, because, guess what, Matroska has
a VfW muxing mode, which uses these data structures natively.
Until now, we always required the playback core to decode a new frame to
get more output from the filter. That seems to be completely
unnecessary, because filtered results may arrive before that.
Add a filter_out callback, and restructure the code such that it can
return any filtered frames, or block if it hasn't read at least one
frame.
In the worst case, it can still happen that bursts of input requests and
output requests occur. (This commit tries to reduce burst-like
behavior, but it's not entirely possible due to the nondeterministic
nature of VS threading.)
This is a similar change as with 95bb0bb6.
When pausing after a frame was just dropped, we're logically at the
dropped frame, and thus should redraw the dropped frame. This was
implemented, but didn't work after unpausing for the second time,
because of a minor logic bug.
For incomprehensible reasons, AV_PIX_FMT_GRAY8 (and some others) have a
palette. This literally makes no sense and this issue has bitten us
before, but it is how it is.
This also caused a crash with vo_direct3d: this mapped a texture as
IMGFMT_Y8 (i.e. AV_PIX_FMT_GRAY8), and when copying this, it tried to
copy the non-existent palette.
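The gotcha can be expressed with FFmpeg's pixdesc API; a sketch (at the
time, GRAY8 carried a "pseudo" palette):

    #include <stdbool.h>
    #include <libavutil/pixdesc.h>

    // A format can have a palette in data[1] even if it doesn't look like a
    // paletted format, so copy routines must check the descriptor flags.
    static bool format_has_palette(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
        return d && (d->flags & (AV_PIX_FMT_FLAG_PAL |
                                 AV_PIX_FMT_FLAG_PSEUDOPAL));
    }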
Fixes #1113.
vo_vdpau uses its own framedrop code, mostly for historic reasons. It
has some tricky heuristics, of which I'm not sure how they work, or if
they have any effect at all, but in any case, I want to keep this code
for now. One day it might get fully ported to the vo.c framedrop code,
or just removed.
But improve its interaction with the user-visible framedrop controls.
Make --framedrop actually enable and disable the vo_vdpau framedrop
code, and increment the number of dropped frames correctly.
The code path for other VOs should be equivalent. The vo_vdpau behavior
should, except for the improvements mentioned above, be mostly
equivalent as well. One minor change is that frames "shown" during
preemption are always counted as dropped.
Remove the statement from the manpage that vo_vdpau is the default; this
hasn't been the case for a while.
vc->vsync_interval and vsync_interval should be the same value, but
actually vc->vsync_interval was updated after vsync_interval was
initialized. This was probably not intended. Fix this by removing the
duplicate local variable. There were probably no bad effects.
Uses the new mechanism introduced in the previous commit.
Depending on the actual filter, this distributes CPU load more evenly
over time, although it probably doesn't matter.
Consider a filter which turns 1 frame into 2 frames (such as a
deinterlacer). Until now, we forced filters to produce all output frames
at once. This was done for simplicity.
Change the filter API such that a filter can produce frames
incrementally.
There's no reason to let the core wait until the frame is done
displaying. In practice, the core normally didn't need this additional
wakeup, and the VO was quick enough to fetch the new frame, before the
core even attempted to queue a new frame. But it wasn't entirely clean,
and the correct wakeup handling might matter in some cases.
Some window managers can prevent mapping of a window as a feature. i3
can put new windows on a certain workspace (with "assign"), so if mpv is
started on a different workspace, the window will never be mapped.
mpv currently waits until the window is mapped (blocking almost all of
the player), in order to avoid race conditions regarding the window
size. We don't want to remove this, but on the other hand we also don't
want to block the player forever in these situations.
So what we need is a way to know when the window manager is "done" with
processing the map request. Unfortunately, there doesn't seem to be a
standard way for this. So, instead we could do some arbitrary
communication with the WM, that may act as "barrier" after map request
and the "immediate" mapping of the window. If the window is not mapped
after this barrier, it means the window manager decided to delay the
mapping indefinitely. Use the _NET_REQUEST_FRAME_EXTENTS message as such
a barrier. WMs supporting this message must set the _NET_FRAME_EXTENTS
property on the mpv window, and we receive a PropertyNotify event. If
that happens, we always continue and cancel waiting for the MapNotify
event.
I don't know if this is sane or if there's a better mechanism. Also,
this works only for WMs which support this message, which are not many.
But at least it appears to work on i3. It may reintroduce flickering on
fullscreen with other WMs, though.
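The barrier boils down to a standard EWMH client message; a sketch:

    #include <X11/Xlib.h>

    // Ask the WM to set _NET_FRAME_EXTENTS on our window. A supporting WM
    // answers with a PropertyNotify, which serves as the barrier event.
    static void request_frame_extents(Display *dpy, Window win, Window root)
    {
        XEvent e = {0};
        e.xclient.type = ClientMessage;
        e.xclient.window = win;
        e.xclient.message_type =
            XInternAtom(dpy, "_NET_REQUEST_FRAME_EXTENTS", False);
        e.xclient.format = 32;
        XSendEvent(dpy, root, False,
                   SubstructureRedirectMask | SubstructureNotifyMask, &e);
    }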
Merges pull request #1094, with some minor changes. mpv expects IEEE,
and IEEE allows division by 0 for floats, so these shouldn't actually
be a problem, but do it anyway for the sake of clang.
Signed-off-by: wm4 <wm4@nowhere>
1. Separate buffer and temporary file handling from the vo, to make the
   code easier to maintain and read.
2. Skip resizing as much as possible if back buffer is still busy.
3. Detach and mark osd buffers for deletion if we want to redraw them and they
are still busy. This could be a possible case for the video buffers as
well. Maybe better than double buffering.
All the above steps made it possible to have resizing without any artifacts,
even for subtitles. This also fixes a dozen bugs only I knew about, like
broken subtitles for rgb565 buffers. I can now sleep at night again.
An attempt at fixing #1070. Apparently something goes wrong if the
video size is equal to the screen size. Since the window decorations
add to the window size, it must actually be larger than the screen.
Actually I don't know what exactly is going wrong, but since this
commit also slightly improves the behavior otherwise, it's a win
anyway.
Try to keep the window size strictly below screen size, even accounting
for window decorations. Size it down and center the window so that it
fits (by either touching the left/right or top/bottom screen borders).
I haven't found any information on what is the maximum allowed size and
position of a window so that it doesn't collide with the task bar, so
assume that we can use the entire screen, minus 1 pixel to avoid
triggering fullscreen semantics (if that is even possible).
reinit_window_state() will set VO_EVENT_RESIZE when it runs, so we
don't need to set it manually depending on the VOCTRL.
Probably avoids duplicated resize events. I don't expect this to actually
fix anything, but it might make spotting other bugs easier (if there
are any).
This was kept in the codebase because it is slightly faster than --vo=opengl
on really old Intel cards (from the GMA era). Time to kill it, and let it rest.
Fixes #1061.
The oldest supported FFmpeg release doesn't provide
av_vdpau_alloc_context(). With these versions, the application has no
other choice than to hard code the size of AVVDPAUContext. (On the other
hand, there's av_alloc_vdpaucontext(), which does the same thing, but is
FFmpeg specific - not sure if it was available early enough, so I'm not
touching it.)
Newer FFmpeg and Libav releases require you to call this function, for
ABI compatibility reasons. It's the typical lack of foresight that makes
FFmpeg APIs terrible. mpv successfully pretended that this crap didn't
exist (ABI compat. is near impossible to reach anyway) - but it appears
newer developments in Libav change the function from initializing the
struct with all-zeros to something else, and mpv vdpau decoding would
stop working as soon as this new work is released.
So, add a configure test (sigh).
CC: @mpv-player/stable
When embedding a X window, it's hard to control whether it receives
mouse/keyboard input or not. It seems the X protocol itself makes this
hard (basically due to the outdated design mismatching with modern
toolkits), and we have to take care of these things explicitly.
Simply do this by manually querying and using the parent window event
flags.
This restores some MPlayer behavior (it doesn't add back exactly the
same code, but it's very similar).
This probably has some potential to interfere with libmpv embedding, so
bump the client API minor.
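In Xlib terms, the workaround looks roughly like this (a sketch; the real
mask handling is more involved):

    #include <X11/Xlib.h>

    // Query the embedding parent's event mask and select a matching subset
    // on our own window, so we only take the input the parent expects us to.
    static void mirror_parent_events(Display *dpy, Window parent, Window self)
    {
        XWindowAttributes attrs;
        if (XGetWindowAttributes(dpy, parent, &attrs)) {
            long mask = attrs.your_event_mask &
                        (KeyPressMask | KeyReleaseMask | ButtonPressMask |
                         ButtonReleaseMask | PointerMotionMask);
            XSelectInput(dpy, self, mask);
        }
    }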
CC: @mpv-player/stable (if applied, client-api-changes.rst has to be
adjusted to include the 0.5.2 release)
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska mode to vf_stereo3d mode are
probably wrong, and some are missing.
Assuming that Matroska modes, vf_stereo3d input modes, and output modes
are all the same might be an oversimplification - we'll see.
See issue #1045.
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
Regression since commit f14722a4. For some reason, this worked on
nvidia, but rightfully failed on mesa.
At least in C, the ## operator indeed needs two macro arguments, and
you can't just concatenate with non-arguments.
This change will most likely fix it.
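For reference, the portable token-pasting idiom in C looks like this (a
generic illustration, not the shader code touched by this commit):

    // Both operands of ## should come in through macro parameters; the extra
    // indirection level forces the arguments to be expanded first.
    #define PASTE_(a, b) a ## b
    #define PASTE(a, b)  PASTE_(a, b)

    #define PREFIX my
    int PASTE(PREFIX, _var) = 1; // expands to: int my_var = 1;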
CC: @bjin
Add a new parameter 'p' to the gaussian filter. The new formula uses
a different base, taken from the fmtconv plugin, so that the
new parameter is exactly the same as the one used in Avisynth and
VapourSynth.
The new default value is 2 / log(2) * 10; with the default value, the
filter conforms to the original kernel taken from vector-agg.
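Assuming the fmtconv convention w(x) = 2^(-(p/10) * x^2), the numbers line
up; a sketch (not the exact filter_kernels.c code):

    #include <math.h>

    // Base-2 gaussian as in fmtconv/Avisynth/VapourSynth.
    static double gaussian_weight(double p, double x)
    {
        return pow(2.0, -(p / 10.0) * x * x);
    }

    // With the default p = 2 / log(2) * 10 (about 28.85), the weight reduces
    // to exp(-2 * x * x), which is the original vector-agg kernel.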
Add two new options to make it possible for the user to set the radius
for some of the filters with no fixed radius.
Also add three new filters with the new radius parameter supported.
If duration<0, it means the duration is unknown. Disable framedropping,
because end_time makes no sense in this case.
Also, strictly never drop the first frame.
This fixes weird behavior with the cover-art case (for the 100th time).
Add two new script environment variables 'video_in_dw' and
'video_in_dh', representing the display resolution of the video. Together
with the video resolution, the sample aspect ratio can be calculated in
scripts.
Currently it's impossible to change the sample aspect ratio with a single
VapourSynth filter, since the '_SARNum' and '_SARDen' frame properties
from the output clip will be ignored. A following 'dsize' filter is
necessary for this purpose.
So, talking to a certain Intel dev, it sounded like modern VA-API drivers
are reasonably thread-safe. But apparently that is not the case. Not at
all. So add approximate locking around all vaapi API calls.
The problem appeared once we moved decoding and display to different
threads. That means the "vaapi-copy" mode was unaffected, but decoding
with vo_vaapi or vo_opengl led to random crashes.
Untested on real Intel hardware. With the vdpau emulation, it seems to
work fine - but actually it worked fine even before this commit, because
vdpau was written and designed not by morons, but by competent people
(vdpau is guaranteed to be fully thread-safe).
There is some probability that this commit doesn't fix things entirely.
One problem is that locking might not be complete. For one, libavcodec
_also_ accesses vaapi, so we have to rely on our own guesses how and
when lavc uses vaapi (since we disable multithreading when doing hw
decoding, our guess should be relatively good, but it's still a lavc
implementation detail). One other reason that this commit might not
help is Intel's amazing potential to fuck up anything that is good and
holy.
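The locking amounts to wrapping every libva entry point; a sketch (the real
code would thread a per-context lock through instead of a global one):

    #include <pthread.h>
    #include <va/va.h>

    static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

    // Every vaapi call, from both the decoder and the display thread, goes
    // through a wrapper like this.
    static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID surface)
    {
        pthread_mutex_lock(&va_lock);
        VAStatus status = vaSyncSurface(dpy, surface);
        pthread_mutex_unlock(&va_lock);
        return status;
    }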
This could be used by VO implementations to report a recent vsync time
to the generic VO code, which in turn will use it and the display FPS
to estimate at which point in time the next vsync will happen.
This uses glXGetVideoSyncSGI() to check how many vsyncs happened since
the last flip_page() call. It allows checking a pattern of vsync
increments of at most 2 elements. For example, to check ~24 fps playback
on a ~60 Hz monitor, this can be used:
--vo=opengl:check-pattern=[3-2]:waitvsync
Whether the reported results are accurate or just plain wrong may depend
on the driver and if the waitvsync sub-option is used. There are no
guarantees.
This option is undocumented, and may be removed again in the near or
distant future.
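The check itself is simple; a sketch with made-up names (the vsync count
comes from glXGetVideoSyncSGI()):

    // For --vo=opengl:check-pattern=[3-2] on a ~60 Hz display showing ~24 fps
    // content, the vsync count should advance by 3, then 2, alternating.
    static int check_pattern(unsigned int prev, unsigned int now,
                             const int pattern[2], int *pos)
    {
        if ((int)(now - prev) != pattern[*pos])
            return 0; // mismatch: a frame was shown too early or too late
        *pos = (*pos + 1) % 2;
        return 1;
    }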
For debugging (drawing fun plots with TOOLS/stats-conv.py).
Also move last_flip under the correct comment: it's not protected by the
lock, and can be accessed by the VO thread only.
Playing with high framedrop rates could make it run out of surfaces. In
theory, we wouldn't need an additional surface if we could just clear
the vo_vaapi internal surface - but doing so would probably be a pain,
so I don't care.
Only reports the most recently entered output if the window is displayed on
2 or more outputs. This should be changed to the lowest fps of all outputs
the window is visible on, but until someone complains, this will have to
wait. See the VO framedropping code for more information on this topic.
If the Xrandr configuration changes, re-read it. So if you change
display modes or screen configuration, it will update the framedrop
refresh rate accordingly.
This passes the rootwin to XRRSelectInput(), which may or may not be
allowed. But it works, and the documentation (which is worse than used
toilet paper, great job Xorg) doesn't forbid it, or in fact say anything
about what the window parameter is even used for.
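Subscribing is nearly a one-liner; a sketch using the standard Xrandr calls:

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    static int rr_event_base, rr_error_base;

    static void watch_randr(Display *dpy, Window root)
    {
        if (XRRQueryExtension(dpy, &rr_event_base, &rr_error_base))
            XRRSelectInput(dpy, root, RRScreenChangeNotifyMask);
        // In the event loop: on (ev.type == rr_event_base +
        // RRScreenChangeNotify), re-read the Xrandr configuration.
    }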
Nvidia's vdpau implementation is pretty good, but other factors make it
much less attractive for use as default VO. For example, Mesa often has
low quality drivers (mess up things with the presentation queue and the
vdpau API time source). Intel ruins things completely, and we're likely
to run on emulation via OpenGL. Compositing has unknown effects (to me
anyway), but appears to reduce the vdpau advantages.
One important reason to prefer vo_vdpau was that it could do proper
framedropping. Framedropping got fixed for the other VOs, so this reason
is going away.
This works only on X11, and only if the refresh rate changes due to the
window being moved to another screen (detected by us). It doesn't
include system screen reconfiguration yet.
This calls VOCTRL_GET_DISPLAY_FPS on every frame, which makes me uneasy.
It means extra thread communication with the win32 and Cocoa backends.
On the other hand, a frame doesn't happen _that_ often, and the
communication should still be pretty cheap and fast, so it's probably
ok.
Also needs some extra fuzz for vo_vdpau.c, because that does everything
differently.
This is always included in the Xorg development headers. Strictly
speaking it's not necessarily available with other X implementations,
but these are hopefully all dead.
Drop use of the ancient XF86VM, and use the slightly less ancient Xrandr
extension to retrieve the refresh rate. Xrandr has the advantage that it
supports multiple monitors (at least the modern version of it).
For now, we don't attempt any dynamic reconfiguration. We don't request
and listen to Xrandr events, and we don't notify the VO code of changes
in the refresh rate. (The latter works by assuming that X coordinates map
directly to Xrandr coordinates, which is probably wrong with compositing
window managers, at least if these use complicated transformations. But I
know of no API to handle this.)
It would be nice to drop use of the Xinerama extension too, but
unfortunately, at least one EWMH feature uses Xinerama screen numbers,
and I don't know how that maps to Xrandr outputs.
Since the display FPS is currently detected on X11 only (and even there
it's known to be wrong on certain setups), it seems like a good idea to
make this user-configurable.
I'm not sure about the merit, though it does print nice numbers if debug
output is enabled.
Basically, this tries to achieve similar results as the glFinish()
business, but again it entirely depends on the drivers whether this
does anything meaningful, or whether it's actively harmful.
It seems that at least on nvidia systems with compositing disabled, we
can get it to block deterministically on the actual vsync event, which
should improve framedropping.
This mostly uses the same idea as with vo_vdpau.c, but much simplified.
On X11, it tries to get the display framerate with XF86VM, and limits
the frequency of new video frames against it. Note that this is an old
extension, and is confirmed not to work correctly with multi-monitor
setups. But we're using it because it was already around (it is also
used by vo_vdpau).
This attempts to predict the next vsync event by using the time of the
last frame and the display FPS. Even if that goes completely wrong,
the results are still relatively good.
On other systems, or if the X11 code doesn't return a display FPS, a
framerate of 1000 is assumed. This is infinite for all practical
purposes, and means that only frames which are definitely too late are
dropped. This probably has worse results, but is still useful.
"--framedrop=yes" is basically replaced with "--framedrop=decoder". The
old framedropping mode is kept around, and should perhaps be improved.
Dropping on the decoder level is still useful if decoding itself is too
slow.
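The prediction reduces to a few lines; a sketch (names made up):

    #include <stdbool.h>

    // Estimate the next vsync from the last flip time and the display FPS,
    // and drop frames that would already be stale by then. The 1000 fps
    // fallback makes the window effectively infinite.
    static bool frame_would_be_late(double last_flip, double display_fps,
                                    double frame_end_time)
    {
        double vsync = 1.0 / (display_fps > 0 ? display_fps : 1000.0);
        return last_flip + vsync >= frame_end_time;
    }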
Otherwise vdp_video_mixer_destroy() would later fail when called on an invalid
video mixer handle. With mesa r600 vdpau driver, this would cause a segfault.
Xlib is not thread-safe. Or actually it is, but it's an incomprehensible
hack that was added later, and which needs to be activated manually
(this makes no sense). And it appears that vdpau accesses X from the
decoder thread if GLX interop is used (and not in any other situations -
this doesn't make too much sense either).
So, just call the magic function that enables Xlib thread-safety.
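The magic function in question is presumably XInitThreads(); the only
subtlety is that it has to come first:

    #include <X11/Xlib.h>

    int main(int argc, char **argv)
    {
        // Must be the first Xlib call made by a multi-threaded program.
        XInitThreads();
        // ...regular initialization follows...
        return 0;
    }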
The previous commit broke these things, and fixing them is separate in
this commit in order to reduce the volume of changes.
Move the image queue from the VO to the playback core. The image queue
is a remnant of the old way how vdpau was implemented, and increasingly
became more and more an artifact. In the end, it did only one thing:
computing the duration of the current frame. This was done by taking the
PTS difference between the current and the future frame. We keep this,
but by moving it out of the VO, we don't have to special-case format
changes anymore. This simplifies the code a lot.
Since we need the queue to compute the duration only, a queue size
larger than 2 makes no sense, and we can hardcode that.
Also change how the last frame is handled. The last frame is a bit of a
problem, because video timing works by showing one frame after another,
which makes it a special case. Make the VO provide a function to notify
us when the frame is done, instead. The frame duration is used for that.
This is not perfect. For example, changing playback speed during the
last frame doesn't update the end time. Pausing will not stop the clock
that times the last frame. But I don't think this matters for such a
corner case.
The VO is run inside its own thread. It also does most of video timing.
The playloop hands the image data and a realtime timestamp to the VO,
and the VO does the rest.
In particular, this allows the playloop to do other things, instead of
blocking for video redraw. But if anything accesses the VO during video
timing, it will block.
This also fixes vo_sdl.c event handling; but that is only a side-effect,
since reimplementing the broken way would require more effort.
Also drop --softsleep. In theory, this option helps if the kernel's
sleeping mechanism is too inaccurate for video timing. In practice, I
haven't ever encountered a situation where it helps, and it just burns
CPU cycles. On the other hand it's probably actively harmful, because
it prevents the libavcodec decoder threads from doing real work.
Side note:
Originally, I intended that multiple frames can be queued to the VO. But
this is not done, due to problems with OSD and other certain features.
OSD in particular is simply designed in a way that it can be neither
timed nor copied, so you do have to render it into the video frame
before you can draw the next frame. (Subtitles have no such restriction.
sd_lavc was even updated to fix this.) It seems the right solution to
queuing multiple VO frames is rendering on VO-backed framebuffers, like
vo_vdpau.c does. This requires VO driver support, and is out of scope
of this commit.
As consequence, the VO has a queue size of 1. The existing video queue
is just needed to compute frame duration, and will be moved out in the
next commit.
Found with valgrind. This is somewhat terrifying, because the VA-API
function is supposed to fill these values, and we access them only if
the API functions return success. So this shouldn't have happened.
vo_sdl.c has broken event handling and just polls. The polling time was
quite low, so the playloop OSD redrawing heuristic inhibited redraws,
which made the window appear frozen when paused.
Completely useless, and could accidentally be enabled by cycling
framedrop modes. Just get rid of it.
But still allow triggering the old code with --vd-lavc-framedrop, in
case someone asks for it. If nobody does, this new option will be
removed eventually.
This can just happen in the time between VO creation, and the first call
to vo_reconfig. It seems the recent threading changes exposed this bug.
Fixes #986.
Sometimes GetClientRect() appeared to fail during init, and since we
don't check GetClientRect() calls (because they're on our own window,
and logically can never fail), bogus resizes were triggered. This could
cause vo_direct3d to fail initialization.
The reason was that w32->window was set to 0 during early window
initialization: CreateWindow*() can send messages to the new window,
even though it hasn't returned yet. This means w32->window is not yet
set to our window handle, and functions in WndProc may accidentally pass
hwnd=0 to win32 API functions.
Fix it by initializing w32->window at the first opportunity. This also
means we always strictly expect that the WndProc is used with our own
window only.
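A sketch of the fix (simplified; the context comes from a thread-local
variable, see the TLS commit further down in this log):

    #include <windows.h>

    struct vo_w32_state { HWND window; /* ... */ };
    static __thread struct vo_w32_state *w32_tls;

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        struct vo_w32_state *w32 = w32_tls;
        // CreateWindowEx() can send messages before it returns, so the
        // handle may not be stored yet; grab it at the first opportunity.
        if (w32 && !w32->window)
            w32->window = hwnd;
        return DefWindowProcW(hwnd, msg, wp, lp);
    }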
This fixes the fullscreen issues on Intel for me. I'm baffled by it and
don't understand why this suddenly works. Intel drivers being shit?
Windows being shit? HWND or HDC being only 97% thread-safe instead of
98%? Me missing something subtle that is not documented anywhere?
Who knows.
Now instead of creating the HDC and OpenGL context on the renderer
thread, they're created on the GUI thread. The renderer thread will
then only call wglMakeCurrent, SwapBuffers, and OpenGL standard
functions.
Probably fixes github issue #968.
It seems the vdpau API does not support these.
Do a semi-expensive emulation of it. On the other hand, it's not like
this is a commonly-used feature. (It might be better to make --vf=flip
always copy instead of flipping it via pointer tricks - but everything
allows flipped images, and even decoders or libavfilter filters could
output them.)
If the compositor sends a configure event immediately after a window is
created (for example, if it implements tiling window management), mpv
will attempt to call wl_egl_window_resize before it has actually created
the egl_window, causing a crash.
Use OPT_KEYVALUELIST() for all places where AVOptions are directly set
from mpv command line options. This allows escaping values, better
diagnostics (also no more "pal"), and somehow reduces code size.
Remove the old crappy option parser (av_opts.c).
If OpenGL 3.x doesn't work, the fallback to legacy GL will call the
function to create the DC again, and it will assert. Just make it not do
that, and also simplify the code a bit.
Fixes #974.
Since the new hwaccel API is now merged in FFmpeg's stable release, we can
finally remove support for the old API.
I pretty much kept lu_zero's new code unchanged and just added some error
printing (that we had with the old glue code) to make the life of our users
less miserable.
With software decoding, images were uploaded to vdpau surfaces as they
were queued to the VO. This makes it slightly more complicated
(especially later on), and has no advantages - so stop doing it.
The only reason why this was done explicitly was due to attempts to keep
the code equivalent (instead of risking performance regressions). The
original code did this naturally for certain reasons, but now that we
can measure that it has no advantages and just requires extra code, we
can just drop it.
Usually mp_image_params_guess_csp takes care of finding *some* default
for a video channel, but files with no video (or with extremely broken
configurations) end up leaving the colorspace information as
MP_CSP_PRIM_AUTO, which has no associated primaries.
As a result of this, color managed OSD messages could not display
because they were being color managed to match the (non-existing/absurd)
video channel. With this change, such non-spaces will have BT.709
primaries as far as color management and the OSD is concerned.
This fixes #961.
CC: @mpv-player/stable
Signed-off-by: wm4 <wm4@nowhere>
Follow up on commit 760548da. Mouse handling is a bit confusing, because
there are at least 3 coordinate systems associated with it, and it
should be cleaned up. But that is hard, so just apply a hack which gets
the currently-annoying issue (VO backends needing access to the VO) out
of the way.
The windows message loop now runs in a separate thread. Rendering,
such as with Direct3D or OpenGL, still happens in the main thread.
In particular, this should prevent the video from freezing if the
window is dragged. (The reason was that the message dispatcher won't
return while the dragging is active, so mpv couldn't update the
video at all.)
This is pretty "rough" and just hacked in, and there's no API yet to
make this easier for other backends. It will be cleaned up later
once we're sure that it works, or when we know how exactly it should
work. One oddity is that OpenGL is actually completely managed in the
renderer thread, while e.g. Cocoa (which has its own threading code)
creates the context in the GUI thread, and then lets the renderer
thread access it.
One strange issue is that we now have to stop WM_CLOSE from actually
closing the window. Instead, we wait until the playloop handles the
close command, and requests the VO to shutdown. This is done mainly
because closing the window apparently destroys it, and then WM_USER
can't be handled anymore - which means the playloop has no way to
wakeup the GUI thread. It seems you can't really win here... maybe
there's a better way to have a thread receive messages with and
without a window, but I didn't find one yet.
Dragging the window (by clicking into the middle of it) behaves
strangely in wine, but didn't before the change. Reason unknown.
VO backends which are or will run in their own thread have a problem
with vo_mouse_movement() calling vo_control(). Restrict this to VOs
which actually need this.
win32 does not provide a proper per-window context pointer. Although it
does allow passing a user-chosen value to WM_CREATE/WM_NCCREATE, this
is not enough - the first message doesn't even have to be WM_NCCREATE.
This gets us in trouble later on, so go the easy route and just use a
TLS variable.
__thread is gcc specific, but Windows is a very "special" platform
anyway. We support only MinGW and Cygwin on it, so who cares. (C11
standardizes __thread as _Thread_local; we can use that later.)
This shouldn't change anything. But it's worth making this explicit,
since it's very subtle and unintuitive: if the X parameter is the
magic value CW_USEDEFAULT, then the Y parameter is used as nCmdShow
parameter to ShowWindow(). And in our case, this is SW_HIDE (0),
because we want to create a hidden window.
This looked a bit overcomplicated. We don't care about the window
position (it should always be 0/0, unless the parent program moved it,
which it shouldn't). We don't care about the global on-screen position.
Also, we will just retrieve a WM_SIZE message if our window is resized,
and we don't need to update it manually.
The only thing we have to do is making sure our window fills the parent
window completely.
CS_OWNDC will make GetDC() always return the same HDC. This might
become a problem when OpenGL rendering and window management are
on different threads. Although I'm not too sure about this; our
code never requests a HDC outside of the OpenGL backend, and it
depends on whether win32 will internally request DCs. But in any
case, this seems to be cleaner.
Move the GL pixelformat setup code to gl_w32.c, where it's actually
needed. This also fixes that SetPixelFormat() should be called only
once, and not every time video params change.
These mostly describe self-explanatory things, and fail to explain
actually tricky things. Which means you just waste your time reading
this, and have to figure it out from the code anyway.
Preparation for moving win32 windowing to a separate thread.
The codesize is reduced a bit, because some small functions are
inlined, which reduces noise.
The main change is that now most functions use the private struct
directly, instead of accessing it indirectly through vo->w32.
Accesses to vo are minimized.
The final goal is adding some sort of new windowing backend API. It
would be cleaner to use that as context pointer for all functions
(like struct vo was previously used), but since this is work in
progress, we just go with this commit.
The final goal is all mp_msg calls produce complete lines. We want this
because otherwise, race conditions could corrupt the terminal output,
and it's inconvenient for the client API too. This commit works towards
this goal. There's still code that has this not fixed yet, though.
There is no standard mechanism for detecting endianness. Doing it at
compile time in a portable way is probably hard. Doing it properly
with a configure check is probably hard too. Using the endian
definitions in <sys/types.h> (usually includes <endian.h>, which is
not available everywhere) works under circumstances, but the previous
commit broke it on OSX.
Ideally all code should be endian-independent, but that is not possible
due to the dependencies (such as FFmpeg, some video output APIs, some
audio output APIs).
Create a header osdep/endian.h, which contains various fallbacks.
Note that the last fallback uses libavutil; however, it's not clear
whether AV_HAVE_BIGENDIAN is a public symbol, or whether including
<libavutil/bswap.h> really makes it visible. And in fact we don't want
to pollute the namespace with libavutil definitions either. Thus it's
only the last fallback.
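A simplified sketch of such a header (the macro name is made up; the real
header provides BYTE_ORDER-style definitions):

    #if defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__)
        // Compiler-predefined macros (gcc/clang).
        #define MP_BIG_ENDIAN (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
    #else
        #include <sys/types.h>  // usually drags in <endian.h>
        #if defined(BYTE_ORDER) && defined(BIG_ENDIAN)
            #define MP_BIG_ENDIAN (BYTE_ORDER == BIG_ENDIAN)
        #else
            // Last fallback: libavutil (see the caveats above).
            #include <libavutil/bswap.h>
            #define MP_BIG_ENDIAN AV_HAVE_BIGENDIAN
        #endif
    #endif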
This approach is similar to what other vo_opengl backends do. It can also be
used in the future to create another cocoa backend that renders offscreen
with IOSurfaces or FBOs.
When seeking, we violently destroy the filter, because vapoursynth has
no proper API for terminating a video with unknown frame count. This
looks like an error to vapoursynth, and the error is returned via the
frame callbacks. The bug is that we remember this error state across
reinitialization, so on the first filter call after reinitialization, we
thought filtering the current frame failed. This caused a shift by 1
frame on each seek.
CC: @mpv-player/stable
DVD and Bluray (and to some extent cdda) require awful hacks all over
the codebase to make them work. The main reason is that they act like
containers, but are entirely implemented on the stream layer. The raw
mpeg data resulting from these streams must be "extended" with the
container-like metadata transported via STREAM_CTRLs. The result were
hacks all over demux.c and some higher-level parts.
Add a "disc" pseudo-demuxer, and move all these hacks and special-cases
to it.
Cast away the "extra" bits (since apparently Window/XID is always
32 bit unsigned). This is not strictly needed, because you're not
supposed to pass garbage to --wid, just because the upper bits are
possibly not interpreted. But if you do so, this change increases
consistency in behavior and removes a strange behavior that was
thought to be a bug.
Also see github issue #906.
Apparently clearing on every map can cause problems with vdpau when
switching virtual desktops and such. This was observed with at least
XMonad and nvidia-340.17. It's not observed on some other setups without
XMonad.
It's not clear why this happens. Normally, the window background is not
saved, so clearing should have no additional effect. It's a complete
mystery. Possibly, the use of legacy X drawing commands (used to clear
the window) interferes with vdpau operation in non-trivial ways.
Work around this by clearing on the initial map only. This probably only
hides the underlying issue, but it's good enough.
Closes #897.
CC: @mpv-player/stable
Something like "char *s = ...; isdigit(s[0]);" triggers undefined
behavior, because char can be signed, and thus s[0] can be a negative
value. The is*() functions require unsigned char _or_ EOF. EOF is a
special value outside of unsigned char range, thus the argument to the
is*() functions can't be a char.
This undefined behavior can actually trigger crashes if the
implementation of these functions e.g. uses lookup tables, which are
then indexed with out-of-range values.
Replace all <ctype.h> uses with our own custom mp_is*() functions added
with misc/ctype.h. As a bonus, these functions are locale-independent.
(Although currently, we _require_ C locale for other reasons.)
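The replacements are trivially safe because they compare char values
directly and never index a table; a sketch of what misc/ctype.h provides
(exact contents assumed):

    #include <stdbool.h>

    // Locale-independent, and well-defined for any char value, signed or not.
    static inline bool mp_isdigit(char c) { return c >= '0' && c <= '9'; }
    static inline bool mp_isspace(char c)
    {
        return c == ' ' || c == '\t' || c == '\n' ||
               c == '\r' || c == '\f' || c == '\v';
    }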