- Change connector selection to accept human-readable names (such as
  eDP-1, HDMI-A-2) rather than arbitrary numbers.
- Change GPU selection to accept GPU number rather than device paths.
- Merge connector and GPU selection into one --drm-connector.
- Add support for --drm-connector=help.
- Add support for --drm-* in EGL backend.
- Refactor KMS; reduce state sharing across drm_common.
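A plausible invocation after these changes (the exact syntax of the
optional GPU prefix is an assumption; --drm-connector=help prints the
real names on a given system):

    mpv --drm-connector=help                         # list connectors
    mpv --vo=drm --drm-connector=HDMI-A-2 file.mkv   # connector by name
    mpv --vo=drm --drm-connector=1.eDP-1 file.mkv    # connector on GPU 1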
The glFlush() call was recently made optional, since it's not needed
in most cases. On OSX, though, it is needed: we removed
kCGLPFADoubleBuffer from the context creation, so the glFlush() call
is added back for the cocoa backend only.
The CGLFlushDrawable() call can be safely removed, since it only does
something when a double-buffered context is used. Also fixes a small
typo.
Fixes #3627.
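A minimal sketch of the resulting swap path (the function name is
illustrative, not the actual cocoa backend code):

    #include <OpenGL/gl.h>

    // With kCGLPFADoubleBuffer gone, the context is single-buffered, so
    // there is nothing to swap; rendering must be pushed out with an
    // explicit flush. CGLFlushDrawable() would be a no-op here, which is
    // why it can be dropped.
    static void cocoa_swap_buffers(void)
    {
        glFlush();
    }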
In "dumb mode" (where most features are disabled and which only performs
some basic rendering) we explicitly copy a set of whitelisted options,
and leave all the other options at their default values. Add the new
--opengl-early-flush option to this whitelist. Also remove an option
field accidentally added in the commit adding --opengl-early-flush.
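The shape of the whitelist, purely as illustration (these field names
are assumptions, not the real gl_video_opts layout):

    // Dumb mode rebuilds the option struct from scratch; only fields
    // copied here survive, everything else reverts to its default.
    p->opts = (struct gl_video_opts){
        .gamma       = p->opts.gamma,
        .background  = p->opts.background,
        .early_flush = p->opts.early_flush,  // newly whitelisted
    };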
It seems this can cause issues on certain platforms, so it's better to
disable it by default. The original reason for it isn't particularly
well justified, and display-sync mode should get rid of the need for it
anyway.
The new option is meant for testing, and will probably be removed if
nobody comes forward to report that enabling it actually improves
anything.
Reduces code duplication between the OpenGL backend and the DRM VO.
(The OpenGL backend's control() isn't sufficiently similar to the VO's
control() to consider merging it as a whole - I extracted only the FPS
code.)
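The shared part boils down to deriving the refresh rate from the
current DRM mode, roughly like this (the helper name is an assumption):

    #include <xf86drmMode.h>

    // Vertical refresh in Hz, from the mode's pixel clock (in kHz) and
    // its total horizontal/vertical timings.
    static double mode_get_Hz(const drmModeModeInfo *mode)
    {
        return mode->clock * 1000.0 / (mode->htotal * mode->vtotal);
    }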
When the vaapi decoder is used in copy mode, it creates a dummy
display to render to. In theory, this should support hardware
decoding on a separate GPU that is not actually connected to any
output (like an iGPU which supports more formats than the external
GPU to which the monitor is connected).
However, before this change, only X11 displays were supported as
dummy displays. This caused some graphics drivers (namely
intel-driver) to core dump when they were not actually used as an
X11 module.
This change introduces support for DRM libav displays, which allows
vaapi-copy to run on such cards even when they are not rendering the
X11 output.
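Roughly, the dummy display can now be created straight from a DRM
render node (error handling omitted; the device path is just an
example):

    #include <fcntl.h>
    #include <va/va.h>
    #include <va/va_drm.h>

    // Derive a VADisplay from a render node; no X11 connection (and
    // thus no X11 driver module) is involved at any point.
    static VADisplay open_drm_vaapi_display(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR);
        VADisplay display = vaGetDisplayDRM(fd);
        int major, minor;
        vaInitialize(display, &major, &minor);
        return display;
    }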
Other than being overly convoluted, this seems to make sense to me.
Except that to get the "rot" transform I have to set flip=true, which
makes no sense at all to me.
Combining rotation and cropping didn't work. It was just completely
broken.
I'm still not sure if this is correct. Chroma positioning seems to be
broken on rotation. There might also be a problem with non-mod-2 frame
sizes. Still, strictly an improvement for both rotated and non-rotated
rendering modes.
Also, this could probably be written in a more elegant way.
Commit aa1047a3 originally added this as:
+ // this is from the DarkPlaces engine, reduces to 3x3. Original code
+ // released under GPL2 or any later version.
According to Rudolf Polzer, the original author (a certain LH) was
actually asked whether it would be ok to put this code under the LGPL,
and the author gave his agreement. The code is not from id Software
either (on which large parts of DarkPlaces are based), which is the
main reason why DarkPlaces is under the GPL.
So this note is just confusing; the code always has been LGPL. Fix it.
The video code can deal fine with feeding software image formats to
hwdec interop drivers. In RPI's case, this is preferable for
performance and for working around OpenGL bugs (see RPI firmware issue
#666), and because OpenGL rendering doesn't bring many advantages
anyway, since the RPI supports GLES 2.0 only.
Maybe a way to force the normal video path is needed later. But
currently, this can be tested by just not loading the hwdec interop
driver.
If you run command-line mpv and set --hwdec to something that does
not load the RPI interop layer, you'll even have to use --hwdec-preload
manually to get it enabled.
This was intended to put the GL layer above the standard console. (But
actually that was done already, and the oddness I'm seeing seems to be
an unrelated bug.)
This should make display-names usable on Windows. It returns a list of
GDI monitor names like "\\.\DISPLAY1". Since it may be useful to get
the monitor that Windows considers associated with the window (via
MonitorFromWindow), that monitor is always returned as the first entry.
This monitor is the one used for display-fps and icc-profile-auto.
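The first entry comes from something like the following sketch (the
helper name is made up; the Win32 calls are real):

    #include <wchar.h>
    #include <windows.h>

    // GDI device name of the monitor Windows associates with the
    // window, e.g. \\.\DISPLAY1.
    static void get_assoc_monitor_name(HWND hwnd, wchar_t *buf, size_t n)
    {
        HMONITOR mon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
        MONITORINFOEXW mi = { .cbSize = sizeof(mi) };
        GetMonitorInfoW(mon, (MONITORINFO *)&mi);
        wcsncpy(buf, mi.szDevice, n - 1);
        buf[n - 1] = L'\0';
    }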
We always want to use __declspec(selectany) to declare GUIDs, but
manually including <initguid.h> in every file that used GUIDs was
error-prone. Since all <initguid.h> does is define INITGUID and include
<guiddef.h>, we can remove all references to <initguid.h> and just
compile with -DINITGUID to get the same effect.
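The mechanism, in brief (GUID_Example and its values are made up for
illustration):

    // Built with -DINITGUID, so <guiddef.h> makes DEFINE_GUID emit an
    // actual definition using __declspec(selectany):
    //     EXTERN_C const GUID DECLSPEC_SELECTANY GUID_Example = {...};
    // Without INITGUID, it would emit only an extern declaration.
    #include <guiddef.h>

    DEFINE_GUID(GUID_Example, 0x12345678, 0x1234, 0x5678,
                0x9a, 0xbc, 0xde, 0xf0, 0x12, 0x34, 0x56, 0x78);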
Also, this partially reverts 622bcb0 by re-adding libuuid.a to the
build, since apparently some GUIDs (such as GUID_NULL) are not declared
in the source file, even when INITGUID is set.
Obviously, in the vast majority of cases, there's only one device
in the system, but doing this means we're more likely to get a
usable device in the multi-device case.
CUDA would support decoding on one device and displaying on another,
but the peer memory handling is not transparent, and I have no way to
test it, so I can't really write it.
The documentation around this stuff is poor, but I found an NVIDIA
sample that demonstrates how to use the interop API most efficiently.
(https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st)
Key lessons are:
1) You can register the texture itself and have CUDA write to it,
thereby skipping an additional copy through the PBO.
2) You don't have to keep the resource mapped while you do the copy -
once you get a mapped pointer, it remains valid. Magic!
This lets us throw out the PBOs as well as much of the explicit
alignment and stride handling.
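Condensed into one function (the CUDA driver API calls are real; the
function, its parameters, and the missing error handling are
simplifications - in real code, the register/map steps happen once at
init rather than per frame):

    #include <GL/gl.h>
    #include <cuda.h>
    #include <cudaGL.h>

    static void cuda_upload_to_texture(GLuint texture, CUdeviceptr frame,
                                       size_t pitch, size_t width_bytes,
                                       size_t height)
    {
        // Lesson 1: register the texture itself, so the copy lands in
        // it directly - no staging through a PBO.
        CUgraphicsResource res;
        cuGraphicsGLRegisterImage(&res, texture, GL_TEXTURE_2D,
                                  CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD);

        // Lesson 2: map once to fetch the backing array; it stays valid
        // even after unmapping.
        CUarray array;
        cuGraphicsMapResources(1, &res, 0);
        cuGraphicsSubResourceGetMappedArray(&array, res, 0, 0);
        cuGraphicsUnmapResources(1, &res, 0);

        // Copy decoded data straight into the texture's backing array.
        CUDA_MEMCPY2D cpy = {
            .srcMemoryType = CU_MEMORYTYPE_DEVICE,
            .srcDevice     = frame,
            .srcPitch      = pitch,
            .dstMemoryType = CU_MEMORYTYPE_ARRAY,
            .dstArray      = array,
            .WidthInBytes  = width_bytes,
            .Height        = height,
        };
        cuMemcpy2D(&cpy);
    }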
CPU usage is slightly (~3%) lower for 4K content in one test case,
so it makes a detectable difference, and presumably saves memory.
On x11, you can change the fullscreen state via the window manager,
without mpv's involvement. In these cases, the internal fullscreen flag
has to be updated.
The hack used for this didn't really work properly. Change it
accordingly. The important thing is that the shadow copy of the option
is updated. This is still not really ideal.
Fixes #3570.
The documentation claims that --video-unscaled will still perform
anamorphic adjustments, and it rightfully should. The current reality
is that it does not, because the video-unscaled size was based on the
wrong set of variables (encoded width/height instead of nominal display
width/height).
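In essence (all names here are hypothetical):

    // The nominal display size is the encoded size with the pixel
    // aspect ratio (the anamorphic adjustment) applied; --video-unscaled
    // must use this, not the raw encoded size.
    int unscaled_w = encoded_w * par_num / par_den;
    int unscaled_h = encoded_h;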
When we rotate the image by 90° or 270°, chroma width and height need
to be swapped.
Fixes #3568.
But is the chroma subsampling location correct? Who the hell knows...
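The gist of the fix (MPSWAP is mpv's real swap macro; the surrounding
names are approximate):

    // For 90°/270° rotation the image is transposed, so the chroma
    // dimensions have to follow.
    if (rotate % 180 == 90)
        MPSWAP(int, chroma_w, chroma_h);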
This is the actual decoder output, with no overrides applied. (Maybe
video-params shouldn't contain the overrides in the first place, but
the damage is done.)
This really shouldn't be in vd_lavc.c - move it to dec_video.c, where
it also applies aspect overrides. This puts all overrides in one place.
The previous commit contains some changes required for resetting the
image-parameter change detection (i.e. it's no longer done only on
video aspect override changes).
Use the new mechanism, instead of wrapped properties. As usual, extend
the update handling to some options that were forgotten/neglected
before. Rename video_reset_aspect() to video_reset_params() to make it
more "general" (amazingly, we can even include write access to
video-aspect in this).
Extend the flag-based notification mechanism that was used via
M_OPT_TERM. Make the vo_opengl update mechanism use this (which, btw.,
also fixes compilation with OpenGL renderers forcibly disabled).
While this adds a 3rd mechanism and just seems to further the chaos,
I'd rather have a very simple mechanism now than further the mess by
mixing old and new update mechanisms. In particular, we'll be able to
remove quite some property implementations, and replace them with much
simpler update handling. The new update mechanism can also be
refactored more easily once we have a final mechanism that handles
everything in a uniform way.
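A purely illustrative, self-contained model of the flag idea (none of
these names are mpv's actual identifiers):

    #include <stdio.h>

    // Each option carries a bit saying which subsystem to notify when
    // it is written; writes accumulate bits, and one dispatch pass then
    // handles all dirty subsystems at once.
    enum { UPDATE_TERM = 1 << 0, UPDATE_RENDERER = 1 << 1 };

    struct opt { const char *name; int update_flags; };

    static const struct opt opts[] = {
        {"msg-level", UPDATE_TERM},
        {"scale",     UPDATE_RENDERER},  // a vo_opengl option
    };

    int main(void)
    {
        int dirty = opts[1].update_flags;   // pretend "scale" was set
        if (dirty & UPDATE_TERM)
            puts("reconfigure terminal");
        if (dirty & UPDATE_RENDERER)
            puts("reconfigure vo_opengl");  // replaces a wrapped property
        return 0;
    }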
There were multiple values under M_OPT_EXIT (M_OPT_EXIT-n for n>=0).
Somehow, M_OPT_EXIT-n either meant error code n (with n==0 meaning no
error?), or the number of option values consumed (0 or 1). The latter
is MPlayer legacy, which left it to the option type parsers to
determine whether an option took a value or not. All of this was
changed in mpv, by requiring the user to use explicit syntax
("--opt=val" instead of "-opt val").
In any case, the n value wasn't even used (anymore), so rip this all
out. Now M_OPT_EXIT-1 doesn't mean anything, and could be used by a new
error code.
Negative height is used to signal a flipped framebuffer. There's
absolutely no reason to pass this down to overlay_adjust(); doing so
only requires implementers to deal with an additional special case.
Instead of using input_ctx for waiting, use the dispatch queue
directly. One big change is that the dispatch queue will just process
commands that come in (e.g. from the client API) without returning.
This should reduce unnecessary playloop executions (which is good,
since the playloop got a bit fat from rechecking a lot of conditions
every iteration).
Since this doesn't force a new playloop iteration on every access, this
has to be enforced manually in some cases.
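The waiting part of the playloop then reduces to roughly this
(mp_dispatch_queue_process is the real dispatch API; the surrounding
names are approximate):

    // Runs queued work (e.g. client API requests), then sleeps until
    // the timeout expires or an explicit wakeup arrives - input_ctx is
    // no longer involved.
    mp_dispatch_queue_process(mpctx->dispatch, timeout);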
Normal input (via terminal or VO window) still wakes up the playloop
every time, but that's not too important. It does make testing this
harder, though. If there are missing wakeup calls, it will be noticed
only when using the client API in some form.
At this point we could probably use a normal lock instead of the
dispatch queue stuff.
Currently, calling mp_input_wakeup() will wake up the core thread
(also called the playloop). This seems odd, but the core indeed calls
mp_input_wait() when it has nothing more to do. It's done this way
because MPlayer used input_ctx as its central "mainloop".
This is probably going to change. Remove direct calls to this
function, and replace them with mp_wakeup_core() calls. ao and vo are
changed to use opaque callbacks and not to use input_ctx for this
purpose. Other code already uses opaque callbacks, or has legitimate
reasons to use input_ctx directly (such as sending actual user input).
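A sketch of the opaque-callback pattern (names approximate):

    // The core hands the VO a wakeup callback plus context at creation,
    // so the VO never needs to know about input_ctx.
    struct vo_extra {
        void (*wakeup_cb)(void *ctx);  // the core passes mp_wakeup_core
        void *wakeup_ctx;              // the core's MPContext
    };

    static void vo_wakeup_core(struct vo_extra *ex)
    {
        ex->wakeup_cb(ex->wakeup_ctx); // instead of mp_input_wakeup()
    }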