Certain low-end Mali GPUs have rather low precision and overflow
during the PRNG calculations, thereby breaking e.g. deband-grain.
Modify permute() to avoid this; it does not noticeably impact the
quality of the PRNG output.
This problem was observed on:
GL_VENDOR='ARM', GL_RENDERER='Mali-T720'
GL_VERSION='OpenGL ES 3.1 v1.r15p0-00rel0.bdd9e62cdc8c88e0610a16b5901161e9'
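For context, a quick back-of-the-envelope check of the overflow (a standalone
C sketch; it assumes the shader PRNG uses the common permute form
mod((34*x + 1)*x, 289), which is not spelled out above):

    #include <stdio.h>

    int main(void)
    {
        /* Largest residue mod 289 that permute() can see. */
        float x = 288.0f;
        float intermediate = (34.0f * x + 1.0f) * x;
        /* Prints ~2820384, far above the fp16 maximum of 65504, so a
         * mediump/half-float GPU overflows before the final mod is taken;
         * the permute has to be reformulated to keep intermediates in range. */
        printf("intermediate = %.0f (fp16 max is 65504)\n", intermediate);
        return 0;
    }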
Upstream has this now. Didn't really make any difference for me (except
making the polar compute shader 2%-3% faster), but maybe it does for
somebody else.
When using multiple compute shaders as part of the same pass, there can
be a conflict in the block sizes. In the problematic case, the HDR
detection shader can collide with the polar sampling shader. In this
case, the solution is clear - the passes that can handle any size should
"give in" and not overwrite the block sizes.
Fixes #6083.
instead of force unwrapping and chaining the optional vars in our
containsMouseLocation function, safely unwrap and guard the resulting
var.
Fixes #6062
Add another parameter to mpv_opengl_drm_params to hold the FD to the
render node, so that the fd can be passed to hwdec_vaegl.
The render node is opened in context_drm_egl and inferred from the
primary device fd using drmGetRenderDeviceNameFromFd.
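A minimal sketch of the derivation step (error handling trimmed; the helper
name is illustrative):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <xf86drm.h>

    /* Derive and open the render node belonging to the primary DRM fd. */
    static int open_render_fd(int primary_fd)
    {
        char *path = drmGetRenderDeviceNameFromFd(primary_fd); /* e.g. /dev/dri/renderD128 */
        if (!path)
            return -1;
        int render_fd = open(path, O_RDWR | O_CLOEXEC);
        free(path);
        return render_fd; /* handed to hwdec_vaegl via mpv_opengl_drm_params */
    }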
The previous code did not save enough information about the old state,
and could end up changing which plane the fbcon's FB got attached to,
or in the worst case causing a blank screen (observed in some multi-screen
setups on Sandy Bridge).
In addition, refactor the handling of drmModeModeInfo property blobs to
not leak, as well as to enable reuse of already created blobs.
init is a reserved keyword and Swift 4.2 got a bit stricter about using
it. this could be fixed by adding backticks around init, but that makes the
code uglier. hence i just renamed init to initialized and, for
consistency, uninit to uninitialized.
Fixes #5899
the pre-allocation was needed because the layer allocated an opengl
context asynchronously itself and we couldn't influence that. so we had
to start the core after the context was actually allocated. furthermore
a window, view and layer hierarchy had to be created so the layer would
create a context.
now, instead of relying on the layer to create a context, we do this
manually and re-use that context later when the layer wants to create
one asynchronously itself.
This sacrifices some dynamic range for well-behaved sources, but
prevents catastrophic desaturation on badly mastered / too bright
sources. I think that's the better trade-off. This makes the
desaturation algorithm much "safer" to deploy by default, as well. One
could even argue going up to strength 1.0, which works better for some
sources but worse for others. But I think the current strength is the
best trade-off even after this change.
For some reason, the X default modifier map binds shift+tab to
ISO_Left_Tab instead of the regular Tab. So to get Shift+TAB recognized
by mpv, we also need to accept ISO_Left_Tab.
This patch matches what other programs (e.g. Qt) do: they treat Tab
and ISO_Left_Tab as the same thing.
God only knows why the distinction exists, and why X decides to mix up
its bindings like that.
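A minimal sketch of the keysym normalization in C (illustrative helper, not
mpv's actual code):

    #include <X11/Xlib.h>
    #include <X11/keysym.h>

    /* Fold ISO_Left_Tab into Tab before translating the keysym into an mpv
     * key code; Shift is still reported separately as a modifier. */
    static KeySym normalize_tab_keysym(KeySym sym)
    {
        return sym == XK_ISO_Left_Tab ? XK_Tab : sym;
    }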
Fixes #5849
If anyone happened to build with GL disabled, this could lead to option
changes not always refreshing the screen. Since vo_gpu is always enabled
now (just not necessarily any backend for it), we can drop the #if
completely.
(The way this works is a bit idiotic - the option cache exists only to
grab the change notification, which will trigger a redraw and make
vo_gpu update its own second copy of them. But at least it avoids some
layering issues for now.)
This was always a legacy thing. Remove it by applying an orgy of
mp_get_config_group() calls, and sometimes m_config_cache_alloc() or
mp_read_option_raw().
win32 changes untested.
With the advent of actual HDR devices, my real measured ICC profile has
an "infinite" contrast, since the display is completely off on pure
black inputs. 100k:1 might not be enough, so let's just bump it up to
1m:1 to be safe.
Also, improve the logging in the case that the detected contrast is too
high by default.
First fix a memory leak when skipping cursor planes by inverting the
check and putting everything but the free in the body.
Then fix a missed drmModeFreePlane by simply copying the fields of the
drmModePlane we are interested in and freeing the drmModePlane struct
early.
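The pattern described above, as a hedged C sketch (field selection and names
are illustrative):

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    struct plane_info {
        uint32_t plane_id;
        uint32_t possible_crtcs;
    };

    /* Copy the fields we need out of the drmModePlane and free the libdrm
     * allocation immediately, so neither the cursor-plane skip nor any later
     * early return can leak it. */
    static int get_plane_info(int fd, uint32_t id, struct plane_info *out)
    {
        drmModePlane *plane = drmModeGetPlane(fd, id);
        if (!plane)
            return -1;
        out->plane_id       = plane->plane_id;
        out->possible_crtcs = plane->possible_crtcs;
        drmModeFreePlane(plane);
        return 0;
    }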
Until recently, ao_lavc and vo_lavc started encoding whenever the core
happened to send them data. Since audio and video are not initialized at
the same time, and the muxer was not necessarily opened when the first
encoder started to produce data, the resulting packets were put into a
queue. As soon as the muxer was opened, the queue was flushed.
Change this to make the core wait with sending data until all encoders
are initialized. This has the advantage that we don't need to queue up
the packets.
The user won't want to have those in the video (I think). The core can
sporadically issue redraws, which is what you want for actual playback,
but not in encode mode. vo_lavc can explicitly detect those and skip
them. It only requires switching to a more advanced internal VO API.
The comments in vo.h are because vo_lavc draws to one of the images in
order to render OSD. This is OK, but might come as a surprise to whoever
calls draw_frame, so document it. (Current callers are OK with it.)
Inspired by kmscube, first try to pick the Encoder and CRTC already
associated with the selected Connector, if any. Otherwise try to find
the first matching encoder & CRTC like before.
The previous behavior had problems when using atomic
modesetting (crtc_setup_atomic) when we picked an Encoder & CRTC that
was currently being used by the fbcon together with another Encoder.
drmModeSetCrtc was able to "steal" the CRTC in this case, but using
atomic modesetting we do not seem to get this behavior automatically.
This should also improve behavior somewhat when run on a multi-screen
setup with regards to deinit and VT switching (though sometimes you still
end up with a blank screen where you previously had a cloned display of
your fbcon).
Add some properties which were forgotten in crtc_setup_atomic.
In both, change to not use the DRM_MODE_PAGE_FLIP_EVENT | DRM_MODE_ATOMIC_NONBLOCK
flags. This should make it more similar to drmModeSetCrtc, which it aims to
replace (takes effect directly, and is a blocking call). This also saves us the
trouble of having to set up a poll to wait for the pageflip, which would have been
necessary with DRM_MODE_PAGE_FLIP_EVENT, in both crtc_setup_atomic and
crtc_release_atomic.
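In code, the commit path then looks roughly like this (a sketch, not the
actual mpv function):

    #include <xf86drmMode.h>

    /* Blocking atomic modeset, analogous to drmModeSetCrtc: no
     * DRM_MODE_PAGE_FLIP_EVENT and no DRM_MODE_ATOMIC_NONBLOCK, so the call
     * takes effect directly and no poll() for a pageflip event is needed. */
    static int commit_modeset(int fd, drmModeAtomicReq *req)
    {
        return drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    }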
This patch makes sure that the video plane is hidden when unused.
When using high resolution modes, typically UHD, and embedding mpv,
having the video plane sitting in the back when no video is playing
eats a lot of memory bandwidth for compositing.
This patch makes sure that the video plane is simply disabled before and
after playback.
This commit adds atomic modesetting when using the atomic renderer.
This is actually needed when using an osd with a smaller size than the screen
resolution. It will also make the drm atomic path more consistent.
We currently use the primary / overlay plane drm objects, assuming that the
primary plane is the osd and the overlay plane is the video.
This commit does two things:
- replace the primary / overlay plane members with osd and video plane
  members, without that assumption
- add two options to determine which of the primary / overlay planes is
  associated with the osd / video; if unspecified, osd defaults to the
  overlay plane and video to the primary plane
This patch adds
- DRM connector object to atomic context.
- fd property to the drm atomic object as well as a method to read blob type properties.
This ensures that the proper connector is picked up, especially when specifying it
on the command line, and also makes sure we're using the right one when embedding
with interop into an application.
The new render API allows passing several native resources. Use that
mechanism for drm resources rather than the deprecated opengl-cb structs.
This patch therefore adds two structs that can be used with the drm atomic interop:
- mpv_opengl_drm_params: holds all the drm handles
- mpv_opengl_drm_osd_size: holds the osd layer size
This commit adds a drm-osd-size=WxH parameter to the command line, which
allows defining the OSD plane dimensions. The OSD can then be upscaled to
the screen resolution when having the OSD at video resolution is too heavy.
This is especially useful for UHD modes on embedded devices where
the GPU cannot handle UHD modes at a decent framerate.
Define a hard-coded value for gl_NumWorkGroups if it is not available.
This adds an additional requirement of needing a shader recompile for
all window size changes.
This was considered a worthwhile compromise, as currently e.g. d3d11
completely lacked any peak computation - this is a major quality of
life upgrade.
This is for working around bugs in certain Android devices. At least one
device fails to sort EGLConfigs by size, so eglChooseConfig() ends up
choosing a config with 5/6/5 bits per r/g/b component. The other
attributes in the affected EGLConfigs did not look like they should
affect the sorting process as specified by the EGL 1.4 standard.
The device was reported as:
Sony Xperia Z3 Tablet Compact
Firmware 6.0.1 build number 23.5.A.1.291
GL_VERSION='OpenGL ES 3.0 V@140.0 AU@ (GIT@I741a3d36ca)'
GL_VENDOR='Qualcomm'
GL_RENDERER='Adreno (TM) 330'
Other Qualcomm/Adreno devices have been reported as unaffected by this
(including some with the same GL_RENDERER string).
"Fix" this by always requiring at least 8 bit. This means it would fail
on devices which cannot provide this. We're fine with this.
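The attribute-list side of the workaround looks roughly like this (a sketch;
the surface/renderable bits are assumptions, the 8-bit sizes are the point):

    #include <EGL/egl.h>
    #include <stddef.h>

    /* Require at least 8 bits per color component, so a driver that fails to
     * sort configs by size can no longer hand us a 5/6/5 EGLConfig. */
    static EGLConfig choose_rgb8_config(EGLDisplay dpy)
    {
        static const EGLint attribs[] = {
            EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_RED_SIZE,   8,
            EGL_GREEN_SIZE, 8,
            EGL_BLUE_SIZE,  8,
            EGL_NONE,
        };
        EGLConfig config = NULL;
        EGLint num = 0;
        if (!eglChooseConfig(dpy, attribs, &config, 1, &num) || num < 1)
            return NULL; /* device can't provide 8-bit components; we accept failing */
        return config;
    }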
mpv-android/mpv-android#112
This was supposed to be a replacement for encode_lavc_discontinuity()
(so we don't need to store last_video_in_pts in a way which requires
synchronization). Unfortunately, VOCTRL_RESET is also called before
termination, and even though it shouldn't matter as far as the VO API is
concerned, it does. It's because vo_lavc.c buffers a frame to compute
the frame duration.
Drop this code. The consequence is that it appears to encode 2 frames
with the same PTS if multiple files are encoded into one. Before this,
it merely dropped a frame (maybe the first of every subsequent file, not
sure).
The main change is that we wait with opening the muxer ("writing
headers") until we have data from all streams. This fixes race
conditions at init due to broken assumptions in the old code.
This also changes a lot of other stuff. I found and fixed a few API
violations (often things for which better mechanisms were invented, and
the old ones are not valid anymore). I try to get away from the public
mutex and shared fields in encode_lavc_context. For now it's still
needed for some timestamp-related fields, but most are gone. It also
removes some bad code duplication between audio and video paths.
1. I want to get away from mp_image_params (maybe).
2. For encoding mode, it's convenient to get the nominal_fps, which is
a mp_image field, and not in mp_image_params.
Also rename stereo3d to stereo_in. The only real change is that the
vo_gpu OSD code now uses the actual stereo 3D mode, instead of the
--video-stereo-mode value. (Why does this vo_gpu code even exist?)
Attempts to enable the following things:
- let a render API user do "proper" audio-sync video timing itself
- make it possible to not re-render repeated frames if the API user has
better mechanisms available (e.g. waiting for a DisplayLink cycle
instead)
- allow the user to delay or skip redraws if it makes sense
Basically this information will be needed by API users who want to be
"clever" about optimizing timing and rendering.
In MPV_RENDER_PARAM_ADVANCED_CONTROL mode, a simple update callback does
not necessarily make the API user redraw. So handle it differently.
For one, setting vo->want_redraw already uses the "normal" redraw path,
which will call draw_frame() and set next_frame.
Then there are redraws triggered by mpv_render_context_set_parameter(),
which are on the render thread, and would require a separate mechanism.
I decided this is not really a good idea, since it's not even clear that
setting an arbitrary parameter should redraw. Also this could trigger an
unbounded number of redraws. The user can trigger redraws manually if
really needed, depending on the parameter that's being set. If we really
wanted vo_libmpv to do this, we could add a new flag like need_redraw,
which would be 4 lines of code or so.
update() used to require the lock, but now it doesn't matter. It's
slightly better to do it outside of the lock now, in case the update
callback reschedules before returning, and the user render thread tries
to acquire the still held lock (which would require 2 more context
switches).
DR (letting the decoder allocate texture memory) requires running the
allocation on the render thread. This is rather hard with the render
API, because the user controls this thread and when it's entered. It was
not possible until now.
This commit adds a bunch of infrastructure to make this possible. We add
a new optional mode (MPV_RENDER_PARAM_ADVANCED_CONTROL) which basically
lets the user's render thread and libmpv agree how this should be done.
Misuse would lead to deadlocks. To make this less likely, strictly
document thread safety/locking issues. In particular, document which
libmpv functions can be called without issues. (The rest has to be
assumed unsafe.)
The worst issue is destruction of the render context while video is
still active. To avoid certain unintended recursive locks (i.e.
deadlocks, unless we'd make the locks recursive), make the update
callback lock separate. Make "killing" the video chain asynchronous, so
we can do extra work while video is being destroyed.
Because losing wakeups is a big deal, setting the update callback now
triggers a wakeup. (It would have been better if the wakeup callback
were a parameter to mpv_render_context_create(), but too late.)
This commit does not add DR yet; the following commit does this.
I suppose this doesn't matter in practice, i.e. even if calls relayed
over the dispatch queue will cause WndProc to be invoked, WndProc will
never run for a longer time.
Preparation for removing recursion support from the dispatch queue code.
Normally, MPV_RENDER_PARAM* arguments are copied, unless documented
otherwise. Of course we can't copy X11 Display or Wayland wl_display
types, but for arguments that are "summarized" in a struct (like
MPV_RENDER_PARAM_OPENGL_FBO), a copy is expected.
Also add some unused infrastructure to make this explicit, and to make
it easier to add parameter types that require a copy.
Untested.
The CUDA dynamic loader was broken out of ffmpeg into its own repo
and package. This gives us an opportunity to re-use it in mpv and
remove our custom loader logic.
Hardware decoding things often need access to additional handles from
the windowing system, such as the X11 or Wayland display when using
vaapi. The opengl-cb had nothing dedicated for this, and used the weird
GL_MP_MPGetNativeDisplay GL extension (which was mpv specific and not
officially registered with OpenGL).
This was awkward, and a pain due to having to emulate GL context
behavior (like needing a TLS variable to store context for the pseudo GL
extension function). In addition (and not inherently due to this), we
could pass only one resource from mpv builtin context backends to
hwdecs. It was also all GL specific.
Replace this with a newer mechanism. It works for all RA backends, not
just GL. The API user can explicitly pass the objects at init time via
mpv_render_context_create(). Multiple resources are naturally possible.
The API uses MPV_RENDER_PARAM_* defines, but internally we use strings.
This is done for 2 reasons: 1. trying to leave libmpv and internal
mechanisms decoupled, 2. not having to add public API for some of the
internal resource types (especially D3D/GL interop stuff).
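From the API user's point of view, passing such resources looks roughly like
this (a sketch using the public render API with an X11 display; the helper
itself is illustrative):

    #include <X11/Xlib.h>
    #include <mpv/client.h>
    #include <mpv/render_gl.h>

    /* Native resources are passed once, at context creation, instead of being
     * fished out via the old GL_MP_MPGetNativeDisplay pseudo-extension. */
    static mpv_render_context *create_render_ctx(mpv_handle *mpv, Display *x11,
                                                 mpv_opengl_init_params *gl_init)
    {
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, gl_init},
            {MPV_RENDER_PARAM_X11_DISPLAY, x11},   /* e.g. consumed by vaapi hwdec */
            {0}
        };
        mpv_render_context *ctx = NULL;
        return mpv_render_context_create(&ctx, mpv, params) < 0 ? NULL : ctx;
    }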
To remain sane, drop support for obscure half-working opengl-cb things,
like the DRM interop (was missing necessary things), the RPI window
thing (nobody used it), and obscure D3D interop things (not needed with
ANGLE, others were undocumented). In order not to break ABI and the C
API, we don't remove the associated structs from opengl_cb.h.
The parts which are still needed (in particular DRM interop) need to be
ported to the render API.
we rendered on the displaylink thread which wasn't the best idea. if
rendering took too long or was blocking it also blocked the displaylink
callback. when that happened new vsyncs were reported delayed or not at
all. consequently the mpv_render_context_report_swap function wasn't
called consistently and that could cause bad video playback. so the
rendering is moved to a dedicated dispatch queue. furthermore the update
callback starts a layer update directly instead of the displaylink
callback, making the rendering a bit more consistent.
Right now the atomic request is alive during the renderloop.
We want it to be alive until the drm egl context is destroyed, because some
properties might still be set upon interop close.
This patch keeps the request alive outside the renderloop as well.
The context uninit will commit the last request.
commit 2edf00f changed the MPV_EVENT_SHUTDOWN behaviour slightly, such
that it will only be sent once. cocoa-cb relied on it being sent
continuously till all mpv_handles are destroyed. now it manually shuts
down and destroys the mpv_handle after the animation instead of relying
on this removed behaviour.
This passed the display size as source size to the renderer, which is of
course nonsense. I don't know what I was doing in 569383bc54.
Yet another fix for those damn anamorphic videos.
As a somewhat redundant/cosmetic change, use image_params instead of
real_image_params in the code above. They should have the same dimensions
(but possibly different formats when doing hw decoding), and mixing them
is confusing. p->image_params wins because it's shorter.
Actually fixes #5619.
There is some sort-of awkwardness here, because option access needs to
happen in a synchronized manner, and the framedrop flag is not in the VO
option struct. Remove the mp_read_option_raw() call and the awkward
change notification via VO_EVENT_WIN_STATE from command.c, and pass it
through as a new vo_frame flag.
Removes the awkward notification through VO_EVENT_WIN_STATE.
Unfortunately, some awkwardness remains in mp_property_display_fps(),
because the property has conflicting semantics with the option.
We took the storage size instead of the display size for "unscaled"
screenshots. Even if it's called "unscaled", it's still supposed to
scale to compensate for aspect ratio.
(How many commits fixing anamorphic screenshots in various situations
are there?)
Fixes #5619.
the first mouse events, that try to hide the title bar, could happen
before the title bar was actually initialised. that caused our hiding
code to access a nil value. check for an available title bar before
trying to hide it.
there were actually a few small problems. the fatalError() function
wasn't supposed to be called there and caused an "Illegal instruction".
this was replaced by a print and exit() call. the second problem was
that cocoa returns a kCGLBadPixelFormat instead of a kCGLBadAttribute
error, which broke our check, immediately exited our loop and no working
pixel format was ever created. the third problem was that macOS 10.12
didn't return any errors but also didn't return a pixel format, that
also broke our check. now the code checks for both cases.
Fixes #5631
mouse events and the tracking area are needed for (un)hiding the new
title bar, which was broken when input-cursor=no was set. no tracking
area was ever created and set which completely deactivated any mouse
events. the specific mouse event functions were already deactivated
proactively and have the needed check. no events are being propagated to
the mpv core when input-cursor=no is set, even with an active tracking
area.
The s_size() function, whatever it was supposed to do, caused the
surface size to increase indefinitely. Fix by making it always use the
maximum size that was last used, which is less optimal (many surface
recreations when making the window slowly larger), but at least it
works.
The rotation code didn't mark the old surface as invalid when it was
freed, so it could destroy random other surfaces (let's call it dangling
ID).
Also, the required rotation surface size depends on the rotation mode,
so recreate the surfaces on rotation as well.
We use triple buffering for this interop and we were only unreffing the
data structures, which doesn't destroy the drm buffers.
This patch makes sure that we release the drm buffers at the end of
playback.
we activated the rendering loop a bit too early and it was possible that
the first draw function was called before it was actually ready. this
was a remnant from the old init routine and should have been changed.
start the queue on reconfigure instead of preinit.
on a file change and when the aspect ratio of the window changed, the
first live resize state had a wrong aspect ratio because the new aspect
ratio was only set after the first resize. just set the new content
frame before the resize.
i tried being smart and handle aspect ratio differences manually via
atomic drawing and resizing to aspect fitted frames. there were a few
issues with that, like unexpected visibility of certain System GUI
elements on entering fullscreen or visually dropped frames due to the
atomic drawing. now we rely on system mechanics to keep the proper
aspect ratio of our layer, the recommended way. as a side effect it also
fixes a segfault.
Fixes #5581
It turns out that Mali drivers are likely broken, and do not return
GBM_FORMAT_ARGB8888 (they return GBM_FORMAT_XRGB8888) when getting
EGL_NATIVE_VISUAL_ID for any EGLConfig, even though the resulting
EGLConfig appears to be capable of alpha.
It could also be potentially useful to allow an ARGB EGLConfig used
with an XRGB framebuffer on some platforms, so we do that. (cf. weston)
Unrelated indentation fix in gbm_format_to_string.
the NSWindowButton enum was moved to be a member of NSWindow and renamed
to ButtonType in SDK 10.13. apparently that wasn't documented anywhere,
not even in the SDK changes document, and the official documentation
makes it look like it was always like this. the old NSWindowButton enum
though is still around on SDK 10.13, or at least got a typealias, so we
will just use that.
The purpose of the new API is to make it useable with other APIs than
OpenGL, especially D3D11 and vulkan. In theory it's now possible to
support other vo_gpu backends, as well as backends that don't use the
vo_gpu code at all.
This also aims to get rid of the dumb mpv_get_sub_api() function. The
life cycle of the new mpv_render_context is a bit different from
mpv_opengl_cb_context, and you explicitly create/destroy the new
context, instead of calling init/uninit on an object returned by
mpv_get_sub_api().
In order to make the render API generic, it's annoyingly EGL style, and
requires you to pass in API-specific objects to generic functions. This
is to avoid explicit objects like the internal ra API has, because that
sounds more complicated and annoying for an API that's supposed to never
change.
The opengl_cb API will continue to exist for a bit longer, but
internally there are already a few tradeoffs, like reduced
thread-safety.
Mostly untested. Seems to work fine with mpc-qt.
when resizing async it's possible that the layer, and the underlying gl
surface, is stretched on an aspect ratio change. to prevent that we do
an atomic resize (resize and draw at the same time). usually at most one
unique frame should be dropped, but it's possible, depending on the
performance, that more are dropped.
the title bar is now within the window bounds instead of outside. same
as QuickTime Player. it supports several standard styles, two dark and
two light ones. additionally we have properly rounded corners now and
the borderless window also has the proper window shadow.
Also make the earliest supported macOS version 10.10.
Fixes #4789, #3944
By blocking the VT switcher signal in the VO thread we get less races
and other oddities.
This gets rid of tearing (at least for me) when VT switching with
--gpu-context=drm.
crtc_setup gets called on VT reacquire as well as during normal setup. When
called during VT reacquire p->front_buf might not be 0, so the maths was wrong,
and could cause array OOB errors. Use mathematically correct (for negative
numbers) modulo to always pick the farthest away buffer (should work
even for larger values of BUF_COUNT).
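The modulo in question, as a small C sketch (the farthest-buffer expression is
an example, not the exact line in the source):

    /* Floor modulo: the result is always in [0, n), even for negative a,
     * so buffer indexing stays in bounds for any BUF_COUNT. */
    static int mod_floor(int a, int n)
    {
        return (a % n + n) % n;
    }

    /* e.g. picking the buffer farthest from the current front buffer: */
    /*   int back = mod_floor(front_buf - 1, BUF_COUNT); */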
The VT switcher was being set up, but it was being neither polled nor
interrupted.
Insert wait_events and wakeup functions based on those from vo_drm,
and add return early in drm_egl_swap_buffers if p->active isn't set.
This should get basic VT switching working, however there will likely
still be some random glitches. Switching between mpv and X11/weston is
unlikely to work satisfactorily until we can solve the problems with
drmSetMaster and drmDropMaster.
This introduces the option --drm-format (currently used only by
context_drm_egl; the vo_drm implementation is pending) which allows you to
pick between an xrgb8888 or an xrgb2101010 visual for --gpu-context=drm.
Requires a recent mesa (18.0.0_rc4 or later) to work.
This also fixes a bug when using --gpu-context=drm on a 30bpp-enabled
mesa (allow_rgb10_configs set to true). Previously it would've set up
an XRGB8888 format at the DRM/GBM level, while a 30bpp EGLConfig would
be picked, resulting in a garbled image.
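The option-to-format mapping is essentially this (a hedged sketch; the option
parsing itself is omitted):

    #include <gbm.h>
    #include <stdint.h>
    #include <string.h>

    /* Pick the GBM surface format that matches the requested --drm-format, so
     * the DRM/GBM side agrees with the EGLConfig that ends up being chosen. */
    static uint32_t gbm_format_from_option(const char *drm_format)
    {
        if (drm_format && !strcmp(drm_format, "xrgb2101010"))
            return GBM_FORMAT_XRGB2101010;   /* 30 bpp, needs mesa >= 18.0.0_rc4 */
        return GBM_FORMAT_XRGB8888;          /* default */
    }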
even though the fullscreen animation has a shorter duration than the
system wide animation (space sliding effect) there are still cases where
it takes longer, e.g. performance issues (especially on init). furthermore
the final size of the animation is usually different from the actual
fullscreen size because of aspect ratio differences. the actual resize to
fullscreen is done automatically by cocoa itself when the actual
transition to fullscreen happens (system event). so it could happen that
the last animation resize happened after the actual resize to fullscreen,
leading to a wrongly sized frame after entering fullscreen. to prevent
this we cancel the animation when entering fullscreen, we always set the
proper frame size when in fullscreen and discard any other frame sizes,
and to prevent some performance problems on init we push entering
fullscreen to the end of the main queue to execute it when most of the
init routines are done.
Fixes #5525
on live resize, e.g. async resize, the layer's bounds size is not in sync
with the actual surface size. this led to a wrongly sized frame and a
perceived flicker. get and use the actual surface size instead.
Mobius isn't well-defined for sig_peak <= 1.0. We can solve this by just
soft-clamping sig_peak to 1.0. Although, in this case, we can just skip
tone mapping altogether since the limit of mobius as sig_peak -> 1.0 is
just a linear function.
Based on testing with real-world non-HDR BT.2020 clips, clipping the
color space looks better than attempting to gamut map using a tone
mapping shader that's (by now) optimized for HDR content.
If anything, we'd have to develop a separate gamut mapping shader that
works in LCh space.
When pixels are non-square, the appropriate value of vo->monitor_par is
necessary to determine the destination rectangle, which in turn tells
how to scale the video along the x and y axis. Before this commit, the
drm driver only used --monitorpixelaspect. For example, to play a video
with the right aspect on a 4:3 screen with 640x400 pixels,
--monitorpixelaspect=5:6 had to be given.
With this commit, vo->monitor_par is determined from the size of the
screen in pixels and the --monitoraspect parameter. The latter is
usually easier to determine than --monitorpixelaspect, since it is
simply the proportion between the width and the height of the screen,
in most cases 16:9 or 4:3. If --monitoraspect is not given,
--monitorpixelaspect is used if given, otherwise pixel aspect is
assumed 1:1.
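The computation reduces to something like the following sketch; plugging in
the example above (4:3 monitor, 640x400 mode) gives (4/3) * 400 / 640 = 5/6,
matching the old workaround:

    /* Derive the monitor pixel aspect ratio from the screen dimensions in
     * pixels and the options, in the priority order described above. */
    static double compute_monitor_par(double monitoraspect,      /* 0 if unset */
                                      double monitorpixelaspect, /* 0 if unset */
                                      int screen_w, int screen_h)
    {
        if (monitoraspect > 0)
            return monitoraspect * screen_h / screen_w;
        if (monitorpixelaspect > 0)
            return monitorpixelaspect;
        return 1.0; /* assume square pixels */
    }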
This solves a number of problems simultaneously:
1. When outputting HLG, this allows tuning the OOTF based on the display
characteristics.
2. When outputting PQ or other HDR curves, this allows soft-limiting the
output brightness using the tone mapping algorithm.
3. When outputting SDR, this allows HDR-in-SDR style output, by
controlling the output brightness directly.
Closes #5521
The HLG OOTF is defined as a one-parameter family of OOTFs depending on
the display's peak luminance. With the preceding change to OOTF scale
and handling, we no longer have any issues with outputting values in
whatever signal range we need.
So as a result, it's easy for us to support a tunable OOTF which may
(drastically) alter the display brightness. In fact, this is also the
only correct way to do it, because the HLG appearance depends strongly
on the OOTF configuration. For the OOTF, we consult the mastering
display's tagging (via src.sig_peak). For the inverse OOTF, we consult
the output display's target peak.
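For reference, my reading of the BT.2100 HLG OOTF that this implements (a
sketch of the math, not mpv's shader code; alpha/user gain omitted):

    #include <math.h>

    /* One-parameter HLG OOTF family: the target display's nominal peak
     * luminance L_W (in cd/m^2) only enters through the system gamma. */
    static double hlg_ootf(double y_scene, double e_scene, double peak_nits)
    {
        double gamma = 1.2 + 0.42 * log10(peak_nits / 1000.0);
        return pow(y_scene, gamma - 1.0) * e_scene;
    }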
The primary need for this change is the fact that the OOTF was
incorrectly scaled, due to the fact that the application of the OOTF can
itself change the required normalization peak. (Plus, an oversight in
pass_inverse_ootf meant we forgot to normalize at the end of it)
The linearize/delinearize functions still normalize the scale since it's
used in a number of places throughout gpu/video.c, but the color
management function now converts to absolute scale right away, instead
of in an awkward way inside the tone mapping branch. The OOTF functions
now work in absolute scale only.
In addition, minor changes have been made to the way normalization is
handled for tone mapping - we now divide out the dst_peak *after* peak
detection, in order to make the scale of the peak detection buffer
consistent even if the dst_peak were to (hypothetically) change
mid-stream. In theory, we could also do this for desaturation, but doing
the desaturation before tone mapping has the advantage of preserving
much more brightness than the other way around - and even mid-stream
changes are not that drastic here.
Finally, some preparation work has been done for allowing the user to
customize the `dst.sig_peak` in the future.
drawing off-screen failed because we didn't have a valid context. the
problem is we force off-screen drawing because the CAOpenGLLayer refuses
to draw anything while being off-screen. set the current context before
starting to draw anything off-screen.
Fixes #5530
Coverity complained about the redundant init of hratio etc. - just
remove that and merge declaration/init of these variables. Also the
first double cast in each expression is unnecessary.
the CVDisplayLinkSetOutputHandler function introduced with 10.11 is
broken on the very same version of the OS, which caused our render loop
never to start. fall back to the old display link callback on 10.11.
for reference the radar: http://www.openradar.me/26640780
Fixes #5527
There is now a better way. Reading the front buffer was always a
hack. The new code via VOCTRL_SCREENSHOT renders it into a FBO, which
does not come with the disadvantages of reading the front buffer (like
not being supported by GLES, possibly black regions due to overlapping
windows on some systems).
For now keep VOCTRL_SCREENSHOT_WIN on the VO level, because there are
still some lesser VOs and backends that use it.
This should be helpful for the new OSX Cocoa backend, which uses
opengl-cb internally. Since it comes with a behavior change that could
possibly interfere with libmpv/opengl_cb users, we mark it as an explicit
API change.
This allows the new GPU screenshot functionality introduced in
9f595f3a80 to work with the D3D11 backend. It replaces the old window
screenshot functionality, which was shared between D3D11 and ANGLE. The
old code can be removed, since it's not needed by ANGLE anymore either.
Similar spirit to edb4970ca8. check_gl_features() has a confusing
early-return. This also adds compute_hdr_peak to the list of options
that is copied to the dumb-mode options struct, since it seems to make a
difference. Otherwise it would be impossible to disable HDR peak
detection in dumb mode.
this is meant to replace the old and not properly working vo_gpu/opengl
cocoa backend in the future. the problems are various shortcomings of
Apple's opengl implementation and buggy behaviour in certain
circumstances that couldn't be properly worked around. there are also
certain regressions on newer macOS versions from 10.11 onwards.
- awful opengl performance with a non-layer-backed context
- huge amount of dropped frames with an early context flush
- flickering of system elements like the dock or volume indicator
- double buffering not properly working with a non-layer-backed context
- bad performance in fullscreen because of system optimisations
all the problems were caused by using a normal opengl context, which
seems somewhat abandoned by apple, and are fixed by using a layer backed
opengl context instead. problems that couldn't be fixed could be
properly worked around.
this has all features our old backend has sans the wid embedding,
the possibility to disable the automatic GPU switching and taking
screenshots of the window content. the first was deemed unnecessary by
me for now, since i just use the libmpv API that others can use anyway.
second is technically not possible atm because we have to pre-allocate
our opengl context at a time the config isn't read yet, so we can't get
the needed property. third one is a bit tricky because of deadlocking
and it needed to be in sync, hopefully i can work around that in the
future.
this also has at least one additional feature, or eye-candy: a properly
working fullscreen animation with the native fs. also, since this is a
direct port of the parts of the old backend that could be used, though
with adaptations and improvements, this looks a lot cleaner and easier to
understand.
some credit goes to @pigoz for the initial swift build support which
i could improve upon.
Fixes: #5478, #5393, #5152, #5151, #4615, #4476, #3978, #3746, #3739,
#2392, #2217
early flushing only caused problems on macOS, which includes:
- performance problems and huge amount of dropped frames
- problems with playing back video files with fps close to the display
refresh rate
- rendering at twice the rate of the video fps
- not properly detected display refresh rate
we always deactivate any early flush for macOS to fix these problems.
This commit allows for video to be shown with the right aspect even when
pixels are not square in the selected drm mode. For example, if drm mode
5 is "640x400", the right aspect on a 4:3 monitor is obtained by mpv
--vo=drm --drm-mode=5 --monitorpixelaspect=5:6 ...
Other vo's seem to make this parameter change the size of the window,
but in the drm vo this is fixed, being as large as the screen.
The last image is stored in vo->priv->last_input to be used when
redrawing a frame is necessary (control: VOCTRL_REDRAW_FRAME). At the
beginning it is NULL, so a redraw request has no effect since
draw_image ignores calls with image=NULL.
When using --force-window the size of the image may change without the
vo structure being re-created. Before this commit, the size of
vo->priv->last_input could become inconsistent with the cropping
rectangle vo->priv->src_rc, which could trigger an assert in
mp_image_crop_rc(). Even if it did not, the last image of a video
remained on the screen when the next file in the playlist had no video
(e.g., it was an mp3 without an embedded cover).
This commit deallocates and resets to NULL the image
vo->priv->last_input when reconfiguring video.
Before this commit, the drm vo drew the osd over the scaled image, and
then copied the result onto the framebuffer, shifted. This made the
frame centered, but forced the osd to be only as large as the image.
This was inconsistent with other vo's, covered the image with the
progress indicator even when a black band was at the top of the screen,
made the progress indicator wrap on narrow videos, etc.
The change is to always use an image as large as the screen. The frame
is copied scaled and shifted to it, and the osd drawn over it. The
result is finally copied to the framebuffer without any shift, since it
is already as large as the framebuffer.
Technically, cur_frame is an image as large as the screen and
cur_frame_cropped is a dummy reference to it, cropped to the size of
the scaled video. This way, copying the scaled image to
cur_frame_cropped positions the image in the right place in cur_frame,
which can then have the osd added to it and copied to the framebuffer.
Using the GL renderer for color conversion will make sure screenshots
will use the same conversion as normal video rendering. It can do this
for all types of screenshots.
The logic when to write 16 bit PNGs changes. To approximate the old
behavior, we decide by looking whether the source video format has more
than 8 bits per component. We apply this logic even for window
screenshots. Also, 16 bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB64. RGB48 is a 3-component format and usually not supported by GPUs for
rendering, so we have to use RGBA64, which forces an alpha channel.
Will break for users who use --target-trc and similar options.
I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.
This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.
Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
The re-ordering of commits e3d93fd and 0870859 ended up swallowing the
change which made the HDR tone mapping algorithm actually check for
RA_CAP_NUM_GROUPS support.
The major changes are as follows:
1. Use `uint32_t` instead of `unsigned int` for the SSBO size
calculation. This doesn't really matter, since a too-big buffer will
still work just fine, but since `uint` is a 32-bit integer by
definition this is the correct way to do it.
2. Pre-divide the frame_sum by the num_wg immediately at the end of a
frame. This change was made to prevent overflow. At 4K screen size,
this code is currently already very much at risk of overflow, especially
once I started playing with longer averaging sizes. Pre-dividing this
out makes it just about fit into 32-bit even for worst-case PQ
content. (It's technically also faster and easier this way, so I
should have done it to begin with). Rename `frame_sum` to `frame_avg`
to clearly signal the change in semantics.
3. Implement a scene transition detection algorithm. This basically
compares the current frame's average brightness against the
(averaged) value of the past frames. If it exceeds a threshold, which
I experimentally configured, we reset the peak detection SSBO's state
immediately - so that it just contains the current frame. This
prevents annoying "eye adaptation"-like effects on scene transitions.
4. As a result of the previous change, we can now use a much larger
buffer size by default, which results in a more stable and less
flickery result. I experimented with values between 20 and 256 and
settled on the new value of 64. (I also switched to a power-of-2
array size, because I like powers of two)
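As an illustration of points 2-4, a CPU-side model of the bookkeeping (the
real logic lives in the compute shader and its SSBO; the scene-change
threshold here is a placeholder, not the tuned value):

    #include <stdint.h>

    #define PEAK_BUF_SIZE 64   /* the new default averaging window */

    struct peak_state {
        uint32_t frame_avg[PEAK_BUF_SIZE]; /* per-frame averages, already /num_wg */
        int count, pos;
    };

    static void push_frame(struct peak_state *s, uint64_t frame_sum, uint32_t num_wg)
    {
        /* Point 2: pre-divide so the stored value fits 32 bits even for PQ. */
        uint32_t avg = (uint32_t)(frame_sum / num_wg);

        uint64_t total = 0;
        for (int i = 0; i < s->count; i++)
            total += s->frame_avg[i];
        uint32_t running = s->count ? (uint32_t)(total / s->count) : avg;

        /* Point 3: reset the buffer on a large jump (scene transition). */
        uint32_t diff = avg > running ? avg - running : running - avg;
        if (s->count && diff > running / 2)   /* placeholder threshold */
            s->count = s->pos = 0;

        s->frame_avg[s->pos] = avg;
        s->pos = (s->pos + 1) % PEAK_BUF_SIZE;
        if (s->count < PEAK_BUF_SIZE)
            s->count++;
    }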
Currently, using the drmprime interop with external mpv integration can lead
to rendering issues because the current frame is being released too early.
Typically, using this with Qt results in a one-frame shift, because Qt
will do waitforvsync and swap, rather than swap and waitforvsync.
This leads to tearing, as the framebuffer is released while being
displayed on screen.
In order to avoid releasing the framebuffer that is displayed, we keep
the framebuffer alive for one more frame with triple buffering, to make
sure that whatever rendering process is used, the framebuffer will not
be released while it's still on screen.
This was tested on a RockChip Rock64.
The current peak detection algorithm was very bugged (which contributed
to the excessive cross-frame flicker without long normalization) and
also didn't take into account the frame average brightness level.
The new algorithm both takes into account frame average brightness (in
addition to peak brightness), and also computes the values in a more
stable/correct way. (The old path was basically undefined behavior)
In addition to improving the algorithm, we also switch to hable tone
mapping by default, and try to enable peak computation automatically
whenever possible (compute shaders + SSBOs supported). We also make the
desaturation milder, after extensive testing during libplacebo
development.
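For reference, "hable" is John Hable's Uncharted 2 filmic operator; a sketch
with his published constants (mpv's exact parameterization may differ):

    static float hable(float x)
    {
        const float A = 0.15f, B = 0.50f, C = 0.10f,
                    D = 0.20f, E = 0.02f, F = 0.30f;
        return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F;
    }

    /* Typical use: normalize so the signal peak maps to 1.0. */
    static float tone_map_hable(float x, float sig_peak)
    {
        return hable(x) / hable(sig_peak);
    }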
I also had to compensate a bit for the representational differences
between mpv and libplacebo (libplacebo treats 1.0 as the reference peak,
but mpv treats it as the nominal peak), but it shouldn't have caused any
problems.
This is still not quite the same as libplacebo, since libplacebo also
allows tagging the desired scene average brightness on the output, and
it also supports reading the scene average brightness from static
metadata (MaxFALL) where available. But those changes are a bit more
involved. It's possible we could also read this from metadata in the
future, but we have problems communicating with AVFrames as it is and I
don't want to touch the mpv colorimetry structs for the time being.
The vulkan validation layers warn you if you try requesting a query
result from a timer that hasn't even been started yet, so we have to do
a bit of extra work to keep track of which indices we've seen so far,
and avoid the queries on them.
Instead of enabling every feature under the sun, make an effort to just
whitelist the ones we actually might use. Turns out the extended storage
format support is needed for some of the storage formats we use, in
particular rgba16.
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.
This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.
vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that can not be inserted by the user manually.
f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is
conceptually easy, but a big mess due to the data flow changes).
The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)
In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.
vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.
The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.
Exporting VO capabilities is still a big mess (mp_stream_info stuff).
The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.
f_async_queue is unused.
The RA_CAP_FRAGCOORD checks apply to dumb mode as well, but they were
after the check for dumb mode, which returns early, so they never ran.
Fixes #5436
Using vdpau will allocate additional textures for the reinterleaving
step, which uninit_rendering() will free. This is a problem because the
hwdec image remains mapped when reinitializing, so the reinterleaving
textures are turned into dangling pointers. Fix this by freeing the
reinterleave textures on full uninit instead.
Fixes #5447.
It was actually already implemented as ta_dup_ptrtype(), but that seems
like a clunky name. Also we still use the talloc_ names throughout the
source, and I'd rather use an old name than mix inconsistent
naming conventions.
mp_sws_set_from_cmdline() has the sole purpose of respecting the --sws-
command line options. Instead of forcing callers to get the option
struct containing these, let callers pass mpv_global, and get it from
the option core code directly. This avoids minor annoyances later on.
DR (direct rendering) works by having the decoder decode into the GPU
staging buffers, instead of copying the video data on texture upload. We
did this even for formats unsupported by the GPU or the renderer. This
"worked" because the staging memory is untyped, and the video frame was
converted by libswscale to a supported format, and then uploaded with a
copy using the normal non-DR texture upload path.
Even though it "works", we don't gain anything from using the staging
buffers for decoding, since we can't use them for upload anyway. Also,
staging memory might be potentially limited (what really happens is up
to the driver). It's easy to avoid, so just skip it in these cases.
The check_gl_features(p) call here checks whether dumb mode can be used.
It uses the field use_integer_conversion, which is set _after_ the call
in the same function. Move check_gl_features() to the end of the
function, when use_integer_conversion is finally set.
Fixes that it tried to use bilinear filtering with integer textures. The
bug disabled the code that is supposed to convert it to non-integer
textures.
This segfaults otherwise. The conditional is needed to break a circular
dependency (gl_init depends on mpgl_load_functions which depends on
recreate_dispmanx which calls gl_ctx_resize).
Fixes #5398
Remove the max_count creation parameter, because it's pointless and
rarely ever did anything. Add a talloc parent parameter instead (which
is something completely different, but convenient, and all callers need
to be changed anyway).
Instead of clearing the pool when the now removed maximum is reached,
clear it on image parameter changes instead.
This enables DXVA2 hardware decoding with ra_d3d11. It should be useful
for Windows 7, where D3D11VA is not available. Images are transferred
from D3D9 to D3D11 using D3D9Ex surface sharing[1].
Following Microsoft's recommendations, it uses a queue of shared
surfaces, similar to Microsoft's ISurfaceQueue. This will hopefully
prevent surface sharing from impacting parallelism and allow multiple
D3D11 frames to be in-flight at once.
[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/ee913554.aspx
In a lost device scenario, resize() will fail and p->backbuffer will be
NULL. We can't recover from lost devices yet, but we should still check
for a NULL backbuffer in start_frame() rather than crashing.
Also remove a NULL check for p->swapchain. This was a red herring, since
p->swapchain never becomes NULL in an error condition, but p->backbuffer
actually does.
This should fix the crash in #5320, but it doesn't fix the underlying
reason for the lost device (which is probably a driver bug).
Previously, mpv would attempt to use a BGRA swapchain in the hope that
it would give better performance, since the Windows desktop is also
composited in BGRA. In practice, it seems like there is no noticeable
performance difference between RGBA and BGRA swapchains, and BGRA
swapchains cause trouble with a42b8b1142, which attempts to use the
swapchain format for intermediate FBOs, even though D3D11 does not
guarantee BGRA surfaces will work with UAV typed stores.
Uses the EGL width/height by default when the user fails to set
the android-surface-width/android-surface-height options.
This means the vo-resize command is optional, and does not need to
be implemented on android devices which do not support rotation.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Apparently some Intel drivers have a bug where copying from staging
buffers to constant buffers does not work. We used to keep a copy of the
buffer data in a staging buffer to enable partial constant buffer
updates. To work around this bug, keep the copy in talloc-allocated
system memory instead.
There doesn't seem to be any noticeable performance difference from
keeping the copy in system memory. Our cbuffers are probably too small
for it to matter anyway.
See also: https://crbug.com/593024
Fixes #5293
This means that we now explicitly set an interval of 1. Although that
should be the EGL default, some drivers could possibly ignore this
(unconfirmed). In any case, this commit also allows disabling vsync, for
users who want it.
The queue family index and the queue info index are not necessarily the
same, so we're forced to do a check based on the queue family index
itself.
Fixes #5049
A vulkan validation layer update pointed out that this was wrong; we
still need to use the access type corresponding to the stage mask, even
if it means our code won't be able to skip the pipeline barrier (which
would be wrong anyway).
In addition to this, we're also not allowed to specify any source
access mask when transitioning from top_of_pipe, which doesn't make any
sense anyway.
Async compute in particular seems to cause problems on some drivers, and
even when supported the benefits are not that massive from the tests I
have seen, so it's probably safe to keep it off by default.
Async transfer on the other hand seems to work better and offers a more
substantial improvement, so it's kept on.
This gets confused by e.g. SPARSE_BIT on the TRANSFER_BIT, leading to
situations where "more specialized" is ambiguous and the logic breaks
down. So to fix it, only compare the subset we care about.
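The masking boils down to something like this (names of the helpers are
illustrative):

    #include <vulkan/vulkan.h>

    /* Only compare the capability bits we actually care about; stray bits such
     * as VK_QUEUE_SPARSE_BINDING_BIT on a transfer family would otherwise make
     * "more specialized" ambiguous. */
    static VkQueueFlags relevant_caps(VkQueueFlags flags)
    {
        return flags & (VK_QUEUE_GRAPHICS_BIT |
                        VK_QUEUE_COMPUTE_BIT  |
                        VK_QUEUE_TRANSFER_BIT);
    }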
blit() implies scaling, copy() is the equivalent command to use when the
formats are compatible (same pixel size) and the rects have the same
dimensions.
This allows RAs with support for non-opaque FBO formats to use a more
appropriate FBO format for the output tex, possibly enabling a more
efficient blit operation.
This requires distinguishing between real formats (which can be used to
create textures) and fake formats (e.g. ra_gl's FBO hack).
On AMD devices, we only get one graphics pipe but several compute pipes
which can (in theory) run independently. As such, we should prefer
compute shaders over fragment shaders in scenarios where we expect them
to be better for parallelism.
This is amusingly trivial to do, and actually improves performance even
in a single-queue scenario.
Instead of using a single primary queue, we generate multiple
vk_cmdpools and pick the right one dynamically based on the intent.
This has a number of immediate benefits:
1. We can use async texture uploads
2. We can use the DMA engine for buffer updates
3. We can benefit from async compute on AMD GPUs
Unfortunately, the major downside is that due to the lack of QF
ownership tracking, we need to use CONCURRENT sharing for all resources
(buffers *and* images!). In theory, we could try figuring out a way to
get rid of the concurrent sharing for buffers (which is only needed for
compute shader UBOs), but even so, the concurrent sharing mode doesn't
really seem to have a significant impact over here (nvidia). It's
possible that other platforms may disagree.
Our deadlock-avoidance strategy is stupidly simple: Just flush the
command every time we need to switch queues, and make sure all
submission and callbacks happen in FIFO order. This required lifting the
cmds_pending and cmds_queued out from vk_cmdpool to mpvk_ctx, and some
functions died/got moved as a result, but that's a relatively minor
change.
On my hardware this is a fairly significant performance boost, mainly
due to async transfers. (Nvidia doesn't expose separate compute queues
anyway). On AMD, this should be a performance boost as well due to async
compute.
This is especially interesting for vulkan since it allows completely
skipping the layout transition as part of the renderpass. Unfortunately,
that also means it needs to be put into renderpass_params, as opposed to
renderpass_run_params (unlike #4777).
Closes #4777.
This uses the new vk_signal mechanism to order all access to textures.
This has several advantages:
1. It allows real synchronization of image access across multiple frames
when using multiple queues for parallelism.
2. It allows using events instead of pipeline barriers, which is a
finer-grained synchronization primitive that allows for more
efficient layout transitions over longer durations.
This commit also restructures some of the implicit transition code for
renderpasses to be more flexible and correct. (Note: this technically
drops the ability to transition the image out of undefined layout when
not blending, but that was a bug anyway and needs to be done properly)
vo_gpu: vulkan: remove no-longer-true optimization
The change to the output_tex format makes this no longer true, and it
actually seems to hurt performance now as well. So just don't do it
anymore. I also realized it hurts performance when drawing an OSD, so
it's probably not a good idea anyway.
This combines VkSemaphores and VkEvents into a common umbrella
abstraction which can resolve to either.
We aggressively try to prefer VkEvents over VkSemaphores whenever the
conditions are met (1. we can unsignal the semaphore, i.e. it comes from
the same frame; and 2. it comes from the same queue).
Instead of being submitted immediately, commands are appended into an
internal submission queue, and the actual submission is done once per
frame (at the same time as queue cycling). Again, the benefits are not
immediately obvious because nothing benefits from this yet, but it will
make more sense for an upcoming vk_signal mechanism.
This also cleans up the way the ra_vk submission interacts with the
synchronization/callbacks from the ra_vk_ctx. Although currently, the
way the dependency is signalled is a bit hacky: normally it would be
associated with the ra_tex itself and waited on in the appropriate stage
implicitly. But that code is just temporary, so I'm keeping it in there
for a better commit order.
Instead of associating a single VkSemaphore with every command buffer
and allowing the user to ad-hoc wait on it during submission, make the
raw semaphores-to-signal array work like the raw semaphores-to-wait-on
array. Doesn't really provide a clear benefit yet, but it's required for
upcoming modifications.
1. No more static arrays (deps / callbacks / queues / cmds)
2. Allows safely recording multiple commands at the same time
3. Uses resources optimally by never over-allocating commands
This hack was part of a solution to VSync judder in desktop OpenGL on
Windows. Rather than using blocking-SwapBuffers(), mpv could use
DwmFlush() to wait for the image to be presented by the compositor.
Since this would only work while the compositor was running, and the
compositor was silently disabled when OpenGL entered exclusive
fullscreen mode, mpv needed a way to detect exclusive fullscreen mode.
The code that is being removed could detect exclusive fullscreen mode by
checking the state of an undocumented mutex using undocumented native
API functions, but because of how fragile it was, it was always meant to
be removed when a better solution for accurate VSync in OpenGL was
found. Since then, mpv got the dxinterop backend, which uses desktop
OpenGL but has accurate VSync. It also got a native Direct3D 11 backend,
which is a viable alternative to OpenGL on Windows.
For people who are still using desktop OpenGL with WGL, there shouldn't
be much of a difference, since mpv can use other API functions to detect
exclusive fullscreen.
Refactored and split the `reinit_window_state` code into four
separate functions:
- `update_window_style` used to update window styles without
modifying the window rect.
- `fit_window_on_screen` used to adjust the window size when it is
larger than the screen size. Added a helper function `fit_rect` to
fit one rect onto another without using any data from the w32 struct
(see the sketch after this list).
- `update_fullscreen_state` used to calculate the new fullscreen
state and adjust the window rect accordingly.
- `update_window_state` used to display the window on screen with
new size, position and ontop state.
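A minimal sketch of what a `fit_rect`-style helper could look like
(illustrative only; the real implementation may differ):

    #include <windows.h>

    // Sketch: shrink *rc to fit within *screen if necessary, then shift it
    // so that it lies entirely on the screen.
    static void fit_rect(RECT *rc, const RECT *screen)
    {
        LONG w = min(rc->right - rc->left, screen->right - screen->left);
        LONG h = min(rc->bottom - rc->top, screen->bottom - screen->top);
        LONG x = max(screen->left, min(rc->left, screen->right - w));
        LONG y = max(screen->top, min(rc->top, screen->bottom - h));
        *rc = (RECT){ .left = x, .top = y, .right = x + w, .bottom = y + h };
    }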
This commit fixes three issues:
- fixed #4753 by skipping `fit_window_on_screen` for a maximized
window, since a maximized window should already fit on the screen.
It should be noted that this bug was only reproducible with the
`--fit-border` option, which is enabled by default. The cause of the
bug is that after calling `add_window_borders` for a maximized
window, the resulting rect is slightly larger than the screen rect,
which is okay: `SetWindowPos` will interpret it as a maximized state
later, so no auto-fitting to screen size is needed here.
- fixed #5215 by skipping `fit_window_on_screen` when leaving fullscreen.
On a multi-monitor system, if the mpv window was stretched to cover
multiple monitors, its size was reset after switching back from
fullscreen to fit the size of the active monitor. Also, when changing
the `--ontop` and `--border` options, now only the
`update_window_style` and `update_window_state` functions are used,
so `fit_window_on_screen` is not used for them either.
- fixed #2451 by moving the `ITaskbarList2_MarkFullscreenWindow` call
below `SetWindowPos`. If the taskbar is notified about the fullscreen
state before the window is shown on screen, the taskbar button could
be missing until Alt-TAB is pressed; this was usually reproducible on
Windows 8.
Other changes:
- In `update_fullscreen_state` the `reset window bounds` debug
message now reports the client area size and position instead of the
window area size and position. This is done for consistency with the
debug messages in the fullscreen state handling above in this function,
since they also print the window bounds of the client area.
- Refactored `gui_thread_reconfig`. Added a new window flag `fit_on_screen`
to fit the window on screen even when leaving fullscreen. This is needed
for the case when a new video is opened while the window is still in
fullscreen.
- Moved the parent and fullscreen state checks from the WM_MOVING handler
into the `snap_to_screen_edges` function for consistency with other
functions. There's no point in keeping these checks out of the function
body.
When the window and screen size and position are stored in RECTs, it's
much easier to modify them using WinAPI functions.
Added two macros to get the width and height of a rect.
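Presumably something along these lines (macro names assumed):

    #include <windows.h>

    // Width/height of a RECT.
    #define RECT_W(r) ((r).right - (r).left)
    #define RECT_H(r) ((r).bottom - (r).top)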
I've decided that MP_TRACE means “noisy spam per frame”, whereas
MP_DBG just means “more verbose debugging messages than MSGL_V”.
Basically, MSGL_DBG shouldn't create spam per frame like it currently
does, and MSGL_V should make sense to the end-user and provide mostly
additional informational output.
MP_DBG is basically what I want to make the new default for --log-file,
so the cut-off point for MP_DBG is whether we probably want to know it
for debugging purposes, but the user most likely doesn't care about it
on the terminal.
Also, the debug callbacks for libass and ffmpeg got bumped in their
verbosity levels slightly, because being external components they're a
bit less relevant to mpv debugging, and a bit too over-eager in what
they consider to be relevant information.
I exclusively used the "try it on my machine and remove messages from
MSGL_* until it does what I want it to" approach of refactoring, so
YMMV.
When autoprobing the hwdec interops (which now happens to all compiled
interops if hardware decoding is used), failure to load an interop
should not print an error in the normal case. So hide it.
(We could make the log level conditional on whether autoprobing is used,
but directly loading it without autoprobing is obscure, and most other
interops don't do this either.)
* Distinguish between the window being moved or not.
* Skip trying to snap if currently in full screen or an embedded
window.
* Exit snapped state if the size changed when the window was being
moved.
Check the expected width and height against up-to-date
window placement. If they do not match, we will consider snapping
to have happened on Windows' side.
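Something like the following sketch (names assumed; the real check lives in
the w32 event handling):

    #include <windows.h>
    #include <stdbool.h>

    // Sketch: compare the size we requested against the current window
    // placement; if they differ, assume Windows itself snapped the window.
    static bool window_snapped_by_os(HWND hwnd, int expected_w, int expected_h)
    {
        WINDOWPLACEMENT wp = { .length = sizeof(wp) };
        if (!GetWindowPlacement(hwnd, &wp))
            return false;
        RECT r = wp.rcNormalPosition;
        return (r.right - r.left) != expected_w ||
               (r.bottom - r.top) != expected_h;
    }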
Partially fixes display-sync under Wayland (though if you change virtual
desktops you'll need to seek to re-enable display-sync).
As an advantage, rendering is completely disabled if you change desktops or
alt+tab, so you lose no performance if you leave mpv running elsewhere, as
long as it isn't visible.
This could also be ported to other VOs which support it.
We need to support hardware/drivers which do not support ARGB8888 in
their primary plane.
We also use p->primary_plane_format when creating the gbm surface, to
make sure it always matches (in actuality there should be little
difference).
Passing in an invalid DRM overlay id with the --drm-overlay option would
cause drmplane to be freed twice: once in the for-loop and once at the
error-handler label fail.
Solve this by setting drmplane to NULL after freeing it.
Also, the 'return false' statement after the error-handler label should
probably be 'return NULL', given that drm_atomic_create_context returns
a pointer.
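The shape of the fix, roughly (simplified sketch, not the literal code):

    #include <stddef.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    // Sketch: after freeing the plane inside the loop, reset the pointer so
    // the error path cannot free it a second time; and return NULL (not
    // false), since the function returns a pointer.
    static drmModePlane *find_overlay_plane(int fd, drmModePlaneRes *res,
                                            uint32_t wanted_id)
    {
        drmModePlane *drmplane = NULL;
        for (uint32_t i = 0; i < res->count_planes; i++) {
            drmplane = drmModeGetPlane(fd, res->planes[i]);
            if (drmplane && drmplane->plane_id == wanted_id)
                return drmplane;
            drmModeFreePlane(drmplane);
            drmplane = NULL;         // the actual fix
        }
        // corresponds to the "fail" label in the real code:
        drmModeFreePlane(drmplane);  // safe: drmplane is NULL here
        return NULL;
    }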
vo_x11 and vo_xv need this. According to the Linux manpage, all involved
functions are POSIX-2001 anyway. (I just assumed they were not, because
they're mostly System V UNIX legacy garbage.)
Finally get rid of all the HWDEC_* things, and instead rely on the
libavutil equivalents. vdpau still uses a shitty hack, but fuck the
vdpau code.
Remove all the now unneeded remains. The vdpau preemption thing was not
unused anymore; if someone cares this could probably be restored.
With the recent changes, mpv's internal mechanisms got synced to
libavcodec's once more. Some things are still needed for filters (until
the mechanism gets replaced), but there's no need to require other hwdec
methods to use these fields. So remove them where they are unnecessary.
Also fix some minor leaks in the dxva2 backends, and set the driver_name
field in the Apple ones. Untested on Apple crap.
It makes more sense to have it in the general video directory (along
with vdpau.c and vaapi.c), since the decoder source files don't even
access it anymore.
The testing_only field is not referenced anymore with vaglx removed and
the previous commit dropping all uses.
The ra_hwdec_driver.api field became unused with the previous commit,
but all hwdec interop drivers still initialized it.
Since this touches highly OS-specific code, build regressions are
possible (plus the previous commit might break hw decoding at runtime).
At least hwdec_cuda.c still used the .api field, other than initializing
it.
Make the VO<->decoder interface capable of supporting multiple hwdec
APIs at once. The main gain is that this simplifies autoprobing a lot.
Before this change, it could happen that the VO loaded the "wrong" hwdec
API, and the decoder was stuck with the choice (breaking hw decoding).
With the change applied, the VO simply loads all available APIs, so
autoprobing trickery is left entirely to the decoder.
In the past, we were quite careful about not accidentally loading the
wrong interop drivers. This was in part to make sure autoprobing works,
but also because libva had this obnoxious bug of dumping garbage to
stderr when using the API. libva was fixed, so this is not a problem
anymore.
The --opengl-hwdec-interop option is changed in various ways (again...),
and renamed to --gpu-hwdec-interop. It does not have much use anymore,
other than debugging. It's notable that the order in the hwdec interop
array ra_hwdec_drivers[] still matters if multiple drivers support the
same image formats, so the option can explicitly force one, if that
should ever be necessary, or more likely, for debugging. One example is
the ra_hwdec_d3d11egl and ra_hwdec_d3d11eglrgb drivers, which both
support d3d11 input.
vo_gpu now always loads the interop lazily by default, but when it does,
it loads them all. vo_opengl_cb now always loads them when the GL
context handle is initialized. I don't expect that this causes any
problems.
It's now possible to do things like changing between vdpau and nvdec
decoding at runtime.
This is also preparation for cleaning up vd_lavc.c hwdec autoprobing.
It's another reason why hwdec_devices_request_all() does not take a
hwdec type anymore.
nvdec aka cuvid aka cuda should work much better than vdpau, and support
newer codecs (such as vp9), and more advanced surface formats (like 10
bit).
This requires moving the d3d hwaccels in the autoprobe order, since on
Windows, d3d decoding should be preferred over nvidia proprietary stuff.
Users of older drivers will need to force --hwdec=vdpau, since it could
happen that the vo_gpu cuda hwdec interop loads (so the vdpau interop is
not loaded), but the hwdec itself doesn't work.
I expect this does not break AMD (which still needs vdpau for vo_gpu
interop, until libva is fixed so it can fully support AMD).
This has stopped being useful a long time ago, and it's the only GPL
source file in the vo_gpu source directories. Recently it wasn't even
loaded at all, unless you forced loading it.
The D3D11_CREATE_DEVICE_BGRA_SUPPORT flag doesn't enable support for
BGRA textures. BGRA textures will be supported whether or not the flag
is passed. The flag just fails device creation if they are not supported
as an API convenience for programs that need BGRA textures, such as
programs that use D2D or D3D9 interop. We can handle devices without
BGRA support fine, so don't bother with the flag.
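Illustratively, device creation just omits the flag (sketch, error handling
omitted):

    #include <d3d11.h>

    // Sketch: create the device without D3D11_CREATE_DEVICE_BGRA_SUPPORT.
    // BGRA textures work regardless of the flag; passing it would only make
    // device creation fail on the (rare) hardware without BGRA support.
    static ID3D11Device *create_device(void)
    {
        ID3D11Device *dev = NULL;
        UINT flags = 0;
        D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
                          NULL, 0, D3D11_SDK_VERSION, &dev, NULL, NULL);
        return dev;
    }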
For consistency with the already implemented shcore.dll
function loading in w32->api:
Moved loading of imm32.dll to w32_api_load, and declared the
pImmDisableIME function pointer in the w32->api struct.
Removed unloading of imm32.dll.
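Roughly the pattern being described (sketch; the struct layout is assumed):

    #include <windows.h>

    struct w32_api {
        // Function pointer resolved at runtime, like the shcore.dll case.
        BOOL (WINAPI *pImmDisableIME)(DWORD);
    };

    static void w32_api_load(struct w32_api *api)
    {
        HMODULE imm32 = LoadLibraryW(L"imm32.dll");
        api->pImmDisableIME = imm32
            ? (BOOL (WINAPI *)(DWORD))GetProcAddress(imm32, "ImmDisableIME")
            : NULL;
        // The DLL is intentionally never unloaded (see above).
    }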
Seems like the last refactor to this code broke playing flipped images,
at least with --opengl-pbo --gpu-api=opengl.
Flipping is rather a shitmess. The main problem is that OpenGL does not
support flipped uploading. The original vo_gl implementation considered
it important to handle the flipped case efficiently, so instead of
uploading the image line by line backwards, it uploaded it flipped, and
then flipped it in the renderer (basically the upload path ignored the
flipping). The ra code and backends probably have an insane and
inconsistent mix of semantics, so fix this by never passing it flipped
images in the first place.
In the future, the backends should probably support flipped images
directly.
Fixes #5097.
Like the manual says, this is technically undefined behaviour. See:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx
In particular, MSDN says texture arrays created with the BIND_DECODER
flag cannot be used with CreateShaderResourceView, which means they
can't be sampled through SRVs like normal Direct3D textures. However,
some programs (Google Chrome included) do this anyway for performance
and power-usage reasons, and it appears to work with most drivers.
Older AMD drivers had a "bug" with zero-copy decoding, but this appears
to have been fixed. See #3255, #3464 and http://crbug.com/623029.
The shader cache in ra_d3d11 caches the result of shaderc, crossc and
the D3DCompiler DLL, so it should be invalidated when any of those
components is updated. This should make the cache more reliable, which
makes it safer to enable gpu-shader-cache-dir. Shader compilation is
slow with D3D11, so gpu-shader-cache-dir is highly necessary.
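One way to picture the invalidation (a sketch, not the actual cache code):
bake the component versions into the cache key, so updating any of them
simply misses the old entries:

    #include <stdio.h>

    // Sketch: a cache-key prefix derived from the versions of everything
    // that participates in shader compilation.
    static void cache_key_prefix(char *buf, size_t n,
                                 unsigned shaderc_ver, unsigned crossc_ver,
                                 unsigned d3dcompiler_ver)
    {
        snprintf(buf, n, "shaderc=%u crossc=%u d3dcompiler=%u ",
                 shaderc_ver, crossc_ver, d3dcompiler_ver);
    }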
Some shaders take a _long_ time to compile with the Direct3D compiler.
The ANGLE backend had this problem too, to a certain extent. Logging
should help identify which shaders cause long stalls and could also help
with benchmarking ways of reducing compile times.
ra_d3d11 uses the SPIR-V compiler to translate GLSL to SPIR-V, which is
then translated to HLSL. This means it always exposes the same GLSL
version that the SPIR-V compiler supports (4.50 for shaderc/glslang.)
Despite claiming to support GLSL 4.50, some features that are tied to
the GLSL version in OpenGL are not supported by ra_d3d11 when targeting
legacy Direct3D feature levels.
This includes two features that mpv relies on:
- Reading from gl_FragCoord in the fragment shader (requires FL 10_0)
- textureGather from any texture component (requires FL 11_0)
These features have been exposed as new RA caps.
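Code can then gate itself on those caps, roughly like this (flag names here
mirror the description and are assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    // Sketch: capability bits corresponding to the two features above.
    enum {
        RA_CAP_FRAGCOORD = 1 << 0,  // gl_FragCoord readable (FL 10_0+)
        RA_CAP_GATHER    = 1 << 1,  // textureGather usable (FL 11_0+)
    };

    static bool can_read_fragcoord(uint64_t caps)
    {
        return caps & RA_CAP_FRAGCOORD;
    }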
This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL
generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.
What works:
- All of mpv's internal shaders should work, including compute shaders.
- Some external shaders have been tested and work, including RAVU and
adaptive-sharpen.
- Non-dumb mode works, even on very old hardware. Most features work at
feature level 9_3 and all features work at feature level 10_0. Some
features also work at feature level 9_1 and 9_2, but without high-bit-
depth FBOs, it's not very useful. (Hardware this old is probably not
fast enough for advanced features anyway.)
Note: This is more compatible than ANGLE, which requires 9_3 to work
at all (GLES 2.0,) and 10_1 for non-dumb-mode (GLES 3.0.)
- Hardware decoding with D3D11VA, including decoding of 10-bit formats
without truncation to 8-bit.
What doesn't work / can be improved:
- PBO upload and direct rendering does not work yet. Direct rendering
requires persistent-mapped PBOs because the decoder needs to be able
to read data from images that have already been decoded and uploaded.
Unfortunately, it seems like persistent-mapped PBOs are fundamentally
incompatible with D3D11, which requires all resources to use driver-
managed memory and requires memory to be unmapped (and hence pointers
to be invalidated) when a resource is used in a draw or copy
operation.
However it might be possible to use D3D11's limited multithreading
capabilities to emulate some features of PBOs, like asynchronous
texture uploading.
- The blit() and clear() operations don't have equivalents in the D3D11
API that handle all cases, so in most cases, they have to be emulated
with a shader. This is currently done inside ra_d3d11, but ideally it
would be done in generic code, so it can take advantage of mpv's
shader generation utilities.
- SPIRV-Cross is used through a NIH C-compatible wrapper library, since
it does not expose a C interface itself.
The library is available here: https://github.com/rossy/crossc
- The D3D11 context could be made to support more modern DXGI features
in future. For example, it should be possible to add support for
high-bit-depth and HDR output with DXGI 1.5/1.6.
Backported from @haasn's change to libplacebo, except in the current RA,
there's nothing to indicate an ra_format can be bound as a storage
image, so there's no way to force all of these formats to have a
glsl_format. Instead, the layout qualifier will be removed if
glsl_format is NULL.
This is needed for the upcoming ra_d3d11 backend. In Direct3D 11, while
loading float values from unorm images often works as expected, it's
technically undefined behaviour, and in Windows 10, it will cause the
debug layer to spam the log with error messages. Also, apparently in
GLSL, the format name must match the image's format exactly (but in
Direct3D, it just has to have the same component type.)
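A minimal sketch of the emission logic (not mpv's actual shader generation
API):

    #include <stdio.h>

    // Sketch: emit the image declaration, dropping the layout qualifier when
    // the format has no known glsl_format.
    static void emit_image_decl(FILE *out, const char *glsl_format,
                                const char *name)
    {
        if (glsl_format)
            fprintf(out, "layout(%s) ", glsl_format);  // e.g. "rgba16f"
        fprintf(out, "writeonly uniform image2D %s;\n", name);
        // (GLSL only allows reads from images that do have a format
        //  qualifier, so a format-less declaration is write-only anyway.)
    }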
Backported from @haasn's change to libplacebo. More flexible than the
previous "shared || non-shared" distinction. The extra flexibility is
needed for Direct3D 11, but it also doesn't hurt code-wise.
For some reason vo_lavc's draw_image can buffer the frame and encode it
only later. Also, there is logic for rendering the OSD (i.e. subtitles)
only when needed.
In theory this can lead to subtitles being pruned before it tries to
render them (as the subtitle logic doesn't know that the VO still needs
them later), although this probably never happens in reality.
The worse issue, that actually happened, is that if the last frame gets
buffered, it attempts to render subtitles in the uninit callback. At
this point, the subtitle decoder is already torn down and all subtitles
removed, thus it will draw nothing. This didn't always happen. I'm not
sure why - potentially in the working cases, the frame wasn't buffered.
Since this logic doesn't have much worth, except a minor performance
advantage if frames with subtitles are dropped, just remove it.
Hopefully fixes #4689.
Repeating frames (for display-sync) is not supposed to render the entire
frame again. When using hardware decoding, it unfortunately did: the
renderer uses the frame ID to check whether the frame data changed, and
unmapping the hwdec frame clears it.
Essentially reverts commit 761eeacf54. Back then I probably
thought it would be a good idea to release the hwdec image quickly in
order to return it to the decoder, but they're referenced anyway.
This should increase the performance and reduce GPU work.
Normally such code is disabled by have_mglsl==false in
check_gl_features(), but apparently not this one.
Just fix it. It also seems more readable.
Fixes #5069.
Apparently this is required, but it doesn't check for it. To be fair,
this was tested by creating a compatibility context and pretending it's
GL 2.1. GL_ARB_shader_storage_buffer_object actually requires GL 4.0 or
up, but GL_ARB_uniform_buffer_object requires only GL 2.0.
vo_gpu.c will call gl_video_icc_auto_enabled() to check whether it
should retrieve the ICC profile. But the value returned by this function
will be outdated, because gl_video_update_options() is not called yet.
Change the order of function calls so that this is done after updating
the options.
(This is fairly chaotic, but I guess this code will be refactored a
dozen times anyway in the future.)
This is just a dumb consequence of HWDEC_ types somehow being part of
both decoder and VO. Obviously, the VO should only care about supporting
specific hardware surface types or providing specific device types, but
until they are separated, stupid unintuitive mismatches will occur.
See manpage additions.
(In ffmpeg-mpv and Libav, this is still called "cuvid". Libav won't work
yet, because it has no frame params support, but this could get
fixed soon.)
params->rc was ignored in the calculation for the buffer size. I fucking
hate this stupid ra_tex_upload signature where *rc is randomly relevant
or not.
Coverity complains about this, but it's probably a false positive.
Anyway, rewrite it in a slightly more readable way. Now it's more
obvious that it is correct.
Comparing mpv's implementation against the ACES ODR reference samples
and algorithms, it seems like they're happy desaturating highlights
_way_ more aggressively than mpv currently does. And indeed, looking at
some example clips like The Redwoods (which is actually well-mastered),
the current desaturation produces unnatural-looking brightness fringes
where the sky meets the treeline.
Adjust the algorithm to make it apply to a much larger, more gradual
brightness region; and change the interpretation of the parameter. As a
bonus, the new parameter is actually sanely scaled (higher values = more
desaturation). Also, make it scale based on the signal level instead of
the luminance, to avoid under-desaturating bright blues.
This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME
format in ffmpeg, which allows decoders to provide an AVDRMFrameDescriptor
struct.
That struct holds dmabuf fds and information allowing zerocopy rendering
using KMS / DRM Atomic.
This has been tested on a RockChip ROCK64 device.
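For illustration, a consumer of such frames reads the descriptor out of the
frame data roughly like this (sketch, minimal error handling):

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext_drm.h>

    // Sketch: for AV_PIX_FMT_DRM_PRIME frames, data[0] points at an
    // AVDRMFrameDescriptor, which carries the dmabuf fds.
    static int first_dmabuf_fd(const AVFrame *frame)
    {
        const AVDRMFrameDescriptor *desc =
            (const AVDRMFrameDescriptor *)frame->data[0];
        return desc && desc->nb_objects > 0 ? desc->objects[0].fd : -1;
    }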