This commit rips out the entire mpv vulkan implementation in favor of
exposing lightweight wrappers on top of libplacebo instead, which
provides much of the same except in a more up-to-date and polished form.
This (finally) unifies the code base between mpv and libplacebo, which
is something I've been hoping to do for a long time.
Note: The ra_pl wrappers are abstract enough from the actual libplacebo
device type that we can in theory re-use them for other devices like
d3d11 or even opengl in the future, so I moved them to a separate
directory for the time being. However, the rest of the code is still
vulkan-specific, so I've kept the "vulkan" naming and file paths, rather
than introducing a new `--gpu-api` type. (Which would have ended up
with significantly more code duplication)
Plus, the code and functionality is similar enough that for most users
this should just be a straight-up drop-in replacement.
Note: This commit excludes some changes; specifically, the updates to
context_win and hwdec_cuda are deferred to separate commits for
authorship reasons.
This helps with compatibility and/or performance in particular for
oddly-sized formats like rgb24. We use a loop to avoid having to
calculate the lcm (or waste bytes in the extremely common case of the
byte size and the stride align having shared factors).
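A minimal standalone sketch of the idea (hypothetical names, not the actual
mpv helper): start from the alignment-rounded stride and bump it by the
alignment until it is also a multiple of the texel size.

    #include <stdio.h>

    // Find a padded stride that is a multiple of both the texel size in bytes
    // and the required stride alignment, without computing lcm(bytes, align).
    // In the common case of shared factors this exits after zero iterations.
    static int padded_stride(int stride, int bytes, int align)
    {
        int padded = (stride + align - 1) / align * align;  // round up to align
        while (padded % bytes)
            padded += align;
        return padded;
    }

    int main(void)
    {
        // rgb24 (3 bytes per pixel) with a hypothetical 256-byte alignment
        printf("%d\n", padded_stride(1920 * 3, 3, 256));    // prints 6144
        return 0;
    }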
half of the materials we used were deprecated with macOS 10.14, broken
and not supported by runtime changes of the macOS theme. furthermore
our styling names were completely inconsistent with the actual look
since macOS 10.14, e.g. ultradark got a lot brighter and couldn't be
considered ultradark anymore.
i decided to drop the old option --macos-title-bar-style and rework
the whole mechanism to allow more freedom. now materials and appearance
can be set separately. even if apple changes the look or semantics in
the future the new options can be easily adapted.
setting the appearance of the window to nil will always use the system
setting, and it changes on runtime switches too. this is only the case
for macOS 10.14.
quite a lot of the title bar functionality and logic was within our
window. since we recently added a custom title bar class to our window
i decided to move all that functionality into that class and its
own file.
this is also a preparation for the next commits.
i found the old pixel format creation a bit too messy. pixel format
attribute arrays and lookups were all over the place, the actual logic
for what kind of format was created was inscrutable, and the software pixel
format was hardcoded and no probing was done.
i split the attributes into mandatory and optional ones, one mandatory
for a hardware and software pixel format each, and moved those to the
top of the class. that way new attributes can be easily added to either
the mandatory or optional attributes and they don't mess up the actual
pixel creation logic any more. furthermore both hardware and software
pixel formats are being probed the same way now. to minimise code
duplication, the probing was moved into its own function.
only the dragged types NSFilenamesPboardType and NSURLPboardType were
supported for dropping on the window, which was inconsistent with the
dragged types the dock icon supports. the dock icon additionally supports
strings that represent a URL or a path. the system takes care of
validating the strings properly in the case of the dock icon, but in the
case of dropping on the window it needs to be done manually.
support for strings is added by also allowing the NSPasteboardTypeString
type and manually validating the strings. strings are split by newlines
and trimmed, to also support lists of URLs and paths. every element
is checked for being a URL or a path, and only then added to the
playlist.
on init an NSWindow can't be placed on a non-Main screen NSScreen
outside the Main screen's frame bounds.
To fix this we just check if the target screen is different from the
main screen, and if that is the case we compare the actual position with
the expected one. if they differ we try to reposition the window
to the wanted position.
Fixes #6453
when quitting mpv in fullscreen the System always switches to Space 1
regardless of which Space mpv was on before switching to fs. to fix
this we close the window before quitting fs. that way the System
switches back to the Space mpv was on before fs.
new events were added but not fetched by the vo, because we didn't
signal the vo that new events were available.
actually wake up the vo when new events are available.
Regression from 8e3308d687.
Broken cases were:
* --no-cursor-autohide acted like --cursor-autohide=always.
* --cursor-autohide-fs-only always hid the cursor if starting
non-fullscreen; entering fullscreen at least once fixed it.
The old size limit was chosen before LUT textures were supported in user
shaders. At that time, the whole user shader would be compiled and run
on the GPU, which made large user shaders impractical to use.
With the introduction of LUT textures, the old size limit doesn't make
any sense. For example, a 1024x1024 rgba16f LUT will cost 32MB of shader
size.
Fix this by increasing the size limit to a value that's unlikely to be
reached.
Manual changes done:
* Merged the interface-changes under the already master'd changes.
* Moved the hwdec-related option changes to video/decode/vd_lavc.c.
The modulo operator can be used to check if a size is a multiple of a
certain number.
The equality operator can be used to verify that the sizes of different
textures align.
Before this commit, the texture offset was set after all source textures
were finalized, which means CHROMA hooks won't be able to align with
luma planes. This could be problematic for chroma prescalers utilizing
information from the luma plane.
Fix this by finding the reference texture early, and setting the global
texture offset early.
This allows context_drm_egl to use as many buffers as libgbm or the
swapchain_depth setting allows (whichever is smaller).
On pause and on still images (cover art etc.), to make sure that output does not
lag behind user input, the swapchain is drained and reverts to working in a
dual-buffered (equivalent to swapchain-depth=1) manner.
When possible (swapchain-depth>=2), the wait on the page flip event is now not
done immediately after queueing, but is deferred to the next invocation of
swap_buffers, which should give us more CPU time between invocations.
However, since gbm_surface_has_free_buffers() can only tell us a boolean value
and not how many buffers we have left, we are forced to do this contortionist
dance where we first overshoot until gbm_surface_has_free_buffers() reports 0,
followed by immediately waiting so we can free a buffer, to be able to get the
deferred wait on page flip rolling.
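Roughly, the swap path then looks like this (a hedged sketch with hypothetical
helper names, not the actual context_drm_egl code):

    // queue the flip for the new frame, but don't wait for it yet
    enqueue_flip(p);        // drmModePageFlip(..., DRM_MODE_PAGE_FLIP_EVENT, ...)

    // only once gbm reports that no free buffers are left do we block on a
    // single page flip event; that releases one buffer again and keeps the
    // deferred-wait scheme rolling
    if (!gbm_surface_has_free_buffers(p->gbm_surface))
        wait_for_flip(p);   // drmHandleEvent() until the flip handler clears
                            // the waiting_for_flip flag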
With this commit we do not rely on the default vsync fences/latency emulation of
video/out/opengl/context.c, but supply our own, since the places we create and
wait for the fences need to be somewhat different for best performance.
Minor fixes:
* According to GBM documentation all BOs obtained with
gbm_surface_lock_front_buffer must be released before gbm_surface_destroy is
called on the surface.
* We let the page flip handler function handle the waiting_for_flip flag.
This solves some edge cases when using files with very weird metadata
(e.g. MaxCLL 10k and so forth). Instead of just blindly seeding it with
the tagged metadata, forcibly set the initial state from the detected
values.
Rather than the linear cd/m^2 units, these (relative) logarithmic units
lend themselves much better to actually detecting scene changes,
especially since the scene averaging was changed to also work
logarithmically.
Gamut mapping can take very bright out-of-gamut colors into the
negatives, which completely destroys the color balance (which tone
mapping tries its best to preserve).
This change switches to a logarithmic mean to estimate the average
signal brightness. This handles dark scenes with isolated highlights
much more faithfully than the linear mean did, since the log of the
signal roughly corresponds to the perceptual brightness.
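Roughly (a sketch of the idea, with ε guarding against log(0)):

    \bar{L} = \exp\Big( \tfrac{1}{N} \sum_i \log \max(L_i, \varepsilon) \Big)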
In theory our "eye adaptation" algorithm works in both ways, both
darkening bright scenes and brightening dark scenes. But I've always
just prevented the latter with a hard clamp, since I wanted to avoid
blowing up dark scenes into looking funny (and full of noise).
But allowing a tiny bit of over-exposure might be a good thing. I won't
change the default just yet (better let users test), but a moderate
value of 1.2 might be better than the current 1.0 limit. Needs testing
especially on dark scenes.
The previous approach of using an FIR with tunable hard threshold for
scene changes had several problems:
- the FIR involved annoying hard-coded buffer sizes, high VRAM usage,
and the FIR sum was prone to numerical overflow, which limited the
number of frames we could average over. (We also totally redesigned the
scene change detection.)
- the hard scene change detection was prone to both false positives and
false negatives, each with their own (annoying) issues.
Scrap this entirely and switch to a dual approach of using a simple
single-pole IIR low pass filter to smooth out noise, while using a
softer scene change curve (with tunable low and high thresholds), based
on `smoothstep`. The IIR filter is extremely simple in its
implementation and has an arbitrarily user-tunable cutoff frequency,
while the smoothstep-based scene change curve provides a good, tunable
tradeoff between adaptation speed and stability - without exhibiting
either of the traditional issues associated with the hard cutoff.
Another way to think about the new options is that the "low threshold"
provides a margin of error within which we don't care about small
fluctuations in the scene (which will therefore be smoothed out by the
IIR filter).
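Schematically (a hedged sketch, not the literal shader code): the smoothed
value follows a single-pole recurrence, and the adaptation is weighted by a
smoothstep of the deviation between the two thresholds:

    y_n = y_{n-1} + \alpha (x_n - y_{n-1}), \quad
    w = \mathrm{smoothstep}(t_{low}, t_{high}, |x_n - y_{n-1}|)

with \alpha derived from the user-tunable cutoff frequency (for instance
\alpha = 1 - e^{-2\pi f_c \Delta t}).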
Instead of desaturating towards luma, we desaturate towards the
per-channel tone mapped version. This essentially provides a smooth
roll-off towards the "hollywood"-style (non-chromatic) tone mapping
algorithm, which works better for bright content, while continuing to
use the "linear" style (chromatic) tone mapping algorithm for primarily
in-gamut content.
We also split up the desaturation algorithm into strength and exponent,
which allows users to use less aggressive desaturation settings without
affecting the overall curve.
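As a rough illustration of the split (not the exact formula used), the mix
weight towards the per-channel ("hollywood") result could look like

    w = \mathrm{strength} \cdot \Big( \max\big( (L - L_t)/L,\, 0 \big) \Big)^{\mathrm{exponent}}

so that strength scales how far we desaturate while the exponent shapes the
curve, and mostly in-gamut content (small overshoot over a threshold L_t)
stays on the chromatic ("linear") path.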
This is the naming xdg-shell stable adopted; it doesn’t make much sense
to keep using “shell” everywhere when all functions call it
“wm_base”.
Finishes what 76211609e3 started.
some safety mechanisms for the async fs animation aren't needed anymore,
possibly due to improved logic and slightly different behaviour on newer
macOS versions. that safety fallback prevented the Split View because
it always returned a rectangle of the whole screen, instead of just
part/half of it.
Fixes #6443
Add "auto" the possible values of target-peak. The default value
for target_peak is to calculate the target using mp_trc_nom_peak.
Unfortunately, this default was outside the acceptable range of
10-10000 nits, which prevented its later reassignment. So add an
"auto" choice to target-peak which lets clients and scripts go back
to using the trc default after assigning a value.
this led to an unexpected videotoolbox-copy hwdec name due to the last
two chars being cut off. since selection is also done by that name one
had to use "videotoolbox-co" to explicitly use the copy mode of
videotoolbox.
I misunderstood how this extension works. If I understand it correctly
now, it's worse than I thought. The key thing is that the (ust, msc,
sbc) triple is not for a single swap event. Instead, (ust, msc) run
independently from sbc. Assuming a CFR display/compositor, this means
you can at best know the vsync phase and frequency, but not the exact
time a sbc changed value.
There is GLX_INTEL_swap_event, which might work as expected, but it has
no EGL equivalent (while GLX_OML_sync_control does, in theory).
Redo the context_glx sync code. Now it's either more correct or less
correct. I wanted to add proper skip detection (if a vsync gets skipped
due to rendering taking too long and other problems), but it turned out
to be too complex, so only some unused fields in vo.h are left of it.
The "generic" skip detection has to do.
The vsync_duration field is also unused by vo.c.
Actually this seems to be an improvement. In cases where the flip call
timing is off, but the real driver-level timing apparently still works,
this will not report vsync skips or higher vsync jitter anymore. I could
observe this with screenshots and fullscreen switching. On the other
hand, maybe it just introduces an A/V offset or so.
Why the fuck can't there be a proper API for retrieving these
statistics? I'm not even asking for much.
Use the extension to compute the (hopefully correct) video delay and
vsync phase.
This is very fuzzy, because the latency will suddenly be applied after
some frames have already been shown. This means there _will_ be "jumps"
in the time accounting, which can lead to strange effects at start of
playback (such as making initial "dropped" etc. frames worse). The only
reasonable way to fix this would be running a few dummy frame swaps at
start of playback until the latency is known. The same happens when
unpausing.
This only affects display-sync mode.
Correct function was not confirmed. It only "looks right". I don't have
the equipment to make scientifically correct measurements.
A potentially bad thing is that we trust the timestamps we're receiving.
Out of bounds timestamps could wreak havoc. On the other hand, this will
probably cause the higher level code to panic and just disable DS.
As a further caveat, this makes a bunch of assumptions about UST
timestamps. If there are delayed frames (i.e. we skipped one or more
vsyncs), the latency logic is mostly reset. There is no attempt to make
the vo.c skipped vsync logic use this. Also, the latency computation
determines a vsync duration, and there's no effort to reconcile or share
the vo.c logic for determining vsync duration.
This option has been deprecated upstream for a long time, probably
doesn't even work anymore, and won't work moving forwards as we replace
the vulkan code by libplacebo wrappers.
I haven't removed the option completely yet since in theory we could
still add support for e.g. a native glslang wrapper in the future. But
most likely the future of this code is deletion.
As an aside, fix an issue where the man page didn't mention d3d11.
This commit bumps the libmpv version to 1.102
drm-osd-plane -> drm-draw-plane
drm-video-plane -> drm-drmprime-video-plane
drm-osd-size -> drm-draw-surface-size
"draw plane", as in the plane that OpenGL draws to, whether it be
video + OSD or just OSD.
"drmprime video plane", as in the plane used for hwdec video imported
via drmprime.
"draw surface size", as in the size of the surface used for the draw plane
The new names are invariant whether or not hwdec_drmprime_drm is being
used. The original naming was very confusing, as when doing
regular rendering (swdec or vaapi) the video would be displayed on the
"OSD plane", and the "Video plane" would remain unused.
Add general primary/overlay plane option to drm-osd-plane-id and
drm-video-plane-id, so that the user can just request any usable
primary or overlay plane for either of these two options. This should
be somewhat more user-friendly (especially as neither of these two
options currently has a useful help function), as usually you would
only be interested in the type of the plane, and not exactly which
plane gets picked.
By design, some vulkan implementations block until vsync during
vkAcquireNextImageKHR. Since mpv only considers the time that
`swap_buffers` spent blocking as constituting part of the vsync, we can
help it out a bit by pre-emptively calling this function here in order
to improve the accuracy of vsync jitter measurements on vulkan.
(If it fails, we just ignore the error and have the user call it a
second time later - maybe it will work then)
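Conceptually, the end of swap_buffers gains something like the following
(a hedged sketch; the vk->* fields and sem_acquire semaphore are hypothetical
stand-ins for the surrounding state):

    // after presenting the current image, immediately try to acquire the next
    // one, so that any vsync blocking inside vkAcquireNextImageKHR happens
    // inside swap_buffers and is attributed to the vsync interval
    VkResult res = vkAcquireNextImageKHR(vk->dev, vk->swapchain, UINT64_MAX,
                                         sem_acquire, VK_NULL_HANDLE,
                                         &vk->last_imgidx);
    if (res != VK_SUCCESS)
        vk->need_acquire = true;   // hypothetical flag: just retry later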
On my system this drops vsync-jitter from ~0.030 to ~0.007, an accuracy
of +/- 100μs. (Which *might* have something to do with the fact that
this is the polling interval for command polling)
Makes performance slightly better when using multiple queues by avoiding
unnecessary semaphores due to bad queue selection.
Also remove an aeons-old workaround for an nvidia bug that only ever
existed in the earliest beta vulkan drivers anyway.
We are currently unnecessarily including vulkan headers even when
not building with vulkan support. I also guarded the GL header
inclusion even though this doesn't appear to break anything today.
Fixes #6330.
This makes the default fit-on-screen, autofit and window-scale
changing behavior use the screen working area, instead of
the whole screen area.
As a result the mpv window doesn't cover the taskbar now when opening
videos with a size larger than the screen size.
The actual behavior now is the same as expected behavior for
usecases 1-4 from #4363.
This commit also removes the screenrc from w32 struct.
The screen rect can now be retrieved via `get_screen_area` function,
which was renamed from `update_screen_rect`.
On a multi-monitor system, if the user moves the window between
monitors, this function will return the current screen area under
the window, and not the screen area of the monitor specified by the
`--screen` option. The `--screen` option sets the initial monitor
the mpv window is displayed on.
Returning -1 in a function with return type bool is the same as
returning true. In the error paths, false should be returned to
indicate that something went wrong.
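A minimal illustration of the pattern being fixed:

    #include <stdbool.h>

    static bool init_thing(int fd)
    {
        if (fd < 0)
            return -1;   // wrong: -1 converts to true, so callers see success
                         // should be: return false;
        return true;
    }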
depending on the capabilities of the system and testing of various
attributes the resulting CGL pixel format can change. due to that
probing it can be helpful to know which pixel format is used to create
the CGL context. added some verbose logging that outputs the final pixel
format.
this adds support for GPU rendered screenshots, DR (theoretically) and
possibly other advanced functions in the future that need to be executed
from the rendering thread.
additionally, frames that would be off screen or not be displayed when on
screen are now dropped.
I was inconsistent about this originally, as the functionality was
moved into the core spec in 1.1 and so both suffixed and unsuffixed
versions of everything exist and can be mixed together.
There's no reason to fail to build with 1.0.39+ so I'm fixing the
names.
Currently, the error paths in init() are a bit confusing, and we can
end up trying to pop the current context when there is no context,
which leads to distracting error messages.
I also added an explicit path to return early if the GPU backend is
not OpenGL or Vulkan. It's pointless to do any other cuda init
after that point. (Of course, someone could write more interops.)
Fixes #6256
Since 810acf32d6 video_plane can be NULL
under some circumstances. While there is a check in init, init treats
this as an error condition and would call uninit, which in turn calls
disable_video_plane, which would then segfault. Fix this by including
a NULL check inside disable_video_plane, so that it doesn't try to
disable what isn't there.
Despite their place in the tree, hwdecs can be loaded and used just
fine by the vulkan GPU backend.
In this change we add Vulkan interop support to the cuda/nvdec hwdec.
The overall process is mostly straightforward, so the main observation
here is that I had to implement it using an intermediate Vulkan buffer
because the direct VkImage usage is blocked by a bug in the nvidia
driver. When that gets fixed, I will revisit this.
Nevertheless, the intermediate buffer copy is very cheap as it's all
device memory from start to finish. Overall CPU utilisation is pretty
much the same as with the OpenGL GPU backend.
Note that we cannot use a single intermediate buffer - rather there
is a pool of them. This is done because the cuda memcpys are not
explicitly synchronised with the texture uploads.
In the basic case, this doesn't matter because the hwdec is not
asked to map and copy the next frame until after the previous one
is rendered. In the interpolation case, we need extra future frames
available immediately, so we'll be asked to map/copy those frames
and vulkan will be asked to render them. So far, harmless right? No.
All the vulkan rendering, including the upload steps, are batched
together and end up running very asynchronously from the CUDA copies.
The end result is that all the copies happen one after another, and
only then do the uploads happen, which means all textures are uploaded
with the same, final, frame data. Whoops. Unsurprisingly this results in
the jerky motion because every 3/4 frames are identical.
The buffer pool ensures that we do not overwrite a buffer that is
still waiting to be uploaded. The ra_buf_pool implementation
automatically checks if existing buffers are available for use and
only creates a new one if it really has to. It's hard to say for sure
what the maximum number of buffers might be but we believe it won't
be so large as to make this strategy unusable. The highest I've seen
is 12 when using interpolation with tscale=bicubic.
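The pool logic is conceptually just the following (a hedged sketch with
hypothetical names, not the actual ra_buf_pool code):

    // return a buffer that is no longer in flight, growing the pool only
    // when every existing buffer is still pending a texture upload
    static struct buf *pool_get(struct pool *pool)
    {
        for (int i = 0; i < pool->num_bufs; i++) {
            if (!buf_in_use(pool->bufs[i]))     // e.g. poll an internal fence
                return pool->bufs[i];
        }
        struct buf *new = buf_create(pool);     // all busy: allocate one more
        pool->bufs[pool->num_bufs++] = new;     // capacity handling omitted
        return new;
    }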
A future optimisation here is to synchronise the CUDA copies with
respect to the vulkan uploads. This can be done with shared semaphores
that would ensure the copy of the second frames only happens after the
upload of the first frame, and so on. This isn't trivial to implement
as I'd have to first adjust the hwdec code to use asynchronous cuda;
without that, there's no way to use the semaphore for synchronisation.
This should result in fewer intermediate buffers being required.
This is arguably a little contrived, but in the case of CUDA interop,
we have to track additional state on the cuda side for each exported
buffer. If we want to be able to manage buffers with an ra_buf_pool,
we need some way to keep that CUDA state associated with each created
buffer. The easiest way to do that is to attach it directly to the
buffers.
The CUDA/Vulkan interop works on the basis of memory being exported
from Vulkan and then imported by CUDA. To enable this, we add a way
to declare a buffer as being intended for export, and then add a
function to do the export.
For now, we support the fd and Handle based exports on Linux and
Windows respectively. There are others, which we can support when
a need arises.
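On the Vulkan side this boils down to roughly the following (a hedged sketch
using the standard external-memory API; dev, size, mem_type and all error
handling come from the surrounding code and are omitted here):

    // 1) at allocation time, declare the memory as exportable
    VkExportMemoryAllocateInfo einfo = {
        .sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO,
        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
    };
    VkMemoryAllocateInfo minfo = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .pNext = &einfo,
        .allocationSize = size,
        .memoryTypeIndex = mem_type,
    };
    VkDeviceMemory mem;
    vkAllocateMemory(dev, &minfo, NULL, &mem);

    // 2) later, export the allocation as an fd that CUDA can import
    VkMemoryGetFdInfoKHR finfo = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
        .memory = mem,
        .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
    };
    int fd = -1;
    vkGetMemoryFdKHR(dev, &finfo, &fd);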
Also note that this is just for exporting buffers, rather than
textures (VkImages). Image import on the CUDA side is supposed to
work, but it is currently buggy and waiting for a new driver release.
Finally, at least with my nvidia hardware and drivers, everything
seems to work even if we don't initialise the buffer with the right
exportability options. Nevertheless I'm enforcing it so that we're
following the spec.
Since the code just broke out of the loop on a match rather than jumping
straight to the end of the function body, it ended up hitting the code
path for when the end of the list was reached.
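Schematically, the bug looked like this (hypothetical names):

    for (n = 0; n < num_entries; n++) {
        if (entries[n].id == id)
            break;              // match found, but...
    }
    // ...execution continued here either way, so the "not in the list"
    // handling also ran on a match; the fix jumps past it (goto done)
    handle_missing_entry();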
since we draw our own title bar we lose the standard functionality of
the system provided title bar. because of that we have to reimplement
the functionality of double clicking the title bar. depending on the
system preferences we want to minimize, zoom or do nothing.
Fixes #6223
On a multi monitor setup, when the center of the window was going off
screen, the icc profile would always switch to the profile of the first
screen.
This fixes the issue by defaulting the value to the current screen.
This was based on the texture height, which was a mistake. In some cases
it could exceed the actual size of the buffer, leading to a vulkan API
error. This didn't seem to cause any problems in practice, since a
too-large synchronization is just bad for performance and shouldn't do
any harm internally, but either way, it was still undefined behavior to
submit a barrier outside of the buffer size.
Fix the calculation, thus fixing this issue.
Since linear downscaling makes sense to handle independently from
linear/sigmoid upscaling, we split this option up. Now,
linear-downscaling is its own option that only controls linearization
when downscaling and nothing more. Likewise, linear-upscaling /
sigmoid-upscaling are two mutually exclusive options (the latter
overriding the former) that apply only to upscaling and no longer
implicitly enable linear light downscaling as well.
The old behavior was very confusing, as evidenced by issues such
as #6213. The current behavior should make much more sense, and only
minimally breaks backwards compatibility (since using linear-scaling
directly was very uncommon - most users got this for free as part of
gpu-hq and relied only on that).
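For example, the two aspects can now be requested independently in mpv.conf:

    linear-downscaling=yes   # linearize only when downscaling
    sigmoid-upscaling=yes    # sigmoidized upscaling, no longer implies
                             # linear light downscaling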
Closes #6213.
when entering a Split View a windowDidEnterFullScreen event happens
without a previous toggleFullScreen call. in that case it tries to stop
an animation that was never initiated by us and basically breaks the
system initiated fullscreen, or in this case the Split View. immediately
after entering the fullscreen it tries to stop the animation and
resizes the window, which causes the window to exit fullscreen. only
try to stop an animation that was initiated by us and is safe to stop.
by default the pixel format creation falls back to software renderer
when everything fails. this is mostly needed for VMs. additionally one
can directly request an sw renderer or exclude it entirely.
moved the retrieval of the macOS specific options from the backend
initialisation to the initialisation of the CocoaCB class, the earliest
point possible. this way macOS specific options can be used for the
opengl context creation for example.
This is to improve the experience when running with default settings
on a driver that doesn't have any overlay planes (or indeed only one
plane), but still supports DRM atomic. Since the drmprime video plane
is set to pick an overlay plane by default it would fail on these
drivers due to not being able to create any atomic context. Users with
such cards had to specify --drm-video-plane-id manually to some bogus
value (it's not used after all).
The "video" plane is only ever used by the drmprime-drm hwdec interop,
which is not used at all in the typical usecase where everything is
actually rendered on to the "OSD" plane using EGL, so having an atomic
context without the "video" plane should be fine most of the time.
For vec3, the alignment and size differ. The current code will pack a
struct like { vec3; float; vec2 } into 8 machine words, whereas the spec
would only use 6.
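Concretely, under the spec's rules that struct should lay out as (offsets in
bytes, 1 word = 4 bytes):

    vec3   offset  0, size 12   (base alignment 16)
    float  offset 12, size  4   (packs into the vec3's tail padding)
    vec2   offset 16, size  8   (base alignment 8)
    total: 24 bytes = 6 words, vs. 32 bytes = 8 words when the vec3 is
    padded out to a full 16 bytes and everything after it shifts accordingly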
This actually fixes a real bug: The only place in the code I could find
where it was conceivably possible that a vec3 is followed by a float was
when using --gpu-dumb-mode in combination with --gamma-factor, and only
when --gpu-api=vulkan. So it's no surprise nobody ran into it yet.
These used to be unsupported long ago, but it seems glslang added
support in the meantime. (I don't know which version, but I'm guessing
it was long enough ago that we don't have to add a feature check)
Should hopefully help make push constant layouts more robust against
possible bugs either in our code or in the driver.
Certain low-end Mali GPUs have a rather low precision and overflow
during the PRNG calculations, thereby breaking e.g. deband-grain.
Modify the permute() to avoid this; this does not impact the
quality of the PRNG output (noticeably).
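The general idea of the fix, sketched in C for illustration (the real change
is in the GLSL PRNG, and the exact constants are not reproduced here): reduce
the argument into a small range before squaring, so the intermediate value
stays representable in low precision.

    #include <math.h>

    // hypothetical permute(): without the initial reduction, x*x can exceed
    // what a low-precision float on these GPUs can represent
    static float permute(float x)
    {
        x = fmodf(x, 289.0f);                       // keep the intermediate small
        return fmodf((34.0f * x + 1.0f) * x, 289.0f);
    }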
This problem was observed on:
GL_VENDOR='ARM', GL_RENDERER='Mali-T720'
GL_VERSION='OpenGL ES 3.1 v1.r15p0-00rel0.bdd9e62cdc8c88e0610a16b5901161e9'
Upstream has this now. Didn't really make any difference for me (except
making the polar compute shader 2%-3% faster), but maybe it does for
somebody else.
When using multiple compute shaders as part of the same pass, there can
be a conflict in the block sizes. In the problematic case, the HDR
detection shader can collide with the polar sampling shader. In this
case, the solution is clear - the passes that can handle any size should
"give in" and not overwrite the block sizes.
Fixes #6083.
instead of force unwrapping and chaining the optional vars in our
containsMouseLocation function, safely unwrap and guard the resulting
var.
Fixes #6062
Add another parameter to mpv_opengl_drm_params to hold the FD to the
render node, so that the fd can be passed to hwdec_vaegl.
The render node is opened in context_drm_egl and inferred from the
primary device fd using drmGetRenderDeviceNameFromFd.
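The relevant libdrm call roughly looks like this (a hedged fragment;
primary_fd and the error handling come from the surrounding code):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <xf86drm.h>

    // derive the render node path (e.g. /dev/dri/renderD128) from the
    // primary node's fd and open it, so it can be handed to hwdec_vaegl
    char *node = drmGetRenderDeviceNameFromFd(primary_fd);
    int render_fd = node ? open(node, O_RDWR | O_CLOEXEC) : -1;
    free(node);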
The previous code did not save enough information about the old state,
and could end up changing what plane the fbcon:s FB got attached to,
or in the worst case causing a blank screen (observed in some multi-screen
setups on Sandy Bridge).
In addition refactor the handling of drmModeModeInfo property blobs to
not leak, as well as enable reuse of already created blobs.
init is a reserved keyword and Swift 4.2 got a bit stricter about using
it. this could be fixed by escaping init with backticks, but that makes
the code uglier. hence i just renamed init to initialized and for
consistency uninit to uninitialized.
Fixes #5899
the pre-allocation was needed because the layer allocated an opengl
context async itself and we couldn't influence that. so we had to start
the core after the context was actually allocated. furthermore a window,
view and layer hierarchy had to be created so the layer would create
a context.
now, instead of relying on the layer to create a context we do this
manually and re-use that context later when the layer wants to create
one async itself.
This sacrifices some dynamic range for well-behaved sources, but
prevents catastrophic desaturation on badly mastered / too bright
sources. I think that's the better trade-off. This makes the
desaturation algorithm much "safer" to deploy by default, as well. One
could even argue going up to strength 1.0, which works better for some
sources but worse for others. But I think the current strength is the
best trade-off even after this change.
The default get_format already does exactly this, so we don't need to
duplicate it.
The only potential problem with this is that the logic doesn't entirely
prevent the avcodec_default_get_format hw_device_ctx path from being
triggered, which would probably work, but has unknown consequences and
interactions. But the way the logic currently works it can't happen,
provided the hwaccel metadata libavcodec provides is correct.
For some reason, the X default modifier map binds shift+tab to
ISO_Left_Tab instead of the regular Tab. So to get Shift+TAB recognized
by mpv, we also need to accept ISO_Left_Tab.
This patch matches what other programs like e.g. Qt do, which treat Tab
and ISO_Left_Tab as the same thing.
God only knows why the distinction exists, and why X decides to mix up
its bindings like that.
Fixes #5849
If anyone happened to build with GL disabled, this could lead to option
changes not always refreshing the screen. Since vo_gpu is always enabled
now (just not necessarily any backend for it), we can drop the #if
completely.
(The way this works is a bit idiotic - the option cache exists only to
grab the change notification, which will trigger a redraw and make
vo_gpu update its own second copy of them. But at least it avoids some
layering issues for now.)
This was always a legacy thing. Remove it by applying an orgy of
mp_get_config_group() calls, and sometimes m_config_cache_alloc() or
mp_read_option_raw().
win32 changes untested.
The --hwdec* options are a good fit for the vd_lavc local option
struct. This annoyingly requires manual prefixing of most of these
options with --vd-lavc (could be avoided by using more sub-struct
craziness, but let's not).
With the advent of actual HDR devices, my real measured ICC profile has
an "infinite" contrast, since the display is completely off on pure
black inputs. 100k:1 might not be enough, so let's just bump it up to
1m:1 to be safe.
Also, improve the logging in the case that the detected contrast is too
high by default.
First fix a memory leak when skipping cursor planes by inverting the
check and putting everything, but the free, in the body.
Then fix a missed drmModeFreePlane by simply copying the fields of the
drmModePlane we are interested in and freeing the drmModePlane struct
early.
Until recently, ao_lavc and vo_lavc started encoding whenever the core
happened to send them data. Since audio and video are not initialized at
the same time, and the muxer was not necessarily opened when the first
encoder started to produce data, the resulting packets were put into a
queue. As soon as the muxer was opened, the queue was flushed.
Change this to make the core wait with sending data until all encoders
are initialized. This has the advantage that we don't need to queue up
the packets.
The user won't want to have those in the video (I think). The core can
sporadically issue redraws, which is what you want for actual playback,
but not in encode mode. vo_lavc can explicitly detect those and skip
them. It only requires switching to a more advanced internal VO API.
The comments in vo.h are because vo_lavc draws to one of the images in
order to render OSD. This is OK, but might come as a surprise to whoever
calls draw_frame, so document it. (Current callers are OK with it.)
Inspired by kmscube, first try to pick the Encoder and CRTC already
associated with the selected Connector, if any. Otherwise try to find
the first matching encoder & CRTC like before.
The previous behavior had problems when using atomic
modesetting (crtc_setup_atomic) when we picked an Encoder & CRTC that
was currently being used by the fbcon together with another Encoder.
drmModeSetCrtc was able to "steal" the CRTC in this case, but using
atomic modesetting we do not seem to get this behavior automatically.
This should also improve behavior somewhat when run on a multi screen
setup with regards to deinit and VT switching (still sometimes you end
up with a blank screen where you previously had a cloned display of
your fbcon).
Add some properties which were forgotten in crtc_setup_atomic.
In both, change to not use the DRM_MODE_PAGE_FLIP_EVENT | DRM_MODE_ATOMIC_NONBLOCK
flags. This should make it more similar to drmModeSetCrtc, which it aims to
replace (takes effect directly, and is a blocking call). This also saves us the
trouble of having to set up a poll to wait for the pageflip, which would've been
necessary with DRM_MODE_PAGE_FLIP_EVENT, in both crtc_setup_atomic and
crtc_release_atomic.
This patch will make sure that the video plane is hidden when unused.
When using high resolution modes, typically UHD, and embedding mpv,
having the video plane sitting in the back when you don't play any video
eats a lot of memory bandwidth for compositing.
This patch makes sure that the video layer is just disabled before and
after playback.
This commit adds atomic modesetting when using the atomic renderer.
This is actually needed when using an OSD with a smaller size than the screen resolution.
It will also make the DRM atomic path more consistent.
We are currently using primary / overlay plane DRM objects, assuming that the primary plane is the OSD and the overlay plane is the video.
This commit does two things:
- replace the primary / overlay plane members with osd and video plane members, without that assumption
- add two more options to determine which of the primary / overlay planes is associated with the OSD / video; osd will default to overlay and video to primary if unspecified
This patch adds:
- a DRM connector object to the atomic context.
- an fd property to the DRM atomic object, as well as a method to read blob type properties.
This ensures that the proper connector is picked up, especially when specifying it
from the command line, and also makes sure we're using the right one when embedding
with interop into an application.
That new API was introduced and allows having several native resources.
This uses that mechanism for DRM resources rather than the deprecated
opengl-cb structs.
This patch therefore adds two structs that can be used with the drm atomic interop:
- mpv_opengl_drm_params: which will hold all the DRM handles
- mpv_opengl_drm_osd_size: which will hold the OSD layer size
This commit adds a drm-osd-size=WxH parameter to the command line, which
allows defining the OSD plane dimensions. The OSD can be upscaled to
the screen resolution when having the OSD at video resolution is too heavy.
This is especially useful for UHD modes on embedded devices where
the GPU cannot handle UHD modes at a decent framerate.
Define a hard-coded value for gl_NumWorkGroups if it is not available.
This adds an additional requirement of needing a shader recompile for
all window size changes.
This was considered a worthwhile compromise as currently f.ex. d3d11
completely lacks any peak computation - this is a major quality of
life upgrade.
This is for working around bugs in certain Android devices. At least one
device fails to sort EGLConfigs by size, so eglChooseConfig() ends up
choosing a config with 5/6/5 bits per r/g/b component. The other
attributes in the affected EGLConfigs did not look like they should
affect the sorting process as specified by the EGL 1.4 standard.
The device was reported as:
Sony Xperia Z3 Tablet Compact
Firmware 6.0.1 build number 23.5.A.1.291
GL_VERSION='OpenGL ES 3.0 V@140.0 AU@ (GIT@I741a3d36ca)'
GL_VENDOR='Qualcomm'
GL_RENDERER='Adreno (TM) 330'
Other Qualcom/Adreno devices have been reported as unaffected by this
(including some with same GL_RENDERER string).
"Fix" this by always requiring at least 8 bit. This means it would fail
on devices which cannot provide this. We're fine with this.
mpv-android/mpv-android#112
This was supposed to be a replacement for encode_lavc_discontinuity()
(so we don't need to store last_video_in_pts in a way which requires
synchronization). Unfortunately, VOCTRL_RESET is also called before
termination, and even though it shouldn't matter as far as the VO API is
concerned, it does. It's because vo_lavc.c buffers a frame to compute
the frame duration.
Drop this code. The consequence is that it appears to encode 2 frames
with the same PTS if multiple files are encoded into one. Before this,
it merely dropped a frame (maybe the first of every subsequent file, not
sure).
The main change is that we wait with opening the muxer ("writing
headers") until we have data from all streams. This fixes race
conditions at init due to broken assumptions in the old code.
This also changes a lot of other stuff. I found and fixed a few API
violations (often things for which better mechanisms were invented, and
the old ones are not valid anymore). I try to get away from the public
mutex and shared fields in encode_lavc_context. For now it's still
needed for some timestamp-related fields, but most are gone. It also
removes some bad code duplication between audio and video paths.
1. I want to get away from mp_image_params (maybe).
2. For encoding mode, it's convenient to get the nominal_fps, which is
a mp_image field, and not in mp_image_params.
Also rename stereo3d to stereo_in. The only real change is that the
vo_gpu OSD code now uses the actual stereo 3D mode, instead of the
--video-stereo-mode value. (Why does this vo_gpu code even exist?)
Attempts to enable the following things:
- let a render API user do "proper" audio-sync video timing itself
- make it possible to not re-render repeated frames if the API user has
better mechanisms available (e.g. waiting for a DisplayLink cycle
instead)
- allow the user to delay or skip redraws if it makes sense
Basically this information will be needed by API users who want to be
"clever" about optimizing timing and rendering.
In MPV_RENDER_PARAM_ADVANCED_CONTROL mode, a simple update callback does
not necessarily make the API user redraw. So handle it differently.
For one, setting vo->want_redraw already uses the "normal" redraw path,
which will call draw_frame() and set next_frame.
Then there are redraws triggered by mpv_render_context_set_parameter(),
which are on the render thread, and would require a separate mechanism.
I decided this is not really a good idea, since it's not even clear that
setting an arbitrary parameter should redraw. Also this could trigger an
unbounded number of redraws. The user can trigger redraws manually if
really needed, depending on the parameter that's being set. If we really
wanted vo_libmpv to do this, we could add a new flag like need_redraw,
which would be 4 lines of code or so.
update() used to require the lock, but now it doesn't matter. It's
slightly better to do it outside of the lock now, in case the update
callback reschedules before returning, and the user render thread tries
to acquire the still held lock (which would require 2 more context
switches).