when quitting mpv in fullscreen the system always switches to Space 1
regardless of which Space mpv was on before switching to fs. to fix
this we close the window before quitting fs. that way the system
switches back to the Space mpv was on before entering fs.
new events were added but not fetched by the vo, because we didn't
signal the vo that new events were available.
actually wake up the vo when new events are available.
#6299 reported problems with earlier 3.0.x swift versions. i tested with
3.0.2/SDK 10.12.2 and just assumed it would also work with the older
3.0.x swift and 10.12.x SDK versions. due to the unstable nature of
swift there were slight API differences that caused build problems.
since swift is bundled with the SDK we just bump the minimum swift
version.
Regression from 8e3308d687.
Broken cases were:
* --no-cursor-autohide acted like --cursor-autohide=always.
* --cursor-autohide-fs-only always hid the cursor if starting
non-fullscreen; entering fullscreen at least once fixed it.
The old size limit was chosen before LUT textures were supported in user
shaders. At that time, the whole user shader would be compiled and run
on the GPU, which made large user shaders impractical to use.
With the introduction of LUT textures, the old size limit doesn't make
any sense anymore. For example, a 1024x1024 rgba16f LUT alone costs 32MB
of shader size.
Fix this by increasing the size limit to a value that's unlikely to be
reached.
this was meant to be fixed by 546f038, but with --disable-cplugins the
do_the_symbol_stuff function was never called, and the handle_add_object
function was again always called before the actual linking task was
created.
to fix this we explicitly call handle_add_object only after the same
tasks that the do_the_symbol_stuff function is called after.
Fixes #6028
Adds the negation missed in 8816e1117e
when moving from a positive-is-active to a positive-is-idle variable.
This leads to proper updates of properties such as "eof-reached",
and also fixes screensaver state updates.
Separately found and fixed by avih and wnoun.
Co-authored-by: wnoun <wnoun@outlook.com>
Manual changes done:
* Merged the interface-changes under the already master'd changes.
* Moved the hwdec-related option changes to video/decode/vd_lavc.c.
The modulo operator can be used to check whether a size is a multiple of
a certain number.
The equal operator can be used to verify whether the sizes of different
textures align.
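For illustration only (these exact conditions are hypothetical, and
assume the operators are used in the RPN size expressions of user shader
//!WHEN conditions):

    //!WHEN LUMA.w 32 % 0 =
    //!WHEN LUMA.w CHROMA.w 2 * =

The first condition holds when the luma width is a multiple of 32, the
second when the luma width is exactly twice the chroma width.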
Before this commit, the texture offset was set after all source textures
were finalized, which meant CHROMA hooks weren't able to align with the
luma planes. This could be problematic for chroma prescalers utilizing
information from the luma plane.
Fix this by finding the reference texture early, and setting the global
texture offset early.
Only printable ASCII characters were considered to be valid text. Make
it possible for UTF-8 contents to also be considered valid.
This does not make the SDH subtitle filter support non-English
languages. It just prevents the filter from blindly marking lines that
contain only non-ASCII UTF-8 characters as empty.
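A minimal sketch of the kind of byte check this amounts to (hypothetical
helper, not the actual filter_sdh.c code):

    #include <stdbool.h>

    // Accept printable ASCII as before, but also any byte >= 0x80, i.e.
    // the lead/continuation bytes of multi-byte UTF-8 sequences.
    static bool is_text_byte(unsigned char c)
    {
        return (c >= 0x20 && c < 0x7f) || c >= 0x80;
    }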
Fixes #6502
The seek range update happened too early and did not take the removed
head packets into account, and therefore missed that the queue was not
BOF anymore.
This made it impossible to seek backward before the first packet of
the first seek range.
Fix it by moving the seek range update to after the possible removal and
the change of the BOF flag.
Fixes: #6522
As stated in the original commit message, if the demuxer set the start
time to the first subtitle packet, the subtitles would be shifted
incorrectly. It appears that this is the case for external PGS subtitles.
This reverts commit 520fc74036.
Fixes #5485
This allows context_drm_egl to use as many buffers as libgbm or the
swapchain_depth setting allows (whichever is smaller).
On pause and on still images (cover art etc.), to make sure that output
does not lag behind user input, the swapchain is drained and reverts to
working in a dual-buffered (equivalent to swapchain-depth=1) manner.
When possible (swapchain-depth>=2), the wait on the page flip event is
now not done immediately after queueing, but is deferred to the next
invocation of swap_buffers, which should give us more CPU time between
invocations.
However, since gbm_surface_has_free_buffers() can only tell us a boolean
value and not how many buffers we have left, we are forced to do this
contortionist dance where we first overshoot until
gbm_surface_has_free_buffers() reports 0, followed by immediately
waiting so we can free a buffer, to be able to get the deferred wait on
the page flip rolling.
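A rough sketch of that flow (not the actual context_drm_egl.c code; the
struct and helper names here are made up, only the libdrm/libgbm calls
are real):

    #include <stdbool.h>
    #include <stdint.h>
    #include <gbm.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    struct ctx {
        int drm_fd;
        struct gbm_surface *gbm_surface;
        bool waiting_for_flip;
    };

    static void flip_handler(int fd, unsigned int seq, unsigned int sec,
                             unsigned int usec, void *data)
    {
        ((struct ctx *)data)->waiting_for_flip = false;
    }

    static void wait_for_flip(struct ctx *c)
    {
        drmEventContext ev = {
            .version = DRM_EVENT_CONTEXT_VERSION,
            .page_flip_handler = flip_handler,
        };
        while (c->waiting_for_flip)
            drmHandleEvent(c->drm_fd, &ev);
    }

    // Called from swap_buffers; fb_id belongs to the front buffer locked
    // via gbm_surface_lock_front_buffer() (omitted here).
    static void queue_frame(struct ctx *c, uint32_t crtc_id, uint32_t fb_id)
    {
        // Deferred wait: only now block on the flip queued last time.
        if (c->waiting_for_flip)
            wait_for_flip(c);

        drmModePageFlip(c->drm_fd, crtc_id, fb_id,
                        DRM_MODE_PAGE_FLIP_EVENT, c);
        c->waiting_for_flip = true;

        // GBM only reports *whether* a free buffer exists, not how many,
        // so once it runs out we must wait immediately to release one.
        if (!gbm_surface_has_free_buffers(c->gbm_surface))
            wait_for_flip(c);
    }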
With this commit we do not rely on the default vsync fences/latency
emulation of video/out/opengl/context.c, but supply our own, since the
places where we create and wait for the fences need to be somewhat
different for best performance.
Minor fixes:
* According to the GBM documentation, all BOs obtained with
gbm_surface_lock_front_buffer must be released before
gbm_surface_destroy is called on the surface.
* We let the page flip handler function handle the waiting_for_flip flag.
This solves some edge cases when using files with very weird metadata
(e.g. MaxCLL 10k and so forth). Instead of just blindly seeding it with
the tagged metadata, forcibly set the initial state from the detected
values.
Rather than the linear cd/m^2 units, these (relative) logarithmic units
lend themselves much better to actually detecting scene changes,
especially since the scene averaging was changed to also work
logarithmically.
Gamut mapping can take very bright out-of-gamut colors into the
negatives, which completely destroys the color balance (which tone
mapping tries its best to preserve).
This change switches to a logarithmic mean to estimate the average
signal brightness. This handles dark scenes with isolated highlights
much more faithfully than the linear mean did, since the log of the
signal roughly corresponds to the perceptual brightness.
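Conceptually (a minimal sketch, not the actual shader code; names are
illustrative), the logarithmic mean is a geometric mean of the signal:

    #include <math.h>

    // Geometric ("logarithmic") mean of n signal values; the small floor
    // avoids log(0) on black pixels.
    static float log_mean(const float *sig, int n)
    {
        double acc = 0.0;
        for (int i = 0; i < n; i++)
            acc += log(fmax(sig[i], 1e-6));
        return (float)exp(acc / n);
    }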
In theory our "eye adaptation" algorithm works both ways: darkening
bright scenes and brightening dark scenes. But I've always
just prevented the latter with a hard clamp, since I wanted to avoid
blowing up dark scenes into looking funny (and full of noise).
But allowing a tiny bit of over-exposure might be a good thing. I won't
change the default just yet (better let users test), but a moderate
value of 1.2 might be better than the current 1.0 limit. Needs testing
especially on dark scenes.
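In pseudo-code, the clamp amounts to something like this (a conceptual
sketch only; the variable names and exact placement are assumptions):

    #include <math.h>

    // "Eye adaptation" gain: darkening bright scenes is unrestricted,
    // brightening dark scenes is clamped to a tunable limit
    // (1.0 = never brighten, 1.2 = allow up to 20% over-exposure).
    static float adapt_gain(float scene_avg, float target_avg,
                            float max_boost)
    {
        float gain = target_avg / fmaxf(scene_avg, 1e-6f);
        return fminf(gain, max_boost);
    }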
The previous approach of using an FIR with tunable hard threshold for
scene changes had several problems:
- the FIR involved annoying hard-coded buffer sizes, high VRAM usage,
and the FIR sum was prone to numerical overflow which limited the
number of frames we could average over.
- the hard scene change detection was prone to both false positives and
false negatives, each with their own (annoying) issues.
We also totally redesign the scene change detection: scrap the old
approach entirely and switch to a dual approach of using a simple
single-pole IIR low pass filter to smooth out noise, while using a
softer scene change curve (with tunable low and high thresholds), based
on `smoothstep`. The IIR filter is extremely simple in its
implementation and has an arbitrarily user-tunable cutoff frequency,
while the smoothstep-based scene change curve provides a good, tunable
tradeoff between adaptation speed and stability - without exhibiting
either of the traditional issues associated with the hard cutoff.
Another way to think about the new options is that the "low threshold"
provides a margin of error within which we don't care about small
fluctuations in the scene (which will therefore be smoothed out by the
IIR filter).
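A compact sketch of the two pieces (illustrative only, not the real
shader code; the names and normalization are assumptions):

    #include <math.h>

    // Single-pole IIR low pass: "coeff" is derived from the user-tunable
    // cutoff frequency and the frame duration.
    static float iir_step(float state, float input, float coeff)
    {
        return state + coeff * (input - state);
    }

    // Soft scene-change weight: below "low" the error is treated as noise
    // and left to the IIR filter, above "high" we snap to the new value,
    // and in between smoothstep gives a gradual blend.
    static float scene_weight(float error, float low, float high)
    {
        float t = fminf(fmaxf((error - low) / (high - low), 0.0f), 1.0f);
        return t * t * (3.0f - 2.0f * t);   // smoothstep(low, high, error)
    }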
Instead of desaturating towards luma, we desaturate towards the
per-channel tone mapped version. This essentially provides a smooth
roll-off towards the "hollywood"-style (non-chromatic) tone mapping
algorithm, which works better for bright content, while continuing to
use the "linear"-style (chromatic) tone mapping algorithm for primarily
in-gamut content.
We also split up the desaturation algorithm into strength and exponent,
which allows users to use less aggressive desaturation settings without
affecting the overall curve.
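Schematically (a hedged sketch; the real curve differs in its details
and all names are illustrative), each channel becomes a mix between the
two tone mapped results, with the blend weight shaped by the strength
and exponent:

    #include <math.h>

    // Weight grows as the signal exceeds the display range; "strength"
    // scales it, "exponent" controls how quickly it ramps up.
    static float desat_weight(float sig, float strength, float exponent)
    {
        return fminf(strength * powf(fmaxf(sig - 1.0f, 0.0f), exponent),
                     1.0f);
    }

    // Per channel: blend the chromatic ("linear") result towards the
    // per-channel ("hollywood") result.
    static float desat_mix(float chromatic, float per_channel, float w)
    {
        return chromatic * (1.0f - w) + per_channel * w;
    }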
This is the naming that xdg-shell stable adopted; it doesn't make much
sense to keep using "shell" everywhere when all functions call it
"wm_base".
Finishes what 76211609e3 started.
Too many broken hardware decoders. Noticed wrong decoding of a video
file encoded with x262 on RX Vega when using VAAPI (Mesa 18.3.2).
Looks fine with swdec and a cheap hardware BD player.
Reverts 017f3d0674
some safety mechanisms for the async fs animation aren't needed anymore,
due to possibly improved logic and slightly different behaviour on newer
macOS versions. that safety fallback prevented Split View because it
always returned a rectangle of the whole screen, instead of just a
part/half of it.
Fixes #6443
Add "auto" the possible values of target-peak. The default value
for target_peak is to calculate the target using mp_trc_nom_peak.
Unfortunately, this default was outside the acceptable range of
10-10000 nits, which prevented its later reassignment. So add an
"auto" choice to target-peak which lets clients and scripts go back
to using the trc default after assigning a value.
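For example, through the client API a script or client can now restore
the default after having forced a value (a hedged example; the values
used here are purely illustrative):

    #include <mpv/client.h>

    // Force a specific target peak, then go back to the trc-derived
    // default by assigning "auto".
    static void tweak_target_peak(mpv_handle *mpv)
    {
        mpv_set_property_string(mpv, "target-peak", "400");
        mpv_set_property_string(mpv, "target-peak", "auto");
    }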