This is the naming that the stable xdg-shell protocol adopted; it doesn't
make much sense to keep using "shell" everywhere when all the functions
call it "wm_base".
Finishes what 76211609e3 started.
Too many broken hardware decoders. Noticed wrong decoding of a video
file encoded with x262 on RX Vega when using VAAPI (Mesa 18.3.2).
Looks fine with swdec and a cheap hardware BD player.
Reverts 017f3d0674e48a587b9e6cd7a48f15519c799c3e
Some safety mechanisms for the async fullscreen animation aren't needed
anymore, due to improved logic that is now possible and slightly
different behaviour on new macOS versions. That safety fallback
prevented Split View because it always returned a rectangle of the whole
screen, instead of just part/half of it.
Fixes #6443
Add "auto" the possible values of target-peak. The default value
for target_peak is to calculate the target using mp_trc_nom_peak.
Unfortunately, this default was outside the acceptable range of
10-10000 nits, which prevented its later reassignment. So add an
"auto" choice to target-peak which lets clients and scripts go back
to using the trc default after assigning a value.
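A minimal sketch of how such a choice can be wired up with mpv's option
macros (the struct field name and exact macro arguments here are
assumptions):

```c
// Sketch: accept "auto" (mapped to 0) alongside the 10-10000 nit range,
// so scripts can restore the trc-based default later.
OPT_CHOICE_OR_INT("target-peak", target_peak, 0, 10, 10000,
                  ({"auto", 0})),
```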
This led to an unexpected videotoolbox-copy hwdec name, due to the last
two chars being cut off. Since selection is also done by that name, one
had to use "videotoolbox-co" to explicitly use the copy mode of
videotoolbox.
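For illustration, an undersized buffer produces exactly that truncated
name (the buffer size here is an assumption, not the actual code):

```c
#include <stdio.h>

int main(void)
{
    // "videotoolbox-copy" is 17 chars; a 16-byte buffer keeps only 15
    // of them plus the terminating NUL:
    char name[16];
    snprintf(name, sizeof(name), "%s-copy", "videotoolbox");
    printf("%s\n", name); // prints "videotoolbox-co"
}
```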
Merge file-size/file-format and audio channel-count/format into one line
respectively. This fixes stats overflowing the screen in aspect ratios
wider than 16:9. In this case, a problem was reported for ~21:9, which
should be common enough for us to "support" it.
Idle handlers used to not be executed when timers were active.
Now they are executed (see the sketch after this list):
* After all expired timers have been executed
* After all events have been processed (same as when there are no timers)
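A rough sketch of the resulting dispatch order, as an illustrative
C-style loop with hypothetical helper names (the real change is in the
Lua event loop):

```c
// Illustrative dispatch loop; all helper names are made up.
while (running) {
    double timeout = run_expired_timers(); // time until the next timer
    run_idle_handlers();  // new: also runs while timers are active
    wait_and_process_events(timeout);
    run_idle_handlers();  // as before: after all events are processed
}
```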
Commit e392d6610d modified the native
demuxer to use track gain as a fallback for album gain if the latter is
not present. This commit makes functionally equivalent changes in the
libavformat demuxer.
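A hedged sketch of the fallback using the side data libavformat exports
(AVReplayGain from libavutil/replaygain.h marks unset gains with
INT32_MIN; the function name is illustrative):

```c
#include <libavutil/replaygain.h>

// Fall back to track gain when album gain is absent, mirroring what
// e392d6610d did in the native demuxer.
static int32_t effective_album_gain(const AVReplayGain *rg)
{
    if (rg->album_gain != INT32_MIN)
        return rg->album_gain;
    return rg->track_gain; // may itself be INT32_MIN (no gain at all)
}
```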
If the number of chapters is 0, the chapter list can be NULL. clang
complains that we pass NULL to qsort(). This is yet another pointless UB
that exists for no reason other than wasting your time.
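The straightforward fix is to guard the call, since `qsort(NULL, 0, ...)`
is technically UB (variable names here are illustrative):

```c
// Passing NULL to qsort() is UB even when nmemb is 0, so skip the call:
if (num_chapters > 0)
    qsort(chapters, num_chapters, sizeof(chapters[0]), cmp_chapter);
```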
Seems to happen often with ytdl pseudo-DASH streams, so whatever. I
couldn't reproduce it and check what triggers it, I just remember seeing
the error message and found it annoying.
I misunderstood how this extension works. If I understand it correctly
now, it's worse than I thought. The key thing is that the (ust, msc,
sbc) triple is not for a single swap event. Instead, (ust, msc) run
independently from sbc. Assuming a CFR display/compositor, this means
you can at best know the vsync phase and frequency, but not the exact
time an sbc changed value.
There is GLX_INTEL_swap_event, which might work as expected, but it has
no EGL equivalent (while GLX_OML_sync_control does, in theory).
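For reference, this is the query the extension provides; it returns a
consistent snapshot, not the time sbc last changed (a sketch with
assumed `display`/`drawable` variables, not mpv's code):

```c
// GLX_OML_sync_control: (ust, msc) track the vsync clock, while sbc
// counts completed swaps independently of the two.
int64_t ust, msc, sbc;
if (glXGetSyncValuesOML(display, drawable, &ust, &msc, &sbc)) {
    // ust: system time of the most recent vsync (microseconds)
    // msc: media stream (vsync) counter at that time
    // sbc: swap buffer count; when it last changed is not recoverable
}
```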
Redo the context_glx sync code. Now it's either more correct or less
correct. I wanted to add proper skip detection (if a vsync gets skipped
due to rendering taking too long and other problems), but it turned out
to be too complex, so only some unused fields in vo.h are left of it.
The "generic" skip detection has to do.
The vsync_duration field is also unused by vo.c.
Actually this seems to be an improvement. In cases where the flip call
timing is off, but the real driver-level timing apparently still works,
this will not report vsync skips or higher vsync jitter anymore. I could
observe this with screenshots and fullscreen switching. On the other
hand, maybe it just introduces an A/V offset or so.
Why the fuck can't there be a proper API for retrieving these
statistics? I'm not even asking for much.
The redundancy here always annoyed me. Back then I didn't change it
because it's hard to test and I just had fixed something. This doesn't
matter anymore, so simplify it, without testing and with the risk that
something breaks (why care).
--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but anything you're receiving.
In theory, this could be easily done with custom I/O. In practice, all
the halfassed garbage in FFmpeg shits itself and fucks up like there's
no tomorrow. There are several problems:
1. FFmpeg pretends you can do custom I/O, but in reality there's a lot
that custom I/O can't do. hls.c even contains explicit checks to disable
important things if custom I/O is used! In particular, you can't use the
HTTP keepalive functionality (needed for somewhat decent HLS
performance), because some cranky asshole in the cursed FFmpeg dev.
community blocked it.
2. The implementation of nested I/O callbacks (io_open/io_close) is
bogus and halfassed (like everything in FFmpeg, really). It will call
io_open on some URLs without ever calling io_close. Instead, it'll call
avio_close() on the context directly. From what I can tell, avio_close()
is incompatible with custom I/O anyway (overwhelmed by their own garbage,
the FFmpeg devs created the io_close callback for this reason, because
they couldn't fix their own fucking garbage). This commit adds some
shitty workaround for this (technically triggers UB, but with that
garbage heap of a library we depend on it's not like it matters).
3. Even then, you can't proxy I/O contexts (see 1.), but we can just
keep track of the opened nested I/O contexts. The bytes_read is
documented as not public, but reading it is literally the only way to
get what we want.
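A hedged sketch of a workaround under those constraints; the
bookkeeping helpers are hypothetical stand-ins for whatever list of
nested contexts the player keeps, and `bytes_read` is the non-public
statistic mentioned above:

```c
#include <libavformat/avformat.h>

static int my_io_open(struct AVFormatContext *s, AVIOContext **pb,
                      const char *url, int flags, AVDictionary **opts)
{
    int ret = avio_open2(pb, url, flags, NULL, opts);
    if (ret >= 0)
        track_nested_avio(s->opaque, *pb); // hypothetical bookkeeping
    return ret;
}

static void my_io_close(struct AVFormatContext *s, AVIOContext *pb)
{
    // Harvest the statistic before the context goes away. Some URLs
    // never reach this callback (lavf calls avio_close() directly),
    // which is why the tracked list above is needed at all.
    accumulate_bytes_read(s->opaque, pb->bytes_read); // hypothetical
    untrack_nested_avio(s->opaque, pb);               // hypothetical
    avio_close(pb);
}
```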
A more reasonable approach would probably be using curl. It could
transparently handle the keep-alive thing, as well as propagating
cookies etc. (which doesn't work with the FFmpeg approach if you use
custom I/O). Of course even better if there were an independent HLS
implementation anywhere. FFmpeg's HLS support is so embarrassingly
pathetic and just goes to show that they belong in the past
(multimedia from 2000-2010) and should either modernize or fuck off.
With FFmpeg's shit-crusted structures, toxic communities, and retarded
assholes denying progress, probably the latter. Did I already mention
that FFmpeg is a shit fucked steaming pile of garbage shit?
And all just to get some basic I/O stats, that any proper HLS consumer
requires in order to implement adaptive streaming correctly (i.e.
browser based players, and nothing FFmshit based).
Use the extension to compute the (hopefully correct) video delay and
vsync phase.
This is very fuzzy, because the latency will suddenly be applied after
some frames have already been shown. This means there _will_ be "jumps"
in the time accounting, which can lead to strange effects at start of
playback (such as making initial "dropped" etc. frames worse). The only
reasonable way to fix this would be running a few dummy frame swaps at
start of playback until the latency is known. The same happens when
unpausing.
This only affects display-sync mode.
Correct function was not confirmed. It only "looks right". I don't have
the equipment to make scientifically correct measurements.
A potentially bad thing is that we trust the timestamps we're receiving.
Out of bounds timestamps could wreak havoc. On the other hand, this will
probably cause the higher level code to panic and just disable DS.
As a further caveat, this makes a bunch of assumptions about UST
timestamps. If there are delayed frames (i.e. we skipped one or more
vsyncs), the latency logic is mostly reset. There is no attempt to make
the vo.c skipped-vsync logic use this. Also, the latency computation
determines a vsync duration, and there's no effort to reconcile or share
it with the vo.c logic for determining vsync duration.
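A minimal sketch of the kind of derivation involved, assuming two
UST/MSC samples around a swap (illustrative only, with an assumed
`now_us` timestamp; smoothing, skip handling, and sanity checks omitted):

```c
#include <math.h>

// Estimate vsync duration and phase from two (ust, msc) samples.
int64_t ust0, msc0, ust1, msc1, sbc;
glXGetSyncValuesOML(display, drawable, &ust0, &msc0, &sbc);
/* ... render and swap a frame ... */
glXGetSyncValuesOML(display, drawable, &ust1, &msc1, &sbc);
if (msc1 > msc0) {
    double vsync_duration_us = (ust1 - ust0) / (double)(msc1 - msc0);
    // Phase: where "now" falls inside the current vsync interval.
    double phase_us = fmod(now_us - (double)ust1, vsync_duration_us);
}
```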
This option has been deprecated upstream for a long time, probably
doesn't even work anymore, and won't work moving forwards as we replace
the vulkan code by libplacebo wrappers.
I haven't removed the option completely yet since in theory we could
still add support for e.g. a native glslang wrapper in the future. But
most likely the future of this code is deletion.
As an aside, fix an issue where the man page didn't mention d3d11.
This commit bumps the libmpv version to 1.102
drm-osd-plane -> drm-draw-plane
drm-video-plane -> drm-drmprime-video-plane
drm-osd-size -> drm-draw-surface-size
"draw plane", as in the plane that OpenGL draws to, whether it be
video + OSD or just OSD.
"drmprime video plane", as in the plane used for hwdec video imported
via drmprime.
"draw surface size", as in the size of the surface used for the draw plane
The new names are invariant whether or not hwdec_drmprime_drm is being
used. The original naming was very confusing, as when doing
regular rendering (swdec or vaapi) the video would be displayed on the
"OSD plane", and the "Video plane" would remain unused.
Add general primary/overlay plane option to drm-osd-plane-id and
drm-video-plane-id, so that the user can just request any usable
primary or overlay plane for either of these two options. This should
be somewhat more user-friendly (especially as neither of these two
options currently has a useful help function), as usually you would
only be interested in the type of the plane, and not exactly which
plane gets picked.
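A sketch of how the options could accept a plane type alongside a plane
index (the macro usage and constant names are assumptions):

```c
// Negative sentinel values select "any usable plane of this type"
// instead of a specific plane index.
OPT_CHOICE_OR_INT("drm-osd-plane-id", drm_osd_plane_id, 0, 0, INT_MAX,
                  ({"primary", DRM_OPTS_PRIMARY_PLANE},
                   {"overlay", DRM_OPTS_OVERLAY_PLANE})),
```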
By design, some vulkan implementations block until vsync during
vkAcquireNextImageKHR. Since mpv only considers the time that
`swap_buffers` spent blocking as constituting part of the vsync, we can
help it out a bit by pre-emptively calling this function here in order
to improve the accuracy of vsync jitter measurements on vulkan.
(If it fails, we just ignore the error and have the user call it a
second time later - maybe it will work then)
On my system this drops vsync-jitter from ~0.030 to ~0.007, an accuracy
of +/- 100μs. (Which *might* have something to do with the fact that
this is the polling interval for command polling)
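Roughly, the idea looks like this (a sketch with assumed variable names;
mpv's actual swapchain code differs):

```c
// After presenting, opportunistically acquire the next image so any
// implicit vsync blocking happens here rather than in swap_buffers.
VkResult res = vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                                     acquire_semaphore, VK_NULL_HANDLE,
                                     &next_image);
if (res != VK_SUCCESS) {
    // Ignore the failure; the acquire is simply retried later when the
    // image is actually needed.
}
```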
Makes performance slightly better when using multiple queues by avoiding
unnecessary semaphores due to bad queue selection.
Also remove an aeons-old workaround for an nvidia bug that only ever
existed in the earliest beta vulkan drivers anyway.
Historically, there's been no way to offer deinterlacing with nvdec,
and for cuviddec, it required a command line flag, with no way to
toggle while playing.
Now that we have a cuda deinterlacing filter in ffmpeg, we can hook
it up as the cuda auto-deinterlacer. In practice, this
isn't going to be present very often, due to the licensing mess with
the cuda sdk, but we can support it when it is there.
We are currently unnecessarily including vulkan headers even when
not building with vulkan support. I also guarded the GL header
inclusion even though this doesn't appear to break anything today.
Fixes#6330.
This makes the default fit-to-screen, autofit, and window-scale behavior
use the screen's working area instead of the whole screen area.
As a result, the mpv window no longer covers the taskbar when opening
videos larger than the screen.
The actual behavior now matches the expected behavior for
use cases 1-4 from #4363.
This commit also removes the screenrc from the w32 struct.
The screen rect can now be retrieved via the `get_screen_area` function,
which was renamed from `update_screen_rect`.
On a multi-monitor system, if the user has moved the window between
monitors, this function returns the screen area under the window,
not the screen area of the monitor specified by the `--screen` option.
The `--screen` option only sets the initial monitor the mpv window is
displayed on.
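A minimal sketch of querying the working area for the monitor under the
window (standard Win32 calls; not mpv's exact code):

```c
// rcWork excludes the taskbar; rcMonitor is the whole screen area.
HMONITOR mon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
MONITORINFO mi = { .cbSize = sizeof(mi) };
if (GetMonitorInfoW(mon, &mi)) {
    RECT working_area = mi.rcWork;    // used for fit/autofit/window-scale
    RECT screen_area  = mi.rcMonitor; // the full screen rect
}
```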
Returning -1 in a function with return type bool is the same as
returning true. In the error paths, false should be returned to
indicate that something went wrong.
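For illustration, the problem and the fix (the function and its helpers
are hypothetical):

```c
#include <stdbool.h>

bool init_device(struct device *dev)
{
    if (!probe(dev))
        return false; // was "return -1", which converts to true and
                      // makes the caller believe initialization worked
    return true;
}
```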