This commit tries to prepare for better underrun reporting. The goal is
to report underruns relatively immediately. Until now, this happened
only when play() was called. Change this, and abuse the fact that
get_delay() is called "relatively often" - in practice, this reports the
underrun immediately.
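A minimal sketch of the idea, with stand-in structs for mpv's internals
(the real ao_alsa code looks different; only the ALSA calls are the
actual library API):

    #include <stdio.h>
    #include <alsa/asoundlib.h>

    // Stand-ins for mpv's internal structs (illustrative only).
    struct priv { snd_pcm_t *alsa; };
    struct ao { struct priv *priv; int samplerate; };

    // Report an underrun as soon as get_delay() observes the XRUN state,
    // instead of waiting until play() is called with new data.
    static double get_delay(struct ao *ao)
    {
        struct priv *p = ao->priv;
        snd_pcm_sframes_t delay = 0;

        if (snd_pcm_state(p->alsa) == SND_PCM_STATE_XRUN) {
            fprintf(stderr, "audio underrun detected!\n"); // mpv: MP_WARN
            return 0; // leave recovery (snd_pcm_prepare) to the write path
        }
        if (snd_pcm_delay(p->alsa, &delay) < 0)
            return 0;
        return delay / (double)ao->samplerate;
    }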
Background:
In commit 81e51a15f7 (and also e38b0b245e), we were quite confused
about ALSA underrun handling. The commit message showed uncertainty about how
case 3 happened, but it's blindingly obvious and simple.
Actually reading the code shows that ALSA does not have a concept of a
"final chunk" (or we don't use it). It's obvious we never pass the
AOPLAY_FINAL_CHUNK flag along to the ALSA API in any way. The only thing
we do is simply writing a partial fragment. Of course this will cause an
underrun. Doing a partial write saves us the trouble of padding the last
frame with silence, or so.
The main reason why the underrun message was avoided was that play() was
never called with a non-0 sample count again (except if reset() was
called before that). That was OK, at least the goal of avoiding the
unwanted message was reached. (And the original "bogus" message at end
of playback was perfectly correct, as far as ALSA goes.)
If the network stalls, play() will be called again only once new data is
available. Obviously, this could take a long time, so by then it's too late.
The old way of using wayland in mpv relied on an external renderloop for
semi-accurate timings. This had multiple issues though. Display sync
would break whenever the window was hidden (since the frame callback
stopped being executed) which was really annoying. Also the entire
external renderloop logic was kind of fragile and didn't play well with
mpv's internal structure (i.e. using presentation time in that old
paradigm breaks stats.lua).
Basically the problem is that swap buffers blocks on wayland which is
crap whenever you hide the mpv window since it locks up the entire
player. So you have to make swap buffers not block, but this has a
different problem. Timings will be terrible if you use the unblocked
swap buffers call.
Based on some discussion in #wayland, the trick here is relatively
simple and works well enough for our purposes. Instead we basically
build a way to block with a timeout in the wayland buffer swap
functions.
A bool is set in the frame callback function that indicates whether or
not mpv is waiting for a frame to be displayed. In the actual buffer
swap function, we enter into a while loop waiting for this flag to be
set. At the same time, the wl_display is polled to block the thread and
wake up if it receives any events from the compositor. This loop only
breaks if enough time has passed or if the frame callback flag is set.
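A hedged sketch of that loop, assuming illustrative names (the
vo_wayland_state fields, now_ms(), the 100 ms timeout) rather than mpv's
actual code; the Wayland calls themselves are the real client API:

    #include <poll.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>
    #include <wayland-client.h>

    struct vo_wayland_state {
        struct wl_display *display;
        bool frame_wait; // true while waiting for the frame callback
    };

    static int64_t now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    static void frame_done(void *data, struct wl_callback *cb, uint32_t time)
    {
        struct vo_wayland_state *wl = data;
        wl_callback_destroy(cb);
        wl->frame_wait = false; // the frame was displayed; stop blocking
    }

    static const struct wl_callback_listener frame_listener = {
        .done = frame_done,
    };

    // Block until the frame callback fires, or until the timeout expires
    // (e.g. when the window is hidden and frame callbacks stop coming).
    static void wait_for_frame(struct vo_wayland_state *wl, int timeout_ms)
    {
        struct pollfd fd = {
            .fd = wl_display_get_fd(wl->display),
            .events = POLLIN,
        };
        int64_t deadline = now_ms() + timeout_ms;

        while (wl->frame_wait) {
            int remaining = (int)(deadline - now_ms());
            if (remaining <= 0)
                break; // enough time has passed; render anyway
            wl_display_flush(wl->display);
            if (poll(&fd, 1, remaining) > 0)
                wl_display_dispatch(wl->display); // may run frame_done()
        }
    }

    // In the buffer swap path: request the next frame callback, swap, wait.
    static void swap_buffers(struct vo_wayland_state *wl,
                             struct wl_surface *surf)
    {
        struct wl_callback *cb = wl_surface_frame(surf);
        wl_callback_add_listener(cb, &frame_listener, wl);
        wl->frame_wait = true;
        // ... eglSwapBuffers() or equivalent goes here ...
        wait_for_frame(wl, 100); // arbitrary example timeout
    }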
In the near future, it would be better to set whether or not a frame has
been displayed in the presentation feedback. However, as a first pass,
doing it in the frame callback is more than good enough.
The "downside" is that we render frames that aren't actually shown on
screen when the player is hidden (it seems like wayland people don't
like that). But who cares. Accurate timings are way more important. It's
probably not too hard to add that behavior back in the player though.
The externally driven renderloop was originally added for the wayland
context (to make display sync somewhat work), but it has a lot of issues
with mpv's internal structure. A different approach should be used.
This reverts commit a743fef837.
This affects hwdec_dxva2dxgi, which uses ra_d3d11_wrap_tex to wrap RGB
video frames that are shared with a D3D9 device. Without it, mpv uses
nearest instead of bilinear scaling with --scale=bilinear (the default)
and --hwdec=dxva2. It's kind of hard to believe this bug has gone
unnoticed for almost two years, but that seems to have been the case.
Fixes: #7042
In pseudo-DASH mode, we may have no real streams opened until the
demuxer layer is fully loaded and playback actually starts. The only
hint that the stream is from network is, at that point, the init
segment, which is only opened as a stream, and then separately as a demuxer
(which is dumb but happened to fit the internal architecture better).
So just propagate the flags from the init segment stream. Seems like an
annoyance, but doesn't hurt that much I guess. (Until someone gets the
idea to pass the init segment data inline or so, but nothing does that.)
The sample link in the linked issue will probably soon switch to another
format, because that service always does this after recent uploads or
so.
Fixes: #7038
mpv typically decodes and filters at least 2 frames before starting
playback. This happens during seeks, as well as when playback starts
from the beginning of the file.
skip-logo.lua receives notifications for all filtered frames, even
during seeking. It should interrupt during seeking, so as a crude
heuristic, it ignored all frames while the player was seeking. This does
not mean all these frames are skipped due to seeking (thus it's a "crude
hueristic"). In particular, it means that the first 2 frames of a video
cannot be skipped, since they're filtered within the playback restart
phase (equivalent to "seeking").
Fix this by making the heuristic slightly less crude. Since we observe
the property as "none", the property is not actually read until we do it
explicitly. By not reading it during seeking, we can let the frames
internally queue up (vf_fingerprint discards them in a ringbuffer-like
fashion if they're too many). Then, if seeking ends, we get the current
playback timestamp, and check queued up frames that are at or after that
timestamp. (In some ways, this duplicates what the player's seeking
logic does.)
A disadvantage is that this is racy. While playback-time is guaranteed
to be set when seeking changes from true to false, playback could
already have progressed to the next frame (or more) before the script
gets time to react. In theory, we could add a seek restart hook or so,
but I don't want to. A property that returns the last playback restart
time would also do it, but feels too special. Not an important problem
in practice anyway.
Someone crazy is trying to mix images with videos in EDL files. Putting
an image as the first thing into the EDL disabled audio, because the first
EDL entry was used to define the layout.
Change this. Make it user-configurable, and use a "better" heuristic to
select the default otherwise.
In theory, EDL could be easily extended to specify track layout and
mapping of parts to virtual EDL tracks manually and in great detail. But
I don't think it's worth it - who would bother using it?
Fixes: #6764
Unless --video-latency-hacks is set, always decode 2 frames on playback
restart. This in turn will always compute the correct frame duration
(even for the first frame), which in turn happens to fix the issue that
playback with an image at the beginning breaks the display.
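Illustrative of why the second frame matters: the first frame's duration
can only be derived from the PTS distance to the frame after it (a
hypothetical helper, not mpv's code):

    struct frame { double pts; };

    // With only one decoded frame the duration must be guessed (mpv falls
    // back to 1 second for still images); with two, it is the PTS
    // difference.
    static double frame_duration(struct frame *cur, struct frame *next)
    {
        return next ? next->pts - cur->pts : 1.0; // guessed fallback
    }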
If a still image precedes video, and the size/format of the frame is
different from that of the video following it, the incorrect frame
duration caused vo_reconfig2() to be called early, causing the window to
resize, and the renderer to clear the image to black. Specifically, it
hit the default value of 1 second duration (for still images), so the
image was displayed for 1 second, and changed to black until the next
proper video frame was displayed.
Normally this does not happen. Even if a video file displays still
images, it normally repeats the still image at the video's FPS (which is
sane). But you can construct such files, or use EDL to construct
something similarly behaving.
This change may increase seek latency a bit in audio video-sync mode
(the default). It needs to wait until 2 frames are decoded, before it
bothers to display the first frame. This is done even when seeking. In
theory it might be good to introduce a "seek preview" mode, which shows
the target image without all the preparations needed for starting
playback. (For example, it could skip decoding audio.) But since I'm using
video-sync=display-resample, which already needed to always decode 2
frames, I don't think this is a terribly high priority, nor do I
consider the slightly slower seeking a regression.
Fixes: #6765
In this case, gapless will most likely not work. It will result in (very
slight) desync, or (more commonly with small buffer sizes) in an
underflow.
I think it would be legitimate to disable gapless at end of playback
completely if video is enabled at all. But this would need an exception
for cover art mode, so I guess the current solution is OK as well.
It turns out that case 2) mentioned in the previous commit happened
quite often when playback ended normally.
There is probably a legitimate underrun with normal buffer sizes (100
ms, 4 fragments, gapless audio in "weak" mode). This is a result of the
player waiting for video to end, and/or the time needed to kill the
video window. The former case means that it depends on your test case
whether it happens (a file where video ends slightly before audio is
less likely to trigger it).
This in turn is due to how gapless playback works. Achieving not having
a "gap" requires queuing the audio of the next file without playing a
partial chunk (as AOPLAY_FINAL_CHUNK would do). The partial chunk is
then played as part of the first chunk played from the next file. But if
it detects "later" that there is no next file, it still needs to get rid
of the last fragment with AOPLAY_FINAL_CHUNK. At this point it's too
late, and an underrun may have actually happened. The way the player
uninits and reinits the entire playback engine for the next file in a
"serial" manner means it cannot know in advance whether this works.
This is the reason why the idiot who added the underrun exception for
the last chunk in play() was wrong (I wrote that btw., before you accuse
me of being rude). Yes, it's a real underrun, and you could probably
hear it.
This XRUN (aka underrun) message was printed in the following
situations:
1) legitimate underrun during playback
2) legitimate underrun when playing final chunk
3) bogus underrun when playing final chunk
The old underrun message (in play()) was printed in cases 1) and 2) as
well, but not in case 3). It appears 3) is indeed something that happens,
although it's not known for sure. It's still pretty annoying, so remove
the new XRUN message.
When testing, care should be taken to play with buffer sizes, video
versus no video, and gapless enabled/disabled. Also, suspending the
player with Ctrl+Z in the terminal (SIGSTOP) and then resuming is a good
way to trigger a "normal" underrun.
All the get_property_* usages were removed because in some circumstances
they can lead to deadlocks. They were replaced by accessing the vo and
mp_vo_opts structs directly, like on other VOs.
Additionally, the mpv helper was split into an mpv and a libmpv helper, to
differentiate between private and public APIs and for future changes
like a macOS Vulkan context for vo=gpu.
Another thing nobody will read. I'm attempting to document the rules by
which incompatible changes can be made. These rules have always been
present in this project, but I don't think they were written down. Or
maybe they were, but I forgot where.
I think due to the time of the day it became increasingly incoherent
(not necessarily near the end of the text). Hopefully no logical or
Freudian lapses in there.
Although it was not true at the time this was written, both the
"program" and "cache-size" are gone now.
Since the changelog is for the entire next release, it makes no sense to
mention these removed properties.
It also happens to make the description of this much simpler, because
it's a non-issue now. It's probably not even worth mentioning anymore.
The justification for this is the fact that the `video-aspect` property
doesn't work well with `cycle_values` commands that include the value
"-1".
The "video-aspect" property has effectively no change in behavior, but
we may want to make it read-only in the future. I think it's probably
fine to leave as-is, though.
Fixes #6068.
The description of the "playback_only" field in the "subprocess" command
says "you can't start it outside of playback". This did not work
correctly: if the player was started in idle mode in the first place,
the subprocess was allowed to run even with playback_only=yes.
This is a bug, and this change fixes it. Add a test for this to
command-test.lua.
For #7025.
There's potential confusion about how long a process started with the
"subprocess" command is allowed to live. Add some more explanations
regarding "subprocess" specifics (such as the playback_only field), and
things that apply to asynchronous commands in general.
Partially for #7025.
This is the proper fix to the memory leak @wm4 pointed out. It turns out
that when you autoprobe opengl and vo_wayland_init returns false,
vo_wayland_uninit is never actually executed. So you have a leftover
pointer. The vulkan context does this correctly, which was why my old,
dumb "fix" broke it.
Dumb idea. The correct thing to do is to fix the preinit and context
creation so that the uninit is correctly executed when probing fails
(and then everything gets freed).
This reverts commit defc8f359c.
wm4 mentioned that the wayland autoprobe leaked. A simple oversight in
the wayland_common code forgot to free the vo_wayland_state if
vo_wayland_init returned false.
Leaks the entire zimg state on filter deinit. Not sure what I was
thinking; with some luck, I just didn't give a shit about this case, but
most likely I was thinking the same thing as always: nothing.
As pointed out by @olifre in #7016, this line of code was wrong. p->opts
at this point is a struct allocated and managed by m_config. opts->file
is a string, and m_config explicitly frees it on destruction. The line
of code in question replaced the opts->file value, and made both the old
and new value children of the talloc allocation, so they were _also_
freed on destruction.
This crashed due to a double-free. First, talloc auto-freeing freed the
string, then m_config explicitly called talloc_free() on the stale
pointer again.
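A minimal reconstruction of the pattern, with simplified stand-ins for
m_config and the filter's private context (not mpv's actual code; the
talloc calls are the real library API):

    #include <talloc.h>

    struct opts { char *file; };

    int main(void)
    {
        void *m_config = talloc_new(NULL); // stands in for m_config
        void *p = talloc_new(NULL);        // the filter's private context
        struct opts *opts = talloc_zero(m_config, struct opts);
        opts->file = talloc_strdup(opts, "a.vpy"); // managed by "m_config"

        // The buggy pattern: replacing the value with a child of 'p'
        // means talloc frees the string when 'p' is destroyed, and the
        // option cleanup frees it again later - a double-free:
        // opts->file = talloc_strdup(p, "b.vpy");

        // One correct way: keep ownership with the option struct.
        talloc_free(opts->file);
        opts->file = talloc_strdup(opts, "b.vpy");

        talloc_free(p);
        talloc_free(m_config); // frees opts and opts->file exactly once
        return 0;
    }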
As @v-fox pointed out, commit 36dd2348a1 seems to have triggered the
crash. I suspect this code merely worked out of coincidence, since it
allowed m_config to free the value first. This removed it from the
auto-free list, and thus did not result in a double-free. The change
in order of calling talloc destructors changed the order of these calls.
There is no strong reason why new behavior (as introduced by commit
36dd2348a1) would be wrong (it feels like cleaner behavior). On the
other hand, what the vf_vapoursynth code did is clearly unclean and
going by the m_config API, you're not at all supposed to do this.
m_config manages all memory referenced by option structs, the end.
@olifre's suggested fix would also have been correct (not just hiding
the issue), but I prefer my fix, as it doesn't mess with the option struct
in tricky ways.
This wouldn't have happened if mpv were written in Haskell.
I previously skipped creating the wl_output if the --fullscreen flag was
given with no --fsscreen_id, so that the fullscreen video lands on the
correct output (where mpv was launched). This breaks if someone combines
the --autofit option (or other similar options) with it. Instead, just
actually read the xdg_shell spec and realize that you can pass NULL to
xdg_toplevel_set_fullscreen and let the compositor choose the output if
the user doesn't specify one. If this has issues, get a better
compositor.
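The relevant call, per the xdg-shell protocol (the wl->xdg_toplevel and
opts->fullscreen names here are assumed, not necessarily mpv's fields):

    // NULL output: the compositor picks the output to fullscreen on.
    if (opts->fullscreen)
        xdg_toplevel_set_fullscreen(wl->xdg_toplevel, NULL);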
This partially reverts commit a9d83eac40
("Remove optical disc fancification layers").
Mostly due to the timestamp crap, this was never really going to work.
The playback layer is sensitive to timestamps, and derives the playback
time directly from the low level packet timestamps. DVD/BD works
differently, and libdvdnav/libbluray do not make it easy at all to
compensate for this. Which is why it never worked well, but not doing it
at all is even more awful.
demux_disc.c tried this and rewrote packet timestamps from low level TS
to playback time. So restore demux_disc.c, which should bring behavior
back to the old often non-working but slightly better state.
I did not revert anything that affects components above the demuxer
layer. For example, the properties for switching DVD angles or listing
disc titles are still gone. (Disc titles could be reimplemented as
editions. But not by me.)
This commit modifies the reverted code a bit; this can't be avoided,
because the internal API changed quite a bit. The old seek resync in
demux_lavf.c (which was a hack) is replaced with a hack. SEEK_FORCE and
demux_params.external_stream are new additions.
Some of this could/should be further cleaned up. If you don't want
"proper" DVD/BD support to disappear, you should probably volunteer.
Now why am I wasting my time on this? Just because some idiot users are
too lazy to rip their ever-wearing out shitty physical discs? Then why
should I not be lazy and drop support completely? They won't even be
thankful for me maintaining this horrible garbage for no compensation.
Instead of using custom code.
Now if only f_lavfi knew what formats FFmpeg's vf_yadif accepts, this
could look much nicer, and wouldn't require the additional converter
filter setup.
This adds the function as seen in the f_autoconvert.h part of the patch.
It's pretty simple, but goes along with an intrusive code move. I guess
the resulting code is slightly nicer anyway.
Before this commit, enabling hardware deinterlacing via the
"deinterlace" option/property just failed if no hardware deinterlacing
was available. An error message was logged, and playback continued
without deinterlacing.
Change this, and try to copy the hardware surface to memory, and then
use yadif. This will have approximately the same effect as
--hwdec=auto-copy. Technically it's implemented differently, because
changing the hwdec mode is much more convoluted than just inserting a
filter for performing the "download". But the low level code for
actually performing the download is the same again.
Although performance won't be as good as with a hardware deinterlacer
(probably), it's more convenient than forcing the user to switch hwdec
modes separately. The "deinterlace" property is supposed to be a
convenience thing after all.
As far as the code architecture goes, it would make sense to auto-insert
such a download filter for all software-only filters that need it.
However, libavfilter does not tell us what formats a filter supports
(isn't that fucking crazy?), so all attempts to work towards this are
kind of hopeless. In this case, we merely have hardcoded knowledge that
vf_yadif definitely does not support hardware formats. But yes, this
sucks ass.
This just wraps the mp_image_hw_download() function as a filter and adds
some minor caching/error logging. (Shame that it needs so much
boilerplate, I guess.)
Will be used by the following commit. Wrapping it as filter seemed more
convenient than other choices.
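A hedged sketch of what such a filter's process callback might look like,
loosely following mpv's filter API (requires mpv's filter.h, frame.h and
video/mp_image.h; the priv struct and the error handling are assumptions):

    struct priv {
        struct mp_image_pool *pool; // software images the download fills
    };

    static void process(struct mp_filter *f)
    {
        struct priv *p = f->priv;

        // Only proceed if the output can accept data and input has some.
        if (!mp_pin_can_transfer_data(f->ppins[1], f->ppins[0]))
            return;

        struct mp_frame frame = mp_pin_out_read(f->ppins[0]);
        if (frame.type == MP_FRAME_VIDEO) {
            struct mp_image *hw = frame.data;
            if (hw->hwctx) { // leave software frames untouched
                struct mp_image *sw = mp_image_hw_download(hw, p->pool);
                if (!sw) {
                    MP_ERR(f, "could not download hardware image\n");
                    mp_frame_unref(&frame);
                    mp_filter_internal_mark_failed(f);
                    return;
                }
                talloc_free(hw);
                frame.data = sw;
            }
        }
        mp_pin_in_write(f->ppins[1], frame);
    }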
Can be used with mp_chain_filters() to combine multiple filters into a
single one. This is a bit silly, but whatever. I'm making it an explicit
separate filter, because it lets the user access mp_filter.ppins against
all conventions.
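A hedged usage sketch, assuming a signature along the lines of
mp_chain_filters(parent, filters, num_filters); the two creation helpers
named here are hypothetical stand-ins for a download filter and a yadif
wrapper:

    struct mp_filter *parts[] = {
        hw_download_filter_create(parent), // hypothetical download filter
        yadif_filter_create(parent),       // hypothetical f_lavfi wrapper
    };
    // Expose the chained filters as a single filter with one input pin
    // and one output pin.
    struct mp_filter *deint = mp_chain_filters(parent, parts, 2);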
Basically predicts what mp_image_hw_download() will do. It's pretty
simple if it gets the full mp_image. (Taking just an imgfmt would make
this pretty hard/impossible or inaccurate.)
Used in one of the following commits.
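A hedged sketch of how such a prediction can work with FFmpeg's hwcontext
API (pixfmt2imgfmt() is mpv's existing AVPixelFormat conversion helper;
the function name below is made up):

    #include <libavutil/hwcontext.h>
    #include <libavutil/mem.h>

    static int predict_download_imgfmt(struct mp_image *img)
    {
        if (!img->hwctx)
            return img->imgfmt; // not a hardware image; nothing changes

        // Ask FFmpeg which formats a "transfer from hw" can produce for
        // this image's hw frames context.
        enum AVPixelFormat *fmts = NULL;
        if (av_hwframe_transfer_get_formats(img->hwctx,
                AV_HWFRAME_TRANSFER_DIRECTION_FROM, &fmts, 0) < 0 || !fmts)
            return 0; // unknown

        int imgfmt = pixfmt2imgfmt(fmts[0]); // first entry is preferred
        av_free(fmts);
        return imgfmt;
    }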
This reverts commit 6385a5fd1b, and in
addition removes the code that automatically inserts the vavpp filter.
The reason is the same as the commit that is being reverted: this
filter seems to trigger driver bugs. It can cause GPU freezes or
just doesn't work.
This variant of disabling the filter is better. There was no way to
add the "force" parameter to the automatically inserted filter, so
the old approach just made manual filter insertion (with the --vf
option or "vf" command) more cumbersome.
I don't think MPlayer/mplayer2 and Libav are well-known enough anymore
to warrant such a prominent place in the top-level README file of this
project. It's just useless noise to most users. So I've moved these
things to the FAQ.
Update some other minor things.
The autoload script now supports loading not only videos, but also image
and audio files, in a manner where one can configure which of the groups
(audio, videos, images) is currently enabled.
Use the file script-opts/autoload.conf with key=value configuration keys
disabled, images, videos, audio to configure the autoload script.
See the documentation at the top of the script.
This is realized by dvbin-channel-switch-offset, which is a numeric
offset on the channel initially tuned to. Since the channel list is kept
in the stream layer alone, depending on the detected hardware and the
chosen card, and there is no backchannel to the player available, there
is no direct property which could be switched.
Using input.conf like:
H cycle dvbin-channel-switch-offset up
K cycle dvbin-channel-switch-offset down
Q set dvbin-prog "ZDF HD"
allows fast and reliable channel switching again.