Still trying to get people to read it. Even though I wanted to make it
less of a wall of text and more readable, it got bigger. Oops.
While I'm at it, violate my own rules and mix these mostly cosmetic
changes with some actual rule changes and clarifications.
This allows stream_cb backends to implement blocking
behavior inside read_fn, and still get notified when the user
wants to cancel and stop playback.
Signed-off-by: Aman Gupta <aman@tmm1.net>
The C11 situation is complicated. For example, MinGW doesn't seem to
have a full C11 implementation, but we pretty much rely on C11 atomics.
Regarding "#pragma once": they say it's not standard because of unsolved
(admittedly valid) issues. But still, fuck writing include guards, I
just can't be bothered with this crap.
(Does anyone even read this document?)
I think a popular libmpv application did exactly this: enabling advanced
control, and then receiving deadlocks. I didn't confirm it, though. In
any case, the API docs should avoid tricking users into making this easy
mistake.
The render API (vo_libmpv) had potential deadlock problems with
MPV_RENDER_PARAM_ADVANCED_CONTROL. This required vd-lavc-dr to be
enabled (the default). I never observed these deadlocks in the wild
(doesn't mean they didn't happen), although I could specifically provoke
them with some code changes.
The problem was mostly about DR (direct rendering, letting the video
decoder write to OpenGL buffer memory). Allocating/freeing a DR image
needs to be done on the OpenGL thread, even though _lots_ of threads are
involved with handling images. Freeing a DR image is a special case that
can happen any time. dr_helper.c does most of the evil magic of
achieving this. Unfortunately, there was a (sort of) circular lock
dependency: freeing an image while certain internal locks are held would
trigger the user's context update callback, which in turn would call
mpv_render_context_update(), which would process all pending free
requests and then acquire an internal lock - which the caller might not
release
until a further DR image could be freed.
"Solve" this by making freeing DR images asynchronous. This is slightly
risky, but actually not much. The DR images will be free'd eventually.
The biggest disadvantage is probably that debugging might get trickier.
Any solution to this problem will probably add images to free to some
sort of queue, and then process it later. I considered making this more
explicit (so there'd be a point where the caller forcibly waits for all
queued items to be free'd), but discarded these ideas as this probably
would only increase complexity.
Another consequence is that freeing DR images on the GL thread is not
synchronous anymore. Instead, mpv_render_context_update() will do it
with a delay. This seems roundabout, but doesn't actually change
anything, and avoids additional code.
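To illustrate the idea (a minimal sketch with made-up names, not the
actual dr_helper.c code): images to free are pushed onto a locked queue
from any thread, and the queue is drained later on the GL thread, e.g.
from within mpv_render_context_update():

#include <pthread.h>
#include <stdlib.h>

struct free_queue {
    pthread_mutex_t lock;
    void **items;
    size_t num_items;
    void (*free_item)(void *item); // must run on the GL thread
};

// Called from any thread that wants to free a DR image.
static void queue_push(struct free_queue *q, void *item)
{
    pthread_mutex_lock(&q->lock);
    q->items = realloc(q->items, (q->num_items + 1) * sizeof(q->items[0]));
    q->items[q->num_items++] = item;
    pthread_mutex_unlock(&q->lock);
}

// Called on the GL thread, with none of the problematic locks held.
static void queue_drain(struct free_queue *q)
{
    pthread_mutex_lock(&q->lock);
    for (size_t n = 0; n < q->num_items; n++)
        q->free_item(q->items[n]);
    q->num_items = 0;
    pthread_mutex_unlock(&q->lock);
}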
This also fixes that the render API required the render API user to
remain on the same thread, even though this wasn't documented. As such,
it was a bug. OpenGL essentially forces you to do all GL usage on a
single thread, but in theory the API user could for example move the GL
context to another thread.
The API bump is because I think you can't make enough noise about this.
Since we don't backport fixes to old versions, I'm specifically stating
that old versions are broken, and I'm supplying workarounds.
Internally, dr_helper_create() does not use pthread_self() anymore, thus
the vo.c change. I think it's better to make binding to the current
thread as explicit as possible.
Of course it's not certain that this fixes all deadlocks (it probably
doesn't).
libass has had an API to configure this since 2013. mpv always used
ASS_FONTPROVIDER_AUTODETECT, because usually there's little reason to
use anything else. The intention of the now added option is to allow
users to disable use of system fonts.
I didn't consider it worth the trouble to add the coretext and
directwrite enum items from ASS_DefaultFontProvider. The "auto" choice
will have the same effect if they're available. Also, the part of the
code which defines the option does not necessarily have libass available
(it's still optional!), so defining all enum items as choices is icky. I
still added fontconfig, since that may be nice to emulate a nostalgic
2010 feeling of mpv freezing on fontconfig.
The option for OSD is even less useful. (But you get it for free, and
why pass up a chance to add yet another useless option?)
This is not quite what was requested in #6947, but as close as it gets.
We default to EGL instead of GLX now, which means vdpau only works
if we explicitly specify that we want a GLX context, as vdpau lacks
interop for EGL.
Update the hwdec documentation to reflect this.
Concerns #6980.
The question came up on how a client would figure out where
screenshot-directory saved its screenshots if it contained
mpv-specific expansions. This command should remedy the situation
by providing a way for the client to ask mpv to do an expansion.
Basically, both the license of the file and the preferred license of the
project (LGPLv2.1+) count. I'm doing that so that files with more
liberal licenses don't get infected by LGPL, but allow copy & pasting to
LGPL source files without jumping through lawyer bullshit hoops.
Mention this in Copyright too.
This flag makes mpv continue using the PulseAudio driver even if the
sink is suspended.
This can be useful if JACK is running with PulseAudio in bridge mode and
the sink-input assigned to mpv is the one JACK controls, thus being
suspended.
By forcing mpv to still use PulseAudio in this case, the user can now
adjust the sink to an unsuspended one.
Replace the "+" with "/". The "+" was supposed to imply that the cache
is the sum of the time (demuxer cache) and the size in bytes (stream
cache). We could not provide something nicer, because we had no idea how
many seconds of media was buffered in the stream cache.
Now the stream cache is gone, and both the duration and byte size show
the amount buffered in the demuxer cache. Hopefully "/" is better to
imply this properly. Update the manpage explanations too.
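For example, a status line that used to read something like
"Cache: 12s+34MB" would now read "Cache: 12s/34MB" (exact formatting
aside); the point is that both values now describe the same demuxer
cache.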
skip-logo.lua is just what I wanted to have. Explanations are on the top
of that file. As usual, all documentation threatens to remove this stuff
all the time, since this stuff is just for me, and unlike a normal user
I can afford the luxury of hacking the shit directly into the player.
vf_fingerprint is needed to support this script. It needs to scale down
video frames as part of its operation. For that, it uses zimg. zimg is
much faster than libswscale and generates more correct output. (The
filter includes a runtime fallback, but it doesn't even work because
libswscale fucks up and can't do YUV->Gray with range adjustment.)
Note on the algorithm: seems almost too simple, but was suggested to me.
It seems to be pretty effective, although long-term experience with
false positives is missing. At first I wanted to use dHash [1][2], which
is also pretty simple and effective, but might actually be worse than
the implemented mechanism. dHash has the advantage that the fingerprint
is smaller. But exact matching is too unreliable, and you'd still need
to determine the number of different bits for fuzzier comparison. So
there wasn't really a reason to use it.
[1] https://pypi.org/project/dhash/
[2] http://www.hackerfactor.com/blog/index.php?/archives/529-Kind-of-Like-That.html
Helper for the ab-loop-dump-cache command, see manpage additions.
This is kind of shit. Not only is this a very "special" feature, but it
also vomits more messy code into the big and already bloated demux.c,
and the implementation is sort of duplicated with the dump-cache code.
(Except it's different.) In addition, the results sort of depend on what a
video player would do with the dump-cache output, or what the user wants
(for example, a user might be more interested in the range of output
audio, instead of the video).
But hey, I don't actually need to justify it. I'm only justifying it for
fun.
But don't tell the reader which APIs those are. Hope the user will just
search for "async" in the Lua section (lua.rst). But of course, nobody
will ever care about anything related to this.
That's right, and it's probably not the end of it. I'll just claim that
I have no idea how to create a proper user interface for this, so I'm
creating multiple partially orthogonal mechanisms, some of which may
work better in their particular use cases.
Until now, there was --record-file. You get relatively good control
over what is muxed, and it can use the cache. But it sucks that it's
bound to playback. If you pause while it's set, muxing stops. If you
seek while it's set, the output will be sort-of trashed, and that's by
design.
Then --stream-record was added. This is a bit better (especially for
live streams), but you can't really control well when muxing starts or
stops. In particular, it can't use the cache (it just dumps whatever the
underlying demuxer returns).
Today, the idea is that the user should just be able to select a time
range to dump to a file, and it should not be affected by the user seeking
around in the cache. In addition, the stream may still be running, so
there's some need to continue dumping, even if it's redundant to
--stream-record.
One notable thing is that it uses the async command shit. Not sure
whether this is a good idea. Maybe not, but whatever. Also, a user can
always use the "async" prefix to pretend it doesn't.
Much of this was barely tested (especially the reinterleaving crap),
let's just hope it mostly works. I'm sure you can tolerate the one or
other crash?
Until now, the following could happen: if you set a 1GB forward cache,
and a 1GB backward cache, and you opened a 2GB file, it would prune away
the data cached at the start as playback progressed past the 50% mark.
With this commit, nothing gets pruned, because the total memory usage
will still be 2GB, which equals the total allowed memory usage of 1GB +
1GB.
There are no explicit buffers (every packet is malloc'ed and put into a
linked list), so it all comes down to buffer size computations. Both
reader and prune code use these sizes to decide whether a new packet
should be read / an old packet discarded. So just add the remaining free
"space" from the forward buffer to the available backward buffer. Still
respect if the back buffer is set to 0 (e.g. unseekable cache where it
doesn't make sense to keep old packets).
We need to make sure that the forward buffer can always append, as long
as the forward buffer doesn't exceed the set size, even if the back
buffer "borrows" free space from it. For this reason, always keep 1 byte
free, which is enough to allow it to read a new packet. Also, it's now
necessary to call pruning when adding a packet, to get back "borrowed"
space that may need to be free'd up after a packet has been added.
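As a rough sketch of the new decision (illustrative only, not the actual
demux.c code; names are made up), pruning of old packets kicks in only
once the back buffer exceeds its own limit plus whatever the forward
buffer currently leaves unused, minus the 1 reserved byte:

#include <stdbool.h>
#include <stddef.h>

static bool should_prune(size_t fw_used, size_t fw_max,
                         size_t bw_used, size_t bw_max)
{
    if (bw_max == 0)
        return bw_used > 0; // unseekable cache: don't keep old packets at all
    // Free forward space the back buffer may "borrow", keeping 1 byte
    // reserved so a new packet can always be appended.
    size_t fw_free = fw_used + 1 < fw_max ? fw_max - fw_used - 1 : 0;
    return bw_used > bw_max + fw_free;
}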
I refrained from doing the same for forward caching (making forward
cache use unused backward cache). This would work, but has a
disadvantage. Assume playback starts paused. Demuxing will stop once the
allowed total cache size is reached. When unpausing, the
forward buffer will slowly move to the back buffer. That alone will not
change the total buffer size, so demuxing remains stopped. Playback
would need to pass over data of the size of the back buffer until
demuxing resumes; I consider this unacceptable. Live playback would break
(or rather, would not resume in unintuitive ways), even normal streaming
may break if the server invalidates the URL due to inactivity. As an
alternative implementation, you could prune the back buffer immediately,
so the forward buffer can grow, but then the back buffer would never
grow. Also makes no sense.
As far as the user interface is concerned, the idea is that the limits
on their own aren't really meaningful, the purpose is merely to vaguely
restrict the cache memory usage. There could be just a single option to
set the total allowed memory usage, but the separate backward cache
controls the default ratio of backward/forward cache sizes. From that
perspective, it doesn't matter if the backward cache uses more of the
total buffer than assigned, if the forward buffer is complete.
Make most of the demuxer options runtime-changeable. This includes the
cache options and stream recording. The manpage documents some of the
possibly weird issues related to this.
In particular, the disk cache isn't shuffled around if the setting
changes at runtime.
I once created this because someone wanted to use vapoursynth without
the Python dependency. No idea if anyone ever actually used it. It's
sort of icky (it calls itself "lazy" to preempt complaints about how
much it sucks), and complicates the build process. Kill it.
It seems much more promising to have something like this:
https://github.com/vapoursynth/vapoursynth/issues/386
This could solve the build distribution problem by relaxing the
Python dependency, and/or allow a Lua backend to be included without
pain.
This filter wasn't referenced anywhere and thus was dead code. It should
have been in the audio filter list in user_filters.c. This was intended
as a compatibility wrapper (to avoid breaking old command lines and config
files), and has no real use. Apparently I forgot to add it to the filter
list (did I even test this shit?), and so it was rotting around for 1.5
years doing nothing (just like myself).
Note that users can just use the libavfilter provided filter to force
resampling, just that it has a different name and different options.
There's also af_format to force inserting auto conversion through the
internal f_swresample filter.
Normally I use the OSC like this: not at all, but have a key binding
that does "cycle osc" to show it. And in that case, I don't really want
it to overlap the damn video.
I could use the zoom/pan options to move the video out of the way, but
this is also sort of annoying. Likewise, you could write a script or so
which does this automatically if the OSC appears, but that's still
annoying, and computing values for these options such that the video is
moved correctly is tricky.
So I added a bunch of options that set explicit video borders (previous
commit), and an option for the OSC to use them (this commit).
Disabled by default, since I'm afraid this is too awkward and
unpolished, especially with OSC default settings.
I'm also using "osc-visibility=always". Effectively, making the OSC
appear will box the video, and making it disappear (by unloading
osc.lua) will restore the video back to normal.
Semantics a bit questionable. This is done for the OSC (next commit),
and a comment added to the manpage explicitly states this. Meaning this is
probably garbage and needs to be revisited when the OSC changes and/or
someone wants to use this margin feature for something else.
Not sure about the subtitle thing. It's imaginable that someone uses
these options to create empty borders for subtitles on the bottom, so
subtitles should be located there. On the other hand, this gives a
rather unpolished user experience when using the (later added) OSC
feature to not overlap with the video. There's not much of a point if
the OSC still overlaps the video. However, I'm too lazy to think about
this, so it stays like it is.
Somewhat similar to the old --cache-file, except for the demuxer cache.
Instead of keeping packet data in memory, it's written to disk and read
back when needed.
The idea is to reduce main memory usage, while allowing fast seeking in
large cached network streams (especially live streams). Keeping the
packet metadata on disk would be rather hard (would use mmap or so, or
rewrite the entire demux.c packet queue handling), and since it's
relatively small, just keep it in memory.
Also for simplicity, the disk cache is append-only. If you're watching
really long livestreams, and need pruning, you're probably out of luck.
This still could be improved by trying to free unused blocks with
fallocate(), but since we're writing multiple streams in an interleaved
manner, this is slightly hard.
Some rather gross ugliness in packet.h: we want to store the file
position of the cached data somewhere, but on 32 bit architectures, we
don't have any usable 64 bit members for this, just the buf/len fields,
which add up to 64 bit - so the shitty union aliases this memory.
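The aliasing looks roughly like this (field names hypothetical; the real
packet.h differs in detail): on 32 bit systems, the pointer and the size
together occupy 64 bits, which is reused for the on-disk position:

#include <stdint.h>
#include <stddef.h>

struct cached_packet_storage {
    union {
        struct {                     // C11 anonymous struct/union
            void *buf;               // packet data in memory (meaningless
            size_t len;              //  while the data is on disk)
        };
        int64_t cached_data_pos;     // file position in the disk cache
    };
};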
Error paths untested. Side data (the complicated part of trying to
serialize ffmpeg packets) untested.
Stream recording had to be adjusted. Some minor details change due to
this, but probably nothing important.
The change in attempt_range_joining() is because packets in cache
have no valid len field. It was a useful check (heuristically
finding broken cases), but not a necessary one.
Various other approaches were tried. It would be interesting to list
them and to mention the pros and cons, but I don't feel like it.
Some OGG web radio streams use timestamp resets when a new song starts
(you can find those in Xiph's directory - other streams there don't show
this behavior). Basically, the OGG stream behaves like concatenated OGG
files, and "of course" the timestamps will start at 0 again when the
song changes. This is very inconvenient, and breaks the seekable demuxer
cache. In fact, any kind of seeking will break.
This is more time wasted in Xiph's bullshit. No, having timestamp resets
by design is not reasonable, and fuck you. I much prefer the awful
ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it
wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET
HAPPENS. But it doesn't, and the randomly changing timestamps are the
only thing we get from its API.
At this point, demux_lavf.c is like 90% hacks. But well, if libavformat
applies this strange mixture of being clever for us vs. giving us
unfiltered garbage (while pretending it abstracts everything, and hiding
_useful_ implementation/low level details), not much we can do.
This timestamp linearizing would, in general, probably be better done
after the decoder, because then we wouldn't need to deal with timestamp
resets. But the main purpose of this change is to fix seeking within the
demuxer cache, so we have to do it on the lowest level.
This can probably be applied to other containers and video streams too.
But that is untested. Some further caveats are explained in the manpage.
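Conceptually, the linearization amounts to something like the following
(a sketch, not the actual demux_lavf.c code, which needs per-stream
state and a more careful reset heuristic to avoid triggering on normal
small timestamp reordering):

#include <math.h>

// offset and prev_out are per-stream state, initialized to 0 and -INFINITY.
static double linearize_pts(double pts, double *offset, double *prev_out)
{
    double out = pts + *offset;
    if (out < *prev_out) {          // timestamp jumped backwards: assume reset
        *offset = *prev_out - pts;  // splice the new segment onto the old one
        out = pts + *offset;
    }
    if (out > *prev_out)
        *prev_out = out;
    return out;
}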
Until now, this usually passed a single audio frame to the decoder, and
then did a backstep operation (cache seek + frame search) again. This is
probably not very efficient, especially considering it has to search the
packet queue from the "start" every time again.
Also, with most audio codecs, an additional "preroll" frame was passed
first. In these cases, the preroll frame would make up 50% of audio
decoding time. Also not very efficient.
Attempt to fix this by returning multiple frames at once. This reduces
the number of backstep operations and the ratio of preroll frames. In
theory, this should help efficiency. I didn't test it though, why would
I do this? It's just a pain. Set it to an unscientific 10 frames.
(Actually, these are 10 keyframes, so it's much more for codecs like
TrueHD. But I don't care about TrueHD.)
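Rough numbers to illustrate the ratio (assuming still one preroll
keyframe per backstep): previously, 1 preroll frame + 1 target frame
meant every second decoded frame was thrown away (the 50% above); with
10 target keyframes per backstep, only 1 of 11 is wasted, i.e. roughly
9% overhead.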
This commit changes some other implementation details. Since we can
return more than 1 non-preroll keyframe to the decoder, some new state
is needed to remember how much. The resume packet search is adjusted to
find N ("total") keyframe packets in general, not just preroll frames.
I'm removing the special case for 1 preroll packet; audio used this, but
doesn't anymore, and it's premature optimization anyway.
Expose the new mechanism with 2 new options. They're almost completely
pointless, since nobody will try them, and if they do, they won't
understand what these options truly do. And if they actually do, they
most likely would be capable of editing the source code, and we could
just hardcode the parameters. Just so you know that I know that the
added options are pointless.
The following two things are truly unrelated to this commit, and more
like general refactoring, but fortunately nobody can stop me.
Don't set back_seek_pos in dequeue_packet() anymore. This was sort of
pointless, since it was set in find_backward_restart_pos() anyway (using
some of the same packets). The latter function tries to restrict this to
the first keyframe range though, which is an optimization that in theory
might break with broken files (duh), but in these cases a lot of other
things would be broken anyway.
Don't set back_restart_* in dequeue_packet(). I think this is an
artifact of the old restart code (cf. ad9e473c55). It can be done
directly in find_backward_restart_pos() now. Although this adds another
shitty packet search loop, I prefer this, because it's clearer what's
actually happening.
Before this commit, there was a single process_decoded_frame() function.
It handled various aspects of dealing with a newly decoded frame. Move
some of these to a separate process_output_frame() function.
This new function is called in the order the frames are returned to the
playback core. correct_audio_pts() (formerly process_audio_frame())
becomes slightly less awkward due to this, and the timestamp smoothing
can actually work in backward playback mode now (thus moving p->pts out
of reset_decoder()).
Behavior for normal playback also changes subtly. This shouldn't matter
in sane cases, but if you mix broken files, --no-correct-pts, and
timeline stuff, differences in behavior might be visible.
Timeline clipping (EDL/ordered chapters) works now, because it's done
before "transforming" the timestamps. Audio timestamp smoothing happens
after it, which is a behavior change, but should be more correct. This
still runs crazy_video_pts_stuff() before everything else. On the other
hand, --no-correct-pts or missing timestamp processing is done last. But
these things didn't really work with timeline before.
And add simpler aliases for the modes.
I'm not sure how to name things, and the option list is in general full
of different conventions. Some names are shortened, some are explicit
and long.
I guess options that have a chance to be used normally (i.e. not obscure
tuning or debugging) should have short and convenient names.
In this specific case, play-direction is like a mixture of both. It
should be either playback-direction or play-dir, not shortening one word
but not the other.
The convenience aliases are because I got sick of typing out "backward".
I guess "back" would also do it, but there's no proper antonym (and
maybe it's "wrong" in the strict sense of the word).
Together with the previous commit, this seems to make backward playback
work in files with vorbis and mp3 audio codecs.
For Vorbis (with libavcodec's decoder, didn't test libvorbis), the first
packet was just always completely discarded. This happened even though
we tell libavcodec that we do discarding of padding manually. It simply
happened inside the codec, not libavcodec's general initial padding
handling. In addition, the first output decoded frame seems to contain
partial data. (Unlike the opus decoder, it doesn't report any padding at
all.)
The Opus decoder (again libavcodec only tested) reports an initial
padding, but it appears to be too small, and it sounds right only with 2
packets discarded. So its status doesn't change.
I'm not sure why I need 2 frames for mp3, but with that value I had
success on the samples I tested.
Clarify existing semantics for the --start/--end/--length options.
De-emphasize the difference between absolute and relative timestamps,
since they've not been different by default since mpv 0.14.
Document a bug that also happens to work as a feature: if the option
value begins with spaces, the code for checking for relative timestamps
is inactive, and they're always considered absolute. The check is done
on the first character of the string - so even a negative timestamp will
be treated as absolute.
Yes, this is useful in extremely rare situations, such as when you
really want to send a specific timestamp (even a negative one) to the
demuxer.
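For example (an illustration, not taken from the docs): passing
--start=' -3' (note the leading space inside the quotes) should send -3
to the demuxer as an absolute timestamp, since the value skips the
relative-timestamp handling entirely.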
This changes the behavior of the --ab-loop-a/b options. In addition, it
makes it work with backward playback mode.
The most obvious change is that both the A and B points need to be
set now before any looping happens. Unlike before, unset points don't
implicitly use the start or end of the file. I think the old behavior
was a feature that was explicitly added/wanted. Well, it's gone now.
This is because of 2 reasons:
1. I never liked this feature, and it always got in my way (as user).
2. It's inherently annoying with backward playback mode.
In backward playback mode, the user wants to set A/B in the wrong order.
The ab-loop command will first set A, then B, so if you use this command
during backward playback, A will be set to a higher timestamp than B.
If you switch back to forward playback mode, the loop would stop
working. I want the loop to just continue to work, and the chosen
solution conflicts with the removed feature.
The order issue above _could_ be fixed by also switching the AB-loop
user option values around on direction switch. But there are no other
instances of option changes magically affecting other options, and doing
this would probably lead to unexpected misery (dying from corner cases
and such).
Another solution is sorting the A/B points by timestamps after copying
them from the user options. Then A/B options set in backward mode will
work in forward mode. This is the chosen solution. If you sort the
points, you don't know anymore whether the unset point is supposed to
signify the end or the start of the file.
The AB-loop code is slightly better abstracted now, so it should be easy
to restore the removed feature. It would still require coming up with a
solution for backwards playback, though.
A minor change is that if one point is set and the other is unset, I'm
rendering both the chapter markers and the marker for the set point.
Why? I don't know. My test file had chapters, and I guess I decided this
looked better.
This commit also fixes some subtle and obvious issues that I already
forgot about when I wrote this commit message. It cleans up some minor
code duplication and nonsense too.
Regarding backward playback, the code uses an unsanitary mix of internal
("transformed") and user timestamps. So the play_dir variable appears
more than usual.
To mention one unfixed issue: if you set an AB-loop that is completely
past the end of the file, it will get stuck in an infinite seeking loop
once playback reaches the end of the file. Fixing this reliably seemed
annoying, so the fix is "just don't do this". It's not a hard freeze
anyway.
Has been deprecated for almost 3 years. Manpage didn't mention the
deprecation, but CLI and release notes did. It wouldn't be much effort
to keep this option working, but I just don't see the damn point.
--start/--end can specify chapters using special syntax, which is
equivalent.
This commit generally fixes backward playing in wav, at least in most
PCM cases.
libavformat's wav demuxer (and actually all other raw PCM based
demuxers) has a specific behavior that breaks backward demuxing. The
same thing also breaks persistent seek ranges in the demuxer cache,
although that's less critical (it just means some cached data gets
discarded). The backward demuxing issue is fatal, will log the message
"Demuxer not cooperating.", and then typically stop doing anything.
Unlike modern media formats, these formats don't organize media data in
packets, but just wrap a monolithic byte stream that is described by a
header. This is good enough for PCM, which uses fixed frames (a single
sample for all audio channels), and for which it would be too expensive
to have per frame headers.
libavformat (and mpv) is heavily packet based, and using a single packet
for each PCM frame causes too much overhead. So they typically "bundle"
multiple frames into a single packet. This packet size is obviously
arbitrary, and in libavformat's case hardcoded in its source code.
The problem is that seeking doesn't respect this arbitrary packet
boundary. Seeking is sample accurate. You can essentially seek inside a
packet. The resulting packets will not be aligned with previously
demuxed packets. This is normally OK.
Backward seeking (and some other demuxer layer features) expect that
demuxing an earlier demuxed file position eventually results in the same
packets, regardless of the seeks that were done to get there. I like to
call this "deterministic" demuxing. Backward demuxing in particular
requires this to avoid overlaps, which would make it rather hard to get
continuous output.
Fix this issue by detecting wav and hopefully other raw audio formats
with a heuristic (even PCM needs to be detected heuristically). Then, if
a seek is requested, align the seek timestamps on the guessed number of
samples in the audio packets returned by the demuxer.
The heuristic excludes files with multiple streams. (Except "attachment"
video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on
WAV files.) Such files will inherently use the packet concept in some
way.
We don't know how the demuxer chooses the internal packet size, but we
assume that it's fixed and aligned to PCM frame sizes. The frame size is
most likely given by block_align (the native wav frame size, according
to Microsoft). We possibly need to explicitly read and discard a packet
if the seek is done without reading anything before that. We ignore any
subsequent packet sizes; we need to avoid the very last packet, which
likely has a different size.
This hack should be rather benign. In the worst case, it will "round"
the seek target a little, but the maximum rounding amount is bounded.
Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't
bother.
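The rounding itself is just something along these lines (a hypothetical
helper, not the actual code): round the seek target down to a multiple
of the guessed packet duration, relative to the start of the stream:

#include <math.h>

static double align_seek_pts(double pts, double start_pts,
                             int samples_per_packet, int samplerate)
{
    double pkt_dur = (double)samples_per_packet / samplerate;
    double offset = pts - start_pts;
    return start_pts + floor(offset / pkt_dur) * pkt_dur;
}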
An earlier commit fixed the same issue for mpv's demux_raw.
An alternative, and probably much better solution would be clipping
decoded data by timestamp. demux.c could allow the type of overlap the
wav demuxer introduces, and instruct the decoder to clip the output
against the last decoded timestamp. There's already an infrastructure
for this (demux_packet.end field) used by EDL/ordered chapters.
Although this sounds like a good solution, mpv unfortunately uses floats
for timestamps. The rounding errors break sample accuracy. Even if you
used integers, you'd need a timebase that is sample accurate (not always
easy, since EDL can merge tracks with different sample rates).
As well as other filtering. I was writing this with the assumption that
timestamps go backwards (which I first planned to do). But in fact,
timestamps go forward, frame durations are positive, and adding a frame
duration to a timestamp yields the correct result. The only strange
thing is that timestamps are negative.
Also, media of course goes backwards. In other possible implementation,
filters would see normal forward playback, interrupted by seeks or
discontinuities. It turns out the current implementation of providing a
continuous backward media stream is probably better for filters.
Even deinterlacing seems to work. libavcodec always outputs fields as
interleaved frames (i.e. fields are not reversed), and making up
timestamps for the new frames (when doubling the framerate) works
exactly like in the forward case.
Actually the previous paragraph was a lie, and libavcodec does not
output fields as interleaved frames in rare cases. Sometimes AVFrame
contains single fields. In this case you'd need to invert the field
dominance for deinterlacing filters to work correctly.
The way backward playback is implemented doesn't break basic assumptions
about timestamps after the decoder, so I guess all the encoding mode
needs to do is to adjust for the start offset, which it already does.
Though I might be wrong and my test was possibly flawed.
Stream recording on the other hand will fail immediately with
--record-file, and --stream-record will probably yield unexpected
results if any backstep seeks are done.
Make --audio-backward-overlap default to 2 for Opus. I have no idea why
this is needed. It seems to fix backward decoding though (going purely
by listening).
Normally, this should not be needed, since initial padding is completely
contained within the first packet (normally, and in the case I tested).
So the 2nd packet/frame should be fine, but for some unknown reason it
works only with the 3rd.
The only reasonable solution to this is probably to make discarding of
preroll frames based on timestamps, instead of frame/packet count. But
then you get issues with video and its dumb timestamp reordering. So for
now, fuck it.
This seems more useful in general. This change also happens to fix a
miscounting of preroll packets when some of them were "rounded" away,
which could make it get stuck.
Also a simple intra-refresh encode with x264 (and muxed to mkv by it)
seems to work now. I guess I misinterpreted earlier results.
Just "mpv file.mkv --play-direction=backward" did not work, because
backward demuxing from the very end was not implemented. This is another
corner case, because the resume mechanism so far requires a packet
"position" (dts or pos) as reference. Now "EOF" is another possible
reference.
Also, the backstep mechanism could cause streams to find different
playback start positions, basically leading to random playback start
(instead of what you specified with --start). This happens only if
backstep seeks are involved (i.e. no cached data yet), but since this is
usually the case at playback start, it always happened. It was racy too,
because it depended on the order the decoders on other threads requested
new data. The comment below "resume_earlier" has some more blabla.
Some other details are changed.
I'm giving up on the "from_cache" parameter, and don't try to detect the
situation when the demuxer does not seek properly. Instead, always seek
back, hopefully some more.
Don't try to adjust the backstep seek target by a random value of 1.0
seconds anymore. Instead, always rely on the arbitrary value provided by the
user via --demuxer-backward-playback-step. If the demuxer should really
get "stuck" and somehow miss the seek target badly, or the user sets the
option value to 0, then the demuxer will not make any progress and just
eat CPU. (Although due to backward seek semantics used for backstep
seeks, even a very small seek step size will work. Just not 0.)
It seems this also fixes backstepping correctly when the initial seek
ended at the last keyframe range. (The explanation above was about the
case when it ends at EOF. These two cases are different. In the former,
you just need to step to the previous keyframe range, which was broken
because it didn't always react correctly to reaching EOF. In the latter,
you need to do a separate search for the last keyframe.)
Simple enough to do. May have mixed results. Typically, bitmap subtitles
will have a tight bounding box around the rendered text. But if for
example there is text on the top and bottom, it may be a single big
bitmap with a large transparent area between top and bottom. In
particular, DVD subtitles are really just a single screen-sized
RLE-encoded bitmap, though libavcodec will crop off transparent areas.
Like with sd_ass, you can't move subtitles _down_ if they are already in
their origin position. This could probably be improved, but I don't want
to deal with that right now.
Not specifying a --start or using --start=100% with
--play-direction=backward usually does not work. The demuxer gets no
packets and immediately enters EOF state, which then hangs because
backward playback mode neither considers this case, nor propagates the
EOF.
As far as demuxer implementations are concerned, this behavior is OK and
even wanted. Seeking near the end with SEEK_FORWARD set is allowed not
to return any packets (so a normal relative forward seek as done by the
user would end playback). Seeking exactly to the end or past it without
SEEK_FORWARD set is probably also sane.
Another vaguely related issue is that a backward seek during playback
start does not "establish" the demux position correctly: if stream A
hits the next keyframe and seeks back, while stream B has not had a
chance to read a packet yet, then stream B will never try to read from
the old position. The effect is that stream B (and thus playback) will
effectively miss the seek target. This is "random" because it depends on
the order and number of packet read calls made by the decoders.
Fixing this is probably hard, and requires extending the already complex
state machine with more states, so turn the manpage into a TODO list for
now.
Raw audio formats can be accessed sample-wise, and logically audio
packets demuxed from it would contain only 1 sample. This is
inefficient, so raw audio demuxers typically "bundle" multiple samples
in one packet.
The problem for the demuxer cache and backward playback is that they
need properly aligned packets to make seeking "deterministic". The
requirement is that if you read some packets, and then seek back, you
eventually see the same packets again. demux_raw basically allowed
seeking into the middle of a previously returned packet, which makes it
impossible to make the transition seamless. (Unless you'd be aware of
the packet data format and cut them to make it seamless, which is too
complex for such a use case.)
Solve this by always aligning seeks to packet boundaries. This reduces
the seek accuracy to the arbitrarily chosen packet size. But you can use
hr-seek to fix this. The gain from not making raw audio an awful special
case is worth this "stupid" suggestion to use hr-seek.
It appears this also fixes that it could and did seek into the middle of
the frame (not sure if this code was ever tested - it goes back to
removing the code duplication between the former demux_rawaudio.c and
demux_rawvideo.c).
If you really cared, you could introduce a seek flag that controls
whether the seek is aligned or not. Then code which requires
"deterministic" demuxing could set it. But this isn't really useful for
us, and we'd always set the flag anyway, unless maybe the caching were
forced disabled.
libavformat's wav demuxer exhibits the same issue. We can't fix it (it
would require the unpleasant experience of contributing to FFmpeg), so
document this in options.rst. In theory, this also affects seek range
joining, but the only bad effect should be that cached data is
discarded.
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally crash.
That would be a bug. It possibly _could_ start a fire by entering some
sort of endless loop, not a literal one, just something where it tries
to do work without making progress.)
(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)
How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.
The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a mess of bugs
breaks correct slicing of segments, so EDL/ordered chapter playback is
broken in backward direction).
Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).
The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.
Backward display is the most weird (and potentially buggy) part. To
avoid that we need to touch a LOT of timing code, we negate all
timestamps. The basic idea is that due to the negation, all
comparisons and subtractions of timestamps keep working, and you don't
need to touch every single of them to "reverse" them.
E.g.:
bool before = pts_a < pts_b;
would need to be:
bool before = forward
? pts_a < pts_b
: pts_a > pts_b;
or:
bool before = pts_a * dir < pts_b * dir;
or if you, as it's implemented now, just do this after decoding:
pts_a *= dir;
pts_b *= dir;
and then in the normal timing/renderer code:
bool before = pts_a < pts_b;
Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.
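For example (conceptual, in the same spirit as the snippet above; the
variable names are made up), crossing between the two domains is just
another multiplication by the direction, with dir = -1 during backward
playback:
display_pts = internal_pts * dir;
internal_pts = user_seek_pts * dir;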
Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past the decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there really likely
are subtle problems of all sorts.
As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)
VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.
FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
ytdl_hook.lua essentially uses these headers to implement parts of DASH.
Hopefully the FFmpeg DASH demuxer gets usable at some point, and/or mpv
gets a proper DASH demuxer. In any case, these EDL hacks could get
removed as soon as they get unnecessary and too annoying.
Used by the next commit. It mostly exposes part of mp4_dash
functionality. It actually makes little sense other than for ytdl
special-use. See next commit.
The ytdl wrapper can resolve web links to playlists. This playlist is
passed as big memory:// blob, and will contain further quite normal web
links. When playback of one of these playlist entries starts, ytdl is
called again and will resolve the web link to a media URL again.
This didn't work if playlist entries resolved to EDL URLs. Playback was
rejected with a "potentially unsafe URL from playlist" error. This was
completely weird and unexpected: using the playlist entry directly on
the command line worked fine, and there isn't a reason why it should be
different for a playlist entry (both are resolved by the ytdl wrapper
anyway). Also, if the only EDL URL was added via audio-add or sub-add,
the URL was accessed successfully.
The reason this happened is because the playlist entries were marked as
STREAM_SAFE_ONLY, and edl:// is not marked as "safe". Playlist entries
passed via command line directly are not marked, so resolving them to
EDL worked.
Fix this by making the ytdl hook set load-unsafe-playlists while the
playlist is parsed. (After the playlist is parsed, and before the first
playlist entry is played, file-local options are reset again.) Further,
extend the load-unsafe-playlists option so that the playlist entries are
not marked while the playlist is loaded.
Since playlist entries are already verified, this should change nothing
about the actual security situation.
There are now 2 locations which check load_unsafe_playlists. The old one
is a bit redundant now. In theory, the playlist loading code might not
be the only code which sets these flags, so keeping the old code is
somewhat justified (and in any case it doesn't hurt to keep it).
In general, the security concept sucks (and always did). I can for
example not answer the question whether you can "break" this mechanism
with various combinations of archives, EDL files, playlists files,
compromised sites, and so on. You probably can, and I'm fully aware that
it's probably possible, so don't blame me.
This commit adds an extension to mpv EDL, which basically allows you to
do the same as --audio-file, --external-file, etc. in a single EDL file.
This is a relatively quick & dirty implementation. The dirty part lies
in the fact that several shortcuts are taken. For example, struct
timeline now forms a singly linked list, which is really weird, but also
means the other timeline-using demuxers (cue, mkv) don't need to be
touched. Also, memory management becomes even worse (weird object
ownership rules that are just fragile WTFs). There are some other
dubious small changes, mostly related to the weird representation of
separate streams.
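The linked list shortcut looks roughly like this (field names are made
up; the real struct timeline carries more state):

struct timeline_par;               // one EDL segment/part (details elided)

struct timeline {
    struct timeline_par *pars;     // segments making up this virtual stream
    int num_pars;
    struct timeline *next;         // timeline of the next separate stream, or NULL
};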
demux_timeline.c contains the actual implementation of the separate
stream handling. For the most part, most things that used to be on the
top level are now in struct virtual_source, of which one for each
separate stream exists. This is basically like running multiple
demux_edl.c in parallel. Some changes could strictly speaking be split
into a separate commit, such as the stream_map type change.
Mostly untested. Seems to work for the intended purpose. Potential for
regressions for other timeline uses (like ordered chapters) is probably
low. One thing which could definitely break and which I didn't test is
the pseudo-DASH fragmented EDL code, of which ytdl can trigger various
forms in obscure situations. (Uh why don't we have a test suite.)
Background:
The intention is to use this for the ytdl wrapper. A certain streaming
site from a particularly brain damaged and plain evil Silicon Valley
company usually provides streams as separate audio and video streams.
The ytdl wrapper simply does use audio-add (i.e. adding it as external
track, like with --audio-file), which works mostly fine. Unfortunately,
mpv manages caching completely separately for external files. This has
the following potential problems:
1. Seek ranges are rendered incorrectly. They always use the "main"
stream, in this case the video stream. E.g. clicking into a cached range
on the OSC could trigger a low level seek if the audio stream is
actually not cached at the target position.
2. The stream cache bloats unnecessarily. Each stream may allocate the
full configured maximum cache size, which is not what the user intends
to do. Cached ranges are not pruned the same way, which creates disjoint
cache ranges, which only use memory and won't help with fast seeking or
playback.
3. mpv will try to aggressively read from both streams. This is done
from different threads, with no regard which stream is more important.
So it might happen that one stream starves the other one, especially if
they have different bitrates.
4. Every stream will use a separate thread, which is an unnecessary
waste of system resources.
In theory, the following solutions are available (this commit works
towards D):
A. Centrally manage reading and caching of all streams. A single thread
would do all I/O, and decide from which stream it should read next. As
long as the total TCP/socket buffering is not too high, this should be
effective to avoid starvation issues. This can also manage the cached
ranges better. It would also get rid of the quite useless additional
demuxer threads. This solution is conceptually simple, but requires
refactoring the entire demuxer middle layer.
B. Attempt to coordinate the demuxer threads. This would maintain a
shared cache and readahead state to solve the mentioned problems
explicitly. While this sounds simple and like an incremental change,
it's probably hard to implement, creates more messy special cases,
and solution A seems just a better and simpler variant of this. (On the
other hand, A. requires refactoring more code.)
C. Render an intersection of the seek ranges across all streams. This
fixes only problem 1.
D. Merge all streams in a dedicated wrapper demuxer. The general demuxer
layer remains unchanged, and reading from separate streams is handled as
special case. This effectively achieves the same as A. In particular,
caching is simply handled by the usual demuxer cache layer, which sees
the wrapper demuxer as a single stream of interleaved packets. One
implementation variant of this is to reuse the EDL infrastructure, which
this commit does.
All in all, solution A would be preferable, because it's cleaner and
works for all external streams in general.
Some previous commit tried to prepare for implementing solution A. This
could still happen. But it could take years until this is finally
seriously started and finished. In any case, this commit doesn't block
or complicate such attempts, which is also why it's the way to go.
It's worth mentioning that original mplayer handles external files by
creating a wrapper demuxer. This is like a less ideal mixture of A. and
D. (The similarity with A. is that extending the mplayer approach to be
fully dynamic and without certain disadvantages caused by the wrapper
would end up with A. anyway. The similarity with D. is that due to the
wrapper, no higher level code needs to be changed.)
EDLs can be provided either as external file, or "inline" as a big
edl:// URL. There is no difference between them, except if it's loaded
from an external file, there is some weird filename sanitation going on
(see fix_filenames() in demux_edl.c). It seems this is intended to be a
security mechanism, but probably makes no sense at all.
Note that playlists are allowed to access anything locally. One
difference to playlists is that the EDL code lacks the "security"
mechanism when accessing playlist entries (see handling of the
playlist_entry.stream_flags field - EDL would need something similar),
so don't remove that, as I'm unaware of the exact consequences.
Extending the client-allocated mpv_opengl_drm_params struct
constituted a break of ABI that could cause UB.
Create a clean break by deprecating "drm_params" and related structs
and enum values, and replacing it with "drm_params_v2".
Also fix some comments and code that wrongly assumed that open could
return any other negative number than -1 for failure.
This commit updates the libmpv version to 1.104
Originally, vo_gpu/vo_opengl considered the case of Nvidia proprietary
drivers, which required vdpau/GLX, and Intel open source drivers, which
require vaapi/EGL. Since window creation and GPU context creation are
inseparable in mpv's internal API, it had to pick the correct API very
early, or hardware decoding wouldn't work. "x11probe" was introduced for
this reason. It created a GLX context (without showing the window yet),
and checked whether vdpau was available. If yes, it used GLX, if not, it
continued probing x11/EGL. (Obviously it couldn't always fail on GLX
without vdpau, which is why it was a separate "probe" backend.)
Years passed, and now the situation is different. Vdpau is dead. Nvidia
drivers and libavcodec now provide CUDA interop, which requires EGL, and
fixes some of the vdpau problems. AMD drivers now provide vaapi, which
generally works better than vdpau. Intel didn't change.
In particular, vaapi provides working HEVC Main10 support. In theory, it
should work on vdpau too, with quality reduction (no 10 bit surfaces),
but I couldn't get it to work.
So always prefer EGL. And suddenly hardware decoding works. This is
actually rather important, because HEVC is unfortunately on the rise,
despite shitty encoders and unoptimized decoders. The latter may mean
that hardware decoding works better than software decoding via libavcodec.
This should have been done a long, long time ago.
The "program" property could switch between TS programs. It was rather
complex and rather obscure (even if you deal with TS captures, you
usually don't need it). If anyone actually needs it (did anyone ever
attempt to even use it?), it should be rewritten. The demuxer should
export a program list, and the frontend should handle the "cycling"
logic.
Linux analog TV support (via tv://) was excessively complex, and
whenever I attempted to use it (cameras or loopback devices), it didn't
work well, or would have required some major work to update it. It's
very much stuck in the analog past (my favorite are the frequency tables
in frequencies.c for analog TV channels which don't exist anymore).
Especially cameras and such work fine with libavdevice and better than
tv://, for example:
mpv av://v4l2:/dev/video0
(adding --profile=low-latency --untimed even makes it mostly realtime)
Adding a new input layer that targets such "modern" uses would be
acceptable, if anyone is interested in it. The old TV code is just too
focused on actual analog TV.
DVB is rather obscure, but has an active maintainer, so don't remove it.
However, the demux/stream ctrl layer must go, so remove controls for
channel switching. Most of these could be reimplemented by using the
normal method for option runtime changes.
This removes anything related to DVD/BD/CD that negatively affected the
core code. It includes trying to rewrite timestamps (since DVDs and
Blurays do not set packet stream timestamps to playback time, and can
even have resets mid-stream), export of chapters, stream languages,
export of title/track lists, and all that.
Only basic seeking is supported. It is very much possible that seeking
completely fails on some discs (on some parts of the timeline), because
timestamp rewriting was removed.
Note that I don't give a shit about optical media. If you want to watch
them, rip them. Keeping some bare support for DVD/BD is the most I'm
going to do to appease the type of lazy, obnoxious users who will care.
There are other players which are better at optical discs.
stream_dvd.c contained large amounts of ancient, unmaintained code,
which has been historically moved to libdvdnav. Basically, it's full of
low level parsing of DVD on-disc structures.
Kill it for good. Users can use the remaining dvdnav support (which
basically operates in non-menu mode). Users have reported that
libdvdread sometimes works better, but this is just libdvdnav's problem
and not ours.
This is a straightforward parallel implementation of error diffusion
algorithms in a compute shader. Basically we use a single work group with
maximal possible size to process the whole image. After a shift
mapping we are able to process all pixels column by column.
A large ring buffer is allocated in shared memory to speed things up.
However, the size of the required shared memory depends linearly on the
height of the video window (or the screen height in fullscreen mode). In
case there is not enough shared memory, it will fall back to
`--dither=fruit`.
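As a rough sketch of that fallback decision (the per-row byte count
below is made up; the real requirement depends on the error diffusion
kernel and the image format):

#include <stdbool.h>

// shmem_limit would come from
// glGetIntegerv(GL_MAX_COMPUTE_SHARED_MEMORY_SIZE, ...).
static bool error_diffusion_fits(int height, int shmem_limit)
{
    int per_row_bytes = 8 * (int)sizeof(float); // hypothetical ring buffer cost
    return height * per_row_bytes <= shmem_limit;
}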
The maximal allowed work group size is hardcoded as 1024. Ideally we
could query `GL_MAX_COMPUTE_WORK_GROUP_INVOCATIONS`. But for whatever
reason, it seems most high end cards from nvidia and amd support only
the minimal required value, so I guess we can stick to it for now.
I assume (but cannot confirm) that VA-AP-API is in fact a typo, because
most if not all search engine results related to it are from mpv's manual
page.
By changing this to VA-API and clarifying that this requires VA-API support
on a system to use it, we can hopefully make it clear to unsuspecting
Windows users that this is not the filter they're looking for.
Concerns #6690.
This allows selecting the drm mode using a string specification. You
can either select the preferred mode, the mode with the highest
resolution, a mode specified as WxH[@R], or a mode by its index in the
list of modes as before.
This was implemented by using OPT_STRING_VALIDATE for drm-mode,
instead of OPT_INT. Using a string here also prepares for future
additions to drm-mode that aim to allow specifying a mode by its
resolution.
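For illustration, the string forms described above would look roughly
like this (the exact keyword spellings are whatever the option's
validator accepts; these are assumptions based on the description):
    --drm-mode=preferred
    --drm-mode=highest
    --drm-mode=1920x1080@60
    --drm-mode=4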
It is useful when debugging to be able to force atomic off, or as a
workaround if atomic breaks for some user. Legacy modesetting is less
likely to break by virtue of being a less complex API.
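For example, a user hitting atomic modesetting bugs could try something
like the following (assuming the new switch is the drm-atomic option;
the name is taken from this change's description, not verified here):
    --drm-atomic=no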
Half of the materials we used were deprecated with macOS 10.14, broken,
and not supported by runtime changes of the macOS theme. Furthermore,
our styling names were completely inconsistent with the actual look
since macOS 10.14; e.g. ultradark got a lot brighter and couldn't be
considered ultradark anymore.
I decided to drop the old option --macos-title-bar-style and rework
the whole mechanism to allow more freedom. Now materials and appearance
can be set separately. Even if Apple changes the look or semantics in
the future, the new options can be easily adapted.
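As a hedged illustration of the new split (option names per this
change; the values are just examples of what the material/appearance
enums might accept):
    --macos-title-bar-material=dark
    --macos-title-bar-appearance=vibrantDark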
Manual changes done:
* Merged the interface-changes under the already master'd changes.
* Moved the hwdec-related option changes to video/decode/vd_lavc.c.
Rather than the linear cd/m^2 units, these (relative) logarithmic units
lend themselves much better to actually detecting scene changes,
especially since the scene averaging was changed to also work
logarithmically.
In theory our "eye adaptation" algorithm works both ways: darkening
bright scenes and brightening dark scenes. But I've always
just prevented the latter with a hard clamp, since I wanted to avoid
blowing up dark scenes into looking funny (and full of noise).
But allowing a tiny bit of over-exposure might be a good thing. I won't
change the default just yet (better let users test), but a moderate
value of 1.2 might be better than the current 1.0 limit. Needs testing
especially on dark scenes.
The previous approach of using an FIR with a tunable hard threshold
for scene changes had several problems:
- the FIR involved annoying hard-coded buffer sizes and high VRAM
usage, and the FIR sum was prone to numerical overflow, which limited
the number of frames we could average over.
- the hard scene change detection was prone to both false positives and
false negatives, each with their own (annoying) issues.
Scrap this entirely and totally redesign the scene change detection:
switch to a dual approach of using a simple single-pole IIR low pass
filter to smooth out noise, while using a
softer scene change curve (with tunable low and high thresholds), based
on `smoothstep`. The IIR filter is extremely simple in its
implementation and has an arbitrarily user-tunable cutoff frequency,
while the smoothstep-based scene change curve provides a good, tunable
tradeoff between adaptation speed and stability - without exhibiting
either of the traditional issues associated with the hard cutoff.
Another way to think about the new options is that the "low threshold"
provides a margin of error within which we don't care about small
fluctuations in the scene (which will therefore be smoothed out by the
IIR filter).
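A rough sketch of that combination in C (made-up names and constants;
the real logic lives in mpv's peak detection code and shaders):

    #include <math.h>

    // Single-pole IIR low pass smoothing the measured scene average.
    // 'cutoff' is the user-tunable cutoff frequency, 'dt' the frame
    // duration in seconds.
    static float iir_step(float state, float sample, float cutoff, float dt)
    {
        float alpha = 1.0f - expf(-2.0f * 3.14159265f * cutoff * dt);
        return state + alpha * (sample - state);
    }

    // Soft scene-change weight: 0 below the low threshold, 1 above the
    // high threshold, smoothstep in between.
    static float scene_weight(float delta, float lo, float hi)
    {
        float x = (delta - lo) / (hi - lo);
        x = x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
        return x * x * (3.0f - 2.0f * x);
    }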
Instead of desaturating towards luma, we desaturate towards the
per-channel tone mapped version. This essentially provides a smooth
roll-off towards the "hollywood"-style (non-chromatic) tone mapping
algorithm, which works better for bright content, while continuing to
use the "linear" style (chromatic) tone mapping algorithm for primarily
in-gamut content.
We also split up the desaturation algorithm into strength and exponent,
which allows users to use less aggressive desaturation settings without
affecting the overall curve.
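A hedged sketch of the idea (not the actual shader math; names and the
exact weighting are illustrative):

    #include <math.h>

    // linear_tm:  chromatic ("linear") tone-mapped RGB
    // percomp_tm: per-channel ("hollywood") tone-mapped RGB
    // sig:        signal level driving the roll-off
    static void desaturate(const float linear_tm[3],
                           const float percomp_tm[3], float sig,
                           float strength, float exponent, float out[3])
    {
        // Weight grows with brightness; strength scales it, exponent
        // shapes the curve without changing the overall strength.
        float w = strength * powf(sig, exponent);
        w = w < 0.0f ? 0.0f : (w > 1.0f ? 1.0f : w);
        for (int i = 0; i < 3; i++)
            out[i] = (1.0f - w) * linear_tm[i] + w * percomp_tm[i];
    }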
Too many broken hardware decoders. Noticed wrong decoding of a video
file encoded with x262 on RX Vega when using VAAPI (Mesa 18.3.2).
Looks fine with swdec and a cheap hardware BD player.
Reverts 017f3d0674e48a587b9e6cd7a48f15519c799c3e
--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but everything you're receiving.
This option has been deprecated upstream for a long time, probably
doesn't even work anymore, and won't work moving forwards as we replace
the vulkan code by libplacebo wrappers.
I haven't removed the option completely yet since in theory we could
still add support for e.g. a native glslang wrapper in the future. But
most likely the future of this code is deletion.
As an aside, fix an issue where the man page didn't mention d3d11.
This commit bumps the libmpv version to 1.102
drm-osd-plane -> drm-draw-plane
drm-video-plane -> drm-drmprime-video-plane
drm-osd-size -> drm-draw-surface-size
"draw plane", as in the plane that OpenGL draws to, whether it be
video + OSD or just OSD.
"drmprime video plane", as in the plane used for hwdec video imported
via drmprime.
"draw surface size", as in the size of the surface used for the draw plane
The new names are invariant of whether hwdec_drmprime_drm is being
used. The original naming was very confusing, as when doing
regular rendering (swdec or vaapi) the video would be displayed on the
"OSD plane", and the "Video plane" would remain unused.
Add general primary/overlay plane option to drm-osd-plane-id and
drm-video-plane-id, so that the user can just request any usable
primary or overlay plane for either of these two options. This should
be somewhat more user-friendly (especially as neither of these two
options currently have a useful help function), as usually you would
only be interested in the type of the plane, and not exactly which
plane gets picked.
Despite their place in the tree, hwdecs can be loaded and used just
fine by the vulkan GPU backend.
In this change we add Vulkan interop support to the cuda/nvdec hwdec.
The overall process is mostly straight forward, so the main observation
here is that I had to implement it using an intermediate Vulkan buffer
because the direct VkImage usage is blocked by a bug in the nvidia
driver. When that gets fixed, I will revisit this.
Nevertheless, the intermediate buffer copy is very cheap as it's all
device memory from start to finish. Overall CPU utilisation is pretty
much the same as with the OpenGL GPU backend.
Note that we cannot use a single intermediate buffer - rather there
is a pool of them. This is done because the cuda memcpys are not
explicitly synchronised with the texture uploads.
In the basic case, this doesn't matter because the hwdec is not
asked to map and copy the next frame until after the previous one
is rendered. In the interpolation case, we need extra future frames
available immediately, so we'll be asked to map/copy those frames
and vulkan will be asked to render them. So far, harmless right? No.
All the vulkan rendering, including the upload steps, are batched
together and end up running very asynchronously from the CUDA copies.
The end result is that all the copies happen one after another, and
only then do the uploads happen, which means all textures end up with
the same, final, frame data. Whoops. Unsurprisingly this results in
jerky motion because every 3/4 frames are identical.
The buffer pool ensures that we do not overwrite a buffer that is
still waiting to be uploaded. The ra_buf_pool implementation
automatically checks if existing buffers are available for use and
only creates a new one if it really has to. It's hard to say for sure
what the maximum number of buffers might be but we believe it won't
be so large as to make this strategy unusable. The highest I've seen
is 12 when using interpolation with tscale=bicubic.
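Conceptually the pool behaves roughly like this (simplified C sketch,
not the actual ra_buf_pool interface):

    #include <stdbool.h>
    #include <stdlib.h>

    struct buf { bool in_flight; /* GPU buffer handle etc. */ };
    struct pool { struct buf *bufs; int num; };

    // Reuse a buffer that is no longer pending upload; otherwise grow
    // the pool instead of overwriting data the GPU still needs.
    static struct buf *pool_get(struct pool *p)
    {
        for (int i = 0; i < p->num; i++) {
            if (!p->bufs[i].in_flight)
                return &p->bufs[i];
        }
        p->bufs = realloc(p->bufs, (p->num + 1) * sizeof(*p->bufs));
        p->bufs[p->num] = (struct buf){0};
        return &p->bufs[p->num++];
    }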
A future optimisation here is to synchronise the CUDA copies with
respect to the vulkan uploads. This can be done with shared semaphores
that would ensure the copy of the second frames only happens after the
upload of the first frame, and so on. This isn't trivial to implement
as I'd have to first adjust the hwdec code to use asynchronous cuda;
without that, there's no way to use the semaphore for synchronisation.
This should result in fewer intermediate buffers being required.
Since linear downscaling makes sense to handle independently from
linear/sigmoid upscaling, we split this option up. Now,
linear-downscaling is its own option that only controls linearization
when downscaling and nothing more. Likewise, linear-upscaling /
sigmoid-upscaling are two mutually exclusive options (the latter
overriding the former) that apply only to upscaling and no longer
implicitly enable linear light downscaling as well.
The old behavior was very confusing, as evidenced by issues such
as #6213. The current behavior should make much more sense, and only
minimally breaks backwards compatibility (since using linear-scaling
directly was very uncommon - most users got this for free as part of
gpu-hq and relied only on that).
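For example, after this change the two behaviors can be requested
independently (option names per this change; the flag values are
assumed):
    --linear-downscaling=yes
    --sigmoid-upscaling=yes   (or --linear-upscaling=yes; mutually exclusive)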
Closes #6213.
Someone on IRC pointed out that the default stats bindings weren't
documented in the interactive control section of the manual, so
let's add them with a short mention and a reference to the STATS
section of the manual.
By default the pixel format creation falls back to a software renderer
when everything else fails. This is mostly needed for VMs. Additionally,
one can directly request a sw renderer or exclude it entirely.
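Assuming the new switch is the cocoa-cb-sw-renderer option (name not
verified here), that would look roughly like:
    --cocoa-cb-sw-renderer=yes   (force the software renderer)
    --cocoa-cb-sw-renderer=no    (never fall back to it)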
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
Duration is parsed as an integer, and the default value is used if `-1`
is passed. Passing `-` as described here causes a parameter value error.
The player fully restarts playback when the edition or disk title is
changed. Before this, the player tried to reinitialize playback
partially. For example, it did not print a new "Playing: <file>"
message, and did not send playback end to libmpv users (scripts or
applications).
This playback restart code was a bit messy and could have unforeseen
interactions with various state. There have been bugs before. Since it's
a mostly cosmetic thing for an obscure feature, just change it to a full
restart. This works well, though since it may have consequences for
scripts or client API users, mention it in interface-changes.rst.
The only effective difference is that the former explicitly checks
whether the JSON value type is string, and errors out if not. The rest
is exactly the same (mpv_set_property_string is mpv_set_property with
MPV_FORMAT_STRING).
It seems silly to keep this, so just remove it.
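For reference, these two calls do the same thing through the client
API (real functions from client.h; the "pause" property is just an
example):

    #include <mpv/client.h>

    void example(mpv_handle *h)
    {
        mpv_set_property_string(h, "pause", "yes");

        // Equivalent: mpv_set_property with MPV_FORMAT_STRING takes a
        // pointer to the string pointer.
        char *val = "yes";
        mpv_set_property(h, "pause", MPV_FORMAT_STRING, &val);
    }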
Until now, stopping playback aborted the demuxer and I/O layer violently
by signaling mp_cancel (bound to libavformat's AVIOInterruptCB
mechanism). Change it to try closing them gracefully.
The main purpose is to silence those libavformat errors that happen when
you request termination. Most of libavformat barely cares about the
termination mechanism (AVIOInterruptCB), and essentially it's like the
network connection is abruptly severed, or file I/O suddenly returns I/O
errors. There were issues with dumb TLS warnings, parsers complaining
about incomplete data, and some special protocols that require server
communication to gracefully disconnect.
We still want to abort it forcefully if it refuses to terminate on its
own, so a timeout is required. Users can set the timeout to 0, which
should give them the old behavior.
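For example (assuming the timeout introduced here is exposed as the
demuxer-termination-timeout option; check the manual for the real
name):
    --demuxer-termination-timeout=0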
This also removes the old mechanism that treats certain commands (like
"quit") specially, and tries to terminate the demuxers even if the core
is currently frozen. This is for situations where the core synchronized
to the demuxer or stream layer while network is unresponsive. This in
turn can only happen due to the "program" or "cache-size" properties in
the current code (see one of the previous commits). Also, the old
mechanism doesn't fit particularly well with the new one. We wouldn't
want to abort playback immediately on a "quit" command - the new code is
all about giving it a chance to end it gracefully. We'd need some sort
of watchdog thread or something equally complicated to handle this. So
just remove it.
The change in osd.c prevents it from clearing the status line while
waiting for termination. The normal status line code doesn't output
anything useful at this point, and the code path taken clears it; both
of these would be an annoying behavior change, so just let it show the
old one.
Before this change, only 1 command or so had named arguments. There is
no reason why other commands can't have them, except that it's a bit of
work to add them.
Commands with a variable number of arguments are inherently
incompatible with named arguments, such as the "run" command. They
still have dummy
names, but obviously you can't assign multiple values to a single named
argument (unless the argument has an array type, which would be
something different). For now, disallow using named argument APIs with
these commands. This might change later.
2 commands are adjusted to not need a separate default value by changing
flag constants. (The numeric values are C only and can't be set by
users.)
Make the command syntax in the manpage more consistent. Now none of the
allowed choice/flag names are in the command header, and all arguments
are shown with their proper name and quoted with <...>.
Some places in the manpage and the client.h doxygen are updated to
reflect that most commands support named arguments. In addition, try to
improve the documentation of the syntax and need for escaping etc. as
well.
(Or actually most uses of the word "argument" should be "parameter".)
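As a hedged illustration via the JSON IPC: the object/map form carries
the command name in "name" and the remaining keys are the named
parameters. The parameter names shown here are assumptions; the manpage
lists the real ones per command.
    { "command": { "name": "seek", "target": 30, "flags": "absolute" } }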