Replace use of .min==1 with a proper flag. This is a good idea, because
it has nothing to do with numeric limits (also see commit 9d32d62b61
for how this can go wrong).
With this, m_option.min/max are strictly used for numeric limits.
Ancient Linux audio output. Apparently it survived until now, because
some BSDs (but not all) still made use of it. But these should work with
ao_sdl or ao_openal too (that's why these AOs exist after all). ao_oss
itself has the problem that it's virtually unmaintainable from my point
of view due to all the subtle (or non-subtle) differences. Look at the
ifdef mess and the multiple code paths (that shouldn't exist) in the
removed source code.
I wonder what this even is. I've never heard of anyone using it, and
can't find a corresponding library that actually builds with it. Good
enough to remove.
It was always marked as "experimental", and had inherent problems that
were never fixed. It was disabled by default, and I don't think anyone
is using it.
Looks like the recent change to this actually made it crash whenever
audio happened to be initialized first, due to not setting the
mux_stream field before the on_ready callback. Hack a way around this.
Also remove a stray unused variable from ao_lavc.c.
In shared mode, we previously tried to feed the full native format to
IsFormatSupported in the hopes that the "closest match" returned was
actually that.
Turns out, IsFormatSupported will always return the mix format if we
don't use the mix format's sample rate. This will also clobber our
choice of channel map with the mix format channel map even if our
desired channel map is supported due to surround emulation.
The solution is to not bother trying to use anything other than the mix
format sample rate. While we're at it, we might as well use the mix
format PCM sample format (always float32) since this conversion will
happen anyway and may avoid unnecessary dithering to intermediate
integer formats if we are already resampling or channel mixing.
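To illustrate (a rough sketch only, not mpv's actual ao_wasapi code; the
helper name is made up, and the usual COM setup is omitted):

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>

/* Take the shared-mode mix format as the baseline: keep its sample rate
 * and float32 sample format, and later only swap in the desired channel
 * count/mask before calling IsFormatSupported() to confirm. */
static WAVEFORMATEX *get_base_format(IAudioClient *client)
{
    WAVEFORMATEX *mix = NULL;
    if (FAILED(IAudioClient_GetMixFormat(client, &mix)))
        return NULL;
    return mix; /* caller adjusts channels, then frees with CoTaskMemFree() */
}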
Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize the .offset and .type fields only,
sometimes also .priv and others.
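To illustrate the shape of this (with a made-up, simplified struct, not
the real m_option machinery):

#include <stddef.h>

struct demo_opts { int volume; };
struct demo_opt { const char *name; int type; int offset; double min, max; };

/* expands to only *part* of the initializer */
#define DEMO_OPT_INT(field) \
    .type = 1, .offset = offsetof(struct demo_opts, field)

static const struct demo_opt demo_options[] = {
    /* old style: the whole entry came out of one macro call, e.g.
     *   DEMO_OPT_INT_OLD("volume", volume, 0, 100)
     * new style: the name comes first, the macro fills .type/.offset,
     * everything else is written as designated initializers: */
    {"volume", DEMO_OPT_INT(volume), .min = 0, .max = 100},
};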
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c the author of some option declarations confused the
flags parameter with the default value (though the default value was
also properly set below). I fixed this with this change.
This seems to be an older bug. It set priv->outputfilename to a new
talloc-allocated string, but the field is also managed as string option,
so talloc will free it first, then m_option_free() is called on the
dangling pointer. Possibly this is caused by the earlier ta destruction
order change.
Before this commit, option declarations used M_OPT_MIN/M_OPT_MAX (and
some other identifiers based on these) to signal whether an option had
min/max values. Remove these flags, and instead make a range apply
implicitly whenever min<max is true.
This requires care in all cases when only M_OPT_MIN or M_OPT_MAX were
set (instead of both). Generally, the commit replaces all these
instances with using DBL_MAX/DBL_MIN for the "unset" part of the range.
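As a sketch of the implicit-range rule (simplified, not the actual
m_option code):

#include <float.h>
#include <stdbool.h>

struct range { double min, max; };

static bool range_active(struct range r)
{
    return r.min < r.max;   /* no extra flag needed */
}

/* only a lower bound: the "unset" end stays at DBL_MAX */
static const struct range at_least_one = { .min = 1, .max = DBL_MAX };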
This also happens to fix some cases where you could pass over-large
values to integer options, which were silently truncated, but now cause
an error.
This commit has some higher potential for regressions.
Move the "old" mostly command line parsing and option management related
code to m_config_frontend.c/h. Move the code that enables other parts
of the player to access options to m_config_core.c/h. "frontend" is used
for lack of a better name.
Unfortunately, the separation isn't quite clean yet. m_config_frontend.c
still references some m_config_core.c implementation details, and
m_config_new() is even left in m_config_core.c for now. There are some odd
functions that should be removed as well (marked as "Bad functions").
Fixing these things requires more changes and will be done separately.
struct m_config is left with the current name to reduce diff noise.
Also, since there are a _lot_ source files that include m_config.h, add
a replacement m_config.h that "redirects" to m_config_core.h.
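Such a redirect header can be as trivial as this (sketch; the real file
may contain a bit more):

#pragma once
#include "m_config_core.h"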
Let's see how much everyone hates this. Leaving it enabled seems
problematic, because libavcodec returns an unspecific error if it
doesn't like it.
Fixes: #6300
Libav seems rather dead: no release for 2 years, no new git commits in
master for almost a year (with one exception ~6 months ago). From what I
can tell, some developers resigned themselves to the horrifying idea to
post patches to ffmpeg-devel instead, while the rest of the developers
went on to greener pastures.
Libav was a better project than FFmpeg. Unfortunately, FFmpeg won,
because it managed to keep the name and website. Libav was pushed more
and more into obscurity: while there was initially a big push for Libav,
FFmpeg just remained "in place" and visible for most people. FFmpeg was
slowly draining all manpower and energy from Libav. A big part of this
was that FFmpeg stole code from Libav (regular merges of the entire
Libav git tree), making it some sort of Frankenstein mirror of Libav,
think decaying zombie with additional legs ("features") nailed to it.
"Stealing" surely is the wrong word; I'm just aping the language that
some of the FFmpeg members used to use. All that is in the past now, I'm
probably the only person left who is annoyed by this, and with this
commit I'm putting this decade long problem finally to an end. I just
thought I'd express my annoyance about this fucking shitshow one last
time.
The most intrusive change in this commit is the resample filter, which
originally used libavresample. Since the FFmpeg developer refused to
enable libavresample by default for drama reasons, and the API was
slightly different, the filter used some big preprocessor mess to make
it compatible with libswresample. All that falls away now. The
simplification to the build system is also significant.
A previous commit moved the underrun reporting to report_underruns(),
and called it from get_space(). One reason was that I worried about
printing a log message from a "realtime" callback, so I tried to move it
out of the way. (Though there's little justification other than a bad
feeling. While an older version of the pull code tried to avoid any
mutexes at all in the callback to accommodate "requirements" from APIs
like jackaudio, we gave up on that. Nobody has complained yet.)
Simplify this and move underrun reporting back to the callback. But
instead of printing the message from there, move the message into the
playloop. Change the message slightly, because ao->log is inaccessible,
and without the log prefix (e.g. "[ao/alsa]"), some context is missing.
AOs can report audio underruns, but only ao_alsa and ao_sdl (???)
currently do so. If the AO was marked as not reporting it, the cache
state was used to determine whether playback was interrupted due to slow
input.
This caused problems in some cases, such as video with very low video
frame rate: when a new frame is displayed, a new frame has to be
decoded, and since it's so much further into the file (long frame
durations), the cache gets into an underrun state for a short moment,
even though both audio and video are playing fine. Enlarging the audio
buffer didn't help.
Fix this by making all AOs report underruns. If the AO driver does not
report underruns, fall back to using the buffer state.
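The decision boils down to something like this (names made up, just a
sketch of the logic described above):

#include <stdbool.h>

/* Prefer the AO's own underrun report when the driver is known to report
 * underruns reliably; otherwise infer it from the buffer/cache state. */
static bool audio_underrun(bool driver_reports_underruns,
                           bool driver_underrun_flag,
                           bool buffer_ran_empty)
{
    if (driver_reports_underruns)
        return driver_underrun_flag;
    return buffer_ran_empty;
}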
pull.c behavior is slightly changed. Pull AOs are normally intended to
be used by pseudo-realtime audio APIs that fetch an audio buffer from
the API user via callback. I think it makes no sense to consider a
buffer underflow not an underrun in any situation, since we return
silence to the reader. (OK, maybe the reader could check the return
value? But let's not go there as long as there's no implementation.)
Remove the flag from ao_sdl.c, since it just worked via the generic
mechanism. Make the redundant underrun message verbose only.
push.c seems to log a redundant underflow message when resuming (because
somehow ao_play_data() is called when there's still no new data in the
buffer). But since ao_alsa does its own underrun reporting, and I only
use ao_alsa, I don't really care.
Also in all my tests, there seemed to be a rather high delay until the
underflow was logged (with audio only). I have no idea why this happened
and didn't try to debug this, but there's probably something wrong
somewhere.
This commit may cause random regressions.
See: #7440
If ao_add_events() is used, but all events flags are already set, then
we don't need to wakeup the core again.
Also, make the underrun message "exact" by avoiding the race condition
mentioned in the comment.
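Roughly the idea (a sketch only; the real ao struct and field names are
different):

#include <pthread.h>
#include <stdbool.h>

struct demo_ao {
    pthread_mutex_t lock;
    int events;                      /* bitfield of pending AO_EVENT_* flags */
    void (*wakeup_cb)(void *ctx);
    void *wakeup_ctx;
};

/* Returns whether any new event bit was actually set; the core wakeup is
 * skipped if all requested flags were already pending. */
static bool demo_ao_add_events(struct demo_ao *ao, int events)
{
    pthread_mutex_lock(&ao->lock);
    bool is_new = (ao->events & events) != events;
    ao->events |= events;
    pthread_mutex_unlock(&ao->lock);
    if (is_new)
        ao->wakeup_cb(ao->wakeup_ctx);
    return is_new;
}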
Avoiding redundant wakeups is not really worth the trouble; it's just a
bonus of the change that makes ao_underrun_event() return whether a new
underrun was set, which is needed by the following commit.
Before this commit, runtime changes were only applied if something else
caused audio to be reinitialized. Now setting them reinitializes audio
explicitly.
Just an implementation detail that can be cleaned up now. Internally,
m_config maintains a tree of m_sub_options structs, except for the root
it was not defined explicitly. GLOBAL_CONFIG was a hack to get access to
it anyway. Define it explicitly instead.
It's hard to see what FFmpeg does or what its API requires. It looks
like the alignment in our own allocation code might be slightly too
lenient, but who knows. Even if this is not needed, upping the alignment
only wastes memory and doesn't do anything bad.
(Note that the only reason why we have our own code is because FFmpeg
doesn't even provide it as API. API users are forced to recreate this,
even if they have no need for custom allocation!)
The "amultiply" filter crashes in AVX mode on unaligned access if an
audio pointer is unaligned (on 32 or 64 bytes I assume).
A requirement that audio data needs to be aligned isn't documented
anywhere. In our case, the data is still sample- and channel-aligned,
which is completely sane. Sure, you can imagine optimizations which make
some algorithms even faster by requiring higher alignment. But, and this
is a big but, you shouldn't crash api users because you just invented a
new undocumented requirement. And even more importantly, your
user-crashing optimization won't matter because it's just a trivial
algorithm working on audio. You don't need to pretend to be an
optimization devil, and nobody will give you a prize for this. But no,
let's randomly make API users crash (and then probably blame them for it!)
for something that wouldn't matter at all.
Not to mention that they do "document" some requirements on _video_
data, yet their vf_crop probably can still produce unaligned video
pointers. Oh how hilarious that the same documentation also talks about
libswscale alignment requirements. (This is weird because libswscale is
just one of many, many things which consume video data. Also did you
know that zimg, written in C++ and using intrinsics, i.e. the antithesis
to FFmpeg development, is much faster than libswscale, easier to use,
and produces more correct results, even if you ignore that libswscale
flat out doesn't support some very important features?)
Fucking tired of this bullshit. Can't wait until someone comes up with a
better framework than this... (well let's not write this out).
Fix this by copying instead of adjusting the start pointer when skipping
samples. This makes general operations slower just to fix interoperating
with a single filter. Thank you for your "optimization", FFmpeg. Go die
in a fire.
Didn't check whether this is correct. It probably is? If the frame needs
to be copied (due to COW), and memory allocation fails, it just silently
(or audibly lol) doesn't skip samples, because a never-fail function can
suddenly fail. Well, who cares.
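For illustration, the skip-by-copy idea boils down to something like
this (a sketch, not the exact mp_aframe code):

#include <stdint.h>
#include <string.h>

/* Drop "skip" leading samples from a packed plane by moving the rest to
 * the start of the buffer, instead of advancing the data pointer and
 * losing its alignment. */
static void skip_samples_by_copy(uint8_t *plane, size_t sample_size,
                                 size_t samples, size_t skip)
{
    if (skip >= samples)
        return;   /* caller handles the "everything skipped" case */
    memmove(plane, plane + skip * sample_size,
            (samples - skip) * sample_size);
}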
Fixes: #7141
ad_lavc and vd_lavc use the lavc_process() helper to translate the
FFmpeg push/pull API to the internal filter API (which completely
mismatch, even though I'm responsible for both, just fucking kill me).
This interface was "slightly" too tight. It returned only a bool
indicating "progress", which was not enough to handle some cases (see
following commit).
While we're at it, move all state into a struct. This is only a single
bool, but we get the chance to add more if needed.
This fixes mpv falling asleep if decoding returns an error during
draining. If decoding fails when we already sent EOF, the state machine
stopped making progress. This left mpv just sitting around and doing
nothing.
A test case can be created with: echo $RANDOM >> image.png
This makes libavformat read a proper packet plus a packet of garbage.
libavcodec will decode a frame, and then return an error code. The
lavc_process() wrapper could not deal with this, because there was no
way to differentiate between "retry" and "send new packet". Normally, it
would send a new packet, so decoding would make progress anyway. If
there was "progress", we couldn't just retry, because it'd retry
forever.
This is made worse by the fact that it tries to decode at least two
frames before starting display, meaning it will "sit around and do
nothing" before the picture is displayed.
Change it so that on error return, "receiving" a frame is retried. This
will make it return the EOF, so everything works properly.
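A minimal sketch of that draining behavior (not mpv's actual
lavc_process(); error handling is stripped down):

#include <stdbool.h>
#include <libavcodec/avcodec.h>

static void drain_decoder(AVCodecContext *ctx, AVFrame *frame)
{
    avcodec_send_packet(ctx, NULL);    /* enter draining mode */
    while (true) {
        int ret = avcodec_receive_frame(ctx, frame);
        if (ret == AVERROR_EOF)
            break;                     /* fully drained, we're done */
        if (ret < 0)
            continue;                  /* decode error: retry receiving */
        av_frame_unref(frame);         /* got a frame; use it, then go on */
    }
}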
This is a high-risk change, because all these funny bullshit exceptions
for hardware decoding are in the way, and I didn't retest them. For
example, if hardware decoding is enabled, it keeps a list of packets,
that are fed into the decoder again if hardware decoding fails, and a
software fallback is performed. Another case of horrifying accidental
complexity.
Fixes: #6618
The code is very basic:
- only handles gamepads, could be extended for generic joysticks in the
future.
- only has button mappings for controllers natively supported by SDL2.
I heard more can be added through env vars; there are also ways to load
mappings from text files, but I'd rather not go there yet. Common ones
like Dualshock are supported natively.
- analog buttons (TRIGGER and AXIS) are mapped to discrete buttons using an
activation threshold (see the sketch after this list).
- only supports one gamepad at a time. the feature is intended to use
gamepads as evolved remote controls, not play multiplayer games in mpv :)
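The threshold mapping looks roughly like this (a sketch; the helper and
the threshold value are made up, not mpv's code):

#include <SDL.h>
#include <stdbool.h>

#define TRIGGER_THRESHOLD (SDL_JOYSTICK_AXIS_MAX / 2)

/* Treat an analog trigger/axis as a discrete button once it passes the
 * activation threshold. */
static bool trigger_pressed(SDL_GameController *gc, SDL_GameControllerAxis axis)
{
    return SDL_GameControllerGetAxis(gc, axis) > TRIGGER_THRESHOLD;
}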
This was all dead code. Commit 995c47da9a (over 3 years ago) removed all
uses of the controls.
It would be nice if AOs could apply a linear gain volume that only
affects the AO's audio stream, for low-latency volume adjustment and muting.
AOCONTROL_HAS_SOFT_VOLUME was supposed to signal this, but to use it,
we'd have to thoroughly check whether it really uses the expected
semantics, so there's really nothing useful left in this old code.
See previous commits. ao_sdl is worthless, but it might be a good test
for pull-based AOs.
This stops using the old underrun reporting if the new one is enabled.
Also, since an AO's behavior might in theory not match expectations,
this needs to be enabled for every single pull AO
separately.
For some reason, in certain cases I get multiple underrun warnings while
cache-pausing is active. It fills the cache, restarts the AO,
immediately underruns again, and then fills the cache again. I'm not
sure why this happens; maybe ao_sdl tries to catch up when it shouldn't.
Who knows.
I think this was _always_ wrong. Due to the line above the first changed
line, buffered_bytes==bytes always. I can only hope I broke this in a
less under-tested edit when I originally wrote this.
Fixes: c5a82f729b
AOs can now call ao_underrun_event() (in any context) if an underrun has
happened. It will print a message.
This will be used in the following commits. But for now, audio.c only
clears the underrun bit, so that subsequent underruns still print the
warning message.
Since the underrun flag will be used in fragile ways by the playback
state machine, there is the "reports_underruns" field that signals
strong support for underrun reporting. (Otherwise, underrun events will
not be used by it.)
This commit tries to prepare for better underrun reporting. The goal is
to report underruns relatively immediately. Until now, this happened
only when play() was called. Change this, and abuse that get_delay() is
called "relatively often" - this reports the underrun immediately in
practice.
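For example, an ALSA-flavored sketch of the idea (not the actual ao_alsa
code; the callback parameter is made up):

#include <alsa/asoundlib.h>

/* get_delay() is called often by the playloop, so checking for an XRUN
 * here reports underruns almost immediately instead of waiting for the
 * next play() call. */
static double get_delay_checked(snd_pcm_t *pcm, unsigned samplerate,
                                void (*report_underrun)(void *ctx), void *ctx)
{
    if (snd_pcm_state(pcm) == SND_PCM_STATE_XRUN) {
        report_underrun(ctx);
        return 0;
    }
    snd_pcm_sframes_t delay = 0;
    if (snd_pcm_delay(pcm, &delay) < 0)
        return 0;
    return delay / (double)samplerate;
}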
Background:
In commit 81e51a15f7 (and also e38b0b245e), we were quite confused
about ALSA underrun handling. The commit message showed uncertainty about
how case 3 happened, but it's blindingly obvious and simple.
Actually reading the code shows that ALSA does not have a concept of a
"final chunk" (or we don't use it). It's obvious we never pass the
AOPLAY_FINAL_CHUNK flag along to the ALSA API in any way. The only thing
we do is simply writing a partial fragment. Of course this will cause an
underrun. Doing a partial write saves us the trouble of padding the last
frame with silence, or so.
The main reason why the underrun message was avoided was that play() was
never called with a non-0 sample count again (except if reset() was
called before that). That was OK, at least the goal of avoiding the
unwanted message was reached. (And the original "bogus" message at end
of playback was perfectly correct, as far as ALSA goes.)
If network stalls, play() will be called again only once new data is
available. Obviously, this could take a long time, thus it's too late.
It turns out that case 2) mentioned in the previous commit happened
quite often when playback ended normally.
There is probably a legitimate underrun with normal buffer sizes (100
ms, 4 fragments, gapless audio in "weak" mode). This is a result of the
player waiting for video to end, and/or the time needed to kill the
video window. The former case means that it depends on your test case
whether it happens (a file where video ends slightly before audio is
less likely to trigger it).
This in turn is due to how gapless playback works. Avoiding a "gap"
requires queuing the audio of the next file without playing a
partial chunk (as AOPLAY_FINAL_CHUNK would do). The partial chunk is
then played as part of the first chunk played from the next file. But if
it detects "later" that there is no next file, it still needs to get rid
of the last fragment with AOPLAY_FINAL_CHUNK. At this point it's too
late, and an underrun may have actually happened. The way the player
uninits and reinits the entire playback engine for the next file in a
"serial" manner means it cannot know in advance whether this works.
This is the reason why the idiot who added the underrun exception for
the last chunk in play() was wrong (I wrote that btw., before you accuse
me of being rude). Yes, it's a real underrun, and you could probably
hear it.
This XRUN (aka underrun) message was printed in the following
situations:
1) legitimate underrun during playback
2) legitimate underrun when playing final chunk
3) bogus underrun when playing final chunk
The old underrun case (in play()) happens in cases 1) and 2) as well,
but 3) did not happen. It appears 3) is indeed something that happens,
although it's not known for sure. It's still pretty annoying, so remove
the new XRUN message.
When testing, care should be taken to play with buffer sizes, video
versus no video, and gapless enabled/disabled. Also, suspending the
player with Ctrl+Z in the terminal (SIGSTOP) and then resuming is a good
way to trigger a "normal" underrun.
This wasn't used at all in my tests, because it simply passed the
frame directly to libswresample. (And, by the way, will always do
that, because s64 is so obscure that literally NOTHING uses it except
a sample specifically created to test this code. Screw FFmpeg.)
This can be due to unsupported sample formats (see previous commits),
minor allocation failures, and similar things. For identifying the exact
cause it's buried too deep in abstractions. But most of the time it doesn't
happen anyway, since it's extremely rare that new audio formats are
added.
What an idiotic format. It makes no sense, and should have been
converted to S32 in the demuxer, rather than plague everyone with
another extremely obscure nonsense format. Why doesn't ffmpeg add S24
instead? That's an actually useful format.
May cause compilation failure with old FFmpeg or Libav libs, but I don't
care.
ioctl(..., SNDCTL_DSP_CHANNELS, &nchannels) for unsupported nchannels
does not return an error, and instead sets nchannels to the default
value.
Instead of failing with no audio, fall back to stereo.
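A sketch of that fallback (not the exact ao_oss code):

#include <sys/ioctl.h>
#include <sys/soundcard.h>

/* SNDCTL_DSP_CHANNELS "succeeds" even for unsupported channel counts and
 * just rewrites nchannels, so check the result and retry with stereo. */
static int set_channels(int fd, int want)
{
    int nchannels = want;
    if (ioctl(fd, SNDCTL_DSP_CHANNELS, &nchannels) == -1)
        return -1;
    if (nchannels != want) {
        nchannels = 2;   /* fall back to stereo */
        if (ioctl(fd, SNDCTL_DSP_CHANNELS, &nchannels) == -1 || nchannels != 2)
            return -1;
    }
    return nchannels;
}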
This flag makes mpv continue using the PulseAudio driver even if the
sink is suspended.
This can be useful if JACK is running with PulseAudio in bridge mode and
the sink-input assigned to mpv is the one JACK controls, thus being
suspended.
By forcing mpv to still use PulseAudio in this case, the user can now
adjust the sink to an unsuspended one.
This filter wasn't referenced anywhere and thus was dead code. It should
have been in the audio filter list in user_filters.c. This was intended
as compatibility wrapper (to avoid breaking old command lines and config
files), and has no real use. Apparently I forgot to add it to the filter
list (did I even test this shit?), and so it was rotting around for 1.5
years doing nothing (just like myself).
Note that users can just use the libavfilter provided filter to force
resampling, just that it has a different name and different options.
There's also af_format to force inserting auto conversion through the
internal f_swresample filter.
This may call memmove() with size==0 and a NULL data pointer. In
addition to this being UB with memmove(), I think it's UB to do
arithmetic on a NULL pointer too. Of course, this doesn't matter in
practice at all, and is just stupidity to torture programmers.
Fixes stupid messages with an opus/mkv test file that had an absurdly
huge codec delay.
This file fully skips several frames at the start. ad_lavc.c trimmed
these frames to 0 samples and returned them. The next layer
(f_decoder_wrapper.c) saw discontinuous PTS values, because the PTS
values increased by a frame, but amounted to 0 audio samples. This was
harmless, but logged PTS discontinuity errors.
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally crash.
That would be a bug. It possibly _could_ start a fire by entering some
sort of endless loop, not a literal one, just something where it tries
to do work without making progress.)
(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)
How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.
The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a buggy mess
breaks correct slicing of segments, so EDL/ordered chapter playback is
broken in backward direction).
Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).
The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.
Backward display is the most weird (and potentially buggy) part. To
avoid that we need to touch a LOT of timing code, we negate all
timestamps. The basic idea is that due to the negation, all
comparisons and subtractions of timestamps keep working, and you don't
need to touch every single of them to "reverse" them.
E.g.:
bool before = pts_a < pts_b;
would need to be:
bool before = forward
? pts_a < pts_b
: pts_a > pts_b;
or:
bool before = pts_a * dir < pts_b * dir;
or if you, as it's implemented now, just do this after decoding:
pts_a *= dir;
pts_b *= dir;
and then in the normal timing/renderer code:
bool before = pts_a < pts_b;
Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.
Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there really likely
are subtle problems of all sorts.
As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)
VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.
FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.