Lots of filters have generic internal function names like "process".
In a stack trace, all of the different filters show up under this same
name, which makes it hard to tell which filter is actually being
processed.
This renames these internal function names to carry the filter names.
This matches what had already been done for some filters.
No idea how things previously worked without having these set, but
apparently they did...
If this were a normal encoder-to-muxer case, we would utilize
`avcodec_parameters_to_context`, but alas, this is not.
Fixes: #13794
Fix DTS passthrough playback of 44.1 kHz content. Also, take into
account that there are some DTS variants with a sample rate of 96 kHz
(e.g. DTS 24/96); somehow they were wrongly recognized as 48 kHz by the
code. Don't rely on this "bug"; do it correctly. Now every sample rate
above 44.1 kHz is correctly treated as 48 kHz, and 44.1 kHz files are
treated as 44.1 kHz for bitstreaming.
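Roughly, the selection now works like this (a sketch only; the function
name and the handling of rates below 44.1 kHz are illustrative, not the
actual code):

```c
// Sketch: pick the rate used for DTS bitstreaming. Anything above
// 44100 Hz (e.g. the 96 kHz of DTS 24/96) is sent out as 48 kHz;
// 44.1 kHz content stays at 44.1 kHz.
static int dts_bitstream_rate(int sample_rate)
{
    if (sample_rate > 44100)
        return 48000;
    return sample_rate; // 44100 stays as-is; lower rates not covered here
}
```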
The avpkt was created once on decoder init but destroyed each time the
filter was destroyed, which obviously can't work. Move the packet
allocation to the filter init function instead.
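As a sketch, assuming the usual libavcodec packet API (the struct and
hook names are illustrative, not the actual filter code):

```c
#include <libavcodec/avcodec.h>

struct priv {               // illustrative filter state
    AVPacket *avpkt;
};

// Allocate the packet when the filter is created...
static int filter_init(struct priv *p)
{
    p->avpkt = av_packet_alloc();
    return p->avpkt ? 0 : -1;
}

// ...and free it when the filter is destroyed, so recreating the filter
// doesn't leave the decoder with a dangling packet.
static void filter_uninit(struct priv *p)
{
    av_packet_free(&p->avpkt);
}
```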
fixes: 4574dd5dc6
c784820454 introduced a bool option type as a replacement for the flag
type, but didn't actually complete the transition and remove the flag
type, because that would have been too much mundane work.
This has been a long standing annoyance - ffmpeg is removing
sizeof(AVPacket) from the API which means you cannot stack-allocate
AVPacket anymore. However, that is something we take advantage of
because we use short-lived AVPackets to bridge from native mpv packets
in our main decoding paths.
We don't think that switching these to `av_packet_alloc` is desirable,
given the cost of heap allocation, so this change takes a different
approach - allocating a single packet in the relevant context and
reusing it over and over.
That's fairly straightforward, with the main caveat being that
re-initialising the packet is unintuitive. There is no function that
does exactly what we need (what `av_init_packet` did). The closest is
`av_packet_unref`, which additionally frees buffers and side-data.
However, we don't copy those things - we just assign them in from our
own packet, so we have to explicitly clear the pointers before calling
`av_packet_unref`. But at least we can make a wrapper function for
that.
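Something like this, as a sketch (the function name is made up; the
point is only that the borrowed pointers are dropped before the unref):

```c
#include <libavcodec/avcodec.h>

// The packet's buffer and side data are merely borrowed from the mpv
// packet, so they must not be freed here. Clear them first and let
// av_packet_unref() reset the remaining fields to their defaults.
static void reset_avpacket(AVPacket *pkt)
{
    pkt->buf = NULL;
    pkt->side_data = NULL;
    pkt->side_data_elems = 0;
    av_packet_unref(pkt);
}
```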
The weirdest part of the change is the handling of the vtt subtitle
conversion. This requires two packets, so I had to pre-allocate two in
the context struct. That sounds excessive, but if allocating the
primary packet is too expensive, then allocating the secondary one for
vtt subtitles must also be too expensive.
This change is not conditional as heap allocated AVPackets were
available for years and years before the deprecation.
Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize only the .offset and .type fields, and
in some cases also .priv and a few others.
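Roughly, a declaration changes like this (a sketch; the exact option
name, macro arguments, and flags are illustrative):

```c
// Old style: the macro expands to the entire m_option initializer.
OPT_FLAG("mute", mute, UPDATE_AUDIO),

// New style: the name leads, the macro sets only .offset/.type, and
// anything else is a plain designated initializer.
{"mute", OPT_FLAG(mute), .flags = UPDATE_AUDIO},
```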
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c, the author of some option declarations confused
the flags parameter with the default value (though the default value was
also properly set below). I fixed this as part of this change.
Let's see how much everyone hates this. Leaving it enabled seems
problematic, because libavcodec returns an unspecific error if it
doesn't like it.
Fixes: #6300
Libav seems rather dead: no release for 2 years, no new git commits in
master for almost a year (with one exception ~6 months ago). From what I
can tell, some developers resigned themselves to the horrifying idea to
post patches to ffmpeg-devel instead, while the rest of the developers
went on to greener pastures.
Libav was a better project than FFmpeg. Unfortunately, FFmpeg won,
because it managed to keep the name and website. Libav was pushed more
and more into obscurity: while there was initially a big push for Libav,
FFmpeg just remained "in place" and visible for most people. FFmpeg was
slowly draining all manpower and energy from Libav. A big part of this
was that FFmpeg stole code from Libav (regular merges of the entire
Libav git tree), making it some sort of Frankenstein mirror of Libav,
think decaying zombie with additional legs ("features") nailed to it.
"Stealing" surely is the wrong word; I'm just aping the language that
some of the FFmpeg members used to use. All that is in the past now, I'm
probably the only person left who is annoyed by this, and with this
commit I'm putting this decade long problem finally to an end. I just
thought I'd express my annoyance about this fucking shitshow one last
time.
The most intrusive change in this commit is the resample filter, which
originally used libavresample. Since the FFmpeg developer refused to
enable libavresample by default for drama reasons, and the API was
slightly different, the filter used some big preprocessor mess to
make it compatible with libswresample. All that falls away now. The
simplification to the build system is also significant.
Just an implementation detail that can be cleaned up now. Internally,
m_config maintains a tree of m_sub_options structs, except that the
root was not defined explicitly. GLOBAL_CONFIG was a hack to get access to
it anyway. Define it explicitly instead.
ad_lavc and vd_lavc use the lavc_process() helper to translate the
FFmpeg push/pull API to the internal filter API (the two are a complete
mismatch, even though I'm responsible for both; just fucking kill me).
This interface was "slightly" too tight. It returned only a bool
indicating "progress", which was not enough to handle some cases (see
following commit).
While we're at it, move all state into a struct. This is only a single
bool, but we get the chance to add more if needed.
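As a sketch, the struct looks something like this (field names are
illustrative, not necessarily the real ones):

```c
#include <stdbool.h>

// Instead of returning a bare bool, lavc_process() now fills in a small
// state struct passed by the caller, which leaves room for more fields.
struct lavc_state {
    bool progress;      // the single bool mentioned above
    // more flags can be added here as needed (see the following commit)
};
```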
This fixes mpv falling asleep if decoding returns an error during
draining. If decoding fails when we already sent EOF, the state machine
stopped making progress. This left mpv just sitting around and doing
nothing.
A test case can be created with: echo $RANDOM >> image.png
This makes libavformat read a proper packet plus a packet of garbage.
libavcodec will decode a frame, and then return an error code. The
lavc_process() wrapper could not deal with this, because there was no
way to differentiate between "retry" and "send new packet". Normally, it
would send a new packet, so decoding would make progress anyway. If
there was "progress", we couldn't just retry, because it'd retry
forever.
This is made worse by the fact that it tries to decode at least two
frames before starting display, meaning it will "sit around and do
nothing" before the picture is displayed.
Change it so that on error return, "receiving" a frame is retried. This
will make it return the EOF, so everything works properly.
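A sketch of the retry in terms of the libavcodec pull API (condensed;
not the actual wrapper code):

```c
#include <libavcodec/avcodec.h>

// If receiving a frame fails with a real error (not EAGAIN/EOF), retry
// the receive instead of treating it as "send a new packet". During
// draining there is no packet left to send, and the retried receive
// eventually returns AVERROR_EOF, so the state machine keeps moving.
static int receive_frame_with_retry(AVCodecContext *avctx, AVFrame *frame)
{
    int ret = avcodec_receive_frame(avctx, frame);
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        ret = avcodec_receive_frame(avctx, frame);
    return ret;
}
```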
This is a high-risk change, because all these funny bullshit exceptions
for hardware decoding are in the way, and I didn't retest them. For
example, if hardware decoding is enabled, it keeps a list of packets,
that are fed into the decoder again if hardware decoding fails, and a
software fallback is performed. Another case of horrifying accidental
complexity.
Fixes: #6618
This can be due to unsupported sample formats (see previous commits),
minor allocation failures, and similar things. The exact cause is
buried too deep in abstractions to identify. But most of the time it
doesn't happen anyway, since it's extremely rare that new audio formats
are added.
Fixes stupid messages with an opus/mkv test file that had an absurdly
huge codec delay.
This file fully skips several frames at the start. ad_lavc.c trimmed
these frames to 0 samples and returned them. The next layer
(f_decoder_wrapper.c) saw discontinuous PTS values, because the PTS
values increased by a frame, but amounted to 0 audio samples. This was
harmless, but logged PTS discontinuity errors.
Apparently, for bit streaming, DTS-HD MA is specified to be handled as an
eight channel (7.1) bit stream, while DTS-HD HRA is specified to be
handled as a stereo bit stream.
Define a variable for this, and utilize it to set the correct values
for both the DTS-HD bit streaming rate and the channel count for the
SPDIF encoder.
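Roughly (a sketch using libavcodec's profile constants; the function
name is illustrative):

```c
#include <libavcodec/avcodec.h>

// DTS-HD MA is bitstreamed as an eight-channel (7.1) stream, DTS-HD HRA
// as a stereo stream; the SPDIF encoder's channel count follows that.
static int dtshd_spdif_channels(int profile)
{
    return profile == FF_PROFILE_DTS_HD_MA ? 8 : 2;
}
```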
Fixes: #6148
This was always a legacy thing. Remove it by applying an orgy of
mp_get_config_group() calls, and sometimes m_config_cache_alloc() or
mp_read_option_raw().
win32 changes untested.
MPlayer used this to distinguish multiple decoder wrappers (such as
libavcodec vs. binary codec loader vs. builtin decoders). It lost
meaning in mpv as non-libavcodec things were dropped. Now it doesn't
serve any purpose anymore.
Parsing was removed quite a while ago, and the recent filter change
removed any use of the internal family field. Get rid of it.
Use the decoder wrapper that was introduced for video. This removes all
code duplication the old audio decoder wrapper had with the video code.
(The audio wrapper was copy pasted from the video one over a decade ago,
and has been kept in sync ever since by the power of copy&paste. Since
the original copy&paste was possibly done by someone who did not
respond to the LGPL relicensing request, this should also remove all
doubts about
whether any of this code is left, since we now completely remove any
code that could possibly have been based on it.)
There is some complication with spdif handling, and a minor behavior
change (it will restrict the list of codecs to spdif if spdif is to be
used), but there should not be any difference in practice.
If feed_packet() ended with DATA_WAIT, the player should have gone to
sleep, until the demuxer wakes it up again when there is new data. But
the call to read_frame() unconditionally overwrote this status code, so
it never waited. The consequence was that the core burned CPU by
effectively polling the demuxer status, which was noticeable especially
when seeking in network streams (since seeking is async, decoders will
start out with having to wait for network).
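A sketch of the intended control flow (the helpers and status codes
mirror the description above, but this is illustrative, not the real
player code):

```c
enum data_status { DATA_OK, DATA_WAIT, DATA_AGAIN, DATA_EOF };

struct dec_ctx;                                     // hypothetical
enum data_status feed_packet(struct dec_ctx *ctx);  // hypothetical
enum data_status read_frame(struct dec_ctx *ctx);   // hypothetical

// Don't let read_frame() clobber a DATA_WAIT from feed_packet(); keep
// it, so the core sleeps until the demuxer wakes it up instead of
// polling it.
static enum data_status decode_step(struct dec_ctx *ctx)
{
    enum data_status st = feed_packet(ctx);
    if (st == DATA_WAIT)
        return st;
    return read_frame(ctx);
}
```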
Regression since commit 33e5755c.
The old code always tried to read a new packet first. Only once that
was read did it try to retrieve new video or audio
frames the decoder might already have decoded.
Change this to strictly read frames from the decoder until it signals
that it wants a new packet, and only then read and feed a new packet.
This is in theory nicer, follows the libavcodec recommended data flow,
and reduces the minimum latency by 1 frame.
This merely requires switching the order in which those calls are done.
Normally, the decoder will return only 1 frame until a new packet is
required. If we just fed it 1 packet, returned DATA_AGAIN, and waited
until the next frame was decoded, we would run the playloop 1 time too
often for no reason (which is fine but might have some overhead). To
avoid this, try to read a frame again after possibly feeding a packet.
For this reason, move the feed code and the read code into separate
functions, instead of merely moving the existing code around.
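In terms of the libavcodec API, the new order amounts to something like
this (a sketch; the real code goes through the demuxer and its own
wrapper functions):

```c
#include <libavcodec/avcodec.h>

// Pull frames until the decoder asks for a packet, push one, then pull
// again right away so the freshly decoded frame doesn't have to wait
// for an extra playloop iteration.
static int decode_step(AVCodecContext *avctx, AVPacket *pkt, AVFrame *frame)
{
    int ret = avcodec_receive_frame(avctx, frame);
    if (ret == AVERROR(EAGAIN) && avcodec_send_packet(avctx, pkt) >= 0)
        ret = avcodec_receive_frame(avctx, frame);
    return ret;
}
```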
The audio and video code for this particular thing is basically
duplicated. The idea is to unify them one day, so make the change to
both. (Doing this for video is the real motivation for this change, see
below.)
The video code change is slightly more complicated, because we have to
care about the framedrop counting (which is just a heuristic, but for
now considered better than nothing, and possibly considered required to
warn the user of framedrops happening - maybe).
Apparently this change helps with stalling streams on Android with the
mediacodec wrapper and mpeg2 decoder implementations which deinterlace on
decoding (and return 2 frames per packet).
Based on an idea and observations by tmm1.
A release has been made, so drop options deprecated for that release.
Also drop some options which have been deprecated a much longer time
before.
Also fix a typo in client-api-changes.rst.
The new_segment field was used to make the decoder data flow handler
aware of timeline boundaries, which are used for ordered chapters etc.
(anything that sets demuxer_desc.load_timeline). This broke seeking with the
demuxer cache enabled. The demuxer is expected to set the new_segment
field after every seek or segment boundary switch, so the cached packets
basically contained incorrect values for this, and the decoders were not
initialized correctly.
Fix this by getting rid of the flag completely. Let the decoders instead
compare the segment information by content, which is hopefully enough.
(In theory, two segments with same information could perhaps appear in
broken-ish corner cases, or in an attempt to simulate looping, and such.
I preferred the simple solution over others, such as generating unique
and stable segment IDs.)
We still add a "segmented" field to make it explicit whether segments
are used, instead of doing something silly like testing arbitrary other
segment fields for validity.
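"Compare by content" means roughly this (a sketch; the field names are
illustrative and do not match the real packet struct):

```c
#include <stdbool.h>

struct segment_info {       // illustrative stand-in for the real fields
    double start, end;      // segment boundaries
    const void *codec;      // per-segment codec parameters
};

// A decoder switches to a new segment when the information attached to
// a packet differs from what it saw last, instead of relying on a
// stateful new_segment flag set by the demuxer.
static bool same_segment(const struct segment_info *a,
                         const struct segment_info *b)
{
    return a->start == b->start && a->end == b->end && a->codec == b->codec;
}
```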
Cached seeking with timeline stuff is still slightly broken even with
this commit: the seek logic is not aware of the overlap that segments
can have, and the timestamp clamping that needs to be performed in
theory to account for the fact that a packet might contain a frame that
is always clipped off by segment handling. This can be fixed later.
This code could not be relicensed. The intention was to write new filter
code (which could handle both audio and video), but that's a bit of
work. Write some code that can do audio conversion (resampling,
downmixing, etc.) without the old audio filter chain code in order to
speed up the LGPL relicensing.
If you build with --disable-libaf, nothing in audio/filter/* is compiled
in. It breaks a few features, such as --volume, --af, pitch correction
on speed changes, replaygain.
Most likely this adds some bugs, even if --disable-libaf is not used.
(How the fuck does EOF notification work again anyway?)
This is pretty pointless, but I believe it allows us to claim that the
new code is not affected by the copyright of the old code. This is
needed, because the original mp_audio struct was written by someone who
has disagreed with LGPL relicensing (it was called af_data at the time,
and was defined in af.h).
The "GPL'ed" struct contents that surive are pretty trivial: just the
data pointer, and some metadata like the format, samplerate, etc. - but
at least in this case, any new code would be extremely similar anyway,
and I'm not really sure whether it's OK to claim different copyright. So
what we do is we just use AVFrame (which of course is LGPL with 100%
certainty), and add some accessors around it to adapt it to mpv
conventions.
Also, this gets rid of some annoying conventions of mp_audio, like the
struct fields that require using an accessor to write to them anyway.
For the most part, this change is only dumb replacements of mp_audio
related functions and fields. One minor actual change is that you can't
allocate the new type on the stack anymore.
Some code still uses mp_audio. All audio filter code will be deleted, so
it makes no sense to convert this code. (Audio filters which are LGPL
and which we keep will have to be ported to a new filter infrastructure
anyway.) player/audio.c uses it because it interacts with the old filter
code. push.c has some complex use of mp_audio and mp_audio_buffer, but
this and pull.c will most likely be rewritten to do something else.
This API isn't deprecated (yet?), but it's still inferior and harder to
use than avcodec_free_context().
Leave the call only in 1 case in af_lavcac3enc.c, where we apparently
seriously close and reopen the encoder for whatever reason.
Use avcodec_free_context() instead of random other calls. Actually it
was already used in the second case, but calling avcodec_close() is
redundant.
Don't crash if allocating a codec context fails.
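For reference, a sketch of the preferred pattern (illustrative helper,
not the actual code):

```c
#include <libavcodec/avcodec.h>

// Allocation can fail; check it instead of crashing. Teardown is a
// single avcodec_free_context() call (it closes the codec if needed and
// NULLs the pointer), so a preceding avcodec_close() is redundant.
static AVCodecContext *open_ctx(const AVCodec *codec)
{
    AVCodecContext *avctx = avcodec_alloc_context3(codec);
    if (!avctx)
        return NULL;
    if (avcodec_open2(avctx, codec, NULL) < 0)
        avcodec_free_context(&avctx);   // frees and sets avctx to NULL
    return avctx;
}
```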
All relevant authors of the current code have agreed.
As always, there are the usual historical artifacts that could be
mentioned. For example, there used to be a large number of decoders
by various authors who were not asked, but whose code was all 100%
removed. (Mostly due to FFmpeg providing all codecs.)
One point of contention is that Nick Kurshev might have refactored the
old audio decoder code in 2001. Basically, there are hints that it might
have been done by him, such as Arpi's commit message stating that the
code was imported from MPlayerXP (Nick's fork), or all the files having
his name in the "maintainer" field. On the other hand, the murky history
of ad.h weakens this - it could be that Arpi started this work, and Nick
took it (and possibly finished it).
In any case, Nick could not be reached, so there is no agreement for
LGPL relicensing from him. We're changing the license anyway, and assume
that his change in itself is not copyrightable. He only moved code, and
in addition used the equivalent video decoder framework (done by Arpi,
who agreed) as a template. For example, ad_functions_s was basically
vd_functions_s, with the signature of the decode callback changed to
match audio_decode(). ad_functions_s also had a comment that said
it interfaces with "video decoder drivers" (I'm fixing this comment in
this commit).
I verified that no additional code was added that is copyright-relevant,
still in today's code, and not copied from the existing code at the time
(either from the previous audio decoder code or the video framework
code). What apparently matters here is that none of the old code was
written by Nick, that the authors of the old code have given their
agreement, and (probably) that Nick didn't add actual new code (none
that would have survived), that was not trivially based on the old one
(i.e. no new copyrightable "work").
A copyright expert told me that this kind of change can be considered
not relevant for copyright, so here we go.
Rewriting this would end with the same code anyway, and the naming
conventions can't be copyrighted.
All authors have agreed.
Commit 94d3170bd0 is a bit murky: Nick could not be reached, and Arpi's
changes were obviously inspired or copied from Nick's. However, the
changed symbols were removed and do not exist anymore.
This can be useful in other contexts.
Note that we end up setting AVCodecContext.width/height instead of
coded_width/coded_height now. AVCodecParameters can't set coded_width,
but this is probably more correct anyway.
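A sketch of the mechanism (illustrative values; error checks omitted):

```c
#include <libavcodec/avcodec.h>

// Going through AVCodecParameters can only set width/height on the
// codec context - the parameters struct has no coded_width field.
static void apply_params(AVCodecContext *avctx)
{
    AVCodecParameters *par = avcodec_parameters_alloc();
    par->codec_type = AVMEDIA_TYPE_VIDEO;
    par->codec_id   = AV_CODEC_ID_H264;   // illustrative
    par->width      = 1280;
    par->height     = 720;
    avcodec_parameters_to_context(avctx, par);  // sets avctx->width/height
    avcodec_parameters_free(&par);
}
```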
The FFmpeg versions we support all have the APIs we were checking for.
Only Libav missed them. Simplify this by explicitly checking for FFmpeg
in the code, instead of trying to detect the presence of the API.
Since we set "skip_manual", we can actually get frames with this set.
Currently, only AV_PKT_FLAG_DISCARD will trigger this flag, and only
mov.c sets the latter flag, so this is related to FFmpeg's half-broken
mp4 edit list support.
Apparently you set the native sample rate when passing through AC3.
This fixes passthrough with 44100 Hz AC3.
Avoid opening a decoder for this and only open the parser. (Hopefully
DTS will also support this some time in the future or so - having to
open a decoder just to get the profile is dumb.)
Same deal as with video. Including the EOF handling.
(It would be nice if this code were not duplicated, but right now we're
not even close to unifying the audio and video code paths.)