The messages "Audio: no audio" and "Video: no video" could be printed
twice each if initializing them failed. Prevent this silliness.
CC: @mpv-player/stable
Apparently this is what users want. When playing at normal speed,
nothing is done. When playing slower than normal, resampling is used
instead of scaletempo, because scaletempo (which does the pitch
correction) adds too many artifacts.
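A minimal sketch of the filter decision this describes (the names are
illustrative only, not the actual mpv code):

    enum speed_filter { SPEED_NONE, SPEED_RESAMPLE, SPEED_SCALETEMPO };

    // Decide how a speed change is realized on the audio filter chain.
    static enum speed_filter pick_speed_filter(double speed)
    {
        if (speed == 1.0)
            return SPEED_NONE;        // normal speed: insert nothing
        if (speed < 1.0)
            return SPEED_RESAMPLE;    // slowdown: resample, avoid scaletempo artifacts
        return SPEED_SCALETEMPO;      // speedup: keep pitch correction
    }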
There's no real reason why audio_init_filter() should exist. Just use
af_init or af_reinit directly. (We lose a useless message; the same
information is printed nearby with more details.)
This requires less code, and the way the filter chain is marked as
having failed to initialize allows simply switching off audio, instead
of crashing, if inserting a volume filter in mixer.c fails and
recreating the old filter chain fails too.
This would play some silence in case video was slower than audio. If
framedropping is already enabled, there's no other way to keep A/V
sync, short of changing audio playback speed (which would give worse
results). The --audiodrop option inserted silence if there was more
than 500ms desync.
This worked somewhat, but I think it was a silly idea after all. Whether
the playback experience is really bad or slightly worse doesn't really
matter. There also was a subtle bug with PTS handling, that apparently
caused A/V desync anyway at ridiculous playback speeds.
Just remove this feature; nobody is going to use it anyway.
Before this commit, there was AF_FORMAT_AC3 (the original spdif format,
used for AC3 and DTS core), and AF_FORMAT_IEC61937 (used for AC3, DTS
and DTS-HD), which was handled as some sort of superset of
AF_FORMAT_AC3. There also was AF_FORMAT_MPEG2, which used
IEC61937 framing, but still was handled as something "separate".
Technically, all of them are pretty similar, but may use different
bitrates. Since digital passthrough pretends to be PCM (just with
special headers that wrap digital packets), the higher-bitrate formats
are easily detectable by their higher samplerate or channel count, so I
don't know why you'd need a separate "class" of sample formats
(AF_FORMAT_AC3 vs. AF_FORMAT_IEC61937) to distinguish them. Actually,
this whole thing is
just a mess.
Simplify this by handling all these formats the same way.
AF_FORMAT_IS_IEC61937() now returns 1 for all spdif formats (even MP3).
All AOs just accept all spdif formats now - whether that works or not is
not really clear (seems inconsistent due to earlier attempts to make
DTS-HD work). But on the other hand, enabling spdif requires manual user
interaction, so it doesn't matter much if initialization fails in
slightly less graceful ways if it can't work at all.
At a later point, we will support passthrough with ao_pulse. It seems
the PulseAudio API wants to know the codec type (or maybe not - feeding
it DTS while telling it it's AC3 works), so add separate formats for
each codec. While this is reminiscent of the earlier chaos, it's
stricter, and most code just uses AF_FORMAT_IS_IEC61937().
Also, modify AF_FORMAT_TYPE_MASK (renamed from AF_FORMAT_POINT_MASK) to
include special formats, so that it always describes the fundamental
sample format type. This also ensures valid AF formats are never 0 (this
was probably broken in one of the earlier commits from today).
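For illustration, the check can be thought of roughly like this (the
constants and macro are hypothetical simplifications; the real
definitions differ):

    // All spdif/passthrough formats share one "special" fundamental type,
    // so a single type-mask test identifies them - no separate AC3 class.
    #define AF_FORMAT_TYPE_MASK     0x3   // hypothetical fundamental-type bits
    #define AF_FORMAT_TYPE_SPECIAL  0x3   // hypothetical: IEC61937/passthrough

    static int format_is_iec61937(int fmt)
    {
        return (fmt & AF_FORMAT_TYPE_MASK) == AF_FORMAT_TYPE_SPECIAL;
    }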
With e.g. --start=-3 --audio-buffer=10, the decoder reached EOF before
the initial sync was finished, entered STATUS_EOF, and just started
playing audio from a random position.
This doesn't handle seeking outside of the file, which is a different
case. E.g. --start=30:00 with audio and video enabled in a file shorter
than 30:00 will play a random last part of audio. This could perhaps be
fixed by using the hr-seek target for cutting audio, instead of the
video PTS, but that would be kind of intrusive, so don't do it for now.
The simpler solution, assuming audio EOF on video EOF, wouldn't work,
because we allow audio to start before video, or to last after video.
Somehow, there was a larger misunderstanding in the code: ao_buffer
does not need to be preserved over audio reinit for proper support of
gapless audio. The actual AO internal buffer takes care of this.
In fact, preserving ao_buffer just breaks audio resync. In the ordered
chapter case, end_pts is used, which means not all audio data in the
buffer is played, thus some data is left over when audio decoding
resumes on the next segment. This triggers some code that aborts resync
if there's "audio decoded" (ao_buffer contains something), but no PTS
is known (nothing was actually decoded yet).
Simplify, and always bind the output buffer to the decoder.
CC: @mpv-player/stable (maybe)
Probably no observable effect, but it's more correct. Setting audio to
EOF could have bad effects otherwise (e.g. anywhere the player logic
decides whether EOF was reached).
Don't attempt to resync after speed changes. Note that most other cases
of audio reinit (like switching tracks etc.) still resync, but other
code paths take care of setting the audio_status accordingly.
This restores the old behavior of not trying to fix audio desync, which
was probably changed with commit 261506e3.
Note that the code as of now wasn't even entirely correct, since the A/V
sync values are slightly shifted. The desync depends on the audio buffer
size, so a larger buffer size will show more extreme desync. Also see
mplayer2 commit 213a224e, which should have fixed this - it was not
merged into mpv, because it disabled audio for too long, resulting in a
worse
user experience. This is similar to the issue this commit attempts to
fix.
Fixes: #1042 (probably)
CC: @mpv-player/stable
This shouldn't change anything functionally.
Change the A/V desync message. --framedrop is enabled by default now, so
the text must be changed a little. I've never heard of audio outputs
messing up A/V sync recently, so remove that part.
Remove the unused ao_pts field.
Reorder 2 A/V sync related expressions so that they look the same.
In theory, timestamps can be negative, so we shouldn't just return -1
as a special value.
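A minimal sketch of the idea with a hypothetical getter (not the actual
mpv code, which uses its own sentinel):

    #include <math.h>

    // Use NAN as the "no pts known" marker so legitimate negative
    // timestamps stay representable; callers then test with isnan(pts).
    static double current_audio_pts(int pts_known, double pts)
    {
        return pts_known ? pts : NAN;
    }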
Remove the separate code for clearing decode buffers; use the same code
that is used for normal seek reset.
Commit 5afc025c broke this. The reason is that mpctx->delay is updated
when a new video frame is added. This value is also needed to resync
audio, but it will be for the wrong PTS. They must be consistent with
each other, and if they aren't, initial sync will be off by N video
frames, which at least results in a worse user experience.
This can be reproduced, for example, by heavily switching between normal
and 2x speed, or similar.
Fix by readding the video_next_pts field (keeping its use minimal,
instead of reverting the commit that removed it).
Apparently users prefer this behavior.
It was used for subtitles too, so move the code to calculate the video
offset into a separate function. Seeking also needs to be fixed.
Fixes #1018.
In encoding mode, the AO pretends to be infinitely fast (it will take
whatever we write, without ever rejecting input). Commit 261506e3 broke
this somehow. It turns out an old hack dealing with this was accidentally
dropped.
This is the hunk of code whose semantics were (partially) dropped:
    if (mpctx->d_audio && (mpctx->restart_playback ? !video_left :
                           ao_untimed(mpctx->ao) && (mpctx->delay <= 0 ||
                                                     !video_left)))
    {
        int status = fill_audio_out_buffers(mpctx, endpts);
        // Not at audio stream EOF yet
        audio_left = status > -2;
    }
This if condition is pretty wild, and it looked like it was pretty much
for audio-only mode, rather than subtle handling for encoding mode.
Basically move the code from playloop.c to video.c. The new function
write_video() now contains the code that was part of run_playloop().
There are no functional changes, except handling "new_frame_shown"
slightly differently. This is done so that we don't need a new
MPContext field or a return value for write_video() to signal this
condition. Instead, it's handled indirectly.
This also reduces some code duplication with other parts of the code.
The change is mostly cosmetic, although there are also some subtle
changes in behavior. At least one change is that the big desync message
is now printed after every seek.
In situations when the demuxer reports EOF, but immediately "recovers"
after that and returns new data, it could happen that audio sync was
skipped. Deal with this by actually entering the EOF state, instead of
assuming this will happen later.
Some files have the first audio much later into the video (for whatever
reasons). Instead of appending large amounts of silence to the audio
buffer (and refusing to sync if the audio to append is "too large"),
just wait until enough video has played.
It probably happens relatively often that the first packet (or even the
first N packets) of a stream will fail to decode, but decoding will
eventually succeed at a later point. Before commit 261506e3, this was
handled by an explicit retry loop (although this was also for other
purposes), but it was then changed to abort on the first error. This
makes it impossible to decode some audio streams.
Change this so that errors are ignored for the first 50 packets, which
should make it equivalent to the old code.
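A rough sketch of that behavior, with hypothetical names (not the actual
decoder loop):

    struct dec_audio;                           // opaque decoder state
    int decode_packet(struct dec_audio *da);    // hypothetical: <0 on error

    #define INITIAL_DECODE_ERRORS 50

    // Tolerate decode errors on the first packets of a stream instead of
    // giving up on the very first failure.
    static int decode_with_tolerance(struct dec_audio *da)
    {
        int errors = 0;
        for (;;) {
            int r = decode_packet(da);
            if (r >= 0)
                return r;                       // success
            if (++errors > INITIAL_DECODE_ERRORS)
                return r;                       // persistent failure: give up
            // otherwise drop the broken packet and try the next one
        }
    }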
If you for example use --audio-file, disable the external track, seek,
and enable the external track again, the playback position of the
external file was off, and you would get major A/V desync. This was
actually supposed to work, but broke some time ago (probably commit
2b87415f). It didn't work, because it attempted to seek the stream if it
was already selected, which was always true due to
reselect_demux_streams() being called before that.
Fix by putting the initial selection and the seek together.
This commit makes audio decoding non-blocking. If e.g. the network is
too slow, the playloop will just go to sleep, instead of blocking until
enough data is available.
For video, this was already done with commit 7083f88c. For audio, it's
unfortunately much more complicated, because the audio decoder was used
in a blocking manner. Large changes are required to get around this.
The whole playback restart mechanism must be turned into a state machine,
especially since it has close interactions with video restart. Lots of
video code is thus also changed.
(For the record, I don't think switching this code to threads would
make this conceptually easier: the code would still have to deal with
external input while blocked, so these in-between states do get visible
[and thus need to be handled] anyway. On the other hand, it certainly
should be possible to modularize this code a bit better.)
This will probably cause a bunch of regressions.
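To illustrate the kind of per-stream restart state machine this implies
(the names only indicate the structure; the real enum differs):

    // Hypothetical restart states a stream steps through after a seek or
    // at playback start, instead of blocking inside one function call.
    enum restart_status {
        STATUS_SYNCING,   // waiting for data / dropping up to the sync point
        STATUS_FILLING,   // buffering initial data, not outputting yet
        STATUS_READY,     // filled, waiting for the other stream to be ready
        STATUS_PLAYING,   // normal playback
        STATUS_EOF,       // stream ended
    };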
Don't return an EOF code if there's still buffered data.
Also, don't call demux_stream_eof() in the playloop. There's probably
nothing wrong with it, but it's cleaner not to use it.
Also give AD_EOF its own value, so that a decoding error doesn't drain
audio by causing an EOF condition.
There was confusion about what should go into audio pts calculation and
what not (mainly due to the audio push thread). This has been fixed by
using the playing - not written - audio pts (which properly takes into
account the ao's buffer), and incrementing the samples count only by the
amount of samples actually taken from the buffer (unfortunately this
now forces us to keep the lock too long for my taste).
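Roughly, the playing pts described above can be thought of like this (a
simplified sketch, not the actual function):

    // pts of the sample currently being played: take the pts of the last
    // sample written to the AO and subtract what still sits in the AO's
    // buffer, scaled by the playback speed.
    static double playing_audio_pts(double written_pts, double buffered_samples,
                                    double samplerate, double playback_speed)
    {
        return written_pts - playback_speed * buffered_samples / samplerate;
    }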
It's unlikely that files with multiple audio tracks and with replaygain
actually happen, but this change might help avoid minor corner cases
with later changes.
Basically, this allows gapless playback with similar files (including
the ordered chapter case), while still being robust in general.
The implementation is quite simplistic on purpose, in order to avoid
all the weird corner cases that can occur when creating the filter
chain. The consequence is that it might fall back to non-gapless
playback in more cases than strictly needed, but if that bothers you,
you can still use
the normal gapless mode.
Just using "--gapless-audio" or "--gapless-audio=yes" selects the old
mode.
This code handles buggy AOs (even if all AOs are bug-free, it's good for
robustness). Move handling of it to the AO feed thread. Now this check
doesn't require magic numbers and does exactly what it's supposed to do.
This played the file at a wrong sample rate if the rate was out of
certain bounds.
A comment says this was for the sake of libaf/af_resample.c. This
resampler has long been removed. Our current resampler
(libav/swresample) checks supported sample rates on reconfiguration, and
will error out if a sample rate is not supported. And I think that is
the correct behavior.
This obviously doesn't work. It wasn't much of a problem in the past
because most passthrough formats use 2 channels, which is also the
default for downmix.
This is probably "safer". Without it, we will play 1 sample, because the
logic was written in a way to decode 1 sample if audio is paused. 1
sample usually will initialize the audio PTS, but not play any real
audio. Also see previous commit.
In ancient times, this actually used 1 byte (instead of 1 sample), so
clearly no sample was written, unless the audio was 8-bit mono.
Remove the ao_buffer_playable_samples field. This contained the number
of samples that fill_audio_out_buffers() wanted to write to the AO (i.e.
this data was supposed to be played at some point), but ao_play()
rejected it due to partial fill.
This could happen with many AOs, notably those which align all written
data to an internal period size (often called "outburst" in the AO
code), and the accepted number of samples is rounded down to period
boundaries. The left-over samples at the end were still kept in
mpctx->ao_buffer, and had to be played later.
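For illustration, the period alignment described above boils down to
this (hypothetical names):

    // An AO that aligns writes to an internal period ("outburst") size
    // accepts only whole periods; the remainder stayed in ao_buffer.
    static int accepted_samples(int samples, int period_size)
    {
        return samples / period_size * period_size;   // round down
    }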
The reason ao_buffer_playable_samples had to exist was to make sure that
at EOF, the correct number of left-over samples was played (and not
possibly other data in the buffer that had to be sliced off due to
endpts in fill_audio_out_buffers()). (You'd think you could just slice
the entire buffer, but I suspect this wasn't done because the end time
could actually change due to A/V sync changes. Maybe that was the reason
it's so complicated.)
Some commits ago, ao.c gained internal buffering, and ao_play() will
never return partial writes - as long as you don't try to write more
samples than ao_get_space() reports. This is always the case. The only
exception is filling the audio buffers while paused. In this case, we
decode and play only 1 sample in order to initialize decoding (e.g. on
seeking). Actually playing this 1 sample is in fact a bug, but even if
the AO doesn't have period size alignment, you won't notice it. In
summary, this means we can safely remove the code.
We want to move the AO to its own thread. There's no technical reason
for making the ao struct opaque to do this. But it helps us sleep at
night, because we can control access to shared state better.
This field will be moved out of the ao struct. The encoding code was
basically using an invalid way of accessing this field.
Since the AO will be moved into its own thread too and will do its own
buffering, the AO and the playback core might not even agree which
sample a PTS timestamp belongs to. Add some extrapolation code to handle
this case.
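A hedged sketch of what such extrapolation can look like (simplified,
with made-up names):

    // Estimate the current playback pts from the last pts the AO reported
    // plus the realtime that has elapsed since it was taken.
    static double extrapolated_pts(double last_pts, double last_time_sec,
                                   double now_sec, double playback_speed)
    {
        return last_pts + (now_sec - last_time_sec) * playback_speed;
    }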
This sometimes happened when changing playback speed (= reinitializing
audio) after seeking or at playback start. The assertion in audio.c:441
was triggered, because buffer_playable_samples wasn't reset correctly
when the audio buffer was cleared or shortened. The assertion is correct
and should always hold.