See --lavfi-complex option.
This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interacting with
libavfilter).
Possible future plans include:
- freely switch tracks by providing some sort of default track graph
label
- automatically enabling audio visualization
- automatically mix audio or stack video when multiple tracks are
selected at once (similar to how multiple sub tracks can be selected)
track can't be NULL at this point, so the if is redundant. Remove it and
unindent the block. Also, make the function check whether the track is
selected at all, which makes it safer and idempotent.
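A minimal sketch of the intended shape (struct and function names here
are invented, not the actual mpv code):

    #include <stdbool.h>

    struct track { bool selected; };

    /* The caller guarantees track != NULL, so no NULL check is needed;
     * instead, return early if the track isn't selected at all, which
     * makes the function safe to call more than once. */
    static void deselect_track(struct track *track)
    {
        if (!track->selected)
            return;             /* already deselected: nothing to do */
        track->selected = false;
        /* ...uninit the decoder/output state tied to this track... */
    }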
Will be helpful for the coming filter support. I planned on merging
audio/video decoding, but this will have to wait a bit longer, so only
remove the duplicate status codes.
Let's fix broken samples with a questionable heuristic that has no real
reasoning behind it. Until this gets fixed properly, this is a good
compromise, though. A proper fix would resync audio and video without
brutally resetting the decoders, but on the other hand not doing the
brutal reset would cause issues in other obscure corner cases that such
resyncing might cause.
This code is tricky because it has to wake up the mainloop to make
progress while syncing audio, but also has to avoid waking it up when
it's not needed. Getting this wrong either burns CPU by never going to
sleep, or causes apparent "freezes" by going to sleep (playback only
continues if the mainloop is woken up e.g. due to user input).
In this case, simply starting A/V playback with --start=5 and removing
an unrelated wakeup in osd.c can trigger such a "freeze". The unrelated
wakeup did hide this bug, nonetheless it's a bug.
(Can't wait to rewrite this shitty audio resync code. And it's all my
fault.)
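Schematically, it comes down to only waking the mainloop when the sync
step actually made progress; the sketch below uses invented names and is
not the actual mpv code:

    #include <stdbool.h>

    struct sync_state { int phase; };

    /* Try to advance audio syncing; returns whether progress was made. */
    static bool advance_audio_sync(struct sync_state *st)
    {
        int old_phase = st->phase;
        /* ...feed/align audio here; may or may not change st->phase... */
        return st->phase != old_phase;
    }

    static void audio_sync_step(struct sync_state *st,
                                void (*wakeup_mainloop)(void))
    {
        if (advance_audio_sync(st)) {
            /* More work will be possible soon: wake the mainloop so it
             * doesn't go to sleep and appear "frozen". */
            wakeup_mainloop();
        }
        /* Otherwise sleep until a real event arrives; waking up anyway
         * would just burn CPU. */
    }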
We just need to provide an entrypoint for it, and move the main init
code to a separate function. This gets rid of the messy video chain full
reinit in command.c, which completely destroyed and recreated the video
state for the purpose of mid-stream hw/sw switching.
These changes don't make too much sense without context, but are
preparation for later. Then the audio_src/video_src fields will
actually be NULL under some circumstances.
Before this commit, reinit_audio_chain() did 2 things: creating all the
management data structures and initializing the decoder, and handling
lazy filter/output init (as well as dealing with format changes). For
the second purpose, it could be called multiple times (even though it
wasn't really idempotent). This was pretty weird, so make them separate
functions. The new function is actually idempotent too.
It also turns out the reinit functions don't have to call themselves
recursively for the spdif PCM fallback.
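The split looks roughly like this; all names and fields below are
invented just to illustrate the idempotence (and the non-recursive spdif
fallback):

    #include <stdbool.h>

    struct ao_chain { bool filters_ok, ao_ok; };

    /* Called whenever the decoded format becomes known or changes; it
     * only does work that hasn't been done yet, so calling it repeatedly
     * is fine. */
    static void audio_init_filters_and_output(struct ao_chain *c)
    {
        if (!c->filters_ok) {
            /* ...create or reconfigure the filter chain... */
            c->filters_ok = true;
        }
        if (!c->ao_ok) {
            /* ...(re)open the audio output; on spdif failure, clear the
             * format and let the caller retry with PCM instead of
             * calling this function recursively... */
            c->ao_ok = true;
        }
    }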
Regression caused by commit 3b95dd47. Also see commit 4c25b000. We can
either use video_next_pts and add "delay", or we just use video_pts. Any
other combination breaks. The reason why the assumption that delay==0
at this point was wrong is that after displaying the first video frame
(usually done before audio resync) a new frame might be "added"
immediately, resulting in a new video_next_pts and "delay", which
together still amount to video_pts.
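A small numeric illustration of that invariant (the values are made up;
the variables only mirror the names used above):

    #include <stdio.h>

    int main(void)
    {
        double video_pts = 5.0, frame_time = 1.0 / 25;
        /* Right after showing the first frame: */
        double video_next_pts = video_pts, delay = 0;

        /* A new frame gets "added" immediately: video_next_pts advances,
         * and "delay" changes by the same amount in the other direction. */
        video_next_pts += frame_time;
        delay -= frame_time;

        /* Either combination yields the same sync point: */
        printf("%f %f\n", video_next_pts + delay, video_pts);  /* both 5.0 */
        return 0;
    }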
Fixes #2770. (The reason why display-sync was blamed in this issue is
that enabling display-sync in the options forces a prefetch of 2 frames
instead of 1 for seeks/playback restart, which triggers the issue, even
if display-sync is not actually enabled. In this case, display-sync is
never enabled because the frames have an unusually high frame duration.
This is also what exposed the initial desync issue.)
This seems generally easier when using libmpv (and was already requested
and implemented before: see commit 327a779a; it was reverted some time
later).
With the weird internal logic we have to deal with, in particular the
--softvol=no case (using system volume) and using the audio API's mixer
(--softvol=auto on some systems), we still can't avoid all the glitches
and corner cases that complicate this issue so much. The API user is
recommended either to use --softvol=yes or auto, or to watch the new
mixer-active property and assume the volume/mute properties have
meaningful values if the mixer is active.
Remaining glitches:
- changing the volume/mute properties has no effect if no internal mixer
is used (--softvol=no) and the mixer is not active; the actual mixer
controls do not change, only the property values
- --volume/--mute do not have an effect on the volume/mute properties
before mixer initialization (the options strictly are only applied
during mixer init)
- volume-max is 100 while the mixer is not active
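For libmpv users, the recommendation above boils down to something like
the following sketch (error handling trimmed; the file name is a
placeholder):

    #include <mpv/client.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        mpv_handle *ctx = mpv_create();
        if (!ctx)
            return 1;
        /* Prefer the internal mixer so volume/mute always work. */
        mpv_set_option_string(ctx, "softvol", "yes");
        if (mpv_initialize(ctx) < 0)
            return 1;

        /* Watch mixer-active; treat volume/mute as meaningful while it
         * reports true. */
        mpv_observe_property(ctx, 0, "mixer-active", MPV_FORMAT_FLAG);
        mpv_observe_property(ctx, 0, "volume", MPV_FORMAT_DOUBLE);

        const char *cmd[] = {"loadfile", "test.mkv", NULL};
        mpv_command(ctx, cmd);

        for (;;) {
            mpv_event *ev = mpv_wait_event(ctx, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id == MPV_EVENT_PROPERTY_CHANGE) {
                mpv_event_property *p = ev->data;
                if (p->format == MPV_FORMAT_FLAG &&
                    strcmp(p->name, "mixer-active") == 0)
                    printf("mixer active: %d\n", *(int *)p->data);
            }
        }

        mpv_terminate_destroy(ctx);
        return 0;
    }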
With the format left untouched, this would just try to reinit with a
spdif format again.
We're not clearing the format in reset_audio_state() so the audio chain
can be recreated any time without having to wait for a frame to be
decoded.
Even though the timing logic is correct, it tends to mess with looping
videos and such in unappreciated ways.
It also has to be admitted that most file formats seem not to properly
define the duration of the last video frame (or libavformat does not
export it in a useful way), so whether or not we should use the demuxer
reported framerate for the last frame is questionable. (Still, why would
you essentially just discard the last frame?)
The timing logic is kept, but disabled for video with "normal" FPS
values. In particular, we want to keep it for displaying images, which
implicitly set the frame duration to 1 second by reporting 1 FPS. It's
also good for slide shows with mf://.
Fixes #2745.
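The resulting behavior amounts to roughly this; the 0.5 second threshold
below is invented for illustration:

    /* Return the duration to use for the last frame, or 0 to disable the
     * special handling for "normal" frame rates. */
    static double last_frame_duration(double container_fps)
    {
        double dur = container_fps > 0 ? 1.0 / container_fps : 0;
        /* Keep it only for very low rates, e.g. images reporting 1 FPS
         * (1 second per frame) or mf:// slide shows. */
        return dur >= 0.5 ? dur : 0;
    }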
Most text subtitles are read completely on loading (libavformat works
this way, and there are good reasons to do it on the higher levels too).
This leads to some messy problems. For example, the subtitle path is the
only one which might read packets during decoder initialization. This is
not necessary; get rid of it.
This fixes a potential problem of seeking to position 0 for image
subtitles on init when the start position is not at the beginning of the
timeline.
It doesn't need to be part of the big context, but is strictly part of
shuffling data from the audio filters to audio output, and thus belongs
into ao_chain.
It also turns out that clearing it in clear_audio_output_buffers() is
completely redundant.
(Of course ao_buffer is an abomination in the first place and shouldn't
exist at all.)
vo_chain_uninit() isn't supposed to care much about the decoder
(although decoders and outputs still go strictly together, so there is
not much of an actual difference now).
Also unset track.d_video correctly.
Remove a stale declaration from dec_video.h as well.
Similar to the video path. dec_audio.c now handles decoding only. It
also looks very similar to dec_video.c, and actually contains some of
the rewritten code from it. (A further goal might be unifying the
decoders, I guess.)
High potential for regressions.
Seems useless.
This only helped in one case: one audio stream in the sample
av_find_best_stream_fails.ts had AC3 packets which couldn't be
decoded, and for which avcodec_decode_audio4() returned 0 forever. In
this specific case, playback will now not start, and you have to
deselect audio manually.
(If someone complains, the old behavior might be restored, but
differently.)
Also remove the stale "bitrate" field.
This covers source files which were added in mplayer2 and mpv times
only, and where all code is covered by LGPL relicensing agreements.
There are probably more files to which this applies, but I'm being
conservative here.
A file named ao_sdl.c exists in MPlayer too, but the mpv one is a
complete rewrite, and was added some time after the original ao_sdl.c
was removed. The same applies to vo_sdl.c, for which the SDL2 API is
radically different in addition (MPlayer supports SDL 1.2 only).
common.c contains only code written by me. But common.h is a strange
case: although it originally was named mp_common.h and exists in MPlayer
too, by now it contains only definitions written by uau and me. The
exceptions are the CONTROL_ defines - thus not changing the license of
common.h yet.
codec_tags.c once contained large tables generated from MPlayer's
codecs.conf, but all of these tables were removed.
From demux_playlist.c I'm removing a code fragment from someone who was
not asked; this probably could be done later (see commit 15dccc37).
misc.c is a bit complicated to reason about (it was split off mplayer.c
and thus contains random functions out of this file), but actually all
functions have been added post-MPlayer. Except get_relative_time(),
which was written by uau, but looks similar to the 3 different versions
of much the same thing in the Unix/win32/OSX timer source files. I'm
not sure what that means in regards to copyright, so I've just moved it
into another still-GPL source file for now.
screenshot.c once had some minor parts of MPlayer's vf_screenshot.c, but
they're all gone.
Slightly helps with timeline stuff, like EDL. There is no need to keep
network (or even just disk I/O) busy for all segments at the same time,
because 1. the data won't be needed any time soon, and 2. it will
probably be discarded anyway if the stream is seeked when the segment is
resumed.
Partially fixes #2692.
Eventually we want the VO to be driven by an A->V filter, so a decoder
doesn't even have to exist. Some features definitely require a decoder
though (like reporting the decoder in use, hardware decoding, etc.), so
for each thing which accessed d_video, it has to be redecided if and how
it can access decoder state.
At least the "framedrop" property slightly changes semantics: you can
now always set this property, even if no video is active.
Some untested changes in this commit, but our bio-based distributed
test suite has to take care of this.
This moves some code related to decoding from video.c to dec_video.c,
and also removes some accesses to dec_video.c from the filtering code.
dec_video.ch is starting to make sense, and simply returns video frames
from a demuxer stream. The API exposed is also somewhat intended to be
easily changeable to move decoding to a separate thread, if we ever want
this (due to libavcodec already being threaded, I don't see much of a
reason, but it might still be helpful).
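The API surface is roughly of this shape; the declarations below are
invented for illustration, the real ones are in dec_video.h:

    #include <stdbool.h>

    struct dec_video;               /* decoder state (opaque here) */
    struct mp_image;                /* a decoded video frame */

    /* The wrapper only turns demuxer packets into frames; timing and
     * filtering stay outside, which would also make it straightforward
     * to move decoding onto its own thread later. */
    bool video_feed_packet_if_needed(struct dec_video *dec);
    struct mp_image *video_get_decoded_frame(struct dec_video *dec);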
Makes the next commit simpler. It's probably a bad idea to add more
fields to the global state, but on the other hand the client API state
is pretty much per-instance anyway. It also will help with things like
the proposed libmpv custom stream API.
The aspect ratio calculations are cached (mainly so that aspect ratio
related messages are not logged on every frame). The cache is not
cleared anymore when video filters are reconfigured, but changing the
video-aspect-ratio property relied on it. Make the invalidation explicit.
Fixes #2714.
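In effect, the property setter has to invalidate the cache itself, along
these lines (names invented):

    #include <stdbool.h>

    struct aspect_cache { double value; bool valid; };

    static void set_video_aspect_override(struct aspect_cache *cache,
                                          double new_aspect)
    {
        (void)new_aspect;  /* ...store the override wherever it lives... */
        /* Filter reconfiguration no longer clears the cache, so
         * invalidate it here explicitly; the next frame recomputes
         * (and re-logs) it. */
        cache->valid = false;
    }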
The binding is similar to the tv-binding, just with capital letters.
Switching via the dvb-channel-name property instead of dvb-channel
means the channel name is shown on-screen when switching (rather than
"dvb-channel (error)"), and in any case switching happens without
changing the card.