With -v -v ("debug" level), which is the default for --log-file, this
would log every damn Matroska EBML element and some other uninteresting
things, which was very noisy.
Adjust the log levels to make them less noisy. Also, change some log
calls to MP_ERR for things which are actually errors.
Sometimes this hints that there's a bug, but sometimes it's normal.
Since the code for --end/--frames puts frames that should not be shown
anymore back into the pin, using those options will show this warning
when playback ends. This is a minor annoyance. We could change how it's
done (e.g. set an explicit flag somewhere), but that seems bothersome,
so just change the message from warning to verbose.
The main change is that we wait with opening the muxer ("writing
headers") until we have data from all streams. This fixes race
conditions at init due to broken assumptions in the old code.
This also changes a lot of other stuff. I found and fixed a few API
violations (often things for which better mechanisms were invented, and
the old ones are not valid anymore). I try to get away from the public
mutex and shared fields in encode_lavc_context. For now it's still
needed for some timestamp-related fields, but most are gone. It also
removes some bad code duplication between audio and video paths.
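Roughly, the new header logic amounts to something like the following
sketch (plain libavformat calls; the struct and function names are made
up for illustration and are not the actual encode_lavc code):

    #include <stdbool.h>
    #include <libavformat/avformat.h>

    // Illustrative gate: write the header only once every stream has
    // produced data, instead of as soon as the muxer is created.
    struct mux_state {
        AVFormatContext *mux;
        bool *stream_has_data;   // one flag per stream
        int nb_streams;
        bool header_written;
    };

    static bool all_streams_ready(struct mux_state *s)
    {
        for (int n = 0; n < s->nb_streams; n++) {
            if (!s->stream_has_data[n])
                return false;
        }
        return true;
    }

    // Called for each encoded packet. Packets that arrive before the
    // header is written have to be queued by the caller.
    static int mux_packet(struct mux_state *s, AVPacket *pkt)
    {
        s->stream_has_data[pkt->stream_index] = true;
        if (!s->header_written) {
            if (!all_streams_ready(s))
                return 0; // caller queues the packet and retries later
            int r = avformat_write_header(s->mux, NULL);
            if (r < 0)
                return r;
            s->header_written = true;
        }
        return av_interleaved_write_frame(s->mux, pkt);
    }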
1. I want to get away from mp_image_params (maybe).
2. For encoding mode, it's convenient to get the nominal_fps, which is
an mp_image field, and not in mp_image_params.
Removes a good hunk of weird code.
This loses qscale "emulation", some logging, and the fact that duplicate
keys for values starting with +/- were added with AV_DICT_APPEND. I
don't assign those any importance, even if they are user-visible
changes.
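For reference, the dropped append behavior corresponded roughly to this
(a sketch of the old special case, not the actual code):

    #include <libavutil/dict.h>

    // Old special case: values starting with '+'/'-' were appended to an
    // existing key via AV_DICT_APPEND; now the value simply replaces it.
    static void set_avopt(AVDictionary **dict, const char *key,
                          const char *val)
    {
        int flags = 0;
        if (val[0] == '+' || val[0] == '-')
            flags |= AV_DICT_APPEND;
        av_dict_set(dict, key, val, flags);
    }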
The new M_OPT_ flag is just so that nothing weird happens for other
key-value options, which do not interpret a "help" key specially.
Also rename stereo3d to stereo_in. The only real change is that the
vo_gpu OSD code now uses the actual stereo 3D mode, instead of the
--video-stereo-mode value. (Why does this vo_gpu code even exist?)
Going by ISO 639-2, "und" means "Undetermined". Whatever it's supposed
to mean, in practice it's used for "unset". We prefer that the language
tag simply remains unset in this case.
This removes an ugliness with mp4 in particular, because libavformat
will export unset languages as "und", which affects most mp4 files.
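The rule boils down to something like this (sketch; the helper is made
up):

    #include <string.h>

    // Treat an absent or "und" (Undetermined) tag as "no language set".
    static const char *normalize_lang(const char *lang)
    {
        if (!lang || !lang[0] || strcmp(lang, "und") == 0)
            return NULL; // leave the language tag unset
        return lang;
    }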
I think this is more intuitive. This requires a dedicated "out" dummy
filter. But keep the "in" dummy filter for symmetry, like in the old
filter code. (We could remove the "in" dummy filter, because the first
actual filter would still show the real input format.)
Attempts to enable the following things:
- let a render API user do "proper" audio-sync video timing itself
- make it possible to not re-render repeated frames if the API user has
better mechanisms available (e.g. waiting for a DisplayLink cycle
instead)
- allow the user to delay or skip redraws if it makes sense
Basically this information will be needed by API users who want to be
"clever" about optimizing timing and rendering.
In MPV_RENDER_PARAM_ADVANCED_CONTROL mode, a simple update callback does
not necessarily make the API user redraw. So handle it differently.
For one, setting vo->want_redraw already uses the "normal" redraw path,
which will call draw_frame() and set next_frame.
Then there are redraws triggered by mpv_render_context_set_parameter(),
which are on the render thread, and would require a separate mechanism.
I decided this is not really a good idea, since it's not even clear that
setting an arbitrary parameter should redraw. Also this could trigger an
unbounded number of redraws. The user can trigger redraws manually if
really needed, depending on the parameter that's being set. If we really
wanted vo_libmpv to do this, we could add a new flag like need_redraw,
which would be 4 lines of code or so.
update() used to require the lock, but now it doesn't matter. It's
slightly better to do it outside of the lock now, in case the update
callback reschedules before returning, and the user render thread tries
to acquire the still held lock (which would require 2 more context
switches).
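The pattern is the usual "don't call user callbacks with a lock held";
a generic sketch (not the actual vo_libmpv code):

    #include <pthread.h>

    struct update_cb {
        pthread_mutex_t lock;
        void (*callback)(void *ctx);  // user-provided wakeup callback
        void *callback_ctx;
    };

    // Snapshot the callback under the lock, but invoke it unlocked, so a
    // callback that immediately wakes the render thread can't make that
    // thread block on our still-held lock.
    static void notify_update(struct update_cb *u)
    {
        pthread_mutex_lock(&u->lock);
        void (*cb)(void *) = u->callback;
        void *ctx = u->callback_ctx;
        pthread_mutex_unlock(&u->lock);
        if (cb)
            cb(ctx);
    }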
DR (letting the decoder allocate texture memory) requires running the
allocation on the render thread. This is rather hard with the render
API, because the user controls this thread and when it's entered. It was
not possible until now.
This commit adds a bunch of infrastructure to make this possible. We add
a new optional mode (MPV_RENDER_PARAM_ADVANCED_CONTROL) which basically
lets the user's render thread and libmpv agree how this should be done.
Misuse would lead to deadlocks. To make this less likely, strictly
document thread safety/locking issues. In particular, document which
libmpv functions can be called without issues. (The rest has to be
assumed unsafe.)
The worst issue is destruction of the render context while video is
still active. To avoid certain unintended recursive locks (i.e.
deadlocks, unless we'd make the locks recursive), make the update
callback lock separate. Make "killing" the video chain asynchronous, so
we can do extra work while video is being destroyed.
Because losing wakeups is a big deal, setting the update callback now
triggers a wakeup. (It would have been better if the wakeup callback
were a parameter to mpv_render_context_create(), but too late.)
This commit does not add DR yet; the following commit does this.
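From the API user's side, opting in looks roughly like this (sketch;
the GL/API-type parameters are elided, and error handling is minimal):

    #include <mpv/client.h>
    #include <mpv/render.h>

    // Create a render context with the new advanced control mode and
    // register the update callback (which now triggers an initial wakeup).
    static mpv_render_context *create_render_ctx(mpv_handle *mpv,
                                                 void (*on_update)(void *),
                                                 void *ctx)
    {
        int advanced = 1;
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_ADVANCED_CONTROL, &advanced},
            // ...MPV_RENDER_PARAM_API_TYPE and GL init params go here...
            {MPV_RENDER_PARAM_INVALID, NULL},
        };
        mpv_render_context *rc = NULL;
        if (mpv_render_context_create(&rc, mpv, params) < 0)
            return NULL;
        mpv_render_context_set_update_callback(rc, on_update, ctx);
        return rc;
    }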
This means vf_vapoursynth doesn't need a hack to work around the filter
code, and libavfilter filters now actually get the frame_rate field on
input pads set.
The libavfilter doxygen says the frame_rate field is only to be set if
the frame rate is known to be constant, and uses the word "must" (which
probably means they really mean it?) - but ffmpeg.c sets the field to
mere guesses anyway, and it looks like this normally won't lead to
problems.
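On the libavfilter side, the field can be fed through the buffer source
parameters; a hedged sketch using the public API:

    #include <libavfilter/buffersrc.h>
    #include <libavutil/error.h>
    #include <libavutil/mem.h>
    #include <libavutil/rational.h>

    // Set frame_rate on the buffer source only if the rate is believed
    // to be constant; downstream filters see it on their input pad.
    static int set_input_frame_rate(AVFilterContext *buffersrc_ctx,
                                    AVRational fps, int fps_is_constant)
    {
        AVBufferSrcParameters *p = av_buffersrc_parameters_alloc();
        if (!p)
            return AVERROR(ENOMEM);
        if (fps_is_constant)
            p->frame_rate = fps;
        int ret = av_buffersrc_parameters_set(buffersrc_ctx, p);
        av_free(p);
        return ret;
    }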
This makes ICY title changes show up at approximately the correct time,
even if the demuxer buffer is huge. (It'll still be wrong if the stream
byte cache contains a meaningful amount of data.)
It should have the same effect for mid-stream metadata changes in e.g.
OGG (untested).
This is still somewhat fishy, but in parts due to ICY being fishy, and
FFmpeg's metadata change API being somewhat fishy. For example, what
happens if you seek? With FFmpeg AVFMT_EVENT_FLAG_METADATA_UPDATED and
AVSTREAM_EVENT_FLAG_METADATA_UPDATED, we hope that FFmpeg will restore
the correct metadata when the first packet is returned.
If you seek with ICY, we're out of luck, and some audio will be
associated with the wrong tag until we get a new title through ICY
metadata update at an essentially random point (it's mostly inherent to
ICY). Then the tags will switch back and forth, and this behavior will
stick with the data stored in the demuxer cache. Fortunately, this can
happen only if the HTTP stream is actually seekable, which it usually is
not for ICY things. Seeking doesn't even make sense with ICY, since you
can't know the exact metadata location. Basically ICY metadata sucks.
Some complexity is due to a microoptimization: I didn't want additional
atomic accesses for each packet if no timed metadata is used. (It
probably doesn't matter at all.)
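On the demuxer side, the FFmpeg mechanism boils down to event flags
that have to be polled (and cleared) after reading packets; roughly:

    #include <libavformat/avformat.h>

    // Check for mid-stream metadata changes after av_read_frame(). The
    // flags have to be cleared, or the same update is reported again.
    static void poll_metadata_updates(AVFormatContext *fmt)
    {
        if (fmt->event_flags & AVFMT_EVENT_FLAG_METADATA_UPDATED) {
            fmt->event_flags &= ~AVFMT_EVENT_FLAG_METADATA_UPDATED;
            // fmt->metadata now holds the updated global tags (e.g. ICY title)
        }
        for (unsigned n = 0; n < fmt->nb_streams; n++) {
            AVStream *st = fmt->streams[n];
            if (st->event_flags & AVSTREAM_EVENT_FLAG_METADATA_UPDATED) {
                st->event_flags &= ~AVSTREAM_EVENT_FLAG_METADATA_UPDATED;
                // st->metadata changed; attach it to packets read from now on
            }
        }
    }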
Recursive invocation was needed up until the previous commit. Drop this
feature, and simplify the code. It's more logical, and easier to detect
misuses of the API.
This partially reverts commit 3878a59e. The original reason for it was
removed.
I suppose this doesn't matter in practice, i.e. even if calls relayed
over the dispatch queue will cause WndProc to be invoked, WndProc will
never run for a longer time.
Preparation for removing recursion support from the dispatch queue code.
Fundamentally, scripts are loaded asynchronously, but as a feature,
there was code to wait until a script is loaded (for a certain arbitrary
definition of "loaded"). This was done in scripting.c with the
wait_loaded() function.
This called mp_idle(), and since there are commands to load/unload
scripts, it meant the player core loop could be entered recursively. I
think this is a major complication and has some problems. For example,
if you had a script that does 'os.execute("sleep inf")', then every
command that loaded an instance of that script would add a new
mp_idle() stack frame. This would lead to some sort of reentrancy
horror that is hard to debug. Also misc/dispatch.c contains a somewhat
tricky mess to support such recursive invocations. There were also some
bugs due to this and due to unforeseen interactions with other messes.
This scripting stuff was the only thing making use of that reentrancy,
and future commands that have "logical" waiting for something should be
implemented differently. So get rid of it.
Change the code to wait only in the player initialization phase: the
only place where it really has to wait is before playback is started,
because scripts might want to set options or hooks that interact with
playback initialization. Unloading of builtin scripts (can happen with
e.g. "set osc no") is left asynchronous; the unloading wasn't too robust
anyway, and this change won't make a difference if someone is trying to
break it intentionally. Note that this is not in mp_initialize(),
because mpv_initialize() uses this by locking the core, which would have
the same problem.
In the future, commands which logically wait should use different
mechanisms. Originally I thought the current approach (that is removed
with this commit) should be used, but it's too much of a mess and can't
even be used in some cases. Examples are:
- "loadfile" should be made blocking (needs to run the normal player
code and manually unblock the thread issuing the command)
- "add-sub" should not freeze the player until the URL is opened (needs
to run opening on a separate thread)
Possibly the current scripting behavior could be restored once new
mechanisms exist, and if it turns out that anyone needs it.
With this commit there should be no further instances of recursive
playloop invocations (other than the case in the following commit),
since all mp_idle()/mp_wait_events() calls are done strictly from the
main thread (and not commands/properties or libmpv client API that
"lock" the main thread).
The ICC profile data is converted to an UnsafeMutablePointer and could
possibly be changed; therefore its size should be read before a
possible change.
This fixes an issue where captions stop rendering after an
in-demuxer-cache seek, because the demuxer keeps waiting to find
a keyframe (ds->skip_to_keyframe set to true in execute_cache_seek).
FFmpeg marks audio tracks which are not meant to be played standalone
as DEPENDENT. These are typically used in DVB broadcasts for audio
descriptions, and are meant to be mixed into the main audio track during
playback.
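In FFmpeg terms this is a stream disposition flag; the check is
roughly:

    #include <libavformat/avformat.h>

    // Audio tracks not meant to be selected on their own (e.g. DVB audio
    // descriptions that get mixed into the main track).
    static int is_dependent_audio_track(const AVStream *st)
    {
        return st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO &&
               (st->disposition & AV_DISPOSITION_DEPENDENT);
    }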
This was slightly broken: since mp_initialize() did not necessarily
interrupt core_thread() (which is waiting for initialization), it did
not enter mp_play_files(), which would have sent an IDLE event.
I suppose that in some cases (like with mpv-android), the initial IDLE
event was never actually sent, because the first wakeup of the core
thread happens with the "loadfile" command, which prevents the core
thread from sending an IDLE event.
I changed avio_flush() and introduced avformat_flush() exactly for this
reason.
Used with DVD/BD only (on seeks and when setting the "angle" property).
Seems to work, but wasn't tested too thoroughly (I don't care about
optical discs; I just want this ugly stuff, which might even violate
the API/ABI, gone).
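The replacement is just the public flush call; schematically (sketch,
not the actual demux_lavf code):

    #include <libavformat/avformat.h>

    // After a DVD/BD seek or an "angle" change the bytes underneath the
    // demuxer change discontinuously, so discard libavformat's internally
    // buffered data instead of tearing the demuxer down and reopening it.
    static int flush_after_disc_seek(AVFormatContext *fmt)
    {
        return avformat_flush(fmt);
    }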
Although this was never observed to be happening, it seems definitely
possible: we first tell the main thread to exit, and then we ask the
main thread to do some work for us (with mp_dispatch_run()). Obviously
this is racy, and the main thread could exit without doing this, which
would block mp_dispatch_run() forever.
Fix this by changing the order of operation, so that it makes sense.
We could also just store the pthread_t of the main thread in some
variable, but the fact that pthread_create() might set the pthread_t
argument _after_ starting the thread makes this a bit messy (at least it
doesn't seem to be guaranteed on a superficial look at the manpage).
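Schematically, the fix is only an ordering change (all names below are
made up for illustration):

    // Hypothetical declarations, just to show the ordering.
    struct app;
    void run_on_main_thread(struct app *app, void (*fn)(void *), void *arg);
    void signal_main_thread_to_exit(struct app *app);
    void cleanup_work(void *arg);

    static void shutdown(struct app *app)
    {
        // 1. First ask the main thread to do the remaining work for us...
        run_on_main_thread(app, cleanup_work, app);
        // 2. ...and only then tell it to exit. In the reverse order the
        //    main thread could exit before servicing the request, leaving
        //    the caller blocked forever.
        signal_main_thread_to_exit(app);
    }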
Normally, MPV_RENDER_PARAM* arguments are copied, unless documented
otherwise. Of course we can't copy X11 Display or Wayland wl_display
types, but for arguments that are "summarized" in a struct (like
MPV_RENDER_PARAM_OPENGL_FBO), a copy is expected.
Also add some unused infrastructure to make this explicit, and to make
it easier to add parameter types that require a copy.
Untested.
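In practice "copied" means that small argument structs can live on the
caller's stack; a sketch for the FBO case (assuming the documented
mpv_opengl_fbo layout):

    #include <mpv/render_gl.h>

    // Because MPV_RENDER_PARAM_OPENGL_FBO is copied, a stack-allocated
    // struct that goes out of scope right after the call is fine.
    static void render_to_fbo(mpv_render_context *rc, int fbo, int w, int h)
    {
        mpv_opengl_fbo target = {
            .fbo = fbo,
            .w = w,
            .h = h,
            // .internal_format stays 0 for "unknown/default"
        };
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_OPENGL_FBO, &target},
            {MPV_RENDER_PARAM_INVALID, NULL},
        };
        mpv_render_context_render(rc, params);
    }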
The first change is about spdif - I mostly ignore spdif issues these
days, but it seems like the recent changes made handling of it slightly
better (but I didn't really test).
The second change is about broken libavfilter filters. We won't restore
the old behavior, because people were complaining about the old behavior
in the past. Possibly we could make libavfilter export this as metadata
and use the old behavior if we know they're broken - but it doesn't
exist yet.
Normally we don't even try this, but in corner cases it can happen. For
example, when inserting lavcac3enc at runtime while display-sync-resample
is active.
Until recently, the AO was reinitialized strictly only on decoder format
changes. But the commit for simplifying audio format negotiation removed
this. Now the AO is recreated for any format change.
This is sort of annoying if you change playback speed. The
insertion/removal of af_scaletempo can change the sample format. For
example, the acompressor filter will convert output to double, so
toggling scaletempo will force the format back to float. This recreates
the AO under the --gapless-audio=weak default. This likely affects a lot
of other filters too.
Work this around by allowing sample format changes, and keeping the
current AO format in these cases. This is probably not a big problem.
Most audio APIs force the output format to float anyway.
This means you actually have to worry about what the default gapless
mode does to your audio. If you start with a file that uses 8 bit per
sample, and then continue playing a 24 bit FLAC, it will be converted
down to 8 bit per sample. (Assuming they are played in a way that uses
the gapless logic.)