Turns out clearing all framebuffers in reconfig isn't such a great idea
when you also end up here while setting pan/scan.
I guess this is just a leftover from a previous iteration of vo_drm
where doing this made sense.
This passed all streams to mp_recorder_create(), even disabled ones. The
disabled streams never get packets, so recorder.c eventually errors out
with unrelated-looking errors. The reason is that recorder.c waits for
packets to appear on other streams, which in turn is because libavformat
refuses to mux empty streams anyway.
recorder.c could call demux_stream_is_selected(), which would have made
the patch much smaller. But this feels like a bad idea, since recorder.c
should use sh_stream only for metadata (and not in an "active" way), nor
should it care what demux.c is currently doing with it. So make the API
user (demux.c) pass only the streams it really wants.
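For illustration, a minimal sketch of the caller side under this API,
using mpv's demux accessors; the loop itself is illustrative, not the
literal patch:

    // Collect only the selected streams and hand just those to
    // mp_recorder_create(), so recorder.c never sees streams that
    // will not receive packets.
    struct sh_stream **streams = NULL;
    int num_streams = 0;
    for (int n = 0; n < demux_get_num_stream(demuxer); n++) {
        struct sh_stream *sh = demux_get_stream(demuxer, n);
        if (demux_stream_is_selected(sh))
            MP_TARRAY_APPEND(NULL, streams, num_streams, sh);
    }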
Fixes: #6999
Looks like this didn't actually work. Prefetching will do nothing if
there isn't a thread to "drive" it, and the demuxer thread needs to be
explicitly enabled. (I guess I did the worst possible job in verifying
whether this actually worked when I implemented it. On the other hand,
the user didn't confirm back whether it worked, so who cares.)
Like in the previous commit, bad factoring makes everything worse. It
duplicates logic and implementation of enable_demux_thread(), since the
opener thread cannot access the mpctx->opts field freely. But it's deep
night, so fuck it.
Fixes: c1f1a0845e
Fixes: #6753
demux_start_prefetch() was called unconditionally in two cases. This is
completely wrong. I'm not sure what part of my brain died off that
something this obviously wrong went in.
The prefetch case is a bit more complicated. It's a different thread, so
you can't just access mpctx->opts there. So add an explicit field
for this, which is the simplest way to get this done. (Even if it's bad
factoring.)
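In sketch form, the fix amounts to this, where the field name is my
stand-in, not the literal patch:

    // The opener thread gets its own copy of the option (taken from
    // mpctx->opts before the thread starts), and the prefetch call is
    // gated on it instead of running unconditionally.
    if (args->prefetch)             // assumed field, copied from opts
        demux_start_prefetch(args->demuxer);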
Fixes: c1f1a0845e
Fixes: 556e204a11
Although this was sort of elegant, it just seems to complicate things
slightly. Originally, the API meant that you cache mp_recorder_sink
yourself (which would avoid the mess of passing an index around), but
that too seems slightly roundabout.
In a later change, I want to change the set of streams passed to
mp_recorder_create(), and then I'd have to keep track of the index for
each stream, which would suck. With this commit, I can just pass the
unambiguous sh_stream to it, and it will be guaranteed to match the
correct stream.
The disadvantages are barely worth discussing. It's a new linear search
per packet, but usually only 2 to 4 streams are active at a time. Also,
in theory a user could want to write 2 streams using the same sh_stream
(same metadata, just writing different packets or so), but in practice
this is never done.
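In sketch form, the per-packet lookup this buys looks roughly like the
following; the struct internals are my assumptions about recorder.c,
not the exact code:

    struct mp_recorder_sink *mp_recorder_get_sink(struct mp_recorder *r,
                                                  struct sh_stream *stream)
    {
        // Linear search; with only 2 to 4 active streams, the cost is
        // negligible, and the match is unambiguous.
        for (int n = 0; n < r->num_streams; n++) {
            if (r->streams[n]->sh == stream)
                return r->streams[n];
        }
        return NULL;
    }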
Add yet another variant of the stream open function. This time, make it
so that it's possible to add new open parameters in an extendable way,
which should put an end to having to change this every other year.
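The "extendable way" is the usual parameter-struct trick: callers set
only the fields they care about, and new fields can be added later
without touching every call site. Roughly, with field names as a sketch
rather than the exact ones in stream.h:

    struct stream_open_args {
        struct mpv_global *global;
        struct mp_cancel *cancel;       // for aborting the open
        const char *url;
        int flags;                      // STREAM_READ etc.
        const stream_info_t *sinfo;     // force a specific stream impl
        void *special_arg;              // impl-specific extra data
    };
    int stream_create_with_args(struct stream_open_args *args,
                                struct stream **ret);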
Effectively get rid of the overly special stream_create_instance()
function and use the new one instead, which requires changes in
stream_concat.c and stream_memory.c. The function is still private to
stream.c, but I preferred to make the mentioned users go through the new
function for orthogonality. The error handling (mostly logging) was
adjusted accordingly.
This should not have any functional changes. (To preempt any excuses, I
didn't actually test stream_concat and stream_memory.)
`egl.pc` can be provided either by mesa or libglvnd. The latter doesn't
follow the same version scheme as mesa but instead uses the API version
that the library exposes, which is 1.5 for EGL.[1]
[1] 0dfaea2bcb (diff-b58a140c00ea99fb9a708e15afaade62R8)
Previously, there appeared to be implicit synchronisation in the
GL interop path, and we never observed any visual glitches. However,
recently, I started seeing stuttering in the GL path and on closer
examination it looked like read-before-write behaviour where GL
would display an old frame again rather than the current one.
After verifying that disabling hwdec made the problem go away,
I tried adding a cuStreamSynchronize() after the memcpys and that
also resolved the problem, so it's clearly sync related.
cuStreamSynchronize() is a CPU sync and so more heavy-weight than
you want, but it's the only tool we have. There is no mechanism
defined for synchronising GL to CUDA (it looks like there is a way
to synchronise CUDA to EGL, but it appears to be one-way and so wouldn't
directly address this problem).
Anyway, empirically, the output now looks the same as with hwdec
off.
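As a sketch, the change is just this; the plane-copy loop is
simplified, and CHECK_CU stands in for the interop code's error
handling:

    // Queue the per-plane copies as before...
    for (int n = 0; n < num_planes; n++) {
        CUDA_MEMCPY2D cpy = { /* per-plane parameters */ };
        CHECK_CU(cu->cuMemcpy2DAsync(&cpy, 0));
    }
    // ...then block until the stream has drained. A CPU sync is
    // heavier than a GPU-side fence, but it's the only way to order
    // CUDA writes before GL reads here.
    CHECK_CU(cu->cuStreamSynchronize(0));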
Swapchain depth currently hard-coded to 3 (4 buffers).
As we now avoid redrawing on repeat frames (we simply requeue the same fb
again), this should give a nice performance boost when playing videos with a
lower FPS than the display FPS in video-sync=display-resample mode.
Presentation feedback has also been implemented to help counter the
significant amounts of jitter we would otherwise be seeing.
This allows stream_cb backends to implement blocking
behavior inside read_fn, and still get notified when the user
wants to cancel and stop playback.
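For example, a backend blocking in poll() inside read_fn could be woken
like this; the self-pipe is one possible mechanism, not part of the
API:

    // cancel_fn runs on another thread and must only wake read_fn up.
    static void my_cancel(void *cookie)
    {
        struct my_ctx *ctx = cookie;
        (void)write(ctx->wakeup_pipe[1], &(char){0}, 1);
    }
    // Registered alongside the other callbacks:
    //     info->cancel_fn = my_cancel;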
Signed-off-by: Aman Gupta <aman@tmm1.net>
This wasn't used at all in my tests, because it simply passed the
frame directly to libswresample. (And, by the way, will always do
that, because s64 is so obscure literally NOTHING uses it except
a sample specifically created to test this code. Screw FFmpeg.)
This can be due to unsupported sample formats (see previous commits),
minor allocation failures, and similar things. The exact cause is buried
too deep in abstractions to identify. But most of the time it doesn't
happen anyway, since it's extremely rare that new audio formats are
added.
What an idiotic format. It makes no sense, and should have been
converted to S32 in the demuxer, rather than plague everyone with
another extremely obscure nonsense format. Why doesn't FFmpeg add S24
instead? That's an actually useful format.
May cause compilation failure with old FFmpeg or Libav libs, but I don't
care.
TARGET_OS_TV seems to always be defined, but set according to the
build configuration. This fixes all Apple configurations being
mis-identified as tvOS.
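The gist, as a sketch (the actual check lives wherever the Apple
platform detection happens):

    // Wrong: TARGET_OS_TV is defined on every Apple target, so this
    // branch is also taken on macOS/iOS builds:
    #if defined(TARGET_OS_TV)
        // tvOS-specific code
    #endif

    // Right: test the value, which is nonzero only when actually
    // building for tvOS:
    #if TARGET_OS_TV
        // tvOS-specific code
    #endif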
The completion function itself now parses --list-options on the first
tab press and caches the results. This does mean a slight delay on that
first tab press, but it will only do this if the argument being
completed looks like an option (i.e. starts with "-"), so there is never
a delay when just completing a file name. I've also put some effort into
making it reasonably fast; on my machine it's consistently under 100 ms,
more than half of which is mpv itself.
Installation of zsh completion is now done unconditionally because it's
nothing more than copying a file. If you really don't want it installed,
set zshdir to empty: `./waf configure --zshdir= ...`
Improvements in functionality compared to the old script:
* Produces the right results for mpv binaries other than the one it was
installed with (like a dev build for testing changes).
* Does not require running mpv at build time, so it won't cause
problems with cross compilation.
* Handles aliases.
* Slightly nicer handling of options that take comma-separated values
and/or sub-options: A space is now inserted at the end instead of a
comma, allowing you to immediately start typing the next argument,
but typing a comma will still remove the automatically added space,
and = and : will now do that too, so you can immediately add a
sub-option.
* More general/flexible handling of values for options that print their
possible values with --option=help. The code as is could handle quite
a few more options (*scale, demuxers, decoders, ...), but nobody
wants to maintain that list here so we'll just stick with what the
old completion script already did.
This allows using drm hwaccels that require a hwdevice.
Tested with v4l2request hwaccel and cedrus driver on an allwinner device
running mpv with --vo=gpu --gpu-context=drm --hwdec=drm.
At first, this code used only 1 plane, so the compatibility stuff was
sufficient. But then use of planes 1 and 2 was added, without extending
the compatibility stuff.
I think I've seen a case recently where this broke the build and caused
users to apply invalid fixes, but I don't remember where.
It's possible that I didn't get all defines that are needed.
I think I was wrong about having to reset the stats when mpv stops
producing frames, eg. when it's paused. As long as the swapchain doesn't
underflow, last_queue_display_time will still be accurate, because the
next frame produced should still be presented one vsync after the
last one in the swapchain.
If the swapchain underflows (which is the common case when mpv is
paused for more than 150 ms), the next predicted frame time should be in
the past. It should be fine to leave last_queue_display_time unset in
this case, since vo.c will use the current time instead, which is a
decent guess (though it doesn't take vsync phase into account).
last_sync_refresh_count and last_sync_qpc_time should be kept on
swapchain underflow as well. Assuming the display refresh rate doesn't
change while mpv is paused, they'll only provide a more accurate guess
of the vsync duration when mpv starts playing again. Also,
vsync_duration_qpc never needs to get reset. It will get overwritten
immediately in most cases, and when it doesn't, it's still a better
guess of the vsync duration than nothing.
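Put as a sketch, the prediction this reasoning defends is roughly the
following; names follow the text above, now_qpc is a placeholder for
the current QPC timestamp, and the real backend code may differ:

    // Each queued frame is expected one vsync after the previous one.
    int64_t predicted = p->last_queue_display_time + p->vsync_duration_qpc;
    if (predicted >= now_qpc) {
        p->last_queue_display_time = predicted;
    } else {
        // Swapchain underflow (e.g. mpv was paused): the prediction is
        // stale, so report nothing and let vo.c use the current time.
    }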
That's just a single one. It used to be more, when FFmpeg still required
using pointless accessors for tons of fields, which historically broke
compatibility with Libav. (I think I wrote the patch to deprecate that
crap and to allow direct access myself.)
There may be more exceptions, but I forgot about them. Another point is
that we don't really trust FFmpeg ABI stability, though.
In spdif mode, there are hacks that try to cut audio on frame boundaries
(blame spdif, which is a hack in itself). The "alignment" is used in a
bunch of places, but --end does not respect it. This leads to some audio
that can't be pushed because the alignment is off (I don't know why, nor
do I care), which puts audio into an underrun state forever.
Fix this by discarding unusable extra samples if no new data can be
expected.
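As a sketch, with helper and variable names being illustrative:

    // On EOF, drop the tail that doesn't fill a whole spdif frame,
    // instead of waiting forever for data that will never arrive.
    int align = af_format_sample_alignment(ao->format);
    if (eof)
        num_samples -= num_samples % align;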
Fixes: #6935
The C11 situation is complicated. For example, MinGW doesn't seem to
have a full C11 implementation, but we pretty much rely on C11 atomics.
Regarding "#pragma once": they say it's not standard because of unsolved
(admittedly valid) issues. But still, fuck writing include guards, I
just can't be bothered with this crap.
(Does anyone even read this document?)
I think a popular libmpv application did exactly this: enabling advanced
control, and then receiving deadlocks. I didn't confirm it, though. In
any case, the API docs should avoid tricking users into making this easy
mistake.
The render API (vo_libmpv) had potential deadlock problems with
MPV_RENDER_PARAM_ADVANCED_CONTROL. This required vd-lavc-dr to be
enabled (the default). I never observed these deadlocks in the wild
(doesn't mean they didn't happen), although I could specifically provoke
them with some code changes.
The problem was mostly about DR (direct rendering, letting the video
decoder write to OpenGL buffer memory). Allocating/freeing a DR image
needs to be done on the OpenGL thread, even though _lots_ of threads are
involved with handling images. Freeing a DR image is a special case that
can happen any time. dr_helper.c does most of the evil magic of
achieving this. Unfortunately, there was a (sort of) circular lock
dependency: freeing an image while certain internal locks are held would
trigger the user's context update callback, which in turn would call
mpv_render_context_update(), which processed all pending free requests,
and then acquired an internal lock - which the caller might not release
until a further DR image could be freed.
"Solve" this by making freeing DR images asynchronous. This is slightly
risky, but actually not much. The DR images will be free'd eventually.
The biggest disadvantage is probably that debugging might get trickier.
Any solution to this problem will probably add images to free to some
sort of queue, and then process it later. I considered making this more
explicit (so there'd be a point where the caller forcibly waits for all
queued items to be free'd), but discarded these ideas as this probably
would only increase complexity.
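In sketch form, any such solution looks about like this; the dr_helper
internals are simplified guesses:

    // Freeing just pushes onto a locked list and pokes the API user;
    // the GL thread drains the list in mpv_render_context_update().
    void dr_helper_free_image(struct dr_helper *dr, struct mp_image *img)
    {
        pthread_mutex_lock(&dr->lock);
        MP_TARRAY_APPEND(dr, dr->free_list, dr->num_free_list, img);
        pthread_mutex_unlock(&dr->lock);
        dr->update_cb(dr->update_cb_ctx);
    }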
Another consequence is that freeing DR images on the GL thread is not
synchronous anymore. Instead, mpv_render_context_update() will do it
with a delay. This seems roundabout, but doesn't actually change
anything, and avoids additional code.
This also fixes the fact that the render API required its user to
remain on the same thread, even though this wasn't documented. As such,
it was a bug. OpenGL essentially forces you to do all GL usage on a
single thread, but in theory the API user could for example move the GL
context to another thread.
The API bump is because I think you can't make enough noise about this.
Since we don't backport fixes to old versions, I'm specifically stating
that old versions are broken, and I'm supplying workarounds.
Internally, dr_helper_create() does not use pthread_self() anymore, thus
the vo.c change. I think it's better to make binding to the current
thread as explicit as possible.
Of course it's not sure that this fixes all deadlocks (probably not).
When the (float) bitrate is returned, it is implicitly converted to an
int64 value, merely discarding the fractional part.
However, the bitrate of a CBR track can vary a bit due to timestamp
precision loss after clock conversion (this can affect MPEG-TS audio
tracks). So a bitrate like 191999.999... results in 191999 when
being returned - instead of 192000.
To fix this, apply proper rounding, as already done for the "old" case.
While at it, refactor the "old" case to also use `llrint`.
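Concretely, with `bitrate` standing in for the computed float value:

    #include <math.h>
    // Truncation: (int64_t)191999.999 == 191999.
    // Rounding, as applied here:
    int64_t rate = llrint(bitrate);    // 191999.999... -> 192000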
libass had an API to configure this since 2013. mpv always used
ASS_FONTPROVIDER_AUTODETECT, because usually there's little reason to
use anything else. The intention of the now added option is to allow
users to disable use of system fonts.
I didn't consider it worth the trouble to add the coretext and
directwrite enum items from ASS_DefaultFontProvider. The "auto" choice
will have the same effect if they're available. Also, the part of the
code which defines the option does not necessarily have libass available
(it's still optional!), so defining all enum items as choices is icky. I
still added fontconfig, since that may be nice to emulate a nostalgic
2010 feeling of mpv freezing on fontconfig.
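What the option boils down to is picking the value passed to libass,
roughly like this; the option field is a stand-in, not the actual name:

    // Translate the new option into libass's ASS_DefaultFontProvider.
    int provider = ASS_FONTPROVIDER_AUTODETECT;     // the "auto" choice
    if (opts->disable_system_fonts)                 // assumed option field
        provider = ASS_FONTPROVIDER_NONE;           // embedded fonts only
    ass_set_fonts(renderer, NULL, "sans-serif", provider, NULL, 1);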
The option for OSD is even less useful. (But you get it for free, and
why pass up a chance to add yet another useless option?)
This is not quite what was requested in #6947, but as close as it gets.
in->eof is used as an indicator whether reading packets still makes
sense. (Without this, the prefetcher would obviously burn CPU by
retrying reading even though everything has been read.)
This was not reset properly after seeks were performed. It led to
getting stuck in at least one corner case: when enabling a track, the
demuxer would seek backwards to get new packets from the current
playback position ("refresh seeks"). But if playback was paused, and EOF
was previously reached, it would not try to read packets again due to
in->eof still being true. Nothing else would make it retry reading, so
it was stuck in a weird underrun/buffering state.
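The fix, as a sketch, simplified from the seek path in demux.c:

    // Clear the EOF indicator whenever a seek is actually executed, so
    // the reader thread considers reading packets again.
    in->eof = false;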
Fixes: #6986
This was a leftover from commit b2752321, which fixed #6522, but after
the recent demux refactoring this fix is superseded by commit 0f6cda4ab.
Remove the redundant update call.