Since b74c09efbf, audio-only files let you seek to arbitrary points
beyond the end of the file (while the displayed time was still clamped
to the nominal file duration). This was confusing and just not wanted. The
reason is probably that the commit removed setting the audio PTS for
data before the seek target, so if you seek past the end of the file,
the audio PTS is never set. This in turn means the logic to determine
the current playback time has no PTS at all, and thus falls back to the
seek PTS.
This happened in the past for other reasons (like efe43d768f). I've had
enough of this, so I'm just changing the code to clamp the seek
timestamp to a "known" range. Do this when seeking ends, because in the
fallback case, the playback time shouldn't be stuck at e.g. "end +
seek_argument". Also do it when initiating a new seek (mp_seek), because
if the previous seek hasn't finished yet, the seeks shouldn't add up
and push the target "out of range" either. The latter is especially
relevant for hr-seeks.
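A minimal sketch of the clamping, purely illustrative (the helper name is
made up; MP_NOPTS_VALUE is mpv's "no timestamp" marker, and start/duration
come from the demuxer and may be missing or bogus):

    // Clamp a requested seek PTS into [start, start + duration]. If the
    // duration is unknown or invalid, leave the target alone; clamping to
    // garbage would be worse than not clamping at all.
    static double clamp_seek_pts(double pts, double start, double duration)
    {
        if (pts == MP_NOPTS_VALUE || duration <= 0)
            return pts;
        double end = start + duration;
        if (pts > end)
            pts = end;
        if (pts < start)
            pts = start;
        return pts;
    }

Something like this would run both when a seek finishes and when a new seek
is initiated in mp_seek().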
Doing this clamping is problematic because the duration is a possibly
invalid value from the demuxer, or just missing. Especially with
timestamp resets, fun sometimes happens, and in these situations it
might be better not to clamp.
One could argue you should just use the last audio timestamp returned by
the decoder or demuxer (even if that directly conflicts with --end), but
that sounds even more hairy.
In summary: what a dumb waste of time, what the fuck.
Add a property that returns whether the window is focused, currently
only for X11 and Wayland.
My use case for this is having an equivalent of pause-when-minimize.lua
for tiling window managers: make mpv play only while it's in the current
workspace or is focused (I'm fine with either one but prefer focus).
On X I do this by observing display-names, which is empty when the
rectangles of the display and mpv don't intersect, but on Wayland its
value doesn't change when mpv leaves the current workspace (and the same
check doesn't work since the geometries still intersect).
This could later be made writable as requested in #6252.
Note that on Wayland we shouldn't consider an unactivated window with
keyboard input as focused.
The wlroots compositors I tested set activated after changing the
keyboard focus, so if you set wl->focused only in
keyboard_handle_enter() and keyboard_handle_leave() to avoid adding the
"has_keyboard_input" member, focused isn't set to true when first
opening mpv until you focus another window and focus mpv again.
Conversely, if that order can't be assumed for all compositors, we
should toggle wl->focused when necessary in keyboard_handle_enter() and
keyboard_handle_leave() as well as in handle_toplevel_config().
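Roughly, the idea is this (a sketch with illustrative member names; only
"focused" and "has_keyboard_input" appear above, the rest is assumed):

    // The window only counts as focused if the compositor activated it AND
    // it has keyboard input. Recomputing this whenever either input changes
    // gives the right result regardless of whether the compositor sets
    // "activated" before or after moving keyboard focus.
    static void update_focused(struct vo_wayland_state *wl)
    {
        wl->focused = wl->activated && wl->has_keyboard_input;
    }

with call sites in keyboard_handle_enter(), keyboard_handle_leave(), and
handle_toplevel_config().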
--audio-stream-silence is a shitty feature compensating for awful
consumer garbage, that mutes PCM at first to check whether it's
compressed audio, using formats advocated and owned by malicious patent
troll companies (who spend more money on their lawyers than paying any
technicians), wrapped in a wasteful way to make it constant bitrate
using a standard whose text is not freely available, and only rude users
want it. This feature has been carelessly broken, because it's
complicated and stupid. What would Jesus do? If not getting an aneurysm,
or pushing over tables with expensive A/V receivers on top of them, he'd
probably fix the feature. So let's take inspiration from Jesus Christ
himself, and do something as dumb as wasting some of our limited
lifetime on this incredibly stupid fucking shit.
This is tricky, because state changes like end-of-audio are supposed to
be driven by the AO driver, while playing silence precludes this. But it
seems code paths for "untimed" AOs can be reused.
But there are still problems. For example, underruns will just happen
normally (and stop audio streaming), because we don't have a separate
heuristic to check whether the buffer is "low enough" (as a consequence
of a network stall, but before the audio output itself underruns).
Create a central function which pumps data through the filter. This also
might fix bogus use of the filter API on flushing. (The filter is just
used for convenience, but I guess the overall result is still simpler.)
The render API replaced the opengl_cb API over 2 years ago. Since then,
the opengl_cb API was emulated on top of the render API. While it would
probably be reasonable to emulate these APIs until they're removed in an
eventual libmpv 2.0 bump, I have some non-technical reasons to disable
the API now.
The API stubs remain; they're needed for formal ABI compatibility.
If you encode to e.g. an audio-only format, then video is disabled
automatically. This also takes care of the very cryptic error message.
It says "[vo/lavc] codec for video not found". Sort of true, but
obscures the real problem if it's e.g. an audio-only format.
I don't see the point of this. Not doing it may defer an error to later.
That's OK? For now, it seems better to reduce the encoding internal API.
If someone can demonstrate that this is needed, I might reimplement it
in a different way.
Since this is a messy and fragile mechanism, I want it logged (even if
it's somewhat in conflict with the verbose logging policy). On the other
hand, doing that naively would log it unconditionally on every playloop
iteration. So add some nonsense to log it only on progress.
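The "nonsense" boils down to something like this (illustrative; the field
names are made up for the example):

    // Only log when the value actually changed since the last playloop
    // iteration, instead of printing the same line every time.
    if (mpctx->cur_state != mpctx->last_logged_state) {
        MP_VERBOSE(mpctx, "state changed to %d\n", mpctx->cur_state);
        mpctx->last_logged_state = mpctx->cur_state;
    }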
Just for the redundant message. The function which is called here,
ao_drain(), does not care in which state it is called, and already
handled this gracefully.
AVFrame doesn't have public code for pool allocation, so mpv does it
manually. AVFrame allocation is very tricky, so we added a bug.
This crashed with libopus encoding, but not some other audio codecs,
because the libopus libavcodec wrapper accesses AVFrame.data. Most code
tries to avoid accessing AVFrame.data and uses AVFrame.extended_data,
because using the former would subtly corrupt memory on more than 8
channels. The fact that this problem manifested only now shows that most
AVFrame consuming FFmpeg code indeed uses extended_data for audio.
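For context, this is the distinction that bites here (small illustration;
the helper is hypothetical, but AV_NUM_DATA_POINTERS and extended_data are
real FFmpeg API):

    #include <stdint.h>
    #include <libavutil/frame.h>

    // AVFrame.data has only AV_NUM_DATA_POINTERS (8) fixed slots. For planar
    // audio with more than 8 channels, the per-channel pointers live in the
    // separately allocated extended_data array, so planes must always be
    // read through extended_data; for <= 8 channels it simply aliases data.
    static uint8_t *audio_plane(const AVFrame *frame, int channel)
    {
        return frame->extended_data[channel];
    }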
It is now the AO's responsibility to handle period size alignment. The
ao->period_size alignment field is unused as of the recent audio
refactor commit. Remove it.
It turns out that ao_alsa shows extremely inefficient behavior as a
consequence of the removal of period size aligned writes in the
mentioned refactor commit. This is because it could get into a state
where it repeatedly wrote tiny chunks (as small as 1 sample), and
starved the rest of the player as a consequence. Too bad. Explicitly
align the size in ao_alsa. Other AOs, which need this, should do the
same.
One reason why it broke so badly with ao_alsa was that it retried the
write() even when all of the reported free space had been written. So
stop doing that too. Retry the write only if we somehow wrote less.
I'm not sure about ao_pulse.
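The alignment itself is trivial; a sketch of what "explicitly align the
size" means (illustrative, not the exact ao_alsa code):

    // Round the number of frames to be written down to a whole multiple of
    // the device period size, so the AO never dribbles out 1-sample writes.
    static int align_to_period(int frames, int period_frames)
    {
        if (period_frames > 0)
            frames -= frames % period_frames;
        return frames;
    }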
This was terrible garbage. It was (or at least its implementation was)
always a makeshift solution, and just gross bullshit. It is unused now,
so delete it.
This replaces the two buffers (ao_chain.ao_buffer in the core, and
buffer_state.buffers in the AO) with a single queue. Instead of having a
byte based buffer, the queue is simply a list of audio frames, as output
by the decoder. This should make dataflow simpler and reduce copying.
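Conceptually, the buffer now amounts to this (illustrative struct only;
mp_aframe is mpv's audio frame type, but this is not the real definition):

    // Instead of a byte FIFO, buffered audio is a list of decoder-output
    // frames, each carrying its own format and PTS, passed along without
    // repacking them into a flat sample buffer.
    struct audio_queue_sketch {
        struct mp_aframe **frames;
        int num_frames;
    };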
It also attempts to simplify fill_audio_out_buffers(), the function I
always hated most, because it's full of subtle and buggy logic.
Unfortunately, I got assaulted by corner cases, dumb features (attempt
at seamless looping, really?), and other crap, so it got pretty
complicated again. fill_audio_out_buffers() is still full of subtle and
buggy logic. Maybe it got worse. On the other hand, maybe there really
is some progress. Who knows.
Originally, the data flow part was meant to be in f_output_chain, but
due to tricky interactions with the playloop code, it's now in the dummy
filter in audio.c.
At least this improves the way the audio PTS is passed to the encoder in
encoding mode. Now it attempts to pass frames directly, along with the
pts, which should minimize timestamp problems. But to be honest, encoder
mode is one big kludge that shouldn't exist in this way.
This commit should be considered pre-alpha code. There are lots of bugs
still hiding.
Do not make resetting the "access" filters reset the queue itself. This
is more flexible, and will be used in a later commit.
Also, if the queue is not in the reset state while the input access
filter is reset, make it immediately request data again. This is more
consistent, because it'll enter the state it "should" be in right away, rather than when
the filter's process function is called at an (essentially) random point
in the future. This means the filter graph will resume work on its own
if the queue was not reset before filter reset.
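In rough pseudocode, the input-side reset now behaves like this (names are
illustrative; request_more_input() stands in for whatever actually wakes the
filter up):

    // Resetting the access filter no longer touches the queue itself. If the
    // queue is not in the reset state, immediately ask for new data instead
    // of waiting for a future, essentially random process() call.
    static void input_filter_reset(struct queue_priv *p)
    {
        if (!p->queue_is_reset)
            request_more_input(p);   // hypothetical helper
    }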
This could affect the only current user of f_async_queue, the code for
the --vd-queue-enable/--ad-queue-enable feature in f_decoder_wrapper.
But it looks like this already uses it in a compatible way.
This is a kind of bad hack (with bad implementation) to paint over other
problems of the filter system. The main problem is that some filters
might be left with pending frames if the filter runner is "paused",
which we don't want. To be used in a later commit.
It's relevant in some obscure corner cases (EDL file that has a segment
without audio). Didn't test what's actually going on (is ad_lavc.c
behaving wrong? is libavcodec behaving wrong or in an unexpected way? is
lavc_process wrong?) and just patched it over with some bullshit, so the
fix might be too complicated, and could be reworked at some later point.
This sure is a real data flow fuckmess.
Allow mp_aframe_clip_timestamps() to discard a spdif frame if it's
entirely out of the timestamp range. Just a minor thing that might make
handling these dumb formats easier.
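The discard condition amounts to this (a sketch of the logic, not the
actual mp_aframe_clip_timestamps() code):

    #include <stdbool.h>

    // spdif frames can't be partially cut, so if such a frame lies entirely
    // outside [start, end), drop it; frames that overlap the range are left
    // untouched.
    static bool spdif_frame_out_of_range(double pts, double duration,
                                         double start, double end)
    {
        return pts + duration <= start || pts >= end;
    }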
Transparent windows on X11/EGL/native Mesa GL didn't work for various
reasons. From what I remember, the current code did work with nvidia at
least. Mesa has made attempts to fix this, but they never really made it
in.
But it turns out you can make EGL/Mesa list the EGLConfigs that use X11
RGBA visuals, and context_x11egl.c contains code that explicitly selects
them if alpha is requested (see pick_xrgba_config()).
The reason EGL/Mesa did not list them (thus breaking transparency) is
that we requested an EGL_ALPHA_SIZE != 0 if alpha was requested. But
the transparent EGLConfigs use EGL_ALPHA_SIZE == 0. That's because EGL
doesn't actually support the concept of transparent windows; the alpha
size parameter is something else (memory rendering without FBOs or
something, I don't care enough to look up the real reasons).
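The pick_xrgba_config() approach is roughly the following (a sketch under
the assumption that a 32-bit X visual implies an alpha channel; not the
exact mpv code):

    #include <stdbool.h>
    #include <EGL/egl.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    // Instead of demanding EGL_ALPHA_SIZE != 0, look up the X11 visual
    // behind each EGLConfig and pick one whose visual actually has alpha.
    static EGLConfig pick_rgba_config(Display *dpy, EGLDisplay egl_dpy,
                                      EGLConfig *configs, int num_configs)
    {
        for (int i = 0; i < num_configs; i++) {
            EGLint visual_id = 0;
            if (!eglGetConfigAttrib(egl_dpy, configs[i], EGL_NATIVE_VISUAL_ID,
                                    &visual_id))
                continue;
            XVisualInfo tmpl = { .visualid = (VisualID)visual_id };
            int n = 0;
            XVisualInfo *vi = XGetVisualInfo(dpy, VisualIDMask, &tmpl, &n);
            bool ok = vi && vi->depth == 32;
            if (vi)
                XFree(vi);
            if (ok)
                return configs[i];
        }
        return NULL;
    }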
This still won't work on Wayland. Every EGL backend needs platform
specific code. (Good job, EGL, such an awesome platform independent
standard.)
Fixes: #6590