Picks up files like "cover.jpg". This is made part of normal external
file loading, so I'm adding 3 new options that are direct equivalents of
the
options that control loading of external subtitle and audio files. Even
though I bet nobody wants them and they just increase confusion... I
guess the world is actually hell, so this outcome should be fine.
It prefers non-specific external files like "cover.jpg" over embedded
cover art. Not sure if that's wanted or unwanted.
There's some pain over explicitly marking such files as external
pictures. This is basically an optimization: in most cases, a heuristic
would treat an image file loaded with --external-file the same (it's a
heuristic because ffmpeg can't tell us whether something is an image or
a video). However, even with this heuristic, it would decode the cover
art picture again on each seek, which would essentially slow down
seeking in audio files. This bothered me greatly, which is why I'm
adding these additional options at all, and why I bothered with the
previous commit.
Fixes: #3056
Essentially, this lets video.c decide whether to consider a video track
cover art, instead of having the decoder wrapper use the lower level
sh_stream flag.
Some pain because of the dumb threading shit. Moving the code further
down so that some of it runs under the lock should not change behavior,
although it could (framedrop nonsense).
This commit should not change actual behavior, and is only preparation
for the following commit.
on macOS 10.15 setting the activation policy behaves quite weirdly. the
call changes the currently active app to a nameless process, which is
probably also what keeps the no-focus behavior from working. as a
workaround, refocus the previously active app.
Fixes #7725
brew update tries to update the java cask, which it then attempts to
build from source. this takes too long and leads to a timeout of the
job. we can't manually remove the java cask because of a bug in the
outdated brew cask version and the old formula, so we just remove the
whole cask tap and call it a day, since we don't need it anyway.
Back in the olden days, mpv's wayland backend was driven by the frame
callback. This had several issues and was removed in favor of the
current approach, which allowed some advanced features (like
display-resample and presentation time) to actually work properly.
However, as a consequence, mpv always rendered, even if the surface was
hidden. Wayland people consider this "wasteful" (and, well, they aren't
wrong). This commit aims to avoid wasteful rendering by doing some
additional checks in the swapchain. There are three main parts to this.
1. Wayland EGL now uses an external swapchain (like the drm context).
Before we start a new frame, we check to see if we are waiting on a
callback from the compositor. If there is no wait, then we go ahead and
render the frame, swap buffers, and then initiate vo_wayland_wait_frame
to poll (with a timeout) for the next potential callback. If we are
still waiting on a callback from the compositor when starting a new
frame, then we simply skip rendering it entirely until the surface comes
back into view (see the sketch after this list).
2. Wayland on vulkan has essentially the same approach although the
details are a little different. The ra_vk_ctx does not have support for
an external swapchain and although such a mechanism could theoretically
be added, it doesn't make much sense with libplacebo. Instead,
start_frame was added as a param and is used to check for the frame
callback.
3. For wlshm, it's simply a matter of adding the frame callback to it,
leveraging vo_wayland_wait_frame, and using the frame callback value to
decide whether or not to draw the image.
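
In sketch form, point 1 boils down to this (a hedged sketch, not the
actual diff; besides eglSwapBuffers(), every name and the struct shape
are invented, and the signature of vo_wayland_wait_frame() from the text
is assumed):

    #include <stdbool.h>
    #include <EGL/egl.h>

    /* Simplified stand-in for the real context state. */
    struct wl_egl_ctx {
        EGLDisplay egl_display;
        EGLSurface egl_surface;
        bool frame_wait; /* set until the compositor's frame callback fires */
    };

    void vo_wayland_wait_frame(struct wl_egl_ctx *ctx); /* signature assumed */

    /* External-swapchain hook: refuse to start a frame while we are
     * still waiting on the compositor, i.e. the surface is hidden. */
    static bool wl_egl_start_frame(struct wl_egl_ctx *ctx)
    {
        return !ctx->frame_wait;
    }

    static void wl_egl_swap_buffers(struct wl_egl_ctx *ctx)
    {
        eglSwapBuffers(ctx->egl_display, ctx->egl_surface);
        ctx->frame_wait = true;     /* cleared by the frame callback handler */
        vo_wayland_wait_frame(ctx); /* poll (with timeout) for the callback */
    }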
In the recent terminal commit, I "compressed" the read() error handling,
and messed it up. The return value could be -1 for other non-fatal
errors (such as EIO when trying to read while backgrounded), which
resulted in buf.len getting messed up.
Fixes: 602384348e
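
For reference, the intended handling looks roughly like this (a sketch,
not the actual code; the inbuf struct stands in for the real buffer):

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Hypothetical input buffer; stands in for the real code's buf. */
    struct inbuf { unsigned char *data; size_t len, size; };

    /* Only grow len on success, treat EIO/EAGAIN/EINTR as non-fatal,
     * and never let a -1 return value reach buf->len. */
    static int read_input(int fd, struct inbuf *buf)
    {
        ssize_t r = read(fd, buf->data + buf->len, buf->size - buf->len);
        if (r > 0)
            buf->len += r;  /* append only what was actually read */
        else if (r < 0 && errno != EIO && errno != EAGAIN && errno != EINTR)
            return -1;      /* genuinely fatal error */
        return 0;           /* EOF or non-fatal error: just retry later */
    }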
Nobody needs this anymore. If not too many people complain, we'll remove
this completely. Many already consider X11 and OpenGL legacy, so we
don't need TWO X11/OpenGL backends.
Due to Unix being legacy garbage, it's not possible to safely detect the
ESC key on a terminal. The key sequences are ambiguous. The code for the
ESC key also starts the sequences for other special keys.
Until now, you needed to hit ESC twice for it to be recognized.
Attempt to handle this better by using a timeout to detect the key. If
ESC is in the input buffer, but nothing else arrived after a timeout,
assume it's the ESC key. I think this is the method vim uses. Currently,
the timeout is set at 100ms. This is hardcoded and cannot be changed.
It's possible that this causes problems on slow ssh connections or so.
I'm not sure what exactly happens if you manage to get ESC + another
normal key into the input buffer. If it's a known sequence, it will be
matched and interpreted as such. If not, it'll probably be discarded.
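
A minimal sketch of the timeout approach, assuming a hypothetical
parse_esc_sequence() as a stand-in for the sequence matcher:

    #include <poll.h>
    #include <unistd.h>

    #define ESC_TIMEOUT_MS 100  /* hardcoded, as described above */

    int parse_esc_sequence(int fd); /* hypothetical sequence matcher */

    /* After reading a lone ESC byte, poll briefly; if nothing follows
     * within the timeout, report it as the ESC key itself. */
    static int read_key(int fd)
    {
        unsigned char c;
        if (read(fd, &c, 1) != 1)
            return -1;
        if (c != 0x1b)
            return c;
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, ESC_TIMEOUT_MS) <= 0)
            return 0x1b;    /* nothing arrived: a real ESC key press */
        return parse_esc_sequence(fd); /* more bytes: match the sequence */
    }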
The previous commit fixes this for libswscale. The libzimg path has
extra code
to copy by slice, but it still may access pixel groups using normal
memory accesses (for example, reading rgba pixel data via uint32_t), so
document a minimum alignment requirement per pixel format.
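
To illustrate the kind of access this covers (an illustrative fragment,
not the actual zimg path):

    #include <stdint.h>

    /* Reading a packed RGBA pixel as one 32-bit load assumes the row
     * pointer is at least 4-byte aligned; this is the sort of access
     * the per-format alignment requirement documents. */
    static uint32_t read_rgba(const uint8_t *row, int x)
    {
        return *(const uint32_t *)(row + 4 * x);
    }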
If the alignment is less than 16, certain libswscale code paths will
silently corrupt memory outside of the target buffer. This actually
affected the libmpv software rendering API (that was fun to debug).
Rather than passing this problem to the next API user, try to avoid it
within libmpv.
It's unclear which alignment libswscale requires for safe operation. I'm
picking 32 (the next power of two above the value observed to be safe in
the case I experienced), because libavfilter mostly uses this value.
The way to work around this is slow: just make a full copy of the entire
input or output image. Possibly this could be optimized by using the
slice API, but that would be more effort, and would likely expose
further libswscale bugs. Hope that this is a rarely needed path.
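
The check that triggers the slow path amounts to something like this (a
sketch with a simplified image struct, not the actual code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define SWS_MIN_ALIGN 32  /* the value picked above */

    /* Simplified stand-in for the real image struct. */
    struct image { uint8_t *planes[4]; int stride[4]; int num_planes; };

    /* If any plane pointer or stride misses the assumed alignment,
     * fall back to a full copy through an aligned buffer. */
    static bool needs_aligned_copy(const struct image *img)
    {
        for (int p = 0; p < img->num_planes; p++) {
            if ((uintptr_t)img->planes[p] % SWS_MIN_ALIGN ||
                abs(img->stride[p]) % SWS_MIN_ALIGN)
                return true;
        }
        return false;
    }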
The next commit will update the alignment requirement documentation
bits.
Pause can be changed during a file change, for example with
--reset-on-next-file=pause, in hooks, or simply by being quick, and in
that case the AO's pause state was not updated correctly. mpctx->ao_chain is
only set if playback is fully initialized, while the AO itself in
mpctx->ao can be reused across files.
Fix this by always running set_pause_state() if the pause option is
changed. Could cause new bugs since running this used to be explicitly
avoided outside of the loaded state. The handling of time_frame is
potentially worrisome.
Regression due to recent audio refactor; before that, the AO didn't have
a separate/persistent pause state.
Fixes: #8079
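
In sketch form (the handler shape is invented; set_pause_state() and the
mpctx names are from the description above):

    #include <stdbool.h>

    struct MPContext; /* mpv's central state */
    void set_pause_state(struct MPContext *mpctx, bool pause);

    /* The call is no longer skipped just because mpctx->ao_chain isn't
     * set yet; the persistent AO pause state stays in sync. */
    static void on_pause_option_change(struct MPContext *mpctx, bool pause)
    {
        set_pause_state(mpctx, pause); /* now runs unconditionally */
    }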
Since b74c09efbf, audio-only files let you seek to arbitrary points
beyond the end of the file (but still displayed the time clamped to the
nominal file duration). This was confusing and just not wanted. The
reason is probably that the commit removed setting the audio PTS for
data before the seek target, so if you seek past the end of the file,
the audio PTS is never set. This in turn means the logic to determine
the current playback time has no PTS at all, and thus falls back to the
seek PTS.
This happened in the past for other reasons (like efe43d768f). I've had
enough of this, so I'm just changing the code to clamp the seek
timestamp to a "known" range. Do this when seeking ends, because in the
fallback case, the playback time shouldn't be stuck at e.g. "end +
seek_argument". Also do it when initiating a new seek (mp_seek), because
if the previous seek hasn't finished yet, it shouldn't add them up and
allow it to go "out of range" either. The latter is especially relevant
for hr-seeks.
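
The clamping itself is roughly (an illustrative helper, not the actual
diff):

    /* Restrict a seek target to the "known" range
     * [start, start + duration]; a missing or invalid duration disables
     * clamping, per the caveat below. */
    static double clamp_seek_pts(double pts, double start, double duration)
    {
        if (duration <= 0)
            return pts;     /* unknown duration: don't clamp */
        if (pts < start)
            return start;
        if (pts > start + duration)
            return start + duration;
        return pts;
    }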
Doing this clamping is problematic because the duration is a possibly
invalid value from the demuxer, or just missing. Especially with
timestamp resets, fun sometimes happens, and in these situations it
might be better not to clamp.
One could argue you should just use the last audio timestamp returned by
the decoder or demuxer (even if that directly conflicts with --end), but
that sounds even more hairy.
In summary: what a dumb waste of time, what the fuck.
Add a property that returns whether the window is focused, currently
only for X11 and Wayland.
My use case for this is having an equivalent of pause-when-minimize.lua
for tiling window managers: make mpv play only while it's in the current
workspace or is focused (I'm fine with either one but prefer focus).
On X I do this by observing display-names, which is empty when the
rectangles of the display and mpv don't intersect, but on Wayland its
value doesn't change when mpv leaves the current workspace (and the same
check doesn't work since the geometries still intersect).
This could later be made writable as requested in #6252.
Note that on Wayland we shouldn't consider an unactivated window that
has keyboard input to be focused.
The wlroots compositors I tested set activated after changing the
keyboard focus, so if you set wl->focused only in
keyboard_handle_enter() and keyboard_handle_leave() to avoid adding the
"has_keyboard_input" member, focused isn't set to true when first
opening mpv until you focus another window and focus mpv again.
Conversely, if that order can't be assumed for all compositors, we
should toggle wl->focused when necessary in keyboard_handle_enter() and
keyboard_handle_leave() as well as in handle_toplevel_config().
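
Either way, the combined condition boils down to something like this (a
sketch with a simplified stand-in for the real state struct; the field
names follow the text):

    #include <stdbool.h>

    /* Updated from keyboard_handle_enter()/keyboard_handle_leave() and
     * handle_toplevel_config(). */
    struct wl_state { bool activated, has_keyboard_input, focused; };

    static void update_focused(struct wl_state *wl)
    {
        /* An unactivated window that has keyboard input doesn't count. */
        wl->focused = wl->activated && wl->has_keyboard_input;
    }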
--audio-stream-silence is a shitty feature compensating for awful
consumer garbage, that mutes PCM at first to check whether it's
compressed audio, using formats advocated and owned by malicious patent
troll companies (who spend more money on their lawyers than paying any
technicians), wrapped in a wasteful way to make it constant bitrate
using a standard whose text is not freely available, and only rude users
want it. This feature has been carelessly broken, because it's
complicated and stupid. What would Jesus do? If not getting an aneurysm,
or pushing over tables with expensive A/V receivers on top of them, he'd
probably fix the feature. So let's take inspiration from Jesus Christ
himself, and do something as dumb as wasting some of our limited
lifetime on this incredibly stupid fucking shit.
This is tricky, because state changes like end-of-audio are supposed to
be driven by the AO driver, while playing silence precludes this. But it
seems code paths for "untimed" AOs can be reused.
However, there are still problems. For example, underruns will just happen
normally (and stop audio streaming), because we don't have a separate
heuristic to check whether the buffer is "low enough" (as a consequence
of a network stall, but before the audio output itself underruns).
Create a central function which pumps data through the filter. This also
might fix bogus use of the filter API on flushing. (The filter is just
used for convenience, but I guess the overall result is still simpler.)
The render API replaced the opengl_cb API over 2 years ago. Since then,
the opengl_cb API was emulated on top of the render API. While it would
probably be reasonable to emulate these APIs until they're removed in an
eventual libmpv 2.0 bump, I have some non-technical reasons to disable
the API now.
The API stubs remain; they're needed for formal ABI compatibility.
If you encode to e.g. an audio-only format, then video is disabled
automatically. This also takes care of the very cryptic error message it
used to print: "[vo/lavc] codec for video not found". Sort of true, but
it obscures the real problem when the target format simply doesn't
support video.
I don't see the point of this. Not doing it may defer an error to later.
That's OK? For now, it seems better to reduce the encoding internal API.
If someone can demonstrate that this is needed, I might reimplement it
in a different way.
Since this is a messy and fragile mechanism, I want it logged (even if
it's somewhat in conflict with the verbose logging policy). On the other
hand, logging it unconditionally would spam every playloop iteration. So
add some nonsense to log it only on progress.
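
The "nonsense" is essentially a change-detection guard, roughly like
this (all names invented):

    #include <stdio.h>

    /* Remember the last state that was printed and stay silent until
     * it changes, instead of logging every playloop iteration. */
    static void log_state_change(int state, int *last_logged)
    {
        if (state == *last_logged)
            return;         /* no progress: skip the per-iteration spam */
        *last_logged = state;
        printf("state: %d\n", state); /* stands in for the verbose log call */
    }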
Just for the redundant message. The function which is called here,
ao_drain(), does not care in which state it is called, and already
handles this gracefully.
AVFrame doesn't have public code for pool allocation, so mpv does it
manually. AVFrame allocation is very tricky, so we added a bug.
This crashed with libopus encoding, but not some other audio codecs,
because the libopus libavcodec wrapper accesses AVFrame.data. Most code
tries to avoid accessing AVFrame.data and uses AVFrame.extended_data,
because using the former would subtly corrupt memory on more than 8
channels. The fact that this problem manifested only now shows that most
AVFrame-consuming FFmpeg code indeed uses extended_data for audio.
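
For context, the difference looks like this (an illustrative FFmpeg-API
fragment, not the mpv pool code):

    #include <string.h>
    #include <libavutil/frame.h>

    /* AVFrame.data is a fixed array of AV_NUM_DATA_POINTERS (8)
     * pointers; for planar audio with more channels, only extended_data
     * is guaranteed to cover every plane. */
    static void zero_audio_planes(AVFrame *frame, int channels, int plane_size)
    {
        for (int ch = 0; ch < channels; ch++)
            memset(frame->extended_data[ch], 0, plane_size);
        /* Indexing frame->data[ch] instead would run past the array for
         * ch >= AV_NUM_DATA_POINTERS, i.e. with more than 8 channels. */
    }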