mpv has a very weak and very annoying policy that determines whether a
playlist should be used or not. For example, if you play a remote
playlist, you usually don't want it to be able to read local filesystem
entries. (Although for a media player the impact is small I guess.)
It's weak and annoying in that it does not prevent certain cases that
could be considered bad, such as allowing playlists on the local
filesystem to reference remote URLs. It probably barely makes sense, but
we just want to exclude some other "definitely not a good idea" things,
all while playlists generally just work, so whatever.
The policy is:
- from the command line anything is played
- local playlists can reference anything except "unsafe" streams
("unsafe" means special stream inputs like libavfilter graphs)
- remote playlists can reference only remote URLs
- things like "memory://" and archives are "transparent" to this
This commit does... something. It replaces the weird stream flags with a
slightly clearer "origin" value, which is now consistently passed down
and used everywhere. It fixes some deviations from the described policy.
I wanted to force archives to reference only content within them, but
this would probably have been more complicated (or required different
abstractions), and I'm too lazy to figure it out, so archives are now
"transparent" (playlists within archives behave the same as they would
outside of them).
There may be a lot of bugs in this.
This is unfortunately a very noisy commit because:
- every stream open call now needs to pass the origin
- so does every demuxer open call (=> the params parameter becomes mandatory)
- most streams were changed to provide the "origin" value
- the origin value needed to be passed along in a lot of places
- I was too lazy to split the commit
Fixes: #7274
This has the advantage that playlists within the archive will work as
expected, because demux_playlist will correctly join the archive base
URL and entry name. Before this change, it could skip before the "|",
resulting in a broken URL.
For some inexplicable reason, the OSC runs the expand-text command a
_lot_. This command is logged at the log file default log level, so the
log file can quickly fill up with these messages. It directly violates
the mpv logging policy: per-frame (or similarly common) log messages
should not be enabled by default for the log file.
stats.lua uses the show-text command for some reason (instead of
creating its own OSD layer).
Explicitly reduce the log level for expand-text and some other commands.
Also reduce the log level for commands triggered by mouse movement.
The previous commit also contributed somewhat to reducing log spam.
Fixes: #4771
Traditionally, the OSC used mpv's "tick" event, which was sent
approximately once per video frame. It didn't try to track any other
state, and
just updated everything.
This is sort of a problem in many corner cases and non-corner cases. For
example, it would eat CPU in the paused state (probably to some degree
also the mpv core's fault), or would waste power or even throw errors
("event queue overflows") on high FPS video.
Change this to not using the tick event. Instead, react to a number of
property change events. Rate-limit actual redrawing with a timer; the
next update cannot happen sooner than the hardcoded 30ms OSC frame
duration. This also has the effect that multiple successive updates are
(mostly) coalesced.
This means the OSC won't eat your CPU when the player is fucking paused.
(It'll still update if e.g. the cache is growing, though.) There is some
potential for bugs whenever it uses properties that are not explicitly
observed. (In theory we could easily change this to a reactive concept
to avoid such things, but whatever.)
mpv_event_property (for property observation) actually never sets an
error status. You cannot distinguish between unavailable properties and
properties which returned an error. Not sure if it ever did.
I intend to rewrite this code approximately every 2 months.
Last time, I did this in commit d66eb93e5d (and 065c307e8e and
b2006eeb74). It was intended to remove the roundabout synchronous
thread "ping pong" when observing properties. At first, the original
async. code was replaced with some nice mostly synchronous code. But
then an async. code path had to be added for vo_libmpv, and finally the
sync. code was dropped because it broke in other obscure cases (like the
Objective-C Cocoa backend).
Try again. This time, update properties entirely on the main thread.
Updates get batched out on every playloop iteration. (At first I wanted
to do it every time the player goes to sleep, but that might starve API
clients if the playloop gets saturated.) One nice thing is that
clients only get woken up once all changed events have been sent, which
might reduce overhead.
While this sounds simple, it's not. The main problem is that reading
properties must not block the client API, i.e. no client API locks can
be held while reading the property. Maybe eventually we can avoid this
requirement, but currently it's just a fact. This means we have to
iterate over all clients and then over all properties (of each client),
all while releasing all locks when updating a property. Solve this by
rechecking on each iteration whether the list changed, and if so,
aborting the iteration and redoing it "next time".
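The iteration looks roughly like this (a sketch with invented structs
and field names; the real code differs, but the lock dance and the
change detection are the point):

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

struct observe_property;
void read_property_value(struct observe_property *prop); // no locks held

struct client {
    pthread_mutex_t lock;
    uint64_t properties_change_ts;    // bumped whenever the list is modified
    int num_properties;
    struct observe_property **properties;
};

static bool update_client_properties(struct client *c)
{
    pthread_mutex_lock(&c->lock);
    uint64_t generation = c->properties_change_ts;
    for (int n = 0; n < c->num_properties; n++) {
        struct observe_property *prop = c->properties[n];
        pthread_mutex_unlock(&c->lock);
        read_property_value(prop);    // reading must not block the client API
        pthread_mutex_lock(&c->lock);
        if (c->properties_change_ts != generation) {
            pthread_mutex_unlock(&c->lock);
            return false;             // list changed; redo it "next time"
        }
    }
    pthread_mutex_unlock(&c->lock);
    return true;
}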
High risk change, expect bugs such as crashes and missing property
updates.
keyvalue_list_find_key() was called on a "partially" constructed list,
because the terminating NULL was added only later. Didn't I say this
code is cursed?
Fixes: #7273
This is for console.lua (see next commit). The idea is that console.lua
can adjust its offset to the bottom of the window by the height of the
OSC.
If the OSC is not set to be permanently visible, export no margins,
because it would look weird to move the console depending on mouse
movement.
Very primitive and dumb, but fulfils its purpose for the next commits.
I chose this specific implementation because it has the lowest footprint
in command.c, without resorting to crazy hacks such as sending messages
between scripts (which would be hard to coordinate especially on
startup).
The size overflow check was inverted: instead of allowing reading only
the first dst_size bytes of the property, it allowed copying past the
property buffer (as returned by xlib). xlib doesn't return the size of
the buffer in bytes, so it has to be computed and checked manually.
Wouldn't it be great if C allowed me to write the overflow check in a
readable way, so it doesn't trick me into writing dumb security bugs?
Relying on X security is even dumber than creating an X security bug,
though, so this was not a real problem. But I found that one specific
call tried to read more than what the property provided, so reduce that.
Also, len*ib obviously can't overflow, so there's an additional layer of
dumb to this whole thing.
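For reference, the corrected check has roughly this shape (simplified
names, not the actual code):

#include <string.h>

// len = number of items xlib returned, ib = size of one item in bytes.
// xlib reports the size in items, so the byte size has to be computed
// manually; copy at most dst_size bytes, and never past the end of the
// property data.
static size_t copy_x11_property(void *dst, size_t dst_size,
                                const void *prop_data, size_t len, size_t ib)
{
    size_t prop_size = len * ib;   // as said above, this can't overflow
    size_t copy_size = prop_size < dst_size ? prop_size : dst_size;
    memcpy(dst, prop_data, copy_size);
    return copy_size;
}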
While we're at dumb things, why the hell does xlib use "long" for 32 bit
types? It's a god damn pain.
This was completely broken: it compared only the first item of the
filter list. Apparently I forgot that this is a list. This probably
broke aspects of runtime filter changing, likely since commit b16cea750f.
Fix this, and remove some redundant code from obj_settings_equals(),
which is not the same as m_obj_settings_equal(); rename it to make
confusing the two harder. (obj_setting_match() has these very weird
label semantics that should probably just be killed. Or not.)
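The list comparison has to walk every entry, roughly like this (a
sketch with a simplified entry type; mpv's real m_obj_settings and
m_obj_settings_equal() differ in detail):

#include <stdbool.h>

// Lists are assumed to be arrays terminated by an entry with a NULL name.
struct entry {
    char *name;       // NULL name terminates the list
    char *label;
    char **attribs;
};

bool entry_equal(struct entry *a, struct entry *b);  // single-entry compare

static bool entry_list_equal(struct entry *a, struct entry *b)
{
    int n = 0;
    for (; a[n].name && b[n].name; n++) {
        if (!entry_equal(&a[n], &b[n]))
            return false;
    }
    // Both lists must end at the same index.
    return !a[n].name && !b[n].name;
}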
I don't even know anymore whether this was intended or not. Certain use
cases for the "-o" options might require this. These options are for
passing general FFmpeg options. These are translated to av_opt_set()
calls, which may or may not accumulate the option values on multiple
calls with the same option name (how should I know?).
Anyway, it seems crazy to allow non-unique keys, so make them unique.
The ad-hoc nature of the option code makes this wonderfully complicated
(when I wrote that this code is cursed, I meant it). In combination with
lazy testing, it probably means there are lots of bugs here.
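Keeping the keys unique amounts to roughly this (a sketch assuming the
flat, NULL-terminated key/value array these options are stored in; no
talloc, no error handling):

#include <stdlib.h>
#include <string.h>

// The list is (key0, val0, key1, val1, ..., NULL). Setting a key that
// already exists overwrites its value instead of appending a duplicate.
static void kv_set(char ***list, int *num_pairs, const char *key,
                   const char *value)
{
    char **l = *list;
    for (int n = 0; n < *num_pairs; n++) {
        if (strcmp(l[n * 2], key) == 0) {
            free(l[n * 2 + 1]);
            l[n * 2 + 1] = strdup(value);   // overwrite the existing value
            return;
        }
    }
    // Key not present yet: append a new pair plus the terminating NULL.
    l = realloc(l, ((*num_pairs + 1) * 2 + 1) * sizeof(char *));
    l[*num_pairs * 2] = strdup(key);
    l[*num_pairs * 2 + 1] = strdup(value);
    l[(*num_pairs + 1) * 2] = NULL;
    *num_pairs += 1;
    *list = l;
}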
Whenever I deal with this, I have to look at the code to make sense of
this. And beyond that, there are some strange inconsistencies. (I think
this code is cursed. It always was, and maybe always will be.)
Although the manpage claimed that using multiple items for -add etc. is
deprecated, string list options didn't warn against it. So add the
warning, and add something in the changelog (even though nobody will
ever read this).
The manpage mentioned --vf-append, but this didn't even exist. So add
it, I guess. We encourage using -append for the other option types, so
for consistency, it should work on filter options. (And I already
tricked myself into believing it existed when I mentioned it in the
manpage.)
Make the "operations" table separate for all option types, and mention
the option type on every single of the top-level list options.
libavcodec's nvdec wrapper can return invalid frames that do not have
any data fields set. This is not allowed by the API, but why would they
follow their own API?
Add a workaround to specifically detect this situation. In practice,
this should fall back to software decoding if it happens too often in a
row. (But single errors are still tolerated, because I don't know why.)
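The detection part, roughly (a sketch; the consecutive-error counter,
the threshold, and the actual fallback wiring are not shown, and their
names would be made up anyway):

#include <stdbool.h>
#include <libavutil/frame.h>

// A frame returned as "successfully decoded" with no data planes set at
// all is invalid per the API. Count how many of these arrive in a row
// and fall back to software decoding past some threshold.
static bool frame_has_no_data(const AVFrame *frame)
{
    for (int n = 0; n < AV_NUM_DATA_POINTERS; n++) {
        if (frame->data[n])
            return false;   // at least one plane set => not the broken case
    }
    return true;
}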
Untested due to lack of hardware from the regrettable graphics company.
Better do this here than deal with the moronic project we unfortunately
depend on.
See: #7185
The workaround is generic; unknown whether it works correctly with
multi-input/output filters or filter graphs. It assumes that if all
inputs are EOF, and all outputs are EAGAIN, the bug happened.
This is pretty tricky, because anything could happen. Any time some form
of progress is made, the got_eagain state needs to be reset, because the
filter pad's state could have changed.
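The detection condition is roughly this (a sketch with invented types
and names; the real code is part of mpv's lavfi wrapper):

#include <stdbool.h>

struct pad { bool eof; bool got_eagain; };

// If every input pad is at EOF, but every output pad still returned
// EAGAIN, assume the bug happened. Any real progress must clear the
// got_eagain flags again, since the pads' state may have changed.
static bool looks_like_eof_bug(struct pad *in, int num_in,
                               struct pad *out, int num_out)
{
    for (int n = 0; n < num_in; n++) {
        if (!in[n].eof)
            return false;
    }
    for (int n = 0; n < num_out; n++) {
        if (!out[n].got_eagain)
            return false;
    }
    return true;
}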
These have all been replaced recently.
There was a leftover in window.swift. It couldn't have done anything
useful in the current state of the code, so drop these lines.
The generic change detection now handles this just as well.
The way this function is manually called at init is slightly gross.
Make that part slightly more explicit to hopefully avoid confusion.
* Instead of following VOCTRL_FULLSCREEN, check for option changes.
* Instead of signaling VO_EVENT_FULLSCREEN_STATE, update the cached
option structure and have it propagated to the origin.
Additionally, this gets rid of all direct usage of the VO options
structure.
Done in a similar style to the Wayland common file, where, when reading
the value, the "payload" from the cache is used.
EDL files can have multiple segments taken from the same source file. In
this case, the source file is supposed to be opened only once. This
stopped working, and it created a new demuxer instance for every single
segment entry. This made it slow and made it use much more memory than
needed.
This was because it tried to iterate over the array of source files, but
the array count (num_parts) was only set to a non-0 value later. Fix
this by maintaining the count correctly.
In addition, the actual code for checking whether a source can be reused
(in open_source()) regressed and stopped working correctly. d->stream
could be NULL. Use demuxer.filename instead; I'm not entirely sure
whether this is always correct, but fortunately we have a distributed
almost-AI driven test suite (called "users") which will probably find
and report such cases.
Probably broke with commit a09396ee60 or something close, but I didn't
check more closely.
Fixes: #7267
In this combination, the [current-]window-scale properties still
incorrectly applied scaling.
For some reason, vo_calc_window_geometry2() handled this option
(basically ignored the dpi_scale parameter passed to it), but since the
DPI compensation for window-scale is implemented in x11_common.c, we
need to check and honor this option here too. (What a mess.)
console.lua uses "terminal-default" logging, which is supposed to return
all messages logged to the terminal to the API. Internally, this is
translated to MP_LOG_BUFFER_MSGL_TERM, which is MSGL_MAX+1, because it's
not an actual log level (blame C for not having proper sum types or
something).
Unfortunately, this unintentionally raised the internal log level to
MSGL_MAX+1. It still functioned as intended, because log messages were
simply filtered at a "later" point. But it led to every message being
formatted even if not needed. More importantly, it made mp_msg_test()
pointless (code calls this to avoid logging in "expensive" cases and if
the messages would just get discarded). Also, this broke libplacebo
logging, because the code to map the log messages did not expect a level
higher than MSGL_MAX (mp_msg_level() returned MSGL_MAX+1 too).
Fix this by not letting the dummy level value be used as log level.
Messages at terminal log level will always make it to the inner log
message dispatcher function (i.e. mp_msg_va() will call
write_msg_to_buffers()), so log buffers which use the dummy log level
don't need to adjust the actual log level at all.
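Roughly the idea, as a sketch (the macro names mirror the ones above,
but the numeric value of MSGL_MAX and the surrounding structure are
stand-ins):

#define MSGL_MAX 9                              // stand-in value
#define MP_LOG_BUFFER_MSGL_TERM (MSGL_MAX + 1)  // dummy, not a real level

// Compute the effective log level from what the log buffers request. A
// buffer using the terminal pseudo-level must not raise it, since
// terminal-level messages always reach the dispatcher anyway.
static int effective_log_level(const int *buffer_levels, int num_buffers)
{
    int level = -1;
    for (int n = 0; n < num_buffers; n++) {
        if (buffer_levels[n] == MP_LOG_BUFFER_MSGL_TERM)
            continue;                           // ignore the pseudo-level
        if (buffer_levels[n] > level)
            level = buffer_levels[n];
    }
    return level;
}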
This is similar to the "edition" change.
I considered making this go through deprecation, but didn't have a good
idea how to do that. Maybe it's fine, because this is pretty obscure.
But it might break some API users/scripts (it certainly broke
stats.lua), and all I have to say is sorry for that.
"window-scale" is 1.0 by default; however, x11 implicitly set that to
2.0 on hidpi screens. This made the default 2.0, which was inconsistent
with the option. The "window-scale" property jumped from 1.0 to 2.0 when
a window was created.
Avoid this by factoring the DPI into the window-scale. This makes the
UNFS_WINDOW_SIZE return a virtual size; since this value is used for the
window-scale property only, this is fine and has no further
consequences. (Originally, this was possibly meant to be used for other
purposes, but I'm perfectly fine with redoing this again should that
ever happen.)
This changes user-visible behavior; the effect is as if setting
window-scale suddenly multiplied its argument by 2. Hopefully no user
will get angry.
This is for the previous commit, and should affect behavior with the
special M_PROPERTY_GET_CONSTRICTED_TYPE mechanism only. The effect is
that cycling the "edition" property, if the option is set to "auto",
will change to the second edition instead of the first.
Normally, option values must always be within their range, so this
should not affect anything else. M_PROPERTY_GET_CONSTRICTED_TYPE is
sort-of fine with this kind of behavior.
If this affects any other M_PROPERTY_GET_CONSTRICTED_TYPE users
negatively, I will revert the change.
See manpage/changelog changes.
The purpose of this change is to remove another case of inconsistent
property behavior. At first I wanted to make this go through deprecation
before making a technically incompatible change, but then I decided this
feature is too obscure for anyone to care.
The VO underrun detection (just a weak heuristic) added in commit f26dfb
flagged the underrun state every time it was checked, and since the
check happened in every playloop iteration, this caused the playloop to
wake itself up on every iteration. It burned an entire core while in
this state.
Fix this by flagging this condition only once (as it should be), and
requiring that a frame is displayed before it can trigger again. This
makes it work similarly to the audio underrun check.
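The shape of the fix, as a rough sketch (all names invented; the real
code lives in the VO/playloop interaction):

#include <stdbool.h>

struct vo_state {
    bool underrun;            // heuristic: the VO ran out of frames
    bool underrun_reported;   // the core was already woken up for this one
};

// Report the underrun only once; re-arm the check only after a frame
// actually got displayed. Without the latch, the per-iteration check
// kept waking up the playloop over and over.
static void check_underrun(struct vo_state *st, void (*wakeup_core)(void))
{
    if (st->underrun && !st->underrun_reported) {
        st->underrun_reported = true;
        wakeup_core();
    }
}

static void frame_displayed(struct vo_state *st)
{
    st->underrun_reported = false;   // a frame made it out; re-arm
}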
The bug report referenced below says --demuxer-thread=no avoided this.
This is because the demuxer layer doesn't do proper underrun reporting
if the reader thread is disabled.
Fixes: #7259
With the previous commit, there's no need for 1.5 anymore. And in fact,
it's just too dangerous to rely on 1.5 because of all the EGL craziness.
For example, you might get a 1.5 EGL system library, but a driver might
still give you 1.4 at runtime. If you assume that you can call 1.5
functions, you will probably get random crashes in this case. What a
cursed API. (The same problem exists with EGL 1.3, but fortunately
nothing seems to use that anymore. We can just ignore that problem.)
This tries to deal with the crazy EGL situation. The summary is:
- using eglGetDisplay() with multiple windowing platforms doesn't really
work, but Mesa had an awful hack for it
- this hack can be disabled at build time, and some distros sometimes
accidentally or intentionally do so
- Mesa will probably eventually disable it by default
- we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
- the very regrettable graphics company (also known as Nvidia) ships
drivers (for old hardware I think) that are EGL 1.4 only
- that means even though we "require" EGL 1.5 and link against it, the
runtime EGL may be 1.4
- trying to run mpv there crashes in the dynamic linker
- so we have to go through some more awful compatibility hacks
This commit tries to do it "properly", but using EGL 1.4 as the base.
The platform selection mechanism is a messy extension there, which got
elevated to core API in 1.5 (but OF COURSE in incompatible ways).
I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION)
is really needed, but if you ask me, it feels slightly saner not to rely
on an EGL 1.4 kludge forever. But maybe this is just an instance of
self-harm, since they will most likely never drop this API or stop
providing it.
Also, unlike before, we actually check the extension string for the
individual platform extensions, because who knows, some EGL
implementations might curse us if we pass unknown platform parameters.
(But actually, the more I think about this, the more bullshit it is.)
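The selection logic amounts to roughly this (a simplified sketch; the
real code also has to deal with the EGL 1.5 core entrypoint and more
than two platforms):

#include <string.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

// platform_ext is e.g. EGL_PLATFORM_X11_EXT, platform_ext_name the
// matching extension name, e.g. "EGL_EXT_platform_x11".
static EGLDisplay get_display_for_platform(EGLenum platform_ext,
                                           const char *platform_ext_name,
                                           void *native_display)
{
    const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
    if (exts && strstr(exts, "EGL_EXT_platform_base") &&
        strstr(exts, platform_ext_name))
    {
        PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (get_platform_display) {
            EGLDisplay d =
                get_platform_display(platform_ext, native_display, NULL);
            if (d != EGL_NO_DISPLAY)
                return d;
        }
    }
    // Old or restricted EGL: hope eglGetDisplay() picks the right platform.
    return eglGetDisplay((EGLNativeDisplayType)native_display);
}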
X11 and Wayland were the only ones trying to call eglGetPlatformDisplay,
so they're the only ones which are adjusted in this commit.
Unfortunately, correct function of this commit is unconfirmed. It's
possible that it crashes with the old drivers mentioned above.
Why didn't they solve it like this:
struct native_display {
    int platform_type;
    void *native_display;
};
Could have kept eglGetDisplay() without all the obnoxious extension BS.
This assert() sometimes triggered (and still triggers) with lavc API
bugs. It tries to check that at least 1 plane is set to a non-NULL
value. Obviously, a valid frame returned by successful decoding should
never trigger it.
The problem is that some hwdecs use integer surface IDs cast to a
pointer. Recently, it happened that newer Intel drivers started using
surface ID 0 under certain circumstances (for unknown reasons), which
triggers this assert.
Just get rid of it.
For the sake of #7185, add an assert() specifically for nvdec. That
failure needs to be further analyzed, is probably an FFmpeg bug, and
without this assert() would just crash somewhere further down the video
chain.
Fixes: #7261
This code checked AVFrame.buf[0] instead of the decode return code to
see whether a frame was decoded. This is sort of suspicious; while I
think that the lavc API actually guarantees it, it's not intuitive
anyway. In addition, the code was unnecessarily roundabout.
Replace it with a proper error code check. Remove the other error return
(that was, or should have been, redundant before). The no-frame path is
now cleanly separated. Add an assert on the frame-returned path; if this
fails, lavc violated its own API.
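The intended shape, roughly (a sketch without mpv's wrappers; names are
simplified):

#include <assert.h>
#include <libavcodec/avcodec.h>

// Decide "got a frame or not" from the return code, not from
// AVFrame.buf[0], and assert the API guarantee on the frame-returned path.
static int receive_frame(AVCodecContext *avctx, AVFrame *frame, int *got)
{
    *got = 0;
    int ret = avcodec_receive_frame(avctx, frame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        return 0;               // no frame right now; not an error
    if (ret < 0)
        return ret;             // actual decoding error
    assert(frame->buf[0]);      // if this fails, lavc violated its own API
    *got = 1;
    return 0;
}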