The way this modifies and backs up/restores user option values is a
bit of a problem for runtime option changing.
Clean this up a little. Now cycling the visibility updates the user
option value, but always to "valid" values (unlike the way hidetimeout
used to be used). If the user option value is changed externally (enabled by a
later commit), it'll be cleanly overwritten.
Although they were not undocumented, they were hidden away in the
respective manpage sections. It's a good idea to add them to the main
keyboard bindings overview too. stats.lua also did this.
I decided to factor this into the user's scale option (instead of
somehow using it as default if the user has not specified it), because
it makes the option handling simpler, and won't break things like
per-screen DPI if the user only wants to scale the console font by a
factor.
(X11 does not support different per-screen DPI (or only via hacks), so
this is pretty simple. If other backends are going to implement this,
then they should send VO_EVENT_WIN_STATE if the DPI for the mpv window
changes by moving it to another screen or such.)
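A minimal sketch of that suggested backend behavior, assuming mpv's
vo_event() signaling; query_screen_dpi() and the dpi field are made up
for illustration:

    // Hypothetical backend hook: re-check the DPI when the window moves.
    struct vo_backend { struct vo *vo; double dpi; };

    static void on_window_moved(struct vo_backend *b, int new_screen)
    {
        double new_dpi = query_screen_dpi(new_screen); // hypothetical
        if (new_dpi != b->dpi) {
            b->dpi = new_dpi;
            vo_event(b->vo, VO_EVENT_WIN_STATE); // tell the core
        }
    }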
Until now, .edl files accepted only "simple" filenames, i.e. no relative
or absolute paths, no URLs. Now that the origin bullshit is a bit
cleaned up and enforced in the EDL code, there's absolutely no reason to
keep this restriction.
The new code behaves somewhat similar to playlists. (Although playlists
are special because they're not truly recursively opened.)
mpv has a very weak and very annoying policy that determines whether a
playlist should be used or not. For example, if you play a remote
playlist, you usually don't want it to be able to read local filesystem
entries. (Although for a media player the impact is small I guess.)
It's weak and annoying in that it does not prevent certain cases that
could be considered bad, such as allowing playlists on the local
filesystem to reference remote URLs. It probably barely makes sense, but
we just want to exclude some other "definitely not a good idea" things,
all while playlists generally just work, so whatever.
The policy is (see the sketch after this list):
- from the command line anything is played
- local playlists can reference anything except "unsafe" streams
("unsafe" means special stream inputs like libavfilter graphs)
- remote playlists can reference only remote URLs
- things like "memory://" and archives are "transparent" to this
This commit does... something. It replaces the weird stream flags with a
slightly clearer "origin" value, which is now consistently passed down
and used everywhere. It fixes some deviations from the described policy.
I wanted to force archives to reference only content within them, but
this would probably have been more complicated (or required different
abstractions), and I'm too lazy to figure it out, so archives are now
"transparent" (playlists within archives behave the same outside).
There may be a lot of bugs in this.
This is unfortunately a very noisy commit because:
- every stream open call now needs to pass the origin
- so does every demuxer open call (=> params param. gets mandatory)
- most streams were changed to provide the "origin" value
- the origin value needed to be passed along in a lot of places
- I was too lazy to split the commit
Fixes: #7274
This has the advantage that playlists within the archive will work as
expected, because demux_playlist will correctly join the archive base
URL and entry name. Before this change, it could skip before the "|",
resulting in a broken URL.
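An illustrative sketch of the join (the exact URL shapes and the helper
are assumptions; only the "|" separator is from the text above):

    #include <string.h>

    // e.g. base "archive://file.zip|list.m3u" + entry "video1.mkv"
    // must yield "archive://file.zip|video1.mkv"; never cut the "|".
    static char *join_archive_url(void *ctx, const char *base,
                                  const char *entry)
    {
        const char *sep = strrchr(base, '|');
        if (!sep)
            return NULL; // not an archive URL
        int prefix = (int)(sep - base + 1); // keep up to and incl. "|"
        return talloc_asprintf(ctx, "%.*s%s", prefix, base, entry);
    }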
For some inexplicable reason, the OSC runs the expand-text command a
_lot_. This command is logged at the log file's default log level, so the
log file can quickly fill up with these messages. It directly violates
the mpv logging policy: per-frame (or similarly common) log messages
should not be enabled by default for the log file.
stats.lua uses the show-text command for some reason (instead of
creating its own OSD layer).
Explicitly reduce the log level for expand-text and some other commands.
Also reduce the log level for commands triggered by mouse movement.
The previous commit also contributed somewhat to reducing the log spam.
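A minimal sketch of the level selection, assuming a per-command "noisy"
flag (all names and level values here are illustrative):

    #include <stdbool.h>

    enum { MSGL_INFO = 3, MSGL_DEBUG = 6 }; // illustrative values

    struct cmd_def { const char *name; bool is_noisy; };

    // expand-text and friends, and anything triggered by mouse movement,
    // get demoted to debug level so they don't flood the log file.
    static int command_msg_level(const struct cmd_def *def, bool mouse_move)
    {
        return (def->is_noisy || mouse_move) ? MSGL_DEBUG : MSGL_INFO;
    }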
Fixes: #4771
Traditionally, the OSC used mpv's "tick" event, which was approximately
sent once per video frame. It didn't try to track any other state, and
just updated everything.
This is sort of a problem in many corner cases and non-corner cases. For
example, it would eat CPU in the paused state (probably to some degree
also the mpv core's fault), or would waste power or even throw errors
("event queue overflows") on high FPS video.
Change this to not use the tick event. Instead, react to a number of
property change events. Rate-limit actual redrawing with a timer; the
next update cannot happen sooner than the hardcoded 30ms OSC frame
duration. This also has the effect that multiple successive updates are
(mostly) coalesced.
This means the OSC won't eat your CPU when the player is fucking paused.
(It'll still update if e.g. the cache is growing, though.) There is some
potential for bugs whenever it uses properties that are not explicitly
observed. (In theory we could easily change this to a reactive concept
to avoid such things, but whatever.)
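The rate-limiting scheme, as a rough C sketch (the OSC itself is Lua;
this only shows the coalescing logic around the 30ms frame duration):

    #include <stdbool.h>
    #include <stdint.h>

    #define OSC_FRAME_MS 30 // hardcoded OSC frame duration

    struct osc_state {
        bool dirty;        // some observed property changed
        int64_t last_draw; // time of the last actual redraw, in ms
    };

    // Called from every property-change event: only mark state dirty.
    static void request_update(struct osc_state *osc)
    {
        osc->dirty = true;
    }

    // Called on timer wakeups: redraw at most once per frame duration,
    // so bursts of successive updates are coalesced into one redraw.
    static void maybe_redraw(struct osc_state *osc, int64_t now_ms)
    {
        if (!osc->dirty || now_ms - osc->last_draw < OSC_FRAME_MS)
            return;
        osc->last_draw = now_ms;
        osc->dirty = false;
        // render(osc); // actual drawing would happen here
    }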
mpv_event_property (for property observation) actually never sets an
error status. You cannot distinguish between unavailable properties and
properties which returned an error. Not sure if it ever did.
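For API users, the observable consequence looks like this (standard
libmpv client API; the property name is just an example):

    #include <mpv/client.h>

    void observe_chapter(mpv_handle *h)
    {
        mpv_observe_property(h, 0, "chapter", MPV_FORMAT_INT64);
        for (;;) {
            mpv_event *ev = mpv_wait_event(h, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id != MPV_EVENT_PROPERTY_CHANGE)
                continue;
            mpv_event_property *prop = ev->data;
            if (prop->format == MPV_FORMAT_NONE) {
                // Unavailable property or error: indistinguishable here.
            } else if (prop->format == MPV_FORMAT_INT64) {
                int64_t v = *(int64_t *)prop->data;
                (void)v; // use the new value
            }
        }
    }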
I intend to rewrite this code approximately every 2 months.
Last time, I did this in commit d66eb93e5d (and 065c307e8e and
b2006eeb74). It was intended to remove the roundabout synchronous
thread "ping pong" when observing properties. At first, the original
async. code was replaced with some nice mostly synchronous code. But
then an async. code path had to be added for vo_libmpv, and finally the
sync. code was dropped because it broke in other obscure cases (like the
Objective-C Cocoa backend).
Try again. This time, update properties entirely on the main thread.
Updates get batched out on every playloop iteration. (At first I wanted
to make it happen every time the player goes to sleep, but that might starve
API clients if the playloop gets saturated.) One nice thing is that
clients only get woken up once all changed events have been sent, which
might reduce overhead.
While this sounds simple, it's not. The main problem is that reading
properties must not block the client API, i.e. no client API locks can
be held while reading the property. Maybe eventually we can avoid this
requirement, but currently it's just a fact. This means we have to
iterate over all clients and then over all properties (of each client),
all while releasing all locks when updating a property. Solve this by
rechecking on each iteration whether the list changed, and if so,
aborting the iteration and redoing it "next time".
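A sketch of that iteration under the stated constraint (all names are
made up; 'generation' is a counter assumed to be bumped on any list
change):

    #include <pthread.h>
    #include <stdint.h>

    struct prop { const char *name; /* cached value, change flag... */ };
    struct client { int num_props; struct prop *props; };
    struct client_list {
        pthread_mutex_t lock;
        uint64_t generation; // bumped on any client/property list change
        int num_clients;
        struct client **clients;
    };

    static void update_property(struct prop *p) { /* read, compare, notify */ }

    // Run once per playloop iteration.
    static void update_all(struct client_list *cl)
    {
        pthread_mutex_lock(&cl->lock);
        uint64_t gen = cl->generation;
        for (int c = 0; c < cl->num_clients; c++) {
            for (int p = 0; p < cl->clients[c]->num_props; p++) {
                struct prop *prop = &cl->clients[c]->props[p];
                pthread_mutex_unlock(&cl->lock);
                update_property(prop); // no client API locks held here
                pthread_mutex_lock(&cl->lock);
                if (cl->generation != gen)
                    goto done; // lists changed: abort, redo "next time"
            }
        }
    done:
        pthread_mutex_unlock(&cl->lock);
    }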
High risk change, expect bugs such as crashes and missing property
updates.
keyvalue_list_find_key() was called on a "partially" constructed list,
because the terminating NULL was added only later. Didn't I say this
code is cursed?
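A reconstruction of the bug pattern (the flat key/value layout with a
NULL terminator is assumed from the function name):

    // talloc_array() does not zero the memory, so the list has no
    // terminator until it's explicitly written at the end.
    static char **build_list(char **keys, char **values, int num)
    {
        char **list = talloc_array(NULL, char *, num * 2 + 1);
        for (int n = 0; n < num; n++) {
            // BUG: calling keyvalue_list_find_key(list, ...) here walks
            // a list whose terminating NULL hasn't been written yet.
            list[n * 2 + 0] = talloc_strdup(list, keys[n]);
            list[n * 2 + 1] = talloc_strdup(list, values[n]);
        }
        list[num * 2] = NULL; // terminator added only at the very end
        return list;
    }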
Fixes: #7273
This is for console.lua (see next commit). The idea is that console.lua
can adjust its offset to the bottom of the window by the height of the
OSC.
If the OSC is not set to permanently visible, export no margins, because
it would look weird to move the console depending on the mouse movement.
Very primitive and dumb, but fulfils its purpose for the next commits.
I chose this specific implementation because it has the lowest footprint
in command.c, without resorting to crazy hacks such as sending messages
between scripts (which would be hard to coordinate especially on
startup).
The size overflow check was inverted: instead of allowing reading only
the first dst_size bytes of the property, it allowed copying past the
property buffer (as returned by xlib). xlib doesn't return the size of
the buffer in bytes, so it has to be computed and checked manually.
Wouldn't it be great if C allowed me to write the overflow check in a
readable way, so it doesn't trick me into writing dumb security bugs?
Relying on X security is even dumber than creating an X security bug,
though, so this was not a real problem. But I found that one specific
call tried to read more than what the property provided, so reduce that.
Also, len*ib obviously can't overflow, so there's an additional layer of
dumb to this whole thing.
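The check, written out readably (variable names assumed; note that xlib
stores items of format 32 as "long", so the in-memory item size is not
simply format/8 on LP64 systems):

    #include <string.h>

    static void copy_prop(void *dst, size_t dst_size, const void *prop,
                          int format, unsigned long nitems)
    {
        size_t ib = format == 32 ? sizeof(long) : (size_t)format / 8;
        size_t prop_bytes = nitems * ib; // per the above, can't overflow
        size_t copy = prop_bytes < dst_size ? prop_bytes : dst_size;
        memcpy(dst, prop, copy); // never read past the property buffer
    }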
While we're at dumb things, why the hell does xlib use "long" for 32-bit
types? It's a god damn pain.
This was completely broken: it compared the first item of the filter
list only. Apparently I forgot that this is a list. This probably broke
aspects of runtime filter changing, presumably since commit b16cea750f.
Fix this, and remove some redundant code from obj_settings_equals(),
which is not the same as m_obj_settings_equal(); rename it to make
confusing them harder. (obj_setting_match() has these very weird label
semantics that should probably just be killed. Or not.)
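A sketch of the fixed comparison (minimal stand-in types; the NULL-name
terminator convention is assumed):

    #include <stdbool.h>

    struct m_obj_settings { char *name; /* label, enabled, attribs... */ };
    bool m_obj_settings_equal(struct m_obj_settings *a,
                              struct m_obj_settings *b);

    // Compare every entry of both filter lists, not just the first one.
    static bool obj_settings_list_equal(struct m_obj_settings *a,
                                        struct m_obj_settings *b)
    {
        for (int n = 0; ; n++) {
            bool a_end = !a || !a[n].name;
            bool b_end = !b || !b[n].name;
            if (a_end || b_end)
                return a_end == b_end; // equal only if both end here
            if (!m_obj_settings_equal(&a[n], &b[n]))
                return false;
        }
    }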
I don't even know anymore whether this was intended or not. Certain use
cases for the "-o" options might require this. These options are for
passing general FFmpeg options. These are translated to av_opt_set()
calls, which may or may not accumulate the option values on multiple
calls with the same option name (how should I know?).
Anyway, it seems crazy to allow non-unique keys, so make them unique.
The ad-hoc nature of the option code makes this wonderfully complicated
(when I wrote that this code is cursed, I meant it). In combination with
lazy testing, it probably means there are lots of bugs here.
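The intended rule, as a sketch (a flat NULL-terminated key/value list is
assumed again; memory management of the old value is elided):

    #include <stdbool.h>
    #include <string.h>

    // Overwrite the value if the key is already present; returns false
    // if the caller still needs to append a new key/value pair.
    static bool kv_replace_if_present(char **list, const char *key,
                                      char *value)
    {
        for (int n = 0; list && list[n]; n += 2) {
            if (strcmp(list[n], key) == 0) {
                list[n + 1] = value; // replace instead of duplicating
                return true;
            }
        }
        return false;
    }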
Whenever I deal with this, I have to look at the code to make sense of
this. And beyond that, there are some strange inconsistencies. (I think
this code is cursed. It always was, and maybe always will be.)
Although the manpage claimed that using multiple items for -add etc. is
deprecated, string list options didn't warn against it. So add the
warning, and add something in the changelog (even though nobody will
ever read this).
The manpage mentioned --vf-append, but this didn't even exist. So add
it, I guess. We encourage using -append for the other option types, so
for consistency, it should work on filter options. (And I already
tricked myself into believing it existed when I mentioned it in the
manpage.)
Make the "operations" table separate for all option types, and mention
the option type on every single one of the top-level list options.
libavcodec's nvdec wrapper can return invalid frames, that do not have
any data fields set. This is not allowed by the API, but why would they
follow their own API?
Add a workaround to specifically detect this situation. In practice,
this should fall back to software decoding if it happens too often in a
row. (But single errors are still tolerated, because I don't know why.)
Untested due to lack of hardware from the regrettable graphics company.
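The detection itself is simple; a sketch of what "no data fields set"
amounts to (the retry counting and software fallback are left to the
existing error handling):

    #include <stdbool.h>
    #include <libavutil/frame.h>

    static bool frame_is_invalid(const AVFrame *f)
    {
        for (int n = 0; n < AV_NUM_DATA_POINTERS; n++) {
            if (f->data[n])
                return false; // some data field set: looks valid
        }
        return true; // no data fields at all: not allowed by the API
    }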
Better do this here than deal with the moronic project we unfortunately
depend on.
See: #7185
Better do this here than deal with the moronic project we unfortunately
depend on.
The workaround is generic; it's unknown whether it works correctly with
multi-input/output filters or filter graphs. It assumes that if all
inputs are EOF, and all outputs are EAGAIN, the bug happened.
This is pretty tricky, because anything could happen. Any time some form
of progress is made, the got_eagain state needs to be reset, because the
filter pad's state could have changed.
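A sketch of the heuristic with hypothetical pad bookkeeping (all field
names are made up):

    #include <stdbool.h>

    struct pad { bool eof_sent; bool got_eagain; };
    struct graph { int num_in, num_out; struct pad *in, *out; };

    // All inputs at EOF and all outputs returning EAGAIN: assume the bug
    // hit and force EOF on the outputs. Any real progress must clear the
    // got_eagain flags, since the pad state may have changed.
    static bool graph_is_stuck(struct graph *g)
    {
        for (int n = 0; n < g->num_in; n++) {
            if (!g->in[n].eof_sent)
                return false;
        }
        for (int n = 0; n < g->num_out; n++) {
            if (!g->out[n].got_eagain)
                return false;
        }
        return true;
    }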
These have all been replaced recently.
There was a leftover in window.swift. It couldn't have done anything
useful in the current state of the code, so drop these lines.
The generic change detection now handles this just as well.
The way this function is manually called at init is slightly gross.
Make that part slightly more explicit to hopefully avoid confusion.
* Instead of following VOCTRL_FULLSCREEN, check for option changes.
* Instead of signaling VO_EVENT_FULLSCREEN_STATE, update the cached
option structure and have it propagated to the origin.
Additionally, this gets rid of all direct usage of the VO options
structure. This is done in a style similar to the Wayland common file,
where, when reading the value, the "payload" from the cache is used (see
the sketch below).
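Both directions, sketched with mpv's m_config_cache calls as described
above (the surrounding struct and helper names are made up):

    #include <stdbool.h>

    struct backend { struct m_config_cache *opts_cache; };

    // Direction 1: an option changed (replaces VOCTRL_FULLSCREEN).
    static void poll_options(struct backend *be)
    {
        if (m_config_cache_update(be->opts_cache)) {
            struct mp_vo_opts *opts = be->opts_cache->opts;
            apply_fullscreen(be, opts->fullscreen); // hypothetical
        }
    }

    // Direction 2: the window system changed the state (replaces
    // signaling VO_EVENT_FULLSCREEN_STATE).
    static void on_ws_fullscreen(struct backend *be, bool state)
    {
        struct mp_vo_opts *opts = be->opts_cache->opts;
        opts->fullscreen = state;
        m_config_cache_write_opt(be->opts_cache, &opts->fullscreen);
    }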
EDL files can have multiple segments taken from the same source file. In
this case, the source file is supposed to be opened only once. This
stopped working, and it created a new demuxer instance for every single
segment entry. This made it slow and made it use much more memory than
needed.
This was because it tried to iterate over the array of source files, but
the array count (num_parts) was only set to a non-0 value later. Fix
this by maintaining the count correctly.
In addition, the actual code for checking whether a source can be reused
(in open_source()) regressed and stopped working correctly. d->stream
could be NULL. Use demuxer.filename instead; I'm not entirely sure
whether this is always correct, but fortunately we have a distributed
almost-AI driven test suite (called "users") which will probably find
and report such cases.
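A sketch of the corrected reuse check (minimal stand-in type; the real
lookup may differ):

    #include <string.h>

    struct demuxer { char *filename; /* ... */ };

    static struct demuxer *find_source(struct demuxer **open, int num_open,
                                       const char *filename)
    {
        for (int n = 0; n < num_open; n++) {
            // Compare by demuxer.filename instead of d->stream, which
            // can be NULL here.
            if (strcmp(open[n]->filename, filename) == 0)
                return open[n]; // reuse the already-open source
        }
        return NULL; // caller opens a new demuxer and records it
    }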
This probably broke with commit a09396ee60 or one close to it, but I
didn't check more closely.
Fixes: #7267
In this combination, the [current-]window-scale properties still
incorrectly applied scaling.
For some reason, vo_calc_window_geometry2() handled this option
(basically ignored the dpi_scale parameter passed to it), but since the
DPI compensation for window-scale is implemented in x11_common.c, we
need to check and honor this option here too. (What a mess.)
console.lua uses "terminal-default" logging, which is supposed to return
all messages logged to the terminal to the API. Internally, this is
translated to MP_LOG_BUFFER_MSGL_TERM, which is MSGL_MAX+1, because it's
not an actual log level (blame C for not having proper sum types or
something).
Unfortunately, this unintentionally raised the internal log level to
MSGL_MAX+1. It still functioned as intended, because log messages were
simply filtered at a "later" point. But it led to every message being
formatted even if not needed. More importantly, it made mp_msg_test()
pointless (code calls this to avoid logging in "expensive" cases and if
the messages would just get discarded). Also, this broke libplacebo
logging, because the code to map the log messages did not expect a level
higher than MSGL_MAX (mp_msg_level() returned MSGL_MAX+1 too).
Fix this by not letting the dummy level value be used as log level.
Messages at terminal log level will always make it to the inner log
message dispatcher function (i.e. mp_msg_va() will call
write_msg_to_buffers()), so log buffers which use the dummy log level
don't need to adjust the actual log level at all.
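Roughly, the fix amounts to this when recomputing the effective level
(names and level values are illustrative):

    #define MSGL_MAX 7                              // illustrative value
    #define MP_LOG_BUFFER_MSGL_TERM (MSGL_MAX + 1)  // dummy, not a level

    static int effective_level(int terminal_level, const int *buf_levels,
                               int num)
    {
        int max_level = terminal_level; // terminal output already counted
        for (int n = 0; n < num; n++) {
            if (buf_levels[n] == MP_LOG_BUFFER_MSGL_TERM)
                continue; // never let MSGL_MAX+1 raise the level
            if (buf_levels[n] > max_level)
                max_level = buf_levels[n];
        }
        return max_level;
    }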
This is similar to the "edition" change.
I considered making this go through deprecation, but didn't have a good
idea how to do that. Maybe it's fine, because this is pretty obscure.
But it might break some API users/scripts (it certainly broke
stats.lua), and all I have to say is sorry for that.
"window-scale" is 1.0 by default; however, x11 implicitly set that to
2.0 on hidpi screens. This made the default 2.0, which was inconsistent
with the option. The "window-scale" property jumped from 1.0 to 2.0 when
a window was created.
Avoid this by factoring the DPI into the window-scale. This makes the
UNFS_WINDOW_SIZE return a virtual size; since this value is used for the
window-scale property only, this is fine and has no further
consequences. (Originally, this was possibly meant to be used for other
purposes, but I'm perfectly fine with redoing this again should that
ever happen.)
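A sketch of the factoring (names assumed): the DPI is multiplied into
the scale when sizing the window, and the size reported back for the
window-scale property is the virtual, DPI-unscaled one:

    #include <stdbool.h>

    static void compute_sizes(int video_w, double window_scale, bool hidpi,
                              int *out_win_w, int *out_unfs_w)
    {
        double dpi_scale = hidpi ? 2.0 : 1.0; // X11: effectively 1x or 2x
        *out_win_w = (int)(video_w * window_scale * dpi_scale);
        // UNFS_WINDOW_SIZE-style virtual size, consistent with the
        // window-scale property.
        *out_unfs_w = (int)(*out_win_w / dpi_scale);
    }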
This changes user-visible behavior: on hidpi screens, it's as if setting
window-scale suddenly multiplied its argument by 2. Hopefully no user
will get angry.