Handling the window with this function makes no sense, since windows
and kernels are not the same thing and don't share the same option list.
The only reason it's done is to make sure the char* points at the static
string rather than the dynamically allocated one, which we can do
manually in this function. Rewrite a bit for clarity/quality.
This is supposed to turn input.conf comments into inline documentation.
Whether this will be useful depends on whether there'll be a script
using this field.
This fundamentally changes a small aspect of input.conf parsing: it now
attempts to strip comments/whitespace from the command string, which
will later be used to generate the command when a key binding is
executed. This should not have any negative effects, but there could be
unknown bugs. (For some reason, every command is parsed when input.conf
is parsed, but only the command string is stored. I guess that saves a
minor amount of memory.)
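Roughly the idea, as a sketch (made-up helper, not the actual parser;
in particular, real parsing has to respect quoting, so a '#' inside a
quoted argument does not start a comment):

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    // Split "cycle pause   # toggle pause" into a trimmed command
    // string and the trailing comment (usable as documentation).
    static void split_line(char *line, char **cmd, char **comment)
    {
        char *hash = strchr(line, '#'); // ignores quoting on purpose
        *comment = "";
        if (hash) {
            *hash = '\0';
            *comment = hash + 1;
            while (**comment == ' ')
                (*comment)++;
        }
        size_t len = strlen(line);
        while (len && isspace((unsigned char)line[len - 1]))
            line[--len] = '\0'; // strip trailing whitespace
        *cmd = line;
    }

    int main(void)
    {
        char line[] = "cycle pause   # toggle pause";
        char *cmd, *comment;
        split_line(line, &cmd, &comment);
        printf("cmd='%s' comment='%s'\n", cmd, comment);
    }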
Expose read-only information about all bindings. I'm somewhat hoping
someone can make a nice GUI-like overlay thing on top of it, which
provides information about mapped keys.
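For reference, this is how a libmpv client could read it (a sketch; the
exact field names are whatever input.rst ends up documenting):

    #include <stdio.h>
    #include <mpv/client.h>

    static void dump_bindings(mpv_handle *ctx)
    {
        mpv_node node;
        if (mpv_get_property(ctx, "input-bindings", MPV_FORMAT_NODE,
                             &node) < 0)
            return;
        if (node.format == MPV_FORMAT_NODE_ARRAY) {
            for (int n = 0; n < node.u.list->num; n++) {
                mpv_node *b = &node.u.list->values[n];
                if (b->format != MPV_FORMAT_NODE_MAP)
                    continue;
                // Print all string-valued fields ("key", "cmd", ...).
                for (int i = 0; i < b->u.list->num; i++) {
                    if (b->u.list->values[i].format == MPV_FORMAT_STRING)
                        printf("%s=%s ", b->u.list->keys[i],
                               b->u.list->values[i].u.string);
                }
                printf("\n");
            }
        }
        mpv_free_node_contents(&node);
    }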
No reason to have this as a bstr; it just makes everything more complex.
Also clear mp_cmd.sender when it's copied. Otherwise it would be a
dangling pointer. Apparently it's never set to non-NULL in this
situation, but this is cleaner anyway.
Should be without consequences. I think this is less trouble, because
code frequently wants to add/remove bits for modifiers and key state
from key codes, and with this change you can't accidentally break it by
testing or removing bits from the old -1 value.
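A worked example of the failure mode that goes away (illustrative mask
value, not mpv's actual modifier bits):

    #include <stdio.h>

    #define MOD_SHIFT (1 << 22) // made-up modifier bit

    int main(void)
    {
        int old_unmapped = -1; // old sentinel: all bits set
        int new_unmapped = 0;  // new sentinel: no bits set

        // Testing a modifier bit on the old sentinel is accidentally true:
        printf("%d\n", (old_unmapped & MOD_SHIFT) != 0);          // 1 (wrong)
        // Removing a bit silently turns -1 into a garbage key code:
        printf("%#x\n", (unsigned)(old_unmapped & ~MOD_SHIFT));   // not -1

        // With 0, the same operations are harmless no-ops:
        printf("%d\n", (new_unmapped & MOD_SHIFT) != 0);          // 0
        printf("%#x\n", (unsigned)(new_unmapped & ~MOD_SHIFT));   // still 0
    }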
It did that because there was no other way. It used a lock-free ring
buffer, which does not support this. Use a "manual" ring buffer with
explicit locks instead, and drop messages from the start.
(We could have continued to use mp_ring, but it was already too late,
and although mp_ring is fine, using it for types other than bytes looked
awkward, and writing a ring buffer yet again seemed nicer. At least it's
not C++, where mp_ring would have been a template, and everything would
have looked like shit soup no matter what.)
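For illustration, the scheme is about this (a sketch with made-up
names, not the actual msg.c code):

    #include <pthread.h>
    #include <stdlib.h>

    #define CAP 100

    struct log_ring {
        pthread_mutex_t lock;
        char *entries[CAP];
        int start, num; // index of oldest entry, current count
    };

    // Append a message; if the buffer is full, drop the oldest entry.
    // A lock-free SPSC ring can't do this, because the writer must not
    // touch the reader's side.
    static void ring_push(struct log_ring *r, char *msg)
    {
        pthread_mutex_lock(&r->lock);
        if (r->num == CAP) {
            free(r->entries[r->start]); // drop from the start
            r->start = (r->start + 1) % CAP;
            r->num--;
        }
        r->entries[(r->start + r->num) % CAP] = msg;
        r->num++;
        pthread_mutex_unlock(&r->lock);
    }

    static char *ring_pop(struct log_ring *r) // caller frees
    {
        pthread_mutex_lock(&r->lock);
        char *msg = NULL;
        if (r->num) {
            msg = r->entries[r->start];
            r->start = (r->start + 1) % CAP;
            r->num--;
        }
        pthread_mutex_unlock(&r->lock);
        return msg;
    }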
In the referenced commit, I forgot about this part, and a client which
tried to use this was actually not woken up when needed.
(Also why the hell does the subject line of that commit say "removed"?)
Fixes: 8c2d73f112
Particularly for "any_unicode" mappings, so they don't have to
special-case keys like '#' and ' ', which are normally mapped to
symbolic names for input.conf reasons. (Though admittedly, this is a
pretty minor thing, since API users could map these manually.)
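For context, this is what input.conf requires for these keys (both
symbolic names are in mpv's key name list):

    # '#' would start a comment, and ' ' would end the key field,
    # so input.conf needs symbolic names for them:
    SHARP show-text "sharp"
    SPACE cycle pause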
The key is never nil if it's invoked through the normal input path. The
key name could be "" if mp_cmd.key_name==NULL. This should not happen,
but there's no strong guarantee in input.c that it cannot happen, so
whatever.
The intended target for this is the mpv.repl script, which manually
added every single ASCII key as a separate key binding. This provides a
simpler mechanism that will catch any kind of text input.
Due to its special nature, explicitly do not give a guarantee for
compatibility; thus the warning in input.rst.
I often watch sporting events. On many occasions I get files with the
same filenames from one event to the next. For example, for F1 I might
have the
following directory structure:
F1/
FP1.mkv
FP2.mkv
FP3.mkv
Qualification.mkv
Race.mkv
Since one usually just watches one event after the other, I rsync the
new event's files over the old ones, so, for example, Race.mkv from the
last event is replaced with the file from the new event.
One problem with this is that I like to use --resume-playback for other
kinds of media, so I have it on by default. That works great for, say, a
movie, but doesn't work so well with this scheme, because you can
trivially forget to pass --no-resume-playback on the command line and
end up 2 hours in, watching spoilers as the race results scroll down the
screen :-)
This patch adds a new option, --resume-playback-check-mtime, which
validates that the file's mtime hasn't changed since the watch_later
configuration was saved. It does this by setting the watch_later
configuration to have the same mtime as the file after it is saved.
Switching back and forth between checking mtime and not checking mtime
works fine: the option only controls whether we compare the mtimes,
while the watch_later configuration's mtime is updated regardless of
the option's value.
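The stamping part is about this simple (a sketch of the idea using
POSIX calls, not the actual implementation):

    #include <sys/stat.h>
    #include <utime.h>

    // After writing the watch_later config for 'mediafile', give the
    // config file the media file's mtime. On resume, if the two mtimes
    // no longer match, the media file was replaced, and the resume
    // entry is not used.
    static int copy_mtime(const char *mediafile, const char *conffile)
    {
        struct stat st;
        if (stat(mediafile, &st) < 0)
            return -1;
        struct utimbuf times = {
            .actime  = st.st_atime,
            .modtime = st.st_mtime,
        };
        return utime(conffile, &times);
    }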
These were a bad idea and are obscure. Scripting key mapping support
still uses them, but this is not relevant to scripting authors, because
the mpv-provided helper code (defaults.lua) takes care of this. In
addition, the OSC uses a legacy form of this.
Hopefully, this input section stuff can be removed and replaced by a
simpler mechanism.
As preparation for making repl.lua part of the core (maybe), add some
mechanisms which are supposed to improve its behavior.
Add a silent mode. Calling mpv_request_log_messages() with the log level
name prefixed with "silent:" will disable logging from the API user's
perspective. But it will keep the log buffer, and record new messages,
without returning them to the user. If logging is enabled again by
requesting the same log level without "silent:" prefix, the buffered log
messages are returned to the user at once. This is not documented,
because it's far too messy and special for me to want anyone to rely on
this behavior, but it will be perfectly fine for an internal script.
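From a client's perspective it looks like this (don't rely on it, as
said above):

    #include <mpv/client.h>

    void silent_logging_example(mpv_handle *ctx)
    {
        // Keep recording into the log buffer, but deliver nothing:
        mpv_request_log_messages(ctx, "silent:terminal-default");
        // ... no MPV_EVENT_LOG_MESSAGE events arrive here ...

        // Re-enable; the buffered messages are delivered at once:
        mpv_request_log_messages(ctx, "terminal-default");
    }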
Another thing is that we record early startup messages. The goal is to
make the repl.lua script show errors from option and config file
parsing. This works only with the special "terminal-default" log level.
In addition, reduce the "terminal-default" capacity to only 100 log
messages. If this is going to be enabled by default, it shouldn't use
too many resources.
Checking the level name in the script would just make it not work if
mpv_request_log_messages() gets extended to accept more names. Pass the
argument through without checking.
To keep the behavior the same (for whatever reason, probably not
important), still raise an error if the libmpv API function reports
that the argument was bad.
(The check_loglevel() function is still used when the script _emits_ log
messages, which is different, and for which there is no API anyway.)
We should only be printing errors that occur when not probing, to
avoid creating the impression that something is wrong - and errors
during probing aren't a problem.
Until now, this didn't work, since the external image had pts 0; so
enabling video at a later time did nothing, because the image was
discarded. Since hrseek now ends on the last frame (instead of nothing),
reusing the hrseek mechanism solves this, and we don't even need to
treat the cursed coverart case separately.
If SEEK_FORWARD is set, a demuxer should skip to the next frame if the
timestamp does not fall on the start of a frame. If that flag is not
set, it should always seek to the first frame before the target
timestamp (or the first frame in the file).
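With made-up numbers: frames at pts 10.0, 10.5, 11.0 and a seek target
of 10.2 select 10.5 with SEEK_FORWARD set, and 10.0 without it. As a
sketch:

    // Frame selection per the rule above; pts[] is sorted ascending.
    static double pick_frame(const double *pts, int n, double target,
                             int seek_forward)
    {
        for (int i = 0; i < n; i++) {
            if (pts[i] >= target) {
                if (pts[i] == target || seek_forward)
                    return pts[i];
                // Otherwise the last frame before the target, or the
                // first frame in the file if there is none.
                return i > 0 ? pts[i - 1] : pts[0];
            }
        }
        return pts[n - 1]; // target past the end: nothing to skip to
    }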
See what the added code comment says. Normally when this is needed, it's
the cover art case. But this flag is not set when using an external
image. This gives weird seek behavior, because the frame will be
"normally" displayed for its determined duration, and during normal
video playback, the video pts will be used - which is always 0 here.
This should happen only if audio is active. Otherwise, we're more or
less in image viewer mode, where the image should be displayed for a
configured duration.
This gives much better behavior in general, and is what we want if video
somehow ends earlier than audio.
A common special case is using an audio file with an external image
file.
This commit makes things like switching aspect ratio work (provided the
demuxer for the image behaves correctly, which currently isn't the case
with demux_mf.c). Since the image file had timestamp 0, it was usually
skipped by hr-seek, and changed properties weren't applied to it at the
start of the filter chain.
This wasn't done, probably a regression from one of the last dozen
times this special code path was touched. This meant coverart images
ignored the user-set aspect ratio completely, and some other things.
Surprisingly, we've managed to get this far without context_glx ever
adding the X11 display as a native resource. But with the recent change
to attempt to enable vdpau when using EGL, the hwdec now requires the
display to be added. So let's add it.
eglGetDisplay() is a broken API, since it takes a windowing-specific
argument, yet is supposed to work for multiple APIs at the same time.
On Linux, it can take both an X11 "Display" and a "wl_display".
Obviously there is no way to specify what kind of display the argument
is (it's just a void*).
Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff
to try to guess the display type, including trying to call mincore() to
determine whether the pointer can be accessed at all. I guess this
recently accidentally broke (as a bug), but on the other hand, maybe
it's time to do this properly.
The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus
Mesa needs to support the associated platform extension
(EGL_KHR_platform_x11).
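The call itself, for X11 (EGL 1.5 core; the enum comes from
EGL_KHR_platform_x11):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <X11/Xlib.h>

    // The platform is stated explicitly, instead of EGL guessing what
    // kind of pointer it got handed.
    EGLDisplay get_x11_egl_display(Display *x11_display)
    {
        return eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR,
                                     x11_display, NULL);
    }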
Since I see no reasonable way to do this in a compatible way, just
require that EGL 1.5 is available. The problem is that EGL 1.4 seems to
require you to create a display to query the EGL version and
extensions, and
you have a chicken-and-egg problem. It's very stupid. Maybe you could
jump through some more hoops to get something compatible, but fuck that.
Users on "too old" Mesa will fall back to GLX (which we keep around for
a regrettable company known by the name of Nvidia).
I think Wayland and GBM should do the same. They're sufficiently
bleeding-edge that you can expect them to have EGL 1.5. On the other
hand, the cursed RPI code will have to stay with eglGetDisplay().
Speculative fix for #7154.
(Rant about EGL follows. Actually I deleted it.)
pass_color_map() (in video_shaders.c) and pass_colormanage() (video.c)
both duplicate the condition on whether to do peak computation. Peak
computation requires a compute shader, so if the duplicated conditions
don't match, video_shaders.c will generate a compute shader, but video.c
will try to run it as fragment shader. This leads to a "blue screen".
This can be reproduced by playing an HDTV video with --target-peak=99.
It's not clear how to fix this. Should pass_tone_map() be invoked only
if mp_trc_is_hdr() == true (what pass_colormanage() uses to decide
whether to enable peak computation), or should pass_colormanage() just
tell pass_color_map() to skip peak computation? Decide for the latter,
as it's more robust.
Even if not correct, at least it gets rid of the blue shit.
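The shape of the fix, roughly (hypothetical names; the point is that
the condition is evaluated exactly once and passed down):

    #include <stdbool.h>

    struct tone_map_opts {
        bool compute_peak; // single point of truth
        // ...
    };

    // video_shaders.c side: only obeys the decision, instead of
    // re-deriving it and possibly emitting a compute shader that
    // video.c then runs as a fragment shader.
    static void pass_color_map(const struct tone_map_opts *opts)
    {
        if (opts->compute_peak) {
            // emit compute-shader based peak measurement
        }
        // ... tone mapping etc. ...
    }

    // video.c side: decides once whether peak computation is wanted.
    static void pass_colormanage(bool trc_is_hdr, bool have_compute)
    {
        struct tone_map_opts opts = {
            .compute_peak = trc_is_hdr && have_compute,
        };
        pass_color_map(&opts);
    }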
Fixes: #7149
It's too easy to introduce unintended circular lock dependencies. Just
now we found that the (old) cocoa vo_gpu backend is also affected by
this, because it waits on the Cocoa main thread, which in turn uses
libmpv API for OSX... stuff.
Also fix a missing initial property update after observe.
This leaves me unhappy, because it just leads to a stupid thread
ping-pong. Will probably rewrite this later.
Give an overview of the various methods. I feel like I've written text
like this over and over again (compatibility.rst and
interface-changes.rst for example duplicate the list of mpv API
abstractions), but such is life in hell.
Use this in particular to strongly suggest not to parse terminal output.
This suggestion got lost or de-emphasized at some point (maybe when
removing MPlayer and "slave mode" references). Some of this text is
still there, but it can be considered "fine print" at best, which nobody
will see. Now we have it in a more prominent place. This is especially
important since MPlayer-style use of mpv still seems to be prevalent;
see for example #7153.
I have no idea why this still exists, since we have --input-ipc-server.
I think there was something about Windows, but the latter option is
implemented even on Windows.