The generic change detection now handles this just as well.
The way this function is manually called at init is slightly gross.
Make that part slightly more explicit to hopefully avoid confusion.
* Instead of following VOCTRL_FULLSCREEN, check for option changes.
* Instead of signaling VO_EVENT_FULLSCREEN_STATE, update the cached
option structure and have it propagated to the origin.
Additionally, this gets rid of all direct usage of the VO options
structure.
Done in a similar style to the Wayland common file, where the "payload"
from the cache is used when reading the value.
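As a self-contained illustration of the direction (all types and names
below are stand-ins for the idea only, not mpv's actual m_config_cache
API):

    #include <stdbool.h>

    struct cached_vo_opts {
        bool fullscreen;
        bool border;
        int generation;   // bumped whenever an option in the group changes
    };

    // Poll for option changes (replaces reacting to VOCTRL_FULLSCREEN).
    static bool vo_opts_changed(const struct cached_vo_opts *opts,
                                int *seen_generation)
    {
        if (opts->generation == *seen_generation)
            return false;
        *seen_generation = opts->generation;
        return true;
    }

    // Report a state change back to the origin (replaces signaling
    // VO_EVENT_FULLSCREEN_STATE).
    static void write_back_fullscreen(struct cached_vo_opts *opts, bool fs)
    {
        opts->fullscreen = fs;
        opts->generation++;
    }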
EDL files can have multiple segments taken from the same source file. In
this case, the source file is supposed to be opened only once. This
stopped working, and it created a new demuxer instance for every single
segment entry. This made it slow and made it use much more memory than
needed.
This was because it tried to iterate over the array of source files, but
the array count (num_parts) was only set to a non-0 value later. Fix
this by maintaining the count correctly.
In addition, the actual code for checking whether a source can be reused
(in open_source()) regressed and stopped working correctly. d->stream
could be NULL. Use demuxer.filename instead; I'm not entirely sure
whether this is always correct, but fortunately we have a distributed
almost-AI driven test suite (called "users") which will probably find
and report such cases.
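To illustrate the reuse check (a self-contained sketch; the struct and
helper below are simplified stand-ins, not mpv's actual types):

    #include <string.h>

    struct source {
        const char *filename;   // corresponds to demuxer.filename in mpv
    };

    // Look for an already-open source with the same filename before
    // opening a new demuxer for a segment. num_parts has to be maintained
    // correctly, otherwise this loop never finds anything (the bug
    // described above).
    static struct source *find_reusable_source(struct source **parts,
                                               int num_parts,
                                               const char *filename)
    {
        for (int n = 0; n < num_parts; n++) {
            if (parts[n] && strcmp(parts[n]->filename, filename) == 0)
                return parts[n];
        }
        return NULL;
    }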
Probably broke with commit a09396ee60 or something close, but didn't
check closer.
Fixes: #7267
In this combination, the [current-]window-scale properties still
incorrectly applied scaling.
For some reason, vo_calc_window_geometry2() handled this option
(basically ignored the dpi_scale parameter passed to it), but since the
DPI compensation for window-scale is implemented in x11_common.c, we
need to check and honor this option here too. (What a mess.)
console.lua uses "terminal-default" logging, which is supposed to return
all messages logged to the terminal to the API. Internally, this is
translated to MP_LOG_BUFFER_MSGL_TERM, which is MSGL_MAX+1, because it's
not an actual log level (blame C for not having proper sum types or
something).
Unfortunately, this unintentionally raised the internal log level to
MSGL_MAX+1. It still functioned as intended, because log messages were
simply filtered at a "later" point. But it led to every message being
formatted even if not needed. More importantly, it made mp_msg_test()
pointless (code calls this to avoid logging in "expensive" cases when
the messages would just get discarded). Also, this broke libplacebo
logging, because the code to map the log messages did not expect a level
higher than MSGL_MAX (mp_msg_level() returned MSGL_MAX+1 too).
Fix this by not letting the dummy level value be used as log level.
Messages at terminal log level will always make it to the inner log
message dispatcher function (i.e. mp_msg_va() will call
write_msg_to_buffers()), so log buffers which use the dummy log level
don't need to adjust the actual log level at all.
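As a self-contained sketch of the fixed behavior (the constants and the
helper below are simplified stand-ins, not the real mp_log code):

    #define MSGL_MAX             8   // placeholder; the real value doesn't matter here
    #define LOG_BUFFER_MSGL_TERM (MSGL_MAX + 1)  // dummy value, not a real level

    // Compute the effective level that mp_msg_test()-style checks compare
    // against. Buffers registered with the dummy "terminal" level are
    // skipped: terminal-level messages reach the buffer dispatcher anyway,
    // so they must not raise the level past MSGL_MAX.
    static int effective_level(int terminal_level,
                               const int *buffer_levels, int num_buffers)
    {
        int level = terminal_level;
        for (int n = 0; n < num_buffers; n++) {
            if (buffer_levels[n] == LOG_BUFFER_MSGL_TERM)
                continue;
            if (buffer_levels[n] > level)
                level = buffer_levels[n];
        }
        return level;
    }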
This is similar to the "edition" change.
I considered making this go through deprecation, but didn't have a good
idea how to do that. Maybe it's fine, because this is pretty obscure.
But it might break some API users/scripts (it certainly broke
stats.lua), and all I have to say is sorry for that.
"window-scale" is 1.0 by default; however, x11 implicitly set that to
2.0 on hidpi screens. This made the default 2.0, which was inconsistent
with the option. The "window-scale" property jumped from 1.0 to 2.0 when
a window was created.
Avoid this by factoring the DPI into the window-scale. This makes the
UNFS_WINDOW_SIZE return a virtual size; since this value is used for the
window-scale property only, this is fine and has no further
consequences. (Originally, this was possibly meant to be used for other
purposes, but I'm perfectly fine with redoing this again should that
ever happen.)
This changes user-visible behavior, and it's as if setting window-scale
multiplies its argument by 2 suddenly. Hopefully no user will get angry.
This is for the previous commit, and should affect behavior with the
special M_PROPERTY_GET_CONSTRICTED_TYPE mechanism only. The effect is
that cycling the "edition" property, if the option is set to "auto",
will change to the second edition instead of the first.
Normally, option values must always be within their range, so this
should not affect anything else. M_PROPERTY_GET_CONSTRICTED_TYPE is
sort-of fine with this kind of behavior.
If this affects any other M_PROPERTY_GET_CONSTRICTED_TYPE users
negatively, I will revert the change.
See manpage/changelog changes.
The purpose of this change is to remove another case of inconsistent
property behavior. At first I wanted to make this go through deprecation
before making a technically incompatible change, but then I considered
this feature too obscure for anyone to care.
The VO underrun detection (just a weak heuristic) added in commit f26dfb
flagged the underrun state every time it was checked, and since the
check happened in every playloop iteration, this caused the playloop to
wake up itself on every iteration. It burned an entire core while in
this state.
Fix this by flagging this condition only once (as it should be), and
requiring that a frame is displayed to trigger it again. This makes it
work similarly to the audio underrun check.
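A minimal self-contained illustration of the fix (the struct and field
names are stand-ins, not mpv's actual code):

    #include <stdbool.h>

    struct underrun_state {
        bool underrun;           // set when the VO ran out of frames
        bool underrun_signaled;  // set once the playloop has reacted to it
    };

    // Called from the playloop: wake up/refill only once per underrun,
    // instead of on every iteration.
    static bool check_underrun(struct underrun_state *st)
    {
        if (st->underrun && !st->underrun_signaled) {
            st->underrun_signaled = true;
            return true;
        }
        return false;
    }

    // Called when a frame was actually displayed: only then can a new
    // underrun be flagged again.
    static void on_frame_displayed(struct underrun_state *st)
    {
        st->underrun = false;
        st->underrun_signaled = false;
    }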
The bug report referenced below says --demuxer-thread=no avoided this.
This is because the demuxer layer doesn't do proper underrun reporting
if the reader thread is disabled.
Fixes: #7259
With the previous commit, there's no need for 1.5 anymore. And in fact,
it's just too dangerous to rely on 1.5 because of all the EGL craziness.
For example, you might get a 1.5 EGL system library, but a driver might
still give you 1.4 at runtime. If you assume that you can call 1.5
functions, you will probably get random crashes in this case. What a
cursed API. (The same problem exists with EGL 1.3, but fortunately
nothing seems to use that anymore. We can just ignore that problem.)
This tries to deal with the crazy EGL situation. The summary is:
- using eglGetDisplay() with multiple windowing platforms doesn't really
work, but Mesa had an awful hack for it
- this hack can be disabled at build time, and some distros sometimes
accidentally or intentionally do so
- Mesa will probably eventually disable it by default
- we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
- the very regrettable graphics company (also known as Nvidia) ships
drivers (for old hardware I think) that are EGL 1.4 only
- that means even though we "require" EGL 1.5 and link against it, the
runtime EGL may be 1.4
- trying to run mpv there crashes in the dynamic linker
- so we have to go through some more awful compatibility hacks
This commit tries to do it "properly", but using EGL 1.4 as base. The
platform selection mechanism is a messy extension there, which got
elevated to core API in 1.5 (but OF COURSE in incompatible ways).
I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION)
is really needed, but if you ask me, it feels slightly saner not to rely
on an EGL 1.4 kludge forever. But maybe this is just an instance of
self-harm, since they will most likely never drop or not provide this
API.
Also, unlike before, we actually check the extension string for the
individual platform extensions, because who knows, some EGL
implementations might curse us if we pass unknown platform parameters.
(But actually, the more I think about this, the more bullshit it is.)
X11 and Wayland were the only ones trying to call eglGetPlatformDisplay,
so they're the only ones which are adjusted in this commit.
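Roughly, the selection now looks like this (a hedged sketch of the
approach, not the literal code; the substring check on the extension
string is a simplification):

    #include <string.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    // EGL 1.4 path: use eglGetPlatformDisplayEXT() only if the client
    // extension string actually advertises the X11 platform extension,
    // otherwise fall back to the legacy call.
    static EGLDisplay get_x11_display(void *x11_display)
    {
        const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
        if (exts && strstr(exts, "EGL_EXT_platform_x11")) {
            PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
                (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                    eglGetProcAddress("eglGetPlatformDisplayEXT");
            if (get_platform_display)
                return get_platform_display(EGL_PLATFORM_X11_EXT,
                                            x11_display, NULL);
        }
        return eglGetDisplay((EGLNativeDisplayType)x11_display);
    }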
Unfortunately, correct function of this commit is unconfirmed. It's
possible that it crashes with the old drivers mentioned above.
Why didn't they solve it like this:
    struct native_display {
        int platform_type;
        void *native_display;
    };
Could have kept eglGetDisplay() without all the obnoxious extension BS.
This assert() sometimes triggered (and still triggers) with lavc API
bugs. It tries to check that at least 1 plane is set to a non-NULL
value. Obviously, a valid frame returned by successful decoding should
never have all planes set to NULL.
The problem is that some hwdecs use integer surface IDs cast to a
pointer. Recently, it happened that newer Intel drivers started using
surface ID 0 under certain circumstances (for unknown reasons), which
triggers this assert.
Just get rid of it.
For the sake of #7185, add an assert() specifically for nvdec. That
failure needs to be further analyzed, is probably an FFmpeg bug, and
without this assert() it would just crash somewhere further down the
video chain.
Fixes: #7261
This code checked AVFrame.buf[0] instead of the decode return code to
see whether a frame was decoded. This is sort of suspicious; while I
think that the lavc API actually guarantees it, it's not intuitive
anyway. In addition, the code was unnecessarily roundabout.
Replace it with a proper error code check. Remove the other error return
(that was, or should have been, redundant before). The no-frame path is
now cleanly separated. Add an assert on the frame-returned path; if this
fails, lavc violated its own API.
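For reference, the intended shape of the check (a sketch against the
public lavc API, not the exact mpv code):

    #include <assert.h>
    #include <libavcodec/avcodec.h>

    // Returns 1 if a frame was produced, 0 if lavc wants more input or hit
    // EOF, and a negative AVERROR on real decoding errors.
    static int receive_frame_checked(AVCodecContext *avctx, AVFrame *frame)
    {
        int ret = avcodec_receive_frame(avctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;                 // no frame; not an error
        if (ret < 0)
            return ret;               // decoding error
        assert(frame->buf[0]);        // on success, lavc must set this
        return 1;
    }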
the old event tap has several problems, like no proper priority support
or having to set accessibility permissions for mpv or the terminal.
it is now replaced by the new MediaPlayer, which has proper priority
support and isn't as greedy as before. this only includes Media Key
support and not any of the other features included in the MediaPlayer
framework, like proper Now Playing data (only dummy data is set for now).
this is only available on macOS 10.12.2 and higher.
also removes some unnecessary redefines.
Fixes #6389
the Apple Remote has long been deprecated and abandoned by Apple.
current macs don't come with support for it anymore. support might be
re-added with the next commit.
using the MPContext as ta parent was a bad idea and shouldn't be done
there under any circumstances, because it is only supposed to be for
internal usage. this had the undesired effect that the options group was
freed but still used, since the MPContext is freed afterwards.
instead, manually free the options group.
this removes the direct access of the mp_vo_opts struct via the vo struct
and replaces it with m_config_cache usage. this updates the fullscreen
and window-minimized properties via m_config_cache_write_opt instead of
the old mechanism via VOCTRL and event flagging. also use the new
VOCTRL_VO_OPTS_CHANGED event for fullscreen and border changes.
People somehow think "should" makes things optional, even though the
wording was merely trying to account for the exception to the rule. I
guess this means programming documents should sound like we're running a
police state (which is also the ultimate outcome of all technological
development, if you weren't aware).
See: #7248
\s and \S aren't actually part of the spec, but it seems glibc supports
them anyway so I didn't notice when originally testing. This fixes the
script on Apple's libc and probably others that adhere more closely to
the spec.
The most direct replacement for \s would have been [[:space:]], but we
only expect to see spaces and tabs, so might as well just do that. Also
could have used [[:blank:]], which is basically a locale-aware version
of [ \t], but mpv isn't going to output anything but ASCII spaces and
tabs, so let's avoid unnecessary complexity and stick with the ASCII
literals.
It was supposed to be optional already, but I misunderstood how the
re_match_pcre option worked. If it's set, it will try to use PCRE
matching whether it's available or not (and blow up if it's not). So,
first try to load the module it'll use, and only set the option if that
works.
Fixes #7240.
Instead of traversing leafs(), which can lead to an infinite
loop with cross-linked libraries, use the dictionary
(libs_dict) created by libraries() to create a set (libs_set) of
every unique library. Every value in libs_dict is also a key in
libs_dict, so every unique library linked to mpv will be a key in
libs_dict. Use set() on libs_dict to return a set of the keys from
libs_dict, and remove binary from the set so that a duplicate of
the binary is not added to the libs directory.
Iterate over libs_set to bundle dylibs while using the libs_dict
to determine which install_names to change.
And troll Microsoft slightly while we're at it. But is it trolling if
it's the truth?
The level of C99 support in MSVC is probably a bit better than most
people think, but it's by far not adequate. We need a bit of either C11
or GNU extensions too, and rely on some MinGW helpers (that look like
they're provided by MS, except they're not).
Do it after decoding etc., but before waiting for input. This seems to
make more sense, because whether a queued seek can be applied depends on
the playback state. So it sounds like a good idea to apply the seek
first thing, but it's a bad idea to go to sleep if there's still a
queued seek pending (that couldn't be processed earlier).
Also add an empty line before mp_wait_events(); it doesn't really have
to do with the filter bullshit.
If you have a normal file with audio and video, and keep "spamming"
forward hr-seeks, the player just kept showing the last video frame
instead of exiting or playing the next file. This started happening
since commit 6bcda94cb. Although not a bug per se, it was odd, and very
user-noticeable.
The main problem was that the pending seek command was processed before
the EOF was "noticed". Processing the command reset everything, so the
player did not terminate playback, but repeated the seek.
This commit restores the old behavior.
For one, it makes video return the correct status (video.c). The
parameter is a bit ugly, but better than duplicating the logic or having
another MPContext field. (As a minor detail, setting r=VD_EOF makes sure
have_new_frame() returns true, rather than going through another
iteration or whatever the hell will happen instead, which would clobber
logical_eof.)
Another thing is making the seek logic actually wait until the seek
outcome has been determined if audio is also active. Audio needs to wait
for video in order to get the video seek target position. (Which in turn
is because hr-seek still "snaps" to video frames. You can't seek in
between two frames, so audio can't just use the seek target, but always
has to wait on the timestamp of the video frame. This has other
disadvantages and is a misdesign, but not something I'll fix today.)
In theory, this might make hr-seeks less responsive, because it needs to
fully decode/filter the audio too, but in practice most time is spent on
video, which had to be fully decoded before this change. (In general,
hr-seek could probably just show a random frame when a queued hr-seek
overrides the current hr-seek, which would probably lead to a better
user experience, but that's out of scope.)
Fixes: #7206
I missed adding this when defining the style used for the video
title in the window control bar. The default behaviour is to wrap,
but we want to cut the title off when we run out of space.
See commit 4e4252f916 and the following as an example of how this would
have to be done if done properly.
Since I'm unable to test on OSX, and nobody is interested in fixing this
code (including myself, actually), just remove the deprecated
definitions to make sure the code still builds. This will break runtime
switching of fullscreen, ontop, border. (The way the minimized state is
reported was also deprecated, but commit 40c2f2eeb0 already broke it
anyway.)
Seems like this was silently changed to enabled by default on the change
to libplacebo, without adjusting the manpage. Fix the documented
default.
Also add a comment about Nvidia; see referenced issue.
Fixes: #7245
...probably.
The EGL backend had a strange problem: when recreating the window, EGL
surface creation sometimes mysteriously failed. For example, keeping the
"_" key down (cycles video by default) destroys and recreates the window
in rapid succession, which will often enough show the "Could not create
EGL surface!" message.
This was puzzling because due to mpv's architecture, the X11 Window and
even the X11 Display were fully destroyed, the thread on which they ran
was destroyed, and then everything was recreated. There shouldn't have
been any state that could make subsequent EGL initialization fail.
It turns out mpv forgot to free EGLSurfaces in the x11 code. EGL is a
pretty crazy API (full of thread local and global state with weird
lifetime requirements), and for example it seems EGLDisplay cannot be
explicitly released, but apparently implicitly dies when the native
display is closed (at least EGL 1.5 claims eglTerminate() does _not_
invalidate the display, only certain objects linked to it). It appears
that Mesa still referenced at least EGLSurface in some form, and either
some pointer or some X11 ID was dangling, and when it randomly matched
when eglCreateWindowSurface() was called, it failed.
Fix this by calling eglTerminate(), which supposedly destroys (or rather
unreferences) contexts and surfaces created from the display (but
absurdly not the display itself).
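The uninit order this implies, as a hedged sketch (not the literal
x11egl code):

    #include <EGL/egl.h>

    static void egl_window_uninit(EGLDisplay display, EGLContext context,
                                  EGLSurface surface)
    {
        eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                       EGL_NO_CONTEXT);
        if (context != EGL_NO_CONTEXT)
            eglDestroyContext(display, context);
        if (surface != EGL_NO_SURFACE)
            eglDestroySurface(display, surface);
        // Unreferences everything still tied to the native window; the
        // EGLDisplay handle itself stays valid per EGL 1.5.
        eglTerminate(display);
        // Only after this is the X11 Display closed in the real code.
    }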
Now why can't you just destroy the display? If it's implicitly
invalidated, why can't it just call eglTerminate() implicitly when this
happens? Did Mesa do something wrong when they somehow didn't
automatically remove the dangling object (so I could claim not to be
responsible for the bug)? Who the fuck knows, and I'm too tired to
figure this out (both because it's late, and because I'm tired of this
EGL crap API).
Still not sure if the code is correct now. I think EGL was designed to
maximize implementation and API-use complications. How else could you
possibly come up with something like the EGLDisplay life cycle? Or am I
just making a fuss? Anyway, fuck EGL, fuck computers, fuck technology.
Fixes: #7129
I was recently informed that unicode has official symbols for
window controls, and I put together a change to use them, which
worked, as long as a suitable font was installed. However, it's
not that hard to get a normal system that lacks an appropriate
font, and libass wants to print warnings if the symbols aren't
in the default font, which will almost always be true.
So, I gave up and added the symbols to the custom osd font that
we already have. This ensures they are always available, and
that they are aligned consistently on all platforms.
I took the symbols from the `symbola` font, as this has a suitable
licence and the symbols look nice enough.
Symbola Licence:
Fonts are free for any use; they may be opened, edited,
modified, regenerated, packaged and redistributed.
Finally, as we now have access to an un-maximize symbol, I added
logic to use it when the window is maximized.
Get rid of the legacy VOCTRL (which will be removed later). I'm not sure
what exactly fullscreen was supposed to do (toggling between using the
entire display, and what --geometry forced?), but I don't care, just get
rid of the VOCTRL. PRs to fix regressions caused by this will be
accepted, but personally I don't care since this is excessively fringe
and obscure.
This warning seems to be designed well. It doesn't seem to warn on
fallthrough-only case statements, so it's compatible with well-written
code.
stream_dvdnav.c had an obscure bug in inactive code, fix it.
stream_dvb.c is the only place where it intentionally falls through, I
guess I'll just leave it alone.
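For illustration, the kind of code this distinguishes (a generic
example, not taken from mpv; gcc's default setting for this warning
typically accepts a "fall through" comment as explicit intent):

    static int classify(int x)
    {
        int r = 0;
        switch (x) {
        case 1:
        case 2:              // fallthrough-only cases: no warning for these
            r += 1;
            break;
        case 3:
            r += 2;
            /* fall through */
        case 4:              // without the comment above (or a break),
            r += 4;          // the compiler warns about falling into case 4
            break;
        }
        return r;
    }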
The wayland backend needs to keep track of whether or not a window is
hidden for presentation time. There is no presentation feedback when a
window is hidden which means we shouldn't be sending information to the
vo_sync_info structure (i.e. just leave it all at -1). This seemed to
work fine, but recent changes to presentation time in one notable
compositor (Sway; it was probably always broken in Weston actually)
changed the presentation time behavior.
For reasons that aren't clear, there is a greater than 16.666ms delay
between the first presentation time event and the second presentation
time event (compositor latency?) when you switch back to an mpv window
after it is hidden for long enough (a few seconds). When using
presentation time, this causes mpv to feed some bad values into its
vsync timing mechanism, thus causing the A/V desync spike as described
in issue #7223.
This solution is not really ideal. It would be better if the
presentation time events received by the compositors did not have the
aforementioned inconsistency. However since this occurs in both Sway and
Weston and clients can't really fight compositors in wayland-world,
here's a reasonable enough workaround. Basically, just add a slight
delay before we start feeding information into the vo_sync_info again.
We already do this when the window is hidden, so it's not a huge leap.
The delay chosen here is arbitrary, and it basically just recycles the
same parameters used to detect if a window is hidden. If
vo_wayland_wait_frame times out 60 times in a row (or whatever your
monitor's refresh rate is), then we assume the window is hidden. This is
a pretty safe assumption; something has to be terribly wrong for you to
miss 60 vblanks in a row while a window is on the screen.
In this case, we basically just do the reverse of that. If mpv receives
60 frame callbacks in a row (or whatever your monitor's refresh rate
is), then it assumes the window is not hidden. Previously, as soon as it
received 1 frame callback it was declared not hidden. Essentially,
there's just 1 second of delay after reshowing a window before the
presentation time statistics are used again. This should be more than
enough time to skip over the weird, inconsistent presentation time
behavior and avoid the A/V desync spike.
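A self-contained sketch of the counting (names and fields are stand-ins,
not the actual wayland_common code):

    #include <stdbool.h>

    struct hidden_state {
        bool hidden;
        int timeouts;    // consecutive vo_wayland_wait_frame timeouts
        int callbacks;   // consecutive frame callbacks
        int refresh;     // e.g. 60, taken from the monitor's refresh rate
    };

    static void on_frame_callback(struct hidden_state *st)
    {
        st->timeouts = 0;
        if (++st->callbacks >= st->refresh)
            st->hidden = false;   // only now feed vo_sync_info again
    }

    static void on_wait_frame_timeout(struct hidden_state *st)
    {
        st->callbacks = 0;
        if (++st->timeouts >= st->refresh)
            st->hidden = true;    // stop reporting presentation statistics
    }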
Fixes #7223
I had previously wondered whether to do this, but in my testing
with x11 and wayland, the osc was being re-inited on a border
toggle already so I didn't add it.
However, on win32, things are different and there is no re-init
when toggling borders. I believe this is because the active window
size doesn't change in any way, while on x11/wayland, toggling the
border actually changes the window size - and that triggers a re-init.
So, let's just be explicit and request a re-init when the border
is toggled.
Merged from mpv-repl git repo commit 5ea2bf64f9c239f0326b02. Some
changes were made on top of it:
- Tabs were converted to 4 spaces indentation (plus some manual
indentation fixes in some places).
- All user-visible mentions of "repl" were renamed to "console".
- The README was converted to a manpage (with heavy changes, some
additions taken from stats.rst; rossy converted the key bindings
table to RST).
- The method to change the default key binding was changed.
- Change minor detail about "font" default value setting (not a
functional change).
- Integrate into the player as builtin script, including an option to
prevent loading it.
Above changes and commit message done by wm4.
Signed-off-by: wm4 <wm4@nowhere>
drmModeAddFB is legacy, and might not pick the pixel format you
expect, depending on your driver. Use drmModeAddFB2 which specifies
this explicitly using a fourcc.
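For example (a hedged sketch of the call, with placeholder handle/pitch
values from whatever buffer allocation is in use):

    #include <stdint.h>
    #include <xf86drmMode.h>
    #include <drm_fourcc.h>

    static int add_fb_xrgb8888(int fd, uint32_t width, uint32_t height,
                               uint32_t handle, uint32_t pitch,
                               uint32_t *fb_id)
    {
        uint32_t handles[4] = { handle };
        uint32_t pitches[4] = { pitch };
        uint32_t offsets[4] = { 0 };
        // The fourcc makes the pixel format explicit instead of letting
        // the driver guess from depth/bpp as legacy drmModeAddFB() does.
        return drmModeAddFB2(fd, width, height, DRM_FORMAT_XRGB8888,
                             handles, pitches, offsets, fb_id, 0);
    }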
Seems like some drivers only increment msc every other page flip when
running in interlaced mode (I'm looking at you nouveau). I.e. it seems
to be incremented at the frame rate, rather than the field rate.
Obviously we can't work with this, so shame the driver and bail.
On Intel this isn't an issue, as msc is incremented at field rate
there.
This means presentation feedback won't work correctly in interlaced
modes with those drivers, but who in their right mind uses an
interlaced mode these days, anyway?