I'm tired of dealing with this frequent spawning of xdg-screensaver when
debugging and what not. xdg-screensaver was never a serious tool anyway,
it's more like some self-deprecating joke by FDO folks.
This will affect X11 on GNOME and other DEs. I'm singling out GNOME
though, because they are the ones actively sabotaging any sane
technical solutions and community cooperation.
I have been accused of taking it out on innocent GNOME users, while none
of this will reach GNOME developers. Of course that is not the
intention.
libplacebo exposes this feature already, because this particular type of
bug is unusually common in practice. Simply make use of it, by exposing
it as an option.
Could probably also bump the libplacebo minimum version to get rid of
the #if, but that would break debian oldoldstable or something.
Fixes #7867.
De-emphasize it, since a user should usually not use it. This _could_ be
used to make the cache seekable with --cache=no, but it's better and
more intuitive to use --cache=yes. As such, the only use of this is for
debugging. I'm not quite sure if this should be removed entirely, but I
still see some value in it (for example if you want the cache lookahead,
but you're using a stream where cache seeking is somehow broken).
Implementation copy/pasted from:
https://code.videolan.org/videolan/libplacebo/-/commit/f793fc0480f
This brings mpv's tone mapping more in line with industry standard
practices, for a hopefully more consistent result across the board.
Note that we ignore the black point adjustment of the tone mapping
entirely. In theory we could revisit this, if we ever make black point
compensation part of the mpv rendering pipeline.
This standard says we should use a value of 203 nits instead of 100 for
mapping between SDR and HDR.
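As a sanity check, here's the arithmetic in C (a sketch; the PQ
constants are the standard SMPTE ST 2084 ones, and the 203 nit figure
is BT.2408's reference white, not anything copied from mpv's code):

    #include <math.h>
    #include <stdio.h>

    // Inverse PQ EOTF (SMPTE ST 2084): luminance in nits -> signal level.
    static double pq_encode(double nits)
    {
        const double m1 = 2610.0 / 16384.0, m2 = 2523.0 / 4096.0 * 128.0;
        const double c1 = 3424.0 / 4096.0;
        const double c2 = 2413.0 / 4096.0 * 32.0, c3 = 2392.0 / 4096.0 * 32.0;
        double p = pow(nits / 10000.0, m1); // PQ covers 0..10000 nits
        return pow((c1 + c2 * p) / (1 + c3 * p), m2);
    }

    int main(void)
    {
        printf("SDR white at 203 nits -> %.2f PQ\n", pq_encode(203)); // ~0.58
        printf("SDR white at 100 nits -> %.2f PQ\n", pq_encode(100)); // ~0.51
        return 0;
    }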
Code copied from https://code.videolan.org/videolan/libplacebo/-/commit/9d9164773
In particular, that commit also includes a test case to make sure the
implementation doesn't break roundtrips.
Relevant to #4248 and #7357.
This simply printf()s a concatenation of the provided string and the
relevant escape sequences. No idea what exactly defines this escape
sequence (is it just an xterm thing that is now supported relatively
widely?), and this simply uses information provided on the linked github
issue.
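For the record, it amounts to something like this (a sketch; whether
it's OSC 0 (icon + title) or OSC 2 (title only) is the kind of detail
you'd have to check in the terminal code):

    #include <stdio.h>

    // Set the terminal window title with the xterm OSC sequence:
    //   ESC ] 0 ; <text> BEL
    static void term_set_title(const char *text)
    {
        printf("\033]0;%s\007", text);
        fflush(stdout);
    }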
Not much of an advantage over --term-status-msg, though at least this
can have a lower update frequency. Also I may consider setting a default
value, and then it shouldn't conflict with the status message.
Fixes: #1725
This mode drops or repeats audio data to adapt to video speed, instead
of resampling it or such. It was added to deal with SPDIF. The
implementation was part of fill_audio_out_buffers() - the entire
function is something whose complexity exploded in my face, and which I
want to clean up, and this is hopefully a first step.
Put it in a filter, and mess with the shitty glue code. It's all sort of
roundabout and illogical, but that can be rectified later. The important
part is that it works much like the resample or scaletempo filters.
For PCM audio, this does not work on samples anymore. This makes it much
worse. But for PCM you can use saner mechanisms that sound better. Also,
something about PTS tracking is wrong. But not wasting more time on
this.
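The basic idea, reduced to a sketch (hypothetical names and threshold,
not the actual filter code):

    enum action { PASS, DROP, REPEAT };

    // Decide what to do with the next audio frame, given the ratio of
    // the speed the audio should play at vs. the speed it's played at.
    static enum action adapt(double ratio, double threshold)
    {
        if (ratio > 1 + threshold)
            return DROP;    // too much audio per unit time -> discard some
        if (ratio < 1 - threshold)
            return REPEAT;  // too little audio -> repeat data to pad
        return PASS;
    }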
Can be useful to force it to adapt to extreme speed changes, while a
higher limit would just use a fraction closer to the original video
speed.
Probably useful for testing only.
As discussed in the referenced issue. This is quite a behavior change,
but since this option is new, and not included in any releases yet, I
think it's OK.
Fixes: #7648
By default --sub-auto uses "exact". This was far from an "exact" match,
because it added anything that started with the video filename (without
extension), and seemed to end in something that looked like a language
code.
Make this stricter. "exact" still tolerates a language code, but the
video's filename must come before it without any unknown extra
characters. This may not load subtitles in some situations where it
previously did, and where the user might think that the naming
convention is such that it should be considered an exact match.
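To illustrate with made-up filenames (roughly; see the code for the
exact rules):

    video.mkv + video.srt         -> exact match, as before
    video.mkv + video.en.srt      -> exact match (language code tolerated)
    video.mkv + video.cut.en.srt  -> previously treated as "exact", now
                                     rejected ("cut" is an unknown extra
                                     part before the language code)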
The subtitle priority sorting seems a bit worthless. I suppose it may
have some value in higher "fuzz" modes (like --sub-auto=fuzzy).
Also remove the mysterious "prio += prio;" line. I probably shouldn't
have checked, but it goes back to commit f16fa9d31 (2003), where someone
wanted to "refine" the priority without changing the rest of the code or
something.
Mostly untested, so have fun.
Fixes: #7702
Consider e.g. --aid=2 with a file that has only 1 track. Then it would
fall back to selecting track 1. Stop doing this. If no matching track is
found, this will not select any track now.
Note that the fingerprint stuff (track_layout_hash in the source)
softens the impact of this change. Without the fingerprint,
playing a dual-audio file with the second track selected, and then a
single-audio file, would play the second file without audio. But the
fingerprint resets it due to differences in the track list.
Try to exhaustively document this and tricky interactions between the
other features. What a damn mess, I think it's simply cursed. Of course
it's still my fault.
See: #7608
Some time ago, properties and options were mostly unified. However, the
track selection properties/options semantics are incompatible with this
change. I'm still trying to handle the fallout.
There are two things that are in the way:
1. Track properties somehow return the runtime selection, not the option
value (all while properties are supposed to be aliases to options
with the same name).
2. The user's track options are not supposed to be changed without
interaction. If a track is auto-selected, the property should return
its ID, but the option value should remain at "auto". Only if the
user actually writes to the property should the option change. E.g.
playing an audio-only file and then a normal video file should not
play the video file with --vid=no just because the audio file had no
video track.
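Spelled out with a hypothetical session:

    mpv --vid=auto audio.flac movie.mkv

    During audio.flac, the "vid" property returns "no" (nothing could
    be selected), but the --vid option has to stay at "auto", so that
    movie.mkv still gets its video track selected.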
In addition to each of them being in conflict with the property/option
unification, an attempt to fix one of them breaks the other one.
Today, we're trying to fix parts of this, and to avoid an unfortunate
case where you can get a conflicting option/property value, and where
trying to select a track does nothing if the track to select has the
same ID as the option value.
This breaks 2. from above in certain situations. See manpage additions.
See: #7608
While --input-file was removed for justified reasons, wanting to pass
down socket FDs this way is legitimate, useful, and easy to implement.
One odd thing is that
Fixes: #7592
The mp_filter_run() invocation blocks as long as the demuxer provides
packets and the queue can be filled. That means it may block quite a
long time if the decoder queue size is large (since we use libavcodec in
a blocking manner; it regrettably does not have an async. API).
This made the main thread freeze in certain situations, because it has
to wait on the decoder thread.
Contrary to what I suspected (I wrote that code, but that doesn't mean I know
how the hell it works), this did not freeze during seeking: seek resets
flushed the queue, which also prevented the decoder thread from adding
more frames to it, thus stopping decoding and responding to the main
thread in time. But it does fix the issue that exiting the player waited
for the decoder to finish filling the queue when stopping playback.
(This happened because it called mp_decoder_wrapper_set_play_dir()
before any resets. Related to the somewhat messy way play_dir is
generally set. But it affects all "synchronous" decoder wrapper API
calls.)
This uses pretty weird mechanisms in filter.h and dispatch.h. The
resulting durr hurr interactions are probably hard to follow, and this
whole thing is a sin. On the other hand, this is a _very_ user visible
issue, and I'm happy that it can be fixed in such an unintrusive way.
Let's see how much everyone hates this. Leaving it enabled seems
problematic, because libavcodec returns an unspecific error if it
doesn't like it.
Fixes: #6300
See manpage additions. This has been a topic in MPlayer/mplayer2/mpv
since forever. But since libavcodec multi-threaded decoding was added,
I've always considered this pointless. libavcodec requires you to
"preload" it with packets, and then you can pretty much avoid blocking
on it, if decoding is fast enough.
But in some cases, a decoupled decoder thread _might_ help. Users have
for example come up with cases where decoding video in a separate
process and piping it as raw video to mpv helped. (Or my memory is
false, and it was about vapoursynth filtering, who knows.) So let's just
see whether this helps with anything.
Note that this would have been _much_ easier if libavcodec had an
asynchronous (or rather, non-blocking) API. It could probably have
easily gained that with a small change to its multi-threading code and a
small extension to its API, but I guess not.
Unfortunately, this uglifies f_decoder_wrapper quite a lot. Part of this
is due to annoying corner cases like legacy frame dropping and hardware
decoder state. These could probably be prettified later on.
There is also a change in playloop.c: this is because there is a need to
coordinate playback resets between demuxer thread, decoder thread, and
playback logic. I think this SEEK_BLOCK idea worked out reasonably well.
There are still a number of problems. For example, if the demuxer cache
is full, the decoder thread will simply block hard until the output
queue is full, which interferes with seeking. Could also be improved
later. Hardware decoding will probably die in a fire, because it will
run out of surfaces quickly. We could reduce the queue to size 1...
maybe later. We could update the queue options at runtime easily, but
currently I'm not going to bother.
I could only have put the lavc wrapper itself on a separate thread. But
there is some annoying interaction with EDL and backward playback shit,
and also you would have had to loop demuxer packets through the
playloop, so this sounded less annoying.
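For illustration, the rough shape of the thing is just a producer on a
bounded frame queue (generic pthread sketch with stand-in names, not
the actual f_decoder_wrapper code):

    #include <pthread.h>
    #include <stdbool.h>

    #define QUEUE_SIZE 16 // in reality set by the queue options

    struct frame_queue {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        void *frames[QUEUE_SIZE];
        int num_frames;
        bool terminate;
    };

    static void *decode_next_frame(void) { return NULL; } // stand-in

    // Decoder thread: sleep while the queue is full, otherwise decode
    // one frame (blocking libavcodec call) and append it. The consumer
    // signals wakeup whenever it dequeues a frame.
    static void *decoder_thread(void *p)
    {
        struct frame_queue *q = p;
        pthread_mutex_lock(&q->lock);
        while (!q->terminate) {
            if (q->num_frames == QUEUE_SIZE) {
                pthread_cond_wait(&q->wakeup, &q->lock);
                continue;
            }
            pthread_mutex_unlock(&q->lock);
            void *frame = decode_next_frame();
            pthread_mutex_lock(&q->lock);
            q->frames[q->num_frames++] = frame;
            pthread_cond_broadcast(&q->wakeup);
        }
        pthread_mutex_unlock(&q->lock);
        return NULL;
    }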
Because audio uses the same code, this is also enabled for audio (even
if completely pointless).
Fixes: #6926
Try to deal with various corner cases. But when I fix one thing, another
thing breaks. (And it's 50/50 whether I find the breakage immediately or
a few months later.) So results may vary.
The default for --hr-seek is changed to "default" (not creative enough to
find a better name). In this mode, audio seeking is exact if there is no
video, or if the video has only a single frame. This change is actually
pretty dumb, since audio frames are usually small enough that exact
seeking does not really add much. But it gets rid of some weird special
cases.
Internally, the most important change is that is_coverart and is_sparse
handling is merged. is_sparse was originally just a special case for
weird .ts streams that have the corresponding low-level flag set. The
idea is that they're pretty similar anyway, so this would reduce the
number of corner cases. But I'm not sure if this doesn't break the
original intended use case for it (I don't have a sample anyway).
This changes last-frame handling, and respects the duration of the last
frame only if audio is disabled. This is mostly "coincidental" due to
the need to make seeking past EOF trigger player exit, and is caused by
setting STATUS_EOF early. On the other hand, this might have been this
way before (see removed chunk close to it).
This is useful with live streams, and it's not much worse than the h264
first packet hack, which reads some data anyway.
For some reason, the option wasn't even documented, so do that.
In addition, print the start time even if it's negative. That should not
be possible, but for some reason, the field is an int64_t copied from a
uint64_t so... whatever. Keeping the logging slightly more
straightforward is better anyway.
Remove some redundant fields that controlled or indicated whether the
demuxer was/could/should prefetch. Redefine how the eof/reading fields
work.
The in->eof field is now always valid, instead of weirdly being reset to
false in random situations. The in->reading field now corresponds to
whether the demuxer thread is working at all, and is reset if it stops
doing anything.
Also, I always found it stupid that dequeue_packet() forced the demuxer
thread to retry reading if it was EOF. This makes little sense, but was
probably added for files that are being appended to (running downloads).
It makes no sense, because if the cache really tried to read until file
EOF, it would encounter partial packets and throw errors, so all is lost
anyway. Plus stream_file now handles this better. So stop this behavior,
but add a temporary option that enables the old behavior.
I think checking for ds->eager when enabling prefetching never really
made sense (could be debated, but no, not really). On the other hand,
the change above exposed a missing wakeup in the backward demuxing code.
Some chances of regressions that could make it stuck in certain states
or so, or incorrect demuxer cache state reporting to the player
frontend.
In all_formats mode, we've ignored what --ytdl-format did so far, since
we've converted the full format list, instead of just the formats
selected by youtube-dl.
But we can easily restore --ytdl-format behavior: just mark the selected
tracks as default tracks.
I don't think the skip_muxed option was overly useful. While it was
nice to filter out the low quality muxed versions (as it happens on the
alphabetic site, I suspect it's compatibility stuff), it's not really
necessary, and just makes for another tricky and rarely used
configuration option. (This was different before muxed tracks were also
delay-loaded, and including the muxed versions slowed down loading.)
Add the force_all_formats option instead, which handles the HLS case.
Set it to true because they are also delay-loaded now, and don't slow
down startup as much.
See manpage additions. We would have to extend delay_open to support
multiple sub-tracks (for audio and video), and we still wouldn't know (?)
whether it might contain more than one stream each (thinking of HLS
master streams). And if it's a true interleaved file (such as a "normal"
mp4 file provided as fallback for more primitive players), we'd either
have to signal such "bundled" tracks, or waste bandwidth.
This restructures a lot. The if/else tree in add_single_video for format
selection was a bit annoying, so it's split into separate if blocks,
where it checks each time whether a URL was determined yet.
Pretty worthless I guess. I only tested one site (and 2 videos), it's
somewhat likely that it will break with other sites. Even if you leave
the option disabled (the default).
Slightly related to #3548. This will allow you to use the bitrate
stream selection mechanism, that was added for HLS, with normal videos.
Works as ad-filter. I had some more plans, for example replacing
matching text with different text, but for now it's dropping matches
only. There's a big warning in the manpage that I might change
semantics. For example, I might turn it into a primitive sed.
In a sane world, you'd probably write a simple script that processes
downloaded subtitles before giving them to mpv, and avoid all this
complexity. But we don't live in a sane world, and the sooner you learn
this, the happier you will be. (But I also want to run this on muxed
subtitles.)
This is pretty straightforward. We use POSIX regexes, which are readily
available without additional pain or dependencies. This also means it's
(apparently) not available on win32 (MinGW). The regex list is because I
hate big monolithic regexes, and this makes it slightly better.
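The core of it is small; roughly (a sketch, not the actual code):

    #include <regex.h>
    #include <stdbool.h>

    // Return true if the subtitle line matches any of the compiled
    // regexes, i.e. if the event should be dropped.
    static bool should_drop(regex_t *regexes, int num_regexes,
                            const char *line)
    {
        for (int n = 0; n < num_regexes; n++) {
            if (regexec(&regexes[n], line, 0, NULL, 0) == 0)
                return true;
        }
        return false;
    }

    // Each list item is compiled separately, along the lines of:
    //   regcomp(&re, "opensubtitles\\.org",
    //           REG_EXTENDED | REG_ICASE | REG_NOSUB);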
Very superficially tested.
As requested, I guess. It behaves quite similarly to the --loop* options.
Not quite happy with the idea that 1) the option is mutated on each
operation (but at least it's consistent with --loop* and doesn't require
more properties), and 2) the ab-loop command will do nothing once all
loop iterations are done. As a concession, the OSD shows something about
"disabled".
Fixes: #7360
this creates a default log for the last mpv run when started from the
bundle. that way one can get a log of what happened even after an issue
occurred. also add a menu entry under Help to show the current log, but
only when the bundle is used.
Fixes #7396
Fixes #2547
See #7435 and related for context.
Basically, it seems that while the original vsfilter processed subtitles
like with this option set to "yes", many current players (mpc-hc
default, vlc, probably most libass users) treat them like with "no". In
the linked issue, this makes rendering severely slower, and can consume
a lot of memory (or just overflow libass memory calculations). It seems
that changing this to "no" will lead to more good than bad, especially
because newer subtitles may be authored for the "no" behavior.
Most libass users seem to use "no" exactly because they do not call
ass_set_storage_size() at all. This API was needed because the scaling
of the subtitles depends on the video size (vsfilter bugs, or
something). In addition, it's my personal opinion that rendering should
not depend on the video at all, so I like setting the default of this to
"no".
This originally existed as a hack for weston. In certain scenarios, a
frame taking too long to render would cause vo_wayland_wait_frame to
timeout which would result in a ton of dropped frames. The naive
solution was to just add a slight delay to the time value. If a
frame took too long, it would likely fall under the timeout value and
all was well. This was exposed to the user since the default delay
(1000) was completely arbitrary.
However, with presentation time, this doesn't appear to be necessary.
Fresh frames that take longer than the display's refresh rate (16.666 ms
in most cases) behave well in Weston. In the other two main compositors
without presentation time (GNOME and Plasma), they also do not
experience any ill effects. It's better not to overcomplicate things, so
this "feature" can be removed now.
See manpage additions. The libarchive behavior mentioned in the last
paragraph there is technically unrelated, but makes this new option
mostly pointless.
See: #7182
In the distant past, the cuviddec backed copy hwaccel could be
configured directly using lavc options. However, since that time,
we gained support for automatic hw ctx creation which ended up
bypassing the lavc options.
Rather than trying to find a way to pass those options again, a
better idea is to make the 'cuda-decode-device' option, used by
the interop hwaccels, work for the copy hwaccels too.
And that's pretty simple: we have to add a create function that
checks the option and passes it on to ffmpeg.
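In FFmpeg terms it boils down to something like this (a sketch; how
the --cuda-decode-device value reaches this call is glossed over):

    #include <libavutil/hwcontext.h>

    // Create a CUDA hw device context on the requested device.
    // "device" is the user's device index as a string, e.g. "1".
    static AVBufferRef *create_cuda_device(const char *device)
    {
        AVBufferRef *hw_ctx = NULL;
        if (av_hwdevice_ctx_create(&hw_ctx, AV_HWDEVICE_TYPE_CUDA,
                                   device, NULL, 0) < 0)
            return NULL;
        return hw_ctx;
    }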
Note that this does require a slight re-jig to the configuration
flags, as we now have a scenario where we want to build with support
for the cuda copy hwaccels but not the interop ones. So we need
a distinct configuration flag for that combination.
Fixes #7295.
Add an "auto-safe" mode, mostly triggered by Ubuntu's nonsense to force
hwdec=vaapi in the global config file in their mpv package. But to be
honest it's probably something more people want.
This is implemented as explicit whitelist. On Windows, HEVC/Intel is
sometimes broken, but it's still whitelisted, and in theory we'd need a
detailed whitelist of device names etc. (like for example browsers tend
to do). On OSX, videotoolbox is a pretty bad choice, but unfortunately
the only one, so it's whitelisted too. There may be a larger number of
hwdec wrappers that work anyway, and I'm for example ignoring Android.
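Usage, e.g. in mpv.conf:

    # opt into hw decoding, but only via whitelisted methods
    hwdec=auto-safe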
Apparently there are two different options for controlling which
screen an mpv window goes onto: --fs-screen and --screen. The former
explicitly only controls which screen a fullscreened window goes onto,
but does not appear to actually care about this option at runtime for
X11, so pressing f will always fullscreen to the screen mpv is currently
on. This means the option is of questionable usefulness for starters.
Making it worse, if you use --screen=1 --fs, mpv will actually fullscreen
on screen 0, because --fs-screen isn't set. Instead of doing that, fall
back to whatever --screen is set to.
Whenever I deal with this, I have to look at the code to make sense of
this. And beyond that, there are some strange inconsistencies. (I think
this code is cursed. It always was, and maybe always will be.)
Although the manpage claimed that using multiple items for -add etc. is
deprecated, string list options didn't warn against it. So add the
warning, and add something in the changelog (even though nobody will
ever read this).
The manpage mentioned --vf-append, but this didn't even exist. So add
it, I guess. We encourage using -append for the other option types, so
for consistency, it should work on filter options. (And I already
tricked myself into believing it existed when I mentioned it in the
manpage.)
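So this now works, appending one filter per use (analogous to -append
on the other list option types):

    mpv --vf-append=hflip --vf-append=vflip video.mkv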
Make the "operations" table separate for all option types, and mention
the option type on every single one of the top-level list options.
the Apple Remote has long been deprecated and abandoned by Apple.
current macs don't come with support for it anymore. support might be
re-added with the next commit.
Seems like this was silently changed to enabled by default on the change
to libplacebo, without adjusting the manpage. Fix the documented
default.
Also add a comment about Nvidia; see referenced issue.
Fixes: #7245
Merged from mpv-repl git repo commit 5ea2bf64f9c239f0326b02. Some
changes were made on top of it:
- Tabs were converted to 4 spaces indentation (plus some manual
indentation fixes in some places).
- All user-visible mentions of "repl" were renamed to "console".
- The README was converted to a manpage (with heavy changes, some
additions taken from stats.rst; rossy converted the key bindings
table to RST).
- The method to change the default key binding was changed.
- Change minor detail about "font" default value setting (not a
functional change).
- Integrate into the player as builtin script, including an option to
prevent loading it.
Above changes and commit message done by wm4.
Signed-off-by: wm4 <wm4@nowhere>
I often watch sporting events. On many occasions I get files with the
same filename for each session. For example, for F1 I might have the
following directory structure:
F1/
    FP1.mkv
    FP2.mkv
    FP3.mkv
    Qualification.mkv
    Race.mkv
Since usually one simply watches one race after the other, I usually
just rsync the new event's files over the old ones, so, for example,
Race.mkv will be replaced from the file for the last event with the file
from the new event.
One problem with this is that I like to use --resume-playback for other
kinds of media, so I have it on by default. That works great for, say, a
movie, but doesn't work so well with this scheme, because you can
trivially forget to pass --no-resume-playback on the command line and
end up 2 hours in, watching spoilers as the race results scroll down the
screen :-)
This patch adds a new option, --resume-playback-check-mtime, which
validates that the file's mtime hasn't changed since the watch_later
configuration was saved. It does this by setting the watch_later
configuration to have the same mtime as the file after it is saved.
Switching back and forth between checking mtime and not checking mtime
works fine, as we only choose whether to compare based on it, but we
update the watch_later configuration mtime regardless of its value.
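Mechanically, the save side just copies the media file's mtime onto
the watch_later file; a POSIX sketch (hypothetical names):

    #include <stdbool.h>
    #include <sys/stat.h>
    #include <utime.h>

    // After writing the watch_later config, stamp it with the media
    // file's mtime, so a later run can detect that the media file was
    // replaced in the meantime.
    static bool stamp_resume_file(const char *media, const char *resume)
    {
        struct stat st;
        if (stat(media, &st) != 0)
            return false;
        struct utimbuf t = {.actime = st.st_atime, .modtime = st.st_mtime};
        return utime(resume, &t) == 0;
    }

    // On load: resume only if both files report the same st_mtime.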
I have no idea why this still exists, since we have --input-ipc-server.
I think there was something about Windows, but the latter option is
implemented even on Windows.
It sometimes happens that HLS streams freeze because the HTTP server is
not responding for a fragment (or something similar, the exact
circumstances are unknown). The --timeout option didn't affect this,
because it's never set on HLS recursive connections (these download the
fragments, while the main connection likely does nothing and just wastes a
TCP socket).
Apply an elaborate hack on top of an existing elaborate hack to somehow
get these options set. Of course this could still break easily, but hey,
it's ffmpeg, it can't not try to fuck you over. I'm so fucking sick of
ffmpeg's API bullshit, especially wrt. HLS.
Of course the change is sort of pointless. For HLS, GET requests should
just be aggressively retried (because they're not "streamed", they're just
actual files on a CDN), while normal HTTP connections should probably
not be made this fragile (they could be streamed, i.e. they are backed
by some sort of real time encoder, and block if there is no data yet).
The 1 minute default timeout is too high to save playback if this
happens with HLS.
Vaguely related to #5793.
Until now, we've made FFmpeg use the default network timeout - which is
apparently infinite. I don't know if this was changed at some point,
although it seems likely, as I was sure there was a more useful default.
For most use cases, a smaller timeout is more useful (for example
recording something in the background), so force a timeout of 1 minute.
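In API terms this amounts to passing an option dictionary to the open
call, something like (a sketch; the http/tcp "timeout" AVOption takes
microseconds, if I remember correctly):

    #include <libavformat/avformat.h>

    static int open_with_timeout(AVFormatContext **fmt, const char *url)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "timeout", "60000000", 0); // 1 minute in us
        int ret = avformat_open_input(fmt, url, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }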
See: #5793
Until now, each .c file in test/ was built as a separate, self-contained
binary. Each binary could be run to execute the tests it contained.
Change this and make them part of the normal mpv binary. Now the tests
have to be invoked via the --unittest option. Do this for two reasons:
- Tests now run within a "properly" initialized mpv instance, so all
services are available.
- Possibly simplifying the situation for future build systems.
The first point is the main motivation. The mpv code is entangled with
mp_log and the option system. It feels like a bad idea to duplicate some
of the initialization of this just so you can call code using them.
I'm also getting rid of cmocka. There wouldn't be any problem to keep it
(it's a perfectly sane set of helpers), but NIH calls. I would have had
to aggregate all tests into a CMUnitTest list, and I don't see how I'd
get different types of entry points easily. Probably easily solvable,
but since we made only pretty basic use of this library, NIH-ing this is
actually easier (I needed a list of tests with custom metadata anyway,
so all that was left was to reimplement the assert_* helpers).
Unit tests now don't output anything, and if they fail, they'll simply
crash and leave a message that typically requires inspecting the test
code to figure out what went wrong (and probably editing the test code
to get more information). I even merged the various test functions into
single ones. Sucks, but here you go.
chmap_sel.c is merged into chmap.c, because I didn't see the point of
this being separate. json.c drops the print_message() to go along with
the new silent-by-default idea, also there's a memory leak fix unrelated
to the rest of this commit.
The new code is enabled with --enable-tests (--enable-test goes away).
Due to waf's option parser, --enable-test still works, because it's a
unique prefix to --enable-tests.
The use of glXGetCurrentDisplay() restricted this to the GLX backend.
But actually it works under EGL as well. Removing the GLX-specific call
and using the general mpv-internal method to get the X "Display" makes
it work under EGL in mpv as well.
I didn't know this. Nvidia didn't list this as extension in the EGL
context when I still used their GPUs.
Note that this might in theory break use of vdpau in some libmpv clients
using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
now. Considering that vaapi is preferable these days, it's not bad at
all if these clients get "broken". They can be easily fixed by passing
the display to mpv correctly.
(Only half of the buffer is actually used in a useful way, see manpage
or commit which added the option.)
Might have some advantages with broken network filesystem drivers.
See: #6802
In some corner cases (see #6802), it can be beneficial to use a larger
stream buffer size. Use this as argument to rewrite everything for no
reason.
Turn stream.c itself into a ring buffer, with configurable size. The
latter would have been easily achievable with minimal changes, and the
ring buffer is the hard part. There is no reason to have a ring buffer
at all, except possibly if ffmpeg doesn't fix their awful mp4 demuxer, and
some subtle issues with demux_mkv.c wanting to seek back by small
offsets (the latter was handled with small stream_peek() calls, which
are unneeded now).
In addition, this turns small forward seeks into reads (where data is
simply skipped). Before this commit, only stream_skip() did this (which
also means that stream_skip() simply calls stream_seek() now).
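The price is that every copy out of the buffer has to deal with
wraparound; roughly (sketch with assumed names):

    #include <stdint.h>
    #include <string.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    // Copy len bytes, starting at logical stream position pos, out of
    // a ring buffer of buffer_size bytes (the range must be buffered).
    static void ring_copy(char *dst, const char *buffer,
                          size_t buffer_size, int64_t pos, size_t len)
    {
        size_t idx = pos % buffer_size;           // physical start
        size_t first = MIN(len, buffer_size - idx);
        memcpy(dst, buffer + idx, first);         // up to physical end
        memcpy(dst + first, buffer, len - first); // wrapped remainder
    }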
Replace all stream_peek() calls with something else (usually
stream_read_peek()). The function was a problem, because it returned a
pointer to the internal buffer, which is now a ring buffer with
wrapping. The new function just copies the data into a buffer, and in
some cases requires callers to dynamically allocate memory. (The most
common case, demux_lavf.c, required a separate buffer allocation anyway
due to FFmpeg "idiosyncrasies".) This is the bulk of the demuxer_*
changes.
I'm not happy with this. There still isn't a good reason why there
should be a ring buffer, that is complex, and most of the time just
wastes half of the available memory. Maybe another rewrite soon.
It also contains bugs; you're an alpha tester now.
Not like anyone reads it. Although putting all this text before listing
the allowed option values sort of has the intention to discourage users
from using the option at all. Advertise Ctrl+h, which is a decent way of
enabling hardware decoding temporarily.
The user can raise the number of tolerated hardware decoding errors. On
the other hand, we have a static limit on packets that are "saved" for
fallback handling (and that's a good idea to avoid unbounded memory
usage). In this case, it could happen that the start of a file was fine
after a fallback, but after that buffered amount of data, it would
suddenly skip.
It's more useful to skip buffering entirely if the number of tolerated
decoding errors exceeds the fixed buffer.
(And also, I'm sure nobody gives a shit about this feature.)
The statement about the display FPS is outdated by several years.
"audio"-sync mode does not use the display FPS anymore, and that it's
X11 only also isn't true anymore.
These modes have separate implementations for the audio and
display-video sync modes, so the explanations are separate.
Why the hell are users playing around with this anyway? The explanations
are probably too special to make sense for anyone who doesn't know the
code (and who knows the code doesn't need them anyway), but whatever.
This is mostly just because of the odd RGB default gamma issue, which
shouldn't have any real impact. This also sets allow_approximate_gamma,
which I hope is fine for normal use cases.
Raise swscale and zimg default parameters. This restores screenshot
quality settings (maybe) unset in the commit before. Also expose some
more libswscale and zimg options.
Since these options are also used for VOs like x11 and drm, this will
make x11/drm/etc. much slower. For compensation, provide a profile that
sets the old option values: sw-fast. I'm also enabling zimg here, just
as an experiment.
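I.e. if you use one of the software conversion VOs:

    mpv --vo=x11 --profile=sw-fast video.mkv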
The core problem is that we have a single set of command line options
which control the settings used for most swscale/zimg uses. This was
done in the previous commit. It cannot differentiate between the VOs,
which need to be realtime and may accept/require lower quality options,
and things like screenshots or vo_image, which can be slower, but should
not sacrifice quality by default.
Should this have two sets of options or something similar to do the
right thing depending on the code which calls libswscale? Maybe. Or
should I just ignore the problem, make it someone else's problem (users
who want to use software conversion VOs), provide a sub-optimal
solution, and call it a day? Definitely, sounds good, pushing to master,
goodbye.
By default utilizes the color space of the desktop on which the
swap chain is located. If a specific value is defined, it will be
utilized instead.
Enables configuration of the PQ color space (BT.2020 primaries,
PQ transfer function) for HDR.
Additionally, signals the swap chain color space to the renderer,
so that the render looks correct without having to specify
target-trc or target-prim manually.
Due to all of the APIs being Win10+ only, this will only work starting
with Windows 10.
Enabling this by default probably causes a number of issues, such as
breaking vo_sdl, or reacting to various input devices while the window
is not focused. It's also pretty obscure, or at least new. Disable it by
default.
This doesn't take a ',' separated list. --script is just an alias for
--scripts-append. --scripts accepts a list, but uses the
mplayer-inherited platform-dependent path separator.
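To spell it out (Unix shown; Windows uses ";" as path separator):

    mpv --scripts=foo.lua:bar.lua video.mkv
    mpv --script=foo.lua --script=bar.lua video.mkv  # equivalent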
Fixes: #5996
Internally, vo_gpu uses NaN for some options to indicate a default value
that is different depending on the context (e.g. different scalers).
There are 2 problems with this:
1. you couldn't reset the options to their defaults
2. NaN is a damn mess and shouldn't be part of the API
The option parser already rejected NaN explicitly, which is why 1.
didn't work. Regarding 2., JSON might be a good example, and actually
caused a bug report.
Fix this by mapping NaN to the special value "default". I think I'd
prefer other mechanisms (maybe just having every scaler expose separate
options?), but for now this will do. See you in a future commit, which
painfully deprecates this and replaces it with something else.
I refrained from using "no" (my favorite magic value for "unset" etc.)
because then I'd have to e.g. make --no-scale-param1 work, which in
addition to being a lot of effort looks dumb and nobody will use it.
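So you can now do e.g.:

    # reset the parameter to the current scaler's own default
    mpv --scale-param1=default video.mkv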
Here's also an apology for the shitty added test script.
Fixes: #6691
The code is very basic:
- only handles gamepads, could be extended for generic joysticks in the
future.
- only has button mappings for controllers natively supported by SDL2.
I heard more can be added through env vars, there's also ways to load
mappings from text files, but I'd rather not go there yet. Common ones
like Dualshock are supported natively.
- analog buttons (TRIGGER and AXIS) are mapped to discrete buttons using an
activation threshold.
- only supports one gamepad at a time. the feature is intended to use
gamepads as evolved remote controls, not play multiplayer games in mpv :)
Awful shit. I probably wouldn't accept this code from someone else, just
so you know.
The idea is that a sws_utils user can automatically use zimg without
large code changes. Basically, laziness. Since zimg support is still
very new, and I don't want that anything breaks just because zimg was
enabled at build time, an option needs to be set to enable it. (I have
especially obscure stuff in mind, which is all that
libswscale is used for in mpv.)
This _still_ doesn't cause zimg to be used anywhere, because the
sws_utils user has to opt-in by setting allow_zimg. This is because some
users depend on certain libswscale features.
This provides a very similar API to sws_utils.h, which can be used to
convert and scale from one mp_image to another.
This commit adds only the code, but does not use it anywhere.
The code is quite preliminary and barely tested. It supports only a few
pixel formats, and will return failure for many others. (Unlike
libswscale, which tries to support anything that FFmpeg knows.)
zimg itself accepts only planar formats. Supporting other formats
requires manual packing/unpacking. (Compared to libswscale, the zimg API
is generally lower level, but allows for more flexibility.) Only BGR0
output was actually tested. It appears to work.
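From memory, usage is along these lines (names assumed from what this
commit adds; double-check zimg.h for the real interface):

    #include "video/zimg.h"

    // Convert/scale src into dst; parameters are taken from the two
    // images. Hypothetical sketch, not guaranteed to match the API.
    static bool convert(struct mp_image *dst, struct mp_image *src)
    {
        struct mp_zimg_context *z = mp_zimg_alloc();
        bool ok = mp_zimg_convert(z, dst, src);
        talloc_free(z);
        return ok;
    }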
On an audio/video desync by more than 0.5 seconds, display-sync mode was
disabled, and not enabled again (until playback restart, e.g. a seek).
The idea was that this only happens when this playback mode is broken
and can't perform well anyway (A/V desync is a clear indication that
something is very wrong). Instead of behaving like a god damn POS, it
should revert to the more robust audio-sync mode.
Unfortunately, this could happen sporadically due to temporary system
performance problems, such as toggling fullscreen. Users didn't like
this, and asked for a function to disable it, or to recover in some
other way.
This mechanism is questionable anyway. If an ignorant user enables
display-sync, and encounters problems with it (without being able to
determine that display-sync is messing up), the player will still behave
like a POS on every playback, and even after every seek. It might
actually be helpful to fail more consistently. Also, I've found that
it's still relatively reliable anyway even without this mechanism.
So just remove the fallback.
Fixes: #7048
OK, so --cache-secs is useless, because the default is set to 10 hours.
And that part about the "maximum" was obviously a lie (I wonder if it
simply changed at some point).