c784820454 introduced a bool option type
as a replacement for the flag type, but didn't actually transition and
remove the flag type because it would have been too much mundane work.
Treat them as http:// and https:// respectively. This allows playing
files on webdav shares directly on KDE, avoiding the (extremely slow)
local copying performed by kio.
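A sketch of the mapping idea (hypothetical helper, not mpv's actual
protocol handling; the two schemes rewritten are assumed to be
webdav:// and webdavs://):

    #include <stdlib.h>
    #include <string.h>

    /* Map webdav:// -> http:// and webdavs:// -> https:// by swapping
     * the URL prefix; anything else passes through unchanged. */
    static char *webdav_to_http(const char *url)
    {
        const char *scheme, *rest;
        if (!strncmp(url, "webdavs://", 10)) {
            scheme = "https://";
            rest = url + 10;
        } else if (!strncmp(url, "webdav://", 9)) {
            scheme = "http://";
            rest = url + 9;
        } else {
            return strdup(url);
        }
        char *out = malloc(strlen(scheme) + strlen(rest) + 1);
        if (out) {
            strcpy(out, scheme);
            strcat(out, rest);
        }
        return out;
    }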
ffmpeg 5.1 adds support for IPFS (the ipns:// and ipfs:// protocols).
This patch makes those protocols first-class citizens in mpv,
allowing IPFS resources to be played like:
"mpv ipfs://QmbGtJg23skhvFmu9mJiePVByhfzu5rwo74MEkVDYAmF5T"
In Linux kernel commit 819fbd3d8ef36c09576c2a0ffea503f5c46e9177
these two header files were moved to staging (though they've since
been moved out again by Linus).
We do not actually use this, and it's in a state of maybe-removal
from the kernel as of Linux 5.14. Get rid of it; mpv still builds
fine without it, so it wasn't needed anyway.
Fixes #9233.
This can cause stutter on remote files, because in certain cases this
causes a reconnect to the remote that leads to the file not being read
fast enough. VLC had the same problem and fixes it the same way
(b8b8c438f8).
Fixes #4434
The args struct is reused to attempt opening a URL with
different stream layers; overwriting args->url not only
breaks this, but also causes the freed buffer to be used again.
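The bug pattern, sketched with hypothetical names (open_with_layer
stands in for the actual stream-layer entry point):

    #include <stdlib.h>
    #include <string.h>

    struct open_args { const char *url; /* ... shared across retries */ };

    int open_with_layer(const char *url);  /* assumed helper */

    static int try_open(struct open_args *args)
    {
        /* BAD: args is reused for the next stream layer, so overwriting
         * args->url breaks retries and can leave a dangling pointer:
         *     args->url = rewrite_url(args->url);
         */

        /* OK: operate on a private copy; args->url stays intact. */
        char *url = strdup(args->url);
        if (!url)
            return -1;
        int r = open_with_layer(url);
        free(url);
        return r;
    }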
Gopher over TLS (gophers) is a community-adopted protocol and has
recently been merged into:
- curl (a1f06f32b8603427535fc21183a84ce92a9b96f7)
- ffmpeg (51367267c8a9f1a840f5e810f8c788e6e03712a5)
Recent versions of mpv have applied security checks to --playlist
that previously only existed if playlist files were played as an
input directly. This commit documents this change and how to work
around it, in the event that playlist files are trusted.
Add support for reading a byte range from a stream via
the `slice://` protocol.
Syntax is `slice://start[-end]@URL` where end is a maximum
(read until end or EOF).
Support for size suffixes in `m_option` is reused, so they can
be used with start/end.
This can be very useful with e.g. large MPEG-TS streams with
corruption, timestamp jumps, or other issues in them.
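For example (illustrative invocations; the exact size-suffix spellings
are whatever `m_option` accepts):
"mpv slice://1073741824@/path/to/cap.ts" (skip the first GiB, read to EOF)
"mpv slice://100MiB-200MiB@/path/to/cap.ts" (read only that byte range)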
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
Change it to strictly accept local paths only. No more http://, no more
$HOME expansion with "~/" or mpv config path expansion with "~~/". This
should behave as if passing a path directly to open().
Reduce annoying log noise to further facilitate using it as an
open() replacement.
The header probing hacks were previously all broken. They only worked
the first time the archive file was opened. Since subsequent opens (on
seek) occurred in the middle of the source stream rather than at the
beginning, the stream_read_peek calls meant to retrieve the headers were
instead returning random bytes from the middle of the file.
Perhaps the worst manifestation of this was when seeking within a
multi-volume .rar archive with the "legacy" file naming pattern. If the
seek required a reopen, the fact that the archive was multi-volume would
be forgotten and the file would appear truncated, terminating playback.
To solve this, only perform the header probing the first time the
archive is opened. Save the results and reuse them on subsequent
reopens. Put this in a wrapper so this is transparent to
demux_libarchive.
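A sketch of the wrapper (hypothetical names; stream_read_peek is the
peek call the text above refers to, declared here as assumed):

    #include <stdbool.h>
    #include <stdint.h>

    #define PROBE_SIZE 8192   /* hypothetical probe window */

    struct stream;                                        /* assumed */
    int stream_read_peek(struct stream *s, void *buf, int size);

    struct probe_wrapper {
        struct stream *src;
        bool probed;                 /* already probed once? */
        uint8_t header[PROBE_SIZE];  /* saved header bytes */
        int header_len;
    };

    /* Only valid while src is still at position 0, i.e. on the first
     * open; later reopens reuse the saved bytes instead of peeking
     * random data from the middle of the file. */
    static void probe_once(struct probe_wrapper *w)
    {
        if (w->probed)
            return;
        w->header_len = stream_read_peek(w->src, w->header, PROBE_SIZE);
        w->probed = true;
    }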
I couldn't find any reason for this message to be so far displaced from
where its necessity was determined. That necessity is not, however, in
question.
Also improve the wording and line breaking.
The call was hidden very well, via
dvb_streaming_read -> dvb_update_config
-> dvb_streaming_start -> dvb_set_channel,
and broke the stream buffering logic.
Dropping that call does not noticeably slow down channel switches.
The buffer can be larger than the normal size when "peeking" is used
(such as with some file formats, where a large number of bytes may
need to be "peeked" at the beginning, because FFmpeg). Once normal
operation resumes, it's supposed to free this buffer again. Apparently
this didn't happen as intended, because normal reading had no way to
discard the back buffer before/while resizing the buffer. There's only a
path for discarding the back buffer when actually reading.
It seems like this unfortunately needs 2 code paths for discarding old
data. Just put it into stream_resize_buffer(), where it's rather
non-tricky (discarding can be done by adjusting the copy offset when
moving data to the new allocation). The function now drops old data if
it doesn't fit into the allocation. The caller must ensure that the new
size is sufficient; the function signature changes only so the size of
the implicitly guaranteed kept part can be checked with assert().
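Roughly, with a trimmed-down buffer struct (hypothetical field names;
allocation failure handling omitted):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct buf_sketch {
        uint8_t *buffer;
        size_t buf_start, buf_end;   /* buffered data, oldest first */
    };

    /* Move data into the new allocation; if it doesn't fit, drop the
     * oldest part by shifting the copy offset forward. The caller must
     * ensure new_size still covers everything that must be kept. */
    static void resize_buffer(struct buf_sketch *s, size_t new_size)
    {
        uint8_t *nbuf = malloc(new_size);
        size_t avail = s->buf_end - s->buf_start;
        size_t skip = avail > new_size ? avail - new_size : 0;
        size_t copy = avail - skip;
        memcpy(nbuf, s->buffer + s->buf_start + skip, copy);
        free(s->buffer);
        s->buffer = nbuf;
        s->buf_start = 0;
        s->buf_end = copy;
    }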
Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize the .offset and .type fields only,
sometimes also .priv and others.
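For illustration, with a simplified stand-in for the real macro and a
hypothetical option name:

    #include <stddef.h>

    /* simplified: fills only .type and .offset */
    #define OPT_INT(field) \
        .type = &m_option_type_int, \
        .offset = offsetof(OPT_BASE_STRUCT, field)

    /* before: the whole initializer came out of the macro
     *     OPT_INT("mystery-opt", mystery_opt, 0),
     * after: the name leads, then designated initializers: */
    {"mystery-opt", OPT_INT(mystery_opt)},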
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c the author of some option declarations confused the
flags parameter with the default value (though the default value was
also properly set below). I fixed this with this change.
Before this commit, option declarations used M_OPT_MIN/M_OPT_MAX (and
some other identifiers based on these) to signal whether an option had
min/max values. Remove these flags, and make an option use a range
implicitly, on the condition that min<max is true.
This requires care in all cases when only M_OPT_MIN or M_OPT_MAX were
set (instead of both). Generally, the commit replaces all these
instances with using DBL_MAX/DBL_MIN for the "unset" part of the range.
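Schematically (a sketch with a trimmed-down struct, not the real
m_option):

    #include <float.h>
    #include <stdbool.h>

    struct m_option_sketch {
        double min, max;    /* the real m_option also carries these */
    };

    /* A range is in effect exactly if min < max; one-sided ranges
     * leave the unset end at DBL_MAX (or DBL_MIN, as described above),
     * so the condition still holds. */
    static bool has_range(const struct m_option_sketch *opt)
    {
        return opt->min < opt->max;
    }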
This also happens to fix some cases where you could pass over-large
values to integer options, which were silently truncated, but now cause
an error.
This commit has some higher potential for regressions.
This was mostly unused, and has certain problems. Just get rid of it.
It was still used in CDDA (--cdda-span) and a debug option for OpenGL
(--opengl-check-pattern). Replace each of them with 2 options, which
set the start/end values of the former span. Both were
undocumented somehow (normally we require all options to be documented),
so I'm not caring about compatibility, and not bothering to add it to
the API changelog.
This required libsmbclient, which is a heavy dependency, and as a
library, has all kinds of problems. For one, the API requires completely
unacceptable global state (in particular, leaks auth state), and is not
thread-safe (meaning concurrent reads to multiple files block each
other).
There are better replacements: you can use the Linux kernel's builtin
CIFS support, fusesmb, or contribute support for libdsm.
It appears using lseek() to seek to the end and back to determine file
size is inefficient in some cases.
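Conceptually the change is (sketch; mpv's real code differs in
details):

    #include <stdint.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* new: one fstat() call, no seeking */
    static int64_t file_size_fstat(int fd)
    {
        struct stat st;
        if (fstat(fd, &st) == 0 && S_ISREG(st.st_mode))
            return st.st_size;
        return -1;
    }

    /* old, for contrast: two extra seeks per size query
     *     off_t pos = lseek(fd, 0, SEEK_CUR);
     *     off_t end = lseek(fd, 0, SEEK_END);
     *     lseek(fd, pos, SEEK_SET);
     */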
With CIFS, this fixes the performance regression that happened when
the stream cache was removed (which called read() from a thread). This
is probably faster than the old code too, because it's the seeking that
was slowing down CIFS.
According to the user who tested this, the size caching does not help
with fstat() (although it did with the old method).
Fixes: #7408, #7152
Libav seems rather dead: no release for 2 years, no new git commits in
master for almost a year (with one exception ~6 months ago). From what I
can tell, some developers resigned themselves to the horrifying idea to
post patches to ffmpeg-devel instead, while the rest of the developers
went on to greener pastures.
Libav was a better project than FFmpeg. Unfortunately, FFmpeg won,
because it managed to keep the name and website. Libav was pushed more
and more into obscurity: while there was initially a big push for Libav,
FFmpeg just remained "in place" and visible for most people. FFmpeg was
slowly draining all manpower and energy from Libav. A big part of this
was that FFmpeg stole code from Libav (regular merges of the entire
Libav git tree), making it some sort of Frankenstein mirror of Libav,
think decaying zombie with additional legs ("features") nailed to it.
"Stealing" surely is the wrong word; I'm just aping the language that
some of the FFmpeg members used to use. All that is in the past now, I'm
probably the only person left who is annoyed by this, and with this
commit I'm putting this decade-long problem finally to an end. I just
thought I'd express my annoyance about this fucking shitshow one last
time.
The most intrusive change in this commit is the resample filter, which
originally used libavresample. The FFmpeg developers refused to enable
libavresample by default for drama reasons, and the API was slightly
different, so the filter used a big preprocessor mess to make it
compatible with libswresample. All of that falls away now. The
simplification to the build system is also significant.
stream_seek() might somewhat show up in the profiler, even if it's a
no-op, because of the MP_TRACE() call. I find this annoying. Otherwise,
this should be of no consequence, and should not break or improve
anything.
Some cache logic in demux.c queries the raw byte stream size on every
packet read. This is because it reports the value to the user. (It has
to be polled like this because there is no change notification in most
underlying I/O APIs, and also the user can't just block on the demuxer
thread to update it explicitly.)
This causes a very high number of get_size calls with low packet sizes,
so cache the size, and update it on every read. Actual reads only happen
approximately every 64 KB with default settings, which is far less
frequent than every packet in such extreme cases.
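A sketch of the idea, with hypothetical field names:

    #include <stdint.h>

    struct stream_sketch {
        int64_t cached_size;   /* served to demux.c's cache logic */
        int64_t (*get_size_raw)(struct stream_sketch *s);  /* expensive */
        int (*fill_buffer)(struct stream_sketch *s, void *buf, int len);
    };

    /* cheap: called once per packet read */
    static int64_t stream_get_size(struct stream_sketch *s)
    {
        return s->cached_size;
    }

    /* actual I/O happens only about every 64 KB; refresh the cache here */
    static int stream_fill(struct stream_sketch *s, void *buf, int len)
    {
        int r = s->fill_buffer(s, buf, len);
        s->cached_size = s->get_size_raw(s);
        return r;
    }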
In theory, this could cause problems in some cases. Actually, this
whole commit is complete nonsense, because why micro-optimize for
broken cases like patent troll codecs. I don't need to justify it
anyway.
As a minor detail, off_t is actually specified as signed, so the off_t
cast is never needed.
Unfortunately, libarchive detects a stream of 0s as tar, as demonstrated
by "mpv /dev/zero". This is inconvenient in some cases.
One example is the .cue demuxer trying to open a raw audio .bin file,
which it allows only if probing fails (as .bin is raw and normally will
not look like any real file format). Although this use-case is
worthless.
The cdio API always reads in sectors (fixed CDIO_CD_FRAMESIZE_RAW
blocks). In the past, mpv/MPlayer streams had a way for a stream to
signal a sector size, so the stream's fill_buffer implementation could
ignore the length argument. Later, that was removed, but stream_cdda.c
was left with assuming that the read size was always larger than the
sector size (rightfully so, at the time). Even later, this assumption
was broken with commit f37f4de, when it suddenly became possible for
smaller reads to be performed (at ring buffer boundaries). It returned
EOF if the buffer size was too small, so playback stopped very early.
Fix this by explicitly handling arbitrary sizes.
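Schematically (hypothetical names; read_one_sector stands in for the
actual cdio read call):

    #include <stdint.h>
    #include <string.h>

    #define SECTOR_SIZE 2352   /* CDIO_CD_FRAMESIZE_RAW */

    struct cdda_sketch {
        uint8_t sector[SECTOR_SIZE];
        int pos, len;            /* unread bytes are sector[pos..len) */
    };

    int read_one_sector(struct cdda_sketch *p);  /* assumed helper */

    /* Satisfy a read of any size from the buffered sector, refilling
     * when it runs dry, instead of returning EOF when len is smaller
     * than a sector. */
    static int cdda_fill_buffer(struct cdda_sketch *p, uint8_t *buf,
                                int len)
    {
        if (p->pos >= p->len) {
            if (read_one_sector(p) < 0)
                return 0;        /* real EOF */
            p->pos = 0;
            p->len = SECTOR_SIZE;
        }
        int n = len < p->len - p->pos ? len : p->len - p->pos;
        memcpy(buf, p->sector + p->pos, n);
        p->pos += n;
        return n;
    }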
Tested with a .cue/.bin file only.
Fixes: #7384