/*
 * Copyright (C) 2004 Michael Niedermayer <michaelni@gmx.at>
 *
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with mpv. If not, see <http://www.gnu.org/licenses/>.
 */

#include <stdlib.h>
#include <limits.h>
#include <stdbool.h>
#include <string.h>
#include <errno.h>
#include <assert.h>

#include "config.h"

#include <libavformat/avformat.h>
#include <libavformat/avio.h>

#include <libavutil/avutil.h>
#include <libavutil/avstring.h>
#include <libavutil/display.h>
#include <libavutil/dovi_meta.h>
#include <libavutil/mathematics.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
#include <libavutil/replaygain.h>

#include "audio/chmap_avchannel.h"

#include "common/common.h"
#include "common/msg.h"
#include "common/tags.h"
#include "common/av_common.h"
#include "misc/bstr.h"
#include "misc/charset_conv.h"
#include "misc/thread_tools.h"

#include "stream/stream.h"
#include "demux.h"
#include "stheader.h"
#include "options/m_config.h"
#include "options/m_option.h"
#include "options/path.h"

#ifndef AV_DISPOSITION_TIMED_THUMBNAILS
#define AV_DISPOSITION_TIMED_THUMBNAILS 0
#endif
#ifndef AV_DISPOSITION_STILL_IMAGE
#define AV_DISPOSITION_STILL_IMAGE 0
#endif

#define INITIAL_PROBE_SIZE STREAM_BUFFER_SIZE
#define PROBE_BUF_SIZE (10 * 1024 * 1024)

// Should correspond to IO_BUFFER_SIZE in libavformat/aviobuf.c (not public)
// libavformat (almost) always reads data in blocks of this size.
#define BIO_BUFFER_SIZE 32768

#define OPT_BASE_STRUCT struct demux_lavf_opts

struct demux_lavf_opts {
    int probesize;
    int probeinfo;
    int probescore;
    float analyzeduration;
    int buffersize;
    bool allow_mimetype;
    char *format;
    char **avopts;
    bool hacks;
    char *sub_cp;
    int rtsp_transport;
    int linearize_ts;
    bool propagate_opts;
};

const struct m_sub_options demux_lavf_conf = {
    .opts = (const m_option_t[]) {
        {"demuxer-lavf-probesize", OPT_INT(probesize), M_RANGE(32, INT_MAX)},
        {"demuxer-lavf-probe-info", OPT_CHOICE(probeinfo,
            {"no", 0}, {"yes", 1}, {"auto", -1}, {"nostreams", -2})},
        {"demuxer-lavf-format", OPT_STRING(format)},
        {"demuxer-lavf-analyzeduration", OPT_FLOAT(analyzeduration),
            M_RANGE(0, 3600)},
        {"demuxer-lavf-buffersize", OPT_INT(buffersize),
            M_RANGE(1, 10 * 1024 * 1024), OPTDEF_INT(BIO_BUFFER_SIZE)},
        {"demuxer-lavf-allow-mimetype", OPT_BOOL(allow_mimetype)},
        {"demuxer-lavf-probescore", OPT_INT(probescore),
            M_RANGE(1, AVPROBE_SCORE_MAX)},
        {"demuxer-lavf-hacks", OPT_BOOL(hacks)},
        {"demuxer-lavf-o", OPT_KEYVALUELIST(avopts)},
        {"sub-codepage", OPT_STRING(sub_cp)},
        {"rtsp-transport", OPT_CHOICE(rtsp_transport,
            {"lavf", 0},
            {"udp", 1},
            {"tcp", 2},
            {"http", 3},
            {"udp_multicast", 4})},
        {"demuxer-lavf-linearize-timestamps", OPT_CHOICE(linearize_ts,
            {"no", 0}, {"auto", -1}, {"yes", 1})},
        {"demuxer-lavf-propagate-opts", OPT_BOOL(propagate_opts)},
        {0}
    },
    .size = sizeof(struct demux_lavf_opts),
    .defaults = &(const struct demux_lavf_opts){
        .probeinfo = -1,
        .allow_mimetype = true,
        .hacks = true,
        // AVPROBE_SCORE_MAX/4 + 1 is the "recommended" limit. Below that, the
        // user is supposed to retry with larger probe sizes until a higher
        // value is reached.
        .probescore = AVPROBE_SCORE_MAX/4 + 1,
        .sub_cp = "auto",
        .rtsp_transport = 2,
        .linearize_ts = -1,
        .propagate_opts = true,
    },
};

struct format_hack {
    const char *ff_name;
    const char *mime_type;
    int probescore;
    float analyzeduration;
    bool skipinfo : 1;          // skip avformat_find_stream_info()
    unsigned int if_flags;      // additional AVInputFormat.flags flags
    bool max_probe : 1;         // use probescore only if max. probe size reached
    bool ignore : 1;            // blacklisted
    bool no_stream : 1;         // do not wrap struct stream as AVIOContext
    bool use_stream_ids : 1;    // has meaningful native stream IDs (export them)
    bool fully_read : 1;        // set demuxer.fully_read flag
    bool detect_charset : 1;    // format is a small text file, possibly not UTF8
    // Do not confuse player's position estimation (position is into external
    // segment, with e.g. HLS, player knows about the playlist main file only).
    bool clear_filepos : 1;
    bool linearize_audio_ts : 1; // compensate timestamp resets (audio only)
    bool is_network : 1;
    bool no_seek : 1;
    bool no_pcm_seek : 1;
    bool no_seek_on_no_duration : 1;
    bool readall_on_no_streamseek : 1;
    bool first_frame_only : 1;
};

#define BLACKLIST(fmt) {fmt, .ignore = true}
#define TEXTSUB(fmt) {fmt, .fully_read = true, .detect_charset = true}
#define TEXTSUB_UTF8(fmt) {fmt, .fully_read = true}

static const struct format_hack format_hacks[] = {
    // for webradios
    {"aac", "audio/aacp", 25, 0.5},
    {"aac", "audio/aac", 25, 0.5},

    // some mp3 files don't detect correctly (usually id3v2 too large)
    {"mp3", "audio/mpeg", 24, 0.5},
    {"mp3", NULL, 24, .max_probe = true},

    {"hls", .no_stream = true, .clear_filepos = true},
    {"dash", .no_stream = true, .clear_filepos = true},
    {"sdp", .clear_filepos = true, .is_network = true, .no_seek = true},
    {"mpeg", .use_stream_ids = true},
    {"mpegts", .use_stream_ids = true},
    {"mxf", .use_stream_ids = true},
    {"avi", .use_stream_ids = true},
    {"asf", .use_stream_ids = true},
    {"mp4", .skipinfo = true, .no_pcm_seek = true, .use_stream_ids = true},
    {"matroska", .skipinfo = true, .no_pcm_seek = true, .use_stream_ids = true},

    {"v4l2", .no_seek = true},
    {"rtsp", .no_seek_on_no_duration = true},

    // In theory, such streams might contain timestamps, but virtually none do.
    {"h264", .if_flags = AVFMT_NOTIMESTAMPS },
    {"hevc", .if_flags = AVFMT_NOTIMESTAMPS },
    {"vvc", .if_flags = AVFMT_NOTIMESTAMPS },

    // Some Ogg shoutcast streams are essentially concatenated OGG files. They
    // reset timestamps, which causes all sorts of problems.
    {"ogg", .linearize_audio_ts = true, .use_stream_ids = true},

    // Ignore additional metadata sent as frames by some single-frame JPEGs
    // (e.g. gain map)
    {"jpeg_pipe", .first_frame_only = true},

    // At some point, FFmpeg lost the ability to read gif from unseekable
    // streams.
    {"gif", .readall_on_no_streamseek = true},

    TEXTSUB("aqtitle"), TEXTSUB("jacosub"), TEXTSUB("microdvd"),
    TEXTSUB("mpl2"), TEXTSUB("mpsub"), TEXTSUB("pjs"), TEXTSUB("realtext"),
    TEXTSUB("sami"), TEXTSUB("srt"), TEXTSUB("stl"), TEXTSUB("subviewer"),
    TEXTSUB("subviewer1"), TEXTSUB("vplayer"), TEXTSUB("ass"),

    TEXTSUB_UTF8("webvtt"),

    // Useless non-sense, sometimes breaks MLP2 subreader.c fallback
    BLACKLIST("tty"),
    // Let's open files with extremely generic extensions (.bin) with a
    // demuxer that doesn't have a probe function! NO.
    BLACKLIST("bin"),
    // Useless, does not work with custom streams.
    BLACKLIST("image2"),
    {0}
};

struct nested_stream {
|
|
|
|
AVIOContext *id;
|
|
|
|
int64_t last_bytes;
|
|
|
|
};
|
|
|
|
|
2019-06-09 21:00:04 +00:00
|
|
|
struct stream_info {
|
|
|
|
struct sh_stream *sh;
|
demux_lavf: compensate timestamp resets for OGG web radio streams
Some OGG web radio streams use timestamp resets when a new song starts
(you can find those Xiph's directory - other streams there don't show
this behavior). Basically, the OGG stream behaves like concatenated OGG
files, and "of course" the timestamps will start at 0 again when the
song changes. This is very inconvenient, and breaks the seekable demuxer
cache. In fact, any kind of seeking will break
This is more time wasted in Xiph's bullshit. No, having timestamp resets
by design is not reasonable, and fuck you. I much prefer the awful
ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it
wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET
HAPPENS. But it doesn't, and the randomly changing timestamps is the
only thing we get from its API.
At this point, demux_lavf.c is like 90% hacks. But well, if libavformat
applies this strange mixture of being clever for us vs. giving us
unfiltered garbage (while pretending it abstracts everything, and hiding
_useful_ implementation/low level details), not much we can do.
This timestamp linearizing would, in general, probably be better done
after the decoder, because then we wouldn't need to deal with timestamp
resets. But the main purpose of this change is to fix seeking within the
demuxer cache, so we have to do it on the lowest level.
This can probably be applied to other containers and video streams too.
But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
|
|
|
double last_key_pts;
|
|
|
|
double highest_pts;
|
|
|
|
double ts_offset;
|
2019-06-09 21:00:04 +00:00
|
|
|
};
|
|
|
|
|
2015-02-18 20:10:15 +00:00
|
|
|
typedef struct lavf_priv {
|
2015-12-28 22:22:58 +00:00
|
|
|
struct stream *stream;
|
2016-08-26 11:05:14 +00:00
|
|
|
bool own_stream;
|
2015-02-18 20:10:15 +00:00
|
|
|
char *filename;
|
|
|
|
struct format_hack format_hack;
|
2021-05-01 19:53:56 +00:00
|
|
|
const AVInputFormat *avif;
|
2015-03-20 21:10:00 +00:00
|
|
|
int avif_flags;
|
2015-02-18 20:10:15 +00:00
|
|
|
AVFormatContext *avfc;
|
|
|
|
AVIOContext *pb;
|
2019-06-09 21:00:04 +00:00
|
|
|
struct stream_info **streams; // NULL for unknown streams
|
2015-02-18 20:10:15 +00:00
|
|
|
int num_streams;
|
|
|
|
char *mime_type;
|
2016-02-22 19:21:03 +00:00
|
|
|
double seek_delay;
|
2016-09-06 18:09:56 +00:00
|
|
|
|
|
|
|
struct demux_lavf_opts *opts;
|
demux_lavf: to get effective HLS bitrate
In theory, this could be easily done with custom I/O. In practice, all
the halfassed garbage in FFmpeg shits itself and fucks up like there's
no tomorrow. There are several problems:
1. FFmpeg pretends you can do custom I/O, but in reality there's a lot
that custom I/O can't do. hls.c even contains explicit checks to disable
important things if custom I/O is used! In particular, you can't use the
HTTP keepalive functionality (needed for somewhat decent HLS
performance), because some cranky asshole in the cursed FFmpeg dev.
community blocked it.
2. The implementation of nested I/O callbacks (io_open/io_close) is
bogus and halfassed (like everything in FFmpeg, really). It will call
io_open on some URLs without ever calling io_close. Instead, it'll call
avio_close() on the context directly. From what I can tell, avio_close()
is incompatible with custom I/O anyway (overwhelmed by their own garbage,
the FFmpeg devs created the io_close callback for this reason, because
they couldn't fix their own fucking garbage). This commit adds some
shitty workaround for this (technically triggers UB, but with that
garbage heap of a library we depend on it's not like it matters).
3. Even then, you can't proxy I/O contexts (see 1.), but we can just
keep track of the opened nested I/O contexts. The bytes_read is
documented as not public, but reading it is literally the only way to
get what we want.
A more reasonable approach would probably be using curl. It could
transparently handle the keep-alive thing, as well as propagating
cookies etc. (which doesn't work with the FFmpeg approach if you use
custom I/O). Of course even better if there were an independent HLS
implementation anywhere. FFmpeg's HLS support is so embarrassingly
pathetic and just goes to show that they belong to the past
(multimedia from 2000-2010) and should either modernize or fuck off.
With FFmpeg's shit-crusted structures, toxic communities, and retarded
assholes denying progress, probably the latter. Did I already mention
that FFmpeg is a shit fucked steaming pile of garbage shit?
And all just to get some basic I/O stats, that any proper HLS consumer
requires in order to implement adaptive streaming correctly (i.e.
browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
|
|
|
|
demux_lavf: implement bad hack for backward playback of wav
This commit generally fixes backward playing in wav, at least in most
PCM cases.
libavformat's wav demuxer (and actually all other raw PCM based
demuxers) have a specific behavior that breaks backward demuxing. The
same thing also breaks persistent seek ranges in the demuxer cache,
although that's less critical (it just means some cached data gets
discarded). The backward demuxing issue is fatal, will log the message
"Demuxer not cooperating.", and then typically stop doing anything.
Unlike modern media formats, these formats don't organize media data in
packets, but just wrap a monolithic byte stream that is described by a
header. This is good enough for PCM, which uses fixed frames (a single
sample for all audio channels), and for which it would be too expensive
to have per frame headers.
libavformat (and mpv) is heavily packet based, and using a single packet
for each PCM frame causes too much overhead. So they typically "bundle"
multiple frames into a single packet. This packet size is obviously
arbitrary, and in libavformat's case hardcoded in its source code.
The problem is that seeking doesn't respect this arbitrary packet
boundary. Seeking is sample accurate. You can essentially seek inside a
packet. The resulting packets will not be aligned with previously
demuxed packets. This is normally OK.
Backward seeking (and some other demuxer layer features) expect that
demuxing an earlier demuxed file position eventually results in the same
packets, regardless of the seeks that were done to get there. I like to
call this "deterministic" demuxing. Backward demuxing in particular
requires this to avoid overlaps, which would make it rather hard to get
continuous output.
Fix this issue by detecting wav and hopefully other raw audio formats
with a heuristic (even PCM needs to be detected heuristically). Then, if
a seek is requested, align the seek timestamps on the guessed number of
samples in the audio packets returned by the demuxer.
The heuristic excludes files with multiple streams. (Except "attachment"
video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on
WAV files.) Such files will inherently use the packet concept in some
way.
We don't know how the demuxer chooses the internal packet size, but we
assume that it's fixed and aligned to PCM frame sizes. The frame size is
most likely given by block_align (the native wav frame size, according
to Microsoft). We possibly need to explicitly read and discard a packet
if the seek is done without reading anything before that. We ignore any
subsequent packet sizes; we need to avoid the very last packet, which
likely has a different size.
This hack should be rather benign. In the worst case, it will "round"
the seek target a little, but the maximum rounding amount is bounded.
Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't
bother.
An earlier commit fixed the same issue for mpv's demux_raw.
An alternative, and probably much better solution would be clipping
decoded data by timestamp. demux.c could allow the type of overlap the
wav demuxer introduces, and instruct the decoder to clip the output
against the last decoded timestamp. There's already an infrastructure
for this (demux_packet.end field) used by EDL/ordered chapters.
Although this sounds like a good solution, mpv unfortunately uses floats
for timestamps. The rounding errors break sample accuracy. Even if you
used integers, you'd need a timebase that is sample accurate (not always
easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
|
|
|
bool pcm_seek_hack_disabled;
|
|
|
|
AVStream *pcm_seek_hack;
|
|
|
|
int pcm_seek_hack_packet_size;
|
|
|
|
|
demux_lavf: compensate timestamp resets for OGG web radio streams
2019-06-09 21:39:03 +00:00
|
|
|
int linearize_ts;
|
|
|
|
bool any_ts_fixed;
|
|
|
|
|
2020-02-28 16:25:07 +00:00
|
|
|
int retry_counter;
|
|
|
|
|
demux_lavf: fight ffmpeg API some more and get the timeout set
It sometimes happens that HLS streams freeze because the HTTP server is
not responding for a fragment (or something similar, the exact
circumstances are unknown). The --timeout option didn't affect this,
because it's never set on HLS recursive connections (these download the
fragments, while the main connection likely does nothing and just wastes a
TCP socket).
Apply an elaborate hack on top of an existing elaborate hack to somehow
get these options set. Of course this could still break easily, but hey,
it's ffmpeg, it can't not try to fuck you over. I'm so fucking sick of
ffmpeg's API bullshit, especially wrt. HLS.
Of course the change is sort of pointless. For HLS, GET requests should
just be aggressively retried (because they're not "streamed", they're just
actual files on a CDN), while normal HTTP connections should probably
not be made this fragile (they could be streamed, i.e. they are backed
by some sort of real time encoder, and block if there is no data yet).
The 1 minute default timeout is too high to save playback if this
happens with HLS.
Vaguely related to #5793.
2019-11-16 12:15:45 +00:00
|
|
|
AVDictionary *av_opts;
|
|
|
|
|
demux_lavf: to get effective HLS bitrate
2018-09-01 13:28:40 +00:00
|
|
|
// Proxying nested streams.
|
|
|
|
struct nested_stream *nested;
|
|
|
|
int num_nested;
|
|
|
|
int (*default_io_open)(struct AVFormatContext *s, AVIOContext **pb,
|
|
|
|
const char *url, int flags, AVDictionary **options);
|
2023-03-26 01:20:26 +00:00
|
|
|
int (*default_io_close2)(struct AVFormatContext *s, AVIOContext *pb);
|
2015-02-18 20:10:15 +00:00
|
|
|
} lavf_priv_t;
|
|
|
|
|
demux_lavf: to get effective HLS bitrate
2018-09-01 13:28:40 +00:00
|
|
|
static void update_read_stats(struct demuxer *demuxer)
|
|
|
|
{
|
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
|
|
|
|
for (int n = 0; n < priv->num_nested; n++) {
|
|
|
|
struct nested_stream *nest = &priv->nested[n];
|
|
|
|
|
|
|
|
int64_t cur = nest->id->bytes_read;
|
|
|
|
int64_t new = cur - nest->last_bytes;
|
|
|
|
nest->last_bytes = cur;
|
2019-01-05 07:49:31 +00:00
|
|
|
demux_report_unbuffered_read_bytes(demuxer, new);
|
demux_lavf: to get effective HLS bitrate
2018-09-01 13:28:40 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-02-18 20:10:15 +00:00
|
|
|
// At least mp4 has name="mov,mp4,m4a,3gp,3g2,mj2", so we split the name
|
|
|
|
// on "," in general.
|
|
|
|
static bool matches_avinputformat_name(struct lavf_priv *priv,
|
|
|
|
const char *name)
|
|
|
|
{
|
|
|
|
const char *avifname = priv->avif->name;
|
|
|
|
while (1) {
|
|
|
|
const char *next = strchr(avifname, ',');
|
|
|
|
if (!next)
|
|
|
|
return !strcmp(avifname, name);
|
|
|
|
int len = next - avifname;
|
|
|
|
if (len == strlen(name) && !memcmp(avifname, name, len))
|
|
|
|
return true;
|
|
|
|
avifname = next + 1;
|
|
|
|
}
|
|
|
|
}
|
2013-08-07 21:15:11 +00:00
|
|
|
|
2011-07-03 12:42:04 +00:00
|
|
|
static int mp_read(void *opaque, uint8_t *buf, int size)
|
|
|
|
{
|
2010-04-23 20:50:34 +00:00
|
|
|
struct demuxer *demuxer = opaque;
|
2015-12-28 22:22:58 +00:00
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
struct stream *stream = priv->stream;
|
2019-12-23 10:09:42 +00:00
|
|
|
if (!stream)
|
|
|
|
return 0;
|
demux: make webm dash work by using init fragment on all demuxers
Retarded webshit streaming protocols (well, DASH) chop a stream into
small fragments, and move unchanging header parts to an "init" fragment
to save some bytes (in the case at hand about 300 bytes for each
fragment that is 100KB-200KB, sure was worth it, fucking idiots).
Since mpv uses an even more retarded hack to inefficiently emulate DASH
through EDL, it opens a new demuxer for every fragment. Thus the
fragment needs to be virtually concatenated with the init fragment. (To
be fair, I'm not sure whether the alternative, reusing the demuxer and
letting it see a stream of byte-wise concatenated fragments, would
actually be saner.)
demux_lavf.c contained a hack for this. Unfortunately, a certain shitty
streaming site by an evil company, that will bestow dystopia upon us soon
enough, sometimes serves webm based DASH instead of the expected mp4
DASH. And for some reason, libavformat's mkv demuxer can't handle the
init fragment or rejects it for some reason. Since I'd rather eat
mushrooms grown in Chernobyl than debugging, hacking, or (god no)
contributing to FFmpeg, and since Chernobyl is so far away, make it work
with our builtin mkv demuxer instead.
This is not hard. We just need to copy the hack in demux_lavf.c to
demux_mkv.c. Since I'm not _that_ much of a dumbfuck to actually do
this, remove the shitty gross demux_lavf.c hack, and replace it by a
slightly less bad generic implementation (stream_concat.c from the
previous commit), and use it on all demuxers. Although this requires
much more code, this frees demux_lavf.c from a hack, and doesn't require
adding a duplicated one to demux_mkv.c, so to the naive eye this seems
to be a much better outcome.
Regarding the code, for some reason stream_memory_open() is never meant
to fail, while stream_concat_open() can in extremely obscure situations,
and (currently) not in this case, but we handle failure of it anyway.
Yep.
2019-06-19 17:38:46 +00:00
|
|
|
|
|
|
|
int ret = stream_read_partial(stream, buf, size);
|
2004-04-11 15:04:54 +00:00
|
|
|
|
2014-07-05 14:42:50 +00:00
|
|
|
MP_TRACE(demuxer, "%d=mp_read(%p, %p, %d), pos: %"PRId64", eof:%d\n",
|
|
|
|
ret, stream, buf, size, stream_tell(stream), stream->eof);
|
2017-10-23 13:29:17 +00:00
|
|
|
return ret ? ret : AVERROR_EOF;
|
2004-04-11 14:26:04 +00:00
|
|
|
}
|
|
|
|
|
2011-07-03 12:42:04 +00:00
|
|
|
static int64_t mp_seek(void *opaque, int64_t pos, int whence)
|
|
|
|
{
|
2010-04-23 20:50:34 +00:00
|
|
|
struct demuxer *demuxer = opaque;
|
2015-12-28 22:22:58 +00:00
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
struct stream *stream = priv->stream;
|
2019-12-23 10:09:42 +00:00
|
|
|
if (!stream)
|
|
|
|
return -1;
|
2017-01-30 18:38:43 +00:00
|
|
|
|
2015-12-21 21:19:57 +00:00
|
|
|
MP_TRACE(demuxer, "mp_seek(%p, %"PRId64", %s)\n", stream, pos,
|
|
|
|
whence == SEEK_END ? "end" :
|
|
|
|
whence == SEEK_CUR ? "cur" :
|
|
|
|
whence == SEEK_SET ? "set" : "size");
|
2014-05-24 12:04:09 +00:00
|
|
|
if (whence == SEEK_END || whence == AVSEEK_SIZE) {
|
2018-03-01 20:48:55 +00:00
|
|
|
int64_t end = stream_get_size(stream);
|
2015-08-17 22:10:54 +00:00
|
|
|
if (end < 0)
|
2014-05-24 12:04:09 +00:00
|
|
|
return -1;
|
|
|
|
if (whence == AVSEEK_SIZE)
|
|
|
|
return end;
|
|
|
|
pos += end;
|
|
|
|
} else if (whence == SEEK_CUR) {
|
demux: make webm dash work by using init fragment on all demuxers
2019-06-19 17:38:46 +00:00
|
|
|
pos += stream_tell(stream);
|
2014-05-24 12:04:09 +00:00
|
|
|
} else if (whence != SEEK_SET) {
|
2004-04-11 14:26:04 +00:00
|
|
|
return -1;
|
2014-05-24 12:04:09 +00:00
|
|
|
}
|
2004-04-11 14:26:04 +00:00
|
|
|
|
2011-07-03 12:42:04 +00:00
|
|
|
if (pos < 0)
|
2007-08-19 21:22:27 +00:00
|
|
|
return -1;
|
2017-01-30 18:38:43 +00:00
|
|
|
|
|
|
|
int64_t current_pos = stream_tell(stream);
|
demux: make webm dash work by using init fragment on all demuxers
2019-06-19 17:38:46 +00:00
|
|
|
if (stream_seek(stream, pos) == 0) {
|
2008-08-13 00:01:31 +00:00
|
|
|
stream_seek(stream, current_pos);
|
2004-04-11 14:26:04 +00:00
|
|
|
return -1;
|
2008-08-13 00:01:31 +00:00
|
|
|
}
|
2004-04-11 15:04:54 +00:00
|
|
|
|
2014-05-24 12:03:07 +00:00
|
|
|
return pos;
|
2004-04-11 14:26:04 +00:00
|
|
|
}
|
|
|
|
|
2010-04-23 20:50:34 +00:00
|
|
|
static int64_t mp_read_seek(void *opaque, int stream_idx, int64_t ts, int flags)
|
|
|
|
{
|
|
|
|
struct demuxer *demuxer = opaque;
|
2015-12-28 22:22:58 +00:00
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
struct stream *stream = priv->stream;
|
2010-04-23 20:50:34 +00:00
|
|
|
|
2014-07-30 00:21:35 +00:00
|
|
|
struct stream_avseek cmd = {
|
|
|
|
.stream_index = stream_idx,
|
|
|
|
.timestamp = ts,
|
|
|
|
.flags = flags,
|
|
|
|
};
|
|
|
|
|
2019-12-23 10:09:42 +00:00
|
|
|
if (stream && stream_control(stream, STREAM_CTRL_AVSEEK, &cmd) == STREAM_OK) {
|
2014-10-30 21:50:44 +00:00
|
|
|
stream_drop_buffers(stream);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
return AVERROR(ENOSYS);
|
2010-04-23 20:50:34 +00:00
|
|
|
}
|
|
|
|
|
2013-12-21 19:24:20 +00:00
|
|
|
static void list_formats(struct demuxer *demuxer)
|
2011-07-03 12:42:04 +00:00
|
|
|
{
|
2013-12-21 19:24:20 +00:00
|
|
|
MP_INFO(demuxer, "Available lavf input formats:\n");
|
2018-02-12 18:28:30 +00:00
|
|
|
const AVInputFormat *fmt;
|
|
|
|
void *iter = NULL;
|
|
|
|
while ((fmt = av_demuxer_iterate(&iter)))
|
2013-12-21 19:24:20 +00:00
|
|
|
MP_INFO(demuxer, "%15s : %s\n", fmt->name, fmt->long_name);
|
2007-02-06 22:15:20 +00:00
|
|
|
}
|
|
|
|
|
2015-12-28 22:23:30 +00:00
|
|
|
static void convert_charset(struct demuxer *demuxer)
|
2015-12-16 22:54:25 +00:00
|
|
|
{
|
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
2016-09-06 18:09:56 +00:00
|
|
|
char *cp = priv->opts->sub_cp;
|
2019-12-20 11:37:26 +00:00
|
|
|
if (!cp || !cp[0] || mp_charset_is_utf8(cp))
|
2015-12-28 22:23:30 +00:00
|
|
|
return;
|
|
|
|
bstr data = stream_read_complete(priv->stream, NULL, 128 * 1024 * 1024);
|
|
|
|
if (!data.start) {
|
|
|
|
MP_WARN(demuxer, "File too big (or error reading) - skip charset probing.\n");
|
|
|
|
return;
|
2015-12-16 22:54:25 +00:00
|
|
|
}
|
2016-01-12 22:50:01 +00:00
|
|
|
void *alloc = data.start;
|
2015-12-28 22:23:30 +00:00
|
|
|
cp = (char *)mp_charset_guess(priv, demuxer->log, data, cp, 0);
|
2015-12-16 22:54:25 +00:00
|
|
|
if (cp && !mp_charset_is_utf8(cp))
|
|
|
|
MP_INFO(demuxer, "Using subtitle charset: %s\n", cp);
|
|
|
|
// libavformat transparently converts UTF-16 to UTF-8
|
2016-01-12 22:50:01 +00:00
|
|
|
if (!mp_charset_is_utf16(cp) && !mp_charset_is_utf8(cp)) {
|
2015-12-28 22:23:30 +00:00
|
|
|
bstr conv = mp_iconv_to_utf8(demuxer->log, data, cp, MP_ICONV_VERBOSE);
|
2016-01-18 10:46:28 +00:00
|
|
|
if (conv.start && conv.start != data.start)
|
|
|
|
talloc_steal(alloc, conv.start);
|
2015-12-28 22:23:30 +00:00
|
|
|
if (conv.start)
|
2016-01-12 22:50:01 +00:00
|
|
|
data = conv;
|
2015-12-28 22:23:30 +00:00
|
|
|
}
|
2016-08-26 11:05:14 +00:00
|
|
|
if (data.start) {
|
2019-06-19 14:48:46 +00:00
|
|
|
priv->stream = stream_memory_open(demuxer->global, data.start, data.len);
|
2016-08-26 11:05:14 +00:00
|
|
|
priv->own_stream = true;
|
|
|
|
}
|
2016-01-12 22:50:01 +00:00
|
|
|
talloc_free(alloc);
|
2015-12-16 22:54:25 +00:00
|
|
|
}
|
|
|
|
|
2014-06-10 21:56:05 +00:00
|
|
|
static char *remove_prefix(char *s, const char *const *prefixes)
|
demux_lavf: add support for libavdevice
libavdevice supports various "special" video and audio inputs, such
as screen-capture or libavfilter filter graphs.
libavdevice inputs are implemented as demuxers. They don't use the
custom stream callbacks (in AVFormatContext.pb). Instead, input
parameters are passed as filename. This means the mpv stream layer has
to be disabled. Do this by adding the pseudo stream handler avdevice://,
whose only purpose is passing the filename to demux_lavf, without
actually doing anything.
Change the logic how the filename is passed to libavformat. Remove
handling of the filename from demux_open_lavf() and move it to
lavf_check_file(). (This also fixes a possible bug when skipping the
"lavf://" prefix.)
libavdevice now can be invoked by specifying demuxer and args as in:
mpv avdevice://demuxer:args
The args are passed as filename to libavformat. When using libavdevice
demuxers, their actual meaning is highly implementation specific. They
don't refer to actual filenames.
Note:
libavdevice is disabled by default. There is one problem: libavdevice
pulls in libavfilter, which in turn causes symbol clashes with mpv
internals. The problem is that libavfilter includes a mplayer filter
bridge, which is used to interface with a set of nearly unmodified
mplayer filters copied into libavfilter. This filter bridge uses the
same symbol names as mplayer/mpv's filter chain, which results in symbol
clashes at link-time.
This can be prevented by building ffmpeg with --disable-filter=mp, but
unfortunately this is not the default.
This means linking to libavdevice (which in turn forces linking with
libavfilter by default) must be disabled. We try doing this by compiling
a test file that defines one of the clashing symbols (vf_mpi_clear).
To enable libavdevice input, ffmpeg should be built with the options:
--disable-filter=mp
and mpv with:
--enable-libavdevice
Originally, I tried to auto-detect it. But the resulting complications
in configure didn't seem worth the trouble.
2012-11-30 17:41:04 +00:00
|
|
|
{
|
|
|
|
for (int n = 0; prefixes[n]; n++) {
|
|
|
|
int len = strlen(prefixes[n]);
|
|
|
|
if (strncmp(s, prefixes[n], len) == 0)
|
|
|
|
return s + len;
|
|
|
|
}
|
|
|
|
return s;
|
|
|
|
}
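The helper above is small enough to exercise standalone. A runnable plain-C sketch (no mpv headers; the sample URLs in the usage below are made up):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Same logic as remove_prefix() above: return the tail of s after the first
// matching prefix, or s unchanged when no prefix matches.
static char *remove_prefix(char *s, const char *const *prefixes)
{
    for (int n = 0; prefixes[n]; n++) {
        size_t len = strlen(prefixes[n]);
        if (strncmp(s, prefixes[n], len) == 0)
            return s + len;
    }
    return s;
}
```

With the prefix list defined below, `"lavf://file.mkv"` maps to `"file.mkv"`, while a plain path passes through unchanged.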
|
|
|
|
|
2014-06-10 21:56:05 +00:00
|
|
|
static const char *const prefixes[] =
|
2012-11-30 17:41:04 +00:00
|
|
|
{"ffmpeg://", "lavf://", "avdevice://", "av://", NULL};
|
|
|
|
|
2013-07-12 19:58:11 +00:00
|
|
|
static int lavf_check_file(demuxer_t *demuxer, enum demux_check check)
|
2011-07-03 12:42:04 +00:00
|
|
|
{
|
2015-12-17 00:00:54 +00:00
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
2016-09-06 18:09:56 +00:00
|
|
|
struct demux_lavf_opts *lavfdopts = priv->opts;
|
2015-12-28 22:22:58 +00:00
|
|
|
struct stream *s = priv->stream;
|
2004-04-11 14:26:04 +00:00
|
|
|
|
2015-04-25 18:42:02 +00:00
|
|
|
priv->filename = remove_prefix(s->url, prefixes);
|
2012-11-30 17:41:04 +00:00
|
|
|
|
|
|
|
char *avdevice_format = NULL;
|
2017-02-02 17:24:27 +00:00
|
|
|
if (s->info && strcmp(s->info->name, "avdevice") == 0) {
|
2012-11-30 17:41:04 +00:00
|
|
|
// always require filename in the form "format:filename"
|
|
|
|
char *sep = strchr(priv->filename, ':');
|
|
|
|
if (!sep) {
|
2013-12-21 19:24:20 +00:00
|
|
|
MP_FATAL(demuxer, "Must specify filename in 'format:filename' form\n");
|
2013-07-11 18:08:12 +00:00
|
|
|
return -1;
|
2012-11-30 17:41:04 +00:00
|
|
|
}
|
|
|
|
avdevice_format = talloc_strndup(priv, priv->filename,
|
|
|
|
sep - priv->filename);
|
|
|
|
priv->filename = sep + 1;
|
|
|
|
}
|
|
|
|
|
2015-12-28 22:22:58 +00:00
|
|
|
char *mime_type = s->mime_type;
|
2014-01-25 21:57:52 +00:00
|
|
|
if (!lavfdopts->allow_mimetype || !mime_type)
|
|
|
|
mime_type = "";
|
2013-05-27 21:26:22 +00:00
|
|
|
|
2021-05-01 19:53:56 +00:00
|
|
|
const AVInputFormat *forced_format = NULL;
|
2013-05-27 21:26:22 +00:00
|
|
|
const char *format = lavfdopts->format;
|
2010-04-23 19:57:25 +00:00
|
|
|
if (!format)
|
2013-05-25 13:00:03 +00:00
|
|
|
format = s->lavf_type;
|
2012-11-30 17:41:04 +00:00
|
|
|
if (!format)
|
|
|
|
format = avdevice_format;
|
2010-04-23 19:57:25 +00:00
|
|
|
if (format) {
|
|
|
|
if (strcmp(format, "help") == 0) {
|
2013-12-21 19:24:20 +00:00
|
|
|
list_formats(demuxer);
|
2013-07-11 18:08:12 +00:00
|
|
|
return -1;
|
2007-02-06 22:15:20 +00:00
|
|
|
}
|
2015-02-20 13:29:56 +00:00
|
|
|
forced_format = av_find_input_format(format);
|
|
|
|
if (!forced_format) {
|
2013-12-21 19:24:20 +00:00
|
|
|
MP_FATAL(demuxer, "Unknown lavf format %s\n", format);
|
2013-07-11 18:08:12 +00:00
|
|
|
return -1;
|
2007-02-06 22:15:20 +00:00
|
|
|
}
|
|
|
|
}
|
2009-06-01 09:39:02 +00:00
|
|
|
|
2024-06-11 19:51:29 +00:00
|
|
|
    // HLS streams seem to be poorly tagged, so matching the mime type is not
|
|
|
|
// enough. Strip URL parameters and match extension.
|
|
|
|
bstr ext = bstr_get_ext(bstr_split(bstr0(priv->filename), "?#", NULL));
|
2013-05-25 13:00:03 +00:00
|
|
|
AVProbeData avpd = {
|
2024-06-11 19:51:29 +00:00
|
|
|
// Disable file-extension matching with normal checks, except for HLS
|
2023-05-25 23:11:23 +00:00
|
|
|
.filename = !bstrcasecmp0(ext, "m3u8") || !bstrcasecmp0(ext, "m3u") ||
|
2023-05-15 22:33:21 +00:00
|
|
|
check <= DEMUX_CHECK_REQUEST ? priv->filename : "",
|
2013-05-25 13:00:03 +00:00
|
|
|
.buf_size = 0,
|
2017-02-08 14:08:49 +00:00
|
|
|
.buf = av_mallocz(PROBE_BUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE),
|
2023-05-15 22:36:26 +00:00
|
|
|
.mime_type = lavfdopts->allow_mimetype ? mime_type : NULL,
|
2013-05-25 13:00:03 +00:00
|
|
|
};
|
2014-12-12 16:28:22 +00:00
|
|
|
if (!avpd.buf)
|
|
|
|
return -1;
|
2013-05-25 13:00:03 +00:00
|
|
|
|
2014-01-25 21:57:52 +00:00
|
|
|
bool final_probe = false;
|
|
|
|
do {
|
2013-05-25 13:00:03 +00:00
|
|
|
int score = 0;
|
2015-02-20 13:29:56 +00:00
|
|
|
|
|
|
|
if (forced_format) {
|
|
|
|
priv->avif = forced_format;
|
|
|
|
score = AVPROBE_SCORE_MAX;
|
|
|
|
} else {
|
|
|
|
int nsize = av_clip(avpd.buf_size * 2, INITIAL_PROBE_SIZE,
|
|
|
|
PROBE_BUF_SIZE);
|
stream: turn into a ring buffer, make size configurable
In some corner cases (see #6802), it can be beneficial to use a larger
stream buffer size. Use this as argument to rewrite everything for no
reason.
Turn stream.c itself into a ring buffer, with configurable size. The
latter would have been easily achievable with minimal changes, and the
ring buffer is the hard part. There is no reason to have a ring buffer
at all, except possibly if ffmpeg doesn't fix its awful mp4 demuxer, and
some subtle issues with demux_mkv.c wanting to seek back by small
offsets (the latter was handled with small stream_peek() calls, which
are unneeded now).
In addition, this turns small forward seeks into reads (where data is
simply skipped). Before this commit, only stream_skip() did this (which
also means that stream_skip() simply calls stream_seek() now).
Replace all stream_peek() calls with something else (usually
stream_read_peek()). The function was a problem, because it returned a
pointer to the internal buffer, which is now a ring buffer with
wrapping. The new function just copies the data into a buffer, and in
some cases requires callers to dynamically allocate memory. (The most
common case, demux_lavf.c, required a separate buffer allocation anyway
due to FFmpeg "idiosyncrasies".) This is the bulk of the demuxer_*
changes.
I'm not happy with this. There still isn't a good reason why there
should be a ring buffer, that is complex, and most of the time just
wastes half of the available memory. Maybe another rewrite soon.
It also contains bugs; you're an alpha tester now.
2019-11-06 20:36:02 +00:00
|
|
|
nsize = stream_read_peek(s, avpd.buf, nsize);
|
|
|
|
if (nsize <= avpd.buf_size)
|
2015-02-20 13:29:56 +00:00
|
|
|
final_probe = true;
|
2019-11-06 20:36:02 +00:00
|
|
|
avpd.buf_size = nsize;
|
2015-02-20 13:29:56 +00:00
|
|
|
|
|
|
|
priv->avif = av_probe_input_format2(&avpd, avpd.buf_size > 0, &score);
|
|
|
|
}
|
2012-12-08 19:14:13 +00:00
|
|
|
|
|
|
|
if (priv->avif) {
|
2015-02-20 13:29:56 +00:00
|
|
|
MP_VERBOSE(demuxer, "Found '%s' at score=%d size=%d%s.\n",
|
|
|
|
priv->avif->name, score, avpd.buf_size,
|
|
|
|
forced_format ? " (forced)" : "");
|
2014-01-25 21:57:52 +00:00
|
|
|
|
2018-04-10 10:17:03 +00:00
|
|
|
for (int n = 0; lavfdopts->hacks && format_hacks[n].ff_name; n++) {
|
2014-01-25 21:57:52 +00:00
|
|
|
const struct format_hack *entry = &format_hacks[n];
|
2015-02-18 20:10:15 +00:00
|
|
|
if (!matches_avinputformat_name(priv, entry->ff_name))
|
2014-01-25 21:57:52 +00:00
|
|
|
continue;
|
|
|
|
if (entry->mime_type && strcasecmp(entry->mime_type, mime_type) != 0)
|
|
|
|
continue;
|
2015-02-18 20:10:15 +00:00
|
|
|
priv->format_hack = *entry;
|
2014-01-25 21:57:52 +00:00
|
|
|
break;
|
|
|
|
}
|
2012-12-08 19:14:13 +00:00
|
|
|
|
demux_lavf: always find stream info for avif files
avif files will commonly be probed as "mov,mp4,m4a,3gp,3g2,mj2" by
ffmpeg, but demux_lavf currently has some logic to skip
avformat_find_stream_info for these kinds of files. It was introduced in
6f8c953042a7a964686e5923f5c61025ef6b842e. Presumably, the optimization
mentioned in that commit is still valid; however, for avif we
specifically need to do the avformat_find_stream_info call. Without it,
several codec properties like width, height, etc. are unavailable. So
just check the extension type and disable the skipinfo optimization.
2024-02-27 18:34:49 +00:00
|
|
|
// AVIF always needs to find stream info
|
|
|
|
if (bstrcasecmp0(ext, "avif") == 0)
|
|
|
|
priv->format_hack.skipinfo = false;
|
|
|
|
|
2015-02-20 13:29:56 +00:00
|
|
|
if (score >= lavfdopts->probescore)
|
2013-05-27 21:26:22 +00:00
|
|
|
break;
|
2013-05-25 13:00:03 +00:00
|
|
|
|
2015-02-18 20:10:15 +00:00
|
|
|
if (priv->format_hack.probescore &&
|
|
|
|
score >= priv->format_hack.probescore &&
|
|
|
|
(!priv->format_hack.max_probe || final_probe))
|
|
|
|
break;
|
2013-05-27 21:26:22 +00:00
|
|
|
}
|
2012-12-08 19:14:13 +00:00
|
|
|
|
|
|
|
priv->avif = NULL;
|
2015-02-18 20:10:15 +00:00
|
|
|
priv->format_hack = (struct format_hack){0};
|
2014-01-25 21:57:52 +00:00
|
|
|
} while (!final_probe);
|
2013-05-25 13:00:03 +00:00
|
|
|
|
2010-02-12 20:38:29 +00:00
|
|
|
av_free(avpd.buf);
|
|
|
|
|
2015-02-20 13:29:56 +00:00
|
|
|
if (priv->avif && !forced_format && priv->format_hack.ignore) {
|
2015-02-18 20:10:15 +00:00
|
|
|
MP_VERBOSE(demuxer, "Format blacklisted.\n");
|
|
|
|
priv->avif = NULL;
|
2013-08-07 21:15:11 +00:00
|
|
|
}
|
|
|
|
|
2012-12-08 19:14:13 +00:00
|
|
|
if (!priv->avif) {
|
2013-12-21 19:24:20 +00:00
|
|
|
MP_VERBOSE(demuxer, "No format found, try lowering probescore or forcing the format.\n");
|
2013-07-11 18:08:12 +00:00
|
|
|
return -1;
|
2012-12-08 19:14:13 +00:00
|
|
|
}
|
|
|
|
|
2017-06-07 14:48:21 +00:00
|
|
|
if (lavfdopts->hacks)
|
|
|
|
priv->avif_flags = priv->avif->flags | priv->format_hack.if_flags;
|
2015-03-20 21:10:00 +00:00
|
|
|
|
demux_lavf: compensate timestamp resets for OGG web radio streams
Some OGG web radio streams use timestamp resets when a new song starts
(you can find those in Xiph's directory - other streams there don't show
this behavior). Basically, the OGG stream behaves like concatenated OGG
files, and "of course" the timestamps will start at 0 again when the
song changes. This is very inconvenient, and breaks the seekable demuxer
cache. In fact, any kind of seeking will break.
This is more time wasted in Xiph's bullshit. No, having timestamp resets
by design is not reasonable, and fuck you. I much prefer the awful
ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it
wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET
HAPPENS. But it doesn't, and the randomly changing timestamps is the
only thing we get from its API.
At this point, demux_lavf.c is like 90% hacks. But well, if libavformat
applies this strange mixture of being clever for us vs. giving us
unfiltered garbage (while pretending it abstracts everything, and hiding
_useful_ implementation/low level details), not much we can do.
This timestamp linearizing would, in general, probably be better done
after the decoder, because then we wouldn't need to deal with timestamp
resets. But the main purpose of this change is to fix seeking within the
demuxer cache, so we have to do it on the lowest level.
This can probably be applied to other containers and video streams too.
But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
|
|
|
priv->linearize_ts = lavfdopts->linearize_ts;
|
|
|
|
if (priv->linearize_ts < 0 && !priv->format_hack.linearize_audio_ts)
|
|
|
|
priv->linearize_ts = 0;
|
|
|
|
|
2015-01-23 14:59:06 +00:00
|
|
|
demuxer->filetype = priv->avif->name;
|
2010-11-10 13:38:36 +00:00
|
|
|
|
2015-12-16 22:54:25 +00:00
|
|
|
if (priv->format_hack.detect_charset)
|
2015-12-28 22:23:30 +00:00
|
|
|
convert_charset(demuxer);
|
2015-12-16 22:54:25 +00:00
|
|
|
|
2013-07-11 18:08:12 +00:00
|
|
|
return 0;
|
2004-04-11 14:26:04 +00:00
|
|
|
}
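The probe loop in lavf_check_file() grows the peeked buffer geometrically between attempts, clamped to a minimum/maximum probe size, and finishes once the stream cannot supply more bytes. A minimal sketch of just that sizing policy (the constant values here are illustrative, not mpv's actual ones):

```c
#include <assert.h>

#define INITIAL_PROBE_SIZE 2048              // illustrative; mpv's constants differ
#define PROBE_BUF_SIZE     (2 * 1024 * 1024)

// Mirrors av_clip(avpd.buf_size * 2, INITIAL_PROBE_SIZE, PROBE_BUF_SIZE) from
// the loop above: double the previous probe size, clamped to the allowed range.
static int next_probe_size(int cur)
{
    int n = cur * 2;
    if (n < INITIAL_PROBE_SIZE)
        n = INITIAL_PROBE_SIZE;
    if (n > PROBE_BUF_SIZE)
        n = PROBE_BUF_SIZE;
    return n;
}
```

The "final probe" flag in the real loop is set when stream_read_peek() returns no more data than the previous iteration already had.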
|
2007-04-14 10:03:42 +00:00
|
|
|
|
2015-05-28 19:51:54 +00:00
|
|
|
static char *replace_idx_ext(void *ta_ctx, bstr f)
|
|
|
|
{
|
|
|
|
if (f.len < 4 || f.start[f.len - 4] != '.')
|
|
|
|
return NULL;
|
|
|
|
char *ext = bstr_endswith0(f, "IDX") ? "SUB" : "sub"; // match case
|
2015-06-02 19:25:51 +00:00
|
|
|
return talloc_asprintf(ta_ctx, "%.*s.%s", BSTR_P(bstr_splice(f, 0, -4)), ext);
|
2015-05-28 19:51:54 +00:00
|
|
|
}
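The same .idx → .sub substitution can be sketched with plain C strings instead of mpv's bstr/talloc (the out-buffer interface is illustrative; like the original, an upper-case "IDX" extension gets "SUB"):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

// Plain-C sketch of replace_idx_ext() above: swap the 3-character extension
// for "sub" ("SUB" for an upper-case "IDX", to match case). Returns 0 when
// the name has no '.' in the right place, like the NULL return above.
static int replace_idx_ext(const char *f, char *out, size_t outsz)
{
    size_t len = strlen(f);
    if (len < 4 || f[len - 4] != '.')
        return 0;
    const char *ext = strcmp(f + len - 3, "IDX") == 0 ? "SUB" : "sub";
    snprintf(out, outsz, "%.*s.%s", (int)(len - 4), f, ext);
    return 1;
}
```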
|
|
|
|
|
|
|
|
static void guess_and_set_vobsub_name(struct demuxer *demuxer, AVDictionary **d)
|
|
|
|
{
|
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
if (!matches_avinputformat_name(priv, "vobsub"))
|
|
|
|
return;
|
|
|
|
|
|
|
|
void *tmp = talloc_new(NULL);
|
|
|
|
bstr bfilename = bstr0(priv->filename);
|
|
|
|
char *subname = NULL;
|
|
|
|
if (mp_is_url(bfilename)) {
|
|
|
|
// It might be a http URL, which has additional parameters after the
|
|
|
|
// end of the actual file path.
|
|
|
|
bstr start, end;
|
|
|
|
if (bstr_split_tok(bfilename, "?", &start, &end)) {
|
|
|
|
subname = replace_idx_ext(tmp, start);
|
|
|
|
if (subname)
|
|
|
|
subname = talloc_asprintf(tmp, "%s?%.*s", subname, BSTR_P(end));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (!subname)
|
|
|
|
subname = replace_idx_ext(tmp, bfilename);
|
|
|
|
if (!subname)
|
|
|
|
subname = talloc_asprintf(tmp, "%.*s.sub", BSTR_P(bfilename));
|
|
|
|
|
|
|
|
MP_VERBOSE(demuxer, "Assuming associated .sub file: %s\n", subname);
|
|
|
|
av_dict_set(d, "sub_name", subname, 0);
|
|
|
|
talloc_free(tmp);
|
|
|
|
}
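The URL branch above splits off the query string before swapping the extension, then re-appends it. A simplified standalone sketch with plain C strings (hypothetical helper name; the real code also preserves IDX/SUB case and falls back to appending ".sub", both omitted here):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

// Strip everything from the first '?', replace the 3-letter extension of the
// path part with "sub", then re-attach the query string.
static int guess_sub_name(const char *idx_url, char *out, size_t outsz)
{
    const char *q = strchr(idx_url, '?');
    size_t base_len = q ? (size_t)(q - idx_url) : strlen(idx_url);
    if (base_len < 4 || idx_url[base_len - 4] != '.')
        return 0;
    snprintf(out, outsz, "%.*s.sub%s", (int)(base_len - 4), idx_url,
             q ? q : "");
    return 1;
}
```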
|
|
|
|
|
2013-07-11 17:23:31 +00:00
|
|
|
static void select_tracks(struct demuxer *demuxer, int start)
|
|
|
|
{
|
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
for (int n = start; n < priv->num_streams; n++) {
|
2019-06-09 21:00:04 +00:00
|
|
|
struct sh_stream *stream = priv->streams[n]->sh;
|
2013-07-11 17:23:31 +00:00
|
|
|
AVStream *st = priv->avfc->streams[n];
|
2014-07-06 17:02:21 +00:00
|
|
|
bool selected = stream && demux_stream_is_selected(stream) &&
|
core: completely change handling of attached picture pseudo video
Before this commit, we tried to play along with libavformat and tried
to pretend that attached pictures are video streams with a single
frame, and that the frame magically appeared at the seek position when
seeking. The playback core would then switch to a mode where the video
has ended, and the "remaining" audio is played.
This didn't work very well:
- we needed a hack in demux.c, because we tried to read more packets in
order to find the "next" video frame (libavformat doesn't tell us if
a stream has ended)
- switching the video stream didn't work, because we can't tell
libavformat to send the packet again
- seeking and resuming after was hacky (for some reason libavformat sets
the returned packet's PTS to that of the previously returned audio
packet in generic code not related to attached pictures, and this
happened to work)
- if the user did something stupid and e.g. inserted a deinterlacer by
default, a picture was never displayed, only an inactive VO window
- same when using a command that reconfigured the VO (like switching
aspect or video filters)
- hr-seek didn't work
For this reason, handle attached pictures as separate case with a
separate video decoding function, which doesn't read packets. Also,
do not synchronize audio to video start in this case.
2013-07-11 17:23:56 +00:00
|
|
|
!stream->attached_picture;
|
2013-07-11 17:23:31 +00:00
|
|
|
st->discard = selected ? AVDISCARD_DEFAULT : AVDISCARD_ALL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-08-12 19:39:32 +00:00
|
|
|
static void export_replaygain(demuxer_t *demuxer, struct sh_stream *sh,
|
2016-01-12 22:48:19 +00:00
|
|
|
AVStream *st)
|
2014-03-25 10:46:10 +00:00
|
|
|
{
|
2023-11-08 01:25:13 +00:00
|
|
|
AVPacketSideData *side_data = st->codecpar->coded_side_data;
|
|
|
|
int nb_side_data = st->codecpar->nb_coded_side_data;
|
|
|
|
for (int i = 0; i < nb_side_data; i++) {
|
2014-03-25 10:46:10 +00:00
|
|
|
AVReplayGain *av_rgain;
|
|
|
|
struct replaygain_data *rgain;
|
2023-11-08 01:25:13 +00:00
|
|
|
AVPacketSideData *src_sd = &side_data[i];
|
2014-03-25 10:46:10 +00:00
|
|
|
|
|
|
|
if (src_sd->type != AV_PKT_DATA_REPLAYGAIN)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
av_rgain = (AVReplayGain*)src_sd->data;
|
|
|
|
rgain = talloc_ptrtype(demuxer, rgain);
|
2020-10-23 12:22:57 +00:00
|
|
|
rgain->track_gain = rgain->album_gain = 0;
|
|
|
|
rgain->track_peak = rgain->album_peak = 1;
|
2014-03-25 10:46:10 +00:00
|
|
|
|
2018-12-18 20:15:09 +00:00
|
|
|
// Set values in *rgain, using track gain as a fallback for album gain
|
|
|
|
// if the latter is not present. This behavior matches that in
|
|
|
|
// demux/demux.c's decode_rgain; if you change this, please make
|
|
|
|
// equivalent changes there too.
|
|
|
|
if (av_rgain->track_gain != INT32_MIN && av_rgain->track_peak != 0.0) {
|
|
|
|
// Track gain is defined.
|
|
|
|
rgain->track_gain = av_rgain->track_gain / 100000.0f;
|
|
|
|
rgain->track_peak = av_rgain->track_peak / 100000.0f;
|
|
|
|
|
|
|
|
if (av_rgain->album_gain != INT32_MIN &&
|
|
|
|
av_rgain->album_peak != 0.0)
|
|
|
|
{
|
|
|
|
// Album gain is also defined.
|
|
|
|
rgain->album_gain = av_rgain->album_gain / 100000.0f;
|
|
|
|
rgain->album_peak = av_rgain->album_peak / 100000.0f;
|
|
|
|
} else {
|
|
|
|
// Album gain is undefined; fall back to track gain.
|
|
|
|
rgain->album_gain = rgain->track_gain;
|
|
|
|
rgain->album_peak = rgain->track_peak;
|
|
|
|
}
|
|
|
|
}
|
2014-03-25 10:46:10 +00:00
|
|
|
|
2016-08-12 19:39:32 +00:00
|
|
|
// This must be run only before the stream was added, otherwise there
|
|
|
|
// will be race conditions with accesses from the user thread.
|
|
|
|
assert(!sh->ds);
|
|
|
|
sh->codec->replaygain_data = rgain;
|
2014-03-25 10:46:10 +00:00
|
|
|
}
|
|
|
|
}
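FFmpeg's replaygain side data stores gains and peaks as integers scaled by 100000, with INT32_MIN marking an unset gain and 0 an unset peak. The conversion and track→album fallback above can be sketched standalone (struct and function names here are illustrative):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

struct rgain { float track_gain, track_peak, album_gain, album_peak; };

// Decode fixed-point replaygain values as export_replaygain() above does:
// divide by 100000, and fall back to track values when album values are unset.
static struct rgain decode_rgain(int32_t tg, uint32_t tp, int32_t ag, uint32_t ap)
{
    struct rgain r = { .track_peak = 1, .album_peak = 1 };  // gains default to 0
    if (tg != INT32_MIN && tp != 0) {
        r.track_gain = tg / 100000.0f;
        r.track_peak = tp / 100000.0f;
        if (ag != INT32_MIN && ap != 0) {
            r.album_gain = ag / 100000.0f;
            r.album_peak = ap / 100000.0f;
        } else {
            r.album_gain = r.track_gain;  // album falls back to track values
            r.album_peak = r.track_peak;
        }
    }
    return r;
}
```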
|
|
|
|
|
2014-09-01 21:47:27 +00:00
|
|
|
// Return a dictionary entry as (decimal) integer.
|
|
|
|
static int dict_get_decimal(AVDictionary *dict, const char *entry, int def)
|
|
|
|
{
|
|
|
|
AVDictionaryEntry *e = av_dict_get(dict, entry, NULL, 0);
|
|
|
|
if (e && e->value) {
|
|
|
|
char *end = NULL;
|
|
|
|
long int r = strtol(e->value, &end, 10);
|
|
|
|
if (end && !end[0] && r >= INT_MIN && r <= INT_MAX)
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
return def;
|
|
|
|
}
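dict_get_decimal() accepts a value only when strtol consumed the entire string and the result fits in an int. The same validation pattern as a standalone sketch (helper name is made up; the extra `end != s` check additionally rejects the empty string, which the original accepts as 0):

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

// Strict decimal parse: return def unless the whole string is a valid
// int-range base-10 number.
static int parse_decimal(const char *s, int def)
{
    if (!s)
        return def;
    char *end = NULL;
    long r = strtol(s, &end, 10);
    if (end && end != s && !end[0] && r >= INT_MIN && r <= INT_MAX)
        return (int)r;
    return def;
}
```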
|
|
|
|
|
2022-08-21 19:10:59 +00:00
|
|
|
static bool is_image(AVStream *st, bool attached_picture, const AVInputFormat *avif)
|
|
|
|
{
|
|
|
|
return st->nb_frames <= 1 && (
|
|
|
|
attached_picture ||
|
|
|
|
bstr_endswith0(bstr0(avif->name), "_pipe") ||
|
|
|
|
strcmp(avif->name, "alias_pix") == 0 ||
|
|
|
|
strcmp(avif->name, "gif") == 0 ||
|
2024-01-18 16:43:02 +00:00
|
|
|
strcmp(avif->name, "ico") == 0 ||
|
2022-08-21 19:10:59 +00:00
|
|
|
strcmp(avif->name, "image2pipe") == 0 ||
|
|
|
|
(st->codecpar->codec_id == AV_CODEC_ID_AV1 && st->nb_frames == 1)
|
|
|
|
);
|
|
|
|
}
|
|
|
|
|
2023-11-08 01:25:13 +00:00
|
|
|
static inline const uint8_t *mp_av_stream_get_side_data(const AVStream *st,
|
|
|
|
enum AVPacketSideDataType type)
|
|
|
|
{
|
|
|
|
const AVPacketSideData *sd;
|
|
|
|
sd = av_packet_side_data_get(st->codecpar->coded_side_data,
|
|
|
|
st->codecpar->nb_coded_side_data,
|
|
|
|
type);
|
|
|
|
return sd ? sd->data : NULL;
|
|
|
|
}
|
|
|
|
|
2015-12-22 01:14:48 +00:00
|
|
|
static void handle_new_stream(demuxer_t *demuxer, int i)
|
2011-07-03 12:42:04 +00:00
|
|
|
{
|
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
2013-04-14 18:53:03 +00:00
|
|
|
AVFormatContext *avfc = priv->avfc;
|
2011-07-03 12:42:04 +00:00
|
|
|
AVStream *st = avfc->streams[i];
|
2013-02-09 14:15:28 +00:00
|
|
|
struct sh_stream *sh = NULL;
|
2016-03-31 20:00:45 +00:00
|
|
|
AVCodecParameters *codec = st->codecpar;
|
|
|
|
int lavc_delay = codec->initial_padding;
|
2012-07-29 19:04:57 +00:00
|
|
|
|
2011-07-03 12:42:04 +00:00
|
|
|
switch (codec->codec_type) {
|
|
|
|
case AVMEDIA_TYPE_AUDIO: {
|
demux: remove weird triple-buffering for the sh_stream list
The demuxer infrastructure was originally single-threaded. To make it
suitable for multithreading (specifically, demuxing and decoding on
separate threads), some sort of tripple-buffering was introduced. There
are separate "struct demuxer" allocations. The demuxer thread sets the
state on d_thread. If anything changes, the state is copied to d_buffer
(the copy is protected by a lock), and the decoder thread is notified.
Then the decoder thread copies the state from d_buffer to d_user (again
while holding a lock). This avoids the need for locking in the
demuxer/decoder code itself (only demux.c needs an internal, "invisible"
lock.)
Remove the streams/num_streams fields from this triple-buffering
scheme. Move them to the internal struct, and protect them with the
internal lock. Use accessors for read access outside of demux.c.
Other than replacing all field accesses with accessors, this separates
allocating and adding sh_streams. This is needed to avoid race
conditions. Before this change, this was awkwardly handled by first
initializing the sh_stream, and then sending a stream change event. Now
the stream is allocated, then initialized, and then declared as
immutable and added (at which point it becomes visible to the decoder
thread immediately).
This change is useful for PR #2626. And eventually, we should probably
get entirely of the tripple buffering, and this makes a nice first step.
2015-12-23 20:44:53 +00:00
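The locking discipline described above (initialize fully, then publish under the internal lock, treat the stream as immutable afterwards) can be sketched as a standalone C example. Everything here is illustrative — `stream_list`, `publish_stream`, and the other names are invented for the sketch, not mpv's actual API:

```c
#include <pthread.h>
#include <stdlib.h>

// Illustrative sketch of "init first, publish under lock, immutable after".
struct stream {
    int id;
};

struct stream_list {
    pthread_mutex_t lock;
    struct stream **streams;
    int num_streams;
};

// Fully initialize the stream *before* it becomes visible to readers.
struct stream *alloc_stream(int id)
{
    struct stream *s = calloc(1, sizeof(*s));
    s->id = id;
    return s;
}

// Publishing is the only step that needs the lock; after this, the stream
// is treated as immutable, so readers don't need to lock its fields.
void publish_stream(struct stream_list *l, struct stream *s)
{
    pthread_mutex_lock(&l->lock);
    l->streams = realloc(l->streams, (l->num_streams + 1) * sizeof(*l->streams));
    l->streams[l->num_streams++] = s;
    pthread_mutex_unlock(&l->lock);
}

// Accessor used by the reader thread (stands in for demux.c's accessors).
int num_streams(struct stream_list *l)
{
    pthread_mutex_lock(&l->lock);
    int n = l->num_streams;
    pthread_mutex_unlock(&l->lock);
    return n;
}
```

Only the list membership is protected; the per-stream data never changes after publication, which is what makes the lock-free reads safe.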
        sh = demux_alloc_sh_stream(STREAM_AUDIO);

        if (!mp_chmap_from_av_layout(&sh->codec->channels, &codec->ch_layout)) {
            char layout[128] = {0};
            MP_WARN(demuxer,
                    "Failed to convert channel layout %s to mpv one!\n",
                    av_channel_layout_describe(&codec->ch_layout,
                                               layout, 128) < 0 ?
                    "undefined" : layout);
        }

        sh->codec->samplerate = codec->sample_rate;
        sh->codec->bitrate = codec->bit_rate;
        sh->codec->format_name = talloc_strdup(sh, av_get_sample_fmt_name(codec->format));
demux_lavf, ad_lavc, vd_lavc: pass codec header data directly
Instead of putting codec header data into WAVEFORMATEX and
BITMAPINFOHEADER, pass it directly via AVCodecContext. To do this, we
add mp_copy_lav_codec_headers(), which copies the codec header data
from one AVCodecContext to another (originally, the plan was to use
avcodec_copy_context() for this, but it looks like this would turn
decoder initialization into an even worse mess).
Get rid of the silly CodecID <-> codec_tag mapping. This was originally
needed for codecs.conf: codec tags were used to identify codecs, but
libavformat didn't always return useful codec tags (different file
formats can have different, overlapping tag numbers). Since we don't
go through WAVEFORMATEX etc. and pass all header data directly via
AVCodecContext, we can be absolutely sure that the codec tag mapping is
not needed anymore.
Note that this also destroys the "standard" MPlayer method of exporting
codec header data. WAVEFORMATEX and BITMAPINFOHEADER made sure that
other non-libavcodec decoders could be initialized. However, all these
decoders have been removed, so this is just cruft full of old hacks that
are not needed anymore. There's still ad_spdif and ad_mpg123, but neither
of these need codec header data. Should we ever add non-libavcodec
decoders, better data structures without the past hacks could be added
to export the headers.
2013-02-09 14:15:37 +00:00
        double delay = 0;
        if (codec->sample_rate > 0)
            delay = lavc_delay / (double)codec->sample_rate;
        priv->seek_delay = MPMAX(priv->seek_delay, delay);

        export_replaygain(demuxer, sh, st);

        sh->seek_preroll = delay;

        break;
    }
    case AVMEDIA_TYPE_VIDEO: {
        sh = demux_alloc_sh_stream(STREAM_VIDEO);

        if ((st->disposition & AV_DISPOSITION_ATTACHED_PIC) &&
            !(st->disposition & AV_DISPOSITION_TIMED_THUMBNAILS))
        {
            sh->attached_picture =
                new_demux_packet_from_avpacket(&st->attached_pic);
            if (sh->attached_picture) {
                sh->attached_picture->pts = 0;
                talloc_steal(sh, sh->attached_picture);
                sh->attached_picture->keyframe = true;
            }
core: completely change handling of attached picture pseudo video
Before this commit, we tried to play along with libavformat and tried
to pretend that attached pictures are video streams with a single
frame, and that the frame magically appeared at the seek position when
seeking. The playback core would then switch to a mode where the video
has ended, and the "remaining" audio is played.
This didn't work very well:
- we needed a hack in demux.c, because we tried to read more packets in
order to find the "next" video frame (libavformat doesn't tell us if
a stream has ended)
- switching the video stream didn't work, because we can't tell
libavformat to send the packet again
- seeking and resuming after was hacky (for some reason libavformat sets
the returned packet's PTS to that of the previously returned audio
packet in generic code not related to attached pictures, and this
happened to work)
- if the user did something stupid and e.g. inserted a deinterlacer by
default, a picture was never displayed, only an inactive VO window
- same when using a command that reconfigured the VO (like switching
aspect or video filters)
- hr-seek didn't work
For this reason, handle attached pictures as separate case with a
separate video decoding function, which doesn't read packets. Also,
do not synchronize audio to video start in this case.
2013-07-11 17:23:56 +00:00
        }
demux_lavf: implement bad hack for backward playback of wav
This commit generally fixes backward playing in wav, at least in most
PCM cases.
libavformat's wav demuxer (and actually all other raw PCM based
demuxers) have a specific behavior that breaks backward demuxing. The
same thing also breaks persistent seek ranges in the demuxer cache,
although that's less critical (it just means some cached data gets
discarded). The backward demuxing issue is fatal, will log the message
"Demuxer not cooperating.", and then typically stop doing anything.
Unlike modern media formats, these formats don't organize media data in
packets, but just wrap a monolithic byte stream that is described by a
header. This is good enough for PCM, which uses fixed frames (a single
sample for all audio channels), and for which it would be too expensive
to have per frame headers.
libavformat (and mpv) is heavily packet based, and using a single packet
for each PCM frame causes too much overhead. So they typically "bundle"
multiple frames into a single packet. This packet size is obviously
arbitrary, and in libavformat's case hardcoded in its source code.
The problem is that seeking doesn't respect this arbitrary packet
boundary. Seeking is sample accurate. You can essentially seek inside a
packet. The resulting packets will not be aligned with previously
demuxed packets. This is normally OK.
Backward seeking (and some other demuxer layer features) expect that
demuxing an earlier demuxed file position eventually results in the same
packets, regardless of the seeks that were done to get there. I like to
call this "deterministic" demuxing. Backward demuxing in particular
requires this to avoid overlaps, which would make it rather hard to get
continuous output.
Fix this issue by detecting wav and hopefully other raw audio formats
with a heuristic (even PCM needs to be detected heuristically). Then, if
a seek is requested, align the seek timestamps on the guessed number of
samples in the audio packets returned by the demuxer.
The heuristic excludes files with multiple streams. (Except "attachment"
video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on
WAV files.) Such files will inherently use the packet concept in some
way.
We don't know how the demuxer chooses the internal packet size, but we
assume that it's fixed and aligned to PCM frame sizes. The frame size is
most likely given by block_align (the native wav frame size, according
to Microsoft). We possibly need to explicitly read and discard a packet
if the seek is done without reading anything before that. We ignore any
subsequent packet sizes; we need to avoid the very last packet, which
likely has a different size.
This hack should be rather benign. In the worst case, it will "round"
the seek target a little, but the maximum rounding amount is bounded.
Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't
bother.
An earlier commit fixed the same issue for mpv's demux_raw.
An alternative, and probably much better solution would be clipping
decoded data by timestamp. demux.c could allow the type of overlap the
wav demuxer introduces, and instruct the decoder to clip the output
against the last decoded timestamp. There's already an infrastructure
for this (demux_packet.end field) used by EDL/ordered chapters.
Although this sounds like a good solution, mpv unfortunately uses floats
for timestamps. The rounding errors break sample accuracy. Even if you
used integers, you'd need a timebase that is sample accurate (not always
easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
        if (!sh->attached_picture) {
            // A real video stream probably means it's a packet based format.
            priv->pcm_seek_hack_disabled = true;
            priv->pcm_seek_hack = NULL;
demux_lavf: compensate timestamp resets for OGG web radio streams
Some OGG web radio streams use timestamp resets when a new song starts
(you can find those in Xiph's directory - other streams there don't show
this behavior). Basically, the OGG stream behaves like concatenated OGG
files, and "of course" the timestamps will start at 0 again when the
song changes. This is very inconvenient, and breaks the seekable demuxer
cache. In fact, any kind of seeking will break.
This is more time wasted in Xiph's bullshit. No, having timestamp resets
by design is not reasonable, and fuck you. I much prefer the awful
ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it
wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET
HAPPENS. But it doesn't, and the randomly changing timestamps is the
only thing we get from its API.
At this point, demux_lavf.c is like 90% hacks. But well, if libavformat
applies this strange mixture of being clever for us vs. giving us
unfiltered garbage (while pretending it abstracts everything, and hiding
_useful_ implementation/low level details), not much we can do.
This timestamp linearizing would, in general, probably be better done
after the decoder, because then we wouldn't need to deal with timestamp
resets. But the main purpose of this change is to fix seeking within the
demuxer cache, so we have to do it on the lowest level.
This can probably be applied to other containers and video streams too.
But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
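The linearization described above amounts to folding each backwards jump into a running offset so that output timestamps stay monotonic. A minimal illustrative sketch (invented names, not mpv's actual code):

```c
// Sketch of timestamp linearization: when the container's raw timestamps
// reset (jump backwards), absorb the jump into a running offset so the
// linearized timestamps keep increasing. Illustrative names only.
struct ts_linearize {
    double offset;   // added to every raw timestamp
    double highest;  // highest linearized timestamp seen so far
    int have_any;    // whether any timestamp was seen yet
};

double linearize_ts(struct ts_linearize *c, double raw_pts)
{
    double pts = raw_pts + c->offset;
    if (c->have_any && pts < c->highest) {
        // Reset detected: shift so playback continues from the highest
        // point reached before the reset.
        c->offset += c->highest - pts;
        pts = c->highest;
    }
    if (!c->have_any || pts > c->highest)
        c->highest = pts;
    c->have_any = 1;
    return pts;
}
```

As the commit message notes, this has to run at the demuxer level (before the cache), even though doing it after the decoder would be conceptually cleaner.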
            // Also, we don't want to do this shit for ogv videos.
            if (priv->linearize_ts < 0)
                priv->linearize_ts = 0;
        }

        sh->codec->disp_w = codec->width;
        sh->codec->disp_h = codec->height;
        sh->codec->format_name = talloc_strdup(sh, av_get_pix_fmt_name(codec->format));
        if (st->avg_frame_rate.num)
            sh->codec->fps = av_q2d(st->avg_frame_rate);
        if (is_image(st, sh->attached_picture, priv->avif)) {
            MP_VERBOSE(demuxer, "Assuming this is an image format.\n");
            sh->image = true;
            sh->codec->fps = demuxer->opts->mf_fps;
        }
        sh->codec->par_w = st->sample_aspect_ratio.num;
        sh->codec->par_h = st->sample_aspect_ratio.den;

        const uint8_t *sd = mp_av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX);
        if (sd) {
            double r = av_display_rotation_get((int32_t *)sd);
            if (!isnan(r))
                sh->codec->rotate = (((int)(-r) % 360) + 360) % 360;
        }
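The rotation expression used here maps libavformat's display-matrix angle (degrees, possibly negative or outside [0, 360), counter-clockwise by convention) to a clockwise rotation in [0, 360). As a standalone sketch of just that normalization:

```c
// Sketch: normalize a display-rotation angle as reported by
// av_display_rotation_get() (degrees, counter-clockwise, possibly
// negative) into a clockwise rotation in [0, 360). Mirrors the
// expression used in the stream-setup code.
int normalize_rotation(double r)
{
    // Negate to flip to clockwise, then reduce modulo 360; the extra
    // "+ 360" keeps the result non-negative for negative inputs,
    // since C's % can return negative values.
    return (((int)(-r) % 360) + 360) % 360;
}
```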

        if ((sd = mp_av_stream_get_side_data(st, AV_PKT_DATA_DOVI_CONF))) {
            const AVDOVIDecoderConfigurationRecord *cfg = (void *) sd;
            MP_VERBOSE(demuxer, "Found Dolby Vision config record: profile "
                       "%d level %d\n", cfg->dv_profile, cfg->dv_level);
            av_format_inject_global_side_data(avfc);
            sh->codec->dovi = true;
            sh->codec->dv_profile = cfg->dv_profile;
            sh->codec->dv_level = cfg->dv_level;
        }

video: add insane hack to work around FFmpeg/Libav insanity
So, FFmpeg/Libav requires us to figure out video timestamps ourselves
(see last 10 commits or so), but the methods it provides for this aren't
even sufficient. In particular, everything that uses AVI-style DTS (avi,
vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal
frame delay is broken. In this case, libavcodec will shift the packet-
to-image correspondence by the codec delay, meaning that with a delay=1,
the first AVFrame.pkt_dts is not 0, but that of the second packet. All
timestamps will appear shifted. The start time (e.g. the time displayed
when doing "mpv file.avi --pause") will not be exactly 0.
(According to Libav developers, this is how it's supposed to work; just
that the first DTS values are normally negative with formats that use
DTS "properly". Who cares if it doesn't work at all with very common
video formats? There's no indication that they'll fix this soon,
either. An elegant workaround is missing too.)
Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV.
Since these timestamps are not reordered, we wouldn't need to sort them,
but it's less code this way (and possibly more robust, should a demuxer
unexpectedly output PTS).
The original intention of all the timestamp changes recently was
actually to get rid of demuxer-specific hacks and the old timestamp
sorting code, but it looks like this didn't work out. Yet another case
where trying to replace native MPlayer functionality with FFmpeg/Libav
led to disadvantages and bugs. (Note that the old PTS sorting code
doesn't and can't handle frame dropping correctly, though.)
Bug reports:
https://trac.ffmpeg.org/ticket/3178
https://bugzilla.libav.org/show_bug.cgi?id=600
2013-11-28 14:10:45 +00:00
        // This also applies to vfw-muxed mkv, but we can't detect these easily.
        sh->codec->avi_dts = matches_avinputformat_name(priv, "avi");
2011-07-03 12:42:04 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
case AVMEDIA_TYPE_SUBTITLE: {
|
        sh = demux_alloc_sh_stream(STREAM_SUB);

        if (codec->extradata_size) {
            sh->codec->extradata = talloc_size(sh, codec->extradata_size);
            memcpy(sh->codec->extradata, codec->extradata, codec->extradata_size);
            sh->codec->extradata_size = codec->extradata_size;
        }

        if (matches_avinputformat_name(priv, "microdvd")) {
            AVRational r;
            if (av_opt_get_q(avfc, "subfps", AV_OPT_SEARCH_CHILDREN, &r) >= 0) {
                // File headers don't have a FPS set.
                if (r.num < 1 || r.den < 1)
                    sh->codec->frame_based = 23.976; // default timebase
            }
        }
        break;
    }
    case AVMEDIA_TYPE_ATTACHMENT: {
        AVDictionaryEntry *ftag = av_dict_get(st->metadata, "filename", NULL, 0);
        char *filename = ftag ? ftag->value : NULL;
        AVDictionaryEntry *mt = av_dict_get(st->metadata, "mimetype", NULL, 0);
        char *mimetype = mt ? mt->value : NULL;
        if (mimetype) {
            demuxer_add_attachment(demuxer, filename, mimetype,
                                   codec->extradata, codec->extradata_size);
        }
        break;
    }
    default: ;
    }

    struct stream_info *info = talloc_zero(priv, struct stream_info);
    *info = (struct stream_info){
        .sh = sh,
        .last_key_pts = MP_NOPTS_VALUE,
        .highest_pts = MP_NOPTS_VALUE,
    };

    assert(priv->num_streams == i); // directly mapped
    MP_TARRAY_APPEND(priv, priv->streams, priv->num_streams, info);

    if (sh) {
        sh->ff_index = st->index;
        sh->codec->codec = mp_codec_from_av_codec_id(codec->codec_id);
        sh->codec->codec_tag = codec->codec_tag;
        sh->codec->lav_codecpar = avcodec_parameters_alloc();
        if (sh->codec->lav_codecpar)
            avcodec_parameters_copy(sh->codec->lav_codecpar, codec);
        sh->codec->native_tb_num = st->time_base.num;
        sh->codec->native_tb_den = st->time_base.den;

        if (st->disposition & AV_DISPOSITION_DEFAULT)
            sh->default_track = true;
        if (st->disposition & AV_DISPOSITION_FORCED)
            sh->forced_track = true;
        if (st->disposition & AV_DISPOSITION_DEPENDENT)
            sh->dependent_track = true;
        if (st->disposition & AV_DISPOSITION_VISUAL_IMPAIRED)
            sh->visual_impaired_track = true;
        if (st->disposition & AV_DISPOSITION_HEARING_IMPAIRED)
            sh->hearing_impaired_track = true;
        if (st->disposition & AV_DISPOSITION_STILL_IMAGE)
            sh->still_image = true;
        if (priv->format_hack.use_stream_ids)
            sh->demuxer_id = st->id;

        AVDictionaryEntry *title = av_dict_get(st->metadata, "title", NULL, 0);
        if (title && title->value)
            sh->title = talloc_strdup(sh, title->value);
        if (!sh->title && st->disposition & AV_DISPOSITION_VISUAL_IMPAIRED)
            sh->title = talloc_asprintf(sh, "visual impaired");
        if (!sh->title && st->disposition & AV_DISPOSITION_HEARING_IMPAIRED)
            sh->title = talloc_asprintf(sh, "hearing impaired");

        AVDictionaryEntry *lang = av_dict_get(st->metadata, "language", NULL, 0);
        if (lang && lang->value && strcmp(lang->value, "und") != 0)
            sh->lang = talloc_strdup(sh, lang->value);

        sh->hls_bitrate = dict_get_decimal(st->metadata, "variant_bitrate", 0);

        AVProgram *prog = av_find_program_from_stream(avfc, NULL, i);
        if (prog)
            sh->program_id = prog->id;

        sh->missing_timestamps = !!(priv->avif_flags & AVFMT_NOTIMESTAMPS);

        mp_tags_move_from_av_dictionary(sh->tags, &st->metadata);
|
demux: remove weird tripple-buffering for the sh_stream list
The demuxer infrastructure was originally single-threaded. To make it
suitable for multithreading (specifically, demuxing and decoding on
separate threads), some sort of triple-buffering was introduced. There
are separate "struct demuxer" allocations. The demuxer thread sets the
state on d_thread. If anything changes, the state is copied to d_buffer
(the copy is protected by a lock), and the decoder thread is notified.
Then the decoder thread copies the state from d_buffer to d_user (again
while holding a lock). This avoids the need for locking in the
demuxer/decoder code itself (only demux.c needs an internal, "invisible"
lock.)
Remove the streams/num_streams fields from this triple-buffering
scheme. Move them to the internal struct, and protect them with the
internal lock. Use accessors for read access outside of demux.c.
Other than replacing all field accesses with accessors, this separates
allocating and adding sh_streams. This is needed to avoid race
conditions. Before this change, this was awkwardly handled by first
initializing the sh_stream, and then sending a stream change event. Now
the stream is allocated, then initialized, and then declared as
immutable and added (at which point it becomes visible to the decoder
thread immediately).
This change is useful for PR #2626. And eventually, we should probably
get rid of the triple buffering entirely, and this makes a nice first step.
2015-12-23 20:44:53 +00:00
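The locking scheme the message above describes (streams guarded by one internal lock, appended as immutable entries, all outside reads going through accessors) can be sketched roughly as follows. All names here are illustrative stand-ins, not mpv's actual internals:

```c
#include <pthread.h>
#include <stdlib.h>

/* Illustrative stand-ins; mpv's real structs differ. */
struct sh_stream {
    int index;              /* immutable once the stream is added */
};

struct demux_internal {
    pthread_mutex_t lock;   /* the single "invisible" internal lock */
    struct sh_stream **streams;
    int num_streams;
};

/* Writer side: initialize fully, then publish under the lock. */
static void add_stream(struct demux_internal *in, struct sh_stream *sh)
{
    pthread_mutex_lock(&in->lock);
    sh->index = in->num_streams;
    in->streams = realloc(in->streams,
                          (in->num_streams + 1) * sizeof(*in->streams));
    in->streams[in->num_streams++] = sh;  /* visible to readers from here */
    pthread_mutex_unlock(&in->lock);
}

/* Reader side: access outside demux.c goes through an accessor. */
static struct sh_stream *get_stream(struct demux_internal *in, int idx)
{
    pthread_mutex_lock(&in->lock);
    struct sh_stream *sh =
        (idx >= 0 && idx < in->num_streams) ? in->streams[idx] : NULL;
    pthread_mutex_unlock(&in->lock);
    return sh;
}
```

Because an sh_stream becomes immutable before it is published under the lock, readers never observe a half-initialized stream.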
|
|
|
demux_add_sh_stream(demuxer, sh);
|
demux_lavf: implement bad hack for backward playback of wav
This commit generally fixes backward playing in wav, at least in most
PCM cases.
libavformat's wav demuxer (and actually all other raw PCM based
demuxers) has a specific behavior that breaks backward demuxing. The
same thing also breaks persistent seek ranges in the demuxer cache,
although that's less critical (it just means some cached data gets
discarded). The backward demuxing issue is fatal, will log the message
"Demuxer not cooperating.", and then typically stop doing anything.
Unlike modern media formats, these formats don't organize media data in
packets, but just wrap a monolithic byte stream that is described by a
header. This is good enough for PCM, which uses fixed frames (a single
sample for all audio channels), and for which it would be too expensive
to have per frame headers.
libavformat (and mpv) is heavily packet based, and using a single packet
for each PCM frame causes too much overhead. So they typically "bundle"
multiple frames into a single packet. This packet size is obviously
arbitrary, and in libavformat's case hardcoded in its source code.
The problem is that seeking doesn't respect this arbitrary packet
boundary. Seeking is sample accurate. You can essentially seek inside a
packet. The resulting packets will not be aligned with previously
demuxed packets. This is normally OK.
Backward seeking (and some other demuxer layer features) expect that
demuxing an earlier demuxed file position eventually results in the same
packets, regardless of the seeks that were done to get there. I like to
call this "deterministic" demuxing. Backward demuxing in particular
requires this to avoid overlaps, which would make it rather hard to get
continuous output.
Fix this issue by detecting wav and hopefully other raw audio formats
with a heuristic (even PCM needs to be detected heuristically). Then, if
a seek is requested, align the seek timestamps on the guessed number of
samples in the audio packets returned by the demuxer.
The heuristic excludes files with multiple streams. (Except "attachment"
video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on
WAV files.) Such files will inherently use the packet concept in some
way.
We don't know how the demuxer chooses the internal packet size, but we
assume that it's fixed and aligned to PCM frame sizes. The frame size is
most likely given by block_align (the native wav frame size, according
to Microsoft). We possibly need to explicitly read and discard a packet
if the seek is done without reading anything before that. We ignore any
subsequent packet sizes; we need to avoid the very last packet, which
likely has a different size.
This hack should be rather benign. In the worst case, it will "round"
the seek target a little, but the maximum rounding amount is bounded.
Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't
bother.
An earlier commit fixed the same issue for mpv's demux_raw.
An alternative, and probably much better solution would be clipping
decoded data by timestamp. demux.c could allow the type of overlap the
wav demuxer introduces, and instruct the decoder to clip the output
against the last decoded timestamp. There's already an infrastructure
for this (demux_packet.end field) used by EDL/ordered chapters.
Although this sounds like a good solution, mpv unfortunately uses floats
for timestamps. The rounding errors break sample accuracy. Even if you
used integers, you'd need a timebase that is sample accurate (not always
easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
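The alignment hack described above boils down to rounding the seek target onto the guessed packet grid. A minimal sketch of that arithmetic, assuming sample-based units; the function name is hypothetical, not mpv's exact code:

```c
#include <stdint.h>

// Round a seek target (in samples) down to a multiple of the number of
// samples the demuxer was observed to put into one packet, so that
// re-demuxing from the seek point yields the same packet boundaries.
static int64_t align_seek_to_packet(int64_t target_sample,
                                    int64_t samples_per_packet)
{
    if (samples_per_packet <= 0)
        return target_sample;  // no packet observed yet; nothing to align to
    return target_sample - target_sample % samples_per_packet;
}
```

This is where the bounded "rounding" of the seek target mentioned above comes from: the error is always less than one packet's worth of samples.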
|
|
|
|
|
|
|
// Unfortunately, there is no better way to detect PCM codecs, other
|
|
|
|
// than listing them all manually. (Or other "frameless" codecs. Or
|
|
|
|
// rather, codecs with frames so small libavformat will put multiple of
|
|
|
|
// them into a single packet, but not preserve these artificial packet
|
|
|
|
// boundaries on seeking.)
|
|
|
|
if (sh->codec->codec && strncmp(sh->codec->codec, "pcm_", 4) == 0 &&
|
|
|
|
codec->block_align && !priv->pcm_seek_hack_disabled &&
|
|
|
|
priv->opts->hacks && !priv->format_hack.no_pcm_seek &&
|
|
|
|
st->time_base.num == 1 && st->time_base.den == codec->sample_rate)
|
|
|
|
{
|
|
|
|
if (priv->pcm_seek_hack) {
|
|
|
|
// More than 1 audio stream => usually doesn't apply.
|
|
|
|
priv->pcm_seek_hack_disabled = true;
|
|
|
|
priv->pcm_seek_hack = NULL;
|
|
|
|
} else {
|
|
|
|
priv->pcm_seek_hack = st;
|
|
|
|
}
|
|
|
|
}
|
2010-05-03 22:45:00 +00:00
|
|
|
}
|
2013-07-11 17:22:24 +00:00
|
|
|
|
2013-07-11 17:23:31 +00:00
|
|
|
select_tracks(demuxer, i);
|
2007-10-27 19:00:07 +00:00
|
|
|
}
|
|
|
|
|
2013-04-14 18:53:03 +00:00
|
|
|
// Add any new streams that might have been added
|
|
|
|
static void add_new_streams(demuxer_t *demuxer)
|
|
|
|
{
|
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
while (priv->num_streams < priv->avfc->nb_streams)
|
2015-12-22 01:14:48 +00:00
|
|
|
handle_new_stream(demuxer, priv->num_streams);
|
2013-04-14 18:53:03 +00:00
|
|
|
}
|
|
|
|
|
demux: support for some kinds of timed metadata
This makes ICY title changes show up at approximately the correct time,
even if the demuxer buffer is huge. (It'll still be wrong if the stream
byte cache contains a meaningful amount of data.)
It should have the same effect for mid-stream metadata changes in e.g.
OGG (untested).
This is still somewhat fishy, but in part due to ICY being fishy, and
FFmpeg's metadata change API being somewhat fishy. For example, what
happens if you seek? With FFmpeg AVFMT_EVENT_FLAG_METADATA_UPDATED and
AVSTREAM_EVENT_FLAG_METADATA_UPDATED we hope that FFmpeg will correctly
restore the correct metadata when the first packet is returned.
If you seek with ICY, we're out of luck, and some audio will be
associated with the wrong tag until we get a new title through ICY
metadata update at an essentially random point (it's mostly inherent to
ICY). Then the tags will switch back and forth, and this behavior will
stick with the data stored in the demuxer cache. Fortunately, this can
happen only if the HTTP stream is actually seekable, which it usually is
not for ICY things. Seeking doesn't even make sense with ICY, since you
can't know the exact metadata location. Basically ICY metadata sucks.
Some complexity is due to a microoptimization: I didn't want additional
atomic accesses for each packet if no timed metadata is used. (It
probably doesn't matter at all.)
2018-04-16 20:23:08 +00:00
|
|
|
static void update_metadata(demuxer_t *demuxer)
|
2013-10-19 01:50:08 +00:00
|
|
|
{
|
2014-08-13 23:18:18 +00:00
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
if (priv->avfc->event_flags & AVFMT_EVENT_FLAG_METADATA_UPDATED) {
|
2023-10-11 11:03:26 +00:00
|
|
|
mp_tags_move_from_av_dictionary(demuxer->metadata, &priv->avfc->metadata);
|
2014-08-13 23:18:18 +00:00
|
|
|
priv->avfc->event_flags = 0;
|
2018-04-16 20:23:08 +00:00
|
|
|
demux_metadata_changed(demuxer);
|
2014-08-13 23:18:18 +00:00
|
|
|
}
|
2013-10-19 01:50:08 +00:00
|
|
|
}
|
|
|
|
|
2015-02-18 20:09:47 +00:00
|
|
|
static int interrupt_cb(void *ctx)
|
|
|
|
{
|
|
|
|
struct demuxer *demuxer = ctx;
|
2018-05-18 13:48:14 +00:00
|
|
|
return mp_cancel_test(demuxer->cancel);
|
2015-02-18 20:09:47 +00:00
|
|
|
}
|
|
|
|
|
2016-12-04 22:15:31 +00:00
|
|
|
static int block_io_open(struct AVFormatContext *s, AVIOContext **pb,
|
|
|
|
const char *url, int flags, AVDictionary **options)
|
|
|
|
{
|
|
|
|
struct demuxer *demuxer = s->opaque;
|
|
|
|
MP_ERR(demuxer, "Not opening '%s' due to --access-references=no.\n", url);
|
|
|
|
return AVERROR(EACCES);
|
|
|
|
}
|
|
|
|
|
demux_lavf: add hack to get effective HLS bitrate
In theory, this could be easily done with custom I/O. In practice, all
the halfassed garbage in FFmpeg shits itself and fucks up like there's
no tomorrow. There are several problems:
1. FFmpeg pretends you can do custom I/O, but in reality there's a lot
that custom I/O can do. hls.c even contains explicit checks to disable
important things if custom I/O is used! In particular, you can't use the
HTTP keepalive functionality (needed for somewhat decent HLS
performance), because some cranky asshole in the cursed FFmpeg dev.
community blocked it.
2. The implementation of nested I/O callbacks (io_open/io_close) is
bogus and halfassed (like everything in FFmpeg, really). It will call
io_open on some URLs without ever calling io_close. Instead, it'll call
avio_close() on the context directly. From what I can tell, avio_close()
is incompatible with custom I/O anyway (overwhelmed by their own garbage,
the FFmpeg devs created the io_close callback for this reason, because
they couldn't fix their own fucking garbage). This commit adds some
shitty workaround for this (technically triggers UB, but with that
garbage heap of a library we depend on it's not like it matters).
3. Even then, you can't proxy I/O contexts (see 1.), but we can just
keep track of the opened nested I/O contexts. The bytes_read is
documented as not public, but reading it is literally the only way to
get what we want.
A more reasonable approach would probably be using curl. It could
transparently handle the keep-alive thing, as well as propagating
cookies etc. (which doesn't work with the FFmpeg approach if you use
custom I/O). Of course even better if there were an independent HLS
implementation anywhere. FFmpeg's HLS support is so embarrassingly
pathetic and just goes to show that they belong to the past
(multimedia from 2000-2010) and should either modernize or fuck off.
With FFmpeg's shit-crusted structures, toxic communities, and retarded
assholes denying progress, probably the latter. Did I already mention
that FFmpeg is a shit fucked steaming pile of garbage shit?
And all just to get some basic I/O stats, that any proper HLS consumer
requires in order to implement adaptive streaming correctly (i.e.
browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
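The bookkeeping described in point 3 reduces to summing bytes_read over the tracked nested contexts. A toy sketch, using a stand-in struct instead of the real AVIOContext (whose bytes_read field the commit reads despite it being documented as not public):

```c
#include <stdint.h>

// Stand-in for one tracked nested I/O context; the real code keeps the
// opened AVIOContext pointer and reads its bytes_read field.
struct nested_stream {
    int64_t bytes_read;
};

// Total bytes fetched through all currently open nested contexts; the
// change in this value over time gives the effective HLS download rate.
static int64_t total_nested_bytes(const struct nested_stream *nested, int num)
{
    int64_t sum = 0;
    for (int n = 0; n < num; n++)
        sum += nested[n].bytes_read;
    return sum;
}
```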
|
|
|
static int nested_io_open(struct AVFormatContext *s, AVIOContext **pb,
|
|
|
|
const char *url, int flags, AVDictionary **options)
|
|
|
|
{
|
|
|
|
struct demuxer *demuxer = s->opaque;
|
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
|
2024-06-14 02:03:50 +00:00
|
|
|
if (options && priv->opts->propagate_opts) {
|
demux_lavf: fight ffmpeg API some more and get the timeout set
It sometimes happens that HLS streams freeze because the HTTP server is
not responding for a fragment (or something similar, the exact
circumstances are unknown). The --timeout option didn't affect this,
because it's never set on HLS recursive connections (these download the
fragments, while the main connection likely does nothing and just wastes a
TCP socket).
Apply an elaborate hack on top of an existing elaborate hack to somehow
get these options set. Of course this could still break easily, but hey,
it's ffmpeg, it can't not try to fuck you over. I'm so fucking sick of
ffmpeg's API bullshit, especially wrt. HLS.
Of course the change is sort of pointless. For HLS, GET requests should
just be aggressively retried (because they're not "streamed", they're just
actual files on a CDN), while normal HTTP connections should probably
not be made this fragile (they could be streamed, i.e. they are backed
by some sort of real time encoder, and block if there is no data yet).
The 1 minute default timeout is too high to save playback if this
happens with HLS.
Vaguely related to #5793.
2019-11-16 12:15:45 +00:00
|
|
|
// Copy av_opts to options, but only entries that are not present in
|
|
|
|
// options. (Hope this will break less by not overwriting important
|
|
|
|
// settings.)
|
|
|
|
AVDictionaryEntry *cur = NULL;
|
|
|
|
while ((cur = av_dict_get(priv->av_opts, "", cur, AV_DICT_IGNORE_SUFFIX)))
|
|
|
|
{
|
|
|
|
if (!*options || !av_dict_get(*options, cur->key, NULL, 0)) {
|
|
|
|
MP_TRACE(demuxer, "Nested option: '%s'='%s'\n",
|
|
|
|
cur->key, cur->value);
|
|
|
|
av_dict_set(options, cur->key, cur->value, 0);
|
|
|
|
} else {
|
|
|
|
MP_TRACE(demuxer, "Skipping nested option: '%s'\n", cur->key);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-09-01 13:28:40 +00:00
|
|
|
int r = priv->default_io_open(s, pb, url, flags, options);
|
|
|
|
if (r >= 0) {
|
2019-11-16 12:15:45 +00:00
|
|
|
if (options)
|
|
|
|
mp_avdict_print_unset(demuxer->log, MSGL_TRACE, *options);
|
2018-09-01 13:28:40 +00:00
|
|
|
struct nested_stream nest = {
|
|
|
|
.id = *pb,
|
|
|
|
};
|
|
|
|
MP_TARRAY_APPEND(priv, priv->nested, priv->num_nested, nest);
|
|
|
|
}
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
|
2023-03-26 01:20:26 +00:00
|
|
|
static int nested_io_close2(struct AVFormatContext *s, AVIOContext *pb)
|
2018-09-01 13:28:40 +00:00
|
|
|
{
|
|
|
|
struct demuxer *demuxer = s->opaque;
|
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
|
|
|
|
|
|
|
for (int n = 0; n < priv->num_nested; n++) {
|
|
|
|
if (priv->nested[n].id == pb) {
|
|
|
|
MP_TARRAY_REMOVE_AT(priv->nested, priv->num_nested, n);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-03-26 01:20:26 +00:00
|
|
|
return priv->default_io_close2(s, pb);
|
2018-09-01 13:28:40 +00:00
|
|
|
}
|
|
|
|
|
2013-07-12 19:58:11 +00:00
|
|
|
static int demux_open_lavf(demuxer_t *demuxer, enum demux_check check)
|
2011-07-03 12:42:04 +00:00
|
|
|
{
|
2023-06-17 16:42:54 +00:00
|
|
|
AVFormatContext *avfc = NULL;
|
2011-12-22 00:06:52 +00:00
|
|
|
AVDictionaryEntry *t = NULL;
|
2013-05-27 21:26:22 +00:00
|
|
|
float analyze_duration = 0;
|
2015-12-17 00:00:54 +00:00
|
|
|
lavf_priv_t *priv = talloc_zero(NULL, lavf_priv_t);
|
2023-06-17 16:42:54 +00:00
|
|
|
AVDictionary *dopts = NULL;
|
|
|
|
|
2015-12-17 00:00:54 +00:00
|
|
|
demuxer->priv = priv;
|
2015-12-28 22:22:58 +00:00
|
|
|
priv->stream = demuxer->stream;
|
2007-10-27 19:00:07 +00:00
|
|
|
|
2016-09-06 18:09:56 +00:00
|
|
|
priv->opts = mp_get_config_group(priv, demuxer->global, &demux_lavf_conf);
|
|
|
|
struct demux_lavf_opts *lavfdopts = priv->opts;
|
|
|
|
|
2013-07-12 19:58:11 +00:00
|
|
|
if (lavf_check_file(demuxer, check) < 0)
|
2023-06-17 16:42:54 +00:00
|
|
|
goto fail;
|
2013-07-12 19:58:11 +00:00
|
|
|
|
2009-11-22 14:15:41 +00:00
|
|
|
avfc = avformat_alloc_context();
|
2014-12-12 16:28:22 +00:00
|
|
|
if (!avfc)
|
2023-06-17 16:42:54 +00:00
|
|
|
goto fail;
|
2007-10-27 19:00:07 +00:00
|
|
|
|
2023-09-19 20:58:34 +00:00
|
|
|
if (demuxer->opts->index_mode != 1)
|
2007-10-27 19:00:07 +00:00
|
|
|
avfc->flags |= AVFMT_FLAG_IGNIDX;
|
2014-02-08 00:04:37 +00:00
|
|
|
|
2010-04-23 19:08:18 +00:00
|
|
|
if (lavfdopts->probesize) {
|
2012-01-28 11:41:36 +00:00
|
|
|
if (av_opt_set_int(avfc, "probesize", lavfdopts->probesize, 0) < 0)
|
2014-07-05 14:42:50 +00:00
|
|
|
MP_ERR(demuxer, "couldn't set option probesize to %u\n",
|
2011-07-03 12:42:04 +00:00
|
|
|
lavfdopts->probesize);
|
2007-10-27 19:00:07 +00:00
|
|
|
}
|
2013-05-27 21:26:22 +00:00
|
|
|
|
2015-02-18 20:10:15 +00:00
|
|
|
if (priv->format_hack.analyzeduration)
|
|
|
|
analyze_duration = priv->format_hack.analyzeduration;
|
2013-05-27 21:26:22 +00:00
|
|
|
if (lavfdopts->analyzeduration)
|
|
|
|
analyze_duration = lavfdopts->analyzeduration;
|
|
|
|
if (analyze_duration > 0) {
|
2012-01-28 11:41:36 +00:00
|
|
|
if (av_opt_set_int(avfc, "analyzeduration",
|
2013-05-27 21:26:22 +00:00
|
|
|
analyze_duration * AV_TIME_BASE, 0) < 0)
|
2013-12-21 19:24:20 +00:00
|
|
|
MP_ERR(demuxer, "demux_lavf, couldn't set option "
|
2013-05-27 21:26:22 +00:00
|
|
|
"analyzeduration to %f\n", analyze_duration);
|
2007-10-27 19:00:07 +00:00
|
|
|
}
|
|
|
|
|
2017-02-02 17:24:27 +00:00
|
|
|
if ((priv->avif_flags & AVFMT_NOFILE) || priv->format_hack.no_stream) {
|
2019-11-14 12:46:03 +00:00
|
|
|
mp_setup_av_network_options(&dopts, priv->avif->name,
|
|
|
|
demuxer->global, demuxer->log);
|
2013-11-03 18:21:47 +00:00
|
|
|
// This might be incorrect.
|
|
|
|
demuxer->seekable = true;
|
|
|
|
} else {
|
2013-08-04 21:25:54 +00:00
|
|
|
void *buffer = av_malloc(lavfdopts->buffersize);
|
2013-08-04 21:21:50 +00:00
|
|
|
if (!buffer)
|
2023-06-17 16:42:54 +00:00
|
|
|
goto fail;
|
2013-08-04 21:25:54 +00:00
|
|
|
priv->pb = avio_alloc_context(buffer, lavfdopts->buffersize, 0,
|
2011-12-22 00:06:52 +00:00
|
|
|
demuxer, mp_read, NULL, mp_seek);
|
2013-08-04 21:21:50 +00:00
|
|
|
if (!priv->pb) {
|
|
|
|
av_free(buffer);
|
2023-06-17 16:42:54 +00:00
|
|
|
goto fail;
|
2013-08-04 21:21:50 +00:00
|
|
|
}
|
2011-06-21 20:28:53 +00:00
|
|
|
priv->pb->read_seek = mp_read_seek;
|
2013-11-03 18:21:47 +00:00
|
|
|
priv->pb->seekable = demuxer->seekable ? AVIO_SEEKABLE_NORMAL : 0;
|
2011-12-22 00:06:52 +00:00
|
|
|
avfc->pb = priv->pb;
|
2015-12-28 22:22:58 +00:00
|
|
|
if (stream_control(priv->stream, STREAM_CTRL_HAS_AVSEEK, NULL) > 0)
|
2014-10-30 21:46:25 +00:00
|
|
|
demuxer->seekable = true;
|
2016-06-08 09:49:56 +00:00
|
|
|
demuxer->seekable |= priv->format_hack.fully_read;
|
2011-06-21 20:28:53 +00:00
|
|
|
}
|
2008-01-26 21:45:31 +00:00
|
|
|
|
2013-09-22 00:40:29 +00:00
|
|
|
if (matches_avinputformat_name(priv, "rtsp")) {
|
|
|
|
const char *transport = NULL;
|
2016-09-06 18:09:56 +00:00
|
|
|
switch (lavfdopts->rtsp_transport) {
|
2013-09-22 00:40:29 +00:00
|
|
|
case 1: transport = "udp"; break;
|
|
|
|
case 2: transport = "tcp"; break;
|
|
|
|
case 3: transport = "http"; break;
|
2020-02-27 15:22:20 +00:00
|
|
|
case 4: transport = "udp_multicast"; break;
|
2013-09-22 00:40:29 +00:00
|
|
|
}
|
|
|
|
if (transport)
|
|
|
|
av_dict_set(&dopts, "rtsp_transport", transport, 0);
|
|
|
|
}
|
|
|
|
|
2015-05-28 19:51:54 +00:00
|
|
|
guess_and_set_vobsub_name(demuxer, &dopts);
|
|
|
|
|
2015-02-18 20:09:47 +00:00
|
|
|
avfc->interrupt_callback = (AVIOInterruptCB){
|
|
|
|
.callback = interrupt_cb,
|
|
|
|
.opaque = demuxer,
|
|
|
|
};
|
|
|
|
|
2016-12-04 22:15:31 +00:00
|
|
|
avfc->opaque = demuxer;
|
2018-09-01 13:28:40 +00:00
|
|
|
    if (demuxer->access_references) {
        priv->default_io_open = avfc->io_open;
        avfc->io_open = nested_io_open;
        priv->default_io_close2 = avfc->io_close2;
        avfc->io_close2 = nested_io_close2;
    } else {
        avfc->io_open = block_io_open;
    }

    mp_set_avdict(&dopts, lavfdopts->avopts);

    if (av_dict_copy(&priv->av_opts, dopts, 0) < 0) {
        MP_ERR(demuxer, "av_dict_copy() failed\n");
        goto fail;
    }

    if (priv->format_hack.readall_on_no_streamseek && priv->pb &&
        !priv->pb->seekable)
    {
        MP_VERBOSE(demuxer, "Non-seekable demuxer pre-read hack...\n");
        // Read incrementally to avoid unnecessarily large buffer sizes.
        int r = 0;
        for (int n = 16; n < 29; n++) {
            r = stream_peek(priv->stream, 1 << n);
            if (r < (1 << n))
                break;
        }
        MP_VERBOSE(demuxer, "...did read %d bytes.\n", r);
    }

    if (avformat_open_input(&avfc, priv->filename, priv->avif, &dopts) < 0) {
        MP_ERR(demuxer, "avformat_open_input() failed\n");
        goto fail;
    }

    mp_avdict_print_unset(demuxer->log, MSGL_V, dopts);
    av_dict_free(&dopts);

    priv->avfc = avfc;

    bool probeinfo = lavfdopts->probeinfo != 0;
    switch (lavfdopts->probeinfo) {
    case -2: probeinfo = priv->avfc->nb_streams == 0; break;
    case -1: probeinfo = !priv->format_hack.skipinfo; break;
    }
    if (demuxer->params && demuxer->params->skip_lavf_probing)
        probeinfo = false;
    if (probeinfo) {
        if (avformat_find_stream_info(avfc, NULL) < 0) {
            MP_ERR(demuxer, "av_find_stream_info() failed\n");
            goto fail;
        }

        MP_VERBOSE(demuxer, "avformat_find_stream_info() finished after %"PRId64
                   " bytes.\n", stream_tell(priv->stream));
    }

    for (int i = 0; i < avfc->nb_chapters; i++) {
        AVChapter *c = avfc->chapters[i];
        t = av_dict_get(c->metadata, "title", NULL, 0);
        int index = demuxer_add_chapter(demuxer, t ? t->value : "",
                                        c->start * av_q2d(c->time_base), i);
        mp_tags_move_from_av_dictionary(demuxer->chapters[index].metadata,
                                        &c->metadata);
    }

    add_new_streams(demuxer);

    mp_tags_move_from_av_dictionary(demuxer->metadata, &avfc->metadata);

    demuxer->ts_resets_possible =
        priv->avif_flags & (AVFMT_TS_DISCONT | AVFMT_NOTIMESTAMPS);

    if (avfc->start_time != AV_NOPTS_VALUE)
        demuxer->start_time = avfc->start_time / (double)AV_TIME_BASE;

    demuxer->fully_read = priv->format_hack.fully_read;

#ifdef AVFMTCTX_UNSEEKABLE
    if (avfc->ctx_flags & AVFMTCTX_UNSEEKABLE)
        demuxer->seekable = false;
#endif

    demuxer->is_network |= priv->format_hack.is_network;
    demuxer->seekable &= !priv->format_hack.no_seek;

    // We initially prefer track durations over the container duration because
    // they have a higher degree of precision; the container duration is only
    // accurate to the 6th decimal place. This is probably a lavf bug.
    double total_duration = -1;
    double av_duration = -1;
    for (int n = 0; n < priv->avfc->nb_streams; n++) {
        AVStream *st = priv->avfc->streams[n];
        if (st->duration <= 0)
            continue;
        double f_duration = st->duration * av_q2d(st->time_base);
        total_duration = MPMAX(total_duration, f_duration);
        if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO ||
            st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            av_duration = MPMAX(av_duration, f_duration);
    }
    double duration = av_duration > 0 ? av_duration : total_duration;
    if (duration <= 0 && priv->avfc->duration > 0)
        duration = (double)priv->avfc->duration / AV_TIME_BASE;
    demuxer->duration = duration;

    if (demuxer->duration < 0 && priv->format_hack.no_seek_on_no_duration)
        demuxer->seekable = false;

    // In some cases, libavformat will export bogus timestamps anyway, such as
    // with mjpeg.
    if (priv->avif_flags & AVFMT_NOTIMESTAMPS) {
        MP_WARN(demuxer,
                "This format is marked by FFmpeg as having no timestamps!\n"
                "FFmpeg will likely make up its own broken timestamps. For\n"
                "video streams you can correct this with:\n"
                "    --no-correct-pts --container-fps-override=VALUE\n"
                "with VALUE being the real framerate of the stream. You can\n"
                "expect seeking and buffering estimation to be generally\n"
                "broken as well.\n");
    }

    if (demuxer->fully_read) {
        demux_close_stream(demuxer);
        if (priv->own_stream)
            free_stream(priv->stream);
        priv->own_stream = false;
        priv->stream = NULL;
    }

    return 0;

fail:
    if (!priv->avfc)
        avformat_free_context(avfc);
    av_dict_free(&dopts);

    return -1;
}

static bool demux_lavf_read_packet(struct demuxer *demux,
                                   struct demux_packet **mp_pkt)
{
    lavf_priv_t *priv = demux->priv;

    AVPacket *pkt = &(AVPacket){0};
    int r = av_read_frame(priv->avfc, pkt);
    update_read_stats(demux);
    if (r < 0) {
        av_packet_unref(pkt);
        if (r == AVERROR_EOF)
            return false;
        MP_WARN(demux, "error reading packet: %s.\n", av_err2str(r));
        if (priv->retry_counter >= 10) {
            MP_ERR(demux, "...treating it as fatal error.\n");
            return false;
        }
        priv->retry_counter += 1;
        return true;
    }
    priv->retry_counter = 0;

    add_new_streams(demux);
    update_metadata(demux);

    assert(pkt->stream_index >= 0 && pkt->stream_index < priv->num_streams);
    struct stream_info *info = priv->streams[pkt->stream_index];
    struct sh_stream *stream = info->sh;
    AVStream *st = priv->avfc->streams[pkt->stream_index];

|
2004-04-11 14:26:04 +00:00
|
|
|
|
2024-06-15 03:21:36 +00:00
|
|
|
if (!demux_stream_is_selected(stream)) {
|
2024-06-13 18:23:35 +00:00
|
|
|
av_packet_unref(pkt);
|
2024-06-15 03:21:36 +00:00
|
|
|
return true; // don't signal EOF if skipping a packet
|
2024-06-13 18:23:35 +00:00
|
|
|
}
|
|
|
|
|
2024-06-15 03:21:36 +00:00
|
|
|
// Never send additional frames for streams that are a single frame.
|
|
|
|
if (stream->image && priv->format_hack.first_frame_only && pkt->pos != 0) {
|
2015-10-28 22:48:56 +00:00
|
|
|
av_packet_unref(pkt);
|
2024-06-15 03:21:36 +00:00
|
|
|
return true;
|
2005-01-30 09:13:28 +00:00
|
|
|
}
|
2008-01-26 21:45:31 +00:00
|
|
|
|
2014-08-24 15:45:28 +00:00
|
|
|
struct demux_packet *dp = new_demux_packet_from_avpacket(pkt);
|
2014-09-16 16:11:00 +00:00
|
|
|
if (!dp) {
|
2015-10-28 22:48:56 +00:00
|
|
|
av_packet_unref(pkt);
|
2018-09-07 13:12:24 +00:00
|
|
|
return true;
|
2014-09-16 16:11:00 +00:00
|
|
|
}
|
2004-04-11 14:26:04 +00:00
|
|
|
|
demux_lavf: implement bad hack for backward playback of wav
This commit generally fixes backward playing in wav, at least in most
PCM cases.
libavformat's wav demuxer (and actually all other raw PCM based
demuxers) have a specific behavior that breaks backward demuxing. The
same thing also breaks persistent seek ranges in the demuxer cache,
although that's less critical (it just means some cached data gets
discarded). The backward demuxing issue is fatal: it will log the message
"Demuxer not cooperating.", and then typically stop doing anything.
Unlike modern media formats, these formats don't organize media data in
packets, but just wrap a monolithic byte stream that is described by a
header. This is good enough for PCM, which uses fixed frames (a single
sample for all audio channels), and for which it would be too expensive
to have per frame headers.
libavformat (and mpv) is heavily packet based, and using a single packet
for each PCM frame causes too much overhead. So they typically "bundle"
multiple frames into a single packet. This packet size is obviously
arbitrary, and in libavformat's case hardcoded in its source code.
The problem is that seeking doesn't respect this arbitrary packet
boundary. Seeking is sample accurate. You can essentially seek inside a
packet. The resulting packets will not be aligned with previously
demuxed packets. This is normally OK.
Backward seeking (and some other demuxer layer features) expect that
demuxing an earlier demuxed file position eventually results in the same
packets, regardless of the seeks that were done to get there. I like to
call this "deterministic" demuxing. Backward demuxing in particular
requires this to avoid overlaps, which would make it rather hard to get
continuous output.
Fix this issue by detecting wav and hopefully other raw audio formats
with a heuristic (even PCM needs to be detected as heuristic). Then, if
a seek is requested, align the seek timestamps on the guessed number of
samples in the audio packets returned by the demuxer.
The heuristic excludes files with multiple streams. (Except "attachment"
video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on
WAV files.) Such files will inherently use the packet concept in some
way.
We don't know how the demuxer chooses the internal packet size, but we
assume that it's fixed and aligned to PCM frame sizes. The frame size is
most likely given by block_align (the native wav frame size, according
to Microsoft). We possibly need to explicitly read and discard a packet
if the seek is done without reading anything before that. We ignore any
subsequent packet sizes; we need to avoid the very last packet, which
likely has a different size.
This hack should be rather benign. In the worst case, it will "round"
the seek target a little, but the maximum rounding amount is bounded.
Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't
bother.
An earlier commit fixed the same issue for mpv's demux_raw.
An alternative, and probably much better solution would be clipping
decoded data by timestamp. demux.c could allow the type of overlap the
wav demuxer introduces, and instruct the decoder to clip the output
against the last decoded timestamp. There's already an infrastructure
for this (demux_packet.end field) used by EDL/ordered chapters.
Although this sounds like a good solution, mpv unfortunately uses floats
for timestamps. The rounding errors break sample accuracy. Even if you
used integers, you'd need a timebase that is sample accurate (not always
easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
|
|
|
if (priv->pcm_seek_hack == st && !priv->pcm_seek_hack_packet_size)
|
|
|
|
priv->pcm_seek_hack_packet_size = pkt->size;
|
|
|
|
|
2019-06-09 17:17:35 +00:00
|
|
|
dp->pts = mp_pts_from_av(pkt->pts, &st->time_base);
|
|
|
|
dp->dts = mp_pts_from_av(pkt->dts, &st->time_base);
|
2013-11-25 22:13:01 +00:00
|
|
|
dp->duration = pkt->duration * av_q2d(st->time_base);
|
2013-11-16 20:04:28 +00:00
|
|
|
dp->pos = pkt->pos;
|
2012-07-24 21:23:27 +00:00
|
|
|
dp->keyframe = pkt->flags & AV_PKT_FLAG_KEY;
|
2015-10-28 22:48:56 +00:00
|
|
|
av_packet_unref(pkt);
|
2015-02-17 22:42:04 +00:00
|
|
|
|
2015-02-18 20:10:15 +00:00
|
|
|
if (priv->format_hack.clear_filepos)
|
2015-02-17 22:42:04 +00:00
|
|
|
dp->pos = -1;
|
|
|
|
|
2018-09-07 13:12:24 +00:00
|
|
|
dp->stream = stream->index;
|
|
|
|
|
demux_lavf: compensate timestamp resets for OGG web radio streams
Some OGG web radio streams use timestamp resets when a new song starts
(you can find those in Xiph's directory; other streams there don't show
this behavior). Basically, the OGG stream behaves like concatenated OGG
files, and "of course" the timestamps will start at 0 again when the
song changes. This is very inconvenient, and breaks the seekable demuxer
cache. In fact, any kind of seeking will break.
This is more time wasted in Xiph's bullshit. No, having timestamp resets
by design is not reasonable, and fuck you. I much prefer the awful
ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it
wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET
HAPPENS. But it doesn't, and the randomly changing timestamps is the
only thing we get from its API.
At this point, demux_lavf.c is like 90% hacks. But well, if libavformat
applies this strange mixture of being clever for us vs. giving us
unfiltered garbage (while pretending it abstracts everything, and hiding
_useful_ implementation/low level details), not much we can do.
This timestamp linearizing would, in general, probably be better done
after the decoder, because then we wouldn't need to deal with timestamp
resets. But the main purpose of this change is to fix seeking within the
demuxer cache, so we have to do it on the lowest level.
This can probably be applied to other containers and video streams too.
But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
|
|
|
if (priv->linearize_ts) {
|
|
|
|
dp->pts = MP_ADD_PTS(dp->pts, info->ts_offset);
|
|
|
|
dp->dts = MP_ADD_PTS(dp->dts, info->ts_offset);
|
|
|
|
|
|
|
|
double pts = MP_PTS_OR_DEF(dp->pts, dp->dts);
|
|
|
|
if (pts != MP_NOPTS_VALUE) {
|
|
|
|
if (dp->keyframe) {
|
|
|
|
if (pts < info->highest_pts) {
|
|
|
|
MP_WARN(demux, "Linearizing discontinuity: %f -> %f\n",
|
|
|
|
pts, info->highest_pts);
|
|
|
|
// Note: introduces a small discontinuity by a frame size.
|
|
|
|
double diff = info->highest_pts - pts;
|
|
|
|
dp->pts = MP_ADD_PTS(dp->pts, diff);
|
|
|
|
dp->dts = MP_ADD_PTS(dp->dts, diff);
|
|
|
|
pts += diff;
|
|
|
|
info->ts_offset += diff;
|
|
|
|
priv->any_ts_fixed = true;
|
|
|
|
}
|
|
|
|
info->last_key_pts = pts;
|
|
|
|
}
|
|
|
|
info->highest_pts = MP_PTS_MAX(info->highest_pts, pts);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
demux: redo timed metadata
The old implementation didn't work for the OGG case. Discard the old
shit code (instead of fixing it), and write new shit code. The old code
was already over a year old, so it's about time to rewrite it for no
reason anyway.
While it's true that the old code appears to be broken, the main reason
to rewrite this is to make it simpler. While the amount of code seems to
be about the same, both the concept and the actual tag handling are
simpler. The result is probably a bit more correct.
The packet struct shrinks by 8 bytes. The fact that it wasted 8 bytes
per packet for a rather obscure use case was the reason I started this
at all (and when I found that OGG updates didn't work). While these 8
bytes aren't going to hurt, the packet struct was getting too bloated.
If you buffer a lot of data, these extra fields will add up. Still quite
some effort for 8 bytes. Fortunately, it's not like there are any
managers that need to be convinced whether it's worth doing. The freedom
to waste time on dumb shit.
The old implementation attached the current metadata to each packet.
When the decoder read the packet, the packet's metadata was made
current. The new implementation stores metadata as separate list, and
requires that the player frontend tells it the current playback time,
which will be used to find the currently valid metadata. In both cases,
the objective was to correctly update metadata even if a lot of data is
buffered ahead (and to update them correctly when seeking within the
demuxer cache).
The new implementation is actually slightly more correct, because it
uses the playback time for the metadata lookup. Consider if you have an
audio filter which buffers 15 seconds (unfortunately such a filter
exists), then the old code would update the current title 15 seconds too
early, while the new one does it correctly.
The new code also simplifies mixing the 3 metadata sources (global, per
stream, ICY). We assume these aren't mixed in a meaningful way. The old
code tried to be a bit more "exact". I didn't bother to look how the old
code did this, but the new code simply always "merges" with the previous
metadata, so if a newer tag removes a field, it's going to stick around
anyway.
I tried to keep it simple. Other approaches include making metadata a
special sh_stream with metadata packets. This would have been
conceptually clean, but the implementation would probably have been
unnatural (and doesn't match well with libavformat's API anyway). It
would have been nice to make the metadata updates chapter points (makes
a lot of sense for the intended use case, web radio current song
information), but I don't think it would have been a good idea to make
chapters suddenly so dynamic. (Still an idea to keep in mind; the new
code actually makes it easier to work towards this.)
You could mention how subtitles are timed metadata, and actually are
implemented as sparse packet streams in some formats. mp4 implements
chapters as special subtitle stream, AFAIK. (Ironically, this is very
not-ideal for files. It would be useful for streaming like web radio,
but mp4 is extremely bad for streaming by design for other reasons.)
2019-06-10 00:18:20 +00:00
|
|
|
if (st->event_flags & AVSTREAM_EVENT_FLAG_METADATA_UPDATED) {
|
|
|
|
st->event_flags = 0;
|
|
|
|
struct mp_tags *tags = talloc_zero(NULL, struct mp_tags);
|
2023-10-11 11:03:26 +00:00
|
|
|
mp_tags_move_from_av_dictionary(tags, &st->metadata);
|
demux: redo timed metadata
2019-06-10 00:18:20 +00:00
|
|
|
double pts = MP_PTS_OR_DEF(dp->pts, dp->dts);
|
|
|
|
demux_stream_tags_changed(demux, stream, tags, pts);
|
|
|
|
}
|
|
|
|
|
2018-09-07 13:12:24 +00:00
|
|
|
*mp_pkt = dp;
|
|
|
|
return true;
|
2004-04-11 14:26:04 +00:00
|
|
|
}
|
|
|
|
|
2016-02-28 18:14:23 +00:00
|
|
|
static void demux_seek_lavf(demuxer_t *demuxer, double seek_pts, int flags)
|
2011-07-03 12:42:04 +00:00
|
|
|
{
|
2004-04-11 17:20:52 +00:00
|
|
|
lavf_priv_t *priv = demuxer->priv;
|
2006-10-05 21:25:22 +00:00
|
|
|
int avsflags = 0;
|
2016-02-28 18:14:23 +00:00
|
|
|
int64_t seek_pts_av = 0;
|
demux_lavf: implement bad hack for backward playback of wav
2019-05-25 14:59:20 +00:00
|
|
|
int seek_stream = -1;
|
2014-11-25 18:07:52 +00:00
|
|
|
|
demux_lavf: compensate timestamp resets for OGG web radio streams
2019-06-09 21:39:03 +00:00
|
|
|
if (priv->any_ts_fixed) {
|
|
|
|
// helpful message to piss off users
|
|
|
|
MP_WARN(demuxer, "Some timestamps returned by the demuxer were linearized. "
|
|
|
|
"A low level seek was requested; this won't work due to "
|
|
|
|
"restrictions in libavformat's API. You may have more "
|
|
|
|
"luck by enabling or enlarging the mpv cache.\n");
|
|
|
|
}
|
|
|
|
|
2019-09-22 18:52:37 +00:00
|
|
|
if (priv->linearize_ts < 0)
|
|
|
|
priv->linearize_ts = 0;
|
|
|
|
|
2017-10-23 17:05:39 +00:00
|
|
|
if (!(flags & SEEK_FORWARD))
|
2009-03-19 03:25:12 +00:00
|
|
|
avsflags = AVSEEK_FLAG_BACKWARD;
|
2013-03-01 12:20:33 +00:00
|
|
|
|
2008-01-29 15:11:38 +00:00
|
|
|
if (flags & SEEK_FACTOR) {
|
2015-12-28 22:22:58 +00:00
|
|
|
struct stream *s = priv->stream;
|
2019-12-23 10:09:42 +00:00
|
|
|
int64_t end = s ? stream_get_size(s) : -1;
|
2014-05-24 12:04:09 +00:00
|
|
|
if (end > 0 && demuxer->ts_resets_possible &&
|
2015-03-20 21:10:00 +00:00
|
|
|
!(priv->avif_flags & AVFMT_NO_BYTE_SEEK))
|
2013-03-01 12:20:33 +00:00
|
|
|
{
|
|
|
|
avsflags |= AVSEEK_FLAG_BYTE;
|
2016-02-28 18:14:23 +00:00
|
|
|
seek_pts_av = end * seek_pts;
|
2013-03-01 12:20:33 +00:00
|
|
|
} else if (priv->avfc->duration != 0 &&
|
|
|
|
priv->avfc->duration != AV_NOPTS_VALUE)
|
|
|
|
{
|
2016-02-28 18:14:23 +00:00
|
|
|
seek_pts_av = seek_pts * priv->avfc->duration;
|
2013-03-01 12:20:33 +00:00
|
|
|
}
|
|
|
|
} else {
|
2017-10-23 17:05:39 +00:00
|
|
|
if (!(flags & SEEK_FORWARD))
|
2016-02-28 18:14:23 +00:00
|
|
|
seek_pts -= priv->seek_delay;
|
|
|
|
seek_pts_av = seek_pts * AV_TIME_BASE;
|
2013-03-01 12:20:33 +00:00
|
|
|
}
|
2012-12-08 12:12:46 +00:00
|
|
|
|
demux_lavf: implement bad hack for backward playback of wav
2019-05-25 14:59:20 +00:00
|
|
|
// Hack to make wav seeking "deterministic". Without this, features like
|
|
|
|
// backward playback won't work.
|
|
|
|
if (priv->pcm_seek_hack && !priv->pcm_seek_hack_packet_size) {
|
|
|
|
// This might for example be the initial seek. Fuck it up like the
|
|
|
|
// bullshit it is.
|
2022-12-03 23:12:43 +00:00
|
|
|
AVPacket *pkt = av_packet_alloc();
|
|
|
|
MP_HANDLE_OOM(pkt);
|
|
|
|
if (av_read_frame(priv->avfc, pkt) >= 0)
|
|
|
|
priv->pcm_seek_hack_packet_size = pkt->size;
|
|
|
|
av_packet_free(&pkt);
|
demux_lavf: implement bad hack for backward playback of wav
2019-05-25 14:59:20 +00:00
|
|
|
add_new_streams(demuxer);
|
|
|
|
}
|
|
|
|
if (priv->pcm_seek_hack && priv->pcm_seek_hack_packet_size &&
|
|
|
|
!(avsflags & AVSEEK_FLAG_BYTE))
|
|
|
|
{
|
|
|
|
int samples = priv->pcm_seek_hack_packet_size /
|
|
|
|
priv->pcm_seek_hack->codecpar->block_align;
|
|
|
|
if (samples > 0) {
|
|
|
|
MP_VERBOSE(demuxer, "using bullshit libavformat PCM seek hack\n");
|
|
|
|
double pts = seek_pts_av / (double)AV_TIME_BASE;
|
|
|
|
seek_pts_av = pts / av_q2d(priv->pcm_seek_hack->time_base);
|
|
|
|
int64_t align = seek_pts_av % samples;
|
|
|
|
seek_pts_av -= align;
|
|
|
|
seek_stream = priv->pcm_seek_hack->index;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
int r = av_seek_frame(priv->avfc, seek_stream, seek_pts_av, avsflags);
|
2016-07-24 16:41:55 +00:00
|
|
|
if (r < 0 && (avsflags & AVSEEK_FLAG_BACKWARD)) {
|
|
|
|
// When seeking before the beginning of the file, and seeking fails,
|
|
|
|
// try again without the backwards flag to make it seek to the
|
|
|
|
// beginning.
|
|
|
|
avsflags &= ~AVSEEK_FLAG_BACKWARD;
|
/* demux_lavf: implement bad hack for backward playback of wav
 *
 * This commit generally fixes backward playing in wav, at least in most
 * PCM cases.
 *
 * libavformat's wav demuxer (and actually all other raw PCM based
 * demuxers) has a specific behavior that breaks backward demuxing. The
 * same thing also breaks persistent seek ranges in the demuxer cache,
 * although that's less critical (it just means some cached data gets
 * discarded). The backward demuxing issue is fatal: it will log the message
 * "Demuxer not cooperating.", and then typically stop doing anything.
 *
 * Unlike modern media formats, these formats don't organize media data in
 * packets, but just wrap a monolithic byte stream that is described by a
 * header. This is good enough for PCM, which uses fixed frames (a single
 * sample for all audio channels), and for which it would be too expensive
 * to have per-frame headers.
 *
 * libavformat (and mpv) is heavily packet based, and using a single packet
 * for each PCM frame would cause too much overhead. So they typically
 * "bundle" multiple frames into a single packet. This packet size is
 * obviously arbitrary, and in libavformat's case hardcoded in its source
 * code.
 *
 * The problem is that seeking doesn't respect this arbitrary packet
 * boundary. Seeking is sample accurate; you can essentially seek inside a
 * packet. The resulting packets will not be aligned with previously
 * demuxed packets. This is normally OK.
 *
 * Backward seeking (and some other demuxer layer features) expects that
 * demuxing an earlier demuxed file position eventually results in the same
 * packets, regardless of the seeks that were done to get there. I like to
 * call this "deterministic" demuxing. Backward demuxing in particular
 * requires this to avoid overlaps, which would make it rather hard to get
 * continuous output.
 *
 * Fix this issue by detecting wav and hopefully other raw audio formats
 * with a heuristic (even PCM needs to be detected heuristically). Then, if
 * a seek is requested, align the seek timestamps on the guessed number of
 * samples in the audio packets returned by the demuxer.
 *
 * The heuristic excludes files with multiple streams. (Except "attachment"
 * video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on
 * WAV files.) Such files will inherently use the packet concept in some
 * way.
 *
 * We don't know how the demuxer chooses the internal packet size, but we
 * assume that it's fixed and aligned to PCM frame sizes. The frame size is
 * most likely given by block_align (the native wav frame size, according
 * to Microsoft). We possibly need to explicitly read and discard a packet
 * if the seek is done without reading anything before that. We ignore any
 * subsequent packet sizes; we need to avoid the very last packet, which
 * likely has a different size.
 *
 * This hack should be rather benign. In the worst case, it will "round"
 * the seek target a little, but the maximum rounding amount is bounded.
 * Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't
 * bother.
 *
 * An earlier commit fixed the same issue for mpv's demux_raw.
 *
 * An alternative, and probably much better, solution would be clipping
 * decoded data by timestamp. demux.c could allow the type of overlap the
 * wav demuxer introduces, and instruct the decoder to clip the output
 * against the last decoded timestamp. There's already infrastructure for
 * this (the demux_packet.end field) used by EDL/ordered chapters.
 * Although this sounds like a good solution, mpv unfortunately uses floats
 * for timestamps. The rounding errors break sample accuracy. Even if you
 * used integers, you'd need a timebase that is sample accurate (not always
 * easy, since EDL can merge tracks with different sample rates).
 */
        r = av_seek_frame(priv->avfc, seek_stream, seek_pts_av, avsflags);
    }

    if (r < 0) {
        char buf[180];
        av_strerror(r, buf, sizeof(buf));
        MP_VERBOSE(demuxer, "Seek failed (%s)\n", buf);
    }
/* demux_lavf: to get effective HLS bitrate
 *
 * In theory, this could be easily done with custom I/O. In practice, all
 * the halfassed garbage in FFmpeg shits itself and fucks up like there's
 * no tomorrow. There are several problems:
 *
 * 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot
 * that custom I/O can't do. hls.c even contains explicit checks to disable
 * important things if custom I/O is used! In particular, you can't use the
 * HTTP keepalive functionality (needed for somewhat decent HLS
 * performance), because some cranky asshole in the cursed FFmpeg dev.
 * community blocked it.
 *
 * 2. The implementation of nested I/O callbacks (io_open/io_close) is
 * bogus and halfassed (like everything in FFmpeg, really). It will call
 * io_open on some URLs without ever calling io_close. Instead, it'll call
 * avio_close() on the context directly. From what I can tell, avio_close()
 * is incompatible with custom I/O anyway (overwhelmed by their own
 * garbage, the FFmpeg devs created the io_close callback for this reason,
 * because they couldn't fix their own fucking garbage). This commit adds
 * some shitty workaround for this (technically triggers UB, but with that
 * garbage heap of a library we depend on it's not like it matters).
 *
 * 3. Even then, you can't proxy I/O contexts (see 1.), but we can just
 * keep track of the opened nested I/O contexts. The bytes_read field is
 * documented as not public, but reading it is literally the only way to
 * get what we want.
 *
 * A more reasonable approach would probably be using curl. It could
 * transparently handle the keep-alive thing, as well as propagating
 * cookies etc. (which doesn't work with the FFmpeg approach if you use
 * custom I/O). Of course it would be even better if there were an
 * independent HLS implementation anywhere. FFmpeg's HLS support is so
 * embarrassingly pathetic and just goes to show that they belong in the
 * past (multimedia from 2000-2010) and should either modernize or fuck
 * off. With FFmpeg's shit-crusted structures, toxic communities, and
 * retarded assholes denying progress, probably the latter. Did I already
 * mention that FFmpeg is a shit fucked steaming pile of garbage shit?
 *
 * And all just to get some basic I/O stats, that any proper HLS consumer
 * requires in order to implement adaptive streaming correctly (i.e.
 * browser based players, and nothing FFmshit based).
 */
    update_read_stats(demuxer);
}
static void demux_lavf_switched_tracks(struct demuxer *demuxer)
{
    select_tracks(demuxer, 0);
}

static void demux_close_lavf(demuxer_t *demuxer)
{
    lavf_priv_t *priv = demuxer->priv;
    if (priv) {
        // This will be a dangling pointer; but see below.
        AVIOContext *leaking = priv->avfc ? priv->avfc->pb : NULL;
/* demux_lavf: change license to LGPL (almost)
 *
 * Since this demuxer is based on code by michael, this file can become
 * LGPL only once the mpv core becomes LGPL; this is preparation for it.
 *
 * There were quite a lot of changes for rearranging preferred libavformat
 * vs. internal MPlayer demuxers, codec mappings, and filename extensions,
 * but all this got removed, so some of the relevant authors weren't asked.
 *
 * cehoyos, who disagreed with LGPL, made a few changes in the past (mostly
 * codec mapping and deinterlacing related things), but all of them were
 * removed, mostly due to libavformat API cleanups.
 *
 * adland, who could not be reached, did commit 057916ee65, but it's easy
 * to essentially revert the change (this is what the source changes in
 * this commit do), so we don't need to think about it.
 *
 * Chris Welton, who could not be reached, made a simple change in commit
 * 958c41d9b69. Fortunately, the API changed again, and his changes were
 * removed, so we don't need to think about this either.
 *
 * There is an anonymous contribution in commit 085f35f4b48 - since this
 * did not introduce any original code, and the probe code was heavily
 * rewritten multiple times, I don't consider it relevant.
 */
        avformat_close_input(&priv->avfc);
        // The ffmpeg garbage breaks its own API yet again: hls.c will call
        // io_open on the main playlist, but never calls io_close. This happens
        // to work out for us (since we don't really use custom I/O), but it's
        // still weird. Compensate.
        if (priv->num_nested == 1 && priv->nested[0].id == leaking)
            priv->num_nested = 0;
        if (priv->num_nested) {
            MP_WARN(demuxer, "Leaking %d nested connections (FFmpeg bug).\n",
                    priv->num_nested);
        }
        if (priv->pb)
            av_freep(&priv->pb->buffer);
        av_freep(&priv->pb);
        for (int n = 0; n < priv->num_streams; n++) {
            struct stream_info *info = priv->streams[n];
            if (info->sh)
                avcodec_parameters_free(&info->sh->codec->lav_codecpar);
        }
        if (priv->own_stream)
            free_stream(priv->stream);
        if (priv->av_opts)
            av_dict_free(&priv->av_opts);
/* demux_lavf: add support for libavdevice
 *
 * libavdevice supports various "special" video and audio inputs, such
 * as screen-capture or libavfilter filter graphs.
 *
 * libavdevice inputs are implemented as demuxers. They don't use the
 * custom stream callbacks (in AVFormatContext.pb). Instead, input
 * parameters are passed as filename. This means the mpv stream layer has
 * to be disabled. Do this by adding the pseudo stream handler avdevice://,
 * whose only purpose is passing the filename to demux_lavf, without
 * actually doing anything.
 *
 * Change the logic of how the filename is passed to libavformat. Remove
 * handling of the filename from demux_open_lavf() and move it to
 * lavf_check_file(). (This also fixes a possible bug when skipping the
 * "lavf://" prefix.)
 *
 * libavdevice now can be invoked by specifying demuxer and args as in:
 *   mpv avdevice://demuxer:args
 * The args are passed as filename to libavformat. When using libavdevice
 * demuxers, their actual meaning is highly implementation specific. They
 * don't refer to actual filenames.
 *
 * Note: libavdevice is disabled by default. There is one problem:
 * libavdevice pulls in libavfilter, which in turn causes symbol clashes
 * with mpv internals. The problem is that libavfilter includes a mplayer
 * filter bridge, which is used to interface with a set of nearly
 * unmodified mplayer filters copied into libavfilter. This filter bridge
 * uses the same symbol names as mplayer/mpv's filter chain, which results
 * in symbol clashes at link-time.
 *
 * This can be prevented by building ffmpeg with --disable-filter=mp, but
 * unfortunately this is not the default.
 *
 * This means linking to libavdevice (which in turn forces linking with
 * libavfilter by default) must be disabled. We try doing this by compiling
 * a test file that defines one of the clashing symbols (vf_mpi_clear).
 *
 * To enable libavdevice input, ffmpeg should be built with the options:
 *   --disable-filter=mp
 * and mpv with:
 *   --enable-libavdevice
 *
 * Originally, I tried to auto-detect it. But the resulting complications
 * in configure didn't seem worth the trouble.
 */
        talloc_free(priv);
        demuxer->priv = NULL;
    }
}

const demuxer_desc_t demuxer_desc_lavf = {
    .name = "lavf",
    .desc = "libavformat",
    .read_packet = demux_lavf_read_packet,
    .open = demux_open_lavf,
    .close = demux_close_lavf,
    .seek = demux_seek_lavf,
    .switched_tracks = demux_lavf_switched_tracks,
};