mpv/demux/demux_lavf.c

1425 lines
49 KiB
C
Raw Normal View History

/*
* Copyright (C) 2004 Michael Niedermayer <michaelni@gmx.at>
*
* This file is part of mpv.
*
* mpv is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* mpv is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with mpv. If not, see <http://www.gnu.org/licenses/>.
*/
#include <stdlib.h>
#include <limits.h>
Add improved relative seek mode When the new mode is active relative seeks are converted to absolute ones (current video pts + relative seek amount) and forward/backward flag before being sent to the demuxer. This mode is used if the demuxer has set the accurate_seek field in the demuxer struct and there is a video stream. At the moment the mkv and lavf demuxers enable the flag. This change is useful for later Matroska ordered chapter support (and for more general timelime editing), but also fixes problems in existing functionality. The main problem with the old mode, where relative seeks are passed directly to the demuxer, is that the user wants to seek relative to the currently displayed position but the demuxer does not know what that position is. There can be an arbitrary amount of buffering between the demuxer read position and what is displayed on the screen. In some situations this makes small seeks fail to move backward at all (especially visible at high playback speed, when audio needs to be demuxed and decoded further ahead to fill the output buffers after resampling). Some container formats that can be used with the lavf demuxer do not always have reliable timestamps that could be used for unambiguous absolute seeking. However I made the demuxer always enable the new mode because it already converted all seeks to absolute ones before sending them to libavformat, so cases without reliable absolute seeks were failing already and this should only improve the working cases.
2009-03-19 03:25:12 +00:00
#include <stdbool.h>
#include <string.h>
#include <errno.h>
core: fix DVD subtitle selection Add all subtitle tracks as reported by libdvdread at playback start. Display language for subtitle and audio tracks. This commit restores these features to the state when demux_mpg was default for DVD playback, and makes them work with demux_lavf and the recent changes to subtitle selection in the frontend. demux_mpg, which was the default demuxer for DVD playback, reordered the subtitle streams according to the "logical" subtitle track number, which conforms to the track layout reported by libdvdread, and is what stream_dvd expects for the STREAM_CTRL_GET_LANG call. demux_lavf, on the other hand, adds the streams in the order it encounters them in the MPEG stream. It seems this order is essentially random, and can't be mapped easily to what stream_dvd expects. Solve this by making demux_lavf hand out the MPEG stream IDs (using the demuxer_id field). The MPEG IDs are mapped by mplayer.c by special casing DVD playback (map_id_from/to_demuxer() functions). This mapping is essentially the same what demux_mpg did. Making demux_lavf reorder the streams is out of the question, because its stream handling is already messy enough. (Note that demux_lavf doesn't export stream IDs for other formats, because most time libavformat demuxers do not set AVStream.id, and we don't know which demuxers do. But we know that MPEG is safe.) Another major complication is that subtitle tracks are added lazily, as soon as the demuxer encounters the first subtitle packet for a given subtitle stream. Add the streams in advance. If a yet non-existent stream is selected, demux_lavf must be made to auto-select that subtitle stream as soon as it is added. Otherwise, the first subtitle packet would be lost. This is done by DEMUXER_CTRL_PRESELECT_SUBTITLE. demux_mpg didn't need this: the frontend code could just set ds->id to the desired stream number. But demux_lavf's stream IDs don't map directly to the stream number as used by libdvdread, which is why this hack is needed.
2012-08-30 14:43:31 +00:00
#include <assert.h>
#include "config.h"
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavutil/avutil.h>
#include <libavutil/avstring.h>
#include <libavutil/display.h>
#include <libavutil/dovi_meta.h>
#include <libavutil/mathematics.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
#include <libavutil/replaygain.h>
#include "audio/chmap_avchannel.h"
#include "common/common.h"
#include "common/msg.h"
#include "common/tags.h"
#include "common/av_common.h"
#include "misc/bstr.h"
#include "misc/charset_conv.h"
#include "misc/thread_tools.h"
#include "stream/stream.h"
#include "demux.h"
#include "stheader.h"
#include "options/m_config.h"
#include "options/m_option.h"
#include "options/path.h"
#ifndef AV_DISPOSITION_TIMED_THUMBNAILS
#define AV_DISPOSITION_TIMED_THUMBNAILS 0
#endif
#ifndef AV_DISPOSITION_STILL_IMAGE
#define AV_DISPOSITION_STILL_IMAGE 0
#endif
#define INITIAL_PROBE_SIZE STREAM_BUFFER_SIZE
#define PROBE_BUF_SIZE (10 * 1024 * 1024)
// Should correspond to IO_BUFFER_SIZE in libavformat/aviobuf.c (not public)
// libavformat (almost) always reads data in blocks of this size.
#define BIO_BUFFER_SIZE 32768
#define OPT_BASE_STRUCT struct demux_lavf_opts
struct demux_lavf_opts {
int probesize;
int probeinfo;
int probescore;
float analyzeduration;
int buffersize;
bool allow_mimetype;
char *format;
char **avopts;
bool hacks;
char *sub_cp;
int rtsp_transport;
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
int linearize_ts;
bool propagate_opts;
};
const struct m_sub_options demux_lavf_conf = {
.opts = (const m_option_t[]) {
{"demuxer-lavf-probesize", OPT_INT(probesize), M_RANGE(32, INT_MAX)},
{"demuxer-lavf-probe-info", OPT_CHOICE(probeinfo,
{"no", 0}, {"yes", 1}, {"auto", -1}, {"nostreams", -2})},
{"demuxer-lavf-format", OPT_STRING(format)},
{"demuxer-lavf-analyzeduration", OPT_FLOAT(analyzeduration),
M_RANGE(0, 3600)},
{"demuxer-lavf-buffersize", OPT_INT(buffersize),
M_RANGE(1, 10 * 1024 * 1024), OPTDEF_INT(BIO_BUFFER_SIZE)},
{"demuxer-lavf-allow-mimetype", OPT_BOOL(allow_mimetype)},
{"demuxer-lavf-probescore", OPT_INT(probescore),
M_RANGE(1, AVPROBE_SCORE_MAX)},
{"demuxer-lavf-hacks", OPT_BOOL(hacks)},
{"demuxer-lavf-o", OPT_KEYVALUELIST(avopts)},
{"sub-codepage", OPT_STRING(sub_cp)},
{"rtsp-transport", OPT_CHOICE(rtsp_transport,
{"lavf", 0},
{"udp", 1},
{"tcp", 2},
{"http", 3},
{"udp_multicast", 4})},
{"demuxer-lavf-linearize-timestamps", OPT_CHOICE(linearize_ts,
{"no", 0}, {"auto", -1}, {"yes", 1})},
{"demuxer-lavf-propagate-opts", OPT_BOOL(propagate_opts)},
{0}
},
.size = sizeof(struct demux_lavf_opts),
.defaults = &(const struct demux_lavf_opts){
.probeinfo = -1,
.allow_mimetype = true,
.hacks = true,
// AVPROBE_SCORE_MAX/4 + 1 is the "recommended" limit. Below that, the
// user is supposed to retry with larger probe sizes until a higher
// value is reached.
.probescore = AVPROBE_SCORE_MAX/4 + 1,
.sub_cp = "auto",
.rtsp_transport = 2,
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
.linearize_ts = -1,
.propagate_opts = true,
},
};
struct format_hack {
const char *ff_name;
const char *mime_type;
int probescore;
float analyzeduration;
bool skipinfo : 1; // skip avformat_find_stream_info()
unsigned int if_flags; // additional AVInputFormat.flags flags
bool max_probe : 1; // use probescore only if max. probe size reached
bool ignore : 1; // blacklisted
bool no_stream : 1; // do not wrap struct stream as AVIOContext
bool use_stream_ids : 1; // has a meaningful native stream IDs (export it)
bool fully_read : 1; // set demuxer.fully_read flag
bool detect_charset : 1; // format is a small text file, possibly not UTF8
// Do not confuse player's position estimation (position is into external
// segment, with e.g. HLS, player knows about the playlist main file only).
bool clear_filepos : 1;
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
bool linearize_audio_ts : 1;// compensate timestamp resets (audio only)
bool is_network : 1;
bool no_seek : 1;
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
bool no_pcm_seek : 1;
bool no_seek_on_no_duration : 1;
bool readall_on_no_streamseek : 1;
bool first_frame_only : 1;
};
#define BLACKLIST(fmt) {fmt, .ignore = true}
#define TEXTSUB(fmt) {fmt, .fully_read = true, .detect_charset = true}
#define TEXTSUB_UTF8(fmt) {fmt, .fully_read = true}
static const struct format_hack format_hacks[] = {
// for webradios
{"aac", "audio/aacp", 25, 0.5},
2013-06-24 22:55:20 +00:00
{"aac", "audio/aac", 25, 0.5},
// some mp3 files don't detect correctly (usually id3v2 too large)
{"mp3", "audio/mpeg", 24, 0.5},
{"mp3", NULL, 24, .max_probe = true},
{"hls", .no_stream = true, .clear_filepos = true},
{"dash", .no_stream = true, .clear_filepos = true},
{"sdp", .clear_filepos = true, .is_network = true, .no_seek = true},
{"mpeg", .use_stream_ids = true},
{"mpegts", .use_stream_ids = true},
{"mxf", .use_stream_ids = true},
{"avi", .use_stream_ids = true},
{"asf", .use_stream_ids = true},
{"mp4", .skipinfo = true, .no_pcm_seek = true, .use_stream_ids = true},
{"matroska", .skipinfo = true, .no_pcm_seek = true, .use_stream_ids = true},
{"v4l2", .no_seek = true},
{"rtsp", .no_seek_on_no_duration = true},
// In theory, such streams might contain timestamps, but virtually none do.
{"h264", .if_flags = AVFMT_NOTIMESTAMPS },
{"hevc", .if_flags = AVFMT_NOTIMESTAMPS },
{"vvc", .if_flags = AVFMT_NOTIMESTAMPS },
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
// Some Ogg shoutcast streams are essentially concatenated OGG files. They
// reset timestamps, which causes all sorts of problems.
{"ogg", .linearize_audio_ts = true, .use_stream_ids = true},
// Ignore additional metadata as frames from some single frame JPEGs
// (e.g. gain map)
{"jpeg_pipe", .first_frame_only = true},
// At some point, FFmpeg lost the ability to read gif from unseekable
// streams.
{"gif", .readall_on_no_streamseek = true},
TEXTSUB("aqtitle"), TEXTSUB("jacosub"), TEXTSUB("microdvd"),
TEXTSUB("mpl2"), TEXTSUB("mpsub"), TEXTSUB("pjs"), TEXTSUB("realtext"),
TEXTSUB("sami"), TEXTSUB("srt"), TEXTSUB("stl"), TEXTSUB("subviewer"),
TEXTSUB("subviewer1"), TEXTSUB("vplayer"), TEXTSUB("ass"),
TEXTSUB_UTF8("webvtt"),
// Useless non-sense, sometimes breaks MLP2 subreader.c fallback
BLACKLIST("tty"),
// Let's open files with extremely generic extensions (.bin) with a
// demuxer that doesn't have a probe function! NO.
BLACKLIST("bin"),
// Useless, does not work with custom streams.
BLACKLIST("image2"),
{0}
};
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
struct nested_stream {
AVIOContext *id;
int64_t last_bytes;
};
struct stream_info {
struct sh_stream *sh;
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
double last_key_pts;
double highest_pts;
double ts_offset;
};
typedef struct lavf_priv {
struct stream *stream;
bool own_stream;
char *filename;
struct format_hack format_hack;
const AVInputFormat *avif;
int avif_flags;
AVFormatContext *avfc;
AVIOContext *pb;
struct stream_info **streams; // NULL for unknown streams
int num_streams;
char *mime_type;
double seek_delay;
struct demux_lavf_opts *opts;
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
bool pcm_seek_hack_disabled;
AVStream *pcm_seek_hack;
int pcm_seek_hack_packet_size;
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
int linearize_ts;
bool any_ts_fixed;
int retry_counter;
AVDictionary *av_opts;
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
// Proxying nested streams.
struct nested_stream *nested;
int num_nested;
int (*default_io_open)(struct AVFormatContext *s, AVIOContext **pb,
const char *url, int flags, AVDictionary **options);
int (*default_io_close2)(struct AVFormatContext *s, AVIOContext *pb);
} lavf_priv_t;
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
static void update_read_stats(struct demuxer *demuxer)
{
lavf_priv_t *priv = demuxer->priv;
for (int n = 0; n < priv->num_nested; n++) {
struct nested_stream *nest = &priv->nested[n];
int64_t cur = nest->id->bytes_read;
int64_t new = cur - nest->last_bytes;
nest->last_bytes = cur;
demux_report_unbuffered_read_bytes(demuxer, new);
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
}
}
// At least mp4 has name="mov,mp4,m4a,3gp,3g2,mj2", so we split the name
// on "," in general.
static bool matches_avinputformat_name(struct lavf_priv *priv,
const char *name)
{
const char *avifname = priv->avif->name;
while (1) {
const char *next = strchr(avifname, ',');
if (!next)
return !strcmp(avifname, name);
int len = next - avifname;
if (len == strlen(name) && !memcmp(avifname, name, len))
return true;
avifname = next + 1;
}
}
static int mp_read(void *opaque, uint8_t *buf, int size)
{
struct demuxer *demuxer = opaque;
lavf_priv_t *priv = demuxer->priv;
struct stream *stream = priv->stream;
if (!stream)
return 0;
demux: make webm dash work by using init fragment on all demuxers Retarded webshit streaming protocols (well, DASH) chop a stream into small fragments, and move unchanging header parts to an "init" fragment to save some bytes (in the case at hand about 300 bytes for each fragment that is 100KB-200KB, sure was worth it, fucking idiots). Since mpv uses an even more retarded hack to inefficiently emulate DASH through EDL, it opens a new demuxer for every fragment. Thus the fragment needs to be virtually concatenated with the init fragment. (To be fair, I'm not sure whether the alternative, reusing the demuxer and letting it see a stream of byte-wise concatenated fragmenmts, would actually be saner.) demux_lavc.c contained a hack for this. Unfortunately, a certain shitty streaming site by an evil company, that will bestow dytopia upon us soon enough, sometimes serves webm based DASH instead of the expected mp4 DASH. And for some reason, libavformat's mkv demuxer can't handle the init fragment or rejects it for some reason. Since I'd rather eat mushrooms grown in Chernobyl than debugging, hacking, or (god no) contributing to FFmpeg, and since Chernobyl is so far away, make it work with our builtin mkv demuxer instead. This is not hard. We just need to copy the hack in demux_lavf.c to demux_mkv.c. Since I'm not _that_ much of a dumbfuck to actually do this, remove the shitty gross demux_lavf.c hack, and replace it by a slightly less bad generic implementation (stream_concat.c from the previous commit), and use it on all demuxers. Although this requires much more code, this frees demux_lavf.c from a hack, and doesn't require adding a duplicated one to demux_mkv.c, so to the naive eye this seems to be a much better outcome. Regarding the code, for some reason stream_memory_open() is never meant to fail, while stream_concat_open() can in extremely obscure situations, and (currently) not in this case, but we handle failure of it anyway. Yep.
2019-06-19 17:38:46 +00:00
int ret = stream_read_partial(stream, buf, size);
MP_TRACE(demuxer, "%d=mp_read(%p, %p, %d), pos: %"PRId64", eof:%d\n",
ret, stream, buf, size, stream_tell(stream), stream->eof);
return ret ? ret : AVERROR_EOF;
}
static int64_t mp_seek(void *opaque, int64_t pos, int whence)
{
struct demuxer *demuxer = opaque;
lavf_priv_t *priv = demuxer->priv;
struct stream *stream = priv->stream;
if (!stream)
return -1;
MP_TRACE(demuxer, "mp_seek(%p, %"PRId64", %s)\n", stream, pos,
whence == SEEK_END ? "end" :
whence == SEEK_CUR ? "cur" :
whence == SEEK_SET ? "set" : "size");
if (whence == SEEK_END || whence == AVSEEK_SIZE) {
int64_t end = stream_get_size(stream);
if (end < 0)
return -1;
if (whence == AVSEEK_SIZE)
return end;
pos += end;
} else if (whence == SEEK_CUR) {
demux: make webm dash work by using init fragment on all demuxers Retarded webshit streaming protocols (well, DASH) chop a stream into small fragments, and move unchanging header parts to an "init" fragment to save some bytes (in the case at hand about 300 bytes for each fragment that is 100KB-200KB, sure was worth it, fucking idiots). Since mpv uses an even more retarded hack to inefficiently emulate DASH through EDL, it opens a new demuxer for every fragment. Thus the fragment needs to be virtually concatenated with the init fragment. (To be fair, I'm not sure whether the alternative, reusing the demuxer and letting it see a stream of byte-wise concatenated fragmenmts, would actually be saner.) demux_lavc.c contained a hack for this. Unfortunately, a certain shitty streaming site by an evil company, that will bestow dytopia upon us soon enough, sometimes serves webm based DASH instead of the expected mp4 DASH. And for some reason, libavformat's mkv demuxer can't handle the init fragment or rejects it for some reason. Since I'd rather eat mushrooms grown in Chernobyl than debugging, hacking, or (god no) contributing to FFmpeg, and since Chernobyl is so far away, make it work with our builtin mkv demuxer instead. This is not hard. We just need to copy the hack in demux_lavf.c to demux_mkv.c. Since I'm not _that_ much of a dumbfuck to actually do this, remove the shitty gross demux_lavf.c hack, and replace it by a slightly less bad generic implementation (stream_concat.c from the previous commit), and use it on all demuxers. Although this requires much more code, this frees demux_lavf.c from a hack, and doesn't require adding a duplicated one to demux_mkv.c, so to the naive eye this seems to be a much better outcome. Regarding the code, for some reason stream_memory_open() is never meant to fail, while stream_concat_open() can in extremely obscure situations, and (currently) not in this case, but we handle failure of it anyway. Yep.
2019-06-19 17:38:46 +00:00
pos += stream_tell(stream);
} else if (whence != SEEK_SET) {
return -1;
}
if (pos < 0)
return -1;
int64_t current_pos = stream_tell(stream);
demux: make webm dash work by using init fragment on all demuxers Retarded webshit streaming protocols (well, DASH) chop a stream into small fragments, and move unchanging header parts to an "init" fragment to save some bytes (in the case at hand about 300 bytes for each fragment that is 100KB-200KB, sure was worth it, fucking idiots). Since mpv uses an even more retarded hack to inefficiently emulate DASH through EDL, it opens a new demuxer for every fragment. Thus the fragment needs to be virtually concatenated with the init fragment. (To be fair, I'm not sure whether the alternative, reusing the demuxer and letting it see a stream of byte-wise concatenated fragmenmts, would actually be saner.) demux_lavc.c contained a hack for this. Unfortunately, a certain shitty streaming site by an evil company, that will bestow dytopia upon us soon enough, sometimes serves webm based DASH instead of the expected mp4 DASH. And for some reason, libavformat's mkv demuxer can't handle the init fragment or rejects it for some reason. Since I'd rather eat mushrooms grown in Chernobyl than debugging, hacking, or (god no) contributing to FFmpeg, and since Chernobyl is so far away, make it work with our builtin mkv demuxer instead. This is not hard. We just need to copy the hack in demux_lavf.c to demux_mkv.c. Since I'm not _that_ much of a dumbfuck to actually do this, remove the shitty gross demux_lavf.c hack, and replace it by a slightly less bad generic implementation (stream_concat.c from the previous commit), and use it on all demuxers. Although this requires much more code, this frees demux_lavf.c from a hack, and doesn't require adding a duplicated one to demux_mkv.c, so to the naive eye this seems to be a much better outcome. Regarding the code, for some reason stream_memory_open() is never meant to fail, while stream_concat_open() can in extremely obscure situations, and (currently) not in this case, but we handle failure of it anyway. Yep.
2019-06-19 17:38:46 +00:00
if (stream_seek(stream, pos) == 0) {
stream_seek(stream, current_pos);
return -1;
}
return pos;
}
static int64_t mp_read_seek(void *opaque, int stream_idx, int64_t ts, int flags)
{
struct demuxer *demuxer = opaque;
lavf_priv_t *priv = demuxer->priv;
struct stream *stream = priv->stream;
struct stream_avseek cmd = {
.stream_index = stream_idx,
.timestamp = ts,
.flags = flags,
};
if (stream && stream_control(stream, STREAM_CTRL_AVSEEK, &cmd) == STREAM_OK) {
stream_drop_buffers(stream);
return 0;
}
return AVERROR(ENOSYS);
}
static void list_formats(struct demuxer *demuxer)
{
MP_INFO(demuxer, "Available lavf input formats:\n");
const AVInputFormat *fmt;
void *iter = NULL;
while ((fmt = av_demuxer_iterate(&iter)))
MP_INFO(demuxer, "%15s : %s\n", fmt->name, fmt->long_name);
}
static void convert_charset(struct demuxer *demuxer)
{
lavf_priv_t *priv = demuxer->priv;
char *cp = priv->opts->sub_cp;
if (!cp || !cp[0] || mp_charset_is_utf8(cp))
return;
bstr data = stream_read_complete(priv->stream, NULL, 128 * 1024 * 1024);
if (!data.start) {
MP_WARN(demuxer, "File too big (or error reading) - skip charset probing.\n");
return;
}
void *alloc = data.start;
cp = (char *)mp_charset_guess(priv, demuxer->log, data, cp, 0);
if (cp && !mp_charset_is_utf8(cp))
MP_INFO(demuxer, "Using subtitle charset: %s\n", cp);
// libavformat transparently converts UTF-16 to UTF-8
if (!mp_charset_is_utf16(cp) && !mp_charset_is_utf8(cp)) {
bstr conv = mp_iconv_to_utf8(demuxer->log, data, cp, MP_ICONV_VERBOSE);
if (conv.start && conv.start != data.start)
talloc_steal(alloc, conv.start);
if (conv.start)
data = conv;
}
if (data.start) {
priv->stream = stream_memory_open(demuxer->global, data.start, data.len);
priv->own_stream = true;
}
talloc_free(alloc);
}
static char *remove_prefix(char *s, const char *const *prefixes)
demux_lavf: add support for libavdevice libavdevice supports various "special" video and audio inputs, such as screen-capture or libavfilter filter graphs. libavdevice inputs are implemented as demuxers. They don't use the custom stream callbacks (in AVFormatContext.pb). Instead, input parameters are passed as filename. This means the mpv stream layer has to be disabled. Do this by adding the pseudo stream handler avdevice://, whose only purpose is passing the filename to demux_lavf, without actually doing anything. Change the logic how the filename is passed to libavformat. Remove handling of the filename from demux_open_lavf() and move it to lavf_check_file(). (This also fixes a possible bug when skipping the "lavf://" prefix.) libavdevice now can be invoked by specifying demuxer and args as in: mpv avdevice://demuxer:args The args are passed as filename to libavformat. When using libavdevice demuxers, their actual meaning is highly implementation specific. They don't refer to actual filenames. Note: libavdevice is disabled by default. There is one problem: libavdevice pulls in libavfilter, which in turn causes symbol clashes with mpv internals. The problem is that libavfilter includes a mplayer filter bridge, which is used to interface with a set of nearly unmodified mplayer filters copied into libavfilter. This filter bridge uses the same symbol names as mplayer/mpv's filter chain, which results in symbol clashes at link-time. This can be prevented by building ffmpeg with --disable-filter=mp, but unfortunately this is not the default. This means linking to libavdevice (which in turn forces linking with libavfilter by default) must be disabled. We try doing this by compiling a test file that defines one of the clashing symbols (vf_mpi_clear). To enable libavdevice input, ffmpeg should be built with the options: --disable-filter=mp and mpv with: --enable-libavdevice Originally, I tried to auto-detect it. But the resulting complications in configure did't seem worth the trouble.
2012-11-30 17:41:04 +00:00
{
for (int n = 0; prefixes[n]; n++) {
int len = strlen(prefixes[n]);
if (strncmp(s, prefixes[n], len) == 0)
return s + len;
}
return s;
}
static const char *const prefixes[] =
demux_lavf: add support for libavdevice libavdevice supports various "special" video and audio inputs, such as screen-capture or libavfilter filter graphs. libavdevice inputs are implemented as demuxers. They don't use the custom stream callbacks (in AVFormatContext.pb). Instead, input parameters are passed as filename. This means the mpv stream layer has to be disabled. Do this by adding the pseudo stream handler avdevice://, whose only purpose is passing the filename to demux_lavf, without actually doing anything. Change the logic how the filename is passed to libavformat. Remove handling of the filename from demux_open_lavf() and move it to lavf_check_file(). (This also fixes a possible bug when skipping the "lavf://" prefix.) libavdevice now can be invoked by specifying demuxer and args as in: mpv avdevice://demuxer:args The args are passed as filename to libavformat. When using libavdevice demuxers, their actual meaning is highly implementation specific. They don't refer to actual filenames. Note: libavdevice is disabled by default. There is one problem: libavdevice pulls in libavfilter, which in turn causes symbol clashes with mpv internals. The problem is that libavfilter includes a mplayer filter bridge, which is used to interface with a set of nearly unmodified mplayer filters copied into libavfilter. This filter bridge uses the same symbol names as mplayer/mpv's filter chain, which results in symbol clashes at link-time. This can be prevented by building ffmpeg with --disable-filter=mp, but unfortunately this is not the default. This means linking to libavdevice (which in turn forces linking with libavfilter by default) must be disabled. We try doing this by compiling a test file that defines one of the clashing symbols (vf_mpi_clear). To enable libavdevice input, ffmpeg should be built with the options: --disable-filter=mp and mpv with: --enable-libavdevice Originally, I tried to auto-detect it. But the resulting complications in configure did't seem worth the trouble.
2012-11-30 17:41:04 +00:00
{"ffmpeg://", "lavf://", "avdevice://", "av://", NULL};
static int lavf_check_file(demuxer_t *demuxer, enum demux_check check)
{
2015-12-17 00:00:54 +00:00
lavf_priv_t *priv = demuxer->priv;
struct demux_lavf_opts *lavfdopts = priv->opts;
struct stream *s = priv->stream;
priv->filename = remove_prefix(s->url, prefixes);
demux_lavf: add support for libavdevice libavdevice supports various "special" video and audio inputs, such as screen-capture or libavfilter filter graphs. libavdevice inputs are implemented as demuxers. They don't use the custom stream callbacks (in AVFormatContext.pb). Instead, input parameters are passed as filename. This means the mpv stream layer has to be disabled. Do this by adding the pseudo stream handler avdevice://, whose only purpose is passing the filename to demux_lavf, without actually doing anything. Change the logic how the filename is passed to libavformat. Remove handling of the filename from demux_open_lavf() and move it to lavf_check_file(). (This also fixes a possible bug when skipping the "lavf://" prefix.) libavdevice now can be invoked by specifying demuxer and args as in: mpv avdevice://demuxer:args The args are passed as filename to libavformat. When using libavdevice demuxers, their actual meaning is highly implementation specific. They don't refer to actual filenames. Note: libavdevice is disabled by default. There is one problem: libavdevice pulls in libavfilter, which in turn causes symbol clashes with mpv internals. The problem is that libavfilter includes a mplayer filter bridge, which is used to interface with a set of nearly unmodified mplayer filters copied into libavfilter. This filter bridge uses the same symbol names as mplayer/mpv's filter chain, which results in symbol clashes at link-time. This can be prevented by building ffmpeg with --disable-filter=mp, but unfortunately this is not the default. This means linking to libavdevice (which in turn forces linking with libavfilter by default) must be disabled. We try doing this by compiling a test file that defines one of the clashing symbols (vf_mpi_clear). To enable libavdevice input, ffmpeg should be built with the options: --disable-filter=mp and mpv with: --enable-libavdevice Originally, I tried to auto-detect it. But the resulting complications in configure did't seem worth the trouble.
2012-11-30 17:41:04 +00:00
char *avdevice_format = NULL;
if (s->info && strcmp(s->info->name, "avdevice") == 0) {
demux_lavf: add support for libavdevice libavdevice supports various "special" video and audio inputs, such as screen-capture or libavfilter filter graphs. libavdevice inputs are implemented as demuxers. They don't use the custom stream callbacks (in AVFormatContext.pb). Instead, input parameters are passed as filename. This means the mpv stream layer has to be disabled. Do this by adding the pseudo stream handler avdevice://, whose only purpose is passing the filename to demux_lavf, without actually doing anything. Change the logic how the filename is passed to libavformat. Remove handling of the filename from demux_open_lavf() and move it to lavf_check_file(). (This also fixes a possible bug when skipping the "lavf://" prefix.) libavdevice now can be invoked by specifying demuxer and args as in: mpv avdevice://demuxer:args The args are passed as filename to libavformat. When using libavdevice demuxers, their actual meaning is highly implementation specific. They don't refer to actual filenames. Note: libavdevice is disabled by default. There is one problem: libavdevice pulls in libavfilter, which in turn causes symbol clashes with mpv internals. The problem is that libavfilter includes a mplayer filter bridge, which is used to interface with a set of nearly unmodified mplayer filters copied into libavfilter. This filter bridge uses the same symbol names as mplayer/mpv's filter chain, which results in symbol clashes at link-time. This can be prevented by building ffmpeg with --disable-filter=mp, but unfortunately this is not the default. This means linking to libavdevice (which in turn forces linking with libavfilter by default) must be disabled. We try doing this by compiling a test file that defines one of the clashing symbols (vf_mpi_clear). To enable libavdevice input, ffmpeg should be built with the options: --disable-filter=mp and mpv with: --enable-libavdevice Originally, I tried to auto-detect it. But the resulting complications in configure did't seem worth the trouble.
2012-11-30 17:41:04 +00:00
// always require filename in the form "format:filename"
char *sep = strchr(priv->filename, ':');
if (!sep) {
MP_FATAL(demuxer, "Must specify filename in 'format:filename' form\n");
return -1;
demux_lavf: add support for libavdevice libavdevice supports various "special" video and audio inputs, such as screen-capture or libavfilter filter graphs. libavdevice inputs are implemented as demuxers. They don't use the custom stream callbacks (in AVFormatContext.pb). Instead, input parameters are passed as filename. This means the mpv stream layer has to be disabled. Do this by adding the pseudo stream handler avdevice://, whose only purpose is passing the filename to demux_lavf, without actually doing anything. Change the logic how the filename is passed to libavformat. Remove handling of the filename from demux_open_lavf() and move it to lavf_check_file(). (This also fixes a possible bug when skipping the "lavf://" prefix.) libavdevice now can be invoked by specifying demuxer and args as in: mpv avdevice://demuxer:args The args are passed as filename to libavformat. When using libavdevice demuxers, their actual meaning is highly implementation specific. They don't refer to actual filenames. Note: libavdevice is disabled by default. There is one problem: libavdevice pulls in libavfilter, which in turn causes symbol clashes with mpv internals. The problem is that libavfilter includes a mplayer filter bridge, which is used to interface with a set of nearly unmodified mplayer filters copied into libavfilter. This filter bridge uses the same symbol names as mplayer/mpv's filter chain, which results in symbol clashes at link-time. This can be prevented by building ffmpeg with --disable-filter=mp, but unfortunately this is not the default. This means linking to libavdevice (which in turn forces linking with libavfilter by default) must be disabled. We try doing this by compiling a test file that defines one of the clashing symbols (vf_mpi_clear). To enable libavdevice input, ffmpeg should be built with the options: --disable-filter=mp and mpv with: --enable-libavdevice Originally, I tried to auto-detect it. But the resulting complications in configure did't seem worth the trouble.
2012-11-30 17:41:04 +00:00
}
avdevice_format = talloc_strndup(priv, priv->filename,
sep - priv->filename);
priv->filename = sep + 1;
}
char *mime_type = s->mime_type;
if (!lavfdopts->allow_mimetype || !mime_type)
mime_type = "";
const AVInputFormat *forced_format = NULL;
const char *format = lavfdopts->format;
if (!format)
format = s->lavf_type;
demux_lavf: add support for libavdevice libavdevice supports various "special" video and audio inputs, such as screen-capture or libavfilter filter graphs. libavdevice inputs are implemented as demuxers. They don't use the custom stream callbacks (in AVFormatContext.pb). Instead, input parameters are passed as filename. This means the mpv stream layer has to be disabled. Do this by adding the pseudo stream handler avdevice://, whose only purpose is passing the filename to demux_lavf, without actually doing anything. Change the logic how the filename is passed to libavformat. Remove handling of the filename from demux_open_lavf() and move it to lavf_check_file(). (This also fixes a possible bug when skipping the "lavf://" prefix.) libavdevice now can be invoked by specifying demuxer and args as in: mpv avdevice://demuxer:args The args are passed as filename to libavformat. When using libavdevice demuxers, their actual meaning is highly implementation specific. They don't refer to actual filenames. Note: libavdevice is disabled by default. There is one problem: libavdevice pulls in libavfilter, which in turn causes symbol clashes with mpv internals. The problem is that libavfilter includes a mplayer filter bridge, which is used to interface with a set of nearly unmodified mplayer filters copied into libavfilter. This filter bridge uses the same symbol names as mplayer/mpv's filter chain, which results in symbol clashes at link-time. This can be prevented by building ffmpeg with --disable-filter=mp, but unfortunately this is not the default. This means linking to libavdevice (which in turn forces linking with libavfilter by default) must be disabled. We try doing this by compiling a test file that defines one of the clashing symbols (vf_mpi_clear). To enable libavdevice input, ffmpeg should be built with the options: --disable-filter=mp and mpv with: --enable-libavdevice Originally, I tried to auto-detect it. But the resulting complications in configure did't seem worth the trouble.
2012-11-30 17:41:04 +00:00
if (!format)
format = avdevice_format;
if (format) {
if (strcmp(format, "help") == 0) {
list_formats(demuxer);
return -1;
}
forced_format = av_find_input_format(format);
if (!forced_format) {
MP_FATAL(demuxer, "Unknown lavf format %s\n", format);
return -1;
}
}
// HLS streams seems to be not well tagged, so matching mime type is not
// enough. Strip URL parameters and match extension.
bstr ext = bstr_get_ext(bstr_split(bstr0(priv->filename), "?#", NULL));
AVProbeData avpd = {
// Disable file-extension matching with normal checks, except for HLS
.filename = !bstrcasecmp0(ext, "m3u8") || !bstrcasecmp0(ext, "m3u") ||
check <= DEMUX_CHECK_REQUEST ? priv->filename : "",
.buf_size = 0,
.buf = av_mallocz(PROBE_BUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE),
.mime_type = lavfdopts->allow_mimetype ? mime_type : NULL,
};
2014-12-12 16:28:22 +00:00
if (!avpd.buf)
return -1;
bool final_probe = false;
do {
int score = 0;
if (forced_format) {
priv->avif = forced_format;
score = AVPROBE_SCORE_MAX;
} else {
int nsize = av_clip(avpd.buf_size * 2, INITIAL_PROBE_SIZE,
PROBE_BUF_SIZE);
stream: turn into a ring buffer, make size configurable In some corner cases (see #6802), it can be beneficial to use a larger stream buffer size. Use this as argument to rewrite everything for no reason. Turn stream.c itself into a ring buffer, with configurable size. The latter would have been easily achievable with minimal changes, and the ring buffer is the hard part. There is no reason to have a ring buffer at all, except possibly if ffmpeg don't fix their awful mp4 demuxer, and some subtle issues with demux_mkv.c wanting to seek back by small offsets (the latter was handled with small stream_peek() calls, which are unneeded now). In addition, this turns small forward seeks into reads (where data is simply skipped). Before this commit, only stream_skip() did this (which also mean that stream_skip() simply calls stream_seek() now). Replace all stream_peek() calls with something else (usually stream_read_peek()). The function was a problem, because it returned a pointer to the internal buffer, which is now a ring buffer with wrapping. The new function just copies the data into a buffer, and in some cases requires callers to dynamically allocate memory. (The most common case, demux_lavf.c, required a separate buffer allocation anyway due to FFmpeg "idiosyncrasies".) This is the bulk of the demuxer_* changes. I'm not happy with this. There still isn't a good reason why there should be a ring buffer, that is complex, and most of the time just wastes half of the available memory. Maybe another rewrite soon. It also contains bugs; you're an alpha tester now.
2019-11-06 20:36:02 +00:00
nsize = stream_read_peek(s, avpd.buf, nsize);
if (nsize <= avpd.buf_size)
final_probe = true;
stream: turn into a ring buffer, make size configurable In some corner cases (see #6802), it can be beneficial to use a larger stream buffer size. Use this as argument to rewrite everything for no reason. Turn stream.c itself into a ring buffer, with configurable size. The latter would have been easily achievable with minimal changes, and the ring buffer is the hard part. There is no reason to have a ring buffer at all, except possibly if ffmpeg don't fix their awful mp4 demuxer, and some subtle issues with demux_mkv.c wanting to seek back by small offsets (the latter was handled with small stream_peek() calls, which are unneeded now). In addition, this turns small forward seeks into reads (where data is simply skipped). Before this commit, only stream_skip() did this (which also mean that stream_skip() simply calls stream_seek() now). Replace all stream_peek() calls with something else (usually stream_read_peek()). The function was a problem, because it returned a pointer to the internal buffer, which is now a ring buffer with wrapping. The new function just copies the data into a buffer, and in some cases requires callers to dynamically allocate memory. (The most common case, demux_lavf.c, required a separate buffer allocation anyway due to FFmpeg "idiosyncrasies".) This is the bulk of the demuxer_* changes. I'm not happy with this. There still isn't a good reason why there should be a ring buffer, that is complex, and most of the time just wastes half of the available memory. Maybe another rewrite soon. It also contains bugs; you're an alpha tester now.
2019-11-06 20:36:02 +00:00
avpd.buf_size = nsize;
priv->avif = av_probe_input_format2(&avpd, avpd.buf_size > 0, &score);
}
if (priv->avif) {
MP_VERBOSE(demuxer, "Found '%s' at score=%d size=%d%s.\n",
priv->avif->name, score, avpd.buf_size,
forced_format ? " (forced)" : "");
for (int n = 0; lavfdopts->hacks && format_hacks[n].ff_name; n++) {
const struct format_hack *entry = &format_hacks[n];
if (!matches_avinputformat_name(priv, entry->ff_name))
continue;
if (entry->mime_type && strcasecmp(entry->mime_type, mime_type) != 0)
continue;
priv->format_hack = *entry;
break;
}
// AVIF always needs to find stream info
if (bstrcasecmp0(ext, "avif") == 0)
priv->format_hack.skipinfo = false;
if (score >= lavfdopts->probescore)
break;
if (priv->format_hack.probescore &&
score >= priv->format_hack.probescore &&
(!priv->format_hack.max_probe || final_probe))
break;
}
priv->avif = NULL;
priv->format_hack = (struct format_hack){0};
} while (!final_probe);
av_free(avpd.buf);
if (priv->avif && !forced_format && priv->format_hack.ignore) {
MP_VERBOSE(demuxer, "Format blacklisted.\n");
priv->avif = NULL;
}
if (!priv->avif) {
MP_VERBOSE(demuxer, "No format found, try lowering probescore or forcing the format.\n");
return -1;
}
if (lavfdopts->hacks)
priv->avif_flags = priv->avif->flags | priv->format_hack.if_flags;
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
priv->linearize_ts = lavfdopts->linearize_ts;
if (priv->linearize_ts < 0 && !priv->format_hack.linearize_audio_ts)
priv->linearize_ts = 0;
demuxer->filetype = priv->avif->name;
if (priv->format_hack.detect_charset)
convert_charset(demuxer);
return 0;
}
static char *replace_idx_ext(void *ta_ctx, bstr f)
{
if (f.len < 4 || f.start[f.len - 4] != '.')
return NULL;
char *ext = bstr_endswith0(f, "IDX") ? "SUB" : "sub"; // match case
return talloc_asprintf(ta_ctx, "%.*s.%s", BSTR_P(bstr_splice(f, 0, -4)), ext);
}
static void guess_and_set_vobsub_name(struct demuxer *demuxer, AVDictionary **d)
{
lavf_priv_t *priv = demuxer->priv;
if (!matches_avinputformat_name(priv, "vobsub"))
return;
void *tmp = talloc_new(NULL);
bstr bfilename = bstr0(priv->filename);
char *subname = NULL;
if (mp_is_url(bfilename)) {
// It might be a http URL, which has additional parameters after the
// end of the actual file path.
bstr start, end;
if (bstr_split_tok(bfilename, "?", &start, &end)) {
subname = replace_idx_ext(tmp, start);
if (subname)
subname = talloc_asprintf(tmp, "%s?%.*s", subname, BSTR_P(end));
}
}
if (!subname)
subname = replace_idx_ext(tmp, bfilename);
if (!subname)
subname = talloc_asprintf(tmp, "%.*s.sub", BSTR_P(bfilename));
MP_VERBOSE(demuxer, "Assuming associated .sub file: %s\n", subname);
av_dict_set(d, "sub_name", subname, 0);
talloc_free(tmp);
}
static void select_tracks(struct demuxer *demuxer, int start)
{
lavf_priv_t *priv = demuxer->priv;
for (int n = start; n < priv->num_streams; n++) {
struct sh_stream *stream = priv->streams[n]->sh;
AVStream *st = priv->avfc->streams[n];
bool selected = stream && demux_stream_is_selected(stream) &&
!stream->attached_picture;
st->discard = selected ? AVDISCARD_DEFAULT : AVDISCARD_ALL;
}
}
static void export_replaygain(demuxer_t *demuxer, struct sh_stream *sh,
AVStream *st)
{
AVPacketSideData *side_data = st->codecpar->coded_side_data;
int nb_side_data = st->codecpar->nb_coded_side_data;
for (int i = 0; i < nb_side_data; i++) {
AVReplayGain *av_rgain;
struct replaygain_data *rgain;
AVPacketSideData *src_sd = &side_data[i];
if (src_sd->type != AV_PKT_DATA_REPLAYGAIN)
continue;
av_rgain = (AVReplayGain*)src_sd->data;
rgain = talloc_ptrtype(demuxer, rgain);
rgain->track_gain = rgain->album_gain = 0;
rgain->track_peak = rgain->album_peak = 1;
// Set values in *rgain, using track gain as a fallback for album gain
// if the latter is not present. This behavior matches that in
// demux/demux.c's decode_rgain; if you change this, please make
// equivalent changes there too.
if (av_rgain->track_gain != INT32_MIN && av_rgain->track_peak != 0.0) {
// Track gain is defined.
rgain->track_gain = av_rgain->track_gain / 100000.0f;
rgain->track_peak = av_rgain->track_peak / 100000.0f;
if (av_rgain->album_gain != INT32_MIN &&
av_rgain->album_peak != 0.0)
{
// Album gain is also defined.
rgain->album_gain = av_rgain->album_gain / 100000.0f;
rgain->album_peak = av_rgain->album_peak / 100000.0f;
} else {
// Album gain is undefined; fall back to track gain.
rgain->album_gain = rgain->track_gain;
rgain->album_peak = rgain->track_peak;
}
}
// This must be run only before the stream was added, otherwise there
// will be race conditions with accesses from the user thread.
assert(!sh->ds);
sh->codec->replaygain_data = rgain;
}
}
// Return a dictionary entry as (decimal) integer.
static int dict_get_decimal(AVDictionary *dict, const char *entry, int def)
{
AVDictionaryEntry *e = av_dict_get(dict, entry, NULL, 0);
if (e && e->value) {
char *end = NULL;
long int r = strtol(e->value, &end, 10);
if (end && !end[0] && r >= INT_MIN && r <= INT_MAX)
return r;
}
return def;
}
2022-08-21 19:10:59 +00:00
static bool is_image(AVStream *st, bool attached_picture, const AVInputFormat *avif)
{
return st->nb_frames <= 1 && (
attached_picture ||
bstr_endswith0(bstr0(avif->name), "_pipe") ||
strcmp(avif->name, "alias_pix") == 0 ||
strcmp(avif->name, "gif") == 0 ||
2024-01-18 16:43:02 +00:00
strcmp(avif->name, "ico") == 0 ||
2022-08-21 19:10:59 +00:00
strcmp(avif->name, "image2pipe") == 0 ||
(st->codecpar->codec_id == AV_CODEC_ID_AV1 && st->nb_frames == 1)
);
}
static inline const uint8_t *mp_av_stream_get_side_data(const AVStream *st,
enum AVPacketSideDataType type)
{
const AVPacketSideData *sd;
sd = av_packet_side_data_get(st->codecpar->coded_side_data,
st->codecpar->nb_coded_side_data,
type);
return sd ? sd->data : NULL;
}
static void handle_new_stream(demuxer_t *demuxer, int i)
{
lavf_priv_t *priv = demuxer->priv;
AVFormatContext *avfc = priv->avfc;
AVStream *st = avfc->streams[i];
struct sh_stream *sh = NULL;
AVCodecParameters *codec = st->codecpar;
int lavc_delay = codec->initial_padding;
switch (codec->codec_type) {
case AVMEDIA_TYPE_AUDIO: {
sh = demux_alloc_sh_stream(STREAM_AUDIO);
if (!mp_chmap_from_av_layout(&sh->codec->channels, &codec->ch_layout)) {
char layout[128] = {0};
MP_WARN(demuxer,
"Failed to convert channel layout %s to mpv one!\n",
av_channel_layout_describe(&codec->ch_layout,
layout, 128) < 0 ?
"undefined" : layout);
}
sh->codec->samplerate = codec->sample_rate;
sh->codec->bitrate = codec->bit_rate;
sh->codec->format_name = talloc_strdup(sh, av_get_sample_fmt_name(codec->format));
double delay = 0;
if (codec->sample_rate > 0)
delay = lavc_delay / (double)codec->sample_rate;
priv->seek_delay = MPMAX(priv->seek_delay, delay);
export_replaygain(demuxer, sh, st);
sh->seek_preroll = delay;
break;
}
case AVMEDIA_TYPE_VIDEO: {
sh = demux_alloc_sh_stream(STREAM_VIDEO);
if ((st->disposition & AV_DISPOSITION_ATTACHED_PIC) &&
!(st->disposition & AV_DISPOSITION_TIMED_THUMBNAILS))
{
sh->attached_picture =
new_demux_packet_from_avpacket(&st->attached_pic);
if (sh->attached_picture) {
sh->attached_picture->pts = 0;
talloc_steal(sh, sh->attached_picture);
sh->attached_picture->keyframe = true;
}
}
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
if (!sh->attached_picture) {
// A real video stream probably means it's a packet based format.
priv->pcm_seek_hack_disabled = true;
priv->pcm_seek_hack = NULL;
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
// Also, we don't want to do this shit for ogv videos.
if (priv->linearize_ts < 0)
priv->linearize_ts = 0;
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
}
sh->codec->disp_w = codec->width;
sh->codec->disp_h = codec->height;
sh->codec->format_name = talloc_strdup(sh, av_get_pix_fmt_name(codec->format));
if (st->avg_frame_rate.num)
sh->codec->fps = av_q2d(st->avg_frame_rate);
2022-08-21 19:10:59 +00:00
if (is_image(st, sh->attached_picture, priv->avif)) {
MP_VERBOSE(demuxer, "Assuming this is an image format.\n");
sh->image = true;
sh->codec->fps = demuxer->opts->mf_fps;
}
sh->codec->par_w = st->sample_aspect_ratio.num;
sh->codec->par_h = st->sample_aspect_ratio.den;
const uint8_t *sd = mp_av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX);
if (sd) {
double r = av_display_rotation_get((int32_t *)sd);
if (!isnan(r))
sh->codec->rotate = (((int)(-r) % 360) + 360) % 360;
}
if ((sd = mp_av_stream_get_side_data(st, AV_PKT_DATA_DOVI_CONF))) {
const AVDOVIDecoderConfigurationRecord *cfg = (void *) sd;
MP_VERBOSE(demuxer, "Found Dolby Vision config record: profile "
"%d level %d\n", cfg->dv_profile, cfg->dv_level);
av_format_inject_global_side_data(avfc);
sh->codec->dovi = true;
sh->codec->dv_profile = cfg->dv_profile;
sh->codec->dv_level = cfg->dv_level;
}
video: add insane hack to work around FFmpeg/Libav insanity So, FFmpeg/Libav requires us to figure out video timestamps ourselves (see last 10 commits or so), but the methods it provides for this aren't even sufficient. In particular, everything that uses AVI-style DTS (avi, vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal frame delay is broken. In this case, libavcodec will shift the packet- to-image correspondence by the codec delay, meaning that with a delay=1, the first AVFrame.pkt_dts is not 0, but that of the second packet. All timestamps will appear shifted. The start time (e.g. the time displayed when doing "mpv file.avi --pause") will not be exactly 0. (According to Libav developers, this is how it's supposed to work; just that the first DTS values are normally negative with formats that use DTS "properly". Who cares if it doesn't work at all with very common video formats? There's no indication that they'll fix this soon, either. An elegant workaround is missing too.) Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV. Since these timestamps are not reorderd, we wouldn't need to sort them, but it's less code this way (and possibly more robust, should a demuxer unexpectedly output PTS). The original intention of all the timestamp changes recently was actually to get rid of demuxer-specific hacks and the old timestamp sorting code, but it looks like this didn't work out. Yet another case where trying to replace native MPlayer functionality with FFmpeg/Libav led to disadvantages and bugs. (Note that the old PTS sorting code doesn't and can't handle frame dropping correctly, though.) Bug reports: https://trac.ffmpeg.org/ticket/3178 https://bugzilla.libav.org/show_bug.cgi?id=600
2013-11-28 14:10:45 +00:00
// This also applies to vfw-muxed mkv, but we can't detect these easily.
sh->codec->avi_dts = matches_avinputformat_name(priv, "avi");
video: add insane hack to work around FFmpeg/Libav insanity So, FFmpeg/Libav requires us to figure out video timestamps ourselves (see last 10 commits or so), but the methods it provides for this aren't even sufficient. In particular, everything that uses AVI-style DTS (avi, vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal frame delay is broken. In this case, libavcodec will shift the packet- to-image correspondence by the codec delay, meaning that with a delay=1, the first AVFrame.pkt_dts is not 0, but that of the second packet. All timestamps will appear shifted. The start time (e.g. the time displayed when doing "mpv file.avi --pause") will not be exactly 0. (According to Libav developers, this is how it's supposed to work; just that the first DTS values are normally negative with formats that use DTS "properly". Who cares if it doesn't work at all with very common video formats? There's no indication that they'll fix this soon, either. An elegant workaround is missing too.) Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV. Since these timestamps are not reorderd, we wouldn't need to sort them, but it's less code this way (and possibly more robust, should a demuxer unexpectedly output PTS). The original intention of all the timestamp changes recently was actually to get rid of demuxer-specific hacks and the old timestamp sorting code, but it looks like this didn't work out. Yet another case where trying to replace native MPlayer functionality with FFmpeg/Libav led to disadvantages and bugs. (Note that the old PTS sorting code doesn't and can't handle frame dropping correctly, though.) Bug reports: https://trac.ffmpeg.org/ticket/3178 https://bugzilla.libav.org/show_bug.cgi?id=600
2013-11-28 14:10:45 +00:00
break;
}
case AVMEDIA_TYPE_SUBTITLE: {
sh = demux_alloc_sh_stream(STREAM_SUB);
if (codec->extradata_size) {
sh->codec->extradata = talloc_size(sh, codec->extradata_size);
memcpy(sh->codec->extradata, codec->extradata, codec->extradata_size);
sh->codec->extradata_size = codec->extradata_size;
}
if (matches_avinputformat_name(priv, "microdvd")) {
AVRational r;
if (av_opt_get_q(avfc, "subfps", AV_OPT_SEARCH_CHILDREN, &r) >= 0) {
// File headers don't have a FPS set.
if (r.num < 1 || r.den < 1)
sh->codec->frame_based = 23.976; // default timebase
}
}
break;
}
case AVMEDIA_TYPE_ATTACHMENT: {
AVDictionaryEntry *ftag = av_dict_get(st->metadata, "filename", NULL, 0);
char *filename = ftag ? ftag->value : NULL;
AVDictionaryEntry *mt = av_dict_get(st->metadata, "mimetype", NULL, 0);
char *mimetype = mt ? mt->value : NULL;
if (mimetype) {
demuxer_add_attachment(demuxer, filename, mimetype,
codec->extradata, codec->extradata_size);
}
break;
}
default: ;
}
struct stream_info *info = talloc_zero(priv, struct stream_info);
*info = (struct stream_info){
.sh = sh,
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
.last_key_pts = MP_NOPTS_VALUE,
.highest_pts = MP_NOPTS_VALUE,
};
assert(priv->num_streams == i); // directly mapped
MP_TARRAY_APPEND(priv, priv->streams, priv->num_streams, info);
if (sh) {
sh->ff_index = st->index;
sh->codec->codec = mp_codec_from_av_codec_id(codec->codec_id);
sh->codec->codec_tag = codec->codec_tag;
sh->codec->lav_codecpar = avcodec_parameters_alloc();
if (sh->codec->lav_codecpar)
avcodec_parameters_copy(sh->codec->lav_codecpar, codec);
sh->codec->native_tb_num = st->time_base.num;
sh->codec->native_tb_den = st->time_base.den;
if (st->disposition & AV_DISPOSITION_DEFAULT)
sh->default_track = true;
if (st->disposition & AV_DISPOSITION_FORCED)
sh->forced_track = true;
if (st->disposition & AV_DISPOSITION_DEPENDENT)
sh->dependent_track = true;
if (st->disposition & AV_DISPOSITION_VISUAL_IMPAIRED)
sh->visual_impaired_track = true;
if (st->disposition & AV_DISPOSITION_HEARING_IMPAIRED)
sh->hearing_impaired_track = true;
if (st->disposition & AV_DISPOSITION_STILL_IMAGE)
sh->still_image = true;
if (priv->format_hack.use_stream_ids)
sh->demuxer_id = st->id;
AVDictionaryEntry *title = av_dict_get(st->metadata, "title", NULL, 0);
if (title && title->value)
sh->title = talloc_strdup(sh, title->value);
if (!sh->title && st->disposition & AV_DISPOSITION_VISUAL_IMPAIRED)
sh->title = talloc_asprintf(sh, "visual impaired");
if (!sh->title && st->disposition & AV_DISPOSITION_HEARING_IMPAIRED)
sh->title = talloc_asprintf(sh, "hearing impaired");
AVDictionaryEntry *lang = av_dict_get(st->metadata, "language", NULL, 0);
if (lang && lang->value && strcmp(lang->value, "und") != 0)
sh->lang = talloc_strdup(sh, lang->value);
sh->hls_bitrate = dict_get_decimal(st->metadata, "variant_bitrate", 0);
AVProgram *prog = av_find_program_from_stream(avfc, NULL, i);
if (prog)
sh->program_id = prog->id;
sh->missing_timestamps = !!(priv->avif_flags & AVFMT_NOTIMESTAMPS);
mp_tags_move_from_av_dictionary(sh->tags, &st->metadata);
demux_add_sh_stream(demuxer, sh);
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
// Unfortunately, there is no better way to detect PCM codecs, other
// than listing them all manually. (Or other "frameless" codecs. Or
// rather, codecs with frames so small libavformat will put multiple of
// them into a single packet, but not preserve these artificial packet
// boundaries on seeking.)
if (sh->codec->codec && strncmp(sh->codec->codec, "pcm_", 4) == 0 &&
codec->block_align && !priv->pcm_seek_hack_disabled &&
priv->opts->hacks && !priv->format_hack.no_pcm_seek &&
st->time_base.num == 1 && st->time_base.den == codec->sample_rate)
{
if (priv->pcm_seek_hack) {
// More than 1 audio stream => usually doesn't apply.
priv->pcm_seek_hack_disabled = true;
priv->pcm_seek_hack = NULL;
} else {
priv->pcm_seek_hack = st;
}
}
}
select_tracks(demuxer, i);
}
// Add any new streams that might have been added
static void add_new_streams(demuxer_t *demuxer)
{
lavf_priv_t *priv = demuxer->priv;
while (priv->num_streams < priv->avfc->nb_streams)
handle_new_stream(demuxer, priv->num_streams);
}
static void update_metadata(demuxer_t *demuxer)
{
lavf_priv_t *priv = demuxer->priv;
if (priv->avfc->event_flags & AVFMT_EVENT_FLAG_METADATA_UPDATED) {
mp_tags_move_from_av_dictionary(demuxer->metadata, &priv->avfc->metadata);
priv->avfc->event_flags = 0;
demux_metadata_changed(demuxer);
}
}
static int interrupt_cb(void *ctx)
{
struct demuxer *demuxer = ctx;
return mp_cancel_test(demuxer->cancel);
}
static int block_io_open(struct AVFormatContext *s, AVIOContext **pb,
const char *url, int flags, AVDictionary **options)
{
struct demuxer *demuxer = s->opaque;
MP_ERR(demuxer, "Not opening '%s' due to --access-references=no.\n", url);
return AVERROR(EACCES);
}
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
static int nested_io_open(struct AVFormatContext *s, AVIOContext **pb,
const char *url, int flags, AVDictionary **options)
{
struct demuxer *demuxer = s->opaque;
lavf_priv_t *priv = demuxer->priv;
if (options && priv->opts->propagate_opts) {
// Copy av_opts to options, but only entries that are not present in
// options. (Hope this will break less by not overwriting important
// settings.)
AVDictionaryEntry *cur = NULL;
while ((cur = av_dict_get(priv->av_opts, "", cur, AV_DICT_IGNORE_SUFFIX)))
{
if (!*options || !av_dict_get(*options, cur->key, NULL, 0)) {
MP_TRACE(demuxer, "Nested option: '%s'='%s'\n",
cur->key, cur->value);
av_dict_set(options, cur->key, cur->value, 0);
} else {
MP_TRACE(demuxer, "Skipping nested option: '%s'\n", cur->key);
}
}
}
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
int r = priv->default_io_open(s, pb, url, flags, options);
if (r >= 0) {
if (options)
mp_avdict_print_unset(demuxer->log, MSGL_TRACE, *options);
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
struct nested_stream nest = {
.id = *pb,
};
MP_TARRAY_APPEND(priv, priv->nested, priv->num_nested, nest);
}
return r;
}
static int nested_io_close2(struct AVFormatContext *s, AVIOContext *pb)
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
{
struct demuxer *demuxer = s->opaque;
lavf_priv_t *priv = demuxer->priv;
for (int n = 0; n < priv->num_nested; n++) {
if (priv->nested[n].id == pb) {
MP_TARRAY_REMOVE_AT(priv->nested, priv->num_nested, n);
break;
}
}
return priv->default_io_close2(s, pb);
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
}
static int demux_open_lavf(demuxer_t *demuxer, enum demux_check check)
{
AVFormatContext *avfc = NULL;
AVDictionaryEntry *t = NULL;
float analyze_duration = 0;
2015-12-17 00:00:54 +00:00
lavf_priv_t *priv = talloc_zero(NULL, lavf_priv_t);
AVDictionary *dopts = NULL;
2015-12-17 00:00:54 +00:00
demuxer->priv = priv;
priv->stream = demuxer->stream;
priv->opts = mp_get_config_group(priv, demuxer->global, &demux_lavf_conf);
struct demux_lavf_opts *lavfdopts = priv->opts;
if (lavf_check_file(demuxer, check) < 0)
goto fail;
avfc = avformat_alloc_context();
2014-12-12 16:28:22 +00:00
if (!avfc)
goto fail;
if (demuxer->opts->index_mode != 1)
avfc->flags |= AVFMT_FLAG_IGNIDX;
if (lavfdopts->probesize) {
if (av_opt_set_int(avfc, "probesize", lavfdopts->probesize, 0) < 0)
MP_ERR(demuxer, "couldn't set option probesize to %u\n",
lavfdopts->probesize);
}
if (priv->format_hack.analyzeduration)
analyze_duration = priv->format_hack.analyzeduration;
if (lavfdopts->analyzeduration)
analyze_duration = lavfdopts->analyzeduration;
if (analyze_duration > 0) {
if (av_opt_set_int(avfc, "analyzeduration",
analyze_duration * AV_TIME_BASE, 0) < 0)
MP_ERR(demuxer, "demux_lavf, couldn't set option "
"analyzeduration to %f\n", analyze_duration);
}
if ((priv->avif_flags & AVFMT_NOFILE) || priv->format_hack.no_stream) {
mp_setup_av_network_options(&dopts, priv->avif->name,
demuxer->global, demuxer->log);
// This might be incorrect.
demuxer->seekable = true;
} else {
void *buffer = av_malloc(lavfdopts->buffersize);
if (!buffer)
goto fail;
priv->pb = avio_alloc_context(buffer, lavfdopts->buffersize, 0,
demuxer, mp_read, NULL, mp_seek);
if (!priv->pb) {
av_free(buffer);
goto fail;
}
priv->pb->read_seek = mp_read_seek;
priv->pb->seekable = demuxer->seekable ? AVIO_SEEKABLE_NORMAL : 0;
avfc->pb = priv->pb;
if (stream_control(priv->stream, STREAM_CTRL_HAS_AVSEEK, NULL) > 0)
demuxer->seekable = true;
demuxer->seekable |= priv->format_hack.fully_read;
}
if (matches_avinputformat_name(priv, "rtsp")) {
const char *transport = NULL;
switch (lavfdopts->rtsp_transport) {
case 1: transport = "udp"; break;
case 2: transport = "tcp"; break;
case 3: transport = "http"; break;
case 4: transport = "udp_multicast"; break;
}
if (transport)
av_dict_set(&dopts, "rtsp_transport", transport, 0);
}
guess_and_set_vobsub_name(demuxer, &dopts);
avfc->interrupt_callback = (AVIOInterruptCB){
.callback = interrupt_cb,
.opaque = demuxer,
};
avfc->opaque = demuxer;
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
if (demuxer->access_references) {
priv->default_io_open = avfc->io_open;
avfc->io_open = nested_io_open;
priv->default_io_close2 = avfc->io_close2;
avfc->io_close2 = nested_io_close2;
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
} else {
avfc->io_open = block_io_open;
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
}
mp_set_avdict(&dopts, lavfdopts->avopts);
if (av_dict_copy(&priv->av_opts, dopts, 0) < 0) {
MP_ERR(demuxer, "av_dict_copy() failed\n");
goto fail;
}
if (priv->format_hack.readall_on_no_streamseek && priv->pb &&
!priv->pb->seekable)
{
MP_VERBOSE(demuxer, "Non-seekable demuxer pre-read hack...\n");
// Read incremental to avoid unnecessary large buffer sizes.
int r = 0;
for (int n = 16; n < 29; n++) {
r = stream_peek(priv->stream, 1 << n);
if (r < (1 << n))
break;
}
MP_VERBOSE(demuxer, "...did read %d bytes.\n", r);
}
if (avformat_open_input(&avfc, priv->filename, priv->avif, &dopts) < 0) {
MP_ERR(demuxer, "avformat_open_input() failed\n");
goto fail;
}
mp_avdict_print_unset(demuxer->log, MSGL_V, dopts);
av_dict_free(&dopts);
priv->avfc = avfc;
bool probeinfo = lavfdopts->probeinfo != 0;
switch (lavfdopts->probeinfo) {
case -2: probeinfo = priv->avfc->nb_streams == 0; break;
case -1: probeinfo = !priv->format_hack.skipinfo; break;
}
if (demuxer->params && demuxer->params->skip_lavf_probing)
probeinfo = false;
if (probeinfo) {
if (avformat_find_stream_info(avfc, NULL) < 0) {
MP_ERR(demuxer, "av_find_stream_info() failed\n");
goto fail;
}
MP_VERBOSE(demuxer, "avformat_find_stream_info() finished after %"PRId64
" bytes.\n", stream_tell(priv->stream));
}
2015-12-17 00:00:54 +00:00
for (int i = 0; i < avfc->nb_chapters; i++) {
AVChapter *c = avfc->chapters[i];
t = av_dict_get(c->metadata, "title", NULL, 0);
int index = demuxer_add_chapter(demuxer, t ? t->value : "",
c->start * av_q2d(c->time_base), i);
mp_tags_move_from_av_dictionary(demuxer->chapters[index].metadata, &c->metadata);
}
add_new_streams(demuxer);
mp_tags_move_from_av_dictionary(demuxer->metadata, &avfc->metadata);
demuxer->ts_resets_possible =
priv->avif_flags & (AVFMT_TS_DISCONT | AVFMT_NOTIMESTAMPS);
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
if (avfc->start_time != AV_NOPTS_VALUE)
demuxer->start_time = avfc->start_time / (double)AV_TIME_BASE;
demuxer->fully_read = priv->format_hack.fully_read;
demux: hack for instant stream switching This removes the delay when switching audio tracks in mkv or mp4 files. Other formats are not enabled, because it's not clear whether the demuxers fulfill the requirements listed in demux.h. (Many formats definitely do not with libavformat.) Background: The demuxer packet cache buffers a certain amount of packets. This includes only packets from selected streams. We discard packets from other streams for various reasons. This introduces a problem: switching to a different audio track introduces a delay. The delay is as big as the demuxer packet cache buffer, because while the file was read ahead to fill the packet buffer, the process of reading packets also discarded all packets from the previously not selected audio stream. Once the remaining packet buffer has been played, new audio packets are available and you hear audio again. We could probably just not discard packets from unselected streams. But this would require additional memory and CPU resources, and also it's hard to tell when packets from unused streams should be discarded (we don't want to keep them forever; it'd be a memory leak). We could also issue a player hr-seek to the current playback position, which would solve the problem in 1 line of code or so. But this can be rather slow. So what we do in this commit instead is: we just seek back to the position where our current packet buffer starts, and start demuxing from this position again. This way we can get the "past" packets for the newly selected stream. For streams which were already selected the packets are simply discarded until the previous position is reached again. That latter part is the hard part. We really want to skip packets exactly until the position where we left off previously, or we will skip packets or feed packets to the decoder twice. If we assume that the demuxer is deterministic (returns exactly the same packets after a seek to a previous position), then we can try to check whether it's the same packet as the one at the end of the packet buffer. If it is, we know that the packet after it is where we left off last time. Unfortunately, this is not very robust, and maybe it can't be made robust. Currently we use the demux_packet.pos field as unique packet ID - which works fine in some scenarios, but will break in arbitrary ways if the basic requirement to the demuxer (as listed in the demux.h additions) are broken. Thus, this is enabled only for the internal mkv demuxer and the libavformat mp4 demuxer. (libavformat mkv does not work, because the packet positions are not unique. Probably could be fixed upstream, but it's not clear whether it's a bug or a feature.)
2015-02-13 20:17:17 +00:00
#ifdef AVFMTCTX_UNSEEKABLE
if (avfc->ctx_flags & AVFMTCTX_UNSEEKABLE)
demuxer->seekable = false;
#endif
demuxer->is_network |= priv->format_hack.is_network;
demuxer->seekable &= !priv->format_hack.no_seek;
// We initially prefer track durations over container durations because they
// have a higher degree of precision over the container duration which are
// only accurate to the 6th decimal place. This is probably a lavf bug.
double total_duration = -1;
double av_duration = -1;
for (int n = 0; n < priv->avfc->nb_streams; n++) {
AVStream *st = priv->avfc->streams[n];
if (st->duration <= 0)
continue;
double f_duration = st->duration * av_q2d(st->time_base);
total_duration = MPMAX(total_duration, f_duration);
if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO ||
st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
av_duration = MPMAX(av_duration, f_duration);
}
double duration = av_duration > 0 ? av_duration : total_duration;
if (duration <= 0 && priv->avfc->duration > 0)
duration = (double)priv->avfc->duration / AV_TIME_BASE;
demuxer->duration = duration;
if (demuxer->duration < 0 && priv->format_hack.no_seek_on_no_duration)
demuxer->seekable = false;
// In some cases, libavformat will export bogus bullshit timestamps anyway,
// such as with mjpeg.
if (priv->avif_flags & AVFMT_NOTIMESTAMPS) {
MP_WARN(demuxer,
"This format is marked by FFmpeg as having no timestamps!\n"
"FFmpeg will likely make up its own broken timestamps. For\n"
"video streams you can correct this with:\n"
" --no-correct-pts --container-fps-override=VALUE\n"
"with VALUE being the real framerate of the stream. You can\n"
"expect seeking and buffering estimation to be generally\n"
"broken as well.\n");
}
if (demuxer->fully_read) {
demux_close_stream(demuxer);
if (priv->own_stream)
free_stream(priv->stream);
priv->own_stream = false;
priv->stream = NULL;
}
return 0;
fail:
if (!priv->avfc)
avformat_free_context(avfc);
av_dict_free(&dopts);
return -1;
}
static bool demux_lavf_read_packet(struct demuxer *demux,
struct demux_packet **mp_pkt)
{
lavf_priv_t *priv = demux->priv;
AVPacket *pkt = &(AVPacket){0};
int r = av_read_frame(priv->avfc, pkt);
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
update_read_stats(demux);
if (r < 0) {
av_packet_unref(pkt);
if (r == AVERROR_EOF)
return false;
MP_WARN(demux, "error reading packet: %s.\n", av_err2str(r));
if (priv->retry_counter >= 10) {
MP_ERR(demux, "...treating it as fatal error.\n");
return false;
}
priv->retry_counter += 1;
return true;
}
priv->retry_counter = 0;
add_new_streams(demux);
update_metadata(demux);
assert(pkt->stream_index >= 0 && pkt->stream_index < priv->num_streams);
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
struct stream_info *info = priv->streams[pkt->stream_index];
struct sh_stream *stream = info->sh;
AVStream *st = priv->avfc->streams[pkt->stream_index];
if (!demux_stream_is_selected(stream)) {
av_packet_unref(pkt);
return true; // don't signal EOF if skipping a packet
}
// Never send additional frames for streams that are a single frame.
if (stream->image && priv->format_hack.first_frame_only && pkt->pos != 0) {
av_packet_unref(pkt);
return true;
}
struct demux_packet *dp = new_demux_packet_from_avpacket(pkt);
if (!dp) {
av_packet_unref(pkt);
return true;
}
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
if (priv->pcm_seek_hack == st && !priv->pcm_seek_hack_packet_size)
priv->pcm_seek_hack_packet_size = pkt->size;
dp->pts = mp_pts_from_av(pkt->pts, &st->time_base);
dp->dts = mp_pts_from_av(pkt->dts, &st->time_base);
dp->duration = pkt->duration * av_q2d(st->time_base);
dp->pos = pkt->pos;
dp->keyframe = pkt->flags & AV_PKT_FLAG_KEY;
av_packet_unref(pkt);
if (priv->format_hack.clear_filepos)
dp->pos = -1;
dp->stream = stream->index;
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
if (priv->linearize_ts) {
dp->pts = MP_ADD_PTS(dp->pts, info->ts_offset);
dp->dts = MP_ADD_PTS(dp->dts, info->ts_offset);
double pts = MP_PTS_OR_DEF(dp->pts, dp->dts);
if (pts != MP_NOPTS_VALUE) {
if (dp->keyframe) {
if (pts < info->highest_pts) {
MP_WARN(demux, "Linearizing discontinuity: %f -> %f\n",
pts, info->highest_pts);
// Note: introduces a small discontinuity by a frame size.
double diff = info->highest_pts - pts;
dp->pts = MP_ADD_PTS(dp->pts, diff);
dp->dts = MP_ADD_PTS(dp->dts, diff);
pts += diff;
info->ts_offset += diff;
priv->any_ts_fixed = true;
}
info->last_key_pts = pts;
}
info->highest_pts = MP_PTS_MAX(info->highest_pts, pts);
}
}
demux: redo timed metadata The old implementation didn't work for the OGG case. Discard the old shit code (instead of fixing it), and write new shit code. The old code was already over a year old, so it's about time to rewrite it for no reason anyway. While it's true that the old code appears to be broken, the main reason to rewrite this is to make it simpler. While the amount of code seems to be about the same, both the concept and the actual tag handling are simpler. The result is probably a bit more correct. The packet struct shrinks by 8 byte. That fact that it wasted 8 bytes per packet for a rather obscure use case was the reason I started this at all (and when I found that OGG updates didn't work). While these 8 bytes aren't going to hurt, the packet struct was getting too bloated. If you buffer a lot of data, these extra fields will add up. Still quite some effort for 8 bytes. Fortunately, it's not like there are any managers that need to be convinced whether it's worth doing. The freedom to waste time on dumb shit. The old implementation attached the current metadata to each packet. When the decoder read the packet, the packet's metadata was made current. The new implementation stores metadata as separate list, and requires that the player frontend tells it the current playback time, which will be used to find the currently valid metadata. In both cases, the objective was to correctly update metadata even if a lot of data is buffered ahead (and to update them correctly when seeking within the demuxer cache). The new implementation is actually slightly more correct, because it uses the playback time for the metadata lookup. Consider if you have an audio filter which buffers 15 seconds (unfortunately such a filter exists), then the old code would update the current title 15 seconds too early, while the new one does it correctly. The new code also simplifies mixing the 3 metadata sources (global, per stream, ICY). We assume these aren't mixed in a meaningful way. The old code tried to be a bit more "exact". I didn't bother to look how the old code did this, but the new code simply always "merges" with the previous metadata, so if a newer tag removes a field, it's going to stick around anyway. I tried to keep it simple. Other approaches include making metadata a special sh_stream with metadata packets. This would have been conceptually clean, but the implementation would probably have been unnatural (and doesn't match well with libavformat's API anyway). It would have been nice to make the metadata updates chapter points (makes a lot of sense for the intended use case, web radio current song information), but I don't think it would have been a good idea to make chapters suddenly so dynamic. (Still an idea to keep in mind; the new code actually makes it easier to work towards this.) You could mention how subtitles are timed metadata, and actually are implemented as sparse packet streams in some formats. mp4 implements chapters as special subtitle stream, AFAIK. (Ironically, this is very not-ideal for files. It would be useful for streaming like web radio, but mp4 is extremely bad for streaming by design for other reasons.) bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla
2019-06-10 00:18:20 +00:00
if (st->event_flags & AVSTREAM_EVENT_FLAG_METADATA_UPDATED) {
st->event_flags = 0;
struct mp_tags *tags = talloc_zero(NULL, struct mp_tags);
mp_tags_move_from_av_dictionary(tags, &st->metadata);
demux: redo timed metadata The old implementation didn't work for the OGG case. Discard the old shit code (instead of fixing it), and write new shit code. The old code was already over a year old, so it's about time to rewrite it for no reason anyway. While it's true that the old code appears to be broken, the main reason to rewrite this is to make it simpler. While the amount of code seems to be about the same, both the concept and the actual tag handling are simpler. The result is probably a bit more correct. The packet struct shrinks by 8 byte. That fact that it wasted 8 bytes per packet for a rather obscure use case was the reason I started this at all (and when I found that OGG updates didn't work). While these 8 bytes aren't going to hurt, the packet struct was getting too bloated. If you buffer a lot of data, these extra fields will add up. Still quite some effort for 8 bytes. Fortunately, it's not like there are any managers that need to be convinced whether it's worth doing. The freedom to waste time on dumb shit. The old implementation attached the current metadata to each packet. When the decoder read the packet, the packet's metadata was made current. The new implementation stores metadata as separate list, and requires that the player frontend tells it the current playback time, which will be used to find the currently valid metadata. In both cases, the objective was to correctly update metadata even if a lot of data is buffered ahead (and to update them correctly when seeking within the demuxer cache). The new implementation is actually slightly more correct, because it uses the playback time for the metadata lookup. Consider if you have an audio filter which buffers 15 seconds (unfortunately such a filter exists), then the old code would update the current title 15 seconds too early, while the new one does it correctly. The new code also simplifies mixing the 3 metadata sources (global, per stream, ICY). We assume these aren't mixed in a meaningful way. The old code tried to be a bit more "exact". I didn't bother to look how the old code did this, but the new code simply always "merges" with the previous metadata, so if a newer tag removes a field, it's going to stick around anyway. I tried to keep it simple. Other approaches include making metadata a special sh_stream with metadata packets. This would have been conceptually clean, but the implementation would probably have been unnatural (and doesn't match well with libavformat's API anyway). It would have been nice to make the metadata updates chapter points (makes a lot of sense for the intended use case, web radio current song information), but I don't think it would have been a good idea to make chapters suddenly so dynamic. (Still an idea to keep in mind; the new code actually makes it easier to work towards this.) You could mention how subtitles are timed metadata, and actually are implemented as sparse packet streams in some formats. mp4 implements chapters as special subtitle stream, AFAIK. (Ironically, this is very not-ideal for files. It would be useful for streaming like web radio, but mp4 is extremely bad for streaming by design for other reasons.) bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla
2019-06-10 00:18:20 +00:00
double pts = MP_PTS_OR_DEF(dp->pts, dp->dts);
demux_stream_tags_changed(demux, stream, tags, pts);
}
*mp_pkt = dp;
return true;
}
static void demux_seek_lavf(demuxer_t *demuxer, double seek_pts, int flags)
{
lavf_priv_t *priv = demuxer->priv;
int avsflags = 0;
int64_t seek_pts_av = 0;
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
int seek_stream = -1;
demux_lavf: compensate timestamp resets for OGG web radio streams Some OGG web radio streams use timestamp resets when a new song starts (you can find those Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break This is more time wasted in Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps is the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), not much we can do. This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level. This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
2019-06-09 21:39:03 +00:00
if (priv->any_ts_fixed) {
// helpful message to piss of users
MP_WARN(demuxer, "Some timestamps returned by the demuxer were linearized. "
"A low level seek was requested; this won't work due to "
"restrictions in libavformat's API. You may have more "
"luck by enabling or enlarging the mpv cache.\n");
}
if (priv->linearize_ts < 0)
priv->linearize_ts = 0;
if (!(flags & SEEK_FORWARD))
Add improved relative seek mode When the new mode is active relative seeks are converted to absolute ones (current video pts + relative seek amount) and forward/backward flag before being sent to the demuxer. This mode is used if the demuxer has set the accurate_seek field in the demuxer struct and there is a video stream. At the moment the mkv and lavf demuxers enable the flag. This change is useful for later Matroska ordered chapter support (and for more general timelime editing), but also fixes problems in existing functionality. The main problem with the old mode, where relative seeks are passed directly to the demuxer, is that the user wants to seek relative to the currently displayed position but the demuxer does not know what that position is. There can be an arbitrary amount of buffering between the demuxer read position and what is displayed on the screen. In some situations this makes small seeks fail to move backward at all (especially visible at high playback speed, when audio needs to be demuxed and decoded further ahead to fill the output buffers after resampling). Some container formats that can be used with the lavf demuxer do not always have reliable timestamps that could be used for unambiguous absolute seeking. However I made the demuxer always enable the new mode because it already converted all seeks to absolute ones before sending them to libavformat, so cases without reliable absolute seeks were failing already and this should only improve the working cases.
2009-03-19 03:25:12 +00:00
avsflags = AVSEEK_FLAG_BACKWARD;
if (flags & SEEK_FACTOR) {
struct stream *s = priv->stream;
int64_t end = s ? stream_get_size(s) : -1;
if (end > 0 && demuxer->ts_resets_possible &&
!(priv->avif_flags & AVFMT_NO_BYTE_SEEK))
{
avsflags |= AVSEEK_FLAG_BYTE;
seek_pts_av = end * seek_pts;
} else if (priv->avfc->duration != 0 &&
priv->avfc->duration != AV_NOPTS_VALUE)
{
seek_pts_av = seek_pts * priv->avfc->duration;
}
} else {
if (!(flags & SEEK_FORWARD))
seek_pts -= priv->seek_delay;
seek_pts_av = seek_pts * AV_TIME_BASE;
}
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
// Hack to make wav seeking "deterministic". Without this, features like
// backward playback won't work.
if (priv->pcm_seek_hack && !priv->pcm_seek_hack_packet_size) {
// This might for example be the initial seek. Fuck it up like the
// bullshit it is.
AVPacket *pkt = av_packet_alloc();
MP_HANDLE_OOM(pkt);
if (av_read_frame(priv->avfc, pkt) >= 0)
priv->pcm_seek_hack_packet_size = pkt->size;
av_packet_free(&pkt);
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
add_new_streams(demuxer);
}
if (priv->pcm_seek_hack && priv->pcm_seek_hack_packet_size &&
!(avsflags & AVSEEK_FLAG_BYTE))
{
int samples = priv->pcm_seek_hack_packet_size /
priv->pcm_seek_hack->codecpar->block_align;
if (samples > 0) {
MP_VERBOSE(demuxer, "using bullshit libavformat PCM seek hack\n");
double pts = seek_pts_av / (double)AV_TIME_BASE;
seek_pts_av = pts / av_q2d(priv->pcm_seek_hack->time_base);
int64_t align = seek_pts_av % samples;
seek_pts_av -= align;
seek_stream = priv->pcm_seek_hack->index;
}
}
int r = av_seek_frame(priv->avfc, seek_stream, seek_pts_av, avsflags);
if (r < 0 && (avsflags & AVSEEK_FLAG_BACKWARD)) {
// When seeking before the beginning of the file, and seeking fails,
// try again without the backwards flag to make it seek to the
// beginning.
avsflags &= ~AVSEEK_FLAG_BACKWARD;
demux_lavf: implement bad hack for backward playback of wav This commit generally fixes backward playing in wav, at least in most PCM cases. libavformat's wav demuxer (and actually all other raw PCM based demuxers) have a specific behavior that breaks backward demuxing. The same thing also breaks persistent seek ranges in the demuxer cache, although that's less critical (it just means some cached data gets discarded). The backward demuxing issue is fatal, will log the message "Demuxer not cooperating.", and then typically stop doing anything. Unlike modern media formats, these formats don't organize media data in packets, but just wrap a monolithic byte stream that is described by a header. This is good enough for PCM, which uses fixed frames (a single sample for all audio channels), and for which it would be too expensive to have per frame headers. libavformat (and mpv) is heavily packet based, and using a single packet for each PCM frame causes too much overhead. So they typically "bundle" multiple frames into a single packet. This packet size is obviously arbitrary, and in libavformat's case hardcoded in its source code. The problem is that seeking doesn't respect this arbitrary packet boundary. Seeking is sample accurate. You can essentially seek inside a packet. The resulting packets will not be aligned with previously demuxed packets. This is normally OK. Backward seeking (and some other demuxer layer features) expect that demuxing an earlier demuxed file position eventually results in the same packets, regardless of the seeks that were done to get there. I like to call this "deterministic" demuxing. Backward demuxing in particular requires this to avoid overlaps, which would make it rather hard to get continuous output. Fix this issue by detecting wav and hopefully other raw audio formats with a heuristic (even PCM needs to be detected as heuristic). Then, if a seek is requested, align the seek timestamps on the guessed number of samples in the audio packets returned by the demuxer. The heuristic excludes files with multiple streams. (Except "attachment" video streams, which could be an ID3 tag. Yes, FFmpeg allows ID3 tags on WAV files.) Such files will inherently use the packet concept in some way. We don't know how the demuxer chooses the internal packet size, but we assume that it's fixed and aligned to PCM frame sizes. The frame size is most likely given by block_align (the native wav frame size, according to Microsoft). We possibly need to explicitly read and discard a packet if the seek is done without reading anything before that. We ignore any subsequent packet sizes; we need to avoid the very last packet, which likely has a different size. This hack should be rather benign. In the worst case, it will "round" the seek target a little, but the maximum rounding amount is bounded. Maybe we _could_ round up if SEEK_FORWARD is specified, but I didn't bother. An earlier commit fixed the same issue for mpv's demux_raw. An alternative, and probably much better solution would be clipping decoded data by timestamp. demux.c could allow the type of overlap the wav demuxer introduces, and instruct the decoder to clip the output against the last decoded timestamp. There's already an infrastructure for this (demux_packet.end field) used by EDL/ordered chapters. Although this sounds like a good solution, mpv unfortunately uses floats for timestamps. The rounding errors break sample accuracy. Even if you used integers, you'd need a timebase that is sample accurate (not always easy, since EDL can merge tracks with different sample rates).
2019-05-25 14:59:20 +00:00
r = av_seek_frame(priv->avfc, seek_stream, seek_pts_av, avsflags);
}
if (r < 0) {
char buf[180];
av_strerror(r, buf, sizeof(buf));
MP_VERBOSE(demuxer, "Seek failed (%s)\n", buf);
}
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
update_read_stats(demuxer);
}
static void demux_lavf_switched_tracks(struct demuxer *demuxer)
{
select_tracks(demuxer, 0);
}
static void demux_close_lavf(demuxer_t *demuxer)
{
lavf_priv_t *priv = demuxer->priv;
if (priv) {
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
// This will be a dangling pointer; but see below.
AVIOContext *leaking = priv->avfc ? priv->avfc->pb : NULL;
avformat_close_input(&priv->avfc);
demux_lavf: to get effective HLS bitrate In theory, this could be easily done with custom I/O. In practice, all the halfassed garbage in FFmpeg shits itself and fucks up like there's no tomorrow. There are several problems: 1. FFmpeg pretends you can do custom I/O, but in reality there's a lot that custom I/O can do. hls.c even contains explicit checks to disable important things if custom I/O is used! In particular, you can't use the HTTP keepalive functionality (needed for somewhat decent HLS performance), because some cranky asshole in the cursed FFmpeg dev. community blocked it. 2. The implementation of nested I/O callbacks (io_open/io_close) is bogus and halfassed (like everything in FFmpeg, really). It will call io_open on some URLs without ever calling io_close. Instead, it'll call avio_close() on the context directly. From what I can tell, avio_close() is incompable to custom I/O anyway (overwhelmed by their own garbage, the fFmpeg devs created the io_close callback for this reason, because they couldn't fix their own fucking garbage). This commit adds some shitty workaround for this (technically triggers UB, but with that garbage heap of a library we depend on it's not like it matters). 3. Even then, you can't proxy I/O contexts (see 1.), but we can just keep track of the opened nested I/O contexts. The bytes_read is documented as not public, but reading it is literally the only way to get what we want. A more reasonable approach would probably be using curl. It could transparently handle the keep-alive thing, as well as propagating cookies etc. (which doesn't work with the FFmpeg approach if you use custom I/O). Of course even better if there were an independent HLS implementation anywhere. FFmpeg's HLS support is so embarrassing pathetic and just goes to show that they belong into the past (multimedia from 2000-2010) and should either modernize or fuck off. With FFmpeg's shit-crusted structures, todic communities, and retarded assholes denying progress, probably the latter. Did I already mention that FFmpeg is a shit fucked steaming pile of garbage shit? And all just to get some basic I/O stats, that any proper HLS consumer requires in order to implement adaptive streaming correctly (i.e. browser based players, and nothing FFmshit based).
2018-09-01 13:28:40 +00:00
// The ffmpeg garbage breaks its own API yet again: hls.c will call
// io_open on the main playlist, but never calls io_close. This happens
// to work out for us (since we don't really use custom I/O), but it's
// still weird. Compensate.
if (priv->num_nested == 1 && priv->nested[0].id == leaking)
priv->num_nested = 0;
if (priv->num_nested) {
MP_WARN(demuxer, "Leaking %d nested connections (FFmpeg bug).\n",
priv->num_nested);
}
if (priv->pb)
av_freep(&priv->pb->buffer);
av_freep(&priv->pb);
for (int n = 0; n < priv->num_streams; n++) {
struct stream_info *info = priv->streams[n];
if (info->sh)
avcodec_parameters_free(&info->sh->codec->lav_codecpar);
}
if (priv->own_stream)
free_stream(priv->stream);
2021-04-20 10:13:47 +00:00
if (priv->av_opts)
av_dict_free(&priv->av_opts);
demux_lavf: add support for libavdevice libavdevice supports various "special" video and audio inputs, such as screen-capture or libavfilter filter graphs. libavdevice inputs are implemented as demuxers. They don't use the custom stream callbacks (in AVFormatContext.pb). Instead, input parameters are passed as filename. This means the mpv stream layer has to be disabled. Do this by adding the pseudo stream handler avdevice://, whose only purpose is passing the filename to demux_lavf, without actually doing anything. Change the logic how the filename is passed to libavformat. Remove handling of the filename from demux_open_lavf() and move it to lavf_check_file(). (This also fixes a possible bug when skipping the "lavf://" prefix.) libavdevice now can be invoked by specifying demuxer and args as in: mpv avdevice://demuxer:args The args are passed as filename to libavformat. When using libavdevice demuxers, their actual meaning is highly implementation specific. They don't refer to actual filenames. Note: libavdevice is disabled by default. There is one problem: libavdevice pulls in libavfilter, which in turn causes symbol clashes with mpv internals. The problem is that libavfilter includes a mplayer filter bridge, which is used to interface with a set of nearly unmodified mplayer filters copied into libavfilter. This filter bridge uses the same symbol names as mplayer/mpv's filter chain, which results in symbol clashes at link-time. This can be prevented by building ffmpeg with --disable-filter=mp, but unfortunately this is not the default. This means linking to libavdevice (which in turn forces linking with libavfilter by default) must be disabled. We try doing this by compiling a test file that defines one of the clashing symbols (vf_mpi_clear). To enable libavdevice input, ffmpeg should be built with the options: --disable-filter=mp and mpv with: --enable-libavdevice Originally, I tried to auto-detect it. But the resulting complications in configure did't seem worth the trouble.
2012-11-30 17:41:04 +00:00
talloc_free(priv);
demuxer->priv = NULL;
}
}
const demuxer_desc_t demuxer_desc_lavf = {
.name = "lavf",
.desc = "libavformat",
.read_packet = demux_lavf_read_packet,
.open = demux_open_lavf,
.close = demux_close_lavf,
.seek = demux_seek_lavf,
.switched_tracks = demux_lavf_switched_tracks,
};