/*
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with mpv. If not, see <http://www.gnu.org/licenses/>.
 */

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

#include <libavcodec/avcodec.h>
#include <libavutil/intreadwrite.h>

#include "common/av_common.h"
#include "common/common.h"

#include "demux.h"

#include "packet.h"

// Free any refcounted data dp holds (but don't free dp itself). This does not
// care about pointers that are _not_ refcounted (like demux_packet.codec).
// Normally, a user should use talloc_free(dp). This function is only for
// annoyingly specific obscure use cases.
void demux_packet_unref_contents(struct demux_packet *dp)
{
    if (dp->avpacket) {
        assert(!dp->is_cached);
        av_packet_free(&dp->avpacket);
        dp->buffer = NULL;
        dp->len = 0;
    }
}
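
// Minimal usage sketch (hypothetical caller, not part of the original file):
// the normal case releases the whole packet with talloc_free(), which runs the
// destructor below; demux_packet_unref_contents() only drops the payload and
// keeps the packet struct itself usable:
//
//     struct demux_packet *dp = new_demux_packet(4096);
//     talloc_free(dp);                    // frees payload and dp itself
//
//     struct demux_packet *dp2 = new_demux_packet(4096);
//     demux_packet_unref_contents(dp2);   // dp2 stays allocated, payload gone
//     talloc_free(dp2);                   // still needed to free dp2 itself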

static void packet_destroy(void *ptr)
{
    struct demux_packet *dp = ptr;
    demux_packet_unref_contents(dp);
}

static struct demux_packet *packet_create(void)
{
    struct demux_packet *dp = talloc(NULL, struct demux_packet);
    talloc_set_destructor(dp, packet_destroy);
    *dp = (struct demux_packet) {
        .pts = MP_NOPTS_VALUE,
        .dts = MP_NOPTS_VALUE,
        .duration = -1,
        .pos = -1,
        .start = MP_NOPTS_VALUE,
        .end = MP_NOPTS_VALUE,
        .stream = -1,
        .avpacket = av_packet_alloc(),
    };
    MP_HANDLE_OOM(dp->avpacket);
    return dp;
}

// This actually preserves only data and side data, not PTS/DTS/pos/etc.
// It also allows avpkt->data==NULL with avpkt->size!=0 - the libavcodec API
// does not allow it, but we do it to simplify new_demux_packet().
struct demux_packet *new_demux_packet_from_avpacket(struct AVPacket *avpkt)
{
    if (avpkt->size > 1000000000)
        return NULL;
    struct demux_packet *dp = packet_create();
    int r = -1;
    if (avpkt->data) {
        // We hope that this function won't need/access AVPacket input padding,
        // because otherwise new_demux_packet_from() wouldn't work.
        r = av_packet_ref(dp->avpacket, avpkt);
    } else {
        r = av_new_packet(dp->avpacket, avpkt->size);
    }
    if (r < 0) {
        talloc_free(dp);
        return NULL;
    }
    dp->buffer = dp->avpacket->data;
    dp->len = dp->avpacket->size;
    return dp;
}
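
// Sketch of the intended call pattern (assumed caller, e.g. demuxer code
// pulling packets from libavformat; avfc is a hypothetical AVFormatContext).
// Because av_packet_ref() above gives dp its own reference, the caller can
// unref its packet right away:
//
//     AVPacket *pkt = av_packet_alloc();
//     if (av_read_frame(avfc, pkt) >= 0) {
//         struct demux_packet *dp = new_demux_packet_from_avpacket(pkt);
//         av_packet_unref(pkt);   // dp (if not NULL) keeps the data alive
//     }
//     av_packet_free(&pkt);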

// (buf must include proper padding)
struct demux_packet *new_demux_packet_from_buf(struct AVBufferRef *buf)
{
    if (!buf)
        return NULL;
    if (buf->size > 1000000000)
        return NULL;

    struct demux_packet *dp = packet_create();
    dp->avpacket->buf = av_buffer_ref(buf);
    if (!dp->avpacket->buf) {
        talloc_free(dp);
        return NULL;
    }
    dp->avpacket->data = dp->buffer = buf->data;
    dp->avpacket->size = dp->len = buf->size;
    return dp;
}
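
// Note on the padding requirement above (interpretation, not a separate spec):
// "proper padding" means the allocation backing buf extends at least
// AV_INPUT_BUFFER_PADDING_SIZE bytes past buf->size, since libavcodec may read
// slightly beyond the reported size.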

// Input data doesn't need to be padded.
struct demux_packet *new_demux_packet_from(void *data, size_t len)
{
    struct demux_packet *dp = new_demux_packet(len);
    if (!dp)
        return NULL;
    memcpy(dp->avpacket->data, data, len);
    return dp;
}
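
// Usage sketch (hypothetical caller): wrapping an arbitrary, unpadded blob;
// new_demux_packet() below supplies the padded backing allocation via
// av_new_packet():
//
//     static const uint8_t blob[] = {0x00, 0x00, 0x01, 0xb3};
//     struct demux_packet *dp = new_demux_packet_from((void *)blob, sizeof(blob));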

struct demux_packet *new_demux_packet(size_t len)
{
    if (len > INT_MAX)
        return NULL;

    struct demux_packet *dp = packet_create();
    int r = av_new_packet(dp->avpacket, len);
    if (r < 0) {
        talloc_free(dp);
        return NULL;
    }
    dp->buffer = dp->avpacket->data;
    dp->len = len;
    return dp;
}

void demux_packet_shorten(struct demux_packet *dp, size_t len)
{
    assert(len <= dp->len);
    if (dp->len) {
        dp->len = len;
        memset(dp->buffer + dp->len, 0, AV_INPUT_BUFFER_PADDING_SIZE);
    }
}
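
// The memset() above re-establishes the zeroed AV_INPUT_BUFFER_PADDING_SIZE
// tail that libavcodec expects after the (now shorter) payload.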

void free_demux_packet(struct demux_packet *dp)
{
    talloc_free(dp);
}

void demux_packet_copy_attribs(struct demux_packet *dst, struct demux_packet *src)
{
    dst->pts = src->pts;
    dst->dts = src->dts;
    dst->duration = src->duration;
    dst->pos = src->pos;
    dst->segmented = src->segmented;
    dst->start = src->start;
    dst->end = src->end;
    dst->codec = src->codec;
    dst->back_restart = src->back_restart;
    dst->back_preroll = src->back_preroll;
    dst->keyframe = src->keyframe;
    dst->stream = src->stream;
}

struct demux_packet *demux_copy_packet(struct demux_packet *dp)
{
    struct demux_packet *new = NULL;
    if (dp->avpacket) {
        new = new_demux_packet_from_avpacket(dp->avpacket);
    } else {
        // Some packets might not be created by new_demux_packet*().
        new = new_demux_packet_from(dp->buffer, dp->len);
    }
    if (!new)
        return NULL;
    demux_packet_copy_attribs(new, dp);
    return new;
}
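
// Note (derived from the code above, not a separate spec): when the source
// AVPacket has a refcounted buffer, av_packet_ref() shares the payload instead
// of duplicating the bytes, so the "copy" is usually shallow on the data side;
// only the demux_packet attributes are copied field by field.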

#define ROUND_ALLOC(s) MP_ALIGN_UP((s), 16)
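
// ROUND_ALLOC() rounds an allocation size up to a 16-byte multiple, e.g.
// ROUND_ALLOC(1) == 16, ROUND_ALLOC(16) == 16, ROUND_ALLOC(17) == 32
// (assuming MP_ALIGN_UP has the usual round-up-to-multiple semantics).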

// Attempt to estimate the total memory consumption of the given packet.
// This is important if we store thousands of packets, so as not to exceed
// user-provided limits. Of course we can't know how much memory internal
// fragmentation of the libc memory allocator will waste.
// Note that this should return a "stable" value - e.g. if a new packet ref
// is created, this should return the same value with the new ref. (This
// implies the value is not exact and does not return the actual size of
// memory wasted due to internal fragmentation.)
size_t demux_packet_estimate_total_size(struct demux_packet *dp)
{
    size_t size = ROUND_ALLOC(sizeof(struct demux_packet));
    size += 8 * sizeof(void *); // ta overhead
    size += 10 * sizeof(void *); // additional estimate for ta_ext_header
    if (dp->avpacket) {
        assert(!dp->is_cached);
        size += ROUND_ALLOC(dp->len);
        size += ROUND_ALLOC(sizeof(AVPacket));
        size += 8 * sizeof(void *); // ta overhead
        size += ROUND_ALLOC(sizeof(AVBufferRef));
        size += ROUND_ALLOC(64); // upper bound estimate on sizeof(AVBuffer)
        size += ROUND_ALLOC(dp->avpacket->side_data_elems *
                            sizeof(dp->avpacket->side_data[0]));
        for (int n = 0; n < dp->avpacket->side_data_elems; n++)
            size += ROUND_ALLOC(dp->avpacket->side_data[n].size);
    }
    return size;
}

int demux_packet_set_padding(struct demux_packet *dp, int start, int end)
{
    if (!start && !end)
        return 0;
    if (!dp->avpacket)
        return -1;
    uint8_t *p = av_packet_new_side_data(dp->avpacket, AV_PKT_DATA_SKIP_SAMPLES, 10);
    if (!p)
        return -1;

    AV_WL32(p + 0, start);
    AV_WL32(p + 4, end);
    return 0;
}
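
// The 10-byte AV_PKT_DATA_SKIP_SAMPLES payload written above follows FFmpeg's
// documented layout: a little-endian u32 count of samples to skip from the
// start of the packet, then a little-endian u32 count to skip from the end;
// the remaining two "skip reason" bytes are not set explicitly here.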

int demux_packet_add_blockadditional(struct demux_packet *dp, uint64_t id,
                                     void *data, size_t size)
{
    if (!dp->avpacket)
        return -1;
    uint8_t *sd = av_packet_new_side_data(dp->avpacket,
                                          AV_PKT_DATA_MATROSKA_BLOCKADDITIONAL,
                                          8 + size);
    if (!sd)
        return -1;
    AV_WB64(sd, id);
    if (size > 0)
        memcpy(sd + 8, data, size);
    return 0;
}
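
// Layout note (per FFmpeg's AV_PKT_DATA_MATROSKA_BLOCKADDITIONAL convention):
// the side data starts with the 64-bit big-endian BlockAddID, followed
// directly by the BlockAdditional payload bytes, which is exactly what the
// AV_WB64() + memcpy() above produce.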