b9d351f02a
See manpage additions. This is a huge hack. You can bet there are shit tons of bugs. It's literally forcing square pegs into round holes.

Hopefully, the manpage wall of text makes it clear enough that the whole shit can easily crash and burn. (Although it shouldn't literally crash. That would be a bug. It possibly _could_ start a fire by entering some sort of endless loop. Not a literal fire, just something where it tries to do work without making progress.)

(Some obvious bugs I simply ignored for this initial version, but there's a number of potential bugs I can't even imagine. Normal playback should remain completely unaffected, though.)

How this works is also described in the manpage. Basically, we demux in reverse, then we decode in reverse, then we render in reverse.

The decoding part is the simplest: just reorder the decoder output. This weirdly integrates with the timeline/ordered chapter code, which also has special requirements on feeding the packets to the decoder in a non-straightforward way (it doesn't conflict, but a mess of bugs breaks correct slicing of segments, so EDL/ordered chapter playback is broken in backward direction).

Backward demuxing is pretty involved. In theory, it could be much easier: simply iterate the usual demuxer output backward. But this just doesn't fit into our code, so there's a cthulhu nightmare of shit. To be specific, each stream (audio, video) is reversed separately. At least this means we can do backward playback within cached content (for example, you could play backwards in a live stream; on that note, it disables prefetching, which would lead to losing new live video, but this could be avoided).

The fuckmess also meant that I didn't bother trying to support subtitles. Subtitles are a problem because they're "sparse" streams. They need to be "passively" demuxed: you don't try to read a subtitle packet; you demux audio and video, and then look whether a subtitle packet was among them. This means that to get subtitles for a time range, you need to know that you demuxed video and audio over this range, which becomes pretty messy when you demux audio and video backwards separately.

Backward display is the weirdest (and potentially buggiest) part. To avoid having to touch a LOT of timing code, we negate all timestamps. The basic idea is that due to the negation, all comparisons and subtractions of timestamps keep working, and you don't need to touch every single one of them to "reverse" them. E.g.:

    bool before = pts_a < pts_b;

would need to become:

    bool before = forward ? pts_a < pts_b : pts_a > pts_b;

or:

    bool before = pts_a * dir < pts_b * dir;

or, as it's implemented now, you just do this once after decoding:

    pts_a *= dir;
    pts_b *= dir;

and then the normal timing/renderer code stays:

    bool before = pts_a < pts_b;

Consequently, we don't need many changes in the latter code. But some assumptions inherently true for forward playback may have been broken anyway. What is mainly needed is fixing places where values are passed between the positive and negative "domains". For example, seeking and timestamp display to the user always use positive timestamps. The main mess is that it's not obvious which domain a given variable should or does use.

Well, in my tests with a single file, it suddenly started to work when I did this. I'm honestly surprised that it did, and that I didn't have to change a single line in the timing code past the decoder (just something minor to make external/cached text subtitles display). I committed it immediately while avoiding thinking about it.
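To illustrate the negation idea concretely, here is a tiny self-contained C sketch of my own; it is not mpv code, and the helper names to_internal_pts/to_user_pts are made up for this example. Timestamps are multiplied by the playback direction right after decoding, the unchanged comparison logic then orders frames correctly in either direction, and user-facing boundaries convert back to the positive domain:

    #include <stdbool.h>
    #include <stdio.h>

    // dir is +1 for forward playback, -1 for backward playback.
    static double to_internal_pts(double pts, int dir) { return pts * dir; }
    static double to_user_pts(double pts, int dir) { return pts * dir; }

    int main(void)
    {
        int dir = -1;                    // backward playback
        double pts_a = 5.0, pts_b = 7.0; // demuxer timestamps (positive domain)

        // Right after decoding, map timestamps into the internal domain.
        double ia = to_internal_pts(pts_a, dir);
        double ib = to_internal_pts(pts_b, dir);

        // Unchanged timing code: in the internal domain, "smaller pts" always
        // means "displayed earlier" (-7.0 < -5.0, so B is displayed first).
        bool a_first = ia < ib;
        printf("A displayed before B: %s\n", a_first ? "yes" : "no");

        // User-facing boundaries (seek targets, OSD) convert back to positive.
        printf("user-visible pts of A: %.1f\n", to_user_pts(ia, dir));
        return 0;
    }

With dir = -1, the frame at 7.0 sorts before the frame at 5.0, which is exactly the display order wanted for backward playback, while the comparison itself stays untouched.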
But there really likely are subtle problems of all sorts.

As far as I'm aware, gstreamer also supports backward playback. When I looked at this years ago, I couldn't find a way to actually try it, and I didn't revisit it now. Back then I also read talk slides by the person who implemented it, and I'm not sure if and which ideas I might have taken from them. It's possible that the timestamp reversal is inspired by it, but I didn't check. (I think the slides claimed that large changes could be avoided by changing a sign?)

VapourSynth has some sort of reverse function, which provides a backward view on a video. The function itself is trivial to implement, as VapourSynth aims to provide random access to video by frame numbers (so you just request decreasing frame numbers). From what I remember, it wasn't exactly fluid, but it worked. It's implemented by creating an index, seeking to the target on demand, and a bunch of caching. mpv could use it, but that would either require using VapourSynth as demuxer and decoder for everything, or replacing the current file every time something is supposed to be played backwards.

FFmpeg's libavfilter has reversal filters for audio and video. These require buffering the entire media data of the file and don't really fit into mpv's architecture. They could be used by playing a libavfilter graph that also demuxes, but that's like VapourSynth, only worse.
60 lines
2.3 KiB
C
#pragma once

#include <stdbool.h>

enum mp_frame_type {
    MP_FRAME_NONE = 0,  // NULL, placeholder, no frame available (_not_ EOF)
    MP_FRAME_VIDEO,     // struct mp_image*
    MP_FRAME_AUDIO,     // struct mp_aframe*
    MP_FRAME_PACKET,    // struct demux_packet*
    MP_FRAME_EOF,       // NULL, signals end of stream (but frames after it can
                        // resume filtering!)
};

const char *mp_frame_type_str(enum mp_frame_type t);

// Generic container for a piece of data, such as a video frame, or a collection
// of audio samples. Wraps the actual media-specific frame data types in a
// generic way. Can also be an empty frame for signaling (MP_FRAME_EOF and
// possibly others).
// This struct is usually allocated on the stack and can be copied by value.
// You need to consider that the underlying pointer is ref-counted, and that
// the _unref/_ref functions must be used accordingly.
struct mp_frame {
    enum mp_frame_type type;
    void *data;
};

// Return whether the frame contains actual data (audio, video, ...). If false,
// it's either signaling, or MP_FRAME_NONE.
bool mp_frame_is_data(struct mp_frame frame);

// Return whether the frame is for signaling (data flow commands like
// MP_FRAME_EOF). If false, it's either data (mp_frame_is_data()), or
// MP_FRAME_NONE.
bool mp_frame_is_signaling(struct mp_frame frame);

// Unreferences any frame data, and sets *frame to MP_FRAME_NONE. (It does
// _not_ deallocate the memory block the parameter points to, only frame->data.)
void mp_frame_unref(struct mp_frame *frame);

// Return a new reference to the given frame. The caller owns the returned
// frame. On failure, returns MP_FRAME_NONE.
struct mp_frame mp_frame_ref(struct mp_frame frame);

double mp_frame_get_pts(struct mp_frame frame);
void mp_frame_set_pts(struct mp_frame frame, double pts);

// Estimation of total size in bytes. This is for buffering purposes.
int mp_frame_approx_size(struct mp_frame frame);

struct AVFrame;
struct AVRational;
struct AVFrame *mp_frame_to_av(struct mp_frame frame, struct AVRational *tb);
struct mp_frame mp_frame_from_av(enum mp_frame_type type, struct AVFrame *frame,
                                 struct AVRational *tb);

#define MAKE_FRAME(type, frame) ((struct mp_frame){(type), (frame)})
#define MP_NO_FRAME MAKE_FRAME(0, 0)
#define MP_EOF_FRAME MAKE_FRAME(MP_FRAME_EOF, 0)
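For reference, a minimal usage sketch of this header (my own example, not from the mpv tree): it assumes the rest of mpv's filter code is linked in, and that img is a valid, ref-counted struct mp_image*.

    #include "frame.h"

    struct mp_image;

    // Wrap a decoded video frame in a generic mp_frame, inspect it, and
    // release the reference again.
    void example(struct mp_image *img)
    {
        struct mp_frame f = MAKE_FRAME(MP_FRAME_VIDEO, img);
        if (mp_frame_is_data(f)) {
            double pts = mp_frame_get_pts(f); // presentation timestamp
            (void)pts; // ...pass f to a filter, adjust its pts, etc...
        }
        mp_frame_unref(&f); // releases frame->data; f becomes MP_FRAME_NONE
    }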