#pragma once

#include <stdbool.h>

enum mp_frame_type {
    MP_FRAME_NONE = 0,  // NULL, placeholder, no frame available (_not_ EOF)
    MP_FRAME_VIDEO,     // struct mp_image*
    MP_FRAME_AUDIO,     // struct mp_aframe*
    MP_FRAME_PACKET,    // struct demux_packet*
    MP_FRAME_EOF,       // NULL, signals end of stream (but frames after it can
                        // resume filtering!)
};

const char *mp_frame_type_str(enum mp_frame_type t);

// Generic container for a piece of data, such as a video frame, or a collection
// of audio samples. Wraps the actual media-specific frame data type in a
// generic way. Can also be an empty frame used for signaling (MP_FRAME_EOF and
// possibly others).
// This struct is usually allocated on the stack and can be copied by value.
// You need to consider that the underlying pointer is ref-counted, and that
// the _unref/_ref functions must be used accordingly.
struct mp_frame {
    enum mp_frame_type type;
    void *data;
};
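
// Usage sketch (hypothetical, assuming "some_frame" holds valid data): a plain
// struct copy does not add a reference, so only one of the copies may be
// unreffed:
//
//   struct mp_frame a = some_frame;
//   struct mp_frame b = a;        // b.data aliases a.data; no new reference
//   mp_frame_unref(&a);           // releases the data; b.data is now dangling
//
// Use mp_frame_ref() (declared below) when both copies need to own the data.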

// Return whether the frame contains actual data (audio, video, ...). If false,
// it's either signaling, or MP_FRAME_NONE.
bool mp_frame_is_data(struct mp_frame frame);

// Return whether the frame is for signaling (data flow commands like
// MP_FRAME_EOF). If false, it's either data (mp_frame_is_data()), or
// MP_FRAME_NONE.
bool mp_frame_is_signaling(struct mp_frame frame);
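
// Usage sketch (hypothetical; "f" stands for a frame handed to some consumer):
//
//   if (mp_frame_is_data(f)) {
//       // MP_FRAME_VIDEO / MP_FRAME_AUDIO / MP_FRAME_PACKET: process f.data
//   } else if (mp_frame_is_signaling(f)) {
//       // e.g. MP_FRAME_EOF: drain any internal state, then forward the EOF
//   } else {
//       // MP_FRAME_NONE: no frame available; nothing to do
//   }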

// Unreferences any frame data, and sets *frame to MP_FRAME_NONE. (It does
// _not_ deallocate the memory block the parameter points to, only frame->data.)
void mp_frame_unref(struct mp_frame *frame);

// Return a new reference to the given frame. The caller owns the returned
// frame. On failure, returns MP_FRAME_NONE.
struct mp_frame mp_frame_ref(struct mp_frame frame);
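
// Usage sketch (hypothetical fan-out to two consumers; consume() is an assumed
// helper that takes ownership of the frame it is given):
//
//   struct mp_frame copy = mp_frame_ref(f);
//   if (copy.type != MP_FRAME_NONE)   // MP_FRAME_NONE here means ref failure
//       consume(first, copy);
//   consume(second, f);               // hands over the original reference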

// Access the timestamp of the wrapped frame (for frame types that carry one).
double mp_frame_get_pts(struct mp_frame frame);
void mp_frame_set_pts(struct mp_frame frame, double pts);

struct AVFrame;
struct AVRational;
struct AVFrame *mp_frame_to_av(struct mp_frame frame, struct AVRational *tb);
struct mp_frame mp_frame_from_av(enum mp_frame_type type, struct AVFrame *frame,
                                 struct AVRational *tb);
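
// Usage sketch (hypothetical; assumes "f" is a valid MP_FRAME_VIDEO frame,
// that "tb" is the timebase the timestamps should be expressed in, and that
// the caller owns the returned AVFrame):
//
//   struct AVRational tb = {1, 1000};             // millisecond timebase
//   struct AVFrame *av = mp_frame_to_av(f, &tb);
//   if (av) {
//       // ... hand av to libav* code ...
//       av_frame_free(&av);
//   }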

#define MAKE_FRAME(type, frame) ((struct mp_frame){(type), (frame)})
#define MP_NO_FRAME MAKE_FRAME(0, 0)
#define MP_EOF_FRAME MAKE_FRAME(MP_FRAME_EOF, 0)
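
// Usage sketch (hypothetical; assumes "img" is a struct mp_image* whose
// reference is handed over to the new frame):
//
//   struct mp_frame f = MAKE_FRAME(MP_FRAME_VIDEO, img);
//   // ... pass f through the filter chain ...
//   mp_frame_unref(&f);   // releases img; f becomes MP_FRAME_NONE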