// mpv/filters/frame.c (from https://github.com/mpv-player/mpv)

#include <assert.h>

#include <libavutil/frame.h>

#include "audio/aframe.h"
#include "common/av_common.h"
#include "demux/packet.h"
#include "video/mp_image.h"
#include "frame.h"
struct frame_handler {
    const char *name;
    bool is_data;
    bool is_signaling;
    void *(*new_ref)(void *data);
    double (*get_pts)(void *data);
    void (*set_pts)(void *data, double pts);
    int (*approx_size)(void *data);
    AVFrame *(*new_av_ref)(void *data);
    void *(*from_av_ref)(AVFrame *data);
    void (*free)(void *data);
};
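
// struct mp_image (video) implementations.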
static void *video_ref(void *data)
{
    return mp_image_new_ref(data);
}

static double video_get_pts(void *data)
{
    return ((struct mp_image *)data)->pts;
}

static void video_set_pts(void *data, double pts)
{
    ((struct mp_image *)data)->pts = pts;
}

static int video_approx_size(void *data)
{
    return mp_image_approx_byte_size(data);
}

static AVFrame *video_new_av_ref(void *data)
{
    return mp_image_to_av_frame(data);
}

static void *video_from_av_ref(AVFrame *data)
{
    return mp_image_from_av_frame(data);
}
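
// struct mp_aframe (audio) implementations.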
static void *audio_ref(void *data)
{
    return mp_aframe_new_ref(data);
}

static double audio_get_pts(void *data)
{
    return mp_aframe_get_pts(data);
}

static void audio_set_pts(void *data, double pts)
{
    mp_aframe_set_pts(data, pts);
}

static int audio_approx_size(void *data)
{
    return mp_aframe_approx_byte_size(data);
}

static AVFrame *audio_new_av_ref(void *data)
{
    return mp_aframe_to_avframe(data);
}

static void *audio_from_av_ref(AVFrame *data)
{
    return mp_aframe_from_avframe(data);
}
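
// struct demux_packet implementation. Packets only support ref and free; the
// handler exposes no pts accessors and no AVFrame conversion.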
static void *packet_ref(void *data)
{
    return demux_copy_packet(data);
}
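
// Handler table, indexed by enum mp_frame_type. MP_FRAME_NONE provides no
// callbacks, so the generic helpers below become no-ops for it.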
static const struct frame_handler frame_handlers[] = {
    [MP_FRAME_NONE] = {
        .name = "none",
    },
    [MP_FRAME_EOF] = {
        .name = "eof",
        .is_signaling = true,
    },
    [MP_FRAME_VIDEO] = {
        .name = "video",
        .is_data = true,
        .new_ref = video_ref,
        .get_pts = video_get_pts,
        .set_pts = video_set_pts,
        .approx_size = video_approx_size,
        .new_av_ref = video_new_av_ref,
        .from_av_ref = video_from_av_ref,
        .free = talloc_free,
    },
    [MP_FRAME_AUDIO] = {
        .name = "audio",
        .is_data = true,
        .new_ref = audio_ref,
        .get_pts = audio_get_pts,
        .set_pts = audio_set_pts,
        .approx_size = audio_approx_size,
        .new_av_ref = audio_new_av_ref,
        .from_av_ref = audio_from_av_ref,
        .free = talloc_free,
    },
    [MP_FRAME_PACKET] = {
        .name = "packet",
        .is_data = true,
        .new_ref = packet_ref,
        .free = talloc_free,
    },
};
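
// Generic helpers: each dispatches on frame.type through frame_handlers[].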
const char *mp_frame_type_str(enum mp_frame_type t)
{
    return frame_handlers[t].name;
}

bool mp_frame_is_data(struct mp_frame frame)
{
    return frame_handlers[frame.type].is_data;
}

bool mp_frame_is_signaling(struct mp_frame frame)
{
    return frame_handlers[frame.type].is_signaling;
}
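
// Free the frame's data (if the type owns any) and reset *frame to an empty
// MP_FRAME_NONE value. Safe to call with frame == NULL.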
void mp_frame_unref(struct mp_frame *frame)
{
    if (!frame)
        return;

    if (frame_handlers[frame->type].free)
        frame_handlers[frame->type].free(frame->data);

    *frame = (struct mp_frame){0};
}
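
// Return a new reference to the frame's data. On allocation failure the
// returned frame degrades to MP_FRAME_NONE rather than signaling an error.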
struct mp_frame mp_frame_ref(struct mp_frame frame)
{
    if (frame_handlers[frame.type].new_ref) {
        assert(frame.data);
        frame.data = frame_handlers[frame.type].new_ref(frame.data);
        if (!frame.data)
            frame.type = MP_FRAME_NONE;
    }
    return frame;
}
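
// pts accessors. Types without timestamps report MP_NOPTS_VALUE and silently
// ignore writes.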
double mp_frame_get_pts(struct mp_frame frame)
{
    if (frame_handlers[frame.type].get_pts)
        return frame_handlers[frame.type].get_pts(frame.data);
    return MP_NOPTS_VALUE;
}

void mp_frame_set_pts(struct mp_frame frame, double pts)
{
    if (frame_handlers[frame.type].set_pts)
        frame_handlers[frame.type].set_pts(frame.data, pts);
}
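
// Rough memory footprint of the frame's payload, in bytes; 0 if unknown.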
int mp_frame_approx_size(struct mp_frame frame)
{
    if (frame_handlers[frame.type].approx_size)
        return frame_handlers[frame.type].approx_size(frame.data);
    return 0;
}
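
// Convert to a new AVFrame reference, translating the pts into the given
// timebase via mp_pts_to_av(). Returns NULL if the type has no converter or
// the conversion fails.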
AVFrame *mp_frame_to_av(struct mp_frame frame, struct AVRational *tb)
{
    if (!frame_handlers[frame.type].new_av_ref)
        return NULL;

    AVFrame *res = frame_handlers[frame.type].new_av_ref(frame.data);
    if (!res)
        return NULL;

    res->pts = mp_pts_to_av(mp_frame_get_pts(frame), tb);
    return res;
}
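
// Inverse of mp_frame_to_av(): wrap an AVFrame as an mp_frame of the given
// type, translating frame->pts back with mp_pts_from_av(). Returns
// MP_NO_FRAME if the type has no converter or the conversion fails.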
struct mp_frame mp_frame_from_av(enum mp_frame_type type, struct AVFrame *frame,
                                 struct AVRational *tb)
{
    struct mp_frame res = {type};

    if (!frame_handlers[res.type].from_av_ref)
        return MP_NO_FRAME;

    res.data = frame_handlers[res.type].from_av_ref(frame);
    if (!res.data)
        return MP_NO_FRAME;

    mp_frame_set_pts(res, mp_pts_from_av(frame->pts, tb));
    return res;
}