This commit replaces all uses of sig_peak and maps all HDR metadata.
Among the notable changes, the mixed usage of maxCLL and max_luma is
resolved: max_luma is now always used, which makes vo_gpu and
vo_gpu_next behave the same way.
ffmpeg does not tag yuv levels for GRAY formats, but apparently they
should be inferred as full range, instead of always defaulting to
limited range. This mimics ffmpeg's behaviour and fixes (M)JPEG
playback.
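A sketch of the resulting defaulting rule (names are illustrative
stand-ins, not the actual mpv identifiers):

    enum levels { LEVELS_AUTO, LEVELS_LIMITED, LEVELS_FULL };

    // If ffmpeg didn't tag the levels, assume full range for gray
    // formats and limited range for everything else.
    static enum levels infer_levels(enum levels tagged, int is_gray)
    {
        if (tagged != LEVELS_AUTO)
            return tagged;
        return is_gray ? LEVELS_FULL : LEVELS_LIMITED;
    }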
See: d295b6b693/libswscale/utils.c (L926-L962)
Fixes: #12089
This changes mp_image_new_ref() to handle allocation failure itself
instead of leaving it to its many call sites (some of which never
checked for failure at all).
Also remove MP_HANDLE_OOM() from the call sites, since it is no
longer necessary.
Not all call sites have been touched, since some of the callers might
be relying on `mp_image_new_ref(NULL)` returning NULL.
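Roughly, the new shape of the function (a simplified sketch;
ref_image_internal() is a stand-in for the actual refcounting logic):

    #include <stdlib.h>

    struct mp_image;
    struct mp_image *ref_image_internal(struct mp_image *img); // stand-in

    struct mp_image *mp_image_new_ref(struct mp_image *img)
    {
        if (!img)
            return NULL; // preserved: mp_image_new_ref(NULL) == NULL
        struct mp_image *new = ref_image_internal(img);
        if (!new)
            abort(); // previously MP_HANDLE_OOM() at each call site
        return new;
    }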
Fixes: https://github.com/mpv-player/mpv/issues/11840
As the Dolby Vision metadata is only supported for vo_gpu_next, the check
whether to use the metadata is now handled by `mp_map_dovi_metadata_to_pl`.
It doesn't hurt to keep the metadata in `mp_image`, and might be useful to
library users.
Now that 0.35 has been released, we can consider increasing our minimum
required ffmpeg version. Currently, we think 4.4 is the most recent
version we can move to (from the current requirement of 4.0).
This allows us to remove a few conditionals. There are more that we
won't be able to remove unless we move further up to 5.1.
These helpers, for some reason, decided to round the returned values up
to multiples of the nearest plane alignment. This logic makes no sense
to me, and completely breaks any sort of oddly-sized mp_image.
This logic was introduced, presumably in error and without real
justification, as part of a major refactor commit (caee8748). As far
as I can tell, removing it again doesn't regress anything.
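Reconstructed from the description (not the actual code), the removed
round-up vs. what the helpers should return:

    #include <stddef.h>

    // Bad (removed): size = MP_ALIGN_UP(stride * h, plane_alignment);
    // over-reports the size of oddly-sized images. The plane size
    // should simply follow from stride and height:
    static size_t plane_size(ptrdiff_t stride, int h)
    {
        return (size_t)stride * h;
    }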
Fixes several serious bugs including buffer underflows and GPU crashes
in vo_gpu and vo_gpu_next.
This seems to work on gcc, clang and mingw as-is, but I made it
conditional on __GNUC__ just in case, even though I can't figure out
which compilers we care about that don't export this define.
Also replace all instances of assert(0) in the code by MP_UNREACHABLE(),
which is a strict improvement.
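A plausible shape for the macro (an assumption based on the
description; the actual definition may differ):

    #include <assert.h>

    // Assert in debug builds; additionally tell the optimizer the
    // spot is unreachable where the compiler supports it.
    #ifdef __GNUC__
    #define MP_UNREACHABLE() do { assert(0); __builtin_unreachable(); } while (0)
    #else
    #define MP_UNREACHABLE() do { assert(0); } while (0)
    #endif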
The old code ignored many corner cases, and even wrote "blacker than
black" to YUV images. Use the new pixel format metadata and other
recently added gimmicky crap, which should make this more correct. Even
the almighty fuckup of a format AV_PIX_FMT_UYYVYY411 should work,
although that couldn't be tested for obvious reasons.
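For reference, the limited-range YUV black values involved here
(standard video-range math, not mpv-specific code):

    // For bit depth >= 8: luma black is 16 << (depth - 8) (16 at
    // 8 bit, 64 at 10 bit), and neutral chroma is 1 << (depth - 1)
    // (128 at 8 bit). Writing plain zeros instead yields "blacker
    // than black" luma and off-center chroma (a green tint).
    static void yuv_black(int depth, int *luma, int *chroma)
    {
        *luma = 16 << (depth - 8);
        *chroma = 1 << (depth - 1);
    }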
This doesn't work for "monow", but this is so extremely fringe while at
the same time painful that I just won't care. In theory, it could be
modeled as some sort of inverted gray colorspace or something.
Remove the vaguely defined plane_bits and component_bits fields from
struct mp_imgfmt_desc. Add weird replacements for existing uses. Remove
the bytes[] field, replace uses with bpp[].
Fix some potential alignment issues in existing code. As a compromise,
split mp_image_pixel_ptr() into 2 functions, because I think it's a bad
idea to implicitly round, but for some callers being slightly less
strict is convenient.
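A sketch of the stricter variant under assumed semantics (names and
fields are illustrative, not the exact mpv API):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    struct image { // reduced stand-in for mp_image
        uint8_t *planes[4];
        ptrdiff_t stride[4];
        int bpp[4];       // bytes per pixel, per plane
        int xs[4], ys[4]; // chroma shift (subsampling), per plane
    };

    // Coordinates must lie on the plane's subsampling grid; assert
    // instead of rounding implicitly.
    static void *pixel_ptr(struct image *img, int p, int x, int y)
    {
        assert(!(x & ((1 << img->xs[p]) - 1)));
        assert(!(y & ((1 << img->ys[p]) - 1)));
        return img->planes[p] + (y >> img->ys[p]) * img->stride[p]
                              + (x >> img->xs[p]) * (ptrdiff_t)img->bpp[p];
    }

The second, less strict variant would round x/y down explicitly,
making the truncation visible at the call site.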
This shouldn't really change anything. In fact, it's a 100% useless
change. I'm just cleaning up what I started almost 8 years ago (see
commit 00653a3eb0). With this I've decided to keep mp_imgfmt_desc,
just removing the weird parts, and keeping the saner parts.
Adding all these so I can use them for obscure processing purposes (see
later draw_bmp commit).
There isn't really a reason why they should exist. On the other hand,
they're just labels for formats that can be handled in a generic way,
and this commit adds support for them in the zimg wrapper and vo_gpu
just by making the formats exist. (Well, vo_gpu had to be fixed in the
previous commit.)
This is really basic for planar image data access; not sure why there
weren't such helpers before.
They also handle the trickier formats that use bit-packing, or they
would be much simpler. (This affects only RGB4/BGR4/MONOW/MONOH; I
hope whoever invented them is proud of triggering so many special
cases for so little gain.)
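For the bit-packed cases, the special-casing boils down to arithmetic
like this (illustrative only):

    #include <stddef.h>

    // At 1 bit per pixel (MONOW/MONOH), 8 pixels share a byte, so
    // only x values that are multiples of 8 name an addressable
    // byte; at 4 bits per pixel (RGB4/BGR4), it's 2 pixels per byte.
    static size_t packed_byte_offset(int x, int bits_per_pixel)
    {
        return (size_t)x * bits_per_pixel / 8; // x assumed aligned
    }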
This is mostly for testing. It adds passing the metadata through the
video chain. The metadata can be manipulated with vf_format. Support
for zimg alpha conversion (if built with a zimg version that has
gained alpha support) is implemented. Premultiplied input is
supported in vo_gpu.
Some things still seem to be buggy.
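As a usage example (sub-option spelling per my reading of the
vf_format change; check the manpage):

    mpv --vf=format=alpha=premul input.mkv

This should tag the video as premultiplied for the rest of the filter
chain.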
Libav seems rather dead: no release for 2 years, no new git commits in
master for almost a year (with one exception ~6 months ago). From what I
can tell, some developers resigned themselves to the horrifying idea to
post patches to ffmpeg-devel instead, while the rest of the developers
went on to greener pastures.
Libav was a better project than FFmpeg. Unfortunately, FFmpeg won,
because it managed to keep the name and website. Libav was pushed more
and more into obscurity: while there was initially a big push for Libav,
FFmpeg just remained "in place" and visible for most people. FFmpeg was
slowly draining all manpower and energy from Libav. A big part of this
was that FFmpeg stole code from Libav (regular merges of the entire
Libav git tree), making it some sort of Frankenstein mirror of Libav,
think decaying zombie with additional legs ("features") nailed to it.
"Stealing" surely is the wrong word; I'm just aping the language that
some of the FFmpeg members used to use. All that is in the past now, I'm
probably the only person left who is annoyed by this, and with this
commit I'm putting this decade long problem finally to an end. I just
thought I'd express my annoyance about this fucking shitshow one last
time.
The most intrusive change in this commit is the resample filter, which
originally used libavresample. Since the FFmpeg developers refused to
enable libavresample by default for drama reasons, and the API was
slightly different, the filter used a big preprocessor mess to make it
compatible with libswresample. All of that falls away now. The
simplification to the build system is also significant.
With hwdec copy modes, mp_image_copy_attributes() is used to transfer
metadata other than the image data when copying the image from the
hardware surface. It didn't copy the closed caption data.
Fix this. This makes closed captions in copy mode work.
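The fix boils down to something like this (simplified; the a53_cc
field name is assumed from FFmpeg's A53 CC side data):

    #include <libavutil/buffer.h>

    struct cc_holder { // reduced stand-in for mp_image
        AVBufferRef *a53_cc; // closed captions, may be NULL
    };

    // Also transfer the CC buffer when copying attributes.
    static void copy_cc(struct cc_holder *dst, struct cc_holder *src)
    {
        av_buffer_unref(&dst->a53_cc);
        if (src->a53_cc)
            dst->a53_cc = av_buffer_ref(src->a53_cc);
    }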
Fixes: #6376
And update the comment both explaining why this defaulting matters and
why we use BT.2020 instead.
tl;dr BT.709 clips even the one test file we *do* have, so if we don't
handle XYZ "natively" in vo_gpu we might as well at least handle it in a
way that runs less risk of clipping
This used to be needed for the "GPU memcpy" (shitty Intel methods to
deal with certain uncached memory types). This is now done in FFmpeg,
and the code in mp_image.c was just unnecessarily convoluted.
This was speculatively added 2 years ago in preparation for something
that apparently never happened. The D3D code was added as an "example",
but this too was never used/finished.
No reason to keep this.
Generally, using x86 SIMD efficiently (or crash-free) requires
aligning all data on boundaries of 16, 32, or 64 bytes (depending on
the instruction set used). 64 bytes are needed for AVX-512, 32 for
old AVX, 16 for SSE. Both FFmpeg and zimg usually require aligned
data for this reason.
FFmpeg is very unclear about alignment. Yes, it requires you to align
data pointers and strides. No, it doesn't tell you how much, except
sometimes (libavcodec has a legacy-looking avcodec_align_dimensions2()
API function, that requires a heavy-weight AVCodecContext as argument).
Sometimes, FFmpeg will take a shit on YOUR and ITS OWN alignment. For
example, vf_crop will randomly reduce alignment of data pointers,
depending on the crop parameters. On the other hand, some libavfilter
filters or libavcodec encoders may randomly crash if they get the wrong
alignment. I have no idea how this thing works at all.
FFmpeg usually doesn't seem to signal alignment internally anywhere,
and usually leaves it to av_malloc() etc. to allocate with proper
alignment. libavutil/mem.c currently has an ALIGN define, which is
set to 64 if FFmpeg is built with AVX-512 support, or as low as 16 if
built without any AVX support. The really funny thing is that a
normal FFmpeg build will e.g. align tiny string allocations to 64
bytes, even if the machine does not support AVX at all.
For zimg use (in a later commit), we also want guaranteed alignment.
Modern x86 should actually not be much slower at unaligned accesses, but
that doesn't help. zimg's dumb intrinsic code apparently randomly
chooses between aligned or unaligned accesses (depending on compiler, I
guess), and on some CPUs these can even cause crashes. So just treat the
requirement to align as a fact of life.
All this means that we should probably make sure our own allocations
are 64-byte aligned. This still doesn't guarantee alignment in all
cases, but it's slightly better than before.
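A sketch of the resulting allocation policy (posix_memalign used for
illustration; the actual code may differ):

    #include <stdlib.h>

    // Always hand out 64-byte-aligned image memory, so AVX-512
    // consumers (FFmpeg, zimg) are satisfied regardless of how they
    // were built.
    static void *image_alloc(size_t size)
    {
        void *p = NULL;
        if (posix_memalign(&p, 64, size) != 0)
            return NULL;
        return p;
    }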
This also makes me wonder whether we should always override libavcodec's
buffer pool, just so we have a guaranteed alignment. Currently, we only
do that if --vd-lavc-dr is used (and if that actually works). On the
other hand, it always uses DR on my machine, so who cares.
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally
crash; that would be a bug. It possibly _could_ start a fire by
entering some sort of endless loop; not a literal fire, just
something where it tries to do work without making progress.)
(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)
How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.
The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a bug in that
mess breaks correct slicing of segments, so EDL/ordered chapter
playback is broken in the backward direction).
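Conceptually, the reordering is no more than this (sketch, not the
actual code):

    struct frame;

    // Decode a keyframe-delimited segment in the normal forward
    // direction, then hand the frames out in reverse order.
    static void emit_reversed(struct frame **frames, int n,
                              void (*output)(struct frame *))
    {
        for (int i = n - 1; i >= 0; i--)
            output(frames[i]);
    }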
Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).
The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.
Backward display is the most weird (and potentially buggy) part. To
avoid having to touch a LOT of timing code, we negate all timestamps.
The basic idea is that due to the negation, all comparisons and
subtractions of timestamps keep working, and you don't need to touch
every single one of them to "reverse" them.
E.g.:

    bool before = pts_a < pts_b;

would need to be:

    bool before = forward
        ? pts_a < pts_b
        : pts_a > pts_b;

or:

    bool before = pts_a * dir < pts_b * dir;

or, as it's implemented now, you just do this after decoding:

    pts_a *= dir;
    pts_b *= dir;

and then the normal timing/renderer code keeps using:

    bool before = pts_a < pts_b;
Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.
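For instance, a seek target entered by the user lives in the positive
domain and has to be converted before it's compared against internal
timestamps (illustrative only):

    // play_dir is 1 for forward and -1 for backward playback.
    static double to_internal_pts(double user_pts, int play_dir)
    {
        return user_pts * play_dir;
    }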
Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there really likely
are subtle problems of all sorts.
As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)
VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.
FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
This helps with compatibility and/or performance, in particular for
oddly-sized formats like rgb24. We use a loop to avoid having to
calculate the lcm (or wasting bytes in the extremely common case of
the pixel byte size and the stride alignment having shared factors).
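The loop amounts to this (a sketch under assumed parameters):

    // Grow the stride in whole pixels until it hits the requested
    // alignment, instead of computing lcm(bpp, align) up front. For
    // rgb24 (bpp = 3) and align = 64 this finds the next multiple of
    // lcm(3, 64) = 192 without ever calculating the lcm.
    static int aligned_stride(int width, int bpp, int align)
    {
        int stride = width * bpp;
        while (stride % align)
            stride += bpp;
        return stride;
    }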
Also rename stereo3d to stereo_in. The only real change is that the
vo_gpu OSD code now uses the actual stereo 3D mode, instead of the
--video-stereo-mode value. (Why does this vo_gpu code even exist?)