This is fragile enough that it warrants getting "monitored".
This takes the commented test program code from img_format.c, makes it
output to a text file, and then compares it to a "ref" file stored in
git.
Originally, I wanted to do the comparison etc. in a shell or Python
script. But why not do it in C. So mpv calls /usr/bin/diff as a
sub-process now.
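
A minimal sketch of the idea (an illustration only, not mpv's actual test
code, which uses its own helpers; the function name and paths here are made
up):

    #include <stdio.h>
    #include <stdlib.h>

    // Dump the format table to out_path elsewhere, then let diff(1) compare
    // it against the ref file checked into git.
    static int compare_with_ref(const char *out_path, const char *ref_path)
    {
        char cmd[4096];
        snprintf(cmd, sizeof(cmd), "/usr/bin/diff -u -- '%s' '%s'",
                 ref_path, out_path);
        // diff exits with 0 only if the files are identical.
        return system(cmd) == 0 ? 0 : -1;
    }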
This test will start producing different output if FFmpeg adds new pixel
formats or pixel format flags, or if mpv adds new IMGFMT (either aliases
to FFmpeg formats or own formats). That is unavoidable, and requires
manual inspection of the results, and then updating the ref file.
The changes in the non-test code are to guarantee that the format ID
conversion functions only translate between valid IDs.
Defining NDEBUG via CFLAGS is the canonical way to disable assertions in
C. mpv respects this (and ta.c actually disables some debugging
machinery if it's defined).
But for tests, this is not useful at all. So if --enable-tests is passed
to configure, the user must not define NDEBUG, even if the rest of the
player does not care.
(We could just #undef NDEBUG, but let's not. Tests calling into the rest
of the player might depend on asserts there, or so.)
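
One way the build could enforce this (just an illustration; MP_ENABLE_TESTS
is an invented macro, not necessarily how the waf check is written):

    #if defined(NDEBUG) && defined(MP_ENABLE_TESTS)
    #error "NDEBUG must not be defined if tests are enabled"
    #endif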
This stopped doing anything who knows how many years ago.
The only actual effect was that af_rubberband was made GPL only. Now it
is available in LGPL builds too.
Until now, each .c file in test/ was built as separate, self-contained
binary. Each binary could be run to execute the tests it contained.
Change this and make them part of the normal mpv binary. Now the tests
have to be invoked via the --unittest option. Do this for two reasons:
- Tests now run within a "properly" initialized mpv instance, so all
services are available.
- Possibly simplifying the situation for future build systems.
The first point is the main motivation. The mpv code is entangled with
mp_log and the option system. It feels like a bad idea to duplicate some
of the initialization of this just so you can call code using them.
I'm also getting rid of cmocka. There wouldn't be any problem keeping it
(it's a perfectly sane set of helpers), but NIH calls. I would have had
to aggregate all tests into a CMUnitTest list, and I don't see how I'd
get different types of entry points easily. Probably easily solvable,
but since we made only pretty basic use of this library, NIH-ing this is
actually easier (I needed a list of tests with custom metadata anyway,
so all that was left was to reimplement the assert_* helpers).
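
Roughly what the NIH replacement amounts to (names and fields here are
assumptions for illustration, not mpv's actual declarations):

    #include <assert.h>

    struct test_ctx;  // handed in by the initialized player

    struct unittest {
        const char *name;                    // selected via --unittest=<name>
        void (*run)(struct test_ctx *ctx);   // entry point for this test
    };

    // Minimal stand-ins for the cmocka assert_* helpers:
    #define assert_true(x)          assert(x)
    #define assert_int_equal(a, b)  assert((a) == (b))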
Unit tests now don't output anything, and if they fail, they'll simply
crash and leave a message that typically requires inspecting the test
code to figure out what went wrong (and probably editing the test code
to get more information). I even merged the various test functions into
single ones. Sucks, but here you go.
chmap_sel.c is merged into chmap.c, because I didn't see the point of
this being separate. json.c drops the print_message() to go along with
the new silent-by-default idea; there's also a memory leak fix unrelated
to the rest of this commit.
The new code is enabled with --enable-tests (--enable-test goes away).
Due to waf's option parser, --enable-test still works, because it's a
unique prefix to --enable-tests.
The use of glXGetCurrentDisplay() restricted this to the GLX backend.
But actually it works under EGL as well. Removing the GLX-specific call
and using the general mpv-internal method to get the X "Display" makes
it work in mpv.
I didn't know this. Nvidia didn't list this as an extension in the EGL
context when I still used their GPUs.
Note that this might in theory break use of vdpau in some libmpv clients
using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
now. Considering that vaapi is preferable these days, it's not bad at
all if these clients get "broken". They can be easily fixed by passing
the display to mpv correctly.
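
For reference, a hedged sketch of such a fix on the client side (assuming
the OpenGL render API; error handling trimmed):

    #include <mpv/client.h>
    #include <mpv/render_gl.h>
    #include <X11/Xlib.h>

    // Pass the X11 Display explicitly instead of relying on mpv finding it
    // via glXGetCurrentDisplay().
    static mpv_render_context *create_ctx(mpv_handle *mpv, Display *display,
                                          mpv_opengl_init_params *gl_init)
    {
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, gl_init},
            {MPV_RENDER_PARAM_X11_DISPLAY, display},
            {0}
        };
        mpv_render_context *ctx = NULL;
        if (mpv_render_context_create(&ctx, mpv, params) < 0)
            return NULL;
        return ctx;
    }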
Some stream inputs may have higher latency with higher buffer sizes, for
example network filesystems accessed via the normal OS filesystem interface (these
have to wait until the full buffer is read, which means higher latency).
Probably doesn't matter in practice, but why take chances.
It was set, but its value was never used. The stream cache used to use
it (it controlled how much data the cache tried to read from the
underlying stream at once), but the cache was removed.
The user can now control the buffer size with --stream-buffer-size,
which achieves a similar effect, because the stream will in the common
case read half of the buffer size at once. In fact, the new default size
is 128KB, i.e. 64KB read size, which is as much as stream_file and
stream_cb requested by default. stream_memory requested more, but it
doesn't matter anyway. Only stream_smb set a larger size, at 128KB.
The dvdnav API reads in 2K blocks (dvdnav_get_next_block()). The mpv
wrapper (fill_buffer() in this file) expects that the read size done by
the mpv core is at least 2K for this reason. If not, it returns an
error.
This used to be OK, because there was a thing called section alignment
in the core code. This was removed because the core shouldn't suffer
from optical disc idiosyncrasies. Which means that ever since, it has
been working only by coincidence, or maybe not at all.
Fixing this would require keeping a buffer in the priv struct, and
returning it piece by piece if the core makes smaller reads. I have no
intention of writing such code, so add an error message asking for a
patch. If anyone actually cares about DVD, maybe it'll get fixed.
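
The check boils down to something like this (a sketch that assumes mpv's
internal stream.h and msg.h; the exact signature and message are
approximated):

    #define DVD_BLOCK_SIZE 2048

    static int fill_buffer(stream_t *s, void *buffer, int max_len)
    {
        if (max_len < DVD_BLOCK_SIZE) {
            MP_ERR(s, "Short read size. Data corruption will follow. "
                      "Please provide a patch.\n");
            return -1;
        }
        // ... read one 2K block via dvdnav_get_next_block() into buffer ...
        return DVD_BLOCK_SIZE;
    }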
This isn't really needed, since it doesn't support byte seeking (it's
only there to keep demux_disc from fucking up even more if the nested
demux_lavf tries to seek in the TS).
stream_read_peek() duplicated what stream_read_more() checks for anyway
(whether the forward buffer is large enough). This can be skipped by
making the stream_read_more() return value more consistent.
demux_mkv was the only thing using this, and everything else accessed it
directly. No need to keep the indirection wrapper around.
(Funny how this getter was in the initial commit of MPlayer.)
For some reason, the first frame displayed on X11 with amdgpu and OpenGL
will be garbled. This is especially visible if the player starts,
displays a frame, but then still takes a while to properly start
playback.
With --interpolation, the behavior somehow changes (usually gets worse).
I'm not sure what exactly is going on, and the code in video.c is way
too abstruse. Maybe there is some slight possibility that a frame with
uncleared contents gets displayed, which somehow also corrupts another
frame that is displayed immediately after that.
If the clear is run unconditionally, this somehow doesn't happen, and you
see a video frame. By any logic this shouldn't happen: a video frame
should always overwrite the background. So I can't exclude that this
is some sort of driver bug, or at least a very obscure interaction.
Clearing should be practically free anyway, so always do it.
Fixes: #7105
(Only half of the buffer is actually used in a useful way, see manpage
or commit which added the option.)
Might have some advantages with broken network filesystem drivers.
See: #6802
Was probably worthless, and I can't measure a difference anymore (I used
to be able to, and it still seemed worth doing back then).
When the default buffer size is enlarged in the next commit, the inline
buffer probably won't even be useful in theory, because the data will
rarely be on the same page as the other stream fields. It surely makes
the inline buffer seem like a ridiculous micro-optimization. Farewell...
In some corner cases (see #6802), it can be beneficial to use a larger
stream buffer size. Use this as an argument to rewrite everything for no
reason.
Turn stream.c itself into a ring buffer, with configurable size. The
latter would have been easily achievable with minimal changes, and the
ring buffer is the hard part. There is no reason to have a ring buffer
at all, except possibly if FFmpeg doesn't fix their awful mp4 demuxer, and
some subtle issues with demux_mkv.c wanting to seek back by small
offsets (the latter was handled with small stream_peek() calls, which
are unneeded now).
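
To illustrate the kind of complexity a ring buffer brings (a generic
sketch, not mpv's actual code): every read-out has to handle the wrap at
the physical end of the buffer.

    #include <stdint.h>
    #include <string.h>

    // Copy 'len' bytes starting at logical position 'pos' out of a ring of
    // 'ring_size' bytes, splitting the copy if it crosses the physical end.
    // Assumes len <= ring_size (only buffered data is requested).
    static void ring_copy_out(const uint8_t *ring, size_t ring_size,
                              size_t pos, void *dst, size_t len)
    {
        size_t off = pos % ring_size;
        size_t first = ring_size - off;   // bytes until the physical end
        if (first > len)
            first = len;
        memcpy(dst, ring + off, first);
        memcpy((uint8_t *)dst + first, ring, len - first);
    }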
In addition, this turns small forward seeks into reads (where data is
simply skipped). Before this commit, only stream_skip() did this (which
also means that stream_skip() simply calls stream_seek() now).
Replace all stream_peek() calls with something else (usually
stream_read_peek()). The function was a problem, because it returned a
pointer to the internal buffer, which is now a ring buffer with
wrapping. The new function just copies the data into a buffer, and in
some cases requires callers to dynamically allocate memory. (The most
common case, demux_lavf.c, required a separate buffer allocation anyway
due to FFmpeg "idiosyncrasies".) This is the bulk of the demuxer_*
changes.
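
Typical caller pattern after the change (a hedged example assuming mpv's
stream.h; exact buffer sizes and error handling vary per call site):

    static int probe_example(stream_t *s)
    {
        // Copy the next few bytes into a local buffer instead of borrowing a
        // pointer into the (now wrapping) internal buffer; the read position
        // is not advanced.
        unsigned char header[16];
        int got = stream_read_peek(s, header, sizeof(header));
        if (got < 4)
            return -1; // not enough data available
        // inspect header[0..got-1] ...
        return 0;
    }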
I'm not happy with this. There still isn't a good reason why there
should be a ring buffer; it is complex, and most of the time just
wastes half of the available memory. Maybe another rewrite soon.
It also contains bugs; you're an alpha tester now.
This is something relatively frequently needed, and there must be half a
dozen ad-hoc implementations in mpv. The next commit uses this; the
suspected duplicate implementations are hiding.
The old code made it depend on ->seekable. If it isn't seekable, and
something discarded the data, then it'll just show an error message,
which will at least be somewhat informative. If no data was discarded,
the seek call is always a no-op.
There's a weird "timeline" condition in the old code; this doesn't
matter anymore, because timeline stuff does not pass streams down to
nested demuxers anymore.
tv:// and pvr:// are gone, DVD almost. The former didn't really have any
uses left, except webcams. Provide a replacement example for that.
We don't need a separate section for DVD. If you use DVD, you're on your
own. There's still enough documentation left to puzzle things together
even if you don't read the source code.
Not like anyone reads it. Although putting all this text before listing
the allowed option values is sort of intended to discourage users from
using the option at all. Advertise Ctrl+h, which is a decent way of
enabling hardware decoding temporarily.
They were used at some point, but then fell into disuse. In general,
these old flags are all a bit fuzzy, so it's a good idea to remove them
as much as possible.
The comment about MP_IMGFLAG_PAL isn't true anymore. The old meaning was
deprecated at some point, and the flag was removed from "pseudo
paletted" formats. I think mpv at one point changed its own flag from
AV_PIX_FMT_FLAG_PSEUDOPAL to AV_PIX_FMT_FLAG_PAL, when the former was
deprecated, and it became unnecessary to allocate a palette for
non-paletted formats. (The one who deprecated it in FFmpeg was me, if
you're wondering.)
MP_IMGFLAG_PLANAR was used in command.c; use a relatively similar flag
as a replacement.
This is slightly helpful for testing, and otherwise useless and without
consequence.
I'm not using the correct output format and using IMGFMT_RGB0 as a
placeholder. This doesn't matter currently, as both sws and zimg support
this as output (and support any input for it). I'm doing this because
it's surprisingly tricky to get the correct output format at this point,
without digging deeper into x11 shit or refactoring parts of the VO. I
don't care enough about this.
The user can raise the number of tolerated hardware decoding errors. On
the other hand, we have a static limit on packets that are "saved" for
fallback handling (and that's a good idea to avoid unbounded memory
usage). In this case, it could happen that the start of a file was fine
after a fallback, but that playback would suddenly skip once it got past
the buffered amount of data.
It's more useful to skip buffering entirely if the number of tolerated
decoding errors exceeds the fixed buffer.
(And also, I'm sure nobody gives a shit about this feature.)
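
In other words (a sketch with invented names, not the actual variables):

    #include <stdbool.h>

    // Saving packets for a possible software fallback only makes sense if
    // the allowed number of hw errors still fits into the fixed replay queue.
    static bool want_fallback_queue(int tolerated_hw_errors, int queue_capacity)
    {
        return tolerated_hw_errors <= queue_capacity;
    }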
prepare_decoding() returned a bool that was supposed to tell whether
decoding could work, or if something was fucked. After recent changes to
the decoder loop, this did not work anymore, and caused an endless loop.
Redo it, so it makes more sense. avctx being NULL (software fallback
initialization failed) now signals EOF. hwdec_failed needs to be handled
on send_packet() only, where it probably never happens anyway.
(Who was the idiot who made libavcodec have two entrypoints for
decoding? Oh right, it was me. PEBKAC.)
Shovel the code around to make the data flow slightly simpler (?). At
least there's only one send_packet function now. The old code had the
problem that send_packet() could be called even if there were queued
packets; due to sending the queued packets in the receive_frame
function, this should not happen anymore (the code checking for this
case in send_packet should normally never be called).
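
The resulting flow is roughly this (a hedged sketch; the struct and helper
below are invented for illustration, not mpv's actual code):

    #include <libavcodec/avcodec.h>

    // Invented stand-in for the decoder state; mpv's real struct differs.
    struct sketch_ctx {
        AVCodecContext *avctx;
        AVPacket **queued_packets;
        int num_queued_packets;
    };

    void dequeue_packet(struct sketch_ctx *ctx); // hypothetical helper

    // receive_frame drains the queued packets first, so send_packet is only
    // reached when the queue is empty.
    static int receive_frame_sketch(struct sketch_ctx *ctx, AVFrame *frame)
    {
        for (;;) {
            int r = avcodec_receive_frame(ctx->avctx, frame);
            if (r != AVERROR(EAGAIN))
                return r; // got a frame, EOF, or a real error
            if (!ctx->num_queued_packets)
                return r; // caller must feed new input via send_packet()
            r = avcodec_send_packet(ctx->avctx, ctx->queued_packets[0]);
            if (r < 0)
                return r; // includes the "can't happen" EAGAIN case
            dequeue_packet(ctx);
        }
    }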
Untested with actual full stream hw decoders (none available here); I
created a test case by making hwaccel decoding fail.
Forgotten in commit 5d5fdb7. This failed to return the error code
properly. In particular, if the decoder rejected the packet, this was
not properly detected. Normally, this mattered only in specific cases.
Fixes: #7115
MPlayer isn't all too well-known anymore. It does not make sense to
"advertise" with it (and it actually never did).
The GPU comment needed clarification. I think originally, it was just to
signal that you'll have a bad time with Intel. Make that broader.
Instead, use a YUV planar format. It doesn't matter, since we use the
format only internally and for "management" purposes. We're only
interested in the physical layout, not what colorspace FFmpeg "forcibly"
associates with it.
Also get rid of using the old and slightly sketchy mp_imgfmt_find()
function. Yep, IMGFMT_RGB30 now "constructs" the planar format,
instead of using a pixfmt constant. Slightly inconvenient, tricky, and
fragile, but I like it, so bugger off.
This whole thing gets rid of some of the strange plane permutations that
were needed earlier.