Checking the level name locally will just make this break if
mpv_request_log_messages() gets extended to accept more names. Pass the
argument through without checking.
To keep the behavior the same (for whatever reason, probably not
important), still raise an error if the libmpv API function returns an
error indicating that the argument was bad.
(The check_loglevel() function is still used when the script _emits_ log
messages, which is different, and for which there is no API anyway.)
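For reference, a minimal client-side sketch of the resulting behavior
(assuming a valid mpv_handle; the error handling is illustrative):

    #include <stdio.h>
    #include <mpv/client.h>

    // Pass the level name straight through; libmpv validates it and
    // returns an error code for unknown names, which we still turn
    // into a script error as before.
    static void enable_messages(mpv_handle *ctx, const char *level)
    {
        int err = mpv_request_log_messages(ctx, level);
        if (err < 0)
            printf("bad log level: %s\n", mpv_error_string(err));
    }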
We should only be printing errors that occur when not probing, to
avoid creating the impression that something is wrong - and errors
during probing aren't a problem.
Until now, this didn't work, since the external image had pts 0; so
enabling video at a later time did nothing, because the image was
discarded. Since hrseek now ends on the last frame (instead of nothing),
reusing the hrseek mechanism solves this, and we don't even need to
treat the cursed coverart case separately.
If SEEK_FORWARD is set, a demuxer should skip to the next frame if the
timestamp does not fall on the start of a frame. If that flag is not
set, it should always seek to the first frame before the target
timestamp (or the first frame in the file).
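A hedged sketch of what this contract means for a demuxer's seek code
(illustrative names, not mpv's actual internals; assumes a non-empty,
sorted index of frame start times):

    // Pick the index entry to seek to. With SEEK_FORWARD, round up to
    // the next frame when the target doesn't hit a frame start;
    // otherwise round down (clamped to the first frame).
    static int pick_index_entry(const double *pts, int count,
                                double target, bool seek_forward)
    {
        int best = 0;
        for (int n = 0; n < count; n++) {
            if (pts[n] <= target)
                best = n;
            else if (seek_forward && pts[best] < target)
                return n;  // first frame at or after the target
            else
                break;
        }
        return best;
    }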
See what the added code comment says. Normally when this is needed, it's
the cover art case. But this flag is not set when using an external
image. This gives weird seek behavior, because the frame will be
"normally" displayed for its determined duration, and during normal
video playback, the video pts will be used - which is always 0 here.
This should happen only if audio is active. Otherwise, we're more or
less in image viewer mode, where the image should be displayed for a
configured duration.
This gives much better behavior in general, and is what we want if video
somehow ends earlier than audio.
A common special case is using an audio file with an external image file.
This commit makes things like switching aspect ratio work (provided the
demuxer for the image behaves correctly, which currently isn't the case
with demux_mf.c). Since the image file had timestamp 0, it was usually
skipped by hr-seek, and changed properties weren't applied to it at the
start of the filter chain.
This wasn't done, probably a regression from one of the last dozen
times this special code path was touched. This meant coverart images
ignored the user-set aspect ratio completely, among other things.
Surprisingly, we've managed to get this far without context_glx ever
adding the X11 display as a native resource. But with the recent change
to attempt to enable vdpau when using EGL, the hwdec now requires the
display to be added. So let's add it.
eglGetDisplay() is a broken API, since it takes a windowing-specific
argument, yet is supposed to work for multiple APIs at the same time. On
Linux, it can take both a X11 "Display" and a "wl_display". Obviously
there is no way to specify what kind of display the argument is (it's
just a void*).
Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff
to try to guess the display type, including trying to call mincore() to
determine whether the pointer can be accessed at all. I guess this
recently accidentally broke (as a bug), but on the other hand, maybe
it's time to do this properly.
The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus
Mesa needs to support the associated platform extension
(EGL_KHR_platform_x11).
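For illustration, the unambiguous EGL 1.5 entrypoint looks roughly
like this (a sketch, not mpv's actual code):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <X11/Xlib.h>

    // Unlike eglGetDisplay(), the platform is stated explicitly, so a
    // "Display *" can't be mistaken for a "wl_display *".
    static EGLDisplay get_x11_egl_display(Display *x11_disp)
    {
        return eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR,
                                     x11_disp, NULL);
    }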
Since I see no reasonable way to do this in a compatible way, just
require that EGL 1.5 is available. The problem is that EGL 1.4 seems to
require you to create a display to query the EGL version and extensions, and
you have a chicken-and-egg problem. It's very stupid. Maybe you could
jump through some more hoops to get something compatible, but fuck that.
Users on "too old" Mesa will fall back to GLX (which we keep around for
a regrettable company known by the name of Nvidia).
I think Wayland and GBM should do the same. They're sufficiently
bleeding-edge that you can expect them to have EGL 1.5. On the other
hand, the cursed RPI code will have to stay with eglGetDisplay().
Speculative fix for #7154.
(Rant about EGL follows. Actually I deleted it.)
pass_color_map() (in video_shaders.c) and pass_colormanage() (video.c)
both duplicate the condition on whether to do peak computation. Peak
computation requires a compute shader, so if the duplicated conditions
don't match, video_shaders.c will generate a compute shader, but video.c
will try to run it as fragment shader. This leads to a "blue screen".
This can be reproduced by playing a HDTV video with --target-peak=99.
It's not clear how to fix this. Should pass_tone_map() only be invoked
if mp_trc_is_hdr() == true (which is what pass_colormanage() uses to decide
whether to enable peak computation), or should pass_colormanage() just
tell pass_color_map() to skip peak computation? Decide for the latter,
as it's more robust.
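Sketched, the chosen fix amounts to something like this (illustrative
names and signatures, not the exact code in video.c/video_shaders.c):

    // video.c decides the condition exactly once...
    bool compute_peak = mp_trc_is_hdr(src.gamma) && have_compute_shaders;
    // ...and hands it down, so video_shaders.c can no longer generate a
    // compute shader that video.c then dispatches as a fragment shader.
    pass_color_map(sc, compute_peak, src, dst);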
Even if not correct, at least it gets rid of the blue shit.
Fixes: #7149
It's too easy to introduce unintended circular lock dependencies. Just
now we found that the (old) cocoa vo_gpu backend is also affected by
this, because it waits on the Cocoa main thread, which in turn uses
libmpv API for OSX... stuff.
Also fix a missing initial property update after observe.
This leaves me unhappy, because it just leads to a stupid thread ping
pong. Will probably rewrite this later.
Give an overview of the various methods. I feel like I've written text
like this over and over again (compatibility.rst and
interface-changes.rst for example duplicate the list of mpv API
abstractions), but such is life in hell.
Use this in particular to strongly suggest not to parse terminal output.
This suggestion got lost or de-emphasized at some point (maybe when
removing MPlayer and "slave mode" references). Some of this text is
still there, but it can be considered "fine print" at best, that nobody
will see. Now we have it in a more prominent place. This is especially
important since MPlayer-style use of mpv still seems to be prevalent,
see for example #7153.
I have no idea why this still exists, since we have --input-ipc-server.
I think there was something about Windows, but the latter option is
implemented even on Windows.
It appears commit 4ad68d9452 broke handling of the first video frame's
duration in roundabout ways (I think because the duration of the first
frame was now available at all in the normal case, where it previously
wasn't). The first
frame was cut short, which showed up especially with looping, or if the
file had a low FPS.
This questionable change seems to fix it without breaking any other
known cases => push and call it a day.
The display-sync mode did not have this problem.
Fixes: #7150
It sometimes happens that HLS streams freeze because the HTTP server is
not responding for a fragment (or something similar, the exact
circumstances are unknown). The --timeout option didn't affect this,
because it's never set on HLS recursive connections (these download the
fragments, while the main connection likely does nothing and just wastes
a TCP socket).
Apply an elaborate hack on top of an existing elaborate hack to somehow
get these options set. Of course this could still break easily, but hey,
it's ffmpeg, it can't not try to fuck you over. I'm so fucking sick of
ffmpeg's API bullshit, especially wrt. HLS.
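For context, this is how a timeout is normally supplied to
libavformat's HTTP layer (a plain usage example; the hack above is about
propagating it to the fragment connections HLS opens internally):

    #include <libavformat/avformat.h>

    static int open_with_timeout(AVFormatContext **fmt, const char *url)
    {
        AVDictionary *opts = NULL;
        // The HTTP "timeout" AVOption is in microseconds.
        av_dict_set(&opts, "timeout", "60000000", 0);  // 1 minute
        int err = avformat_open_input(fmt, url, NULL, &opts);
        av_dict_free(&opts);
        return err;
    }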
Of course the change is sort of pointless. For HLS, GET requests should
just be aggressively retried (because they're not "streamed", they're just
actual files on a CDN), while normal HTTP connections should probably
not be made this fragile (they could be streamed, i.e. they are backed
by some sort of real time encoder, and block if there is no data yet).
The 1 minute default timeout is too high to save playback if this
happens with HLS.
Vaguely related to #5793.
There isn't really a need to disable this for local playback. I think
originally I did this because I was afraid the code could mess up or be
annoying in local mode, but that's not really a good argument. I'd
rather test this code in local mode too. In this case, it shouldn't
really happen that it runs out of cache in the first place.
Until now, we've made FFmpeg use the default network timeout - which is
apparently infinite. I don't know if this was changed at some point,
although it seems likely, as I was sure there was a more useful default.
For most use cases, a smaller timeout is more useful (for example
recording something in the background), so force a timeout of 1 minute.
See: #5793
stream_skip() semantics were kind of bad, especially after the recent
change to the stream code. Forward stream_skip() calls could still
trigger a seek and fail, even when they were supposed to actually skip data.
(Maybe the idea that stream_skip() should try to seek is worthless in
the first place.)
Rename it to stream_seek_skip() (takes absolute position now because I
think that's better), and make it always skip if the stream is marked as
forward.
While we're at it, make EOF detection more robust. I guess s->eof
shouldn't exist at all, since it's valid only "sometimes". It should be
removed... but not today. A 1-byte stream_read_peek() call is good to
get the s->eof flag set to a correct value.
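A sketch of that EOF-detection idea (assuming the stream API as of this
commit):

    // Peek a single byte: this forces the stream to attempt a read,
    // which sets s->eof to a trustworthy value without consuming data.
    static bool probe_eof(struct stream *s)
    {
        char tmp;
        stream_read_peek(s, &tmp, 1);
        return s->eof;
    }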
This makes the condition for including each init match the condition for
compiling the file that defines it.
It's possible to e.g. have HAVE_GL and HAVE_VAAPI without HAVE_VAAPI_EGL,
which resulted in "undefined reference to `vaapi_gl_init'" with the old
code.
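Illustratively (a hypothetical excerpt, not the literal source), the
init table entry is now guarded by the same condition as the file:

    // Must match the condition that compiles the file defining
    // vaapi_gl_init, or the reference fails to link.
    #if HAVE_VAAPI_EGL          // not just HAVE_GL && HAVE_VAAPI
        &vaapi_gl_init,
    #endif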
Used to contain flags for "safe" setting of options at runtime. Now
there is nothing special needed anymore, and it's 0. So drop it
completely, and remove anything that distinguishes between runtime and
initialization time.
Options marked with this flag were changed to strictly read-only after
initialization (mpv_initialize() in the client API, after option parsing
and config file loading with the CLI player).
This used to be necessary, because there was a single option struct that
could be accessed by multiple threads. For example, --config-dir sets
MPOpts.force_configdir, which was read whenever anything accessed the
mpv config dir (which could be on different threads, e.g. font
initialization tries to lookup fonts.conf from an arbitrary thread).
This isn't needed anymore, because threads now access these in a thread
safe way. In the case of --config-dir, the path is actually just copied
on init.
This M_OPT_FIXED mechanism is thus not strictly needed anymore. It still
prevents writing to some options that cannot take effect at runtime, but
even that can be dropped. In general, all mpv options can be changed any
time at runtime, even if they never take effect, and there's no need to
make an exception for a very low number of options. So just get rid of
it.
It's hard to see what FFmpeg does or what its API requires. It looks
like the alignment in our own allocation code might be slightly too
lenient, but who knows. Even if this is not needed, upping the alignment
only wastes memory and doesn't do anything bad.
(Note that the only reason why we have our own code is because FFmpeg
doesn't even provide it as API. API users are forced to recreate this,
even if they have no need for custom allocation!)
The "amultiply" filter crashes in AVX mode on unaligned access if an
audio pointer is unaligned (on 32 or 64 bytes I assume).
A requirement that audio data needs to be aligned isn't documented
anywhere. In our case, the data is still sample- and channel-aligned,
which is completely sane. Sure, you can imagine optimizations which make
some algorithms even faster by requiring higher alignment. But, and this
is a big but, you shouldn't crash API users because you just invented a
new undocumented requirement. And even more importantly, your
user-crashing optimization won't matter because it's just a trivial
algorithm working on audio. You don't need to pretend to be an
optimization devil, and nobody will give you a prize for this. But no,
let's randomly make API users crash (and then probably blame them for it!)
for something that wouldn't matter at all.
Not to mention that they do "document" some requirements on _video_
data, yet their vf_crop probably can still produce unaligned video
pointers. Oh how hilarious that the same documentation also talks about
libswscale alignment requirements. (This is weird because libswscale is
just one of many, many things which consume video data. Also did you
know that zimg, written in C++ and using intrinsics, i.e. the antithesis
to FFmpeg development, is much faster than libswscale, easier to use,
and produces more correct results, even if you ignore that libswscale
flat out doesn't support some very important features?)
Fucking tired of this bullshit. Can't wait until someone comes up with a
better framework than this... (well let's not write this out).
Fix this by copying instead of adjusting the start pointer when skipping
samples. This makes general operations slower just to fix interoperating
with a single filter. Thank you for your "optimization", FFmpeg. Go die
in a fire.
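A hedged sketch of the workaround (illustrative names, not mpv's actual
helpers):

    #include <stdint.h>
    #include <string.h>

    // Instead of advancing each plane pointer by the skipped samples
    // (which can leave it misaligned), move the remaining samples to
    // the start of the buffer, keeping the original aligned pointers.
    static void skip_samples_by_copy(uint8_t **planes, int num_planes,
                                     size_t bytes_per_sample,
                                     int total_samples, int skip)
    {
        for (int n = 0; n < num_planes; n++) {
            memmove(planes[n], planes[n] + skip * bytes_per_sample,
                    (size_t)(total_samples - skip) * bytes_per_sample);
        }
    }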
Didn't check whether this is correct. It probably is? If the frame needs
to be copied (due to COW), and memory allocation fails, it just silently
(or audibly lol) doesn't skip samples, because a never-fail function can
suddenly fail. Well, who cares.
Fixes: #7141
Probably. It's not like these pixel formats are formally specified -
FFmpeg added them because _some_ file format or decoder supports it, and
while that format/codec may define it precisely, the pixel format is
sort of disconnected and just an FFmpeg thing.
In any case, the yuva sample I had at hand uses the full range the
component data type can provide. The old code used the same "shifted"
range as for Y/U/V components, which must have been wrong.
This will not work correctly for packed YUVA formats, but fortunately
they matter even less.
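In concrete terms (an 8-bit example using standard video range math,
not mpv's exact code):

    // Limited ("TV") range confines 8-bit luma to 16..235 (chroma to
    // 16..240), but the alpha plane of planar yuva formats uses the
    // full 0..255 the data type provides.
    struct comp_range { int min, max; };

    static struct comp_range yuva_comp_range(bool is_alpha, bool limited)
    {
        if (is_alpha || !limited)
            return (struct comp_range){0, 255};
        return (struct comp_range){16, 235};  // luma; chroma is 16..240
    }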
It's not that we _want_ the log to be on an external site. We just want
the log, somehow. Probably not pasted inline into the issue text.
Also reword the "we are assholes who really want logs" part of the text.
It's a subtle balance between trying to be nice and being a complete
asshole, but no matter what you do, it will always sound like the
latter, so be direct.
This tests the RGB repacker code in zimg, which deserves to be tested
because it's tricky and there will be more formats.
scale_test.c contains some code that can be used to test any scaler. Or
at least that would be great; currently it can only test repacking of
some byte-aligned-component RGB formats. It should be called
repack_test.c, but I'm too lazy to change the filename now.
The idea is that libswscale is used to cross-check the conversions
performed by the zimg wrapper. This is why it's "OK" that scale_test.c
does libswscale calls.
scale_sws.c is the equivalent to scale_zimg.c, and is of course
worthless (because it tests libswscale by comparing the results with
libswscale), but still might help with finding bugs in scale_test.c.
This borrows a sorted list of image formats from test/img_format.c, for
the same reason that file sorts them.
There's a slight possibility that this can be used to test vo_gpu.c too
sometime in the future.
This is fragile enough that it warrants getting "monitored".
This takes the commented-out test program code from img_format.c, makes it
output to a text file, and then compares it to a "ref" file stored in
git.
Originally, I wanted to do the comparison etc. in a shell or Python
script. But why not do it in C. So mpv calls /usr/bin/diff as a
sub-process now.
This test will start producing different output if FFmpeg adds new pixel
formats or pixel format flags, or if mpv adds new IMGFMT (either aliases
to FFmpeg formats or own formats). That is unavoidable, and requires
manual inspection of the results, and then updating the ref file.
The changes in the non-test code are to guarantee that the format ID
conversion functions only translate between valid IDs.
Defining NDEBUG via CFLAGS is the canonical way to disable assertions in
C. mpv respects this (and ta.c actually disables some debugging
machinery if it's defined).
But for tests, this is not useful at all. So if --enable-tests is passed
to configure, the user must not define NDEBUG, even if the rest of the
player does not care.
(We could just #undef NDEBUG, but let's not. Tests calling into the rest
of the player might depend on asserts there, or so.)
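For reference, this is all NDEBUG does in standard C, plus a
hypothetical compile-time guard along the lines of what the build could
enforce:

    // If NDEBUG is defined before <assert.h>, assert() expands to
    // ((void)0), so test code relying on asserts silently stops
    // testing anything.
    #ifdef NDEBUG
    #error "tests are enabled; rebuild without NDEBUG in CFLAGS"
    #endif
    #include <assert.h>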
This stopped doing anything who knows how many years ago.
The only actual effect was that af_rubberband was made GPL only. Now it
is available in LGPL builds too.
Until now, each .c file in test/ was built as a separate, self-contained
binary. Each binary could be run to execute the tests it contained.
Change this and make them part of the normal mpv binary. Now the tests
have to be invoked via the --unittest option. Do this for two reasons:
- Tests now run within a "properly" initialized mpv instance, so all
services are available.
- Possibly simplifying the situation for future build systems.
The first point is the main motivation. The mpv code is entangled with
mp_log and the option system. It feels like a bad idea to duplicate some
of the initialization of this just so you can call code using them.
I'm also getting rid of cmocka. There wouldn't be any problem keeping it
(it's a perfectly sane set of helpers), but NIH calls. I would have had
to aggregate all tests into a CMUnitTest list, and I don't see how I'd
get different types of entry points easily. Probably easily solvable,
but since we made only pretty basic use of this library, NIH-ing this is
actually easier (I needed a list of tests with custom metadata anyway,
so all that was left was reimplementing the assert_* helpers).
Unit tests now don't output anything, and if they fail, they'll simply
crash and leave a message that typically requires inspecting the test
code to figure out what went wrong (and probably editing the test code
to get more information). I even merged the various test functions into
single ones. Sucks, but here you go.
chmap_sel.c is merged into chmap.c, because I didn't see the point of
this being separate. json.c drops the print_message() to go along with
the new silent-by-default idea; also, there's a memory leak fix unrelated
to the rest of this commit.
The new code is enabled with --enable-tests (--enable-test goes away).
Due to waf's option parser, --enable-test still works, because it's a
unique prefix to --enable-tests.
The use of glXGetCurrentDisplay() restricted this to the GLX backend.
But actually it works under EGL as well. Removing the GLX-specific call
and using the general mpv-internal method to get the X "Display" makes
it work in mpv.
I didn't know this. Nvidia didn't list this as an extension in the EGL
context when I still used their GPUs.
Note that this might in theory break use of vdpau in some libmpv clients
using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
now. Considering that vaapi is preferable these days, it's not bad at
all if these clients get "broken". They can be easily fixed by passing
the display to mpv correctly.