If initialization fails, vo_lavc may leave the ffmpeg-related structs in
an irrecoverable state. Therefore, we reject additional
initialization attempts at least until we know a better way to clean up
the mess.
ao_lavc currently cannot be initialized more than once, but it's good to
make consistent changes there as well.
Also, clean up uninit-after-failure handling to be less spammy.
The previous fix breaks another obscure case: if the second vf_sub adds
margins, the image is accidentally not extended, which would result in
an assertion failure when returning the bogus image.
This reverts commit d859549424174179768fcd0310c75823a5ff7cb1.
Going to apply the alternative fix through PR #1256, which came just
some seconds after pushing the reverted commit. The reverted commit
was reported as not actually working.
This silences the warning:
video/out/gl_video.c:1091:51: runtime error: division by zero
when running with clang -fsanitize=undefined. Division by zero is legal
according to IEEE, but I guess clang doesn't care about the standard. While
triggering this warning isn't actually avoided in all cases, it's
avoided in the common case and also makes people shut up about it.
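For illustration only (the names are made up, not the actual expression in
gl_video.c), the usual way to dodge the report is to guard the divisor:

    // illustrative: skip the division when the divisor can be zero, which
    // is enough to keep -fsanitize=undefined quiet in the common case
    static float safe_ratio(int num, int den)
    {
        return den > 0 ? (float)num / den : 1.0f;
    }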
XRRGetOutputInfo contains a "name" element which corresponds to the
display names given to the user by the "xrandr" command line
utility. Copy it into the xrandr_display struct for each display.
On VOCTRL_GET_DISPLAY_NAMES, send a copy of the names of the displays
the mpv window is spanned on.
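A rough sketch of fetching and copying the name (the XRandR calls are real
API; where the copy ends up on the mpv side is illustrative):

    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    // sketch: return a copy of the output's name, to be stored in the
    // xrandr_display struct and freed on uninit
    static char *get_output_name(Display *dpy, XRRScreenResources *res,
                                 RROutput output)
    {
        XRROutputInfo *info = XRRGetOutputInfo(dpy, res, output);
        if (!info)
            return NULL;
        char *name = strndup(info->name, info->nameLen);
        XRRFreeOutputInfo(info);
        return name;
    }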
Instead, use the native-endian alias, and switch the wayland format
depending on the target platform's endianness.
This drops support for swapped-endian formats, but I think that is ok.
Not only are the affected formats rather ancient and backwards, but
using swapped formats probably does not make any sense for performance
either.
Untested.
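For illustration, the kind of endian switch meant here (the concrete format
pair is just an example; the real table may differ):

    #include <stdint.h>
    #include <endian.h>                    // BYTE_ORDER (glibc; BSDs differ)
    #include <wayland-client-protocol.h>

    // wl_shm formats describe a little-endian packed word, so the format
    // matching a native-endian 0xAARRGGBB pixel depends on the target
    static uint32_t native_argb32_shm_format(void)
    {
    #if BYTE_ORDER == LITTLE_ENDIAN
        return WL_SHM_FORMAT_ARGB8888;     // bytes in memory: B G R A
    #else
        return WL_SHM_FORMAT_BGRA8888;     // bytes in memory: A R G B
    #endif
    }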
These formats are still supported; you just can't reference them via
defined constants directly. They are now handled via the generic
passthrough.
(If you want to use such a format, you either have to add the entry
back, or use AV_PIX_FMT_* directly.)
This is a rather radical change: instead of maintaining a whitelist of
FFmpeg formats we support, we automatically support all formats.
In general, a format which doesn't have an explicit IMGFMT_* name will
be converted to a known format through libswscale, or will be handled
by code which can treat pixel formats in a generic way using the pixel
format description, like vo_opengl.
AV_PIX_FMT_UYYVYY411 is a special case. It's packed YUV with chroma
subsampling by 4 in both directions. Its component order is documented
as "Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for 4 Y samples.
This means each pixel uses 1.5 bytes (4 pixels share 1 UV sample, so
4 bytes of Y + 2 bytes of UV = 6 bytes). FFmpeg can actually handle this
format with its
generic mechanism in an extremely awkward way, but it doesn't work for
us. Blacklist it, and hope no similar formats will be added in the
future.
Currently, the AV_PIX_FMT_*s allowed are limited to numeric values below
500. Higher values are not allowed, and there are some fixed size arrays
that need to be able to contain any possible format (look for IMGFMT_END
dependencies).
We could make this simpler by replacing IMGFMT_* with AV_PIX_FMT_*
through the whole codebase. But for now, this is better, because we
can compensate for formats missing in Libav or older FFmpeg versions,
like AV_PIX_FMT_RGB0 and others.
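As a rough sketch of what this mapping implies (constant names and the
anchor value are made up, not the actual ones):

    #include <libavutil/pixfmt.h>

    // hypothetical sketch: map AV_PIX_FMT_* values below 500 into a
    // reserved 500-entry window of the IMGFMT_* number space; the hard cap
    // is what keeps tables sized via IMGFMT_END at a fixed, bounded size
    #define IMGFMT_AVPIXFMT_START 5000             // made-up anchor value
    #define IMGFMT_AVPIXFMT_MAX   500
    #define IMGFMT_AVPIXFMT_END   (IMGFMT_AVPIXFMT_START + IMGFMT_AVPIXFMT_MAX)

    static inline int generic_pixfmt_to_imgfmt(enum AVPixelFormat fmt)
    {
        if (fmt < 0 || fmt >= IMGFMT_AVPIXFMT_MAX)
            return 0;                              // not representable
        return IMGFMT_AVPIXFMT_START + fmt;
    }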
FFmpeg has only an AV_PIX_FMT_FLAG_BE flag, not an LE one, which causes
problems for us: we want to have the LE flag too, so code can actually
detect whether a format is non-native endian. Basically, we want to
reconstruct the LE/BE suffix all AV_PIX_FMT_*s have.
Doing this is hard due to the (messed up) way AVPixFmtDescriptor works.
The worst is AV_PIX_FMT_RGB444: this group of formats describes an
endian-independent access (since no component actually spans 2 bytes,
you only need byte accesses with a fixed offset), so we have to go
through some pain.
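Roughly, the reconstruction boils down to something like this (a sketch,
not the exact code):

    #include <stdbool.h>
    #include <libavutil/pixdesc.h>

    // sketch: FFmpeg only marks BE formats; treat a format as LE if it is
    // not marked BE but a byte-swapped sibling exists (as with RGB444LE/BE)
    static void infer_endian_flags(enum AVPixelFormat fmt,
                                   bool *is_be, bool *is_le)
    {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
        bool has_swapped = av_pix_fmt_swap_endianness(fmt) != AV_PIX_FMT_NONE;
        *is_be = d && (d->flags & AV_PIX_FMT_FLAG_BE);
        *is_le = has_swapped && !*is_be;
    }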
Add a generic mechanism to the VO to relay "extra" events from VO to
player. Use it to notify the core of window resizes, which in turn will
be used to mark all affected properties ("window-scale" in this case) as
changed.
(I refrained from hacking this as an internal command into input_ctx, or to
poll the state change, etc. - but in the end, maybe it would be best to
actually pass the client API context directly to the places where events
can happen.)
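The shape of the mechanism, with made-up names (the actual wakeup of the
playloop is omitted):

    #include <pthread.h>

    // hypothetical sketch: the VO thread sets event bits, the player thread
    // fetches-and-clears them and invalidates the affected properties
    static pthread_mutex_t ev_lock = PTHREAD_MUTEX_INITIALIZER;
    static int ev_pending;

    #define VO_EXTRA_EVENT_RESIZE 1       // illustrative event bit

    void vo_extra_event(int events)       // called from the VO
    {
        pthread_mutex_lock(&ev_lock);
        ev_pending |= events;
        pthread_mutex_unlock(&ev_lock);
        // ...wake up the player thread here...
    }

    int vo_query_extra_events(void)       // called from the player
    {
        pthread_mutex_lock(&ev_lock);
        int ev = ev_pending;
        ev_pending = 0;
        pthread_mutex_unlock(&ev_lock);
        return ev;   // RESIZE -> mark "window-scale" (etc.) as changed
    }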
NSDisableScreenUpdates came back to haunt me in the end: when mpv was
paused, it waited for a frame that never came (because of interaction
with the live resizing code)!
Apparently this is needed for correct 3D mode subtitles. In general,
it seems you need to duplicate the whole "GUI", so it's done for all
OSD elements.
This doesn't handle the "duplication" of the mouse pointer. Instead,
the mouse can be used for the top/left field only. Also, it's possible
that we should "compress" the OSD in the direction it's duplicated, but
I don't know about that.
Fixes #1124, at least partially.
In interlaced modes, we output fields, not complete frames, so the
framerate doubles.
The method to calculate this was borrowed from xrandr code.
Hopefully fixes #1224.
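For reference, the xrandr-style calculation looks roughly like this
(XRRModeInfo fields as in the XRandR headers; a sketch, not the exact diff):

    #include <X11/extensions/Xrandr.h>

    // sketch of the xrandr-style refresh calculation: interlaced modes
    // emit fields, so the effective rate doubles (doublescan halves it)
    static double mode_vrefresh(const XRRModeInfo *m)
    {
        double vtotal = m->vTotal;
        if (m->modeFlags & RR_DoubleScan)
            vtotal *= 2;
        if (m->modeFlags & RR_Interlace)
            vtotal /= 2;
        if (m->hTotal == 0 || vtotal == 0)
            return 0;
        return m->dotClock / (m->hTotal * vtotal);
    }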
At least on my machine, reading back the frame with system memcpy is
slower than just using software rendering. Use the optimized gpu_memcpy
from LAV to speed things up.
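A hedged sketch of the dispatch idea (gpu_memcpy is assumed to have a
memcpy-like signature; the CPU check uses a GCC/Clang builtin):

    #include <stddef.h>
    #include <string.h>

    void *gpu_memcpy(void *dst, const void *src, size_t size); // assumed signature

    typedef void *(*copy_fn)(void *dst, const void *src, size_t size);

    // sketch: use the SSE4.1 streaming-load copy for the GPU-mapped (USWC)
    // surface readback when available, otherwise plain memcpy
    static copy_fn pick_readback_copy(void)
    {
    #if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
        if (__builtin_cpu_supports("sse4.1"))
            return gpu_memcpy;
    #endif
        return memcpy;
    }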
Apparently, when resizing an NSWindow from a secondary thread, Cocoa
will automatically protect itself using NSViewHierarchyLock, which in
our case causes a deadlock.
Fixes#1210
Especially with other components (libavcodec, OSX stuff), the thread
list can get quite populated. Setting the thread name helps when
debugging.
Since this is not portable, we check the OS variants in waf configure.
old-configure just gets a special-case for glibc, since doing a full
check here would probably be a waste of effort.
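In a nutshell, the portability problem looks like this (the exact set of
variants the configure checks cover may differ):

    // sketch: the call exists on several systems, but with different
    // signatures, hence the per-OS checks in configure
    #define _GNU_SOURCE                   // for glibc's pthread_setname_np()
    #include <pthread.h>
    #if defined(__FreeBSD__)
    #include <pthread_np.h>
    #endif

    static void set_this_thread_name(const char *name)
    {
    #if defined(__APPLE__)
        pthread_setname_np(name);                   // current thread only
    #elif defined(__FreeBSD__)
        pthread_set_name_np(pthread_self(), name);
    #elif defined(__GLIBC__)
        pthread_setname_np(pthread_self(), name);   // name limited to 15 chars
    #endif
    }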
After removing synchronous libdispatch calls, this looks like it doesn't
deadlock anymore. I also experimented with pthread_mutex_trylock like wm4
suggested, but it leads to some annoying black flickering. I will fall back
to that only if some new deadlocks are discovered.
As I understand it, otherwise the code will try to destroy the same
window again in the cleanup part of the gui_thread(), which makes no
sense and is potentially dangerous.
mp_stat() instead of stat() was used in the normal code (i.e. even
on Unix), because MinGW-w64 has an unbelievable macro-mess in place,
which prevents solving this elegantly.
Add some dirty workarounds to hide mp_stat() from the normal code
properly. This now requires replacing all functions that use the
struct stat type. This includes fstat, lstat, fstatat, and possibly
others. (mpv currently uses stat and fstat only.)
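The general shape of the workaround, heavily simplified, with illustrative
names, and ignoring the very macro mess this commit complains about:

    // hypothetical sketch: on MinGW, route the POSIX calls through mpv
    // wrappers via function-like macros; "struct stat" itself is untouched
    // because a function-like macro only expands before a '('. Every libc
    // function taking a struct stat (fstat, lstat, ...) needs its own pair.
    #ifdef __MINGW32__
    #include <sys/stat.h>

    int mp_stat(const char *path, struct stat *buf);   // e.g. for UTF-8 paths
    int mp_fstat(int fd, struct stat *buf);

    #define stat(path, buf)   mp_stat(path, buf)
    #define fstat(fd, buf)    mp_fstat(fd, buf)
    #endif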
The previous commit was actually incorrect, and the change had
absolutely no effect. The two formats are (fortunately) the same. I'm
probably too tired.
This would have been wrong for hw decoders which pass us NV12 or NV21.
The format the GL shader filter chain gets is stored in p->image_desc,
while p->image_format still contains the "real" input format (which in
case of hw decoding is an opaque hw accel format). Since no hw decoder
did this, this is really just a theoretical fix and doesn't fix any
actual bugs.