The previous API worked under the assumption that download_image is always
called after map_image. In practice this is true, but it's better to have a
more generic API that doesn't depend on the order in which the functions are
called.
The hwdec driver can be loaded, even if it's not used (e.g. when playing
a file with no hardware decoding after one with it enabled).
Also, check whether dlimage is NULL. Since this calls into the native
hwdec API, there's a chance a driver could fail here, so it's better to
check the return value, even if this case currently can't happen.
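A sketch of the intended calling pattern (the driver/field names here are
only illustrative, not the exact vo_opengl hwdec interface):

    struct mp_image *dlimage = hwdec->driver->download_image(hwdec, hwimage);
    if (!dlimage)
        goto error; // the native hwdec API call failed; don't use a NULL image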
mpv was hardcoded to always consider the right Alt key as Alt Gr, but there
are particular combinations of platforms and keyboard layouts where it's
more convenient to treat the right Alt as a keyboard modifier, just like
the left one.
Fixes #388
The hard work was done in the previous commits. After that, this feature
comes almost for free.
The only problem is that I can't get the textures created with
CGLTexImageIOSurface2D to download properly, so the code performs the
download using some CoreVideo APIs instead.
If someone knows why downloading textures created with
CGLTexImageIOSurface2D doesn't work, please contact me :)
This adds support for packed YUV formats (YUYV and UYVY) using the
GL_APPLE_rgb_422 extension. While supporting these formats on their own is
not that important (considering most video is planar YUV), they are used
for interoperability with IOSurfaces.
The next commit will use these formats to render VDA hardware-decoded
frames through IOSurface and OpenGL interoperability.
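For reference, uploading a packed 4:2:2 frame with this extension looks
roughly like this (a sketch, not vo_opengl's actual code; check the
extension spec for which type constant matches which byte order):

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE, data);
    // The extension only handles the packed upload; the fragment shader
    // still has to do the YUV -> RGB conversion itself.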
roaraudio has some sort of sndio emulation, but apparently its header
file is either blatantly broken, or just an old version. The sio_onvol()
function has the wrong return type (void instead of int), and the
SIO_DEVANY symbol is missing entirely. This broke the build, because the
configure check succeeded anyway.
This allows vo_opengl to use GL_TEXTURE_RECTANGLE textures, either by
enabling it with the 'rectangle-textures' sub-option, or by having a
hwdec backend force it. By default it's off.
The _only_ reason we're adding this is because VDA can export rectangle
textures only.
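For context, rectangle textures mostly differ in the texture target and in
using non-normalized coordinates; a minimal sketch (not the actual renderer
code):

    glBindTexture(GL_TEXTURE_RECTANGLE, tex);
    glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, data);
    // A sampler2DRect is sampled with pixel coordinates (0..width,
    // 0..height) instead of 0..1, so all texcoord math in the renderer
    // has to account for the texture target.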
We got a crash in libavutil when encoding with Y8 (GRAY8). The reason
was that libavutil was copying a Y8 image allocated by us, and expected
a palette. This is because GRAY8 is a PSEUDOPAL format. It's not clear
what PSEUDOPAL is supposed to mean, and it makes literally no sense at
all. However, libavutil does expect a palette to be allocated for some
formats that are not actually paletted, and it crashed when trying to
access the non-existent palette.
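The workaround boils down to making sure data[1] points to a palette
buffer for such formats. A minimal sketch, assuming a libavutil with the
AV_PIX_FMT_FLAG_PSEUDOPAL flag (this is not mpv's exact code):

    #include <libavutil/frame.h>
    #include <libavutil/mem.h>
    #include <libavutil/pixdesc.h>

    // Give "pseudo-paletted" formats like GRAY8 the palette plane
    // libavutil expects, even though the format isn't really paletted.
    static int ensure_pseudo_palette(AVFrame *frame)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
        if (desc && (desc->flags & AV_PIX_FMT_FLAG_PSEUDOPAL) && !frame->data[1]) {
            frame->data[1] = av_mallocz(AVPALETTE_SIZE); // 256 * 4 bytes
            if (!frame->data[1])
                return -1;
        }
        return 0;
    }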
This is for key bindings that use multiple mouse buttons at once. (Yes,
this is weird, but MPlayer always had this feature, and apparently
there are people using it!)
Before this commit, clicking another mouse button while still holding
the previous mouse button forced the command bound to the previous
mouse button to be emitted. This is usually needed to make sure the
input consumer (the player and the OSC) stays in sync with the actual
mouse button state. If there's no command sent, the OSC in particular
would think the button is still held down. However, sending the command
is undesired behavior if you want to use these multiple-key binds.
Solve this by emitting commands in this situation only if a key-down
command was sent earlier. Since mouse button key bindings are normally
executed on key-up only, this happens only with special commands like
script_dispatch (used by the OSC to track mouse buttons, but also used
for other OSC bindings).
See github issue #390.
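For illustration, such a binding in input.conf looks something like this
(treat the exact button names and the bound command as placeholders; see
input.rst for the real syntax):

    # press left and right mouse button together to run one command
    MOUSE_BTN0-MOUSE_BTN2 cycle pause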
This is not strictly needed anymore. (On the other hand, it's not really
possible to do hw decoding with vo_null, because the VO is still
responsible for opening the hw decoder API, but that's another story.)
There are some use cases for this. For example, you can use it to set
defaults of automatically inserted filters (like af_lavrresample). It's
also useful if you have a non-trivial VO configuration, and want to use
--vo to quickly change between the drivers without repeating the whole
configuration in the --vo argument.
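For example, something along these lines (assuming the *-defaults options
introduced around this time; the sub-option shown is a placeholder):

    # configure vo_opengl once...
    --vo-defaults=opengl:suboption=value
    # ...then switching drivers stays short:
    --vo=opengl    # picks up the defaults above
    --vo=vdpau     # quick switch without touching the opengl config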
The "run" input command does fork+exec to spawn child processes. But it
doesn't clean up the child processes, so they are left as zombies until
mpv terminates. Leaving zombie processes around is not very nice, so
employ a simple trick to let pid 1 take care of this: we fork twice, and
when the first fork exits, the second fork becomes orphaned and becomes
pid 1's child. It then becomes pid 1's responsibility to clean up the
process.
The advantage is that we don't need extra logic to clean up the spawned
process, which could have an arbitrary lifetime.
This is e.g. described here: http://yarchive.net/comp/zombie_process.html
Also use _exit() instead of exit(). It's not really sane to run cleanup
handlers (atexit() etc.) inside a forked process.
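A minimal sketch of the trick (not mpv's actual code; error handling is
simplified):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    // Spawn a command without leaving a zombie: fork twice, let the
    // intermediate child exit immediately, so the grandchild is reparented
    // to pid 1, which will reap it.
    static void spawn_detached(const char *path)
    {
        pid_t pid = fork();
        if (pid == 0) {                // intermediate child
            if (fork() == 0) {         // grandchild: run the command
                execl(path, path, (char *)NULL); // NULL must be cast for execl()
                _exit(127);            // _exit(): don't run atexit() handlers
            }
            _exit(0);                  // intermediate child exits right away
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);     // reap the intermediate child only
        }
    }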
This is needed so that new processes (created with fork+exec) don't
inherit open files, which can be important for a number of reasons.
Since O_CLOEXEC is relatively new (POSIX.1-2008, before that Linux
specific), we #define it to 0 in io.h to prevent compilation errors on
older/crappy systems. At least this is the plan.
input.c creates a pipe. For that, add a mp_set_cloexec() function (which
is based on Weston's code in vo_wayland.c, but more correct). We could
use pipe2() instead, but that is Linux-specific. Technically, we have a
race condition, but it won't matter.
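A sketch of what such a helper looks like (illustrative, not mpv's exact
mp_set_cloexec()):

    #include <fcntl.h>
    #include <unistd.h>

    static int set_cloexec(int fd)
    {
        int flags = fcntl(fd, F_GETFD);
        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }

    // Usage on a freshly created pipe; the tiny window between pipe() and
    // fcntl() is the race condition mentioned above (pipe2() with O_CLOEXEC
    // would avoid it, but is Linux-specific):
    //     int fds[2];
    //     if (pipe(fds) == 0) {
    //         set_cloexec(fds[0]);
    //         set_cloexec(fds[1]);
    //     }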
See the changes in input.rst for explanations.
Technically speaking, this also gets rid of some undefined behavior:
passing NULL as a vararg (execl()) is always a bug.
So e.g.
show_text abc#def
will now print "abc#def" instead of "#def". It's simpler, more
consistent with how ";" and other things are handled, and also
possibly avoids bothering the user with extra escaping.
The OSS checks were a big mess and quite buggy. This reimplements them
using a declarative approach, clearly distinguishing between the various
OSS implementations. The code should now be almost self-documenting.
We currently support the following implementations of OSS:
* platform-specific (with `sys/soundcard.h`)
* SunAudio (default on NetBSD and usable on OpenBSD even if we have sndio
  support there).
* 4Front (default on FreeBSD)
Since each OSS check now also checks for the appropriate soundcard
header, remove the old standalone soundcard check.
Many thanks to @bugmen0t for in-depth info about all the BSDs.
Check #380 and #359 for more info on this commit.
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get
This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it was probably already broken, and its
libswresample was rejected by our build system anyway because it's
broken.
Mostly untested; it does compile with Libav 9.9.
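For illustration, the shape of the change in user code (not a literal diff
from mpv):

    #include <libavutil/pixdesc.h>

    // old: enum PixelFormat fmt = PIX_FMT_YUV420P;
    //      const AVPixFmtDescriptor *d = &av_pix_fmt_descriptors[fmt];
    // new:
    enum AVPixelFormat fmt = AV_PIX_FMT_YUV420P;
    const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);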
This partially reverts commit 7d152965. It turns out that at least some
ALSA drivers (at least snd-hda-intel) report incorrect audio delay with
non-native sample rates, even if the sample rate is only very slightly
different from the native one.
For example, 48000Hz is fine on my hda-intel system, while both 8000Hz
and 47999Hz lead to a delay off by 40ms (according to mpv's A/V
difference display), which suggests that something in ALSA is
calculating the delay using the wrong sample rate.
As an additional problem, with ALSA resampling enabled, using
48001Hz/float/2ch fails, while 49000Hz/float/2ch or 48001Hz/s16/2ch
work. With resampling disabled, all these cases obviously work, because
our own resampler doesn't refuse any of these formats.
Since some people want to use the ALSA resampler (because it's highly
configurable, supports multiple backends, etc.), we still allow enabling
ALSA resampling with an ao_alsa suboption.
These were confined to the video path, but resetting them even if no
video is available shouldn't really matter. Always resetting them makes
the logic easier to follow, I think.
Sometimes, vf_pullup hung on seek. This was because it was never
properly reset. Old timestamps messed up the timestamp calculations,
making the player show frames for a ridiculously long time, which is
perceived as pausing or hanging.
Recently, the check was moved, so the warning was printed only for
source video PTS (since that's easier, and filters should normally
behave sanely). But it turns out it's trivial to print a warning in the
filter case too, by reusing the code that normally checks for PTS
forward jumps, without needing any additional code, so, fine, restore
the warning in this case.
The old ffmpeg vdpau support code uses separate vdpau pixel formats for
each decoder (pretty much because mplayer's architecture sucked), which
just gets in the way. Force the old decoder's output to IMGFMT_VDPAU,
and remove IMGFMT_IS_VDPAU() where we can remove it.
This should completely remove the differences between the old and new
vdpau decoder outside of the decoder.
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.
Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
pthreads should be available everywhere. Even if not, for environments
without threads a pthread wrapper could be provided that can't actually
start threads, thus disabling features that require threads.
Make pthreads mandatory in order to simplify build dependencies and to
reduce ifdeffery. (Admittedly, there wasn't much complexity, but maybe
we will use pthreads more in the future, and then it'd become a real
bother.)
Introduce an mp_cmd_def struct to define commands, instead of using
mp_cmd for this. This way each command parameter can be an m_option,
instead of an m_option plus some extra stuff.
Define the ARG_ macros directly in terms of the OPT_ macros. Not sure if
this makes it easier to read (maybe not, even if it looks simpler), but
at least it makes it easier to add other option types.
Another idea was adding a name for each parameter (so you could have
named parameters), but not today.
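A hypothetical, simplified sketch of the idea (not mpv's exact
definitions):

    struct mp_cmd_def {
        int id;                                 // command ID
        const char *name;                       // name as used in input.conf
        struct m_option args[MP_CMD_MAX_ARGS];  // one m_option per parameter
    };

    // An ARG_ macro can then expand directly to the corresponding OPT_
    // macro, so adding a new parameter type means reusing an existing
    // option type instead of inventing new parsing code.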
The hr-seek code assumes that when seeking the demuxer, the first image
decoded after the seek will have a PTS exactly equal to the demuxer seek
target time, or before that target time. Incorrect timestamps,
implicitly dropped initial frames, or broken files/demuxers can all
break this assumption, and lead to hr-seek missing the seek target.
Generally, this is not much of a problem (the user won't notice being
off by one frame), but it really shows when using the backstep feature.
In this case, backstepping would simply hang.
Add a simple hack that basically forces a minimal value for the
--hr-seek-demuxer-offset option (which is 0 by default) when doing a
backstep seek. The chosen minimum value is arbitrary; there's no perfect
value, though in general it should perhaps be slightly longer than the
frametime, and the chosen value is more than enough for typical
framerates.
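A minimal sketch of the hack (field name and constant are illustrative):

    double offset = opts->hr_seek_demuxer_offset;
    if (seek_is_backstep && offset < 0.075)
        offset = 0.075; // arbitrary floor, longer than one frame at common fps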
So, FFmpeg/Libav requires us to figure out video timestamps ourselves
(see last 10 commits or so), but the methods it provides for this aren't
even sufficient. In particular, everything that uses AVI-style DTS (avi,
vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal
frame delay is broken. In this case, libavcodec will shift the packet-
to-image correspondence by the codec delay, meaning that with a delay=1,
the first AVFrame.pkt_dts is not 0, but that of the second packet. All
timestamps will appear shifted. The start time (e.g. the time displayed
when doing "mpv file.avi --pause") will not be exactly 0.
(According to Libav developers, this is how it's supposed to work; just
that the first DTS values are normally negative with formats that use
DTS "properly". Who cares if it doesn't work at all with very common
video formats? There's no indication that they'll fix this soon,
either. An elegant workaround is missing too.)
Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV.
Since these timestamps are not reordered, we wouldn't need to sort them,
but it's less code this way (and possibly more robust, should a demuxer
unexpectedly output PTS).
The original intention of all the timestamp changes recently was
actually to get rid of demuxer-specific hacks and the old timestamp
sorting code, but it looks like this didn't work out. Yet another case
where trying to replace native MPlayer functionality with FFmpeg/Libav
led to disadvantages and bugs. (Note that the old PTS sorting code
doesn't and can't handle frame dropping correctly, though.)
Bug reports:
https://trac.ffmpeg.org/ticket/3178
https://bugzilla.libav.org/show_bug.cgi?id=600
Using --start with files that use DTS only, or which simply have broken
PTS timestamps, would incorrectly drop frames and possibly not execute
the seek correctly.
Add yet another heuristic to detect this. The intent is that --start and
hr-seeks in general should work correctly, but in order to keep things
fast, we still want to allow frame dropping during hr-seek if there are
no problems doing so. Do this by disabling frame dropping by default,
but re-enabling it if there are no problems found for a while. As a
consequence, --start might be somewhat slower, but normal user
interaction should remain as fast as before.
Note that there's something subtle about the added code: the
has_broken_packet_pts field is checked even before the first packet is
fed to dec_video.c, so the field must not be set to 0 right at the
start. It's not initially set to 0 anyway, because the heuristic
requires decoding some images before frame dropping is enabled.
Note 2: it's not clear whether frame dropping during hr-seek really
helps; I didn't benchmark it.
The previous code used the values of the AudioChannelLabel enum directly
to create the channel bitmap. While this was quite smart, it was pretty
unreadable and fragile (what if Apple changes the values of those enums?).
Change it to use a 'dumb' conversion table.
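A minimal sketch of the approach (only a few labels shown; the
MP_SPEAKER_ID_* names are assumed from mpv's channel map code, the rest is
illustrative):

    #include <CoreAudio/CoreAudioTypes.h>

    // Map CoreAudio channel labels to speaker IDs explicitly, instead of
    // relying on the numeric values of the AudioChannelLabel enum.
    static const struct {
        AudioChannelLabel label;
        int speaker;                   // e.g. MP_SPEAKER_ID_FL from chmap.h
    } label_map[] = {
        {kAudioChannelLabel_Left,      MP_SPEAKER_ID_FL},
        {kAudioChannelLabel_Right,     MP_SPEAKER_ID_FR},
        {kAudioChannelLabel_Center,    MP_SPEAKER_ID_FC},
        {kAudioChannelLabel_LFEScreen, MP_SPEAKER_ID_LFE},
    };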