A Wayland output based on shared memory. This video output is useful for
X11-free systems, because the current libGL in Mesa provides GLX symbols. It is
also useful for embedded systems where the Wayland backend for EGL is not
implemented, such as the Raspberry Pi.
At the moment only RGB formats are supported, because there is still no
compositor which supports planar formats like yuv420p. The most widely used
compositor at the moment, Weston, supports only BGR0, BGRA and BGR16 (565).
The BGR16 format is the fastest to convert and render, with no noticeable
difference compared to the BGR32 formats. For this reason the current (very
basic) auto-detection code prefers the BGR16 format. The Weston source code
also indicates that BGR16 (RGB565) is the preferred format.
There are 2 options:
* default-format (yes|no): use the BGR32 format
* alpha (yes|no): output images and videos with transparency
Decoding H264 using Video Decode Acceleration previously used the custom
`vda_h264_dec` decoder in FFmpeg.
The Good: This new implementation has some advantages over the previous one:
- It works with Libav: vda_h264_dec never got into Libav since they prefer
client applications to use the hwaccel API.
- It is way more efficient: in my tests this implementation yields a
reduction of CPU usage of roughly 50% compared to using `vda_h264_dec` and
~65-75% compared to h264 software decoding. This is mainly because
`vo_corevideo` was adapted to perform direct rendering of the
`CVPixelBufferRefs` created by the Video Decode Acceleration API Framework.
The Bad:
- `vo_corevideo` is required to use VDA decoding acceleration.
- It only works with new enough versions of ffmpeg/libav (needs reference
counting). That is FFmpeg 2.0+ and Libav's current git master.
The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video texture.
On one hand this keeps the code simple, since Apple's OpenGL implementation
actually supports this format out of the box. It would be nice to support other
output image formats and choose the best format depending on the input, or at
least to make it configurable. My tests indicate that CPU usage actually
increases with a 420p IMGFMT output, which is not what I would have expected.
NOTE: There is a small memory leak with old versions of FFmpeg and with Libav
since the CVPixelBufferRef is not automatically released when the AVFrame is
deallocated. This can cause leaks inside libavcodec for decoded frames that
are discarded before mpv wraps them inside a refcounted mp_image (this only
happens on seeks).
For frames that enter mpv's refcounting facilities, this is not a problem
since we rewrap the CVPixelBufferRef in our mp_image that properly forwards
CVPixelBufferRetain/CVPixelBufferRelease calls to the underlying
CVPixelBufferRef.
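To illustrate the idea, here is a minimal sketch of such a wrapper; the struct
and function names are made up for illustration and are not mpv's actual
mp_image API, only CVPixelBufferRetain/CVPixelBufferRelease are real CoreVideo
calls:

    #include <CoreVideo/CVPixelBuffer.h>
    #include <stdlib.h>

    /* Hypothetical wrapper (not mpv's real mp_image): it owns one retain on
     * the CVPixelBufferRef and drops it when the image is freed, so the
     * buffer's lifetime follows the refcounted image. */
    struct mp_image_sketch {
        CVPixelBufferRef cv_buf;
        void (*free_cb)(struct mp_image_sketch *img);
    };

    static void release_cv_buffer(struct mp_image_sketch *img)
    {
        CVPixelBufferRelease(img->cv_buf);      /* balances the retain below */
        free(img);
    }

    static struct mp_image_sketch *wrap_cv_buffer(CVPixelBufferRef buf)
    {
        struct mp_image_sketch *img = calloc(1, sizeof(*img));
        if (!img)
            return NULL;
        img->cv_buf = CVPixelBufferRetain(buf); /* take our own reference */
        img->free_cb = release_cv_buffer;
        return img;
    }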
So, for FFmpeg use something more recent than `b3d63995`; for Libav the patch
was posted to the dev ML in July and has been in review since then because,
apparently, the proposed fix is rather hacky.
Not actually useful. This would break whenever a new text subtitle
format that requires a binary->text transformation is added.
(mov_text is one such format; disable it.) In general, we would have
to know which packet formats are binary, which we don't, so the only
reasonable way to handle this is a whitelist.
This is based on the MPlayer VA API patches. To be exact it's based on
a very stripped down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.
This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).
On the other hand, some features not present in the original patch were
added, like screenshot support.
VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and should be rather slow. In some cases, only one of OSD
and subtitles can be shown at a time (because OSD is drawn first,
OSD is preferred).
Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.
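For reference, the difference between the two conventions, as a minimal sketch
of blending one 8-bit component (illustrative only, not the actual
img_convert.c code):

    #include <stdint.h>

    /* Straight alpha: the source color is not yet scaled by its alpha. */
    static uint8_t blend_straight(uint8_t dst, uint8_t src, uint8_t a)
    {
        return (src * a + dst * (255 - a)) / 255;
    }

    /* Premultiplied alpha: the source already contains color * alpha. */
    static uint8_t blend_premultiplied(uint8_t dst, uint8_t src, uint8_t a)
    {
        return src + dst * (255 - a) / 255;
    }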
Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of being
accepted.)
Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on an nvidia system; however, this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)
This version number was essentially random. When I switched the test
to pkg-config, I took the libdvdread version from my Debian unstable
system as the minimum (as I knew that this version worked).
A user reported that libdvdread version 4.1.4 appeared to work
fine, so lower the minimum version to the 4.1.x series.
The check for HAVE_AV_CODEC_NEW_VDPAU_API just determines whether the
new vdpau libavutil pixel format is available (which implies presence of
the new API). However, that pixel format (and the corresponding config
test define) is also used in generic code (compiled even without vdpau)
in fmt-conversion.c. Since the configure test didn't define the symbol
if vdpau was not available, it broke in this case.
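As an illustration (simplified, not the literal fmt-conversion.c code), generic
code along these lines is compiled in every build, which is why the configure
test has to define the symbol even when vdpau itself is disabled:

    #include <libavutil/pixfmt.h>

    /* Sketch only: this guard is seen by builds without vdpau as well. */
    enum AVPixelFormat pick_vdpau_fmt_sketch(void)
    {
    #if HAVE_AV_CODEC_NEW_VDPAU_API
        return AV_PIX_FMT_VDPAU;    /* new API: generic vdpau pixel format */
    #else
        return AV_PIX_FMT_NONE;     /* pixel format not available */
    #endif
    }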
Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
file is named so because it's written against the "old"
libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").
Add support for the "new" libavcodec vdpau support. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without a deprecation grace period,
so we have to support the new API now. Moreover, there will probably be no Libav
release which supports both, so the transition is even less smooth than
we could hope, and we have to support both the old and new API.)
Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.
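A hypothetical compile test in the spirit of this check (not the literal
configure snippet): it only builds when libavutil already knows the new vdpau
pixel format, which implies the new API is present:

    #include <libavutil/pixfmt.h>

    int main(void)
    {
        /* fails to compile against libavutil versions without the new API */
        enum AVPixelFormat fmt = AV_PIX_FMT_VDPAU;
        return fmt == AV_PIX_FMT_VDPAU ? 0 : 1;
    }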
Some details might be handled differently. Especially display preemption
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be driver
specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)
Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.
Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size change anymore, we allow allocating surfaces
of arbitrary size instead of locking them to the size the VO was configured for.
The non-hwdec code also has slightly different allocation behavior now.
Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
On Linux, the check fails because NULL is not defined. Fix by using 0
instead, which is a perfectly valid null pointer constant, but doesn't
require stddef.h.
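A minimal illustration of why this works:

    /* Compiles without <stddef.h>: 0 is a null pointer constant by itself. */
    int main(void)
    {
        const char *p = 0;          /* null pointer, no header needed */
        return p == 0 ? 0 : 1;
    }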
Still uses termcap, but uses terminfo for loading the termcap database if
possible. Adds configure test to find terminfo; skips the termcap test
if terminfo is found since terminfo provides termcap.
Use termcap completely for special keys; if we can't get it from termcap
and it isn't one of the known fallbacks, we ignore its specialness and
treat it as a sequence of UTF-8 codes.
Further hardcoded fallbacks can be added by calling keys_push_once in
load_termcap; there is no limit to the number of keys pushed.
Uses the "ke" and "ks" capabilities to start / exit application mode, which
is necessary on vt100 emulators (including screen, xterm and all terminals
that emulate either of those) to correctly receive arrow keys.
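A rough sketch of that sequence using the classic termcap interface
(illustrative only, not mpv's getch2 code; error handling trimmed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <termcap.h>

    static char capbuf[1024], *cap = capbuf;

    int main(void)
    {
        char termbuf[2048];
        char *term = getenv("TERM");
        if (!term || tgetent(termbuf, term) <= 0)
            return 1;

        char *ks = tgetstr("ks", &cap);   /* enter application/keypad mode */
        char *ke = tgetstr("ke", &cap);   /* leave it again */

        if (ks)
            tputs(ks, 1, putchar);
        /* ... read arrow keys etc. while in application mode ... */
        if (ke)
            tputs(ke, 1, putchar);
        return 0;
    }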
It's now possible to compile getch2 even without termcap, though it won't
be of much use since it'll be unable to detect special keys.
Converted to 4 spaces per tab, prettified some statements.
In my opinion this should be unneeded and unclean, which is why I
removed it some time ago. But apparently this is a convenience for BSD
users (so they don't have to use --extra-cflags), so add it back.
This doesn't help if -pthread is omitted. (Apparently, glibc 2.17, on
which I tested the previous commit, doesn't require -lpthread in order
to use pthreads either.)
Not sure how this worked. Only af_export.c and tvi_v4l2.c were
using mmap, but they didn't include osdep/mmap.h or mmap_anon.h. In
any case, we trust that the target system is sufficiently POSIX
compliant if mmap is actually defined (as checked by configure).
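In other words, plain POSIX usage along these lines is assumed to just work
when configure finds mmap (a sketch of the assumption; MAP_ANONYMOUS is a
common extension rather than strict POSIX):

    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        return munmap(p, len) == 0 ? 0 : 1;
    }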
stream_vstream.c in particular was actually dependent on the network
code, and didn't compile anymore.
Clean up the protocol list in mpv.rst, and add some missing ones
supported by libavformat to stream_lavf.c.
This commit removes the "old" networking code in favor of libavformat's
code.
The code was still used for mp_http, udp, ftp, cddb. http has been
mapped to libavformat's http support for approximately 6 months now.
udp and ftp are supported by ffmpeg (though ftp support was added only last
month). cddb support is removed with this commit - it's probably not
important and rarely used, if at all, so we don't care about it.
--disable-libquvi creates the impression that it disables libquvi 0.9
as well. It doesn't, because it refers to libquvi 0.4, and 0.4 and 0.9
are practically completely different libraries. Make this more explicit
by renaming the switch to include the "4" version number.
This adds support for libquvi 0.9.x, and these features:
- start time (part of youtube URL)
- youtube subtitles
- alternative source switching ('l' and 'L' keys)
- youtube playlists
Note that libquvi 0.9 is still in development. Although this seems to
be API stable now, it looks like there will be a 1.0 release, which is
supposed to be the next stable release and the actual successor of
libquvi 0.4.x.
libarclite provides method stubs for the Subscripting headers added in
0407869ae3. This allows mpv to be built correctly on OS X 10.7 (I had tested
that commit on OS X 10.8 running the 10.7 SDK).
It seems that on 10.8 this option doesn't make any difference in the linked
libraries (checked with otool -L), so I just add it unconditionally.
Warning: This doesn't mean mpv moved to ARC. To do that one would have to add
`-fobjc-arc` to the cflags.
Basically rewrite all the code supporting the cache (i.e. anything other
than the ringbuffer logic). The underlying design is untouched.
Note that the old cache2.c (on which this code is based) already had a
threading implementation. This was mostly unused on Linux, and had some
problems, such as using shared volatile variables for communication and
uninterruptible timeouts, instead of using locks for synchronization.
This commit does use proper locking, while still retaining the way the
old cache worked. It's basically a big refactor.
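The locking pattern in question, reduced to a generic sketch (not the actual
cache code): the reader blocks on a condition variable instead of polling
shared volatile flags with timeouts.

    #include <pthread.h>
    #include <stdbool.h>

    struct cache_sketch {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        bool data_available;
        bool eof;
    };

    /* Called by the cache thread after it has buffered new data. */
    static void signal_new_data(struct cache_sketch *c)
    {
        pthread_mutex_lock(&c->lock);
        c->data_available = true;
        pthread_cond_signal(&c->wakeup);
        pthread_mutex_unlock(&c->lock);
    }

    /* Called by the reader; sleeps until data arrives or EOF is reached. */
    static bool wait_for_data(struct cache_sketch *c)
    {
        pthread_mutex_lock(&c->lock);
        while (!c->data_available && !c->eof)
            pthread_cond_wait(&c->wakeup, &c->lock);
        bool ok = c->data_available;
        c->data_available = false;
        pthread_mutex_unlock(&c->lock);
        return ok;
    }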
Simplify the code too. Since we don't need to copy stream ctrl args
anymore (we're always guaranteed a shared address space now), lots of
annoying code just goes away. Likewise, we don't need to care about
sector sizes. The cache uses the high-level stream API to read from
other streams, and sector sizes are handled transparently.
Before this commit, the cache was franken-hacked on top of the stream
API. You had to use special functions (like cache_stream_fill_buffer()
instead of stream_fill_buffer()), which would access the stream in a
cached manner.
The whole idea about the previous design was that the cache runs in a
thread or in a forked process, while the cache-aware functions made sure
the stream instance looked consistent to the user. If you used the
normal functions instead of the special ones while the cache was
running, you were out of luck.
Make it a bit more reasonable by turning the cache into a stream on its
own. This makes it behave exactly like a normal stream. The stream
callbacks call into the original (uncached) stream to do work. No
special cache functions or redirections are needed. The only different
thing about cache streams is that they are created by special functions,
instead of being part of the auto_open_streams[] array.
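Conceptually (with made-up names, not mpv's real stream API), the design boils
down to something like this:

    #include <stdlib.h>

    /* Sketch: the cache is itself a stream whose callback forwards to the
     * wrapped, uncached stream, so callers keep using normal stream calls. */
    struct stream_sketch {
        int (*fill_buffer)(struct stream_sketch *s, void *buf, int len);
        void *priv;                    /* for the cache: the wrapped stream */
    };

    static int cache_fill_buffer(struct stream_sketch *cache, void *buf, int len)
    {
        struct stream_sketch *inner = cache->priv;
        /* a real cache would serve from its ringbuffer first and only then
         * read from the underlying stream */
        return inner->fill_buffer(inner, buf, len);
    }

    static struct stream_sketch *open_cache(struct stream_sketch *inner)
    {
        struct stream_sketch *cache = calloc(1, sizeof(*cache));
        if (!cache)
            return NULL;
        cache->fill_buffer = cache_fill_buffer;
        cache->priv = inner;
        return cache;                  /* behaves like any other stream */
    }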
To make things simpler, remove the threading implementation, which was
messed into the code. The threading code could perhaps be kept, but I
don't really want to have to worry about this special case. A proper
threaded implementation will be added later.
Remove the cache enabling code from stream_radio.c. Since enabling the
cache involves replacing the old stream with a new one, the code as-is
can't be kept. It would be easily possible to enable the cache by
requesting a cache size (which is also much simpler). But nobody uses
stream_radio.c and I can't even test this thing, and the cache is
probably not really important for it either.
Otherwise this could happily open decoders for image subtitles or even
audio/video decoders. AV_CODEC_PROP_TEXT_SUB is a preprocessor symbol,
but it's still better to detect this properly instead of using #ifdef,
because these flags might as well be changed into enums sooner or later.
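The check amounts to asking libavcodec's codec descriptor at runtime, roughly
as in this sketch (not the exact mpv code):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    static bool is_text_sub(enum AVCodecID id)
    {
        const AVCodecDescriptor *d = avcodec_descriptor_get(id);
        return d && (d->props & AV_CODEC_PROP_TEXT_SUB);
    }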
Mostly copied from vf_lavfi. The parts that could be shared are minor,
because most code is about setting up audio and video, which are too
different.
This won't work with Libav. I used ffplay.c as a guide, and noticed too
late that their setup methods are incompatible with Libav's. Trying to
make it work with both would be too much effort. The configure test for
av_opt_set_int_list() should disable af_lavfi gracefully when compiling
with Libav.
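Such a configure test could look roughly like the following (hypothetical
snippet; it only needs to compile and link, the guarded call is never
executed):

    #include <libavutil/opt.h>
    #include <libavutil/samplefmt.h>

    int main(int argc, char **argv)
    {
        static const enum AVSampleFormat fmts[] = {
            AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_NONE
        };
        int dummy = 0;
        (void)argv;
        if (argc < 0)   /* never true: the call only has to compile */
            dummy = av_opt_set_int_list((void *)0, "sample_fmts", fmts,
                                        AV_SAMPLE_FMT_NONE, 0);
        return dummy;
    }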
Due to option parser chaos, you currently can't have a "," as part of
the filter graph string - not even with quoting or escaping. This will
probably be fixed later.
The audio filter chain is not PTS aware. So we have to do some hacks
to make up a fake PTS, and we have to map the output PTS back to the
filter chain's method of tracking PTS changes and buffering, by
adjusting af->delay.
Commit 02bbd87b disabled SDL linking by default. That commit followed
the ancient mplayer convention of not detecting the needed compiler flags
when --enable-* switches are used. Unfortunately, this makes compiling with SDL
enabled a pain.
Make --enable-sdl/sdl2 use autodetection, even if it's inconsistent with
most other --enable-* switches. The same is already done for
--enable-openal, though.
Based on a pull request by qyot27.
mpv still builds with ffmpeg 1.0.x; however, libswresample keeps causing
trouble. In older releases, libswresample simply crashed when
downmixing. In somewhat newer versions, it produces distorted output and
downmixing isn't even close to correct.
With ffmpeg release 1.1 (ffmpeg git tag n1.1), everything seems to work
fine. The release uses 0.17.102 as libswresample version, so bump the
required minimum version to that.
The libavresample version of the current Libav stable release lacks the
avresample_set_channel_mapping() function. (FFmpeg's libswresample seems
to be fine, because they added swr_set_channel_mapping() first.)
Add a cheap/slow workaround to do channel reordering on our own. We
don't use the recently removed MPlayer code (see commit 586b75a),
because that is not generic enough.
The functionality should be the same as with full-featured
libavresample, and any differences are bugs. It's probably slower,
though.
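The workaround boils down to permuting interleaved samples by hand, as in this
generic sketch (not the actual filter code), where map[c] gives the source
channel for output channel c:

    #include <stddef.h>
    #include <stdint.h>

    static void reorder_channels_s16(int16_t *dst, const int16_t *src,
                                     size_t frames, const int *map, int channels)
    {
        for (size_t f = 0; f < frames; f++) {
            for (int c = 0; c < channels; c++)
                dst[f * channels + c] = src[f * channels + map[c]];
        }
    }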