A wayland output based on shared memory. This video output is useful for
X11-free systems, because the current libGL in mesa provides GLX symbols
(and thus still depends on X11). It is also useful for embedded systems
where the wayland backend for EGL is not implemented, such as the
Raspberry Pi.
At the moment only RGB formats are supported, because there is still no
compositor which supports planar formats like yuv420p. The most used compositor
at the moment, weston, supports only BGR0, BGRA and BGR16 (565).
The BGR16 format is the fastest to convert and render, without any
noticeable difference from the BGR32 formats. For this reason, the
current (very basic) auto-detection code will prefer the BGR16 format.
The weston source code also indicates that BGR16 (RGB565) is the
preferred format.
There are 2 options:
* default-format (yes|no): use the BGR32 format instead
* alpha (yes|no): output images and videos with transparency
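For example (assuming the VO name "wayland" and the usual VO suboption
syntax; file names are placeholders):

  mpv --vo=wayland:default-format video.mkv
  mpv --vo=wayland:alpha video-with-alpha.mkv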
Apparently this was dropped some years ago, but judging from MPlayer's
handling of this, the original code wasn't so great anyway. The new
code handles clearing of panscan borders correctly, and integrates
better with the YUV path. (Although the VDPAU API sure makes this
annoying with its separate surface types for RGB.)
Note that we create 5 surfaces for some reason - I don't think this
makes too much sense (because we can't use the deinterlacer with RGB
surfaces), but at least it reduces the number of differences from the
YUV code path.
Clearing the borders is done by drawing a single black pixel over the
window. This sounds pretty dumb, but it appears to work well, and
there is no other API for that. (One could try to use the video mixer
for this purpose, since it has all kinds of features, including
compositing multiple RGBA surfaces and clearing the window background.
But it would require an invisible dummy video surface to make the
video mixer happy, and that's getting too messy.)
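A sketch of that trick (illustrative code, not the actual mpv source):
keep a 1x1 black VdpOutputSurface around and blit it, scaled, over the
target. VdpOutputSurfaceRenderOutputSurface stretches the source rect to
the destination rect, so this fills the window area with black:
-----------
#include <vdpau/vdpau.h>

static VdpStatus clear_window(VdpOutputSurfaceRenderOutputSurface *render,
                              VdpOutputSurface target,      /* window-sized */
                              VdpOutputSurface black_pixel, /* 1x1, black */
                              uint32_t win_w, uint32_t win_h)
{
    VdpRect dst = { .x0 = 0, .y0 = 0, .x1 = win_w, .y1 = win_h };
    /* NULL source rect = the whole (1x1) source surface; no colors and
     * no blend state, i.e. a plain scaled copy. */
    return render(target, &dst, black_pixel, NULL, NULL, NULL,
                  VDP_OUTPUT_SURFACE_RENDER_ROTATE_0);
}
-----------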
The VDPAU default colorkey, although it seems to be driver specific, is
usually green. This is a pretty annoying color, and you usually see it
briefly (as flashes) if the VDPAU window resizes.
Change it to some shade of black. The new default color is close to what
MPlayer picks as colorkey (and apparently it worked well for them):
VdpColor vdp_bg = {0.01, 0.02, 0.03, 0};
Since our OPT_COLOR can set 8 bit colors only, we use '#020507' instead,
which should be the same assuming 8 bit colors.
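The arithmetic behind that value, as a worked example (not mpv code):
truncating the float components scaled to 8 bit yields exactly #020507:
-----------
#include <stdio.h>

int main(void)
{
    const double vdp_bg[3] = {0.01, 0.02, 0.03};
    /* 0.01 * 255 = 2.55 -> 0x02
     * 0.02 * 255 = 5.10 -> 0x05
     * 0.03 * 255 = 7.65 -> 0x07 */
    printf("#%02x%02x%02x\n", (int)(vdp_bg[0] * 255),
           (int)(vdp_bg[1] * 255), (int)(vdp_bg[2] * 255));
    return 0;
}
-----------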
Obviously, you can't use black, because black is a way too common color,
and would make it too easy to observe the colorkey effect when e.g.
moving a terminal with black background over the video window.
Formally, this sets the "background color" of the presentation queue.
But in practice, this color is also used as colorkey.
This commit doesn't change the VDPAU default yet.
This is based on the MPlayer VA API patches. To be exact, it's based on
a very stripped-down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.
This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).
On the other hand, some features not present in the original patch were
added, like screenshot support.
VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and probably rather slow. In some cases, only either OSD
or subtitles can be shown at the same time (because the OSD is drawn
first, the OSD is preferred).
Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.
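For reference, the two conventions only differ in whether the source
color is pre-weighted by its alpha (per channel, values in [0,1]; a
sketch, not the img_convert.c code):
-----------
/* Straight alpha: the source color is not yet weighted by its alpha. */
static inline float blend_straight(float src, float src_a, float dst)
{
    return src * src_a + dst * (1.0f - src_a);
}

/* Premultiplied alpha: the source color already includes the weight. */
static inline float blend_premul(float src, float src_a, float dst)
{
    return src + dst * (1.0f - src_a);
}
-----------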
Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of
being accepted.)
Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on an nvidia system; however, this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)
Doing "mpv --vo=opengl:lscale=help" now lists possible scalers and
exits. The "backend" suboption behaves similarly. Make the "stereo"
suboption a choice, instead of using magic integer values.
Use the video decoder chroma location flags and render chroma locations
other than centered. Until now, we've always used the intuitive and
obvious centered chroma location, but H.264 uses something else.
FFmpeg provides a small overview in libavcodec/avcodec.h:
-----------
/**
 *  X   X      3 4 X      X are luma samples,
 *             1 2        1-6 are possible chroma positions
 *  X   X      5 6 X      0 is undefined/unknown position
 */
enum AVChromaLocation{
    AVCHROMA_LOC_UNSPECIFIED = 0,
    AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
    AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
    AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
    AVCHROMA_LOC_TOP         = 4,
    AVCHROMA_LOC_BOTTOMLEFT  = 5,
    AVCHROMA_LOC_BOTTOM      = 6,
    AVCHROMA_LOC_NB              , ///< Not part of ABI
};
-----------
The visual difference is literally minimal, but since videophiles
apparently consider this detail a quality mark of a video renderer,
support it anyway. We don't bother with chroma locations other than
centered and left, though.
Not sure about correctness, but it's probably ok.
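To illustrate what handling the "left" location involves (my own
derivation, not the actual renderer code; sign conventions may differ):
with 2x horizontal subsampling, left-sited chroma lies half a luma pixel
to the left of the centered position, which is a quarter of a chroma
texel, so the chroma sampling coordinate gets a fixed offset:
-----------
/* Horizontal texture coordinate offset (normalized units) to apply when
 * sampling the chroma texture. chroma_w is the chroma texture width in
 * texels; left_sited is 1 for AVCHROMA_LOC_LEFT, 0 for centered. */
static float chroma_tc_offset(int left_sited, int chroma_w)
{
    return left_sited ? 0.25f / (float)chroma_w : 0.0f;
}
-----------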
The use of filters prior to PNG compression can greatly improve
compression ratio, with "mixed" (ImageMagick calls it "adaptive")
typically achieving the best results.
Use a different algorithm to generate the dithering matrix. This
looks much better than the previous ordered dither matrix with its
cross-hatch artifacts.
The matrix generation algorithm as well as its implementation was
contributed by Wessel Dankers aka Fruit. The code in dither.c is
his implementation, reformatted and with static global variables
removed by me.
The new matrix is uploaded as a float texture - before this commit, it
was a normal integer fixed-point matrix. This means dithering will
be disabled on systems without float textures.
The size of the dithering matrix can be configured, as the matrix is
generated at runtime. The generation of the matrix can take rather
long, and is already unacceptable with size 8. The default is at 6,
which takes about 100 ms on a Core2 Duo system with dither.c compiled
at -O2, which I consider just about acceptable.
The old ordered dithering is still available and can be selected with
the dither=ordered sub-option. The ordered dither matrix
generation code was moved to dither.c. This function was originally
written by Uoti Urpala.
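For comparison, the classic recursive Bayer pattern is what such an
ordered dither matrix looks like (a generic sketch, not necessarily the
dither.c implementation):
-----------
/* Threshold index in [0, 2^(2n)) for position (x, y) of a 2^n x 2^n
 * Bayer matrix; builds the value from the coordinate bits, matching the
 * recursive [[4M, 4M+2], [4M+3, 4M+1]] construction. */
static unsigned bayer(unsigned x, unsigned y, unsigned n)
{
    unsigned v = 0;
    for (unsigned i = 0; i < n; i++) {
        v = (v << 2) | (((x ^ y) & 1u) << 1) | (y & 1u);
        x >>= 1;
        y >>= 1;
    }
    return v;
}
-----------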
Background: slice support has been completely removed, because it
doesn't work with multithreading, and otherwise provides a rather bad
complexity-to-performance tradeoff.
Allows playing video with alpha information on X11, as long as the video
contains alpha and the window manager does compositing. See vo.rst.
Whether a window can be transparent is decided by the choice of the X
Visual used for window creation. Unfortunately, there's no direct way to
request such a Visual through the GLX or the X API, and use of the
XRender extension is required to find out whether a Visual implies a
framebuffer with alpha used by XRender (see for example [1]). Instead of
depending on the XRender wrapper library (which would require annoying
configure checks, even though XRender is virtually always supported),
use a simple heuristic to find out whether a Visual has alpha. Since
getting it wrong just means an optional feature will not work as
expected, we consider this ok.
[1] http://stackoverflow.com/questions/4052940/how-to-make-an-opengl-rendering-context-with-transparent-background/9215724#9215724
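The heuristic amounts to something like this (my formulation of the
idea, not the actual mpv code): assume a Visual has an alpha channel if
its depth is 32 while the RGB masks cover only the low 24 bits:
-----------
#include <X11/Xlib.h>
#include <X11/Xutil.h>

static int visual_has_alpha(const XVisualInfo *v)
{
    unsigned long rgb = v->red_mask | v->green_mask | v->blue_mask;
    /* depth 32, but RGB uses 24 bits -> the other 8 are likely alpha */
    return v->depth == 32 && rgb != 0 && (rgb & 0xff000000ul) == 0;
}
-----------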
VFCAP_OSD was used to determine at runtime whether the VO supports OSD
rendering. This was mostly unused. vo_direct3d had an option to disable
OSD (this was supposed to allow forcing auto-insertion of vf_ass, but we
removed that anyway). vo_opengl_old could disable OSD rendering when a
very old OpenGL version was detected, and had an option to explicitly
disable it as well.
Remove VFCAP_OSD from everything (and some associated logic). Now the
vo_driver.draw_osd callback can be set to NULL to indicate missing OSD
support (important so that vo_null etc. don't single-step on OSD
redraw), and if OSD support depends on runtime support, the VO's
draw_osd should just do nothing if OSD is not available.
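In other words, a VO without OSD support now looks roughly like this
(sketch with a stand-in type; the real struct is mpv's vo_driver):
-----------
struct vo_driver_sketch {        /* stand-in for mpv's struct vo_driver */
    const char *name;
    void (*draw_osd)(void *vo);  /* NULL means: no OSD support */
    /* ... other callbacks ... */
};

static const struct vo_driver_sketch video_out_null = {
    .name     = "null",
    .draw_osd = NULL,   /* replaces the old VFCAP_OSD capability flag */
};
-----------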
Also, do not access vo->want_redraw directly. Change the want_redraw
reset logic for this purpose, too. (Probably unneeded, vo_flip_page
resets it already.)
All Wayland-specific routines are placed in wayland_common.
This makes it easier to write other video outputs.
The EGL-specific parts, as well as OpenGL context creation, are in
gl_common.
This backend works for:
* opengl-old
* opengl
* opengl-hq
To use it, just specify the opengl backend:
--vo=opengl:backend=wayland
or disable the x11 build.
Don't forget to set EGL_PLATFORM to wayland.
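For example (shell syntax; the file name is a placeholder):

  EGL_PLATFORM=wayland mpv --vo=opengl:backend=wayland video.mkv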
Co-Author: Scott Moreau
(Sorry I lost the old commit history due to the file structure changes)
This allowed making the player switch the monitor video mode when
creating the video window. This was a questionable feature, and with
today's LCD screens certainly not useful anymore. Switching to a random
video mode (going by video width/height) doesn't sound too useful
either.
I'm not sure about the win32 implementation, but the X part had several
bugs. Even in mplayer-svn (where x11_common.c hasn't been receiving any
larger changes for a long time), this code is buggy and doesn't do the
right thing anyway. (And what the hell _did_ it do when using multiple
physical monitors?)
If you really want this, write a shell script that calls xrandr before
and after calling mpv.
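Something like this (output name and mode are placeholders):

  xrandr --output HDMI1 --mode 1280x720
  mpv video.mkv
  xrandr --output HDMI1 --auto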
vo_sdl can still do mode switching, because SDL has native support for
it, and using it is trivial. Add a new sub-option for this.
Dithering was disabled if the input bit depth was not larger than the
output bit depth of the screen framebuffer. But since scaling, RGB
conversion, and other filters change the number of significant bits
anyway, dithering could still benefit image quality even in these
cases. Always do dithering, unless dithering is completely disabled.
The original intention of this mechanism was not to change the image
needlessly when playing video that matches the native bit depth of the
screen.
This was an awkward hack that attempted to avoid the use of 16 bit
textures, while still allowing rendering 10-16 bit YUV formats. The
idea was that even if the hardware doesn't support 16 bit textures,
an A8L8 texture could be used to convert 10 bit (etc.) to 8 bit in
the shader, instead of doing this on the CPU.
This was an experiment, disabled by default, and was (probably) rarely
used. I've never heard of this being used successfully. Remove it.
Change from gamma 2.2 to the slightly more precise 1/0.45 as per BT.709.
https://www.itu.int/rec/R-REC-BT.709-5-200204-I/en mentions a value of
γ=0.45 for the conceptual non-linear precorrection of video signals.
This is approximately the inverse of 2.22, and not 2.20 as the code had
been using until now.
This mainly serves as a fallback for platforms where nothing better is
available; also as a debugging help. Neither the audio nor the video
driver is first class - the audio driver lacks delay detection, and the
video driver only supports a single YUV color space.
Configure options: --disable-sdl2 to disable SDL 2.0+ detection,
--disable-sdl to disable SDL 1.2+ detection. Both options need to be
specified to turn off SDL support entirely.
This wasn't actually used anymore since the old gray-alpha OSD rendering
was removed. Removing the documentation for the vo_opengl_old osdcolor
suboption had been forgotten as well.
To simplify the implementation, the same filter kernel was used for both
directions, even when the scaling factors were different. It turns out
that people actually did this, and that the resulting rendering errors
were rather visible. Disable this feature by default, as fixing it would
require structural changes, and it's useless anyway.
Remove VFCTRL_DRAW_OSD, VFCAP_EOSD_FILTER, VFCAP_EOSD_RGBA, VFCAP_EOSD,
VOCTRL_DRAW_EOSD, VOCTRL_GET_EOSD_RES, VOCTRL_QUERY_EOSD_FORMAT.
Remove draw_osd_with_eosd(), which rendered the OSD by calling
VOCTRL_DRAW_EOSD. Change VOs to call osd_draw() directly, which takes
a callback as argument. (This basically works like the old OSD API,
except multiple OSD bitmap formats are supported and caching is
possible.)
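The shape of the new call is roughly the following (simplified sketch;
the argument list is not guaranteed to match the real osd_draw()
prototype):
-----------
struct sub_bitmaps;  /* mpv's list-of-OSD-bitmaps type */

/* The VO provides a callback that knows how to draw one list of OSD
 * bitmaps onto its target surface: */
static void draw_bitmaps_cb(void *ctx, struct sub_bitmaps *imgs)
{
    /* upload/blend imgs onto the VO's framebuffer */
}

/* osd_draw() then invokes the callback for each OSD object, converted
 * (and possibly cached) in one of the bitmap formats the VO declared
 * it supports, roughly:
 *     osd_draw(osd, res, pts, 0, formats, draw_bitmaps_cb, vo);
 */
-----------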
Remove all mentions of "eosd". It's simply "osd" now.
Make OSD size per-OSD-object, as they can be different when using
vf_sub. Include display_par/video_par in resolution change detection.
Fix the issue with margin borders in vo_corevideo.
This changes the name of this project to mpv. Most user-visible mentions
of "MPlayer" and "mplayer" are changed to "mpv". The binary name and the
default config file location are changed as well.
The new default config file location is: ~/.mpv/
Remove etc/mplayer.desktop. Apparently this was for the MPlayer GUI,
which was removed from mplayer2 ages ago.
We don't have a logo, and the MS Windows resource files sort-of require
one, so leave etc/mplayer.ico/.xpm as-is.
Remove the debian and rpm packaging scripts. These contained outdated
dependencies and likely were more harmful than useful. (Patches which
add working and well-tested packaging are welcome.)
Rename both the option and property to "osd-level", which fits a bit
better with the general naming scheme. Make it a choice instead of an
integer range. I failed to come up with good names for the various
levels, so leave them as-is.
Remove the useless property handler for the "loop" property too.
GL_RGB16 doesn't seem to work universally (e.g. Intel). Use GL_RGB by
default, and use GL_RGB16 for "opengl-hq" only.
This may require users of Intel GPUs to manually experiment with the
fbo-format suboption when using "opengl-hq", as GL_RGB16 doesn't seem to
work there in some cases (black screen).
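E.g. (illustrative only - check the vo_opengl documentation for the
actual value names):

  mpv --vo=opengl-hq:fbo-format=rgb8 video.mkv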
It's not really known whether PBO use causes problems of any kind (most
likely not). They should slightly increase performance. Use them by
default with "opengl-hq".
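The usual upload pattern with PBOs looks like this (generic OpenGL, a
sketch rather than the mpv code): pixel data is streamed through a
GL_PIXEL_UNPACK_BUFFER, so glTexSubImage2D reads from the buffer object
instead of from client memory:
-----------
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

static void upload_via_pbo(GLuint pbo, GLuint tex, const void *data,
                           int w, int h, size_t size)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    /* orphan the old storage so the driver doesn't have to synchronize */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);
    void *p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (p) {
        memcpy(p, data, size);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* the "pointer" argument is now an offset into the bound PBO */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
-----------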
Even though PBOs don't have anything to do with rendering quality,
"opengl-hq" provides a test bed for features that should be enabled by
default, but aren't out of fear of regressions.
Change the default settings for vo_opengl to highest performance and
compatibility, but lowest quality. Use bilinear as default scaler.
Add "opengl-hq" as alias for high quality settings. This alias uses
exactly the same settings as vo_opengl did before this commit.
This renames vo_gl3 to vo_opengl, and makes it the default. The old
vo_gl is still available under "opengl-old".
We keep "gl3" as an alias for "opengl" for short-term compatibility.
For OSX/Cocoa, the autoprobe order changes (prefer "opengl" over
"opengl-old").
Remove "gl_nosw". This was a compatibility alias for "opengl-old", and
there's no point in keeping it.
Now vo_gl3 should work with standard OpenGL 2.1, as long as the
GL_ARB_texture_rg extension is available. Optional features that rely
on functionality which is core in OpenGL 3.0, but available only as
extensions in OpenGL 2.1, are automatically disabled.
The force-gl2 suboption, which was an unreliable hack to run vo_gl3
in an OpenGL 2.1 context, is removed.
Significant changes are done to the extension loader to make it easier
to identify optional OpenGL features.
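For example, the OpenGL 2.1 way to detect such an extension is a
whole-word check on the extension string (generic sketch, not the actual
loader code; a naive strstr() could match a prefix of a longer name):
-----------
#include <string.h>
#include <GL/gl.h>

static int have_gl_ext(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    const char *s = exts;
    size_t len = strlen(name);
    while (s && (s = strstr(s, name))) {
        if ((s == exts || s[-1] == ' ') &&
            (s[len] == ' ' || s[len] == '\0'))
            return 1;
        s += len;
    }
    return 0;
}
/* e.g.: int have_rg = have_gl_ext("GL_ARB_texture_rg"); */
-----------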
Context creation is changed a bit to simplify the code and to better
handle the fallback: if OpenGL 3 context creation fails, creating an
OpenGL legacy context is attempted.
Based on the initial work by Rudolf Polzer <divverent@xonotic.org>,
which included making the shader GLSL 1.20 compatible, and more.
While being able to play videos on a framebuffer device would be nice,
I didn't need it, and couldn't even test it (buggy nvidia binary
drivers that disable framebuffers, buggy DirectFB that crashes when
using the X11 backend). It's just dead weight, get rid of it.
vo_directx was very horrible, and by today it's mostly useless. I didn't
remove it earlier, because there was that-guy who told me in amazement
how awesome mplayer was, because it was the only video player fast
enough for smooth playback on his system when using vo_directx. Sorry,
that-guy.