Instead of checking for resolution and image format changes, always
fully reinit on any parameter change. Let init_video do all required
initializations, which simplifies things a little bit.
Change the gl_video/hardware decoding interop API slightly, so that
hwdec initialization gets the full image parameters.
Also make some cosmetic changes.
Video can use up to 4 textures, if you include obscure formats with
alpha. This means alpha formats could always overwrite the first scaler
texture, leading to corrupted video display. This problem was recently
brought to light when commit 571e697 started to explicitly unbind all 4
video textures, which broke rendering for non-alpha formats as well.
Fix this by reserving the correct number of texture units.
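For illustration, the reservation amounts to something like this (a
sketch with assumed names, not the literal code):
-----------
#define TEXUNIT_VIDEO_NUM 4   /* Y, U, V, and a possible alpha plane */

/* Scaler LUT textures start after all potential video textures,
 * so an alpha plane can never overwrite them: */
glActiveTexture(GL_TEXTURE0 + TEXUNIT_VIDEO_NUM + scaler_index);
glBindTexture(GL_TEXTURE_2D, scaler_lut_texture);
-----------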
Most hardware decoding APIs provide some OpenGL interop. This allows
using vo_opengl without having to read the video data back from the GPU.
This requires adding a backend for each hardware decoding API. (Each
backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
as a set of OpenGL textures.
Add infrastructure to support this. The next commit will add support for
VA-API.
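The rough shape of such a backend (a simplified sketch; the exact
field names of the real API may differ):
-----------
struct gl_hwdec_driver {
    // Set up the interop and create the GL objects.
    int (*create)(struct gl_hwdec *hw);
    // Expose a decoded hardware image as a set of GL textures.
    int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image,
                     GLuint *out_textures);
    // Undo create().
    void (*destroy)(struct gl_hwdec *hw);
};
-----------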
The configure followed 5 different conventions of defines because the
next guy always wanted to introduce a new, better way to unify them[1].
For a hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
All is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef` because it can help spot
typos and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
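To illustrate the typo-spotting point: with values always defined,
feature checks use #if, and a misspelled macro is caught by -Wundef
instead of silently evaluating to false:
-----------
#include <stdio.h>
#define HAVE_HURR 1
#define HAVE_DURR 0

int main(void)
{
#if HAVE_HURR          /* normal check of a 0/1 define */
    printf("hurr enabled\n");
#endif
#if HAVE_HUR           /* typo: gcc -Wundef warns "HAVE_HUR" is not
                          defined; the old #ifdef style would just
                          silently take the false branch */
    printf("never printed\n");
#endif
    return 0;
}
-----------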
Keep track of the default values directly, instead of creating a new
instance of the option struct just to get the defaults.
Also get rid of the special handling of m_obj_desc.init_options.
Instead, handle it purely by the option parser. Originally, I wanted to
handle --vo=opengl-hq and --vo=direct3d_shaders with this (by making
them aliases to the real VOs with a different preset), but since
--vo=opengl-hq=help prints the wrong values (as a consequence of the
simplification), I'm not doing that, and instead use something
different.
Before, a VO could easily refuse to respond to VOCTRL_REDRAW_FRAME,
which means the VO wouldn't redraw OSD and window contents, and the
player would appear frozen to the user. This was a bit stupid, and made
dealing with some corner cases much harder (think of --keep-open, which
was hard to implement, because the VO gets into this state if there are
no new video frames after a seek reset).
Change this, and require VOs to always react to VOCTRL_REDRAW_FRAME.
There are two aspects of this: First, behavior after a (successful)
vo_reconfig() call, but before any video frame has been displayed.
Second, behavior after a vo_seek_reset().
For the first issue, we define that sending VOCTRL_REDRAW_FRAME after
vo_reconfig() should clear the window with black. This requires minor
changes to some VOs. In particular vaapi makes this horribly
complicated, because OSD rendering is bound to a video surface. We
create a black dummy surface for this purpose.
The second issue is much simpler and works already with most VOs: they
simply redraw whatever has been uploaded previously. The exception is
vdpau, which has a complicated mechanism to track and filter video
frames. The state associated with this mechanism is completely cleared
with vo_seek_reset(), so implementing this to work as expected is not
trivial. For now, we just clear the window with black.
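In a VO's control handler, the required behavior amounts to something
like this (a sketch; struct fields and helper names are assumptions,
not any specific VO's code):
-----------
static int control(struct vo *vo, uint32_t request, void *data)
{
    struct priv *p = vo->priv;
    switch (request) {
    case VOCTRL_REDRAW_FRAME:
        if (p->have_frame) {
            redraw_last_frame(p);   // repaint whatever was uploaded
        } else {
            // After vo_reconfig(), before the first frame: black.
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT);
        }
        draw_osd(p);
        return VO_TRUE;
    }
    return VO_NOTIMPL;
}
-----------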
Improves display of images and video with alpha channel, especially if
the transparent regions contain (supposed to be invisible) garbage
color values.
This probably has been broken since bbc865a: a test was added that uses
an FBO, but it's always run, even if FBOs were not detected. In that
case, fbotex_init() just runs into an assert. Fix the test that
triggered this condition, and make fbotex_init() "nicer" by just failing
if FBOs are not available.
With odd video sizes, if pixel formats with chroma subsampling and PBOs
were used, garbage was rendered. This was because the PBO path created
buffers with an unpadded size, and then tried to upload a padded image
to it. Fix it by explicitly setting the padded size. (As with the
non-PBO path, we rely on image allocations being padded, which is
normally the case.)
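The fix boils down to sizing the buffer from the padded stride instead
of the visible width; roughly (helper names assumed):
-----------
// stride already contains the alignment padding; round the height up
// to the chroma alignment (e.g. 2 for yuv420p) as well.
size_t size = plane_stride * MP_ALIGN_UP(plane_height, 2);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_DYNAMIC_DRAW);
-----------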
Until now, video output levels (obscure feature, like using TV screens
that require RGB output in limited range, similar to YUV) still required
handling of VOCTRL_SET_YUV_COLORSPACE. Simplify this, and use the new
mp_image_params code. This gets rid of some code. VOCTRL_SET_YUV_COLORSPACE
is not needed at all anymore in VOs that use the reconfig callback. The
result of VOCTRL_GET_YUV_COLORSPACE is now used only for the
colormatrix-related properties (basically, for display on OSD). For
other VOs, VOCTRL_SET_YUV_COLORSPACE will be sent only once after config
instead of twice.
Allocate textures big enough to include the bottom/right borders (so the
chroma texture sizes are rounded up instead of down). Make the texture
large enough to include the additional luma border. Conceptually, we
pretend that the video frame is fully aligned, and then crop away the
unwanted borders. Filtering (even just bilinear) will access the
borders anyway, so it's possible that we might need to switch to
"harder" cropping instead, but at least pixels not close to the
border should be displayed correctly now.
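The arithmetic, for a chroma shift of (xs, ys) (a sketch; MP_ALIGN_UP
rounds up to a power-of-2 multiple):
-----------
// Pretend the frame is fully aligned, then crop the borders away:
int aligned_w = MP_ALIGN_UP(w, 1 << xs);  // e.g. 427 -> 428 for xs=1
int aligned_h = MP_ALIGN_UP(h, 1 << ys);
int chroma_w  = aligned_w >> xs;          // rounded up: 214, not 213
int chroma_h  = aligned_h >> ys;
-----------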
Add a comment to mp_image.c about this luma border. These semantics are
kind of subtle, and the image allocation code handles this in a subtle
way too, so it's better to document this explicitly. The libavutil
image allocation code does similar things.
In general, this warning can hint at actual bugs. We don't enable it
yet, because it would conflict with some unmerged code, and we should
check with clang too (this commit was done by testing with gcc).
Doing "mpv --vo=opengl:lscale=help" now lists possible scalers and
exits. The "backend" suboption behaves similar. Make the "stereo"
suboption a choice, instead of using magic integer values.
Until now, only formats directly supported by OpenGL were supported.
This excludes various permutations of 8-bit RGB[A|0]. But we can simply
permute the color channels in the shader, so do that. This also adds
support for all these weird RGB0 formats.
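The permutation is just a swizzle appended to the texture lookup; the C
side picks it per format when generating the shader (an illustrative
sketch, not the complete table):
-----------
// Returns the GLSL swizzle for a packed format, used as in
// "vec4 c = texture2D(texture0, coord).bgra;"
static const char *packed_swizzle(int imgfmt)
{
    switch (imgfmt) {
    case IMGFMT_RGBA: return "rgba";
    case IMGFMT_BGRA: return "bgra";  // bytes B,G,R,A -> sample .bgra
    case IMGFMT_ARGB: return "gbar";  // bytes A,R,G,B
    case IMGFMT_ABGR: return "abgr";  // bytes A,B,G,R
    }
    return NULL;
}
-----------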
Note that we could use libavutil's pixfmt list instead of the
mp_packed_formats array, but trying to decrypt the pixfmt info would
probably end in pain, so this array with duplicated information is
actually better and shorter.
Note: I didn't actually test whether the alpha components are reproduced
correctly with alpha formats.
Use the video decoder chroma location flags and render chroma locations
other than centered. Until now, we've always used the intuitive and
obvious centered chroma location, but H.264 uses something else.
FFmpeg provides a small overview in libavcodec/avcodec.h:
-----------
/**
 *  X   X      3 4 X      X are luma samples,
 *             1 2        1-6 are possible chroma positions
 *  X   X      5 6 X      0 is undefined/unknown position
 */
enum AVChromaLocation{
    AVCHROMA_LOC_UNSPECIFIED = 0,
    AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
    AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
    AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
    AVCHROMA_LOC_TOP         = 4,
    AVCHROMA_LOC_BOTTOMLEFT  = 5,
    AVCHROMA_LOC_BOTTOM      = 6,
    AVCHROMA_LOC_NB          , ///< Not part of ABI
};
-----------
The visual difference is literally minimal, but since videophiles
apparently consider this detail a quality mark of a video renderer,
support it anyway. We don't bother with chroma locations other than
centered and left, though.
Not sure about correctness, but it's probably ok.
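In terms of rendering, "left" siting shifts the chroma sampling point
by half a chroma pixel horizontally relative to "centered", so the
correction is half a texel on the chroma texture (a sketch of the
arithmetic; the sign depends on the coordinate convention):
-----------
// 4:2:0 with "left" siting (mpeg2/4, h264): chroma is co-sited with
// the left luma column, i.e. 0.5 chroma pixels off the centered
// position. Applied as a texture coordinate offset:
float chroma_off_x = 0.5f / chroma_tex_width;
float chroma_off_y = 0.0f;   // vertically still centered
-----------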
The filter chain and the video outputs have config() functions. They are
strictly limited to transferring the video size and format. Other
parameters (like color levels) have to be transferred separately.
Improve upon this by introducing a separate set of reconfig() functions,
which use mp_image_params to carry format parameters. This struct
contains all image format related parameters from config(), plus
additional parameters such as colorspace.
Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just
an example/test case, but vo_opengl will need it later.
The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This
information is now handed to the VOs via reconfig(). The getter,
VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
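Trimmed to its essentials, the struct looks roughly like this (a
sketch; the real struct may carry additional fields):
-----------
struct mp_image_params {
    int imgfmt;                     // IMGFMT_* pixel format
    int w, h;                       // coded image size
    int d_w, d_h;                   // display size (aspect-corrected)
    enum mp_csp colorspace;         // e.g. BT.601 vs. BT.709
    enum mp_csp_levels colorlevels; // TV (limited) vs. PC (full) range
};

// The new entry points then take the whole struct, roughly:
// int reconfig(struct vo *vo, struct mp_image_params *p, int flags);
-----------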
When the displayed image is cropped in Y direction (like using panscan
controls when playing 4:3 video on a 16:9 monitor), and separated
scaling is used, the texture size for the FBO holding the intermediate
result was calculated incorrectly. This could lead to artifacts, which
were quite apparent with extreme scale factors.
Actually, the size of that texture is OK, but the texture shouldn't be
used to hold the complete scaled image. Instead, it should be used for
the visible part of the image only. Because separate scaling works by
scaling in Y direction first, it's still fine to scale the image on the
full image width on the first pass. This helps avoid artifacts on
the left/right border of the image when scaling in X direction, as the
scaler will try to fetch pixels from beyond the border. (The left border
is still kind of fine, but the right border will fetch garbage, unless
the texture is strictly sized, or explicit clamping is added to the
shader. Too much trouble, so using the full image width is simpler.)
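So the intermediate texture is used at full source width times visible
destination height; schematically (names assumed):
-----------
// Pass 1 (scale in Y): keep the full source width so the X pass can
// safely fetch across the left/right borders, but only render the
// visible (cropped) height into the FBO.
int fbo_use_w = src_width;     // full width, avoids border artifacts
int fbo_use_h = dst_visible_h; // NOT the full scaled image height
-----------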
Also fix some issues with no-npot mode, which enables use of power-of-2
textures. Maybe this mode isn't really useful anymore (modern hardware
is faster with smaller non-power-of-2 textures), but keep it for now.
Originally, the header wasn't supposed to contain random compatibility
stuff, but now all that is printed with -v. Add a hack to skip it and
to reduce the noise.
This probes and prints the depth of some texture formats with the help
of a FBO. By default it tests the format used for scaling, as well as
the format used for dithering and the 3D LUT (if any of these are
enabled).
The output is visible only with -v. Some representative values are
probed, and the difference of input and output value is printed as a hex
float. Hex floats are used because they make the implied precision more
obvious. Originally I wanted to do some more sophisticated guessing of
the implied depth/precision for more user-friendly reporting, but then
I decided that printing raw data is better for debugging, especially if
things go wrong.
This does not try to disable any functionality and does not print any
warnings if the depth is lower than what it should be.
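The hex floats come straight from printf's %a conversion; e.g. a
difference on the order of 0x1p-9 (half an 8-bit step) immediately
suggests 8 bit storage:
-----------
#include <stdio.h>

int main(void)
{
    float in  = 0.5f;
    float out = 0.498039f;  // e.g. 127/255, read back through the FBO
    printf("difference: %a\n", in - out);
    // prints roughly 0x1p-9 -- the exponent makes the precision of
    // the texture format obvious at a glance
    return 0;
}
-----------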
The internal texture format GL_RED is typically 8 bit, which is clearly
not good enough for the new dither matrix. The idea was to use a float
texture format, but this was somehow "forgotten". Use GL_R16, since
16 bit textures are more robust, and provide more precision for the
same memory usage.
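The upload then uses the sized 16 bit format, along these lines:
-----------
// data: uint16_t[size * size], dither levels scaled to 0..65535
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, size, size, 0,
             GL_RED, GL_UNSIGNED_SHORT, data);
-----------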
Change how the offset for centering the dither matrix is applied. This
is needed for making it possible to round up values to the target depth.
Before this commit, this changed the output even if the input was exact
and input and output depth were the same, which is not really what you
want. Now it doesn't do that anymore.
Use a different algorithm to generate the dithering matrix. This
looks much better than the previous ordered dither matrix with its
cross-hatch artifacts.
The matrix generation algorithm as well as its implementation was
contributed by Wessel Dankers aka Fruit. The code in dither.c is
his implementation, reformatted and with static global variables
removed by me.
The new matrix is uploaded as a float texture - before this commit, it
was a normal integer fixed point matrix. This means dithering will
be disabled on systems without float textures.
The size of the dithering matrix can be configured, as the matrix is
generated at runtime. The generation of the matrix can take rather
long, and is already unacceptable with size 8. The default is at 6,
which takes about 100 ms on a Core2 Duo system with dither.c compiled
at -O2, which I consider just about acceptable.
The old ordered dithering is still available and can be selected by
using the dither=ordered sub-option. The ordered dither matrix
generation code was moved to dither.c. This function was originally
written by Uoti Urpala.
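For reference, the old matrix is then selected like this:
-----------
mpv --vo=opengl:dither=ordered file.mkv
-----------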
gl_video_resize_redraw() simply resizes and redraws (but without
invoking swapGlBuffers()). The VO is not involved in any way, so this
can simply be called from inside the mpgl lock from any thread.
Requires a minor refactor of the GL OSD code in order to redraw without
an OSD object.
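A caller on another thread then does roughly (a sketch; the lock
helpers and the exact signature are assumptions):
-----------
// e.g. from a GUI thread during live-resizing:
mpgl_lock(ctx);
gl_video_resize_redraw(p->renderer, new_w, new_h);  // resize + redraw
mpgl_unlock(ctx);  // swapGlBuffers() is deliberately not called here
-----------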
Useful for the j2k decoder.
Matrix taken from http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
(XYZ to sRGB, whitepoint D65)
Gamma conversion follows what libswscale does (2.6 in, 2.2 out).
If linear RGB is used internally for scaling, the gamma conversion
will be undone by setting the exponent to 1. Unfortunately, the two
gamma values don't compensate each other exactly (2.2 vs. 1/0.45=2.22...),
so output is a little bit incorrect in sRGB or color-managed mode. But
for now try hard to match libswscale output, which may or may not be
correct.
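For reference, the per-pixel math amounts to this (matrix values as on
the page above; a sketch, not the actual shader code):
-----------
// Linearize with the libswscale-compatible input gamma, convert with
// the D65 XYZ -> sRGB matrix, re-apply the output gamma:
static const float xyz_to_srgb[3][3] = {
    { 3.2404542f, -1.5371385f, -0.4985314f},
    {-0.9692660f,  1.8760108f,  0.0415560f},
    { 0.0556434f, -0.2040259f,  1.0572252f},
};
// per component: lin = powf(in, 2.6f);
//                rgb = xyz_to_srgb * lin;  (matrix multiply)
//                out = powf(rgb, 1.0f / 2.2f);
// the last exponent is set to 1 when scaling in linear RGB
-----------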
The previous attempt was missing some code paths, so there were still
shaders generated that triggered the shader compilation error. Fix it
instead by handling USE_CONV specially in the shader.
By the way, the shader compiler did not accept:
#if defined(USE_CONV) && (USE_CONV == ...)
In my opinion this should be perfectly fine, but it gives the same
error as before. So test USE_CONV separately with #ifndef.
The OSX shader compiler was giving this error:
ERROR: 0:235: '' : syntax error incorrect preprocessor directive
on this line:
[235] #if USE_CONV == CONV_PLANAR
USE_CONV was undefined in some cases. The expected behavior is that the
shader preprocessor interprets this as branch not taken (AFAIK exactly
as in C), which is probably what the standard would dictate. This is
possibly an OSX bug. But admittedly, I'm not sure whether this is really
standard behavior (in C or GLSL), and doing this is extremely weird at
best, so make sure that USE_CONV is always defined.
Should fix behavior on OSX.
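The resulting workaround in the shader source amounts to:
-----------
// Instead of the (rejected) combined test:
//   #if defined(USE_CONV) && (USE_CONV == CONV_PLANAR)
// make sure the macro always has a value, then test it:
#ifndef USE_CONV
#define USE_CONV 0
#endif
#if USE_CONV == CONV_PLANAR
// planar conversion path
#endif
-----------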
Allows playing video with alpha information on X11, as long as the video
contains alpha and the window manager does compositing. See vo.rst.
Whether a window can be transparent is decided by the choice of the X
Visual used for window creation. Unfortunately, there's no direct way to
request such a Visual through the GLX or the X API, and use of the
XRender extension is required to find out whether a Visual implies a
framebuffer with alpha used by XRender (see for example [1]). Instead of
depending on the XRender wrapper library (which would require annoying
configure checks, even though XRender is virtually always supported),
use a simple heuristic to find out whether a Visual has alpha. Since
getting it wrong just means an optional feature will not work as
expected, we consider this ok.
[1] http://stackoverflow.com/questions/4052940/how-to-make-an-opengl-rendering-context-with-transparent-background/9215724#9215724
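Such a heuristic can be as simple as checking for a 32 bit deep visual
whose RGB masks leave the top 8 bits unused (a sketch; not necessarily
the exact check used):
-----------
#include <X11/Xutil.h>
#include <stdbool.h>

// Guess: 8 leftover bits in a depth-32 TrueColor visual act as the
// XRender alpha channel. A wrong guess only breaks an optional feature.
static bool vinfo_probably_has_alpha(const XVisualInfo *vi)
{
    unsigned long rgb = vi->red_mask | vi->green_mask | vi->blue_mask;
    return vi->depth == 32 && rgb != 0 && (rgb & 0xff000000UL) == 0;
}
-----------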
When displaying YUV with an alpha plane (an extremely rare special case),
we didn't upload the alpha plane, because we don't do anything with it.
This actually created some annoying special cases, so upload the alpha
planes as well, even if they're unused.