bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
While I'm not very fond of "const", it's important for declarations
(it decides whether a symbol is emitted in a read-only or read/write
section). Fix all these cases, so we have writable global data only
when we really need it.
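A minimal illustration of what is at stake (hypothetical variable names):

    /* const data is emitted into a read-only section (.rodata); the pages
     * can be shared between processes and accidental writes fault. */
    static const int table_ro[] = {1, 2, 3};

    /* Without const, the same data lands in a writable section (.data),
     * even if nothing ever modifies it. */
    static int table_rw[] = {1, 2, 3};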
The functions glXGetProcAddressARB() and glXQueryExtensionsString() were
loaded using dlsym(). This could fail when compiling to libmpv, because
then dlopen(NULL, ...) will look in the main program's list of
libraries, and the libGL linked to libmpv is never considered. (Don't
know if this somehow could be worked around.) The result is that using
vo_opengl with libmpv can fail.
Avoid this by not using dlsym(). glXGetProcAddressARB() was already used
directly in the same file, and that never caused any problems. (Still
add it to the configure test.) glXQueryExtensionsString() is documented
as added in GLX 1.1 - that's ancient.
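The change boils down to calling the two functions directly instead of
resolving them at runtime; roughly (a sketch, not the actual patch):

    #include <GL/glx.h>

    typedef void (*glx_fn)(void);

    /* Before: dlsym(dlopen(NULL, ...), "glXGetProcAddressARB") - which
     * fails when libGL is only reachable through libmpv, not through the
     * main program. After: just link the symbols directly. */
    static glx_fn get_fn(const char *name)
    {
        return glXGetProcAddressARB((const GLubyte *)name);
    }

    static const char *get_glx_extensions(Display *dpy, int screen)
    {
        /* core since GLX 1.1, so no need to probe for it at runtime */
        return glXQueryExtensionsString(dpy, screen);
    }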
The previous API worked under the assumption that download_image is always
called after map_image. In practice this is true, but it's better to have a
more generic API that doesn't depend on the order in which the functions are
called.
The harder work was done in the previous commits. After that this feature comes
out almost for free.
The only problem is I can't get the textures created with CGLTexImageIOSurface2D
to download properly, thus the code performs download using some CoreVideo APIs.
If someone knows why download of textures created with CGLTexImageIOSurface2D
doesn't work please contact me :)
This adds support for packed YUV formats (YUYV and UYVY) using the extension
GL_APPLE_rgb_422. While supporting these formats on their own is not that
important (considering most video is planar YUV), they are used for
interoperability with IOSurfaces.
The next commit will use these formats to render VDA hardware decoded frames through
IOSurface and OpenGL interoperability.
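For reference, the upload path with this extension is a plain glTexImage2D
with the packed format/type pair; a sketch (which of the two _APPLE type
enums maps to which byte order is left out here):

    #include <GL/gl.h>
    #include <GL/glext.h>   /* for the GL_APPLE_rgb_422 enums */

    /* Hypothetical helper: upload one packed 4:2:2 frame. Each texel is
     * 2 bytes (one luma sample plus half of a chroma pair). */
    static void upload_packed_422(GLuint tex, int w, int h, const void *data)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                     GL_RGB_422_APPLE,            /* packed 4:2:2 layout */
                     GL_UNSIGNED_SHORT_8_8_APPLE, /* or ..._8_8_REV_APPLE,
                                                     depending on byte order */
                     data);
    }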
This allows vo_opengl to use GL_TEXTURE_RECTANGLE textures, either by
enabling it with the 'rectangle-textures' sub-option, or by having a
hwdec backend force it. By default it's off.
The _only_ reason we're adding this is because VDA can export rectangle
textures only.
When blending OSD and subtitles onto the video, we write bogus alpha
values. This doesn't normally matter, because these values are normally
unused and discarded. But at least on Wayland, the alpha values are used
by the compositor and lead to transparent windows even with opaque
video in places where the OSD happens to use transparency.
(Also see github issue #338.)
Until now, the alpha basically contained garbage. The source factor
GL_SRC_ALPHA meant that alpha was multiplied by itself. Use GL_ONE
instead (which is why we have to use glBlendFuncSeparate()). This should
give correct results, even with video that has alpha. (Or at least it's
something close to correct, I haven't thought too hard how the
compositor will blend it, and in fact I couldn't manage to test it.)
If glBlendFuncSeparate() is not available, fall back to glBlendFunc(),
which does the same as the code did before this commit. Technically, we
support GL 1.1, but glBlendFuncSeparate is 1.4, and I guess we should
try not to crash if vo_opengl_old runs on a system with GL 1.1 drivers
only.
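In GL terms the change looks roughly like this (a sketch; mpv calls these
through its loaded function-pointer table, and the destination factors are
shown only for illustration):

    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <stdbool.h>

    static void setup_osd_blend(bool have_blend_func_separate)
    {
        if (have_blend_func_separate) {
            /* RGB keeps the usual source-alpha blending, but the alpha
             * channel is written with a source factor of GL_ONE, so the
             * framebuffer ends up with meaningful alpha instead of
             * alpha*alpha. */
            glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                                GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        } else {
            /* GL 1.1 fallback: same behavior as before this commit. */
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }
        glEnable(GL_BLEND);
    }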
This uses vdpau OpenGL interop to convert a vdpau surface to a texture.
Note that this is a bit weak and primitive. Deinterlacing (or any other
form of vdpau postprocessing) is not supported. vo_opengl chroma scaling
and chroma sample position are not supported. Internally, the vdpau
video surfaces are converted to a RGBA surface first, because using the
video surfaces directly is too complicated. (These surfaces are always
split into separate fields, and the vo_opengl core expects progressive
frames or frames with weaved fields.)
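The interop itself is the standard GL_NV_vdpau_interop sequence;
schematically (error handling and the VdpVideoMixer render step are
omitted, and the helper itself is hypothetical):

    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdint.h>
    #include <vdpau/vdpau.h>

    /* Register an RGBA VdpOutputSurface (already produced by the mixer)
     * as a GL texture. 'get_proc_address' is the pointer returned by
     * vdp_device_create_x11(). */
    static GLvdpauSurfaceNV register_output_surface(VdpDevice dev,
                                                    const void *get_proc_address,
                                                    VdpOutputSurface surf,
                                                    GLuint *out_tex)
    {
        glVDPAUInitNV((void *)(uintptr_t)dev, get_proc_address);

        glGenTextures(1, out_tex);
        GLvdpauSurfaceNV s = glVDPAURegisterOutputSurfaceNV(
                (void *)(uintptr_t)surf, GL_TEXTURE_2D, 1, out_tex);
        glVDPAUSurfaceAccessNV(s, GL_READ_ONLY);

        /* per frame: glVDPAUMapSurfacesNV(1, &s); draw with *out_tex;
         * then glVDPAUUnmapSurfacesNV(1, &s); */
        return s;
    }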
Instead of checking for resolution and image format changes, always
fully reinit on any parameter change. Let init_video do all required
initializations, which simplifies things a little bit.
Change the gl_video/hardware decoding interop API slightly, so that
hwdec initialization gets the full image parameters.
Also make some cosmetic changes.
VA-API's OpenGL/GLX interop is pretty bad and perhaps slow (renders a
X11 pixmap into a FBO, and has to go over X11, probably involves one or
more copies), and this code serves more as an example, rather than for
serious use. On the other hand, this might still work much better than
vo_vaapi, even if slightly slower.
Most hardware decoding APIs provide some OpenGL interop. This allows
using vo_opengl, without having to read the video data back from GPU.
This requires adding a backend for each hardware decoding API. (Each
backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
as a set of OpenGL textures.
Add infrastructure to support this. The next commit will add support for
VA-API.
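Conceptually, each backend is a small vtable that the GL renderer calls
into; the following is only a sketch of the shape of that interface, with
made-up member names:

    #include <GL/gl.h>

    struct gl_hwdec;    /* backend-private interop state (sketch) */
    struct mp_image;    /* mpv's frame type, carrying the hw surface */

    struct gl_hwdec_backend {
        const char *api_name;
        /* Create the OpenGL textures and any API-specific interop state. */
        int (*init)(struct gl_hwdec *hw);
        /* Bind the hardware-decoded frame to those textures, so the
         * renderer can sample them like ordinary video planes. */
        int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image);
        void (*destroy)(struct gl_hwdec *hw);
    };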
The configure followed 5 different conventions for defines, because the next
guy always wanted to introduce a new, better way to unify them[1]. For a
hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
All is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef` because it can help spot typos
and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
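The typo-spotting point in practice: with the 0/1 convention the macro can
be used in ordinary #if and C expressions, and a misspelling fails the build
instead of silently compiling the feature out. Illustrative only:

    /* config.h */
    #define HAVE_HURR 1
    #define HAVE_DURR 0

    /* Old style: a typo just means the block is silently never built. */
    #ifdef CONFIG_HURRR     /* misspelled - no error, feature quietly gone */
        /* ... */
    #endif

    /* New style: the macro is always defined, so it works in plain C
     * expressions; a misspelling is an undeclared identifier and fails to
     * compile. In the preprocessor, -Wundef flags typos as well. */
    void maybe_init(void)
    {
        if (HAVE_HURR) {    /* 'if (HAVE_HUR)' would be a compile error */
            /* ...; dead-code-eliminated when the macro is 0 */
        }
    }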
Use a different algorithm to generate the dithering matrix. This
looks much better than the previous ordered dither matrix with its
cross-hatch artifacts.
The matrix generation algorithm as well as its implementation was
contributed by Wessel Dankers aka Fruit. The code in dither.c is
his implementation, reformatted and with static global variables
removed by me.
The new matrix is uploaded as a float texture - before this commit, it
was a normal integer fixed point matrix. This means dithering will
be disabled on systems without float textures.
The size of the dithering matrix can be configured, as the matrix is
generated at runtime. The generation of the matrix can take rather
long, and is already unacceptable with size 8. The default is at 6,
which takes about 100 ms on a Core2 Duo system with dither.c compiled
at -O2, which I consider just about acceptable.
The old ordered dithering is still available and can be selected with
the dither=ordered sub-option. The ordered dither matrix
generation code was moved to dither.c. This function was originally
written by Uoti Urpala.
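For reference, the classic recursive construction of such an ordered
(Bayer) matrix fits in a few lines; this is the textbook algorithm, not a
copy of the moved code:

    #include <assert.h>

    /* Fill an n x n ordered (Bayer) dither matrix with values 0..n*n-1.
     * n must be a power of two. Each doubling step spreads every existing
     * entry into a 2x2 block with offsets {0,2; 3,1}. */
    static void ordered_matrix(int *m, int n)
    {
        assert(n > 0 && (n & (n - 1)) == 0);
        m[0] = 0;
        for (int size = 1; size < n; size *= 2) {
            for (int y = 0; y < size; y++) {
                for (int x = 0; x < size; x++) {
                    int v = 4 * m[y * n + x];
                    m[y * n + x]                 = v + 0;
                    m[y * n + x + size]          = v + 2;
                    m[(y + size) * n + x]        = v + 3;
                    m[(y + size) * n + x + size] = v + 1;
                }
            }
        }
        /* For texture use, each entry is typically normalized to
         * (value + 0.5) / (n * n). */
    }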
Instead of having separate callbacks for each backend-handled feature
(like MPGLContext.fullscreen, MPGLContext.border, etc.), pass the
VOCTRL responsible for this directly to the backend. This allows
removing a bunch of callbacks that currently must be set even for
optional/lesser features (like VOCTRL_BORDER).
This requires changes to all VOs using gl_common, as well as all
backends that support gl_common.
Also introduce VOCTRL_CHECK_EVENTS. vo.check_events is now optional.
VO backends can use VOCTRL_CHECK_EVENTS instead of implementing
check_events. This has the advantage that the event handling code in
VOs doesn't have to be duplicated if vo_control() is used.
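In a VO this ends up as one dispatch point rather than a pile of callbacks;
purely illustrative (the names below are stand-ins, not the actual
gl_common code):

    #include <stdint.h>

    struct vo;      /* mpv's VO instance (opaque here) */

    /* Stand-ins for the real request IDs and return values: */
    enum { VOCTRL_CHECK_EVENTS = 1, VOCTRL_FULLSCREEN, VOCTRL_BORDER };
    enum { VO_TRUE = 1, VO_NOTIMPL = -1 };

    /* Hypothetical: hand the request to the gl_common backend. */
    static int forward_to_backend(struct vo *vo, uint32_t request, void *data)
    {
        (void)vo; (void)request; (void)data;
        return VO_TRUE;
    }

    static int control(struct vo *vo, uint32_t request, void *data)
    {
        switch (request) {
        case VOCTRL_CHECK_EVENTS:   /* instead of a vo->check_events callback */
        case VOCTRL_FULLSCREEN:
        case VOCTRL_BORDER:         /* previously separate MPGLContext hooks */
            return forward_to_backend(vo, request, data);
        default:
            return VO_NOTIMPL;
        }
    }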
This commit is a followup on the previous one and uses a solution I like more
since it totally decouples the Cocoa code from mpv's core and tries to emulate
a generic Cocoa application's lifecycle as much as possible without fighting
the framework.
mpv's main is executed in a pthread while the main thread runs the native cocoa
event loop.
Thread safety is mainly accomplished with additional logic in cocoa_common,
so as not to increase complexity in the cross-platform parts of the code.
Some OpenGL implementations on some platforms require that a context
is current only on one thread. For this reason, mpgl_lock() and
mpgl_unlock() take care of this as well for convenience.
Each backend that needs thread safety should provide its own locking
strategy inside `set_current`.
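For a backend on such a platform, the locking in set_current can be as
simple as the following sketch (not the actual Cocoa code):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t gl_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Hypothetical backend hook, called by mpgl_lock()/mpgl_unlock():
     * serialize access and make the context current only while held. */
    static void set_current(bool current)
    {
        if (current) {
            pthread_mutex_lock(&gl_lock);
            /* e.g. [NSOpenGLContext makeCurrentContext] or
             * glXMakeCurrent(...) goes here */
        } else {
            /* e.g. clear the current context here */
            pthread_mutex_unlock(&gl_lock);
        }
    }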
Do this instead of stuffing all x11/cocoa/win32/wayland specific code
into gl_common.c. The cocoa specific parts could probably go directly
into cocoa_common.m, possibly same with wayland.
Also redo how the list of backends is managed. Get rid of the GLTYPE_
constants. Instead of having a big switch() on GLTYPE_, each backend
entry has a function pointer to setup the MPGLContext callback (e.g.
mpgl_set_backend_x11()).
Apparently newer Mesa versions changed their <GL/glx.h> header, and
unconditionally define GLX_CONTEXT_MAJOR_VERSION_ARB and others. This
clashed with gl_header_fixes.h, a header which quarantines bad hacks
to make compilation possible on systems with outdated GL headers.
Specifically, our header was included before glx.h, so the hacks were
always active, and somehow Mesa's glx.h used to deal with this by not
redefining existing identifiers.
Fix the gl_header_fixes.h logic so the hacks are checked after including
glx.h.
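Schematically, the fix means the real header always gets the first word,
and the fallback defines only fill gaps (the two enums below are just
examples of what the fixes header provides):

    #include <GL/glx.h>

    /* gl_header_fixes.h, included *after* <GL/glx.h>: */
    #ifndef GLX_CONTEXT_MAJOR_VERSION_ARB
    #define GLX_CONTEXT_MAJOR_VERSION_ARB 0x2091
    #endif
    #ifndef GLX_CONTEXT_MINOR_VERSION_ARB
    #define GLX_CONTEXT_MINOR_VERSION_ARB 0x2092
    #endif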
This was done so because the X11 code had a hard to track down issue
with some window managers, and caused the VO window to be placed
incorrectly. This was fixed in the previous commit. Consequently, we can
remove this bad hack.
All Wayland-specific routines are placed in wayland_common.
This makes it easier to write other video outputs.
The EGL specific parts, as well as opengl context creation, are in gl_common.
This backend works for:
* opengl-old
* opengl
* opengl-hq
To use it, just specify the opengl backend
--vo=opengl:backend=wayland
or disable the x11 build.
Don't forget to set EGL_PLATFORM to wayland.
Co-Author: Scott Moreau
(Sorry I lost the old commit history due to the file structure changes)
create_window is really bad naming, because this function can be called
multiple times, while the name implies that it always creates a new
window. At least the name config_window is not actively misleading.
Allow the backend code to create a GL context on best effort basis,
instead of having to implement separate functions for each variation.
This means there's only a single create_window callback now. Also,
getFunctions() doesn't have the gl3 parameter anymore, which was
confusing and hard to explain.
create_window() tries to create a GL context of any version. The field
MPGLContext.requested_gl_version is taken as a hint whether a GL3 or a
legacy context is preferred. (This should be easy on all platforms.)
The cocoa part assumes that GL 3 is always available on
OSX 10.7.0 and higher, and fails miserably if it's not. One could try
to put more effort into probing the context, but apparently this
situation never happens, so don't bother. (And even if, mpv should be
able to fall back to vo_corevideo.)
The X11 part doesn't change much, but moving these functions around
makes the diff bigger.
Note about some corner cases:
This doesn't handle CONTEXT_FORWARD_COMPATIBLE_BIT_ARB on OpenGL 3.0
correctly. This was the one thing for which getFunctions() actually needed
the gl3 parameter, and we just make sure we never use forward compatible
contexts on 3.0. It should work with any version above (e.g. 3.1, 3.2
and 3.3 should be fine). This is because the GL_ARB_compatibility
extension is specified for 3.1 and up only. There doesn't seem to be
any way to detect presence of legacy GL on 3.0 with a forward
compatible context. As a countermeasure, remove the FORWARD_COMPATIBLE
flags from the win32 code. Maybe this will go wrong. (Should this
happen, the flag has to be added back, and the win32 code will have to
explicitly check for GL 3.0 and add "GL_ARB_compatibility" to the extra
extension string.)
Note about GLX:
Probing GL versions by trying to create a context on an existing window
was (probably) not always possible. Old code used GLX 1.2 to create
legacy contexts, and it required code different from GLX 1.3 even before
creation of the X window (the problem was the selection of the X Visual).
That's why there were two functions for window creation (create_window_old
and create_window_gl3). However, the legacy context creation code was
updated to GLX 1.3 in commit b3b20cc, so having different functions for
window creation is not needed anymore.
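Sketch of what the unified best-effort path amounts to on GLX (the real
code handles more attributes and fallbacks):

    #include <GL/glx.h>
    #include <string.h>

    typedef GLXContext (*glXCreateContextAttribsARBProc)(
            Display *, GLXFBConfig, GLXContext, Bool, const int *);

    /* Try a GL3 context if hinted and available, otherwise fall back to a
     * legacy context - both from the same GLX 1.3 GLXFBConfig. */
    static GLXContext create_context(Display *dpy, int screen,
                                     GLXFBConfig fbc, int want_gl3)
    {
        const char *exts = glXQueryExtensionsString(dpy, screen);
        glXCreateContextAttribsARBProc create_attribs =
            (glXCreateContextAttribsARBProc)glXGetProcAddressARB(
                (const GLubyte *)"glXCreateContextAttribsARB");

        if (want_gl3 && create_attribs && exts &&
            strstr(exts, "GLX_ARB_create_context"))
        {
            int attribs[] = {
                GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
                GLX_CONTEXT_MINOR_VERSION_ARB, 0,
                None
            };
            GLXContext ctx = create_attribs(dpy, fbc, NULL, True, attribs);
            if (ctx)
                return ctx;
        }
        /* Legacy context via GLX 1.3; no separate GLX 1.2 path needed. */
        return glXCreateNewContext(dpy, fbc, GLX_RGBA_TYPE, NULL, True);
    }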
For 9-15 bit material, cutting off the lower bits leads to significant
quality reduction, because these formats leave the most significant bits
unused (e.g. 10 bit padded to 16 bit, transferred as 8 bit -> only
2 bits left). 16 bit formats can still be played like this, as cutting
the lower bits merely reduces quality in this case.
This problem was encountered with the following GPU/driver combination:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) 915GM x86/MMX/SSE2
OpenGL version string: 1.4 Mesa 9.0.1
It appears 16 bit support is rather common on GPUs, so testing the
actual texture depth wasn't needed until now. (There are some other Mesa
GPU/driver combinations which support 16 bit only when using RG textures
instead of LUMINANCE_ALPHA. This is due to OpenGL driver bugs.)
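One way to probe the effective depth without allocating anything is a proxy
texture; a sketch (the real check also has to try the RG variants mentioned
above):

    #include <GL/gl.h>

    /* Ask how many bits per component a 16 bit luminance texture would
     * actually get. If the driver reports only 8, then 9-15 bit formats
     * would lose most of their significant bits (see above) and have to
     * be rejected; true 16 bit formats merely lose quality. */
    static int probe_tex_depth(void)
    {
        GLint bits = 0;
        glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_LUMINANCE16, 64, 64, 0,
                     GL_LUMINANCE, GL_UNSIGNED_SHORT, NULL);
        glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                                 GL_TEXTURE_LUMINANCE_SIZE, &bits);
        return bits;
    }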
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.
The two commits are separate, because git is bad at tracking renames
and content changes at the same time.
Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
This drops the silly lib prefixes, and attempts to organize the tree in
a more logical way. Make the top-level directory less cluttered as
well.
Renames the following directories:
libaf -> audio/filter
libao2 -> audio/out
libvo -> video/out
libmpdemux -> demux
Split libmpcodecs:
vf* -> video/filter
vd*, dec_video.* -> video/decode
mp_image*, img_format*, ... -> video/
ad*, dec_audio.* -> audio/decode
libaf/format.* is moved to audio/ - this is similar to how mp_image.*
is located in video/.
Move most top-level .c/.h files to core. (talloc.c/.h is left on top-
level, because it's external.) Park some of the more annoying files
in compat/. Some of these are relics from the time mplayer used
ffmpeg internals.
sub/ is not split, because it's too much of a mess (subtitle code is
mixed with OSD display and rendering).
Maybe the organization of core is not ideal: it mixes playback core
(like mplayer.c) and utility helpers (like bstr.c/h). Should the need
arise, the playback core will be moved somewhere else, while core
contains all helper and common code.