Changed the references from --gpu-gamma to --gamma-factor and from
--post-shader to --glsl-shaders, to reflect the actual option names.
Originally mpv vaapi support was based on the MPlayer-vaapi patches.
These were never merged in upstream MPlayer. The license headers
indicated they were GPL-only. Although the actual author agreed to
relicensing, the company employing him to write this code did not, so
the original code is unusable to us.
Fortunately, vaapi support was refactored and rewritten several times,
meaning little code is actually left. The previous commits removed that
code or moved it into GPL-only files; namely, vo_vaapi.c remains GPL-only. The
other code went away or became unnecessary mainly because libavcodec
itself gained the ability to manage the hw decoder, and libavutil
provides code to manage vaapi surfaces. We also changed to mainly using
EGL interop, making any of the old rendering code unnecessary.
hwdec_vaglx.c is still GPL. It's possibly relicensable, because much of
it was changed, but I'm not too sure and further investigation would be
required. Also, this has been disabled by default for a while now, so
bothering with this is a waste of time. This commit simply disables it
at compile time as well in LGPL mode.
Done for license reasons. vo_vaapi.c is turned into some kind of
dumpster fire, and we'll remove it as soon as I'm mentally ready for
unkind users to complain about removal of this old POS.
This is for relicensing. Some of this code is loosely based on
vo_vaapi.c from the original MPlayer-vaapi patches. Most of the code has
changed, and only the initialization code and check_va_status() look
remotely similar. The initialization code is changed to be like Libav's
(hwcontext_vaapi.c). check_va_status() is just a C idiom, but to play it
safe, we'll either drop it from LGPL code or recreate it.
vaapi.c still contains plenty of code from the original patches, but the
next commits will move it out of the LGPL code paths.
Seems to be fixed upstream in the nvidia driver, so it's probably a good
idea to 1. force the layout and 2. remove the warning, as it now
actually works. Users with older drivers would run into errors, but they
can still use shaderc as a replacement. (And it's not like the old
status quo was any better)
This was always set to the length of the VAO, but it should have been
set to the number of vertex attribs actually in use for this frame. No
idea how that managed to survive the test framework on nvidia/linux, but
ANGLE caught it.
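A minimal sketch of the bug class, with hypothetical names (this is not
the actual mpv code, and it assumes a GL 3+ loader is in place):

    // Hypothetical illustration of the fix.
    static void enable_used_attribs(GLuint vao, int num_used_attribs)
    {
        glBindVertexArray(vao);
        // The bug: looping up to the full VAO length enabled attribs
        // that had no data bound for this frame, which ANGLE rejected.
        // The fix: only enable the attribs actually in use.
        for (GLuint i = 0; i < (GLuint)num_used_attribs; i++)
            glEnableVertexAttribArray(i);
    }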
This has several advantages:
1. no more redundant texcoords when we don't need them
2. no more arbitrary limit on how many textures we can bind
3. (that extends to user shaders as well)
4. no more arbitrary limits on tscale radius
To realize this, the VAO was moved from a hacky stateful approach
(gl_sc_set_vertex_attribs) - which always bothered me since it was
required for compute shaders as well even though they ignored it - to be
a proper parameter of gl_sc_dispatch_draw, and internally plumbed into
gl_sc_generate, which will make a (properly mangled) deep copy into
params.vertex_attribs.
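As a rough sketch of the shape of this change (simplified stand-in
types; the real mpv signatures differ, and the real deep copy also
mangles the attrib names):

    #include <stdlib.h>
    #include <string.h>

    // Simplified stand-ins for illustration only.
    struct vertex_attrib { const char *name; int offset; };

    struct pass_params {
        struct vertex_attrib *vertex_attribs; // owned deep copy
        int num_vertex_attribs;
    };

    // New approach: the attribs travel with the dispatch call and get
    // deep-copied into the pass params, instead of living in shader
    // cache state that compute passes had to set but ignored.
    static void generate_pass(struct pass_params *params,
                              const struct vertex_attrib *attribs, int num)
    {
        params->vertex_attribs = calloc(num, sizeof(*attribs));
        memcpy(params->vertex_attribs, attribs, num * sizeof(*attribs));
        params->num_vertex_attribs = num;
    }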
Apparently this filter is broken in a weird way, which even makes some
libavfilter functions segfault in certain conditions. Don't waste time
with it and just remove the examples.
Also adjust the "life" example description (certainly this filter is
100% worthless, but the example does demonstrate how to use source
filters without any available input).
The case at hand was 5.1 -> fl-fr-fc-lfe-na-na (apparently triggered by
ALSA). That means only the NA channels have to be cleared, but the
result was actually that fc and lfe were cleared. This is due to a
simple regression in the reorder code, which quite obviously got the
index of the first NA channel wrong.
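Illustrative sketch of the intended behavior (stand-in types and names,
not mpv's actual reorder code):

    // Stand-in for mpv's NA speaker ID; illustration only.
    #define SPEAKER_NA (-1)

    // Only channels mapped to NA should be zeroed. The regression
    // started clearing at the wrong index, so fc and lfe got cleared
    // instead of the NA channels.
    static void clear_na_channels(const int *speakers, float **planes,
                                  int num_channels, int num_samples)
    {
        for (int ch = 0; ch < num_channels; ch++) {
            if (speakers[ch] == SPEAKER_NA) {
                for (int s = 0; s < num_samples; s++)
                    planes[ch][s] = 0;
            }
        }
    }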
FlagBits is just the name of the enum. The actual data type representing
a combination of these flags follows the *Flags convention. (The
relevant difference is that the latter is defined to be uint32_t instead
of being left implicit)
For consistency, use *Flags everywhere instead of randomly switching
between *Flags and *FlagBits.
Also fix a wrong type name on `stageFlags`, pointed out by @atomnuker
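For illustration, with the actual Vulkan types:

    #include <vulkan/vulkan.h>

    void flags_example(void)
    {
        // VkShaderStageFlagBits names the enum constants;
        // VkShaderStageFlags (uint32_t) holds a combination of them.
        VkShaderStageFlags stages =
            VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT;

        VkDescriptorSetLayoutBinding binding = {
            .binding = 0,
            .descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
            .descriptorCount = 1,
            .stageFlags = stages, // VkShaderStageFlags, not *FlagBits
        };
        (void)binding;
    }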
Using renderpass layout transitions is more optimal and doesn't require
a redundant pipeline barrier.
Since our render passes are static and don't change throughout the
lifetime of a ra_renderpass, we unfortunately don't have much
flexibility here - so just hard-code SHADER_READ_ONLY_OPTIMAL as the
output layout, as this will be the most common case.
We also can't short-circuit the transition when we need to preserve the
framebuffer contents, since that depends on the current layout; so we
still use an explicit tex_barrier in this case. (Most optimal for this
scenario would be an input attachment anyway)
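Roughly what this looks like in the attachment description (the values
here are illustrative):

    #include <vulkan/vulkan.h>

    // The render pass itself performs the layout transition, so no
    // separate pipeline barrier is needed afterwards.
    static const VkAttachmentDescription att = {
        .format = VK_FORMAT_R16G16B16A16_UNORM, // example format
        .samples = VK_SAMPLE_COUNT_1_BIT,
        .loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
        .storeOp = VK_ATTACHMENT_STORE_OP_STORE,
        .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
        .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
        // When preserving the framebuffer contents, initialLayout must
        // instead match the texture's current layout, which is why the
        // explicit tex_barrier stays for that case.
        .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
        .finalLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    };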
auto-copy selects more modes than the ones listed. It will always be
outdated anyway.
The GLX vaapi backend is never selected anymore, because it sucks.
Now you need FFmpeg git, or something.
This also gets rid of the last real use of gpu_memcpy(). libavutil does
that itself. (vaapi.c still used it, but it was essentially unused,
because the code path isn't really in use anymore. It wasn't even
included due to the d3d-hwaccel dependency in wscript.)
This is apparently required to get storage images working on
windows/vulkan, and probably good practice either way. Not entirely sure
if it's the best idea to always store the value as a 32-bit float,
but it should hardly matter in practice (since we're only writing one
sample per thread).
(Leaving them implicit requires the shaderStorageImageWriteWithoutFormat
feature to be enabled, which the windows nvidia vulkan driver doesn't
support, at least not for a GTX 670)
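A sketch of the idea; the emitted GLSL declaration is the relevant
part, the C helper itself is hypothetical:

    #include <stdio.h>

    // Hypothetical helper: emit a storage image declaration with an
    // explicit format qualifier, so that image writes don't require
    // the shaderStorageImageWriteWithoutFormat feature.
    static void emit_image_decl(FILE *out, int binding, const char *name)
    {
        // rgba32f mirrors the "always store as 32-bit float" choice.
        fprintf(out,
            "layout(binding=%d, rgba32f) writeonly uniform image2D %s;\n",
            binding, name);
    }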
This makes the radeon driver shut up about frequently updating
STATIC_DRAW UBOs (--opengl-debug), and also reduces the amount of
synchronization necessary for vulkan uniform buffers.
Also add some extra debugging/tracing code paths. I went with a
flags-based approach in case we ever want to extend this.
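Illustrative GL-side fix (simplified; the actual change goes through the
flags mentioned above, and this assumes GL 3.1+ headers):

    // A UBO rewritten every frame should get a dynamic usage hint;
    // GL_STATIC_DRAW is what the radeon driver was complaining about.
    static GLuint create_frame_ubo(GLsizeiptr size)
    {
        GLuint ubo;
        glGenBuffers(1, &ubo);
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferData(GL_UNIFORM_BUFFER, size, NULL, GL_DYNAMIC_DRAW);
        glBindBuffer(GL_UNIFORM_BUFFER, 0);
        return ubo;
    }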
In addition to the built-in nvidia compiler, we now also support a
backend based on libshaderc. shaderc is sort of like glslang except it
has a C API and is available as a dynamic library.
The generated SPIR-V is now cached alongside the VkPipeline in the
cached_program. We use a special cache header to ensure validity of this
cache before passing it blindly to the vulkan implementation, since
passing invalid SPIR-V can cause all sorts of nasty things. It's also
designed to self-invalidate if the compiler gets better, by offering a
catch-all `int compiler_version` that implementations can use as a cache
invalidation marker.
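A hypothetical sketch of such a cache header (the real layout in mpv
may differ):

    #include <stdint.h>
    #include <stddef.h>

    struct spirv_cache_header {
        uint32_t magic;        // identifies the blob as our cache
        int compiler_id;       // which compiler produced the SPIR-V
        int compiler_version;  // catch-all invalidation marker: bump
                               // it and all older entries are ignored
        size_t spirv_size;     // length of the SPIR-V payload
    };

    // On load: reject the cache unless magic, compiler_id and
    // compiler_version all match what the current build would produce.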
This time based on ra/vo_gpu. 2017 is the year of the vulkan desktop!
Current problems / limitations / improvement opportunities:
1. The swapchain/flipping code violates the vulkan spec, by assuming
that the presentation queue will be bounded (in cases where rendering
is significantly faster than vsync). But apparently, there's simply
no better way to do this right now, to the point where even the
stupid cube.c examples from LunarG etc. do it wrong.
(cf. https://github.com/KhronosGroup/Vulkan-Docs/issues/370)
2. The memory allocator could be improved. (This is a universal
constant)
3. Could explore using push descriptors instead of descriptor sets,
especially since we expect to switch descriptors semi-often for some
passes (like interpolation). Probably won't make a difference, but
the synchronization overhead might be a factor. Who knows.
4. Parallelism across frames / async transfer is not well-defined; we
either need to use a better semaphore / command buffer strategy or a
resource pooling layer to safely handle cross-frame parallelism.
(That said, I gave resource pooling a try and was not happy with the
result at all - so I'm still exploring the semaphore strategy)
5. We aggressively use pipeline barriers where events would offer a much
more fine-grained synchronization mechanism. As a result of this, we
might be suffering from GPU bubbles due to too-short dependencies on
objects. (That said, I'm also exploring the use of semaphores as an
ordering tactic which would allow cross-frame time slicing in theory)
Some minor changes to the vo_gpu and infrastructure, but nothing
consequential.
NOTE: For safety, all use of asynchronous commands / multiple command
pools is currently disabled completely. There are some left-over relics
of this in the code (e.g. the distinction between dev_poll and
pool_poll), but that is kept in place mostly because this will be
re-extended in the future (vulkan rev 2).
The queue count is also currently capped to 1, because the lack of
cross-frame semaphores means we need the implicit synchronization from
the same-queue semantics to guarantee a correct result.
Under some conditions, hide_osc() was calling render(), which then called
hide_osc() again, and so forth, until the stack overflowed.
Tracking the exact conditions where this happens (and then managing them
to prevent it) is an exercise in futility.
Remove the osc directly, instead of going through the entire rendering
procedure just to end up rendering nothing.
Fixes #4900.
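Illustrative C sketch of the shape of the fix; the real code lives in
the OSC script, and the names other than hide_osc()/render() are
hypothetical:

    static void remove_osc_overlay(void) { /* drop the OSC overlay */ }

    static void hide_osc(void)
    {
        // before: render();  // could re-enter hide_osc() and recurse
        //                    // until the stack overflowed
        remove_osc_overlay(); // after: remove the osc directly
    }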
Also implicitly fixes a memory leak when mp.set_property_native was used,
because the cleanup did not expect more allocations from the accidental
use of mpv_get_property.
Tested by making the ra_tex_resize function always fail (apart from the
initial FBO check). This required a few changes:
1. reset shaders on failed dispatch
2. reset cleanup binds on failed dispatch
3. fall back to initializing the struct image to 1x1 on failure
4. handle output_fbo_valid gracefully
This was sort of grating by default and made it really hard to actually
read e.g. text on top of a transparent background. I decided to approach
the problem from both directions, making the whites darker and the grays
lighter. This brings it closer to the dynamic range of e.g. the
wikipedia transparent svg preview.
Due to the plethora of historical baggage from different eras getting
confusing, I decided to simplify and unify the struct organization and
naming scheme.
Structs that got renamed:
1. fbodst -> ra_fbo (and moved to gpu/context.h)
2. fbotex -> removed (redundant after 2af2fa7a)
3. fbosurface -> surface
4. img_tex -> image
In addition to these structs being renamed, all of the names have been
made consistent. The new scheme is as follows:
struct image img;
struct ra_tex *tex;
struct ra_fbo fbo;
This also affects derived names, e.g. indirect_fbo -> indirect_tex.
Notably also, finish_pass_fbo -> finish_pass_tex and finish_pass_direct
-> finish_pass_fbo.
The new equivalent of fbotex_change() is called ra_tex_resize().
This commit (should) contain no logic changes, just renaming a bunch of
crap.
I've observed the garbage pixels in more scenarios. They also were never
really needed to begin with, originally being a work-around discovered
for a bug that we have since fixed anyway. It doesn't really seem to even
help resizing, since the OpenGL drivers are all smart enough to pool
resources internally anyway.
Fixes#1814
Enabling double buffering fixed some graphical glitches when entering
fullscreen, but it also caused a fullscreen performance regression. We
decided that the glitches were preferable to the performance regression.
This reverts commit cee764849e.
The code used ra_ctx_destroy even though ra_ctx_create was never called
(since it's just a dummy ctx), which led to a conflict of assumptions.
The proper fix is to only use ra_gl_ctx_uninit (mirroring the
ra_gl_ctx_init) and free the dummy ctx manually.
Fixes https://github.com/cmdrkotori/mpc-qt/issues/129