In addition to the built-in nvidia compiler, we now also support a
backend based on libshaderc. shaderc is sort of like glslang except it
has a C API and is available as a dynamic library.
The generated SPIR-V is now cached alongside the VkPipeline in the
cached_program. We use a special cache header to ensure the validity of
this cache before passing it blindly to the vulkan implementation, since
passing invalid SPIR-V can cause all sorts of nasty behavior. The cache
is also designed to self-invalidate if the compiler gets better, by
offering a catch-all `int compiler_version` that implementations can use
as a cache invalidation marker.
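Roughly, the header amounts to something like this (a sketch; the field
names are illustrative, not the actual layout):

    #include <stdbool.h>
    #include <string.h>

    // Sketch: reject cached SPIR-V unless every header field matches
    // what the current build would produce.
    struct spirv_cache_header {
        int cache_version;      // layout version of this header itself
        int compiler;           // which backend produced the SPIR-V
        int compiler_version;   // catch-all: bump it to invalidate all
                                // caches produced by older compilers
    };

    static bool cache_header_ok(const struct spirv_cache_header *got,
                                const struct spirv_cache_header *want)
    {
        return memcmp(got, want, sizeof(*got)) == 0;
    }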
This time based on ra/vo_gpu. 2017 is the year of the vulkan desktop!
Current problems / limitations / improvement opportunities:
1. The swapchain/flipping code violates the vulkan spec, by assuming
that the presentation queue will be bounded (in cases where rendering
is significantly faster than vsync). But apparently, there's simply
no better way to do this right now, to the point where even the
stupid cube.c examples from LunarG etc. do it wrong.
(cf. https://github.com/KhronosGroup/Vulkan-Docs/issues/370)
2. The memory allocator could be improved. (This is a universal
constant)
3. Could explore using push descriptors instead of descriptor sets,
especially since we expect to switch descriptors semi-often for some
passes (like interpolation). Probably won't make a difference, but
the synchronization overhead might be a factor. Who knows.
4. Parallelism across frames / async transfer is not well-defined; we
either need to use a better semaphore / command buffer strategy or a
resource pooling layer to safely handle cross-frame parallelism.
(That said, I gave resource pooling a try and was not happy with the
result at all - so I'm still exploring the semaphore strategy)
5. We aggressively use pipeline barriers where events would offer a much
more fine-grained synchronization mechanism (see the sketch after this
list). As a result, we might be suffering from GPU bubbles due to
too-short dependencies on objects. (That said, I'm also exploring the
use of semaphores as an ordering tactic which would allow cross-frame
time slicing in theory)
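For reference, a rough sketch of the API-level difference item 5 is
talking about (command buffer, stage masks and barrier contents are all
assumed/simplified):

    #include <vulkan/vulkan.h>

    // A pipeline barrier splits the command stream at a single point:
    // all prior work in srcStageMask must finish before any later work
    // in dstStageMask may start.
    static void sync_with_barrier(VkCommandBuffer cmd,
                                  const VkImageMemoryBarrier *imb)
    {
        vkCmdPipelineBarrier(cmd,
                             VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                             VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                             0, 0, NULL, 0, NULL, 1, imb);
    }

    // An event decouples the two halves: signal where the dependency is
    // produced, wait where it's consumed, and unrelated commands
    // recorded in between may overlap freely.
    static void sync_with_event(VkCommandBuffer cmd, VkEvent event,
                                const VkImageMemoryBarrier *imb)
    {
        vkCmdSetEvent(cmd, event, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT);
        // ... other, independent commands go here ...
        vkCmdWaitEvents(cmd, 1, &event,
                        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                        0, NULL, 0, NULL, 1, imb);
    }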
Some minor changes to the vo_gpu and infrastructure, but nothing
consequential.
NOTE: For safety, all use of asynchronous commands / multiple command
pools is currently disabled completely. There are some left-over relics
of this in the code (e.g. the distinction between dev_poll and
pool_poll), but that is kept in place mostly because this will be
re-extended in the future (vulkan rev 2).
The queue count is also currently capped to 1, because the lack of
cross-frame semaphores means we need the implicit synchronization from
the same-queue semantics to guarantee a correct result.
Tested by making the ra_tex_resize function always fail (apart from the
initial FBO check). This required a few changes:
1. reset shaders on failed dispatch
2. reset cleanup binds on failed dispatch
3. fall back to initializing the struct image to 1x1 on failure
4. handle output_fbo_valid gracefully
This was sort of grating by default and made it really hard to actually
read e.g. text on top of a transparent background. I decided to approach
the problem from both directions, making the whites darker and the grays
lighter. This brings it closer to the dynamic range of e.g. the
wikipedia transparent svg preview.
Due to the plethora of historical baggage from different eras getting
confusing, I decided to simplify and unify the struct organization and
naming scheme.
Structs that got renamed:
1. fbodst -> ra_fbo (and moved to gpu/context.h)
2. fbotex -> removed (redundant after 2af2fa7a)
3. fbosurface -> surface
4. img_tex -> image
In addition to these structs being renamed, all of the names have been
made consistent. The new scheme is as follows:
    struct image img;
    struct ra_tex *tex;
    struct ra_fbo fbo;
This also affects derived names, e.g. indirect_fbo -> indirect_tex.
Notably also, finish_pass_fbo -> finish_pass_tex and finish_pass_direct
-> finish_pass_fbo.
The new equivalent of fbotex_change() is called ra_tex_resize().
This commit (should) contain no logic changes, just renaming a bunch of
crap.
I've observed the garbage pixels in more scenarios. They were also never
really needed to begin with, originally being a discovered work-around
for a bug that we have since fixed anyway. Doesn't really seem to even
help resizing, since the OpenGL drivers are all smart enough to pool
resources internally.
Fixes #1814
Enabling double buffering fixed some graphical glitches when entering
fullscreen, but it also caused a fullscreen performance regression. We
decided that the glitches were preferable to the performance regression.
This reverts commit cee764849e.
The code used ra_ctx_destroy even though ra_ctx_create was never called
(since it's just a dummy ctx), which led to a conflict of assumptions.
The proper fix is to only use ra_gl_ctx_uninit (mirroring the
ra_gl_ctx_init) and free the dummy ctx manually.
Fixes https://github.com/cmdrkotori/mpc-qt/issues/129
We want e.g. --opengl-shaders-append=foo to resolve to the new option,
all while printing an option name. --opengl-shader is a similar case.
These options are special, because they apply "actions" on actual
options by specifying a suffix. So the alias/deprecation handling has to
be part of resolving the actual option from prefix and suffix.
Turns out the option code apparently tries to directly talloc_free() the
allocated strings, instead of going through a tactx wrapper or
something. So we can't directly overwrite it. Do something else
instead.
Almost as fast as the old code, but more general. Notably, glslang
doesn't support nested arrays.
(cf. https://github.com/KhronosGroup/glslang/issues/1057)
Also much cleaner code-wise, so I think I'll keep it even if glslang
implements array_of_arrays.
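The workaround is the usual one: flatten the array-of-arrays into a
single array and do the index arithmetic by hand. Schematically (plain C
for illustration; the real change is in the generated GLSL):

    // Instead of a nested array (which glslang rejects):
    //     float w[ROWS][COLS];   ->   w[i][j]
    // use a flat array with manual, row-major indexing:
    #define ROWS 4
    #define COLS 4
    static float w[ROWS * COLS];

    static float get_w(int i, int j)
    {
        return w[i * COLS + j];   // row i, column j
    }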
This never really made sense since the BT.1886 changes. It should get
*brighter* for bright rooms, not darker for dark rooms. Picked some new
values that seemed reasonable-ish.
This causes a performance regression on 10.11 and newer, but the single
buffered method was broken and could cause partially rendered frames to
be presented to the screen.
This reverts 9f30cd8292 and
e543853a7f.
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved to
ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
See "Copyright" file for caveats.
This changes the remaining "almost LGPL" files to LGPL, because we think
that the conditions the author set for these were finally fulfilled.
This is "wrong", because you might want mp_image_copy_attributes() to
preserve the information that the colorspace parameters are unknown.
This is important for hwdec -copy modes, which call this function before
fix_image_params() and mp_colorspace_merge() are called.
Instead, just wipe the colorspace attributes if the pixel format changes
in an apparently incompatible way. Use mp_image_params_guess_csp() logic
for this and factor that into its own function.
mp_image_set_attributes() attempts to do something similar, so change
that in the same way. Also, mp_image_params_guess_csp() just returned if
the imgfmt was invalid or unset - just remove that part, because it
annoyingly doesn't fit into the new code, and had little reason to exist
to begin with. (Probably.)
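Schematically, the new logic amounts to something like this (a sketch
against mpv's types; is_yuv_class() stands in for the real format
classification):

    // Wipe color metadata when the format class changes in an
    // incompatible way, then re-guess it.
    static void set_new_format(struct mp_image_params *p, int new_imgfmt)
    {
        if (is_yuv_class(p->imgfmt) != is_yuv_class(new_imgfmt)) {
            // e.g. YUV -> RGB: the old colorspace parameters are
            // nonsense for the new format, so reset them first.
            p->color = (struct mp_colorspace){0};
        }
        p->imgfmt = new_imgfmt;
        mp_image_params_guess_csp(p);
    }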
I see no reason not to do this. I think the check comes from the time
when mp_image stored the image aspect ratio, instead of the pixel aspect
ratio, where the logic might have made more sense.
It was noticed that -copy hwdec modes typically dropped the
chroma_location field. This happened because the attributes on hw
download are copied with mp_image_copy_attributes(), which tries to copy
these parameters only if src and dst were both YUV (in an attempt to
copy parameters only if it makes sense).
But hardware formats did not have the YUV flag set (anymore?), and code
shouldn't attempt to check the flag in this way anyway. Drop the check,
and always copy the whole color metadata struct. There is a call to
mp_image_params_guess_csp() below, which tries to unset nonsense
metadata if it was copied from a YUV format to RGB. This function would
also do the right thing for hw formats (although for the cited bug only
the software case matters).
Fixes #4804.
This reverts commit 96462040ec.
I guess the autoprobing is still too primitive to handle this well. What
it really should be trying is initializing the wrapper decoder, and if
that doesn't work, try another method. This is complicated by hwaccels
initializing in a delayed way, so there is no easy solution yet.
Probably fixes #4865.
This overrides the use of GLX_SGI_swap_control, because apparently
GLX_SGI_swap_control doesn't support SwapInterval(0), but
GLX_MESA_swap_control does.
Of course, everybody except mesa just accepts SwapInterval(0) even for
GLX_SGI_swap_control, but mesa needs to be the special snowflake here
and reject it, forcing us to load their stupid named extension instead.
Meanwhile khronos has done nothing except spit out GLX_EXT_swap_control
(not to be confused with GL_EXT_swap_control, which is exported by
WGL_EXT_swap_control), which doesn't fix the problem because mesa doesn't
implement it anyway.
What a fucking mess.
Even if the contents are entirely zero. In the current code, these
entries were left uninitialized. (Which always worked for nvidia - but
randomly blew up for AMD)
This is simultaneously generalized into two directions:
1. Support more sc_uniform types (needed for SC_UNIFORM_TYPE_PUSHC)
2. Support more flexible packing (needed for both PUSHC and ra_d3d11)
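To illustrate what "more flexible packing" has to deal with, here is
std140-style member placement sketched in C (floats and vecN of floats
only; an example of the kind of rules involved, not the actual sc code):

    #include <stddef.h>

    static size_t align_up(size_t offset, size_t align)
    {
        return (offset + align - 1) / align * align;
    }

    // Returns the byte offset at which a float/vec2/vec3/vec4 member
    // lands, and advances *offset past it. Under std140, a vec3 aligns
    // like a vec4 (16 bytes) but only occupies 12.
    static size_t place_member(size_t *offset, int vec_size /* 1..4 */)
    {
        size_t scalar = 4;   // 32-bit float
        size_t align = (vec_size == 3 ? 4 : vec_size) * scalar;
        *offset = align_up(*offset, align);
        size_t at = *offset;
        *offset += vec_size * scalar;
        return at;
    }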
This is around 512 kB, which is just way too much. Heap-allocate it
instead. Also cut down the max pass count to 64, since 128 was
unrealistically high even for vo_opengl.
Instead of relying on power-of-two buffer sizes and unsigned overflow,
make this code more robust (and also cleaner).
Why can't C get a real modulo operator?
This was there even before the refactor, but the refactor exposed the
bug. I hate C's useless fucking modulo operator so much. I've gotten hit
by this exact bug way too many times.
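For the record, the standard C workaround for a floored ("real") modulo:

    // C's % is a remainder that follows the sign of the dividend, so
    // e.g. -1 % 8 == -1, not 7. Ring buffer indices want the
    // always-non-negative result instead:
    static int mod_floor(int a, int b)   // assumes b > 0
    {
        return (a % b + b) % b;          // mod_floor(-1, 8) == 7
    }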
Since the addition of UBOs, the assumption that the uniform index
corresponds to the pass->params.inputs index is no longer true. Also,
there's no reason it would even need this - since the `input` is also
available directly in sc_uniform.
I have no idea how I've been using this code for as long as I have
without any segfaults until earlier today.
This was needlessly complicated and prone to breakage, because even the
references to the ring buffer could end up getting invalidated and
containing garbage data on e.g. shader cache flush. For much the same
reason why we can't keep around the *timer_pool, we're also forced to
hard-copy the entire sample buffer per pass per frame.
Not a huge deal, though. This is, what, a few kB per frame? We have more
pressing CPU performance concerns anyway.
Also simplified/fixed some other code.
This clearly highlights all out-of-gamut/clipped pixels. (Either too
bright or too saturated)
Has some (documented) caveats. Also make TONE_MAPPING_CLIP stop actually
clamping the value range (it's unnecessary and breaks this feature).
Redefining texture1D / texture3D seems to be illegal; they are already
built-in macros or something. So just use tex1D and tex3D instead.
Additionally, GL_KHR_vulkan_glsl requires using explicit vertex
locations and bindings, so make some changes to facilitate this. (It
also requires explicitly setting location=0 for the color attachment
output)
Vulkan compat. rgb16 doesn't exist on hardware anyway, might as well
just generate the 3DLUT against rgba16 as well. We've decided this is
the simplest way to do vulkan compatibility: just make sure we never
actually need 3-component textures.
This is mostly done so we can support using textures with more
components than the scaler LUTs have entries. But while we're at it,
also change the way the weights are packed so that they're always
sequential with no gaps. This allows us to simplify
pass_sample_separated_get_weights as well.
Mouse wheel bindings have always been a cause of user confusion.
Previously, on Wayland and macOS, precise touchpads would generate AXIS
keycodes and notched mouse wheels would generate mouse button keycodes.
On Windows, both types of device would generate AXIS keycodes and on
X11, both types of device would generate mouse button keycodes. This
made it pretty difficult for users to modify their mouse-wheel bindings,
since it differed between platforms and in some cases, between devices.
To make it more confusing, the keycodes used on Windows were changed in
18a45a42d5 without a deprecation period or adequate communication to
users.
This change aims to make mouse wheel binds less confusing. Both the
mouse button and AXIS keycodes are now deprecated aliases of the new
WHEEL keycodes. This will technically break input configs on Wayland and
macOS that assign different commands to precise and non-precise scroll
events, but this is probably uncommon (if anyone does it at all) and I
think it's a fair tradeoff for finally fixing mouse wheel-related
confusion on other platforms.
It seems like the Cocoa backend used to return the same mpv keycodes for
mouse back/forward as it did for scrolling up and down. Fix this by
explicitly mapping all Cocoa button numbers to the right mpv keycodes.
mpv's mouse button numbering is based on X11 button numbering, which
allows for an arbitrary number of buttons and includes mouse wheel input
as buttons 3-6. This button numbering was used throughout the codebase
and exposed in input.conf, and it was difficult to remember which
physical button each number actually referred to and which referred to
the scroll wheel.
In practice, PC mice only have between two and five buttons and one or
two scroll wheel axes, which are more or less in the same location and
have more or less the same function. This allows us to use names to
refer to the buttons instead of numbers, which makes input.conf syntax a
lot easier to remember. It also makes the syntax robust to changes in
mpv's underlying numbering. The old MOUSE_BTNx names are still
understood as deprecated aliases of the named buttons.
This changes both the input.conf syntax and the MP_MOUSE_BTNx symbols in
the codebase, since I think both would benefit from using names over
numbers, especially since some platforms don't use X11 button numbering
and handle different mouse buttons in different windowing system events.
This also makes the names shorter, since otherwise they would be pretty
long, and it removes the high-numbered MOUSE_BTNx_DBL names, since they
weren't used.
Names are the same as used in Qt:
https://doc.qt.io/qt-5/qt.html#MouseButton-enum
If a VO-area option changes, gl_video_resize() is called
unconditionally. This function does something even if the size does not
change (at least it discards buffered frames for interpolation), which
can lead to stutter when you keep firing option change events during
playback.
Check for an actual resize, and if nothing changes, exit early.
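The check amounts to something like this (a self-contained sketch;
mpv's actual structs and fields differ):

    #include <stdbool.h>

    struct rect { int x0, y0, x1, y1; };
    struct renderer { struct rect src_rect, dst_rect; /* ... */ };

    static bool rect_equals(const struct rect *a, const struct rect *b)
    {
        return a->x0 == b->x0 && a->y0 == b->y0 &&
               a->x1 == b->x1 && a->y1 == b->y1;
    }

    static void video_resize(struct renderer *p, const struct rect *src,
                             const struct rect *dst)
    {
        if (rect_equals(&p->src_rect, src) &&
            rect_equals(&p->dst_rect, dst))
            return;   // nothing changed: keep interpolation buffers
        p->src_rect = *src;
        p->dst_rect = *dst;
        // ... discard buffered frames, recompute scaling, etc. ...
    }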
Could cause a crash if anything called ra_get_imgfmt_desc(imgfmt=0). Let
it fail correctly. This can happen if a hwdec backend does not set
hw_subfmt correctly.
This also introduces RA_CAP_GLOBAL_UNIFORM. If this is not set, UBOs
*must* be used for non-bindings. Currently the cap is ignored though,
and the shader_cache *always* generates UBO-using code where it can.
Could be made an option in principle.
Only enabled for drivers new enough to support explicit UBO offsets,
just in case...
No change to performance, which is probably what we expect.
This no longer concerns the API user except insofar as the API user
probably wants to know whether or not PBOs are active, so keep around
the CAP field even though it's mostly useless now.
Both vulkan and opengl distinguish between rendering to an image and
using an image as a storage attachment. So make this an explicit
capability instead of lumping it in with render_dst. (That way we could
support, for example, using an image as a storage attachment without
requiring a framebuffer)
The real reason for this change is that you can directly use the output
FBO as a storage attachment on vulkan but you can't on opengl, which
makes this param strictly separate from render_dst.
I don't like the feeling of "reusing" the int binding for this. It
feels... wrong, somehow. I'd prefer to use an explicit "offset" field.
(Plus, I might re-use this for uniform buffers or something)
YMMV
This is needed for HAVE_SSE4_INTRINSICS. config.h used to be included as
a transitive dependency of vf.h, but the include statement was removed
from vf.h in 8f2ccba71b.
Also silence an unused variable warning that was introduced in the same
commit.
Not resetting hwdec_request_reinit caused it to flush on every packet,
which not only made it fail to trigger the actual fallback and never
decode a new frame, but also got it stuck on EOF.
This removes all GPL only code from it, and that's the whole purpose.
Also happens to be much simpler.
The "deinterlace" option still sort of exists, but only as runtime
changeable option. The main change in behavior is that the property will
not report back the actual deint state. Or in other words, if inserting
or initializing the filter fails, the deinterlace property will still
return "yes". This is in line with most recent behavior changes to
properties and options.
I really wouldn't care much about this, but some parts of the core code
are under HAVE_GPL, so there's some need to get rid of it. Simply turn
the video equalizer from its current fine-grained handling with vf/vo
fallbacks into global options. This makes updating them much simpler.
This removes any possibility of applying video equalizers in filters,
which affects vf_scale, and the previously removed vf_eq. Not a big
loss, since the preferred VOs have this builtin.
Remove video equalizer handling from vo_direct3d, vo_sdl, vo_vaapi, and
vo_xv. I'm not going to waste my time on these legacy VOs.
vo.eq_opts_cache exists _only_ to send a VOCTRL_SET_EQUALIZER, which
exists _only_ to trigger a redraw. This seems silly, but for now I feel
like this is less of a pain. The rest of the equalizer using code is
self-updating.
See commit 96b906a51d for how some video equalizer code was GPL only.
Some command line option names and ranges can probably be traced back to
a GPL only committer, but we don't consider these copyrightable.
Both the video equalizer command/option glue, which drives this filter,
as well as the filter itself are slightly GPL contaminated. So it goes.
After this commit, "--vf=eq" will actually use libavfilter's vf_eq (if
FFmpeg was compiled in GPL mode), but it has different options and will
not listen to the equalizer VOCTRLs.
So far, we had a thread-safe way to read options, but no option update
notification mechanism. Everything was funneled though the main thread's
central mp_option_change_callback() function. For example, if the
panscan options were changed, the function called vo_control() with
VOCTRL_SET_PANSCAN to manually notify the VO thread of updates. This
worked, but it's pretty inconvenient. Most of these problems come from the
fact that MPlayer was written as a single-threaded program.
This commit works towards a more flexible mechanism. It adds an update
callback to m_config_cache (the thing that is already used for
thread-safe access of global options).
This alone would still be rather inconvenient, at least in context of
VOs. Add another mechanism on top of it that uses mp_dispatch_queue, and
takes care of some annoying synchronization issues. We extend
mp_dispatch_queue itself to make this easier and slightly more
efficient.
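The resulting usage pattern looks roughly like this (illustrative names
only; mpv's real equivalents are m_config_cache and mp_dispatch_queue):

    struct dispatch_queue;   // opaque: the VO thread's dispatch queue
    struct config_cache;     // opaque: thread-safe option snapshot
    void config_cache_update(struct config_cache *c);
    void config_cache_set_dispatch_change_cb(struct config_cache *c,
                                             struct dispatch_queue *q,
                                             void (*cb)(void *ctx),
                                             void *ctx);

    struct vo_state {
        struct dispatch_queue *queue;
        struct config_cache *opts;
    };

    // Runs on the VO thread via the dispatch queue, so the callback
    // itself needs no extra locking.
    static void on_opts_changed(void *ctx)
    {
        struct vo_state *vo = ctx;
        config_cache_update(vo->opts);   // pull the new option values
        // ... translate to the "old" VOCTRLs / apply to the renderer ...
    }

    static void init(struct vo_state *vo)
    {
        // Queue on_opts_changed on our dispatch queue whenever any
        // option in the cached group changes.
        config_cache_set_dispatch_change_cb(vo->opts, vo->queue,
                                            on_opts_changed, vo);
    }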
As a first application, use this to reimplement certain VO scaling and
renderer options. The update_opts() function translates these to the
"old" VOCTRLs, though.
An annoyingly subtle issue is that m_config_cache's destructor now
releases pending notifications, and must be released before the
associated dispatch queue. Otherwise, it could happen that option
updates during e.g. VO destruction queue or run stale entries, which is
not expected.
Rather untested. The singly-linked list code in dispatch.c is probably
buggy, and I bet some aspects about synchronization are not entirely
sane.
Also refactors the usage of tex_upload to make ra_tex_upload_pbo a
RA-internal thing again.
ra_buf_pool has the main advantage of being dynamically sized depending
on buf_poll, so for OpenGL we'll end up only using one buffer (when not
persistently mapping) - while for vulkan we'll use as many as necessary,
which depends on the swapchain depth anyway.
This adds handling of spherical video metadata: retrieving it from
demux_lavf and demux_mkv, passing it through filters, and adjusting it
with vf_format. This does not include support for rendering this type of
video.
We don't expect we need/want to support the other projection types like
cube maps, so we don't include that for now. They can be added later as
needed.
Also raise the maximum sizes of stringified image params, since they
can get really long.
This broke screensaver/powersave inhibition with at least KDE and
LXDE. This is a release blocker.
Since fdo, KDE and GNOME idiots seem to be unable to reach
a consensus on a simple protocol, this seems unlikely to get
fixed upstream this year, so revert this change.
Fixes #4752.
Breaks #4706 but I don’t give a damn.
This reverts commit 3f75b3c343.
- tex_upload args are moved to a struct
- the ability to directly upload texture data without going through a
buffer is made explicit
- the concept of buffer updates and buffer polling is made more explicit
and generalized to buf_update as well (not just mapped buffers)
- the ability to call tex_upload/buf_update on a tex/buf is made
explicit during tex/buf creation
- uploading from buffers now uses an explicit offset instead of
implicitly comparing *src against buf->data, because not all buffers
may actually be persistently mapped
- the initial_data = immutable requirement is dropped. (May be re-added
later for D3D11 if that ever becomes a thing)
This change helps the vulkan abstraction immensely and also helps move
common code (like the PBO pooling) out of ra_gl and into
opengl/utils.c.
This also technically has the side-benefit / side-constraint of using
PBOs for OSD texture uploads as well, which actually seems to help
performance on machines where --opengl-pbo is faster than the naive code
path. Because of this, I decided to hook up the OSD code to the
opengl-pbo option as well.
One drawback of this refactor is that the GL_STREAM_COPY hack for
texture uploads "got lost", but I think I'm happy with that going away
anyway since DR almost fully deprecates it, and it's not the "right
thing" anyway - but instead an nvidia-only hack to make this stuff work
somewhat better on NUMA systems with discrete GPUs.
Another change is that due to the way fencing works with ra_buf (we get
one fence per ra_buf per upload) we have to use multiple ra_bufs instead
of offsets into a shared buffer. But for OpenGL this is probably better
anyway. It's possible that in future, we could support having
independent “buffer slices” (each with their own fence/sync object), but
this would be an optimization more than anything. I also think that we
could address the underlying problem (memory closeness) differently by
making the ra_vk memory allocator smart enough to chunk together
allocations under the hood.
Instead of merging it into render_dst. This is better for vulkan,
because blitting in vulkan both does not require a FBO *and* requires a
different image layout.
Also less "hacky" for OpenGL, since now the weird blit=FBO requirement
is an implementation detail of ra_gl
This extracts non-ANGLE specific code to d3d11_helpers.c, which is
modeled after egl_helpers.c. Currently the only consumer is
context_angle.c, but in future this may allow the D3D11 device and
swapchain creation logic to be reused in other backends.
Also includes small improvements to D3D11 device creation. It is now
possible to create feature level 11_1 devices (though ANGLE does not
support these), and BGRA swapchains, which might be slightly more
efficient than ARGB, since it's the same format used by the compositor.
This is for legacy GL: if VAOs are not available, the helper has to
specify vertex attributes again on every rendering. gl_vao_init() keeps
the vertex array for this purpose. Unfortunately, a temporary argument
was passed to the function, instead of the permanent copy.
Also, it didn't use num_entries (instead it expected the array to be
terminated by a {0} entry). Fix the source code indentation too.
These are useless and shouldn't be confused with normal RGB formats.
Replace the earlier hack checking the format name with a proper check.
(Not sure when this flag was added. Libav won't have it anyway, but also
no bayer formats.)
In commit c6fafbffac we accidentally set the logical texture size to
the cropped video size, which is not correct. This caused rendering
artifacts in some cases.
Use the video surface's size instead. Since the current mp_image_params
contains the cropped size only, wrapper texture creation has to be moved
to the _map function. Move the same code for the mixer case (strictly
speaking this is not needed, but seems more symmetric).
(Also there is no need to clear gl_textures on uninit - leftover from
the old hwdec mapper API. So we just drop that part.)
Fixes #4760.
Retrieve the depth for each component and internal texture format
separately. Only for 8 bit per component textures do we assume that all
bits are used (or else we would, in my opinion, create too many probe
textures).
Assuming 8 bit components are always correct also fixes operation in
GLES3, where we assumed that each component had -1 bits depth, and thus
all UNORM formats were considered unusable. On GLES, the function to
check the real bit depth is not available. Since GLES has no 16 bit
UNORM textures at all, except with the MPGL_CAP_EXT16 extension, just
drop the special condition for it. (Of course GLES still manages to
introduce a funny special case by allowing GL_LUMINANCE, but not
defining GL_TEXTURE_LUMINANCE_SIZE.)
Should fix #4749.
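On desktop GL, the per-component probe boils down to something like
this (sketch; assumes a current GL context and a bound 2D texture):

    #include <GL/gl.h>

    static void probe_depth(GLenum internal_format)
    {
        GLint r = 0, g = 0, b = 0, a = 0;
        glTexImage2D(GL_TEXTURE_2D, 0, internal_format, 64, 64, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &r);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &g);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE, &b);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &a);
        // r/g/b/a now hold the bits actually stored per component,
        // which may differ from what the nominal format suggests.
    }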
Runtime untested, because I get this:
[vo/rpi] Could not get DISPMANX objects.
This happened even when building older git versions, and on a RPI image
that hasn't changed in the recent years. I don't know how to make this
POS work again, so I guess if there's a bug in the new code, it will
remain broken.
This does two separate rather intrusive things:
1. Make the hwdec context (which does initialization, provides the
device to the decoder, and other basic state) and frame mapping
(getting textures from a mp_image) separate. This is more
flexible, and you could map multiple images at once. It will
help remove some hwdec special-casing from video.c.
2. Switch all hwdec API use to ra. Of course all code is still
GL specific, but in theory it would be possible to support other
backends. The most important change is that the hwdec interop
returns ra objects, instead of anything GL specific. This removes
the last dependency on GL-specific header files from video.c.
I'm mixing these separate changes because both require essentially
rewriting all the glue code, so better do them at once. For the same
reason, this change isn't done incrementally.
hwdec_ios.m is untested, since I can't test it. Apart from superficial
mistakes, this also requires dealing with Apple's texture format
fuckups: they force you to use GL_LUMINANCE[_ALPHA] instead of GL_RED
and GL_RG. We also need to report the correct format via ra_tex to
the renderer, which is done by find_la_variant(). It's unknown whether
this works correctly.
hwdec_rpi.c as well as vo_rpi.c are still broken. (I need to pull my
RPI out of a dusty pile of devices and cables, so, later.)
Apparently this was broken by the "ctx->hwdec" check in the if condition
guarding the destroy call, and "ctx->hwdec = NULL;" was moved up
earlier, making this always dead code.
This should probably be refcounted or so, although that could make it
worse as well. For now, add a flag whether the device should be
destroyed.
Fixes #4735.
Less flexible than GL_TIMESTAMP but supported by more platforms. This
will mean that nested queries have to be detected and silently omitted,
but oh well. Not much use for them anyway.
Fixes #4721.
This reverts commit 0ce3dce03a.
We actually explicitly add read-only frames in some of the hwaccel code
via mp_image_new_custom_ref(), which sets AV_BUFFER_FLAG_READONLY. It's
probably better to keep it this way.
Fixes #4652 (and some related issues with D3D).
It's an ancient X11 protocol extension that apparently nobody uses
anymore (desktop environments in particular have replaced it with
equally bad protocols that require tons of dependencies). Users keep
complaining about it being a required dependency.
The impact is likely minimal to none.
Fixes #4706 and other annoying people.
Generic description of pixel formats is hard. In this case, the Apple
special format for packed YUV could have been interpreted as an RGB
format with funny packing.
Move multiple GL-specific things from the renderer to other places like
vo_opengl.c, vo_opengl_cb.c, and ra_gl.c.
The vp_w/vp_h parameters to gl_video_resize() make no sense anymore, and
are implicitly part of struct fbodst.
Checking the main framebuffer depth is moved to vo_opengl.c. For
vo_opengl_cb.c it always assumes 8. The API user now has to override
this manually. The previous heuristic didn't make much sense anyway.
The only remaining dependency on GL is the hwdec stuff, which is harder
to change.
Completely unnecessary, we can just update the uniforms immediately
after creating the program. In theory, for GLSL 4.20+, we could even
skip this, but oh well.
The vp_w/vp_h variables and parameters were not really used anymore
(they were redundant with ra_tex w/h) - but vp_h was still used to
identify whether rendering should be done mirrored.
Simplify this by adding a fbodst struct (some bad naming), which
contains the render target texture, and some parameters for how it
should be rendered to (for now only flipping). It would not be
appropriate to make
this a member of ra_tex, so it's a separate struct.
Introduces a weird regression for the first frame rendered after
interpolation is toggled at runtime, but seems to work otherwise. This
is possibly due to the change that blit() now mirrors, instead of just
copying. (This is also why ra_fns.blit is changed.)
Fixes #4719.
When using dumb mode, we can actually redraw a frame without uploading
it. Marking this as fresh as well results in unpredictable pass
behavior, which is confusing and makes debugging harder. So mark it as a
redraw instead, in that case.
Since the GL *gl is no longer needed for the timers, we can get rid of
the sc->gl dependency. This requires moving a utility function (which is
not GL-specific anyway) out of gl_utils.h and into utils.h.
In the past, this always measured the per-shader execution times of the
individual OSD parts, which was thrown off because the shader was reused
anyway. (And apparently recording the OSD shader execution times was
removed completely, probably because they were so unreliable
anyway)
Since ra_timer no longer has the restriction of not allowing timers to
run concurrently, we can just wrap the entire OSD block inside a single
osd_timer now, and record that. (Technically, this can still be off when
using --blend-subtitles=video/yes and showing a full-screen OSD at the
same time. Maybe this can be done better?)
In order to prevent code duplication and keep the ra abstraction as
small as possible, `ra` only implements the actual timer queries;
it does not do pooling/averaging of the results. This is instead moved
to a ra-neutral struct timer_pool in utils.c.
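The rough shape of such a pool (a sketch, not the actual struct in
utils.c):

    #include <stdint.h>

    #define TIMER_SAMPLES 256

    struct timer_pool {
        uint64_t samples[TIMER_SAMPLES];   // nanoseconds per past run
        int idx, count;
    };

    static void timer_pool_add(struct timer_pool *p, uint64_t ns)
    {
        p->samples[p->idx] = ns;
        p->idx = (p->idx + 1) % TIMER_SAMPLES;
        if (p->count < TIMER_SAMPLES)
            p->count++;
    }

    static uint64_t timer_pool_avg(const struct timer_pool *p)
    {
        if (!p->count)
            return 0;
        uint64_t sum = 0;
        for (int i = 0; i < p->count; i++)
            sum += p->samples[i];
        return sum / p->count;
    }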
This code is pretty much for the sake of vo_opengl_cb API users. It
resets certain state that either the user or our code doesn't reset
correctly. This is somewhat outdated. With GL implicit state being
so awfully large, it seems more reasonable to require that any code
restores the default state when returning to the caller. Some
exceptions are defined in opengl_cb.h.
Now all GL-specifics of shader compilation are abstracted through ra.
Of course we still have everything hardcoded to GLSL - that isn't going
to change.
Some things will probably change later - in particular, the way we pass
uniforms and textures to the shader. Currently, there is a confusing
mismatch between "primitive" uniforms like floats, and others like
textures.
Also, SSBOs are not abstracted yet.
Instead of having a mutable ra_tex field (and the only one), move the
flag to struct ra, since we have only 2 tex_upload user calls anyway,
and both want the same PBO behavior. (At first I considered making it
a RA_TEX_UPLOAD_ flag, but why bother. PBOs are a terribly GL-specific
thing, so we can't expect a reasonable abstraction of it anyway.)
This requires a silly extension to ra_fns.tex_upload: since the OSD
texture can be much larger than the actual OSD image data to upload, a
mechanism for uploading only to a small part of the texture is needed.
Otherwise, we'd have to realloc/copy the data, just to pad it, and then
pay for uploading the padding too.
The RA_TEX_UPLOAD_DISCARD flag is not interpreted by GL (not sure how
you'd tell GL about this), but it clarifies the API and might be
helpful if we support other backend APIs in the future.
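The extension amounts to letting the caller pass a destination region,
roughly like this (field names are hypothetical):

    #include <stdbool.h>
    #include <stddef.h>

    struct ra_tex;
    struct mp_rect;

    struct tex_upload_params {
        struct ra_tex *tex;    // destination texture
        const void *src;       // source data
        ptrdiff_t stride;      // bytes per row in src
        struct mp_rect *rc;    // region to upload; NULL = whole texture
        bool invalidate;       // RA_TEX_UPLOAD_DISCARD-style hint
    };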
Actually GL-specific parts go into gl_utils.c/h, the shader cache
(gl_sc*) into shader_cache.c/h.
No semantic changes of any kind, except that the VAO helper is made
public again as part of gl_utils.c (all while the goal for gl_utils.c
itself is to be included by GL-specific code).
Another "small" step towards removing GL dependencies from the renderer.
This commit generally passes ra_tex objects instead of GL FBO integer
IDs to various rendering functions. video.c still manually binds the
FBOs when calling shaders.
This also happens to fix a memory leak with output_fbo.
Further work removing GL dependencies from the actual video renderer,
and moving them into ra backends.
Use of glInvalidateFramebuffer() falls away. I'd like to keep this, but
it's better to re-add it once shader runs are in ra.
This was attempted before in fc9695e63b, but it was reverted in
1b7ce759b1 because it caused conflicts with other software watching
the same keys (See #2041.) It seems like some PCs ship with OEM software
that watches the volume keys without consuming key events and this
causes them to be handled twice, once by mpv and once by the other
software.
In order to prevent conflicts like this, use the WM_APPCOMMAND message
to handle media keys. Returning TRUE from the WM_APPCOMMAND handler
should indicate to the operating system that we consumed the key event
and it should not be propagated to the shell. Also, we now only listen
for keys that are directly related to multimedia playback (e.g. the
APPCOMMAND_MEDIA_* keys). Keys like APPCOMMAND_VOLUME_* are ignored, so
they can be handled by the shell, or by other mixer software.
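The handler boils down to something like this (a sketch of the Win32
pattern, not mpv's actual window procedure):

    #include <windows.h>

    static LRESULT CALLBACK wndproc(HWND hwnd, UINT msg,
                                    WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_APPCOMMAND) {
            switch (GET_APPCOMMAND_LPARAM(lParam)) {
            case APPCOMMAND_MEDIA_PLAY_PAUSE:
            case APPCOMMAND_MEDIA_NEXTTRACK:
            case APPCOMMAND_MEDIA_PREVIOUSTRACK:
            case APPCOMMAND_MEDIA_STOP:
                // ... queue the corresponding mpv key event here ...
                return TRUE;   // consumed; not propagated to the shell
            }
            // Volume keys etc. deliberately fall through, so the
            // shell or mixer software still sees them.
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }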
This currently only works when using lcms-based color management
(--icc-profile-*).
In principle, we could also support using lcms even when the user has
not specified an ICC profile, by generating the profile against a fixed
reference (--target-prim/--target-trc) instead. I still might do that
some day, simply because 3dlut provides a higher quality conversion than
our simple gamut mapping does for stuff like BT.2020, and also because
it's now needed to enable embedded ICC profiles. But that would be a
separate change, so preserve the status quo for now.
(Besides, my opinion is still that you should be using an ICC profile if
you care about colors being accurate _at all_)
Since these need to be refcounted, we throw them directly into struct
mp_image instead of being part of mp_colorspace. Even though they would
semantically make more sense in mp_colorspace, having them there is
really awkward because mp_colorspace is passed around and stored a lot,
and this way their lifetime is exactly tied to the lifetime of the
mp_image associated with it.
mesa won't pick client storage unless this bit is set, and we
*absolutely* want to be using client storage for our DR PBOs.
Performance is shit on AMD otherwise. (Nvidia always uses client storage
for persistent coherent buffers whether you tell it to or not, probably
because it's way faster and nvidia doesn't trust users to figure that
out on their own)
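Concretely, that means allocating the DR PBOs with something like this
(a sketch; assumes ARB_buffer_storage and a buffer bound to
GL_PIXEL_UNPACK_BUFFER):

    glBufferStorage(GL_PIXEL_UNPACK_BUFFER, size, NULL,
                    GL_MAP_WRITE_BIT |
                    GL_MAP_PERSISTENT_BIT |
                    GL_MAP_COHERENT_BIT |
                    GL_CLIENT_STORAGE_BIT);   // without this bit, mesa
                                              // keeps the buffer in
                                              // device memory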