Under some conditions, hide_osc() was calling render(), which then called
hide_osc() again, and so forth, until the stack overflowed.
Tracking the exact conditions where this happens (and then managing them
to prevent it) is an exercise in futility.
Remove the OSC directly, instead of going through the entire rendering
procedure just to end up rendering nothing.
Fixes #4900.
Also implicitly fixes a memory leak when mp.set_property_native was used,
because the cleanup did not expect more allocations from the accidental
use of mpv_get_property.
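For illustration, a toy C sketch of the hide_osc()/render() cycle (the
real code lives in osc.lua; the names and structure here are simplified):

    static int osc_visible = 1;

    static void render(void);

    static void hide_osc(void)
    {
        osc_visible = 0;
        render();           // re-render to apply the change ...
    }

    static void render(void)
    {
        if (!osc_visible)
            hide_osc();     // ... which can call hide_osc() again, and so on
        // draw the OSD elements
    }

The fix, in spirit: when hiding, clear the OSC's overlays directly
instead of taking the round trip through render().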
Tested by making the ra_tex_resize function always fail (apart from the
initial FBO check). This required a few changes (roughly sketched below):
1. reset shaders on failed dispatch
2. reset cleanup binds on failed dispatch
3. fall back to initializing the struct image to 1x1 on failure
4. handle output_fbo_valid gracefully
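To make that concrete, a rough C sketch of the failure path; every name
below (tex_resize, shader_reset, binds_reset, dummy_1x1) is a made-up
stand-in for illustration, not mpv's actual API:

    #include <stdbool.h>

    struct tex;                                     // opaque placeholder
    bool tex_resize(struct tex **t, int w, int h);  // may now fail any time
    void shader_reset(void *pass_state);            // hypothetical helpers
    void binds_reset(void *pass_state);

    struct pass_state {
        struct tex *target;       // texture the current pass renders into
        struct tex *dummy_1x1;    // always-valid 1x1 fallback texture
        bool output_fbo_valid;
    };

    static void resize_target(struct pass_state *p, int w, int h)
    {
        if (tex_resize(&p->target, w, h)) {
            p->output_fbo_valid = true;
            return;
        }
        shader_reset(p);              // 1. drop the partially built shader
        binds_reset(p);               // 2. and its leftover texture binds
        p->target = p->dummy_1x1;     // 3. degrade to a valid 1x1 dummy
        p->output_fbo_valid = false;  // 4. later code knows to skip output
    }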
This was sort of grating by default and made it really hard to actually
read e.g. text on top of a transparent background. I decided to approach
the problem from both directions, making the whites darker and the grays
lighter. This brings it closer to the dynamic range of e.g. the
Wikipedia transparent SVG preview.
The plethora of historical baggage from different eras was getting
confusing, so I decided to simplify and unify the struct organization
and naming scheme.
Structs that got renamed:
1. fbodst -> ra_fbo (and moved to gpu/context.h)
2. fbotex -> removed (redundant after 2af2fa7a)
3. fbosurface -> surface
4. img_tex -> image
In addition to these structs being renamed, all of the names have been
made consistent. The new scheme is as follows:
struct image img;
struct ra_tex *tex;
struct ra_fbo fbo;
This also affects derived names, e.g. indirect_fbo -> indirect_tex.
Notably also, finish_pass_fbo -> finish_pass_tex and finish_pass_direct
-> finish_pass_fbo.
The new equivalent of fbotex_change() is called ra_tex_resize().
This commit should contain no logic changes, just the renaming of a
bunch of crap.
I've observed the garbage pixels in more scenarios. They also were
never really needed to begin with, originally being a work-around
discovered for a bug that we have since fixed anyway. It doesn't really
seem to help resizing either, since the OpenGL drivers are all smart
enough to pool resources internally anyway.
Fixes #1814
Enabling double buffering fixed some graphical glitches when entering
fullscreen, but it also caused a fullscreen performance regression. We
decided that the glitches were preferable to the performance regression.
This reverts commit cee764849e.
The code used ra_ctx_destroy even though ra_ctx_create was never called
(since it's just a dummy ctx), which led to a conflict of assumptions.
The proper fix is to only use ra_gl_ctx_uninit (mirroring the
ra_gl_ctx_init) and free the dummy ctx manually.
Fixes https://github.com/cmdrkotori/mpc-qt/issues/129
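In code terms, the shape of the fix (the exact signatures are assumed
from the names above, so treat this as a sketch):

    // before: tears down a context that ra_ctx_create() never set up
    ra_ctx_destroy(&ctx);

    // after: mirror ra_gl_ctx_init(), then free the hand-made dummy ctx
    ra_gl_ctx_uninit(ctx);
    talloc_free(ctx);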
We want e.g. --opengl-shaders-append=foo to resolve to the new option,
all while printing an option name. --opengl-shader is a similar case.
These options are special, because they apply "actions" on actual
options by specifying a suffix. So the alias/deprecation handling has to
be part of resolving the actual option from prefix and suffix.
Turns out the option code apparently tries to directly talloc_free() the
allocated strings, instead of going through a tactx wrapper or
something. So we can't directly overwrite them. Do something else
instead.
Almost as fast as the old code, but more general. Notably, glslang
doesn't support nested arrays.
(cf. https://github.com/KhronosGroup/glslang/issues/1057)
Also much cleaner code-wise, so I think I'll keep it even if glslang
implements array_of_arrays.
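A common way around missing nested-array support, sketched in plain C
for illustration (the same idea carries over to GLSL; this is not
necessarily the exact approach taken here): flatten the array and do
the index arithmetic by hand.

    #define ROWS 4
    #define COLS 8

    static float weights[ROWS * COLS];  // instead of float weights[ROWS][COLS]

    static float weight_at(int i, int j)
    {
        return weights[i * COLS + j];   // row-major: what weights[i][j] would be
    }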
This never really made sense since the BT.1886 changes. It should get
*brighter* for bright rooms, not darker for dark rooms. Picked some new
values that seemed reasonable-ish.
This causes a performance regression on 10.11 and newer, but the single
buffered method was broken and could cause partially rendered frames to
be presented to the screen.
This reverts 9f30cd8292 and
e543853a7f.
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx (roughly sketched after this list)
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved to
ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
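A rough sketch of the shape of the new abstraction; the field names are
guesses for illustration, not the literal contents of gpu/context.h:

    #include <stdbool.h>

    struct ra;      // the rendering abstraction itself (GL, and later more)
    struct vo;
    struct ra_ctx_fns;

    struct ra_ctx {
        struct vo *vo;
        struct ra *ra;
        const struct ra_ctx_fns *fns;  // the backend that created this context
        void *priv;                    // backend-private state
    };

    struct ra_ctx_fns {
        const char *type;              // rendering API, selected by --gpu-api
        const char *name;              // specific context/windowing backend
        bool (*init)(struct ra_ctx *ctx);
        bool (*reconfig)(struct ra_ctx *ctx);
        void (*uninit)(struct ra_ctx *ctx);
    };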
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
Let's blame FFmpeg for just overwriting the samplerate in
av_frame_copy_props(). Can't fully hide my own brain damage though,
since mp_aframe_config_copy() expected that the rate is copied (that
function also copies format and channel layout).
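As a purely generic illustration of the eventual behavior (not the
actual mp_aframe code): copy the configuration fields explicitly instead
of leaving the sample rate to av_frame_copy_props().

    #include <libavutil/frame.h>

    static void config_copy(AVFrame *dst, const AVFrame *src)
    {
        dst->format         = src->format;
        dst->channels       = src->channels;
        dst->channel_layout = src->channel_layout;
        dst->sample_rate    = src->sample_rate;  // the field copy_props clobbers
    }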
See "Copyright" file for caveats.
This changes the remaining "almost LGPL" files to LGPL, because we think
that the conditions the author set for these were finally fulfilled.
The old audio filter chain code could not be relicensed. The intention
was to write new filter code (which could handle both audio and video),
but that's a bit of work. Write some code that can do audio conversion
(resampling, downmixing, etc.) without the old audio filter chain code
in order to speed up the LGPL relicensing.
If you build with --disable-libaf, nothing in audio/filter/* is compiled
in. It breaks a few features, such as --volume, --af, pitch correction
on speed changes, and replaygain.
Most likely this adds some bugs, even if --disable-libaf is not used.
(How the fuck does EOF notification work again anyway?)
Move it from af_lavrresample.c to a new aconverter.c file, which is
independent from the filter chain code. It also doesn't use mp_audio,
and thus has no GPL dependencies.
Preparation for later commits. Not particularly well tested, so have
fun.
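For reference, the kind of conversion this code performs, shown as plain
libswresample usage (this is not the new aconverter API itself):
resample float stereo 48 kHz to s16 stereo 44.1 kHz.

    #include <libavutil/channel_layout.h>
    #include <libavutil/samplefmt.h>
    #include <libswresample/swresample.h>

    static struct SwrContext *setup_conversion(void)
    {
        struct SwrContext *s = swr_alloc_set_opts(NULL,
                AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, 44100,  // output
                AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_FLT, 48000,  // input
                0, NULL);
        if (!s || swr_init(s) < 0) {
            swr_free(&s);
            return NULL;
        }
        return s;
    }

Each chunk of audio then goes through swr_convert(), and swr_free()
releases the context when done.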
Just reimplement it in some way, as mp_audio is GPL-only.
Actually I wanted to get rid of audio_buffer.c completely (and instead
have a list of mp_aframes), but to do so would require rewriting some
more player core audio code. So to get this LGPL relicensing over with
quickly, just do some extra work.
This is "wrong", because you might want mp_image_copy_attributes() to
preserve the information that the colorspace parameters are unknown.
This is important for hwdec -copy modes, which call this function before
fix_image_params() and mp_colorspace_merge() are called.
Instead, just wipe the colorspace attributes if the pixel format changes
in an apparently incompatible way. Use mp_image_params_guess_csp() logic
for this and factor that into its own function.
mp_image_set_attributes() attempts to do something similar, so change
that in the same way. Also, mp_image_params_guess_csp() just returned if
the imgfmt was invalid or unset - just remove that part, because it
annoyingly doesn't fit into the new code, and had little reason to exist
to begin with. (Probably.)
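A hedged sketch of that logic with purely illustrative types (these are
not mpv's real structs, and fmt_is_yuv() stands in for the
mp_image_params_guess_csp() logic that was factored out):

    // Illustrative only: wipe copied color metadata when the pixel format
    // changed in an incompatible way (YUV vs. RGB), and let the later
    // guess_csp pass fill in sane defaults.
    struct color_params { int space, levels, primaries, gamma; };

    struct img_params {
        int imgfmt;
        struct color_params color;
    };

    int fmt_is_yuv(int imgfmt);  // hypothetical format query

    static void copy_color(struct img_params *dst, const struct img_params *src)
    {
        dst->color = src->color;
        if (fmt_is_yuv(dst->imgfmt) != fmt_is_yuv(src->imgfmt))
            dst->color = (struct color_params){0};  // back to "unknown"
    }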
I see no reason not to do this. I think the check comes from the time
when mp_image stored the image aspect ratio, instead of the pixel aspect
ratio, where the logic might have made more sense.
It was noticed that -copy hwdec modes typically dropped the
chroma_location field. This happened because the attributes on hw
download are copied with mp_image_copy_attributes(), which tries to copy
these parameters only if src and dst were both YUV (in an attempt to
copy parameters only if it makes sense).
But hardware formats did not have the YUV flag set (anymore?), and code
shouldn't attempt to check the flag in this way anyway. Drop the check,
and always copy the whole color metadata struct. There is a call to
mp_image_params_guess_csp() below, which tries to unset nonsense
metadata if it was copied from a YUV format to RGB. This function would
also do the right thing for hw formats (although for the cited bug only
the software case matters).
Fixes #4804.
This reverts commit 96462040ec.
I guess the autoprobing is still too primitive to handle this well. What
it really should be trying is initializing the wrapper decoder, and if
that doesn't work, trying another method. This is complicated by hwaccels
initializing in a delayed way, so there is no easy solution yet.
Probably fixes #4865.
This mechanism uses system() and shouldn't even exist. x11_common.c has
its own solution for the original problem (disabling Linux DE
screensavers without MPlayer/mpv having to link a dbus lib). If that is
not sufficient, you can create a simple Lua script.
Incidentally fixes #4888.
Instead of "deps", "deps_neg", and "deps_any" fields, just have a single
"deps" field, which changes from an array to a string. The string is now
an expression, which can contain the operators &&, ||, !, and allows
grouping with ( ).
It's probably overkill. If it becomes a maintenance burden, we can switch
to specifying the dep expressions as ASTs (or maybe eval()-able Python
expressions), and we could simplify the code that determines the reason
why a dependency is not fulfilled. The latter involves a complicated
conversion of the expression AST to DNF.
The parser is actually pretty simple, and pretty much follows:
https://en.wikipedia.org/wiki/Shunting_yard_algorithm
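For illustration, a C sketch of that approach (the immediate-evaluation
variant of the shunting-yard algorithm). The real parser lives in the
Python waf build scripts; eval_deps(), have_dep() and the fixed-size
stacks are inventions for this example, and well-formed input is
assumed:

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    enum op { OP_OR, OP_AND, OP_NOT, OP_LPAREN };
    static const int prec[] = { [OP_OR] = 1, [OP_AND] = 2, [OP_NOT] = 3 };

    static void apply(enum op o, bool *vals, int *nv)
    {
        if (o == OP_NOT) {
            vals[*nv - 1] = !vals[*nv - 1];
        } else {
            bool b = vals[--(*nv)];
            bool a = vals[*nv - 1];
            vals[*nv - 1] = o == OP_AND ? a && b : a || b;
        }
    }

    // have_dep() decides whether a single named dependency is satisfied.
    bool eval_deps(const char *s, bool (*have_dep)(const char *name, int len))
    {
        bool vals[64];
        enum op ops[64];
        int nv = 0, no = 0;
        while (*s) {
            if (isspace((unsigned char)*s)) { s++; continue; }
            if (*s == '(') { ops[no++] = OP_LPAREN; s++; continue; }
            if (*s == ')') {
                while (ops[no - 1] != OP_LPAREN)  // unwind back to the '('
                    apply(ops[--no], vals, &nv);
                no--;                             // pop the '(' itself
                s++;
                continue;
            }
            enum op cur;
            if (*s == '!') { cur = OP_NOT; s += 1; }
            else if (strncmp(s, "&&", 2) == 0) { cur = OP_AND; s += 2; }
            else if (strncmp(s, "||", 2) == 0) { cur = OP_OR;  s += 2; }
            else {                                // a dependency name
                int len = 0;
                while (s[len] && !strchr(" \t()!&|", s[len]))
                    len++;
                vals[nv++] = have_dep(s, len);
                s += len;
                continue;
            }
            // pop stacked operators that bind at least as tightly
            // ('!' is right-associative, so it never pops another '!')
            while (no && ops[no - 1] != OP_LPAREN &&
                   (prec[ops[no - 1]] > prec[cur] ||
                    (prec[ops[no - 1]] == prec[cur] && cur != OP_NOT)))
                apply(ops[--no], vals, &nv);
            ops[no++] = cur;
        }
        while (no)
            apply(ops[--no], vals, &nv);
        return nv == 1 && vals[0];
    }

A call like eval_deps("gl && (x11 || wayland) && !rpi", have_dep), with
the dependency names invented for the example, then mirrors what the
configure check does.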
This fixes handling of files larger than 2 GiB with at least API level 21.
See: https://android.googlesource.com/platform/bionic/+/master/docs/32-bit-abi.md
for the gritty details.
* Based on libavformat's approach, except we do not care about API
versions or NDKs without unistd.h, which contains all sorts of
things that we utilize.
* Redefines lseek to always point to the 64-bit version.
* Redefines fseeko to always point to a static inline implementation
that uses the 64-bit version of lseek underneath (see the sketch below).
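A rough sketch of the shape described above (not the literal mpv osdep
header; in particular the stdio read-buffer handling is simplified):

    #if defined(__ANDROID__) && !defined(__LP64__)
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define lseek lseek64   /* plain lseek takes a 32-bit off_t on this ABI */

    static inline int mp_fseeko(FILE *fp, int64_t offset, int whence)
    {
        /* fseeko() would truncate the offset to 32 bits here, so flush the
         * stdio buffer and seek the underlying fd with lseek64() instead. */
        fflush(fp);
        return lseek64(fileno(fp), offset, whence) < 0 ? -1 : 0;
    }
    #define fseeko mp_fseeko
    #endif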
The first one (line 140) comes from 69650851f8 and is the correct one.
The second one (line 731) comes from 72a8120daa and slipped in with the
revert commit.
Remove the second one.
This overrides the use of GLX_SGI_swap_control, because apparently
GLX_SGI_swap_control doesn't support SwapInterval(0), but the
GLX_MESA_swap_interval extension does.
Of course, everybody except mesa just accepts SwapInterval(0) even for
GLX_SGI_swap_control, but mesa needs to be the special snowflake here
and reject it, forcing us to load their stupid named extension instead.
Meanwhile khronos has done nothing except spit out GLX_EXT_swap_control
(not to be confused with GL_EXT_swap_control, which is exported by
WGL_EXT_swap_control), which doesn't fix the problem because mesa doesn't
implement it anyway.
What a fucking mess.
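For illustration, roughly how such a preference can be expressed (a
sketch with a naive extension check, not mpv's actual x11 context code;
the unsigned parameter of the MESA variant is glossed over):

    #include <string.h>
    #include <GL/glx.h>

    typedef int (*swap_interval_fn)(int);

    static int has_glx_ext(Display *dpy, int screen, const char *ext)
    {
        const char *all = glXQueryExtensionsString(dpy, screen);
        return all && strstr(all, ext) != NULL;  // naive substring check
    }

    static swap_interval_fn load_swap_interval(Display *dpy, int screen)
    {
        /* Prefer the MESA extension: its SwapInterval accepts 0, while
         * mesa's GLX_SGI_swap_control implementation rejects it. */
        const char *sym = has_glx_ext(dpy, screen, "GLX_MESA_swap_interval")
                              ? "glXSwapIntervalMESA"
                              : "glXSwapIntervalSGI";
        return (swap_interval_fn)glXGetProcAddressARB((const GLubyte *)sym);
    }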