GLES2 randomly does not support the transpose parameter in matrix
uniform calls. So we have to do this manually. Sure it was worth
mutilating the standard just so all these shitty SoC vendors can save 3
lines of code.
(Obviously, trying to handle everything from GLES2 to GL 4.x in a
single codebase was a mistake.)
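Concretely, the workaround looks something like this (a sketch; the
helper name is made up, not mpv's actual code):

    // GLES2 requires transpose == GL_FALSE in glUniformMatrix*fv, so
    // transpose on the CPU before uploading.
    static void uniform_mat3_transposed(GLint loc, const float m[9])
    {
        float t[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                t[i * 3 + j] = m[j * 3 + i];
        glUniformMatrix3fv(loc, 1, GL_FALSE, t);
    }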
GLES2 shaders do not have line continuation characters. Abuse the
HAVE_ARRAYS define to exclude code that uses arrays; it happens to also
cover all code that defines multi-line macros. (So yes, this is a
hack.)
Remove coded_width and coded_height. This was originally added in commit
fd7dde40, when BITMAPINFOHEADER was killed. The separate fields became
redundant in commit e68f4be1. Remove them (nothing passed to the
decoders actually changes with _this_ commit).
This is essentially what it is, and it's useful for windowing or
downscaling. For upscaling we already have bilinear, no need to cause
extra confusion between bilinear and bilinear_slow.
Also made it a bit better behaved.
These are EWA-based versions of the Keys B/C splines, of which mitchell
is already a member. They are slightly softer and slightly sharper than
mitchell, respectively.
Very easy to define in terms of things we already have.
mitchell, hermite and catmull_rom are all B/C splines and can share the
code which was already written for mitchell. This just redefines them in
terms of that.
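All three are instances of the Keys B/C family; for reference, a
minimal C sketch of the shared kernel (standard Mitchell-Netravali
formulation, not necessarily mpv's exact code):

    #include <math.h>

    // Keys B/C spline, radius 2. Per-filter parameters:
    //   hermite:     B = 0,   C = 0
    //   catmull_rom: B = 0,   C = 1/2
    //   mitchell:    B = 1/3, C = 1/3
    static double bc_spline(double x, double b, double c)
    {
        x = fabs(x);
        if (x < 1.0) {
            return ((12 - 9*b - 6*c) * x*x*x
                  + (-18 + 12*b + 6*c) * x*x
                  + (6 - 2*b)) / 6.0;
        } else if (x < 2.0) {
            return ((-b - 6*c) * x*x*x
                  + (6*b + 30*c) * x*x
                  + (-12*b - 48*c) * x
                  + (8*b + 24*c)) / 6.0;
        }
        return 0.0;
    }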
This adds a small check for candidates that could potentially be inside
the radius, but aren't necessarily. This improves performance by a
negligible amount on my hardware, but it's mainly a prerequisite for a
further change (using a larger true radius).
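The gist, as a hedged C sketch (names and the exact bound are
illustrative; the real code differs):

    #include <math.h>

    // The sample point lies within one pixel of the tap grid's origin,
    // so a tap at integer offset (i, j) is worth evaluating only if
    // its *closest possible* distance can still be inside the radius.
    static int tap_may_contribute(int i, int j, double radius)
    {
        double dx = fmax(fabs(i) - 1.0, 0.0);
        double dy = fmax(fabs(j) - 1.0, 0.0);
        return dx*dx + dy*dy <= radius*radius;
    }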
Previously, this was based on some arbitrary range 1-100, cut off for
no particular reason, and also defined in such a way that higher values
= *less* smoothness. Since it wasn't multiplied by e in the code, the
default had to be the unwieldy 28.8539... (incidentally 20/ln 2)
instead of something round.
Now, it's sane: 1.0 = default, higher = blurrier.
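Something like this, assuming a plain Gaussian kernel (a sketch; the
real code's constant factor may differ):

    #include <math.h>

    // p = 1.0 keeps the default shape; larger p stretches the curve,
    // i.e. more blur.
    static double gaussian(double x, double p)
    {
        return exp(-2.0 * x * x / (p * p));
    }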
This filter isn't supposed to have a second parameter in the first
place; the literature only uses a single parameter alpha in both
places. The second parameter doesn't do anything other than add a
constant factor, which is normalized out by the LUT calculation anyway.
This is done mainly for consistency, since all of the EWA filters share
similar properties and it's important to distinguish them for
documentation purposes.
No point in duplicating this check all over the place. No real point in
having it in the first place, to be perfectly honest; j1 should not be
THAT badly behaved.
The original filter window was designed for a radius based on the true
zero, but we always cut it off at our selection of radius either way (by
necessity, due to the square matrix we sample from).
This window is tweaked from the original (true radius) to our actual
cut-off radius, and hence improves the result in a few edge cases. The
main win is the reduction of code complexity, since we no longer need to
know what the true radius actually is.
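The change amounts to roughly this (names hypothetical):

    // Before (sketch): window stretched out to the kernel's true zero:
    //     w = kernel(x) * window(x / true_radius);
    // After: stretched to the radius we actually cut off at, so the
    // window reaches zero exactly where sampling stops:
    w = kernel(x) * window(x / cutoff_radius);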
Check the scanf() return value, and don't continue if it doesn't find
both numbers (can happen with GLES 1.0). Also, some implementations can
return NULL from glGetString() if something is "broken".
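Roughly the hardened parsing (a fragment, not the exact code):

    const char *ver = (const char *)glGetString(GL_VERSION);
    if (!ver)
        return; // "broken" implementation
    int major = 0, minor = 0;
    if (sscanf(ver, "%d.%d", &major, &minor) != 2)
        return; // no leading "major.minor" (can happen with GLES 1.0)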
This is a variation of ewa_lanczos that is sinc-windowed instead of
jinc-windowed. Results are pretty similar, but the logic is simpler.
This could potentially replace the ugly ewa_lanczos code.
It's hard to tell, but from comparing stills I think this one has
slightly less ringing than regular ewa_lanczos.
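That is, roughly (a sketch with the usual normalized definitions; R is
the cut-off radius):

    #include <math.h>

    static double sinc(double x) // sin(pi x) / (pi x)
    {
        if (x == 0.0) return 1.0;
        x *= M_PI;
        return sin(x) / x;
    }

    static double jinc(double x) // 2 J1(pi x) / (pi x)
    {
        if (x == 0.0) return 1.0;
        x *= M_PI;
        return 2.0 * j1(x) / x;
    }

    // jinc kernel with a sinc window instead of a jinc window:
    static double weight(double x, double R)
    {
        return jinc(x) * sinc(x / R);
    }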
I've reworked pretty much all the logic to correspond to what the theory
actually describes. With this commit, playback is wonderfully smooth on
my machine.
A recent behavior change in libavcodec's h264 decoder makes it keep at
least 1 surface even after avcodec_flush_buffers() has been called. We
used to flush the decoder in order to make sure all surfaces are freed,
so that the hw decoder can be safely uninitialized. This doesn't work
anymore.
Fix it by closing the AVCodecContext before the hw decoder is
uninitialized. This is actually simpler and more robust. It seems to be
well-supported too.
Fixes invalid read accesses with vaapi-copy and dxva2-copy. These
destroyed the hwdec API fully on uninit, and could not deal with
surfaces surviving the decoder.
Probably fixes #1587.
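In outline (a sketch; the hwdec helper name is made up):

    // Close the decoder first: this releases any surfaces it still
    // holds, which avcodec_flush_buffers() no longer guarantees.
    avcodec_close(avctx);
    // Only now is it safe to tear down the hw decoder.
    hwdec_uninit(hwdec);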
Makes all keys documented in XF86keysym.h mappable. This requires the
user to deal with numeric keycodes; no names are queried or exported.
This is an easy way to avoid adding all the hundreds of XF86 keys to
our X11 lookup table and mpv's keycode/name list.
Filters which merely wrap libavfilter (for user-compatibility) like
vf_gradfun had a "lavfi-enable" suboption, which could disable
libavfilter usage. Since none of these filters has an internal
implementation anymore, this was completely useless.
Remove the confusing crap that allowed a filter using the libavfilter
bridge to be compiled without libavfilter. Instead, compile the wrappers
only if libavfilter is enabled at compile time.
The only filter which still requires it is vf_stereo3d (unfortunately).
Special-case this one. (The whole filter and how it interacts with lavfi
is pure braindeath anyway.)
This reverts commit a33b46194c.
It turns out FFmpeg really considers this a bug, and fixed it by making
the decoder output the correct pixel format.
Fixes #1565. Reverts the fix for #1528, though it should work fine with
a recent git master FFmpeg.
The intention is that we can better test vo_opengl with high bit depth
PNGs. This throws libswscale completely out of the loop; it was
previously needed to convert from big endian to little endian.
Also apply a minimal cleanup to fmt-conversion.c (unrelated).
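(The conversion in question is just a per-component 16-bit byte swap; a
generic illustration:)

    #include <stdint.h>

    // 16-bit big endian -> little endian, per component:
    static uint16_t swap16(uint16_t v)
    {
        return (uint16_t)((v >> 8) | (v << 8));
    }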
This introduces a new option linear-scaling, which is now implied by
srgb, icc-profile and sigmoid-upscaling.
Notably, this means (sigmoidized) linear upscaling is now enabled by
default in opengl-hq mode. The impact should be negligible, and no
negative side effects of sigmoidized scaling have been observed, so it
feels safe to do so.
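For example, to enable it by hand (assuming the usual vo_opengl
suboption syntax):

    mpv --vo=opengl:linear-scaling FILE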
Apparently CoreGraphics reports the actual refresh rate. DisplayLink
can also query the nominal refresh rate of the display, so we use that
as a fallback instead of the fugly 60fps hardcode added in aeb1fca0d.
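Schematically (a sketch using the public CoreGraphics/CoreVideo calls;
variable names assumed, error handling trimmed):

    double rate = CGDisplayModeGetRefreshRate(mode);
    if (rate == 0.0) { // CG wouldn't tell us; ask DisplayLink instead
        CVDisplayLinkRef link;
        if (CVDisplayLinkCreateWithCGDisplay(did, &link) == kCVReturnSuccess) {
            CVTime t = CVDisplayLinkGetNominalOutputVideoRefreshPeriod(link);
            if (!(t.flags & kCVTimeIsIndefinite))
                rate = (double)t.timeScale / t.timeValue;
            CVDisplayLinkRelease(link);
        }
    }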
Props to people on https://github.com/glfw/glfw/issues/137
The comment explains why I was so hesitant about adding this. The Apple
docs say CGDisplayModeGetRefreshRate is supposed to work only for CRTs,
but that's not the case: it also works for LCD TVs connected over HDMI
and for external displays (at least that's what I'm told; I don't have
the hardware to test). Maybe the Apple docs are incorrect.
Since AFAIK Apple doesn't want to give us a better API – maybe for fear
we might be able to actually write some useful software instead of
"apps" – I decided to stop caring as well and commit this.
This reverts the default behavior introduced in commit 93feffad. Way
too often libavcodec will return RGB data that has an alpha channel
according to the pixel format, but actually contains garbage.
On the other hand, this will actually render garbage color values in
e.g. PNG files (for pixels with alpha==0, the color value should be
essentially ignored, which is what the old alpha blend mode did).
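(With standard alpha blending, out = alpha*src + (1 - alpha)*dst, so
the source color contributes nothing wherever alpha == 0.)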
This "fixes" #1528, which is probably a decoder bug (or far less likely,
a broken file).
Make the lazy gamma initialization less weird, and make the default
value of the "gamma" sub-option 1.0. This means --vo=opengl:help will
list the actual default value.
Also change the lower bound to 0.1, which avoids a division by zero (I
don't know how shaders handle NaN, but it's probably not a good idea to
feed them one).
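The problem case, schematically (plain C standing in for the shader):

    #include <math.h>

    // The adjustment is an exponent of 1/gamma, so gamma == 0 divides
    // by zero and ends up feeding the shader inf/NaN.
    static float apply_gamma(float v, float gamma)
    {
        return powf(v, 1.0f / gamma);
    }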
There was some code accounting for different gamma values for R/G/B.
It's inherited from an old, undocumented MPlayer feature, which I
disabled for convenience at some point (meaning you couldn't actually
set separate gamma, because it was removed from the property interface -
mp_csp_copy_equalizer_values() just set them all to the same value).
Get rid of these meaningless leftovers.