This automatically sets the gamma option depending on lighting conditions
measured from the computer's ambient light sensor.
sRGB – arguably the “sibling” to BT.709 for still images – has a reference
viewing environment defined in its specification (IEC 61966-2-1:1999, see
http://www.color.org/chardata/rgb/srgb.xalter). According to this data, the
assumed ambient illuminance is 64 lux. This is the illuminance where the gamma
that results from ICC color management is correct.
On the other hand, BT.1886 formalizes the gamma level for dim environments
to be 2.40, and Apple resources (WWDC 2012, Session 523: Best practices for
color management) define the BT.1886 dim environment at 16 lux.
So the logic we apply is:
* >= 64 lux -> 1.961 gamma
* <= 16 lux -> 2.400 gamma
* 16 lux < x < 64 lux -> logarithmic rescale of lux to gamma (see the
  sketch below). Human perception of illuminance roughly follows a
  logarithmic scale of lux [1].
[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/dd319008%28v=vs.85%29.aspx
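A minimal sketch of this mapping (standalone C, not mpv's actual function;
the clamping and interpolation follow the rules above):

    #include <math.h>

    // Map ambient illuminance (lux) to display gamma: clamp to 2.400
    // below 16 lux and 1.961 above 64 lux, interpolate on a log10 scale
    // in between.
    static float lux_to_gamma(float lux)
    {
        const float lux_lo = 16.0f, gamma_lo = 2.400f;
        const float lux_hi = 64.0f, gamma_hi = 1.961f;
        if (lux <= lux_lo)
            return gamma_lo;
        if (lux >= lux_hi)
            return gamma_hi;
        float t = (log10f(lux) - log10f(lux_lo))
                / (log10f(lux_hi) - log10f(lux_lo));
        return gamma_lo + t * (gamma_hi - gamma_lo);
    }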
This will be pretty useful to let mpv automatically change VO parameters based
on ambient lighting conditions.
The conversion code and the polynomial equation from Apple LMU values to
lux are taken from Firefox. Their license, the MPL, is GPL-compatible and
allows relicensing to GPL (the MPL is the more liberal license).
This warning wasn't overly helpful in the past, and warned against
perfectly fine code. But at least with recent gcc versions, this is the
warning that complains about assignments in if expressions (why???), so
we want to enable it.
Also change all the code this warning complains about for no reason.
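For illustration (do_thing() and handle() are hypothetical, not from this
change), this is the kind of construct such a warning flags, and the
conventional way to appease it:

    int do_thing(void);            /* hypothetical */
    void handle(int);              /* hypothetical */

    void example(void)
    {
        int err;
        if (err = do_thing())      /* warning: assignment used as condition */
            handle(err);
        if ((err = do_thing()))    /* extra parentheses signal intent */
            handle(err);
    }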
Semi-important, because --hwdec=dxva2 outputs NV12, and we really don't
want people to end up with the "old" StretchRect method.
Unfortunately, I couldn't actually get it to work. It seems most D3D
drivers (including the wine D3D implementation) reject D3DFMT_A8L8,
and I could not find any other 2-channel 8 bit Direct3D 9 format. It
seems newer D3D APIs have DXGI_FORMAT_R8G8_UNORM, but there's no way
to get it in D3D9.
Still pushing this; maybe it actually works on some drivers.
Breaks vo_opengl by default. I'm not able to fix this myself, because I
have no clue about the overcomplicated color management logic. Also,
while this is apparently caused by commit fbacd5, the following commits
all depend on it, so revert them too.
This reverts the following commits:
e141caa97d653b0dd529729c8b3f64fbacd5de31

Fixes #1636.
Just use makeFirstResponder on the mpv events view from client code
if you need the built-in keyboard events (this is easier for dealing with view
nesting).
We have MP_CSP_TRC defined, but it wasn't being used by practically
anything. This commit adds missing conversion logic, adds it to
mp_image, and moves the auto-guessing logic to where it should be, in
mp_image_params_guess_csp (and out of vo_opengl).
Note that this also fixes a minor bug: csp_prim was not copied between
mp_image structs if the format was not YUV in both cases. That was
wrong; the primaries are always relevant.
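As a rough sketch of the guessing logic (field names and the exact
defaults here are assumptions for illustration, not necessarily mpv's):

    // Fill in unset colorimetry parameters with sensible defaults.
    void mp_image_params_guess_csp(struct mp_image_params *p)
    {
        if (p->colorspace == MP_CSP_AUTO)
            p->colorspace = (p->w >= 1280 || p->h > 576)
                            ? MP_CSP_BT_709 : MP_CSP_BT_601;  // HD vs. SD
        if (p->primaries == MP_CSP_PRIM_AUTO)
            p->primaries = MP_CSP_PRIM_BT_709;
        if (p->gamma == MP_CSP_TRC_AUTO)
            p->gamma = MP_CSP_TRC_BT_1886;
    }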
In the past it happened quite often that flag options (yes/no) were
changed to choice options (yes/no/some more). The problem with this was
that while flag options don't need a parameter, this wasn't the case
with choice options. A hack was introduced to compensate for this:
M_OPT_OPTIONAL_PARAM was set on the option, and an empty string ("") was
added as a choice, so that the choice could be used like a flag. So, for
example, "--mute" would set the choice "".
Fix this by 1. not requiring a parameter if there's a "yes" choice, and
2. redirect an empty parameter to "yes". The effect is that a choice
option with the choices ["yes", "no"] is pretty much equivalent to a
flag option.
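A simplified sketch of the new parsing rule (names invented for
illustration, not mpv's actual option parser):

    #include <string.h>

    // Return the selected choice, mapping an empty/missing parameter to
    // "yes" so that a ["yes", "no"] choice behaves like a flag.
    static const char *parse_choice(const char **choices, int n,
                                    const char *param)
    {
        if (!param || !param[0])
            param = "yes";                 // rule 2: empty means "yes"
        for (int i = 0; i < n; i++) {
            if (strcmp(choices[i], param) == 0)
                return choices[i];
        }
        return NULL;                       // no such choice
    }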
This is based on pretty much the same (somewhat naive) logic right now.
I'm not convinced that the extra logic that e.g. madVR includes is
worthwhile enough to justify heavily complicating the logic here.
This shouldn't slow down the logic at all in any sane shader compiler,
and indeed it doesn't on any shader compiler that I tested.
Note that this currently doesn't affect cscale at all, due to the weird
implementation details of that.
Change test_fbo() so that it checks the FBO lazily, and restructure
check_gl_features() to invoke it only if we know that a FBO will be
needed for a certain enabled feature.
This can avoid strange error messages when using --vo=opengl and the
FBO format does not work. It's also less confusing when reading the
verbose log (users might think that the FBO is actually used, etc.).
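Conceptually, the lazy check amounts to this (a sketch with assumed
field and helper names, not the actual diff):

    // Probe the FBO format only once, and only when a feature needs it.
    static bool have_working_fbo(struct gl_video *p)
    {
        if (!p->fbo_tested) {
            p->fbo_tested = true;
            p->fbo_works = test_fbo(p);  // creates and probes a test FBO
        }
        return p->fbo_works;
    }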
This can happen with the "no-colorkey" suboption. Then the code in
xv_draw_colorkey() can be run before vo_x11_config_vo_window(), when
vo_gc is not allocated yet.
Fixes#1629.
Instead of rendering and upscaling each video frame on every vsync, this
version of the algorithm only draws them once and caches the result,
so the only operation that has to run on every vsync is a cheap linear
interpolation, plus CMS/dithering.
On my machine, this is a huge speedup for 24 Hz content (on a 60 Hz
monitor), up to 120% faster. (The speedup falls short of the ideal 250%
because of the overhead that the larger FBOs and CMS add.)
In terms of the implementation, this commit basically swaps
interpolation and upscaling - upscaling is moved to inter_program, and
interpolation is moved to the final_program.
Furthermore, the main bulk of the frame rendering logic (upscaling etc.)
was moved to a separate function, which is called from
gl_video_interpolate_frame only if it's actually necessary, and
skipped otherwise.
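A hedged sketch of the per-vsync path (helper names are made up; the
point is that only the cheap mix runs on every vsync):

    struct cached_frame { GLuint tex; double pts; };  // upscaled, in an FBO

    static void draw_on_vsync(struct cached_frame *prev,
                              struct cached_frame *next, double vsync_pts)
    {
        // mix coefficient: 0.0 at prev->pts, 1.0 at next->pts
        float mix = (vsync_pts - prev->pts) / (next->pts - prev->pts);
        bind_texture(0, prev->tex);           // assumed helper
        bind_texture(1, next->tex);           // assumed helper
        set_uniform_f("inter_coeff", mix);    // assumed helper
        run_pass("final_program");            // linear mix + CMS/dithering
    }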
GLES2 randomly does not support the transpose parameter in matrix
uniform calls. So we have to do this manually. Sure it was worth
mutilating the standard just so all these shitty SoC vendors can save 3
lines of code.
(Obviously trying to handle all of GLES2 to GL 4.x in a single codebase
was a mistake.)
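The manual transpose amounts to something like this (a sketch, assuming
GL headers and a row-major 3x3 input):

    static void set_mat3(GLint loc, const float m[9])
    {
        float t[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                t[3 * j + i] = m[3 * i + j];      // transpose on the CPU
        glUniformMatrix3fv(loc, 1, GL_FALSE, t);  // GLES2: must be GL_FALSE
    }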
GLES2 shaders do not have line continuation characters. Abuse the
HAVE_ARRAYS define to exclude code which uses arrays, and which also
happens to cover all code that defines multi-line macros. (So yes, this
is a hack.)
This is essentially what it is, and it's useful for windowing or
downscaling. For upscaling we already have bilinear; no need to cause
extra confusion between bilinear and bilinear_slow.
Also made it a bit better behaved.
These are EWA-based versions of the Keys B/C splines, of which mitchell
is already a member. They are slightly softer and slightly sharper than
mitchell, respectively.
Very easy to define in terms of things we already have.
mitchell, hermite and catmull_rom are all B/C splines and can share the
code which was already written for mitchell. This just redefines them in
terms of that.
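For reference, the shared Mitchell-Netravali B/C kernel (a sketch using
the standard published weights; mpv's code may be structured differently):

    #include <math.h>

    // mitchell: B = C = 1/3; catmull_rom: B = 0, C = 1/2; hermite: B = C = 0.
    static double bc_spline(double b, double c, double x)
    {
        x = fabs(x);
        if (x < 1.0) {
            return ((12 - 9*b - 6*c) * x*x*x
                  + (-18 + 12*b + 6*c) * x*x
                  + (6 - 2*b)) / 6.0;
        } else if (x < 2.0) {
            return ((-b - 6*c) * x*x*x
                  + (6*b + 30*c) * x*x
                  + (-12*b - 48*c) * x
                  + (8*b + 24*c)) / 6.0;
        }
        return 0.0;
    }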
This adds a small check for candidates that could potentially be inside
the radius, but aren't necessarily. This speeds up performance by a
negligible amount on my hardware, but it's mainly a prerequisite for a
further change (using a larger true radius).
Previously, this was based on some arbitrary range 1-100, cut off for
no particular reason, and also defined in such a way that higher values
= *less* smoothness. Since it wasn't multiplied by e in the code, the
default had to be 10*e = 28.8539...
Now, it's sane: 1.0 = default, higher = blurrier.
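One way to express the new convention (the exact formula here is an
assumption for illustration, not necessarily the code's):

    #include <math.h>

    // param == 1.0 gives the default width; larger values blur more.
    static double gaussian(double x, double param)
    {
        return exp(-(x * x) / param);  // width scales directly with param
    }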
This filter isn't supposed to have a second parameter in the first
place; all the literature uses a single parameter alpha in both places.
The second parameter doesn't even do anything other than adding a
constant factor, which is normalized by the LUT calculation either way.
This is done mainly for consistency, since all of the EWA filters share
similar properties and it's important to distinguish them for
documentation purposes.
No point in duplicating this check all over the place. No point in
really having it in the first place, to be perfectly honest; j1 should
not be THAT badly behaved.
The original filter window was designed for a radius based on the true
zero, but we always cut it off at our selected radius either way (by
necessity, due to the square matrix we sample from).
This window is tweaked from the original (true radius) to our actual
cut-off radius, and hence improves the result in a few edge cases. The
main win is the reduction of code complexity, since we no longer need to
know what the true radius actually is.
Check the scanf() return value, and don't continue if it doesn't find
both numbers (can happen with GLES 1.0). Also, some implementations can
return NULL from glGetString() if something is "broken".
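The defensive pattern looks roughly like this (a sketch, assuming GL
headers and <stdio.h>):

    const char *version = (const char *)glGetString(GL_VERSION);
    if (!version)
        return;                    // "broken" implementations
    int major = 0, minor = 0;
    if (sscanf(version, "%d.%d", &major, &minor) != 2)
        return;                    // e.g. GLES 1.0 version strings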
This is a variation of ewa_lanczos that is sinc-windowed instead of
jinc-windowed. Results are pretty similar, but the logic is simpler.
This could potentially replace the ugly ewa_lanczos code.
It's hard to tell, but from comparing stills I think this one has
slightly less ringing than regular ewa_lanczos.
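The kernel difference is small; a sketch (sinc/jinc defined here for
self-containment, with j1() from <math.h> as provided by POSIX):

    #include <math.h>

    static double sinc(double x)
    {
        return x == 0 ? 1.0 : sin(M_PI * x) / (M_PI * x);
    }

    static double jinc(double x)
    {
        return x == 0 ? 1.0 : 2.0 * j1(M_PI * x) / (M_PI * x);
    }

    // jinc base with a sinc window instead of a jinc window:
    static double ewa_sinc_windowed(double x, double radius)
    {
        return jinc(x) * sinc(x / radius);
    }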
I've reworked pretty much all the logic to correspond to what the theory
actually describes. With this commit, playback is wonderfully smooth on
my machine.
Makes all keys documented in XF86keysym.h mappable. This requires the
user to deal with numeric keycodes; no names are queried or exported.
This is an easy way to avoid adding all the hundreds of XF86 keys to
our X11 lookup table and mpv's keycode/name list.
This reverts commit a33b46194c.
It turns out FFmpeg really considers this a bug, and fixed it by making
the decoder output the correct pixel format.
Fixes#1565. Reverts the fix#1528, though it should work fine with
a recent git master FFmpeg.
This introduces a new option linear-scaling, which is now implied by
srgb, icc-profile and sigmoid-upscaling.
Notably, this means (sigmoidized) linear upscaling is now enabled by
default in opengl-hq mode. The impact should be negligible, and there
has been no observation of negative side effects of sigmoidized scaling,
so it feels safe to do so.
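A hypothetical invocation enabling it explicitly (suboption syntax as
used by vo_opengl at the time):

    mpv --vo=opengl:linear-scaling video.mkv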