This warning wasn't overly helpful in the past, and warned against
perfectly fine code. But at least with recent gcc versions, this is the
warning that complains about assignments in if expressions (why???), so
we want to enable it.
Also change all the code this warning complains about for no reason.
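To illustrate the pattern the warning flags (a made-up example, not from the changed code):

    // Warned: a plain assignment used as the if condition.
    if (ret = read_packet(ctx))
        handle_packet(ctx, ret);

    // Accepted: the extra parentheses mark the assignment as intentional.
    if ((ret = read_packet(ctx)))
        handle_packet(ctx, ret);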
Semi-important, because --hwdec=dxva2 outputs NV12, and we really don't
want people to end up with the "old" StretchRect method.
Unfortunately, I couldn't actually get it to work. It seems most D3D
drivers (including the wine D3D implementation) reject D3DFMT_A8L8,
and I could not find any other 2-channel 8 bit Direct3D 9 format. It
seems newer D3D APIs have DXGI_FORMAT_R8G8_UNORM, but there's no way
to get it in D3D9.
Still pushing this; maybe it actually works on some drivers.
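For reference, the kind of format probe involved looks roughly like this (a sketch, not the actual vo code; the adapter format and usage flags are illustrative):

    #include <stdbool.h>
    #include <d3d9.h>

    // Ask the driver whether it accepts a texture format; most drivers
    // (including wine's D3D) fail this for D3DFMT_A8L8.
    static bool check_texture_format(IDirect3D9 *d3d, D3DFORMAT fmt)
    {
        HRESULT hr = IDirect3D9_CheckDeviceFormat(d3d, D3DADAPTER_DEFAULT,
                                                  D3DDEVTYPE_HAL,
                                                  D3DFMT_X8R8G8B8, 0,
                                                  D3DRTYPE_TEXTURE, fmt);
        return SUCCEEDED(hr);
    }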
This time (there are a lot of times), libswscale randomly ignores
brightness/saturation/contrast settings.
Looking at MPlayer code, it appears the return value of
sws_setColorspaceDetails() signals if changing these settings is
supported at all.
(Nevermind that supporting this feature has almost 0 value, and
obviously eats maintenance time.)
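The check boils down to something like this (a sketch, not the actual mpv code; the wrapper function is made up):

    #include <stdbool.h>
    #include <libswscale/swscale.h>

    // sws_setColorspaceDetails() returns a negative value if the context
    // cannot apply these settings at all.
    static bool sws_supports_equalizer(struct SwsContext *sws)
    {
        const int *table = sws_getCoefficients(SWS_CS_DEFAULT);
        // Neutral values: brightness 0, contrast/saturation 1.0 in 16.16.
        return sws_setColorspaceDetails(sws, table, 0, table, 0,
                                        0, 1 << 16, 1 << 16) >= 0;
    }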
Breaks vo_opengl by default. I'm not able to fix this myself, because I
have no clue about the overcomplicated color management logic. Also,
while this is apparently caused by commit fbacd5, the following commits
all depend on it, so revert them too.
This reverts the following commits:
e141caa97d653b0dd529729c8b3f64fbacd5de31

Fixes #1636.
Just use makeFirstResponder on the mpv events view from client code
if you need the built-in keyboard events (this is easier for dealing with view
nesting).
This relies on upstream support in lavc, and will hence basically not
work at all. The intent is to get support for writing this information
into ffmpeg's PNG encoders etc.
We have MP_CSP_TRC defined, but it wasn't being used by practically
anything. This commit adds missing conversion logic, adds it to
mp_image, and moves the auto-guessing logic to where it should be, in
mp_image_params_guess_csp (and out of vo_opengl).
Note that this also fixes a minor bug: csp_prim was not being copied
between mp_image structs if the format was not YUV in both cases, but
this is wrong - the primaries are always relevant.
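Roughly the kind of default that now lives in mp_image_params_guess_csp (a sketch; the field name, helper, and the exact defaults picked are illustrative, not verbatim from the code):

    // If the transfer characteristics are unknown, fall back to a common
    // default: sRGB for RGB content, BT.1886 for YUV content.
    if (params->gamma == MP_CSP_TRC_AUTO) {
        params->gamma = params_are_rgb(params)   // hypothetical helper
                      ? MP_CSP_TRC_SRGB
                      : MP_CSP_TRC_BT_1886;
    }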
In the past it happened quite often that flag options (yes/no) were
changed to choice options (yes/no/some more). The problem with this was
that while flag options don't need a parameter, this wasn't the case
with choice options. A hack was introduced to compensate for this:
M_OPT_OPTIONAL_PARAM was set on the option, and an empty string ("") was
added as a choice, so that the option could be used like a flag. So, for
example, "--mute" would set the choice "".
Fix this by 1. not requiring a parameter if there's a "yes" choice, and
2. redirecting an empty parameter to "yes". The effect is that a choice
option with the choices ["yes", "no"] is pretty much equivalent to a
flag option.
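In parser terms the two rules amount to roughly this (a sketch, not the actual m_option code; choice_has() is a made-up helper):

    // 1. The parameter is optional iff the choice list contains "yes".
    // 2. An empty parameter is treated as "yes".
    if (!param || !param[0]) {
        if (!choice_has(opt, "yes"))
            return M_OPT_MISSING_PARAM;
        param = "yes";
    }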
This is based on pretty much the same (somewhat naive) logic right now.
I'm not convinced that the extra logic that e.g. madVR includes is
worthwhile enough to warrant heavily confusing the logic for it.
This shouldn't slow down the logic at all in any sane shader compiler,
and indeed it doesn't on any shader compiler that I tested.
Note that this currently doesn't affect cscale at all, due to the weird
implementation details of that.
Change test_fbo() so that it checks the FBO lazily, and restructure
check_gl_features() to invoke it only if we know that an FBO will be
needed for a certain enabled feature.
This can avoid strange error messages when using --vo=opengl and the
FBO format does not work. It's also less confusing when reading the
verbose log (users might think that the FBO is actually used, etc.).
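Schematically (a sketch of the pattern; everything besides test_fbo() and check_gl_features() is made up):

    // Probe the configured FBO format only on first use.
    static bool test_fbo(struct gl_video *p)
    {
        if (!p->fbo_tested) {                    // hypothetical flag
            p->fbo_tested = true;
            p->fbo_works = create_test_fbo(p);   // hypothetical helper
        }
        return p->fbo_works;
    }

    // In check_gl_features(): only enabled features that actually need an
    // FBO call it; otherwise the probe (and its error message) never runs.
    if (need_indirect_scaling && !test_fbo(p))
        disable_feature(p, "indirect scaling");  // hypothetical helper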
This can happen with the "no-colorkey" suboption. Then the code in
xv_draw_colorkey() can be run before vo_x11_config_vo_window(), when
vo_gc is not allocated yet.
Fixes #1629.
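The guard amounts to something like this (a sketch; the actual drawing code is omitted):

    static void xv_draw_colorkey(struct vo *vo, int x, int y, int w, int h)
    {
        struct vo_x11_state *x11 = vo->x11;
        // With "no-colorkey" this can run before vo_x11_config_vo_window(),
        // i.e. before vo_gc has been allocated.
        if (!x11->vo_gc)
            return;
        /* ... fill the colorkey rectangle as before ... */
    }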
Instead of rendering and upscaling each video frame on every vsync, this
version of the algorithm only draws them once and caches the result,
so the only operation that has to run on every vsync is a cheap linear
interpolation, plus CMS/dithering.
On my machine, this is a huge speedup for 24 Hz content (on a 60 Hz
monitor), up to 120% faster. (The speedup is not quite 250% because of
the overhead that the larger FBOs and CMS provide.)
In terms of the implementation, this commit basically swaps
interpolation and upscaling - upscaling is moved to inter_program, and
interpolation is moved to the final_program.
Furthermore, the main bulk of the frame rendering logic (upscaling etc.)
was moved to a separate function, which is called from
gl_video_interpolate_frame only if it's actually necessary, and
skipped otherwise.
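The per-vsync path now looks roughly like this (pseudocode-level sketch; the names are illustrative):

    // Only when a new video frame arrives: run the expensive pass once
    // (upscaling etc., i.e. inter_program) and cache the result in an FBO.
    if (frame_changed)
        render_frame_to_fbo(p, &p->surfaces[cur], frame);

    // Every vsync: blend the cached surfaces (final_program) and apply
    // CMS/dithering. This is cheap compared to re-upscaling each time.
    draw_interpolated(p, &p->surfaces[prev], &p->surfaces[cur], mix);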
GLES2 randomly does not support the transpose parameter in matrix
uniform calls. So we have to do this manually. Sure, it was worth
mutilating the standard just so all these shitty SoC vendors can save 3
lines of code.
(Obviously trying to handle all of GLES2 to GL 4.x in a single codebase
was a mistake.)
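Concretely, instead of passing transpose=GL_TRUE, the matrix is flipped by hand before the upload, along these lines (assuming a row-major 3x3 matrix m and a uniform location loc; gl is mpv's loaded function table):

    // GLES2 requires transpose == GL_FALSE, so swap rows and columns
    // ourselves before handing the matrix to GL.
    float t[9];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            t[3 * i + j] = m[3 * j + i];
    gl->UniformMatrix3fv(loc, 1, GL_FALSE, t);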
GLES2 shaders do not have line continuation characters. Abuse the
HAVE_ARRAYS define to exclude code which uses arrays, and which also
happens to cover all code that defines multi-line macros. (So yes, this
is a hack.)
Remove coded_width and coded_height. This was originally added in commit
fd7dde40, when BITMAPINFOHEADER was killed. The separate fields became
redundant in commit e68f4be1. Remove them (nothing passed to the
decoders actually changes with _this_ commit).
This is essentially what it is, and it's useful for windowing or
downscaling. For upscaling we already have bilinear, no need to cause
extra confusion between bilinear and bilinear_slow.
Also made it a bit more well-behaved.
These are EWA-based versions of the Keys B/C splines, of which mitchell
is already a member. They are slightly softer and slightly sharper than
mitchell, respectively.
Very easy to define in terms of things we already have.
mitchell, hermite and catmull_rom are all B/C splines and can share the
code which was already written for mitchell. This just redefines them in
terms of that.
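For reference, all three reduce to the standard Mitchell-Netravali B/C weight, only with different (B, C) pairs: mitchell (1/3, 1/3), catmull_rom (0, 1/2), hermite (0, 0). The standard formula, as a sketch:

    #include <math.h>

    static double bc_spline(double b, double c, double x)
    {
        x = fabs(x);
        if (x < 1.0)
            return ((12 - 9*b - 6*c) * x*x*x
                  + (-18 + 12*b + 6*c) * x*x
                  + (6 - 2*b)) / 6.0;
        if (x < 2.0)
            return ((-b - 6*c) * x*x*x
                  + (6*b + 30*c) * x*x
                  + (-12*b - 48*c) * x
                  + (8*b + 24*c)) / 6.0;
        return 0.0;
    }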
This adds a small check for candidates that could potentially be inside
the radius, but aren't necessarily. This speeds up performance by a
negligible amount on my hardware, but it's mainly a prerequisite for a
further change (using a larger true radius).
Previously, this was based on some arbitrary range 1-100, cut off for
no particular reason, and also defined in such a way that higher values
= *less* smoothness. Since it wasn't multiplied by e in the code, the
default had to be 10*e = 28.8539...
Now, it's sane: 1.0 = default, higher = blurrier.
This filter isn't supposed to have a second parameter in the first
place, all literature only uses a single parameter alpha in both places.
The second parameter doesn't even do anything other than adding a
constant factor, which is normalized by the LUT calculation either way.
This is done mainly for consistency, since all of the EWA filters share
similar properties and it's important to distinguish them for
documentation purposes.
No point in duplicating this check all over the place. No point in
really having it in the first place, to be perfectly honest; j1 should
not be THAT badly behaved.
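For context, the only place j1 enters is the jinc function the EWA filters are built on (a sketch):

    #include <math.h>

    // jinc(x) = 2*J1(pi*x)/(pi*x); j1() is the Bessel function from libm.
    static double jinc(double x)
    {
        if (fabs(x) < 1e-8)
            return 1.0;
        x *= M_PI;
        return 2.0 * j1(x) / x;
    }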
The original filter window was designed for a radius based on the true
zero, but we always cut it off at our selection of radius either way (by
necessity, due to the square matrix we sample from).
This window is tweaked from the original (true radius) to our actual
cut-off radius, and hence improves the result in a few edge cases. The
main win is the reduction of code complexity, since we no longer need to
know what the true radius actually is.
Check the scanf() return value, and don't continue if it doesn't find
both numbers (can happen with GLES 1.0). Also, some implementations can
return NULL from glGetString() if something is "broken".
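The check boils down to something like this (a sketch; mpv goes through its own GL wrapper and format strings):

    const char *version = (const char *)glGetString(GL_VERSION);
    int major = 0, minor = 0;
    // glGetString() can return NULL on broken implementations, and a
    // GLES 1.0 version string may not yield both numbers here.
    if (!version || sscanf(version, "%d.%d", &major, &minor) != 2)
        return;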
This is a variation of ewa_lanczos that is sinc-windowed instead of
jinc-windowed. Results are pretty similar, but the logic is simpler.
This could potentially replace the ugly ewa_lanczos code.
It's hard to tell, but from comparing stills I think this one has
slightly less ringing than regular ewa_lanczos.
I've reworked pretty much all the logic to correspond to what the theory
actually describes. With this commit, playback is wonderfully smooth on
my machine.
A recent behavior change in libavcodec's h264 decoder keeps at least 1
surface even after avcodec_flush_buffers() has been called. We used to
flush the decoder in order to make sure all surfaces are freed, so that
the hw decoder can be safely uninitialized. This doesn't work anymore.
Fix it by closing the AVCodecContext before the hw decoder is
uninitialized. This is actually simpler and more robust. It seems to be
well-supported too.
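The teardown order is now roughly (a sketch; the field names follow vd_lavc but are not verbatim):

    // Close the decoder first: this is what actually releases any surfaces
    // the decoder still holds (flushing alone no longer guarantees that).
    avcodec_close(ctx->avctx);

    // Only afterwards tear down the hw decoder state.
    if (ctx->hwdec && ctx->hwdec->uninit)
        ctx->hwdec->uninit(ctx);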
Fixes invalid read accesses with vaapi-copy and dxva2-copy. These
destroyed the hwdec API fully on uninit, and could not deal with
surfaces surviving the decoder.
Probably fixes #1587.
Makes all keys documented in XF86keysym.h mappable. This requires the
user to deal with numeric keycodes; no names are queried or exported.
This is an easy way to avoid adding all the hundreds of XF86 keys to
our X11 lookup table and mpv's keycode/name list.