Should take care of the planned FFmpeg AV_PIX_FMT_P010 addition. (This
will eventually be needed when doing HEVC Main 10 decoding with DXVA2
copyback.)
Commit 3909e4cd ended up losing the ability to tune the gaussian window,
which this commit trivially reintroduces.
The constant scaling factor (present in the code copied from glumpy)
also goes against the filter_kernels.c convention that f(0.0) = 1 (the
invoking code takes care of normalization), so it has been removed.
The parameter values of the new gaussian function correspond to
different functions than the same values did in the old version.
Translating an old value p1 to a new value p2 requires solving
2^(e/p1) = e^(2/p2), which gives
p2 = p1 * 2/(e * ln(2)) ≈ p1 * 1.0615
In other words, to get the old default in the new function requires
setting scale-param1 to 1.0615. (The new function is *slightly* sharper
by default)
(Though most users should probably not notice the change)
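As a small illustration, the translation described above can be written
as a one-line helper (hypothetical, not part of the commit; M_E and
M_LN2 are the usual math.h constants):
    #include <math.h>
    // p2 = p1 * 2 / (e * ln(2)), i.e. roughly p1 * 1.0615
    static double translate_gaussian_param(double p1)
    {
        return p1 * 2.0 / (M_E * M_LN2);
    }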
To get a uniform license for this file, relicense the mpv parts to BSD
as well.
But leave the door open for a later change to LGPL. (All non-Glumpy code
was written within mpv, and all mpv authors have agreed to LGPL
relicensing.)
Closes #2688.
This file claims to be based on the "MPlayer VA-API patch", but this is
untrue. Only some glue code was copied from hwdec_vaglx.c, and this glue
code was never in MPlayer or the MPlayer VA-API patch in any form, and
instead part of the mpv-original way we do hardware decoding OpenGL
interop. The EGL interop method didn't exist at the time the MPlayer
VA-API patch was created either.
This commit replaces code based on AGG, taken from this source file:
http://vector-agg.cvs.sourceforge.net/viewvc/vector-agg/agg-2.5/include/agg_image_filters.h
The intention is that filter_kernels.c can be relicensed to LGPL or BSD.
Because the AGG author died, full replacement is the only way to achieve
it.
This affects only some filter functions. These are exclusively
mathematical functions for computing filter coefficients. (Other parts
in filter_kernels.c were originally written by me, with heavy additions
and refactoring done by other mpv contributors.) While the code is
mostly just well-known mathematical formulas written down in C form,
AGG copyright could perhaps be claimed anyway.
To remove the AGG code, I replaced it with the filter functions from:
https://github.com/glumpy/glumpy/blob/master/glumpy/library/build-spatial-filters.py
These functions conveniently compute exactly the same thing in mpv,
Glumpy, AGG (and about anything that will filter images using the same
mathematical principles).
First I ported the Python code in the file to C. Then I replaced all
functions in filter_kernels.c with this code that could be replaced.
Then I investigated whether the remaining functions were based on AGG
code and took appropriate action:
hanning(), hamming(), quadric(), bicubic(), kaiser(), blackman(),
spline16(), spline36(), gaussian(), sinc() were taken straight from
Glumpy.
For sinc(), re-add the "fabs(x) < 1e-8" check, which was added in commit
586dc557 for unknown reasons.
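For reference, the function is roughly the following (the actual mpv
kernel functions additionally take a filter parameter struct):
    static double sinc(double x)
    {
        if (fabs(x) < 1e-8)
            return 1.0;
        x *= M_PI;
        return sin(x) / x;
    }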
gaussian() loses its filter parameter for some reason. (Well, who cares,
not my problem.)
The really awkward thing is that the text for hanning() and hamming()
does not change. In theory these functions are now based on Glumpy code,
but it seems like this can be neither proven nor denied. (The same
happened in some other cases with at least a few lines of code.)
sphinx() was added in commit 586dc557, and looks suspiciously like
sinc() as well. Replace the first 3 lines of the body with the ported
function (of which 2 lines do not change; the first uses code only in
mpv, and the second is just "return 1.0;"). The 4th line is only similar
on an abstract level (and that because of the mathematical relation
between these functions). Although the original sinc() was probably used
as template for it, with the other lines replaced, I don't think you
could make the claim that it falls under AGG copyright.
jinc() was added in commit 26baf5b9, but the code for it might be based
on sinc(). Rewrite it based on the "new" sinc(). Some of the same
remarks as with sphinx() apply.
cubic_bc() was ported from Glumpy's Mitchell(). (As far as I'm aware,
with the default parameters it's called "the" Mitchell-Netravali filter,
but in mpv this function is used to generate a whole group of filters.)
spline64() was added in commit a8b67c66, and was probably derived from
spline36(). Re-derive it from the "new" spline36().
triangle() could be considered derived from the original bilinear().
This is what it looked like in the original commit:
    static double bilinear(kernel *k, double x)
    {
        return 1.0 - x;
    }
This _might_ be based on AGG's image_filter_bilinear:
    struct image_filter_bilinear
    {
        static double radius() { return 1.0; }
        static double calc_weight(double x)
        {
            return 1.0 - x;
        }
    };
Considering that the "framework" was written by me, and the only part
from AGG taken is "return 1.0 - x;", and this part is trivial and was
later thoroughly replaced, this is probably not under the AGG copyright.
I'm hoping this doesn't introduce regressions. But the main focus is not
being productive anyway, and I didn't rigorously check for unintended
changes in functionality.
Apparently, the firmware will ignore pixel_x/pixel_y if the numeric
value of them gets too high (even if they indicate square pixel aspect
ratio). Even worse, the destination rectangle is ignored completely,
and the video frame is simply stretched to the screen. I suspect this
is an overflow or weird sanity check within the firmware.
Work around this by limiting the fields to 16000, which is an arbitrary
but apparently working limit.
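A minimal sketch of what the workaround amounts to (field names are
illustrative, not the actual firmware API):
    // Scale both terms down until they fit the arbitrary 16000 limit;
    // halving both keeps the aspect ratio (approximately) intact.
    while (pixel_x > 16000 || pixel_y > 16000) {
        pixel_x = (pixel_x + 1) / 2;
        pixel_y = (pixel_y + 1) / 2;
    }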
GLSL in GLES 2.0 did not have line continuation in its preprocessor.
This broke shader compilation. It also broke subtitle rendering in
vo_rpi, which reuses some of the OpenGL code.
Line continuation was finally added in GLES 3.0, which is perhaps the
reason why ANGLE accepted it.
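For illustration, this is the kind of construct that triggered the
problem - a macro in the shader source continued across lines with a
backslash (macro name made up for the example):
    #define SAMPLE_TEX(tex, pos) \
        texture2D(tex, pos)
GLSL ES 1.00 has no line continuation character, so this fails to
compile on GLES 2.0, while GLSL ES 3.00 accepts it.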
Untested, but should be fine. Broken by commit 0a0bb905.
Also fix the include statement in context_rpi.c, which caused another
compilation failure. Also untested. (Because I'm lazy.)
Fixes #2638.
gcc 4.8 does not support C11 thread local storage. This is a bit
annoying, so add a hack to use the gcc specific __thread extension if
C11 TLS is not available.
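A minimal sketch of the fallback (the configure test and macro names
here are made up; the real ones may differ):
    #if HAVE_C11_TLS
    #define MP_THREAD_LOCAL _Thread_local
    #elif defined(__GNUC__)
    #define MP_THREAD_LOCAL __thread
    #else
    #error "no thread local storage available"
    #endif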
(This is used for the extremely silly mpv-internal way hwdec modules
access some platform specific handles. Disabling it simply made
hwdec_vaegl.c always fail initialization.)
Fixes #2631.
Add a "blend-tiles" choice to the "alpha" sub-option. This is pretty
simplistic and uses the GL raster position to derive the tiles. A weird
consequence is that using --vo=opengl and --vo=opengl-hq gives different
scaling behavior (screenspace pixel size vs. source video pixel size
16x16 tiles), but it seems we don't have easy access to the original
texture coordinates. Using the rasterpos is probably simpler.
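For illustration, the background the shader produces is essentially the
following, computed per pixel from the raster position (values and
names are made up; the actual shader differs):
    static float checker_background(int x, int y)
    {
        // 16x16 tiles, alternating between two gray levels
        int tile = ((x / 16) + (y / 16)) & 1;
        return tile ? 0.75f : 0.5f;
    }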
Make this option the default.
long is 64 bits on x86_64 on Linux, which means the check for the corner
case of computing the depth mask is wrong.
Also, X11 compositors seem to expect premultiplied alpha.
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
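For reference, the two conventions are related by DAR = SAR * width /
height. As an integer-fraction sketch (type and function names are
hypothetical):
    struct frac { int num, den; };
    // SAR = DAR * height / width; the caller would reduce the fraction.
    static struct frac dar_to_sar(int w, int h, struct frac dar)
    {
        return (struct frac){ dar.num * h, dar.den * w };
    }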
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
Since alpha isn't pulled through the colormatrix (maybe it should?), we
reject alpha formats with odd sizes, such as yuva444p10.
But the awful tex_mul path in vo_opengl does this anyway (at some points
even explicitly), which means there will be a subtle difference in
handling of 16 bit yuv alpha formats. Make it consistent and always
apply the range adjustment to the alpha component. This also means odd
sizes like 10 bit are supported now.
This assumes alpha uses the same "shifted" range as the yuv color
channels for depths larger than 8 bit. I'm not sure whether this is
actually the case.
Now common.c only contains the code for the function loader, while
context.c contains the backend loader/dispatcher.
Not calling it "backend.c", because the central struct is called
MPGLContext.
This is used for dithering, although I'm not aware of anyone who got
higher than 8 bit depth support to work on Linux.
Also put this into egl_helpers.c. Since EGL is pseudo-portable at best I
have no hope that the EGL context creation code in all the backends can
be fully shared. But some self-contained functionality can definitely be
shared.
Store the determined framebuffer depth in struct GL instead of
MPGLContext. This means gl_video_set_output_depth() can be removed, and
also justifies adding new fields describing framebuffer/backend
properties to struct GL instead of having to add more functions just to
shovel the information around.
Keep in mind that mpgl_load_functions() will wipe struct GL, so the
new fields must be set before calling it.
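A sketch of how an EGL-based backend might query the depth from its
EGLConfig and store it (the struct GL field names are assumptions):
    EGLint r = 0, g = 0, b = 0;
    eglGetConfigAttrib(display, config, EGL_RED_SIZE, &r);
    eglGetConfigAttrib(display, config, EGL_GREEN_SIZE, &g);
    eglGetConfigAttrib(display, config, EGL_BLUE_SIZE, &b);
    gl->fb_red_bits = r;    // hypothetical field names
    gl->fb_green_bits = g;
    gl->fb_blue_bits = b;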
Although the source file is named w32.c, the backend name was "win"
until recently. It was accidentally changed to "w32"; fix it.
Fixes #2608 (the manual is correct).
When a Direct3D 9Ex device fails to reset, it gets put into the lost
state, so set the lost_device flag and don't attempt to present until
the device moves out of that state. Failure to recreate the size-
dependent objects should set lost_device as well, since we shouldn't try
to present in that state.
Also, it looks like I was too eager to remove code that sets priv
members to NULL and I accidentally removed some that was needed.
Direct3D doesn't like 0-sized swapchain dimensions, even when those
dimensions are automatically set. Manually set them to a size that isn't
zero instead.
Why not.
Also, instead of disabling hue/saturation for RGB, just don't apply
them. (They don't make sense for conversion matrixes other than YUV, but
I can't be bothered to keep the fine-grained disabling of UI controls
either.)
WGL_NV_DX_interop is widely supported by Nvidia and AMD drivers. It
allows a texture to be shared between Direct3D and WGL, so that
rendering can be done with WGL and presentation can be done with
Direct3D. This should allow us to work around some persistent WGL
issues, such as dropped frames with some driver/OS combos, drivers that
buffer frames to increase performance at the cost of latency, and the
inability to disable exclusive fullscreen mode when using WGL to render
to a fullscreen window.
The addition of a DX_interop backend might also enable some cool
Direct3D-specific enhancements in the future, such as using the
GetPresentStatistics API to get accurate frame presentation timestamps.
Note that due to a driver bug, this backend is currently broken on
Intel. It will appear to work as long as the window is not resized too
often, but after a few changes of size it will be unable to share the
newly created renderbuffer with GL. See:
https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/562051
With default setting, the matrix for fruit dithering requires 12 bits
precision (values from 0/4096 to 4095/4096). But 16-bit float
provides only 10 bits. In addition, when `dither-size-fruit=8` is
set, 16 bits are required from the texture format.
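To see why: the largest matrix entry, 4095/4096, is 0.111111111111 in
binary (12 fractional bits). A half float has a 10-bit mantissa (11
significant bits), so that value rounds to 1.0; the matrix values can
therefore not all be represented, while a 16-bit normalized integer
texture holds them exactly.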
Fix this by attempting to use 16 bit integer texture first. This is
still not precise, but should be better than using a half float.
The recent LUT adjustment changes broke interpolation.
The concatenation of the shader stages is a bit messy, and it seems like
sampler_prelude is not a good place to add this macro. Always add the
macro to every shader instead. (While this doesn't seem too elegant,
this isn't too inelegant either, and gets these problems out of
way.)
The computation of the tex_mul variable was broken in multiple ways.
This variable is used e.g. by debanding for moving expansion of 10 bit
fixed-point input to normalized range to another stage of processing.
One obvious bug was that the rgb555 pixel format was broken. This format
has component_bits=5, but obviously it's already sampled in normalized
range, and does not need expansion. The tex_mul-free code path avoids
this by not using the colormatrix. (The code was originally designed to
work around dealing with the generally complicated pixel formats by only
using the colormatrix in the YUV case.)
Another possible bug was with 10 bit input. It expanded the input by
bringing the [0,2^10) range to [0,1], and then treating the expanded
input as 16 bit input. I didn't bother to check what this actually
computed, but it's somewhat likely it was wrong anyway. Now it uses
mp_get_csp_mul(), and disables expansion when computing the YUV matrix.
It turns out that with accurate lookup we can decrease the default
size of the texture now. Do it to compensate for the performance loss
introduced by the LUT_POS macro.
Define a macro to correct the coordinates used for lookup textures.
Cache the corrected coordinate for the 1D filter case and use mix() to
minimize the performance impact.
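The idea of the macro, expressed as a plain C sketch (the actual GLSL
macro may differ in form):
    // Map a [0,1] coordinate to the centers of the first and last
    // texel of an N-entry lookup texture, so sampling never
    // interpolates with the clamped border texel.
    static double lut_pos(double x, int lut_size)
    {
        double lo = 0.5 / lut_size, hi = 1.0 - 0.5 / lut_size;
        return lo + x * (hi - lo);
    }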
It always was a weird artifact - VOCTRLs are meant _not_ to require
special handling in the code that passes them through (like in vo.c).
Removing it is also interesting to further reduce the dependency of
backends on struct vo. Just get rid of it.
Removing it is somewhat inconvenient, because in many situations the UI
window is created after the first VOCTRL_UPDATE_WINDOW_TITLE. This means
these backends have to store it in a new field in their own context.
If the sampling point is placed diagonally, the radius difference
could be as large as sqrt(2.0). And a loosened check with (radius - 1)
would potentially include pixels out of the range.
Fix the check to handle these corner cases properly to avoid
unnecessary texture lookup and improve the performance a bit.
There are claims that nnedi3.c doesn't constitute its own new
implementation, but is derived from existing HLSL or OpenCL shaders
distributed under the LGPLv3 license.
Until these are resolved, do the "correct" thing and require
--enable-gpl3 to build nnedi.
I guess gl_video->global was originally meant to be optional, but now it
crashes in some newer code with vo_opengl_cb, which tries to init it
with this field set to NULL (because normally it's not needed).
Probably fixes #2542.
Don't use the average FPS if there are likely skipped vsyncs. Note that
we don't use the normal skip detection, as it is unreliable if the real
and assumed display FPS differ too much. The normal skip detection is
still in place as it's more reliable in the case when vsync jitters
much, but the display FPS is relatively exact.
Further improvement over commit 41f2c653.
At least I hope so.
Deriving the duration from the pts was not really correct. It doesn't
include speed adjustments, and becomes completely wrong if the user e.g.
changes the playback speed by a huge amount. Pass through the accurate
duration value by adding a new vo_frame field.
The value for vsync_offset was not correct either. We don't need the
error for the next frame, but the error for the current one. This wasn't
noticed because it makes no difference in symmetric cases, like 24 fps
on 60 Hz.
I'm still not entirely confident in the correctness of this, but it sure
is an improvement.
Also, remove the MP_STATS() calls - they're not really useful to debug
anything anymore.
This was just converting back and forth between int64_t/microseconds and
double/seconds. Remove this stupidity. The pts/duration fields are still
in microseconds, but they have no meaning in the display-sync case (also
drop printing the pts field from opengl/video.c - it's always 0).
If we switched away from the system FPS, we were remaining in this mode
essentially forever. There's no reason to do so; switch back if the
estimated FPS gets worse again. Improvement over commit 41f2c653.
Commit 12eb8b2d accidentally disabled framedropping in the audio timing
case. It tried to replace the last_flip field with the prev_vsync one,
which didn't work because prev_vsync is reset to 0 if the timing code is
used. Fix it by always setting it properly. This field must (or should)
be reinitialized to something sensible when switching to display sync
timing mode; since prev_vsync is not reset anymore, the check when to
reinitialize this field has to be adjusted as well.
It's a bit weird that update_vsync_timing_after_swap() now does some
minor work for timing mode too, but I guess it's ok, if only to avoid
additional fields and timer calls.
Switch away from the system-reported display FPS (returned by the VO
backends, or forced with --display-fps) if it is too imprecise, i.e. the
implied frame duration deviates by more than 1% from the measured one.
This works if the display FPS is off by almost 1 (typical for
old/bad/broken OS APIs). Actually it even works if the FPS is
completely wrong.
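A sketch of the criterion (variable names are illustrative):
    double reported = 1e6 / display_fps;        // reported frame duration
    double measured = estimated_vsync_interval; // averaged from timestamps
    bool use_estimated_fps = fabs(measured - reported) / reported > 0.01;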
Is it a good idea? Probably not. It might be better to only output a
warning message. But unless there are reports about it going terribly
wrong, I'll go with this for now.
Actually I'm not content with the detection added in commit 44376d2d. It
triggers too often if vsync is very jittery. It's easy to avoid this: we
add more samples to the detection by reusing the drift computation loop.
If there's a significant step in the drift, we consider it a drop.
This adds basic support for ICC profiles. Per-monitor profiles are
supported. WCS profiles are not supported, but there is an API for
converting WCS profiles to ICC, so they might be supported in future.
I'm just not sure if anyone actually uses them.
Reloading the ICC profile when it's changed in the control panel is also
not supported. This might be possible by using the WCS APIs and watching
the registry for changes, but there is no official API for it, and as
far as I can tell, no other Windows programs can do it.
Return the estimated/ideal flip time to the timing logic (meaning
vo_get_delay() returns a smoothed out time). In addition to this add
some lame but working drift compensation. (Useful especially if the
display FPS is wrong by a factor such as 1.001.)
Also remove some older leftovers. The vsync_interval_approx and
last_flip fields are redundant or unneeded.
For the vo-delayed-frame-count property.
Slightly less dumb than the previous one (which was removed earlier),
but still pretty dumb. But this also seems to be relatively robust, even
with strong vsync jittering.
This logic was kind of questionable anyway, and --display-sync should
give much better results. (I would even go as far as saying that the
FPS-dependent framedrop code made things worse in some situations. Not
all, though.)
This is simply the average refresh rate. Including "bad" samples is
actually an advantage, because the property exists only for
informational purposes, and will reflect problems such as the driver
skipping a vsync.
Also export the standard deviation of the vsync frame duration
(normalized to the range 0-1) as vsync-jitter property.
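Roughly, the exported value is computed like this (a sketch, not the
actual code):
    double mean = 0, var = 0;
    for (int i = 0; i < n; i++)
        mean += samples[i] / n;             // measured vsync intervals
    for (int i = 0; i < n; i++)
        var += (samples[i] - mean) * (samples[i] - mean) / n;
    double vsync_jitter = sqrt(var) / nominal_vsync_interval;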
The OSD takes up an entire fullscreen dispmanx layer. Although the GPU
should be able to handle it (possibly even without any disadvantages),
it'll still be useful for debugging performance issues.
This is a hack, but unfortunately the DwmGetCompositionTimingInfo
heuristic does not work in all cases (with multiple-monitors on Windows
8.1 and even with a single monitor in Windows 10.) See the comment in
mp_w32_is_in_exclusive_mode() for more details.
It should go without saying that if any better method of doing this
reveals itself, this hack should be dropped.
It doesn't have any real purpose anymore. Up until now, it was still
implemented by vo_wayland, but since we changed how the frame callbacks
work, even that appears to be pointless.
Originally, the plan was to somehow extend this mechanism to all
backends and to magically fix frame scheduling, but since we can't hope
for proper mechanisms even on wayland, this idea looks way less
interesting.
The D3D9 backend does not support GLES 3, which makes it pretty useless.
But it still might be a legitimate replacement of vo_direct3d.c on
Windows 7 machines.
Note that we could just use:
eglGetDisplay(EGL_D3D11_ELSE_D3D9_DISPLAY_ANGLE)
But for now I'll leave the old code. Maybe this can exclude use of
software rendering backends (EGL_PLATFORM_ANGLE_DEVICE_TYPE_WARP_ANGLE).
Since I'm not sure, I won't touch it.
Running mpv with default config will now pick up ANGLE by default. Since
some think ANGLE is still not good enough for hq features, extend the
"es" option to reject GLES backends, and add to to the opengl-hq preset.
One consequence is that mpv will by default use libswscale to convert
10 bit video to 8 bit, before it reaches the VO.
I decided that I actually can't stand how vo_opengl unnecessarily puts
the video through 3 shader stages (instead of 1). Thus, what was meant
to be a fallback for weak OpenGL implementations, the dumb-mode, now
becomes the default if the user settings allow it.
The code required to check for the settings isn't so wild, so I guess
it's manageable. I still hope that one day, our rendering logic can
generate ideal shader stages for this case too.
Note that in theory, dumb-mode could be reenabled at runtime due to a
color management 3D LUT being set, so a separate dumb_mode field is
required. The dumb-mode option can't just be overwritten.
Unfortunately, color management can still not work, because no GLES
version specified so far supports fixed-point 16 bit textures. Maybe
we could use integer textures, but these don't support filtering.
Using float textures would be another possibility.
Polar scalers use 1D textures, because they're slightly faster on some
GPUs than 2D textures. But 2D textures work too, so add support for
them.
Allows using these scalers with ANGLE.
Just like commit f9a2fc59. There are probably some more such cases.
The vec2 constructor calls are probably fine, but don't bother with
confusing inconsistencies.
While desktop GL's glTexImage2D() essentially accepts anything, GLES is
much stricter. The combination of allowed formats/types/internal formats
is exactly specified. The GLES 3.0.4 specification lists them in
table 3.2. (The ANGLE API validation code references this table.)
The table could probably be extended into a general declarative table
about GL formats covering other uses, but this would be a big
non-trivial project, so don't bother and accept a minor degree
of duplication with other tables.
Note that the format and type do (or should) not matter here, because
no image data is transferred to the GPU.
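For example, this is one of the triples that is valid per that table on
GLES 3.0 (passing NULL, since, as noted, no data is uploaded here):
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
                 GL_RED, GL_UNSIGNED_BYTE, NULL);
On desktop GL, many other format/type combinations would also be
accepted for the same internal format.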
We don't only need float textures for advanced scaling - we also need
them to be filterable with GL_LINEAR. On GLES, this is not supported
until GLES 3.1, but some implementations expose them with extensions.
This makes advanced scaling sort-of work for GLES 3.0 (on ANGLE). It's
still not very advisable, as 8 bits might not be enough to avoid
debanding. (Ironically, the debanding filter can be enabled, and does
not raise any GL errors - but probably doesn't do anything useful.)
Turns out glGetTexLevelParameter, which is missing in ANGLE, is a
GLES3.1 function. Removing it from the list of core GLES3 functions
makes ANGLE work in GLES3 mode.
ANGLE is a GLES2 implementation for Windows that uses Direct3D 11 for
rendering, enabling vo_opengl to work on systems with poor OpenGL
drivers and bypassing some of the problems with native GL, such as VSync
in fullscreen mode.
Unfortunately, using GLES2 means that most of vo_opengl's advanced
features will not work. However, ANGLE is under rapid development and
GLES3 support is supposed to be coming soon.
Something goes wrong somewhere. Don't bother, it's only needed for
compatibility with our absolute baseline (GL 2.1/GLES 2).
On the other hand, we can process nv12 formats just fine.
For the sake of vaapi interop, we want to use EGL, but on the other
hand, because driver developers are full of shit, vdpau interop will
not work on EGL (even if the driver supports EGL). The latter happens
with both nvidia and AMD Mesa drivers.
Additionally, EGL vaapi interop support can apparently only be detected at
runtime by actually using it. While hwdec_vaegl.c already does this, it
would require initializing libva on _every_ system, which will cause
libva to print an unpreventable bullshit message to the terminal.
Try to counter these huge loads of bullshit by adding more fucking
bullshit.
We want the following behavior:
- VO probed, backend probed: only accept non-sw, fail completely
otherwise
- VO forced, backend probed: use the first non-sw, or if none is found,
fall back to the first working sw backend
- VO probed, backend forced: (I don't care about this case)
- VO forced, backend forced: just use that backend
Also, on backend probe failure the vo->probed field was left in its old
state.
This adds support for the progress indicator taskbar extension
that was introduced with Windows 7 and Windows Server 2008 R2.
I don’t like this solution because it keeps its own state and
introduces another VOCTRL, but I couldn’t come up with anything
less messy.
closes #2399
In the display-sync, non-interpolation case, and if the display refresh
rate is higher than the video framerate, we duplicate display frames by
rendering exactly the same screen again. The redrawing is cached with a
FBO to speed up the repeat.
Use glBlitFramebuffer() instead of another shader pass. It should be
faster.
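The repeat path is then essentially a blit from the cached FBO to the
target (a sketch; the real code also handles the flipped coordinate
system mentioned below):
    glBindFramebuffer(GL_READ_FRAMEBUFFER, cache_fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, target_fbo);
    glBlitFramebuffer(0, 0, vid_w, vid_h,      // source rect (video area)
                      dst_x0, dst_y0, dst_x1, dst_y1,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);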
For some reason, post-process was run again on each display refresh.
Stop doing this, which should also be slightly faster. The only
disadvantage is that temporal dithering will be run only once per video
frame, but I can live with this.
One aspect is messy: clearing the background is done at the start on the
target framebuffer, so to avoid clearing twice and duplicating the code,
only copy the part of the framebuffer that contains the rendered video.
(Which also gets slightly messy - needs to compensate for coordinate
system flipping.)
Currently, vo.c will always continue to render the currently queued
frame, which sets last_flip, which in turn confuses vo_get_delay(),
which in turn will show a bogus A/V desync message on unpause. So just
reset it again on unpause.
I guess the removed code is an old leftover, and makes no sense anymore.
Should fix weird A/V diff dropouts when frames are being dropped with
display-sync.
If the player sends a frame with duration==0 to the VO, it can trivially
underrun. Don't panic, but keep the correct time.
Also, returning the absolute time from vo_get_next_frame_start_time()
just to turn it into a float with relative time was silly. Rename it and
make it return what the caller needs.
"Missed" implies the frame was dropped, but what really happens is that
the following frame will be shown later than intended (due to the
current frame skipping a vsync).
(As of this commit, this property is still inactive and always
returns 0. See git blame for details.)
Apparently Windows treats windows that use OpenGL, cover an entire
screen and have the WS_POPUP style set or are topmost windows as
exclusive fullscreen windows that bypass DWM and cannot be covered
by other windows.
This means we can’t use dwmflush in fullscreen mode, and it also
means that no other window can cover mpv, and it makes the screen
flicker when switching to fullscreen mode.
This can be avoided by not setting the WS_POPUP flag.
Users can still access the old behavior by enabling stay-on-top
(which IMO at least makes sense—now we just need to get dwmflush
autodetection right to avoid nasty surprises).
fixes #2177
This applies to unexpected freezes or deadlocks, not e.g. normal
framedrops. The verbose messages also might remind an API user if the
API usage is incorrect, such as not calling mpv_opengl_cb_draw() when a
redraw request was issued.
The nnedi3 prescaler requires a normalized range to work properly,
but the original implementation did the range normalization after
the first step of the first pass. This could lead to severe quality
degradation when debanding is not enabled for NNEDI3.
Fix this issue by passing `tex_mul` into the shader code.
Fixes #2464
vo_opengl_cb is a special case, because we somehow have to render video
asynchronously, all while "trusting" the API user to do it correctly.
This didn't quite work, and a while ago a compromise using a timeout to
prevent theoretically possible deadlocks was added.
Make it even more synchronous. Basically, go all the way, and
synchronize rendering between VO and user renderer thread to the
full extent possible.
This means the silly frame queue is dropped, and we even attempt to
synchronize the GL SwapBuffer call (via mpv_opengl_cb_report_flip()).
The changes introduced with commit dc33eb56 are effectively dropped. I
don't even remember if they mattered.
In the future, we might make all VOs fetch asynchronously from a frame
queue, which would mostly remove the differences between vo_opengl and
vo_opengl_cb, but this will take a while (if it will even be done).
Pick the correct GLSL version from the GL_SHADING_LANGUAGE_VERSION
string. Might be somewhat questionable, as we expect the minor version
number not to have leading 0s.
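A sketch of the parsing (assuming the usual "X.YY" form; as noted,
leading zeros in the minor version are not handled specially):
    int major = 0, minor = 0, glsl_version = 0;
    const char *s = (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION);
    if (s && sscanf(s, "%d.%d", &major, &minor) == 2)
        glsl_version = major * 100 + minor;   // e.g. "3.30 ..." -> 330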
Should help with cases when the reported GLSL version is much higher
than the equivalent of the reported GL version. This problem was
observed in combination with GL_ARB_uniform_buffer_object, which
can't be used if the declared GLSL version is too low.
Simplifies some auto detection matters.
I _still_ don't want to remove the lazy loading mechanism, because it's
still slightly useful for filters using the hwdec APIs. My main
motivation for not always preloading them is actually that libva prints
random useless crap to the terminal with no way to prevent this.
Notes:
- Unfortunately the only way to talk to EGL from within DRM I could find
involves linking with GBM (generic buffer management for Mesa.)
Because of this, I'm pretty sure it won't work with proprietary NVidia
drivers, but then again, last time I checked NVidia didn't offer
proper screen resolution for VT.
- VT switching doesn't seem to work at all. It's worth mentioning that
using vo_drm before introduction of VT switcher had an anomaly where
user could switch to another VT and input text to it, while video
played on top of that VT. However, that isn't the case with drm_egl:
I can't switch to other VT during playback like this. This makes me
think that it's either a limitation coming from my firmware or from
EGL/KMS itself rather than a bug with my code. Nonetheless, I still
left (untestable) VT switching code in place, in case it's useful to
someone else.
- The mode_id, connector_id and device_path should be configurable for
power users and people who wish to watch videos on nonprimary screen.
Unfortunately I didn't see anything that would allow OpenGL backends
to register their own set of options. At the same time, adding them to
global namespace is pointless.
- A few dozens of lines could be shared with vo_drm (setting up VT
switching, most of code behind page flipping). I don't have any strong
opinion on this.
- Sometimes I get minor visual glitches. I'm not sure if there's a race
condition of some sort, an uninitialized variable (doubtful), or a
buggy driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)
- .config and .control are very minimal.
Signed-off-by: wm4 <wm4@nowhere>
glXCreateContextAttribsARB() by design can throw some X11 errors. We
ignore these, but we generally still print error messages to the
terminal. This was confusing/annoying users, so silence it. The stupid
part is that the Xlib error handler is global, so we have to be slightly
careful here.
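One common way to suppress such expected errors looks roughly like this
(a generic sketch; mpv's actual approach may differ, since it already
installs its own handler):
    XErrorHandler old = XSetErrorHandler(silent_error_handler);
    GLXContext ctx = glXCreateContextAttribsARB(dpy, fbconfig, NULL,
                                                True, context_attribs);
    XSync(dpy, False);   // flush so any error arrives while the silent
                         // handler is still installed
    XSetErrorHandler(old);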
They are evil and should be eradicated. Some of these were pretty dumb
anyway.
There are probably some more around in platform specific code or other
code not enabled by default on Linux.
This is based on an older patch by James Ross-Gowan. It was rebased and
cleaned up. Also, the DWM API usage present in the older patch was
removed, because DWM reports nonsense rates at least on Windows 8.1
(they are rounded to integers, just like with the old GDI API - except
the GDI API had a good excuse, as it could report only integers).
Signed-off-by: wm4 <wm4@nowhere>
This simplifies update_screen_rect a bit. Unless --fs-screen=all is
used, it will always get an HMONITOR and call GetMonitorInfo to
determine its dimensions. This will make it easier for the next few
commits to determine the colour profile and the refresh rate from the
HMONITOR.
There is a slight change in behaviour. When selecting a screen that is
out of range, such as --screen=9 on a machine with only two monitors,
the old code would silently select the last existing monitor. The new
code prints an error message and falls back to the default screen (same
as the Cocoa code.)
Signed-off-by: wm4 <wm4@nowhere>
The call to EnumDisplaySettings seems to be a relic from when MPlayer
ran on systems that didn't have GetMonitorInfo or SM_CX/CYVIRTUALSCREEN.
GetMonitorInfo was loaded dynamically, so it was possible for MPlayer to
run without it and use the values returned by EnumDisplaySettings.
These are always present in modern versions of Windows, so the values
returned from EnumDisplaySettings are always overwritten. Remove the
call to EnumDisplaySettings and assume SM_CX/CYVIRTUALSCREEN is always
present.
Signed-off-by: wm4 <wm4@nowhere>
Commit 27dc834f added it as such.
Also remove the check for glUniformBlockBinding() - it's part of an
extension, and the check for glGetUniformBlockIndex() already covers
whether the extension is fully available.
Implement NNEDI3, a neural network based deinterlacer.
The shader is reimplemented in GLSL and supports both 8x4 and 8x6
sampling window now. This allows the shader to be licensed
under LGPL2.1 so that it can be used in mpv.
The current implementation supports uploading the NN weights (up to
51kb with placebo setting) in two different way, via uniform buffer
object or hard coding into shader source. UBO requires OpenGL 3.1,
which only guarantees 16kb per block. But I find that 64kb seems to be
a default setting for recent card/driver (which nnedi3 is targeting),
so I think we're fine here (with default nnedi3 setting the size of
weights is 9kb). Hard-coding into shader requires OpenGL 3.3, for the
"intBitsToFloat()" built-in function. This is necessary to precisely
represent these weights in GLSL. I tried several human readable
floating point number formats (with really high precision, enough for
single precision floats), but for some reason they were not working
nicely; bad pixels (with NaN values) could be produced with some
weights set.
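For reference, emitting a weight so that it round-trips exactly can be
done by printing its bit pattern (a sketch of the idea, not the actual
code):
    union { float f; int32_t i; } u = { .f = weight };
    printf("intBitsToFloat(%d)", (int)u.i);  // GLSL reinterprets the bits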
We could also add support for uploading these weights with a texture,
just for compatibility reasons (e.g. upscaling a still image with a low
end graphics card). But as I tested, it's rather slow even with a 1D
texture (we would probably have to use a 2D texture due to dimension
size limitations). Since there is always a better choice for NNEDI3
upscaling of still images (a vapoursynth plugin), it's not implemented
in this commit. If this turns out to be a popular demand from the
users, it should be easy to add it later.
For those who want to optimize the performance a bit further, the
bottleneck seems to be:
1. the overhead of uploading and accessing these weights (in
particular, the shader code will be regenerated for each frame,
though that happens on the CPU).
2. "dot()" performance in the main loop.
3. "exp()" performance in the main loop, there are various fast
implementation with some bit tricks (probably with the help of the
intBitsToFloat function).
The code is tested with nvidia card and driver (355.11), on Linux.
Closes #2230
Add the Super-xBR filter for image doubling, and the prescaling framework
to support it.
The shader code was ported from the MPDN extensions project, with
modification to process luma only.
This commit is largely inspired by code from #2266, with
`gl_transform_trans()` authored by @haasn taken directly.
The noframe event is logged whenever there is no new frame. This can
happen due to normal redraws, but also due to video frame queue
underflow.
The mpv_opengl_cb_report_flip() API function is currently pretty
useless, because blocking on the video frame queue is more reliable and
simpler. But at least we can log the actual vsync.
next_vsync/prev_vsync was only used to retrieve the vsync duration. We
can get this in a simpler way.
This also removes the vsync duration estimation from vo_opengl_cb.c,
which is probably worthless anyway. (And once interpolation is made
display-sync only, this won't matter at all.)
This affects only the display-sync code path, as for normal timing the
wakeup_pts stuff handles proper wakeup. It's probably mostly a
theoretical issue.
Quoting MSDN: "Notifies the Desktop Window Manager (DWM) to opt in to or
out of Multimedia Class Schedule Service (MMCSS) scheduling while the
calling process is alive.". Whatever this means. (An application can
change the scheduling priority of the window manager?)
Does this improve anything? I have no idea. Certainly this is a program
that does multimedia and graphics, so we seem to be a good match for
this.
Is it bad if we enable this even while playback is inactive or paused? I
have no idea either.
Is there a magic cargo cult function that will mark our renderer thread
as multimedia thing? I have no idea. (We use a function to enable MMCSS
for our audio thread in ao_wasapi.)
Enable it by default, but not unconditionally. Add an "auto" mode, which
disables DwmFlush if the compositor is (probably) inactive. Let's see how
this goes.
Since I accidentally enabled DwmFlush always by default (more or less)
in a previous commit touching this code, this is probably mostly just
cargo-culting, and it's uncertain whether it does anything.
Note that I still got bad vsync behavior when fullscreening mpv, and
making another window visible on the same screen. This happens even if
forcing DWM.
Commit acd5816a broke this. It was stopping playback occasionally.
Another case where the non-display-sync interpolation mode
(in->vsync_timed==true) is causing a lot of subtle issues and will be
removed soon.
Regression since commit 93db4233. I think the bit that was forgotten
here was to remove the vo_w32_config() return value completely. The VO
failed to init because that function always returned 0. This commit
removes these bits and fixes the VO.
Fixes #2434.
Yet another relatively useless option that tries to make OpenGL's sync
behavior somewhat sane. The results are not too encouraging. With a
value of 1, vsync jitter is gone on nVidia, but there are frame drops
(less than with glfinish). With 2, I get the usual vsync jitter _and_
frame drops.
There's still some hope that it might prevent too deep queuing with some
GPUs, I guess.
The timeout for the wait call is 1 second. The value is pretty
arbitrary; it should just not be too high to freeze the process (if
the GPU is un-nice), and not too low to trigger the timeout in normal
cases, even if the GPU load is very high. So I guess 1 second is ok
as a timeout.
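A minimal sketch of the technique (not the actual code; "depth"
corresponds to the new option's value):
    // After each buffer swap, queue a fence; once more than "depth"
    // swaps are in flight, block on the oldest one (1 second timeout).
    fences[num_fences++] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    if (num_fences > depth) {
        glClientWaitSync(fences[0], GL_SYNC_FLUSH_COMMANDS_BIT,
                         1000000000ULL /* 1 second in nanoseconds */);
        glDeleteSync(fences[0]);
        num_fences--;
        memmove(fences, fences + 1, num_fences * sizeof(fences[0]));
    }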
The idea to use fences this way to control the queue depth was stolen
from RetroArch:
df01279cf3/gfx/drivers/gl.c (L1856)
Commit a1315c76 broke this slightly. Frame drops got counted multiple
times, and also vo.c was actually trying to "render" the dropped frame
over and over again (normally not a problem, since frames are always
queued "tightly" in display-sync mode, but could have caused 100% CPU
usage in some rare corner cases).
Do not repeat already dropped frames, but still treat new frames with
num_vsyncs==0 as dropped frames. Also, strictly count dropped frames in
the VO. This means we don't count "soft" dropped frames anymore (frames
that are shown, but for fewer vsyncs than intended). This will be
adjusted in the next commit.
vo_frame.num_vsyncs can be != 1 in some cases in normal sync mode too.
This is not a very exact fix, but in exchange it's robust. (These
vo_frame flags are way too tricky in combination with redrawing and
such.)
There were occasional shader compilation and rendering failures if FBOs
were unavailable. This is caused by the FBO caching code getting active,
even though FBOs are unavailable (i.e. dumb-mode).
Broken by commit 97fc4f.
Fixes #2432.
This was not very reliable.
In the normal vo_opengl case, this didn't deal well enough with vsync
jitter. Vsync timings can jitter quite extremely, up to a whole vsync
duration, in which case the "missed" frame counter keeps growing, even
though nothing is wrong. This behavior also messes up the A/V difference
calculation, but as long as it's within tolerance, it won't provoke
extra frame dropping/repeating. Real misses are harder to detect, and I
might add such detection later.
In the vo_opengl_cb case, this was additionally broken due to the
asynchronity between renderer and VO threads.
This speeds up redraws considerably (improving e.g. <60 Hz material on
a 60 Hz monitor with display-sync active, or redraws while paused), but
slightly slows down the worst case (e.g. video FPS = display FPS).
The IME is not useful for key-bindings. Handle the base ASCII chars
instead and don't show the IME window. For the sake of libmpv users, the
IME should only be disabled on mpv's GUI thread and not application-
wide.
No IME on the GUI thread should also mean that VK_PROCESSKEY will never
have to be handled, so the logic for that can be removed as well.
Older systems have certain EGL extension definitions missing. We
redefine them to make the build system easier, and because it's trivial.
But we forgot to define the EGL_LINUX_DMA_BUF_EXT identifier. (I hope
it's the only missing one.)
The equalizer code as it exists in vo_opengl works perfectly fine. The
situation in vo_opengl_cb is pretty different. The playback thread can't
communicate with the renderer thread synchronously (essentially to give
the API user more flexibility). So the equalizer communication has to be
done in an asynchronous way too.
There were three problems. First, the eq capabilities can change with the
pixel format, and the renderer initializes them on config only. This
means equalizers were disabled on the first config run, and options like
--video-output-levels or --brightness would not work. So we just
initialize the caps with a known superset. The player will not correctly
indicate when setting an eq doesn't work, but we're fine with it, as it
is a relatively cosmetic issue.
Second, it copied back the eq settings in the "wrong" moment (what
for?), which overwrote the settings in some cases.
Third, the eq was not reset correctly on vo init. This is needed to make
it behave the same as vo_opengl.
It's great that the new algorithm supports multiple placebo iterations
and all, but it's really not necessary and hurts performance in the
general case for the sake of the 0.1% that actually pause the screen
and look for minute differences.
Signed-off-by: wm4 <wm4@nowhere>
This reverts commit c10fb4ce9f.
This is already done in vo_wayland.c:resize (line 324); doing it here
makes the window bigger before the video resizes, showing a black area
while dragging the border.
Adds support for AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12,
AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16, AV_PIX_FMT_GBRAP, and
AV_PIX_FMT_GBRAP16.
(Not that it matters, because nobody uses these anyway.)
When seeking, the current frame will have the wrong timestamp, until the
seek is done and a new frame from the seek target is shown. We still use
the "old" frame for redrawing during seeks. This frame has to be
explicitly marked with still=true in order not to confuse the
interpolation queue. If this is not done, it will ignore frames until a
frame with approximately the same timestamp as the "old" frame is
reached. (Does this mean interpolation handles timestamp resets
incorrectly? I have no idea.)
Of course we also have to clear possibly queued frames on seeks.
Also, when pausing, explicitly trigger a frame redraw.
Newer nVidia drivers support EGL, but they seem to work badly,
apparently don't support some needed features or not in a form we want
(such as swap control), and vdpau interop is not available. Disable it
by default, because I'm tired of explaining this issue.
Can be reverted as soon as nVidia release working drivers.
This fixes a regression since commit f4d62da8. The original code ran
vo_cocoa_config_window() once without creating the window, which had the
effect that the last part of the function was run at least once before
the actual window was created. Fix the regression by moving it to before
the window is created.
The regression itself is hard to describe. One test case: start mpv from
a fullscreened terminal window. It should switch to another desktop,
with the mpv window visible. This didn't happen anymore.
This parameter has been unused for years (the last flag was removed in
commit d658b115). Get rid of it.
This affects the general VO API, as well as the vo_opengl backend API,
so it touches a lot of files.
The VOFLAGs are still used to control OpenGL context creation, so move
them to the OpenGL backend code.
This gets rid of an old hack, VOFLAG_HIDDEN. Although handling of it has
been sane for a while, it used to cause much pain, and is still
unintuitive and weird even today.
The main reason for this hack is that OpenGL selects a X11 Visual for
you, and you're supposed to use this Visual when creating the X window
for the OpenGL context. Which means the X window can't be created early
in the common X11 init code, but the OpenGL code needs to do something
before that. API-wise you need separate functions for X11 init and X11
window creation. The VOFLAG_HIDDEN hack conflated window creation and
the entrypoint for resizing on video resolution change into one
function, vo_x11_config_vo_window(). This required all platform backends
to handle this flag, even if they didn't need this mechanism.
Wayland still uses this for minor reasons (alpha support?), so the
wayland backend must be changed before the flag can be entirely removed.
If interpolation is enabled, then this causes heavy artifacts if done
while unpaused. It's preferable to allow a latency of a few frames for
the change to take full effect instead. If this is done paused, the
frame is fully redrawn anyway.
This reverts commit d11184a256.
Unfortunately, there was a lot of unexpected resistance.
Do note that this is still extremely slow, crappy, etc.
Note that vo_x11.c was further edited. Compared to the removed vo_x11.c,
an additional ~200 lines of code was removed in order to simplify it. I
tried to strip it down as much as possible. In particular, support for
odd non-32 bit formats (24, 16, 15, 8 bit) is dropped.
Closes #2300.
The vf_format suboption is replaced with --video-output-levels (a global
option and property). In particular, the parameter is removed from
mp_image_params. The mechanism is moved to the "video equalizer", which
also handles common video output customization like brightness and
contrast controls.
The new code is slightly cleaner, and the top-level option is slightly
more user-friendly than as vf_format sub-option.
It doesn't deal with VDA at all anymore. Rename it to hwdec_osx.c. Not
using hwdec_videotoolbox.c, because that would give it the longest
source path in this project yet. (Also, this code isn't even
VideoToolbox-specific, other than the name of the pixel format used.)
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA not
useable with older FFmpeg versions in some newer mpv releases.
VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
There are at least 2 ways of using VAAPI without X11 (Wayland, DRM).
Remove the X11 requirement from the decoder part and the EGL interop.
This will be used by a following commit, which adds Wayland support.
The worst about this is the decoder part, which includes a bad hack for
using the decoder without any VO interop (also known as "vaapi-copy"
mode). Separate the X11 parts so that they're self-contained. For the
EGL interop code we do something similar (it's kept slightly simpler,
because it essentially only has to translate between our silly
MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
It looks like my hope that we can unconditionally include EGL headers in
the OpenGL code is not coming true, because OSX does not support EGL at
all. So I prefer loading the VAAPI EGL/GL specific extensions manually,
because it's less of a mess. Partially reverts commit d47dff3f.
While EGL 1.4 seemed a bit ambiguous about this to me, it actually says
quite clearly that core functions are not supported with
eglGetProcAddress() in the following paragraph.
Normally, we prefer GLX on X11. But for the VAAPI EGL interop, we
obviously want EGL. Since nvidia does not provide EGL with desktop GL
yet, we can leave it to the autoprobing. Just make sure some failure
messages don't unnecessarily show up in the nvidia case.
This breaks VAAPI GLX interop by default, but I don't care much. If
you use --hwdec=auto (which you should if you want hw decoding), this
should fallback to vaapi-copy instead.
Probe the surface format, and check whether it's really something we
support. This also does a complete check whether the EGL interop works
at all (the only way to find this out is actually running this code).
Also, support YV12. Under some circumstances, vaapi (with Intel
drivers) can be made to use this format.
Unfortunately, the Intel drivers show some very weird behavior, which
is hopefully a bug. insane_hack() provides a very evil workaround (see
comments). A proper solution might be passing the hw format as part of
mp_image_params, but as long as hw surfaces appear to be able to change
the format on the fly, attempting this is probably not worth the extra
complexity and likely fragility. The hack allows us to pretend that
there is sane behavior for now.
Broken by commit d47dff3f. If something is going to include EGL.h,
header_fixes.h has to know. This definitely affected vo_rpi, and
probably affects wayland builds (with x11egl disabled) as well.
Checking and resetting the VAImage.buf field is non-sense, even if it
happened to work out in the normal case. buf is actually freed when
vaDestroyImage() is called (not quite intuitive), and we need an extra
field to know whether vaReleaseBufferHandle() has to be called.
Add the upstream symbolic names as comments. Normally, these should be
defined in libdrm's drm_fourcc.h header. But DRM_FORMAT_R8 and
DRM_FORMAT_GR88 are not defined anywhere, except in the kernel userland
headers of Linux 4.3 (!). We don't want mpv to depend on bleeding-edge
Linux kernel headers, so this will have to do.
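For reference, the definitions in question (as found in newer
drm_fourcc.h versions):
    #define DRM_FORMAT_R8    fourcc_code('R', '8', ' ', ' ')
    #define DRM_FORMAT_GR88  fourcc_code('G', 'R', '8', '8')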
Also, just for completeness, add fourccs for the 3 and 4 channel
formats. I didn't manage to test them, though.
Reduces the amount of hardcoded assumptions about the layout
drastically. (Now adding yuv420 support would be just adjusting an if,
if you ignore the other problems, such as determining the hw format at
all early enough.)
Don't call eglDestroyImageKHR() on the same ID possibly more than once.
Clear the image reference on termination, or we would leak up to 1 image
per VO recreation.
Should work much better than the old GLX interop code. Requires Mesa 11,
and explicitly selecting the X11 EGL backend with:
--vo=opengl:backend=x11egl
Should it turn out that the new interop works well, we will try to
autodetect EGL by default.
This code still uses some bad assumptions, like expecting surfaces to be
in NV12. (This is probably ok, because virtually all HW will use this
format. But we should at least check this on init or so, instead of
failing to render an image if our assumption doesn't hold up.)
This repo was a lot of help: https://github.com/gbeauchesne/ffvademo
The kodi code was also helpful (the magic FourCC it uses for
EGL_LINUX_DRM_FOURCC_EXT are nowhere documented, and
EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does
not actually exist).
(This is the 3rd VAAPI GL interop that was implemented in this player.)
The VAAPI EGL interop code will need access to the X11 Display. While
GLX could return it from the current GLX context, EGL has no such
mechanism. (At least no standard one supported by all implementations.)
So mpv makes up such a mechanism.
For internal purposes, this is a rather awkward solution, but it's
needed for libmpv anyway.
These extensions use a bunch of EGL types, so we need to include the EGL
headers in common.h to use our GL function loader with this.
In the future, we should probably require presence of the EGL headers to
reduce the hacks. This might be not so simple at least with OSX, so for
now this has to do.
Surfaces used by hardware decoding formats can be mapped exactly like a
specific software pixel format, e.g. RGBA or NV12. p->image_params is
supposed to be set to this format, but it wasn't.
(How did this ever work?)
Also, setting params->imgfmt in the hwdec interop drivers is pointless
and redundant. (Change them to asserts, because why not.)
This is a pseudo-OpenGL extension for letting libmpv query native
windowing system handles from the API user. (It uses the OpenGL
extension mechanism because I'm lazy. In theory it would be nicer to let
the user pass them with mpv_opengl_cb_init_gl(), but this would require
a more intrusive API change to extend its argument list.)
The naming of the extension and associated function was unnecessarily
Windows specific (using "D3D"), even though it would work just fine for
other platforms. So deprecate the old names and introduce new ones. The
old ones still work.
This turns the old scalers (inherited from MPlayer) into a pre-
processing step (after color conversion and before scaling). The code
for the "sharpen5" scaler is reused for this.
The main reason MPlayer implemented this as scalers was perhaps because
FBOs were too expensive, and making it a scaler allowed implementing
this in 1 pass. But unsharp masking is not really a scaler, and I would
guess the result is more like combining bilinear scaling and unsharp
masking.
Window classes are process-wide (or at least DLL-wide), so you can't
have 2 classes with the same name. Our code attempted to do this when
for example 2 libmpv instances were created within the same process.
This failed, because RegisterClassEx() fails if the class already
exists.
Fix this by ignoring RegisterClassEx() errors. If the class can really
not be registered, we will fail on CreateWindowEx() instead. Of course
we also can't unregister the class, as another thread might be using it.
Windows will free the class automatically if the DLL is unloaded or the
process terminates.
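One way to do the ignoring, slightly stricter than discarding the
return value outright (sketch only; "wc" stands for the class
description):
    if (!RegisterClassExW(&wc) &&
        GetLastError() != ERROR_CLASS_ALREADY_EXISTS)
    {
        // A genuine failure; the subsequent CreateWindowEx() call will
        // fail and report the error.
    }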
Fixes #2319 (hopefully).
2 things are being stupid here: Apple for requiring rectangle textures
with their IOSurface interop for no reason, and OpenGL having a
different sampler type for rectangle textures.
The removal of source-shader is a side effect, since this effectively
replaces it - and the video-reading code has been significantly
restructured to make more sense and be more readable.
This means users no longer have to constantly download and maintain a
separate deband.glsl installation alongside mpv, which was the only real
use case for source-shader that we found either way.
This is mostly to cut down somewhat on the amount of code bloat in
video.c by moving out helper functions (including scaler kernels and
color management routines) to a separate file.
It would certainly be possible to move out more functions (eg. dithering
or CMS code) with some extra effort/refactoring, but this is a start.
Signed-off-by: wm4 <wm4@nowhere>
Instead of the other way around of disabling disallowed options. This is
more robust and also slightly simpler, at least conceptually. If new
vo_opengl features are added, they don't need to be explicitly disabled
for dumb-mode just to avoid accidentally breaking it.
Sigh...
Hopefully this code will be completely unnecessary one day, as it's
only needed due to the sub-option parser craziness.
Move dumb_mode to the top of the struct, so the C universal initializer
doesn't cause warnings with all those broken compilers.
The single path optimization, rendering the video in one shader pass and
without FBO indirections, was removed some commits ago. It didn't have a
place in this code, and caused considerable complexity and maintenance
issues.
On the other hand, it still has some worth, such as for use with
extremely crappy hardware (GLES only or OpenGL 2.1 without FBO
extension). Ideally, these use cases would be handled by a separate VO
(say, vo_gles). While cleaner, this would still cause code duplication
and other complexity.
The third option is making the single-pass optimization a completely
separate code path, with most vo_opengl features disabled. While this
does duplicate some functionality (such as "unpacking" the video data
from textures), it's also relatively unintrusive, and the high quality
code path doesn't need to take it into account at all. On another
positive note, this "dumb-mode" could be forced in other cases where
OpenGL 2.1 is not enough, and where we don't want to care about versions
this old.
This change makes vo_opengl slightly less compatible (ancient devices
without FBOs will no longer work) and decreases performance in the
simplest case (vo=opengl), in exchange for significantly reducing code
complexity and making everything easier to reason about.