This is based on an older patch by James Ross-Gowan. It was rebased and
cleaned up. Also, the DWM API usage present in the older patch was
removed, because DWM reports nonsense rates at least on Windows 8.1
(they are rounded to integers, just like with the old GDI API - except
the GDI API had a good excuse, as it could report only integers).
Signed-off-by: wm4 <wm4@nowhere>
This simplifies update_screen_rect a bit. Unless --fs-screen=all is
used, it will always get an HMONITOR and call GetMonitorInfo to
determine its dimensions. This will make it easier for the next few
commits to determine the colour profile and the refresh rate from the
HMONITOR.
There is a slight change in behaviour. When selecting a screen that is
out of range, such as --screen=9 on a machine with only two monitors,
the old code would silently select the last existing monitor. The new
code prints an error message and falls back to the default screen (same
as the Cocoa code).
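For illustration, the core of that path looks roughly like this (a
minimal sketch, not the literal mpv code; the fallback handling is
simplified):

    #include <windows.h>

    /* Sketch: resolve the target screen to an HMONITOR and query its
     * dimensions. MONITOR_DEFAULTTONEAREST provides the "default
     * screen" fallback mentioned above. */
    static RECT get_screen_rect(HWND hwnd)
    {
        HMONITOR mon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
        MONITORINFO mi = { .cbSize = sizeof(mi) };
        if (GetMonitorInfo(mon, &mi))
            return mi.rcMonitor;
        return (RECT){0}; /* should not happen for a valid HMONITOR */
    }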
Signed-off-by: wm4 <wm4@nowhere>
The call to EnumDisplaySettings seems to be a relic from when MPlayer
ran on systems that didn't have GetMonitorInfo or SM_CX/CYVIRTUALSCREEN.
GetMonitorInfo was loaded dynamically, so it was possible for MPlayer to
run without it and use the values returned by EnumDisplaySettings.
These are always present in modern versions of Windows, so the values
returned from EnumDisplaySettings are always overwritten. Remove the
call to EnumDisplaySettings and assume SM_CX/CYVIRTUALSCREEN are
always present.
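In other words, the virtual screen size now comes only from
GetSystemMetrics. A minimal sketch of that query (the function name is
illustrative):

    #include <windows.h>

    /* The virtual screen spans all monitors; its origin can be
     * negative on multi-monitor setups. */
    static void get_virtual_screen_rect(RECT *rc)
    {
        rc->left   = GetSystemMetrics(SM_XVIRTUALSCREEN);
        rc->top    = GetSystemMetrics(SM_YVIRTUALSCREEN);
        rc->right  = rc->left + GetSystemMetrics(SM_CXVIRTUALSCREEN);
        rc->bottom = rc->top  + GetSystemMetrics(SM_CYVIRTUALSCREEN);
    }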
Signed-off-by: wm4 <wm4@nowhere>
Commit 27dc834f added it as such.
Also remove the check for glUniformBlockBinding() - it's part of an
extension, and the existing check for glGetUniformBlockIndex() already
verifies whether the extension is fully available.
Implement NNEDI3, a neural network based deinterlacer.
The shader is reimplemented in GLSL and now supports both 8x4 and 8x6
sampling windows. This allows the shader to be licensed under LGPL2.1
so that it can be used in mpv.
The current implementation supports uploading the NN weights (up to
51kb with the placebo setting) in two different ways: via a uniform
buffer object, or by hard-coding them into the shader source. UBO
requires OpenGL 3.1, which only guarantees 16kb per block. But I find
that 64kb seems to be the default for recent cards/drivers (which
nnedi3 is targeting), so I think we're fine here (with the default
nnedi3 setting, the weights are 9kb). Hard-coding into the shader
requires OpenGL 3.3, for the "intBitsToFloat()" built-in function.
This is necessary to represent these weights precisely in GLSL. I
tried several human-readable floating point formats (with really high
precision, as required for single precision floats), but for some
reason they did not work reliably: bad pixels (with NaN values) could
be produced with some weight sets.
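To illustrate the hard-coding path, a sketch of emitting one weight
bit-exactly (print_weight() is a hypothetical helper, not the actual
mpv function):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Emit one weight as its raw IEEE-754 bit pattern wrapped in
     * intBitsToFloat(), so the GLSL compiler cannot lose precision
     * while parsing a decimal literal. */
    static void print_weight(FILE *shader, float w)
    {
        uint32_t bits;
        memcpy(&bits, &w, sizeof(bits)); /* well-defined type pun */
        fprintf(shader, "intBitsToFloat(%d)", (int32_t)bits);
    }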
We could also add support for uploading these weights with a texture,
purely for compatibility reasons (e.g. upscaling a still image with a
low-end graphics card). But in my tests it's rather slow even with a
1D texture (and we would probably have to use a 2D texture due to
dimension size limitations). Since there is always a better choice for
NNEDI3 upscaling of still images (the vapoursynth plugin), it's not
implemented in this commit. If it turns out to be in popular demand
from users, it should be easy to add later.
For those who want to optimize the performance a bit further, the
bottlenecks seem to be:
1. The overhead of uploading and accessing these weights (in
particular, the shader code is regenerated for each frame, though
that part runs on the CPU).
2. "dot()" performance in the main loop.
3. "exp()" performance in the main loop; there are various fast
implementations using bit tricks (probably with the help of the
intBitsToFloat function), as sketched below.
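For point 3, not from this commit, but the kind of bit trick meant is
a Schraudolph-style exp(), sketched here in C (a GLSL version would
use intBitsToFloat() instead of memcpy; accuracy is only within a few
percent, and whether that is acceptable for nnedi3 is untested here):

    #include <stdint.h>
    #include <string.h>

    static float fast_exp(float x)
    {
        /* exp(x) = 2^(x/ln 2); build the float bit pattern directly:
         * 12102203 ~= 2^23/ln 2, and 1064866805 is the exponent bias
         * (127 << 23) minus an error-minimizing correction. */
        int32_t i = (int32_t)(12102203.0f * x) + 1064866805;
        float r;
        memcpy(&r, &i, sizeof(r));
        return r;
    }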
The code was tested with an nvidia card and driver (355.11) on Linux.
Closes #2230
Add the Super-xBR filter for image doubling, and the prescaling framework
to support it.
The shader code was ported from the MPDN Extensions project, with
modifications to process luma only.
This commit is largely inspired by code from #2266, with
`gl_transform_trans()` authored by @haasn taken directly.
The noframe event is logged whenever there is no new frame. This can
happen due to normal redraws, but also due to video frame queue
underflow.
The mpv_opengl_cb_report_flip() API function is currently pretty
useless, because blocking on the video frame queue is more reliable and
simpler. But at least we can log the actual vsync.
next_vsync/prev_vsync was only used to retrieve the vsync duration. We
can get this in a simpler way.
This also removes the vsync duration estimation from vo_opengl_cb.c,
which is probably worthless anyway. (And once interpolation is made
display-sync only, this won't matter at all.)
This affects only the display-sync code path, as for normal timing the
wakeup_pts stuff handles proper wakeup. It's probably mostly a
theoretical issue.
Quoting MSDN: "Notifies the Desktop Window Manager (DWM) to opt in to or
out of Multimedia Class Schedule Service (MMCSS) scheduling while the
calling process is alive." Whatever this means. (An application can
change the scheduling priority of the window manager?)
Does this improve anything? I have no idea. Certainly this is a program
that does multimedia and graphics, so we seem to be a good match for
this.
Is it bad if we enable this even while playback is inactive or paused? I
have no idea either.
Is there a magic cargo cult function that will mark our renderer
thread as a multimedia thing? I have no idea. (We use a function to
enable MMCSS
for our audio thread in ao_wasapi.)
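For reference, the call itself is a one-liner (error handling
illustrative; requires dwmapi):

    #include <windows.h>
    #include <dwmapi.h>

    static void try_enable_mmcss(void)
    {
        /* Opt the process in to MMCSS scheduling via DWM. Failure is
         * not fatal - the effect of this call is unclear anyway. */
        if (FAILED(DwmEnableMMCSS(TRUE))) {
            /* ignore */
        }
    }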
Enable it by default, but not unconditionally. Add an "auto" mode,
which disables DwmFlush if the compositor is (probably) inactive.
Let's see how this goes.
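A sketch of how "auto" might decide (the option encoding here is an
assumption, not mpv's actual one):

    #include <windows.h>
    #include <dwmapi.h>
    #include <stdbool.h>

    /* 0 = no, 1 = yes, 2 = auto (assumed encoding) */
    static bool should_dwmflush(int opt)
    {
        if (opt == 2) {
            BOOL active = FALSE;
            /* Only flush when the compositor reports itself active. */
            return SUCCEEDED(DwmIsCompositionEnabled(&active)) && active;
        }
        return opt == 1;
    }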
Since I accidentally enabled DwmFlush always by default (more or less)
in a previous commit touching this code, this is probably mostly just
cargo-culting, and it's uncertain whether it does anything.
Note that I still got bad vsync behavior when fullscreening mpv, and
making another window visible on the same screen. This happens even if
forcing DWM.
Commit acd5816a broke this. It was stopping playback occasionally.
Another case where the non-display-sync interpolation mode
(in->vsync_timed==true) is causing a lot of subtle issues and will be
removed soon.
Regression since commit 93db4233. I think the bit that was forgotten
here was to remove the vo_w32_config() return value completely. The VO
failed to init because that function always returned 0. This commit
removes these bits and fixes the VO.
Fixes#2434.
Yet another relatively useless option that tries to make OpenGL's sync
behavior somewhat sane. The results are not too encouraging. With a
value of 1, vsync jitter is gone on nVidia, but there are frame drops
(less than with glfinish). With 2, I get the usual vsync jitter _and_
frame drops.
There's still some hope that it might prevent too deep queuing with some
GPUs, I guess.
The timeout for the wait call is 1 second. The value is pretty
arbitrary; it should just not be so high that it can freeze the
process (if the GPU is un-nice), and not so low that it triggers the
timeout in normal cases, even if the GPU load is very high. So I guess
1 second is ok as a timeout.
The idea to use fences this way to control the queue depth was stolen
from RetroArch:
df01279cf3/gfx/drivers/gl.c (L1856)
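Condensed, the idea looks like this (a sketch with illustrative names;
the real code differs, and the GL prototypes come from whatever loader
is in use):

    #include <string.h>
    #include <GL/glcorearb.h> /* GLsync etc.; actual loader may differ */

    #define MAX_QUEUED 2   /* corresponds to the option value */
    static GLsync fences[MAX_QUEUED];
    static int num_fences;

    static void fence_after_swap(void)
    {
        if (num_fences == MAX_QUEUED) {
            /* Block (up to 1 second) until the oldest queued frame is
             * done, limiting how far the driver can run ahead. */
            glClientWaitSync(fences[0], GL_SYNC_FLUSH_COMMANDS_BIT,
                             1000000000ULL);
            glDeleteSync(fences[0]);
            num_fences--;
            memmove(&fences[0], &fences[1],
                    num_fences * sizeof(fences[0]));
        }
        fences[num_fences++] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }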
Commit a1315c76 broke this slightly. Frame drops got counted multiple
times, and also vo.c was actually trying to "render" the dropped frame
over and over again (normally not a problem, since frames are always
queued "tightly" in display-sync mode, but could have caused 100% CPU
usage in some rare corner cases).
Do not repeat already dropped frames, but still treat new frames with
num_vsyncs==0 as dropped frames. Also, strictly count dropped frames in
the VO. This means we don't count "soft" dropped frames anymore (frames
that are shown, but for fewer vsyncs than intended). This will be
adjusted in the next commit.
vo_frame.num_vsyncs can be != 1 in some cases in normal sync mode too.
This is not a very exact fix, but in exchange it's robust. (These
vo_frame flags are way too tricky in combination with redrawing and
such.)
There were occasional shader compilation and rendering failures if FBOs
were unavailable. This is caused by the FBO caching code getting active,
even though FBOs are unavailable (i.e. dumb-mode).
Broken by commit 97fc4f.
Fixes#2432.
This was not very reliable.
In the normal vo_opengl case, this didn't deal well enough with vsync
jitter. Vsync timings can jitter quite extremely, up to a whole vsync
duration, in which case the "missed" frame counter keeps growing, even
though nothing is wrong. This behavior also messes up the A/V difference
calculation, but as long as it's within tolerance, it won't provoke
extra frame dropping/repeating. Real misses are harder to detect, and I
might add such detection later.
In the vo_opengl_cb case, this was additionally broken due to the
asynchronicity between the renderer and VO threads.
This speeds up redraws considerably (improving e.g. <60 Hz material on
a 60 Hz monitor with display-sync active, or redraws while paused),
but slightly slows down the worst case (e.g. video FPS = display FPS).
The IME is not useful for key-bindings. Handle the base ASCII chars
instead and don't show the IME window. For the sake of libmpv users, the
IME should only be disabled on mpv's GUI thread and not application-
wide.
No IME on the GUI thread should also mean that VK_PROCESSKEY will never
have to be handled, so the logic for that can be removed as well.
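Presumably the gist, sketched (0 disables the IME for the calling
thread only, as opposed to -1 for the whole process):

    #include <windows.h>
    #include <imm.h>

    static void gui_thread_init(void)
    {
        /* Disable the IME for this (GUI) thread only, so libmpv hosts
         * keep a working IME on their own threads. */
        ImmDisableIME(0);
    }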
Older systems have certain EGL extension definitions missing. We
redefine them to make the build system easier, and because it's trivial.
But we forgot to define the EGL_LINUX_DMA_BUF_EXT identifier. (I hope
it's the only missing one.)
The equalizer code as it exists in vo_opengl works perfectly fine. The
situation in vo_opengl_cb is pretty different. The playback thread can't
communicate with the renderer thread synchronously (essentially to give
the API user more flexibility). So the equalizer communication has to be
done in an asynchronous way too.
There were three problems. First, the eq capabilities can change with the
pixel format, and the renderer initializes them on config only. This
means equalizers were disabled on the first config run, and options like
--video-output-levels or --brightness would not work. So we just
initialize the caps with a known superset. The player will not correctly
indicate when setting an eq doesn't work, but we're fine with it, as it
is a relatively cosmetic issue.
Second, it copied back the eq settings at the "wrong" moment (what
for?), which overwrote the settings in some cases.
Third, the eq was not reset correctly on vo init. This is needed to make
it behave the same as vo_opengl.
It's great that the new algorithm supports multiple placebo iterations
and all, but it's really not necessary and hurts performance in the
general case for the sake of the 0.1% that actually pause the screen
and look for minute differences.
Signed-off-by: wm4 <wm4@nowhere>
This reverts commit c10fb4ce9f.
This is already done in vo_wayland.c:resize (line 324). Doing it here
makes the window bigger before the video resizes, showing a black area
while dragging the border.
Adds support for AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12,
AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16, AV_PIX_FMT_GBRAP, and
AV_PIX_FMT_GBRAP16.
(Not that it matters, because nobody uses these anyway.)
When seeking, the current frame will have the wrong timestamp, until the
seek is done and a new frame from the seek target is shown. We still use
the "old" frame for redrawing during seeks. This frame has to be
explicitly marked with still=true in order not to confuse the
interpolation queue. If this is not done, it will ignore frames until a
frame with approximately the same timestamp as the "old" frame is
reached. (Does this mean interpolation handles timestamp resets
incorrectly? I have no idea.)
Of course we also have to clear possibly queued frames on seeks.
Also, when pausing, explicitly let the frame redraw.
Newer nVidia drivers support EGL, but they seem to work badly,
apparently don't support some needed features or not in a form we want
(such as swap control), and vdpau interop is not available. Disable it
by default, because I'm tired of explaining this issue.
Can be reverted as soon as nVidia release working drivers.
This fixes a regression since commit f4d62da8. The original code ran
vo_cocoa_config_window() once without creating the window, which had
the effect that the last part of the function was run at least once
before the actual window was created. Fix the regression by moving
that part to before the window is created.
The regression itself is hard to describe. One test case: start mpv from
a fullscreened terminal window. It should switch to another desktop,
with the mpv window visible. This didn't happen anymore.
This parameter has been unused for years (the last flag was removed in
commit d658b115). Get rid of it.
This affects the general VO API, as well as the vo_opengl backend API,
so it touches a lot of files.
The VOFLAGs are still used to control OpenGL context creation, so move
them to the OpenGL backend code.