It turns out that with accurate lookup we can now decrease the
default texture size. Do it to compensate for the performance
loss introduced by the LUT_POS macro.
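For illustration, the idea behind the accurate lookup is roughly the
following sketch (not mpv's actual macro; lut_pos is a made-up name):

    static float lut_pos(float x, int lut_size)
    {
        // Map x in [0,1] so that 0 hits the center of the first texel
        // and 1 the center of the last one. With an exact mapping like
        // this, shrinking the LUT loses no precision at the edges of
        // the range.
        return (0.5f + x * (lut_size - 1)) / lut_size;
    }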
This adds basic support for ICC profiles. Per-monitor profiles are
supported. WCS profiles are not supported, but there is an API for
converting WCS profiles to ICC, so they might be supported in the future.
I'm just not sure if anyone actually uses them.
Reloading the ICC profile when it's changed in the control panel is also
not supported. This might be possible by using the WCS APIs and watching
the registry for changes, but there is no official API for it, and as
far as I can tell, no other Windows programs can do it.
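Roughly, the per-monitor lookup boils down to the GDI call sketched
below (error handling omitted; "mon" is assumed to be a MONITORINFOEXW
obtained via GetMonitorInfoW):

    #include <windows.h>

    HDC dc = CreateDCW(L"DISPLAY", mon.szDevice, NULL, NULL);
    WCHAR profile[MAX_PATH];
    DWORD size = MAX_PATH;
    if (GetICMProfileW(dc, &size, profile)) {
        // "profile" is the path of the monitor's .icc/.icm file; read
        // the file and hand its contents to the renderer's color
        // management.
    }
    DeleteDC(dc);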
This logic was kind of questionable anyway, and --display-sync should
give much better results. (I would even go as far as saying that the
FPS-dependent framedrop code made things worse in some situations. Not
all, though.)
This is simply the average refresh rate. Including "bad" samples is
actually an advantage, because the property exists only for
informational purposes, and will reflect problems such as the driver
skipping a vsync.
Also export the standard deviation of the vsync frame duration
(normalized to the range 0-1) as vsync-jitter property.
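As a sketch, the computation behind both properties is just the sample
mean and standard deviation of the recorded vsync durations (array and
count names here are made up; needs <math.h>):

    double sum = 0, sum2 = 0;
    for (int n = 0; n < num_samples; n++) {
        sum  += vsync_duration[n];
        sum2 += vsync_duration[n] * vsync_duration[n];
    }
    double avg = sum / num_samples;            // average vsync duration
    double var = fmax(sum2 / num_samples - avg * avg, 0);
    double jitter = sqrt(var) / avg;           // stddev divided by the
                                               // duration to normalize it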
The OSD takes up an entire fullscreen dispmanx layer. Although the GPU
should be able to handle it (possibly even without any disadvantages),
it'll still be useful for debugging performance issues.
Running mpv with the default config will now pick up ANGLE by default.
Since some think ANGLE is still not good enough for hq features, extend
the "es" option so it can reject GLES backends, and add that to the
opengl-hq preset.
One consequence is that mpv will by default use libswscale to convert
10 bit video to 8 bit, before it reaches the VO.
I decided that I actually can't stand how vo_opengl unnecessarily puts
the video through 3 shader stages (instead of 1). Thus, what was meant
to be a fallback for weak OpenGL implementations, the dumb-mode, now
becomes the default if the user settings allow it.
The code required to check for the settings isn't so wild, so I guess
it's manageable. I still hope that one day, our rendering logic can
generate ideal shader stages for this case too.
Note that in theory, dumb-mode could have to be disabled again at
runtime due to a color management 3D LUT being set, so a separate
dumb_mode field is required. The dumb-mode option can't just be
overwritten.
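A minimal sketch of that separation (field and helper names are made
up; only the idea of keeping the option and the effective state apart
matters):

    // recomputed whenever relevant state changes, e.g. a 3D LUT appears
    p->dumb_mode = p->opts.dumb_mode ||
                   (user_settings_allow_dumb(p) && !p->use_lut_3d);
    // p->opts.dumb_mode itself is never overwritten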
Most of this is explained in the DOCS additions.
This gives us slightly more sanity, because there is less interaction
between the various parts. The goal is getting rid of the video_offset
entirely.
The simplification extends to the user API. In particular, we don't need
to fix missing parts in the API, such as the lack of a seek command
that seeks relative to the start time. All these things are now
transparent.
(If someone really wants to know the real timestamps/start time, new
properties would have to be added.)
This is very "illustrative", unlike the video-speed-correction
property, and thus useful. It can also be used to observe scheduling
errors, which are not detected by the core. (These happen due to
rounding errors; possibly not even our fault, but coming from
files with rounded timestamps and so on.)
"Missed" implies the frame was dropped, but what really happens is that
the following frame will be shown later than intended (due to the
current frame skipping a vsync).
(As of this commit, this property is still inactive and always
returns 0. See git blame for details.)
Apparently Windows treats windows that use OpenGL, cover an entire
screen, and either have the WS_POPUP style set or are topmost, as
exclusive fullscreen windows that bypass DWM and cannot be covered
by other windows.
This means we can't use dwmflush in fullscreen mode, that no other
window can cover mpv, and that the screen flickers when switching
to fullscreen mode.
This can be avoided by not setting the WS_POPUP flag.
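Roughly, the fullscreen path then looks like this sketch (real Win32
calls, but simplified; "mi" is assumed to be a MONITORINFO filled via
GetMonitorInfoW):

    LONG_PTR style = GetWindowLongPtrW(hwnd, GWL_STYLE);
    style &= ~(LONG_PTR)(WS_CAPTION | WS_THICKFRAME); // borderless
    style &= ~(LONG_PTR)WS_POPUP;                     // the important part
    SetWindowLongPtrW(hwnd, GWL_STYLE, style);
    SetWindowPos(hwnd, HWND_TOP,
                 mi.rcMonitor.left, mi.rcMonitor.top,
                 mi.rcMonitor.right - mi.rcMonitor.left,
                 mi.rcMonitor.bottom - mi.rcMonitor.top,
                 SWP_FRAMECHANGED | SWP_SHOWWINDOW);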
Users can still access the old behavior by enabling stay-on-top
(which IMO at least makes sense—now we just need to get dwmflush
autodetection right to avoid nasty surprises).
Fixes #2177
vo_opengl_cb is a special case, because we somehow have to render video
asynchronously, all while "trusting" the API user to do it correctly.
This didn't quite work, and a while ago a compromise using a timeout to
prevent theoretically possible deadlocks was added.
Make it even more synchronous. Basically, go all the way, and
synchronize rendering between VO and user renderer thread to the
full extent possible.
This means the silly frame queue is dropped, and we even attempt to
synchronize the GL SwapBuffer call (via mpv_opengl_cb_report_flip()).
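From the API user's side the contract looks roughly like this sketch
(depending on the API version the draw call is mpv_opengl_cb_draw or
the older mpv_opengl_cb_render; swap_buffers() stands in for whatever
the toolkit provides, and passing 0 as the flip time means the exact
time is unknown):

    // on the user's render thread
    mpv_opengl_cb_draw(ctx, fbo, width, height); // synchronizes with the VO
    swap_buffers();                              // toolkit/driver swap call
    mpv_opengl_cb_report_flip(ctx, 0);           // 0 = flip time unknown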
The changes introduced with commit dc33eb56 are effectively dropped. I
don't even remember if they mattered.
In the future, we might make all VOs fetch asynchronously from a frame
queue, which would mostly remove the differences between vo_opengl and
vo_opengl_cb, but this will take a while (if it will even be done).
Has the same function as setting the option.
This commit changes the property in a bunch of other ways. For example
if the VO is not created, it will return the option value.
The examples demonstrate use with optical media, which is far from
mpv's main purpose.
The authors section is a leftover from MPlayer times. There are enough
other places which reiterate how mpv is based on mplayer2/MPlayer,
copyright statements, and so on.
Hint that the linked section contains information for Windows. (Well,
that's a lie, but it has a link to the Windows section.)
Avoid implying that lines in the config file end with ';'. Also, the <>
are probably just confusing.
Notes:
- Unfortunately the only way to talk to EGL from within DRM I could find
involves linking with GBM (generic buffer management for Mesa); a rough
sketch of that bring-up follows these notes. Because of this, I'm pretty
sure it won't work with proprietary NVidia drivers, but then again, last
time I checked NVidia didn't offer proper screen resolution for VT.
- VT switching doesn't seem to work at all. It's worth mentioning that
using vo_drm before the introduction of the VT switcher had an anomaly
where the user could switch to another VT and input text into it, while
the video played on top of that VT. However, that isn't the case with
drm_egl: I can't switch to another VT during playback like this. This
makes me
think that it's either a limitation coming from my firmware or from
EGL/KMS itself rather than a bug with my code. Nonetheless, I still
left (untestable) VT switching code in place, in case it's useful to
someone else.
- The mode_id, connector_id and device_path should be configurable for
power users and people who wish to watch videos on a nonprimary screen.
Unfortunately I didn't see anything that would allow OpenGL backends
to register their own set of options. At the same time, adding them to
the global namespace is pointless.
- A few dozen lines could be shared with vo_drm (setting up VT
switching, most of the code behind page flipping). I don't have any strong
opinion on this.
- Sometimes I get minor visual glitches. I'm not sure if there's a race
condition of some sort, an uninitialized variable (doubtful), or a
buggy driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)
- .config and .control are very minimal.
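The GBM-based bring-up mentioned in the first note looks roughly like
this sketch (error handling and the KMS/connector setup are omitted;
the mode size and EGLConfig selection are placeholders):

    #include <fcntl.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    struct gbm_device *gbm = gbm_create_device(fd);
    int mode_w = 1920, mode_h = 1080;   // placeholder: use the KMS mode size
    struct gbm_surface *gbm_surf =
        gbm_surface_create(gbm, mode_w, mode_h, GBM_FORMAT_XRGB8888,
                           GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

    EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
    eglInitialize(dpy, NULL, NULL);
    eglBindAPI(EGL_OPENGL_API);
    EGLConfig config;   // assume chosen via eglChooseConfig() to match XRGB8888
    EGLContext egl_ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);
    EGLSurface egl_surf = eglCreateWindowSurface(dpy, config,
                              (EGLNativeWindowType)gbm_surf, NULL);
    eglMakeCurrent(dpy, egl_surf, egl_surf, egl_ctx);
    // after each eglSwapBuffers(), lock the front buffer with
    // gbm_surface_lock_front_buffer() and page-flip it via KMS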
Signed-off-by: wm4 <wm4@nowhere>
While it seemed like a pretty good idea at first, it's just a dead end
and works only in the simplest cases. While it may or may not help
slightly with audio sync mode, the display-sync mode already compensates
for this in a better way. The main issue is that timestamps at this layer
are not in order, so it can only look at single timestamps.
Implement NNEDI3, a neural network based deinterlacer.
The shader is reimplemented in GLSL and supports both 8x4 and 8x6
sampling windows now. This allows the shader to be licensed
under LGPL2.1 so that it can be used in mpv.
The current implementation supports uploading the NN weights (up to
51kb with the placebo setting) in two different ways: via a uniform
buffer object, or by hard-coding them into the shader source. UBO
requires OpenGL 3.1, which only guarantees 16kb per block. But I find
that 64kb seems to be a default setting for recent cards/drivers
(which nnedi3 is targeting), so I think we're fine here (with the
default nnedi3 setting the size of the weights is 9kb). Hard-coding
them into the shader requires OpenGL 3.3, for the "intBitsToFloat()"
built-in function. This is necessary to precisely represent these
weights in GLSL. I tried several human-readable floating point number
formats (with really high precision, as needed for single precision
floats), but for some reason they did not work nicely; bad pixels
(with NaN values) could be produced with some weight sets.
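The bit-exact encoding is the part worth spelling out; a sketch of how
a weight could be emitted into the generated shader source (the helper
name is made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    // Emit one weight as "intBitsToFloat(<bits>)" so the GLSL compiler
    // reconstructs exactly the same float, with no decimal rounding.
    static void fmt_weight(char *buf, size_t size, float w)
    {
        uint32_t bits;
        memcpy(&bits, &w, sizeof(bits));
        snprintf(buf, size, "intBitsToFloat(%d)", (int32_t)bits);
    }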
We could also add support for uploading these weights with a texture,
just for compatibility reasons (e.g. upscaling a still image on a low
end graphics card). But as I tested, it's rather slow even with a 1D
texture (we would probably have to use a 2D texture due to dimension
size limitations). Since there is always a better choice for NNEDI3
upscaling of still images (the vapoursynth plugin), it's not
implemented in this commit. If this turns out to be in popular demand
from users, it should be easy to add it later.
For those who want to optimize the performance a bit further, the
bottlenecks seem to be:
1. the overhead of uploading and accessing these weights (in
particular, the shader code will be regenerated for each frame,
though that part is on the CPU).
2. "dot()" performance in the main loop.
3. "exp()" performance in the main loop; there are various fast
implementations using bit tricks (probably with the help of the
intBitsToFloat function).
The code is tested with nvidia card and driver (355.11), on Linux.
Closes #2230
Add the Super-xBR filter for image doubling, and the prescaling framework
to support it.
The shader code was ported from the MPDN extensions project, with
modifications to process luma only.
This commit is largely inspired by code from #2266, with
`gl_transform_trans()` authored by @haasn taken directly.
A hw decoder might fail to decode a frame for multiple reasons, and not
always just because decoding is impossible. We can't generally
distinguish these reasons well. Make it more tolerant by accepting
failures for up to 3 frames, but not more. The threshold can be adjusted by the
repurposed --vd-lavc-software-fallback option.
(This behavior was suggested much earlier in some PR, but at the time
the "proper" hwdec fallback was indistinguishable from a decoding error.
With the current situation, "proper" fallback is still instantaneous.)
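In effect the decode loop just keeps a bounded error counter, roughly
like this sketch (all names here are made up):

    if (decode_with_hwdec(ctx, pkt) < 0) {
        ctx->hwdec_fails++;
        if (ctx->hwdec_fails >= opts->software_fallback)  // default: 3
            fallback_to_software_decoding(ctx);           // reinit without hwdec
    }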
Enable it by default, but not unconditionally. Add an "auto" mode, which
disables DwmFlush if the compositor is (probably) inactive. Let's see how
this goes.
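The "auto" check can presumably be as simple as asking DWM whether
composition is active (DwmIsCompositionEnabled and DwmFlush are the real
dwmapi calls; the option values here are made up):

    #include <stdbool.h>
    #include <windows.h>
    #include <dwmapi.h>

    static bool compositor_active(void)
    {
        BOOL enabled = FALSE;
        // Note: always reports TRUE on Windows 8+, where DWM can't be off.
        return SUCCEEDED(DwmIsCompositionEnabled(&enabled)) && enabled;
    }

    // per swap:
    if (opt_dwmflush == DWM_FLUSH_YES ||
        (opt_dwmflush == DWM_FLUSH_AUTO && compositor_active()))
        DwmFlush();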
Since I accidentally enabled DwmFlush always by default (more or less)
in a previous commit touching this code, this is probably mostly just
cargo-culting, and it's uncertain whether it does anything.
Note that I still got bad vsync behavior when fullscreening mpv, and
making another window visible on the same screen. This happens even if
forcing DWM.
Yet another relatively useless option that tries to make OpenGL's sync
behavior somewhat sane. The results are not too encouraging. With a
value of 1, vsync jitter is gone on nVidia, but there are frame drops
(less than with glfinish). With 2, I get the usual vsync jitter _and_
frame drops.
There's still some hope that it might prevent too deep queuing with some
GPUs, I guess.
The timeout for the wait call is 1 second. The value is pretty
arbitrary; it should just not be so high that it can freeze the process
(if the GPU is un-nice), and not so low that it triggers in normal
cases, even if the GPU load is very high. So I guess 1 second is ok
as a timeout.
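A rough sketch of the mechanism (the GL calls are real; the fence queue
helpers and the option name are made up):

    // after submitting a frame / calling SwapBuffers:
    fence_push(&q, glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0));

    // cap how far ahead the driver may queue frames:
    while (fence_count(&q) > max_queued_frames /* the new option, e.g. 1 */) {
        GLsync oldest = fence_pop(&q);
        glClientWaitSync(oldest, GL_SYNC_FLUSH_COMMANDS_BIT,
                         1000000000ULL /* 1 second timeout, in ns */);
        glDeleteSync(oldest);
    }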
The idea to use fences this way to control the queue depth was stolen
from RetroArch:
df01279cf3/gfx/drivers/gl.c (L1856)