Not sure how much can be gained with this, as we can't use it properly
yet. For now, this is used only before rendering, which probably
achieves next to nothing.
In the future, this should be used after temporary passes, which could
possibly reduce memory usage and even memory bandwidth usage, depending
on the drivers.
Center window position after applying W and H parameters of the --geometry
option. Passing valid X and Y values will still override the position.
Fixes #2397.
Before this change, the position of the window's top-left corner
remained the same when the window was scaled.
Right now VOCTRL_SET_UNFS_WINDOW_SIZE is called only by window-scale. This
change will not affect resizes made by the user (dragging the window
edge).
Fixes #3164.
Center the window on the original window center instead of the screen center
when the window has been resized due to requested window size exceeding the
size of the screen.
If the user moved the window, they probably did it for a reason and
probably don't want it to jump back to the center of the screen when
they resize it (with window-scale, for example).
Properly update stored client area size when the window is resized in
reinit_window_state due to window size exceeding the size of the screen.
This was causing wrong behavior with window-scale: when the window size
became too big, the window was resized but the video was not.
I've got a broken webm that fails to seek correctly with "--start=0".
The problem is that every index entry points to 1 byte before cluster
start (!!!). demux_mkv tries to resync to the next cluster, but since it
already has read 2 bytes with ebml_read_id(), it doesn't get the first
cluster, but the following one. Actually, it can be any number of bytes
from 1 to 4, whatever happens to look valid at this essentially random
byte position.
Improve this by resyncing from the original position, instead of the one
after the EBML element ID has been attempted to be read.
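A minimal sketch of the fixed approach (the stream helpers and resync
function here are illustrative stand-ins, not the actual demux_mkv
code):

    int64_t start_pos = stream_tell(s);    // position *before* the read
    uint32_t id = ebml_read_id(s);         // may consume 1-4 bytes
    if (id != MATROSKA_ID_CLUSTER) {
        stream_seek(s, start_pos);         // resync from the original position,
        resync_to_cluster(s);              // not from after the failed ID read
    }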
The file shows the following headers:
| + Muxing application: google at 177
| + Writing application: google at 186
Indeed, the file was downloaded with youtube-dl. I can only guess that
Google got it completely wrong.
Following commit 84ccebd9, the internal helpers don't allow GL_RGB and
GL_RGBA as internal formats for FBO attachments anymore.
While OpenGL itself is perfectly fine with it, I don't see much of a
reason to bother, and mixing sized and unsized internal formats is
confusing anyway.
Just remove these formats.
This code evolved into an ifdef mess as support for cancellation on
Windows was added. Make the Windows-specific code completely separate.
It looks cleaner, and it also means that some of the posix code is not
uselessly enabled on Windows. The latter made msvcrt.dll output warnings
because it does not like -1 passed as FD to read/write. (The same would
be harmless on POSIX.)
Cropping usually happens by adjusting the plane start pointers and the
image size. The former is obviously not possible for opaque hwaccel
formats, but the latter must work.
Since the code already takes care of aligning the top/left crop origin
to chroma alignment, simply set the crop origin to 0/0 in the hwaccel
case. Also add a message if such an adjustment happens.
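A minimal sketch of the resulting logic (the struct and field names are
illustrative, not mpv's actual mp_image):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct image {
        bool is_hwaccel;
        int num_planes;
        uint8_t *planes[4];
        int stride[4];      // bytes per row
        int bpp[4];         // bytes per pixel, per plane
        int w, h;
    };

    static void crop_image(struct image *img, int x0, int y0, int x1, int y1)
    {
        if (img->is_hwaccel) {
            // Opaque surfaces: plane pointers cannot be adjusted, so only
            // the size may change; force the crop origin to 0/0.
            if (x0 || y0)
                printf("ignoring top/left crop for hwaccel surface\n");
            x0 = y0 = 0;
        } else {
            // Regular planar images: advance the plane start pointers.
            for (int p = 0; p < img->num_planes; p++)
                img->planes[p] += y0 * img->stride[p] + x0 * img->bpp[p];
        }
        img->w = x1 - x0;
        img->h = y1 - y0;
    }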
Supporting this isn't worth much; the main usefulness is with debugging.
ES 2.0 has this weird rule that the internal format is determined not
by the internalformat parameter, but by the combination of all texture
parameters. GL_OES_texture_half_float thus does not specify e.g. a
GL_RGBA16F format, but requires passing GL_RGBA as format and
GL_HALF_FLOAT_OES as type. We won't bother with this, since ES 2.0 is a
lost cause anyway.
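To illustrate the difference (texture size and data arguments are
placeholders):

    // Desktop GL / ES 3.0: the internalformat parameter picks the format.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0,
                 GL_RGBA, GL_HALF_FLOAT, NULL);

    // ES 2.0 with GL_OES_texture_half_float: there is no sized internal
    // format; the effective format follows from format + type instead.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_HALF_FLOAT_OES, NULL);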
This also removes the OpenGL error when the code is trying to create an
f16 FBO for testing whether FBOs work.
gl_video_upload_image() can fail in the hardware decoding case. In this
case rendering continued "normally", which meant that pass_get_img_tex()
would kill the process with an assertion failure.
Fix this by allowing gl_video_upload_image() to fail, and exit rendering
early enough to skip code which requires an image to be present. (Maybe
this is still a bit too subtle, but better than before.)
Set an error flag, and render the blue screen we introduced for shader
errors. (For this purpose also move the rendering of it to final output,
to ensure it's visible at all.) The error flag is temporary, because the
associated failure might also be temporary, unlike shader compilation
errors.
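A sketch of the resulting control flow (gl_video_upload_image() is from
this commit; the other names are hypothetical stand-ins):

    #include <stdbool.h>

    struct gl_video;    // opaque renderer state (stand-in)
    struct vo_frame;

    bool gl_video_upload_image(struct gl_video *p, struct vo_frame *frame);
    void pass_render_frame(struct gl_video *p);      // needs an image
    void render_blue_screen(struct gl_video *p);     // shader-error fallback

    void render_frame(struct gl_video *p, struct vo_frame *frame)
    {
        if (!gl_video_upload_image(p, frame)) {
            // Upload failed: skip everything that requires an image, and
            // draw the error screen at final output so it is visible.
            // The error is not latched, since the failure may be temporary.
            render_blue_screen(p);
            return;
        }
        pass_render_frame(p);
    }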
ANGLE doesn't handle this very strictly. But if they change this in the
future, it shouldn't brick us.
Not quite happy with this glsl_extensions field, but it is quite
unintrusive after all.
No reason not to, and makes the following commit slightly simpler.
In fact, this makes the shaders more correct too. Normally, "#extension"
must come before any normal shader text, including the "precision"
directive. Not sure why this worked before. (Probably didn't.)
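As a sketch, the emitted shader header now has to be ordered like this
(the version and extension lines are illustrative):

    static const char *shader_header =
        "#version 300 es\n"
        "#extension GL_OES_EGL_image_external_essl3 : enable\n" // extensions first
        "precision mediump float;\n";                           // then precision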
ANGLE was missing texture() overloads in the shader compiler for
GL_TEXTURE_EXTERNAL_OES textures. Support has been added upstream,
so we can use it now.
With the new hooks mechanism, user shaders and such are actually loaded
before rendering starts, instead of being loaded during rendering. This
is used to cache them (instead of e.g. reparsing them every frame).
The cached state wasn't cleared correctly in some situations. Namely,
resizing didn't correctly enable/disable prescale hooks.
Reorganize how these reinitializations are handled. Get rid of
reinit_rendering(), whose meaning was pretty unclear. Call the required
functions to reset or recreate state directly wherever they are needed.
For some reason, the d3d9/dxva2/d3d11 DLLs are still optional. But we
don't need to try so hard to keep exact references. In fact, there's no
reason to unload them at all.
So load them once in a central place. For simplicity, the d3d9/d3d11
backends both load all DLLs. (They will error out only if the required
DLLs could not be loaded.)
In theory, we could just call LoadLibrary multiple times (without
calling FreeLibrary), but I'm slightly worried that this could be
detected as a "bug", or that the reference count could even have a low
static limit that could be hit soon.
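A minimal sketch of the central load-once approach (DLL list and names
are illustrative):

    #include <windows.h>

    static HMODULE d3d_dlls[3];

    static void load_d3d_dlls_once(void)
    {
        static const char *const names[3] = {"d3d9.dll", "d3d11.dll", "dxva2.dll"};
        for (int i = 0; i < 3; i++) {
            if (!d3d_dlls[i])
                d3d_dlls[i] = LoadLibraryA(names[i]);  // never FreeLibrary'd
        }
    }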
wscript builds hwdec_dxva2gldx.c if gl-dxinterop is enabled, while
video/dxva2.c depends on d3d-hwaccel. If d3d-hwaccel is disabled, then
hwdec_dxva2gldx.c will fail to link, because it uses
d3d9_surface_in_mp_image(), defined in dxva2.c.
Fix this by removing the use of this function. It has barely any value
at this point anyway. Just use the libavcodec documented way to get the
surface directly.
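For reference, the documented libavcodec convention (sketch): for DXVA2
hwaccel frames, the surface pointer is stored in AVFrame.data[3].

    #include <d3d9.h>
    #include <libavutil/frame.h>

    static IDirect3DSurface9 *get_dxva2_surface(const AVFrame *frame)
    {
        // Per the lavc dxva2 hwaccel documentation: data[3] contains
        // the IDirect3DSurface9 pointer of the decoded surface.
        return (IDirect3DSurface9 *)frame->data[3];
    }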
Fixes #3150.
Use dynamic memory allocation, as the static allocation is starting to
get annoying.
Currently, SC_MAX_ENTRIES is essentially still a static upper limit on
the number of shaders. But in the future, we could try a more clever cache
replacement strategy, which does not keep stale entries forever if the
maximum happens not to be reached.
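A sketch of the grow-on-demand storage (the entry struct and allocation
style are illustrative; mpv uses talloc helpers):

    #include <stdlib.h>

    struct sc_entry { int dummy; /* compiled shader, uniforms, ... */ };

    struct sc_cache {
        struct sc_entry *entries;
        int num_entries;
    };

    static struct sc_entry *sc_add_entry(struct sc_cache *sc)
    {
        // Error handling omitted for brevity.
        sc->entries = realloc(sc->entries,
                              (sc->num_entries + 1) * sizeof(*sc->entries));
        return &sc->entries[sc->num_entries++];
    }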
The new uniforms introduced by 362015c have exceeded the uniform limit
when using high-radius tscale. In addition, the SC limit of 32 entries
might be pushing it with user shaders.
Just make these values a bit bigger to delay the onset of this same failure
mode. Maybe in the future it should be reworked to grow dynamically?
Either way, we *can* always predict a static upper bound on the number
of uniforms and shader cache entries, it's just that we forgot to do so.
Fixes #3151.
This makes it so that users with actual HDR displays can just set their
config to target-trc=st2084 and get native HDR output. This will look a
bit silly for SDR content (everything will be really bright), but for
lack of a better tone mapping situation (including reverse tone mapping)
this is the easiest thing to do for now.
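For example, a minimal config sketch for such a display:

    # mpv.conf (sketch)
    target-trc=st2084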
Ideally the brightness metadata should be part of the colorspace struct
or something (with mpv always adapting where necessary), but it depends
on the TRC and not the primaries so it's a bit more complicated than
that.
Since dumb mode is affected by tone mapping (which I'll call a feature,
not a bug), we need to copy over the configuration - in particular, the
defaults - to prevent a render failure.
Since HDR content is now auto-detected as such, we should probably do
something smarter in the "no configuration" case, such as outputting
gamma 2.2 instead.
This decision will affect the majority of users of stock configurations
who just play back appropriately tagged HDR files, so having a good
default behavior is important. "Output the HDR content as-is" is
definitely not likely to give the user a good result.
This now lets us auto-detect appropriately tagged HDR content using
FFmpeg's new TRC entries (when available).
Hidden behind an #if because Libav stable doesn't have it yet.
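A sketch of the guarded mapping (the HAVE_ check and the mpv-side
constant are illustrative; the FFmpeg enum name can vary between
versions):

    #if HAVE_AVCOL_TRC_SMPTEST2084
        case AVCOL_TRC_SMPTEST2084:
            trc = MP_CSP_TRC_SMPTE_ST2084;
            break;
    #endif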
Make it dynamic and never remove entries from it.
For now, this is better than possibly creating dangling pointers all
over the place in the gl_user_shader struct.
Untested.
This is now a configurable option, with tunable parameters.
I got inspiration for these algorithms from Wikipedia. "simple" seems to
work pretty well, but not well enough to make it a reasonable default.
Some other notable candidates:
- Local functions (e.g. based on local contrast or gradient)
- Clamp with soft knee (linear up to a point)
- Mapping in CIE L*Ch. Map L smoothly, clamp C and h.
- Color appearance models
These will have to be implemented some other time.
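For reference, a minimal sketch of a Reinhard-style curve in the spirit
of "simple" (normalized so the source peak maps to 1.0; naming is
illustrative):

    // Compress linear-light x in [0, peak] into [0, 1].
    static float tone_map_reinhard(float x, float peak)
    {
        return x * (1.0f + x / (peak * peak)) / (1.0f + x);
    }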
Note that the parameter "peak_src" to pass_tone_map should, in
principle, be auto-detected from the SEI information of the source file
where available. This will also have to be implemented in a later
commit.
Due to the way color management in mpv worked historically, the subtitle
blending function was written to preserve the linearity of the input.
(In the past, the 3DLUT function required linear inputs)
Since the 3DLUT was refactored to accept the video color directly, the
re-linearization after blending is now virtually always redundant.
(Notably, it's also redundant when CMS is turned off, so this way of
writing the code stopped making sense a long time ago. It is a remnant
from before the pass_colormanage function was as flexible as it is now)
Currently, this relies on the user manually entering their display
brightness (since we have no way to detect this at runtime or from ICC
metadata). The default value of 250 was picked by looking at ~10 reviews
on tftcentral.co.uk and realizing they all come with around 250 cd/m^2
out of the box. (In addition, ITU-R Rec. BT.2022 supports this)
Since there is no metadata in FFmpeg to indicate usage of this TRC, the
only way to actually play HDR content currently is to set
``--vf=format=gamma=st2084``. (It could be guessed based on SEI, but
this is not implemented yet)
Incidentally, since SEI is ignored, it's currently assumed that all
content is scaled to 10,000 cd/m^2 (and hard-clipped where out of
range). I don't see this assumption changing much, though.
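For reference, a sketch of the ST 2084 EOTF under that normalization
(constants are from the SMPTE spec; 1.0 maps to 10,000 cd/m^2):

    #include <math.h>

    static float pq_eotf(float v)
    {
        const float m1 = 2610.0f / 16384.0f;          // 0.1593017578125
        const float m2 = 2523.0f / 4096.0f * 128.0f;  // 78.84375
        const float c1 = 3424.0f / 4096.0f;           // 0.8359375
        const float c2 = 2413.0f / 4096.0f * 32.0f;   // 18.8515625
        const float c3 = 2392.0f / 4096.0f * 32.0f;   // 18.6875
        float p = powf(v, 1.0f / m2);
        return powf(fmaxf(p - c1, 0.0f) / (c2 - c3 * p), 1.0f / m1);
    }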
As an unfortunate consequence of the fact that we don't know the display
brightness, mixed with the fact that LittleCMS' parametric tone curves
are not flexible enough to support PQ, we have to build the 3DLUT
against gamma 2.2 if it's used. This might be a good thing, though,
considering the PQ source space is probably not fantastic for
interpolation either way.
Partially addresses #2572.
This is much more readable than hard-coding magic IDs all over the file,
and removes the need for all the explanatory comments that were a direct
result of this.