In addition to using the new VAO mechanism introduced in the previous
commit, this tries to keep the OSD code self-contained. This doesn't
work all that well (because of the pass and CMS stuff), but it's still
better than before.
This removes VAO handling from video.c. Instead the shader cache will
create the VAO as needed. The consequence is that this creates a VAO
per shader, which might be a bit wasteful, but doesn't matter anyway.
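As a rough sketch of what "create the VAO as needed" means here (the struct,
function names and vertex layout are illustrative, not the actual mpv code;
assumes a GL 3.x header/loader is already in place):

    struct sc_entry {
        GLuint program;
        GLuint vao;   // created lazily, one per cached shader
        GLuint vbo;
    };

    static void sc_entry_ensure_vao(struct sc_entry *e)
    {
        if (e->vao)
            return;
        glGenVertexArrays(1, &e->vao);
        glGenBuffers(1, &e->vbo);
        glBindVertexArray(e->vao);
        glBindBuffer(GL_ARRAY_BUFFER, e->vbo);
        // Assume a trivial layout: vec2 position at attribute location 0.
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE,
                              2 * sizeof(GLfloat), NULL);
        glBindVertexArray(0);
    }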
Reduce this to 1 draw call per OSD pass. This removes the need for some
annoying special handling regarding 3D video support (we supported
duplicating the OSD/subtitles for side-by-side 3D output etc.).
Remove the unneeded texture sampler uniform thing.
Remove this code because it could be argued that it contains GPL-only
code (see commit 642e963c86 for details).
The remaining aspect methods appear to work just as well, are
potentially more compatible with other players, and the code becomes much
simpler.
These are apparently expensive on some drivers which are not smart
enough to turn x/42 into x*(1.0/42). So, do it for them.
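To illustrate the idea, here is one way to do it (hypothetical helper, not
the actual mpv shader generator): fold the reciprocal into a literal at
shader-generation time, so the driver only ever sees a multiplication.

    #include <stdio.h>

    // Emit "(expr * reciprocal)" with the reciprocal precomputed in C,
    // instead of emitting "expr / divisor" and hoping the driver rewrites it.
    static void emit_scaled(char *buf, size_t size, const char *expr,
                            double divisor)
    {
        snprintf(buf, size, "(%s * %.17g)", expr, 1.0 / divisor);
    }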
My great test framework says it's okay
It seems like adjusting the raw stream ID should be done only for DVD.
Otherwise, getting the subtitle language for Bluray breaks.
Untested. Regression since fb9a32977d.
Fixes #4611 (probably).
Performance seems pretty much unchanged but I no longer get nasty spikes
on NUMA systems, probably because glBufferSubData runs in the driver or
something.
As a simplification of the code, we also just size the PBO to always
have the full size, even for cropped textures. This seems slower, but not
by a relevant amount, and only affects e.g. --vf=crop. It also slightly
increases VRAM usage for textures with big strides.
This new code path is especially nice because it no longer depends on
GL_ARB_map_buffer_range, and no longer uses any functions that can
possibly fail, thus simplifying control flow and seemingly making the
manpage's claim about possible image corruption obsolete.
In theory we could also reduce NUM_PBO_BUFFERS since it doesn't seem
like we're streaming uploads anyway, but leave it in there just in
case some drivers disagree...
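A minimal sketch of the new upload path (assumed names, not the exact mpv
functions; assumes appropriate GL headers/loader with PBO support):

    static void pbo_upload(GLuint pbo, GLuint tex, const void *data,
                           size_t size, int w, int h)
    {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        // Plain copy into the PBO; no mapping, nothing that can fail at runtime.
        glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, size, data);
        glBindTexture(GL_TEXTURE_2D, tex);
        // With a PBO bound, the "pixels" argument is an offset into the buffer.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA,
                        GL_UNSIGNED_BYTE, (const void *)0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }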
Lua scripts can call osd_set_external() early (before the VO window is
created and obj->vo_res is filled), in which case the PlayResX field
would be set to nonsense, and libass would print a pointless warning.
There's an easy and a hard fix: either just go on and pass dummy values to
libass (basically like before, just clamping them to avoid the values which
make libass print the warning), or attempt to update the PlayRes fields to
correct values at rendering time (since at that point, you always know the
screen size and thus the correct values). Do the latter.
Since various things use PlayRes for scaling, this might still
not be fully ideal. This is a general problem with the async scripting
interface.
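A sketch of the chosen fix (the mpv-side naming is an assumption; the
PlayResX/PlayResY fields are libass's):

    #include <ass/ass.h>

    // Called at render time, when the real screen size is known, instead of
    // trusting whatever was around when osd_set_external() ran.
    static void fix_playres(ASS_Track *track, int screen_w, int screen_h)
    {
        if (screen_w > 0 && screen_h > 0) {
            track->PlayResX = screen_w;
            track->PlayResY = screen_h;
        }
    }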
This API isn't deprecated (yet?), but it's still inferior and harder to
use than avcodec_free_context().
Leave the call in only one case, in af_lavcac3enc.c, where we apparently
seriously close and reopen the encoder for whatever reason.
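For reference, the pattern this moves to (FFmpeg API; the wrapper name is
just for illustration):

    #include <libavcodec/avcodec.h>

    // avcodec_free_context() closes the codec (if open) and frees the
    // context in one call, setting the pointer to NULL; no separate
    // avcodec_close()/av_freep() pair needed.
    static void destroy_codec(AVCodecContext **avctx)
    {
        avcodec_free_context(avctx);   // also safe if *avctx is NULL
    }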
STREAM is better than DYNAMIC because we're only using it once per
frame. As for COPY vs DRAW, that was pretty much incorrect to begin with
- but surprisingly, COPY is actually faster (sometimes significantly so,
e.g. on my NUMA system).
After testing, the best I can gather is that it has to do with the fact
that COPY requires fewer redundant memcpy()s, and also reduces RAM
bandwidth roughly 3x (in theory).
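Concretely, the hint this ends up with looks something like this (sketch;
the surrounding buffer handling is an assumption, and GL headers/loader are
assumed to be in place):

    static void alloc_pbo_storage(GLuint pbo, size_t size)
    {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        // Written and used about once per frame -> STREAM; COPY measured
        // faster than DRAW here, even if DRAW would be the "correct" hint.
        glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_COPY);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }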
Anyway, that bit shouldn't introduce any regressions, it's just a
documentation update. Maybe I'll change my mind about the comment again in
the future, it's really hard to tell. Vulkan, please save us!
Instead of allocating three PBOs and cycling through them, we allocate
one PBO that's three times as large, and cycle through the subregion
offsets.
This results in arguably simpler code and faster initialization
performance. Especially for 4K textures, initializing PBOs can take
quite some time (e.g. 180ms -> 110ms). For 1080p, it's more like 66ms ->
52ms for me.
The alignment to 4096 is completely unnecessary by spec, but we do it
anyway just for peace of mind.
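A sketch of the layout described above (struct and names are illustrative,
not the actual mpv code; GL headers/loader assumed):

    #define NUM_PBO_BUFFERS 3

    struct pbo_upload {
        GLuint pbo;
        size_t buffer_size;   // size of one subregion, aligned up to 4096
        int index;            // which subregion the next upload writes to
    };

    static size_t next_upload_offset(struct pbo_upload *u)
    {
        size_t offset = (size_t)u->index * u->buffer_size;
        u->index = (u->index + 1) % NUM_PBO_BUFFERS;
        return offset;
    }

    static void create_pbo(struct pbo_upload *u, size_t frame_size)
    {
        // Align each subregion to 4096; not required by the spec, just for
        // peace of mind, as noted above.
        u->buffer_size = (frame_size + 4095) & ~(size_t)4095;
        glGenBuffers(1, &u->pbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, u->pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER,
                     NUM_PBO_BUFFERS * u->buffer_size, NULL, GL_STREAM_COPY);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }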
This is unnecessary to call from gl_video_resize, because the hooks only
(possibly) change when the actual vo_opengl options change. This used to
be required back when mpv still had prescaling built in, but since that
was all moved to user shaders and the code removed, this is a left-over
artifact.
It used to use the "encoding" section. Change this to the default
section to remove another small special case. encoding-profiles.conf
didn't use this by default anyway. The previous revert could mitigate
potential impacts of this a little.
This reverts commit 0dcb51c7fa.
I randomly decided that this was better. It can be re-applied once it
actually becomes necessary in some way.
Note that this worked fine. My main gripe with this is that it can
spam the log file with encoding stuff even if playback mode is used.
This is more of a niche use case than --ytdl-format and --ytdl-raw-options,
so a simple script option should be enough.
Either create lua-settings/ytdl_hook.conf with the option
'exclude=example.com,sub.example.com', or pass
"--script-opts=ytdl_hook-exclude=example.com,sub.example.com" on the command line.
The renderer code doesn't list a fixed set of supported formats, but
supports anything that is described by AVPixFmtDescriptor and follows a
number of constraints.
Plane order is not included in those constraints. This means the planes
could be in random order, rather than what the vo_opengl renderer
happens to assume. For example, it assumes that the 4th plane is alpha,
even though alpha could be on any plane. Likewise it assumes that the first
plane is always luma and the 2nd/3rd planes are chroma. (In earlier iterations of
format support, this was guaranteed by MP_IMGFLAG_YUV_P, but this is not
used anymore.)
Explicitly set the plane semantics (enum plane_type) based on the component
descriptors and the colorspace in use. The behavior should mostly not
change, but it's less likely to break when FFmpeg adds new pixel
formats.
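Roughly, the classification works like this (the enum values and function
are illustrative, not the exact mpv code):

    #include <libavutil/pixdesc.h>

    enum plane_type { PLANE_NONE, PLANE_LUMA, PLANE_CHROMA, PLANE_RGB,
                      PLANE_ALPHA };

    static enum plane_type classify_plane(const AVPixFmtDescriptor *desc,
                                          int plane)
    {
        for (int c = 0; c < desc->nb_components; c++) {
            if (desc->comp[c].plane != plane)
                continue;
            // FFmpeg orders the alpha component last when the flag is set.
            if ((desc->flags & AV_PIX_FMT_FLAG_ALPHA) &&
                c == desc->nb_components - 1)
                return PLANE_ALPHA;
            if (desc->flags & AV_PIX_FMT_FLAG_RGB)
                return PLANE_RGB;
            return c == 0 ? PLANE_LUMA : PLANE_CHROMA;
        }
        return PLANE_NONE;
    }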
Use avcodec_free_context() instead of random other calls. Actually it
was already used in the second case, but calling avcodec_close() is
redundant.
Don't crash if allocating a codec context fails.
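The added robustness is essentially just this (hypothetical init path, not
the actual mpv code):

    #include <libavcodec/avcodec.h>

    // Bail out cleanly if allocation fails, instead of dereferencing a
    // NULL context later.
    static int init_decoder(const AVCodec *codec, AVCodecContext **out)
    {
        AVCodecContext *avctx = avcodec_alloc_context3(codec);
        if (!avctx)
            return AVERROR(ENOMEM);
        *out = avctx;
        return 0;
    }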
With some newer ANGLE builds, mapping can fail with "Failed to create
EGL surface" during playback. The reason is unknown, and it might just
be an ANGLE bug. Probe whether it works at init time to avoid the
problem.
Commit d5702d3b95 messed up the order of destruction of the elements: it
destroyed the avctx before the hwaccel uninit, even though the hwaccel
uninit could access avctx. This could happen with some old hwaccels
only, such as D3D ones before the new libavcodec hwaccel API.
Fix this by making use of the fact that avcodec_flush_buffers() will
uninit the underlying hwaccel. Thus we can be sure avctx is not using
any hwaccel objects anymore, and it's safe to uninit the hwaccel.
Move the hwdec_dev destruction code with it, because otherwise we could in
theory create dangling pointers in avctx.
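In rough terms, the resulting teardown order (the hwaccel-side names are
hypothetical stand-ins):

    #include <libavcodec/avcodec.h>

    struct hwdec_ctx;                            // stand-in for the hwaccel state
    void hwdec_uninit(struct hwdec_ctx *hwdec);  // hypothetical teardown

    static void uninit_decoder(AVCodecContext **avctx, struct hwdec_ctx *hwdec)
    {
        if (*avctx)
            avcodec_flush_buffers(*avctx);  // drops any refs to hwaccel objects
        hwdec_uninit(hwdec);                // now safe: avctx no longer uses it
        avcodec_free_context(avctx);
    }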
In commit 6eb0bbe this was changed from xs[n] to use gl_format.chroma_w
indiscriminately, which broke chroma rendering when zooming/cropping.
The solution is to only use chroma_w for chroma planes.
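The gist of the fix, as a sketch (names approximate, not the actual mpv
code):

    #include <stdbool.h>

    // Apply the subsampling factors only to chroma planes; luma/alpha
    // planes keep a scale factor of 1.
    static void plane_scale(bool is_chroma, int chroma_w, int chroma_h,
                            int *xs, int *ys)
    {
        *xs = is_chroma ? chroma_w : 1;
        *ys = is_chroma ? chroma_h : 1;
    }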
Fixes #4592.