No method of taking a screenshot was implemented at all. vo_opengl
lacked window screenshotting, because ANGLE doesn't allow reading the
frontbuffer. There was no way to read back from a D3D11 texture either.
Implement reading image data from D3D11 textures. This is a low-quality
effort to get basic screenshots done. Eventually there will be a better
implementation: once we use AVHWFramesContext natively, the readback
implementation will be in libavcodec, and will be able to cache the
staging texture correctly. Hopefully. (For now it doesn't even have an
AVHWFramesContext for D3D11 yet. But the abstraction is more appropriate
for this purpose.)
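For reference, the readback follows the usual D3D11 pattern: copy the
texture to a CPU-readable staging texture, then map it. A minimal
sketch under that assumption (illustrative names, not mpv's actual
code):

    #define COBJMACROS
    #include <d3d11.h>
    #include <stdbool.h>

    static bool read_texture(ID3D11Device *dev, ID3D11DeviceContext *ctx,
                             ID3D11Texture2D *tex)
    {
        // Describe a CPU-readable staging copy of the source texture.
        D3D11_TEXTURE2D_DESC desc;
        ID3D11Texture2D_GetDesc(tex, &desc);
        desc.Usage = D3D11_USAGE_STAGING;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        desc.BindFlags = 0;
        desc.MiscFlags = 0;

        ID3D11Texture2D *staging = NULL;
        if (FAILED(ID3D11Device_CreateTexture2D(dev, &desc, NULL, &staging)))
            return false;

        // GPU->staging copy, then map the staging texture for reading.
        ID3D11DeviceContext_CopyResource(ctx, (ID3D11Resource *)staging,
                                         (ID3D11Resource *)tex);

        D3D11_MAPPED_SUBRESOURCE map;
        bool ok = SUCCEEDED(ID3D11DeviceContext_Map(ctx,
                (ID3D11Resource *)staging, 0, D3D11_MAP_READ, 0, &map));
        if (ok) {
            // map.pData / map.RowPitch hold the pixels; copy them out.
            ID3D11DeviceContext_Unmap(ctx, (ID3D11Resource *)staging, 0);
        }
        ID3D11Texture2D_Release(staging);
        return ok;
    }

Caching the staging texture would amount to keeping it across calls
instead of recreating and destroying it on every screenshot.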
OK, this was dumb. The file didn't have much to do with ANGLE, and the
functionality can simply be moved to d3d.c. That file contains helpers
for decoding, but can always be present (on Windows) since it doesn't
access any D3D-specific libavcodec APIs. Thus it doesn't need to be
conditionally built like the actual hwaccel wrappers.
Instead of hard-coding a big list, move some of the functionality
to csputils. Affects both the auto-guess blacklist and the peak
estimation.
Also update the comments.
Too many "exceptions" these days; it's easier to just hard-code a
whitelist instead of a blacklist. And besides, it only really makes
sense to avoid adaptation for BT.601 specifically, since that's the one
we auto-guess based on the resolution.
I'm not even sure why we ever consulted *_src to begin with, since that
just describes the current image format - and not the original metadata.
(And in fact, we specifically had logic to work around the implications
this had on linear scaling.)
image_params is *the* authoritative source on the intended (i.e.
reference) image metadata, whereas *_src may be changed by previous
passes already. So only consult image_params for picking auto-generated
values.
Also, add some more missing "wide gamut" and "non-gamma" curves to the
autoconfig blacklist. (Maybe it would make sense to move this list to
csputils in the future? Or perhaps even auto-detect it based on the
associated primaries.)
User request and not that hard. Closes #3157.
Note that FFmpeg doesn't support this and there's no signalling in HEVC
etc., so the only way users can access it is by using vf_format
manually.
Mind: This encoding uses full range values, not TV range.
This is actually not entirely trivial since it involves negative Yxy
coordinates, so the CMM has to be capable of full floating point
operation. Fortunately, LittleCMS is, so we can just blindly implement
it.
This HDR function is unique in that it's still display-referred, it just
allows for values above the reference peak (super-highlights). The
official standard doesn't actually document this very well, but the
nominal peak turns out to be exactly 12.0 - so we normalize to this
value internally in mpv. (This lets us preserve the property that the
textures are encoded in the range [0,1], preventing clipping and making
the best use of an integer texture's range.)
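For illustration, the inverse OETF with this normalization looks
roughly like the following sketch (constants per ARIB STD-B67; not
mpv's actual shader code):

    #include <math.h>

    #define HLG_A 0.17883277
    #define HLG_B 0.28466892  // 1 - 4 * HLG_A
    #define HLG_C 0.55991073

    // Maps the non-linear HLG signal e in [0,1] to linear light in
    // [0,1], where 1.0 corresponds to the nominal peak of 12.0.
    static double hlg_inv_oetf(double e)
    {
        if (e <= 0.5)
            return e * e / 3.0;  // lower branch: [0, 1/12]
        return (exp((e - HLG_C) / HLG_A) + HLG_B) / 12.0;  // (1/12, 1]
    }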
This was grouped together with SMPTE ST2084 when checking libavutil
compatibility, since both were added to libavutil in the same
timeframe.
The main framebuffer is not the default framebuffer for the dxinterop
backend. Bind the main framebuffer and use the appropriate attachment
when reading the window content.
Fixes #3284.
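The readback then looks roughly like this (a sketch; main_fbo and the
destination buffer are illustrative):

    // Bind the main framebuffer (not 0, which is the default
    // framebuffer) and read from its color attachment.
    glBindFramebuffer(GL_FRAMEBUFFER, main_fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);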
The size check introduced in commit d941a57b did not consider that Xv
can round up the image size to the next chroma boundary. Doing that
makes sense, so it certainly can't be considered server misbehavior.
Do two things against this: allow it if the server returns a larger
image (we just crop it then), and also allocate a properly aligned
image in the first place.
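The aligned allocation simply rounds the size up to the chroma
boundary before allocating, e.g. for 4:2:0 (a sketch; the actual
alignment depends on the image format):

    // Round up to even sizes so Xv's chroma rounding can't exceed our
    // allocation; crop back to the original w/h when displaying.
    int aligned_w = (w + 1) & ~1;
    int aligned_h = (h + 1) & ~1;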
See previous commit. (The mixing case was never supported, so this has
equivalent functionality.)
This also implicitly fixes a bug: the old code allocated image data for
the cropped surface size only, which means the
video_surface_get_bits_y_cb_cr call could write beyond the allocated
image memory.
For now, this affects taking screenshots only.
If the video mixer is actually needed, we want to go through the video
mixer for screenshots too - which is why this function simply always
used the video mixer, and then copied the surface to RAM as RGBA.
Add reading the surface as nv12 if possible. If the format is correct,
and no special video processing (like deinterlacing) is used, then it's
read directly without going through the video mixer.
There's no particular reason for doing this, other than avoiding the
video mixer (as vo_opengl now can). Also, the next commit will
make use of this in vf_vdpaurb.c.
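The direct read path boils down to VdpVideoSurfaceGetBitsYCbCr with
the NV12 format, roughly (a sketch; the destination buffers are
illustrative):

    #include <vdpau/vdpau.h>

    // get_bits is the VdpVideoSurfaceGetBitsYCbCr entry point,
    // obtained through VdpGetProcAddress.
    static VdpStatus read_nv12(VdpVideoSurfaceGetBitsYCbCr *get_bits,
                               VdpVideoSurface surface,
                               void *y, uint32_t y_pitch,
                               void *uv, uint32_t uv_pitch)
    {
        void *data[2] = {y, uv};
        uint32_t pitches[2] = {y_pitch, uv_pitch};
        return get_bits(surface, VDP_YCBCR_FORMAT_NV12, data, pitches);
    }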
The GL_ARB_timer_query extension, and thus the GL_TIME_ELAPSED
constant, don't exist for GLES. For ES, EXT_disjoint_timer_query is
used, so take the constant from that extension; otherwise, provide the
constant manually.
See PR #3216, which introduced this error.
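The fix amounts to something like this (GL_TIME_ELAPSED and
GL_TIME_ELAPSED_EXT share the value 0x88BF):

    // On ES, take the constant from EXT_disjoint_timer_query; define
    // it manually if the headers provide neither name.
    #ifndef GL_TIME_ELAPSED
    #ifdef GL_TIME_ELAPSED_EXT
    #define GL_TIME_ELAPSED GL_TIME_ELAPSED_EXT
    #else
    #define GL_TIME_ELAPSED 0x88BF
    #endif
    #endif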
Until now, we've always converted vdpau video surfaces to RGB, and then
mapped the resulting RGB texture. Change this so that the surface is
mapped as NV12 plane textures.
The reason this wasn't done until now is because vdpau surfaces are
mapped in an "interlaced" way as separate fields, even for progressive
video. This requires messy reinterleaving. It turns out that even
though it's an extra processing step, the result can be faster than
going through the video mixer for RGB conversion.
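For reference, the GL_NV_vdpau_interop mapping that yields those field
textures looks roughly like this (a sketch; the exact field/plane
assignment is per the extension spec):

    #include <stdint.h>

    // A VdpVideoSurface always maps to 4 textures: 2 fields x
    // 2 planes for NV12 (luma and chroma field textures, which then
    // need to be woven back into progressive frames).
    static GLvdpauSurfaceNV map_fields(uint32_t vdp_surface, GLuint tex[4])
    {
        glGenTextures(4, tex);
        GLvdpauSurfaceNV s = glVDPAURegisterVideoSurfaceNV(
                (void *)(uintptr_t)vdp_surface, GL_TEXTURE_2D, 4, tex);
        glVDPAUSurfaceAccessNV(s, GL_READ_ONLY);
        glVDPAUMapSurfacesNV(1, &s);
        return s; // sample + weave the fields, then unmap the surface
    }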
Other than some potential speed-gain, doing this has multiple other
advantages. We can apply our own color conversion, which is important in
more complex cases. We can correctly apply debanding and potentially
other processing that requires chroma-specific or in-YUV handling.
If deinterlacing is enabled, this switches back to the old RGB
conversion method. Until we have at least a primitive deinterlacer in
vo_opengl, this will stay this way. The d3d11 and vaapi code paths are
similar. (Of course these don't require any crazy field reinterleaving.)
Otherwise stale references will survive forever. Could leak hardware
video surfaces.
In particular, the mpv vdpau code crashed with an assertion when exiting
after toggling deinterlacing, because not all references were released.
1. This basically reverts commit de4c74e5a4. Even with
CVDisplayLinkCreateWithActiveCGDisplays and
CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext, we still have to
explicitly set the current display ID, otherwise it will just always
choose the display with the lowest refresh rate. Another weird thing:
we still have to set the display ID a second time with
CVDisplayLinkSetCurrentCGDisplay after the link was started, otherwise
the display period is 0 and the fallback will be used.
If we ever use the callback method for something useful, it's probably
better to use CVDisplayLinkCreateWithActiveCGDisplays, since we will
need to keep the display link around instead of releasing it at the
end. In that case we have to call CVDisplayLinkSetCurrentCGDisplay two
times, once before and once after CVDisplayLinkStart.
2. Add a windowDidChangeScreen delegate to update the display refresh
rate when mpv is moved to a different screen.
We have two problems here.
1. CVDisplayLinkGetActualOutputVideoRefreshPeriod, as the name
suggests, returns a frame period and not a refresh rate. Using this as
screen_fps just leads to a slideshow. Why didn't this break video
playback on OS X completely? The answer leads us to the second
problem.
2. It seems that CVDisplayLinkGetActualOutputVideoRefreshPeriod always
returns 0 if used without CVDisplayLinkSetOutputCallback, and hence we
always fell back to CVDisplayLinkGetNominalOutputVideoRefreshPeriod.
Adding a callback to the CVDisplayLink solves this problem. The
callback function doesn't do anything at the moment, but it could be
put to use in the future.
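Put together, the retrieval now works roughly as in this sketch of the
CoreVideo C API (error handling omitted; not the actual cocoa code):

    #include <CoreVideo/CoreVideo.h>

    static CVReturn link_cb(CVDisplayLinkRef link, const CVTimeStamp *now,
                            const CVTimeStamp *out, CVOptionFlags flags_in,
                            CVOptionFlags *flags_out, void *ctx)
    {
        return kCVReturnSuccess; // does nothing yet, but must exist
    }

    static double get_fps(CGDirectDisplayID display)
    {
        CVDisplayLinkRef link;
        CVDisplayLinkCreateWithActiveCGDisplays(&link);
        CVDisplayLinkSetCurrentCGDisplay(link, display);
        CVDisplayLinkSetOutputCallback(link, &link_cb, NULL);
        CVDisplayLinkStart(link);
        // Without this second call, the actual period stays 0.
        CVDisplayLinkSetCurrentCGDisplay(link, display);
        double period = CVDisplayLinkGetActualOutputVideoRefreshPeriod(link);
        if (period <= 0) {
            // Fall back to the nominal refresh period.
            CVTime t = CVDisplayLinkGetNominalOutputVideoRefreshPeriod(link);
            if (!(t.flags & kCVTimeIsIndefinite))
                period = (double)t.timeValue / t.timeScale;
        }
        CVDisplayLinkStop(link);
        CVDisplayLinkRelease(link);
        return period > 0 ? 1.0 / period : 0; // fps = 1 / frame period
    }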
This can also remove all the stuff for lazily attaching the texture. It
doesn't matter if the dxinterop backend changes the bound framebuffer
during a VOCTRL, since the renderer does not rely on the GL state being
preserved.
The previous few commits changed sd_lavc.c's output to packed RGB sub-
images. In particular, this means all sub-bitmaps are part of a larger,
single bitmap. Change the vo_opengl OSD code such that it can make use
of this, and upload the pre-packed image, instead of packing and copying
them again.
This complicates the upload code a bit (4 code paths due to messy PBO
handling). The plan is to make sub-bitmaps always packed, but some
more work is required to reach this point. Packing libass images in
the same way is also planned; since that implies a copy, it will make
it easy to refcount the result.
(This is all targeted towards vo_opengl. Other VOs, vo_xv, vo_x11, and
vo_wayland in particular, will become less efficient. Although at least
vo_vdpau and vo_direct3d could be switched to the new method as well.)
Until now, subtitle renderers could export SUBBITMAP_INDEXED, an
8 bit per pixel format with a palette. sd_lavc.c was the only renderer
doing this, and the result was converted to RGBA in every use case
(except maybe when the subtitles were hidden).
Change it so that sd_lavc.c converts to RGBA on its own. This simplifies
everything a bit, and the palette handling can be removed from the
common code.
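The conversion itself is just a palette lookup per pixel, roughly (a
sketch with illustrative names, not the actual sd_lavc.c code):

    #include <stdint.h>
    #include <stddef.h>

    // Expand 8 bpp paletted sub-bitmaps to RGBA: each source byte
    // indexes a 256-entry palette of premultiplied RGBA values.
    static void pal8_to_rgba(uint32_t *dst, ptrdiff_t dst_stride,
                             const uint8_t *src, ptrdiff_t src_stride,
                             const uint32_t palette[256], int w, int h)
    {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++)
                dst[x] = palette[src[x]];
            dst = (uint32_t *)((char *)dst + dst_stride);
            src += src_stride;
        }
    }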
This is also preparation for making subtitle images refcounted. The
"caching" in img_convert.c is a PITA in this respect, and needs to be
redone. So getting rid of some img_convert.c code is a positive side-
effect. Also related to refcounted subtitles is packing them into a
single mp_image. Fewer objects to refcount is easier, and for the libass
format the same will be done. The plan is to remove manual packing from
the VOs which need single images entirely.
Apply the padding internally to each input bitmap, instead of requiring
this for the semi-public API.
Right now, everything still uses packer_pack_from_subbitmaps() to fill
the input bitmap sizes, but that's going to change with the following
commit. Since bitmap_packer.in is mutated during packing anyway, it's
more convenient to add the padding automatically.
Also, guarantee that every sub-bitmap has a padding border around it.
Don't let the padding overlap. Add padding even on the containing
borders.
This is simpler, doesn't cost much in memory usage, and is convenient
for one of the following commits.
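Conceptually, the packer inflates each input size by the padding and
shifts the packed positions back, as in this sketch (hypothetical
names, not the actual bitmap_packer.c code):

    struct pk_size { int w, h; };
    struct pk_pos  { int x, y; };

    // Reserve a pad-pixel border around every sub-bitmap, including
    // along the containing borders, without letting borders overlap.
    static void pad_inputs(struct pk_size *in, int count, int pad)
    {
        for (int i = 0; i < count; i++) {
            in[i].w += pad * 2;
            in[i].h += pad * 2;
        }
    }

    // After packing the inflated sizes, shift each result into the
    // middle of its reserved area.
    static void unpad_results(struct pk_pos *out, int count, int pad)
    {
        for (int i = 0; i < count; i++) {
            out[i].x += pad;
            out[i].y += pad;
        }
    }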
This is the ES equivalent to ARB_timer_query. It enables the performance
timers on ANGLE. All the added functions should be identical in
semantics to their desktop GL equivalents.
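Usage mirrors desktop GL, just with the EXT suffixes, roughly
(sketch):

    // Time a stretch of GL work with EXT_disjoint_timer_query.
    GLuint query;
    glGenQueriesEXT(1, &query);
    glBeginQueryEXT(GL_TIME_ELAPSED_EXT, query);
    // ... draw calls to be measured ...
    glEndQueryEXT(GL_TIME_ELAPSED_EXT);
    GLuint64 ns = 0;
    glGetQueryObjectui64vEXT(query, GL_QUERY_RESULT_EXT, &ns);
    glDeleteQueriesEXT(1, &query);
    // (A robust implementation also checks GL_GPU_DISJOINT_EXT.)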
The OpenGL 3.0+ and ES specs are quite clear on what values are
accepted for the attachment object name parameter. And there's no
overlap for the default framebuffer. Sigh.
Probably fixes Mesa raising an error in this case and might fix #3251.
Regression by the previous vo_opengl change.
Until now, we've used system-specific APIs (GLX, EGL, etc.) to
retrieve the depth of the default framebuffer. (We equate this with
the display depth, and use the determined depth for dithering.)
We can actually retrieve this value through standard GL API, and it
works everywhere (except GLES 2 of course). This simplifies everything a
great deal.
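The query itself looks like this (a sketch; the is_desktop_gl3 flag is
illustrative):

    // Per-channel depth of the default framebuffer. The attachment
    // name is GL_BACK_LEFT in core GL 3.0+, GL_BACK on GLES; FBOs use
    // GL_COLOR_ATTACHMENT0 instead, with no overlap between the two.
    GLenum obj = is_desktop_gl3 ? GL_BACK_LEFT : GL_BACK;
    GLint bits = 0;
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, obj,
            GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &bits);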
egl_helpers.c is empty now. But I expect that some EGL boilerplate will
be moved to it, so don't remove it yet.
This was done under the assumption that ANGLE would somehow always use
RG in ES2 mode. But there's no basis for this. Even if ANGLE supports
NV12 textures with drivers that do not allow for texture_rg, this case
is too obscure to worry about. So do the robust and correct thing
instead, and disable this code if texture_rg is not available.
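The resulting check is simply (a sketch; MPGL_CAP_TEX_RG is mpv's
capability flag for RG textures, and the placement is illustrative):

    // Refuse direct NV12 mapping when two-component (RG) textures are
    // unavailable, instead of assuming ANGLE provides them.
    if (!(gl->mpgl_caps & MPGL_CAP_TEX_RG))
        return -1;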