This HDR function is unique in that it's still display-referred; it just
allows for values above the reference peak (super-highlights). The
official standard doesn't actually document this very well, but the
nominal peak turns out to be exactly 12.0 - so we normalize to this
value internally in mpv. (This lets us preserve the property that the
textures are encoded in the range [0,1], preventing clipping and making
the best use of an integer texture's range.)
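A minimal sketch of that normalization step (illustrative helper names,
not mpv's actual code):

    // Map a display-referred value with nominal peak 12.0 into [0,1], so
    // an integer texture can hold it without clipping.
    static float hdr_normalize(float v)
    {
        return v / 12.0f;
    }

    // Undo the normalization when sampling the texture again.
    static float hdr_denormalize(float v)
    {
        return v * 12.0f;
    }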
This was grouped together with SMPTE ST2084 when checking libavutil
compatibility, since both were added in the same release window.
Some bugs in this code are exposed by e.g. playing lossless audio files
with --ad-lavc-threads=16. (libavcodec doesn't really support threaded
audio decoding, except for lossless files.) In these cases, a large
amount of audio can be buffered, which makes incorrect handling of this
buffering obvious.
For one, draining the decoder can take a while, so if there's a new
segment, we shouldn't read audio.
The segment end check was completely wrong and used the start value
instead.
The main framebuffer is not the default framebuffer for the dxinterop
backend. Bind the main framebuffer and use the appropriate attachment
when reading the window content.
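Roughly, the read path now looks like this (a sketch; the FBO id and
attachment are placeholders for whatever the backend actually uses):

    // The window content lives in the main FBO, not in FBO 0.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, main_fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);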
Fixes #3284.
The size check introduced in commit d941a57b did not consider that Xv
can round up the image size to the next chroma boundary. Doing that
makes sense, so it certainly can't be considered server misbehavior.
Do two things against this: allow the server to return a larger image
(we just crop it then), and also allocate a properly aligned image in
the first place.
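The relaxed size check then amounts to something like this (sketch;
variable names are illustrative):

    // Xv may round the image size up to the next chroma boundary, so a
    // larger image is fine - we simply crop it to the requested size.
    // Only a smaller image is still treated as an error.
    if (xv_image->width < requested_w || xv_image->height < requested_h)
        return -1;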
Instead of hardcoding what the libavcodec ac3 encoder expects, configure
it based on the AVCodec fields.
Unfortunately, it doesn't export the list of sample rates, so that is
done manually. This commit actually always fixes the rate to 48 kHz. I
don't even know whether the other rates worked. (They possibly did, but
they'd still change the spdif parameters, and would work differently
from ad_spdif.c.)
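What "configure it based on the AVCodec fields" boils down to, as a
sketch (error handling omitted; the rate is set manually as described):

    #include <libavcodec/avcodec.h>

    static int setup_ac3_encoder(AVCodecContext *avctx)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_AC3);
        if (!codec || !codec->sample_fmts)
            return -1;
        // Take the sample format from what the encoder declares.
        avctx->sample_fmt = codec->sample_fmts[0];
        // The sample rate list is not usable here, so set it manually.
        avctx->sample_rate = 48000;
        return 0;
    }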
Workaround for an awful corner-case. The new decode API "locks" the
decoder into the EOF state once a drain packet has been sent. The
problem starts with a file containing a 0-sized packet, which is
interpreted as a drain packet.
This should probably be changed in libavcodec (not treating 0-sized
packets as drain packets with the new API) or in libavformat (discard
0-sized packets as invalid), but efforts to do so have been fruitless.
Note that vd_lavc.c already does something similar, but originally for
other reasons.
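The workaround amounts to something like this (a sketch of the idea,
not the exact decoder wrapper code):

    // A NULL/empty packet signals draining with the new API and locks
    // the decoder into EOF, so drop genuinely empty demuxer packets
    // instead of sending them.
    if (pkt && pkt->size == 0)
        return 0; // skip; don't pass it to avcodec_send_packet()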
Fixes #3106.
See previous commit. (The mixing case was never supported, so this has
equivalent functionality.)
This also implicitly fixes a bug: the old code allocated image data for
the cropped surface size only, which means the
video_surface_get_bits_y_cb_cr call could write beyond the allocated
image memory.
For now, this affects taking screenshots only.
If the video mixer is actually needed, we want to go through the video
mixer for screenshots too - which is why this function simply always
used the video mixer, and then copied the surface to RAM as RGBA.
Add reading the surface as nv12 if possible. If the format is correct,
and no special video processing (like deinterlacing) is used, then it's
read directly without going through the video mixer.
There's no particular reason for doing this, other than avoiding the
video mixer (like vo_opengl now does). Also, the next commit will
make use of this in vf_vdpaurb.c.
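Reading the surface as NV12 boils down to a call like this (sketch;
plane pointers and pitches are placeholders, vdp is the usual vdpau
function table):

    void *planes[2] = { y_plane, uv_plane };      // Y, then packed CbCr
    uint32_t pitches[2] = { y_stride, uv_stride };
    VdpStatus st = vdp->video_surface_get_bits_y_cb_cr(surface,
            VDP_YCBCR_FORMAT_NV12, planes, pitches);
    if (st != VDP_STATUS_OK)
        ; // fall back to the video mixer / RGBA path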
The GL_ARB_timer_query extension and thus the GL_TIME_ELAPSED constant
don't exist for GLES.
For ES, the EXT_disjoint_timer_query extension is used, so take the
constant from that; otherwise provide the constant manually.
See PR #3216, which introduced this error.
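One way to provide it manually (the EXT constant from
EXT_disjoint_timer_query has the same value as the desktop one):

    #ifndef GL_TIME_ELAPSED
    #define GL_TIME_ELAPSED 0x88BF // == GL_TIME_ELAPSED_EXT
    #endif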
Until now, we've always converted vdpau video surfaces to RGB, and then
mapped the resulting RGB texture. Change this so that the surface is
mapped as NV12 plane textures.
The reason this wasn't done until now is because vdpau surfaces are
mapped in an "interlaced" way as separate fields, even for progressive
video. This requires messy reinterleaving. It turns out that even
though it's an extra processing step, the result can be faster than
going through the video mixer for RGB conversion.
Other than the potential speed gain, doing this has multiple other
advantages. We can apply our own color conversion, which is important in
more complex cases. We can correctly apply debanding and potentially
other processing that requires chroma-specific or in-YUV handling.
If deinterlacing is enabled, this switches back to the old RGB
conversion method. Until we have at least a primitive deinterlacer in
vo_opengl, this will stay this way. The d3d11 and vaapi code paths are
similar. (Of course these don't require any crazy field reinterleaving.)
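For reference, the mapping goes through GL_NV_vdpau_interop, which
hands a video surface out as four field/plane textures, roughly like
this (sketch):

    // A VdpVideoSurface maps as 4 textures: {top,bottom} field x {Y,CbCr}.
    GLuint tex[4];
    glGenTextures(4, tex);
    GLvdpauSurfaceNV hdl = glVDPAURegisterVideoSurfaceNV(
            (void *)(uintptr_t)video_surface, GL_TEXTURE_2D, 4, tex);
    glVDPAUSurfaceAccessNV(hdl, GL_READ_ONLY);
    glVDPAUMapSurfacesNV(1, &hdl);
    // ...sample the field textures and reinterleave them to full frames...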
Otherwise, stale references will survive forever, which could leak
hardware video surfaces.
In particular, the mpv vdpau code crashed with an assertion when exiting
after toggling deinterlacing, because not all references were released.
1. This basically reverts commit de4c74e5a4.
Even with CVDisplayLinkCreateWithActiveCGDisplays and
CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext, we still have to
explicitly set the current display ID, otherwise it will just always
choose the display with the lowest refresh rate. Another weird thing is
that we still have to set the display ID a second time with
CVDisplayLinkSetCurrentCGDisplay after the link was started; otherwise
the display period is 0 and the fallback will be used.
If we ever use the callback method for something useful, it's probably
better to use CVDisplayLinkCreateWithActiveCGDisplays, since we will
need to keep the display link around instead of releasing it at the
end. In that case we have to call CVDisplayLinkSetCurrentCGDisplay two
times, once before and once after starting the link (see the sketch
after this list).
2. Add a windowDidChangeScreen delegate to update the display refresh
rate when mpv is moved to a different screen.
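For illustration, the CVDisplayLinkCreateWithActiveCGDisplays variant
described in point 1 would have to look roughly like this (a sketch,
not the committed code; displayID is a placeholder):

    CVDisplayLinkRef link;
    CVDisplayLinkCreateWithActiveCGDisplays(&link);
    // Has to be set explicitly, or the link just picks the display with
    // the lowest refresh rate.
    CVDisplayLinkSetCurrentCGDisplay(link, displayID);
    CVDisplayLinkStart(link);
    // And again after starting, or the reported period stays 0 and the
    // fallback is used.
    CVDisplayLinkSetCurrentCGDisplay(link, displayID);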
We have two problems here.
1. CVDisplayLinkGetActualOutputVideoRefreshPeriod, as the name suggests,
returns a frame period and not a refresh rate. Using this as screen_fps
just leads to a slideshow. Why didn't this break video playback on OS X
completely? The answer leads us to the second problem.
2. It seems that CVDisplayLinkGetActualOutputVideoRefreshPeriod always
returns 0 if used without CVDisplayLinkSetOutputCallback, and hence we
always fell back to CVDisplayLinkGetNominalOutputVideoRefreshPeriod.
Adding a callback to the CVDisplayLink solves this problem. The callback
function doesn't do anything at the moment, but could possibly be used
in the future (see the sketch below).
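In code terms, roughly (a sketch; the callback intentionally does
nothing yet, see point 2 above):

    static CVReturn link_cb(CVDisplayLinkRef link, const CVTimeStamp *now,
                            const CVTimeStamp *out, CVOptionFlags flags_in,
                            CVOptionFlags *flags_out, void *ctx)
    {
        return kCVReturnSuccess;
    }

    // Without a callback, the "actual" period query just returns 0.
    CVDisplayLinkSetOutputCallback(link, link_cb, NULL);
    double period = CVDisplayLinkGetActualOutputVideoRefreshPeriod(link);
    if (period > 0)
        screen_fps = 1.0 / period; // it's a period, not a rate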
This also lets us remove all the stuff for lazily attaching the texture. It
doesn't matter if the dxinterop backend changes the bound framebuffer
during a VOCTRL, since the renderer does not rely on the GL state being
preserved.
Since there are not many sub-rectangles, this doesn't cost too much. On
the other hand, it avoids frequent warnings with vo_xv.
Also, the second copy in mp_blur_rgba_sub_bitmap() can be dropped.
The previous few commits changed sd_lavc.c's output to packed RGB sub-
images. In particular, this means all sub-bitmaps are part of a larger,
single bitmap. Change the vo_opengl OSD code such that it can make use
of this, and upload the pre-packed image, instead of packing and copying
them again.
This complicates the upload code a bit (4 code paths due to messy PBO
handling). The plan is to make sub-bitmaps always packed, but some more
work is required to reach this point. Libass images are to be packed
as well. Since this implies a copy, this will make it easy to refcount
the result.
(This is all targeted towards vo_opengl. Other VOs, vo_xv, vo_x11, and
vo_wayland in particular, will become less efficient. Although at least
vo_vdpau and vo_direct3d could be switched to the new method as well.)
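With the pre-packed image, the upload can be a single call into the
packed RGBA bitmap (sketch; names are placeholders, PBO handling
omitted):

    // All sub-bitmaps live in one big RGBA image with a single stride.
    glPixelStorei(GL_UNPACK_ROW_LENGTH, packed_stride / 4); // in pixels
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, packed_w, packed_h,
                    GL_RGBA, GL_UNSIGNED_BYTE, packed_pixels);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);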
The sub-bitmaps get extended by --sub-gauss, so we have to compute the
bounding box on the original subs. Not sure if this is really
equivalent to what the code did before, and I don't have the sample
anymore. (But this approach sure is a _shitty_ hack.)
Implement it directly in sd_lavc.c as well. Blurring requires extending
the size of the sub-images by the blur radius. Since we now want
sub_bitmaps to be packed into a single image, and we don't want to
repack for blurring, we add some extra padding to each sub-bitmap in the
initial packing, and then extend their size later. This relies on the
previous bitmap_packer commit, which always adds the padding in all
cases.
Since blurring is now done on parts of a large bitmap, the data pointers
can become unaligned, depending on their position. To avoid shitty
libswscale printing a dumb warning, allocate an extra image, so that the
blurring pass is done on two newly allocated images. (I don't find this
feature important enough to waste more time on it.)
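The size extension itself is just rectangle arithmetic inside the
reserved padding, along these lines (illustrative, not the actual
code):

    // Each sub-bitmap was packed with at least "r" free pixels around
    // it, so growing it by the blur radius stays inside its slot.
    int r = blur_radius; // derived from --sub-gauss
    sub->x -= r;
    sub->y -= r;
    sub->w += 2 * r;
    sub->h += 2 * r;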
The previous refactor accidentally broke this feature due to a logic bug
in osd.c. It didn't matter before it happened to break, and doesn't
matter now since the code paths are different.
Until now, subtitle renderers could export SUBBITMAP_INDEXED, an
8 bit per pixel paletted format. sd_lavc.c was the only renderer
doing this, and the result was converted to RGBA in every use-case
(except maybe when the subtitles were hidden.)
Change it so that sd_lavc.c converts to RGBA on its own. This simplifies
everything a bit, and the palette handling can be removed from the
common code.
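The conversion itself is a plain palette lookup, roughly (illustrative,
not the exact sd_lavc.c code):

    // src: 8 bit palette indices, pal: 256 premultiplied 32 bit entries,
    // dst: 32 bit RGBA output.
    for (int y = 0; y < h; y++) {
        const uint8_t *s = src + y * src_stride;
        uint32_t *d = (uint32_t *)(dst + y * dst_stride);
        for (int x = 0; x < w; x++)
            d[x] = pal[s[x]];
    }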
This is also preparation for making subtitle images refcounted. The
"caching" in img_convert.c is a PITA in this respect, and needs to be
redone. So getting rid of some img_convert.c code is a positive side-
effect. Also related to refcounted subtitles is packing them into a
single mp_image. Fewer objects to refcount is easier, and for the libass
format the same will be done. The plan is to remove manual packing from
the VOs which need single images entirely.
Apply the padding internally to each input bitmap, instead of requiring
this for the semi-public API.
Right now, everything still uses packer_pack_from_subbitmaps() to fill
the input bitmap sizes, but that's going to change with the following
commit. Since bitmap_packer.in is mutated during packing anyway, it's
more convenient to add the padding automatically.
Also, guarantee that every sub-bitmap has a padding border around it.
Don't let the padding overlap. Add padding even on the containing
borders.
This is simpler, doesn't cost much in memory usage, and is convenient
for one of the following commits.
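Conceptually, the packer now reserves the border itself, something like
this (a sketch of the idea, not the actual bitmap_packer.c code):

    // Reserve the border as part of each bitmap's packed size, then
    // shift the final positions so the container borders are padded
    // too; this way borders never overlap.
    for (int i = 0; i < packer->count; i++) {
        w[i] += packer->padding;   // border to the right/bottom
        h[i] += packer->padding;
    }
    // ...pack...
    for (int i = 0; i < packer->count; i++) {
        packer->result[i].x += packer->padding;  // border to the left/top
        packer->result[i].y += packer->padding;
    }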
I do not understand why github requires adding this crap to source code
repositories themselves, instead of making them part of the repository
configuration. Remove CONTRIBUTING.md to compensate for github crap
accumulating.