Typically happens with some implementations if no context is current, or
if the context is otherwise broken. This is particularly relevant to the
opengl_cb API, because the API user will have no other indication of what
went wrong.
The underlying intention of this code is to make changing
--videotoolbox-format at runtime work. For this reason, the format can't
just be set up statically, but must be read from the option at runtime.
This means the format is not fixed anymore, and we have to make sure the
renderer is properly reinitialized if the format changes. There is
currently no way to trigger reinit on this level, which is why the
mp_image_params.hw_subfmt field was introduced.
One sketchy thing remains: normally, the renderer is supposed to be
involved with VO format negotiation, which would ensure that the VO
can take the format at all. Since the hw_subfmt is not part of this
format negotiation, it's implied the get_vt_fmt() callback only
returns formats supported by the renderer. This is not necessarily
clear because vo_opengl checks this with converted_imgfmt separately.
None of this matters in practice though, because we know all formats
are always supported.
(This still requires somehow triggering decoder reinit to make the
change effective.)
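A minimal sketch of the mechanism (params_equal() and the simplified
struct are illustrative; imgfmt, w, h and hw_subfmt are real
mp_image_params fields):

  #include <stdbool.h>
  #include <stdint.h>

  struct mp_image_params { int imgfmt; uint32_t hw_subfmt; int w, h; };

  // A runtime change of --videotoolbox-format changes hw_subfmt, so the
  // params compare unequal and the renderer gets reinitialized.
  static bool params_equal(const struct mp_image_params *a,
                           const struct mp_image_params *b)
  {
      return a->imgfmt == b->imgfmt && a->hw_subfmt == b->hw_subfmt &&
             a->w == b->w && a->h == b->h;
  }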
For hwaccel formats, mp_image will merely point to a hardware surface
handle. In these cases, the mp_image_params.imgfmt field describes the
format insufficiently, because it mostly only describes the type of the
hardware format, not its underlying format.
Introduce hw_subfmt to describe the underlying format. It makes sense to
use it with most hwaccels, though for now it will be used with the
following commit only.
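A rough sketch of the field (the real definition lives in
video/mp_image.h; this simplified struct and the comments are
illustrative):

  #include <stdint.h>

  struct mp_image_params {
      int w, h;
      int imgfmt;          // e.g. IMGFMT_VIDEOTOOLBOX: only the hw type
      uint32_t hw_subfmt;  // the underlying surface format, e.g. the
                           // CVPixelBuffer pixel format type for VT;
                           // 0 if unused
  };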
Until now, the presence of the process_image() callback was used to
enable a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec
field instead, so the decoder can explicitly set this if it's really
needed.
Do this so process_image() can be used in the VideoToolbox glue code for
something entirely unrelated.
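Sketched as a struct field (process_image and vd_lavc_hwdec are from
this message; the other types and the exact field name are guesses):

  struct lavc_ctx;
  struct mp_image;

  struct vd_lavc_hwdec {
      // optional per-image hook, now usable independently of any queue
      struct mp_image *(*process_image)(struct lavc_ctx *ctx,
                                        struct mp_image *img);
      int delay_queue;  // explicit queue size; 0 means no delay queue
  };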
Some functions which expected a codec name (i.e. the name of the video
format itself) were passed a decoder name. Most "native" libavcodec
decoders have the same name as the codec, so this was never an issue.
This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
should now actually enable native surface mode (instead of doing copy-
back).
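A sketch of the distinction in plain libavcodec terms (the helper is
hypothetical; the libavcodec calls are real):

  #include <libavcodec/avcodec.h>

  // "h264_mmal" is a decoder name; the codec name is "h264".
  static const char *codec_name_of_decoder(const char *decoder)
  {
      const AVCodec *dec = avcodec_find_decoder_by_name(decoder);
      const AVCodecDescriptor *d = dec ? avcodec_descriptor_get(dec->id)
                                       : NULL;
      return d ? d->name : NULL;
  }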
The sync-by-display mode relies on using the vsync statistics for
timing. As a consequence discontinuities must be handled somehow. Until
now we have done this by completely resetting these statistics.
This can be somewhat annoying, especially if the GL driver's vsync
timing is not ideal. So after e.g. a seek it could take a second until
it locked back to the proper values.
Change it not to reset all statistics. Some state obviously has to be
reset, because it's a discontinuity. To make it worse, the driver's
vsync behavior will also change on such discontinuities. To compensate,
we discard the timings for the first 2 vsyncs after each discontinuity
(via num_successive_vsyncs). This is probably not fully ideal, and
num_total_vsync_samples handling in particular becomes a bit
questionable.
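Roughly (num_successive_vsyncs is from this change; the surrounding
names are illustrative):

  struct vsync_state { int num_successive_vsyncs; };

  static void on_vsync(struct vsync_state *st, double now)
  {
      st->num_successive_vsyncs++;
      if (st->num_successive_vsyncs <= 2)
          return; // vsync timing right after a discontinuity is unreliable
      (void)now;  // ...otherwise feed `now` into the statistics as usual
  }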
Tuning it in a way to be actually useful is too much effort.
As an alternative, there's the "buffering" detection, which operates on a
much higher level. The only disadvantage is that it's harder for the
user to guess whether this is a network problem, or if e.g. libavformat
is probing too much data when opening a stream. Maybe the cache-speed
property is helpful here.
For now, do not remove the associated code, but just silence the
warning.
Fixes #3019.
It's pretty "unfriendly" and causes too many issues. (Probably. At least
they're more obvious to a user than e.g. broken frame timing.)
Potentially we could apply heuristics like applying this only on
fullscreen, but let's not. It's up to the user to configure this to
get best results.
Fixes #2997.
The past behavior was a bit weird, especially when zooming out. There
was no simple way to zoom in or out in consistent increments using
keybindings alone.
The new behavior preserves most of the old behavior's semantics but
scales out to infinity better. It coincidentally also makes it
really easy to get clean power of 2 ratios (e.g. 2x, 4x, 8x and their
inverses).
Fixes #3004.
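The "clean power of 2 ratios" remark suggests an exponential mapping;
a sketch of that idea (not necessarily the exact formula used):

  #include <math.h>

  static double zoom_to_scale(double zoom)
  {
      return pow(2.0, zoom); // 1 -> 2x, 2 -> 4x, 3 -> 8x, -1 -> 0.5x
  }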
I got a report that the build on a recent aarch64 Linux kernel failed.
DVB support was detected, but errored on compilation:
In file included from ../stream/stream_dvb.c:57:0:
../stream/dvbin.h:72:5: error: unknown type name 'fe_bandwidth_t'
fe_bandwidth_t bw;
Make the test stricter, which should take care of this. (I couldn't find
out what exactly triggered the failure, nor could I attempt to reproduce
it.)
The change in stream/dvbin.h is to make sure that this isn't caused by
incorrect header inclusion. It now includes the same files as the
configure test.
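A plausible shape for the stricter test fragment (the actual configure
check may differ): use the legacy type directly, so compilation fails
wherever it no longer exists:

  #include <linux/dvb/dmx.h>
  #include <linux/dvb/frontend.h>
  #include <linux/dvb/version.h>

  int main(void)
  {
      fe_bandwidth_t bw = BANDWIDTH_AUTO; // absent from newer DVB headers
      (void)bw;
      return 0;
  }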
There is an obscure feature which requires essentially reordering PTS
from different packets.
Unfortunately, libavcodec introduced a ridiculously shitty API for
this, which works very much unlike the audio/video API. Instead of
simply passing through the PTS, it wants to fuck with it for no reason,
and even worse, fucks with other fields and changes their semantics
(??????). This affects AVSubtitle.end_display_time. This probably will
cause issues for us, and I have no desire to find out whether it will.
Since only PGS requires this, and it happens not to use
end_display_time, do it for PGS only.
Fixes #3016.
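A sketch of the gating (AV_CODEC_ID_HDMV_PGS_SUBTITLE is the real
libavcodec ID; the helper is illustrative):

  #include <stdbool.h>
  #include <libavcodec/avcodec.h>

  static bool needs_pts_reordering(const AVCodecContext *avctx)
  {
      // only PGS needs it, and PGS happens not to use end_display_time
      return avctx->codec_id == AV_CODEC_ID_HDMV_PGS_SUBTITLE;
  }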
Basically, this information is useless, because some muxers (hurr
libavformat) write bogus information anyway. This means if we e.g. see
PGS packets in mkv with duration explicitly set to 0, we must not trust
that value anyway. (The FFmpeg API problem is leaking into files, how
nice.)
This makes the black point closer (chromatically) to the white point, by
ensuring channels keep their consistent brightness ratios as they go
down to zero.
I also raised the 3DLUT version as this changes semantics and is a
separate commit from the previous one.
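One way to read the stated property (purely illustrative, not the
actual 3DLUT code): near-black values are produced by scaling all
channels with one common factor, so the R:G:B ratio stays fixed instead
of drifting toward the display black's own tint:

  // k runs from 1.0 (white point) down to 0.0 (black point); all three
  // channels shrink proportionally, preserving their ratios.
  static void scale_toward_black(float rgb[3], float k)
  {
      for (int i = 0; i < 3; i++)
          rgb[i] *= k;
  }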
This commit refactors the 3DLUT loading mechanism to build the 3DLUT
against the original source characteristics of the file. This allows us,
among other things, to use a real BT.1886 profile for the source. This
also allows us to actually use perceptual mappings. Finally, this
reduces errors on standard gamut displays (where the previous 3DLUT
target of BT.2020 was unreasonably wide).
This also improves the overall accuracy of the 3DLUT due to eliminating
rounding errors where possible, and allows for more accurate use of
LUT-based ICC profiles.
The current code is somewhat more ugly than necessary, because the idea
was to implement this commit in a working state first, and then maybe
refactor the profile loading mechanism in a later commit.
Fixes #2815.
FFmpeg partially merged the API change. It added the AVCodecParameters
definition, but not the AVStream.codecpar field. The new code
compiles only with the API fully merged, so adjust the check.
Encoding mode uses deprecated API. See previous commit. Encoding mode
will stop working/compiling at some point in the future, so unless
someone fixes the encoding code, it will stay disabled by default.
(Note that the deprecations are not merged in FFmpeg yet, but they will
soon. They've been deprecated in Libav for a while now.)
AVStream.codec is deprecated now, and you're supposed to use
AVStream.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
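The usual migration pattern looks roughly like this (a sketch; the
libav* calls themselves are real API):

  #include <libavcodec/avcodec.h>
  #include <libavformat/avformat.h>

  static AVCodecContext *make_codec_ctx(AVStream *st)
  {
      AVCodecContext *avctx = avcodec_alloc_context3(NULL);
      if (!avctx)
          return NULL;
      // replaces direct use of the deprecated st->codec
      if (avcodec_parameters_to_context(avctx, st->codecpar) < 0) {
          avcodec_free_context(&avctx);
          return NULL;
      }
      return avctx;
  }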
This was changed in 2014, so I suppose users will usually have an FFmpeg
release which includes the corresponding upstream change. If not, well,
too bad for those MicroDVD-obsessed users.
Also don't try to retrieve the default framerate as exported by the
demuxer, and instead hardcode it and trust it won't ever change. This
avoids having to deal with a larger mess in the codecpar commit.
I don't trust it one bit, and it's a bother with the codecpar change.
If it turns out to be important for some file formats, it could be
added back (or FFmpeg fixed).
This commit adds the d3d11va-copy hwdec mode using the FFmpeg d3d11va
API. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
Commit 57506b27 accidentally broke this. The status (including the
usually always active demuxer cache) should be shown only if the stream
cache is actually enabled.
This reverts commit 503c6f7fd6.
There are situations where some decoders (MF apparently) always require
a timestamp. Also, this makes bitrate estimation more granular than
necessary. It seems it's better to try to detect files with broken
default durations explicitly instead. Or maybe something should be
added to smooth audio timestamps after filters.
This also draws it after color management etc. In a nutshell, this
change makes the transparency checkerboard independent of upscaling,
panning, cropping etc. It will always be the same apparent size and
position (relative to the window).
It will also be independent of the video colorspace and such things.
(Note: This might cause white imbalance issues if playing a file with a
white point that does not match the display, in absolute colorimetric
mode. But that's uncommon, especially in conjunction with transparent
image files, so it's not a primary concern here.)
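The invariance follows from deriving the pattern from window pixel
coordinates instead of video coordinates; a sketch (tile size
illustrative):

  static int checker_on(int x, int y) // x, y: window pixels
  {
      const int tile = 16;
      return ((x / tile) + (y / tile)) & 1;
  }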
Until now, we've let the windowing backend decide. But since they
usually require premultiplied alpha, and premultiplied alpha is easier
to handle, hardcode it.
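For reference, premultiplied alpha composites as
out = src.rgb + dst.rgb * (1 - src.a), which in GL terms is simply:

  glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);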
The recent changes fixed rotation handling, but reversed the rotation
direction. The direction is expected to be counter-clockwise, because
demuxers export video rotation metadata as such.
Don't assume EOF if we didn't try to read anything in the first place.
Fixes regressions in particular with low cache sizes, which triggered
the other code paths more often.
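In other words (names illustrative):

  #include <stdbool.h>

  static bool is_eof(int requested, int got)
  {
      // a zero-byte result only means EOF if bytes were actually requested
      return requested > 0 && got == 0;
  }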
Instead of having a separate STREAM_CTRL for each piece of information,
which also requires separate
additional caching in the demuxer. (The demuxer adds an indirection,
since STREAM_CTRLs are not thread-safe.)
Since this includes the cache speed, this should fix #3003.
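A sketch of the combined query (the field names are guesses; the point
is one ctrl filling one struct in a single, internally consistent call):

  #include <stdbool.h>
  #include <stdint.h>

  struct stream_cache_info {
      int64_t size;   // total cache size, bytes
      int64_t fill;   // bytes currently buffered
      bool idle;      // whether the cache reader is idle
      int64_t speed;  // read speed; backs the cache-speed property
  };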
This would get stuck in reconfiguring the filter chain forever, because
params was mutated ("params.rotate = 0;"). This was used as input for
vf_reconfig(), but the filter chain input must always be equivalent to
the decoder output, or filter chain reconfiguration will be triggered.
The line of code to reset the rotation is from a time when this used to
work differently.
Also remove the unnecessary try_filter() parameter.
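The broken pattern vs. the fix, roughly (vf_reconfig() is from this
message; everything else is illustrative):

  struct mp_image_params params = decoder_output; // must stay identical
  // removed: params.rotate = 0;  -- mutating the input made it differ
  // from the decoder output, retriggering reconfiguration forever
  vf_reconfig(vf, &params);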