Will be helpful for the coming filter support. I planned on merging
audio/video decoding, but this will have to wait a bit longer, so only
remove the duplicate status codes.
Commit b53cb8de added a delay queue for decoded frames. This is supposed
to be used with copy-back decoders like dxva2-copy and vaapi-copy.
Surfaces returned by them can't be referenced after uninitializing the
decoders, so they have to be released before destroying the decoder.
Move the flush_all() call above decoder uninit accordingly. Also move
the destruction of the AVFrame used for decoding (just for being
defensive - normally it doesn't hold any reference).
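A minimal sketch of the intended teardown order, with hypothetical names
for the context and the flush helper (only av_frame_free() and
avcodec_free_context() are real libavcodec/libavutil calls):

    #include <libavcodec/avcodec.h>

    struct dec_ctx {                    // hypothetical container
        AVCodecContext *avctx;
        AVFrame *pic;                   // scratch frame used while decoding
        // ... the delay queue of decoded frames lives here too ...
    };

    void flush_all_queued_frames(struct dec_ctx *d);    // hypothetical

    static void dec_uninit(struct dec_ctx *d)
    {
        // 1. Release all delayed/queued surfaces while the (copy-back)
        //    hw decoder still exists; they must not outlive it.
        flush_all_queued_frames(d);
        // 2. Drop the scratch frame; normally it holds no reference,
        //    but being defensive costs nothing.
        av_frame_free(&d->pic);
        // 3. Only now destroy the decoder itself.
        avcodec_free_context(&d->avctx);
    }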
We just need to provide an entrypoint for it, and move the main init
code to a separate function. This gets rid of the messy video chain full
reinit in command.c, which completely destroyed and recreated the video
state for the purpose of mid-stream hw/sw switching.
Don't give the "software_fallback_decoder" field special meaning. Always
set it, and rename it to "decoder". Whether hw decoding is used is
determined by the "hwdec" field already.
This code tries to deal with broken PTS timestamps, but since commit
271cabe6 it didn't always overwrite the previous timestamp as it should
have. This mattered only if there were broken timestamps in the video
stream.
Also remove the pointless prev_codec_pts variables, since the decoder
doesn't overwrite these fields anymore.
Commit b53cb8de increased this by the number of additionally delayed
surfaces. But since this is only enabled in copy-back mode (which is
what process_image is about), the other additional surfaces accounted
for the direct rendering case can be ignored.
Another fix for the crazy and insane nvidia preemption behavior.
This time, the situation is that we are using vo_opengl with vdpau
interop, and that vdpau got preempted in the background while mpv was
sitting idly. This can be e.g. reproduced by using:
--force-window=immediate --idle --hwdec=vdpau
and switching VTs. Then after switching back, load a video file.
This will not let mp_vdpau_handle_preemption() perform preemption
recovery, simply because it will do so only once vdp_decoder_create()
has been called. There are some other API calls which trigger
preemption, but many don't.
Due to the way the libavcodec API works, vdp_decoder_create() is called
way too late - only after get_format returns. libavcodec notices that
creating the decoder failed, and calls get_format again without the vdpau
format. We could perhaps force it to reinit again (by adding a call to
vdpau.c, that checks for preemption, and sets hwdec_request_reinit), but
this seems too much of a mess.
Solve it by calling an API function in mp_vdpau_handle_preemption() that empirically
does trigger preemption: output_surface_put_bits_native(). This call is
useless, and in fact should be doing nothing (empty update VdpRect).
There's the slight chance that in theory it will slow down operation,
but in practice it's bound to be harmless. It's likely the cheapest and
simplest API call I've found that can trigger the fallback this way.
(The driver is closed source, so it was up to trial & error.)
Also, when initializing decoding, allow initial preemption recovery,
which is needed to pass the test mentioned above.
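A hedged sketch of the do-nothing call described above; the function
pointer and the output surface are assumed to have been obtained earlier
via VdpGetProcAddress and an output-surface create call:

    #include <stdbool.h>
    #include <stdint.h>
    #include <vdpau/vdpau.h>

    static bool check_preemption(VdpOutputSurfacePutBitsNative *put_bits,
                                 VdpOutputSurface surface)
    {
        // Zero-sized update area: the call should do nothing, but it is
        // assumed to make the driver report a pending preemption, either
        // via its return status or via the preemption callback.
        VdpRect empty = {0, 0, 0, 0};
        static const uint32_t pixel = 0;
        const void *const data[1] = {&pixel};
        const uint32_t pitches[1] = {sizeof(pixel)};
        VdpStatus st = put_bits(surface, data, pitches, &empty);
        return st == VDP_STATUS_DISPLAY_PREEMPTED;
    }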
vo_chain_uninit() isn't supposed to care much about the decoder
(although decoders and outputs still go strictly together, so there is
not much of an actual difference now).
Also unset track.d_video correctly.
Remove a stale declaration from dec_video.h as well.
This covers source files which were added in mplayer2 and mpv times
only, and where all code is covered by LGPL relicensing agreements.
There are probably more files to which this applies, but I'm being
conservative here.
A file named ao_sdl.c exists in MPlayer too, but the mpv one is a
complete rewrite, and was added some time after the original ao_sdl.c
was removed. The same applies to vo_sdl.c, for which the SDL2 API is
radically different in addition (MPlayer supports SDL 1.2 only).
common.c contains only code written by me. But common.h is a strange
case: although it originally was named mp_common.h and exists in MPlayer
too, by now it contains only definitions written by uau and me. The
exceptions are the CONTROL_ defines - thus not changing the license of
common.h yet.
codec_tags.c contained once large tables generated from MPlayer's
codecs.conf, but all of these tables were removed.
From demux_playlist.c I'm removing a code fragment from someone who was
not asked; this probably could be done later (see commit 15dccc37).
misc.c is a bit complicated to reason about (it was split off mplayer.c
and thus contains random functions out of this file), but actually all
functions have been added post-MPlayer. Except get_relative_time(),
which was written by uau, but looks similar to 3 different versions of
something similar in each of the Unix/win32/OSX timer source files. I'm
not sure what that means in regards to copyright, so I've just moved it
into another still-GPL source file for now.
screenshot.c once had some minor parts of MPlayer's vf_screenshot.c, but
they're all gone.
This moves some code related to decoding from video.c to dec_video.c,
and also removes some accesses to dec_video.c from the filtering code.
dec_video.ch is starting to make sense, and simply returns video frames
from a demuxer stream. The API exposed is also somewhat intended to be
easily changeable to move decoding to a separate thread, if we ever want
this (due to libavcodec already being threaded, I don't see much of a
reason, but it might still be helpful).
The aspect ratio calculations are cached (mainly so that aspect ratio
related messages are not logged on every frame). The cache is not cleared
anymore when video filters are reconfigured, but changing the
video-aspect-ratio property relied on it. Make it explicit.
Fixes #2714.
Lots of noise to remove the vfilter/vo fields from dec_video.
For now, video filtering and output will still be done together,
summarized under struct vo_chain.
There is the question where exactly the vf_chain should go in such a
decoupled architecture. The end goal is being able to place a "complex"
filter between video decoders and output (which will culminate in
natural integration of A->V filters for
libavfilter audio visualizations). The vf_chain is still useful for
"final" processing, such as format conversions and deinterlacing. Also,
there's only 1 VO and 1 --vf option. So having 1 vf_chain for a VO seems
ideal, since otherwise there would be no natural way to handle all these
existing options and mechanisms.
There is still some work required to truly decouple decoding.
Instead of handling this on filter chain reinit, do it directly after
the decoder. This makes the code less entangled. In particular, this
gets rid of the really weird "override params" concept in the video
filter code.
The last_format/fixed_formats have some redundancy with decoder_output,
but unfortunately the latter has a slightly different use.
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field can not be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
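For reference, the relation between the two conventions, using
libavutil's rational helpers (a hedged illustration, not the actual mpv
code):

    #include <libavutil/rational.h>

    // Display aspect ratio (e.g. 16:9) from the sample (pixel) aspect
    // ratio and the stored dimensions: DAR = SAR * width / height.
    static AVRational dar_from_sar(AVRational sar, int width, int height)
    {
        if (sar.num <= 0 || sar.den <= 0)
            sar = (AVRational){1, 1};   // unknown SAR: assume square pixels
        return av_mul_q(sar, av_make_q(width, height));
    }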
If reinit after a fallback from hardware fails, this field can be NULL.
The check in control() was broken due to a typo (found by Coverity), and
decode() lacked the check entirely.
Approximately reverts commit 3ccac74d. This failed with some avi files,
which do pseudo-VFR by sending packets with empty frames (or repeat
frames, depending on point of view). Specifically, these packets are not
0 bytes, so they don't get skipped by libavformat, as with the usual VFR
avi hack. Instead, the packet contains a VOP with vop_coded=0, so
libavcodec will just return no frame. We could probably distinguish such
skipped frames and delayed frames by explicitly measuring the codec
delay by counting how long it takes to get the very first frame (and
then treat skips as explicit drops), but we may as well simply reinstate
the old code.
To appease at least one semi-broken case, do not enable this logic on
the RPI, as the FFmpeg MMAL wrapper has arbitrary buffering (and MMAL
itself is asynchronous).
10 bit HEVC would require DXVA2_ModeHEVC_VLD_Main10, and most likely a
different surface type (judging by lavfsplitter source code, both
P010 and P016 would work). Since I'm unable to test this stuff,
exclude 10 bit for now.
See #2516.
Until now, we've relied on the following things:
- you can send flush packets to the decoder even if it's fully flushed,
- you can send new packets to a flushed decoder,
- you can send new packets to a partially flushed decoder.
("flushing" refers to sending flush packets to the decoder until the
decoder does not return new pictures, not avcodec_flush_buffers().)
All of these are questionable. The libavcodec API probably doesn't
guarantee that these work well or at all, even though most decoders have
no issue with these. But especially with hardware decoding wrappers
(like MMAL), real problems can be expected. Isolate us from these corner
cases by handling them explicitly.
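For reference, with today's explicit libavcodec draining API the
"flushing" above corresponds to something like this hedged sketch:

    #include <libavcodec/avcodec.h>

    // Drain all buffered pictures out of the decoder. Afterwards the
    // decoder must be reset (avcodec_flush_buffers()) or reopened before
    // it accepts new packets again.
    static int drain_decoder(AVCodecContext *avctx, AVFrame *frame)
    {
        int ret = avcodec_send_packet(avctx, NULL);     // enter draining mode
        if (ret < 0)
            return ret;
        while ((ret = avcodec_receive_frame(avctx, frame)) >= 0)
            av_frame_unref(frame);                      // consume leftovers
        return ret == AVERROR_EOF ? 0 : ret;
    }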
A hw decoder might fail to decode a frame for multiple reasons, and not
always just because decoding is impossible. We can't generally
distinguish these reasons well. Make it more tolerant by accepting
failures of 3 frames, but not more. The threshold can be adjusted by the
repurposed --vd-lavc-software-fallback option.
(This behavior was suggested much earlier in some PR, but at the time
the "proper" hwdec fallback was indistinguishable from decoding error.
With the current situation, "proper" fallback is still instantious.)
The uninit() function was called twice if the init() function failed
(once by init(), once by vd_lavc.c code), which caused crashes due to
double-free. (This failure is a corner case, and all other hwdec
backends appear to handle this case gracefully.)
I do not think this code should have to deal with uninit() being called
more than once. Guarantee that it's called exactly once.
Fixes a linker failure. How did this ever work? Apparently it did most
of the time, but we just got the first case where it didn't.
Fixes #2433.
The previous commit moved the av_frame_unref() after the got_picture
check. This accidentally also deferred the software fallback
reinitialization to until a software picture was decoded (instead of the
exact time of the fallback), which is not ideal.
Just rely on the fact that calling av_frame_unref() on a frame is ok
even if nothing was decoded.
Commit 12cd48a8 started setting the hwdec_failed field even if hwdec was
not active, and because it also checked this field even if hwdec was not
active, broke decoding forever.
Fix this, and also avoid a memory leak or API misuse by releasing the
decoded picture. Passing an unreleased frame to the decoder has, as far
as I know, no defined effects.
The libavcodec h264 decoder contains some idiotic code with unknown
purpose (no sample or explanation known that necessitates its
existence), that causes the AVCodecContext.get_format callback to be
invoked at a time when hwaccels can't be initialized. By definition, the
get_format callback is supposed to initialize hwaccels (another idiotic
thing now part of the API, but different story). This causes hwdec
initialization sometimes to fail (WolfensteinTwitch.mp4): the first
get_format callback will mark it as failed, so the second get_format
(the "proper" normal one) will not bother restoring the state, and hwdec
init fails.
While this should be fixed in libavcodec (good luck with that), it's
quite easy to workaround.
Use the first encountered packet PTS/DTS as base, instead of the last
one. This does not add the amount of frames buffered in the codec to the
PTS offset, and thus is better.
Also, don't add the frame time if there was no decoded frame yet. The
first frame should obviously have the timestamp of the first packet
(going by this heuristic).
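The heuristic in plain arithmetic, as a hedged sketch with made-up names:

    // base_pts:   PTS/DTS of the first packet fed to the decoder.
    // frames_out: number of frames the decoder has already returned.
    // frame_time: duration of one frame (1/fps).
    static double guess_pts(double base_pts, int frames_out, double frame_time)
    {
        // The first decoded frame gets the first packet's timestamp;
        // every later frame adds one frame duration per frame already output.
        return base_pts + frames_out * frame_time;
    }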
While b-frame reordering limits the maximum required number to around
16, the number of additionally buffered frames can be much higher.
Guess when this actually matters? (For the libavcodec MMAL wrapper.)
Useless. Sometimes it might be useful to make some extremely broken
files work, but on the other hand --no-correct-pts is sufficient for
these cases.
While we still need some of the code for AVI, the "auto" mode in
particular inflated the size of the code.
This can't be handled correctly at all. Other cases when the decoder
might drop a frame (such as completely failing to decode a frame) will
shift timestamps by a frame, and it can't be avoided.
While we could maybe find a better way to handle this with libavcodec's
main decoders, this seems to be much harder if it should work with
certain HW decoders, which don't passthrough the DTS field (such as
MMAL). Another problem are .avi files with b-frames. So just leave it
as it is.
This was used only by the timestamp sorting code, which is a fallback
for avi files (as well as avi-muxed mkv files). This was supposed to
prevent accumulating timestamps in case the decoder consumes more
packets than it outputs frames (i.e. frames are dropped). This didn't
work very well (timestamps could be off by a large amount), the
estimation of the delay was fragile, and the interdependencies with the
decoder were annoying, so kill it.
This essentially reverts commit 009dfbe3. FFmpeg VideoToolbox support
is being wacky, and can cause major issues, such as not being able
to decode a single frame. (E.g. by playing a .ts file. This should be
fixed in FFmpeg eventually.)
This is not a straight revert of the commit; just a functional one. We
keep the slightly simpler code structure.
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA not
useable with older FFmpeg versions in some newer mpv releases.
VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
Definitely not needed anymore, and fixes a crash in some weird corner-
cases.
The extradata freeing is apparently still needed, though. (Because a
codec context can be opened again, which makes no sense, but ok.)
There are at least 2 ways of using VAAPI without X11 (Wayland, DRM).
Remove the X11 requirement from the decoder part and the EGL interop.
This will be used by a following commit, which adds Wayland support.
The worst about this is the decoder part, which includes a bad hack for
using the decoder without any VO interop (also known as "vaapi-copy"
mode). Separate the X11 parts so that they're self-contained. For the
EGL interop code we do something similar (it's kept slightly simpler,
because it essentially only has to translate between our silly
MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
Make the GPU memcpy from the dxva2 code generally useful to other parts
of the player.
We need to check at configure time whether SSE intrinsics work at all.
(At least in this form, they won't work on clang, for example. It also
won't work on non-x86.)
Introduce a mp_image_copy_gpu(), and make the dxva2 code use it. Do some
awkward stuff to share the existing code used by mp_image_copy(). I'm
hoping that FFmpeg will sooner or later provide a function like this, so
we can remove most of this again. (There is a patch, but it's stuck in
limbo since forever.)
All this is used by the following commit.
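The configure check can be a tiny standalone program that only compiles
where the required SSE4.1 intrinsic exists; a hedged sketch (it would
typically be built with -msse4.1):

    #include <smmintrin.h>      // SSE4.1: _mm_stream_load_si128

    int main(void)
    {
        __m128i zero = _mm_setzero_si128();
        __m128i v = _mm_stream_load_si128(&zero);   // streaming load
        return _mm_cvtsi128_si32(v);    // keep it from being optimized away
    }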
Usually, libavcodec ignores errors reported by the hardware decoding
API, so it's not like we can actually escape if the hardware is somehow
acting up.
For normal fallback purposes, or if parts of the hw decoding API which
we actually check fails, we do this by setting and checking the
hwdec_failed flag anyway.
The comment was largely outdated, and described the old situation when
we used a "violent" fallback by making get_buffer2 fail completely.
Also, for the case when the hw decoder initialization succeeded (in
get_format), but get_buffer2 for some reason requests something
unexpected, we also can fallback more gracefully and in the same way.
Often, we don't know whether hardware decoding will work until we've
tried. (This used to be different, but API changes and improvements in
libavcodec led to this situation.) We will often output that we're going
to use hardware decoding, and then print a fallback warning.
Instead, print the status once we have decoded a frame.
Some of the old messages are turned into verbose messages, which should
be helpful for debugging. Also add some new ones.
The fallback at initialization time was basically duplicated, maybe for
the sake of showing a different error message. This doesn't matter
anymore; not much can fail at initialization anymore. Most meaningful
and common errors happen either at probing or in get_format (when the
actual hw decoder is initialized).
libavcodec does not support HEVC via VAAPI yet, so this won't work.
However, there is ongoing work to add HEVC support to VAAPI, and this
change might help with testing. (Or maybe not - but there is no harm in
this change.)
This affects vo_opengl_cb in particular: it'll most likely auto-load
VDA, and then the VideoToolbox decoder won't work. And everything fails.
This is mainly caused by FFmpeg using separate pixfmts for the _same_
thing (CVPixelBuffers), simply because libavcodec's architecture demands
that hwaccel backends are selected by pixfmts. (Which makes no sense,
but now we have the mess.)
So instead of duplicating FFmpeg's misdesign, just change the format to
our own canonical one on the image output by the decoder. Now the GL
interop code is exactly the same for VDA and VT, and we use the VT name
only.
While the "old" libavcodec vdpau API is not deprecated (only the very-
old API is), it's still relatively complicated code that badly
duplicates the much simpler newer vdpau code. It exists only for the
sake of older FFmpeg releases; get rid of it.
VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working.
The code needs libavcodec support which was added recently (to FFmpeg git,
libav doesn't support it).
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
Revert "win32: more wchar_t -> WCHAR replacements"
Revert "win32: replace wchar_t with WCHAR"
Doing a "partial" port of this makes no sense anymore from my
perspective. Revert the changes, as they're confusing without
context, maintenance, and progress. These changes were a bit
premature anyway, and might actually cause other issues
(locale neutrality etc. as it was pointed out).
This was essentially missing from commit 0b52ac8a.
Since L"..." string literals have the type wchar_t[], we can't use them
for UTF-16 strings. Use C11 u"..." string literals instead. These have
the type char16_t[], but we simply assume char16_t is the same
underlying type as WCHAR. In practice, they're both unsigned short.
For this reason use -std=c11 on Windows. Since Windows is a "special"
environment (we require either MinGW or Cygwin), we don't need to worry
too much about compiler compatibility.
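A hedged sketch of the idea, including a compile-time check that the
assumption about char16_t and WCHAR actually holds (assuming a
MinGW/Cygwin toolchain in C11 mode):

    #include <uchar.h>      // char16_t (C11)
    #include <windows.h>    // WCHAR
    #include <assert.h>

    // u"..." has type char16_t[], which we assume is layout-compatible
    // with WCHAR (both are 16-bit unsigned on Windows).
    static_assert(sizeof(char16_t) == sizeof(WCHAR), "WCHAR is not 16 bit");

    static const WCHAR *app_name(void)
    {
        static const char16_t name[] = u"mpv";
        return (const WCHAR *)name;
    }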
Fixes problems with --vo=opengl:interpolation. The issue here is that
vo_opengl retains more surfaces than what was preallocated for the
decoder. Until now, we just explicitly failed to decode frames for which
no additional surfaces are available. Since modern drivers usually are
fine with not "registering" surfaces before the decoder is created, just
allow allocating additional surfaces if needed.
(We also could probably recreate the HW decoder, since the HW decoder
should be stateless. But let's try to avoid raising the overall
complexity of the code.)
Sometime recently, hardware decoding started to fail if h264 with full
reference frames was decoded, and --vo=vaapi was used. VAAPI requires
registering all surfaces that the decoder will ever use in advance, so
if the playback chain uses more surfaces than originally allocated, we
fail and drop back to software decoding.
I'm not really sure why or when this started happening. Commit 7b9d7265
for one is not the cause - it can be reproduced with earlier commits. It
also seems to be timing dependent. Possibly it has to do with the way
vo.c retains previous surfaces, and the way they can be queued/unqueued
asynchronously.
Increasing the number of reserved additional surfaces by 1 fixes it.
(Though I have no idea where exactly all these surfaces are being used.
Or rather, _when_.)
Basically, we need to make sure to allocate enough data for the pretty
dumb copy_nv12 function. (It could be avoided by making the function
less dumb, but this fix is simpler.)
mpv had refcounted frames before libav*, so we were not using
libavutil's facilities. Change this and drop our own code.
Since AVFrames are not actually refcounted, and only the image data
they reference, the semantics change a bit. This affects mainly
mp_image_pool, which was operating on whole images instead of buffers.
While we could work on AVBufferRefs instead (and use AVBufferPool),
this doesn't work for use with hardware decoding, which doesn't
map cleanly to FFmpeg's reference counting. But it worked out. One
weird consequence is that we still need our custom image data
allocation function (for normal image data), because AVFrame's own
allocation uses multiple buffers.
There also seems to be a timing-dependent problem with vaapi (the
pool appears to be "leaking" surfaces). I don't know if this is a new
problem, or whether the code changes just happened to cause it more
often. Raising the number of reserved surfaces seemed to fix it, but
since it appears to be timing dependent, and I couldn't find anything
wrong with the code, I'm just going to assume it's not a new bug.
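The semantics in question, as a hedged minimal example: an AVFrame
itself is a plain struct, and only the AVBufferRefs it points to are
refcounted.

    #include <libavutil/frame.h>

    // Create a second frame that shares (references) the same underlying
    // buffers as src. Freeing either frame only unrefs the buffers; the
    // pixel data is released when the last reference is gone.
    static AVFrame *share_frame(const AVFrame *src)
    {
        AVFrame *dst = av_frame_alloc();
        if (!dst)
            return NULL;
        if (av_frame_ref(dst, src) < 0) {   // adds refs to src->buf[]
            av_frame_free(&dst);
            return NULL;
        }
        return dst;
    }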
This is basically a hack for drivers which prevent the mpv DXVA2 decoder
glue from working if OpenGL is in fullscreen mode.
Since it doesn't add any "hard" new API to the client API, since some of
the code would be required for a true zero-copy hw decoding pipeline
anyway, and since it isn't too much code after all, this is probably
acceptable.
Again. With the old OpenGL interop dropped, this probably works better
than vaapi-copy now. Last time we defaulted to vaapi-copy, because the
OpenGL interop could swap U/V planes and other stupid crap. We'll see.
MPlayer traditionally had completely separate sh_ structs for
audio/video/subs, without a good way to share fields. This meant that
fields shared across all these headers had to be duplicated. This commit
deduplicates essentially the last remaining duplicated fields.
When using --hwdec=auto, about half of all systems will print:
"[vdpau] Error when calling vdp_device_create_x11: 1"
This happens because usually mpv will be linked against both vdpau and
vaapi libs, but the drivers are not necessarily available. Then trying
to load a driver will fail. This is a normal part of probing, but the
error messages were printed anyway. Silence them by explicitly
distinguishing probing.
This pretty much goes through all the layers. We actually consider
loading hw backends for vo_opengl always "auto probed", even if a hw
backend is explicitly requested. In this case vd_lavc will print a
warning message anyway (adjust this message a bit).
When showing cover art, the decoding logic pretends that the source has
an infinite number of frames. This slightly simplifies dealing with
filter data flow. It was done by feeding the same packet repeatedly to
the decoder (each decode run produces new output).
Change this by decoding once at the video initialization. This is easier
to follow, and increases robustness in case of broken images. Usually,
we try to tolerate decoding errors, so decoding normally continues, but
in this case it would just burn the CPU for no reason.
Fixes #2056.
This must have been some nonsense in the original vaapi mplayer patch.
While I still have no good idea what this "direct mapping" business is
about, it appears to be pretty much pointless. Nothing can hold
additional "real" surface references (due to how the API and mpv/lavc
refcounting work), so removing the additional surfaces won't break
anything. It still could be that this was for achieving additional
buffering (not reusing surfaces as soon), but we buffer some additional
data anyway. Plus, the original intention of the vaapi mplayer code was
probably increasing surface count just by 1 or 2, not actually doubling
it, and/or it was a "trick" to get to the maximum count of 21 when h264
is in use.
gstreamer-vaapi uses "ref_frames + SCRATCH_SURFACES_COUNT" here, with
SCRATCH_SURFACES_COUNT defined to 4. It doesn't appear to check the
overlay attributes at all in the decoder.
In any case, remove this nonsense.
On hw decoder reinit failure we did not actually always return a sw
format, because the first format (fmt[0]) is not always a sw format.
This broke some cases of fallback. We must go through the trouble to
determine the first actual sw format.
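Determining the first actual software format boils down to a scan over
the get_format() list; a hedged sketch:

    #include <libavutil/pixdesc.h>

    // fmt is the AV_PIX_FMT_NONE-terminated list passed to get_format().
    static enum AVPixelFormat first_sw_format(const enum AVPixelFormat *fmt)
    {
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt[i]);
            if (d && !(d->flags & AV_PIX_FMT_FLAG_HWACCEL))
                return fmt[i];      // not a hwaccel format -> software
        }
        return AV_PIX_FMT_NONE;
    }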
Yet another of these dozens of hwaccel changes. This time, libavcodec
provides utility functions, which initialize the vdpau decoder and map
codec profiles. So a lot of work the API user had to do falls away.
This also will give us support for high bit depth profiles, and possibly
HEVC once libavcodec supports it.
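The utility entry point in question is av_vdpau_bind_context(); a hedged
sketch of its use, assuming the VdpDevice and VdpGetProcAddress pointer
were obtained from vdp_device_create_x11():

    #include <libavcodec/vdpau.h>

    // Hand the device over to libavcodec and let it create/manage the
    // VdpDecoder and map the codec profile itself.
    static int setup_vdpau(AVCodecContext *avctx, VdpDevice device,
                           VdpGetProcAddress *get_proc_address)
    {
        // Flags such as AV_HWACCEL_FLAG_IGNORE_LEVEL could be passed here.
        return av_vdpau_bind_context(avctx, device, get_proc_address, 0);
    }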
...instead of relying on the hw decoding API to align it for us. The old
method could in theory have gone wrong if the video is cropped by an
amount large enough to step over several blocks.
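In other words, round the coded size up to whole macroblocks ourselves;
a hedged one-liner using libavutil's FFALIGN():

    #include <libavutil/macros.h>   // FFALIGN

    // Align the (cropped) size up to the 16x16 macroblock grid before
    // allocating surfaces, instead of trusting the hw API to do it.
    static void align_surface_size(int w, int h, int *out_w, int *out_h)
    {
        *out_w = FFALIGN(w, 16);
        *out_h = FFALIGN(h, 16);
    }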
There's not much of a reason to keep get_surface_hwdec() and
get_buffer2_hwdec() separate. Actually, the way the mpi->AVFrame
referencing is done makes this confusing. The separation is probably
an artifact of the pre-libavcodec-refcounting compatibility glue.
Most of hardware decoding is initialized lazily. When the first packet
is parsed, libavcodec will call get_format() to check whether hw or sw
decoding is wanted. Until now, we've returned AV_PIX_FMT_NONE from
get_format() if hw decoder initialization failed. This caused the
avcodec_decode_video2() call to fail, which in turn let us trigger the
fallback. We didn't return a sw format from get_format(), because we
didn't want to continue decoding at all. (The reason being that full
reinitialization is more robust when continuing sw decoding.)
This has some disadvantages. libavcodec vomited some unwanted error
messages. Sometimes the failures are more severe, like it happened with
HEVC. In this case, the error code path simply acted up in a way that
was extremely inconvenient (and had to be fixed by myself). In general,
libavcodec is not designed to fallback this way.
Make it a bit less violent from the API usage point of view. Return a sw
format if hw decoder initialization fails. In this case, we let
get_buffer2() call avcodec_default_get_buffer2() as well. libavcodec is
allowed to perform its own sw fallback. But once the decode function
returns, we do the full reinitialization we wanted to do.
The result is that the fallback is more robust, and doesn't trigger any
decoder error codepaths or messages either. Change our own fallback
message to a warning, since there are no other messages with error
severity anymore.
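A hedged sketch of that get_format() shape, with hypothetical helper
names standing in for the real hwdec glue; the full reinitialization
still happens only after the decode call returns:

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    // Hypothetical helpers:
    bool is_hwaccel_format(enum AVPixelFormat fmt);
    int init_hw_decoder(AVCodecContext *avctx, enum AVPixelFormat fmt);
    void mark_hwdec_failed(void *priv);

    static enum AVPixelFormat get_format_cb(AVCodecContext *avctx,
                                            const enum AVPixelFormat *fmt)
    {
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            if (!is_hwaccel_format(fmt[i]))
                continue;
            if (init_hw_decoder(avctx, fmt[i]) >= 0)
                return fmt[i];                  // hw init worked
            mark_hwdec_failed(avctx->opaque);   // remember for later reinit
            break;
        }
        // Graceful fallback: return a software format so libavcodec keeps
        // decoding on its own; get_buffer2 then uses the default allocator.
        return avcodec_default_get_format(avctx, fmt);
    }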
This is pretty much copy&pasted from Libav commit
a7e0380497306d9723dec8440a4c52e8bf0263cf.
Note that if FFmpeg was not compiled with HEVC DXVA2 support or your
video drivers do not support HEVC, the player will not fall back and
will just fail to decode any video. This is because libavcodec appears not
to return an error in this case. The situation is made worse by the
fact that MSYS2 is on an ancient MinGW-w64 release, which does not
have the required headers for HEVC DXVA2 support.
The hardware always decodes to nv12 so using this image format causes less cpu
usage than uyvy (which we are currently using, since Apple examples and other
free software use that). The reduction in cpu usage can add up to quite a bit,
especially for 4k or high fps video.
This needs an accompanying commit in libavcodec.
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
Remove the colorspace-related top-level options, add them to vf_format.
They are rather obscure and not needed often, so it's better to get them
out of the way. In particular, this gets rid of the semi-complicated
logic in command.c (most of which was needed for OSD display and the
direct feedback from the VO). It removes the duplicated color-related
name mappings.
This removes the ability to write the colormatrix and related
properties. Since filters can be changed at runtime, there's no loss of
functionality, except that you can't cycle automatically through the
color constants anymore (but who needs to do this).
This also changes the type of the mp_csp_names and related variables, so
they can directly be used with OPT_CHOICE. This probably ended up a bit
awkward, for the sake of not adding a new option type which would have
used the previous format.
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fallback to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (which hopefully will not be the case
on desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding.) So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
Codecs for hardware acceleration are not blacklisted, but whitelisted.
Also, if this message is printed, the codec might not have any hardware
acceleration support in the first place.
Trying to handle such video is almost worthless, but it was requested by
at least 2 users.
If there are no timestamps, enable byte seeking by setting
ts_resets_possible. Use the video FPS (wherever it comes from) and the
audio samplerate for timing. The latter was already done by making the
first packet emit DTS=0; remove this again and do it "properly" at a
higher level.
For some reason there were two points in the code where it warned
against non-monotonic video PTS. The one in video.c triggered on PTS
going backwards or making large jumps forwards, while dec_video.c
triggered on PTS going backwards or PTS not changing. Merge them into a
single check, which warns against all cases.
This played e.g. a 1264x722 file as 1264x720. There was some code which
dropped the aspect ratio if the video (in original resolution) wasn't
scaled by more than 4 pixels. Commit 5f3c3f8c introduced this (although
I'm not really sure what the code replaced by it did).
Just remove this "feature".
Instead of "vaapi", simply by changing the probe order.
"vaapi" uses the GLX GL interop, which has causing us more problems than
it solved.
Unfortunately this also leads to copying if "--hwdec=auto --vo=vaapi" is
used, even though GLX is not involved in this case - but I don't care
enough to make the probe logic cleverer just for this. You can still get
the zero-copy path with --hwdec=vaapi.
Breaks vo_opengl by default. I'm not able to fix this myself, because I
have no clue about the overcomplicated color management logic. Also,
while this is apparently caused by commit fbacd5, the following commits
all depend on it, so revert them too.
This reverts the following commits:
e141caa97d653b0dd529729c8b3f64fbacd5de31
Fixes #1636.
Remove coded_width and coded_height. This was originally added in commit
fd7dde40, when BITMAPINFOHEADER was killed. The separate fields became
redundant in commit e68f4be1. Remove them (nothing passed to the
decoders actually changes with _this_ commit).
A recent behavior change in libavcodec's h264 decoder keeps at least 1
surface even after avcodec_flush_buffers() has been called. We used to
flush the decoder in order to make sure all surfaces are free'd, so that
the hw decoder can be safely uninitialized. This doesn't work anymore.
Fix it by closing the AVCodecContext before the hw decoder is
uninitialized. This is actually simpler and more robust. It seems to be
well-supported too.
Fixes invalid read accesses with vaapi-copy and dxva2-copy. These
destroyed the hwdec API fully on uninit, and could not deal with
surfaces surviving the decoder.
Probably fixes #1587.
The intention is that we can test vo_opengl with high bit depth PNGs
better. This throws libswscale completely out of the loop, which before
was needed in order to convert from big endian to little endian.
Also apply a minimal cleanup to fmt-conversion.c (unrelated).
This is somewhat imperfect, because detection of hw decoding APIs is
mostly done on demand, and often avoided if not necessary. (For example,
we know very well that there are no hw decoders for certain codecs.)
This also requires every hwdec backend to identify itself (see hwdec.h
changes).
Before this commit, each hw backend had their own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
At least on my machine, reading back the frame with system memcpy is
slower than just using software rendering. Use the optimized gpu_memcpy
from LAV to speed things up.
This was once central, but now it's almost unused. Only vf_divtc still
uses it for extremely weird and incomprehensible reasons. The use in
stream.c is trivial. Replace these, and remove mpbswap.h.
MPlayer traditionally did this because it made sense: the most important
formats (avi, asf/wmv) used Microsoft formats, and many important
decoders (win32 binary codecs) also did. But the world has changed, and
I've always wanted to get rid of this thing from the codebase.
demux_mkv.c internally still uses it, because, guess what, Matroska has
a VfW muxing mode, which uses these data structures natively.
The oldest supported FFmpeg release doesn't provide
av_vdpau_alloc_context(). With these versions, the application has no
other choice than to hard code the size of AVVDPAUContext. (On the other
hand, there's av_alloc_vdpaucontext(), which does the same thing, but is
FFmpeg specific - not sure if it was available early enough, so I'm not
touching it.)
Newer FFmpeg and Libav releases require you to call this function, for
ABI compatibility reasons. It's the typical lack of foresight that makes
FFmpeg APIs terrible. mpv successfully pretended that this crap didn't
exist (ABI compat. is near impossible to reach anyway) - but it appears
newer developments in Libav change the function from initializing the
struct with all-zeros to something else, and mpv vdpau decoding would
stop working as soon as this new work is released.
So, add a configure test (sigh).
CC: @mpv-player/stable
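The configure test can be as small as a program that compiles and links
a call to the function; a hedged sketch of such a probe:

    #include <libavcodec/vdpau.h>
    #include <libavutil/mem.h>

    int main(void)
    {
        AVVDPAUContext *ctx = av_vdpau_alloc_context();
        av_free(ctx);
        return 0;
    }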
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska mode to vf_stereo3d mode are
probably wrong, and some are missing.
Assuming that Matroska modes, vf_stereo3d in-modes, and out-modes
are all the same might be an oversimplification - we'll see.
See issue #1045.
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
So talking to a certain Intel dev, it sounded like modern VA-API drivers
are reasonably thread-safe. But apparently that is not the case. Not at
all. So add approximate locking around all vaapi API calls.
The problem appeared once we moved decoding and display to different
threads. That means the "vaapi-copy" mode was unaffected, but decoding
with vo_vaapi or vo_opengl led to random crashes.
Untested on real Intel hardware. With the vdpau emulation, it seems to
work fine - but actually it worked fine even before this commit, because
vdpau was written and designed not by morons, but competent people
(vdpau is guaranteed to be fully thread-safe).
There is some probability that this commit doesn't fix things entirely.
One problem is that locking might not be complete. For one, libavcodec
_also_ accesses vaapi, so we have to rely on our own guesses how and
when lavc uses vaapi (since we disable multithreading when doing hw
decoding, our guess should be relatively good, but it's still a lavc
implementation detail). One other reason that this commit might not
help is Intel's amazing potential to fuck up anything that is good and
holy.
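The locking is conceptually just a process-wide mutex around every
va*() call; a hedged sketch for one representative call:

    #include <pthread.h>
    #include <va/va.h>

    static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

    // Every other vaFoo() call would get the same wrapper, including (as
    // far as we can arrange it) the calls libavcodec makes on our behalf.
    static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID surface)
    {
        pthread_mutex_lock(&va_lock);
        VAStatus status = vaSyncSurface(dpy, surface);
        pthread_mutex_unlock(&va_lock);
        return status;
    }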
Playing with high framedrop could make it run out of surfaces. In
theory, we wouldn't need an additional surface, if we could just clear
the vo_vaapi internal surface - but doing so would probably be a pain,
so I don't care.
The VO is run inside its own thread. It also does most of video timing.
The playloop hands the image data and a realtime timestamp to the VO,
and the VO does the rest.
In particular, this allows the playloop to do other things, instead of
blocking for video redraw. But if anything accesses the VO during video
timing, it will block.
This also fixes vo_sdl.c event handling; but that is only a side-effect,
since reimplementing the broken way would require more effort.
Also drop --softsleep. In theory, this option helps if the kernel's
sleeping mechanism is too inaccurate for video timing. In practice, I
haven't ever encountered a situation where it helps, and it just burns
CPU cycles. On the other hand it's probably actively harmful, because
it prevents the libavcodec decoder threads from doing real work.
Side note:
Originally, I intended that multiple frames can be queued to the VO. But
this is not done, due to problems with OSD and other certain features.
OSD in particular is simply designed in a way that it can be neither
timed nor copied, so you do have to render it into the video frame
before you can draw the next frame. (Subtitles have no such restriction.
sd_lavc was even updated to fix this.) It seems the right solution to
queuing multiple VO frames is rendering on VO-backed framebuffers, like
vo_vdpau.c does. This requires VO driver support, and is out of scope
of this commit.
As consequence, the VO has a queue size of 1. The existing video queue
is just needed to compute frame duration, and will be moved out in the
next commit.
Found with valgrind. This is somewhat terrifying, because the VA-API API
function is supposed to fill these values, and we access them only if
the API functions return success. So this shouldn't have happened.
Completely useless, and could accidentally be enabled by cycling
framedrop modes. Just get rid of it.
But still allow triggering the old code with --vd-lavc-framedrop, in
case someone asks for it. If nobody does, this new option will be
removed eventually.
Use OPT_KEYVALUELIST() for all places where AVOptions are directly set
from mpv command line options. This allows escaping values, better
diagnostics (also no more "pal"), and somehow reduces code size.
Remove the old crappy option parser (av_opts.c).
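Applying such a key/value list then is just a loop over av_opt_set();
a hedged sketch:

    #include <libavutil/opt.h>

    // pairs: NULL-terminated array of alternating key/value strings, as
    // produced by a key-value list option.
    static int apply_avopts(void *av_obj, char **pairs)
    {
        for (int i = 0; pairs && pairs[i] && pairs[i + 1]; i += 2) {
            int r = av_opt_set(av_obj, pairs[i], pairs[i + 1],
                               AV_OPT_SEARCH_CHILDREN);
            if (r < 0)
                return r;       // unknown option or bad value
        }
        return 0;
    }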
Since the new hwaccel API is now merged in ffmpeg's stable release, we can
finally remove support for the old API.
I pretty much kept lu_zero's new code unchanged and just added some error
printing (that we had with the old glue code) to make the life of our users
less miserable.
DVD and Bluray (and to some extent cdda) require awful hacks all over
the codebase to make them work. The main reason is that they act like
container, but are entirely implemented on the stream layer. The raw
mpeg data resulting from these streams must be "extended" with the
container-like metadata transported via STREAM_CTRLs. The result were
hacks all over demux.c and some higher-level parts.
Add a "disc" pseudo-demuxer, and move all these hacks and special-cases
to it.
This adds support for reading primaries information from lavc, categorized
into BT.601-525, BT.601-625, BT.709 and BT.2020; and passes it on to the
vo. In vo_opengl, we always generate the 3dlut against the wider BT.2020
and transform our source into this colorspace in the shader.
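The categorization is a small mapping from libavutil's primaries enum;
a hedged sketch (the player-side category names here are made up):

    #include <libavutil/pixfmt.h>

    enum prim_category {        // hypothetical player-side categories
        PRIM_AUTO, PRIM_BT601_525, PRIM_BT601_625, PRIM_BT709, PRIM_BT2020,
    };

    static enum prim_category categorize_primaries(enum AVColorPrimaries p)
    {
        switch (p) {
        case AVCOL_PRI_SMPTE170M:       // NTSC-era systems
        case AVCOL_PRI_SMPTE240M:
            return PRIM_BT601_525;
        case AVCOL_PRI_BT470BG:         // PAL/SECAM
            return PRIM_BT601_625;
        case AVCOL_PRI_BT709:
            return PRIM_BT709;
        case AVCOL_PRI_BT2020:
            return PRIM_BT2020;
        default:
            return PRIM_AUTO;           // unknown: let the caller decide
        }
    }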
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
This means use of the min/max fields can be dropped for the flag option
type, which makes some things slightly easier. I'm also not sure if the
client API handled the case of flag not being 0 or 1 correctly, and this
change gets rid of this concern.