Commit a9bd4535 generally changed how properties are set to string
values.
This actually broke the fallback for non-string properties, because the
set string action was redirected directly to the property, instead of
the generic handler and its fallback code.
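For illustration, a self-contained toy sketch of the intended routing
(hypothetical names, not mpv's real internals): the string has to reach
the generic handler so its fallback can parse it for non-string
properties.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy property with a non-string (integer) native type. */
    struct prop {
        const char *name;
        void (*set_int)(struct prop *p, int v);
        int int_value;
    };

    static void set_int_impl(struct prop *p, int v) { p->int_value = v; }

    /* Generic handler: its fallback parses the string into the native
     * type. Bypassing this (sending the string straight to the property)
     * is what broke non-string properties. */
    static int generic_set_string(struct prop *p, const char *value)
    {
        if (!p->set_int)
            return -1;
        p->set_int(p, atoi(value));
        return 0;
    }

    int main(void)
    {
        struct prop volume = { .name = "volume", .set_int = set_int_impl };
        generic_set_string(&volume, "50");
        printf("%s = %d\n", volume.name, volume.int_value);
        return 0;
    }
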
As of ffmpeg git master, only the libavdevice decklink wrapper supports
this. Everything else has dropped support.
You're now supposed to use AV_CODEC_ID_WRAPPED_AVFRAME, which works a
bit differently. Normal AVFrames should still work for these encoders.
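For reference, a minimal sketch of feeding a normal AVFrame to such an
encoder through the usual libavcodec calls (error handling omitted; the
caller is assumed to provide a filled frame and an allocated packet):

    #include <libavcodec/avcodec.h>

    /* Sketch only: the wrapped_avframe "encoder" essentially wraps the
     * frame into an AVPacket, so a normal AVFrame can be sent as usual. */
    static int encode_wrapped(AVFrame *frame, AVPacket *pkt)
    {
        const AVCodec *codec =
            avcodec_find_encoder(AV_CODEC_ID_WRAPPED_AVFRAME);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width = frame->width;
        ctx->height = frame->height;
        ctx->pix_fmt = frame->format;
        ctx->time_base = (AVRational){1, 25}; /* assumed time base */
        avcodec_open2(ctx, codec, NULL);

        avcodec_send_frame(ctx, frame);
        int ret = avcodec_receive_packet(ctx, pkt);

        avcodec_free_context(&ctx);
        return ret;
    }
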
And remove the same thing from the client API code.
The command.c code has to deal with many specialized M_PROPERTY_SET_*
actions, and we bother to handle only a subset of them.
If a mpv_node wrapped a string, the behavior was different from calling
mpv_set_property() with MPV_FORMAT_STRING directly. Change this.
The original intention was to be strict about types if MPV_FORMAT_NODE
is used. But I think the result was less than ideal, and the same change
towards less strict behavior was made to mpv_set_option() ages ago.
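For illustration, a small client-API sketch of the case in question
(assuming a valid mpv_handle); with the relaxed behavior, both calls
below end up doing the same thing, even though "pause" is not a string
property:

    #include <mpv/client.h>

    static void set_pause_both_ways(mpv_handle *ctx)
    {
        /* Plain MPV_FORMAT_STRING set. */
        char *value = "yes";
        mpv_set_property(ctx, "pause", MPV_FORMAT_STRING, &value);

        /* The same string wrapped in an mpv_node. Previously this took
         * a stricter code path than the call above. */
        mpv_node node = {
            .format = MPV_FORMAT_STRING,
            .u = { .string = "yes" },
        };
        mpv_set_property(ctx, "pause", MPV_FORMAT_NODE, &node);
    }
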
Since what we're doing is a linear blend of the four colors, we can just
do it for free by using GPU sampling.
This requires significantly fewer texture fetches and calculations to
compute the final color, making it much more efficient. The code is also
much shorter and simpler.
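For reference, a rough sketch (plain C, illustrative names) of the
arithmetic a single linearly-filtered GPU texture fetch performs in
hardware; this is exactly the blend the shader no longer has to spell
out with four separate fetches:

    /* c00..c11 are the four source colors, fx/fy the fractional sample
     * position between them. One GL_LINEAR fetch at that position yields
     * this result "for free". */
    static float blend_four(float c00, float c10, float c01, float c11,
                            float fx, float fy)
    {
        float top    = c00 * (1.0f - fx) + c10 * fx;
        float bottom = c01 * (1.0f - fx) + c11 * fx;
        return top * (1.0f - fy) + bottom * fy;
    }
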
Now it will always be able to seek back to the start, even if the index
is sparse or misses the first entry.
This can be achieved by reusing the logic for incremental index
generation (for files with no index), and start time probing (for making
sure the first block is always indexed).
Until now, we have made the assumption that a driver will use only 1
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass an
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug this required a workaround to actually get the same format as the
driver did.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored as some
drivers (the vdpau wrapper) support neither vaDeriveImage, nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
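A condensed sketch of what such a sub-format probe looks like on the
libva side (error paths collapsed; as noted above, failure is treated
as non-fatal):

    #include <stdint.h>
    #include <va/va.h>

    /* Returns the surface's FourCC, or 0 if it can't be determined
     * (e.g. the driver doesn't support vaDeriveImage at all). */
    static uint32_t probe_surface_fourcc(VADisplay dpy, VASurfaceID surface)
    {
        VAImage image;
        if (vaDeriveImage(dpy, surface, &image) != VA_STATUS_SUCCESS)
            return 0;
        uint32_t fourcc = image.format.fourcc;
        vaDestroyImage(dpy, image.image_id);
        return fourcc;
    }
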
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
In the past, --video-unscaled also disabled zooming and aspect ratio
corrections. But this didn't make much sense in terms of being a useful
option. The new behavior just sets the initial video size to be
unscaled, but it's still affected by zoom commands and aspect ratio
corrections.
To get the old behavior back, --video-aspect=0 --video-zoom=0 need to be
added as well (in the general case). Most of the time it should not make
a difference though.
Also, there seems to have been some additional dst_rect clamping code
inside src_dst_split_scaling that neither seemed necessary nor ever got
triggered. (The code immediately above it already makes sure to crop
the video if it's larger than the dst_rect.)
No idea why it was there, but I just removed it.
Apply basic transformations like rotation by 90° and mirroring when
sampling from the source textures. The original idea was making this
part of img_tex.transform, but this didn't work: lots of code plays
tricks on the transform, so manipulating it is not necessarily
transparent, especially when width/height are switched. So add a new
pre_transform field, which is strictly applied before the normal
transform.
This fixes most glitches involved with rotating the image.
Cropping and rotation are now weirdly separated, even though they could
be done in the same step. I think this is not much of a problem, and
has the advantage that changing panscan does not trigger FBO
reallocations (I think...).
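A rough sketch of the idea with simplified, hypothetical types (the
renderer's real transform handling is more involved): the pre_transform
is applied first and is never touched by the code that manipulates the
normal transform.

    /* Hypothetical 2D affine transform: v' = m*v + t. */
    struct xform {
        float m[2][2];
        float t[2];
    };

    static void xform_apply(const struct xform *tr, float v[2])
    {
        float x = tr->m[0][0] * v[0] + tr->m[0][1] * v[1] + tr->t[0];
        float y = tr->m[1][0] * v[0] + tr->m[1][1] * v[1] + tr->t[1];
        v[0] = x;
        v[1] = y;
    }

    struct tex_sketch {
        struct xform pre_transform; /* 90° rotation / mirroring only */
        struct xform transform;     /* cropping, scaling tricks, etc. */
    };

    static void sample_pos(const struct tex_sketch *tex, float pos[2])
    {
        xform_apply(&tex->pre_transform, pos); /* strictly first */
        xform_apply(&tex->transform, pos);
    }
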
Typically happens with some implementations if no context is current,
or is otherwise broken. This is particularly relevant to the opengl_cb
API, because the API user will have no other indication what went wrong.
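A minimal sketch of the kind of sanity check this amounts to (standard
GL entry points; the real code would report the failure through mpv's
own logging):

    #include <stdio.h>
    #include <GL/gl.h>

    /* Returns 0 if the GL context looks unusable: with no current (or
     * otherwise broken) context, glGetString() typically returns NULL. */
    static int check_gl_context(void)
    {
        const GLubyte *version = glGetString(GL_VERSION);
        if (!version) {
            fprintf(stderr, "GL_VERSION is NULL - no usable GL context?\n");
            return 0;
        }
        return 1;
    }
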
The underlying intention of this code is to make changing
--videotoolbox-format at runtime work. For this reason, the format can't
just be statically set up, but must be read from the option at runtime.
This means the format is not fixed anymore, and we have to make sure the
renderer is properly reinitialized if the format changes. There is
currently no way to trigger reinit on this level, which is why the
mp_image_params.hw_subfmt field was introduced.
One sketchy thing remains: normally, the renderer is supposed to be
involved with VO format negotiation, which would ensure that the VO
can take the format at all. Since the hw_subfmt is not part of this
format negotiation, it's implied the get_vt_fmt() callback only
returns formats supported by the renderer. This is not necessarily
clear because vo_opengl checks this with converted_imgfmt separately.
None of this matters in practice though, because we know all formats
are always supported.
(This still requires somehow triggering decoder reinit to make the
change effective.)
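A reduced sketch of the reinit trigger this makes possible (hypothetical
types and function names; the point is only that hw_subfmt participates
in the comparison):

    /* Hypothetical, reduced image parameters. */
    struct params_sketch {
        int imgfmt;     /* hwaccel wrapper format */
        int hw_subfmt;  /* underlying surface format */
    };

    struct renderer_sketch {
        struct params_sketch cfg;
        /* ... textures, interop objects, ... */
    };

    static void renderer_reinit(struct renderer_sketch *r,
                                const struct params_sketch *p)
    {
        /* hypothetical: recreate format-dependent objects for *p */
        r->cfg = *p;
    }

    static void maybe_reconfig(struct renderer_sketch *r,
                               const struct params_sketch *p)
    {
        /* A runtime change of --videotoolbox-format shows up here as a
         * changed hw_subfmt and triggers a proper reinit. */
        if (p->imgfmt != r->cfg.imgfmt || p->hw_subfmt != r->cfg.hw_subfmt)
            renderer_reinit(r, p);
    }
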
For hwaccel formats, mp_image will merely point to a hardware surface
handle. In these cases, the mp_image_params.imgfmt field describes the
format insufficiently, because it mostly describes only the type of
the hardware format, not its underlying format.
Introduce hw_subfmt to describe the underlying format. It makes sense to
use it with most hwaccels, though for now it will be used with the
following commit only.
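A reduced illustration of how the two fields relate (layout and types
are not meant to match the real mp_image_params):

    struct image_params_sketch {
        int imgfmt;          /* for hwaccel frames: only names the kind of
                              * hardware surface (VAAPI, VideoToolbox, ...) */
        unsigned hw_subfmt;  /* the surface's actual underlying format;
                              * 0 if unset / not a hwaccel format */
    };
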
Until now, the presence of the process_image() callback was used to set
a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec
field instead, so the decoder can explicitly set this if it's really
needed.
Do this so process_image() can be used in the VideoToolbox glue code for
something entirely unrelated.
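A sketch of the shape of the change (hypothetical struct and field
names): the queue depth becomes explicit per-hwdec data instead of
being implied by the presence of the callback.

    struct mp_image;

    /* Hypothetical, reduced hwdec descriptor. */
    struct hwdec_sketch {
        /* Called for each decoded image; now usable for arbitrary fixups,
         * not only by hwdecs that also need a delay queue. */
        struct mp_image *(*process_image)(void *ctx, struct mp_image *img);
        /* Explicit delay queue depth; 0 means no queue. Previously a
         * hardcoded size was implied whenever process_image was set. */
        int delay_queue;
    };
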
Some functions which expected a codec name (i.e. the name of the video
format itself) were passed a decoder name. Most "native" libavcodec
decoders have the same name as the codec, so this was never an issue.
This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
should now actually enable native surface mode (instead of doing copy-
back).
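To illustrate the distinction with plain libavcodec calls (on old
libavcodec versions, avcodec_register_all() would be needed first):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    int main(void)
    {
        const AVCodec *dec = avcodec_find_decoder_by_name("h264_mmal");
        if (!dec)
            return 1; /* decoder not compiled in */
        /* dec->name is the decoder name ("h264_mmal"); the codec name,
         * derived from the codec ID, is just "h264". */
        printf("decoder: %s, codec: %s\n",
               dec->name, avcodec_get_name(dec->id));
        return 0;
    }
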
The sync-by-display mode relies on using the vsync statistics for
timing. As a consequence discontinuities must be handled somehow. Until
now we have done this by completely resetting these statistics.
This can be somewhat annoying, especially if the GL driver's vsync
timing is not ideal. So after e.g. a seek it could take a second until
it locked back to the proper values.
Change it not to reset all statistics. Some state obviously has to be
reset, because it's a discontinuity. To make it worse, the driver's
vsync behavior will also change on such discontinuities. To compensate,
we discard the timings for the first 2 vsyncs after each discontinuity
(via num_successive_vsyncs). This is probably not fully ideal, and
num_total_vsync_samples handling in particular becomes a bit
questionable.
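Roughly, the discard logic amounts to something like this (simplified,
hypothetical state; num_successive_vsyncs is the counter mentioned
above):

    struct vsync_sketch {
        int num_successive_vsyncs; /* vsyncs since the last discontinuity */
        double sample_sum;         /* example of a statistic we keep */
        int num_samples;
    };

    static void on_discontinuity(struct vsync_sketch *st)
    {
        /* Only reset what strictly belongs to the discontinuity; the
         * accumulated timing statistics are kept. */
        st->num_successive_vsyncs = 0;
    }

    static void record_vsync(struct vsync_sketch *st, double interval)
    {
        st->num_successive_vsyncs++;
        /* The driver's vsync behavior is unreliable right after a
         * discontinuity, so the first 2 samples are discarded. */
        if (st->num_successive_vsyncs <= 2)
            return;
        st->sample_sum += interval;
        st->num_samples++;
    }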