Repurpose demuxer->filetype for this. It used to be used to print a
human-readable format description; change it to a symbolic format name
and export it as a property.
Unfortunately, libavformat has its own weird conventions, which are
reflected through the new property, e.g. the .mp4 case mentioned in the
manpage.
Fixes #1504.
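A client can read the new property through the client API. A minimal
sketch (assuming the property is named "file-format", matching the
manpage reference above; error checking omitted):

    #include <stdio.h>
    #include <mpv/client.h>

    int main(void)
    {
        mpv_handle *ctx = mpv_create();
        mpv_initialize(ctx);
        mpv_command(ctx, (const char *[]){"loadfile", "test.mkv", NULL});
        // ... wait for MPV_EVENT_FILE_LOADED ...
        char *fmt = mpv_get_property_string(ctx, "file-format");
        if (fmt) {
            // For .mp4, this prints libavformat's demuxer name, e.g.
            // "mov,mp4,m4a,3gp,3g2,mj2" - one of the mentioned quirks.
            printf("format: %s\n", fmt);
            mpv_free(fmt);
        }
        mpv_terminate_destroy(ctx);
        return 0;
    }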
The symlink trick made waf go crazy (deleting source files, getting
tangled up in infinite recursion... I wish I was joking). This means we
still can't build the client API examples in a reasonable way using the
include files of the local repository (instead of globally installed
headers). Not building them at all is better than deleting source files.
Instead, provide some manual instructions on how to build each example
(except for the Qt examples, which provide qmake project files).
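For the C examples, the manual build should boil down to something like
the following (a sketch; the file name varies per example, and mpv must
be installed so pkg-config can find it):

    cc -o simple simple.c $(pkg-config --cflags --libs mpv)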
The logic disabled framedropping if the frame was interpolated (i.e.
the render call was done only to interpolate between the previous frame
and the frame before that).
It seems doing this wasn't even necessary, and it broke framedrop in
smoothmotion mode. In fact, this code did nothing when the video fps
was below the display fps; it only prevented the framedrop counter from
going up. So change it so that dropped interpolated frames are never
reported. (Reporting them could give confusing results, such as 1000s
of dropped frames on slow operations like video start or changing
filters.)
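A minimal sketch of the new accounting, with hypothetical names (the
real logic lives in the VO code):

    #include <stdbool.h>

    struct frame { bool is_interpolated; };

    static void report_dropped(struct frame *f, long *dropped_frames)
    {
        // Interpolation-only redraws never count as drops; otherwise
        // slow operations would appear to drop thousands of frames.
        if (!f->is_interpolated)
            (*dropped_frames)++;
    }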
The MSDN documentation for IsFormatSupported says a return code of
AUDCLNT_E_UNSUPPORTED_FORMAT means the function "succeeded but the
specified format is not supported in exclusive mode." This seems to
imply that the format is supported in shared mode, and that's what the
old code assumed; however, try_format would incorrectly return success
with some drivers.
The remarks section of the documentation contradicts that assumption. It
says that in shared mode, if the audio engine does not support the
caller-specified format or any similar format, ppClosestMatch is set to
NULL and the function returns AUDCLNT_E_UNSUPPORTED_FORMAT. This is the
same as in exclusive mode, so treat AUDCLNT_E_UNSUPPORTED_FORMAT the
same regardless of opt_exclusive. In shared mode, the format selection
code will fall back to the mix format, which should always be supported.
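A sketch of the uniform handling; the helper is illustrative, only
IsFormatSupported and its return codes are real:

    #define COBJMACROS
    #include <objbase.h>
    #include <audioclient.h>
    #include <stdbool.h>

    static bool format_supported(IAudioClient *client,
                                 AUDCLNT_SHAREMODE mode,
                                 const WAVEFORMATEX *fmt)
    {
        WAVEFORMATEX *closest = NULL;
        HRESULT hr = IAudioClient_IsFormatSupported(client, mode, fmt,
                                                    &closest);
        if (closest)
            CoTaskMemFree(closest);
        if (hr == AUDCLNT_E_UNSUPPORTED_FORMAT)
            return false;      // same answer in shared and exclusive mode
        return SUCCEEDED(hr);  // S_OK, or S_FALSE with a closest match
    }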
`core-idle` depends on the seeking state `mpctx->restart_complete`,
so notify `core-idle` whenever `seeking` is notified, too.
`paused-for-cache` can obviously change on MPV_EVENT_CACHE_UPDATE.
Finally, `MPV_EVENT_PLAYBACK_RESTART` should be sent after
`mpctx->restart_complete` has changed.
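A sketch of the pairing; mp_notify_property() is the internal helper,
the call site is illustrative:

    /* Illustrative forward declarations; the real ones are in core. */
    struct MPContext;
    void mp_notify_property(struct MPContext *mpctx, const char *name);

    static void notify_seek_state(struct MPContext *mpctx)
    {
        // core-idle depends on mpctx->restart_complete, so clients
        // must re-read it whenever seeking changes.
        mp_notify_property(mpctx, "seeking");
        mp_notify_property(mpctx, "core-idle");
    }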
SmoothMotion is a way to time and blend frames, made popular by madVR.
Its intended behaviour is to remove stuttering caused by mismatches
between the display refresh rate and the video fps, while preserving
the video's original artistic qualities (no soap opera effect). It's
supposed to make 24fps video playback on 60hz monitors as close as
possible to a 24hz monitor.
Instead of drawing a frame once its pts has passed the vsync time, we
redraw at the display refresh rate, and if we detect that the vsync
falls between two frames, we interpolate them (weighted by their
position relative to the vsync). We interpolate as few frames as
possible, to avoid blurring as much as possible. For example, if we
were to play back a 1fps video on a 60hz monitor, we would blend on at
most 1 vsync per frame (while the other 59 vsyncs would be rendered
as-is).
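A minimal sketch of the weighting idea (names illustrative, not the
actual implementation):

    // How much of the next frame to blend in at a given vsync:
    // 0.0 shows the previous frame as-is, 1.0 the next frame as-is;
    // anything in between is the interpolation case.
    static double blend_weight(double vsync, double pts_prev,
                               double pts_next)
    {
        double w = (vsync - pts_prev) / (pts_next - pts_prev);
        return w < 0.0 ? 0.0 : w > 1.0 ? 1.0 : w;
    }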
Frame interpolation is always done before scaling and in linear light when
possible (an ICC profile is used, or :srgb is used).
These aliases were removed in commit 1ec77214. Add a notice to the
manpage explaining how to get them back. Apparently, "lanczos2" and "lanczos3"
were the only interesting aliases possibly used by someone, so the
description is limited to these two.
These are now auto-detected sanely, and enabled whenever it would be a
performance or quality gain (which is pretty much everything except
bilinear/bilinear scaling).
Perhaps notably, with the absence of scale_sep, there's no longer a way
use convolution filters on hardware without FBOs, but I don't think
there's hardware in existence that doesn't have FBOs but is still fast
enough to run the fallback (slow) 2D convolution filters, so I don't
think it's a net loss.
This is significantly faster for FBOs on most modern GPUs, although it
did not result in a huge difference for the video source texture on the
sizes I tested. It might be more significant for 1080p or 4K content, so
it's worth revisiting this in the future.
It also renames SAMPLE_BILINEAR to SAMPLE_TRIVIAL to match the
semantics.
This is better even for non-separable scalers. The only exception is when using
bilinear for both lscale and cscale. I've fixed the
documentation/comments to make more sense.
This is not quite the same thing as madVR's antiringing algorithm, but
it essentially does something similar.
Porting madVR's approach to elliptic coordinates will take some amount
of thought.
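For reference, a scalar sketch of the "similar" approach described
above (not madVR's algorithm):

    #include <math.h>

    // Pull the filter output back towards the range spanned by the
    // nearest taps; strength 0 disables, strength 1 fully clamps.
    static double antiring(double filtered, double lo, double hi,
                           double strength)
    {
        double clamped = fmax(lo, fmin(hi, filtered));
        return filtered + (clamped - filtered) * strength;
    }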
This fixes compatibility with GLES 2.0 and makes the code a bit neater
in general. It also properly forces indirect scaling for subsampled
video regardless of the lscale setting.
At least the scale_sep_fbo could have been uninitialized or initialized
incorrectly when switching between scalers (e.g. from bilinear to
lanczos). Calling check_resize() should take care of this.
Instead of reading back the image from textures, keep a reference to the
original image, and return that.
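A minimal sketch, assuming hypothetical VO private state;
mp_image_new_ref() is the existing refcounting helper:

    #include <stddef.h>

    struct mp_image;
    struct mp_image *mp_image_new_ref(struct mp_image *img);

    struct priv { struct mp_image *current_frame; };

    static struct mp_image *get_screenshot(struct priv *p)
    {
        // No texture readback: hand out another reference to the
        // image the VO was given for display.
        return p->current_frame ? mp_image_new_ref(p->current_frame)
                                : NULL;
    }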
The main reason this was done this way was that originally, images
weren't refcounted, and would be deallocated or overwritten as soon as
the VO's draw call returned. But now there isn't really a good reason
for this anymore. One possibly _could_ argue that it was better because
other code could reuse the image sooner (e.g. for the cache), but on the
other hand, the VO runs already on a different thread, and filtering and
decoding each run on other threads too, so this argument probably
wouldn't hold up.
There was a case where we could have rendered to an output surface
while it was still in use for display. Not sure why the API doesn't do this
automatically.
Instead of converting the hw surface to an image in the VO, provide a
generic way to convert hw surfaces, and use this in the screenshot code.
It's all relatively straightforward, except vdpau is being terrible. It
needs a huge chunk of new code, because copying back is not simple.
Before this commit, each hw backend had their own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
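The rough shape of the new context; the field set here is illustrative,
only the struct name is from this change:

    struct mp_image;

    struct mp_hwdec_ctx {
        int type;    // which hwdec backend this is
        void *priv;  // backend-specific handle (e.g. a VdpDevice)
        // Optional: copy a hw surface back to a normal image.
        struct mp_image *(*download_image)(struct mp_hwdec_ctx *ctx,
                                           struct mp_image *hw_image);
    };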
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
This also reduces the maximum radius to 16.0, which was previously set
to 32.0 and incorrectly documented as 8.0. 16 taps should be more than
anybody will ever need, but it's the highest radius that's supported by
all affected filters.
And remove all uses of the VFCAP_CSP_SUPPORTED* constants. This is
supposed to reduce conversions if many filters are used (with many
incompatible pixel formats), and also to prefer the VO's natively
supported pixel formats (as opposed to converting).
This is worthless by now. Not only do the main VOs not use software
conversion, but the way vf_lavfi and libavfilter work also mostly
breaks the old MPlayer mechanism. Other important filters like
vf_vapoursynth do not support "proper" format negotiation either.
Part of this was already removed with the vf_scale cleanup from today.
While I'm touching every single VO, also fix the query_format argument
(it's not a FourCC anymore).
I'm still not sure how exactly "lost" devices are supposed to be
handled. In theory, you only have to "reset" the device, instead
of recreating _everything_. But as it is, the code for proper uninit
and for handling the reset is exactly the same, so move it into a
function to reduce code duplication and the danger of potential bugs.
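A sketch of the resulting structure (all names hypothetical):

    struct priv;
    void release_device_objects(struct priv *p);
    void release_d3d_device(struct priv *p);

    // Called both from uninit() and from the device-lost path, so the
    // two teardown sequences can never drift apart.
    static void destroy_d3d(struct priv *p)
    {
        release_device_objects(p);
        release_d3d_device(p);
    }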
Apparently, extremely crappy graphics drivers don't allow you to use
shaders. Simply disable use of shaders if this happens, and use the
"old" method instead.
One unexpectedly tricky thing is that you need a d3d_device to create
a shader, which in turn requires a window, so the initialization order
changes.
Don't load all the legacy functions (including ancient extensions).
Slightly simplify function loader and context creation, now that legacy
GL doesn't need to be handled. Remove the code for drawing OSD in legacy
mode.
Remove all the header hacks, which were meant for ancient OpenGL headers
which didn't even support things like OpenGL 1.3. Instead, adjust the
GLX check to make sure we get both OpenGL 3x and 2.1 symbols. For win32
and OSX, we assume that the user has the latest headers anyway. For
wayland, we hope that things somehow go right.
Apparently, physically disconnecting the audio device (consider USB
audio) breaks the ALSA device handle forever. It will signal ENODEV.
Fortunately, it's easy for us to handle this, and we can just use
existing mechanisms that will make the playback core close and reopen
the AO. Whether the immediate reopening will actually succeed really is
ALSA's problem, though.
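A sketch of the handling; ao_request_reload() is the existing mechanism
referred to above, the wrapper is illustrative:

    #include <errno.h>

    struct ao;
    void ao_request_reload(struct ao *ao);

    static int check_error(struct ao *ao, int err)
    {
        if (err == -ENODEV) {
            // Device is gone (e.g. USB audio unplugged): make the
            // playback core close and reopen the AO.
            ao_request_reload(ao);
            return 0;
        }
        return err;
    }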
Simply clamp off the U/V components in the colormatrix, instead of doing
something special in the shader.
Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix
calculation overflowed with 32, add a component_bits field to the image
format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had
no bad consequences otherwise).
If video output and VO don't support the same format, a conversion
filter needs to be inserted. Since a VO can support multiple formats, and
the filter chain also can deal with multiple formats, you basically have
to pick from a huge matrix of possible conversions.
The old MPlayer code had a quite naive algorithm: it first checked
whether any conversion from the list of preferred conversions matched,
and if not, it fell back to checking a hardcoded list of output
formats (more or less sorted by quality). This had some unintended
side-effects, like not using obvious "replacement" formats, selecting the
wrong colorspace, selecting a bit depth that is too high or too low, and
more.
Use avcodec_find_best_pix_fmt_of_list() provided by FFmpeg instead. This
function was made for this purpose, and should select the "best" format.
Libav provides a similar function, but with a different name - there is
a function with the same name in FFmpeg, but it has different semantics
(I'm not sure if Libav or FFmpeg fucked up here).
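The selection boils down to something like this;
avcodec_find_best_pix_fmt_of_list() is the real FFmpeg API, the wrapper
is illustrative:

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat pick_format(enum AVPixelFormat *vo_formats,
                                          enum AVPixelFormat src)
    {
        int loss = 0;
        // Picks the entry of the AV_PIX_FMT_NONE-terminated list that
        // loses the least from src (depth, subsampling, colorspace).
        return avcodec_find_best_pix_fmt_of_list(vo_formats, src, 0,
                                                 &loss);
    }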
This also removes handling of VFCAP_CSP_SUPPORTED vs.
VFCAP_CSP_SUPPORTED_BY_HW, which has no meaning anymore, except possibly
for filter chains with multiple scale filters.
Fixes #1494.
In my opinion, libavformat should be doing this. But a patch handling a
very safe case was rejected, so I suppose we have to do it manually. (This
patch was only escaping spaces, which can never work because they break
the basic syntax of the HTTP protocol.)
This commit attempts to do 2 things:
- Try to guess whether libavformat will use the URL for http. This is
  not always trivial, because some protocols will recursively pass part
  of the user URL to http in some way.
- Try to fix invalid URLs. We fix only the simplest case: only
  characters that are never valid are escaped. This excludes invalid
  escape codes, which happen with freestanding '%' characters.
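A sketch of the conservative rule; the exact character set here is an
assumption:

    #include <stdbool.h>
    #include <string.h>

    // Escape only bytes that can never appear in a valid URL. '%' is
    // deliberately left alone: rewriting freestanding '%' characters
    // would also corrupt sequences that are already validly escaped.
    static bool never_valid_in_url(unsigned char c)
    {
        return c <= ' ' || c >= 0x7f ||
               strchr("\"<>\\^`{|}", c) != NULL;
    }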
Fixes #1495.
Sigh...
The C locale system is incredibly shitty, and if a non-"C" locale is
set, it breaks basic string functions. The locale can change the
decimal mark from "." to
",", which affects conversion between floats and strings: snprintf() and
strtod() respect the locale decimal mark, and change behavior. (What's
even better, the behavior of these functions can change asynchronously,
if setlocale() is called after threads were started.)
So just check the locale in the client API, and refuse to work if it's
wrong. This also makes the lib print to stderr, which I consider the
lesser evil in this specific situation.
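One way to detect the situation; a sketch, not necessarily the exact
check used:

    #include <stdio.h>
    #include <string.h>

    // snprintf() formats floats using the locale's decimal mark, so
    // this returns 0 under e.g. a German locale ("0,5" not "0.5").
    static int locale_is_sane(void)
    {
        char buf[8];
        snprintf(buf, sizeof(buf), "%.1f", 0.5);
        return strcmp(buf, "0.5") == 0;
    }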
Handles mismatching libavfilter/libavdevice and libavcodec slightly
better.
libavfilter and libavdevice are optional, and thus are checked
separately and at a later point of the build. But if a user's system has
at least 2 FFmpeg installations, and one of them lacks libavfilter or
libavdevice, the build script will pick up the libavfilter/libavdevice
package of the "other" FFmpeg installation. The moment waf picks these
up, all include paths will start pointing at the "wrong" FFmpeg, and the
FFmpeg API checks done earlier might be wrong too, leading to obscure
and hard to explain compilation failures.
Just moving the libavfilter/libavdevice checks before the FFmpeg API
checks somewhat deals with this issue. Certainly not a proper solution,
but since the change is harmless, and there is no proper solution, and
the change doesn't actually add anything new, why not.