These are now auto-detected sanely, and enabled whenever it would be a
performance or quality gain (which is pretty much everything except
bilinear/bilinear scaling).
Perhaps notably, with scale_sep gone, there's no longer a way to use
convolution filters on hardware without FBOs. But I don't think there's
hardware in existence that lacks FBOs yet is still fast enough to run
the fallback (slow) 2D convolution filters, so I don't think it's a net
loss.
This is significantly faster for FBOs on most modern GPUs, although it
did not result in a huge difference for the video source texture on the
sizes I tested. It might be more significant for 1080p or 4K content, so
it's worth revisiting this in the future.
It also renames SAMPLE_BILINEAR to SAMPLE_TRIVIAL to match the
semantics.
This is better even for non-separable scaling. The only exception is
when using bilinear for both lscale and cscale. I've fixed the
documentation/comments to make more sense.
This is not quite the same thing as madVR's antiringing algorithm, but
it essentially does something similar.
Porting madVR's approach to elliptic coordinates will take some amount
of thought.
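A hedged sketch of the general idea (not the actual shader code, which
runs on the GPU): clamp the filtered value to the range spanned by the
nearest source samples, and blend by an antiringing strength:

    #include <math.h>

    // strength in [0,1]: 0 = off, 1 = fully clamp to the local range.
    static float antiring(float filtered, const float near4[4], float strength)
    {
        float lo = fminf(fminf(near4[0], near4[1]), fminf(near4[2], near4[3]));
        float hi = fmaxf(fmaxf(near4[0], near4[1]), fmaxf(near4[2], near4[3]));
        float clamped = fminf(fmaxf(filtered, lo), hi);
        return filtered + strength * (clamped - filtered);
    }

Ringing overshoots past the local sample range, so pulling the result
back toward that range suppresses it at the cost of some sharpness.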
This fixes compatibility with GLES 2.0 and makes the code a bit neater
in general. It also properly forces indirect scaling for subsampled
video regardless of the lscale setting.
At least the scale_sep_fbo could have been uninitialized or initialized
incorrectly when switching between scalers (e.g. from bilinear to
lanczos). Calling check_resize() should take care of this.
Instead of reading back the image from textures, keep a reference to the
original image, and return that.
The main reason this was done this way was that originally, images
weren't refcounted, and would be deallocated or overwritten as soon as
the VO's draw call returned. But now there isn't really a good reason
for this anymore. One possibly _could_ argue that it was better because
other code could reuse the image sooner (e.g. for the cache), but on the
other hand, the VO runs already on a different thread, and filtering and
decoding each run on other threads too, so this argument probably
wouldn't hold up.
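A minimal sketch of the new approach, assuming mpv's refcounted
mp_image API (struct priv and its current_frame member are
illustrative):

    static void draw_image(struct vo *vo, struct mp_image *mpi)
    {
        struct priv *p = vo->priv;
        mp_image_unrefp(&p->current_frame);
        p->current_frame = mp_image_new_ref(mpi);  // keep a reference
        // ...actual rendering...
    }

    static struct mp_image *get_screenshot(struct vo *vo)
    {
        struct priv *p = vo->priv;
        // Return another reference instead of reading back a texture.
        return p->current_frame ? mp_image_new_ref(p->current_frame) : NULL;
    }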
There was a case where we could have rendered to an output surface
while it was still being used for display. Not sure why the API doesn't
handle this automatically.
Instead of converting the hw surface to an image in the VO, provide a
generic way to convert hw surfaces, and use this in the screenshot code.
It's all relatively straightforward, except vdpau is being terrible. It
needs a huge chunk of new code, because copying back is not simple.
Before this commit, each hw backend had their own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
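The rough shape of the new struct (details hedged; only the type and a
private pointer are essential to the idea):

    struct mp_hwdec_ctx {
        enum hwdec_type type;   // e.g. HWDEC_VDPAU, HWDEC_VAAPI, HWDEC_VDA
        void *priv;             // backend-specific context
        // Optional: copy a hw surface back to a system-memory image
        // (used e.g. by the screenshot code mentioned above).
        struct mp_image *(*download_image)(struct mp_hwdec_ctx *ctx,
                                           struct mp_image *hw_image);
    };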
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
This also fixes the maximum radius to 16.0, which was previously set to
32.0 and incorrectly documented as 8.0. 16 taps should be more than
anybody will ever need, but it's the highest radius that's supported by
all affected filters.
And remove all uses of the VFCAP_CSP_SUPPORTED* constants. This
mechanism was supposed to reduce conversions when many filters are used
(with many incompatible pixel formats), and to prefer the VO's natively
supported pixel formats (as opposed to conversion).
This is worthless by now. Not only do the main VOs not use software
conversion, but the way vf_lavfi and libavfilter work also mostly
breaks the old MPlayer mechanism. Other important filters like
vf_vapoursynth do not support "proper" format negotiation either.
Part of this was already removed with the vf_scale cleanup from today.
While I'm touching every single VO, also fix the query_format argument
(it's not a FourCC anymore).
I'm still not sure how exactly "lost" devices are supposed to be
handled. In theory, you only have to "reset" the device, instead of
recreating _everything_. But as it is, the code for proper uninit and
for handling the reset is exactly the same, so move it into a function
to reduce code duplication and the danger of potential bugs.
Apparently, extremely crappy graphics drivers don't allow you to use
shaders. Simply disable use of shaders if this happens, and use the
"old" method instead.
One unexpectedly tricky thing is that you need a d3d_device to create
a shader, which in turn requires a window, so the initialization order
changes.
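A hedged sketch of the shared path, assuming a d3d9-style device (the
struct priv fields and function names are illustrative):

    #include <d3d9.h>

    static void uninit_d3d_surfaces(struct priv *p)
    {
        // Release textures, vertex buffers, shaders, ... - used both
        // for normal uninit and for handling a lost device.
    }

    static bool reset_device(struct priv *p)
    {
        uninit_d3d_surfaces(p);
        if (FAILED(IDirect3DDevice9_Reset(p->device, &p->present_params)))
            return false;   // device still lost; retry later
        return create_d3d_surfaces(p);
    }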
Don't load all the legacy functions (including ancient extensions).
Slightly simplify function loader and context creation, now that legacy
GL doesn't need to be handled. Remove the code for drawing OSD in legacy
mode.
Remove all the header hacks, which were meant for ancient OpenGL headers
which didn't even support things like OpenGL 1.3. Instead, adjust the
GLX check to make sure we get both OpenGL 3.x and 2.1 symbols. For win32
and OSX, we assume that the user has the latest headers anyway. For
wayland, we hope that things somehow go right.
Apparently, physically disconnecting the audio device (consider USB
audio) breaks the ALSA device handle forever. It will signal ENODEV.
Fortunately, it's easy for us to handle this, and we can just use
existing mechanisms that will make the playback core close and reopen
the AO. Whether the immediate reopening actually succeeds is really
ALSA's problem, though.
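A minimal sketch, assuming an ao_request_reload()-style mechanism is
what triggers the close/reopen (check_device_present() is
illustrative):

    static void check_device_present(struct ao *ao, int alsa_err)
    {
        if (alsa_err == -ENODEV) {
            MP_WARN(ao, "Device lost, trying to recover...\n");
            // Makes the playback core close and reopen the AO.
            ao_request_reload(ao);
        }
    }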
Simply clamp off the U/V components in the colormatrix, instead of doing
something special in the shader.
Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix
calculation overflowed with 32, add a component_bits field to the image
format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had
no bad consequences otherwise).
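The descriptor change, roughly (field names as given above; the rest of
the struct is omitted):

    struct mp_imgfmt_desc {
        // ...
        int plane_bits;      // bits per pixel, per plane (YA8 -> 16)
        int component_bits;  // bits per component (YA8 -> 8), new field
        // ...
    };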
If the video and the VO don't support the same format, a conversion
filter needs to be inserted. Since a VO can support multiple formats, and
the filter chain also can deal with multiple formats, you basically have
to pick from a huge matrix of possible conversions.
The old MPlayer code had a quite naive algorithm: it first checked
whether any conversion from the list of preferred conversions matched,
and if not, it fell back to checking a hardcoded list of output
formats (more or less sorted by quality). This had some unintended side-
effects, like not using obvious "replacement" formats, selecting the
wrong colorspace, selecting a bit depth that is too high or too low, and
more.
Use avcodec_find_best_pix_fmt_of_list() provided by FFmpeg instead. This
function was made for this purpose, and should select the "best" format.
Libav provides a similar function, but with a different name - there is
a function with the same name in FFmpeg, but it has different semantics
(I'm not sure if Libav or FFmpeg fucked up here).
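Usage is straightforward (sketch; the list must be terminated with
AV_PIX_FMT_NONE):

    #include <libavcodec/avcodec.h>

    // Pick the best VO-supported output format for a given source format.
    static enum AVPixelFormat select_best(enum AVPixelFormat *vo_formats,
                                          enum AVPixelFormat src)
    {
        int loss = 0;
        return avcodec_find_best_pix_fmt_of_list(vo_formats, src,
                                                 0 /* has_alpha */, &loss);
    }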
This also removes handling of VFCAP_CSP_SUPPORTED vs.
VFCAP_CSP_SUPPORTED_BY_HW, which has no meaning anymore, except possibly
for filter chains with multiple scale filters.
Fixes #1494.
In my opinion, libavformat should be doing this. But a patch handling a
very safe case was rejected, so I suppose we have to do it manually.
(That patch escaped only spaces, which can never be valid, because they
break the basic syntax of the HTTP protocol.)
This commit attempts to do 2 things:
- Try to guess whether libavformat will use the URL for http. This is
not always trivial, because some protocols will recursively pass part
of the user URL to http in some way.
- Try to fix invalid URLs. We fix only the simplest case: only
characters that are never valid are escaped (see the sketch after this
list). This excludes invalid escape codes, which happen with
freestanding '%' characters.
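A sketch of the second part (the allowed character set below is RFC
3986-ish, not necessarily the exact list used):

    #include <stdio.h>
    #include <string.h>

    // Escape only bytes that can never be valid in a URL; leave '%'
    // alone, since rewriting existing (even broken) escapes is unsafe.
    // out must have room for 3 * strlen(url) + 1 bytes.
    static void escape_url(char *out, const char *url)
    {
        static const char ok[] = "%:/?#[]@!$&'()*+,;=-._~";
        for (; *url; url++) {
            unsigned char c = *url;
            if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
                (c >= '0' && c <= '9') || strchr(ok, c))
                *out++ = (char)c;
            else
                out += sprintf(out, "%%%02X", (unsigned)c);
        }
        *out = '\0';
    }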
Fixes #1495.
Sigh...
The C locale system is incredibly shitty, and if used, breaks basic
string functions. The locale can change the decimal mark from "." to
",", which affects conversion between floats and strings: snprintf() and
strtod() respect the locale decimal mark, and change behavior. (What's
even better, the behavior of these functions can change asynchronously,
if setlocale() is called after threads were started.)
So just check the locale in the client API, and refuse to work if it's
wrong. This also makes the lib print to stderr, which I consider the
lesser evil in this specific situation.
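The check itself is trivial (a sketch of the idea):

    #include <locale.h>
    #include <stdio.h>
    #include <string.h>

    // Refuse to create a client API handle under a non-C numeric locale.
    static int check_locale(void)
    {
        const char *loc = setlocale(LC_NUMERIC, NULL);
        if (loc && strcmp(loc, "C") != 0) {
            fprintf(stderr, "Non-C LC_NUMERIC locale detected; "
                            "this breaks float<->string conversion.\n");
            return -1;
        }
        return 0;
    }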
Handles mismatching libavfilter/libavdevice and libavcodec slightly
better.
libavfilter and libavdevice are optional, and thus are checked
separately and at a later point of the build. But if a user's system has
at least 2 FFmpeg installations, and one of them lacks libavfilter or
libavdevice, the build script will pick up the libavfilter/libavdevice
package of the "other" FFmpeg installation. The moment waf picks these
up, all include paths will start pointing at the "wrong" FFmpeg, and the
FFmpeg API checks done earlier might be wrong too, leading to obscure
and hard to explain compilation failures.
Just moving the libavfilter/libavdevice checks before the FFmpeg API
checks somewhat deals with this issue. Certainly not a proper solution,
but since the change is harmless, and there is no proper solution, and
the change doesn't actually add anything new, why not.
This function is always available, which is reflected by the fact that
the configure check doesn't actually bother to check for its existence.
Instead, MinGW and Cygwin imply it. The check was probably "needed" when
the priority code was still in a separate source file.
Remove the check, and use _WIN32 for testing for the win32 API (in a
bunch of other places too).
In general, you need to check errno when using strtol(), but as far as I
know, strtol() won't reset errno on success. This has to be done
manually. The code could have failed sporadically if strtol() succeeded,
and errno was already set to one of the checked values.
(This strtol() still isn't fully error checked, but I don't know if it's
intentional, e.g. for parsing a numeric prefix only.)
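The correct pattern, for reference:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdlib.h>

    static long parse_long(const char *s, bool *ok)
    {
        char *end;
        errno = 0;                  // strtol() won't reset it on success
        long v = strtol(s, &end, 10);
        *ok = end != s && errno == 0;
        return v;
    }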
Before this commit, ao_null was used as last fallback. This doesn't make
too much sense. Why would you decode audio just to discard it? Let audio
initialization fail instead. This also handles the weird but possible
corner-case that ao_null might fail initializing, in which case e.g.
ao_pcm could be autoselected. (This happened once, and had to be fixed
manually.)
This removes the slightly duplicated code for picking the required AO
driver if --audio-device forces one. Now --audio-device reuses the same
code as --ao for this.
As a consequence, ao_alloc_pb() and ao_create() can be merged into
ao_init(). Although the ao_init() argument list, which is already pretty
big, grows by one, it's better than having all these similar sounding
functions around.
Actually, I just wanted to do the change the following commit will do,
but I found this code was more of a mess than it had to be.
libdvdnav is garbage. Seeking by time is incredibly inexact, which is in
part due to the fact that it does not use the DVD seek tables. Instead,
it assumes CBR for certain ranges within the DVD, which makes especially
small seeks unreliable.
I have no good fix for this, other than hacking libdvdnav (I'd rather
remove mpv's DVD support completely than do that). So here's a shitty
hack that tries to work around these problems. A basic observation is
that seeking in VLC seems to work quite well; however, it seems to be
based on seeking by blocks (unless there is a subtle "trick" I didn't
see in the source code). mpv usually seeks by timestamps, so
this is not an option for us. However, we can pretend we are doing this
in the DVD layer.
The previous commit added a way to pass through relative seeks. This
commit uses the relative seek. STREAM_CTRL_SEEK_TO_TIME is backwards
compatible (there's still dvdread and bluray), so most code is about
extracting the relative seek information and turning it into a block
seek.
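A hedged sketch of the conversion (illustrative only; the real code has
to work with whatever libdvdnav exposes):

    #include <stdint.h>

    // Turn a relative time seek into a block seek, using an average
    // blocks-per-second estimate for the title.
    static uint32_t rel_seek_block(uint32_t cur_block, double rel_seconds,
                                   double avg_blocks_per_sec)
    {
        double target = cur_block + rel_seconds * avg_blocks_per_sec;
        return target < 0 ? 0 : (uint32_t)target;
    }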
(Another way would have been using SEEK_FACTOR stuff, but that would
probably make for a less reliable way to handle this situation.)
Additionally, if an hr-seek is done, add an offset of 10 seconds. As
long as the error made by libdvdnav is not worse, this should help with
hr-seeks - although it makes them much slower.
Pass through the seek flags to the stream layer. The STREAM_CTRL
semantics become a bit awkward, but that's still the least awkward
part about optical disc media.
Make demux_disc.c request relative seeks. Now the player will use
relative seeks if the user sends relative seek commands, and the
demuxer announces it wants these by setting rel_seeks to true. This
change probably changes seek behavior for dvd, dvdnav, bluray, cdda,
and possibly makes seeking useless if the demuxer-cache is set to
a high value.
Will be used in the next commit. (Split to make reverting the next
commit easier.)
Before this, we merely printed a message to the terminal. Now the API
user can determine this properly. This might be important for API users
that somehow maintain complex state, which all has to be invalidated if
(state-changing) events are missing due to an overflow.
This also forces the client API user to empty the event queue, which is
good, because otherwise the event queue would reach the "filled up"
state immediately again due to further asynchronous events being added
to the queue.
Also add some minor improvements to mpv_wait_event() documentation, and
some other minor cosmetic changes.
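From the API user's side, handling might look like this (a sketch;
invalidate_all_cached_state() and handle_event() are hypothetical
application hooks):

    #include <mpv/client.h>

    void invalidate_all_cached_state(void);
    void handle_event(mpv_event *ev);

    void drain_events(mpv_handle *h)
    {
        while (1) {
            mpv_event *ev = mpv_wait_event(h, 0);
            if (ev->event_id == MPV_EVENT_NONE)
                break;
            if (ev->event_id == MPV_EVENT_QUEUE_OVERFLOW) {
                // Events were dropped; any cached state may be stale.
                invalidate_all_cached_state();
                continue;
            }
            handle_event(ev);
        }
    }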
We still need to send the VO a duration in these cases. Disabling
framedrop has logically absolutely nothing to do with these cases; it
was overlooked in commit 918b06c4.
So we always send the frame duration (or a guess for it), and check
whether framedropping is actually enabled in the VO code. (It would
be cleaner to send framedrop as a flag, but I don't care about that
right now.)