The comment was largely outdated, and described the old situation when
we used a "violent" fallback by making get_buffer2 fail completely.
Also, for the case when hw decoder initialization succeeded (in
get_format), but get_buffer2 for some reason requests something
unexpected, we can now fall back more gracefully and in the same way.
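Roughly, the graceful path in the get_buffer2 callback looks like the
sketch below (the context struct and flag names are illustrative, not
mpv's actual ones; only the libavcodec calls are real):

    static int get_buffer2_hwdec(AVCodecContext *avctx, AVFrame *pic,
                                 int flags)
    {
        struct ctx *ctx = avctx->opaque;   // hypothetical decoder context
        if (pic->format != ctx->expected_hw_format) {
            // The decoder asked for something we didn't set up in
            // get_format: mark hwdec as failed and let the software path
            // allocate the frame instead of erroring out.
            ctx->hwdec_failed = true;
            return avcodec_default_get_buffer2(avctx, pic, flags);
        }
        return hwdec_allocate_surface(avctx, pic, flags); // hypothetical
    }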
Window classes are process-wide (or at least DLL-wide), so you can't
have 2 classes with the same name. Our code attempted to do this when,
for example, 2 libmpv instances were created within the same process.
This failed, because RegisterClassEx() fails if the class already
exists.
Fix this by ignoring RegisterClassEx() errors. If the class really
can't be registered, we will fail on CreateWindowEx() instead. Of course
we also can't unregister the class, as another thread might be using it.
Windows will free the class automatically if the DLL is unloaded or the
process terminates.
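For illustration, the registration now effectively boils down to this
(simplified; the class name, window procedure and error reporting are
placeholders):

    WNDCLASSEXW wc = {
        .cbSize        = sizeof wc,
        .lpfnWndProc   = window_procedure,     // placeholder
        .hInstance     = GetModuleHandleW(NULL),
        .lpszClassName = L"mpv",
    };
    // Ignore the result: another instance in this process may already
    // have registered the class. If registration really failed, window
    // creation below fails and reports the actual error.
    RegisterClassExW(&wc);
    HWND window = CreateWindowExW(0, L"mpv", L"mpv", WS_OVERLAPPEDWINDOW,
                                  CW_USEDEFAULT, CW_USEDEFAULT,
                                  CW_USEDEFAULT, CW_USEDEFAULT,
                                  NULL, NULL, GetModuleHandleW(NULL), NULL);
    if (!window) {
        // Only now treat it as a real error.
    }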
Fixes #2319 (hopefully).
I took this out because I thought the filter chain would auto-negotiate
nv12 without the explicit hint. It does in the basic case with no
intermediate filters, but once you start adding filters, the chain can
end up negotiating a different format and then failing.
Today, vdpaurb will fail if it's used with non-hardware-decoded
content. This creates work for the user, who has to explicitly add or
omit it depending on the content.
As an improvement, we can make vdpaurb pass through any frames
that aren't hardware decoded, so that it can always be present in the
filter chain, if desired.
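A sketch of the pass-through check (the filter entry point is
simplified, and the helper names are assumptions about mpv's filter
API):

    static int filter_ext(struct vf_instance *vf, struct mp_image *mpi)
    {
        if (mpi->imgfmt != IMGFMT_VDPAU) {
            // Not a hardware surface: hand the frame through untouched.
            vf_add_output_frame(vf, mpi);
            return 0;
        }
        // Otherwise read the VdpVideoSurface back into a normal image.
        return read_back_surface(vf, mpi);   // hypothetical
    }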
I see no point in keeping these around. That MPlayer had these filters
is not a good reason to keep wrappers for a select few libavfilter
filters.
Ultimately, all real filtering work should go to libavfilter, and users
should get used to using vf_lavfi directly. We might even not require
the awful double-nested syntax for using libavfilter one day.
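(For reference, using a libavfilter filter directly currently looks
something like --vf=lavfi=[gradfun], with the filter graph nested in
brackets inside the sub-option syntax - that's the double nesting meant
above.)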
vf_rotate, vf_yadif, vf_stereo3d are kept because mpv uses them
internally. (They all extend the lavfi filters or change their
defaults.) vf_mirror is kept for symmetry with vf_flip. vf_gradfun and
vf_pullup are probably semi-popular, so I won't remove them yet - only
after some more discussion.
These were normalized and are saner now. We want to use the new fields,
and also get rid of the deprecation warnings, so use them. There's no
release yet which uses these, so some ifdeffery is unfortunately needed.
2 things are being stupid here: Apple for requiring rectangle textures
with their IOSurface interop for no reason, and OpenGL for having a
different sampler type for rectangle textures.
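Concretely, the difference amounts to something like this on the C side
(a sketch; the variable names are made up):

    // Rectangle textures need a different GLSL sampler type and are
    // addressed with unnormalized (pixel) coordinates.
    GLenum target       = use_rectangle ? GL_TEXTURE_RECTANGLE
                                        : GL_TEXTURE_2D;
    const char *sampler = use_rectangle ? "sampler2DRect" : "sampler2D";
    glBindTexture(target, texture);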
The removal of source-shader is a side effect, since this effectively
replaces it - and the video-reading code has been significantly
restructured to make more sense and be more readable.
This means users no longer have to constantly download and maintain a
separate deband.glsl installation alongside mpv, which was the only real
use case for source-shader that we found anyway.
This is mostly to cut down somewhat on the amount of code bloat in
video.c by moving out helper functions (including scaler kernels and
color management routines) to a separate file.
It would certainly be possible to move out more functions (e.g. dithering
or CMS code) with some extra effort/refactoring, but this is a start.
Signed-off-by: wm4 <wm4@nowhere>
Instead of the other way around, i.e. disabling disallowed options. This
is more robust and also slightly simpler, at least conceptually. If new
vo_opengl features are added, they don't need to be explicitly disabled
for dumb-mode just to avoid accidentally breaking it.
Sigh...
Hopefully this code will be completely unnecessary one day, as it's
only needed due to the sub-option parser craziness.
Move dumb_mode to the top of the struct, so the C universal initializer
doesn't cause warnings with all those broken compilers.
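The warning in question is presumably the "missing braces around
initializer" complaint that older GCC/clang emit for the {0} universal
initializer when the first member is itself an aggregate, e.g.
(hypothetical struct, not the real option struct):

    struct opts {
        int dumb_mode;                 // scalar first: {0} is accepted
        struct { int radius; } scaler; // if this aggregate came first,
                                       // older compilers warn about {0}
    };
    struct opts o = {0};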
The single-pass optimization, rendering the video in one shader pass and
without FBO indirections, was removed some commits ago. It didn't have a
place in this code, and caused considerable complexity and maintenance
issues.
On the other hand, it still has some worth, such as for use with
extremely crappy hardware (GLES only or OpenGL 2.1 without FBO
extension). Ideally, these use cases would be handled by a separate VO
(say, vo_gles). While cleaner, this would still cause code duplication
and other complexity.
The third option is making the single-pass optimization a completely
separate code path, with most vo_opengl features disabled. While this
does duplicate some functionality (such as "unpacking" the video data
from textures), it's also relatively unintrusive, and the high quality
code path doesn't need to take it into account at all. On another
positive note, this "dumb-mode" could be forced in other cases where
OpenGL 2.1 is not enough, and where we don't want to care about versions
this old.
This change makes vo_opengl slightly less compatible (ancient devices
without FBOs will no longer work) and decreases performance in the
simplest case (vo=opengl), in exchange for significantly reducing code
complexity and making everything easier to reason about.
This didn't seem entirely sane. It probably worked by accident, because
"position" is always the first attribute, and thus the default value 0
for the location was always correct.
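The saner approach is to bind or query the attribute location
explicitly instead of assuming 0 - roughly (the attribute name and
buffer layout are just examples):

    // Either pin the attribute before linking...
    glBindAttribLocation(program, 0, "vertex_position");
    glLinkProgram(program);
    // ...or ask the driver where it actually ended up after linking:
    GLint loc = glGetAttribLocation(program, "vertex_position");
    glEnableVertexAttribArray(loc);
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, NULL);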
Often, we don't know whether hardware decoding will work until we've
tried. (This used to be different, but API changes and improvements in
libavcodec led to this situation.) We will often output that we're going
to use hardware decoding, and then print a fallback warning.
Instead, print the status once we have decoded a frame.
Some of the old messages are turned into verbose messages, which should
be helpful for debugging. Also add some new ones.
The fallback at initialization time was basically duplicated, maybe for
the sake of showing a different error message. This doesn't matter
anymore; not much can fail at initialization now. Most meaningful
and common errors happen either at probing or in get_format (when the
actual hw decoder is initialized).
If PBO upload fails, disable PBOs and revert to the normal codepath. In
theory we should retry PBO upload on failure (because OpenGL specifies
that it can sporadically fail), but since it normally doesn't happen,
and the fallback will work, I'm not bothering.
Some restructuring is needed, since glUnmapBuffer needs to be called
earlier. In fact, the old code structure didn't make too much sense; it
was a leftover from MPlayer's direct rendering support, which let the
decoder decode to a PBO-mapped region. This means the buffer_ptr field
can be dropped. Drop buffer_size as well, since it only had 2 possible
values (0 or the size required for the current config).
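The new flow is roughly the following sketch (simplified: assumes a
current GL context, a single plane, and that the target texture is
already bound; the real code tracks this per plane):

    static bool pbo_upload(GLuint pbo, const void *src, size_t size,
                           int w, int h, GLenum format, GLenum type)
    {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                     GL_MAP_WRITE_BIT |
                                     GL_MAP_INVALIDATE_BUFFER_BIT);
        bool ok = !!dst;
        if (ok) {
            memcpy(dst, src, size);
            // Unmap *before* the upload; GL allows this to fail.
            ok = glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        }
        if (ok) {
            // Upload sourced from the bound PBO (last arg is an offset).
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, format, type,
                            NULL);
        }
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        if (!ok) {
            // Fallback: plain upload; the caller then disables PBOs.
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, format, type,
                            src);
        }
        return ok;
    }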
Can significantly help with very large video resolutions on nvidia
drivers. It doesn't seem to have negative effects on Intel drivers
either. (Although it could have on Intel drivers for older hardware.)
For now, this is only for --vo=opengl-hq. Maybe --vo=opengl should use
it too, but it's still meant to be the crappy, fail-safe default.
Set up a dummy image for the given image params, and get the plane sizes
from that. Admittedly not much of a simplification, but conceptually
it's simpler and less error-prone, as the image layout is guaranteed to
be the same, rather than essentially duplicating the way it is
determined.
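That is, something along these lines (the mp_image helpers are mpv
internals; treat the exact names as assumptions):

    struct mp_image layout = {0};
    mp_image_set_params(&layout, &params);
    for (int n = 0; n < layout.num_planes; n++) {
        plane[n].w = mp_image_plane_w(&layout, n);
        plane[n].h = mp_image_plane_h(&layout, n);
    }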
This is from times when we supported padded/non-NPOT textures. The
difference is not useful anymore, and theoretical support for different
sizes is most likely buggy and unmaintained. So remove it.
Also remove the tex_ prefix wherever it appears.
Use mp_image_copy() instead of copying manually. (This function checks
whether the destination is regarded as writeable, which it is not, because
the destination is the source image with changed pointers, so
refcounting has to be removed from the destination image by resetting
mpi->bufs.)
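In code, the idea is roughly the following sketch (field and function
names follow the description above; this is not the literal change):

    struct mp_image dst = *src;     // metadata copied from the source
    for (int n = 0; n < MP_MAX_PLANES; n++)
        dst.bufs[n] = NULL;         // drop refcounting, so the destination
                                    // counts as writeable
    // ...point dst.planes[]/dst.stride[] at the real destination memory...
    mp_image_copy(&dst, src);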
This shouldn't be needed anymore. Textures are now always allocated with
the exact size. Any padding (including non-NPOT support) is gone. The
texture sizes will always match the memory plane sizes.
Drop the unused and forgotten "npot" field from the option struct too.
Undo 292266f2. Reapply 3e12e79b.
An additional copy is not really justified, as it could reduce
performance. On the other hand, we can force API users to create
a GL 3.x context.
eglTerminate() affects the EGLDisplay in all threads. Since the RPI
firmware apparently only ever uses EGL_DEFAULT_DISPLAY, this means it
will trash all other contexts on other threads in the same process.
Thus we don't call eglTerminate() at all, at least on RPI. Call
eglReleaseThread() instead (which may or may not be a NOP).
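So on RPI the teardown reduces to roughly:

    // Do not call eglTerminate(display): it would trash EGL state used
    // by other threads in this process. Only drop this thread's state:
    eglReleaseThread();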
yuva444p worked, yuva420p didn't. This happened because the chroma pass
discards the alpha plane, which is referenced by the alpha blend code
later.
Add a terrible hack to work around this, actually using the same hack as
was used for the Y plane. (A terrible hack for terrible code.)
If the drag and drop action is anything other than
XdndActionCopy, append the dropped files rather than
replacing the existing playlist. With most file managers,
this will mean at least pressing shift while dropping.
This puts in place the machinery to merely append dropped files to the
playlist instead of replacing the existing playlist. In this commit, all
front-ends set this to false, preserving the existing behaviour.
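On the X11 side the check boils down to something like this (a sketch;
the surrounding event handling is omitted, and the mpv helper shown
reflects the new machinery described above, so treat its exact
signature as an assumption):

    Atom copy = XInternAtom(display, "XdndActionCopy", False);
    bool append = action != copy;  // e.g. shift-drop in most file managers
    mp_event_drop_files(input_ctx, num_files, files,
                        append ? DND_APPEND : DND_REPLACE);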
This might fix some problems when framestepping with interpolation
enabled. The problem here is that we want to show the non-interpolated
frame while paused. Framestepping is like unpausing the video for a
frame, and then pausing again. This draws an interpolated frame, and
redrawing on pausing is supposed to take care of this.
This possibly didn't always work, because vo->want_redraw is not checked
by the vo_control() code path. So wake up the VO thread (which takes
care of servicing redraw requests, kind of) explicitly.
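In other words, the workaround is essentially (vo_wakeup() is the
existing VO wakeup helper; the call site is simplified):

    vo->want_redraw = true;   // request the non-interpolated redraw
    vo_wakeup(vo);            // make sure the VO thread actually notices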
The correct solution is getting rid of the public-writable want_redraw
field and replacing it with a new vo_request_redraw() function, but this
can come later.
libavcodec does not support HEVC via VAAPI yet, so this won't work.
However, there is ongoing work to add HEVC support to VAAPI, and this
change might help with testing. (Or maybe not - but there is no harm in
this change.)
Should not happen, but since we don't control decoder video surface
allocation, anything could happen, and the code should be able to deal
with it. Untested.