This also fixes the maximum range to 16.0, which was previously set to
32.0 and incorrectly documented as 8.0. 16 taps should be more than
anybody will ever need, but it's the highest radius that's supported by
all affected filters.
And remove all uses of the VFCAP_CSP_SUPPORTED* constants. This
mechanism was supposed to reduce conversions if many filters are used
(with many incompatible pixel formats), and also to prefer the VO's
natively supported pixel formats (as opposed to conversion).
This is worthless by now. Not only do the main VOs not use software
conversion, but the way vf_lavfi and libavfilter work also mostly
breaks the old MPlayer mechanism. Other important filters like
vf_vapoursynth do not support "proper" format negotiation either.
Part of this was already removed with the vf_scale cleanup from today.
While I'm touching every single VO, also fix the query_format argument
(it's not a FourCC anymore).
I'm still not sure how exactly "lost" devices are supposed to be
handled. In theory, you only have to "reset" the device, instead
of recreating _everything_. But as it is, the code for proper uninit
and for handling the reset is exactly the same, so move it into a
function to reduce code duplication and the danger of potential bugs.
Apparently, extremely crappy graphics drivers don't allow you to use
shaders. Simply disable use of shaders if this happens, and use the
"old" method instead.
One unexpectedly tricky thing is that you need a d3d_device to create
a shader, which in turn requires a window, so the initialization order
changes.
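Schematically, the resulting init flow looks something like this
(the priv struct and function names are made up for illustration, not
the actual vo_direct3d code):

    #include <stdbool.h>

    struct priv {
        bool use_shaders;
        // ... window handle, IDirect3DDevice9 *, pixel shader, ...
    };

    bool create_window(struct priv *p);
    bool create_d3d_device(struct priv *p);   // needs the window
    bool create_pixel_shader(struct priv *p); // needs the device

    static bool preinit(struct priv *p)
    {
        if (!create_window(p) || !create_d3d_device(p))
            return false;
        // Crappy drivers may refuse to create shaders; fall back to
        // the "old" non-shader rendering method instead of failing.
        p->use_shaders = create_pixel_shader(p);
        return true;
    }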
Don't load all the legacy functions (including ancient extensions).
Slightly simplify function loader and context creation, now that legacy
GL doesn't need to be handled. Remove the code for drawing OSD in legacy
mode.
Remove all the header hacks, which were meant for ancient OpenGL headers
which didn't even support things like OpenGL 1.3. Instead, adjust the
GLX check to make sure we get both OpenGL 3.x and 2.1 symbols. For win32
and OSX, we assume that the user has the latest headers anyway. For
wayland, we hope that things somehow go right.
Simply clamp off the U/V components in the colormatrix, instead of doing
something special in the shader.
Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix
calculation overflowed with 32, add a component_bits field to the image
format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had
no bad consequences otherwise).
If the video and the VO don't support the same format, a conversion
filter needs to be inserted. Since a VO can support multiple formats, and
the filter chain also can deal with multiple formats, you basically have
to pick from a huge matrix of possible conversions.
The old MPlayer code had a quite naive algorithm: it first checked
whether any conversion from the list of preferred conversions matched,
and if not, it fell back to checking a hardcoded list of output
formats (more or less sorted by quality). This had some unintended
side-effects, like not using obvious "replacement" formats, selecting
the wrong colorspace, selecting a bit depth that is too high or too
low, and more.
Use avcodec_find_best_pix_fmt_of_list() provided by FFmpeg instead. This
function was made for this purpose, and should select the "best" format.
Libav provides a similar function, but with a different name - there is
a function with the same name in FFmpeg, but it has different semantics
(I'm not sure if Libav or FFmpeg fucked up here).
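A minimal sketch of the selection (the candidate list here is made up;
in the real code the candidates come from what the VO and filter chain
report as supported):

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat select_best_format(enum AVPixelFormat src)
    {
        // Candidate output formats; must be terminated with AV_PIX_FMT_NONE.
        enum AVPixelFormat candidates[] = {
            AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12, AV_PIX_FMT_RGB24,
            AV_PIX_FMT_NONE,
        };
        int loss = 0; // receives a bitmask of what the conversion would lose
        return avcodec_find_best_pix_fmt_of_list(candidates, src, 0, &loss);
    }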
This also removes handling of VFCAP_CSP_SUPPORTED vs.
VFCAP_CSP_SUPPORTED_BY_HW, which has no meaning anymore, except possibly
for filter chains with multiple scale filters.
Fixes #1494.
We still need to send the VO a duration in these cases. Disabling
framedrop has logically absolutely nothing to do with these cases; it
was overlooked in commit 918b06c4.
So we always send the frame duration (or a guess for it), and check
whether framedropping is actually enabled in the VO code. (It would
be cleaner to send framedrop as a flag, but I don't care about that
right now.)
Broke operation with GLSL.
Since 1D texture usage was apparently (and mysteriously) good for speed,
it might be added back, but it's unknown how to do so in a clean way.
The "ontop" and "border" properties already used a common
mp_property_vo_flag() function, and the corresponding VOCTRLs used the
same conventions. "fullscreen" is pretty similar, but was handled
slightly similar. Change how VOCTRL_FULLSCREEN behaves, and use the same
helper function for "fullscreen" as the other flags.
Fixes #1472.
(Maybe these options should have been named --autofit-max and
--autofit-min, but since --autofit-larger already exists, use
--autofit-smaller for symmetry.)
The last video frame is another case that has a separate code path,
although it's pretty similar to the one in commit 73e5aa87. Fix this
in a different way, which also takes care of the last frame case,
although without context the code becomes slightly more tricky.
As further cleanup, move the decision about framedropping itself to
the same place, so the check in vo.c becomes much simpler. The check
for the vo->driver->encode flag, which is removed completely, was
redundant too.
Fixes #1480.
After finding out more about how video mastering is done in the real
world, it dawned upon me why the "hack" we figured out in #534 looks so
much better.
Since mastering studios have historically been using only CRTs, the
practice adopted for backwards compatibility was to simulate CRT
responses even on modern digital monitors, a practice so ubiquitous that
the ITU-R formalized it in Rec. BT.1886 to be precisely gamma 2.40.
As such, we finally have enough proof to get rid of the option
altogether and just always do that.
The value 1.961 is a rounded version of my experimentally obtained
approximation of the BT.709 curve, which resulted in a value of around
1.9610336. This is the closest average match to the source brightness
while preserving the nonlinear response of the BT.1886 ideal monitor.
For playback in dark environments, it's expected that the gamma shift
should be reproduced by a user-controlled setting, up to a maximum of
1.224 (2.4/1.961) for a pitch black environment.
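As a worked form of the numbers above: treating the 1.961 approximation
as a pure power curve, a source value x encoded as x^(1/1.961) and
reproduced on an ideal BT.1886 monitor (gamma 2.40) comes out as

    \[
        \left(x^{1/1.961}\right)^{2.4} = x^{2.4/1.961} \approx x^{1.224}
    \]

which is where the 1.224 maximum for the user-controlled shift comes
from.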
More information:
https://developer.apple.com/library/mac/technotes/tn2257/_index.html
Support for taking screenshots when doing hardware decoding needs to be
added later.
This takes the last image queued to the VO, which is logically the image
the player thinks is on screen (so e.g. subtitles will match).
forget_frames() does not clear this, because seeking does not remove the
current image from the screen (until the next one is drawn).
Having any of these set to 0 makes no sense.
I think some code might still be using 0/0 aspect ratio to signal unset
aspect ratio, but I didn't find it. If there is still code like this, it
should be fixed instead.
Fixes #1467.
For some reason, schedule_resize() can be called with everything set to
0. The code couldn't handle wl->window.aspect being 0, and ended up
converting NaNs to integers. Just work around this.
(I have no idea what I'm doing. This is probably a corner case caused
by my broken-ish wayland setup.)
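The workaround amounts to an early-out guard, roughly like this (the
schedule_resize() signature here is approximated, not the literal code):

    static void schedule_resize(struct vo_wayland_state *wl,
                                int32_t width, int32_t height)
    {
        if (wl->window.aspect == 0)
            return; // would otherwise compute 0/0 and cast NaN to int
        // ... normal aspect-preserving resize logic ...
    }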
For some reason, mpv sometimes does not get a MapNotify event with
GtkSocket embedding. This happens maybe 1 out of 10 times. I'm not sure
how this can happen - it certainly shouldn't. Since I was not able to
find the cause, and it causes an apparent "deadlock", here's a lazy hack
to fix the misbehavior.
Seems to work with GtkSocket and passing the gtk_socket_get_id() value
via the "wid" option to mpv.
One caveat is that using <tab> to move input focus from mpv to GTK does
not work. It seems we would have to interpret <tab> ourselves in this
case. I'm not sure if we really should do this - it would probably
require emulating some other typical conventions too. I'm not sure if an
embedder could do something about this on the toolkit level, but in
theory it would be possible, so leave it as is for now.
Don't use vo_control() for sending VOCTRL_RESET when starting a seek.
This means vo_seek_reset() won't wait until the VO actually processed
VOCTRL_RESET. It happens asynchronously instead.
The impact of this change should be minimal, unless the VO is somehow
too busy (like blocking on vsync).
Make their meaning more exact, and don't pretend that there's a
reasonable definition for "bits-per-pixel". Also make unset fields
unavailable.
average_depth still might be inconsistent: for example, 10 bit 4:2:0 is
identified as 24 bits, but RGB 4:4:4 as 12 bits. So YUV formats
seemingly drop the per-component padding, while RGB formats do not.
Internally it's consistent though: 10 bit YUV components are read as
16 bit, and the padding must be 0 (it's basically like an odd fixed-
point representation, rather than a bitfield).
This avoids issues when upscaling directly in linear light, and is the
recommended way to upscale images according to ImageMagick.
The default slope of 6.5 offers a reasonable compromise between
ringing artifacts eliminated and ringing artifacts introduced by
sigmoid-upscaling. Same goes for the default center of 0.75.
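For reference, one common formulation of such a sigmoidal mapping (not
necessarily the exact one used here): with center c and slope k, the
normalized logistic

    \[
        f(x) = \frac{\sigma\!\left(k(x - c)\right) - \sigma(-kc)}
                    {\sigma\!\left(k(1 - c)\right) - \sigma(-kc)},
        \qquad \sigma(t) = \frac{1}{1 + e^{-t}}
    \]

maps [0,1] onto [0,1]; its inverse is applied before upscaling and f
itself afterwards, compressing the extremes where ringing is most
visible.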
The previous implementation of opengl-cb kept only the latest flipped
frame. This could cause massive frame drops, because rendering is done
asynchronously, and only the latest frame can be rendered.
This commit introduces a frame queue and related options to opengl-cb.
frame-queue-size: the maximum size of the frame queue (1-100, default: 1)
frame-drop-mode: behavior when the frame queue is full (pop, clear; default: pop)
The frame queue holds delayed frames, and if the queue overflows, frames
are dropped with the selected method:
'pop' mode: drops the oldest frames until the new frame fits.
'clear' mode: drops all frames in the queue and clears it.
With the default options (frame-queue-size=1:frame-drop-mode=pop),
opengl-cb effectively behaves the same way as the previous
implementation.
With frame-queue-size > 1, opengl-cb tries to call update() without
waiting for the next flip_page(), in order to consume queued frames.
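A rough sketch of the queue/drop behavior (names are made up, and
ownership via talloc is assumed; the real queue lives in the opengl-cb
context):

    #include <string.h>

    enum drop_mode { MODE_POP, MODE_CLEAR };

    struct frame_queue {
        struct mp_image *frames[100]; // frame-queue-size is at most 100
        int num_frames;
        int max_frames;               // frame-queue-size
        enum drop_mode mode;          // frame-drop-mode
    };

    static void queue_frame(struct frame_queue *q, struct mp_image *img)
    {
        if (q->num_frames == q->max_frames) {
            if (q->mode == MODE_CLEAR) {
                // 'clear': drop every queued frame
                for (int n = 0; n < q->num_frames; n++)
                    talloc_free(q->frames[n]);
                q->num_frames = 0;
            } else {
                // 'pop': drop only the oldest frame to make room
                talloc_free(q->frames[0]);
                memmove(&q->frames[0], &q->frames[1],
                        (q->num_frames - 1) * sizeof(q->frames[0]));
                q->num_frames -= 1;
            }
        }
        q->frames[q->num_frames++] = img;
    }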
Signed-off-by: wm4 <wm4@nowhere>
vo_opengl was crashing since f811348d because it passed NULL for the
events parameter to vo_control. Normally the parameter should not be
NULL, so add a hack to account for this. In particular, we should
handle the events that are returned. For the call in preinit() we
skip this, but it most likely has no meaning anyway, because at this
stage no window is visible yet.
This was always supposed to work.
Just add the option declaration. Normally I'm not a fan of duplicating
such things, but in this case it's (still) harmless.
This affects OSX, where memory profiles are updated e.g. on fullscreen
switches. The profile most likely doesn't change, but the LUT will
be generated and reloaded anyway.
Somewhat of a regression from commit f811348.
Fixes #1439.
If icc-path is set, but the thing is replaced with a memory profile,
then p->icc_path would point to deallocated memory.
Also, the NULL checks are unnecessary.
glXGetProcAddress() is outdated, and as far as I know doesn't give all
the guarantees the "new" ARB function gives.
Probably doesn't matter too much, because until now it always appeared
to work. On the other hand, since this function is (bogusly) used only
on the gl3 code path, it could have happened that users hit this, and
just reverted to vo_opengl_old instead.
When the given mp_image_params does not match that of gl_video,
gl_video_config() always calls uninit_video(), but calls init_video()
only if a valid format is given.
Since uninit_video() does not change the image_params of gl_video, when
the same params as the previous ones are given to gl_video_config()
after gl_video was uninitialized with an invalid format,
gl_video_config() never calls init_video().
To prevent this, invalidate the image_params of gl_video in
uninit_video().
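The fix is essentially (assuming the fields described above; not the
literal diff):

    static void uninit_video(struct gl_video *p)
    {
        // ... free textures, programs, etc. ...

        // Invalidate the stored format, so a later gl_video_config()
        // call with identical params cannot be mistaken for "nothing
        // changed" and skip init_video().
        p->image_params = (struct mp_image_params){0};
    }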
Previously we just forced loading a profile from a file, but that
integrates poorly with querying the OS / display server for an ICC
profile, and with generating profiles on the fly (which we might use in
the future for creating preset 3dluts).
Also changed the previous icc-profile-auto code to use this mechanism, and
moved gl_lcms to be an opaque type with state instead of just providing pure
functions.
This makes vo_opengl_cb respond to controls like "gamma" and
"brightness". The commit includes an awkward refactor for vo_opengl to
make it easier for vo_opengl_cb.
One problem is a logical race condition. The set of supported controls
depends on the pixelformat, which in turn is set by reconfig(). But the
actual reconfig() call (on the renderer) happens asynchronously on the
renderer thread. At the time it happens, the player most likely already
tried to set some controls for command line options (see init_vo() in
video.c). So setting these command line options will fail most of the
time, though it could randomly succeed. This can't be fixed directly,
because the player can't wait on the renderer thread, because the
renderer thread might already wait on the player.
Although the line count increases, this is better for making sure
everything is handled consistently for all users of the mp_csp_params
stuff.
This also makes sure mp_csp_params is always initialized with
MP_CSP_PARAMS_DEFAULTS (for consistency).
This adds an "auto" choice to the concurrent-frames suboption, and makes
it the default.
I'm not so sure about making this the default, though. It could lead to
excessive buffering with large CPU counts. But we'll see.
Returning the property before the window is mapped could lead to
confusing behavior, and in particular strange differences between
vo_vdpau and vo_opengl. (vo_opengl creates the window right at the
start, while vdpau waits until the first reconfigure event.) It might
even be possible that vo_opengl returned random results, because the
hidden window can have different placement than the actual, final one
at the initial video reconfig.
Fix this by returning the property only if the window is considered
mapped. command.c handles this case specifically, and makes the property
unavailable, instead of returning an empty list.
There are currently 568 pixel formats (actually fewer, but the namespace
is this big), and for each format elaborate synchronization was done to
call it synchronously on the VO. This is completely unnecessary, and we
can do with just a single call.
This is basically a hack, but apparently a needed one, since many
vapoursynth filters insist on having a FPS set.
We need to apply the FPS override before creating the filters. Also
change some terminal output related to the FPS value.
Most of this is explained in the code comments. This change should
improve performance with vapoursynth, especially if concurrent requests
are used.
This should change nothing if vf_vapoursynth is not in the filter chain,
since non-threaded filters obviously can not asynchronously finish
filtering of frames.
While there's no actual need to get rid of these, I want to make sure
nobody actually needs this stuff, and removing it is the best way to
find out. We can still revert this commit if it turns out there is a
significant need for this stuff.
The final goal is removing vo_opengl_old entirely. Add a warning, which
basically announces this intention.
This is only needed for switching video track with `_`, since Cocoa
automatically handles cleaning up the application's presentation options when
quitting the process.
Fixes #1399.
The way we use rectangle textures (required by VDA for no reason) works
only in OpenGL 3.0 or higher. Below that, the shader will fail to
compile. We could add support for older OpenGL versions, but that would
be a major pain.
This normally doesn't matter; mpv itself always creates OpenGL 3.2
contexts on OSX. But it could matter if a client API user uses
vo_opengl_cb, and gives it a 2.1 context (which OSX also allows you to
create).
Until now, calling mpv_opengl_cb_uninit_gl() at a "bad moment" could
make the whole thing explode. The API user was asked to avoid such
situations by calling it only in "good moments". But this was probably a
bit too subtle and could easily be overlooked.
Integrate the approach the qml example uses directly into the
implementation. If the OpenGL context is to be uninitialized, forcefully
disable video, and block until this is done.
If a filter exists, but has no metadata, just return success. This
allows the user to distinguish between no metadata available, and filter
not inserted.
See #1408.
The details of the non-linear transformation from/to BT.2020's constant
luminance system don't really make sense with any other gamma curve,
since changing the gamma curve completely breaks the chroma channels.
vo_opengl was originally written against OpenGL 3 core, and it seems
GPUs/drivers supporting this are mostly sane. Later, it was made to work
with OpenGL 2.1 too. Lately we removed the requirement for RG textures,
and look, someone reported a problem with "lesser" Intel GPUs.
This commit does the same in vo_opengl what was added to vo_opengl_old a
long time ago.
Fixes #1383.
Using them reduces state change, since now at least vo_opengl_cb has to
set up/break the vertex array bindings on every frame if no VAOs are
available.
This reverts the VAO related change in commit f64665e7.
Originally, this code was written to have full control over the OpenGL
state, rather than having to cooperate with unknown components by being
embedded the way vo_opengl_cb is meant to be. As a consequence, it was
thought to be ok to set up a global binding (if the context is below
OpenGL 3.0; 3.0 and higher guarantee VAOs).
This could break badly. Fix it by setting up and breaking the bindings
on entry/exit.
GL_ARB_debug_output provides a logging callback, which can be used to
diagnose problems etc. in case the driver supports it. It's enabled only
if the vo_opengl "debug" suboption is set.
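Hooking the extension up looks roughly like this (plain GL prototypes;
mpv actually goes through its own function loader and routes the output
to its logger):

    #include <stdio.h>
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void APIENTRY gl_debug_cb(GLenum source, GLenum type, GLuint id,
                                     GLenum severity, GLsizei length,
                                     const GLchar *message, const void *user)
    {
        fprintf(stderr, "GL: %.*s\n", (int)length, message);
    }

    static void enable_gl_debug_output(void)
    {
        // Call only if GL_ARB_debug_output is in the extension string.
        glDebugMessageCallbackARB(gl_debug_cb, NULL);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
    }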
Now it accepts the same renderer arguments as vo_opengl. This also
disables debug checks by default, and reverts the background color
override. Both can now be controlled by the host application.
Whether we have texture_rg doesn't matter much anymore; the scaler
should be fine with this. But on ES 2.0, first-class arrays are missing,
so even if filterable float textures should be available, it won't work.
Dithering (at least the "fruit" variant) will not work either, because
it uses floats.