Missed during the recent changes.
Also simplify the error checking code and check for POLLNVAL
(the display fd was never actually checked to be valid).
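A minimal sketch of the intended check (illustrative names, not the
actual code):

    struct pollfd fd = { .fd = display_fd, .events = POLLIN };
    poll(&fd, 1, timeout_ms);
    if (fd.revents & (POLLERR | POLLHUP | POLLNVAL)) {
        // POLLNVAL is set if display_fd was never a valid FD
        handle_display_error();
    }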
This requires changing the pixel upload alignment because the odd sizes
might not be aligned to multiples of 4.
Anyway, the restriction has no real benefit, and sizes between 32 and
64 might be worth using, so just drop it.
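For reference, a sketch of what this looks like on the upload side
(standard OpenGL; the texture format here is an assumption):

    // The default GL_UNPACK_ALIGNMENT is 4; odd LUT sizes can produce
    // row strides that aren't multiples of 4, so relax it:
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB16, size, size, size, 0,
                 GL_RGB, GL_UNSIGNED_SHORT, lut_data);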
Following testing after ebe798a, this is a more than sufficient size to
cover our use case.
The old default yielded a PSNR of about 58 dB using the old code, and
this new default about 65 dB, so it's actually an improvement despite
resulting in a smaller size.
There was no outlier whatsoever when comparing sizes around the 64
neighbourhood (with every step corresponding to a PSNR drop of about
0.07 dB), so I picked this since it's a power of two and requires no
change to the current 3dlut-size parsing logic.
I also tested smaller sizes such as 32x32x32 which performed almost as
well on colorful samples, but this results in noticeable black boost in
the dark regions, which is pretty undesirable. Therefore, we should
avoid going much further below 64x64x64.
Either way, this new size is so fast to compute that the 3dlut cache is
almost useless on my end. In fact, it might even be slower to load the
profile from the cache than to recompute it from scratch. (That's for
caches on disk; for a cache on tmpfs, it makes no difference.)
It seems vo_x11_check_events() was supposed to return the currently
flagged events and reset them. But there are many places where
vo_x11_check_events() is called without checking its return value. This
could lead to forgotten events.
Change the code such that they can't get lost.
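The pattern, roughly (a sketch with made-up names; the real code hangs
this off the vo):

    // Producers only OR flags in; nothing is returned and cleared as a
    // side effect of a check function anymore.
    static void flag_events(struct vo_x11_state *x11, int events)
    {
        x11->pending_events |= events;
    }

    // The single consumer fetches and clears the flags explicitly.
    static int take_events(struct vo_x11_state *x11)
    {
        int events = x11->pending_events;
        x11->pending_events = 0;
        return events;
    }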
This code had the exact same texture indexing bug that the original
scaler code had before the introduction of the LUT_POS macro to fix it.
We can re-use this same macro here, and the performance drop is
virtually negligible. The benefit is greatly improved LUT accuracy as
the 3DLUT size decreases - in particular, the old LUT introduced more
and more black crush the lower the LUT size (because the error was
essentially an over-contrast bias, with a magnitude inversely related
to the LUT size).
The new code improves black stability as the LUT size decreases, and
only at very low values (16 and below) do black levels start noticeably
getting affected (due to crude linearization of the nonlinear response
curve).
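For reference, the remapping LUT_POS does is essentially this (a
sketch; see the actual macro definition in the shader code):

    // Map x in [0,1] to [0.5/size, 1 - 0.5/size], i.e. onto the texel
    // centers, so GL_LINEAR never interpolates past the outer texels:
    #define LUT_POS(x, lut_size) \
        (0.5 / (lut_size) + (x) * (1.0 - 1.0 / (lut_size)))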
The default value of 3dlut-size is definitely generous enough for this
to make no difference out of the box, but it also causes no performance
drop at all on my machine so I see no harm in improving the logic.
Furthermore, this means we could easily decrease the default 3dlut size
in a future commit, perhaps even down to 64x64x64 as a default. (But
more testing is warranted here)
The FFmpeg API is incredibly weird and inconsistent about this. This is
also an FFmpeg-only issue and nothing like this is in Libav - which
doesn't really show FFmpeg in a very positive light.
(To make it even worse: this is a full-blown Libav API incompatibility,
even though this crap was added for Libav ABI-compatibility. It's
absurd.)
Quoting the FFmpeg header for the AVFrame.channels field:
/**
 * number of audio channels, only used for audio.
 * Code outside libavutil should access this field using:
 * av_frame_get_channels(frame)
 * - encoding: unused
 * - decoding: Read by user.
 */
int channels;
It says "should" not must, and it doesn't even mention
av_frame_set_channels(). It's also in the section for public fields (not
below a marker that indicates private fields in a public struct, like
it's done e.g. in AVCodecContext).
But not using the accessor will cause silent failures on ABI changes.
The failure that happened due to this code didn't even make it apparent
what was wrong. So just use the idiotic accessor.
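In practice that just means:

    int channels = av_frame_get_channels(frame);  // survives ABI changes
    // int channels = frame->channels;  // silently breaks on ABI bumps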
Also harmonize the FFmpeg-cursing in the code. (It's fully justified.)
Fixes #3295.
Note that mpv will still check the exact library version numbers, and
reject mismatches - to protect itself from such issues in the future.
Otherwise it behaves dumbly. (Although you could argue it shouldn't try to
guess whether speed changes work, but instead simply disable DS if they
don't work.)
It used not to work - but now it apparently does. Not sure when that got
fixed in FFmpeg, but there's no longer a reason to keep this hack.
This also gets rid of the check for the read_seek2 field, which is not
part of the public API.
Both backends have code to close each FD of their wakeup_pipe array.
This array is default-initialized with 0, which means if the backends
exit before the wakeup pipe is created (e.g. when probing), they would
close FD 0.
Initialize the FDs with -1. Then we call close(-1) in these situations,
which simply fails with EBADF and has no other consequences.
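Sketch of the idea (not the literal code):

    int wakeup_pipe[2] = {-1, -1};  // instead of implicit {0, 0}
    ...
    // On uninit, this is now harmless even if the pipe was never created:
    for (int i = 0; i < 2; i++)
        close(wakeup_pipe[i]);  // close(-1) fails with EBADF, nothing else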
This fits natively into the vo/backend and allows simplifying the
polling code.
One new change is the fact that surface_handle_enter flags VO_EVENT_WIN_STATE
and VO_EVENT_RESIZE instead of only VO_EVENT_WIN_STATE. Before this, the code
hackily relied on the timeout and the loop in the wait_frame function to track
and set the scaling factor. Instead, this triggers mpv to run a schedule_resize
and adjust the new VO output dimensions immediately. This is also more accurate
since surface_handle_enter() gets called when a surface is created, moved and
resized, which is exactly what the rest of the player might be interested in.
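Roughly, the handler now looks like this (the field names are
assumptions, not the exact code):

    static void surface_handle_enter(void *data, struct wl_surface *surface,
                                     struct wl_output *output)
    {
        struct vo_wayland_state *wl = data;
        wl->current_output = output;  // possibly a new scaling factor
        wl->pending_vo_events |= VO_EVENT_WIN_STATE | VO_EVENT_RESIZE;
    }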
This uses GLSL mix() instead of going through an indirect texture
access. Easy to implement, and it might require fewer resources on some
devices, since the oversample code was already essentially just a
special case of this.
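A sketch in the shader-building style used by vo_opengl (identifiers
are illustrative):

    // Blend the two neighbouring frames directly instead of indexing
    // into a weights texture:
    GLSL(color = mix(texture(texture0, texcoord0),
                     texture(texture1, texcoord1),
                     inter_coeff);)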
Could be made the new default (as per issue #2685), but that should be
done in a separate commit.
Until now, this has been either handled over vo.event_fd (which should
go away), or by putting event handling on a separate thread. The
backends which do the latter do it for a reason and won't need this, but
X11 and Wayland will, in order to get rid of event_fd.
There's no need to call wl_display_flush() since all the client-side
buffered data has already been flushed prior to polling the fd.
Instead only check for POLLIN and the usual ERR+HUP.
Don't just cause vo_opengl to update the ICC profile every time the
window is moved. Instead, explicitly check if the screen was changed.
Mostly untested.
Some client API users simply don't like such filenames. For their sake,
don't return them, but return a dummy filename instead. (Returning a
latin1-ized version would work too, but is slightly more work.)
Also remove the "\n" from the replacement dummy filename. This was
accidental.
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to clean up this code a bit and make it work like
the video filter code.
As part of this change, make the balance code work like the volume code,
and add an option backing the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and there
are some other problems, formally deprecate both the old property and the
new option.
Since mixer->ao is always NULL now (removing it was simply forgotten),
the uninit call never actually cleared the af field, leaving a dangling
pointer that could be accessed by volume control.
This determines whether VA_FRAME_PICTURE or VA_BOTTOM_FIELD is passed
in VAProcPipelineParameterBuffer.flags for progressive frames (that
should be force-deinterlaced). VA-VPP doesn't
really seem to care, and we can get rid of mp_refqueue_is_interlaced()
entirely. It could be argued it's better to pass field flags instead of
the progressive flag.
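The decision in question, schematically (the variable names are made
up):

    VAProcPipelineParameterBuffer *param = ...;
    param->flags = deinterlace
        ? (top_field_first ? VA_TOP_FIELD : VA_BOTTOM_FIELD)
        : VA_FRAME_PICTURE;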
"Real" frame flag vs. what we pretend it to be. It always used the real
flag, and thus never deinterlaced unflagged frames, even if the
suboption was set to "no".
The hw_subfmt field roughly corresponds to the field
AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type
AVPixelFormat (instead of the underlying hardware format), so it's a
good idea to switch to this too, in preparation.
Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-
specific number. VDPAU and Direct3D11 already used mp_imgfmt, but
Videotoolbox and VAAPI had to be switched.
One somewhat user-visible change is that the verbose log will now always
show the hw_subfmt as an image format, instead of as a nonsensical
number.
(In the end it would be good if we could switch to AVHWFramesContext
completely, but the upstream API is incomplete and doesn't cover
Direct3D11 and Videotoolbox.)
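The import then becomes a plain format translation, roughly (a sketch;
pixfmt2imgfmt() is mpv's existing AVPixelFormat -> mp_imgfmt helper):

    params->hw_subfmt = pixfmt2imgfmt(hw_frames_ctx->sw_format);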
Old-style commands using _ as separator (e.g. show_progress) were still
used in some places, including documentation and configuration files.
This commit updates all such instances to the new style (show-progress)
so that commands are easier to find in the manual.
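For example, in input.conf (the "P" binding is just for illustration):

    # old style:
    #P show_progress
    # new style, as documented in the manual:
    P show-progress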
Since it turns out that knowing what exactly a file was tagged with can
be useful for debugging purposes, expose this as a property so I can
check it more easily.
This is mostly useful for sig-peak (since nom-peak is currently entirely
calculated by us), but I added both for consistency.
--deinterlace=auto is the default, and has the obscure semantics that
deinterlacing is disabled, unless the user has manually inserted a
deinterlacing filter.
With software decoding this doesn't matter: we will happily insert 2
yadif filters (if the user has already added one), or leave the yadif
filter in place (if deinterlacing is disabled, but the user has added
the filter manually). This is different with hardware deinterlacer
filters. These support VFCTRL_SET_DEINTERLACE for toggling deinterlacing
filtering at runtime. It exists mainly for legacy reasons, and possibly
because it makes switching deinterlacing modes more efficient. It might
also give us an entry-point for VO deinterlacing, maybe. For whatever
reasons this mechanism exists, we still support and use it.
This commit fixes that video.c always used VFCTRL_SET_DEINTERLACE to
disable deinterlacing, even if --deinterlace=auto was set. Fix this by
checking the value of the option directly.
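Sketch of the fix (the option field name, helper, and values are
assumptions; say -1 = auto, 0 = no, 1 = yes):

    // Only toggle the hw deinterlacer on explicit --deinterlace=yes/no;
    // in auto mode, leave the user's filter chain alone.
    if (opts->deinterlace >= 0)
        set_deinterlacing(mpctx, opts->deinterlace != 0);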
For some reason, the lack of version info was preventing mpv from
appearing in the Default Programs dialog. Re-add it, but don't set the
string version numbers from version.h, because that's what was causing
trouble when the version info was removed. Like the binary version
numbers, these are now hardcoded to 2.0.0.0, which probably doesn't
matter.
The new version info block is also slightly different to the old one. It
fills out all the binary VERSIONINFO fields and makes better use of
macros. It also removes the \000 line terminators from the string
version info, since as far as I can tell, this was just cargo-culting
for an old broken version of the Microsoft resource compiler, and
binutils' windres terminates the strings properly without them.
This should get mpv working on Windows 7 machines without hardware
accelerated graphics adapters. It already worked on Windows 8 and up
because those systems would silently fall back to WARP if there was no
graphics hardware installed.
The normal MPGL_CAP_SW flag is not set, so unlike other opengl backends,
this will choose a software adapter even if opengl:sw is not specified.
The reason for this is, unlike on Linux, where vo_xv and vo_x11 can be
used, mpv on Windows does not have any VO to fall back on when hardware
acceleration isn't available, so if software adapters are rejected, the
user won't see any video output when using the default settings. WARP
seems to perform quite well, so it should be used in this case.