I plan to remove the S24 sample formats in mpv. It seems like we should
still support this _somehow_ in AOs though. So the idea is to convert
the data to more obscure representations (that would not be useful for
filtering etc. anyway) within the AO.
This commit adds a helper to enable this. ao_convert_fmt is meant to
provide a mechanism for this, rather than a generic audio format
description (the latter leads only to overly generic misery). The
conversion also supports only the cases we think will actually be
needed.
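For illustration, the kind of conversion this enables might look
roughly like the following (a minimal sketch with made-up names, not
the actual ao_convert_fmt API): truncating interleaved S32 samples to
packed little-endian S24 in place, inside the AO.

    #include <stddef.h>
    #include <stdint.h>

    // Sketch: convert S32 samples to packed 24 bit (little endian) in
    // place. This works in place because the destination advances more
    // slowly than the source, and each sample is read before the
    // overlapping bytes are written.
    static void s32_to_s24_packed(void *data, size_t num_samples)
    {
        const int32_t *src = data;
        uint8_t *dst = data;
        for (size_t n = 0; n < num_samples; n++) {
            uint32_t v = (uint32_t)src[n];
            dst[n * 3 + 0] = (v >> 8)  & 0xFF; // the 8 LSBs are dropped
            dst[n * 3 + 1] = (v >> 16) & 0xFF;
            dst[n * 3 + 2] = (v >> 24) & 0xFF;
        }
    }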
The main advantage of this approach is that we get S24 out of sight,
and that we could support other crazy formats (like S20). The main
disadvantage is that usually S32 will be selected (if both S32 and S24
are available), and there's no user control to force S24. That doesn't
really matter though, and at worst makes testing harder or will lead
to unpleasant arguments with audiophiles (they'd be wrong anyway).
ao_convert_fmt.pad_lsb is ignored for now; we may have to honor it if
we ever find a case in which playing S32 with data in the LSBs breaks
when it is played as a padded 24 bit format. (For example,
WAVEFORMATEXTENSIBLE recommends setting the unused bits to 0 if
wValidBitsPerSample implies LSB padding.)
It's now possible to request non-dumb mode as a user, even when not
using any non-dumb features. This change is mostly intended for testing,
so I can easily switch between dumb and non-dumb mode on default
settings. The default behavior is unaffected.
This backend is selected if vaapi is available, but vaapi-over-EGL is
not. This causes various issues around the forced RGB conversion, which
is done with fixed, usually incorrect parameters.
It seems the existing auto-probing check is too weak, and doesn't
really prevent the backend from getting loaded. Fix this by adding a
flag that keeps it from ever being loaded during auto-probing.
I'm still not deleting it, because it's useful for testing on nvidia
machines.
See #4555.
The current algorithm blew up when the color was negative, as is the
case when downscaling with dscale=mitchell or other algorithms that
introduce negative ringing. The simplest solution is to just slightly
change the calculation to force both parameters to be in-range.
Was at least somewhat broken, and is misleading. I don't really have an
idea why FFmpeg has two AVOptions here anyway. We don't need to care,
and I'm only aware of one user ever trying this option.
See #4579.
HOME isn't set by default on Windows. But if the user does set it,
prefer it by default.
Enables stuff like --log-file=~/mpv.log to work, even if HOME isn't set.
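A minimal sketch of the intended lookup order (illustrative only, not
the actual win32 path code; the fallback shown here is an assumption):

    #include <stdlib.h>

    // Sketch: resolve the user's home directory on Windows.
    static const char *get_home_dir(void)
    {
        const char *home = getenv("HOME"); // only present if the user set it
        if (home && home[0])
            return home;
        return getenv("USERPROFILE");      // assumed fallback; may be NULL
    }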
This is the first time I saw a user try to use this option, and
apparently it didn't work. I'm not exactly sure why, but the code seems
to be broken anyway. Apart from not doing any error checking (neither
on the mallocs, nor by warning the user about invalid input), it
forgets to add a 0 terminator.
Use the corresponding AVOption instead, which probably works.
See #4579.
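Roughly, using the AVOption amounts to handing the hex string to
libavformat's own "cryptokey" option and letting it do the parsing (a
sketch, not the actual demux_lavf change):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    // Sketch: let libavformat parse and apply the decryption key.
    static int open_with_cryptokey(AVFormatContext **fmt, const char *url,
                                   const char *hex_key)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "cryptokey", hex_key, 0);
        int ret = avformat_open_input(fmt, url, NULL, &opts);
        av_dict_free(&opts); // leftover entries mean the option was rejected
        return ret;
    }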
This is exposed so that bjin/mpv-prescalers can use textureGatherOffset
for performance.
Since there are now quite a lot of parameters where it isn't quite clear
why they're all defined, add a paragraph to the man page that explains
them a bit.
This helps prevent unnatural, weirdly colorized, blown-out highlights
in direct images of the sunlit sky and other way-too-bright HDR
content. I was debating whether to set the default at 1.0 or 2.0, but
went with the more conservative option that preserves more detail/color.
This logic doesn't really make sense. copy_img_tex already binds the
texture, so why would we bind it a second time? Furthermore, nothing
actually uses this return value. Must have been some left-over artifact
of a previous iteration of this function. Anyway, it's harmless, just
nonsensical. So remove it.
Using textureGather is more efficient on my machine (nvidia), but only
when applied to groups of exactly 4 texels, so we switch to it for such
groups. Some notes:
- textureGatherOffset seems to be faster than textureGather by a
non-negligible amount, but for some reason, textureOffset is still
slower than a straight-up texture
- textureGather* requires GLSL 400; and at least on nvidia, this
requires actually allocating a GL 4.0 context.
- the code in opengl/common.c that clamped the GLSL version to 330 is
deprecated, because the old user shader style has been removed
completely in the meantime
- To combat the growing complexity of the polar sampling code, we drop
the antiringing functionality from EWA shaders completely, since it
never really worked well for EWA to begin with. (Horrific artifacting)
Instead of PostMessage, use SendNotifyMessage from the SendMessage
family of functions to wake up the Win32 thread from the VO thread. When
a message is sent rather than posted between threads, it ends up in a
different queue which is processed before posted messages and can be
processed in more places. This prevents a playback glitch when clicking
on the titlebar, but not moving the window. With PostMessage-based
wakeups, VOCTRLs could be delayed for up to 500ms after the user clicks
on the titlebar, but with SendNotifyMessage, they still complete in
under a millisecond.
Also, instead of handling WM_USER, process the dispatch queue before
every message. This ensures the dispatch queue is processed as soon as
possible. WM_NULL is used to wake up the window procedure in case there
are no other messages being processed.
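Schematically, the new wakeup path looks something like this (a
simplified sketch, not the actual w32_common.c code;
process_dispatch_queue() is a stand-in for mpv's dispatch queue
handling):

    #include <windows.h>

    static void process_dispatch_queue(void)
    {
        // stand-in: drain the pending VOCTRLs / dispatch queue items
    }

    // Called from the VO thread to wake up the Win32/GUI thread.
    static void wakeup_gui_thread(HWND hwnd)
    {
        // A sent message is delivered ahead of the posted queue and also
        // inside modal loops (e.g. while the user holds the titlebar), so
        // it is not delayed the way a PostMessage()-based wakeup can be.
        SendNotifyMessageW(hwnd, WM_NULL, 0, 0);
    }

    static LRESULT CALLBACK wndproc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        process_dispatch_queue(); // run pending requests before every message
        switch (msg) {
        case WM_NULL:
            return 0;             // pure wakeup, nothing else to do
        default:
            return DefWindowProcW(hwnd, msg, wp, lp);
        }
    }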
This sets AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH, which some hwaccels
using the new generic API respect. These do profile selection in
libavcodec, so it can only be controlled via an external flag, instead
of in mpv code, as it used to be done.
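The flag itself is just OR-ed into the decoder context before it is
opened; schematically (placement simplified, not the literal decoder
setup code):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    // Sketch: opt in to hwaccel profile mismatches.
    static void set_profile_mismatch(AVCodecContext *avctx, bool allow)
    {
        if (allow)
            avctx->hwaccel_flags |= AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH;
    }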
They have been deprecated for a decade, yet you're forced to explicitly
deal with them at every step, or they will break your shit.
FFmpeg insists on keeping them, because libavfilter is too stupid to
deal with color ranges properly. Ridiculous.
- change asserts to silent exits
- check all pointers before use
- move the p->pass initialization code to the right place
This should hopefully cut down on the amount of crashing by making the
code fundamentally more robust, while also fixing a concrete issue where
opengl-cb failed to initialize p->pass.
This seems to reduce glitches when resizing a --wid program (or it could
be a placebo.) Since we don't need the WM_WINDOWPOSCHANGING handler in
--wid mode, it should be fine.
This allows filter functions to be prematurely cut off once their
contributions start becoming insignificant. This effectively prevents
wasted GPU time sampling from parts of the function that are essentially
reduced to zero by the window function, providing anywhere from a 10% to
20% speedup. (5700μs -> 4700μs for me)
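The idea in rough form (a sketch with hypothetical names, not mpv's
actual implementation): instead of always sampling out to the nominal
radius, find the distance beyond which the windowed weight stays below
a small threshold, and stop the sampling loop there.

    #include <math.h>

    // Hypothetical: windowed filter weight at distance x from the center.
    typedef double (*weight_fn)(double x);

    // Walk inward from the nominal radius until the weight becomes
    // significant; everything beyond the returned distance contributes
    // (almost) nothing and can be skipped.
    static double effective_radius(weight_fn weight, double radius,
                                   double cutoff)
    {
        const double step = 1.0 / 64.0;
        for (double x = radius; x > 0.0; x -= step) {
            if (fabs(weight(x)) > cutoff)
                return x;
        }
        return 0.0;
    }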
This is more confusing than helpful, and forces escaping more stuff.
For example, for string lists we could remove all need for escaping
with -add and -pre. The user can simply use multiple of those options.
Remove the various redundant m_config_set_option* calls, rename the
remaining one to m_config_set_option_cli(), and merge the
m_config_parse_option() function.
Now it's sourced from the etc/ PNG files directly, instead of
preprocessing them with imagemagick.
Add some ad-hoc code to decode PNG files with libavcodec. At least we
can drop the zlib code in exchange.
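The ad-hoc decoding boils down to something like this (a condensed
sketch, not the literal loader code; the input buffer is assumed to
carry FFmpeg's usual AV_INPUT_BUFFER_PADDING_SIZE zero padding):

    #include <stdint.h>
    #include <libavcodec/avcodec.h>

    // Sketch: decode an in-memory PNG to an AVFrame with libavcodec.
    static AVFrame *decode_png(const uint8_t *buf, int size)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_PNG);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        if (!codec || !ctx || !pkt || !frame ||
            avcodec_open2(ctx, codec, NULL) < 0)
            goto error;
        pkt->data = (uint8_t *)buf;
        pkt->size = size;
        if (avcodec_send_packet(ctx, pkt) < 0 ||
            avcodec_receive_frame(ctx, frame) < 0)
            goto error;
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        return frame; // caller releases it with av_frame_free()
    error:
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        av_frame_free(&frame);
        return NULL;
    }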
Actually contains some code fragments by Michael Niedermayer (command
line stuff, video equalizer), thus it can be LGPL only once the formal
requirement of mpv's core being LGPL is fulfilled.
2f41c4e8 exposed some other edge cases as well. Globally resetting the
pass info was not the right way to go about it, because we don't know in
advance what the frame type is going to be - at least not with the
current code structure. (In principle, we could separately indicate the
frame type and the pass type and then only reset it on the first
actual pass_describe call, but that's annoying as well)
Also fixes a latent issue where p->pass was never initialized, which
broke the MP_DBG debugging code in some cases.
Since all existing code does gl_video_upload immediately followed by
pass_render_frame, we can just move the upload into pass_render_frame
itself, which arguably makes more sense anyway.
This replaces `vo-performance` by `vo-passes`, bringing with it a number
of changes and improvements:
1. mpv users can now introspect the vo_opengl passes, which is something
that has been requested multiple times.
2. performance data is now measured per-pass, which helps both
development and debugging.
3. since adding more passes is cheap, we can now report information for
more passes (e.g. the blit pass, and the osd pass). Note: we also
switch to nanosecond scale, to be able to measure these passes
better.
4. `--user-shaders` authors can now describe their own passes, helping
users identify which user shaders are active at any given time, and
helping shader authors identify performance issues.
5. the timing data per pass is now exported as a full list of samples,
so projects like Argon-/mpv-stats can immediately read out all of the
samples and render a graph without having to manually poll this
option constantly.
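As an example of point 5, a client API user can fetch all of the data
in one call (a sketch; the exact node layout is whatever the manual
documents for vo-passes):

    #include <stdio.h>
    #include <mpv/client.h>

    // Sketch: fetch the per-pass data in one go through the client API.
    static void dump_vo_passes(mpv_handle *mpv)
    {
        mpv_node node;
        if (mpv_get_property(mpv, "vo-passes", MPV_FORMAT_NODE, &node) < 0)
            return;
        if (node.format == MPV_FORMAT_NODE_MAP)
            printf("vo-passes has %d sub-lists\n", node.u.list->num);
        mpv_free_node_contents(&node);
    }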
Due to gl_timer's design being complicated (directly reading performance
data would block, so we delay the actual read-back until the next _start
command), it's vital not to conflate different passes that might be
doing different things from one frame to another. To accomplish this,
the actual timers are stored as part of the gl_shader_cache's sc_entry,
which makes them unique for that exact shader.
Starting and stopping the time measurement is easy to unify with the
gl_sc architecture, because the existing API already relies on a
"generate, render, reset" flow, so we can just put timer_start and
timer_stop in sc_generate and sc_reset, respectively.
The ugliest thing about this code is that due to the need to keep pass
information relatively stable in between frames, we need to distinguish
between "new" and "redrawn" frames, which bloats the code somewhat and
also feels hacky and vo_opengl-specific. (But then again, this entire
thing is vo_opengl-specific)
For some braindead reason, Microsoft decided to prevent you from
dynamically loading system libraries. This makes portability harder.
And we're talking about portability between Microsoft OSes!
This partially reverts the change from a while ago to always build
DXVA2 and D3D11VA together.
To make it simpler, we change the following:
- building with ANGLE headers is now required to build D3D hwaccels
- if DXVA2 is enabled, D3D11VA is still forcibly built
- the CLI vo_opengl ANGLE backend is now under --egl-angle-win32
This is done to reduce the dependency mess slightly.
Slightly cleaner, possibly slightly more correct. (The last case should
be dead code now. In general, we can't know the implied colorspace from
an AV_PIX_FMT, at least not if FFmpeg adds a new one.)
It never worked. It relied on some obscure texture format to provide
the equivalent of GL_RG or GL_LUMINANCE_ALPHA, but no hardware ever
seemed to report support for it. No idea what the correct way to do
this is.
On D3D11 it exists, of course.
(Actually I'd like to remove the whole VO.)
Another legacy annoyance. The only place where packed YUV is still
important is slightly older Apple hardware or drivers, which require
it for efficient hardware decoding.
Instead of setting up a weird swizzle (which is linked to how the
internal renderer code works, rather than the generic format code), add
per-component mapping to gl_imgfmt_desc.
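Schematically, a per-component mapping is just a small table per plane
(hypothetical field names, not the literal gl_imgfmt_desc definition):
each texture channel records which image component it carries, instead
of the renderer deriving the same information from a swizzle.

    // Sketch: components[plane][tex_channel] holds the image component
    // stored in that channel (1=Y/R, 2=U/G, 3=V/B, 4=A, 0=unused).
    struct plane_component_map {
        int num_planes;
        int components[4][4];
    };

    // For example, a packed UYVY-style plane could map its texture
    // channels to {U, Y, V, Y}, i.e. {2, 1, 3, 1}, with the remaining
    // planes left empty.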
The renderer still computes the weird swizzle, but at least it's
confined to itself. Also, it appears the hwdec backends don't need this
anymore.
It's really nice that the messy init_format() goes away too.