The renderer code doesn't list a fixed set of supported formats, but
supports anything that is described by AVPixFmtDescriptor and follows a
number of constraints.
Plane order is not included in those constraints. This means the planes
could be in random order, rather than what the vo_opengl renderer
happens to assume. For example, it assumes that the 4th plane is alpha,
even though alpha could be on any plane. Likewise it assumes that plane
0 was always luma, and planes 1/2 chroma. (In earlier iterations of
format support, this was guaranteed by MP_IMGFLAG_YUV_P, but this is not
used anymore.)
Explicitly set the plane semantics (enum plane_type) by the component
descriptors and the used colorspace. The behavior should mostly not
change, but it's less likely to break when FFmpeg adds new pixel
formats.
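Roughly, the idea is something like this (an illustrative sketch; mpv's
actual enum values and classification logic may differ):

    #include <libavutil/pixdesc.h>

    enum plane_type { PLANE_NONE, PLANE_RGB, PLANE_LUMA, PLANE_CHROMA,
                      PLANE_ALPHA };

    static enum plane_type classify_plane(const AVPixFmtDescriptor *desc,
                                          int plane)
    {
        for (int c = 0; c < desc->nb_components; c++) {
            if (desc->comp[c].plane != plane)
                continue;
            // By FFmpeg convention, the alpha component is the last one.
            if ((desc->flags & AV_PIX_FMT_FLAG_ALPHA) &&
                c == desc->nb_components - 1)
                return PLANE_ALPHA;
            if (desc->flags & AV_PIX_FMT_FLAG_RGB)
                return PLANE_RGB;
            return c == 0 ? PLANE_LUMA : PLANE_CHROMA;
        }
        return PLANE_NONE;
    }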
Use avcodec_free_context() instead of random other calls. Actually it
was already used in the second case, but calling avcodec_close() is
redundant.
Don't crash if allocating a codec context fails.
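The resulting pattern is roughly this (a minimal sketch, not the actual
mpv code):

    #include <libavcodec/avcodec.h>

    static int open_decoder(const AVCodec *codec)
    {
        AVCodecContext *avctx = avcodec_alloc_context3(codec);
        if (!avctx)
            return -1; // allocation can fail; don't crash
        // ... configure, avcodec_open2(), decode ...
        // avcodec_free_context() closes the context as well, so a
        // separate avcodec_close() call would be redundant.
        avcodec_free_context(&avctx);
        return 0;
    }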
With some newer ANGLE builds, mapping can fail with "Failed to create
EGL surface" during playback. The reason is unknown, and it might just
be an ANGLE bug. Probe whether it works at init time to avoid the
problem.
Commit d5702d3b95 messed up the order of destruction of the elements: it
destroyed the avctx before the hwaccel uninit, even though the hwaccel
uninit could access avctx. This could happen with some old hwaccels
only, such as D3D ones before the new libavcodec hwaccel API.
Fix this by making use of the fact that avcodec_flush_buffers() will
uninit the underlying hwaccel. Thus we can be sure avctx is not using
any hwaccel objects anymore, and it's safe to uninit the hwaccel.
Move the hwdec_dev destruction code with it, because otherwise we would
in theory potentially create some dangling pointers in avctx.
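The intended teardown order looks roughly like this (struct hwdec and
hwdec_uninit() are hypothetical stand-ins for mpv's internals):

    #include <libavcodec/avcodec.h>

    struct hwdec;                       // hypothetical mpv-side type
    void hwdec_uninit(struct hwdec *);  // hypothetical cleanup hook

    static void uninit_decoder(AVCodecContext **avctx, struct hwdec *hw)
    {
        // Flushing makes libavcodec drop any hwaccel state it still
        // references, so avctx can't be using hwaccel objects anymore.
        avcodec_flush_buffers(*avctx);
        // Now it is safe to uninit the hwaccel and hwdec device.
        hwdec_uninit(hw);
        // Free the codec context last.
        avcodec_free_context(avctx);
    }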
In commit 6eb0bbe this was changed from xs[n] to use gl_format.chroma_w
indiscriminately, which broke chroma rendering when zooming/cropping.
The solution is to only use chroma_w for chroma planes.
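In other words, something along these lines (a sketch with approximate
names, not the literal diff):

    #include <stdbool.h>

    // Only chroma planes are subsampled, so only they get divided by
    // the chroma subsampling factor (gl_format.chroma_w in the renderer).
    static int plane_width(int image_w, int chroma_w, bool is_chroma)
    {
        int div = is_chroma ? chroma_w : 1;
        return (image_w + div - 1) / div;
    }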
Fixes #4592.
Previously, the entire convert_buffer was being copied to the destination
without regard to the fact that it may be packed and therefore smaller.
The allocated conversion buffer was also way too big:
bytes * (channels * samples) ** 2
instead of
bytes * channels * samples
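For scale (with made-up but plausible numbers): at 2 bytes per sample,
2 channels and 512 samples, the correct size is 2 * 2 * 512 = 2048
bytes, while the old formula allocated 2 * (2 * 512) ** 2 = 2097152
bytes.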
This shouldn't affect which formats are chosen, but it should speed up
the search by putting more common configurations earlier, so that a
working sample format and sample rate can be found sooner, obviating the
need to search them for each iteration of the outer loops.
The loop to select the native wasapi_format for the incoming audio was
not breaking correctly when it found the most desirable format. It
therefore ran to completion, leaving the least desirable format (u8) as
the choice.
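The fixed loop shape is roughly the following (only wasapi_formats[] is
real; the entry type and probe helper here are made up):

    #include <stdbool.h>
    #include <stddef.h>

    struct wasapi_fmt { int mp_format; };       // stand-in for the real entry
    bool try_format(const struct wasapi_fmt *); // hypothetical probe

    // Entries are ordered from most to least desirable, so the first
    // match must win.
    static const struct wasapi_fmt *pick_format(const struct wasapi_fmt *fmts,
                                                int num)
    {
        for (int n = 0; n < num; n++) {
            if (try_format(&fmts[n]))
                return &fmts[n]; // the old loop failed to stop here, so
                                 // the last entry (u8) was always chosen
        }
        return NULL;
    }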
Fixes #4582.
On optional hook points, we store to a temp FBO and then read from it
again to complete any operations that may still be left (e.g.
sigmoidization after MAIN/LINEAR).
In theory this mechanism should be reworked to avoid the temporary FBO
until the next time we actually need one - and also skip redundant
passes if the next thing we need *is* an FBO - but both of those are
tricky. Anyway, in the meantime, at least we can label the
(semi-)redundant passes that get generated when using user shaders.
This just indicates a fixed linear coefficient to multiply into the
signal, similar to the old option --target-brightness (but the inverse
thereof). Good for testing purposes, which is why I added it. (This also
corresponds somewhat to what zimg does)
This is the last sample format that was only in mpv and not in FFmpeg
(except the spdif special formats). It was a huge pain, even if the
removed code in af_lavrresample is pretty small after all.
Note that this drops S24 from the ao_coreaudio AOs too. I'm not sure
about the impact, but I expect it doesn't matter.
af_fmt_change_bytes() was unused as well, so remove that too.
I'd actually be somewhat interested in supporting this, as it could help
testing the S24 conversion code. But then again it's only a pain,
there's no immediate need, and it would require new options to make
ao_pcm.c select this output format at all.
Do conversion directly, using the infrastructure that was added before.
This also rewrites part of format negotiation, I guess.
I couldn't test the format that was used for S24 - my hardware does not
report support for it. So I commented it out, as it could be buggy. Testing
this with the wasapi_formats[] entry for 24/24 uncommented would be
appreciated.
Instead of the infrastructure added in the previous commit to do the
conversion within the AO.
If this is used, and snd_pcm_status_get_avail() returns more frames than
snd_pcm_write*() actually accepts, you will get some nice audio
corruption.
Also, this mutates the data passed via play(), which is rather fishy,
but sort of doesn't matter for now. Surely this will cause unintended
bugs and WTFs.
I plan to remove the S24 sample formats in mpv. It seems like we should
still support this _somehow_ in AOs though. So the idea is to convert
the data to more obscure representations (that would not be useful for
filtering etc. anyway) within the AO.
This commit adds helpers to enable this. ao_convert_fmt is meant to
provide mechanisms for this, rather than a generic audio format
description (as the latter leads only to overly generic misery). The
conversion also supports only cases which we think will be needed at
all.
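To illustrate the kind of conversion meant here, a minimal standalone
sketch (not the actual helper) that packs little-endian S32 samples with
the data in the MSBs down to packed 24 bit, in place:

    #include <stddef.h>
    #include <stdint.h>

    static void s32_to_s24_inplace(void *data, size_t num_samples)
    {
        uint8_t *b = data;
        for (size_t n = 0; n < num_samples; n++) {
            // Keep the 3 most significant bytes of each 4-byte sample.
            // Reading always stays ahead of writing, so doing this in
            // place is safe.
            b[n * 3 + 0] = b[n * 4 + 1];
            b[n * 3 + 1] = b[n * 4 + 2];
            b[n * 3 + 2] = b[n * 4 + 3];
        }
    }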
The main advantage of this approach is that we get S24 out of sight,
and that we could support other crazy formats (like S20). The main
disadvantage is that usually S32 will be selected (if both S32 and S24
are available), and there's no user control to force S24. That doesn't
really matter though, and at worst makes testing harder or will lead
to unpleasant arguments with audiophiles (they'd be wrong anyway).
ao_convert_fmt.pad_lsb is ignored for now; if we ever find a case in
which playing S32 with data in the LSBs breaks when playing it as a
padded 24 bit format, it may have to be implemented. (For example,
WAVEFORMATEXTENSIBLE recommends setting the
unused bits to 0 if wValidBitsPerSample implies LSB padding.)
It's now possible to request non-dumb mode as a user, even when not
using any non-dumb features. This change is mostly intended for testing,
so I can easily switch between dumb and non-dumb mode on default
settings. The default behavior is unaffected.
This backend is selected if vaapi is available, but vaapi-over-EGL is
not. This causes various issues around the forced RGB conversion, which
is done with fixed, usually incorrect parameters.
It seems the existing auto probing check is too weak, and doesn't really
prevent it from getting loaded. Fix this by adding a flag to not ever
load this during auto probing.
I'm still not deleting it, because it's useful for testing on nvidia
machines.
See #4555.
The current algorithm blew up when the color was negative, such as the
case when downscaling with dscale=mitchell or other algorithms that
introduce negative ringing. The simplest solution is to just slightly
change the calculation to force both parameters to be in-range.
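Schematically, the fix amounts to clamping the inputs before they hit
the curve (illustrative C, not the actual shader code):

    #include <math.h>

    // powf() returns NaN for negative bases with non-integer exponents,
    // which is how negative ringing can blow up a tone curve.
    static float safe_gamma(float x, float gamma)
    {
        x = fminf(fmaxf(x, 0.0f), 1.0f); // force the parameter in-range
        return powf(x, gamma);
    }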
Was at least somewhat broken, and is misleading. I don't really have an
idea why FFmpeg has two AVOptions here anyway. We don't need to care,
and I'm only aware of one user ever trying this option.
See #4579.
HOME isn't set by default on Windows. But if the user does set it,
prefer it by default.
Enables stuff like --log-file=~/mpv.log to work, even if HOME isn't set.
This is the first time I saw a user try to use this option, and
apparently it didn't work. I'm not exactly sure why, but the code seems
to be broken
anyway. Apart from not doing any error checking (neither mallocs nor
warning the user against invalid input), it forgets to add a 0
terminator.
Use the corresponding AVOption instead, which probably works.
See #4579.
This is exposed so that bjin/mpv-prescalers can use textureGatherOffset
for performance.
Since there are now quite a lot of parameters where it isn't quite clear
why they're all defined, add a paragraph to the man page that explains
them a bit.
This helps prevent unnatural, weirdly colorized blown-out highlights
for direct images of the sunlit sky and other way-too-bright HDR
content. I was debating whether to set the default at 1.0 or 2.0, but
went with the more conservative option that preserves more detail/color.
This logic doesn't really make sense. copy_img_tex already binds the
texture, so why would we bind it a second time? Furthermore, nothing
actually uses this return value. Must have been some left-over artifact
of a previous iteration of this function. Anyway, it's harmless, just
nonsensical. So remove it.
This is more efficient on my machine (nvidia), but only when applied to
groups of exactly 4 texels. So we switch to the more efficient
textureGather for groups of 4. Some notes:
- textureGatherOffset seems to be faster than textureGather by a
non-negligible amount, but for some reason, textureOffset is still
slower than a straight-up texture
- textureGather* requires GLSL 400; and at least on nvidia, this
requires actually allocating a GL 4.0 context.
- the code in opengl/common.c that clamped the GLSL version to 330 is
deprecated, because the old user shader style has been removed
completely in the meantime
- To combat the growing complexity of the polar sampling code, we drop
the antiringing functionality from EWA shaders completely, since it
never really worked well for EWA to begin with. (Horrific artifacting)
Instead of PostMessage, use SendNotifyMessage from the SendMessage
family of functions to wake up the Win32 thread from the VO thread. When
a message is sent rather than posted between threads, it ends up in a
different queue which is processed before posted messages and can be
processed in more places. This prevents a playback glitch when clicking
on the titlebar, but not moving the window. With PostMessage-based
wakeups, VOCTRLs could be delayed for up to 500ms after the user clicks
on the titlebar, but with SendNotifyMessage, they still complete in
under a millisecond.
Also, instead of handling WM_USER, process the dispatch queue before
every message. This ensures the dispatch queue is processed as soon as
possible. WM_NULL is used to wake up the window procedure in case there
are no other messages being processed.
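A minimal sketch of the wakeup (the surrounding mpv plumbing is
omitted):

    #include <windows.h>

    static void wakeup_win32_thread(HWND window)
    {
        // SendNotifyMessage() returns immediately for cross-thread
        // sends, but the message bypasses the posted-message queue.
        // WM_NULL carries no payload; it just makes the window
        // procedure run so it can drain the dispatch queue.
        SendNotifyMessageW(window, WM_NULL, 0, 0);
    }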
This sets AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH, which some hwaccels
using the new generic API respect. These do profile selection in
libavcodec, so it can be controlled only with an external flag, instead
of in mpv code like it used to be done.
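Applying the flag is a one-liner on the FFmpeg side (context setup
elided):

    #include <libavcodec/avcodec.h>

    static void allow_profile_mismatch(AVCodecContext *avctx)
    {
        // Hwaccels using the new generic API check this flag when
        // selecting a profile inside libavcodec.
        avctx->hwaccel_flags |= AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH;
    }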
They have been deprecated for a decade, yet you're forced to explicitly
deal with them at every step, or they will break your shit.
FFmpeg insists on keeping them, because libavfilter is too stupid to
deal with color ranges properly. Ridiculous.
- change asserts to silent exits
- check all pointers before use
- move the p->pass initialization code to the right place
This should hopefully cut down on the amount of crashing by making the
code fundamentally more robust, while also fixing a concrete issue where
opengl-cb failed to initialize p->pass.
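The first two points boil down to a pattern like this (the type here is
a hypothetical stand-in for the real private state):

    #include <stdbool.h>

    struct priv { void *pass; };

    static bool pass_usable(struct priv *p)
    {
        // Check pointers and exit silently instead of asserting, so a
        // bad or half-initialized state can't crash the host program.
        return p && p->pass;
    }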