This happens to be better for the af_volume filter (for softvol), and
saves some code too. It's "better" because you want to affect the
final filtered audio, such as after a manually inserted drc filter.
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
This is done because the old mechanism was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes #3322.
I made this call up because I sort of thought it made sense. I'm
now convinced that it does not, and even is actively harmful. I'm still
quite in the dark about how the Direct3D 11 video API is supposed to
work with synchronization, but at least for normal graphics this call
would not make much sense.
When selecting a device that simply doesn't exist with --audio-device,
AudioUnit will still initialize and start playback without complaining.
But it will never call the audio render callback, which leads to audio
playback simply not progressing.
I couldn't find a way to get AudioUnit to report an error at all, so
here's a crappy hack that takes care of this in most cases. We assume
that all devices have a kAudioDevicePropertyDeviceIsAlive property.
Invalid devices will error when querying the property (with 'obj!' as
status code).
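For illustration, the property query amounts to something like this (a
minimal sketch using plain CoreAudio; the actual integration in the ao
code differs):

    #include <stdbool.h>
    #include <CoreAudio/CoreAudio.h>

    // Invalid device IDs make AudioObjectGetPropertyData() fail with the
    // 'obj!' status code (kAudioHardwareBadObjectError).
    static bool device_is_alive(AudioDeviceID device)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioDevicePropertyDeviceIsAlive,
            .mScope    = kAudioObjectPropertyScopeGlobal,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        UInt32 alive = 1;
        UInt32 size = sizeof(alive);
        OSStatus err = AudioObjectGetPropertyData(device, &addr, 0, NULL,
                                                  &size, &alive);
        return err == noErr && alive;
    }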
This is not the correct fix, because we try to second-guess AudioUnit's
behavior by accessing a lower-level API. Suggestions welcome.
Although it appears to be accepted by the function, MSGL_STATUS messages
are never passed to the client API. Consequently "status" has the same
meaning as "v" and is useless.
Commit 771a8bf5 added code to avoid unnecessary vf_reconfig() calls for
unrelated reasons, but forgot to consider that it has to be called at
least once if the input format changes. As a consequence it got "stuck"
due to not being able to decode more frames.
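The fix boils down to making the skip condition stricter; roughly this
(self-contained sketch, helper and type names invented, not the literal
code):

    #include <stdbool.h>

    struct params { int imgfmt, w, h; };

    static bool params_equal(struct params a, struct params b)
    {
        return a.imgfmt == b.imgfmt && a.w == b.w && a.h == b.h;
    }

    // Only skip vf_reconfig() if the chain was configured at least once
    // AND the input format is unchanged; commit 771a8bf5 effectively
    // dropped the first half of this condition.
    static bool needs_reconfig(bool configured_once, struct params cur,
                               struct params new)
    {
        return !configured_once || !params_equal(cur, new);
    }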
vo_frame can have more than 1 frame - the extra frames are future
references, which are sometimes useful for filtering (vo_opengl
interpolation). There's no harm in reducing the number of frames sent to
the VO to the number of future frames it requested, so do that.
Doesn't actually reduce the number of concurrently in-use frames in
practice.
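The clamping itself is trivial; a sketch with assumed names:

    // Cap the number of frames passed along at the current frame plus
    // the VO's requested number of future reference frames.
    static int capped_num_frames(int num_frames, int requested_future_frames)
    {
        int max = 1 + requested_future_frames;
        return num_frames < max ? num_frames : max;
    }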
If the status line is wider than the reported terminal size, then cut it
off instead of causing the terminal to scroll down for the next line.
This is done in the most primitive way possible, assuming ASCII.
This was actually done in the past as far as I'm aware; do it again.
(Probably differently.)
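Under the ASCII assumption the cut really is as primitive as it sounds;
a sketch:

    #include <string.h>

    // Clip the status line to the terminal width so the terminal never
    // wraps/scrolls. Assumes 1 byte == 1 terminal cell (plain ASCII,
    // no escape sequences).
    static void clip_status_line(char *line, size_t term_width)
    {
        if (strlen(line) > term_width)
            line[term_width] = '\0';
    }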
Some filters support VFCTRL_SET_DEINTERLACE. This affects most hardware
deinterlace filters. They can be inserted by the user manually, or auto-
inserted by vf.c itself as conversion filter (vf_d3d11vpp). In these
cases, we shouldn't insert or remove filters ourselves, and instead
VFCTRL_SET_DEINTERLACE should be invoked to switch the mode.
This wasn't done correctly in the recently refactored code and could
have broken with --deinterlace. (The refactor only considered switching
via property in this case.) Fix it by making it a proper part of the
filter_reconfig() function, and making set_deinterlacing() (which is
called by the property handler) merely call filter_reconfig() in all
cases to do the real work.
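The dispatch pattern involved looks roughly like this (self-contained
sketch with made-up types, not mpv's actual vf.c):

    #include <stdbool.h>
    #include <stddef.h>

    enum { VFCTRL_SET_DEINTERLACE = 1 };

    struct vf_instance {
        // Returns true if the filter handled the request.
        bool (*control)(struct vf_instance *vf, int request, void *data);
        struct vf_instance *next;
    };

    // Ask each filter to switch deinterlacing in place; if one handles
    // it, nothing needs to be inserted/removed and no rebuild is needed.
    static bool set_deint_inplace(struct vf_instance *chain, bool enable)
    {
        int arg = enable;
        for (struct vf_instance *vf = chain; vf; vf = vf->next) {
            if (vf->control && vf->control(vf, VFCTRL_SET_DEINTERLACE, &arg))
                return true;
        }
        return false;
    }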
We can even avoid rebuilding the filter chain - though only if no other
auto-filters are inserted. It probably also provides a slightly cleaner
way to implement functionality in the VO while still inserting video
filter fallbacks correctly if required.
The test scenario at hand was hardware decoding a file with d3d11 and
with deinterlacing enabled. The file switches to a format that is not
hardware-decodable mid-stream. This failed, because it tried to call
vf_reconfig() with the old filters inserted, which was fatal due to
vf_d3d11vpp accepting only hardware input formats.
Fix this by always strictly removing all auto-inserted filters
(including the deinterlacing one), and reconfiguring only after that.
Note that this change is good for other situations too, because we
generally don't want to use a hardware deinterlacer for software
decoding by default. They're not necessarily optimal, and VAAPI VPP even
has incomprehensible deinterlacer bugs specifically with software frames
not coming from a hardware decoder.
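Schematically, the reconfig order becomes (a fragment with invented
helper names; the point is the strict ordering):

    // Strictly drop all auto-inserted filters first (deinterlacing
    // included), reconfigure with the new input format, and only then
    // re-insert whatever the new format actually needs.
    remove_auto_inserted_filters(chain);
    vf_reconfig(chain, &new_params);
    insert_needed_auto_filters(chain, &new_params);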
Instead of using the "vf" command code (which changes filters at runtime
on user input), use the general filter-insertion code. The latter was
added later, and is more suitable for automatically inserted filters.
The old code failed in particular when using watch-later saving, which
stored the filter list in the resume config file. If a user changed the
hardware decoding mode via command line, the stored filter chain was out
of date and could cause failure due to not working with hardware or
software decoding mode. Storing the deinterlace filter in the filter
list was unavoidable, because it was part of the user state. (The new
code only edits the actually instantiated filters.)
These mostly happen in situations where the correct behavior is
relatively new and not found in the wild (therefore not worth
implementing) and/or extremely complicated (and thus not worth worrying
about the potential edge cases and UI changes).
Still, it's best to document these where they happen to guide the poor
souls maintaining these files in the future.
This uses eglPostSubBufferNV to trigger ANGLE to check the window size
and update the size of the swapchain to match, which is recommended
here: https://groups.google.com/d/msg/angleproject/RvyVkjRCQGU/gfKfT64IAgAJ
With the D3D11 backend, using eglPostSubBufferNV with a 0-sized update
region will even skip the Present() call, meaning it won't block for a
vsync period. Hopefully ANGLE will have a less hacky way of doing this
in the future. See the relevant ANGLE issue: http://anglebug.com/1438

Fixes #3301
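Concretely, the workaround is a single call with an empty update region
(a sketch; the display/surface handles come from the existing EGL state,
and the extension function is loaded via eglGetProcAddress):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    // A 0x0 update region makes ANGLE revalidate the window size and
    // resize the swapchain; on the D3D11 backend it also skips
    // Present(), so it doesn't block for a vsync period.
    static void angle_update_window_size(EGLDisplay display, EGLSurface surface)
    {
        PFNEGLPOSTSUBBUFFERNVPROC PostSubBufferNV =
            (PFNEGLPOSTSUBBUFFERNVPROC)eglGetProcAddress("eglPostSubBufferNV");
        if (PostSubBufferNV)
            PostSubBufferNV(display, surface, 0, 0, 0, 0);
    }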
This can for example happen with vo_opengl_cb, if it is used with a GL
implementation that does not support FBOs. (mpv itself should never
attempt to use FBOs if they're not available.)
Without this check it would trigger an assert() in our dummy
glBindFramebuffer wrapper.
Suspected cause of #3308, although it's still unlikely.
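The guard amounts to a fragment like this (the capability flag and
logging macro names are mpv-internal and assumed here):

    // Bail out with an error instead of hitting the assert() in the
    // dummy glBindFramebuffer wrapper when the GL implementation has
    // no FBO support.
    if (!(gl->mpgl_caps & MPGL_CAP_FB)) {
        MP_ERR(p, "Rendering to FBO requested, but FBOs are unsupported.\n");
        return false;
    }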
For example it should be set to IMGFMT_D3D11NV12 if it isn't already.
Otherwise, an assertion in vf.c could trigger.
This probably couldn't be provoked yet.
This moves some of the bulky user-shader specific logic into the file
dedicated to it. Rather than expose video.c state, variable lookup is
now done via a simulated closure.
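In C, a "simulated closure" is just a callback bundled with an opaque
context pointer; the lookup interface is along these lines (names
invented for illustration):

    #include <stdbool.h>

    // The user-shader code asks for a variable by name; the callback
    // resolves it against whatever renderer state 'priv' points at, so
    // video.c internals stay hidden from the user-shader module.
    struct var_lookup {
        void *priv;
        bool (*lookup)(void *priv, const char *name, float out[4]);
    };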
This greatly improves the result when decoding typical (ST.2084) HDR
content, since the job of tone mapping gets significantly easier when
you're only mapping from 1000 to 250, rather than 10000 to 250.
The difference is so drastic that we can now even reasonably use
`hdr-tone-mapping=linear` and get a very perceptually uniform result
that is only slightly darker than normal (to compensate for the extra
dynamic range).
Due to weird implementation details, this only seems to be present on
keyframes (or something like that), so we have to cache the last seen
value for the frames in between.
Also, in some files the metadata is just completely broken /
nonsensical, so I decided to apply a simple heuristic to detect such
broken metadata.
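Both workarounds fit in a few lines; a sketch with invented names (the
exact plausibility range here is made up):

    #include <stdbool.h>

    // Keep the last plausible signal peak (in cd/m^2), since the
    // metadata may only be present on keyframes and is sometimes
    // nonsensical.
    static float update_sig_peak(float *cached, float reported)
    {
        bool plausible = reported > 10.0f && reported <= 10000.0f;
        if (plausible)
            *cached = reported;
        return *cached; // value used for frames without (sane) metadata
    }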
This involves multiple changes:
1. Brightness metadata is split into nominal peak and signal peak.
For a quick and dirty explanation: nominal peak is the brightest value
that your color space can represent (i.e. the brightness of an encoded
1.0), and signal peak is the brightest value that actually occurs in
the video (i.e. the brightest thing that's displayed).
2. vo_opengl uses a new decision logic to figure out the right nom_peak
and sig_peak for all situations. It also does a better job of picking
the right target gamut/colorspace to use for the OSD. (Which still is,
and still should be, treated as sRGB.) This change in logic also
fixes #3293 en passant.
3. Since it was growing rapidly, the logic for auto-guessing / inferring
the right colorimetry configuration (in pass_colormanage) was split from
the logic for actually performing the adaptation (now pass_color_map).
Right now, the new logic doesn't do a whole lot since HDR metadata is
still ignored (but not for long).
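To make the nominal/signal distinction concrete: ST.2084 can encode
values up to 10000 cd/m², so that is the nominal peak; if the brightest
sample actually occurring in the video is 1000 cd/m², that is the signal
peak. Tone mapping to a 250 cd/m² display then only needs to compress
1000 → 250 instead of 10000 → 250.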
This has two reasons:
1. I tend to add new fields to this metadata, and every time I've done
so I've consistently forgotten to update all of the dozens of places in
which this colorimetry metadata might end up getting used. While most
usages don't really care about most of the metadata, sometimes the
intent was simply to “copy” the colorimetry metadata from one struct to
another. With this being inside a substruct, those lines of code can now
simply read a.color = b.color without having to care about added or
removed fields. (See the sketch below.)
2. It makes the type definitions nicer for upcoming refactors.
In going through all of the usages, I also expanded a few where I felt
that omitting the “young” fields was a bug.
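A rough sketch of the resulting shape (field names assumed, not
necessarily mpv's exact definitions):

    // All colorimetry metadata lives in one substruct, so propagating
    // it between params structs is a single assignment.
    struct mp_colorspace {
        int space;      // YUV matrix (BT.601, BT.709, BT.2020-NC, ...)
        int levels;     // limited vs. full range
        int primaries;  // BT.709, BT.2020, ...
        int gamma;      // transfer function (sRGB, ST.2084, ...)
        float nom_peak; // brightest encodable value
        float sig_peak; // brightest value occurring in the video
    };

    struct mp_image_params {
        int imgfmt, w, h;
        struct mp_colorspace color; // a.color = b.color copies all of it
    };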
Commit 883d3114 seems to have (accidentally?) dropped the FBOTEX_FUZZY
from the output_fbo resize, which means that current master will keep
resizing and resizing the FBO as you change the window size, introducing
severe memory leaks after a while. (Not sure why that would cause
memory leaks, but I blame nvidia.)
Either way, it's bad for performance too, so it's worth fixing.
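The gist of FBOTEX_FUZZY (a sketch; the real flag and logic live in
mpv's fbotex code):

    #include <stdbool.h>

    // With the fuzzy flag, the current FBO size is treated as a lower
    // bound, so window size changes don't trigger a reallocation on
    // every frame.
    static bool fbo_needs_realloc(int cur_w, int cur_h, int w, int h,
                                  bool fuzzy)
    {
        if (fuzzy)
            return cur_w < w || cur_h < h; // only grow when too small
        return cur_w != w || cur_h != h;   // exact size required
    }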
vo_vaapi is the only thing which can't scale RGBA on the GPU. (Other
cases of RGBA scaling are handled in draw_bmp.c for some reason.)
Move this code and get rid of the osd_conv_cache thing.
Functionally, nothing changes.
This affects VOs (or other code which renders OSD) which do not support
the LIBASS format, but only RGBA. Instead of having a converter stage in
osd.c, make mp_ass_packer_pack() output directly in RGBA.
In general, this is work towards refcounted subtitle images.
Although we could keep the "converter" design, doing it this way seems
simpler, at least considering the current situation with only 2 OSD
formats. It also prevents copying & packing the data twice, which will
lead to better performance. (Although I guess this case is not important
at all.)
It also fixes --force-rgba-osd-rendering when used with vo_opengl,
vo_vdpau, and vo_direct3d.
The intention is to let mp_ass_packer_pack() produce different output
for the RGBA and LIBASS formats. VOs (or whatever generates the OSD)
currently do not signal a preferred format, and this mechanism just
exists to switch between RGBA and LIBASS formats correctly, preferring
LIBASS if the VO supports it.
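Sketched interface (parameter names assumed): the caller states the
format it wants, and the packer renders/packs directly into it:

    #include <stdbool.h>
    #include <ass/ass.h>

    struct mp_ass_packer;
    struct sub_bitmaps;

    // preferred_osd_format selects between RGBA and LIBASS output;
    // LIBASS is preferred if the VO supports it.
    void mp_ass_packer_pack(struct mp_ass_packer *p,
                            ASS_Image **image_lists, int num_image_lists,
                            bool changed, int preferred_osd_format,
                            struct sub_bitmaps *out);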
This is how PBOs are normally supposed to be used.
Unfortunately I can't see any absolute improvement with nVidia binary
drivers and playing 4K material. Compared to the "old" PBO path with 1
buffer, the measured GL time decreases significantly, though.
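The "normal" PBO usage referred to is round-robin streaming: the CPU
writes into one buffer while the GPU reads from another. A generic
fragment (plain GL names; buffer count and surrounding state assumed):

    // Rotate through N PBOs; map-write the current one, then let
    // glTexSubImage2D() source from it asynchronously (the data pointer
    // becomes an offset into the bound PBO, here 0).
    pbo->index = (pbo->index + 1) % NUM_PBO_BUFFERS;
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo->buffers[pbo->index]);
    void *ptr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT |
                                 GL_MAP_INVALIDATE_BUFFER_BIT);
    memcpy(ptr, src, size);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h, format, type, NULL);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);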
GL generally does not support flipping the image on upload, meaning
negative strides are not supported. vo_opengl handles this by flipping
rendering if the stride is inverted, and gl_pbo_upload() "ignores"
negative strides by uploading without flipping the image.
If individual planes had strides with different signs, this broke. The
flipping affected the entire image, and only the sign of the first plane
was respected.
This is just a crazy corner case that will never happen, but it turns
out this is quite simple to support, and actually improves the code
somewhat.
This introduces a gl_pbo_upload_tex() function, which works almost like
our gl_upload_tex() glTexSubImage2D() wrapper, except it takes a struct
which caches the PBO handles. It also takes the full texture size (to
make allocating an ideal buffer size easier), and a parameter to disable
PBOs (so that the caller doesn't have to duplicate the gl_upload_tex()
call if PBOs are disabled or unavailable).
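The shape of the new function, roughly (declaration sketch; names and
arity assumed, GLuint/GLenum as in any GL header, and "struct GL" stands
for mpv's GL function table):

    struct GL;

    struct gl_pbo_upload {
        GLuint buffers[2];  // cached PBO handles (count assumed)
        size_t buffer_size;
        int index;          // round-robin position
    };

    // Like the gl_upload_tex()/glTexSubImage2D() wrapper, plus: a state
    // struct caching the PBO handles, the full texture size (to allocate
    // an ideal buffer), and use_pbo to fall back to plain uploads
    // without a separate call site.
    void gl_pbo_upload_tex(struct gl_pbo_upload *pbo, struct GL *gl,
                           bool use_pbo, GLenum target, GLenum format,
                           GLenum type, int tex_w, int tex_h,
                           const void *dataptr, int stride,
                           int x, int y, int w, int h);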
This also removes warnings and fallbacks on PBO failure. We just
silently try using PBOs on every frame, and if that fails at some point,
revert to normal texture uploads. Probably doesn't matter.
This makes the geometry of the sizing borders more like the ones in
Windows 10. It also fixes an off-by-one error that made the right and
bottom borders thinner than the left and top borders, which made it
difficult to resize the window when using the Windows 7 classic theme
(because it has pretty thin sizing borders to begin with).