GLES requires an explicit precision qualifier here. Some more common
sampler types have default precisions, but usampler2D does not. Newer
ANGLE builds enforce this more strictly than older builds, so this
wasn't caught before.
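For illustration, a prologue along these lines satisfies the
requirement (a sketch only, not mpv's actual shader header):

    /* GLES 3.0 fragment shaders define no default precision for
     * usampler2D (nor for float), so both have to be declared
     * explicitly in the shader prologue. */
    static const char *const gles_frag_prologue =
        "#version 300 es\n"
        "precision mediump float;\n"
        "precision highp usampler2D;\n";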
Fixes #2761.
GLES does not support high bit depth fixed point textures for unknown
reasons, so direct 10 bit input is not possible. But we can still use
integer textures, which are supported by GLES 3.0. These store integer
data just like the standard fixed point textures, except they are not
normalized on sampling. They also don't support bilinear filtering, and
require a special sampler ("usampler2D").
While these texture formats enable us to shuffle the data to the GPU,
they're rather impractical with the requirements mentioned above and our
current architecture. One problem is that most code assumes it can
always use bilinear scaling (even if bilinear is never used when using
appropriate scale/cscale options). Another is that we don't have any
concept of running a function on a texture in a uniform way.
So for now, run a simple conversion step through an FBO. The FBO will use
the rgba16f format normally, which gives enough bits for 10 bit, and
will at least gracefully degrade with higher depth input.
This is bound to be much slower than a more "direct" method, but at
least it works and is simple to implement.
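As a rough sketch of what the conversion step amounts to (illustrative
GLSL embedded as a C string; the uniform names such as tex_mul are
assumptions, not the actual code):

    /* Illustrative conversion shader: sample the non-normalized integer
     * texture and scale it into [0, 1] for the rest of the rendering
     * chain. tex_mul would be 1.0 / ((1 << bit_depth) - 1), e.g.
     * 1.0 / 1023.0 for 10 bit input. */
    static const char *const integer_conv_frag =
        "uniform highp usampler2D tex;\n"
        "uniform mediump float tex_mul;\n"
        "in vec2 texcoord;\n"
        "out vec4 color;\n"
        "void main() {\n"
        "    color = vec4(texture(tex, texcoord)) * tex_mul;\n"
        "}\n";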
The odd change of function call order in init_video() is to properly
disable "dumb mode" (no FBO use) if these texture formats are in use.
This was never reset, which absolutely can't be right. If the renderer
somehow switches back to another codepath, it certainly has to be reset.
Maybe this was hard to hit, as the normalization is going to be
idempotent in simpler cases (like rendering RGBA input).
Also get rid of the "merged" variable.
This seems generally easier when using libmpv (and was already requested
and implemented before: see commit 327a779a; it was reverted some time
later).
With the weird internal logic we have to deal with, in particular the
--softvol=no case (using system volume), and using the audio API's mixer
(--softvol=auto on some systems), we still can't avoid all glitches and
corner cases that complicate this issue so much. API users are
recommended either to use --softvol=yes or auto, or to watch the new
mixer-active property and to assume the volume/mute properties only
have meaningful values while the mixer is active (a sketch of the
latter follows the glitch list below).
Remaining glitches:
- changing the volume/mute properties has no effect if no internal mixer
is used (--softvol=no) and the mixer is not active; the actual mixer
controls do not change, only the property values
- --volume/--mute do not have an effect on the volume/mute properties
before mixer initialization (strictly speaking, the options are only
applied during mixer init)
- volume-max is 100 while the mixer is not active
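A minimal libmpv client sketch of that recommendation (error handling
omitted; not taken from any actual client):

    #include <mpv/client.h>
    #include <stdio.h>
    #include <string.h>

    static void watch_mixer(mpv_handle *mpv)
    {
        /* Only trust volume/mute once mixer-active reports true. */
        mpv_observe_property(mpv, 0, "mixer-active", MPV_FORMAT_FLAG);
        mpv_observe_property(mpv, 0, "volume", MPV_FORMAT_DOUBLE);
        for (;;) {
            mpv_event *ev = mpv_wait_event(mpv, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id != MPV_EVENT_PROPERTY_CHANGE)
                continue;
            mpv_event_property *prop = ev->data;
            if (!prop->data)
                continue;
            if (strcmp(prop->name, "mixer-active") == 0)
                printf("mixer active: %d\n", *(int *)prop->data);
            else if (strcmp(prop->name, "volume") == 0)
                printf("volume: %.1f\n", *(double *)prop->data);
        }
    }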
Commit b53cb8de increased this by the number of additionally delayed
surfaces. But since this is only enabled in copy-back mode (which is
what process_image is about), the other additional surfaces, which
account for the direct rendering case, can be ignored.
Often requested. The main argument, that prominent scalers like sharpen
change the image even if no scaling happens, disappeared anyway.
("sharpen", unsharp masking, is neither prominent nor a scaler anymore.
This is an artifact from MPlayer, which fuses unsharp masking with
bilinear scaling in order to make it single-pass, or so.)
Another fix for the crazy and insane nvidia preemption behavior.
This time, the situation is that we are using vo_opengl with vdpau
interop, and that vdpau got preempted in the background while mpv was
sitting idly. This can be e.g. reproduced by using:
--force-window=immediate --idle --hwdec=vdpau
and switching VTs. Then after switching back, load a video file.
This will not let mp_vdpau_handle_preemption() perform preemption
recovery, simply because it will do so only once vdp_decoder_create()
has been called. There are some other API calls which trigger
preemption detection, but many don't.
Due to the way the libavcodec API works, vdp_decoder_create() is called
way too late, namely when get_format returns. libavcodec notices that
creating the decoder fails, and continues calling get_format without
the vdpau format. We could perhaps force it to reinit again (by adding
a call to vdpau.c that checks for preemption and sets
hwdec_request_reinit), but this seems like too much of a mess.
Solve it by calling an API function in mp_vdpau_handle_preemption()
that empirically does trigger preemption detection:
output_surface_put_bits_native(). This call is useless, and in fact
should be doing nothing (empty update VdpRect). There's a slight chance
that in theory it will slow down operation, but in practice it's bound
to be harmless. It's likely the cheapest and simplest API call I've
found that can trigger the fallback this way.
(The driver is closed source, so it was up to trial & error.)
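Roughly, the probe looks like this (a sketch under the assumption that
the function pointer was resolved via vdp_get_proc_address; not the
actual code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <vdpau/vdpau.h>

    static bool display_preempted(VdpOutputSurfacePutBitsNative *put_bits,
                                  VdpOutputSurface surface)
    {
        /* Zero-sized destination rect: the call should update nothing,
         * but it still makes the driver report preemption if it
         * happened while we were idle. */
        VdpRect empty = {0, 0, 0, 0};
        uint32_t dummy = 0;
        const void *data[1] = {&dummy};
        uint32_t pitches[1] = {sizeof(dummy)};
        VdpStatus st = put_bits(surface, data, pitches, &empty);
        return st == VDP_STATUS_DISPLAY_PREEMPTED;
    }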
Also, when initializing decoding, allow initial preemption recovery,
which is needed to pass the test mentioned above.
With the format left untouched, this would just try to reinit with a
spdif format again.
We're not clearing the format in reset_audio_state() so the audio chain
can be recreated any time without having to wait for a frame to be
decoded.
Some VOs had support for these - remove them.
Typically, these formats are only useful in cases where RGB software
conversion with libswscale is faster than letting the VO/GPU do it
(i.e. almost never). For the sake of testing this case,
keep IMGFMT_RGB565. This is the least messy format, because it has no
padding/alpha bits with unknown semantics.
Note that decoding to these formats still works. We'll let libswscale
repack the data to whatever the VO in use can take.
Notably, the address of the enumerator->count member is passed to
IMMDeviceCollection::GetCount(), which expects a UINT variable, not an int. How
did this ever work?
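A sketch of the corrected call, with pDevices standing in for the
device collection at hand:

    #define COBJMACROS
    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <stdio.h>

    static HRESULT count_devices(IMMDeviceCollection *pDevices)
    {
        UINT count = 0;  /* must be UINT, not int: GetCount() writes a UINT */
        HRESULT hr = IMMDeviceCollection_GetCount(pDevices, &count);
        if (FAILED(hr))
            return hr;
        printf("%u endpoints\n", count);
        return S_OK;
    }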
Previously, if the enumerator found no devices, attempting to get the default
device with IMMDeviceEnumerator::GetDefaultAudioEndpoint would result in the
cryptic (and undocumented) E_PROP_ID_UNSUPPORTED. This way, the user
gets a better indication of what exactly is wrong, and other possible
triggers for this error are isolated.
At least DTV_ENUM_DELSYS is not available in older versions.
It's hard to tell when this identifier was introduced, but it appears
it was probably added in API version 5.5.
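One possible build-time guard (an assumption about how to detect it,
not the project's actual configure check; HAVE_DVB_ENUM_DELSYS is a
made-up name):

    /* DTV_ENUM_DELSYS is a preprocessor define in
     * <linux/dvb/frontend.h>, so older headers can be detected with a
     * plain #ifdef. */
    #include <linux/dvb/frontend.h>

    #ifdef DTV_ENUM_DELSYS
    #define HAVE_DVB_ENUM_DELSYS 1
    #else
    #define HAVE_DVB_ENUM_DELSYS 0
    #endif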
Even though the timing logic is correct, it tends to mess with looping
videos and such in unappreciated ways.
It also has to be admitted that most file formats seem not to properly
define the duration of the last video frame (or libavformat does not
export it in a useful way), so whether or not we should use the demuxer
reported framerate for the last frame is questionable. (Still, why would
you essentially just discard the last frame?)
The timing logic is kept, but disabled for video with "normal" FPS
values. In particular, we want to keep it for displaying images, which
implicitly set the frame duration to 1 second by reporting 1 FPS. It's
also good for slide shows with mf://.
Fixes #2745.
Most text subtitles are read completely on loading (libavformat works
this way, and there are good reasons to do it on the higher levels too).
This leads to some messy problems. For example, the subtitle path is the
only one which might read packets during decoder initialization. This is
not necessary; get rid of it.
This fixes a potential problem of seeking to position 0 on init for
image subtitles if the start position is not at the beginning of the
timeline.
It doesn't need to be part of the big context, but is strictly part of
shuffling data from the audio filters to the audio output, and thus
belongs in ao_chain.
It also turns out that clearing it in clear_audio_output_buffers() is
completely redundant.
(Of course ao_buffer is an abomination in the first place and shouldn't
exist at all.)
vo_chain_uninit() isn't supposed to care much about the decoder
(although decoders and outputs still go strictly together, so there is
not much of an actual difference now).
Also unset track.d_video correctly.
Remove a stale declaration from dec_video.h as well.
Similar to the video path. dec_audio.c now handles decoding only. It
also looks very similar to dec_video.c, and actually contains some of
the rewritten code from it. (A further goal might be unifying the
decoders, I guess.)
High potential for regressions.
Using the new API is a necessity for multiple-delivery-system
devices, since the old API does not offer a way to switch
the delivery system of the card.
This should in principle also be done for DVB-T / ATSC,
especially since most DVB-T devices also support DVB-C,
but I cannot test such an implementation due to lack of hardware
(currently), so it seems better to leave the existing, tested code path
in place for now.
No need to use all capital letters, and don't warn if DVB-S2 is
supported in addition, since we already handle that in the DVB-S case.
Also, print the delivery system number for still unhandled
delivery systems to simplify debugging.
Saves one unnecessary additional ioctl per tuning
by just reusing existing information.
Should also fix the case of multiple supported delivery types
since we now rely on the initial query from the chosen
configuration after channel list parsing
instead of requerying the device.
Most common case would be DVB-C / DVB-T combination cards.
Cards with multiple delivery systems are only supported
starting from DVBv5 API (Kernel 2.6.38).
In this case, we loop over all delivery systems and just treat each of
them as a separate card would be treated: each gets its own TUNER-type,
channel-list parsing, etc.
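A hedged sketch of the per-frontend enumeration (real ioctl and
property names, but not this change's actual code):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/dvb/frontend.h>

    static int list_delivery_systems(int fe_fd)
    {
        struct dtv_property prop = { .cmd = DTV_ENUM_DELSYS };
        struct dtv_properties props = { .num = 1, .props = &prop };
        if (ioctl(fe_fd, FE_GET_PROPERTY, &props) < 0)
            return -1;
        /* One entry per supported delivery system; each would get its
         * own tuner type and channel list, as if it were its own card. */
        for (unsigned i = 0; i < prop.u.buffer.len; i++)
            printf("delivery system %d\n", prop.u.buffer.data[i]);
        return prop.u.buffer.len;
    }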
Only request the current screen configuration instead of polling for new
screens, too. We're not interested in detecting any new screens as we're
merely enumerating what is currently connected and configured.
On some hardware (like mine) calling XRRGetScreenResources will stall
X11 for about 10 to 20 seconds. This has annoyed me for a few months
now and almost made me switch to VLC ;)
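The change boils down to something like this (a sketch assuming plain
Xlib/XRandR, not the exact vo code):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    static XRRScreenResources *get_screens(Display *dpy, Window root)
    {
        /* XRRGetScreenResourcesCurrent() returns the server's cached
         * configuration; XRRGetScreenResources() may force a hardware
         * re-probe, which is what caused the multi-second stalls. */
        return XRRGetScreenResourcesCurrent(dpy, root);
    }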
Signed-off-by: wm4 <wm4@nowhere>