Instead of requiring the decoder to set the PTS directly on the
dec_audio context (including handling absence of PTS etc.), transfer the
packet PTS to the decoded audio frame. Marginally simpler, and gives
more control to the generic code.
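A minimal sketch of the idea (field names are illustrative, not the exact
dec_audio code):

    // Generic decode loop: stamp the frame with the packet PTS when the
    // decoder didn't provide one itself. MP_NOPTS_VALUE marks "no timestamp".
    if (frame->pts == MP_NOPTS_VALUE)
        frame->pts = packet->pts;   // may still be MP_NOPTS_VALUE; the
                                    // generic code deals with that later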
Notes:
- Unfortunately the only way to talk to EGL from within DRM I could find
involves linking with GBM (Generic Buffer Management for Mesa); a rough
sketch of that setup follows these notes.
Because of this, I'm pretty sure it won't work with proprietary NVidia
drivers, but then again, last time I checked NVidia didn't offer
proper screen resolution for VT.
- VT switching doesn't seem to work at all. It's worth mentioning that
using vo_drm before the introduction of the VT switcher had an anomaly
where the user could switch to another VT and input text to it, while
video played on top of that VT. However, that isn't the case with
drm_egl: I can't switch to another VT during playback like this. This
makes me think that it's either a limitation coming from my firmware or
from EGL/KMS itself rather than a bug in my code. Nonetheless, I still
left (untestable) VT switching code in place, in case it's useful to
someone else.
- The mode_id, connector_id and device_path should be configurable for
power users and people who wish to watch videos on a non-primary screen.
Unfortunately I didn't see anything that would allow OpenGL backends to
register their own set of options. At the same time, adding them to the
global namespace is pointless.
- A few dozen lines could be shared with vo_drm (setting up VT
switching, most of the code behind page flipping). I don't have any
strong opinion on this.
- Sometimes I get minor visual glitches. I'm not sure if there's a race
condition of some sort, an uninitialized variable (doubtful), or if it's
a buggy driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)
- .config and .control are very minimal.
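For reference, here is the rough GBM/EGL bring-up mentioned in the first
note. This is a sketch under assumptions (fixed /dev/dri/card0 path, no
error handling), not the actual drm_egl code:

    #include <fcntl.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    // Open the DRM node, wrap it in a GBM device, and hand that to EGL.
    static EGLDisplay init_egl_on_drm(int width, int height,
                                      struct gbm_surface **out_surf)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        struct gbm_device *gbm = gbm_create_device(fd);
        // Mesa's EGL accepts a GBM device as the "native display".
        EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
        eglInitialize(dpy, NULL, NULL);
        // The GBM surface provides the buffers that later get page-flipped
        // via KMS (drmModePageFlip on each locked front buffer).
        *out_surf = gbm_surface_create(gbm, width, height, GBM_FORMAT_XRGB8888,
                                       GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
        // eglChooseConfig() / eglCreateWindowSurface() / eglMakeCurrent()
        // follow, with *out_surf as the native window.
        return dpy;
    }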
Signed-off-by: wm4 <wm4@nowhere>
While it seemed like a pretty good idea at first, it's just a dead end
and works only in the simplest cases. While it may or may not help
slightly with audio sync mode, the display-sync mode already compensates
for this in a better way. The main issue is that timestamps at this layer
are not in order, so it can look at single timestamps only.
Essentially we'd use something random, just because it's part of the set
of traditionally used ALSA channel mappings. But each driver can do its
own thing.
This doesn't let me sleep at night, so remove it.
This could accidentally change some spdif formats to AAC (because AAC is
the first on the list and will match first). spdif formats are
inherently uninterchangeable, so treat them as their own class of
formats (like int vs. float).
Might fix some issues with ao_wasapi.c.
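A sketch of the intended rule (af_fmt_is_spdif() stands in for whatever
the real format helper is; not the actual matching code):

    // Treat spdif formats as their own class: they never convert to or from
    // anything else, so a format search must not "upgrade" them to e.g. AAC.
    static bool formats_interchangeable(int fmt_a, int fmt_b)
    {
        if (af_fmt_is_spdif(fmt_a) || af_fmt_is_spdif(fmt_b))
            return fmt_a == fmt_b;      // spdif only ever matches itself
        return true;                    // PCM formats (int/float) can convert
    }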
glXCreateContextAttribsARB() by design can throw some X11 errors. We
ignore these, but we generally still print error messages to the
terminal. This was confusing/annoying users, so silence it. The stupid
part is that the Xlib error handler is global, so we have to be slightly
careful here.
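Roughly, the pattern looks like this (a sketch, not the actual x11 code;
glXCreateContextAttribsARB is assumed to have been resolved via
glXGetProcAddress() and passed in):

    #include <X11/Xlib.h>
    #include <GL/glx.h>

    static int ctx_error;   // remember that an error happened, but stay quiet

    static int silent_xerror(Display *dpy, XErrorEvent *ev)
    {
        ctx_error = 1;
        return 0;           // swallow it instead of printing to the terminal
    }

    typedef GLXContext (*create_ctx_fn)(Display *, GLXFBConfig, GLXContext,
                                        Bool, const int *);

    static GLXContext create_context_quietly(Display *dpy, GLXFBConfig fbc,
                                             const int *attribs,
                                             create_ctx_fn create_fn)
    {
        ctx_error = 0;
        XSync(dpy, False);
        // The error handler is process-global, so swap it only around the
        // call and restore the old one right after.
        XErrorHandler old = XSetErrorHandler(silent_xerror);
        GLXContext ctx = create_fn(dpy, fbc, NULL, True, attribs);
        XSync(dpy, False);  // make sure any generated error has arrived
        XSetErrorHandler(old);
        return ctx;         // caller may also inspect ctx_error
    }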
They are evil and should be eradicated. Some of these were pretty dumb
anyway.
There are probably some more around in platform specific code or other
code not enabled by default on Linux.
This is based on an older patch by James Ross-Gowan. It was rebased and
cleaned up. Also, the DWM API usage present in the older patch was
removed, because DWM reports nonsense rates at least on Windows 8.1
(they are rounded to integers, just like with the old GDI API - except
the GDI API had a good excuse, as it could report only integers).
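For reference, a sketch of the plain GDI query mentioned above (not
necessarily the exact code this commit adds; as noted, it only yields
integer rates like 60, never 59.94):

    #include <windows.h>

    static double get_refresh_rate(HMONITOR monitor)
    {
        MONITORINFOEXW mi;
        mi.cbSize = sizeof(mi);
        GetMonitorInfoW(monitor, (MONITORINFO *)&mi);
        DEVMODEW dm = { .dmSize = sizeof(dm) };
        if (!EnumDisplaySettingsW(mi.szDevice, ENUM_CURRENT_SETTINGS, &dm))
            return 0;
        return dm.dmDisplayFrequency;   // integer Hz
    }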
Signed-off-by: wm4 <wm4@nowhere>
This simplifies update_screen_rect a bit. Unless --fs-screen=all is
used, it will always get an HMONITOR and call GetMonitorInfo to
determine its dimensions. This will make it easier for the next few
commits to determine the colour profile and the refresh rate from the
HMONITOR.
There is a slight change in behaviour. When selecting a screen that is
out of range, such as --screen=9 on a machine with only two monitors,
the old code would silently select the last existing monitor. The new
code prints an error message and falls back to the default screen (same
as the Cocoa code.)
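A sketch of the simplified path (names are placeholders, and the HMONITOR
may be obtained differently depending on --screen; this is not the actual
w32_common.c code):

    #include <windows.h>

    // Resolve the target screen to an HMONITOR, then take its dimensions
    // from GetMonitorInfo(). Later commits can derive the colour profile
    // and refresh rate from the same HMONITOR.
    static RECT get_screen_rect(HWND window)
    {
        HMONITOR mon = MonitorFromWindow(window, MONITOR_DEFAULTTOPRIMARY);
        MONITORINFO mi = { .cbSize = sizeof(mi) };
        GetMonitorInfo(mon, &mi);
        return mi.rcMonitor;
    }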
Signed-off-by: wm4 <wm4@nowhere>
The call to EnumDisplaySettings seems to be a relic from when MPlayer
ran on systems that didn't have GetMonitorInfo or SM_CX/CYVIRTUALSCREEN.
GetMonitorInfo was loaded dynamically, so it was possible for MPlayer to
run without it and use the values returned by EnumDisplaySettings.
These are always present in modern versions of Windows, so the values
returned from EnumDisplaySettings are always overwritten. Remove the
call to EnumDisplaySettings and assume SM_CX/CYVIRTUALSCREEN are always
present.
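For completeness, the remaining virtual-screen query amounts to (sketch):

    #include <windows.h>

    // Bounding rectangle of all monitors; available on any modern Windows.
    static RECT get_virtual_screen_rect(void)
    {
        RECT r;
        r.left   = GetSystemMetrics(SM_XVIRTUALSCREEN);
        r.top    = GetSystemMetrics(SM_YVIRTUALSCREEN);
        r.right  = r.left + GetSystemMetrics(SM_CXVIRTUALSCREEN);
        r.bottom = r.top  + GetSystemMetrics(SM_CYVIRTUALSCREEN);
        return r;
    }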
Signed-off-by: wm4 <wm4@nowhere>
Commit 27dc834f added it as such.
Also remove the check for glUniformBlockBinding() - it's part of an
extension, and the check for glGetUniformBlockIndex() already covers
whether the extension is fully available.
Implement NNEDI3, a neural network based deinterlacer.
The shader is reimplemented in GLSL and supports both 8x4 and 8x6
sampling windows now. This allows the shader to be licensed
under LGPL2.1 so that it can be used in mpv.
The current implementation supports uploading the NN weights (up to
51kb with the placebo setting) in two different ways: via a uniform
buffer object, or by hard-coding them into the shader source. UBO
requires OpenGL 3.1, which only guarantees 16kb per block. But I find
that 64kb seems to be a default setting for recent cards/drivers (which
nnedi3 is targeting), so I think we're fine here (with the default
nnedi3 setting the size of the weights is 9kb). Hard-coding into the
shader requires OpenGL 3.3, for the "intBitsToFloat()" built-in
function. This is necessary to represent these weights in GLSL
bit-exactly. I tried several human-readable floating point formats
(with very high precision, as needed for single precision floats), but
for some reason they did not work reliably: bad pixels (with NaN
values) could be produced with some weight sets.
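A sketch of the bit-exact hard-coding path (the function and output
handling are illustrative, not the actual shader generator):

    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>

    // Emit the weights as raw bit patterns and let the shader reconstruct
    // them with intBitsToFloat(); printing "%f"/"%e" text can round and, as
    // described above, produce NaNs/bad pixels with some weight sets.
    static void emit_weights_glsl(FILE *out, const float *w, int n)
    {
        fprintf(out, "const float weights[%d] = float[](\n", n);
        for (int i = 0; i < n; i++) {
            int32_t bits;
            memcpy(&bits, &w[i], sizeof(bits));
            fprintf(out, "    intBitsToFloat(%" PRId32 ")%s\n",
                    bits, i + 1 < n ? "," : "");
        }
        fprintf(out, ");\n");
    }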
We could also add support for uploading these weights with a texture,
just for compatibility reasons (e.g. upscaling a still image on a
low-end graphics card). But as I tested, it's rather slow even with a
1D texture (we would probably have to use a 2D texture due to dimension
size limitations). Since there is always a better choice for NNEDI3
upscaling of still images (the VapourSynth plugin), it's not implemented
in this commit. If this turns out to be in popular demand from users,
it should be easy to add later.
For those who want to optimize the performance a bit further, the
bottlenecks seem to be:
1. the overhead of uploading and accessing these weights (in
particular, the shader code is regenerated for each frame, though that
happens on the CPU);
2. "dot()" performance in the main loop;
3. "exp()" performance in the main loop; there are various fast
implementations using bit tricks (probably with the help of the
intBitsToFloat function).
The code was tested with an Nvidia card and driver (355.11) on Linux.
Closes #2230
Add the Super-xBR filter for image doubling, and the prescaling framework
to support it.
The shader code was ported from the MPDN Extensions project, with
modifications to process luma only.
This commit is largely inspired by code from #2266, with
`gl_transform_trans()` authored by @haasn taken directly.
This check disables the display-sync resample method. If the filters
convert PCM to AC3, we can still insert a filter to change speed. This
is because filters are inserted at the beginning of the filter chain.
Actually, it didn't really require that before (most work was avoided),
but some bits had to be run anyway. Separate the speed change into a
light-weight function, which merely updates already created filters, and
a heavy-weight one which messes with filter insertion.
This also happens to fix the case where the filters would "forget" the
current speed (force resampling, change speed, hit a volume control to
force af_volume insertion - it will reset speed and desync).
Since we now always run the light-weight function, remove the
af_scaletempo verbose message that is printed on speed setting. Other
than that, all setters are cheap.
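In rough terms, the split looks like this (all names here are made up for
illustration; the actual mpv functions differ):

    // Hypothetical sketch: a cheap setter that only updates filters already
    // in the chain, and a heavy path that may insert/remove filters (e.g.
    // forced resampling) and therefore rebuilds the chain.
    static void audio_update_speed(struct player *p, double speed)
    {
        p->audio_speed = speed;
        if (speed_needs_chain_rebuild(p))     // e.g. resample mode toggled
            recreate_audio_filters(p);        // heavy-weight
        else
            update_filter_speed(p, speed);    // light-weight, no reinit
    }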
Move it (in a cosmetic sense), and also move its invocation to below all
the video handling.
All other changes remain cosmetic, including moving the framedrop
calculation code, and getting rid of the video_speed_correction
variable.
For some reason, the encoder didn't like that the AVPacket already had
fields set. I'm not quite sure, but this might just be invalid API
usage. Do it as it's recommended.
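The recommended calling convention is to hand the encoder a blank packet
and let it allocate the output buffer; a sketch using the audio encode
call as an example (not the exact code in this change):

    #include <libavcodec/avcodec.h>

    static int encode_one_frame(AVCodecContext *avctx, const AVFrame *frame,
                                AVPacket *out_pkt)
    {
        // Start from a clean packet; the encoder allocates the data itself.
        av_init_packet(out_pkt);
        out_pkt->data = NULL;
        out_pkt->size = 0;
        int got_packet = 0;
        int err = avcodec_encode_audio2(avctx, out_pkt, frame, &got_packet);
        if (err < 0)
            return err;
        return got_packet;   // 1 if out_pkt now holds encoded data
    }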
We still have a sample-based buffer between filters and audio outputs.
In order to avoid cutting frames into half (which can upset receivers),
we strictly need to align the boundaries on which we cut the audio.
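A minimal sketch of that alignment (the sample counts and frame size are
whatever the filters/AO negotiated):

    static int align_to_frames(int samples, int samples_per_frame)
    {
        // Only hand out whole frames, so receivers never see a frame cut in
        // half (important for e.g. spdif bursts).
        return samples - samples % samples_per_frame;
    }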
Update msg.c state immediately if a terminal or logging setting is set.
Until now, this was delayed until mp[v]_initialize() was called. When
using the client API, you could easily miss logged error messages, even
when logging was initialized early on by calling
mpv_request_log_messages().
(Properties can't be used for this either, because properties do not
work before mpv_initialize().)
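With this change, the usual client API pattern actually delivers early
messages; a minimal sketch:

    #include <mpv/client.h>

    int main(void)
    {
        mpv_handle *ctx = mpv_create();
        // Takes effect immediately now, so messages produced before
        // mpv_initialize() are no longer lost.
        mpv_request_log_messages(ctx, "v");
        mpv_initialize(ctx);
        // MPV_EVENT_LOG_MESSAGE events then arrive via mpv_wait_event().
        mpv_terminate_destroy(ctx);
        return 0;
    }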
The noframe event is logged whenever there is no new frame. This can
happen due to normal redraws, but also due to video frame queue
underflow.
The mpv_opengl_cb_report_flip() API function is currently pretty
useless, because blocking on the video frame queue is more reliable and
simpler. But at least we can log the actual vsync.
next_vsync/prev_vsync was only used to retrieve the vsync duration. We
can get this in a simpler way.
This also removes the vsync duration estimation from vo_opengl_cb.c,
which is probably worthless anyway. (And once interpolation is made
display-sync only, this won't matter at all.)
We need to effectively swap the last channel pair. See commits 4e358a96
and 5a18c5ea for details.
Doing this seems rather strange, as 7.1 just extends 5.1 with 2 new
speakers, and 5.1 doesn't need this change. Going by the HDMI standard
and the Intel HDA sources (cited in the referenced commits), it also
looks like 7.1 should simply append two channels to 5.1 as well. But
swapping them is apparently correct. This is also what XBMC does. (I
didn't find any other applications doing 7.1 PCM using the ALSA channel
map API. VLC seems to ignore the 7.1 case.) Testing reveals that at
least the end result is correct.
"Normal" ALSA 7.1 is unaffected by this, as it reports a different
(and saner) channel layout.
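A sketch of what the swap amounts to for the 8-channel case (the struct
layout follows chmap.h, but this is illustrative rather than the actual
ao_alsa code):

    // Exchange the two rear pairs of the 8-channel map
    // (fl-fr-fc-lfe-bl-br-sl-sr): the net effect is that ALSA's RL/RR slots
    // get our sl/sr, and RLC/RRC get our bl/br, matching what XBMC does.
    static void swap_rear_pairs(struct mp_chmap *map)
    {
        if (map->num != 8)
            return;
        for (int i = 0; i < 2; i++) {
            uint8_t tmp = map->speaker[4 + i];
            map->speaker[4 + i] = map->speaker[6 + i];
            map->speaker[6 + i] = tmp;
        }
    }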
Instead of constructing an ALSA channel map from the mpv one from
scratch, try to find the original ALSA channel map again. The result is
that we need to convert channel maps in only one direction. If we need
to map an mp_chmap to ALSA, we fetch the device's channel map list,
convert each entry to mp_chmap, and find the first one which fits.
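A sketch of that one-directional lookup (alsa_to_mp_chmap() is a stand-in
for the real conversion helper):

    #include <alsa/asoundlib.h>

    // Ask the device for its channel maps and use the first entry whose
    // mp_chmap conversion matches the map we want to play.
    static int select_alsa_chmap(snd_pcm_t *pcm, const struct mp_chmap *want)
    {
        snd_pcm_chmap_query_t **maps = snd_pcm_query_chmaps(pcm);
        int r = -1;
        for (int i = 0; r < 0 && maps && maps[i]; i++) {
            struct mp_chmap conv;
            if (alsa_to_mp_chmap(&maps[i]->map, &conv) &&   // stand-in helper
                mp_chmap_equals(&conv, want))
                r = snd_pcm_set_chmap(pcm, &maps[i]->map);  // keep the ALSA map
        }
        snd_pcm_free_chmaps(maps);
        return r;
    }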
This seems helpful for the following commit. For now, this only gets rid
of mapping back the trivial MONO mapping, which alone would still be
acceptable, but with other channel layout mogrifications it gets messy
fast. While we need to do something awkward to keep our channel map
reordering for VAR chmaps (which basically gives nicer output and
possibly slightly better performance), this is still the better
solution.
This reverts commit 4e358a9636.
Testing shows the channel pairs must indeed be swapped (details see
commit message of the reverted commit). Making the downmix code move
sl/sr to sdl/sdr is not an appropriate solution anymore, and it's
better to fix the unusual channel layout in ao_alsa.c directly.
(Not reverting the change in chmap.c; this is still correct.)
This affects only the display-sync code path, as for normal timing the
wakeup_pts stuff handles proper wakeup. It's probably mostly a
theoretical issue.
Discontinuities (like toggling fullscreen) can cause multiple frames to
be dropped in succession, which sounds very weird. It's better to drop
some video frames instead to compensate for larger desyncs.
We roughly base it on the maximum allowed speed changes (audio change is
"additional" to the video change to account for deviations when playing
at max. video speed change).
A hw decoder might fail to decode a frame for multiple reasons, and not
always just because decoding is impossible. We can't generally
distinguish these reasons well. Make it more tolerant by accepting
failures of 3 frames, but not more. The threshold can be adjusted by the
repurposed --vd-lavc-software-fallback option.
(This behavior was suggested much earlier in some PR, but at the time
the "proper" hwdec fallback was indistinguishable from a decoding error.
With the current situation, "proper" fallback is still instantaneous.)
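A minimal sketch of the tolerance logic (the struct and option names are
illustrative only):

    static void check_hwdec_fallback(struct dec_ctx *ctx, bool frame_failed)
    {
        if (!frame_failed) {
            ctx->hwdec_fail_count = 0;          // success resets the counter
            return;
        }
        // Only fall back to software decoding once the (repurposed)
        // threshold is exceeded, e.g. 3 consecutive failed frames.
        if (++ctx->hwdec_fail_count > ctx->opts->software_fallback)
            fall_back_to_software_decoding(ctx);
    }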
ao_alsa: attempt to fix 7.1 over HDMI
The last 2 channels of 7.1 (RLC/RRC in ALSA) were exported as sdl/sdr
instead of sl/sr (I don't even know why I chose sdl/sdr, but SL/SR
and RLC/RRC are different in the ALSA API). libswresample/libavresample
do not move the sl/sr channels to sdl/sdr when rematrixing, so silence
was sent for 2 channels. If my selection of sdl/sdr is essentially API
abuse, there's no reason why they should do this differently.
The mess here is really that ALSA doesn't map the HDMI layouts cleanly.
Most ALSA drivers export 7.1 in a way compatible to our expectations,
but Intel HDA/HDMI does not:
mpv/ffmpeg: fl-fr-fc-lfe-bl-br-sl-sr
ALSA/generic: FL FR FC LFE RL RR SL SR [1]
ALSA/HDMI: FL FR LFE FC RL RR RLC RRC [2]
The HDMI layout is layout 0x13 (going by CEA-861-B). The comment in
the kernel code has to be correct too. The early standard defines only
1 other layout, which replaces RLC/RRC with FRC/FLC - this probably
corresponds to what we call "7.1(wide)".
So it appears when ALSA requests RLC/RRC, we should feed it sl/sr.
To make it more complicated, Kodi/xbmc apparently also have to deal with
ALSA being special, but instead of sending sl/sr to RLC/RRC, they swap
the last two pairs of the layout, and send sl/sr to RL/RR and bl/br to
RLC/RRC. Or I might have misunderstood their code. I don't have a
7.1-capable A/V receiver, so I can't test this.
For now, go with the simpler solution, and wait until someone tests it.
If the speakers end up swapped, a completely different solution will be
needed.
[1] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/sound/core/pcm_lib.c?id=refs/tags/v4.3#n2434
[2] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/sound/pci/hda/patch_hdmi.c?id=refs/tags/v4.3#n307