Just set the ratio directly by working around the intended semantics of
the API function. The silly rounding stuff we had isn't needed anymore
(and not entirely correct anyway).
Note that since the compensation stays active virtually forever, we need
to reset it when it's no longer needed. So always run this code, to be
sure it gets reset.
Also note that libswresample itself had a precision issue, until it
was fixed in FFmpeg commit 351e625d.
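The idea boils down to something like this (sketch only, not mpv's exact
code; the delta/distance values are illustrative):

    #include <math.h>
    #include <libswresample/swresample.h>

    static void set_resample_ratio(struct SwrContext *swr, double ratio)
    {
        if (ratio == 1.0) {
            // Always reset: a previously requested compensation would
            // otherwise stay active forever.
            swr_set_compensation(swr, 0, 0);
        } else {
            // Request (ratio - 1) * distance extra output samples spread
            // over a huge distance, which effectively turns the "soft"
            // compensation into a permanent ratio change.
            int distance = 1 << 30;
            int delta = (int)lrint((ratio - 1.0) * distance);
            swr_set_compensation(swr, delta, distance);
        }
    }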
When the audio format is not known yet and the audio chain is still
initializing, filter reinit will fail. Normally, attempts to
reinitialize filters at this stage should be rare (e.g. user commands
editing the filter chain). But it sometimes happened with track
switching in combination with the video code calling
update_playback_speed() at arbitrary times.
Get rid of the resulting error message by not trying to change the
filters for the sake of a playback speed update while decoding is still
being initialized.
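Roughly the shape of the fix (illustrative only; the helper names are
made up, not mpv's actual internals):

    static void update_playback_speed(struct MPContext *mpctx)
    {
        // If the decoder hasn't reported a valid audio format yet, the
        // filter chain can't be (re)created anyway, so don't try.
        if (!audio_format_known(mpctx))     // hypothetical check
            return;
        set_speed_filters(mpctx);           // hypothetical: may reinit filters
    }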
Until now, we've relied on the following things:
- you can send flush packets to the decoder even if it's fully flushed,
- you can send new packets to a flushed decoder,
- you can send new packets to a partially flushed decoder.
("flushing" refers to sending flush packets to the decoder until the
decoder does not return new pictures, not avcodec_flush_buffers().)
All of these are questionable. The libavcodec API probably doesn't
guarantee that these work well or at all, even though most decoders have
no issue with these. But especially with hardware decoding wrappers
(like MMAL), real problems can be expected. Isolate us from these corner
cases by handling them explicitly.
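Schematically, the kind of bookkeeping this means (illustrative sketch
against the lavc decode API of that time, not mpv's actual code):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    struct dec_state {
        AVCodecContext *avctx;
        bool draining;      // we started sending flush packets
        bool eof;           // decoder stopped returning frames
    };

    static int feed(struct dec_state *st, AVPacket *pkt, AVFrame *frame,
                    int *got_frame)
    {
        if (pkt && st->draining) {
            // New data after (partial or full) flushing: put the decoder
            // back into a clean state instead of hoping it copes.
            avcodec_flush_buffers(st->avctx);
            st->draining = st->eof = false;
        }
        if (!pkt && st->eof) {
            // Already fully flushed: never send further flush packets.
            *got_frame = 0;
            return 0;
        }
        AVPacket flush = { .data = NULL, .size = 0 };   // flush packet
        int r = avcodec_decode_video2(st->avctx, frame, got_frame,
                                      pkt ? pkt : &flush);
        if (!pkt) {
            st->draining = true;
            if (!*got_frame)
                st->eof = true;     // decoder is now fully flushed
        }
        return r;
    }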
This applies to unexpected freezes or deadlocks, not e.g. normal
framedrops. The verbose messages might also alert an API user that the
API usage is incorrect, such as not calling mpv_opengl_cb_draw() when a
redraw request was issued.
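For reference, the expected usage pattern looks roughly like this
(wakeup_render_thread is a made-up placeholder for whatever event
mechanism the client uses):

    #include <mpv/opengl_cb.h>

    static void on_mpv_redraw(void *userdata)
    {
        // Called by mpv on an internal thread: only wake up the thread
        // that owns the GL context; never render from here.
        wakeup_render_thread(userdata);         // hypothetical helper
    }

    void init_render(mpv_opengl_cb_context *gl, void *userdata)
    {
        mpv_opengl_cb_set_update_callback(gl, on_mpv_redraw, userdata);
    }

    void render_one_frame(mpv_opengl_cb_context *gl, int fbo, int w, int h)
    {
        // On the GL thread, in response to the update callback:
        mpv_opengl_cb_draw(gl, fbo, w, h);
        // ... SwapBuffers here ...
    }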
This commit introduces logic to read other volumes from the same source
as the primary archive. Both .rar formats, as well as 7z, are supported
for now.
It also changes the libarchive callback structure to be per-volume,
consistent with the internal libarchive client data array constructed
with archive_read_append_callback_data().
Added open, close and switch callbacks. Only the switch callback is
strictly required to make sure that the streams always start at position
0, but leaving all volumes open can eat a lot of memory for archives with
many parts.
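A rough sketch of the per-volume setup (the volume_* helpers are made up
wrappers around the underlying streams; read/seek callbacks elided):

    #include <archive.h>

    // Hypothetical per-volume stream helpers (not part of libarchive).
    int  volume_open(void *vol);    // opens the stream at position 0
    void volume_close(void *vol);

    static la_ssize_t cb_read(struct archive *a, void *vol, const void **buf);
    static la_int64_t cb_seek(struct archive *a, void *vol, la_int64_t off,
                              int whence);

    static int cb_open(struct archive *a, void *vol)
    {
        return volume_open(vol) == 0 ? ARCHIVE_OK : ARCHIVE_FATAL;
    }

    static int cb_close(struct archive *a, void *vol)
    {
        volume_close(vol);
        return ARCHIVE_OK;
    }

    static int cb_switch(struct archive *a, void *old_vol, void *new_vol)
    {
        // Close the previous part so large multi-volume archives don't
        // keep every stream open, and make sure the next part starts
        // reading at position 0.
        volume_close(old_vol);
        return volume_open(new_vol) == 0 ? ARCHIVE_OK : ARCHIVE_FATAL;
    }

    struct archive *open_volumes(void **vols, int n)
    {
        struct archive *a = archive_read_new();
        archive_read_support_format_rar(a);
        archive_read_support_format_7zip(a);
        archive_read_set_open_callback(a, cb_open);
        archive_read_set_close_callback(a, cb_close);
        archive_read_set_switch_callback(a, cb_switch);
        archive_read_set_read_callback(a, cb_read);
        archive_read_set_seek_callback(a, cb_seek);
        for (int i = 0; i < n; i++)
            archive_read_append_callback_data(a, vols[i]);  // one per volume
        return archive_read_open1(a) == ARCHIVE_OK ? a : NULL;
    }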
The nnedi3 prescaler requires a normalized range to work properly,
but the original implementation did the range normalization after
the first step of the first pass. This could lead to severe quality
degradation when debanding is not enabled for NNEDI3.
Fix this issue by passing `tex_mul` into the shader code.
Fixes #2464
vo_opengl_cb is a special case, because we somehow have to render video
asynchronously, all while "trusting" the API user to do it correctly.
This didn't quite work, and a while ago a compromise using a timeout to
prevent theoretically possible deadlocks was added.
Make it even more synchronous. Basically, go all the way, and
synchronize rendering between VO and user renderer thread to the
full extent possible.
This means the silly frame queue is dropped, and we even attempt to
synchronize the GL SwapBuffers call (via mpv_opengl_cb_report_flip()).
The changes introduced with commit dc33eb56 are effectively dropped. I
don't even remember if they mattered.
In the future, we might make all VOs fetch asynchronously from a frame
queue, which would mostly remove the differences between vo_opengl and
vo_opengl_cb, but this will take a while (if it will even be done).
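The synchronization itself is conceptually just a handshake between the
two threads; something like this (schematic, not the actual
implementation):

    #include <pthread.h>
    #include <stdbool.h>

    struct vo_frame;                    // placeholder frame type
    void draw_frame(struct vo_frame *); // hypothetical GL rendering

    struct sync_state {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        struct vo_frame *pending;   // frame the VO wants rendered
        bool rendered;              // set by the renderer when done
    };

    // VO thread: hand over a frame and block until it was rendered.
    void vo_submit(struct sync_state *s, struct vo_frame *f)
    {
        pthread_mutex_lock(&s->lock);
        s->pending = f;
        s->rendered = false;
        pthread_cond_broadcast(&s->wakeup);
        while (!s->rendered)
            pthread_cond_wait(&s->wakeup, &s->lock);
        pthread_mutex_unlock(&s->lock);
    }

    // User render thread (conceptually what happens inside the draw
    // call): pick up the pending frame, render it, signal completion.
    void renderer_draw(struct sync_state *s)
    {
        pthread_mutex_lock(&s->lock);
        struct vo_frame *f = s->pending;
        s->pending = NULL;
        pthread_mutex_unlock(&s->lock);

        if (f)
            draw_frame(f);

        pthread_mutex_lock(&s->lock);
        s->rendered = true;
        pthread_cond_broadcast(&s->wakeup);
        pthread_mutex_unlock(&s->lock);
    }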
Pick the correct GLSL version from the GL_SHADING_LANGUAGE_VERSION
string. Might be somewhat questionable, as we expect the minor version
number not to have leading 0s.
Should help with cases when the reported GLSL version is much higher
than the equivalent of the reported GL version. This problem was
observed in combination with GL_ARB_uniform_buffer_object, which
can't be used if the declared GLSL version is too low.
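The parsing itself is trivial; roughly (sketch, desktop GL only):

    #include <stdio.h>
    #include <GL/gl.h>

    // Returns e.g. 430 for "4.30 ...", or 0 if the string can't be parsed.
    static int parse_glsl_version(void)
    {
        const char *s = (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION);
        int major = 0, minor = 0;
        if (s && sscanf(s, "%d.%d", &major, &minor) == 2)
            return major * 100 + minor;     // assumes no leading 0s in minor
        return 0;
    }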
Has the same function as setting the option.
This commit changes the property in a bunch of other ways. For example
if the VO is not created, it will return the option value.
Simplifies some auto detection matters.
I _still_ don't want to remove the lazy loading mechanism, because it's
still slightly useful for filters using the hwdec APIs. My main
motivation for not always preloading them is actually that libva prints
random useless crap to the terminal with no way to prevent this.
The examples demonstrate use with optical media, which is far from
mpv's main purpose.
The authors section is a leftover from MPlayer times. There are enough
other places which reiterate how mpv is based on mplayer2/MPlayer,
copyright statements, and so on.
Hint that the linked section contains information for Windows. (Well,
that's a lie, but it has a link to the Windows section.)
Avoid implying that lines in the config file end with ';'. Also, the <>
are probably just confusing.
Deal with jittering Matroska crap timestamps. This reuses the mechanism
that is needed for frames without PTS, and adds a heuristic to it. If
the interpolated timestamp is less than 1ms away from the real one, it
might be due to Matroska timestamp rounding (or other file formats with
such rounding, or files remuxed from Matroska).
While there actually isn't much of a need to do this (audio PTS
jittering by such a low amount doesn't negatively influence much), it
helps with identifying jitter from other sources.
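The heuristic itself boils down to something like this (illustrative;
NOPTS stands in for mpv's missing-PTS value):

    #include <math.h>

    #define NOPTS (-1e300)      // stand-in for MP_NOPTS_VALUE

    // demux_pts: PTS from the container; predicted_pts: previous PTS plus
    // the duration of the previous frame.
    static double pick_audio_pts(double demux_pts, double predicted_pts)
    {
        if (demux_pts == NOPTS)
            return predicted_pts;   // no PTS at all: use the interpolated one
        if (predicted_pts != NOPTS && fabs(predicted_pts - demux_pts) < 0.001)
            return predicted_pts;   // < 1 ms off: assume container rounding
        return demux_pts;           // otherwise trust the demuxer
    }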
Instead of requiring the decoder to set the PTS directly on the
dec_audio context (including handling absence of PTS etc.), transfer the
packet PTS to the decoded audio frame. Marginally simpler, and gives
more control to the generic code.
Notes:
- Unfortunately the only way to talk to EGL from within DRM I could find
involves linking with GBM (generic buffer management for Mesa); a rough
bring-up sketch follows these notes.
Because of this, I'm pretty sure it won't work with proprietary NVidia
drivers, but then again, last time I checked NVidia didn't offer
proper screen resolution for VT.
- VT switching doesn't seem to work at all. It's worth mentioning that
using vo_drm before the introduction of the VT switcher had an anomaly
where the user could switch to another VT and input text into it, while
the video played on top of that VT. However, that isn't the case with
drm_egl: I can't switch to another VT during playback like this. This
makes me think that it's either a limitation of my firmware or of
EGL/KMS itself rather than a bug in my code. Nonetheless, I still
left (untestable) VT switching code in place, in case it's useful to
someone else.
- The mode_id, connector_id and device_path should be configurable for
power users and people who wish to watch videos on a nonprimary screen.
Unfortunately I didn't see anything that would allow OpenGL backends
to register their own set of options. At the same time, adding them to
the global namespace is pointless.
- A few dozen lines could be shared with vo_drm (setting up VT
switching, most of the code behind page flipping). I don't have any
strong opinion on this.
- Sometimes I get minor visual glitches. I'm not sure if there's a race
condition of some sort, an uninitialized variable (doubtful), or a buggy
driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)
- .config and .control are very minimal.
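For context, the GBM/EGL bring-up mentioned in the first note looks
roughly like this (heavily abridged sketch, no error handling; the DRM fd
and mode setup are assumed to exist already):

    #include <stdbool.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    struct egl_kms {
        struct gbm_device *gbm;
        struct gbm_surface *gbm_surf;
        EGLDisplay display;
        EGLContext context;
        EGLSurface surface;
    };

    static bool egl_kms_init(struct egl_kms *e, int drm_fd, int w, int h)
    {
        e->gbm = gbm_create_device(drm_fd);
        e->gbm_surf = gbm_surface_create(e->gbm, w, h, GBM_FORMAT_XRGB8888,
                                         GBM_BO_USE_SCANOUT |
                                         GBM_BO_USE_RENDERING);
        e->display = eglGetDisplay((EGLNativeDisplayType)e->gbm);
        eglInitialize(e->display, NULL, NULL);
        eglBindAPI(EGL_OPENGL_API);

        static const EGLint attrs[] = {
            EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE,
        };
        EGLConfig config;
        EGLint num_configs = 0;
        eglChooseConfig(e->display, attrs, &config, 1, &num_configs);
        e->context = eglCreateContext(e->display, config, EGL_NO_CONTEXT, NULL);
        e->surface = eglCreateWindowSurface(e->display, config,
                                            (EGLNativeWindowType)e->gbm_surf,
                                            NULL);
        return eglMakeCurrent(e->display, e->surface, e->surface, e->context);
    }

After each eglSwapBuffers() call, the usual pattern is to fetch the new
front buffer with gbm_surface_lock_front_buffer() and present it with
drmModePageFlip() - the page flipping code that could potentially be
shared with vo_drm.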
Signed-off-by: wm4 <wm4@nowhere>
While it seemed like a pretty good idea at first, it's just a dead end
and works only in the simplest cases. While it may or may not help
slightly with audio sync mode, the display-sync mode already compensates
for this in a better way. The main issue is that timestamps at this layer
are not in order, so it can look at single timestamps only.
Essentially we'd use something random, just because it's part of the set
of traditionally used ALSA channel mappings. But each driver can do its
own thing.
This doesn't let me sleep at night, so remove it.
This could accidentally change some spdif formats to AAC (because AAC is
the first on the list and will match first). spdif formats are
inherently uninterchangeable, so treat them as their own class of
formats (like int vs. float).
Might fix some issues with ao_wasapi.c.
glXCreateContextAttribsARB() by design can throw some X11 errors. We
ignore these, but we generally still print error messages to the
terminal. This was confusing/annoying users, so silence them. The stupid
part is that the Xlib error handler is global, so we have to be slightly
careful here.
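The pattern is roughly this (sketch; `create` is the
glXCreateContextAttribsARB pointer obtained via glXGetProcAddress):

    #include <X11/Xlib.h>
    #include <GL/glx.h>

    typedef GLXContext (*CreateCtxAttribsFn)(Display *, GLXFBConfig,
                                             GLXContext, Bool, const int *);

    static int silence_xlib_error(Display *dpy, XErrorEvent *ev)
    {
        return 0;   // swallow the error instead of printing to the terminal
    }

    static GLXContext create_context_quietly(Display *dpy, GLXFBConfig fbc,
                                             const int *attribs,
                                             CreateCtxAttribsFn create)
    {
        XSync(dpy, False);      // make sure no unrelated errors are pending
        XErrorHandler old = XSetErrorHandler(silence_xlib_error);
        GLXContext ctx = create(dpy, fbc, NULL, True, attribs);
        XSync(dpy, False);      // collect errors generated by the call above
        XSetErrorHandler(old);  // restore; the handler is process-global
        return ctx;
    }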
They are evil and should be eradicated. Some of these were pretty dumb
anyway.
There are probably some more around in platform specific code or other
code not enabled by default on Linux.
This is based on an older patch by James Ross-Gowan. It was rebased and
cleaned up. Also, the DWM API usage present in the older patch was
removed, because DWM reports nonsense rates at least on Windows 8.1
(they are rounded to integers, just like with the old GDI API - except
the GDI API had a good excuse, as it could report only integers).
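For reference, one way to query the rate per monitor without DWM (sketch;
may not match the patch exactly):

    #include <windows.h>

    static double get_monitor_refresh_rate(HMONITOR mon)
    {
        MONITORINFOEXW mi = { .cbSize = sizeof(mi) };
        if (!GetMonitorInfoW(mon, (MONITORINFO *)&mi))
            return 0;
        DEVMODEW dm = { .dmSize = sizeof(dm) };
        if (!EnumDisplaySettingsW(mi.szDevice, ENUM_CURRENT_SETTINGS, &dm))
            return 0;
        return dm.dmDisplayFrequency;   // integer Hz, as noted above
    }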
Signed-off-by: wm4 <wm4@nowhere>
This simplifies update_screen_rect a bit. Unless --fs-screen=all is
used, it will always get an HMONITOR and call GetMonitorInfo to
determine its dimensions. This will make it easier for the next few
commits to determine the colour profile and the refresh rate from the
HMONITOR.
There is a slight change in behaviour. When selecting a screen that is
out of range, such as --screen=9 on a machine with only two monitors,
the old code would silently select the last existing monitor. The new
code prints an error message and falls back to the default screen (same
as the Cocoa code).
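Selecting the screen then boils down to enumerating monitors and falling
back if the index doesn't exist; schematically (not the literal code):

    #include <stdio.h>
    #include <windows.h>

    struct mon_search { int target, index; HMONITOR result; };

    static BOOL CALLBACK enum_cb(HMONITOR mon, HDC dc, LPRECT r, LPARAM p)
    {
        struct mon_search *s = (struct mon_search *)p;
        if (s->index++ == s->target) {
            s->result = mon;
            return FALSE;       // stop enumerating
        }
        return TRUE;
    }

    static HMONITOR monitor_from_screen_id(int screen)
    {
        struct mon_search s = { .target = screen };
        EnumDisplayMonitors(NULL, NULL, enum_cb, (LPARAM)&s);
        if (!s.result) {
            // e.g. --screen=9 with two monitors: complain and fall back.
            fprintf(stderr, "Screen %d does not exist.\n", screen);
            s.result = MonitorFromPoint((POINT){0, 0},
                                        MONITOR_DEFAULTTOPRIMARY);
        }
        return s.result;        // dimensions then come from GetMonitorInfo()
    }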
Signed-off-by: wm4 <wm4@nowhere>
The call to EnumDisplaySettings seems to be a relic from when MPlayer
ran on systems that didn't have GetMonitorInfo or SM_CX/CYVIRTUALSCREEN.
GetMonitorInfo was loaded dynamically, so it was possible for MPlayer to
run without it and use the values returned by EnumDisplaySettings.
These are always present in modern versions of Windows, so the values
returned from EnumDisplaySettings are always overwritten. Remove the
call to EnumDisplaySettings and assume SM_CX/CYVIRTUALSCREEN are always
present.
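The virtual screen rectangle then comes from GetSystemMetrics alone;
roughly:

    #include <windows.h>

    static RECT get_virtual_screen_rect(void)
    {
        RECT r;
        r.left   = GetSystemMetrics(SM_XVIRTUALSCREEN);
        r.top    = GetSystemMetrics(SM_YVIRTUALSCREEN);
        r.right  = r.left + GetSystemMetrics(SM_CXVIRTUALSCREEN);
        r.bottom = r.top  + GetSystemMetrics(SM_CYVIRTUALSCREEN);
        return r;
    }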
Signed-off-by: wm4 <wm4@nowhere>
Commit 27dc834f added it as such.
Also remove the check for glUniformBlockBinding() - it's part of an
extension, and the check for glGetUniformBlockIndex() already covers
whether the extension is fully available.