This prevents the machine from going to sleep while a video-only stream
is playing. When audio is playing, the audio stack should make this
request for us.
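The commit message doesn't name the API involved; as a hedged illustration only, on Windows this kind of request is commonly made with SetThreadExecutionState (an assumption about the mechanism, not necessarily what this change uses):

    #include <windows.h>

    // Illustration only (assumed Windows mechanism, not necessarily this
    // change): ask the OS to keep the display awake while video plays.
    static void video_playback_started(void)
    {
        SetThreadExecutionState(ES_CONTINUOUS | ES_SYSTEM_REQUIRED |
                                ES_DISPLAY_REQUIRED);
    }

    static void video_playback_stopped(void)
    {
        // Drop the request; audio-only playback relies on the audio stack.
        SetThreadExecutionState(ES_CONTINUOUS);
    }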
Logging was meant to be silenced only when the user uses the connector
auto-detection feature. If the user supplies a connector ID manually, they
should see the exact reason why connecting to that specific connector
failed.
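A minimal self-contained sketch of the intended behavior (identifiers are made up, not the actual code):

    #include <stdbool.h>
    #include <stdio.h>

    // Hypothetical sketch: stay quiet only while probing connectors
    // automatically; a manually selected connector reports the real error.
    static void report_connector_failure(bool autodetected, int connector_id,
                                         const char *reason)
    {
        if (autodetected)
            return; // auto-detection probes many connectors; failure is expected
        fprintf(stderr, "Connector %d unusable: %s\n", connector_id, reason);
    }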
We already use two screensaver APIs when attempting to disable the
screensaver: XResetScreenSaver() (from Xlib) and XScreenSaverSuspend()
(from the X11 Screen Saver extension). Neither of these actually works.
On modern desktop Linux, we are expected to make dbus calls using some
freedesktop-defined protocol (and possibly fall back to a Gnome-specific
one). xscreensaver, at least, doesn't respect the "old" APIs either.
Solve this by running the xdg-screensaver script. It's a terrible, ugly
piece of shit (just read the script if you disagree), but at least it
appears to work everywhere. It's also simpler than involving various
dbus client libraries.
I hope this can replace the --heartbeat-cmd option, and maybe we could
remove our own DPMS/XSS code too.
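A rough sketch of the approach (not the actual implementation; a real version would run the command asynchronously so the slow script doesn't block playback):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Sketch: shell out to xdg-screensaver with the X11 window ID.
    static void set_screensaver_suspended(uint64_t xid, bool suspend)
    {
        char cmd[128];
        snprintf(cmd, sizeof(cmd), "xdg-screensaver %s 0x%" PRIx64,
                 suspend ? "suspend" : "resume", xid);
        if (system(cmd) != 0)
            fprintf(stderr, "xdg-screensaver call failed\n");
    }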
This is optional, but ensures that linking with -Wl,--as-needed does
not drop the MMAL VC driver. The driver normally "registers" itself
in the library constructor, but since no symbols are explicitly
referenced, the linker could remove it with as-needed enabled.
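One common way to create such a reference looks roughly like this (the symbol name below is hypothetical; the real library exports different symbols):

    // Hypothetical symbol name - substitute any symbol actually exported
    // by the MMAL VC driver library.
    extern void mmal_vc_dummy_symbol(void);

    void retain_mmal_vc_driver(void)
    {
        // Taking the address creates an undefined reference the linker must
        // resolve, so --as-needed no longer considers the library unused.
        void (*volatile keep)(void) = mmal_vc_dummy_symbol;
        (void)keep;
    }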
Not sure why this was so roundabout; probably to keep spam down if the
user's OpenGL drivers are crap (but then just don't enable extended
features), or because the "Disabling..." text was so repetitious.
But there doesn't seem to be a good reason after all. Also, this could
already overflow the fixed-size disabled[] array. Just print the
messages directly.
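The replacement amounts to something like this (identifiers are illustrative):

    #include <stdio.h>

    // Illustrative sketch: report each disabled feature immediately instead
    // of collecting names into a fixed-size disabled[] array first.
    static void disable_feature(const char *name, const char *reason)
    {
        fprintf(stderr, "Disabling %s: %s\n", name, reason);
    }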
This could help in cases where the DWM (Windows desktop compositor) adds another
layer of buffering, and therefore the SwapBuffers timing could get messed up.
Signed-off-by: wm4 <wm4@nowhere>
On my Windows system this allows smoothmotion to work perfectly in windowed
mode as well. There's no real right or wrong here, with the only goal being to
always hit the next vsync. However, in cases where vsync timing is jittery (as
can happen with the DWM), this patch aims for the middle of the vsync cycle
so it is affected as little as possible by such jitter.
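A simplified sketch of the mid-vsync targeting idea (names are made up, not the actual code):

    #include <stdbool.h>
    #include <stdint.h>

    // Decide whether a frame belongs to the upcoming vsync by comparing its
    // ideal display time against the middle of the vsync interval rather
    // than the boundary, so jitter in either direction is less likely to
    // flip the decision.
    static bool frame_belongs_to_next_vsync(int64_t frame_pts_us,
                                            int64_t next_vsync_us,
                                            int64_t vsync_interval_us)
    {
        int64_t midpoint = next_vsync_us + vsync_interval_us / 2;
        return frame_pts_us < midpoint;
    }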
This adds one vsync interval of "slack" before deciding to drop the first
frame. It should help in cases of timing jitter (sleep duration, container
timestamps, compositor vsync timing, etc.). Once the drop threshold has been
crossed, it keeps dropping until timing is perfectly aligned again. This
prevents crossing the drop threshold back and forth repeatedly, and therefore
makes it more resilient to frame drops.
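The hysteresis described above boils down to something like this (illustrative sketch, not the actual code):

    #include <stdbool.h>
    #include <stdint.h>

    // Allow one vsync interval of slack before the first drop; once
    // dropping has started, keep dropping until timing is aligned again,
    // instead of oscillating around the threshold.
    static bool should_drop_frame(int64_t lateness_us,
                                  int64_t vsync_interval_us,
                                  bool currently_dropping)
    {
        int64_t threshold_us = currently_dropping ? 0 : vsync_interval_us;
        return lateness_us > threshold_us;
    }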
Increase the default queue size. This helps with "missed" frames due to
the asynchronous nature of the API. All the other VOs are synchronous,
so if rendering and displaying takes a while, the common code in vo.c
will be blocked until it can continue. But with opengl-cb, vo.c might
immediately push the next ready frame, which causes the current frame
to be dropped _if_ it wasn't rendered yet.
One could fix this by making vo.c wait a while (until the API user calls
the render function, which pulls the frame). But setting the default
queue size to 2 seems much simpler: instead of dropping the frame, it
will be pushed to the API user once the next renderer call finishes.
(This is still a bit strange, and will hopefully be cleaned up when
video scheduling is redone, but for now this appears to deliver
relatively good results.)
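Conceptually, the queue behaves like this (sketch only, not the real opengl-cb code):

    #include <stdbool.h>

    // With a queue size of 1, pushing a new frame before the API user has
    // rendered the previous one forces a drop; with size 2, the new frame
    // simply waits until the next render call pulls it.
    #define QUEUE_SIZE 2

    struct frame_queue {
        void *frames[QUEUE_SIZE];
        int num_frames;
    };

    // Returns false if the queue is full; the caller then waits instead of
    // dropping the frame that is still pending.
    static bool queue_push(struct frame_queue *q, void *frame)
    {
        if (q->num_frames == QUEUE_SIZE)
            return false;
        q->frames[q->num_frames++] = frame;
        return true;
    }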
Use texture-from-pixmap instead of vaapi's "native" GLX support.
Apparently the latter is unused by other projects. Possibly it's broken
because of that, and because of Intel's inability to provide anything
non-broken in relation to video.
The new code basically uses the X11 output method on an in-memory pixmap,
and maps this pixmap as a texture using standard GLX mechanisms. This
requires a lot of X11 and GLX boilerplate, so the code grows. (I don't
know why libva's GLX interop doesn't just do the same under the hood,
instead of bothering the world with their broken/unmaintained "old"
method, whatever it did. I suspect that Intel programmers are just
genuine sadists.)
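The GLX side of this boils down to the GLX_EXT_texture_from_pixmap extension; a rough sketch (FBConfig selection and error handling omitted, not the actual mpv code):

    #include <GL/glx.h>
    #include <GL/glxext.h>

    // The vaapi side renders into "pixmap" via vaPutSurface(); here the
    // pixmap is wrapped as a GLXPixmap and bound to a GL texture.
    static GLXPixmap bind_pixmap_as_texture(Display *dpy, GLXFBConfig fbc,
                                            Pixmap pixmap, GLuint texture)
    {
        const int attribs[] = {
            GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
            GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
            None,
        };
        GLXPixmap glx_pixmap = glXCreatePixmap(dpy, fbc, pixmap, attribs);

        PFNGLXBINDTEXIMAGEEXTPROC BindTexImageEXT =
            (PFNGLXBINDTEXIMAGEEXTPROC)
                glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");

        glBindTexture(GL_TEXTURE_2D, texture);
        BindTexImageEXT(dpy, glx_pixmap, GLX_FRONT_LEFT_EXT, NULL);
        // ...draw with the texture, then release with glXReleaseTexImageEXT()
        // and destroy the GLXPixmap when done.
        return glx_pixmap;
    }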
This change was suggested in issue #1765.
The old GLX support is removed, as it's redundant and broken anyway.
One remaining issue is that the first vaPutSurface() call fails with an
unknown error. It returns -1, which is pretty strange, because vaapi
error codes are normally positive. It happened with the old GLX code
too, but does not happen with vo_vaapi. I couldn't find out why.
This is a peculiar filter I stumbled upon while playing around with
windows; it removes aliasing almost completely while not ringing at all.
The downside is that it's quite blurry, but at high resolutions it's not
so noticeable.
This merges all of the scaler-related options into a single
configuration struct, and also cleans up the way they're passed through
the code. (For example, the scaler index is no longer threaded through
pass_sample, just the scaler configuration itself, and there's no longer
duplication of the params etc.)
In addition, this commit makes scale-down more principled, and turns it
into a scaler in its own right - so there's no longer an ugly separation
between scale and scale-down in the code.
Finally, the radius stuff has been made more proper - filters always
have a radius now (there's no more radius -1), and get a new .resizable
attribute instead for when it's tunable.
User-visible changes:
1. scale-down has been renamed dscale and now has its own set of config
options (dscale-param1, dscale-radius, etc.) instead of reusing
scale-param1 (which was arguably a bug).
2. The default radius is no longer fixed at 3; instead, each filter's
preferred radius is used. (Scalers with a default radius other than 3
include sinc, gaussian, box and triangle.)
3. scale-radius etc. now goes down to 0.5, rather than 1.0. 0.5 is the
smallest radius that theoretically makes sense, and indeed it's used
by at least one filter (nearest).
Apart from that, the changes should be internal only.
Note that this sets up for the refactor discussed in #1720, which would
be to merge scaler and window configurations (including parameters etc.)
into a single, simplified string. In the code, this would now basically
just mean getting rid of all the OPT_FLOATRANGE etc. lines related to
scalers and replacing them by a single function that parses a string and
updates the struct scaler_config as appropriate.
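The consolidated per-scaler configuration might look roughly like this (field names are guessed from the options above, not necessarily the real struct scaler_config):

    // Guessed sketch of a consolidated per-scaler configuration.
    struct scaler_config {
        const char *name;   // filter kernel, e.g. "spline36" (scale/dscale)
        float params[2];    // scale-param1 / scale-param2 (or dscale-*)
        float radius;       // scale-radius, >= 0.5; filter default if unset
    };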
This makes the core much more elegant, reusable, reconfigurable and also
allows us to more easily add aliases for specific configurations.
Furthermore, this lets us apply a generic blur factor / window function
to arbitrary filters, so we can finally "mix and match" in order to
fine-tune windowing functions.
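Conceptually, the combined weight is the base kernel stretched by the blur factor and multiplied by the window evaluated over the radius (simplified sketch, not the actual filter_kernels.c code):

    #include <math.h>

    // Simplified sketch of mixing an arbitrary base kernel with a generic
    // window function and blur factor.
    static double windowed_weight(double (*kernel)(double),
                                  double (*window)(double),
                                  double x, double radius, double blur)
    {
        if (fabs(x) >= radius)
            return 0.0;
        return kernel(x / blur) * window(x / radius);
    }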
A few notes are in order:
1. The current system for configuring scalers is ugly and rapidly
getting unwieldy. I modified the man page to make it a bit more
bearable, but long-term we have to do something about it; especially
since...
2. There's currently no way to affect the blur factor or parameters of
the window functions themselves. For example, I can't actually
fine-tune the kaiser window's param1, since there's simply no way to
do so in the current API - even though filter_kernels.c supports it
just fine!
3. This removes some lesser-used filters (especially those which are
purely window functions to begin with). If anybody asks, you can get
e.g. the old behavior of scale=hanning by using
scale=box:scale-window=hanning:scale-radius=1 (and yes, the result is
just as terrible as that sounds - which is why nobody should have
been using them in the first place).
4. This changes the semantics of the "triangle" scaler slightly - it now
has an arbitrary radius. This can possibly produce weird results for
people who were previously using scale-down=triangle, especially in
combination with scale-radius (for the usual upscaling). The correct
fix for this is to use scale-down=bilinear_slow instead, which is an
alias for triangle at radius 1.
Regarding the last point, in the future I want to make it so that
filters have a filter-specific "preferred radius" (for the ones that
are arbitrarily tunable), once the configuration system for filters has
been redesigned (in particular in a way that will let us separate scale
and scale-down cleanly). That way, "triangle" can simply have a
preferred radius of 1 by default, while still being tunable (rather
than the default radius always being hard-coded to 3).
Rarely used and essentially useless. The only VO for which this was
implemented correctly and for which this did anything was vo_xv, but you
shouldn't use vo_xv anyway (plus it supports BT.601 only, plus a
vendor-specific extension for BT.709, whose presence this function
essentially reported - use xvinfo instead).
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy-back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones if you're trying this on X11; but X11
is even more useless on the RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (which hopefully will not be the case
on desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases of broken decoding). So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
Currently, the code just skips CMS completely. This commit treats them
as sRGB by default instead.
This also refactors much of the color management code to make it more
generalized and re-usable.
This is a minor precaution, because in some cases the number of shader
programs can already hit 10. (chroma merging + separated cscale/scale
+ sub blending (interpolated) + sub blending (non-interpolated)
+ output (interpolated) + output (non-interpolated) + OSD)
This has a number of user-visible changes:
1. A new flag blend-subtitles (default on for opengl-hq) to control this
behavior.
2. The OSD itself will not be color-managed or affected by gamma
controls. To get subtitle CMS/gamma, blend-subtitles must be used.
3. When enabled, this makes subtitles cleanly interpolated by
:interpolation, and also dithered etc. (just like the normal output).
Signed-off-by: wm4 <wm4@nowhere>