With --idle --force-window, or when started from the bundle, the cocoa
code dropped the first frame. This sometimes resulted in a black frame
on start.
The reason was that the live resizing/redrawing code was invoked, which
simply set skip_swap_buffer to false, blocking the redrawing of whatever
was going to be rendered next. Normally this is done so that the
following works:
1. vo_opengl draws a frame, releases the GL lock
2. live resizing kicks in, redraws the frame
3. vo_opengl wants to call SwapBuffers, which would display a stale
buffer, overwriting what the live resizing code drew
This is solved by setting skip_swap_buffer in 2., and querying it in 3.
Fix this by resetting the skip_swap_buffer at a known good point: when
vo_opengl starts drawing a new frame.
The start_frame function returns bool, so that it can be merged with
is_active in a following commit.
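A minimal sketch of the flag discipline described above (names and flag
polarity are illustrative, not the actual cocoa code):

    #include <stdbool.h>

    struct priv {
        bool skip_swap_buffer;
    };

    static void live_resize_redraw(struct priv *p)
    {
        // 2.: live resizing displays the current frame itself, so the
        // next SwapBuffers from vo_opengl would show a stale buffer
        p->skip_swap_buffer = true;
    }

    static bool start_frame(struct priv *p)
    {
        // known good point: a new frame is about to be drawn, so any
        // previously set skip flag no longer applies
        p->skip_swap_buffer = false;
        return true;
    }

    static void swap_buffers(struct priv *p)
    {
        if (p->skip_swap_buffer) // 3.: queried here
            return;
        // ... the actual buffer swap ...
    }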
Commit f1746741de changed the drop
logic to have more slack (drop more frames, but less frequently) to
prevent drops due to timing jitter when the clip and screen have similar
rates.
However, if the clip has higher rate than the screen (or just higher
playback rate), then that policy hurts smoothness since these "chunked
drops" look worse than one frame drop at a time.
This patch restores the old drop logic when the playback frame rate is
more than ~5% above the screen refresh rate, and solves this issue.
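As a sketch, the policy switch amounts to a rate check like this
(hypothetical helper; 1.05 being the ~5% threshold):

    // chunked drops only when clip and screen rates are similar;
    // above ~5% mismatch, drop one frame at a time as before
    static bool use_chunked_drops(double playback_fps, double display_fps)
    {
        return playback_fps <= display_fps * 1.05;
    }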
Fixes #1897
Also factor the display size initialization into a separate function.
For some reason this seems to work, although setting the background
color using this 1x1 pixel bitmap does not work. I blame the RPI for
being a terrible piece of hardware with even worse drivers.
It's weird that this basically adjusts the contrast between luma and
chroma, and not blackness.
This is more in line with the behavior of libswscale, the vdpau
"procamp" (which mpv doesn't use), and Xv.
Right now, the default behavior is to pick the numerically lowest screen
ID that overlaps the window in any way - but this means that mpv will
decide to pick an ICC profile in a pretty arbitrary way even if the
window only overlaps another screen by a single pixel.
The new behavior is to query it based on the center of the window
instead.
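A sketch of the center-based query (stand-in types, not the actual
platform code):

    struct rect { int x0, y0, x1, y1; };

    static bool contains(const struct rect *r, int x, int y)
    {
        return x >= r->x0 && x < r->x1 && y >= r->y0 && y < r->y1;
    }

    static int screen_for_window(const struct rect *screens,
                                 int num_screens, const struct rect *win)
    {
        int cx = (win->x0 + win->x1) / 2, cy = (win->y0 + win->y1) / 2;
        for (int n = 0; n < num_screens; n++) {
            if (contains(&screens[n], cx, cy))
                return n; // the screen containing the window center
        }
        return 0; // center is offscreen; fall back to the first screen
    }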
vo_opengl (or gl_hwdec_vdpau.c to be specific) calls
mp_vdpau_mixer_render() with video_rect=NULL, which means to use the
full surface. This is incorrect if the surface is actually cropped, as
can happen with h264. In this case, it was rendering the parts
outside of the image.
Fix it by making this case use the cropped size instead.
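The fix amounts to a fallback like this (a sketch; field names are
illustrative):

    // use the cropped image size, not the full (aligned) surface size
    VdpRect fallback = {0, 0, mpi->w, mpi->h};
    if (!video_rect)
        video_rect = &fallback;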
Alternative fix for PR #1863.
MP_IMGFIELD_TOP/MP_IMGFIELD_BOTTOM were completely unused, and
MP_IMGFIELD_ORDERED was always set (even though vf_vdpaupp.c strangely
checked for the latter).
If the user switched terminals frantically, mpv could get SIGUSR1 twice
in a row, which, up until now, resulted in destroying the CRTC twice.
This caused it to segfault. After this fix, a double SIGUSR1 is ignored.
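A sketch of the guard (assuming a simple flag that tracks whether the
CRTC is still held):

    #include <signal.h>

    static volatile sig_atomic_t crtc_active = 1;

    static void release_vt_handler(int sig)
    {
        if (!crtc_active)
            return; // second SIGUSR1 in a row: already released, ignore
        crtc_active = 0;
        // ... restore the saved CRTC state ...
    }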
Vapoursynth made an incompatible API change: clips with unknown length
are not supported anymore. In fact, Vapoursynth abort()s the program
(which, by the way, invalidates all of its claims of API/ABI stability).
So add some nonsense to make it work again.
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
This prevents the machine from going to sleep while a video-only stream
is playing. When audio is playing, the audio stack should make this
request for us.
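Assuming this is the Win32 path, the request boils down to
SetThreadExecutionState():

    #include <stdbool.h>
    #include <windows.h>

    static void update_power_request(bool video_active)
    {
        if (video_active)
            // keep the display (and thus the machine) awake while a
            // video-only stream plays
            SetThreadExecutionState(ES_CONTINUOUS | ES_DISPLAY_REQUIRED);
        else
            SetThreadExecutionState(ES_CONTINUOUS); // clear the request
    }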
Logging was meant to be silenced only when the user uses the connector
auto-detection feature. If the user supplies a connector ID manually,
they should see the exact reason why connecting to this specific
connector failed.
It's entirely useless, especially now that vo.c handles screenshots in a
generic way, and requires no special VO support. There are some
potential weird use-cases, but I've never actually seen it used.
We already use 2 screensaver APIs when attempting to disable the
screensaver: XResetScreenSaver() (from xlib) and XScreenSaverSuspend
(from the X11 Screen Saver extension). Neither of these actually works.
On modern desktop Linux, we are expected to make dbus calls using some
freedesktop-defined protocol (and possibly we'd have to fallback to a
Gnome specific one). At least xscreensaver doesn't respect the "old"
APIs either.
Solve this by running the xdg-screensaver script. It's a terrible, ugly
piece of shit (just read the script if you disagree), but at least it
appears to work everywhere. It's also simpler than involving various
dbus client libraries.
I hope this can replace the --heartbeat-cmd option, and maybe we could
remove our own DPMS/XSS code too.
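What running the script amounts to, as a sketch (the real code would
spawn it asynchronously instead of using system()):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void set_screensaver(unsigned long xid, bool enable)
    {
        char cmd[80];
        snprintf(cmd, sizeof(cmd), "xdg-screensaver %s 0x%lx",
                 enable ? "resume" : "suspend", xid);
        system(cmd); // "suspend" while playing, "resume" when done
    }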
This is optional, but ensures that linking with -Wl,--as-needed does
not drop the MMAL VC driver. The driver normally "registers" itself
in the library constructor, but since no symbols are explicitly
referenced, the linker could remove it with as-needed enabled.
Not sure why this was so roundabout; probably to keep spam down if the
user's OpenGL drivers are crap (but then just don't enable extended
features), or because the "Disabling..." text was so repetitious.
But there doesn't seem to be a good reason after all. Also, this could
already overflow the fixed size disabled[] array. Just print the
messages directly.
Because gcc (and clang) is a goddamn PITA and unnecessarily warns if
the universal initializer for structs is used (like mp_image x = {})
and the first member of the struct is also a struct, move the w/h
fields to the top.
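The gist of it (mp_image_params standing in for the real first member):

    struct mp_image_params { int dummy; };

    // old layout: first member is a struct, so a `= {0}` initializer
    // draws -Wmissing-braces from gcc/clang
    struct image_old { struct mp_image_params params; int w, h; };
    // new layout: scalar w/h first, so the universal initializer is quiet
    struct image_new { int w, h; struct mp_image_params params; };

    struct image_old a = {0}; // warning: missing braces around initializer
    struct image_new b = {0}; // no warning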
They are redundant. They were used by draw_bmp.c only, and only in a
special code path that 1. used fixed image formats, and 2. had image
sizes perfectly aligned to chroma boundaries (so computing the chroma
width/height is trivial).
This could help in cases where the DWM (Windows desktop compositor) adds
another layer of buffering, and therefore the SwapBuffers timing could
get messed up.
Signed-off-by: wm4 <wm4@nowhere>
On my Windows system this allows smoothmotion to also work perfectly in
windowed mode. There's no real right or wrong here, the only goal being
to always hit the next vsync. However, in cases where vsync timing is
jittery (as could happen with DWM), this patch aims for the middle of
the vsync cycle, to be affected as little as possible by such jitter.
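As a sketch, with a hypothetical helper, aiming for the middle just
offsets the comparison point by half an interval:

    // jitter of up to half an interval in either direction no longer
    // changes which vsync the frame ends up on
    static double target_time(double next_vsync, double vsync_interval)
    {
        return next_vsync + vsync_interval / 2;
    }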
This adds 1 vsync interval of "slack" before deciding to drop the first
frame. It should help in cases of timing jitter (sleep duration,
container timestamps, compositor vsync timing, etc.). Once the drop
threshold has been crossed, it will keep dropping until perfect timing
alignment. This prevents crossing the drop threshold back and forth
repeatedly, and is therefore more resilient to frame drops.
Increase the default queue size. This helps with "missed" frames due to
the asynchronous nature of the API. All the other VOs are synchronous,
so if rendering and displaying takes a while, the common code in vo.c
will be blocked until it can continue. But with opengl-cb, vo.c might
immediately push the next ready frame, which causes the current frame
to be dropped _if_ it wasn't rendered yet.
One could fix this by making vo.c wait a while (until the API user calls
the render function, which pulls the frame). But setting the default
queue size to 2 seems much simpler: instead of dropping the frame, it
will be pushed to the API user once the next renderer call finishes.
(This is still a bit strange, and will hopefully be cleaned up when
video scheduling is redone, but for now this appears to deliver
relatively good results.)
This matters for png screenshots. We used to hardcode rgb24, but
libavformat's png encoder can do much more. Use the image format list
provided by the encoder, and select the best format from it (according
to the source format).
As a consequence, rgb48 (i.e. 16 bit per component) will be selected if
the source format is e.g. 10 bit yuv. This happens in accordance with
FFmpeg's avcodec_find_best_pix_fmt_of_list() function, which assumes
that 16 bit rgb should be preferred for 10 bit yuv.
This also causes it to print this message in this case:
[ffmpeg] swscaler: full chroma interpolation for destination format 'rgb48be' not yet implemented
I'm not 100% sure whether this is a problem.
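The selection is a thin wrapper around the FFmpeg helper; a sketch:

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat pick_format(const AVCodec *codec,
                                          enum AVPixelFormat src)
    {
        if (!codec->pix_fmts)
            return AV_PIX_FMT_RGB24; // the old hardcoded choice
        // pick the best destination format for this source format
        return avcodec_find_best_pix_fmt_of_list(codec->pix_fmts, src,
                                                 0, NULL);
    }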
Use texture-from-pixmap instead of vaapi's "native" GLX support.
Apparently the latter is unused by other projects. Possibly it's broken
due to that, and due to Intel's inability to provide anything non-broken
in relation to video.
The new code basically uses the X11 output method on an in-memory
pixmap,
and maps this pixmap as texture using standard GLX mechanisms. This
requires a lot of X11 and GLX boilerplate, so the code grows. (I don't
know why libva's GLX interop doesn't just do the same under the hood,
instead of bothering the world with their broken/unmaintained "old"
method, whatever it did. I suspect that Intel programmers are just
genuine sadists.)
This change was suggested in issue #1765.
The old GLX support is removed, as it's redundant and broken anyway.
One remaining issue is that the first vaPutSurface() call fails with an
unknown error. It returns -1, which is pretty strange, because vaapi
error codes are normally positive. It happened with the old GLX code
too, but does not happen with vo_vaapi. I couldn't find out why.
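The core of the texture-from-pixmap mapping looks roughly like this (a
fragment; fbconfig selection and error handling omitted, and the EXT
functions must be resolved via glXGetProcAddress):

    const int attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
        None,
    };
    GLXPixmap glxpixmap = glXCreatePixmap(display, fbconfig, pixmap,
                                          attribs);

    // vaPutSurface() renders the video into the X11 pixmap, then:
    glBindTexture(GL_TEXTURE_2D, texture);
    glXBindTexImageEXT(display, glxpixmap, GLX_FRONT_LEFT_EXT, NULL);
    // ... draw with the texture ...
    glXReleaseTexImageEXT(display, glxpixmap, GLX_FRONT_LEFT_EXT);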
This is a peculiar filter I stumbled upon while playing around with
windows, it removes aliasing almost completely while not ringing at all.
The downside is that it's quite blurry, but at high resolutions it's not
so noticeable.
This merges all of the scaler-related options into a single
configuration struct, and also cleans up the way they're passed through
the code. (For example, the scaler index is no longer threaded through
pass_sample, just the scaler configuration itself, and there's no longer
duplication of the params etc.)
In addition, this commit makes scale-down more principled, and turns it
into a scaler in its own right - so there's no longer an ugly separation
between scale and scale-down in the code.
Finally, the radius stuff has been made more proper - filters always
have a radius now (there's no more radius -1), and get a new .resizable
attribute instead for when it's tunable.
User-visible changes:
1. scale-down has been renamed dscale and now has its own set of config
options (dscale-param1, dscale-radius) etc., instead of reusing
scale-param1 (which was arguably a bug).
2. The default radius is no longer fixed at 3, but instead uses that
filter's preferred radius by default. (Scalers with a default radius
other than 3 include sinc, gaussian, box and triangle)
3. scale-radius etc. now goes down to 0.5, rather than 1.0. 0.5 is the
smallest radius that theoretically makes sense, and indeed it's used
by at least one filter (nearest).
Apart from that, it should just be internal changes only.
Note that this sets up for the refactor discussed in #1720, which would
be to merge scaler and window configurations (include parameters etc.)
into a single, simplified string. In the code, this would now basically
just mean getting rid of all the OPT_FLOATRANGE etc. lines related to
scalers and replacing them by a single function that parses a string and
updates the struct scaler_config as appropriate.
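Conceptually, the unified configuration is something like this (a
sketch; the real struct scaler_config may differ):

    struct scaler_config {
        const char *kernel;  // e.g. "spline36"
        const char *window;  // e.g. "hanning"; empty = kernel's default
        float radius;        // >= 0.5; 0 = the filter's preferred radius
        float params[2];     // scale-param1/scale-param2
    };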
This makes the core much more elegant, reusable, reconfigurable and also
allows us to more easily add aliases for specific configurations.
Furthermore, this lets us apply a generic blur factor / window function
to arbitrary filters, so we can finally "mix and match" in order to
fine-tune windowing functions.
A few notes are in order:
1. The current system for configuring scalers is ugly and rapidly
getting unwieldy. I modified the man page to make it a bit more
bearable, but long-term we have to do something about it; especially
since..
2. There's currently no way to affect the blur factor or parameters of
the window functions themselves. For example, I can't actually
fine-tune the kaiser window's param1, since there's simply no way to
do so in the current API - even though filter_kernels.c supports it
just fine!
3. This removes some lesser used filters (especially those which are
purely window functions to begin with). If anybody asks, you can get
e.g. the old behavior of scale=hanning by using
scale=box:scale-window=hanning:scale-radius=1 (and yes, the result is
just as terrible as that sounds - which is why nobody should have
been using them in the first place).
4. This changes the semantics of the "triangle" scaler slightly - it now
has an arbitrary radius. This can possibly produce weird results for
people who were previously using scale-down=triangle, especially if
in combination with scale-radius (for the usual upscaling). The
correct fix for this is to use scale-down=bilinear_slow instead,
which is an alias for triangle at radius 1.
In regards to the last point, in future I want to make it so that
filters have a filter-specific "preferred radius" (for the ones that
are arbitrarily tunable), once the configuration system for filters has
been redesigned (in particular in a way that will let us separate scale
and scale-down cleanly). That way, "triangle" can simply have the
preferred radius of 1 by default, while still being tunable. (Rather
than the default radius being hard-coded to 3 always)
Use OPT_CHOICE_C() instead of the custom parser. The functionality is
pretty much equivalent.
(On a side note, it seems --video-stereo-mode can't be removed, because
it controls whether to "reduce" stereo video to mono, which is also the
default. In fact I'm not sure how this should be handled at all.)
Rarely used and essentially useless. The only VO for which this was
implemented correctly and for which this did anything was vo_xv, but you
shouldn't use vo_xv anyway (plus it supports BT.601 only, plus a vendor
specific extension for BT.709, whose presence this function essentially
reported - use xvinfo instead).
Remove the colorspace-related top-level options, add them to vf_format.
They are rather obscure and not needed often, so it's better to get them
out of the way. In particular, this gets rid of the semi-complicated
logic in command.c (most of which was needed for OSD display and the
direct feedback from the VO). It removes the duplicated color-related
name mappings.
This removes the ability to write the colormatrix and related
properties. Since filters can be changed at runtime, there's no loss of
functionality, except that you can't cycle automatically through the
color constants anymore (but who needs to do this).
This also changes the type of the mp_csp_names and related variables, so
they can directly be used with OPT_CHOICE. This probably ended up a bit
awkward, for the sake of not adding a new option type which would have
used the previous format.
It was "by design" possible to make mpv crash if the parameters didn't
make enough sense, like "format=rgb24:yuv420p". While forcing the format
has some minor (rather questionable) use for debugging, allowing it to
crash is just stupid.
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled (which hopefully will be disabled on desktop
Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding.) So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
Currently, the code just skipped CMS completely. This commit treats them
as sRGB by default, instead.
This also refactors much of the color management code to make it more
generalized and re-usable.
This is a minor precaution, because in some cases the number of shader
programs can already hit 10. (chroma merging + separated cscale/scale
+ sub blending (interpolated) + sub blending (non-interpolated)
+ output (interpolated) + output (non-interpolated) + OSD)
This has a number of user-visible changes:
1. A new flag blend-subtitles (default on for opengl-hq) to control this
behavior.
2. The OSD itself will not be color managed or affected by
gamma controls. To get subtitle CMS/gamma, blend-subtitles must be
used.
3. When enabled, this will make subtitles be cleanly interpolated by
:interpolation, and also dithered etc. (just like the normal output).
Signed-off-by: wm4 <wm4@nowhere>
Bilinear scaling is not a suitable default for something named "hq"; the
whole reason this was done in the past was because cscale used to be
obscenely slow. This is no longer the case, with cscale being nearly
free.
Handling this perfectly with VapourSynth is probably not possible: you
either need to tell it the total number of input frames in advance, or
deliver an infinite stream. With playback, EOF can happen at an
unpredictable point, for which the VapourSynth API has no mechanism at
all. We handle EOF by returning an error to the filter, which will make
the filter return all pending frame callbacks.
We still can try to handle it approximately: if the filter requests a
frame past EOF, then send it an error. This seems to work relatively
well with filters which don't request future frames.
With the previous commit, we have no need anymore to check a part of an
extension string (for ignoring a prefix). So check the extension string
properly, instead of just using the broken old strstr() method, which
could accidentally ignore prefixes or suffixes. Do this by extending
the check to whether the extension name is properly delimited by spaces
or string start/end.
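A sketch of such a delimiter-aware check:

    #include <stdbool.h>
    #include <string.h>

    static bool has_extension(const char *exts, const char *name)
    {
        size_t len = strlen(name);
        for (const char *s = exts; (s = strstr(s, name)); s += len) {
            // accept only matches delimited by spaces or string
            // start/end, so "GL_foo" does not match "GL_foobar"
            if ((s == exts || s[-1] == ' ') &&
                (s[len] == ' ' || s[len] == '\0'))
                return true;
        }
        return false;
    }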
Instead of somehow looking for the substring "_swap_control" and trying
several arbitrary function names, do it cleanly. The old approach has
the problem that it's not very exact, and may even load a pointer to a
function which doesn't exist. (Some GL implementations like Mesa return
function pointers even for functions which don't exist, and calling them
crashes.)
I couldn't find any evidence that glXSwapInterval, wglSwapIntervalSGI,
or wglSwapInterval actually exist, so don't include them. They were
carried over from MPlayer times.
To make diagnostics easier, print a warning in verbose mode if the
function could not be loaded.
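A sketch of the clean approach for the GLX case (WGL is analogous; this
reuses an extension check like the one sketched above, with hypothetical
field names):

    if (has_extension(glx_extensions, "GLX_EXT_swap_control")) {
        p->SwapInterval = (void *)glXGetProcAddress(
            (const GLubyte *)"glXSwapIntervalEXT");
    }
    if (!p->SwapInterval)
        MP_VERBOSE(vo, "Could not load a swap interval function.\n");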
Drop support for GL_EXT_framebuffer_object. It has 2 problems: semantics
might be slightly different from the "proper" GL_ARB_framebuffer_object
extension (but is likely completely untested), and also our extension
loader is too dumb to load the same group of function pointers
represented by different extensions only once.
When not receiving frame callbacks, we should not draw anything, to
avoid blocking the OpenGL renderer. We do this by extending the gl
context API with a new optional function 'is_active', which indicates
whether OpenGL renderers should draw or not.
This fixes issue #249.
Define the frame callback logic in wayland_common.c, as it should be
used by the OpenGL renderer as well.
Preferably, drawing should be skipped entirely when no frame callbacks
are received. However, for now only the buffer swap is skipped.
Like FFmpeg/Libav do. It seems not all code can actually deal with this
situation, so it's better to shift the special-cases to code which needs
it (possibly OSD code; screenshots of 0x0 windows would just fail).
I guess we don't really care whether this particular function succeeds.
If it fails, it must be completely broken anyway and it would not matter
much to us.
Unlike other VOs, this rendered OSD even while no VO was created
(because the renderer lives as long as the API user wants). Change this,
and refactor the code so that the OSD object is accessible only while
the VO is created.
(There is a short time where the OSD can still be accessed even after VO
destruction - this is not a race condition, though it's inelegant and
unfortunately unavoidable.)
gl_video_set_options() didn't update it, so the default value set on
initialization was used. Fix by always setting the clear color before
the clear command; it's slightly easier to follow too.
Codecs for hardware acceleration are not blacklisted, but whitelisted.
Also, if this message is printed, the codec might not have any hardware
acceleration support in the first place.
Trying to handle such video is almost worthless, but it was requested by
at least 2 users.
If there are no timestamps, enable byte seeking by setting
ts_resets_possible. Use the video FPS (wherever it comes from) and the
audio samplerate for timing. The latter was already done by making the
first packet emit DTS=0; remove this again and do it "properly" in a
higher level.
Normally, the size of an image plane is assumed to be stride*height. But
in theory, if stride is larger than width*bpp, the last line might not
be padded, simply because it's not necessary. FFmpeg's or mpv's image
allocators always guarantee that this padding exists (it wastes some
insignificant memory for avoiding such subtle issues), but some other
libraries might not.
I suspect one such case might be Xv via vo_xv (see #1698), although my X
server appears to provide full padding. In any case, it can't harm.
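In other words, code should only assume this much of a plane is
addressable (variable names illustrative):

    // safe plane size: don't assume the last line is padded to stride
    size_t safe_size = (size_t)stride * (height - 1) + line_bytes;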
There's literally no reason why these functions have to be inline (they
might be performance critical, but then the function call overhead isn't
going to matter at all).
Uninline them and move them to mp_image.c. Drop the header file and fix
all uses of it.
For some reason there were two points in the code where it warned
against non-monotonic video PTS. The one in video.c triggered on PTS
going backwards or making large jumps forwards, while dec_video.c
triggered on PTS going backwards or PTS not changing. Merge them into a
single check, which warns against all cases.
There was a somewhat obscure optimization in the OSD and subtitle
rendering path: if only the position of the sub-images changed, and not
the actual image data, uploading of the image data could be skipped. In
theory, this could speed up things like scrolling subtitles.
But it turns out that even in the rare cases subtitles have such scrolls
or axis-aligned movement, modern libass rarely signals this kind of
change. Possibly this is because of sub-pixel handling and such, which
break this.
As such, it's a worthless optimization and just introduces additional
complexity and subtle bugs (especially in cases libass does the
opposite: incorrectly signaling a position change only, which happened
before). Remove this optimization, and rename bitmap_pos_id to
change_id.
This played e.g. a 1264x722 file as 1264x720. There was some code which
dropped the aspect ratio if the video (in original resolution) wasn't
scaled by more than 4 pixels. Commit 5f3c3f8c introduced this (although
I'm not really sure what the code replaced by it did).
Just remove this "feature".
We now update uniforms every time, so we should try to reduce the number
of uniforms to avoid performance penalties. (Originally, some caching
was planned, but it looks like it would be too complicated to implement
compared to the expected gains.)
OPT_REPLACED can't specify option values or multiple options. Change to
OPT_REMOVED. Also, target-prim doesn't have an srgb option. BT.709 uses
sRGB primaries, so use it instead.
The default scaling was a slight bit too low, which could cause buffer
underruns in some cases.
This should improve the result when using tscale filters other than
oversample. The oversample case should be unaffected.
This adds extra debugging output for buffer underruns, to help track
down possible queueing issues. It also inverts the numeric output for
tscale=oversample to make more sense, without changing the logic.
This moves the color management code out of pass_render_main (which is
now dedicated solely to up/downscaling and hence renamed pass_scale_main)
and into a new function, which gets called from pass_draw_to_screen
instead.
This makes more sense from a logical standpoint, and also means that we
interpolate in linear RGB, before color management - rather than after
it, which is significantly better for color accuracy and probably also
interpolation quality.
This replaces the old smoothmotion code by a more flexible tscale
option, which essentially allows any scaler to be used for interpolating
frames. (The actual "smoothmotion" scaler which behaves identical to the
old code does not currently exist, but it will be re-added in a later commit)
The only odd thing is that larger filters require a larger queue size
offset, which is currently set dynamically as it introduces some issues
when pausing or framestepping. Filters with a lower radius are not
affected as much, so this is identical to the old smoothmotion if the
smoothmotion interpolator is used.
Also the size is now a simple #define that can easily be changed later.
This is done for smoothmotion, which might want to blend more than 4
frames at once, depending on the setting.
Since the gl_rework merge, this started to print some OpenGL errors when
using vdpau hardware decoding with vo_opengl smoothmotion. This happens
because some hwdec unmap_image calls were not paired with a map_image
call. Unlike the old vo_opengl, the new code does not do this out of
convenience (it would be a pain to track this exactly). It was triggered
by smoothmotion, because not every rendered frame actually has a new
input video frame (i.e. no map_image call, but unmap_image was called
anyway).
Solve this by handling unmapping differently in the vdpau code. The next
commit will remove the unmap_image callback completely.
Fixes #1687.
This caused complaints because the fps was basically rounded on
microsecond boundaries in the vsync interval (it seemed convenient to
store only the vsync interval). So store the fps as float too, and let
the "display-fps" property return it directly.
Even the lowest supported GL versions have arrays. This test was for
returning arrays from functions, which didn't work in lower GL versions,
but we don't need it anymore.
Previously, mpv would hide the cursor when the autohide timer expired,
even if the window menu was open. This made it difficult to use the menu
with the mouse.
When handling VOCTRL_SET_CURSOR_VISIBILITY, instead of determining
whether to call SetCursor by checking if the cursor is in the client
area, call it based on the parameters to the last WM_SETCURSOR message.
When the window enters "menu mode," it gets a WM_SETCURSOR message with
HIWORD(lParam) set to 0 to indicate that the cursor shouldn't be set.
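A sketch of the resulting message handling (field names are
hypothetical):

    case WM_SETCURSOR:
        // LOWORD(lParam) is the hit-test result; HIWORD(lParam) is the
        // mouse message that triggered this, and is 0 in "menu mode",
        // where the cursor must not be touched
        w32->can_set_cursor = LOWORD(lParam) == HTCLIENT &&
                              HIWORD(lParam) != 0;
        if (w32->can_set_cursor && !w32->cursor_visible) {
            SetCursor(NULL); // hide only when we're allowed to set it
            return TRUE;
        }
        break;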
Requested change in behavior.
Note that we set the assumed "infinite" display_fps to 1e6, which
conveniently lets vo_get_vsync_interval() return a dummy value of 1,
which can be easily checked against, and still avoids doing math with
float INFs.
I'm not comfortable with VOCTRL_GET_DISPLAY_FPS being called every
frame.
This requires the VO to set VO_EVENT_WIN_STATE if the FPS could have
changed. At least the X11 backend does this.
This adds stuff related to gamma, linear light, sigmoid, BT.2020-CL,
etc, as well as color management. Also adds a new gamma function (gamma22).
This adds new parameters to configure the CMS settings, in particular
letting us target simple colorspaces without requiring usage of a 3DLUT.
This adds smoothmotion. Mostly working, but it's still sensitive to
timing issues. It's based on an actual queue now, but the queue size
is kept small to avoid larger amounts of latency.
Also makes “upscale before blending” the default strategy.
This is justified because the "render after blending" thing doesn't seme
to work consistently any way (introduces stutter due to the way vsync
timing works, or something), so this behavior is a bit closer to master
and makes pausing/unpausing less weird/jumpy.
This adds the remaining scalers, including bicubic_fast, sharpen3,
sharpen5, polar filters and antiringing. Apparently, sharpen3/5 also
consult scale-param1, which was undocumented in master.
This also implements cropping and chroma transformation, plus
rotation/flipping. These are inherently part of the same logic, although
it's a bit rough around the edges in some cases, mainly due to the fallback
code paths (for bilinear scaling without indirection).
The basic idea is to use dynamically generated shaders instead of a
single monolithic file + a ton of ifdefs. Instead of having to setup
every aspect of it separately (like compiling shaders, setting uniforms,
performing the actual rendering steps, the GLSL parts), we generate the
GLSL on the fly, and perform the rendering at the same time. The GLSL
is regenerated every frame, but the actual compiled OpenGL-level shaders
are cached, which makes it fast again. Almost all logic can be in a
single place.
The new code is significantly more flexible, which allows us to improve
the code clarity, performance and add more features easily.
This commit is incomplete. It drops almost all previous code, and
readds only the most important things (some of them actually buggy).
The next commit will complete it - it's separate to preserve authorship
information.
If you click on a window that doesn't have a focus, a LeaveNotify
followed by an EnterNotify event can be generated. The former will have
mode set to NotifyGrab, the latter to NotifyUngrab. This will make the
player think the mouse left the window, even though this is not the
case. Ignore these and only react to those with mode set to
NotifyNormal.
Probably fixes #1672, and some other strange issues on some WMs.
mpv_opengl_cb_render() is supposed to clear the screen with the
background color if there's no video... I think.
It didn't do this, because although uninit() requested gl_video_config()
to be called, this didn't happen, because this function checks whether
the VO is set - and it's unset after uninit() releases the lock.
Also call the user wakeup callback in this situation, so the user
actually redraws immediately.
Do not rely on the pointed-to argument to be initialized; VOCTRLs are
supposed to completely overwrite them on success (or not to touch them
on failure).
The currently only caller of VOCTRL_GET_WIN_STATE initializes the value
before calling this, so this is merely about correctness and didn't lead
to any actual bugs.
Add filter parameters to VAAPI deinterlacing filter to actually process
bottom fields instead of deinterlacing top field twice.
Signed-off-by: wm4 <wm4@nowhere>
Some UI elements in OS X – like Launchpad and Dock folders – are
implemented using borderless windows in background daemonized
applications.
When we use these elements, mpv doesn't stop being the active
application, but the mpv window leaves keyWindow state while we use the
OS controls.
This commit just keeps track of the window state to update the cursor
visibility accordingly.
Fixes #513
Make MpvEventsView -signalMousePosition a public method so it can be
called without a compiler warning. Previously, the mouse position would
be reported as (0,0) until the cursor was moved.
mpv would attempt to load ICC profiles several times during VO init
even if no window is displayed. This potentially causes it to load
a profile for a different screen than it is going to be displayed
on, thereby invalidating the profile cache and rebuilding the LUT
every single time.
It would not unload a previously loaded profile when the video
window is moved to a display without an installed profile.
Fix these issues and tweak the log messages a little.
The vaapi equalizer has a custom range, and can have a smaller range
than mpv's normalized video equalizer values. The result is that a vaapi
equalizer value can map to multiple mpv values, so changing a mpv value
by 1 can get "stuck". Fix by remembering the mpv value, and returning it
if it still corresponds to the vaapi value.
Really fixes #1647.
(Why am I even bothering with this irredeemable crap?)
Instead of "vaapi", simply by changing the probe order.
"vaapi" uses the GLX GL interop, which has causing us more problems than
it solved.
Unfortunately this leads also to copying if "--hwdec=auto --vo=vaapi" is
used, even though GLX is not involved in this case - but I don't care
enough to make the probe logic cleverer just for this. You can still get
the zero-copy path with --hwdec=vaapi.
This automatically sets the gamma option depending on lighting conditions
measured from the computer's ambient light sensor.
sRGB – arguably the “sibling” to BT.709 for still images – has a reference
viewing environment defined in its specification (IEC 61966-2-1:1999, see
http://www.color.org/chardata/rgb/srgb.xalter). According to this data, the
assumed ambient illuminance is 64 lux. This is the illuminance where the gamma
that results from ICC color management is correct.
On the other hand, BT.1886 formalizes the gamma level for dim
environments to be 2.40, and Apple resources (WWDC12: 2012 Session 523:
Best practices for
color management) define the BT.1886 dim at 16 lux.
So the logic we apply is:
* >= 64 lux -> 1.961 gamma
* <= 16 lux -> 2.400 gamma
* 16 lux < x < 64 lux -> logarithmic rescale of lux to gamma. The human
perception of illuminance roughly follows a logarithmic scale of lux [1].
[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/dd319008%28v=vs.85%29.aspx
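A sketch of the resulting mapping:

    #include <math.h>

    // map ambient illuminance to gamma: 2.400 at <= 16 lux, 1.961 at
    // >= 64 lux, interpolated on a logarithmic lux scale in between
    static float lux_to_gamma(float lux)
    {
        if (lux <= 16.0f)
            return 2.400f;
        if (lux >= 64.0f)
            return 1.961f;
        float t = (logf(lux) - logf(16.0f)) /
                  (logf(64.0f) - logf(16.0f));
        return 2.400f + t * (1.961f - 2.400f);
    }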
This will be pretty useful to let mpv automatically change VO parameters based
on ambient lighting conditions.
The conversion code and polynomial equation from Apple LMU values to
lux is taken from Firefox: their license, the MPL, is GPL-compatible
and allows relicensing to GPL (the MPL is more liberal).
This warning wasn't overly helpful in the past, and warned against
perfectly fine code. But at least with recent gcc versions, this is the
warning that complains about assignments in if expressions (why???), so
we want to enable it.
Also change all the code this warning complains about for no reason.
Semi-important, because --hwdec=dxva2 outputs NV12, and we really don't
want people to end up with the "old" StretchRect method.
Unfortunately, I couldn't actually get it to work. It seems most D3D
drivers (including the wine D3D implementation) reject D3DFMT_A8L8,
and I could not find any other 2-channel 8 bit Direct3D 9 format. It
seems newer D3D APIs have DXGI_FORMAT_R8G8_UNORM, but there's no way
to get it in D3D9.
Still pushing this; maybe it actually works on some drivers.
This time (there are a lot of times), libswscale randomly ignores
brightness/saturation/contrast settings.
Looking at MPlayer code, it appears the return value of
sws_setColorspaceDetails() signals if changing these settings is
supported at all.
(Nevermind that supporting this feature has almost 0 value, and
obviously eats maintenance time.)
Breaks vo_opengl by default. I'm not able to fix this myself, because I
have no clue about the overcomplicated color management logic. Also,
while this is apparently caused by commit fbacd5, the following commits
all depend on it, so revert them too.
This reverts the following commits:
e141caa97d653b0dd529729c8b3f64fbacd5de31
Fixes #1636.