If the VO didn't support a format output by vf_lavfi, no conversion
filter was inserted, and filter chain creation failed.
This is because vf_lavfi doesn't properly follow the format negotiation
model, which means the format negotiation pass does not catch all cases
where conversion is needed. Specifically, vf_lavfi claims that every
output format is supported for any given input format, but then does
not actually call vf_next_query_format() in reconfig() to check which
format it should use; it simply outputs whatever it gets from
libavfilter.
I think this is ok to avoid excessive complexity in vf_lavfi.c, but it
also means adding more kludges to vf.c. I justify this (and the code
duplication) with the idea that the current filter chain code will die
anyway at some point.
The .log field additions for c->first/c->last are strictly speaking not
needed, but useful for debugging.
Previously, the shared behaviour for each mouse-button message lived at
the bottom of the WndProc. Move it into handle_mouse_down/up functions
(similar to the handle_key_down/up functions). This makes the WndProc
slightly less complicated. There should be no change in behaviour.
Now you can for example do "--vf=hue=h=60" - there is no "hue" filter in
mpv, so libavfilter's will be used.
This has certain caveats (see manpage).
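For example, both of the following end up using libavfilter's hue
filter, the first through the new automatic bridging and the second
through the existing explicit lavfi wrapper (illustrative command
lines):

    mpv --vf=hue=h=60 video.mkv
    mpv --vf=lavfi=[hue=h=60] video.mkv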
The point of this is providing a relatively smooth transition path to
removing our own filter stuff.
The plan is to nuke the custom filter chain completely. It's not clear
what will happen to the still needed builtin filters (mostly hardware
deinterlacing and vf_vapoursynth). Most likely we'll replace them with
a different filter chain concept (whose main purpose will be providing
builtin things and bridging to libavfilter).
The undocumented "warn" options are there to disable deprecation
warnings when the player inserts filters automatically.
The same will be done to audio filters, at a later point.
Now taking a screenshot actually works if libjpeg is disabled at
compile time.
In particular, the AV_PIX_FMT_YUVJ formats (with the "J") cause us
problems. They were deprecated years ago, but the libavcodec jpg
encoder won't accept anything else. This is made worse by the fact that
mpv doesn't have J formats internally - it always uses the color levels
metadata to decide the range instead.
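A rough sketch of what the libavcodec encoding path forces on us
because of this (illustrative, not the actual screenshot code):

    #include <libavcodec/avcodec.h>

    // The MJPEG encoder only accepts the deprecated "J" format, so the
    // encoder context has to be set up with it, even though mpv itself
    // tracks full/limited range via color levels metadata.
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    AVCodecContext *avctx = avcodec_alloc_context3(codec);
    avctx->pix_fmt = AV_PIX_FMT_YUVJ420P;   // full range baked into the format
    avctx->color_range = AVCOL_RANGE_JPEG;  // the metadata we'd rather use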
The old code called reinit_window_state() from the VO thread, which is
not safe. Instead of calling reinit_window_state() when
VO_EVENT_FULLSCREEN_STATE is received, call it from the Win32 thread
when the event is sent.
There are two minor bugs. mpv could try to retrieve the window size
while in fullscreen and would get the fullscreen size; to fix that, we
keep track of the window size before entering fullscreen.
The second small bug concerns HiDPI resolutions and the
--hidpi-window-scale option. We actually want the OSD to show the
proper window scale depending on the hidpi settings. Before, resizing
the window to double the size could show "window-scale: 1.0", or
"window-scale: 0.5" when resizing back to normal size. Now it considers
the backing scale factor and the hidpi option to return a logically
correct window size.
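For instance, assuming a 2x backing scale factor and
--hidpi-window-scale=yes: at the normal (logical) window size the OSD
should show window-scale 1.0 rather than 0.5, and after doubling the
window size it should show 2.0 rather than 1.0, which is what the old
code reported in those cases.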
This fixes a weird behaviour when a borderless window's style mask is
set to a non-borderless style mask. This can happen when cycling the
border or just toggling fullscreen. What happens is that the first
responder is reset to the NSWindow instead of being kept (the events
view in our case), and this happens without the usual
resignFirstResponder/becomeFirstResponder routine.
This is a small workaround that overrides the setStyleMask method. It
keeps the first responder from before the style mask change and
restores it after the new style mask has been applied.
For reasons I can only guess at, some key events can vanish from the
event chain, making mpv seem unresponsive.
After quite some testing I could confirm that the events are present
at the first entry point of the event chain, the sendEvent method of
the Application, and that they vanish at some point afterwards. Now we
use that entry point to grab keyDown and keyUp events. We also stop
propagating those key events to prevent the 'no key input' error sound.
If we ever need the key events somewhere further down the event chain,
we need to start propagating them again; currently this is not
necessary.
DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL might be buggy on some hardware.
Additionally, DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL might be supported on some
Windows 7 systems with the platform update, but it might have poor
performance. In these cases, the user might want to disable the use of
DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL swap chains with --angle-flip=no.
Basically, see the example in input.rst.
This is better than the "old" vf-toggle method, because it doesn't
require the user to duplicate the filter string in mpv.conf and
input.conf.
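A binding in the spirit of that example (illustrative; see input.rst
for the exact text):

    a vf toggle vflip

Pressing "a" then inserts the filter if it is missing and removes the
matching filter if it is already present, so the filter string only
needs to live in input.conf.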
Some aspects of these changes are untested, so enjoy your alpha testing.
This makes the code more closely match mpv's style. Specifically, it
changes some names from camelCase to snake_case, removes some Hungarian
notation, replaces direct vtbl access with COM macros, makes use of the
SAFE_RELEASE macro, moves some declarations closer to their first use,
and fixes the brace style, as well as a few other things.
This also makes the IDropTargetVtbl static and fixes the buggy
QueryInterface implementation.
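Roughly the kind of change this means (illustrative snippets; the
SAFE_RELEASE definition shown is the usual pattern, not copied from the
source):

    // Direct vtbl access (old style):
    p->lpVtbl->Release(p);
    // COM macro (with COBJMACROS defined), same call:
    IDropTarget_Release(p);
    // SAFE_RELEASE pattern, roughly:
    #define SAFE_RELEASE(u) \
        do { if ((u) != NULL) { (u)->lpVtbl->Release(u); (u) = NULL; } } while (0)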
This was mostly self-contained, so its removal makes w32_common.c a bit
easier to read. Also, because it was self contained and its author has
agreed to LGPL relicensing, the new file has the LGPL license header.
The delay between the call to .resize (which cleared the buffer) and
actually rendering the first video frame was significant, resulting in
a short flicker on navigation and resizing. This was especially visible
when zooming and navigating between images.
Now the clearing is scheduled to happen just before the rendering, which
looks to be good enough even without double buffering.
If vaapi was found, but neither the old nor the new libavcodec vaapi
hwaccel API, then HAVE_VAAPI_HWACCEL will be defined, but not _OLD or
_NEW. The
HAVE_VAAPI_HWACCEL define pretty much exists only for acrobatics with
our own waf dependency checker helper code.
The new API works like the new vaapi API, using generic hwaccel support.
One minor detail is the error message that will be printed when using
non-4:2:0 surfaces (which, as far as I can tell, are completely broken
in the nVidia drivers and thus not supported by mpv). The HEVC warning
(HEVC decoding is completely broken in the nVidia drivers, but should
work with Mesa) had to be added to the generic hwaccel code.
This also trashes display preemption recovery. Fuck that. It never
really worked. If someone complains, I might attempt to add it back
somehow.
This is the 4th iteration of the libavcodec vdpau API (after the
separate decoder API, the manual hwaccel API, and the automatic vdpau
hwaccel API). Fortunately, further iterations will be generic, and not
require many vdpau-specific changes (if any at all).
Might be useful for other backends too. For context_vdpau, resize
handling, presentation, and handling the mapping state become somewhat
less awkward.
In some cases, such as when using the libmpv opengl-cb API, or with
certain vo_opengl backends, the main framebuffer is never accessed.
Instead, rendering is done to an FBO that acts as the back buffer.
This meant an incorrect/broken bit depth could be used for dithering.
Change it to read the framebuffer depth lazily on the first render call.
Also move the main FBO field out of the GL struct to MPGLContext,
because the renderer's init function does not need to access it anymore.
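A minimal sketch of the lazy query (illustration only, not the
renderer's actual code):

    #include <GL/gl.h>

    // Query the bound framebuffer's depth on the first render call;
    // before that point we don't know whether the main framebuffer
    // will ever be used at all.
    static int framebuffer_depth(void)
    {
        static GLint bits;                     // 0 = not queried yet
        if (!bits) {
            glGetIntegerv(GL_RED_BITS, &bits); // legacy query, enough for a sketch
            if (bits <= 0)
                bits = 8;                      // sane default for dithering
        }
        return bits;
    }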
Useful for testing. Unfortunately, the nVidia EGL driver ignores this,
and returns a GLES 3.2 context anyway (which it is allowed to do). Might
still be usable with ANGLE, which will really give you a GLES 2 context
if you ask for it.
When dumb mode is used (the "simple" rendering path), respect the dither
options. Options should never be ignored (except in GLESv2 mode); either
they should be respected in dumb mode, or they should disable dumb mode.
In this case, the former applies.
This actually fixes the dreaded errors during resizing. It works pretty
much like before, except each surface is reallocated before it's used.
This implies that surfaces with the old size remain in the
presentation queue and will still be displayed.
Don't assume 0 is an invalid object handle. vdpau with its weird API
design makes all objects indexes, with 0 being a perfectly valid and
common value. You need to use VDP_INVALID_HANDLE, which is not 0.
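Illustration of the pattern (not a verbatim snippet from the code; the
destroy call is hypothetical):

    #include <vdpau/vdpau.h>

    VdpOutputSurface surface = VDP_INVALID_HANDLE;  // not 0 - 0 is a valid handle
    // ... later, on teardown:
    if (surface != VDP_INVALID_HANDLE)
        vdp_output_surface_destroy(surface);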
Don't crash if init fails during vdpau initialization; the crash
happened because mp_vdpau_destroy(NULL) crashes. Simplify it.
Destroy output surface backed FBO before output surface. Also, strictly
bookkeep the map/unmap calls (and unmap surfaces before destroying the
FBO/texture). I can't see a change in the weird errors when resizing the
window, but I guess it's slightly more correct.
Add the GL_WRITE_DISCARD_NV symbol to header_fixes.h, because we might
fail compilation with headers that do not contain the vdpau extension
(well, probably doesn't matter).
The gl_timer_last_us() function could access samples[-1]. Fix this by
coercing the index to unsigned, so the % puts it into the range
[0, max). The value returned in this corner case doesn't mean much
anyway.
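In other words (simplified illustration, not the actual function):

    // Given: int idx, max, and an array samples[max].
    // With a signed index, (idx - 1) % max yields -1 when idx == 0,
    // i.e. samples[-1]. Coercing to unsigned keeps the result within
    // [0, max), although the element picked in that corner case is
    // arbitrary rather than "the previous one".
    int i = (unsigned)(idx - 1) % max;
    uint64_t last = samples[i];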
As the manpage says, this has no value other than adding bugs.
It uses code based on context_x11.c, and basically does very
stripped-down context creation (no alpha support etc.). It uses vdpau
for display, and maps vdpau output surfaces as FBOs to render into
them.
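The mapping part, roughly (a sketch of the GL_NV_vdpau_interop calls
involved, not the actual context code):

    // Assumes glVDPAUInitNV() was already called for the vdpau device.
    GLuint texture, fbo;   // from glGenTextures()/glGenFramebuffers()
    GLvdpauSurfaceNV reg = glVDPAURegisterOutputSurfaceNV(
        (void *)(uintptr_t)out_surface,  // the VdpOutputSurface to render to
        GL_TEXTURE_2D, 1, &texture);
    glVDPAUSurfaceAccessNV(reg, GL_WRITE_DISCARD_NV);
    glVDPAUMapSurfacesNV(1, &reg);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);
    // ... render the frame into the FBO ...
    glVDPAUUnmapSurfacesNV(1, &reg);  // unmap before queueing for display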
This might be good to experiment with asynchronous presentation. For
now, it presents synchronously, with a 4 frame delay (which will throw
off A/V sync). The forced 4 frame delay is probably also why
interaction feels slower.
There are some weird vdpau errors on resizing and uninit. No idea what
causes them.
Should have done this 1000 years ago. Now GL backends can use mp_log
macros directly on the MPGLContext, instead of doing stupid things like
for example MP_WARN(ctx->vo, ...).
This is just a pointless refactor with the only goal of making
image_writer_opts.format a number.
The pointless part of it is that instead of using some sort of arbitrary
ID (in place of a file extension string), we use an AV_CODEC_ID_
value. There was also some idea of falling back to the libavcodec MJPEG
encoder if mpv was not linked against libjpeg, but this fails.
libavcodec insists on
having AV_PIX_FMT_YUVJ420P, which we pretend does not exist, and which
we always map to AV_PIX_FMT_YUV420P (without the J indicating full
range), so encoder init fails. This is pretty dumb, but whatever. The
not-caring factor is raised by the fact that we don't know that we
should convert to full range, because encoders have no proper way to
signal this. (Be reminded that AV_PIX_FMT_YUVJ420P is deprecated.)
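To illustrate the refactor (hypothetical helper, not the actual
image_writer code):

    #include <string.h>
    #include <libavcodec/avcodec.h>

    // Resolve a screenshot format name to a codec ID instead of
    // passing a file extension string around.
    static enum AVCodecID fmt_to_codec(const char *name)
    {
        if (!strcmp(name, "jpg") || !strcmp(name, "jpeg"))
            return AV_CODEC_ID_MJPEG;
        if (!strcmp(name, "png"))
            return AV_CODEC_ID_PNG;
        return AV_CODEC_ID_NONE;
    }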
The function tried to do something clever but ignored the fact that
the middle button followed the left button rather than the right.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
I guess that's the full extent to which I still care about nVidia's broken
garbage. In theory, we could always force the video mixer (which is the
only method getting the video data that works), but why bother.
Due to the see-through nature of the title bar and our standard black
window background, the title bar appears dark grey, as opposed to the
expected light grey.
We change the window background to white, but at the same time set the
background of the enclosed view to black. That way the title bar has a
white background and the background of our video stays black in all
cases. This prevents white flashing in some cases when the video is
resized with overly heavy render settings.
The existing code modifies f.radius so that it is in terms of the
filter sample radius (in the source coordinate space) and has
some small errors because of this behavior.
This commit changes f.radius so that it is always in terms of
the filter function radius (in the destination coordinate space).
The sample radius can always be derived by multiplying f.radius
by filter_scale, which is the new, more descriptive name for the
previous inv_scale.
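For example (taking filter_scale to be the source/destination ratio
when downscaling): downscaling by 2 with a kernel whose function radius
is 2 gives filter_scale = 2, so f.radius stays 2 in destination space
while the sample radius in the source becomes 2 * 2 = 4 pixels.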
Modifications to the input coordinates should all be performed
before the final range check against the filter boundaries.
However, in the existing code, the blur/taper is applied after the
filter radius check is performed. Thus, effectively the filter radius
cutoff is applied to only-downscaling-metric-modified coordinates, not
the final coordinates.
Correct this issue and restructure the returns a bit to make it
more obvious what is being done.
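A simplified sketch of the intended order (hypothetical names, not the
actual filter_kernels code):

    #include <math.h>

    double sample_filter(struct filter *f, double x)
    {
        x = apply_blur_and_taper(f, fabs(x)); // modify the coordinate first
        if (x >= f->radius)                   // ...then do the radius cutoff
            return 0.0;
        return f->weight(f, x);
    }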
I'm not sure what's going on here, but it appears kodi switches
forward and backward references for advanced VPP deinterlacing modes.
This in
turn makes deinterlacing with these modes apparently work. If you don't
switch the directions, you get a stuttering mess.
As far as the libva trace dump is concerned, this makes mpv's use of
the libva deinterlacing API behave like kodi's, and appears to
reproduce
smooth video with advanced libva deinterlacing enabled.
I'm hearing that Mesa actually does it correctly, and I'm not sure what
will happen there. For now, passing "reversal-bug=no" as sub-option to
the vavpp filter will undo this behavior.
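For example, something along these lines (illustrative option string)
selects an advanced deinterlacing mode while keeping the original,
non-swapped reference order:

    --vf=vavpp=deint=motion-adaptive:reversal-bug=no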
Forcibly moving a window from one screen to another is supposed to put
it in a position that looks relatively the same as on the old screen -
the bottom, top, left, and right margins look the same - without
changing the window size. In some situations the old code moved the
window off screen or on top of the menu bar, so it ended up at a
somewhat random position. The new code fixes some edge cases but is
probably not completely correct, since the priority is to make sure
that the window ends up on the right screen.