Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize the .offset and .type fields only,
sometimes also .priv and others.
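To illustrate the shape of the change (the option name, field, and bounds
below are made up, not taken from the real option list), a declaration
changes roughly like this:

    /* old style: the macro expanded to the whole m_option initializer */
    OPT_INT("example-int", example_field, 0, .min = 0, .max = 10),

    /* new style: the name comes first, OPT_INT() fills in .offset/.type
     * (and similar), and the rest is plain designated initializers */
    {"example-int", OPT_INT(example_field), .min = 0, .max = 10},
    {"example-choice", OPT_CHOICE(example_mode,
        {"no", 0},
        {"yes", 1})},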
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c, the author of some option declarations confused the
flags parameter with the default value (though the default value was
also properly set below). I fixed this as part of this change.
This should actually cover all of them, if you take into account that
some unchanged GPL source files include header files with such checks.
This was also already done for the libaf-derived code.
This is only for "safety" and to avoid misunderstandings.
I really wouldn't care much about this, but some parts of the core code
are under HAVE_GPL, so there's some need to get rid of it. Simply turn
the video equalizer from its current fine-grained handling with vf/vo
fallbacks into global options. This makes updating them much simpler.
This removes any possibility of applying video equalizers in filters,
which affects vf_scale, and the previously removed vf_eq. Not a big
loss, since the preferred VOs have this builtin.
Remove video equalizer handling from vo_direct3d, vo_sdl, vo_vaapi, and
vo_xv. I'm not going to waste my time on these legacy VOs.
vo.eq_opts_cache exists _only_ to send a VOCTRL_SET_EQUALIZER, which
exists _only_ to trigger a redraw. This seems silly, but for now I feel
like this is less of a pain. The rest of the equalizer-using code is
self-updating.
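"Self-updating" means the equalizer users watch the global options via an
options cache and re-read them when they change. Roughly, using mpv's
m_config_cache helpers, with placeholder names for the option group, struct,
and apply function (the real ones differ):

    struct m_config_cache *cache =
        m_config_cache_alloc(priv, global, &eq_conf);

    // ... later, wherever the values are actually used (e.g. when drawing):
    if (m_config_cache_update(cache)) {
        struct eq_opts *opts = cache->opts;
        apply_equalizer(priv, opts->brightness, opts->contrast,
                        opts->saturation, opts->hue, opts->gamma);
    }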
See commit 96b906a51d for how some video equalizer code was GPL only.
Some command line option names and ranges can probably be traced back to
a GPL only committer, but we don't consider these copyrightable.
It never worked. It relied on some obscure texture format to provide the
equivalent of GL_RG or GL_LUMINANCE_ALPHA, but no hardware ever seemed
to report support for it. No idea what the correct way to do this is.
On D3D11 it exists, of course.
(Actually I'd like to remove the whole VO.)
Another legacy annoyance. The only place where packed YUV is still
important is slightly older Apple hardware or drivers, which require
it for efficient hardware decoding.
Long planned. Leads to some sanity.
There still are some rather gross things. Especially g_groups is ugly,
and a hack that can hopefully be removed. (There is a plan for it, but
whether it's implemented depends on how much energy is left.)
- win32-console-wrapper.c was inconsistently using the explicit Unicode
versions of some Windows API functions and structures.
- vo.c should use llabs for int64_t, since long is 32-bit on Windows
  (see the sketch after this list).
- vo_direct3d.c had a potential use of an uninitialized variable if it
took the first goto error_exit.
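For the llabs point, the issue is the LLP64 model: on 64-bit Windows, long
stays 32 bits wide, so labs() would silently truncate int64_t values. A
minimal standalone illustration:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int64_t diff = -5000000000LL;  // does not fit in a 32-bit long
        // labs() takes/returns long and would truncate this on Windows;
        // llabs() takes/returns long long, which is >= 64 bits everywhere.
        printf("%" PRId64 "\n", (int64_t)llabs(diff));  // prints 5000000000
        return 0;
    }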
It was used to determine whether the VO supports VOCTRL_SET_PANSCAN.
With all those changes to property semantics this became unnecessary,
and its only use was dropped at some point.
Some VOs had support for these - remove them.
Typically, these formats will have only some use in cases where using
RGB software conversion with libswscale is faster than letting the
VO/GPU do it (i.e. almost never). For the sake of testing this case,
keep IMGFMT_RGB565. This is the least messy format, because it has no
padding/alpha bits with unknown semantics.
Note that decoding to these formats still works. We'll let libswscale
repack the data to whatever the VO in use can take.
Regression since commit 93db4233. I think the bit that was forgotten
here was to remove the vo_w32_config() return value completely. The VO
failed to init because that function always returned 0. This commit
removes these bits and fixes the VO.
Fixes #2434.
This parameter has been unused for years (the last flag was removed in
commit d658b115). Get rid of it.
This affects the general VO API, as well as the vo_opengl backend API,
so it touches a lot of files.
The VOFLAGs are still used to control OpenGL context creation, so move
them to the OpenGL backend code.
A user complains that it leads to the dxva driver failing, with messages
like this:
[ffmpeg/video] h264: Failed to execute: 0x8007000e
[ffmpeg/video] h264: hardware accelerator failed to decode picture
Reportedly, this happens only with vo_direct3d, not with vo_opengl. The
only difference is that vo_direct3d attempts to share the D3D device
with the decoder. Possibly the error is that the device in the VO is not
created with D3DCREATE_MULTITHREADED. Change this.
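Concretely, that means adding the flag to the behavior flags passed when the
device is created; a rough sketch (the surrounding variables, the other
flags, and error handling are only illustrative):

    // Ask D3D9 for a device that is safe to touch from more than one
    // thread, so sharing it with the decoder does not cause trouble.
    DWORD flags = D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_FPU_PRESERVE
                | D3DCREATE_MULTITHREADED;
    hr = IDirect3D9_CreateDevice(d3d9, D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                 window, flags, &present_params, &device);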
Probably fixes #2178.
Rarely used and essentially useless. The only VO for which this was
implemented correctly and for which this did anything was vo_xv, but you
shouldn't use vo_xv anyway (plus it supports BT.601 only, plus a
vendor-specific extension for BT.709, whose presence this function
essentially reported - use xvinfo instead).
There's literally no reason why these functions have to be inline (they
might be performance critical, but then the function call overhead isn't
going to matter at all).
Uninline them and move them to mp_image.c. Drop the header file and fix
all uses of it.
There was a somewhat obscure optimization in the OSD and subtitle
rendering path: if only the position of the sub-images changed, and not
the actual image data, uploading of the image data could be skipped. In
theory, this could speed up things like scrolling subtitles.
But it turns out that even in the rare cases where subtitles have such
scrolls or axis-aligned movement, modern libass rarely signals this kind
of change. Possibly this is because of sub-pixel handling and such,
which breaks this.
As such, it's a worthless optimization that just introduces additional
complexity and subtle bugs (especially in cases where libass does the
opposite: incorrectly signaling a position change only, which happened
before). Remove this optimization, and rename bitmap_pos_id to
change_id.
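After the rename, the contract is the simple one the name suggests: a
consumer remembers the last change_id it saw and redoes everything whenever
it differs, with no special case for position-only changes. A sketch of a
consumer (everything besides change_id is illustrative):

    struct osd_cache { int change_id; /* plus whatever was uploaded */ };

    static void draw_subs(struct osd_cache *c, struct sub_bitmaps *imgs)
    {
        if (imgs->change_id != c->change_id) {
            // Anything may have changed (data and/or positions): redo it all.
            upload_sub_bitmaps(c, imgs);   // illustrative helper
            c->change_id = imgs->change_id;
        }
        render_sub_bitmaps(c, imgs);       // illustrative helper
    }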
Semi-important, because --hwdec=dxva2 outputs NV12, and we really don't
want people to end up with the "old" StretchRect method.
Unfortunately, I couldn't actually get it to work. It seems most D3D
drivers (including the wine D3D implementation) reject D3DFMT_A8L8,
and I could not find any other 2-channel 8 bit Direct3D 9 format. It
seems newer D3D APIs have DXGI_FORMAT_R8G8_UNORM, but there's no way
to get it in D3D9.
Still pushing this; maybe it actually works on some drivers.
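For reference, the availability check boils down to something like the
following probe (simplified; the adapter format and surrounding variables
are only an example), which is what drivers apparently reject:

    // Probe whether A8L8 is usable as a texture format on this adapter.
    HRESULT hr = IDirect3D9_CheckDeviceFormat(d3d9, D3DADAPTER_DEFAULT,
                                              D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
                                              0, D3DRTYPE_TEXTURE,
                                              D3DFMT_A8L8);
    bool have_a8l8 = SUCCEEDED(hr);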
At the time screenshot support was added, images weren't refcounted yet,
so screenshots required specialized implementations in the VOs. But now
we can handle these things much simpler. Also see commit 5bb24980.
If there are VOs in the future which can't do this (e.g. they need to
write to the image passed to vo_driver->draw_image), this still could be
disabled on a per-VO basis etc., so we lose no potential performance
advantages.
Use different VOCTRLs for "window" and normal screenshot modes. The
normal one will probably be removed, and replaced by generic code in
vo.c, and this commit is preparation for this. (Doing it the other way
around would be slightly simpler, but I haven't decided yet about the
second one, and touching every VO is needed anyway in order to remove
the unneeded crap. E.g. has_osd has been unused for a long time.)
And remove all uses of the VFCAP_CSP_SUPPORTED* constants. This is
supposed to reduce conversions if many filters are used (with many
incompatible pixel formats), and also for preferring the VO's natively
supported pixel formats (as opposed to conversion).
This is worthless by now. Not only do the main VOs not use software
conversion, but the way vf_lavfi and libavfilter work also mostly breaks
how the old MPlayer mechanism worked. Other important filters like
vf_vapoursynth do not support "proper" format negotiation either.
Part of this was already removed with the vf_scale cleanup from today.
While I'm touching every single VO, also fix the query_format argument
(it's not a FourCC anymore).
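With the VFCAP return values gone and the FourCC argument replaced, a
per-VO query_format is now essentially a yes/no check against IMGFMT_*
values; roughly (the formats listed are only an example):

    static int query_format(struct vo *vo, int format)
    {
        switch (format) {
        case IMGFMT_420P:
        case IMGFMT_RGB565:
            return 1;   // natively supported
        default:
            return 0;   // not supported; conversion happens elsewhere
        }
    }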
I'm still not sure how exactly "lost" devices are supposed to be
handled. In theory, you only have to "reset" the device, instead
of recreating _everything_. But as it is, the code for proper uninit
and for handling the reset is exactly the same, so move it into a
function to reduce code duplication and the danger of potential bugs.
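In sketch form (all names are illustrative):

    static void destroy_d3d(struct priv *p)
    {
        // release textures, surfaces, shaders, and finally the device
    }

    static void uninit(struct vo *vo)
    {
        destroy_d3d(vo->priv);
    }

    static bool handle_device_lost(struct priv *p)
    {
        destroy_d3d(p);      // exactly the same teardown as uninit
        return init_d3d(p);  // then recreate everything from scratch
    }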
Apparently, extremely crappy graphics drivers don't allow you to use
shaders. Simply disable use of shaders if this happens, and use the
"old" method instead.
One unexpectedly tricky thing is that you need a d3d_device to create
a shader, which in turn requires a window, so the initialization order
changes.
Although the line count increases, this is better for making sure
everything is handled consistently for all users of the mp_csp_params
stuff.
This also makes sure mp_csp_params is always initialized with
MP_CSP_PARAMS_DEFAULTS (for consistency).
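Every construction site now follows the same pattern (the override below is
just an example field):

    // Start from the defaults, then override only what this caller needs.
    struct mp_csp_params params = MP_CSP_PARAMS_DEFAULTS;
    params.brightness = 0.1f;   // illustrative override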
I suspect this is what is happening in github issue #1265 (at least
partially).
If D3DFMT_A8 is not available, fall back to RGBA. This is less efficient
in general, so we normally want to avoid it.
This sub-option was turned into a flag when the sub-option parser was
changed to the generic one (probably accidentally). Turn it into a
proper choice-option.
Also, adjust what the options do. Though none of this probably makes
much sense; the default should work, and if it doesn't, the GPU/driver
is probably beyond help.
Add a generic mechanism to the VO to relay "extra" events from VO to
player. Use it to notify the core of window resizes, which in turn will
be used to mark all affected properties ("window-scale" in this case) as
changed.
(I refrained from hacking this as internal command into input_ctx, or to
poll the state change, etc. - but in the end, maybe it would be best to
actually pass the client API context directly to the places where events
can happen.)
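The shape of the mechanism, in a heavily simplified sketch (function and
flag names here are approximate and may not match the code exactly):

    // VO/backend side, when the OS reports the resize:
    vo_event(vo, VO_EVENT_RESIZE);               // set a pending-event bit

    // Player core side, once per playloop iteration: fetch and clear the
    // bits, and translate them into property change notifications.
    int events = vo_query_and_reset_events(vo, VO_EVENT_RESIZE);
    if (events & VO_EVENT_RESIZE)
        mp_notify_property(mpctx, "window-scale");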
This can just happen in the time between VO creation, and the first call
to vo_reconfig. It seems the recent threading changes exposed this bug.
Fixes#986.
Sometimes GetClientRect() appeared to fail during init, and since we
don't check GetClientRect() calls (because they're on our own window,
and logically can never fail), bogus resizes were triggered. This could
cause vo_direct3d to fail initialization.
The reason was that w32->window was set to 0 during early window
initialization: CreateWindow*() can send messages to the new window,
even though it hasn't returned yet. This means w32->window is not yet
set to our window handle, and functions in WndProc may accidentally pass
hwnd=0 to win32 API functions.
Fix it by initializing w32->window at the first opportunity. This also means we
always strictly expect that the WndProc is used with our own window
only.
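"At the first opportunity" means the WndProc records the handle from its own
argument the first time it runs, i.e. while CreateWindow*() is still
executing. A sketch (how the per-window state is looked up differs in the
real code):

    static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam,
                                    LPARAM lParam)
    {
        struct vo_w32_state *w32 = w32_state;   // however the state is found
        // CreateWindow*() can dispatch messages to us before it returns,
        // i.e. before the caller had a chance to store the handle itself.
        // Record it here so no handler ever passes hwnd=0 to win32 calls.
        if (!w32->window)
            w32->window = hWnd;
        // ... normal message handling, always using w32->window ...
        return DefWindowProcW(hWnd, message, wParam, lParam);
    }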
Preparation for moving win32 windowing to a separate thread.
The codesize is reduced a bit, because some small functions are
inlined, which reduces noise.
The main change is that now most functions use the private struct
directly, instead of accessing it indirectly through vo->w32.
Accesses to vo are minimized.
The final goal is adding some sort of new windowing backend API. It
would be cleaner to use that as context pointer for all functions
(like struct vo was previously used), but since this is work in
progress, we just go with this commit.
With the change to merge OSD drawing into video frame drawing, some
bogus logic got in: it skipped drawing the OSD if no video frame was
available. This broke --no-video --force-window mode.