Like the manual says, this is technically undefined behaviour. See:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx
In particular, MSDN says texture arrays created with the BIND_DECODER
flag cannot be used with CreateShaderResourceView, which means they
can't be sampled through SRVs like normal Direct3D textures. However,
some programs (Google Chrome included) do this anyway for performance
and power-usage reasons, and it appears to work with most drivers.
Older AMD drivers had a "bug" with zero-copy decoding, but this appears
to have been fixed. See #3255, #3464 and http://crbug.com/623029.
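For illustration, the zero-copy path boils down to creating plane SRVs
directly on the decoder's texture array, roughly like this sketch (the
function name and the NV12 plane format are assumptions, not mpv's
actual code):

    #define COBJMACROS
    #include <d3d11.h>

    // Sketch: create a luma SRV for one slice of a BIND_DECODER texture
    // array. Per MSDN this is undefined, but most drivers accept it.
    static HRESULT create_luma_srv(ID3D11Device *dev, ID3D11Texture2D *tex,
                                   UINT slice,
                                   ID3D11ShaderResourceView **out)
    {
        D3D11_SHADER_RESOURCE_VIEW_DESC desc = {
            .Format = DXGI_FORMAT_R8_UNORM, // NV12 luma plane
            .ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2DARRAY,
            .Texture2DArray = {
                .MipLevels = 1,
                .FirstArraySlice = slice, // decoder surface index
                .ArraySize = 1,
            },
        };
        return ID3D11Device_CreateShaderResourceView(dev,
            (ID3D11Resource *)tex, &desc, out);
    }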
This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL
generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.
What works:
- All of mpv's internal shaders should work, including compute shaders.
- Some external shaders have been tested and work, including RAVU and
adaptive-sharpen.
- Non-dumb mode works, even on very old hardware. Most features work at
feature level 9_3 and all features work at feature level 10_0. Some
features also work at feature level 9_1 and 9_2, but without high-bit-
depth FBOs, it's not very useful. (Hardware this old is probably not
fast enough for advanced features anyway.)
Note: This is more compatible than ANGLE, which requires 9_3 to work
at all (GLES 2.0), and 10_1 for non-dumb mode (GLES 3.0).
- Hardware decoding with D3D11VA, including decoding of 10-bit formats
without truncation to 8-bit.
What doesn't work / can be improved:
- PBO upload and direct rendering do not work yet. Direct rendering
requires persistent-mapped PBOs because the decoder needs to be able
to read data from images that have already been decoded and uploaded.
Unfortunately, it seems like persistent-mapped PBOs are fundamentally
incompatible with D3D11, which requires all resources to use driver-
managed memory and requires memory to be unmapped (and hence pointers
to be invalidated) when a resource is used in a draw or copy
operation.
However, it might be possible to use D3D11's limited multithreading
capabilities to emulate some features of PBOs, like asynchronous
texture uploading (see the sketch after this list).
- The blit() and clear() operations don't have equivalents in the D3D11
API that handle all cases, so in most cases, they have to be emulated
with a shader. This is currently done inside ra_d3d11, but ideally it
would be done in generic code, so it can take advantage of mpv's
shader generation utilities.
- SPIRV-Cross is used through a NIH C-compatible wrapper library, since
it does not expose a C interface itself.
The library is available here: https://github.com/rossy/crossc
- The D3D11 context could be made to support more modern DXGI features
in future. For example, it should be possible to add support for
high-bit-depth and HDR output with DXGI 1.5/1.6.
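Regarding the PBO item above: a rough sketch of emulating asynchronous
upload with a ring of STAGING textures (one is shown; all names are
illustrative, not actual mpv code):

    #define COBJMACROS
    #include <d3d11.h>
    #include <string.h>

    // Sketch: the CPU fills a STAGING texture via Map/Unmap, then the
    // GPU copies it into the real texture. The Unmap is mandatory
    // before the copy, which is exactly why persistent mapping can't
    // be emulated directly.
    static void upload_via_staging(ID3D11DeviceContext *ctx,
                                   ID3D11Texture2D *staging,
                                   ID3D11Texture2D *dest,
                                   const void *src, size_t pitch,
                                   unsigned rows)
    {
        D3D11_MAPPED_SUBRESOURCE map;
        if (FAILED(ID3D11DeviceContext_Map(ctx, (ID3D11Resource *)staging,
                                           0, D3D11_MAP_WRITE, 0, &map)))
            return;
        for (unsigned y = 0; y < rows; y++)
            memcpy((char *)map.pData + y * map.RowPitch,
                   (const char *)src + y * pitch, pitch);
        ID3D11DeviceContext_Unmap(ctx, (ID3D11Resource *)staging, 0);
        ID3D11DeviceContext_CopyResource(ctx, (ID3D11Resource *)dest,
                                         (ID3D11Resource *)staging);
    }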
We don't attempt to auto-detect them at load time, as that would be too
much of a pain - even FFmpeg requires fetching and parsing of video
packets, and exposes the information only via deprecated API.
But there still needs to be a way to select them by default. This is
also needed to get the first CC packet at all (without seeking back).
This commit also attempts to clean up locking a bit, which is a PITA,
but it's better to be careful & clean.
See manpage additions.
(In ffmpeg-mpv and Libav, this is still called "cuvid". Libav won't work
yet, because it has no frame params support, but this could get
fixed soon.)
Comparing mpv's implementation against the ACES ODT reference samples
and algorithms, it seems like they're happy desaturating highlights
_way_ more aggressively than mpv currently does. And indeed, looking at
some example clips like The Redwoods (which is actually well-mastered),
the current desaturation produces unnatural-looking brightness fringes
where the sky meets the treeline.
Adjust the algorithm to make it apply to a much larger, more gradual
brightness region; and change the interpretation of the parameter. As a
bonus, the new parameter is actually sanely scaled (higher values = more
desaturation). Also, make it scale based on the signal level instead of
the luminance, to avoid under-desaturating bright blues.
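To make the change concrete, one plausible shape of the new curve (a
sketch only, not necessarily mpv's exact code; the 0.18 threshold and
exponent are illustrative):

    #include <math.h>

    // Blend factor towards desaturated luma, as a function of the
    // signal level. Higher `desat_param` values = more desaturation.
    static float desat_coeff(float sig, float desat_param)
    {
        if (desat_param <= 0)
            return 0; // feature disabled
        float c = fmaxf(sig - 0.18f, 1e-6f) / fmaxf(sig, 1e-6f);
        return powf(c, 10.0f / desat_param);
    }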
This improves upon the previous commit, and partially rewrites it (and
other code). It does:
- disable the seeking within cache by default, and add an option to
control it
- mess with the buffer estimation reporting code, which will most likely
lead to funny regressions even if the new features are not enabled
- add a back buffer to the packet cache
- enhance the seek code so you can seek into the back buffer
- unnecessarily change a bunch of other stuff for no reason
- fuck up everything and vomit ponies and rainbows
This should actually be pretty usable. One thing we should add are some
properties to report the proper buffer state. Then the OSC could show a
nice buffer range. Also configuration of the buffers could be made
simpler. Once this has been tested enough, it can be enabled by default,
and might replace the stream cache's byte ringbuffer.
In addition it may or may not be possible to keep other buffer ranges
when seeking outside of the current range, but that would be much more
complex.
This should be functionally identical to rgba16f, since the formats only
differ in their representation on the CPU, but it could be useful for RA
backends that don't expose rgba16f, like Vulkan. It's definitely useful
for the WIP D3D11 backend.
It seems this will be useful for Rockchip DRM hwcontext integration.
DRM hwcontexts have additional internal structure which can be different
depending on the decoder, and which is not part of the generic hwcontext
API. Rockchip has 1 layer, which EGL interop happens to translate to
an RGB texture, while VAAPI (mapped as DRM hwcontext) will use multiple
layers. Both will use sw_format=nv12, and thus are indistinguishable on
the mp_image_params level. But this is needed to initialize the EGL
mapping and the vo_gpu video renderer correctly.
We hope that the layer count is enough to tell whether EGL will
translate the data to an RGB texture (vs. 2 textures resembling raw nv12
data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.
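A sketch of what the layer-count check could look like on the libavutil
side (hypothetical helper; mpv's real check lives elsewhere):

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext_drm.h>

    // For AV_PIX_FMT_DRM_PRIME frames, data[0] points at the DRM frame
    // descriptor; a single layer is (hopefully) what EGL imports as one
    // RGB texture, while multiple layers map as per-plane textures.
    static int drm_frame_is_single_layer(const AVFrame *frame)
    {
        const AVDRMFrameDescriptor *desc =
            (const AVDRMFrameDescriptor *)frame->data[0];
        return desc && desc->nb_layers == 1;
    }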
This commit adds the flag, infrastructure to set it, and an "example"
for D3D11.
The D3D11 addition is quite useless at this point. But later we want to
get rid of d3d11_update_image_attribs() anyway, while we still need a
way to force d3d11vpp filter insertion, so maybe it has some
justification (who knows). In any case it makes testing this easier.
Obviously it also adds some basic support for triggering the opaque
format for decoding, which will use a driver-specific format, but which
is not supported in shaders. The opaque flag is not used to determine
whether d3d11vpp needs to be inserted, though.
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.
We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
This was confusing at best. Change it to output the actual choices.
(Seems like in the end it's always me who has to clean up other people's
bullshit.)
Context names were not unique - but they should be, so fix it. The whole
point of the original --opengl-backend option was to side-step the
tricky auto-detection, so you know exactly what you get. The goal of
this commit is to make --gpu-context work the same way. Fix the
non-unique names by appending "vk" to the names.
Keep in mind that this was not suitable for selecting the "UI" backend
anyway, since "x11" would force GLX, whereas people on not-NVIDIA
actually want "x11egl". Users trying to use --gpu-context=x11 to force
the X11 backend would always end up with GLX, which would at least break
VAAPI hardware decoding for them. Basically the idea that this option
could select the "UI" type is completely broken - it selects an
implementation, which implies a UI. Selecting the UI type would
require a separate mechanism. (Although in theory this separate
mechanism could be part of the --gpu-context option - in any case,
someone would have to implement it.)
To achieve help output that can actually be understood, just duplicate
the code. Most of that code is duplicated anyway, and trying to share
just the list code at the cost of making the output unreadable doesn't
make much sense. If we wanted to save code/effort, we could
just remove the help output altogether.
--gpu-api has non-unique entries, and it would be nice to group them
(e.g. list all OpenGL capable contexts with "opengl"), but C makes this
simple idea too much of a pain, so don't do it.
Also remove a stray tab from the android entry on the manpage.
Signed-off-by: wm4 <wm4@nowhere>
Rename --stats to --load-stats-overlay and add an entry to options.rst
on top of the original commit.
Signed-off-by: wm4 <wm4@nowhere>
At the moment, rendering on Android requires ``--vo=opengl-cb`` and
a lot of java<->c++ bridging code to receive and react to
the render callback in java. Performance also suffers with opengl-cb,
due to the overhead of context switching in JNI.
With this patch, Android can render using ``--vo=gpu --gpu-context=android``
(after setting ``--wid`` to point to an android.view.Surface on-screen).
But --msg-level can only raise the log level used for --log-file,
because the original idea with --log-file was that it'd log verbose
messages to disk even if terminal logging is lower than -v or fully
disabled.
Changed the reference from --gpu-gamma to --gamma-factor,
and changed the reference from --post-shader to --glsl-shaders,
in order to reflect actual changes to the option names.
Seems to be fixed upstream in the nvidia driver, so it's probably a good
idea to 1. force the layout and 2. remove the warning, as it now
actually works. Users with older drivers would run into errors, but they
can still use shaderc as a replacement. (And it's not like the old
status quo was any better)
This has several advantages:
1. no more redundant texcoords when we don't need them
2. no more arbitrary limit on how many textures we can bind
3. (that extends to user shaders as well)
4. no more arbitrary limits on tscale radius
To realize this, the VAO was moved from a hacky stateful approach
(gl_sc_set_vertex_attribs) - which always bothered me since it was
required for compute shaders as well even though they ignored it - to be
a proper parameter of gl_sc_dispatch_draw, and internally plumbed into
gl_sc_generate, which will make a (properly mangled) deep copy into
params.vertex_attribs.
Apparently this filter is broken in a weird way, which even makes some
libavfilter functions segfault in certain conditions. Don't waste time
with it and just remove the examples.
Also adjust the "life" example description (certainly this filter is
100% worthless, but the example does demonstrate how to use source
filters without any available input).
auto-copy selects more modes than the ones listed. It will always be
outdated anyway.
The GLX vaapi backend is never selected anymore, because it sucks.
In addition to the built-in nvidia compiler, we now also support a
backend based on libshaderc. shaderc is sort of like glslang except it
has a C API and is available as a dynamic library.
The generated SPIR-V is now cached alongside the VkPipeline in the
cached_program. We use a special cache header to ensure validity of this
cache before passing it blindly to the vulkan implementation, since
passing invalid SPIR-V can cause all sorts of nasty things. It's also
designed to self-invalidate if the compiler gets better, by offering a
catch-all `int compiler_version` that implementations can use as a cache
invalidation marker.
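The cache header amounts to something like this (field names are made
up for illustration; the real struct is in the vulkan code):

    #include <stdint.h>

    // Sketch: prepended to the cached SPIR-V so stale or foreign
    // bytecode is never passed blindly to the vulkan implementation.
    struct spirv_cache_header {
        char     magic[4];         // identifies the cache format itself
        uint32_t cache_version;    // bumped when this header changes
        uint32_t compiler_id;      // shaderc, built-in nvidia, ...
        uint32_t compiler_version; // catch-all invalidation marker
        uint32_t spirv_len;        // byte length of the payload
    };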
This time based on ra/vo_gpu. 2017 is the year of the vulkan desktop!
Current problems / limitations / improvement opportunities:
1. The swapchain/flipping code violates the vulkan spec, by assuming
that the presentation queue will be bounded (in cases where rendering
is significantly faster than vsync). But apparently, there's simply
no better way to do this right now, to the point where even the
stupid cube.c examples from LunarG etc. do it wrong.
(cf. https://github.com/KhronosGroup/Vulkan-Docs/issues/370)
2. The memory allocator could be improved. (This is a universal
constant)
3. Could explore using push descriptors instead of descriptor sets,
especially since we expect to switch descriptors semi-often for some
passes (like interpolation). Probably won't make a difference, but
the synchronization overhead might be a factor. Who knows.
4. Parallelism across frames / async transfer is not well-defined, we
either need to use a better semaphore / command buffer strategy or a
resource pooling layer to safely handle cross-frame parallelism.
(That said, I gave resource pooling a try and was not happy with the
result at all - so I'm still exploring the semaphore strategy)
5. We aggressively use pipeline barriers where events would offer a much
more fine-grained synchronization mechanism. As a result of this, we
might be suffering from GPU bubbles due to too-short dependencies on
objects. (That said, I'm also exploring the use of semaphores as an
ordering tactic which would allow cross-frame time slicing in theory)
Some minor changes to the vo_gpu and infrastructure, but nothing
consequential.
NOTE: For safety, all use of asynchronous commands / multiple command
pools is currently disabled completely. There are some left-over relics
of this in the code (e.g. the distinction between dev_poll and
pool_poll), but that is kept in place mostly because this will be
re-extended in the future (vulkan rev 2).
The queue count is also currently capped to 1, because the lack of
cross-frame semaphores means we need the implicit synchronization from
the same-queue semantics to guarantee a correct result.
This never really made sense since the BT.1886 changes. It should get
*brighter* for bright rooms, not darker for dark rooms. Picked some new
values that seemed reasonable-ish.
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved to
ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
This mechanism uses system() and shouldn't even exist. x11_common.c has
its own solution for the original problem (disabling Linux DE
screensavers without MPlayer/mpv having to link a dbus lib). If that is
not sufficient, you can create a simple Lua script.
Incidentally fixes #4888.
This clearly highlights all out-of-gamut/clipped pixels. (Either too
bright or too saturated)
Has some (documented) caveats. Also make TONE_MAPPING_CLIP stop actually
clamping the value range (it's unnecessary and breaks this feature).
This removes all GPL only code from it, and that's the whole purpose.
Also happens to be much simpler.
The "deinterlace" option still sort of exists, but only as runtime
changeable option. The main change in behavior is that the property will
not report back the actual deint state. Or in other words, if inserting
or initializing the filter fails, the deinterlace property will still
return "yes". This is in line with most recent behavior changes to
properties and options.
This was attempted before in fc9695e63b, but it was reverted in
1b7ce759b1 because it caused conflicts with other software watching
the same keys (See #2041.) It seems like some PCs ship with OEM software
that watches the volume keys without consuming key events and this
causes them to be handled twice, once by mpv and once by the other
software.
In order to prevent conflicts like this, use the WM_APPCOMMAND message
to handle media keys. Returning TRUE from the WM_APPCOMMAND handler
should indicate to the operating system that we consumed the key event
and it should not be propagated to the shell. Also, we now only listen
for keys that are directly related to multimedia playback (e.g. the
APPCOMMAND_MEDIA_* keys). Keys like APPCOMMAND_VOLUME_* are ignored, so
they can be handled by the shell, or by other mixer software.
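The handler logic amounts to this sketch (not the actual w32_common.c
code; queueing the key into mpv's input layer is omitted):

    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>

    // Return TRUE for media keys we consume so they don't propagate to
    // the shell; everything else (volume keys etc.) falls through.
    static LRESULT CALLBACK wndproc(HWND hwnd, UINT msg, WPARAM wp,
                                    LPARAM lp)
    {
        if (msg == WM_APPCOMMAND) {
            switch (GET_APPCOMMAND_LPARAM(lp)) {
            case APPCOMMAND_MEDIA_PLAY_PAUSE:
            case APPCOMMAND_MEDIA_STOP:
            case APPCOMMAND_MEDIA_NEXTTRACK:
            case APPCOMMAND_MEDIA_PREVIOUSTRACK:
                // forward to the input layer here
                return TRUE; // consumed; shell won't handle it again
            }
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }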
This currently only works when using lcms-based color management
(--icc-profile-*).
In principle, we could also support using lcms even when the user has
not specified an ICC profile, by generating the profile against a fixed
reference (--target-prim/--target-trc) instead. I still might do that
some day, simply because 3dlut provides a higher quality conversion than
our simple gamut mapping does for stuff like BT.2020, and also because
it's now needed to enable embedded ICC profiles. But that would be a
separate change, so preserve the status quo for now.
(Besides, my opinion is still that you should be using an ICC profile if
you care about colors being accurate _at all_)
This broke float textures, which were actually used by some shaders.
There were probably some other bugs as well.
Lots of code can be avoided by using ra_tex_params directly, so do that.
The main change is that COMPONENT/FORMAT are replaced by a single FORMAT
directive, which takes different parameters now. Due to the mess with
16/32 bit float textures, and because we want to support other APIs than
just GL in the future, it's not really clear how this should be handled,
and the nice component/type separation makes things actually harder. So
just jump the gun and use the ra_format.name names, which were
originally meant mostly for debugging. (This is probably something that
will be regretted later.)
Still only superficially tested, but seems to work.
Fixes #4708.
Since this code was already written for HDR, and is now per-channel
(because it works better for HDR as well), we can actually reuse this to
get very high quality gamut mapping without clipping. The only required
change is to move the tone mapping from before the gamut map to after
the gamut map. Additionally, we need to also account for changes in the
signal range as a result of applying the CMS when we compute ref_peak,
which is fortunately pretty easy because we only need to consider the
case of primaries mapping to themselves.
Since `HDR` no longer really makes sense as a label, rename it to
`--tone-mapping` in general. Also fits better with
`--tone-mapping-desat` etc.
Arguably we could also rename `--hdr-compute-peak`, but that option is
basically only useful for HDR content anyway because we don't need
information about the signal range for gamut mapping.
This (finally!) gives us reasonably high quality gamut mapping even in
the absence of an ICC profile / 3DLUT.
Parsing the texture data as raw strings makes the textures the most
portable and self-contained. In order to facilitate different types of
shaders, the parse_user_shader interaction has been changed to instead
have it loop through blocks and call the passed functions for each valid
block parsed. This is more modular and also cleaner, with better code
separation.
Closes #4586.
Two changes, compounded into one since they affect the same logic:
1. Never use linearization for HDR downscaling
2. Always use linearization for interpolation
Instead of fixing p->use_linear at the beginning of pass_render_frame,
we flip it on "dynamically" as needed. I plan on killing this
p->use_linear frame (along with other per-pass metadata) and moving them
into their own struct for tracking the "current" state of the video, but
that's a separate/upcoming refactor.
As a small bonus, reduce some code duplication in the interpolation
logic.
Fixes #4631
I've found more test cases where hwdec=cuda shits itself, even
hwdec=cuda-copy. So the whole “copyback is no worse than swdec” is
simply not true. Also, in the light of 10 bit media files and APIs
silently truncating to 8 bit, the warnings need to be generalized a bit.
It's no longer safe to say that “doesn't convert to RGB” means “perfect
playback”.
I've also added a very strong disclaimer to the whole hwdec scenario
clarifying why hwdec is usually a bad idea unless absolutely needed,
because I've seen issue after issue that is resolved by disabling hwdec.
This is done via compute shaders. As a consequence, the tone mapping
algorithms had to be rewritten to compute their known constants in GLSL
(ahead of time), instead of doing it once. Didn't affect performance.
Using shmem/SSBO atomics in this way is extremely fast on nvidia, but it
might be slow on other platforms. Needs testing.
Unfortunately, setting up the SSBO still requires OpenGL calls, which
means I can't have it in video_shaders.c, where it belongs. But I'll
defer worrying about that until the backend refactor, since then I'll be
breaking up the video/video_shaders structure anyway.
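For reference, the GL-side setup that can't live in video_shaders.c is
just a few buffer calls; a minimal sketch (the binding point and counter
layout are assumptions):

    #include <epoxy/gl.h> // any GL loader works; mpv uses its own

    // Create and bind a small SSBO for the peak-detection counters.
    static GLuint create_peak_ssbo(void)
    {
        GLuint ssbo;
        GLuint init[2] = {0, 0}; // e.g. running max + frame counter
        glGenBuffers(1, &ssbo);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
        glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(init), init,
                     GL_DYNAMIC_COPY);
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo); // binding=0
        return ssbo;
    }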
Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it
does.
This is reminiscent of the MPlayer -dr flag, but the implementation is
completely different. It's the same basic concept: letting the decoder
render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't
try to go through filters (libavfilter doesn't support this anyway).
Unless a filter can work in-place, DR will be silently disabled. MPlayer
had very complex semantics about buffer types and management (which
apparently nobody ever understood) and weird restrictions that mostly
limited it to mpeg2 style codecs. The mpv code does not do any of this,
and just lets the decoder allocate an arbitrary number of untyped
images. (No MPlayer code was used.)
Parts of the code based on work by atomnuker (starting point for the
generic code) and haasn (some GL definitions, some basic PBO code, and
correct fencing).
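Mechanically, DR hangs off libavcodec's get_buffer2 callback; a minimal
sketch of the hookup (the real allocator does pooling, stride fixing and
fencing, none of which is shown):

    #include <libavcodec/avcodec.h>

    // Sketch: a real implementation would hand out persistently mapped
    // GPU memory wrapped in AVBufferRefs instead of the default buffers.
    static int dr_get_buffer2(AVCodecContext *avctx, AVFrame *pic,
                              int flags)
    {
        return avcodec_default_get_buffer2(avctx, pic, flags);
    }

    static void enable_dr(AVCodecContext *avctx)
    {
        avctx->get_buffer2 = dr_get_buffer2;
        // FFmpeg of this era: allow calls from decoder threads
        avctx->thread_safe_callbacks = 1;
    }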
Remove this code because it could be argued that it contains GPL-only
code (see commit 642e963c86 for details).
The remaining aspect methods appear to work just as well, are
potentially more compatible to other players, and the code becomes much
simpler.
Performance seems pretty much unchanged but I no longer get nasty spikes
on NUMA systems, probably because glBufferSubData runs in the driver or
something.
As a simplification of the code, we also just size the PBO to always
have the full size, even for cropped textures. This seems slower but not
by relevant amounts, and only affects e.g. --vf=crop. It also slightly
increases VRAM usage for textures with big strides.
This new code path is especially nice because it no longer depends on
GL_ARB_map_buffer_range, and no longer uses any functions that can
possibly fail, thus simplifying control flow and seemingly deprecating
the manpage's claim about possible image corruption.
In theory we could also reduce NUM_PBO_BUFFERS since it doesn't seem
like we're streaming uploads anyway, but leave it in there just in
case some drivers disagree...
This is more of a niche usecase than --ytdl-format and --ytdl-raw-options,
so a simple script option should be enough.
Either create lua-settings/ytdl_hook.conf with the option
'exclude=example.com,sub.example.com', or use
"--script-opts=ytdl_hook-exclude=example.com,sub.example.com".
This just indicates a fixed linear coefficient to multiply into the
signal, similar to the old option --target-brightness (but the inverse
thereof). Good for testing purposes, which is why I added it. (This also
corresponds somewhat to what zimg does)
It's now possible to request non-dumb mode as a user, even when not
using any non-dumb features. This change is mostly intended for testing,
so I can easily switch between dumb and non-dumb mode on default
settings. The default behavior is unaffected.
Was at least somewhat broken, and is misleading. I don't really have an
idea why FFmpeg has two AVOptions here anyway. We don't need to care,
and I'm only aware of 1 user trying this option ever.
See #4579.
This is exposed so that bjin/mpv-prescalers can use textureGatherOffset
for performance.
Since there are now quite a lot of parameters where it isn't quite clear
why they're all defined, add a paragraph to the man page that explains
them a bit.
This helps prevent unnaturally, weirdly colorized blown out highlights
for direct images of the sunlit sky and other way-too-bright HDR
content. I was debating whether to set the default at 1.0 or 2.0, but
went with the more conservative option that preserves more detail/color.
This is more efficient on my machine (nvidia), but only when applied to
groups of exactly 4 texels. So we switch to the more efficient
textureGather for groups of 4. Some notes:
- textureGatherOffset seems to be faster than textureGather by a
non-negligible amount, but for some reason, textureOffset is still
slower than a straight-up texture
- textureGather* requires GLSL 400; and at least on nvidia, this
requires actually allocating a GL 4.0 context.
- the code in opengl/common.c that clamped the GLSL version to 330 is
deprecated, because the old user shader style has been removed
completely in the meantime
- To combat the growing complexity of the polar sampling code, we drop
the antiringing functionality from EWA shaders completely, since it
never really worked well for EWA to begin with. (Horrific artifacting)
This allows filter functions to be prematurely cut off once their
contributions start becoming insignificant. This effectively prevents
wasted GPU time sampling from parts of the function that are essentially
reduced to zero by the window function, providing anywhere from a 10% to
20% speedup. (5700μs -> 4700μs for me)
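Conceptually the cutoff works like this sketch (illustrative only):
scan inward from the nominal radius and stop at the outermost point
whose windowed weight is still significant:

    #include <math.h>

    // Find the effective radius beyond which |weight(x)| stays below
    // `cutoff`, so the sampling loop can skip those taps entirely.
    static double effective_radius(double radius, double cutoff,
                                   double (*weight)(double x))
    {
        for (double x = radius; x > 0; x -= radius / 256.0) {
            if (fabs(weight(x)) > cutoff)
                return x; // outermost still-significant tap
        }
        return 0;
    }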
This replaces `vo-performance` by `vo-passes`, bringing with it a number
of changes and improvements:
1. mpv users can now introspect the vo_opengl passes, which is something
that has been requested multiple times.
2. performance data is now measured per-pass, which helps both
development and debugging.
3. since adding more passes is cheap, we can now report information for
more passes (e.g. the blit pass, and the osd pass). Note: we also
switch to nanosecond scale, to be able to measure these passes
better.
4. `--user-shaders` authors can now describe their own passes, helping
users both identify which user shaders are active at any given time
as well as helping shader authors identify performance issues.
5. the timing data per pass is now exported as a full list of samples,
so projects like Argon-/mpv-stats can immediately read out all of the
samples and render a graph without having to manually poll this
option constantly.
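For point 5, a libmpv client can read the whole pass list in one go; a
minimal sketch:

    #include <mpv/client.h>

    // Fetch the per-pass timing data as a node tree and walk it.
    static void dump_passes(mpv_handle *mpv)
    {
        mpv_node node;
        if (mpv_get_property(mpv, "vo-passes", MPV_FORMAT_NODE,
                             &node) < 0)
            return;
        // node.u.list now holds the passes, each with its samples (in
        // nanoseconds); print them, render a graph, etc.
        mpv_free_node_contents(&node);
    }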
Due to gl_timer's design being complicated (directly reading performance
data would block, so we delay the actual read-back until the next _start
command), it's vital not to conflate different passes that might be
doing different things from one frame to another. To accomplish this,
the actual timers are stored as part of the gl_shader_cache's sc_entry,
which makes them unique for that exact shader.
Starting and stopping the time measurement is easy to unify with the
gl_sc architecture, because the existing API already relies on a
"generate, render, reset" flow, so we can just put timer_start and
timer_stop in sc_generate and sc_reset, respectively.
The ugliest thing about this code is that due to the need to keep pass
information relatively stable in between frames, we need to distinguish
between "new" and "redrawn" frames, which bloats the code somewhat and
also feels hacky and vo_opengl-specific. (But then again, this entire
thing is vo_opengl-specific)
The changes to path list options is basically getting rid of the need to
pass multiple paths to a single option. Instead, you can use the option
multiple times. The old behavior can be used by using the -set suffix
with the option.
Change some options to path lists. For example --script is now append by
default, and if you use --script-set, you need to use ":"/";" as
separator instead of ",".
--sub-paths/--audio-file-paths is a deprecated alias now, and will break
if the user tries to pass multiple paths to it. I'm assuming that if
these are used, most users will pass only 1 path anyway.
--opengl-shaders has more compatibility handling, since it's probably
rather common that users pass multiple options to it.
Also document all that in the manpage.
I'll probably regret this later, as it somewhat increases the complexity
of the option parser, rather than reducing it.
I noticed that the previous default, bitstream, actually breaks with
some shitty anamorphic DVD rips that signal square pixel aspect in the
bitstream. So I think the "container" method is a better default.
st2084 and std-b67 are really weird names for PQ and HLG, which is what
everybody else (including e.g. the ITU-R) calls them. Follow their
example.
I decided against naming them bt2020-pq and bt2020-hlg because it's not
necessary in this case. The standard name is only used for the other
colorspaces etc. because those literally have no other names.
List of changes:
1. Kill nom_peak, since it's a pointless non-field that stores nothing
of value and is _always_ derived from ref_white anyway.
2. Kill ref_white/--target-brightness, because the only case it really
existed for (PQ) actually doesn't need to be this general: According
to ITU-R BT.2100, PQ *always* assumes a reference monitor with a
white point of 100 cd/m².
3. Improve documentation and comments surrounding this stuff.
4. Clean up some of the code in general. Move stuff where it belongs.
"Almost" because this might contain copyright by michael, who agreed
with LGPL, but only once the core is LGPL. This is preparation for that
to happen.
Apart from that, the usual remarks apply. In particular, dec_video.c
started out quite chaotic with no modularization, but was later
basically gutted, and in general rewritten a bunch of times. Not going
to give a history lesson.
Special attention needs to be given to 3 patches by cehoyos, who did not
agree to the relicensing:
240b743ebdf: --field-dominance
e32cbbf7dc3: reinit VO if aspect ratio changes
306f6243fdf: use container aspect if codec aspect unset (?)
The first patch is pretty clearly still in the current code, and needs
to be disabled for LGPL.
The functionality of the second patch is still active, but implemented
completely different, and as part of general frame parameter changes (at
the time of the patch, MPlayer already reinitialized the VO on frame
size and pixel format changes - all this was merged into a single check
for changing image parameters).
The third patch makes me a bit more uncomfortable. It appears the code
was moved to dec_video.c in de68b8f23c, and further changed in
82f0d373, 0a0bb905, and bf13bd0d. You could claim that cehoyos'
copyright still sticks. Fortunately, we implement alternative aspect
detection, which is simpler and probably preferable, and which arguably
contains none of the original code and logic, and thus should be fully
safe.
While I don't know if cehoyos' copyright actually still applies, I'm
more comfortable with making the code GPL-only for now. Also change the
default to use the (in future) plain LGPL code, and deprecate the one
associated with the GPL code, so we can eventually remove the GPL code.
But it's also possible we decide that the copyright doesn't apply, and
undo the deprecation and GPL guards.
I expect that users won't notice anything. If you ask me, the old aspect
method was probably an accidental bug instead of intentional behavior.
Although, the new aspect method was broken too, so I had to fix it.
I call it `mobius` because apparently the form f(x) = (cx+a)/(dx+b) is
called a Möbius transform, which is the algorithm this is based on. In
the extremes it becomes `reinhard` (param=0.0) and `clip` (param=1.0),
smoothly transitioning between the two depending on the parameter.
This is a useful tone mapping algorithm since the tunable mobius
transform allows the user to decide the trade-off between color accuracy
and detail preservation on a continuous scale. The default of 0.3 is
already far more accurate than `reinhard` while also being reasonably
good at preserving highlights, without suffering from the overall
brightness drop and color distortion of `hable`.
For these reasons, make this the new default. Also expand and improve
the documentation for these tone mapping functions.
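Reconstructed in code, the curve looks roughly like this (a sketch
consistent with the description above, not copied from the shader):

    #include <math.h>

    // Linear (clip-like) below the threshold j, Möbius curve above it,
    // with coefficients chosen so the curve stays continuous and maps
    // the signal peak to 1.0.
    static float mobius(float x, float j, float peak)
    {
        if (x <= j)
            return x;
        float a = -j * j * (peak - 1.0f) / (j * j - 2.0f * j + peak);
        float b = (j * j - 2.0f * j * peak + peak) /
                  fmaxf(peak - 1.0f, 1e-6f);
        return (b * b + 2.0f * b * j + j * j) / (b - a) *
               (x + a) / (x + b);
    }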
List of changes:
1. Rename `signfs` to `scale`, to better match what it actually does
(force --sub-scale to apply to ASS subtitles), and fix the blatantly
wrong documentation (it actually specifically does *not* apply to
signs)
2. Rename `--sub-ass-style-override` to `--sub-ass-override` to help
reduce confusion between it and `--sub-ass-force-style`, as well as
pointing out that it doesn't necessarily actually override styles.
(The new `scale` option, for example, only sets
ASS_OVERRIDE_BIT_FONT_SIZE, but not ASS_OVERRIDE_BIT_STYLE)
3. Mention that `--sub-ass-override` is generally sort of smart about
only overriding dialog, not signs.
In a multi GPU scenario, it may be desirable to use different GPUs
for decode and display responsibilities. For example, if a secondary
GPU has better video decoding capabilities.
In such a scenario, we need to initialise a separate context for each
GPU, and use the display context in hwdec_cuda, while passing the
decode context to avcodec.
Once that's done, the actual hand-off between the two GPUs is
transparent to us (It happens during the cuMemcpy2D operation which
copies the decoded frame from a cuda buffer to the OpenGL texture).
In the end, the bulk of the work is around introducing a new
configuration option to specify the decode device.
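The hand-off itself is one cuMemcpy2D per plane, from decoder memory
into the CUDA mapping of the GL texture; schematically (all names
illustrative):

    #include <cuda.h>

    // The driver handles the actual cross-GPU transfer internally.
    static CUresult copy_plane(CUdeviceptr src, size_t src_pitch,
                               CUarray dst, size_t width_bytes,
                               size_t height)
    {
        CUDA_MEMCPY2D cpy = {
            .srcMemoryType = CU_MEMORYTYPE_DEVICE,
            .srcDevice = src,
            .srcPitch = src_pitch,
            .dstMemoryType = CU_MEMORYTYPE_ARRAY,
            .dstArray = dst, // from cuGraphicsSubResourceGetMappedArray
            .WidthInBytes = width_bytes,
            .Height = height,
        };
        return cuMemcpy2D(&cpy);
    }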
af_volume is deprecated, and so are its replaygain sub-options. To make
it possible to use replaygain without deprecated options (and of course
to make it available at all after af_volume is dropped), reintroduce
them as top-level options.
This also means that they are easily changeable at runtime by using them
as properties. Change the "volume" property to use the new update
mechanism as well.
We don't actually bother sharing the implementation between new and
deprecated mechanisms, as the deprecated one will simply be deleted.
For the from_dB() functions, we mention anders' copyright, although I'm
not sure if a mere formula is copyrightable. This will have to be
determined later.
This whole change is mostly untested. Our distributed human CI will take
care of it.
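For context, the formula in question is just the standard dB-to-linear
conversion (generic form below, not necessarily anders' exact function):

    #include <math.h>

    // Replaygain dB value to linear amplitude multiplier.
    static double db_to_gain(double db)
    {
        return pow(10.0, db / 20.0);
    }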
It's all explained in the DOCS changes. Although this option was always
kind of obscure and pointless. Until it is removed, the only reason for
setting it would be to raise the static default limit, so change its
default to INT_MAX so that it does nothing by default.
Instead of pausing if --keep-open is active, stop
at end but continue playing if seeking backwards.
And then stop again when end is reached.
Signed-off-by: wm4 <wm4@nowhere>
Over the PR, the option was renamed, and the manpage additions were
slightly changed/enhanced.
Also "announce" the plans to undeprecate it with changed semantics
later. The deprecation period is needed to warn script authors and
client API users (etc.) of the change.
This is done because everyone seems to expect --loop to loop the current
file, not the playlist. Even in cases when only 1 file is on the
playlist, the --loop-file semantics seem to be preferred.
Mostly because of ANGLE (sadly).
The implementation became unpleasantly big, but at least it's relatively
self-contained.
I'm not sure to what degree shaders from different drivers are
compatible as in whether a driver would randomly misbehave if it's fed
a binary created by another driver. The useless binaryFormat parameter
won't help it, as they can probably easily clash. As usual, OpenGL is
pretty shit here.
for reasons i can only guess at, some key events can vanish from the
event chain and mpv seems unresponsive.
after quite some testing i could confirm that the events are present at
the first entry point of the event chain, the sendEvent method of the
Application, and that they vanish at a point afterwards. now we use
that entry point to grab keyDown and keyUp events. we also stop
propagating those key events to prevent the 'no key input' error sound.
if we ever need the key events somewhere down the event chain we need
to start propagating them again. though this is not necessary currently.
DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL might be buggy on some hardware.
Additionally DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL might be supported on some
Windows 7 systems with the platform update, but it might have poor
performance. In these cases, the user might want to disable the use of
DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL swap chains with --angle-flip=no.
Add subtitle filter to remove additions for deaf or hard-of-hearing
(SDH). This is for English, but may in part work for others too.
This is an ASS filter and the intention is that it can always be
enabled, as it by default does not remove parts that may be normal text.
Harder filtering can be enabled with an additional option.
Signed-off-by: wm4 <wm4@nowhere>
Useful for testing. Unfortunately, the nVidia EGL driver ignores this,
and returns a GLES 3.2 context anyway (which it is allowed to do). Might
still be usable with ANGLE, which will really give you a GLES 2 context
if you ask for it.
As the manpage says, this has no value other than adding bugs.
It uses code based on context_x11.c, and basically does very stripped
down context creation (no alpha support etc.). It uses vdpau for
display, and maps vdpau output surfaces as FBOs to render into them.
This might be good to experiment with asynchronous presentation. For
now, it presents synchronously, with a 4 frame delay (which should whack
off A/V sync). The forced 4 frame delay is probably also why interaction
feels slower.
There are some weird vdpau errors on resizing and uninit. No idea what
causes them.
This is just a pointless refactor with the only goal of making
image_writer_opts.format a number.
The pointless part of it is that instead of using some sort of arbitrary
ID (in place of a file extension string), we use an AV_CODEC_ID_. There
was also some idea of falling back to the libavcodec MJPEG encoder if
mpv was not linked against libjpeg, but this fails. libavcodec insists on
having AV_PIX_FMT_YUVJ420P, which we pretend does not exist, and which
we always map to AV_PIX_FMT_YUV420P (without the J indicating full
range), so encoder init fails. This is pretty dumb, but whatever. The
not-caring factor is raised by the fact that we don't know that we
should convert to full range, because encoders have no proper way to
signal this. (Be reminded that AV_PIX_FMT_YUVJ420P is deprecated.)
Includes hls, mp4, mkv by default. This also avoids stupid things like
decoding at least 1 video frame per stream in the demuxer.
This also adds --demuxer-lavf-probe-info to give finer control over what
happens.
Implements --hwdec=videotoolbox on iOS. Similar to hwdec_osx.c, but
using CVPixelBuffer APIs available on iOS instead of the equivalent
IOSurface APIs in macOS.
We can drop the custom table.
For some reason, the interop does not accept GL_RGB_RAW_422_APPLE as
internal format for GL_RGB_422_APPLE, so switch the format table to use
GL_RGB (this way both interop and real textures work the same).
Another victim of the apparent requirement of exactly matching texture
formats is kCVPixelFormatType_32BGRA. vo_opengl wants to handle this as
normal RGBA texture, with a swizzle applied in the shader.
CGLTexImageIOSurface2D() rejects this, because it wants the exact
internal format. Just drop the format, because it's useless anyway.
(Maybe this is a bit too fragile...)
since there are different views on what ontop is, we make the ontop
window level modifiable. at the moment only support for macOS was added.
the default for macOS was changed from 'system' to 'window' since this
fixes an unwanted behaviour in fullscreen and in general causes less
issues with expected behaviour.
Fixes #2376 #3974
This replaces the old backend that exclusively used EGL windowing with
one that can also use ANGLE's ability to render to directly to a
texture. The advantage of this is that it allows mpv to create the swap
chain itself and this allows mpv to use a flip-mode swap chain on a HWND
(which avoids problems with DirectComposition) and to use a longer swap
chain that has six backbuffers by default (which reportedly fixes
problems with rendering 24fps video on 24Hz monitors.)
Also, "screenshot window" should now work on DXGI 1.2 and up (Windows 8
and up.)
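The swap chain parameters implied above come out to roughly this
(sketch; the real creation goes through ANGLE's D3D11 device, and the
format is an assumption):

    #include <dxgi1_2.h>

    // Flip-model swap chain with six backbuffers; pass to
    // IDXGIFactory2_CreateSwapChainForHwnd (device setup omitted).
    static const DXGI_SWAP_CHAIN_DESC1 swapchain_desc = {
        .Format = DXGI_FORMAT_R8G8B8A8_UNORM,
        .SampleDesc = { .Count = 1 },
        .BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT,
        .BufferCount = 6, // helps 24 fps video on 24 Hz monitors
        .SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL,
    };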
The manual entry for --hwdec states that d3d11va and d3d11va-copy
require Windows, which suggests they also work on Windows 7. Since they
don't, according to
https://github.com/mpv-player/mpv/issues/3285#issuecomment-228593539 and
personal testing, update the manual accordingly and bring the hwdec OS
requirements for ANGLE in line with videotoolbox, where the OS version
is stated.
To make it easier on the eyes, multi-line subtitles should
be left-justified (for most languages).
This adds an option to define how subtitles are to be justified
independently of how they are aligned.
Also add option to enable --sub-justify to be applied on ASS subtitles.
This was excessively useless, and I want my time back that was needed to
explain users why they don't want to use it.
It captured the byte stream only, and even for types of streams it was
designed for (like transport streams), it was rather questionable.
As part of the removal, un-inline demux_run_on_thread() (which has only
1 call-site now), and sort of reimplement --stream-dump to write the
data directly instead of using the removed capture code.
(--stream-dump is also very useless, and I struggled coming up with an
explanation for it in the manpage.)
Scale the window by the assumed DPI scaling factor, using 96 DPI as
base. For example, a screen that reports 192 DPI is assumed to have a
DPI scale factor 2. The window will then be created with twice the size.
For robustness reasons, we accept only integer DPI scales between 1 and
9. We also error out if the X and Y scales are very different, as this
most likely indicates a multiscreen system with botched size reporting.
I'm not sure if reading the X server's DPI is such a good idea - maybe
the Xrdb "Xft.dpi" value should be used instead. The current method
follows what xdpyinfo does.
This can be disabled with --hidpi-window-scale=no.
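The stated policy in code, roughly (a sketch, not the actual
x11_common.c logic):

    #include <math.h>

    // Integer scale from server-reported DPI, base 96; accept only
    // 1..9 and reject screens whose X/Y DPI disagree wildly.
    static int hidpi_scale(double dpi_x, double dpi_y)
    {
        double sx = dpi_x / 96.0, sy = dpi_y / 96.0;
        if (fabs(sx - sy) > 0.5)
            return -1; // likely botched multiscreen reporting
        int scale = (int)(sx + 0.5);
        return (scale >= 1 && scale <= 9) ? scale : -1;
    }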
Since for mpv CLI, the player state is a singleton, full prefetching is
a bit tricky. We do it only on the demuxer layer.
The implementation reuses the old "open thread". This means there is
significant potential for regressions even if the new option is not
used. This is made worse by the fact that I barely tested this code.
The generic mpctx_run_reentrant() wrapper is also removed - this was its
only user, and its remains become part of the new implementation.
Introduce the --opengl-hwdec-interop option, which replaces
--hwdec-preload. The new option allows explicit selection of the interop
backend.
This is relatively complex, and I would have preferred not to add this,
but it's probably useful to debug certain problems. In exchange, the
"new" option documents that pretty much any but the simplest use of it
will not be forward compatible.
Remove ad_spdif from the normal codec list, and select it explicitly.
One goal was to decouple this from the normal codec selection, so
they're less entangled and the decoder selection code can be simplified
in the far future. This means spdif codec selection is now done
explicitly via select_spdif_codec(). We can also remove the weird
requirements on "dts" and "dts-hd" for the --audio-spdif option, and it
can just do the right thing.
Now both video and audio codecs consist of a single codec family each,
vd_lavc and ad_lavc.
this replaces the old fullscreen with the native
macOS fullscreen. additionally the
--fs-black-out-screens option was removed since the new
API doesn't support it in a way the old one did.
it can possibly be re-added if done manually.
Fixes #2857 #3272 #1352 #2062 #3864
As documented in interface-changes.rst. This makes it much easier to
follow what the heck is going on.
Whether this is adequate for real-world use is unknown.
The latest 375.xx nvidia drivers add support for P016 output
surfaces. In combination with an ffmpeg change to return those
surfaces, we can display them.
The bulk of the work is related to knowing which format you're
dealing with at the right time. Once you know, it's straight forward.
Deactivating this option makes it possible to
circumvent the default OS X behavior of using
points. Windows on HiDPI resolutions won't open
in double the size anymore and videos are displayed
in their native resolution when windowed.
Fixes #3716
Enumerate all of the scaling-related options, even for the ``--cscale``
/ ``--tscale`` etc. variants. Unfortunately this breaks 80col quite
severely, but there's nothing I can do about it due to a bug in rst2man
preventing definition list labels from spanning multiple lines.
Also reorder some of the scaling-related options to be closer together
and in a more consistent order (for a top-to-bottom reading flow).
This allows us to define the tukey window (and other tapered windows).
Also add a missing option definition for `wblur` while we're at it, to
make testing out window-related stuff easier.
It turns out the glFlush() call really helps in some cases, though only
in audio timing mode (where we render, then wait for a while, then
display the frame). Add a --opengl-early-flush=auto mode, which does
exactly that.
It's unclear whether this is fine on OSX (strange things going on
there), but it should be.
See #3670.
At this point, all other hwaccels provide -copy modes, and vdpau is the
exception with not having one. Although there is vf_vdpaurb, it's less
convenient in certain situations, and exposes some issues with the
filter chain code as well.
The glFlush() call was made optional recently
since it's not needed in most cases. On OSX though
this is needed since we removed kCGLPFADoubleBuffer
from the context creation, so the glFlush() call
was added to the cocoa backend only.
The CGLFlushDrawable() call can be safely removed
since it only does something when a double
buffered context is used. Also fixes a small typo.
Fixes #3627.
It seems this can cause issues with certain platforms, so better to
disable it by default. The original reason for this isn't overly
justified, and display-sync mode should get rid of the need for it
anyway.
The new option is meant for testing, and will probably be removed if
nobody comes up and reports that enabling the option actually improves
anything.
Rename the text subtitle options from --sub-text- to --sub-
and --ass- options to --sub-ass-.
The intention is that common sub options are prefixed with --sub-,
and the special ASS options are seen as a special version of sub options.
The OSD options that work like the --sub- options are still named
--osd-.
The man page is updated, including a short note about the --sub-text-*
and --ass-* options being renamed to --sub-* and --sub-ass-*.
Seems like this confused users quite often.
Instead of --profile=pseudo-gui, --player-operation-mode=pseudo-gui now
has to be used to invoke pseudo GUI mode. The old way still works, and
still behaves in the old way.
Conflicts with the "playlist-pos" property. They're really a bit too
different, and since the --playlist-pos option is relatively new and
obscure, just rename it to get this out of the way.
This also lets you just do "mpv --hwdec file.mkv", with the minor caveat
that the legacy syntax "--hwdec val" or "-hwdec val" (without "=") does
not work as expected anymore.
Minimal support just for testing.
Only the window surface creation (including size determination) is
really platform specific, so this could be some generic thing with
platform-specific support as some sort of sub-driver, but on the other
hand I don't see much of a need for such a thing.
While most of the fbdev usage is done by the EGL driver, using this
fbdev ioctl is apparently the only way to get the display resolution.
The cuvid decoder already knows how to copy back to system memory
if NV12 frames are requested, and this will happen if the decoder
is used without the hwdec.
For convenience, let's add a wrapper hwdec so people don't have
to explicitly pick the cuvid decoder if they want this behaviour.
And introduce a global option which does this. Or more precisely, this
deprecates the global wasapi and coreaudio options, and adds a new one
that merges their functionality. (Due to the way the sub-option
deprecation mechanism works, this is simpler.)
Whitelisting supported codecs is (probably) still better than just
allowing everything, given the weird FFmpeg API. I'm also assuming
Libav doesn't even have the codec ID, but I didn't check.
Also add a --teletext-page option, since otherwise it decodes every
teletext page and shows them in succession.
And yes, we can't use av_opt_set_int() - instead we have to set it as
string. Because FFmpeg's option system is terrible.
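That workaround looks like this sketch (assuming the decoder exposes
the page option under the name "txt_page"; treat the name as an
assumption and check the decoder's documentation):

    #include <stdio.h>
    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>

    // Set the teletext page on the decoder's private options as a
    // string, since av_opt_set_int() doesn't work here.
    static int set_teletext_page(AVCodecContext *avctx, int page)
    {
        char buf[16];
        snprintf(buf, sizeof(buf), "%d", page);
        return av_opt_set(avctx->priv_data, "txt_page", buf, 0);
    }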
vo_opengl sub-option were always rather annoying to handle. It seems
better to make them global options instead. This is simpler and easier
to use. The only disadvantage we are aware of is that it's not clear
that many/all of these new global options work with vo_opengl only.
--vo=opengl-hq is also deprecated.
There is extensive compatibility with the old behavior. One exception is
that --vo-defaults will not apply to opengl-hq (though with opengl it
still works). vo-cmdline is also dysfunctional and will be removed in a
following commit.
These changes also affect opengl-cb.
The update mechanism is still rather inefficient: it requires syncing
with the VO after each option change, rather than batching updates.
There's also no granularity (video.c just updates "everything", and if
auto-ICC profiles are enabled, vo_opengl.c will fetch them on each
update).
Most of the manpage changes were done by Niklas Haas <git@haasn.xyz>.
Now options are accessible through the property list as well, which
unifies them to a degree.
Not all options support runtime changes (meaning affected components
need to be restarted for the options to take effect). Remove from the
manpage those properties which are cleanly mapped to options anyway.
From the user-perspective they're just options available through the
property interface.
Instead, add a hacky OPT_ASPECT option type, which only exists to accept
a "no" parameter, which in combination with the "--no-..." handling code
makes --no-video-aspect work again.
We can also remove the code in m_config.c, which only existed to make
"--no-aspect" (a deprecated alias) to work.
The client API can do this (and there are apparently some libmpv using
projects which rely on this). But it's just unnecessary bloat as it
requires a separate code path from the option parser. It would be better
to remove this code. Formally deprecate it, including API bump and
warning in the API changes file to make it really clear.
Normally, OSD can be disabled with --osd-level=0. But this also disables
terminal OSD, and some users want _only_ the terminal OSD. Add
--video-osd=no, which essentially disables the video OSD.
Ideally, it should probably be possible to control terminal and video
OSD levels independently, but that would require separate OSD timers
(and other state) for both components, so don't do it. But because the
current situation isn't too ideal, add a threat to the manpage that this
might be changed in the future.
Fixes #3387.
The --image-display-duration option controls how long an image is
displayed. It's also possible to display the image forever (until manual
user interaction stops playback).
With this, the core drops the old method to "drain" video (i.e. waiting
for the last frame duration on end of playback). Instead, we reuse
MPContext.time_frame. The old mechanism was disabled for non-images
anyway.
Fixes #3425.
This commit adds an --audio-channel=auto-safe mode, and makes it the
default. This mode behaves like "auto" with most AOs, except with
ao_alsa. The intention is to allow multichannel output by default on
sane APIs. ALSA is not sane as in it's so low level that it will e.g.
configure any layout over HDMI, even if the connected A/V receiver does
not support it. The HDMI fuckup is of course not ALSA's fault, but other
audio APIs normally isolate applications from dealing with this and
require the user to globally configure the correct output layout.
This will help with other AOs too. ao_lavc (encoding) is changed to the
new semantics as well, because it used to force stereo (perhaps because
encoding mode is supposed to produce safe files for crap devices?).
Exclusive mode output on Windows might need to be adjusted accordingly,
as it grants the same kind of low level access as ALSA (requires more
research).
In addition to the things mentioned above, the --audio-channels option
is extended to accept a set of channel layouts. This is supposed to be
the correct way to configure mpv ALSA multichannel output. You need to
put a list of channel layouts that your A/V receiver supports.
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to clean up this code a bit and make it work like the
video filter code.
As part of this change, make the balance code work like the volume code,
and add an option to back the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and there
are some other issues, formally deprecate both the old property and the
new option.
Old-style commands using _ as separator (e.g. show_progress) were still
used in some places, including documentation and configuration files.
This commit updates all such instances to the new style (show-progress)
so that commands are easier to find in the manual.
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
Do this because this was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes #3322.
Until now, we've always converted vdpau video surfaces to RGB, and then
mapped the resulting RGB texture. Change this so that the surface is
mapped as NV12 plane textures.
The reason this wasn't done until now is because vdpau surfaces are
mapped in an "interlaced" way as separate fields, even for progressive
video. This requires messy reinterleaving. It turns out that even
though it's an extra processing step, the result can be faster than
going through the video mixer for RGB conversion.
Other than some potential speed-gain, doing this has multiple other
advantages. We can apply our own color conversion, which is important in
more complex cases. We can correctly apply debanding and potentially
other processing that requires chroma-specific or in-YUV handling.
If deinterlacing is enabled, this switches back to the old RGB
conversion method. Until we have at least a primitive deinterlacer in
vo_opengl, this will stay this way. The d3d11 and vaapi code paths are
similar. (Of course these don't require any crazy field reinterleaving.)
This uses the normal autoprobing rules like "auto", but rejects anything
that isn't flagged as copying data back to system memory.
The chunk in command.c was dead code, so remove it instead of updating
it.
Add --taskbar-progress command line option and property which controls taskbar
progress indication rendering in Windows 7+. This option is on by default and
can be toggled during playback.
This option does not affect the creation process of ITaskbarList3. When the
option is turned off the progress bar is just hidden with TBPF_NOPROGRESS.
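The TBPF_NOPROGRESS behavior amounts to this sketch (C COM macros;
error handling omitted):

    #define COBJMACROS
    #include <windows.h>
    #include <shobjidl.h>

    // When disabled (or the duration is unknown), just hide the bar
    // instead of releasing the ITaskbarList3 instance.
    static void update_taskbar(ITaskbarList3 *tb, HWND hwnd, int enabled,
                               ULONGLONG pos, ULONGLONG duration)
    {
        if (!enabled || !duration) {
            ITaskbarList3_SetProgressState(tb, hwnd, TBPF_NOPROGRESS);
            return;
        }
        ITaskbarList3_SetProgressState(tb, hwnd, TBPF_NORMAL);
        ITaskbarList3_SetProgressValue(tb, hwnd, pos, duration);
    }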
Closes #2535
Flag that is set by default. Resetting it will result in mpv trying to
fit the client area with the video instead of the whole window with
border and decorations on the screen.
Marked as (Windows only) for now until it's implemented on other platforms.
--sub-ass=no / --ass=no still work, but --ass-style-override=strip is
preferred now. With this change, --ass-style-override can control all
the types of style overriding.
This uses ID3D11VideoProcessor to convert the video to a RGBA surface,
which is then bound to ANGLE. Currently ANGLE does not provide any way
to bind nv12 surfaces directly, so this will have to do.
ID3D11VideoContext1 would give us slightly more control about the
colorspace conversion, though it's still not good, and not available
in MinGW headers yet.
The video processor is created lazily, because we need to have the coded
frame size, of which AVFrame and mp_image have no concept. Doing the
creation lazily is less of a pain than somehow hacking the coded frame
size into mp_image.
I'm not really sure how ID3D11VideoProcessorInputView is supposed to
work. We recreate it on every frame, which is simple and hopefully
doesn't affect performance.
Commit 382bafcb changed the behavior for ab-loop-a. This commit changes
ab-loop-b so that the behavior is symmetric.
Adjust the OSD rendering accordingly to the two changes.
Also fix mentions of the "ab_loop" command to the now preferred
"ab-loop".
In the past, --video-unscaled also disabled zooming and aspect ratio
corrections. But this didn't make much sense in terms of being a useful
option. The new behavior just sets the initial video size to be
unscaled, but it's still affected by zoom commands and aspect ratio
corrections.
To get the old behavior back, --video-aspect=0 --video-zoom=0 need to be
added as well (in the general case). Most of the time it should not make
a difference though.
Also, there seems to have been some additional dst_rect clamping code
inside src_dst_split_scaling that didn't seem to either be necessary nor
ever get triggered. (The code immediately above it already makes sure to
crop the video if it's larger than the dst_rect)
No idea why it was there, but I just removed it.
It's pretty "unfriendly" and causes too many issues. (Probably. At least
they're more obvious to a user than e.g. broken frame timing.)
Potentially we could apply heuristics like applying this only on
fullscreen, but let's not. It's up to the user to configure this to
get best results.
Fixes #2997.
The past behavior was a bit weird, especially when zooming out. There
was no simple way to zoom in or out in consistent increments using
keybindings alone.
The new behavior preserves most of the old behavior's semantics but
scales out to infinity better. It coincidentally also makes it
really easy to get clean power of 2 ratios (e.g. 2x, 4x, 8x and their
inverses).
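The new mapping is effectively exponential in the zoom value (the value
acts as log2 of the scale factor), which is what makes the clean
power-of-2 ratios fall out:

    #include <math.h>

    // Zoom step of +1 doubles the displayed size, -1 halves it.
    static double zoom_to_scale(double video_zoom)
    {
        return pow(2.0, video_zoom);
    }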
Fixes #3004.
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
api. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
See --lavfi-complex option.
This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interaction with
libavfilter).
Future possible plans might include:
- freely switch tracks by providing some sort of default track graph
label
- automatically enabling audio visualization
- automatically mix audio or stack video when multiple tracks are
selected at once (similar to how multiple sub tracks can be selected)
This is probably the 3rd time the user-visible behavior changes. This
time, switch back because not normalizing seems to be the more expected
behavior from users.
Too many problems. Well, actually it's just Linux audio systems which
cause problems, and exclusive audio access on other platforms.
In any case, it seems you have to do some manual configuration if you
want multichannel audio output.