This is part of trying to get rid of --af-defaults, and the af
resample filter.
It requires a complicated mechanism to set the defaults on the resample
filter for backwards compatibility.
With the recent changes to the script it does not incur a startup delay
by default due to starting youtube-dl and waiting for it. This was the
main reason for making libmpv have a different default.
Starting sub processes from a library can still be a bit fishy, but I
think it's ok. Still mention it in the libmpv header. There were already
other cases where libmpv would start its own processes, such as the X11
backend calling xdg-screensaver. (This is fishy because UNIX process
management sucks: SIGCHLD and the wait() syscall make sub processes
non-transparent and can introduce conflicts with code trying to use
them.)
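As a tiny illustration of the conflict (hypothetical library code, not
mpv's):

    #include <sys/types.h>
    #include <sys/wait.h>

    // Hypothetical library-internal cleanup. waitpid(-1, ...) means "any
    // child of this process", so it can reap children spawned by the host
    // application itself; the application's own waitpid() then fails with
    // ECHILD, and a SIGCHLD handler installed here would clobber the
    // application's handler.
    static void library_reap_children(void)
    {
        int status;
        while (waitpid(-1, &status, WNOHANG) > 0) {
            // exit status consumed; no longer visible to the application
        }
    }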
Previously, toggling pause would generate no osd response, and changing
that wasn't even configurable. This was surprising to users who
generally expect to see *where* pause / unpause is taking place (#3028).
The previous default was osd-bar (unless the user specified
--no-osd-bar, in which case it was osd-msg). Aside from requiring
some twisted logic to implement, this surprised users since osd-msg3
wasn't displayed when seeking with the keyboard (#3028), so the time
seeked to was never displayed.
Before this commit, some autoselection of tracks coming from files
loaded with --external-files was still done. This commit removes all of
it, and the only way to select a track is via the explicit stream
selection options like --vid/--sid/--aid.
I think this was always the original intention. The change could in
theory still unintentionally surprise some users, so add a changelog
entry.
This does not affect --audio-file/--sub-file, even if these contain
mismatching track types. E.g. if audio files passed to --audio-file
contain subtitles, these should still be selected. Past feature requests
indicate that users want this.
This enables DXVA2 hardware decoding with ra_d3d11. It should be useful
for Windows 7, where D3D11VA is not available. Images are transferred
from D3D9 to D3D11 using D3D9Ex surface sharing[1].
Following Microsoft's recommendations, it uses a queue of shared
surfaces, similar to Microsoft's ISurfaceQueue. This will hopefully
prevent surface sharing from impacting parallelism and allow multiple
D3D11 frames to be in-flight at once.
[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/ee913554.aspx
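As a rough sketch of the sharing mechanism (simplified, with error
handling and the surface queue omitted; names are not from the actual
code): a shareable texture is created on the D3D9Ex device, and the
resulting handle is opened on the D3D11 device.

    #define COBJMACROS
    #include <d3d9.h>
    #include <d3d11.h>

    // Create a shareable D3D9Ex render-target texture and open the same
    // surface memory as a D3D11 texture via the shared handle.
    static ID3D11Texture2D *open_shared_surface(IDirect3DDevice9Ex *dev9,
                                                ID3D11Device *dev11,
                                                UINT w, UINT h,
                                                IDirect3DTexture9 **out_tex9)
    {
        HANDLE share = NULL;
        ID3D11Texture2D *tex11 = NULL;

        if (FAILED(IDirect3DDevice9Ex_CreateTexture(dev9, w, h, 1,
                D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                out_tex9, &share)))
            return NULL;

        if (FAILED(ID3D11Device_OpenSharedResource(dev11, share,
                &IID_ID3D11Texture2D, (void **)&tex11))) {
            IDirect3DTexture9_Release(*out_tex9);
            *out_tex9 = NULL;
            return NULL;
        }
        // The caller keeps both references; copies from the DXVA2 decode
        // surface into *out_tex9 become visible to D3D11 through tex11.
        return tex11;
    }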
For reasons why you'd want this, see the manpage additions. Disabled by
default, because it would increase the latency of live streams. (Or
well, at least it would be another problem when trying to get lower
latency.)
This tried to be clever by waiting for a longer time each time the
buffer was underrunning, or shorter if it was getting better. I think
this was pretty weird behavior and made no sense. If the user really
wants the stream to buffer longer, he/she/it can just pause the player
(the network caches will continue to be filled until they're full).
Every time I actually noticed this code triggering in my own use, I
didn't find it helpful. Apart from that it was pretty hard to test.
Some waiting is needed to keep the player from just playing the available
data as fast as possible (to compensate for late frames and underrunning
audio). Just use a fixed wait time, which can now be controlled by the
new --cache-pause-wait option.
Uses the EGL width/height by default when the user fails to set
the android-surface-width/android-surface-height options.
This means the vo-resize command is optional, and does not need to
be implemented on android devices which do not support rotation.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Technically, the user could just use --vd-lavc-o with the same result.
But I find it better to make this an explicit option, so we can document
the ups and downs, and also avoid setting it for non-h264.
Async compute in particular seems to cause problems on some drivers, and
even when supported, the benefits are not that massive from the tests I
have seen, so it's probably safe to keep off by default.
Async transfer on the other hand seems to work better and offers a more
substantial improvement, so it's kept on.
Instead of using a single primary queue, we generate multiple
vk_cmdpools and pick the right one dynamically based on the intent.
This has a number of immediate benefits:
1. We can use async texture uploads
2. We can use the DMA engine for buffer updates
3. We can benefit from async compute on AMD GPUs
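As a rough sketch of what "picking by intent" means (illustrative only,
not the actual vk_cmdpool code): given an intent expressed as queue
flags, prefer the most specialized family and fall back to any family
that supports the flags.

    #include <stdint.h>
    #include <vulkan/vulkan.h>

    // Pick a queue family matching `wanted` (e.g. VK_QUEUE_TRANSFER_BIT for
    // uploads, VK_QUEUE_COMPUTE_BIT for async compute), preferring dedicated
    // families over the general-purpose graphics queue.
    static uint32_t pick_queue_family(VkPhysicalDevice physd,
                                      VkQueueFlags wanted)
    {
        uint32_t num = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(physd, &num, NULL);
        VkQueueFamilyProperties props[32];
        if (num > 32)
            num = 32;
        vkGetPhysicalDeviceQueueFamilyProperties(physd, &num, props);

        uint32_t fallback = UINT32_MAX;
        for (uint32_t i = 0; i < num; i++) {
            VkQueueFlags flags = props[i].queueFlags;
            if ((flags & wanted) != wanted)
                continue;
            if (fallback == UINT32_MAX)
                fallback = i;
            // A family with no extra capabilities beyond what we asked for
            // is likely a dedicated engine (e.g. a pure DMA/transfer queue).
            if (!(flags & ~(wanted | VK_QUEUE_SPARSE_BINDING_BIT)))
                return i;
        }
        return fallback; // UINT32_MAX if nothing matches
    }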
Unfortunately, the major downside is that due to the lack of QF
ownership tracking, we need to use CONCURRENT sharing for all resources
(buffers *and* images!). In theory, we could try figuring out a way to
get rid of the concurrent sharing for buffers (which is only needed for
compute shader UBOs), but even so, the concurrent sharing mode doesn't
really seem to have a significant impact over here (nvidia). It's
possible that other platforms may disagree.
Our deadlock-avoidance strategy is stupidly simple: Just flush the
command every time we need to switch queues, and make sure all
submission and callbacks happen in FIFO order. This required lifting the
cmds_pending and cmds_queued out from vk_cmdpool to mpvk_ctx, and some
functions died/got moved as a result, but that's a relatively minor
change.
On my hardware this is a fairly significant performance boost, mainly
due to async transfers. (Nvidia doesn't expose separate compute queues
anyway). On AMD, this should be a performance boost as well due to async
compute.
This uses the new vk_signal mechanism to order all access to textures.
This has several advantages:
1. It allows real synchronization of image access across multiple frames
when using multiple queues for parallelism.
2. It allows using events instead of pipeline barriers, which are
finer-grained synchronization primitives that allow for more
efficient layout transitions over longer durations.
This commit also restructures some of the implicit transition code for
renderpasses to be more flexible and correct. (Note: this technically
drops the ability to transition the image out of undefined layout when
not blending, but that was a bug anyway and needs to be done properly)
vo_gpu: vulkan: remove no-longer-true optimization
The change to the output_tex format makes this no longer true, and it
actually seems to hurt performance now as well. So just don't do it
anymore. I also realized it hurts performance when drawing an OSD, so
it's probably not a good idea anyway.
Reduce it from 75MB in both directions (forward/backwards) to 10MB each.
The stream cache is kind of becoming useless in favor of the demuxer
cache. Using both doesn't make much sense, because they will contain
duplicated data for no reason.
Still leave it at 10MB, which may help with mp4 a bit. libavformat's mp4
demuxer tends to seek too much, so we try to avoid triggering network
level seeks by having some caching in the stream layer.
Some old crap which nobody needs and which probably nobody uses.
This relies on a GCC extension: using "## __VA_ARGS__" to remove the
comma from the argument list if the va args are empty. It's supported
by clang, and there's some chance newer standards will introduce a
proper way to do this. (Even if it breaks somewhere, it will be a
problem only for 1 release, since I want to drop the deprecated
properties immediately.)
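For illustration (hypothetical macro name, not the one in the code):

    #include <stdio.h>

    // With ", ## __VA_ARGS__", GCC/clang drop the trailing comma when the
    // variadic part is empty; plain ", __VA_ARGS__" would leave a dangling
    // comma and fail to compile in that case.
    #define MY_LOG(fmt, ...) printf(fmt "\n", ## __VA_ARGS__)

    int main(void)
    {
        MY_LOG("no arguments here");
        MY_LOG("with argument: %d", 42);
        return 0;
    }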
This was off for mpv CLI, but on for libmpv. The motivation behind this
was that it would be confusing for applications if libmpv continued
playback in a severely "degraded" way (without either audio or video),
and that it would be better to fail early.
In reality the behavior was just a confusing difference to mpv CLI, and
has confused actual users as well. Get rid of it.
Not bothering with a version bump, since this is so minor, and it's easy
to ensure compatibility in affected applications by just setting the
option explicitly.
(Also adding the missing next-release-marker in client-api-changes.rst.)
Annoying exception that makes no sense to keep. Normally, users or
client applications will either use --hwdec=auto, or not set the option
at all, which both leads to the expected result.
This was a bit confused, and I bet nobody understood whether to use
--sub-file or --sub-files, and what the difference is. Explicitly
mention that both variants exist, and how they are related.
Previously when using a libmpv instance to play multiple videos,
once --start was set there was no clear way to unset it. You could
use --start=0, but 0 does not always mean the beginning of the file
(especially when using --rebase-start-time=no). Looking up the start
timestamp and passing that in also does not always work, particularly
when the first timestamp is negative (since negative values to --start
have a special meaning).
This commit adds a new "none" value which maps to the internal
REL_TIME_NONE, matching the default value of the play_start option.
Most options that change the playback endpoint (e.g. --ab-loop-b, --end,
or --chapter) coexist, and playback stops when it reaches any of them.
This patch extends that behavior to --length so it isn't automatically
trumped by --end if both are present. These two now interact the same
way the other options do.
This change is also documented in DOCS/man/options.rst.
The libavcodec mediacodec support does not conform to the new hwaccel
APIs yet. It has been agreed upon that this glue code can be deleted
for now, and support for it will be restored at a later point.
Re-adding it would require that it support the
AVCodecContext.hw_device_ctx API. The hw_device_ctx would then contain
the surface ID.
vo_mediacodec_embed would actually perform the task of creating
vo.hwdec_devs and adding an mp_hwdec_ctx, whose av_device_ref is an
AVHWDeviceContext containing the android surface.
Make the VO<->decoder interface capable of supporting multiple hwdec
APIs at once. The main gain is that this simplifies autoprobing a lot.
Before this change, it could happen that the VO loaded the "wrong" hwdec
API, and the decoder was stuck with the choice (breaking hw decoding).
With the change applied, the VO simply loads all available APIs, so
autoprobing trickery is left entirely to the decoder.
In the past, we were quite careful about not accidentally loading the
wrong interop drivers. This was in part to make sure autoprobing works,
but also because libva had this obnoxious bug of dumping garbage to
stderr when using the API. libva was fixed, so this is not a problem
anymore.
The --opengl-hwdec-interop option is changed in various ways (again...),
and renamed to --gpu-hwdec-interop. It does not have much use anymore,
other than debugging. It's notable that the order in the hwdec interop
array ra_hwdec_drivers[] still matters if multiple drivers support the
same image formats, so the option can explicitly force one, if that
should ever be necessary, or more likely, for debugging. Examples are
the ra_hwdec_d3d11egl and ra_hwdec_d3d11eglrgb drivers, which both
support d3d11 input.
vo_gpu now always loads the interop drivers lazily by default, but when it does,
it loads them all. vo_opengl_cb now always loads them when the GL
context handle is initialized. I don't expect that this causes any
problems.
It's now possible to do things like changing between vdpau and nvdec
decoding at runtime.
This is also preparation for cleaning up vd_lavc.c hwdec autoprobing.
It's another reason why hwdec_devices_request_all() does not take a
hwdec type anymore.
These couldn't be relicensed, and won't survive the LGPL transition. The
other existing filters are mostly LGPL (except libaf glue code).
This removes the deprecated pan option. I guess it could be restored by
inserting a libavfilter filter (if there is one), but for now let it be
gone.
This temporarily breaks volume control (and things related to it, like
replaygain).
The internal stereo3d filter was removed due to being GPL only, and due
to being a mess that somehow used libavfilter's filter. Without this
filter, it's hard to remove our internal stereo3d image attribute, so
even using libavfilter's stereo3d filter would not work too well (unless
someone fixes it and makes it able to use AVFrame metadata, which we
then could mirror in mp_image).
This was never well thought-through anyway, so just drop it. I think
some "downsampling" support would still make sense, maybe that can be
readded later.
The option for enabling it now has an "auto" choice, which is the
default, and which will enable it if the media is thought to be played
via network or if the stream cache is enabled (same logic as
--cache-secs).
Also bump the --cache-secs default from 10 to 120.
Some back buffer is required to make the immediate forward range
seekable. This is because the back buffer limit is strictly enforced.
Just set a rather high back buffer by default. It's not used if
--demuxer-seekable-cache is disabled, so this is without risk.
Until now, the demuxer cache was limited to a single range. Extend this
to multiple ranges. Should be useful for slow network streams.
This commit changes a lot in the internal demuxer cache logic, so
there's a lot of room for bugs and regressions. The logic without the
demuxer cache is mostly untouched, but it is still affected by the code
changes. Or in other words, this commit probably fucks up shit.
There are two things which make multiple cached ranges rather hard:
1. the need to resume the demuxer at the end of a cached range when
seeking to it
2. joining two adjacent ranges when the lower range "grows" into it (and
resuming the demuxer at the end of the new joined range)
"Resuming" the demuxer means that we perform a low level seek to the end
of a cached range, and properly append new packets to it, without adding
packets multiple times or creating holes due to missing packets.
Since audio and video never line up exactly, there is no clean "cut"
possible, at which you could resume the demuxer cleanly (for 1.) or
which you could use to detect that two ranges are perfectly adjacent
(for 2.). How the demuxer interleaves multiple streams is also
unpredictable. Typically you will have to expect that it randomly allows
one of the streams to be ahead by a bit, and so on.
To deal with this, we have heuristics in place to detect when one packet
equals or is "behind" a packet that was demuxed earlier. We reuse the
refresh seek logic (used to "reread" packets into the demuxer cache when
enabling a track), which checks for certain packet invariants.
Currently, it observes whether either the raw packet position or the
packet DTS is strictly monotonically increasing. If neither is true, we
discard old ranges when creating a new one.
This heavily depends on the file format and the demuxer behavior. For
example, not all file formats have DTS, and the packet position can be
unset due to libavformat not always setting it (e.g. when parsers are
used).
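As a rough illustration of the kind of check this amounts to (names made
up, not the actual demuxer code):

    #include <stdbool.h>
    #include <stdint.h>

    #define VAL_UNSET INT64_MIN   // hypothetical "no value" marker

    struct pkt_info {
        int64_t pos;   // byte position in the source, or VAL_UNSET
        int64_t dts;   // decode timestamp, or VAL_UNSET
    };

    // The new packet "continues" the cached range only if at least one
    // invariant holds: strictly increasing position or strictly increasing
    // DTS. If neither holds, the caller discards old ranges instead of
    // trying to keep them.
    static bool continues_range(struct pkt_info last, struct pkt_info next)
    {
        bool pos_ok = last.pos != VAL_UNSET && next.pos != VAL_UNSET &&
                      next.pos > last.pos;
        bool dts_ok = last.dts != VAL_UNSET && next.dts != VAL_UNSET &&
                      next.dts > last.dts;
        return pos_ok || dts_ok;
    }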
At the same time, we must deal with all the complicated state used to
track prefetching and seek ranges. In some complicated corner cases, we
just give up and discard other seek ranges, even if the previously
mentioned packet invariants are fulfilled.
To handle joining, we're being particularly dumb, and require a small
overlap to be confident that two ranges join perfectly. (This could be
done incrementally with as little overlap as 1 packet, but corner cases
would eat us: each stream needs to be joined separately, and the cache
pruning logic could remove overlapping packets for other streams again.)
Another restriction is that switching the cached range will always
trigger an asynchronous low level seek to resume demuxing at the new
range. Some users might find this annoying.
Interleaved subtitles are not fully handled yet; the seekable range is
clamped to where subtitle packets are.
Like the manual says, this is technically undefined behaviour. See:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx
In particular, MSDN says texture arrays created with the BIND_DECODER
flag cannot be used with CreateShaderResourceView, which means they
can't be sampled through SRVs like normal Direct3D textures. However,
some programs (Google Chrome included) do this anyway for performance
and power-usage reasons, and it appears to work with most drivers.
Older AMD drivers had a "bug" with zero-copy decoding, but this appears
to have been fixed. See #3255, #3464 and http://crbug.com/623029.
This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL
generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.
What works:
- All of mpv's internal shaders should work, including compute shaders.
- Some external shaders have been tested and work, including RAVU and
adaptive-sharpen.
- Non-dumb mode works, even on very old hardware. Most features work at
feature level 9_3 and all features work at feature level 10_0. Some
features also work at feature level 9_1 and 9_2, but without high-bit-
depth FBOs, it's not very useful. (Hardware this old is probably not
fast enough for advanced features anyway.)
Note: This is more compatible than ANGLE, which requires 9_3 to work
at all (GLES 2.0), and 10_1 for non-dumb-mode (GLES 3.0).
- Hardware decoding with D3D11VA, including decoding of 10-bit formats
without truncation to 8-bit.
What doesn't work / can be improved:
- PBO upload and direct rendering do not work yet. Direct rendering
requires persistent-mapped PBOs because the decoder needs to be able
to read data from images that have already been decoded and uploaded.
Unfortunately, it seems like persistent-mapped PBOs are fundamentally
incompatible with D3D11, which requires all resources to use driver-
managed memory and requires memory to be unmapped (and hence pointers
to be invalidated) when a resource is used in a draw or copy
operation.
However it might be possible to use D3D11's limited multithreading
capabilities to emulate some features of PBOs, like asynchronous
texture uploading.
- The blit() and clear() operations don't have equivalents in the D3D11
API that handle all cases, so in most cases, they have to be emulated
with a shader. This is currently done inside ra_d3d11, but ideally it
would be done in generic code, so it can take advantage of mpv's
shader generation utilities.
- SPIRV-Cross is used through a NIH C-compatible wrapper library, since
it does not expose a C interface itself.
The library is available here: https://github.com/rossy/crossc
- The D3D11 context could be made to support more modern DXGI features
in future. For example, it should be possible to add support for
high-bit-depth and HDR output with DXGI 1.5/1.6.
We don't try to auto-detect them at load time, as that would be too
much of a pain - even FFmpeg requires fetching and parsing of video
packets, and exposes the information only via deprecated API.
But there still needs to be a way to select them by default. This is
also needed to get the first CC packet at all (without seeking back).
This commit also attempts to clean up locking a bit, which is a PITA,
but it's better to be careful & clean.
See manpage additions.
(In ffmpeg-mpv and Libav, this is still called "cuvid". Libav won't work
yet, because it has no frame params support, but this could get fixed
soon.)
Comparing mpv's implementation against the ACES ODR reference samples
and algorithms, it seems like they're happy desaturating highlights
_way_ more aggressively than mpv currently does. And indeed, looking at
some example clips like The Redwoods (which is actually well-mastered),
the current desaturation produces unnatural-looking brightness fringes
where the sky meets the treeline.
Adjust the algorithm to make it apply to a much larger, more gradual
brightness region; and change the interpretation of the parameter. As a
bonus, the new parameter is actually sanely scaled (higher values = more
desaturation). Also, make it scale based on the signal level instead of
the luminance, to avoid under-desaturating bright blues.
This improves upon the previous commit, and partially rewrites it (and
other code). It does:
- disable the seeking within cache by default, and add an option to
control it
- mess with the buffer estimation reporting code, which will most likely
lead to funny regressions even if the new features are not enabled
- add a back buffer to the packet cache
- enhance the seek code so you can seek into the back buffer
- unnecessarily change a bunch of other stuff for no reason
- fuck up everything and vomit ponies and rainbows
This should actually be pretty usable. One thing we should add is some
properties to report the proper buffer state. Then the OSC could show a
nice buffer range. Also configuration of the buffers could be made
simpler. Once this has been tested enough, it can be enabled by default,
and might replace the stream cache's byte ringbuffer.
In addition it may or may not be possible to keep other buffer ranges
when seeking outside of the current range, but that would be much more
complex.
This should be functionally identical to rgba16f, since the formats only
differ in their representation on the CPU, but it could be useful for RA
backends that don't expose rgba16f, like Vulkan. It's definitely useful
for the WIP D3D11 backend.
It seems this will be useful for Rockchip DRM hwcontext integration.
DRM hwcontexts have additional internal structure which can be different
depending on the decoder, and which is not part of the generic hwcontext
API. Rockchip has 1 layer, which EGL interop happens to translate to an
RGB texture, while VAAPI (mapped as DRM hwcontext) will use multiple
layers. Both will use sw_format=nv12, and thus are indistinguishable on
the mp_image_params level. But this is needed to initialize the EGL
mapping and the vo_gpu video renderer correctly.
We hope that the layer count is enough to tell whether EGL will
translate the data to an RGB texture (vs. 2 textures resembling raw nv12
data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.
This commit adds the flag, infrastructure to set it, and an "example"
for D3D11.
The D3D11 addition is quite useless at this point. But later we want to
get rid of d3d11_update_image_attribs() anyway, while we still need a
way to force d3d11vpp filter insertion, so maybe it has some
justification (who knows). In any case it makes testing this easier.
Obviously it also adds some basic support for triggering the opaque
format for decoding, which will use a driver-specific format, but which
is not supported in shaders. The opaque flag is not used to determine
whether d3d11vpp needs to be inserted, though.
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.
We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
This was confusing at best. Change it to output the actual choices.
(Seems like in the end it's always me who has to clean up other people's
bullshit.)
Context names were not unique - but they should be, so fix it. The whole
point of the original --opengl-backend option was to side-step the
tricky auto-detection, so you know exactly what you get. The goal of
this commit is to make --gpu-context work the same way. Fix the
non-unique names by appending "vk" to the names.
Keep in mind that this was not suitable for selecting the "UI" backend
anyway, since "x11" would force GLX, whereas people on not-NVIDIA
actually want "x11egl". Users trying to use --gpu-context=x11 to force
the X11 backend would always end up with GLX, which would at least break
VAAPI hardware decoding for them. Basically the idea that this option
could select the "UI" type is completely broken - it selects an
implementation, which implies a UI. Selecting the UI type would
require a separate mechanism. (Although in theory this separate
mechanism could be part of the --gpu-context option - in any case,
someone would have to implement it.)
To achieve help output that can actually be understood, just duplicate
the code. Most of that code is duplicated anyway, and trying to share
just the list code at the cost of making the output unreadable doesn't
make much sense. If we wanted to save code/effort, we could
just remove the help output altogether.
--gpu-api has non-unique entries, and it would be nice to group them
(e.g. list all OpenGL capable contexts with "opengl"), but C makes this
simple idea too much of a pain, so don't do it.
Also remove a stray tab from the android entry on the manpage.
Signed-off-by: wm4 <wm4@nowhere>
Rename --stats to --load-stats-overlay and add an entry to options.rst
over the original commit.
Signed-off-by: wm4 <wm4@nowhere>
At the moment, rendering on Android requires ``--vo=opengl-cb`` and
a lot of java<->c++ bridging code to receive and react to
the render callback in java. Performance also suffers with opengl-cb,
due to the overhead of context switching in JNI.
With this patch, Android can render using ``--vo=gpu --gpu-context=android``
(after setting ``--wid`` to point to an android.view.Surface on-screen).
But --msg-level can only raise the log level used for --log-file,
because the original idea with --log-file was that it'd log verbose
messages to disk even if terminal logging is lower than -v or fully
disabled.
Changed the reference from --gpu-gamma to --gamma-factor,
and changed the reference from --post-shader to --glsl-shaders,
in order to reflect actual changes to the option names.
Seems to be fixed upstream in the nvidia driver, so it's probably a good
idea to 1. force the layout and 2. remove the warning, as it now
actually works. Users with older drivers would run into errors, but they
can still use shaderc as a replacement. (And it's not like the old
status quo was any better)
This has several advantages:
1. no more redundant texcoords when we don't need them
2. no more arbitrary limit on how many textures we can bind
3. (that extends to user shaders as well)
4. no more arbitrary limits on tscale radius
To realize this, the VAO was moved from a hacky stateful approach
(gl_sc_set_vertex_attribs) - which always bothered me since it was
required for compute shaders as well even though they ignored it - to be
a proper parameter of gl_sc_dispatch_draw, and internally plumbed into
gl_sc_generate, which will make a (properly mangled) deep copy into
params.vertex_attribs.
Apparently this filter is broken in a weird way, which even makes some
libavfilter functions segfault in certain conditions. Don't waste time
with it and just remove the examples.
Also adjust the "life" example description (certainly this filter is
100% worthless, but the example does demonstrate how to use source
filters without any available input).
auto-copy selects more modes than the ones listed. It will always be
outdated anyway.
The GLX vaapi backend is never selected anymore, because it sucks.
In addition to the built-in nvidia compiler, we now also support a
backend based on libshaderc. shaderc is sort of like glslang except it
has a C API and is available as a dynamic library.
The generated SPIR-V is now cached alongside the VkPipeline in the
cached_program. We use a special cache header to ensure validity of this
cache before passing it blindly to the vulkan implementation, since
passing invalid SPIR-V can cause all sorts of nasty things. It's also
designed to self-invalidate if the compiler gets better, by offering a
catch-all `int compiler_version` that implementations can use as a cache
invalidation marker.
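Roughly, the header amounts to something like this (field names are
illustrative, not the actual layout in the code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    // Prepended to the cached SPIR-V blob; if anything mismatches, the
    // cache is treated as invalid instead of being passed to the driver.
    struct spirv_cache_header {
        char     magic[4];          // e.g. "MPSV"
        uint32_t header_version;    // bumped when this struct changes
        uint32_t compiler_id;       // which compiler produced the blob
        uint32_t compiler_version;  // catch-all invalidation marker
        uint32_t spirv_size;        // size of the SPIR-V payload that follows
    };

    static bool cache_header_ok(const void *blob, size_t size,
                                uint32_t want_compiler, uint32_t want_version)
    {
        struct spirv_cache_header h;
        if (size < sizeof(h))
            return false;
        memcpy(&h, blob, sizeof(h));
        return memcmp(h.magic, "MPSV", 4) == 0 &&
               h.compiler_id == want_compiler &&
               h.compiler_version == want_version &&
               h.spirv_size <= size - sizeof(h);
    }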
This time based on ra/vo_gpu. 2017 is the year of the vulkan desktop!
Current problems / limitations / improvement opportunities:
1. The swapchain/flipping code violates the vulkan spec, by assuming
that the presentation queue will be bounded (in cases where rendering
is significantly faster than vsync). But apparently, there's simply
no better way to do this right now, to the point where even the
stupid cube.c examples from LunarG etc. do it wrong.
(cf. https://github.com/KhronosGroup/Vulkan-Docs/issues/370)
2. The memory allocator could be improved. (This is a universal
constant)
3. Could explore using push descriptors instead of descriptor sets,
especially since we expect to switch descriptors semi-often for some
passes (like interpolation). Probably won't make a difference, but
the synchronization overhead might be a factor. Who knows.
4. Parallelism across frames / async transfer is not well-defined, we
either need to use a better semaphore / command buffer strategy or a
resource pooling layer to safely handle cross-frame parallelism.
(That said, I gave resource pooling a try and was not happy with the
result at all - so I'm still exploring the semaphore strategy)
5. We aggressively use pipeline barriers where events would offer a much
more fine-grained synchronization mechanism. As a result of this, we
might be suffering from GPU bubbles due to too-short dependencies on
objects. (That said, I'm also exploring the use of semaphores as an
ordering tactic which would allow cross-frame time slicing in theory)
Some minor changes to the vo_gpu and infrastructure, but nothing
consequential.
NOTE: For safety, all use of asynchronous commands / multiple command
pools is currently disabled completely. There are some left-over relics
of this in the code (e.g. the distinction between dev_poll and
pool_poll), but that is kept in place mostly because this will be
re-extended in the future (vulkan rev 2).
The queue count is also currently capped to 1, because the lack of
cross-frame semaphores means we need the implicit synchronization from
the same-queue semantics to guarantee a correct result.
This never really made sense since the BT.1886 changes. It should get
*brighter* for bright rooms, not darker for dark rooms. Picked some new
values that seemed reasonable-ish.
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved to
ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
This mechanism uses system() and shouldn't even exist. x11_common.c has
its own solution for the original problem (disabling Linux DE
screensavers without MPlayer/mpv having to link a dbus lib). If that is
not sufficient, you can create a simple Lua script.
Incidentally fixes #4888.
This clearly highlights all out-of-gamut/clipped pixels. (Either too
bright or too saturated)
Has some (documented) caveats. Also make TONE_MAPPING_CLIP stop actually
clamping the value range (it's unnecessary and breaks this feature).
This removes all GPL-only code from it, and that's the whole purpose.
Also happens to be much simpler.
The "deinterlace" option still sort of exists, but only as runtime
changeable option. The main change in behavior is that the property will
not report back the actual deint state. Or in other words, if inserting
or initializing the filter fails, the deinterlace property will still
return "yes". This is in line with most recent behavior changes to
properties and options.
This was attempted before in fc9695e63b, but it was reverted in
1b7ce759b1 because it caused conflicts with other software watching
the same keys (See #2041.) It seems like some PCs ship with OEM software
that watches the volume keys without consuming key events and this
causes them to be handled twice, once by mpv and once by the other
software.
In order to prevent conflicts like this, use the WM_APPCOMMAND message
to handle media keys. Returning TRUE from the WM_APPCOMMAND handler
should indicate to the operating system that we consumed the key event
and it should not be propogated to the shell. Also, we now only listen
for keys that are directly related to multimedia playback (eg. the
APPCOMMAND_MEDIA_* keys.) Keys like APPCOMMAND_VOLUME_* are ignored, so
they can be handled by the shell, or by other mixer software.
This currently only works when using lcms-based color management
(--icc-profile-*).
In principle, we could also support using lcms even when the user has
not specified an ICC profile, by generating the profile against a fixed
reference (--target-prim/--target-trc) instead. I still might do that
some day, simply because 3dlut provides a higher quality conversion than
our simple gamut mapping does for stuff like BT.2020, and also because
it's now needed to enable embedded ICC profiles. But that would be a
separate change, so preserve the status quo for now.
(Besides, my opinion is still that you should be using an ICC profile if
you care about colors being accurate _at all_)
This broke float textures, which were actually used by some shaders.
There were probably some other bugs as well.
Lots of code can be avoided by using ra_tex_params directly, so do that.
The main change is that COMPONENT/FORMAT are replaced by a single FORMAT
directive, which takes different parameters now. Due to the mess with
16/32 bit float textures, and because we want to support other APIs than
just GL in the future, it's not really clear how this should be handled,
and the nice component/type separation makes things actually harder. So
just jump the gun and use the ra_format.name names, which were
originally meant mostly for debugging. (This is probably something that
will be regretted later.)
Still only superficially tested, but seems to work.
Fixes #4708.
Since this code was already written for HDR, and is now per-channel
(because it works better for HDR as well), we can actually reuse this to
get very high quality gamut mapping without clipping. The only required
change is to move the tone mapping from before the gamut map to after
the gamut map. Additionally, we also need to account for changes in the
signal range as a result of applying the CMS when we compute ref_peak,
which is fortunately pretty easy because we only need to consider the
case of primaries mapping to themselves.
Since `HDR` no longer really makes sense as a label, rename it to
`--tone-mapping` in general. Also fits better with
`--tone-mapping-desat` etc.
Arguably we could also rename `--hdr-compute-peak`, but that option is
basically only useful for HDR content anyway because we don't need
information about the signal range for gamut mapping.
This (finally!) gives us reasonably high quality gamut mapping even in
the absence of an ICC profile / 3DLUT.
Parsing the texture data as raw strings makes the textures the most
portable and self-contained. In order to facilitate different types of
shaders, the parse_user_shader interaction has been changed to instead
have it loop through blocks and call the passed functions for each valid
block parsed. This is more modular and also cleaner, with better code
separation.
Closes #4586.
Two changes, compounded into one since they affect the same logic:
1. Never use linearization for HDR downscaling
2. Always use linearization for interpolation
Instead of fixing p->use_linear at the beginning of pass_render_frame,
we flip it on "dynamically" as needed. I plan on killing this
p->use_linear frame (along with other per-pass metadata) and moving them
into their own struct for tracking the "current" state of the video, but
that's a separate/upcoming refactor.
As a small bonus, reduce some code duplication in the interpolation
logic.
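As a sketch, the decision boils down to something like this (names
assumed; the precedence when HDR downscaling and interpolation coincide
is an assumption, not stated above):

    #include <stdbool.h>

    static bool want_linear(bool requested, bool downscaling, bool is_hdr,
                            bool interpolating)
    {
        bool linear = requested;
        if (downscaling && is_hdr)
            linear = false;   // 1. never linearize when downscaling HDR
        if (interpolating)
            linear = true;    // 2. always linearize for interpolation
        return linear;
    }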
Fixes#4631
I've found more test cases where hwdec=cuda shits itself, even
hwdec=cuda-copy. So the whole “copyback is no worse than swdec” is
simply not true. Also, in the light of 10 bit media files and APIs
silently truncating to 8 bit, the warnings need to be generalized a bit.
It's no longer safe to say that “doesn't convert to RGB” means “perfect
playback”.
I've also added a very strong disclaimer to the whole hwdec scenario
clarifying why hwdec is usually a bad idea unless absolutely needed,
because I've seen issue after issue that is resolved by disabling hwdec.
This is done via compute shaders. As a consequence, the tone mapping
algorithms had to be rewritten to compute their known constants in GLSL
(ahead of time), instead of doing it once. Didn't affect performance.
Using shmem/SSBO atomics in this way is extremely fast on nvidia, but it
might be slow on other platforms. Needs testing.
Unfortunately, setting up the SSBO still requires OpenGL calls, which
means I can't have it in video_shaders.c, where it belongs. But I'll
defer worrying about that until the backend refactor, since then I'll be
breaking up the video/video_shaders structure anyway.
Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it
does.
This is reminiscent of the MPlayer -dr flag, but the implementation is
completely different. It's the same basic concept: letting the decoder
render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't
try to go through filters (libavfilter doesn't support this anyway).
Unless a filter can work in-place, DR will be silently disabled. MPlayer
had very complex semantics about buffer types and management (which
apparently nobody ever understood) and weird restrictions that mostly
limited it to mpeg2 style codecs. The mpv code does not do any of this,
and just lets the decoder allocate an arbitrary number of untyped
images. (No MPlayer code was used.)
Parts of the code based on work by atomnuker (starting point for the
generic code) and haasn (some GL definitions, some basic PBO code, and
correct fencing).
Remove this code because it could be argued that it contains GPL-only
code (see commit 642e963c86 for details).
The remaining aspect methods appear to work just as well, are
potentially more compatible to other players, and the code becomes much
simpler.