--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but everything you're receiving.
This option has been deprecated upstream for a long time, probably
doesn't even work anymore, and won't work moving forwards as we replace
the vulkan code by libplacebo wrappers.
I haven't removed the option completely yet since in theory we could
still add support for e.g. a native glslang wrapper in the future. But
most likely the future of this code is deletion.
As an aside, fix an issue where the man page didn't mention d3d11.
Despite their place in the tree, hwdecs can be loaded and used just
fine by the vulkan GPU backend.
In this change we add Vulkan interop support to the cuda/nvdec hwdec.
The overall process is mostly straightforward, so the main observation
here is that I had to implement it using an intermediate Vulkan buffer,
because direct VkImage usage is blocked by a bug in the nvidia
driver. When that gets fixed, I will revisit this.
Nevertheless, the intermediate buffer copy is very cheap, as it's all
device memory from start to finish. Overall CPU utilisation is pretty
much the same as with the OpenGL GPU backend.
Note that we cannot use a single intermediate buffer - rather there
is a pool of them. This is done because the cuda memcpys are not
explicitly synchronised with the texture uploads.
In the basic case, this doesn't matter because the hwdec is not
asked to map and copy the next frame until after the previous one
is rendered. In the interpolation case, we need extra future frames
available immediately, so we'll be asked to map/copy those frames
and vulkan will be asked to render them. So far, harmless right? No.
All the vulkan rendering, including the upload steps, are batched
together and end up running very asynchronously from the CUDA copies.
The end result is that all the copies happen one after another, and
only then do the uploads happen, which means all textures are uploaded
with the same, final frame data. Whoops. Unsurprisingly, this results
in jerky motion because every 3/4 frames are identical.
The buffer pool ensures that we do not overwrite a buffer that is
still waiting to be uploaded. The ra_buf_pool implementation
automatically checks if existing buffers are available for use and
only creates a new one if it really has to. It's hard to say for sure
what the maximum number of buffers might be but we believe it won't
be so large as to make this strategy unusable. The highest I've seen
is 12 when using interpolation with tscale=bicubic.
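A minimal sketch of the reuse pattern, with hypothetical names (the
real ra_buf_pool code differs in detail):

    // Hypothetical sketch: reuse the first buffer whose previous upload
    // has completed, otherwise grow the pool.
    static struct buf *pool_get(struct pool *p)
    {
        for (int i = 0; i < p->num_bufs; i++) {
            if (buf_poll(p->bufs[i]))   // upload done, safe to overwrite
                return p->bufs[i];
        }
        struct buf *new = buf_create(p);
        MP_TARRAY_APPEND(p, p->bufs, p->num_bufs, new);
        return new;
    }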
A future optimisation here is to synchronise the CUDA copies with
respect to the vulkan uploads. This can be done with shared semaphores
that would ensure the copy of the second frame only happens after the
upload of the first frame, and so on. This isn't trivial to implement
as I'd have to first adjust the hwdec code to use asynchronous cuda;
without that, there's no way to use the semaphore for synchronisation.
This should result in fewer intermediate buffers being required.
Since linear downscaling makes sense to handle independently from
linear/sigmoid upscaling, we split this option up. Now,
linear-downscaling is its own option that only controls linearization
when downscaling and nothing more. Likewise, linear-upscaling /
sigmoid-upscaling are two mutually exclusive options (the latter
overriding the former) that apply only to upscaling and no longer
implicitly enable linear light downscaling as well.
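In effect, the decision boils down to something like this sketch
(option names as introduced here; the surrounding code is assumed):

    bool use_linear = false;
    if (downscaling) {
        use_linear = opts->linear_downscaling;
    } else if (upscaling) {
        // sigmoid-upscaling overrides linear-upscaling if both are set
        use_linear = opts->sigmoid_upscaling || opts->linear_upscaling;
    }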
The old behavior was very confusing, as evidenced by issues such
as #6213. The current behavior should make much more sense, and only
minimally breaks backwards compatibility (since using linear-scaling
directly was very uncommon - most users got this for free as part of
gpu-hq and relied only on that).
Closes #6213.
by default the pixel format creation falls back to the software renderer
when everything else fails. this is mostly needed for VMs. additionally
one can directly request an sw renderer or exclude it entirely.
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
Until now, stopping playback aborted the demuxer and I/O layer violently
by signaling mp_cancel (bound to libavformat's AVIOInterruptCB
mechanism). Change it to try closing them gracefully.
The main purpose is to silence those libavformat errors that happen when
you request termination. Most of libavformat barely cares about the
termination mechanism (AVIOInterruptCB), and essentially it's like the
network connection is abruptly severed, or file I/O suddenly returns I/O
errors. There were issues with dumb TLS warnings, parsers complaining
about incomplete data, and some special protocols that require server
communication to gracefully disconnect.
We still want to abort it forcefully if it refuses to terminate on its
own, so a timeout is required. Users can set the timeout to 0, which
should give them the old behavior.
This also removes the old mechanism that treats certain commands (like
"quit") specially, and tries to terminate the demuxers even if the core
is currently frozen. This is for situations where the core synchronized
to the demuxer or stream layer while the network is unresponsive. This in
turn can only happen due to the "program" or "cache-size" properties in
the current code (see one of the previous commits). Also, the old
mechanism doesn't fit particularly well with the new one. We wouldn't
want to abort playback immediately on a "quit" command - the new code is
all about giving it a chance to end it gracefully. We'd need some sort
of watchdog thread or something equally complicated to handle this. So
just remove it.
The change in osd.c is to prevent it from clearing the status line while
waiting for termination. The normal status line code doesn't output
anything useful at this point, and the code path taken clears it, both
of which are annoying behavior changes, so just let it show the old
one.
With the advent of actual HDR devices, my real measured ICC profile has
an "infinite" contrast, since the display is completely off on pure
black inputs. 100k:1 might not be enough, so let's just bump it up to
1m:1 to be safe.
Also, improve the logging in the case that the detected contrast is too
high by default.
Instead of using an internal counter to keep track of the value that was
set last, attempt to find the current value of the property/option in
the value list, and then set the next value in the list.
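Roughly, with hypothetical helper names, the new lookup works like
this:

    // Find the current value in the list, then set the entry after it,
    // wrapping around at the end.
    int current = -1;
    for (int i = 0; i < num_values; i++) {
        if (value_equals(property_get(name), values[i])) {
            current = i;
            break;
        }
    }
    int next = current < 0 ? 0 : (current + 1) % num_values;
    property_set(name, values[next]);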
There are some potential problems. If a property refuses to accept a
specific value, the cycle-values command will fail, and start from the
same position again. It can't know that it's supposed to skip the next
value. The same can happen to properties which behave "strangely", such
as the "aspect" property, which will return the current aspect if you
write "-1" to it. As a consequence, cycle-values can appear to get
"stuck".
I still think the new behavior is what users expect more, and which is
generally more useful. We won't restore the ability to get the old
behavior, unless we decide to revert this commit entirely.
Fixes #5772, and hopefully other complaints.
Until recently, the AO was reinitialized strictly only on decoder format
changes. But the commit for simplifying audio format negotiation removed
this. Now the AO is recreated for any format change.
This is sort of annoying if you change playback speed. The
insertion/removal of af_scaletempo can change the sample format. For
example, the acompressor filter will convert output to double, so
toggling scaletempo will force the format back to float. This recreates
the AO under the --gapless-audio=weak default. This likely affects a lot
of other filters too.
Work this around by allowing sample format changes, and keeping the
current AO format in these cases. This is probably not a big problem.
Most audio APIs force the output format to float anyway.
This means you actually have to worry about what the default gapless
mode does to your audio. If you start with a file that uses 8 bit per
sample, and then continue playing a 24 bit FLAC, it will be converted
down to 8 bit per sample. (Assuming they are played in a way that uses
the gapless logic.)
The playback start logic explicitly waits until the first frame has been
displayed. Usually this will introduce a wait of 1 vsync. For normal
playback this doesn't matter, but with respect to low latency needs,
this only leads to additional data getting queued up in the demuxer or
network buffers.
Another thing is that the timing logic decodes 1 frame ahead (= 1 frame
extra latency) to determine the exact duration of a frame.
To be fair, there doesn't really seem to be a hard reason why this is
needed. With the current code, enabling the option does lead to A/V
desync sometimes (if the demuxer FPS is too inaccurate), and also frame
drops at playback start in some situations. But this all seems to be
avoidable, if the timing logic were to be rewritten completely, which
should probably happen in the future. Thus the new option comes with the
warning that it can be removed any time. This is also why the option has
"hack" in the name.
The purpose of the new API is to make it usable with APIs other than
OpenGL, especially D3D11 and vulkan. In theory it's now possible to
support other vo_gpu backends, as well as backends that don't use the
vo_gpu code at all.
This also aims to get rid of the dumb mpv_get_sub_api() function. The
life cycle of the new mpv_render_context is a bit different from
mpv_opengl_cb_context, and you explicitly create/destroy the new
context, instead of calling init/uninit on an object returned by
mpv_get_sub_api().
In order to make the render API generic, it's annoyingly EGL style, and
requires you to pass in API-specific objects to generic functions. This
is to avoid explicit objects like the internal ra API has, because that
sounds more complicated and annoying for an API that's supposed to never
change.
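A minimal sketch of the new life cycle for the OpenGL case (the host
program's GL loader callback and mpv_handle are assumed to exist):

    #include <mpv/client.h>
    #include <mpv/render_gl.h>

    // Host-provided OpenGL symbol loader (assumption).
    static void *get_proc_address(void *ctx, const char *name);

    static mpv_render_context *create_renderer(mpv_handle *mpv)
    {
        mpv_opengl_init_params gl_init = {
            .get_proc_address = get_proc_address,
        };
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_init},
            {0}
        };
        mpv_render_context *rctx;
        // Explicit create/destroy, instead of calling init/uninit on an
        // object returned by mpv_get_sub_api():
        if (mpv_render_context_create(&rctx, mpv, params) < 0)
            return NULL;
        return rctx;    // later: mpv_render_context_free(rctx);
    }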
The opengl_cb API will continue to exist for a bit longer, but
internally there are already a few tradeoffs, like reduced
thread-safety.
Mostly untested. Seems to work fine with mpc-qt.
the title bar is now within the window bounds instead of outside. same
as QuickTime Player. it supports several standard styles, two dark and
two light ones. additionally we have properly rounded corners now and
the borderless window also has the proper window shadow.
Also make the earliest supported macOS version 10.10.
Fixes #4789, #3944
This solves a number of problems simultaneously:
1. When outputting HLG, this allows tuning the OOTF based on the display
characteristics.
2. When outputting PQ or other HDR curves, this allows soft-limiting the
output brightness using the tone mapping algorithm.
3. When outputting SDR, this allows HDR-in-SDR style output, by
controlling the output brightness directly.
Closes #5521
Before this, we made deinterlacing dependent on the video codec metadata
(AVFrame.interlaced_frame for libavcodec). So even if --deinterlace=yes
was set, we skipped deinterlacing if the flag wasn't set. This is very
unreliable and there are many streams with flags incorrectly set.
The potential problem is that this might upset people who always enabled
deinterlacing and hoped it worked. But it's likely these people were
screwed by this setting anyway. The new behavior is less tricky and
easier to understand, and thus preferable. Maybe one day we could
introduce a --deinterlace=auto, which does the right thing, but of
course this would be hard to implement (especially with hwdec).
Fixes #5219.
this is meant to replace the old and not properly working vo_gpu/opengl
cocoa backend in the future. the problems are various shortcomings of
Apple's opengl implementation and buggy behaviour in certain
circumstances that couldn't be properly worked around. there are also
certain regressions on newer macOS versions from 10.11 onwards.
- awful opengl performance with a non-layer-backed context
- huge amount of dropped frames with an early context flush
- flickering of system elements like the dock or volume indicator
- double buffering not properly working with a non-layer-backed context
- bad performance in fullscreen because of system optimisations
all the problems were caused by using a normal opengl context, that
seems somewhat abandoned by apple, and are fixed by using a layer backed
opengl context instead. problems that couldn't be fixed could be
properly worked around.
this has all features our old backend has sans the wid embedding,
the possibility to disable the automatic GPU switching and taking
screenshots of the window content. the first was deemed unnecessary by
me for now, since i just use the libmpv API that others can use anyway.
second is technically not possible atm because we have to pre-allocate
our opengl context at a time the config isn't read yet, so we can't get
the needed property. third one is a bit tricky because of deadlocking
and it needed to be in sync, hopefully i can work around that in the
future.
this also has at least one additional feature or eye-candy. a properly
working fullscreen animation with the native fs. also since this is a
direct port of the parts of the old backend that could be used, though
with adaptations and improvements, this looks a lot cleaner and easier to
understand.
some credit goes to @pigoz for the initial swift build support which
i could improve upon.
Fixes: #5478, #5393, #5152, #5151, #4615, #4476, #3978, #3746, #3739,
#2392, #2217
early flushing only caused problems on macOS, which include:
- performance problems and a huge amount of dropped frames
- problems with playing back video files with fps close to the display
refresh rate
- rendering at twice the rate of the video fps
- not properly detected display refresh rate
we always deactivate any early flush for macOS to fix these problems.
Disable by default.
This feature was added in 7eb342757, which allowed stream selection
at runtime. The problem with this atm is that FFmpeg will try to demux
the first packet of every track, leading to a noticeable delay opening
the URL.
This option can be changed to be enabled by default, or removed, when
HLS/DASH demuxers are improved upstream.
Using the GL renderer for color conversion will make sure screenshots
will use the same conversion as normal video rendering. It can do this
for all types of screenshots.
The logic for when to write 16 bit PNGs changes. To approximate the old
behavior, we decide by looking at whether the source video format has
more than 8 bits per component. We apply this logic even for window
screenshots. Also, 16 bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB064. RGB48 is 6 bytes per pixel and usually not supported by GPUs for
rendering, so we have to use RGBA64, which forces an alpha channel.
Will break for users who use --target-trc and similar options.
I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.
This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.
Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
The current peak detection algorithm was very bugged (which contributed
to the excessive cross-frame flicker without long normalization) and
also didn't take into account the frame average brightness level.
The new algorithm both takes into account frame average brightness (in
addition to peak brightness), and also computes the values in a more
stable/correct way. (The old path was basically undefined behavior)
In addition to improving the algorithm, we also switch to hable tone
mapping by default, and try to enable peak computation automatically
whenever possible (compute shaders + SSBOs supported). We also make the
desaturation milder, after extensive testing during libplacebo
development.
I also had to compensate a bit for the representational differences
between mpv and libplacebo (libplacebo treats 1.0 as the reference peak,
but mpv treats it as the nominal peak), but it shouldn't have caused any
problems.
This is still not quite the same as libplacebo, since libplacebo also
allows tagging the desired scene average brightness on the output, and
it also supports reading the scene average brightness from static
metadata (MaxFALL) where available. But those changes are a bit more
involved. It's possible we could also read this from metadata in the
future, but we have problems communicating with AVFrames as it is and I
don't want to touch the mpv colorimetry structs for the time being.
Similar to the previous commit, and for the same reasons. Unlike with
af_scaletempo, resampling does not have a natural frame size, so we set
an arbitrary size limit on output frames. We add a new option to control
this size, although I'm not sure whether anyone will use it, so mark it
for testing only.
Note that we go through some effort to avoid buffering data in
libswresample itself. One reason is that we might have to reinitialize
the resampler completely when changing speed, which drops the buffered
data. Another is that I'm not sure whether the resampler will do the
right thing when applying dynamic speed changes.
FFmpeg only supports http proxies, and ignores them if
the resulting URL is https. Also, no SOCKS.
Use it like `--ytdl-raw-options=proxy=[http://127.0.0.1:3128]` so
it doesn't confuse mpv because of the colons.
You need to pass it as an option because youtube-dl doesn't give
us the proxy.
Or just set `http_proxy` environment variable as recommended before.
Added example using -append, which doesn't need escaping.
Restores behaviour prior to aef2ed5dc1.
That change was apparently unpopular. However, given the amount of
complaining over how hard it is to change the defaults by rebinding every
key, I think the extra option introduced by this commit is justified.
Technically not all behaviour is restored, because now --no-osd-bar will
not instead display the msg text on seek. I think that feature was a
little weird and is now easy enough to remedy with the --osd-on-seek
option.
This reverts commit 9812e276aa.
This was apparently unpopular. I still think the pause OSD should be the
same as seek even if it's not visible by default, but it seems that
whether to display a given property change is currently conflated with
what to display.
The reverted behaviour can be restored by adding something like the
following to input.conf:
SPACE cycle pause; show_progress
And use it for 2 demuxer options. It could be used for more options
later. (Though the --cache options can not use this, because they use KB
as base unit.)
This is part of trying to get rid of --af-defaults, and the af
resample filter.
It requires a complicated mechanism to set the defaults on the resample
filter for backwards compatibility.
With the recent changes to the script it does not incur a startup delay
by default due to starting youtube-dl and waiting for it. This was the
main reason for making libmpv have a different default.
Starting sub processes from a library can still be a bit fishy, but I
think it's ok. Still mention it in the libmpv header. There were already
other cases where libmpv would start its own processes, such as the X11
backend calling xdg-screensaver. (The reason why this is fishy is
because UNIX process management sucks: SIGCHLD and the wait() syscall
make sub processes non-transparent and could potentially introduce
conflicts with code trying to use them.)
Previously, toggling pause would generate no osd response, and changing
that wasn't even configurable. This was surprising to users who
generally expect to see *where* pause / unpause is taking place (#3028).
The previous default was osd-bar (unless the user specified
--no-osd-bar, in which case it was osd-msg). Aside from requiring
some twisted logic to implement, this surprised users since osd-msg3
wasn't displayed when seeking with the keyboard (#3028), so the time
seeked to was never displayed.
Before this commit, some autoselection of tracks coming from files
loaded with --external-files was still done. This commit removes all of
it, and the only way to select a track is via the explicit stream
selection options like --vid/--sid/--aid.
I think this was always the original intention. The change could in
theory still unintentionally surprise some users, so add a changelog
entry.
This does not affect --audio-file/--sub-file, even if these contain
mismatching track types. E.g. if audio files passed to --audio-file
contain subtitles, these should still be selected. Past feature requests
indicate that users want this.
This enables DXVA2 hardware decoding with ra_d3d11. It should be useful
for Windows 7, where D3D11VA is not available. Images are transferred
from D3D9 to D3D11 using D3D9Ex surface sharing[1].
Following Microsoft's recommendations, it uses a queue of shared
surfaces, similar to Microsoft's ISurfaceQueue. This will hopefully
prevent surface sharing from impacting parallelism and allow multiple
D3D11 frames to be in-flight at once.
[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/ee913554.aspx
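A rough sketch of how one surface crosses the API boundary (COBJMACROS
style; the queue management is omitted, and the devices, dimensions and
format shown are assumptions):

    // Creating the D3D9 surface with a non-NULL pSharedHandle makes it
    // shareable between devices:
    HANDLE share_handle = NULL;
    IDirect3DSurface9 *surf9 = NULL;
    IDirect3DDevice9Ex_CreateRenderTarget(dev9, w, h, D3DFMT_X8R8G8B8,
        D3DMULTISAMPLE_NONE, 0, FALSE, &surf9, &share_handle);

    // The D3D11 side then opens the same memory as a texture:
    ID3D11Texture2D *tex11 = NULL;
    ID3D11Device_OpenSharedResource(dev11, share_handle,
        &IID_ID3D11Texture2D, (void **)&tex11);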
For reasons why you'd want this, see the manpage additions. Disabled by
default, because it would increase the latency of live streams by
default. (Or well, at least it would be another problem when trying to
get lower latency.)
This tried to be clever by waiting for a longer time each time the
buffer was underrunning, or shorter if it was getting better. I think
this was pretty weird behavior and makes no sense. If the user really
wants the stream to buffer longer, he/she/it can just pause the player
(the network caches will continue to be filled until they're full).
Every time I actually noticed this code triggering in my own use, I
didn't find it helpful. Apart from that it was pretty hard to test.
Some waiting is needed to avoid that the player just plays the available
data as fast as possible (to compensate for late frames and underrunning
audio). Just use a fixed wait time, which can now be controlled by the
new --cache-pause-wait option.
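The resulting logic is roughly this simple (names assumed):

    // Sketch: on underrun, pause and wait a fixed amount of time,
    // instead of adapting the wait to the underrun history.
    static void handle_underrun(struct ctx *p)
    {
        if (p->cache_underran) {
            pause_for_buffering(p);
            p->buffering_until = mp_time_sec() + p->opts->cache_pause_wait;
        }
        if (p->buffering && mp_time_sec() >= p->buffering_until)
            resume_playback(p);
    }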
Uses the EGL width/height by default when the user fails to set
the android-surface-width/android-surface-height options.
This means the vo-resize command is optional, and does not need to
be implemented on android devices which do not support rotation.
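The fallback amounts to something like this (option names as above; the
EGL display and surface are assumed to be in scope):

    EGLint w = opts->android_surface_width;
    EGLint h = opts->android_surface_height;
    if (w <= 0 || h <= 0) {
        // Not set by the user: query the EGL surface dimensions instead.
        eglQuerySurface(display, surface, EGL_WIDTH, &w);
        eglQuerySurface(display, surface, EGL_HEIGHT, &h);
    }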
Signed-off-by: Aman Gupta <aman@tmm1.net>
Technically, the user could just use --vd-lavc-o with the same result.
But I find it better to make this an explicit option, so we can document
the ups and downs, and also avoid setting it for non-h264.
Async compute in particular seems to cause problems on some drivers, and
even when supported, the benefits are not that massive from the tests I
have seen, so it's probably safe to keep off by default.
Async transfer on the other hand seems to work better and offers a more
substantial improvement, so it's kept on.
Instead of using a single primary queue, we generate multiple
vk_cmdpools and pick the right one dynamically based on the intent.
This has a number of immediate benefits:
1. We can use async texture uploads
2. We can use the DMA engine for buffer updates
3. We can benefit from async compute on AMD GPUs
Unfortunately, the major downside is that due to the lack of QF
ownership tracking, we need to use CONCURRENT sharing for all resources
(buffers *and* images!). In theory, we could try figuring out a way to
get rid of the concurrent sharing for buffers (which is only needed for
compute shader UBOs), but even so, the concurrent sharing mode doesn't
really seem to have a significant impact over here (nvidia). It's
possible that other platforms may disagree.
Our deadlock-avoidance strategy is stupidly simple: Just flush the
commands every time we need to switch queues, and make sure all
submission and callbacks happen in FIFO order. This required lifting the
cmds_pending and cmds_queued out from vk_cmdpool to mpvk_ctx, and some
functions died/got moved as a result, but that's a relatively minor
change.
On my hardware this is a fairly significant performance boost, mainly
due to async transfers. (Nvidia doesn't expose separate compute queues
anyway). On AMD, this should be a performance boost as well due to async
compute.
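Picking the right queue family by intent can be sketched like this (a
plausible heuristic, not the literal mpv code):

    // Return the most specialized queue family that supports the
    // requested capabilities, e.g. a pure transfer queue for uploads.
    static int find_qf(VkQueueFamilyProperties *qfs, int num,
                       VkQueueFlags flags)
    {
        int idx = -1;
        for (int i = 0; i < num; i++) {
            if ((qfs[i].queueFlags & flags) != flags)
                continue;
            // Fewer capability bits = more specialized hardware queue.
            if (idx < 0 || qfs[i].queueFlags < qfs[idx].queueFlags)
                idx = i;
        }
        return idx;
    }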
This uses the new vk_signal mechanism to order all access to textures.
This has several advantages:
1. It allows real synchronization of image access across multiple frames
when using multiple queues for parallelism.
2. It allows using events instead of pipeline barriers, which is a
finer-grained synchronization primitive that allows for more
efficient layout transitions over longer durations.
This commit also restructures some of the implicit transition code for
renderpasses to be more flexible and correct. (Note: this technically
drops the ability to transition the image out of undefined layout when
not blending, but that was a bug anyway and needs to be done properly)
vo_gpu: vulkan: remove no-longer-true optimization
The change to the output_tex format makes this no longer true, and it
actually seems to hurt performance now as well. So just don't do it
anymore. I also realized it hurts performance when drawing an OSD, so
it's probably not a good idea anyway.
Reduce it from 75MB in both directions (forward/backwards) to 10MB each.
The stream cache is kind of becoming useless in favor of the demuxer
cache. Using both doesn't make much sense, because they will contain
duplicated data for no reason.
Still leave it at 10MB, which may help with mp4 a bit. libavformat's mp4
demuxer tends to seek too much, so we try to avoid triggering network
level seeks by having some caching in the stream layer.
Some old crap which nobody needs and which probably nobody uses.
This relies on a GCC extension: using "## __VA_ARGS__" to remove the
comma from the argument list if the va args are empty. It's supported
by clang, and there's some chance newer standards will introduce a
proper way to do this. (Even if it breaks somewhere, it will be a
problem only for 1 release, since I want to drop the deprecated
properties immediately.)
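The canonical illustration of the extension (not the actual mpv macro):

    #include <stdio.h>

    // "## __VA_ARGS__" swallows the trailing comma when the variadic
    // part is empty, so both call forms below compile.
    #define LOG(fmt, ...) fprintf(stderr, fmt "\n", ## __VA_ARGS__)

    int main(void)
    {
        LOG("done");          // no variadic arguments
        LOG("%d frames", 42); // with arguments
        return 0;
    }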
This was off for mpv CLI, but on for libmpv. The motivation behind this
was that it would be confusing for applications if libmpv continued
playback in a severely "degraded" way (without either audio or video),
and that it would be better to fail early.
In reality the behavior was just a confusing difference to mpv CLI, and
has confused actual users as well. Get rid of it.
Not bothering with a version bump, since this is so minor, and it's easy
to ensure compatibility in affected applications by just setting the
option explicitly.
(Also adding the missing next-release-marker in client-api-changes.rst.)
Annoying exception that makes no sense to keep. Normally, users or
client applications will either use --hwdec=auto, or not set the option
at all, which both leads to the expected result.
This was a bit confused, and I bet nobody understood whether to use
--sub-file or --sub-files, and what the difference is. Explicitly
mention that both variants exist, and how they are related.
Previously when using a libmpv instance to play multiple videos,
once --start was set there was no clear way to unset it. You could
use --start=0, but 0 does not always mean the beginning of the file
(especially when using --rebase-start-time=no). Looking up the start
timestamp and passing that in also does not always work, particularly
when the first timestamp is negative (since negative values to --start
have a special meaning).
This commit adds a new "none" value which maps to the internal
REL_TIME_NONE, matching the default value of the play_start option.
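From a libmpv client, resetting the start position before loading the
next file becomes a one-liner (the mpv_handle "ctx" is assumed):

    // Clear any previously set start position:
    mpv_set_option_string(ctx, "start", "none");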
Most options that change the playback endpoint coexist and playback
stops when it reaches any of them. (e.g. --ab-loop-b, --end, or
--chapter). This patch extends that behavior to --length so it isn't
automatically trumped by --end if both are present. These two will
interact now as the other options do.
This change is also documented in DOCS/man/options.rst.
The libavcodec mediacodec support does not conform to the new hwaccel
APIs yet. It has been agreed upon that this glue code can be deleted
for now, and support for it will be restored at a later point.
Readding would require that it supports the AVCodecContext.hw_device_ctx
API. The hw_device_ctx would then contain the surface ID.
vo_mediacodec_embed would actually perform the task of creating
vo.hwdec_devs and adding a mp_hwdec_ctx, whose av_device_ref is an
AVHWDeviceContext containing the android surface.
Make the VO<->decoder interface capable of supporting multiple hwdec
APIs at once. The main gain is that this simplifies autoprobing a lot.
Before this change, it could happen that the VO loaded the "wrong" hwdec
API, and the decoder was stuck with the choice (breaking hw decoding).
With the change applied, the VO simply loads all available APIs, so
autoprobing trickery is left entirely to the decoder.
In the past, we were quite careful about not accidentally loading the
wrong interop drivers. This was in part to make sure autoprobing works,
but also because libva had this obnoxious bug of dumping garbage to
stderr when using the API. libva was fixed, so this is not a problem
anymore.
The --opengl-hwdec-interop option is changed in various ways (again...),
and renamed to --gpu-hwdec-interop. It does not have much use anymore,
other than debugging. It's notable that the order in the hwdec interop
array ra_hwdec_drivers[] still matters if multiple drivers support the
same image formats, so the option can explicitly force one, if that
should ever be necessary, or more likely, for debugging. One example are
the ra_hwdec_d3d11egl and ra_hwdec_d3d11eglrgb drivers, which both
support d3d11 input.
vo_gpu now always loads the interop lazily by default, but when it does,
it loads them all. vo_opengl_cb now always loads them when the GL
context handle is initialized. I don't expect that this causes any
problems.
It's now possible to do things like changing between vdpau and nvdec
decoding at runtime.
This is also preparation for cleaning up vd_lavc.c hwdec autoprobing.
It's another reason why hwdec_devices_request_all() does not take a
hwdec type anymore.
These couldn't be relicensed, and won't survive the LGPL transition. The
other existing filters are mostly LGPL (except libaf glue code).
This removes the deprecated pan option. I guess it could be restored by
inserting a libavfilter filter (if there's one), but for now let it be
gone.
This temporarily breaks volume control (and things related to it, like
replaygain).
The internal stereo3d filter was removed due to being GPL only, and due
to being a mess that somehow used libavfilter's filter. Without this
filter, it's hard to remove our internal stereo3d image attribute, so
even using libavfilter's stereo3d filter would not work too well (unless
someone fixes it and makes it able to use AVFrame metadata, which we
then could mirror in mp_image).
This was never well thought-through anyway, so just drop it. I think
some "downsampling" support would still make sense, maybe that can be
readded later.
The option for enabling it now has an "auto" choice, which is the
default, and which will enable it if the media is thought to be played
via the network, or if the stream cache is enabled (same logic as
--cache-secs).
Also bump the --cache-secs default from 10 to 120.
Some back buffer is required to make the immediate forward range
seekable. This is because the back buffer limit is strictly enforced.
Just set a rather high back buffer by default. It's not used if
--demuxer-seekable-cache is disabled, so this is without risk.
Until now, the demuxer cache was limited to a single range. Extend it
to multiple ranges. Should be useful for slow network streams.
This commit changes a lot in the internal demuxer cache logic, so
there's a lot of room for bugs and regressions. The logic without
demuxer cache is mostly untouched, but also involved with the code
changes. Or in other words, this commit probably fucks up shit.
There are two things which makes multiple cached ranges rather hard:
1. the need to resume the demuxer at the end of a cached range when
seeking to it
2. joining two adjacent ranges when the lower range "grows" into it (and
resuming the demuxer at the end of the new joined range)
"Resuming" the demuxer means that we perform a low level seek to the end
of a cached range, and properly append new packets to it, without adding
packets multiple times or creating holes due to missing packets.
Since audio and video never line up exactly, there is no clean "cut"
possible, at which you could resume the demuxer cleanly (for 1.) or
which you could use to detect that two ranges are perfectly adjacent
(for 2.). The way how the demuxer interleaves multiple streams is also
unpredictable. Typically you will have to expect that it randomly allows
one of the streams to be ahead by a bit, and so on.
To deal with this, we have heuristics in place to detect when one packet
equals or is "behind" a packet that was demuxed earlier. We reuse the
refresh seek logic (used to "reread" packets into the demuxer cache when
enabling a track), which checks for certain packet invariants.
Currently, it observes whether either the raw packet position, or the
packet DTS is strictly monotonically increasing. If none of them are
true, we discard old ranges when creating a new one.
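A sketch of the invariant check (names assumed; the real heuristic is
more involved):

    // A new packet plausibly continues an old one if either the raw
    // stream position or the DTS is strictly increasing.
    static bool packet_is_newer(struct demux_packet *new,
                                struct demux_packet *old)
    {
        if (new->pos >= 0 && old->pos >= 0)
            return new->pos > old->pos;
        if (new->dts != MP_NOPTS_VALUE && old->dts != MP_NOPTS_VALUE)
            return new->dts > old->dts;
        return false;   // no usable invariant for this stream
    }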
This heavily depends on the file format and the demuxer behavior. For
example, not all file formats have DTS, and the packet position can be
unset due to libavformat not always setting it (e.g. when parsers are
used).
At the same time, we must deal with all the complicated state used to
track prefetching and seek ranges. In some complicated corner cases, we
just give up and discard other seek ranges, even if the previously
mentioned packet invariants are fulfilled.
To handle joining, we're being particularly dumb, and require a small
overlap to be confident that two ranges join perfectly. (This could be
done incrementally with as little overlap as 1 packet, but corner cases
would eat us: each stream needs to be joined separately, and the cache
pruning logic could remove overlapping packets for other streams again.)
Another restriction is that switching the cached range will always
trigger an asynchronous low level seek to resume demuxing at the new
range. Some users might find this annoying.
Dealing with interleaved subtitles is not fully handled yet. It will
clamp the seekable range to where subtitle packets are.
Like the manual says, this is technically undefined behaviour. See:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx
In particular, MSDN says texture arrays created with the BIND_DECODER
flag cannot be used with CreateShaderResourceView, which means they
can't be sampled through SRVs like normal Direct3D textures. However,
some programs (Google Chrome included) do this anyway for performance
and power-usage reasons, and it appears to work with most drivers.
Older AMD drivers had a "bug" with zero-copy decoding, but this appears
to have been fixed. See #3255, #3464 and http://crbug.com/623029.
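In code, the (technically undefined) zero-copy path boils down to
creating an SRV directly on the decoder's texture array, something like
this (COBJMACROS style, NV12 luma plane shown; the device, texture and
slice index are assumptions):

    D3D11_SHADER_RESOURCE_VIEW_DESC srvdesc = {
        .Format = DXGI_FORMAT_R8_UNORM, // luma plane of an NV12 surface
        .ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2DARRAY,
        .Texture2DArray = {
            .MipLevels = 1,
            .FirstArraySlice = subindex, // which decode surface to sample
            .ArraySize = 1,
        },
    };
    ID3D11ShaderResourceView *srv = NULL;
    // MSDN says this is invalid for BIND_DECODER textures, but most
    // drivers accept it:
    HRESULT hr = ID3D11Device_CreateShaderResourceView(dev,
        (ID3D11Resource *)decoder_tex, &srvdesc, &srv);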
This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL
generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.
What works:
- All of mpv's internal shaders should work, including compute shaders.
- Some external shaders have been tested and work, including RAVU and
adaptive-sharpen.
- Non-dumb mode works, even on very old hardware. Most features work at
feature level 9_3 and all features work at feature level 10_0. Some
features also work at feature level 9_1 and 9_2, but without high-bit-
depth FBOs, it's not very useful. (Hardware this old is probably not
fast enough for advanced features anyway.)
Note: This is more compatible than ANGLE, which requires 9_3 to work
at all (GLES 2.0), and 10_1 for non-dumb-mode (GLES 3.0).
- Hardware decoding with D3D11VA, including decoding of 10-bit formats
without truncation to 8-bit.
What doesn't work / can be improved:
- PBO upload and direct rendering do not work yet. Direct rendering
requires persistent-mapped PBOs because the decoder needs to be able
to read data from images that have already been decoded and uploaded.
Unfortunately, it seems like persistent-mapped PBOs are fundamentally
incompatible with D3D11, which requires all resources to use driver-
managed memory and requires memory to be unmapped (and hence pointers
to be invalidated) when a resource is used in a draw or copy
operation.
However it might be possible to use D3D11's limited multithreading
capabilities to emulate some features of PBOs, like asynchronous
texture uploading.
- The blit() and clear() operations don't have equivalents in the D3D11
API that handle all cases, so in most cases, they have to be emulated
with a shader. This is currently done inside ra_d3d11, but ideally it
would be done in generic code, so it can take advantage of mpv's
shader generation utilities.
- SPIRV-Cross is used through a NIH C-compatible wrapper library, since
it does not expose a C interface itself.
The library is available here: https://github.com/rossy/crossc
- The D3D11 context could be made to support more modern DXGI features
in future. For example, it should be possible to add support for
high-bit-depth and HDR output with DXGI 1.5/1.6.
We can't hope to auto-detect them at load time, as that would be too
much of a pain - even FFmpeg requires fetching and parsing of video
packets, and exposes the information only via a deprecated API.
But there still needs to be a way to select them by default. This is
also needed to get the first CC packet at all (without seeking back).
This commit also attempts to clean up locking a bit, which is a PITA,
but it's better to be careful & clean.
See manpage additions.
(In ffmpeg-mpv and Libav, this is still called "cuvid". Libav won't work
yet, because it has no frame params support, but this could get fixed
soon.)
Comparing mpv's implementation against the ACES ODR reference samples
and algorithms, it seems like they're happy desaturating highlights
_way_ more aggressively than mpv currently does. And indeed, looking at
some example clips like The Redwoods (which is actually well-mastered),
the current desaturation produces unnatural-looking brightness fringes
where the sky meets the treeline.
Adjust the algorithm to make it apply to a much larger, more gradual
brightness region; and change the interpretation of the parameter. As a
bonus, the new parameter is actually sanely scaled (higher values = more
desaturation). Also, make it scale based on the signal level instead of
the luminance, to avoid under-desaturating bright blues.
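The rough shape of the adjusted weighting, written as plain C math (the
constants and function shape are illustrative, not the literal shader):

    #include <math.h>

    // Illustrative only: desaturation weight for signal level "sig"
    // (linear light, 1.0 = peak) and strength parameter "desat".
    static float desat_coeff(float sig, float desat)
    {
        // Ramps up gradually above SDR reference white (0.18) instead
        // of kicking in abruptly near the peak; higher desat values
        // give more desaturation. The result blends toward grey.
        float c = fmaxf(sig - 0.18f, 1e-6f) / fmaxf(sig, 1e-6f);
        return powf(c, 10.0f / desat);
    }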
This improves upon the previous commit, and partially rewrites it (and
other code). It does:
- disable the seeking within cache by default, and add an option to
control it
- mess with the buffer estimation reporting code, which will most likely
lead to funny regressions even if the new features are not enabled
- add a back buffer to the packet cache
- enhance the seek code so you can seek into the back buffer
- unnecessarily change a bunch of other stuff for no reason
- fuck up everything and vomit ponies and rainbows
This should actually be pretty usable. One thing we should add are some
properties to report the proper buffer state. Then the OSC could show a
nice buffer range. Also configuration of the buffers could be made
simpler. Once this has been tested enough, it can be enabled by default,
and might replace the stream cache's byte ringbuffer.
In addition it may or may not be possible to keep other buffer ranges
when seeking outside of the current range, but that would be much more
complex.
This should be functionally identical to rgba16f, since the formats only
differ in their representation on the CPU, but it could be useful for RA
backends that don't expose rgba16f, like Vulkan. It's definitely useful
for the WIP D3D11 backend.
It seems this will be useful for Rockchip DRM hwcontext integration.
DRM hwcontexts have additional internal structure which can be different
depending on the decoder, and which is not part of the generic hwcontext
API. Rockchip has 1 layer, which EGL interop happens to translate to a
RGB texture, while VAAPI (mapped as DRM hwcontext) will use multiple
layers. Both will use sw_format=nv12, and thus are indistinguishable on
the mp_image_params level. But this is needed to initialize the EGL
mapping and the vo_gpu video renderer correctly.
We hope that the layer count is enough to tell whether EGL will
translate the data to a RGB texture (vs. 2 textures resembling raw nv12
data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.
This commit adds the flag, infrastructure to set it, and an "example"
for D3D11.
The D3D11 addition is quite useless at this point. But later we want to
get rid of d3d11_update_image_attribs() anyway, while we still need a
way to force d3d11vpp filter insertion, so maybe it has some
justification (who knows). In any case it makes testing this easier.
Obviously it also adds some basic support for triggering the opaque
format for decoding, which will use a driver-specific format, but which
is not supported in shaders. The opaque flag is not used to determine
whether d3d11vpp needs to be inserted, though.
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.
We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
This was confusing at best. Change it to output the actual choices.
(Seems like in the end it's always me who has to clean up other people's
bullshit.)
Context names were not unique - but they should be, so fix it. The whole
point of the original --opengl-backend option was to side-step the
tricky auto-detection, so you know exactly what you get. The goal of
this commit is to make --gpu-context work the same way. Fix the
non-unique names by appending "vk" to the names.
Keep in mind that this was not suitable for selecting the "UI" backend
anyway, since "x11" would force GLX, whereas people on not-NVIDIA
actually want "x11egl". Users trying to use --gpu-context=x11 to force
the X11 backend would always end up with GLX, which would at least break
VAAPI hardware decoding for them. Basically the idea that this option
could select the "UI" type is completely broken - it selects an
implementation, which implies a UI. Selecting the UI type would
require a separate mechanism. (Although in theory this separate
mechanism could be part of the --gpu-context option - in any case,
someone would have to implement it.)
To achieve help output that can actually be understood, just duplicate
the code. Most of that code is duplicated anyway, and trying to share
just the list code at the cost of making the output unreadable doesn't
make much sense. If we wanted to save code/effort, we could
just remove the help output altogether.
--gpu-api has non-unique entries, and it would be nice to group them
(e.g. list all OpenGL capable contexts with "opengl"), but C makes this
simple idea too much of a pain, so don't do it.
Also remove a stray tab from the android entry on the manpage.
Signed-off-by: wm4 <wm4@nowhere>
Rename --stats to --load-stats-overlay and add an entry to options.rst
over the original commit.
Signed-off-by: wm4 <wm4@nowhere>
At the moment, rendering on Android requires ``--vo=opengl-cb`` and
a lot of java<->c++ bridging code to receive and react to
the render callback in java. Performance also suffers with opengl-cb,
due to the overhead of context switching in JNI.
With this patch, Android can render using ``--vo=gpu --gpu-context=android``
(after setting ``--wid`` to point to an android.view.Surface on-screen).
But --msg-level can only raise the log level used for --log-file,
because the original idea with --log-file was that it'd log verbose
messages to disk even if terminal logging is lower than -v or fully
disabled.
Changed the reference from --gpu-gamma to --gamma-factor,
and changed the reference from --post-shader to --glsl-shaders,
in order to reflect actual changes to the option names.
Seems to be fixed upstream in the nvidia driver, so it's probably a good
idea to 1. force the layout and 2. remove the warning, as it now
actually works. Users with older drivers would run into errors, but they
can still use shaderc as a replacement. (And it's not like the old
status quo was any better)
This has several advantages:
1. no more redundant texcoords when we don't need them
2. no more arbitrary limit on how many textures we can bind
3. (that extends to user shaders as well)
4. no more arbitrary limits on tscale radius
To realize this, the VAO was moved from a hacky stateful approach
(gl_sc_set_vertex_attribs) - which always bothered me since it was
required for compute shaders as well even though they ignored it - to be
a proper parameter of gl_sc_dispatch_draw, and internally plumbed into
gl_sc_generate, which will make a (properly mangled) deep copy into
params.vertex_attribs.
Apparently this filter is broken in a weird way, which even makes some
libavfilter functions segfault in certain conditions. Don't waste time
with it and just remove the examples.
Also adjust the "life" example description (certainly this filter is
100% worthless, but the example does demonstrate how to use source
filters without any available input).
auto-copy selects more modes than the ones listed, and the list would
always be outdated anyway.
The GLX vaapi backend is never selected anymore, because it sucks.
In addition to the built-in nvidia compiler, we now also support a
backend based on libshaderc. shaderc is sort of like glslang except it
has a C API and is available as a dynamic library.
The generated SPIR-V is now cached alongside the VkPipeline in the
cached_program. We use a special cache header to ensure validity of this
cache before passing it blindly to the vulkan implementation, since
passing invalid SPIR-V can cause all sorts of nasty things. It's also
designed to self-invalidate if the compiler gets better, by offering a
catch-all `int compiler_version` that implementations can use as a cache
invalidation marker.
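A sketch of what such a cache header can look like (field names
assumed):

    // Prepended to the cached SPIR-V so we never feed a stale or
    // foreign blob to the vulkan implementation.
    struct spirv_cache_header {
        char magic[4];        // identifies the blob as a SPIR-V cache
        int compiler_id;      // which compiler produced the SPIR-V
        int compiler_version; // bumped when the compiler gets better
        size_t spirv_len;     // byte length of the SPIR-V that follows
    };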
This time based on ra/vo_gpu. 2017 is the year of the vulkan desktop!
Current problems / limitations / improvement opportunities:
1. The swapchain/flipping code violates the vulkan spec, by assuming
that the presentation queue will be bounded (in cases where rendering
is significantly faster than vsync). But apparently, there's simply
no better way to do this right now, to the point where even the
stupid cube.c examples from LunarG etc. do it wrong.
(cf. https://github.com/KhronosGroup/Vulkan-Docs/issues/370)
2. The memory allocator could be improved. (This is a universal
constant)
3. Could explore using push descriptors instead of descriptor sets,
especially since we expect to switch descriptors semi-often for some
passes (like interpolation). Probably won't make a difference, but
the synchronization overhead might be a factor. Who knows.
4. Parallelism across frames / async transfer is not well-defined, we
either need to use a better semaphore / command buffer strategy or a
resource pooling layer to safely handle cross-frame parallelism.
(That said, I gave resource pooling a try and was not happy with the
result at all - so I'm still exploring the semaphore strategy)
5. We aggressively use pipeline barriers where events would offer a much
more fine-grained synchronization mechanism. As a result of this, we
might be suffering from GPU bubbles due to too-short dependencies on
objects. (That said, I'm also exploring the use of semaphores as an
ordering tactic which would allow cross-frame time slicing in theory)
Some minor changes to the vo_gpu and infrastructure, but nothing
consequential.
NOTE: For safety, all use of asynchronous commands / multiple command
pools is currently disabled completely. There are some left-over relics
of this in the code (e.g. the distinction between dev_poll and
pool_poll), but that is kept in place mostly because this will be
re-extended in the future (vulkan rev 2).
The queue count is also currently capped to 1, because the lack of
cross-frame semaphores means we need the implicit synchronization from
the same-queue semantics to guarantee a correct result.
This never really made sense since the BT.1886 changes. It should get
*brighter* for bright rooms, not darker for dark rooms. Picked some new
values that seemed reasonable-ish.
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved to
ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
This mechanism uses system() and shouldn't even exist. x11_common.c has
its own solution for the original problem (disabling Linux DE
screensavers without MPlayer/mpv having to link a dbus lib). If that is
not sufficient, you can create a simple Lua script.
Incidentally fixes #4888.
This clearly highlights all out-of-gamut/clipped pixels. (Either too
bright or too saturated)
Has some (documented) caveats. Also make TONE_MAPPING_CLIP stop actually
clamping the value range (it's unnecessary and breaks this feature).
This removes all GPL only code from it, and that's the whole purpose.
Also happens to be much simpler.
The "deinterlace" option still sort of exists, but only as runtime
changeable option. The main change in behavior is that the property will
not report back the actual deint state. Or in other words, if inserting
or initializing the filter fails, the deinterlace property will still
return "yes". This is in line with most recent behavior changes to
properties and options.
This was attempted before in fc9695e63b, but it was reverted in
1b7ce759b1 because it caused conflicts with other software watching
the same keys (See #2041.) It seems like some PCs ship with OEM software
that watches the volume keys without consuming key events and this
causes them to be handled twice, once by mpv and once by the other
software.
In order to prevent conflicts like this, use the WM_APPCOMMAND message
to handle media keys. Returning TRUE from the WM_APPCOMMAND handler
should indicate to the operating system that we consumed the key event
and it should not be propagated to the shell. Also, we now only listen
for keys that are directly related to multimedia playback (eg. the
APPCOMMAND_MEDIA_* keys.) Keys like APPCOMMAND_VOLUME_* are ignored, so
they can be handled by the shell, or by other mixer software.
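A sketch of the handler (mpv-side names assumed; the real code handles
more commands):

    static LRESULT handle_appcommand(HWND hwnd, WPARAM wParam,
                                     LPARAM lParam)
    {
        switch (GET_APPCOMMAND_LPARAM(lParam)) {
        case APPCOMMAND_MEDIA_PLAY_PAUSE:
            // input_ctx: mpv input context (assumed to be in scope)
            mp_input_put_key(input_ctx, MP_KEY_PLAYPAUSE);
            return TRUE;  // consumed; the shell won't handle it again
        }
        // Everything else (e.g. APPCOMMAND_VOLUME_*) is left unhandled,
        // so it propagates to the shell or other mixer software.
        return DefWindowProcW(hwnd, WM_APPCOMMAND, wParam, lParam);
    }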
This currently only works when using lcms-based color management
(--icc-profile-*).
In principle, we could also support using lcms even when the user has
not specified an ICC profile, by generating the profile against a fixed
reference (--target-prim/--target-trc) instead. I still might do that
some day, simply because 3dlut provides a higher quality conversion than
our simple gamut mapping does for stuff like BT.2020, and also because
it's now needed to enable embedded ICC profiles. But that would be a
separate change, so preserve the status quo for now.
(Besides, my opinion is still that you should be using an ICC profile if
you care about colors being accurate _at all_)
This broke float textures, which were actually used by some shaders.
There were probably some other bugs as well.
Lots of code can be avoided by using ra_tex_params directly, so do that.
The main change is that COMPONENT/FORMAT are replaced by a single FORMAT
directive, which takes different parameters now. Due to the mess with
16/32 bit float textures, and because we want to support other APIs than
just GL in the future, it's not really clear how this should be handled,
and the nice component/type separation makes things actually harder. So
just jump the gun and use the ra_format.name names, which were
originally meant mostly for debugging. (This is probably something that
will be regretted later.)
Still only superficially tested, but seems to work.
Fixes #4708.
Since this code was already written for HDR, and is now per-channel
(because it works better for HDR as well), we can actually reuse this to
get very high quality gamut mapping without clipping. The only required
change is to move the tone mapping from before the gamut map to after
the gamut map. Additionally, we need to also account for changes in the
signal range as a result of applying the CMS when we compute ref_peak,
which is fortunately pretty easy because we only need to consider the
case of primaries mapping to themselves.
Since `HDR` no longer really makes sense as a label, rename it to
`--tone-mapping` in general. Also fits better with
`--tone-mapping-desat` etc.
Arguably we could also rename `--hdr-compute-peak`, but that option is
basically only useful for HDR content anyway because we don't need
information about the signal range for gamut mapping.
This (finally!) gives us reasonably high quality gamut mapping even in
the absence of an ICC profile / 3DLUT.
Parsing the texture data as raw strings makes the textures the most
portable and self-contained. In order to facilitate different types of
shaders, the parse_user_shader interaction has been changed to instead
have it loop through blocks and call the passed functions for each valid
block parsed. This is more modular and also cleaner, with better code
separation.
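The interaction can be sketched like this (names assumed, not the
literal interface):

    // The caller provides one hook per block type; the parser just
    // loops over blocks and dispatches.
    struct shader_hooks {
        bool (*shader_block)(void *priv, struct bstr block);
        bool (*texture_block)(void *priv, struct bstr block);
    };

    static void parse_user_shader(struct bstr body,
                                  const struct shader_hooks *cb,
                                  void *priv)
    {
        while (body.len) {
            struct bstr block = split_next_block(&body); // hypothetical
            bool ok = is_texture_block(block)            // hypothetical
                    ? cb->texture_block(priv, block)
                    : cb->shader_block(priv, block);
            if (!ok)
                break;  // invalid block: stop parsing
        }
    }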
Closes #4586.
Two changes, compounded into one since they affect the same logic:
1. Never use linearization for HDR downscaling
2. Always use linearization for interpolation
Instead of fixing p->use_linear at the beginning of pass_render_frame,
we flip it on "dynamically" as needed. I plan on killing this
p->use_linear flag (along with other per-pass metadata) and moving them
into their own struct for tracking the "current" state of the video,
that's a separate/upcoming refactor.
As a small bonus, reduce some code duplication in the interpolation
logic.
Fixes #4631