The semantics are a bit questionable. This is done for the OSC (next
commit), and a comment added to the manpage explicitly states this.
Meaning this is probably garbage and needs to be revisited when the OSC
changes and/or someone wants to use this margin feature for something
else.
Not sure about the subtitle thing. It's imaginable that someone uses
these options to create empty borders for subtitles on the bottom, so
subtitles should be located there. On the other hand, this gives a
rather unpolished user experience when using the (later added) OSC
feature to not overlap with the video. There's not much of a point if
the OSC still overlaps the video. However, I'm too lazy to think about
this, so it stays like it is.
Somewhat similar to the old --cache-file, except for the demuxer cache.
Instead of keeping packet data in memory, it's written to disk and read
back when needed.
The idea is to reduce main memory usage, while allowing fast seeking in
large cached network streams (especially live streams). Keeping the
packet metadata on disk would be rather hard (would use mmap or so, or
rewrite the entire demux.c packet queue handling), and since it's
relatively small, just keep it in memory.
Also for simplicity, the disk cache is append-only. If you're watching
really long livestreams, and need pruning, you're probably out of luck.
This still could be improved by trying to free unused blocks with
fallocate(), but since we're writing multiple streams in an interleaved
manner, this is slightly hard.
Some rather gross ugliness in packet.h: we want to store the file
position of the cached data somewhere, but on 32 bit architectures, we
don't have any usable 64 bit members for this, just the buf/len fields,
which add up to 64 bit - so the shitty union aliases this memory.
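Roughly, the aliasing looks something like this (a sketch only; the field
names are made up and don't match the actual struct):
struct demux_packet {
    union {
        struct {
            unsigned char *buf; // packet data when kept in memory
            size_t len;         // size of that data
        };
        // on 32 bit, buf+len together are 64 bit, so this overlays them
        int64_t disk_pos;       // file position in the on-disk cache
    };
};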
Error paths untested. Side data (the complicated part of trying to
serialize ffmpeg packets) untested.
Stream recording had to be adjusted. Some minor details change due to
this, but probably nothing important.
The change in attempt_range_joining() is because packets in cache
have no valid len field. It was a useful check (heuristically
finding broken cases), but not a necessary one.
Various other approaches were tried. It would be interesting to list
them and to mention the pros and cons, but I don't feel like it.
Has been deprecated for almost 3 years. Manpage didn't mention the
deprecation, but CLI and release notes did. It wouldn't be much effort
to keep this option working, but I just don't see the damn point.
--start/--end can specify chapters using special syntax, which is
equivalent.
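For example, playing from chapter 2 up to chapter 4 would presumably look
like this (using the "#N" chapter syntax from the manpage):
mpv --start='#2' --end='#4' file.mkv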
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally crash.
That would be a bug. It possibly _could_ start a fire by entering some
sort of endless loop, not a literal one, just something where it tries
to do work without making progress.)
(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)
How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.
The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a bug/mess
breaks correct slicing of segments, so EDL/ordered chapter playback is
broken in the backward direction).
Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).
The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.
Backward display is the weirdest (and potentially buggiest) part. To
avoid having to touch a LOT of timing code, we negate all
timestamps. The basic idea is that due to the negation, all
comparisons and subtractions of timestamps keep working, and you don't
need to touch every single one of them to "reverse" them.
E.g.:
bool before = pts_a < pts_b;
would need to be:
bool before = forward
? pts_a < pts_b
: pts_a > pts_b;
or:
bool before = pts_a * dir < pts_b * dir;
or, as it's implemented now, you just do this after decoding:
pts_a *= dir;
pts_b *= dir;
and then in the normal timing/renderer code:
bool before = pts_a < pts_b;
Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.
Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past the decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there are very likely
subtle problems of all sorts.
As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)
VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.
FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
Linux analog TV support (via tv://) was excessively complex, and
whenever I attempted to use it (cameras or loopback devices), it didn't
work well, or would have required some major work to update it. It's
very much stuck in the analog past (my favorite are the frequency tables
in frequencies.c for analog TV channels which don't exist anymore).
Cameras and such work fine with libavdevice (and better than with
tv://), for example:
mpv av://v4l2:/dev/video0
(adding --profile=low-latency --untimed even makes it mostly realtime)
Adding a new input layer that targets such "modern" uses would be
acceptable, if anyone is interested in it. The old TV code is just too
focused on actual analog TV.
DVB is rather obscure, but has an active maintainer, so don't remove it.
However, the demux/stream ctrl layer must go, so remove controls for
channel switching. Most of these could be reimplemented by using the
normal method for option runtime changes.
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
The --hwdec* options are a good fit for the vd_lavc local option
struct. This annoyingly requires manual prefixing of most of these
options with --vd-lavc (could be avoided by using more sub-struct
craziness, but let's not).
Until now, stopping playback aborted the demuxer and I/O layer violently
by signaling mp_cancel (bound to libavformat's AVIOInterruptCB
mechanism). Change it to try closing them gracefully.
The main purpose is to silence those libavformat errors that happen when
you request termination. Most of libavformat barely cares about the
termination mechanism (AVIOInterruptCB), and essentially it's like the
network connection is abruptly severed, or file I/O suddenly returns I/O
errors. There were issues with dumb TLS warnings, parsers complaining
about incomplete data, and some special protocols that require server
communication to gracefully disconnect.
We still want to abort it forcefully if it refuses to terminate on its
own, so a timeout is required. Users can set the timeout to 0, which
should give them the old behavior.
This also removes the old mechanism that treats certain commands (like
"quit") specially, and tries to terminate the demuxers even if the core
is currently frozen. This is for situations where the core synchronized
to the demuxer or stream layer while the network is unresponsive. This in
turn can only happen due to the "program" or "cache-size" properties in
the current code (see one of the previous commits). Also, the old
mechanism doesn't fit particularly well with the new one. We wouldn't
want to abort playback immediately on a "quit" command - the new code is
all about giving it a chance to end it gracefully. We'd need some sort
of watchdog thread or something equally complicated to handle this. So
just remove it.
The change in osd.c is to prevent it from clearing the status line while
waiting for termination. The normal status line code doesn't output
anything useful at this point, and the code path taken clears it, both
of which are annoying behavior changes, so just let it show the old
one.
On machines with multiple GPUs, /dev/dri/renderD128 isn't guaranteed
to point to a valid vaapi device. This just adds the option to specify
what path to use.
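Hypothetical usage, assuming the option is exposed as --vaapi-device (the
render node path depends on the machine):
mpv --hwdec=vaapi --vaapi-device=/dev/dri/renderD129 file.mkv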
The old fallback /dev/dri/card0 is gone, but that's not a loss, as it's
a legacy interface no longer accepted as valid by libva.
Fixes #4320
Removes the awkward notification through VO_EVENT_WIN_STATE.
Unfortunately, some awkwardness remains in mp_property_display_fps(),
because the property has conflicting semantics with the option.
The playback start logic explicitly waits until the first frame has been
displayed. Usually this will introduce a wait of 1 vsync. For normal
playback this doesn't matter, but with respect to low latency needs,
this only leads to additional data getting queued up in the demuxer or
network buffers.
Another thing is that the timing logic decodes 1 frame ahead (= 1 frame
extra latency) to determine the exact duration of a frame.
To be fair, there doesn't really seem to be a hard reason why this is
needed. With the current code, enabling the option does lead to A/V
desync sometimes (if the demuxer FPS is too inaccurate), and also frame
drops at playback start in some situations. But this all seems to be
avoidable, if the timing logic were to be rewritten completely, which
should probably happen in the future. Thus the new option comes with the
warning that it can be removed any time. This is also why the option has
"hack" in the name.
the title bar is now within the window bounds instead of outside. same
as QuickTime Player. it supports several standard styles, two dark and
two light ones. additionally we have properly rounded corners now and
the borderless window also has the proper window shadow.
Also make the earliest supported macOS version 10.10.
Fixes #4789, #3944
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.
This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.
vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that can not be inserted by the user manually.
f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c (which is
conceptually easy, but a big mess due to the data flow changes).
The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)
In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.
vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.
The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.
Exporting VO capabilities is still a big mess (mp_stream_info stuff).
The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.
f_async_queue is unused.
Restores behaviour prior to aef2ed5dc1.
That change was apparently unpopular. However, given the amount of
complaining over how hard it is to change the defaults by rebinding every
key, I think the extra option introduced by this commit is justified.
Technically not all behaviour is restored, because now --no-osd-bar will
not display the msg text on seek instead. I think that feature was a
little weird and is now easy enough to remedy with the --osd-on-seek
option.
This is part of trying to get rid of --af-defaults, and the af
resample filter.
It requires a complicated mechanism to set the defaults on the resample
filter for backwards compatibility.
For reasons why you'd want this, see the manpage additions. Disabled by
default, because it would increase latency of live streams. (Or well, at
least it would be another problem when trying to get lower latency.)
This tried to be clever by waiting for a longer time each time the
buffer was underrunning, or shorter if it was getting better. I think
this was pretty weird behavior and makes no sense. If the user really
wants the stream to buffer longer, he/she/it can just pause the player
(the network caches will continue to be filled until they're full).
Every time I actually noticed this code triggering in my own use, I
didn't find it helpful. Apart from that it was pretty hard to test.
Some waiting is needed to avoid that the player just plays the available
data as fast as possible (to compensate for late frames and underrunning
audio). Just use a fixed wait time, which can now be controlled by the
new --cache-pause-wait option.
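For example, to wait a fixed 2 seconds before resuming (assuming the
value is in seconds):
mpv --cache-pause-wait=2 some-stream-url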
Remove them from the big MPOpts struct and move them to their sub
structs. In the places where their fields are used, create a private
copy of the structs, instead of accessing the semi-deprecated global
option struct instance (mpv_global.opts) directly.
This actually makes accessing these options finally thread-safe. They
weren't, even though they should have been for years. (Including some potential
for undefined behavior when e.g. the OSD font was changed at runtime.)
This is mostly transparent. All options get moved around, but most users
of the options just need to access a different struct (changing sd.opts
to a different type changes a lot of uses, for example).
One thing which has to be considered and could cause potential
regressions is that the new option copies must be explicitly updated.
sub_update_opts() takes care of this for example.
Another thing is that writing to the option structs manually won't work,
because the changes won't be propagated to other copies. Apparently the
only affected case is the implementation of the sub-step command, which
tries to change sub_delay. Handle this one explicitly (osd_changed()
doesn't need to be called anymore, because changing the option triggers
UPDATE_OSD, and updates the OSD as a consequence). The way the option
value is propagated is rather hacky, but for now this will do.
A release has been made, so drop options deprecated for that release.
Also drop some options which have been deprecated a much longer time
before.
Also fix a typo in client-api-changes.rst.
Change it from explicit metadata about every hwaccel method to trying to
get it from libavcodec. As shown by add_all_hwdec_methods(), this is
quite a bumpy road, and a bit worse than expected.
This will probably cause a bunch of regressions. In particular I didn't
check all the strange decoder wrappers, which all cause some sort of
special cases each. You're volunteering for beta testing by using this
commit.
One interesting thing is that we completely get rid of mp_hwdec_ctx in
vd_lavc.c, and that HWDEC_* mostly goes away (some filters still use it,
and the VO hwdec interops still have a lot of code to set it up, so it's
not going away completely for now).
Make the VO<->decoder interface capable of supporting multiple hwdec
APIs at once. The main gain is that this simplifies autoprobing a lot.
Before this change, it could happen that the VO loaded the "wrong" hwdec
API, and the decoder was stuck with the choice (breaking hw decoding).
With the change applied, the VO simply loads all available APIs, so
autoprobing trickery is left entirely to the decoder.
In the past, we were quite careful about not accidentally loading the
wrong interop drivers. This was in part to make sure autoprobing works,
but also because libva had this obnoxious bug of dumping garbage to
stderr when using the API. libva was fixed, so this is not a problem
anymore.
The --opengl-hwdec-interop option is changed in various ways (again...),
and renamed to --gpu-hwdec-interop. It does not have much use anymore,
other than debugging. It's notable that the order in the hwdec interop
array ra_hwdec_drivers[] still matters if multiple drivers support the
same image formats, so the option can explicitly force one, if that
should ever be necessary, or more likely, for debugging. One example are
the ra_hwdec_d3d11egl and ra_hwdec_d3d11eglrgb drivers, which both
support d3d11 input.
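Forcing one of them for debugging would presumably look like this
(assuming the driver name matches ra_hwdec_d3d11egl):
mpv --gpu-hwdec-interop=d3d11egl --hwdec=d3d11va file.mkv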
vo_gpu now always loads the interop lazily by default, but when it does,
it loads them all. vo_opengl_cb now always loads them when the GL
context handle is initialized. I don't expect that this causes any
problems.
It's now possible to do things like changing between vdpau and nvdec
decoding at runtime.
This is also preparation for cleaning up vd_lavc.c hwdec autoprobing.
It's another reason why hwdec_devices_request_all() does not take a
hwdec type anymore.
These couldn't be relicensed, and won't survive the LGPL transition. The
other existing filters are mostly LGPL (except libaf glue code).
This removes the deprecated pan option. I guess it could be restored by
inserting a libavfilter filter (if there's one), but for now let it be
gone.
This temporarily breaks volume control (and things related to it, like
replaygain).
Like the manual says, this is technically undefined behaviour. See:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx
In particular, MSDN says texture arrays created with the BIND_DECODER
flag cannot be used with CreateShaderResourceView, which means they
can't be sampled through SRVs like normal Direct3D textures. However,
some programs (Google Chrome included) do this anyway for performance
and power-usage reasons, and it appears to work with most drivers.
Older AMD drivers had a "bug" with zero-copy decoding, but this appears
to have been fixed. See #3255, #3464 and http://crbug.com/623029.
This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL
generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.
What works:
- All of mpv's internal shaders should work, including compute shaders.
- Some external shaders have been tested and work, including RAVU and
adaptive-sharpen.
- Non-dumb mode works, even on very old hardware. Most features work at
feature level 9_3 and all features work at feature level 10_0. Some
features also work at feature level 9_1 and 9_2, but without high-bit-
depth FBOs, it's not very useful. (Hardware this old is probably not
fast enough for advanced features anyway.)
Note: This is more compatible than ANGLE, which requires 9_3 to work
at all (GLES 2.0), and 10_1 for non-dumb-mode (GLES 3.0).
- Hardware decoding with D3D11VA, including decoding of 10-bit formats
without truncation to 8-bit.
What doesn't work / can be improved:
- PBO upload and direct rendering does not work yet. Direct rendering
requires persistent-mapped PBOs because the decoder needs to be able
to read data from images that have already been decoded and uploaded.
Unfortunately, it seems like persistent-mapped PBOs are fundamentally
incompatible with D3D11, which requires all resources to use driver-
managed memory and requires memory to be unmapped (and hence pointers
to be invalidated) when a resource is used in a draw or copy
operation.
However it might be possible to use D3D11's limited multithreading
capabilities to emulate some features of PBOs, like asynchronous
texture uploading.
- The blit() and clear() operations don't have equivalents in the D3D11
API that handle all cases, so in most cases, they have to be emulated
with a shader. This is currently done inside ra_d3d11, but ideally it
would be done in generic code, so it can take advantage of mpv's
shader generation utilities.
- SPIRV-Cross is used through a NIH C-compatible wrapper library, since
it does not expose a C interface itself.
The library is available here: https://github.com/rossy/crossc
- The D3D11 context could be made to support more modern DXGI features
in future. For example, it should be possible to add support for
high-bit-depth and HDR output with DXGI 1.5/1.6.
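Trying the new backend should just be a matter of selecting it via the
--gpu-api option:
mpv --gpu-api=d3d11 file.mkv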
This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME
format in ffmpeg, which allows decoders to provide an
AVDRMFrameDescriptor struct.
That struct holds dmabuf fds and information allowing zerocopy rendering
using KMS / DRM Atomic.
This has been tested on a RockChip ROCK64 device.
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.
We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
Signed-off-by: wm4 <wm4@nowhere>
Rename --stats to --load-stats-overlay and add an entry to options.rst
over the original commit.
Signed-off-by: wm4 <wm4@nowhere>
In addition to the built-in nvidia compiler, we now also support a
backend based on libshaderc. shaderc is sort of like glslang except it
has a C API and is available as a dynamic library.
The generated SPIR-V is now cached alongside the VkPipeline in the
cached_program. We use a special cache header to ensure validity of this
cache before passing it blindly to the vulkan implementation, since
passing invalid SPIR-V can cause all sorts of nasty things. It's also
designed to self-invalidate if the compiler gets better, by offering a
catch-all `int compiler_version` that implementations can use as a cache
invalidation marker.
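A minimal sketch of what such a header could look like (illustrative
only; names and layout are not the actual implementation):
struct spirv_cache_header {
    char magic[4];          // identifies this as a SPIR-V cache entry
    int cache_version;      // bumped when the header layout itself changes
    int compiler_version;   // catch-all invalidation marker for the compiler
    uint32_t spirv_size;    // size of the SPIR-V blob that follows
};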
This time based on ra/vo_gpu. 2017 is the year of the vulkan desktop!
Current problems / limitations / improvement opportunities:
1. The swapchain/flipping code violates the vulkan spec, by assuming
that the presentation queue will be bounded (in cases where rendering
is significantly faster than vsync). But apparently, there's simply
no better way to do this right now, to the point where even the
stupid cube.c examples from LunarG etc. do it wrong.
(cf. https://github.com/KhronosGroup/Vulkan-Docs/issues/370)
2. The memory allocator could be improved. (This is a universal
constant)
3. Could explore using push descriptors instead of descriptor sets,
especially since we expect to switch descriptors semi-often for some
passes (like interpolation). Probably won't make a difference, but
the synchronization overhead might be a factor. Who knows.
4. Parallelism across frames / async transfer is not well-defined, we
either need to use a better semaphore / command buffer strategy or a
resource pooling layer to safely handle cross-frame parallelism.
(That said, I gave resource pooling a try and was not happy with the
result at all - so I'm still exploring the semaphore strategy)
5. We aggressively use pipeline barriers where events would offer a much
more fine-grained synchronization mechanism. As a result of this, we
might be suffering from GPU bubbles due to too-short dependencies on
objects. (That said, I'm also exploring the use of semaphores as an
ordering tactic which would allow cross-frame time slicing in theory)
Some minor changes to the vo_gpu and infrastructure, but nothing
consequential.
NOTE: For safety, all use of asynchronous commands / multiple command
pools is currently disabled completely. There are some left-over relics
of this in the code (e.g. the distinction between dev_poll and
pool_poll), but that is kept in place mostly because this will be
re-extended in the future (vulkan rev 2).
The queue count is also currently capped to 1, because the lack of
cross-frame semaphores means we need the implicit synchronization from
the same-queue semantics to guarantee a correct result.
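Assuming the context is selected like the other GPU APIs, enabling it
would look like:
mpv --gpu-api=vulkan file.mkv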
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swchain_fns. Long term (and for vulkan) this has to be moved to
ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
This mechanism uses system() and shouldn't even exist. x11_common.c has
its own solution for the original problem (disabling Linux DE
screensavers without MPlayer/mpv having to link a dbus lib). If that is
not sufficient, you can create a simple Lua script.
Incidentally fixes #4888.
I really wouldn't care much about this, but some parts of the core code
are under HAVE_GPL, so there's some need to get rid of it. Simply turn
the video equalizer from its current fine-grained handling with vf/vo
fallbacks into global options. This makes updating them much simpler.
This removes any possibility of applying video equalizers in filters,
which affects vf_scale, and the previously removed vf_eq. Not a big
loss, since the preferred VOs have this builtin.
Remove video equalizer handling from vo_direct3d, vo_sdl, vo_vaapi, and
vo_xv. I'm not going to waste my time on these legacy VOs.
vo.eq_opts_cache exists _only_ to send a VOCTRL_SET_EQUALIZER, which
exists _only_ to trigger a redraw. This seems silly, but for now I feel
like this is less of a pain. The rest of the equalizer using code is
self-updating.
See commit 96b906a51d for how some video equalizer code was GPL only.
Some command line option names and ranges can probably be traced back to
a GPL only committer, but we don't consider these copyrightable.
This was especially grating because it causes problems with the
option/property unification, is the only thing that uses OPT_FLAG_STORE,
and behaves weirdly with the client API or scripts.
It can be reimplemented in a much simpler way, although it needs
slightly more code. (Simpler because there are fewer special cases.)
In a multi GPU scenario, it may be desirable to use different GPUs
for decode and display responsibilities. For example, if a secondary
GPU has better video decoding capabilities.
In such a scenario, we need to initialise a separate context for each
GPU, and use the display context in hwdec_cuda, while passing the
decode context to avcodec.
Once that's done, the actual hand-off between the two GPUs is
transparent to us (It happens during the cuMemcpy2D operation which
copies the decoded frame from a cuda buffer to the OpenGL texture).
In the end, the bulk of the work is around introducing a new
configuration option to specify the decode device.
af_volume is deprecated, and so are its replaygain sub-options. To make
it possible to use replaygain without deprecated options (and of course
to make it available at all after af_volume is dropped), reintroduce
them as top-level options.
This also means that they are easily changeable at runtime by using them
as properties. Change the "volume" property to use the new update
mechanism as well.
We don't actually bother sharing the implementation between new and
deprecated mechanisms, as the deprecated one will simply be deleted.
For the from_dB() functions, we mention anders' copyright, although I'm
not sure if a mere formula is copyrightable. This will have to be
determined later.
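For reference, the formula in question is presumably just the standard
decibel-to-linear-gain conversion; a sketch (not necessarily the exact
code):
#include <math.h>
static double from_dB(double gain_dB)
{
    // convert a gain in dB to a linear amplitude factor
    return pow(10.0, gain_dB / 20.0);
}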
This whole change is mostly untested. Our distributed human CI will take
care of it.
Instead of pausing if --keep-open is active, stop
at the end but continue playing if seeking backwards.
And then stop again when the end is reached.
Signed-off-by: wm4 <wm4@nowhere>
Over the PR, the option was renamed, and the manpage additions were
slightly changed/enhanced.
Add subtitle filter to remove additions for deaf or hard-of-hearing
(SDH). This is for English, but may in part work for others too.
This is an ASS filter and the intention is that it can always be
enabled, as by default it does not remove parts that may be normal text.
Harder filtering can be enabled with an additional option.
Signed-off-by: wm4 <wm4@nowhere>
since there are different views on what ontop is, we make the ontop
window level modifiable. at the moment only support for macOS was added.
the default for macOS was changed from 'system' to 'window' since this
fixes an unwanted behaviour in fullscreen and in general causes fewer
issues with expected behaviour.
Fixes #2376 #3974
To make it easier on the eyes, multi-line subtitles should
be left-justified (for most languages).
This adds an option to define how subtitles are to be justified
independently of how they are aligned.
Also add an option to allow --sub-justify to be applied to ASS subtitles.
This was excessively useless, and I want my time back that was needed to
explain to users why they don't want to use it.
It captured the byte stream only, and even for types of streams it was
designed for (like transport streams), it was rather questionable.
As part of the removal, un-inline demux_run_on_thread() (which has only
1 call-site now), and sort of reimplement --stream-dump to write the
data directly instead of using the removed capture code.
(--stream-dump is also very useless, and I struggled coming up with an
explanation for it in the manpage.)
vo_opengl used to have it as sub-option, which made it very hard to pass
down option values to backends in a generic way (even if these options
were completely backend-specific). For --opengl-dcomposition we used a
VOFLAG to deal with this. Fortunately, sub-options are gone, and we can
just add it as global option.
Move the option to context_angle.c and add it as global option. I
thought about adding a mechanism to let backends declare options, which
would get magically picked up by m_config instead of having to add them
to the global option list manually (similar to VO vo_driver.options),
but decided against this complexity just for 1 or 2 backends. Likewise,
it could have been added as a single option to avoid the boilerplate of
an option struct, but then again there are probably going to be more
angle suboptions, and it's cleaner.
Since for mpv CLI, the player state is a singleton, full prefetching is
a bit tricky. We do it only on the demuxer layer.
The implementation reuses the old "open thread". This means there is
significant potential for regressions even if the new option is not
used. This is made worse by the fact that I barely tested this code.
The generic mpctx_run_reentrant() wrapper is also removed - this was its
only user, and its remains become part of the new implementation.
Introduce the --opengl-hwdec-interop option, which replaces
--hwdec-preload. The new option allows explicit selection of the interop
backend.
This is relatively complex, and I would have preferred not to add this,
but it's probably useful to debug certain problems. In exchange, the
"new" option documents that pretty much any but the simplest use of it
will not be forward compatible.
this replaces the old fullscreen with the native
macOS fullscreen. additionally the
--fs-black-out-screens option was removed since the new
API doesn't support it in a way the old one did.
it can possibly be re-added if done manually.
Fixes #2857 #3272 #1352 #2062 #3864
Long planned. Leads to some sanity.
There still are some rather gross things. Especially g_groups is ugly,
and a hack that can hopefully be removed. (There is a plan for it, but
whether it's implemented depends on how much energy is left.)
Deactivating this option makes it possible to
circumvent the default OS X behavior of using
points. Windows on HiDPI resolutions won't open
at double the size anymore and videos are displayed
in their native resolution when windowed.
Fixes #3716
- Change connector selection to accept human readable names (such as
eDP-1, HDMI-A-2) rather than arbitrary numbers.
- Change GPU selection to accept GPU number rather than device paths.
- Merge connector and GPU selection into one --drm-connector.
- Add support for --drm-connector=help.
- Add support for --drm-* in EGL backend.
- Refactor KMS; reduce state sharing across drm_common.
Rename the text subtitle options from --sub-text- to --sub-,
and the --ass- options to --sub-ass-.
The intention is for common sub options to be prefixed with --sub-,
and for the special ASS options to be seen as a special version of the
sub options.
The OSD options that work like the --sub- options are still named
--osd-.
Man page updated, including a short note about the --sub-text-* and
--ass-* options being renamed to --sub-* and --sub-ass-*.
Seems like this confused users quite often.
Instead of --profile=pseudo-gui, --player-operation-mode=pseudo-gui now
has to be used to invoke pseudo GUI mode. The old way still works, and
still behaves in the old way.
A recent change merged the window-scaler option and property, but forgot
that the option is float for some reason, while the property uses
double. This led to undefined behavior. Fix it by changing the option
to double too.
Don't access MPOpts directly, and always use the new m_config.h
functions for accessing them in a thread-safe way.
The goal is eventually removing the mpv_global.opts field, and the
demuxer/stream-layer specific hack that copies MPOpts to deal with
thread-safety issues.
This moves around a lot of options. For one, we often change the
physical storage location of options to make them more localized,
but these changes are not user-visible (or should not be). For
shared options on the other hand it's better to do messy direct
access, which is worrying in that somehow renaming an option
or changing its type would break code reading them manually,
without causing a compilation error.
And introduce a global option which does this. Or more precisely, this
deprecates the global wasapi and coreaudio options, and adds a new one
that merges their functionality. (Due to the way the sub-option
deprecation mechanism works, this is simpler.)
Instead of requiring each VO or AO to manually add members to MPOpts and
the global option table, make it possible to register them automatically
via vo_driver/ao_driver.global_opts members. This avoids modifying
options.c/options.h every time, including having to duplicate the exact
ifdeffery used to enable a driver.
Whitelisting supported codecs is (probably) still better than just
allowing everything, given the weird FFmpeg API. I'm also assuming
Libav doesn't even have the codec ID, but I didn't check.
Also add a --teletext-page option, since otherwise it decodes every
teletext page and shows them in succession.
And yes, we can't use av_opt_set_int() - instead we have to set it as
string. Because FFmpeg's option system is terrible.
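Setting it as a string presumably boils down to something like this (the
FFmpeg option name "txt_page" is an assumption here):
// set the teletext page on the decoder's private options; the option
// accepts strings like "100" or "*", so it can't be set as a plain int
av_opt_set(avctx->priv_data, "txt_page", "100", 0);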
vo_opengl sub-options were always rather annoying to handle. It seems
better to make them global options instead. This is simpler and easier
to use. The only disadvantage we are aware of is that it's not clear
that many/all of these new global options work with vo_opengl only.
--vo=opengl-hq is also deprecated.
There is extensive compatibility with the old behavior. One exception is
that --vo-defaults will not apply to opengl-hq (though with opengl it
still works). vo-cmdline is also dysfunctional and will be removed in a
following commit.
These changes also affect opengl-cb.
The update mechanism is still rather inefficient: it requires syncing
with the VO after each option change, rather than batching updates.
There's also no granularity (video.c just updates "everything", and if
auto-ICC profiles are enabled, vo_opengl.c will fetch them on each
update).
Most of the manpage changes were done by Niklas Haas <git@haasn.xyz>.
This is still rather basic.
run_reconfig() and run_control() update the options because it's needed
for panscan (and other video scaling options), and fullscreen, border,
ontop updates. In the old model, these options could be accessed only
while both playback thread and VO threads were locked (i.e. during
synchronous calls like vo_control()), so this should be sufficient in
order not to miss any updates. In the future, a more fine-grained update
mechanism could be added to handle these updates "exactly".
x11_common.c contains an evil hack, as I see no reasonable way to handle
this properly. The VO thread can't "lock" the main thread, so this is
not simple.
Now options are accessible through the property list as well, which
unifies them to a degree.
Not all options support runtime changes (meaning affected components
need to be restarted for the options to take effect). Remove from the
manpage those properties which are cleanly mapped to options anyway.
From the user-perspective they're just options available through the
property interface.
Just a minor refactor along the planned option change. This commit will
make it easier to update (i.e. copy) the VO options without copying
_all_ options. For now, behavior should be equivalent, though.
(The VO options were put into a separate struct quite early - when all
global variables were removed from the source code. It wasn't clear
whether the separate struct would have any actual purpose, but it seems
it will now. Awesome, huh.)
Normally, OSD can be disabled with --osd-level=0. But this also disables
terminal OSD, and some users want _only_ the terminal OSD. Add
--video-osd=no, which essentially disables the video OSD.
Ideally, it should probably be possible to control terminal and video
OSD levels independently, but that would require separate OSD timers
(and other state) for both components, so don't do it. But because the
current situation isn't too ideal, add a threat to the manpage that this
might be changed in the future.
Fixes #3387.
The --image-display-duration option controls how long an image is
displayed. It's also possible to display the image forever (until manual
user interaction stops playback).
With this, the core drops the old method to "drain" video (i.e. waiting
for the last frame duration on end of playback). Instead, we reuse
MPContext.time_frame. The old mechanism was disabled for non-images
anyway.
Fixes #3425.
This commit adds an --audio-channel=auto-safe mode, and makes it the
default. This mode behaves like "auto" with most AOs, except with
ao_alsa. The intention is to allow multichannel output by default on
sane APIs. ALSA is not sane as in it's so low level that it will e.g.
configure any layout over HDMI, even if the connected A/V receiver does
not support it. The HDMI fuckup is of course not ALSA's fault, but other
audio APIs normally isolate applications from dealing with this and
require the user to globally configure the correct output layout.
This will help with other AOs too. ao_lavc (encoding) is changed to the
new semantics as well, because it used to force stereo (perhaps because
encoding mode is supposed to produce safe files for crap devices?).
Exclusive mode output on Windows might need to be adjusted accordingly,
as it grants the same kind of low level access as ALSA (requires more
research).
In addition to the things mentioned above, the --audio-channels option
is extended to accept a set of channel layouts. This is supposed to be
the correct way to configure mpv ALSA multichannel output. You need to
put a list of channel layouts that your A/V receiver supports.
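For example (assuming the list is comma-separated like other mpv list
options):
mpv --audio-channels=7.1,5.1,stereo file.mkv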
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to cleanup this code a bit and make it work like the
video filter code.
As part of this change, make the balance code work like the volume code,
and add an option to back the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and there
are some other problems, formally deprecate both the old property and the
new option.
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
Do this because this was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes #3322.
Add --taskbar-progress command line option and property which controls taskbar
progress indication rendering in Windows 7+. This option is on by default and
can be toggled during playback.
This option does not affect the creation process of ITaskbarList3. When the
option is turned off the progress bar is just hidden with TBPF_NOPROGRESS.
Closes #2535
Flag that is set by default. Resetting it will result in mpv trying to fit
the client area with the video instead of the whole window with border and
decorations on the screen.
Marked as (Windows only) for now until it's implemented on other platforms.
See --lavfi-complex option.
This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interaction with
libavfilter).
Future possible plans might include:
- freely switch tracks by providing some sort of default track graph
label
- automatically enabling audio visualization
- automatically mix audio or stack video when multiple tracks are
selected at once (similar to how multiple sub tracks can be selected)
This is probably the 3rd time the user-visible behavior changes. This
time, switch back because not normalizing seems to be the more expected
behavior from users.
Requested. It works like --sub-paths. This will also load audio files
from a "audio" sub directory in the config file (because the same code
as for subtitles is used, and it also had such a feature).
Fixes #2632.
Most of this is explained in the DOCS additions.
This gives us slightly more sanity, because there is less interaction
between the various parts. The goal is getting rid of the video_offset
entirely.
The simplification extends to the user API. In particular, we don't need
to fix missing parts in the API, such as the lack of a seek command
that seeks relative to the start time. All these things are now
transparent.
(If someone really wants to know the real timestamps/start time, new
properties would have to be added.)
Useless. Sometimes it might be useful to make some extremely broken
files work, but on the other hand --no-correct-pts is sufficient for
these cases.
While we still need some of the code for AVI, the "auto" mode in
particular inflated the size of the code.
The manpage entry explains this.
(Maybe this option could be always enabled and removed. I don't quite
remember what valid use-cases there are for just disabling audio
entirely, other than that this is also needed for audio decoder init
failure.)
The vf_format suboption is replaced with --video-output-levels (a global
option and property). In particular, the parameter is removed from
mp_image_params. The mechanism is moved to the "video equalizer", which
also handles common video output customization like brightness and
contrast controls.
The new code is slightly cleaner, and the top-level option is slightly
more user-friendly than as vf_format sub-option.
If this mode is enabled, the player tries to strictly synchronize video
to display refresh. It will adjust playback speed to match the display,
so if you play 23.976 fps video on a 24 Hz screen, playback speed is
increased by approximately 1/1000. Audio will be resampled to keep up
with playback.
This is different from the default sync mode, which will sync video to
audio, with the consequence that video might skip or repeat a frame once
in a while to make video keep up with audio.
This is still unpolished. There are some major problems as well; in
particular, mkv VFR files won't work well. The reason is that Matroska
is terrible and rounds timestamps to milliseconds. This makes it rather
hard to guess the framerate of a section of video that is playing. We
could probably fix this by just accepting jittery timestamps (instead
of explicitly disabling the sync code in this case), but I'm not ready
to accept such a solution yet.
Another issue is that we are extremely reliant on OS video and audio
APIs working in an expected manner, which of course is not too often
the case. Consequently, the new sync mode is a bit fragile.
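Assuming the mode is selected through the --video-sync option, enabling
it would look like:
mpv --video-sync=display-resample file.mkv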
Add --demuxer-max-packets and --demuxer-max-bytes, which control the
maximum size of the packet queue. These can be helpful to avoid
excessive memory usage.
Memory usage is the reason why there's a limit in the first place. If a
file is more or less broken, and audio and video don't line up, the
decoders will fill up the packet queue trying to read more audio or
video, and the maximum sizes are required to avoid unbounded memory
allocation. Being able to override the maximum sizes is useful; either
for restricting memory usage further, or enlarging the sizes when
attempting to play various broken files.
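For example (values are plain byte/packet counts; suffix support is not
assumed here):
mpv --demuxer-max-bytes=52428800 --demuxer-max-packets=100000 file.mkv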
Remove --demuxer-readahead-packets and --demuxer-readahead-bytes. These
were a bit useless. They could force a minimum packet queue size, but
controlling the queue size with --demuxer-readahead-secs is much nicer.
It's fairly certain nobody ever used these options.
Allow setting an arbitrary amount, instead of the fixed 50%.
This is not strictly backwards compatible. The defaults don't change,
but the --cache/--cache-default options now set the readahead portion.
So in practice, users who configured this until now will see the
double amount of cache being used, _plus_ the 75MB default backbuffer
will be in use.
Probably makes users happy who want bitmap subtitles to show up in the
screen margins, and stops them from doing idiotic crap with vf_expand.
Fixes #2098.
Until now, if a stream wasn't seekable, but the stream cache was enabled
(--cache), we've enabled seeking anyway. The idea was that at least
short seeks would typically fall within the cache. And if not, the user
was out of luck and terrible things happened. In other words, it was
unreliable.
Be stricter about it and remove this behavior. Effectively, this will
for example disable seeking in piped data.
Instead of trying to be clever, add an --force-seekable option, which
will always enable seeking if the user really wants it.
See manpage additions. This is mainly useful for vo_opengl_cb, but can
also be applied to vo_opengl.
On a side note, gl_hwdec_load_api() should stop using a name string, and
instead always use the IDs. This should be cleaned up another time.
This provides a new method for enabling spdif passthrough. The old
method via --ad (--ad=spdif:ac3 etc.) is deprecated. The deprecated
method will probably stop working at some point.
This also supports PCM fallback. One caveat is that it will lose at
least 1 audio packet in doing so. (I don't care enough to prevent this.)
(This is named after the old S/PDIF connector, because it uses the same
underlying technology as far as the higher level protocol is concerned.
Also, the user should be reminded that passthrough is backwards.)
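With the new method, passthrough is requested per codec, presumably along
these lines (assuming the option is named --audio-spdif and takes
FFmpeg-style codec names):
mpv --audio-spdif=ac3,dts file.mkv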
The options don't change, but they're now declared and used privately by
demux_mkv.c. This also brings with it a minor refactor of the subpreroll
seek handling - merge the code from playloop.c into demux_mkv.c. The
change in demux.c is pretty much equivalent as well.
Remove the colorspace-related top-level options, add them to vf_format.
They are rather obscure and not needed often, so it's better to get them
out of the way. In particular, this gets rid of the semi-complicated
logic in command.c (most of which was needed for OSD display and the
direct feedback from the VO). It removes the duplicated color-related
name mappings.
This removes the ability to write the colormatrix and related
properties. Since filters can be changed at runtime, there's no loss of
functionality, except that you can't cycle automatically through the
color constants anymore (but who needs to do this).
This also changes the type of the mp_csp_names and related variables, so
they can directly be used with OPT_CHOICE. This probably ended up a bit
awkward, for the sake of not adding a new option type which would have
used the previous format.
This option allows the user to pass non-supported options directly to
youtube-dl, such as "--proxy URL", "--username USERNAME" and
'--password PASSWORD".
There is no sanity checking, so it's possible to break things (e.g.
if you pass "--version", mpv exits with a random JSON error).
Signed-off-by: wm4 <wm4@nowhere>
Now --ass-use-margins doesn't apply to normal subtitles anymore. This is
probably the inverse of the mpv behavior users expected so far, and
thus a breaking change, so rename the option so that the user at least
has a chance to look up the option and decide whether the new behavior
is wanted or not.
The basic idea here is:
- plain text subtitles should have a certain useful default behavior,
like actually using margins
- ASS subtitles should never be broken by default
- ASS subtitles should look and behave like plaintext subtitles if
the --ass-style-override=force option is used
This also subtly changes --sub-scale-with-window and adds the --ass-
scale-with-window option. Since this one isn't so important, don't
bother with compatibility.
Make it accept "," as separator, instead of only ":". Do this by using
the key-value-list parser. Before this, the option was stored as a
string, with the option parser verifying that the option value as
correct. Now it's stored pre-parsed, although the log levels still
require separate verification and parsing-on-use to some degree (which
is why the msg-level option type doesn't go away).
Because the internal type changes, the client API "native" type also
changes. This could be prevented with some more effort, but I don't
think it's worth it - if MPV_FORMAT_STRING is used, it still works the
same, just with a different separator on read accesses.
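For example, the new separator makes values like this possible (the
module names here are just examples):
mpv --msg-level=all=warn,demux=v file.mkv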
In ancient times, this was needed because it was not default, and many
VOs had problems with it. But it was always default in mpv, and all VOs
are required to deal with it. Also, running --fixed-vo=no is not useful
and just creates weird corner cases. Get rid of it.
This allows getting the log at all with --no-terminal and without having
to retrieve log messages manually with the client API. The log level is
hardcoded to -v. A higher log level would lead to too much log output
(huge file sizes and latency issues due to waiting on the disk), and
isn't too useful in general anyway. For debugging, the terminal can be
used instead.
Fixes #1472.
(Maybe these options should have been named --autofit-max and
--autofit-min, but since --autofit-larger already exists, use
--autofit-smaller for symmetry.)
--sub-scale-by-window=no attempts to keep subs always at the same pixel
size.
The implementation is a bit all over the place, because it compensates
already done scaling by an inverse scale factor, but it will probably do
its job.
Fixes #1424. (The semantics and name of --sub-scale-with-window are
kept, and this adds a new option - the name is confusingly similar, but
it's actually analogue to --osd-scale-by-window.)
This attempts to increase user-friendliness by excluding useless tags.
It should be especially helpful with mp4 files, because the FFmpeg mp4
demuxer adds tons of completely useless information to the metadata.
Fixes #1403.
- --lua and --lua-opts change to --script and --script-opts
- 'lua' default script dirs change to 'scripts'
- DOCS updated
- 'lua-settings' dir was _not_ modified
The old lua-based names/dirs still work, but display a warning.
Signed-off-by: wm4 <wm4@nowhere>
Makeshift-solution for working around certain fontconfig issues.
With --use-text-osd=no, libass and fontconfig won't be initialized, and
fontconfig won't block everything with scanning for fonts.
It's passed with the '--format' option to youtube-dl.
If it isn't set, we don't pass '--format best' so that youtube-dl can
use the options from its configuration file.
Signed-off-by: wm4 <wm4@nowhere>
Probably needs to be polished a bit more. Also, might require a key
binding that can set/clear the loop points in a more intuitive way.
For now, something like this can be put into input.conf to use it:
ctrl+y set ab-loop-a ${time-pos} # set A
ctrl+x set ab-loop-b ${time-pos} # set B
ctrl+c set ab-loop-a no # clear (mostly)
Fixes #1241.
Make the changes started in commit c827ae5f more elaborate, and provide
an option to control the amount of data read before the seek-target. To
achieve this, rewrite the loop that finds the lowest still acceptable
target cluster. It is now searched by time instead of file position. The
behavior (both with and without preroll option) may be different from
before this change, although it shouldn't be worse.
The change in demux_mkv_read_cues() fixes a bug: when seeking after playing
normally, the code would erroneously assume that durations are set. This
doesn't happen if the first operation after loading was a seek instead
of playback.
The main need I see for this is with libmpv - it would be confusing if
some application showed up as "mpv" on whateverthehell PulseAudio uses
it for (generally it does show up on various PA GUI tools).
Note that you can't pass .cue or .edl files to it, at least not yet.
Requested in context of allowing to specify custom chapters. For that
to work well, we probably need to add some sort of chapter metadata
pseudo-demuxer.
This is probably what libmpv users want; and it also improves error
reporting (or we'd have to add a way to communicate such mid-playback
failures as events).
No development activity (or even any sign of life) for almost a year.
A replacement based on youtube-dl will probably be provided before the
next mpv release. Ask on the IRC channel if you want to test.
Simplify the Lua check too: libquvi linking against a different Lua
version than mpv was a frequent issue, but with libquvi gone, no
direct dependency uses Lua, and such a clash is rather unlikely.
Apparently using the stream index is the best way to refer to the same
streams across multiple FFmpeg-using programs, even if the stream index
itself is rarely meaningful in any way.
For Matroska, there are some possible problems, depending how FFmpeg
actually adds streams. Normally they seem to match though.