--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but everything you're receiving.
This option has been deprecated upstream for a long time, probably
doesn't even work anymore, and won't work moving forward as we replace
the vulkan code with libplacebo wrappers.
I haven't removed the option completely yet since in theory we could
still add support for e.g. a native glslang wrapper in the future. But
most likely the future of this code is deletion.
As an aside, fix an issue where the man page didn't mention d3d11.
This commit bumps the libmpv version to 1.102
drm-osd-plane -> drm-draw-plane
drm-video-plane -> drm-drmprime-video-plane
drm-osd-size -> drm-draw-surface-size
"draw plane", as in the plane that OpenGL draws to, whether it be
video + OSD or just OSD.
"drmprime video plane", as in the plane used for hwdec video imported
via drmprime.
"draw surface size", as in the size of the surface used for the draw plane
The new names are invariant of whether hwdec_drmprime_drm is being
used. The original naming was very confusing, as when doing
regular rendering (swdec or vaapi) the video would be displayed on the
"OSD plane", and the "Video plane" would remain unused.
Add general primary/overlay plane option to drm-osd-plane-id and
drm-video-plane-id, so that the user can just request any usable
primary or overlay plane for either of these two options. This should
be somewhat more user-friendly (especially as neither of these two
options currently has a useful help function), as usually you would
only be interested in the type of the plane, and not exactly which
plane gets picked.
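Putting the renamed options and the primary/overlay values together, a
request for specific plane types could look like this (illustrative; the
exact plane layout depends on the hardware and driver):

    mpv --vo=gpu --gpu-context=drm --drm-draw-plane=primary --drm-drmprime-video-plane=overlay video.mkv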
Despite their place in the tree, hwdecs can be loaded and used just
fine by the vulkan GPU backend.
In this change we add Vulkan interop support to the cuda/nvdec hwdec.
The overall process is mostly straightforward, so the main observation
here is that I had to implement it using an intermediate Vulkan buffer,
because direct VkImage usage is blocked by a bug in the nvidia
driver. When that gets fixed, I will revisit this.
Nevertheless, the intermediate buffer copy is very cheap as it's all
device memory from start to finish. Overall CPU utilisation is pretty
much the same as with the OpenGL GPU backend.
Note that we cannot use a single intermediate buffer - rather there
is a pool of them. This is done because the cuda memcpys are not
explicitly synchronised with the texture uploads.
In the basic case, this doesn't matter because the hwdec is not
asked to map and copy the next frame until after the previous one
is rendered. In the interpolation case, we need extra future frames
available immediately, so we'll be asked to map/copy those frames
and vulkan will be asked to render them. So far, harmless right? No.
All the vulkan rendering, including the upload steps, is batched
together and ends up running very asynchronously from the CUDA copies.
The end result is that all the copies happen one after another, and
only then do the uploads happen, which means all textures are uploaded
with the same, final, frame data. Whoops. Unsurprisingly this results in
jerky motion, because 3 out of every 4 frames are identical.
The buffer pool ensures that we do not overwrite a buffer that is
still waiting to be uploaded. The ra_buf_pool implementation
automatically checks if existing buffers are available for use and
only creates a new one if it really has to. It's hard to say for sure
what the maximum number of buffers might be but we believe it won't
be so large as to make this strategy unusable. The highest I've seen
is 12 when using interpolation with tscale=bicubic.
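To sketch the reuse-or-grow idea (this is not the actual ra_buf_pool
code, just a minimal C illustration; the in_use flag stands in for the
real upload bookkeeping):

    #include <stdbool.h>
    #include <stdlib.h>

    struct buf { bool in_use; void *mem; };   /* in_use: still referenced by a pending upload */
    struct buf_pool { struct buf **bufs; int num; };

    /* Hand out an idle buffer; allocate a new one only if all existing
     * buffers are still waiting to be uploaded. */
    static struct buf *pool_get(struct buf_pool *p, size_t size)
    {
        for (int i = 0; i < p->num; i++) {
            if (!p->bufs[i]->in_use)
                return p->bufs[i];
        }
        struct buf *b = calloc(1, sizeof(*b));
        b->mem = malloc(size);
        p->bufs = realloc(p->bufs, (p->num + 1) * sizeof(*p->bufs));
        p->bufs[p->num++] = b;
        return b;
    }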
A future optimisation here is to synchronise the CUDA copies with
respect to the vulkan uploads. This can be done with shared semaphores
that would ensure the copy of the second frames only happens after the
upload of the first frame, and so on. This isn't trivial to implement
as I'd have to first adjust the hwdec code to use asynchronous cuda;
without that, there's no way to use the semaphore for synchronisation.
This should result in fewer intermediate buffers being required.
Since linear downscaling makes sense to handle independently from
linear/sigmoid upscaling, we split this option up. Now,
linear-downscaling is its own option that only controls linearization
when downscaling and nothing more. Likewise, linear-upscaling /
sigmoid-upscaling are two mutually exclusive options (the latter
overriding the former) that apply only to upscaling and no longer
implicitly enable linear light downscaling as well.
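In mpv.conf terms, the split looks roughly like this (values
illustrative):

    linear-downscaling=yes   # linearize only when downscaling
    sigmoid-upscaling=yes    # upscaling only; mutually exclusive with linear-upscaling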
The old behavior was very confusing, as evidenced by issues such
as #6213. The current behavior should make much more sense, and only
minimally breaks backwards compatibility (since using linear-scaling
directly was very uncommon - most users got this for free as part of
gpu-hq and relied only on that).
Closes #6213.
Someone on IRC pointed out that the default stats bindings weren't
documented in the interactive control section of the manual, so
let's add them with a short mention and a reference to the STATS
section of the manual.
by default the pixel format creation falls back to a software renderer
when everything else fails. this is mostly needed for VMs. additionally one
can directly request an sw renderer or exclude it entirely.
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
duration is parsed as an integer, and the default value is used if `-1` is passed. Passing `-` as described here causes a parameter value error.
Until now, stopping playback aborted the demuxer and I/O layer violently
by signaling mp_cancel (bound to libavformat's AVIOInterruptCB
mechanism). Change it to try closing them gracefully.
The main purpose is to silence those libavformat errors that happen when
you request termination. Most of libavformat barely cares about the
termination mechanism (AVIOInterruptCB), and essentially it's like the
network connection is abruptly severed, or file I/O suddenly returns I/O
errors. There were issues with dumb TLS warnings, parsers complaining
about incomplete data, and some special protocols that require server
communication to gracefully disconnect.
We still want to abort it forcefully if it refuses to terminate on its
own, so a timeout is required. Users can set the timeout to 0, which
should give them the old behavior.
This also removes the old mechanism that treats certain commands (like
"quit") specially, and tries to terminate the demuxers even if the core
is currently frozen. This is for situations where the core synchronized
to the demuxer or stream layer while network is unresponsive. This in
turn can only happen due to the "program" or "cache-size" properties in
the current code (see one of the previous commits). Also, the old
mechanism doesn't fit particularly well with the new one. We wouldn't
want to abort playback immediately on a "quit" command - the new code is
all about giving it a chance to end it gracefully. We'd need some sort
of watchdog thread or something equally complicated to handle this. So
just remove it.
The change in osd.c is to prevent it from clearing the status line while
waiting for termination. The normal status line code doesn't output
anything useful at this point, and the code path taken would clear it, both
of which amount to an annoying behavior change, so just let it show the old
one.
The player fully restarts playback when the edition or disk title is
changed. Before this, the player tried to reinitialize playback
partially. For example, it did not print a new "Playing: <file>"
message, and did not send playback end to libmpv users (scripts or
applications).
This playback restart code was a bit messy and could have unforeseen
interactions with various state. There have been bugs before. Since it's
a mostly cosmetic thing for an obscure feature, just change it to a full
restart. This works well, though since it may have consequences for
scripts or client API users, mention it in interface-changes.rst.
Before this change, only 1 command or so had named arguments. There is
no reason why other commands can't have them, except that it's a bit of
work to add them.
Commands with a variable number of arguments, such as the "run" command,
are inherently incompatible with named arguments. They still have dummy
names, but obviously you can't assign multiple values to a single named
argument (unless the argument has an array type, which would be
something different). For now, disallow using named argument APIs with
these commands. This might change later.
2 commands are adjusted to not need a separate default value by changing
flag constants. (The numeric values are C only and can't be set by
users.)
Make the command syntax in the manpage more consistent. Now none of the
allowed choice/flag names are in the command header, and all arguments
are shown with their proper name and quoted with <...>.
Some places in the manpage and the client.h doxygen are updated to
reflect that most commands support named arguments. In addition, try to
improve the documentation of the syntax and need for escaping etc. as
well.
(Or actually most uses of the word "argument" should be "parameter".)
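For instance, after this change a command with named parameters can
presumably be issued as a map instead of a positional list; using the
manpage's "screenshot <flags>" signature, a JSON IPC request might look
something like this (the exact accepted shape is whatever
mpv_command_node() documents):

    { "command": { "name": "screenshot", "flags": "window" } }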
This affects async commands started by client API, commands with async
capability run in a sync way by client API (think mpv_command_node()
with "subprocess"), and detached async work.
Since scripts might want to do some cleanup work (that might involve
launching processes, don't ask), we don't unconditionally kill
everything on exit, but apply an arbitrary timeout of 2 seconds until
async commands are aborted.
I wanted to put all commands through mpv_command_node_async() instead of
mpv_command_node(). Using synchronous commands over an asynchronous
transport doesn't make sense anyway.
This would have used the request_id field in IPC requests as reply ID
for the async commands. But the latter need to be [u]int64, while the
former can be any type. To avoid that we need an extra lookup table for
mapping reply IDs to request_id values, we now require that request_id
fields are integers.
Since this would be an incompatible change, just deprecate non-integers
for now, and plan the change for a later time.
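For example, a conforming request now uses a plain integer request_id,
which is copied into the matching reply:

    { "command": ["get_property", "time-pos"], "request_id": 100 }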
The only effective difference is that the former explicitly checks
whether the JSON value type is string, and errors out if not. The rest
is exactly the same (mpv_set_property_string is mpv_set_property with
MPV_FORMAT_STRING).
It seems silly to keep this, so just remove it.
Many asynchronous commands are potentially long running operations, such
as loading something from network or running a foreign process.
Obviously it shouldn't just be possible for them to freeze the player if
they don't terminate as expected. Also, there will be situations where
you want to stop some of those operations explicitly. So add
an infrastructure for this.
Commands have to support this explicitly. The next commit uses this to
actually add support to a command.
The "run" command is old. I'm not sure why the separate Lua
implementation was added. But maybe it was because the "run" command used
to be limited to a small number of arguments. This limit has been
removed a while ago. In any case, the old implementation is not needed
anymore.
We keep mp.subprocess() with roughly the same semantics for
compatibility with scripts (including the internal ytdl script).
Seems to work with the ytdl wrapper. Not tested further.
This supports named arguments. It benefits from the infrastructure of
async commands.
The plan is to reimplement Lua's utils.subprocess() on top of it.
Named arguments should make it easier to have long-term compatibility,
even if command arguments get added or removed. They're also much nicer
for commands with a large number of arguments, especially if many
arguments are optional.
As of this commit, this cannot be used, because there is no command yet
which supports them. See the following commit.
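Once a command gains named-parameter support (the subprocess command
described above is the first), an invocation over the JSON IPC would
presumably look roughly like this (parameter names as later documented
for "subprocess"):

    { "command": { "name": "subprocess", "args": ["ls", "-l"], "capture_stdout": true } }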
Basically reimplement the async behavior on top of the async command
code. With this, all screenshot commands are async, and the "async"
prefix basically does nothing. The prefix now behaves exactly like with
other commands that use spawn_thread.
This also means using the prefix in the preset input.conf is pointless
(without effect) and misleading, so remove that.
The each_frame mode was actually particularly painful in making this
change, since the player wants to block for it when writing a
screenshot, and generally doesn't fit into the new infrastructure. It
was still relatively easy to reimplement by copying the original command
and then repeating it on each frame. The waiting is reentrant now, so
move the call in video.c to a "safer" spot.
One way to observe how the new semantics interact with everything is
using the mpv repl script and sending a screenshot command through it.
Without async flag, the script will freeze while writing the screenshot
(while playback continues), while with async flag it continues.
Pretty trivial, since commands can be async now, and the common code
even provides convenience like running commands on a worker thread.
The only ugly thing is that mp_add_external_file() needs an extra flag
for locking. This is because there's still some code which calls this
synchronously from the main thread, and unlocking the core makes no
sense there.
This enables two types of command behavior:
1. Plain async behavior, like "loadfile" not completing until the file
is fully loaded.
2. Running parts of the command on worker threads, e.g. for I/O, such as
"sub-add" doing network accesses on a thread while the core
continues.
Both have no implementation yet, and most new code is actually inactive.
The plan is to implement a number of useful cases in the following
commits.
The most tricky part is handling internal keybindings (input.conf) and
the multi-command feature (concatenating commands with ";"). It requires
a bunch of roundabout code to make it do the expected thing in
combination with async commands.
There is the question how commands should be handled that come in at a
higher rate than what can be handled by the core. Currently, it will
simply queue up input.conf commands as long as memory lasts. The client
API is limited by the size of the reply queue per client. For commands
which require a worker thread, the thread pool is limited to 30 threads,
and then will queue up work in memory. The number is completely
arbitrary.
With the advent of actual HDR devices, my real measured ICC profile has
an "infinite" contrast, since the display is completely off on pure
black inputs. 100k:1 might not be enough, so let's just bump it up to
1m:1 to be safe.
Also, improve the logging in the case that the detected contrast is too
high by default.
With the internal change from stringlist to keyvaluelist, these
sub-options stop working. I don't really care enough to bring them
back. (Order doesn't matter, -del always seemed annoying.)
We are currently using the primary / overlay plane DRM objects, assuming that the primary plane is the OSD and the overlay plane is the video.
This commit does the following:
- replace the primary / overlay plane members with osd and video plane members, without the assumption
- add two more options to determine which of the primary / overlay planes is associated with the osd / video
- default the osd to the overlay plane and the video to the primary plane if unspecified
That new API was introduced and allows having several native resources.
This uses that mechanism for drm resources rather than the deprecated
opengl-cb structs.
This patch therefore adds two structs that can be used with the drm atomic interop.
- mpv_opengl_drm_params : which will hold all the drm handles
- mpv_opengl_drm_osd_size : which will hold osd layer size
This commit adds a drm-osd-size=WxH parameter to the command line which
allows defining the OSD plane dimensions. The OSD can be upscaled to
the screen resolution when having the OSD at video resolution is too heavy.
This is especially useful for UHD modes on embedded devices where
the GPU cannot handle UHD modes at a decent framerate.
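For example (resolution illustrative):

    mpv --gpu-context=drm --drm-osd-size=1920x1080 some-uhd-video.mkv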
Instead of using an internal counter to keep track of the value that was
set last, attempt to find the current value of the property/option in
the value list, and then set the next value in the list.
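For example, with a binding like this in input.conf, pressing the key
now advances from whatever aspect is currently set, rather than from an
internal counter:

    a cycle-values video-aspect "16:9" "4:3" "2.35:1"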
There are some potential problems. If a property refuses to accept a
specific value, the cycle-values command will fail, and start from the
same position again. It can't know that it's supposed to skip the next
value. The same can happen to properties which behave "strangely", such
as the "aspect" property, which will return the current aspect if you
write "-1" to it. As a consequence, cycle-values can appear to get
"stuck".
I still think the new behavior is what users expect more, and which is
generally more useful. We won't restore the ability to get the old
behavior, unless we decide to revert this commit entirely.
Fixes #5772, and hopefully other complaints.
The main change is that we wait with opening the muxer ("writing
headers") until we have data from all streams. This fixes race
conditions at init due to broken assumptions in the old code.
This also changes a lot of other stuff. I found and fixed a few API
violations (often things for which better mechanisms were invented, and
the old ones are not valid anymore). I try to get away from the public
mutex and shared fields in encode_lavc_context. For now it's still
needed for some timestamp-related fields, but most are gone. It also
removes some bad code duplication between audio and video paths.
Attempts to enable the following things:
- let a render API user do "proper" audio-sync video timing itself
- make it possible to not re-render repeated frames if the API user has
better mechanisms available (e.g. waiting for a DisplayLink cycle
instead)
- allow the user to delay or skip redraws if it makes sense
Basically this information will be needed by API users who want to be
"clever" about optimizing timing and rendering.
DR (letting the decoder allocate texture memory) requires running the
allocation on the render thread. This is rather hard with the render
API, because the user controls this thread and when it's entered. It was
not possible until now.
This commit adds a bunch of infrastructure to make this possible. We add
a new optional mode (MPV_RENDER_PARAM_ADVANCED_CONTROL) which basically
lets the user's render thread and libmpv agree how this should be done.
Misuse would lead to deadlocks. To make this less likely, strictly
document thread safety/locking issues. In particular, document which
libmpv functions can be called without issues. (The rest has to be
assumed unsafe.)
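A minimal sketch of how a user's render thread might opt in (GL case;
the mpv handle, get_proc_address and the wakeup callback are assumptions
of the example, not part of this commit):

    #include <mpv/client.h>
    #include <mpv/render_gl.h>

    static mpv_render_context *init_render(mpv_handle *mpv,
                                           void *(*get_proc_address)(void *, const char *),
                                           void (*on_update)(void *))
    {
        mpv_opengl_init_params gl_params = { .get_proc_address = get_proc_address };
        int advanced = 1;
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_params},
            {MPV_RENDER_PARAM_ADVANCED_CONTROL, &advanced},  /* opt in to the new mode */
            {0}
        };
        mpv_render_context *rctx = NULL;
        if (mpv_render_context_create(&rctx, mpv, params) < 0)
            return NULL;
        /* The callback may be invoked from arbitrary threads; it must only
         * wake up the render thread, never render or call libmpv directly. */
        mpv_render_context_set_update_callback(rctx, on_update, NULL);
        return rctx;
    }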
The worst issue is destruction of the render context while video is
still active. To avoid certain unintended recursive locks (i.e.
deadlocks, unless we'd make the locks recursive), make the update
callback lock separate. Make "killing" the video chain asynchronous, so
we can do extra work while video is being destroyed.
Because losing wakeups is a big deal, setting the update callback now
triggers a wakeup. (It would have been better if the wakeup callback
were a parameter to mpv_render_context_create(), but too late.)
This commit does not add DR yet; the following commit does this.
Fundamentally, scripts are loaded asynchronously, but as a feature,
there was code to wait until a script is loaded (for a certain arbitrary
definition of "loaded"). This was done in scripting.c with the
wait_loaded() function.
This called mp_idle(), and since there are commands to load/unload
scripts, it meant the player core loop could be entered recursively. I
think this is a major complication and has some problems. For example,
if you had a script that does 'os.execute("sleep inf")', then every
command you ran to load another instance of the script would add a new
stack frame of mp_idle(). This would lead to some sort of reentrancy
horror that is hard to debug. Also misc/dispatch.c contains a somewhat
tricky mess to support such recursive invocations. There were also some
bugs due to this and due to unforeseen interactions with other messes.
This scripting stuff was the only thing making use of that reentrancy,
and future commands that have "logical" waiting for something should be
implemented differently. So get rid of it.
Change the code to wait only in the player initialization phase: the
only place where it really has to wait is before playback is started,
because scripts might want to set options or hooks that interact with
playback initialization. Unloading of builtin scripts (can happen with
e.g. "set osc no") is left asynchronous; the unloading wasn't too robust
anyway, and this change won't make a difference if someone is trying to
break it intentionally. Note that this is not in mp_initialize(),
because mpv_initialize() uses this by locking the core, which would have
the same problem.
In the future, commands which logically wait should use different
mechanisms. Originally I thought the current approach (that is removed
with this commit) should be used, but it's too much of a mess and can't
even be used in some cases. Examples are:
- "loadfile" should be made blocking (needs to run the normal player
code and manually unblock the thread issuing the command)
- "add-sub" should not freeze the player until the URL is opened (needs
to run opening on a separate thread)
Possibly the current scripting behavior could be restored once new
mechanisms exist, and if it turns out that anyone needs it.
With this commit there should be no further instances of recursive
playloop invocations (other than the case in the following commit),
since all mp_idle()/mp_wait_events() calls are done strictly from the
main thread (and not commands/properties or libmpv client API that
"lock" the main thread).
The first change is about spdif - I mostly ignore spdif issues these
days, but it seems like the recent changes made handling of it slightly
better (but I didn't really test).
The second change is about broken libavfilter filters. We won't restore
the old behavior, because people were complaining about the old behavior
in the past. Possibly we could make libavfilter export this as metadata
and use the old behavior if we know they're broken - but such metadata
doesn't exist yet.
Until recently, the AO was reinitialized strictly only on decoder format
changes. But the commit for simplifying audio format negotiation removed
this. Now the AO is recreated for any format change.
This is sort of annoying if you change playback speed. The
insertion/removal of af_scaletempo can change the sample format. For
example, the acompressor filter will convert output to double, so
toggling scaletempo will force the format back to float. This recreates
the AO under the --gapless-audio=weak default. This likely affects a lot
of other filters too.
Work this around by allowing sample format changes, and keeping the
current AO format in these cases. This is probably not a big problem.
Most audio APIs force the output format to float anyway.
This means you actually have to worry about what the default gapless
mode does to your audio. If you start with a file that uses 8 bit per
sample, and then continue playing a 24 bit FLAC, it will be converted
down to 8 bit per sample. (Assuming they are played in a way that uses
the gapless logic.)
One can now set the number of buffers and the buffer size.
This can reduce CPU usage while the total latency stays mostly the same.
As there are sync mechanisms, A/V sync stays intact and working.
It also modifies the 6.1 channel order, as per the OpenAL spec,
and adds AOPLAY_FINAL_CHUNK support.
Uses OpenAL Soft's AL_DIRECT_CHANNELS_SOFT extension and can be controlled through
a new CLI option, --openal-direct-channels.
This allows one to send the audio data directly to the desired channel without
effects applied.
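e.g. (illustrative):

    mpv --ao=openal --openal-direct-channels=yes file.mkv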
Due to earlier misinterpretation of the Lua docs as if mp.register_idle
registers a one-shot callback, the JS docs suggested to use setTimeout.
But the behavior and Lua docs are such that it's a repeating callback
which fires just before the script thread goes to sleep.
Implement it for JS too.
As it turns out, there are multiple libmpv users who saw a need to
use the hook API. The API is kind of shitty and was never meant to be
actually public (it was mostly a hack for the ytdl script).
Introduce a proper API and deprecate the old one. The old one will
probably continue to work for a few releases, but will be removed
eventually.
There are some slight changes to the old API, but if a user followed
the manual properly, it won't break.
Mostly untested. Appears to work with ytdl_hook.
Hardware decoding things often need access to additional handles from
the windowing system, such as the X11 or Wayland display when using
vaapi. The opengl-cb had nothing dedicated for this, and used the weird
GL_MP_MPGetNativeDisplay GL extension (which was mpv specific and not
officially registered with OpenGL).
This was awkward, and a pain due to having to emulate GL context
behavior (like needing a TLS variable to store context for the pseudo GL
extension function). In addition (and not inherently due to this), we
could pass only one resource from mpv builtin context backends to
hwdecs. It was also all GL specific.
Replace this with a newer mechanism. It works for all RA backends, not
just GL. The API user can explicitly pass the objects at init time via
mpv_render_context_create(). Multiple resources are naturally possible.
The API uses MPV_RENDER_PARAM_* defines, but internally we use strings.
This is done for 2 reasons: 1. trying to leave libmpv and internal
mechanisms decoupled, 2. not having to add public API for some of the
internal resource types (especially D3D/GL interop stuff).
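As a sketch, a libmpv user on X11 (e.g. for vaapi hwdec) would now hand
over the display at creation time instead of answering
GL_MP_MPGetNativeDisplay queries (param names from render.h; the GL init
params setup is omitted and assumed to exist):

    #include <X11/Xlib.h>
    #include <mpv/render_gl.h>

    static mpv_render_context *create_ctx(mpv_handle *mpv, Display *x11_display,
                                          mpv_opengl_init_params *gl_params)
    {
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, gl_params},
            {MPV_RENDER_PARAM_X11_DISPLAY, x11_display},  /* native resource for hwdec */
            {0}
        };
        mpv_render_context *ctx = NULL;
        if (mpv_render_context_create(&ctx, mpv, params) < 0)
            return NULL;
        return ctx;
    }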
To remain sane, drop support for obscure half-working opengl-cb things,
like the DRM interop (was missing necessary things), the RPI window
thing (nobody used it), and obscure D3D interop things (not needed with
ANGLE, others were undocumented). In order not to break ABI and the C
API, we don't remove the associated structs from opengl_cb.h.
The parts which are still needed (in particular DRM interop) need to be
ported to the render API.
It's a WTF that we have something as specific in the API. It could be
argued that we should provide helpers for other language and GUI toolkit
combinations. Obviously that's not going to scale, and it's somewhat
likely that it will bitrot. The rest is said in the API changelog.
Before this change, mpv_wait_event() could inconsistently return
multiple MPV_EVENT_SHUTDOWN events to a single mpv_handle, up to the
point of spamming the event queue under certain circumstances. Change
this and just send it exactly once to each mpv_handle.
Some client API users might have weird requirements about destroying
their state asynchronously (and not reacting immediately to the SHUTDOWN
event). This change will help a bit to make this less weird and
surprising.
This changes how mpv_terminate_destroy() and mpv_detach_destroy()
behave. The doxygen in client.h tries to point out the differences. The
goal is to make this more useful to the API user (making it behave like
refcounting).
This will be refined in follow up commits.
Initialization is unfortunately closely tied to termination, so that
changes as well. This also removes earlier hacks that make sure that
some parts of FFmpeg initialization are run in the playback thread
(instead of the user's thread). This does not matter with standard
FFmpeg, and I have no reason to care about this anymore.
This adds key bindings for some semi-popular features. It also tries to
cleanup some old bindings. For example w/e for panscan is now changed to
w/W. In all cases, the old bindings are still kept and work, though.
Part of an ongoing attempt to cleanup the default key bindings.
See #973 for some context.
The playback start logic explicitly waits until the first frame has been
displayed. Usually this will introduce a wait of 1 vsync. For normal
playback this doesn't matter, but with respect to low latency needs,
this only leads to additional data getting queued up in the demuxer or
network buffers.
Another thing is that the timing logic decodes 1 frame ahead (= 1 frame
extra latency) to determine the exact duration of a frame.
To be fair, there doesn't really seem to be a hard reason why this is
needed. With the current code, enabling the option does lead to A/V
desync sometimes (if the demuxer FPS is too inaccurate), and also frame
drops at playback start in some situations. But this all seems to be
avoidable, if the timing logic were to be rewritten completely, which
should probably happen in the future. Thus the new option comes with the
warning that it can be removed any time. This is also why the option has
"hack" in the name.
Well I guess it doesn't help that much.
Also add some stuff that might help to the manpage.
The fundamental problem with some "live" sources (e.g. x11grab) is
actually that the player gets behind initially, and never thinks it has
to catch up. This is also why --untimed can help.
The purpose of the new API is to make it useable with other APIs than
OpenGL, especially D3D11 and vulkan. In theory it's now possible to
support other vo_gpu backends, as well as backends that don't use the
vo_gpu code at all.
This also aims to get rid of the dumb mpv_get_sub_api() function. The
life cycle of the new mpv_render_context is a bit different from
mpv_opengl_cb_context, and you explicitly create/destroy the new
context, instead of calling init/uninit on an object returned by
mpv_get_sub_api().
In order to make the render API generic, it's annoyingly EGL-style, and
requires you to pass in API-specific objects to generic functions. This
is to avoid explicit objects like the internal ra API has, because that
sounds more complicated and annoying for an API that's supposed to never
change.
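For instance, drawing a frame with the GL backend now goes through the
generic entry point, with GL-specific bits passed as typed parameters (a
rough sketch; the context is created once with mpv_render_context_create()
and torn down with mpv_render_context_free()):

    #include <mpv/render_gl.h>

    static void draw_frame(mpv_render_context *ctx, int fbo, int w, int h)
    {
        mpv_opengl_fbo target = { .fbo = fbo, .w = w, .h = h };
        int flip_y = 1;
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_OPENGL_FBO, &target},  /* API-specific object... */
            {MPV_RENDER_PARAM_FLIP_Y, &flip_y},      /* ...passed to a generic function */
            {0}
        };
        mpv_render_context_render(ctx, params);
    }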
The opengl_cb API will continue to exist for a bit longer, but
internally there are already a few tradeoffs, like reduced
thread-safety.
Mostly untested. Seems to work fine with mpc-qt.
the title bar is now within the window bounds instead of outside. same
as QuickTime Player. it supports several standard styles, two dark and
two light ones. additionally we have properly rounded corners now and
the borderless window also has the proper window shadow.
Also make the earliest supported macOS version 10.10.
Fixes #4789, #3944
This introduces the option --drm-format (currently used only by
context_drm_egl, vo_drm implementation is pending) which allows you to
pick between an xrgb8888 or an xrgb2101010 visual for --gpu-context=drm.
Requires a recent mesa (18.0.0_rc4 or later) to work.
This also fixes a bug when using --gpu-context=drm on a 30bpp-enabled
mesa (allow_rgb10_configs set to true). Previously it would've set up
an XRGB8888 format at the DRM/GBM level, while a 30bpp EGLConfig would
be picked, resulting in a garbled image.
Do this because retrying reading on higher levels (like the demuxer)
usually causes tons of problems. A hack like this is simpler and could
allow removing some of the higher-level retry behavior.
This works by trying to detect whether the file is appended. If we reach
EOF, check if the file size changed compared to the initial value. If it
did, it means the file was appended at least once, and we set the
p->appending flag. If that flag is set, we simply retry reading more
data every time we encounter EOF. The only way to do this is polling,
and we poll at most 10 times, waiting 200ms each time.
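A rough sketch of the described retry logic (not the actual stream code;
the helper functions are assumptions of the example):

    #include <stdbool.h>
    #include <stdint.h>

    #define APPEND_RETRIES 10
    #define APPEND_WAIT_MS 200

    /* assumed helpers, not part of this commit */
    static int read_raw(void *buf, int len);
    static int64_t current_file_size(void);
    static void sleep_ms(int ms);

    struct priv { int64_t orig_size; bool appending; };

    static int read_with_retry(struct priv *p, void *buf, int len)
    {
        for (int tries = 0; ; tries++) {
            int r = read_raw(buf, len);
            if (r != 0)
                return r;                           /* data read (or a hard error) */
            if (current_file_size() != p->orig_size)
                p->appending = true;                /* the file grew at least once */
            if (!p->appending || tries >= APPEND_RETRIES)
                return 0;                           /* treat as real EOF */
            sleep_ms(APPEND_WAIT_MS);               /* wait 200ms and poll again */
        }
    }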
This solves a number of problems simultaneously:
1. When outputting HLG, this allows tuning the OOTF based on the display
characteristics.
2. When outputting PQ or other HDR curves, this allows soft-limiting the
output brightness using the tone mapping algorithm.
3. When outputting SDR, this allows HDR-in-SDR style output, by
controlling the output brightness directly.
Closes #5521
Usable for uniquely identifying mpv instances from
subprocesses, controlling mpv with AppleScript, ...
Adds a new mp_getpid() wrapper for cross-platform reasons.
This should be helpful for the new OSX Cocoa backend, which uses
opengl-cb internally. Since it comes with a behavior change that could
possibly interfere with libmpv/opengl_cb users, we mark it as explicit
API change.
This switches the default away from "bob" to the best algorithm reported
as supported by the driver. This is convenient for users, and there is
no reason to use something worse by default.
Untested.
Before this, we made deinterlacing dependent on the video codec metadata
(AVFrame.interlaced_frame for libavcodec). So even if --deinterlace=yes
was set, we skipped deinterlacing if the flag wasn't set. This is very
unreliable and there are many streams with flags incorrectly set.
The potential problem is that this might upset people who always enabled
deinterlace and hoped it worked. But it's likely these people were
screwed by this setting anyway. The new behavior is less tricky and
easier to understand, and thus preferable. Maybe one day we could
introduce a --deinterlace=auto, which does the right thing, but of
course this would be hard to implement (especially with hwdec).
Fixes #5219.
this is meant to replace the old and not properly working vo_gpu/opengl
cocoa backend in the future. the problems are various shortcomings of
Apple's opengl implementation and buggy behaviour in certain
circumstances that couldn't be properly worked around. there are also
certain regressions on newer macOS versions from 10.11 onwards.
- awful opengl performance with a non layer backed context
- huge amount of dropped frames with an early context flush
- flickering of system elements like the dock or volume indicator
- double buffering not properly working with a non layer backed context
- bad performance in fullscreen because of system optimisations
all the problems were caused by using a normal opengl context, that
seems somewhat abandoned by apple, and are fixed by using a layer backed
opengl context instead. problems that couldn't be fixed could be
properly worked around.
this has all features our old backend has sans the wid embedding,
the possibility to disable the automatic GPU switching and taking
screenshots of the window content. the first was deemed unnecessary by
me for now, since i just use the libmpv API that others can use anyway.
second is technically not possible atm because we have to pre-allocate
our opengl context at a time the config isn't read yet, so we can't get
the needed property. third one is a bit tricky because of deadlocking
and it needed to be in sync, hopefully i can work around that in the
future.
this also has at least one additional feature or eye-candy. a properly
working fullscreen animation with the native fs. also since this is a
direct port of the parts of the old backend that could be used, though
with adaptations and improvements, this looks a lot cleaner and easier to
understand.
some credit goes to @pigoz for the initial swift build support which
i could improve upon.
Fixes: #5478, #5393, #5152, #5151, #4615, #4476, #3978, #3746, #3739,
#2392, #2217
early flushing only caused problems on macOS, which include:
- performance problems and huge amount of dropped frames
- problems with playing back video files with fps close to the display
refresh rate
- rendering at twice the rate of the video fps
- not properly detected display refresh rate
we always deactivate any early flush for macOS to fix these problems.
Disable by default.
This feature was added in 7eb342757, which allowed stream selection
at runtime. The problem with this atm is that FFmpeg will try to demux
the first packet of every track, leading to a noticeable delay when
opening the URL.
This option can be changed to be enabled by default, or removed, when
HLS/DASH demuxers are improved upstream.
Using the GL renderer for color conversion will make sure screenshots
will use the same conversion as normal video rendering. It can do this
for all types of screenshots.
The logic when to write 16 bit PNGs changes. To approximate the old
behavior, we decide by looking whether the source video format has more
than 8 bits per component. We apply this logic even for window
screenshots. Also, 16 bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB064. RGB48 has 3 components (6 bytes per pixel) and is usually not supported by GPUs for
rendering, so we have to use RGBA64, which forces an alpha channel.
Will break for users who use --target-trc and similar options.
I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.
This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.
Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
The current peak detection algorithm was very bugged (which contributed
to the excessive cross-frame flicker without long normalization) and
also didn't take into account the frame average brightness level.
The new algorithm both takes into account frame average brightness (in
addition to peak brightness), and also computes the values in a more
stable/correct way. (The old path was basically undefined behavior)
In addition to improving the algorithm, we also switch to hable tone
mapping by default, and try to enable peak computation automatically
whenever possible (compute shaders + SSBOs supported). We also make the
desaturation milder, after extensive testing during libplacebo
development.
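In option terms this corresponds roughly to the following defaults
(written out explicitly here; names as in the manpage):

    tone-mapping=hable       # new default
    hdr-compute-peak=yes     # enabled automatically when compute shaders + SSBOs are available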
I also had to compensate a bit for the representational differences
between mpv and libplacebo (libplacebo treats 1.0 as the reference peak,
but mpv treats it as the nominal peak), but it shouldn't have caused any
problems.
This is still not quite the same as libplacebo, since libplacebo also
allows tagging the desired scene average brightness on the output, and
it also supports reading the scene average brightness from static
metadata (MaxFALL) where available. But those changes are a bit more
involved. It's possible we could also read this from metadata in the
future, but we have problems communicating with AVFrames as it is and I
don't want to touch the mpv colorimetry structs for the time being.
Similar to the previous commit, and for the same reasons. Unlike with
af_scaletempo, resampling does not have a natural frame size, so we set
an arbitrary size limit on output frames. We add a new option to control
this size, although I'm not sure whether anyone will use it, so mark it
for testing only.
Note that we go through some effort to avoid buffering data in
libswresample itself. One reason is that we might have to reinitialize
the resampler completely when changing speed, which drops the buffered
data. Another is that I'm not sure whether the resampler will do the
right thing when applying dynamic speed changes.
MPlayer used this to distinguish multiple decoder wrappers (such as
libavcodec vs. binary codec loader vs. builtin decoders). It lost
meaning in mpv as non-libavcodec things were dropped. Now it doesn't
serve any purpose anymore.
Parsing was removed quite a while ago, and the recent filter change
removed any use of the internal family field. Get rid of it.
FFmpeg only supports http proxies and ignores the setting if
the resulting url is https. Also, no SOCKS.
Use it like `--ytdl-raw-options=proxy=[http://127.0.0.1:3128]` so
it doesn't confuse mpv because of the colons.
You need to pass it as an option because youtube-dl doesn't give
us the proxy.
Or just set `http_proxy` environment variable as recommended before.
Added example using -append, which doesn't need escaping.
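Presumably along the lines of:

    --ytdl-raw-options-append=proxy=http://127.0.0.1:3128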
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.
This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.
vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that can not be inserted by the user manually.
f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is
conceptually easy, but a big mess due to the data flow changes).
The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)
In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.
vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.
The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.
Exporting VO capabilities is still a big mess (mp_stream_info stuff).
The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.
f_async_queue is unused.
Restores behaviour prior to aef2ed5dc1.
That change was apparently unpopular. However, given the amount of
complaining over how hard it is to change the defaults by rebinding every
key, I think the extra option introduced by this commit is justified.
Technically not all behaviour is restored, because now --no-osd-bar will
not instead display the msg text on seek. I think that feature was a
little weird and is now easy enough to remedy with the --osd-on-seek
option.
This reverts commit 9812e276aa.
This was apparently unpopular. I still think the pause OSD should be the
same as seek even if it's not visible by default, but it seems that
whether to display a given property change is currently conflated with
what to display.
The reverted behaviour can be restored by adding something like the
following to input.conf:
SPACE cycle pause; show_progress
Requested. See manpage additions.
The main reason why this goes through the trouble to keep the
action/operation parameter separate is so that we don't expose some
option parser implementation details to the command (although that is a
relatively weak reason), and also to make it more different from the
"set" command, which can't support this type of option as it goes
through the property layer.
Fixes #5435.
And use it for 2 demuxer options. It could be used for more options
later. (Though the --cache options can not use this, because they use KB
as base unit.)
This commit introduces the multiply-pitch af-command. Users may bind
keys to this command in order to incrementally adjust the pitch of a
track. This will probably mostly be useful for musicians trying to
transpose up and down by semitones (a semitone corresponds to a
frequency ratio of 2^(1/12) ≈ 1.059463) without having to calculate
the correct ratio beforehand.
As an example, here is an input.conf to test this feature:
{ af-command all multiply-pitch 0.9438743126816935
} af-command all multiply-pitch 1.059463094352953
The future direction might be not having such a user-visible filter at
all, similar to how vf_scale went away (or actually, redirects to
libavfilter's vf_scale).