This was incorrect, and grossly misleading. It created the impression
that the buffer is passed to mpv_render_context_create(), instead of
mpv_render_context_render().
Sharing a process sure is hard in POSIX.
The rationale is that you'd have to handle EINTR on every single
blocking syscall. stream_file.c does not seem to handle it on read()
calls.
It appears that on most modern systems, this can happen only if you call
sigaction(), and incompetently forget to add SA_RESTART. signal()
usually adds it.
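For reference, a minimal sketch of the two options (plain POSIX, nothing
mpv-specific): install handlers with SA_RESTART, or retry every blocking
call on EINTR.

    #include <errno.h>
    #include <signal.h>
    #include <unistd.h>

    // Option 1: install the handler with SA_RESTART, so blocking syscalls
    // like read() get restarted instead of failing with EINTR (signal()
    // usually does this for you already).
    static void install_handler(int sig, void (*fn)(int))
    {
        struct sigaction sa = {0};
        sa.sa_handler = fn;
        sa.sa_flags = SA_RESTART;
        sigemptyset(&sa.sa_mask);
        sigaction(sig, &sa, NULL);
    }

    // Option 2: what every blocking call would otherwise have to do -
    // retry on EINTR (the thing stream_file.c's read() calls skip).
    static ssize_t read_retry(int fd, void *buf, size_t n)
    {
        ssize_t r;
        do {
            r = read(fd, buf, n);
        } while (r < 0 && errno == EINTR);
        return r;
    }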
This can be used to make vo_libmpv render video to a memory buffer. It
only adds a new backend API that takes memory surfaces. All the render
API (such as frame rendering control and so on) is reused.
I'm not quite convinced of the usefulness of this, and until now I
always resisted providing something like this. It only seems to
facilitate inefficient implementation. But whatever.
Unfortunately, this duplicates the software rendering glue code yet
again (like it exists in vo_x11, vo_wlshm, vo_drm, and probably more).
But in theory, these could reuse this backend in the future, just like
vo_gpu could reuse the render_gl API.
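Hedged sketch of what usage could look like (parameter names as added in
render.h: MPV_RENDER_API_TYPE_SW and the MPV_RENDER_PARAM_SW_* values;
buffer allocation and stride handling are up to the API user):

    #include <mpv/render.h>

    // Create a software render context and render a frame into a
    // caller-owned memory buffer. "rgb0" is packed 8-bit RGB, 4 bytes/pixel.
    static int render_frame_to_memory(mpv_handle *mpv, void *buf,
                                      int w, int h, size_t stride)
    {
        mpv_render_param init_params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_SW},
            {0}
        };
        mpv_render_context *ctx = NULL;
        if (mpv_render_context_create(&ctx, mpv, init_params) < 0)
            return -1;

        int size[2] = {w, h};
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_SW_SIZE, size},
            {MPV_RENDER_PARAM_SW_FORMAT, "rgb0"},
            {MPV_RENDER_PARAM_SW_STRIDE, &stride},
            {MPV_RENDER_PARAM_SW_POINTER, buf},
            {0}
        };
        int r = mpv_render_context_render(ctx, params);
        mpv_render_context_free(ctx);
        return r;
    }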
Fixes: #7852
May or may not help when dealing with playlist loading in scripts. It's
supposed to help with the mean fact that loading a recursive playlist
will essentially edit the playlist behind the API user's back.
I mostly intend this for internal purposes. Probably pretty useless for
external API users, but on the other hand trivial to expose. While it
makes a lot of sense internally, I'll probably regret exposing it.
Both Lua and the JSON IPC code need to convert the mpv_event struct (and
everything it points to) to Lua tables or JSON.
I was getting sick of having to make the same changes to Lua and IPC. Do
what has been done everywhere else, and let the core handle this by
going through mpv_node (which is supposed to serve both Lua tables and
JSON, and potentially other scripting language backends). Expose it as
a new libmpv API function.
The new API is still a bit "rough" and support for other event types
might be added in the future.
This silently adds support for the playlist_entry_id fields to both Lua
and JSON IPC.
There is a small API change for Lua; I don't think this matters, so I
didn't care about compatibility. The new code in client.c is mashed up
from the Lua and the IPC code. The manpage additions are moved from the
Lua docs, and made slightly more "general".
Some danger for unintended regressions both in Lua and IPC. Also damn
these node functions suck, expect crashes due to UB.
Not sure why this became more code instead of less compared to before
(according to the diff stat), even though some code duplication across
Lua and IPC was removed. Software development sucks.
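For reference, the intended C-side usage looks roughly like this (assuming
the function ended up as mpv_event_to_node() in client.h; treat this as a
sketch):

    #include <mpv/client.h>

    // Convert any incoming event into an mpv_node - the same representation
    // the Lua tables and JSON IPC messages are now generated from.
    static void dump_event(mpv_event *ev)
    {
        mpv_node node;
        if (mpv_event_to_node(&node, ev) >= 0) {
            // node is an MPV_FORMAT_NODE_MAP with at least an "event" name
            // field, plus event-specific data (e.g. playlist_entry_id where
            // applicable).
            mpv_free_node_contents(&node);
        }
    }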
If the user manages to run a "loadfile x append" command before the loop
in mp_play_files() is entered, then the player could start playing
these. This isn't expected, because appending files to the playlist in
idle mode does not normally start playback. It could happen because
there is a short time window where commands are processed before the
loop is entered (such as running the command when a script is loaded).
The idle mode semantics are pretty weird: if files were provided in
advance (on the command line), then these should be played immediately.
But if idle mode was already entered, and something is appended to the
playlist using "append", i.e. without explicitly triggering playback,
then it should remain in idle mode.
Try to follow this by redefining PT_STOP to strictly mean idle mode.
Remove the playlist->current check from idle_loop(), since only the
stop_play field counts now (cf. what mp_set_playlist_entry() does).
This actually introduces the possibility that playlist->current, and
with it playlist-pos, are set to something, even though playback is not
active or being started. Previously, this was only possible during state
transitions, such as when changing playlist entries.
Very annoyingly, this means the current way MPV_EVENT_IDLE was sent
doesn't work anymore. Logically, idle mode can be "active" even if
idle_loop() was not entered yet (between the time after mp_initialize()
and before the loop in mp_play_files()). Instead of worrying about this,
redo the "idle-active" property, and deprecate the event.
See: #7543
No replacement. Qt or C++ code has no business in this repository, and
new code (even if it uses Qt) should not use it. Get rid of it.
We consider the libmpv API itself as stable. Symbols can be deprecated,
but not be removed. However, qthelper.hpp was never considered part of
the libmpv API. There are no ABI implications either, since it's a header-
only implementation that uses C API symbols only. It's just a header
provided for convenience for Qt/C++ programs (i.e. extremely limited
usefulness).
mpv_event_property (for property observation) actually never sets an
error status. You cannot distinguish between unavailable properties and
properties which returned an error. Not sure if it ever did.
Another thing nobody will read. I'm attempting to document the rules by
which incompatible changes can be made. These rules have always been
present in this project, but I don't think they were written down. Or
maybe they were, but I forgot where.
I think due to the time of the day it became increasingly incoherent
(not necessarily near the end of the text). Hopefully no logical or
Freudian lapses in there.
There's potential confusion about how long a process started with the
"subprocess" command is allowed to live. Add some more explanations
regarding "subprocess" specifics (such as the playback_only field), and
things that apply to asynchronous commands in general.
Partially for #7025.
This allows stream_cb backends to implement blocking
behavior inside read_fn, and still get notified when the user
wants to cancel and stop playback.
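Roughly, a backend is expected to look like this (sketch only; the locking
scheme and the buffering are made up for illustration):

    #include <mpv/stream_cb.h>
    #include <pthread.h>

    // Hypothetical backend state: read_fn blocks on a condition variable
    // until data arrives, and cancel_fn wakes it up so playback can stop
    // promptly.
    struct my_stream {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        int cancelled;
        int have_data;   // set by a hypothetical network thread
    };

    static int64_t my_read(void *cookie, char *buf, uint64_t nbytes)
    {
        struct my_stream *s = cookie;
        pthread_mutex_lock(&s->lock);
        while (!s->cancelled && !s->have_data)
            pthread_cond_wait(&s->wakeup, &s->lock);
        int64_t r = s->cancelled ? -1 : 0; // real code: copy up to nbytes into buf
        pthread_mutex_unlock(&s->lock);
        return r; // -1 = error, 0 = EOF, >0 = number of bytes read
    }

    static void my_cancel(void *cookie)
    {
        struct my_stream *s = cookie;
        pthread_mutex_lock(&s->lock);
        s->cancelled = 1;
        pthread_cond_broadcast(&s->wakeup);
        pthread_mutex_unlock(&s->lock);
    }

    // In the mpv_stream_cb_open_ro_fn:
    //     info->read_fn = my_read; info->cancel_fn = my_cancel; ...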
Signed-off-by: Aman Gupta <aman@tmm1.net>
I think a popular libmpv application did exactly this: enabling advanced
control, and then receiving deadlocks. I didn't confirm it, though. In
any case, the API docs should avoid tricking users into making this easy
mistake.
The render API (vo_libmpv) had potential deadlock problems with
MPV_RENDER_PARAM_ADVANCED_CONTROL. This required vd-lavc-dr to be
enabled (the default). I never observed these deadlocks in the wild
(doesn't mean they didn't happen), although I could specifically provoke
them with some code changes.
The problem was mostly about DR (direct rendering, letting the video
decoder write to OpenGL buffer memory). Allocating/freeing a DR image
needs to be done on the OpenGL thread, even though _lots_ of threads are
involved with handling images. Freeing a DR image is a special case that
can happen any time. dr_helper.c does most of the evil magic of
achieving this. Unfortunately, there was a (sort of) circular lock
dependency: freeing an image while certain internal locks are held would
trigger the user's context update callback, which in turn would call
mpv_render_context_update(), which would process all pending free requests
and then acquire an internal lock - which the caller might not release
until a further DR image could be freed.
"Solve" this by making freeing DR images asynchronous. This is slightly
risky, but actually not much. The DR images will be free'd eventually.
The biggest disadvantage is probably that debugging might get trickier.
Any solution to this problem will probably add the images to be freed to
some sort of queue, and then process it later. I considered making this more
explicit (so there'd be a point where the caller forcibly waits for all
queued items to be free'd), but discarded these ideas as this probably
would only increase complexity.
Another consequence is that freeing DR images on the GL thread is not
synchronous anymore. Instead, mpv_render_context_update() will do it
with a delay. This seems roundabout, but doesn't actually change
anything, and avoids additional code.
This also fixes the fact that the render API required the API user to
remain on the same thread, even though this wasn't documented. As such,
it was a bug. OpenGL essentially forces you to do all GL usage on a
single thread, but in theory the API user could for example move the GL
context to another thread.
The API bump is because I think you can't make enough noise about this.
Since we don't backport fixes to old versions, I'm specifically stating
that old versions are broken, and I'm supplying workarounds.
Internally, dr_helper_create() does not use pthread_self() anymore, thus
the vo.c change. I think it's better to make binding to the current
thread as explicit as possible.
Of course it's not sure that this fixes all deadlocks (probably not).
This change adds a version of `mpv_command` that also returns a result.
The main rationale behind this is that `mpv_command_node` requires defining
multiple structs before you can even use it, which results in a pretty
painful-to-use interface just to get the result of a command.
There isn't really a good name for this function, so I'm open to
suggestions on a better name for it.
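For illustration, a sketch of the intended ergonomics (assuming the
`mpv_command_ret` name that ended up in client.h; the command used is just
an example of one that returns a value):

    #include <stdio.h>
    #include <mpv/client.h>

    // Run a command from a flat string array and still get its result back
    // as an mpv_node, without building the nested node structs that
    // mpv_command_node() needs.
    static void print_expanded_path(mpv_handle *h)
    {
        const char *cmd[] = {"expand-path", "~~/scripts", NULL};
        mpv_node result;
        if (mpv_command_ret(h, cmd, &result) >= 0) {
            if (result.format == MPV_FORMAT_STRING)
                printf("%s\n", result.u.string);
            mpv_free_node_contents(&result);
        }
    }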
This is because dr_helper.c calls pthread_self(). It's used to avoid
deadlocks. This was not a problem internal to mpv (dr_helper.c was first
created for vo_gpu.c), but it leaked into libmpv as an unintended
consequence of an implementation detail.
This problem existed for MPV_RENDER_PARAM_ADVANCED_CONTROL users, ever
since it was introduced.
Maybe this could be done differently, but it's certainly very tricky.
The pthread_t is used to avoid deadlocks when the caller is on the same
thread as the one which needs to handle the calls.
The critical code is in free_dr_buffer_on_dr_thread(). It's called when
a DR image is free'd. Freeing a DR image requires access to the GL
context, i.e. it just has to run on the "GL" thread. In libmpv's case,
this is done by calling the API user's wakeup call, and the user will
eventually call mpv_render_context_update() on the "GL" thread, which in
turn calls mp_dispatch_queue_process() on the dispatch queue that was
passed to dr_helper_create(), which then calls av_buffer_unref(), which
calls gl_video_dr_free_buffer(). (God, who came up with this Rube
Goldberg shit.)
The above case will typically happen when e.g. vd_lavc.c (internal mpv
thread) frees images. Allocation works similarly; deallocation is just
trickier because calls to free images are everywhere.
Now consider if mpv_render_context_render() releases a DR image. This
function is always called on the "GL" thread. Going through the dispatch
queue would obviously deadlock, because according to the render API
rules, the user's wakeup callback must not call mpv_render_context_update(),
instead it has to signal the "GL" thread, which must wait until
mpv_render_context_render() returns. To avoid this deadlock, dr_helper.c
checks whether the calling thread is the "GL" thread, and if so, allows
a reentrant call (basically gl_video_dr_free_buffer() is called
directly).
While with GL, you usually will stay on the same thread, API users were
in theory allowed to e.g. move the GL context to a different thread. But
this dr_helper issue means this is not possible. Moreover, other APIs
will not have the same thread-locality problems as GL.
FUCK THIS SHIT
In theory, libmpv could provide an API to "move" the context to a
separate thread, but let's not start with even more broken crap like
this. I'm not sure yet whether there is an easy solution. Maybe all
mpv_render_*() functions could be guarded by entry/exit functions, which
set/unset a flag that tells dr_helper.c to use reentrant calls.
libmpv users which do not set MPV_RENDER_PARAM_ADVANCED_CONTROL were
never affected.
Extending the client-allocated mpv_opengl_drm_params struct
constituted a break of ABI that could cause UB.
Create a clean break by deprecating "drm_params" and related structs
and enum values, and replacing it with "drm_params_v2".
Also fix some comments and code that wrongly assumed that open could
return any other negative number than -1 for failure.
This commit updates the libmpv version to 1.104
This commit bumps the libmpv version to 1.102
drm-osd-plane -> drm-draw-plane
drm-video-plane -> drm-drmprime-video-plane
drm-osd-size -> drm-draw-surface-size
"draw plane", as in the plane that OpenGL draws to, whether it be
video + OSD or just OSD.
"drmprime video plane", as in the plane used for hwdec video imported
via drmprime.
"draw surface size", as in the size of the surface used for the draw plane
The new names are invariant whether or not hwdec_drmprime_drm is being
used or not. The original naming was very confusing, as when doing
regular rendering (swdec or vaapi) the video would be displayed on the
"OSD plane", and the "Video plane" would remain unused.
Add another parameter to mpv_opengl_drm_params to hold the FD to the
render node, so that the fd can be passed to hwdec_vaegl.
The render node is opened in context_drm_egl and inferred from the
primary device fd using drmGetRenderDeviceNameFromFd.
Before this change, only 1 command or so had named arguments. There is
no reason why other commands can't have them, except that it's a bit of
work to add them.
Commands with a variable number of arguments are inherently incompatible
with named arguments, such as the "run" command. They still have dummy
names, but obviously you can't assign multiple values to a single named
argument (unless the argument has an array type, which would be
something different). For now, disallow using named argument APIs with
these commands. This might change later.
2 commands are adjusted to not need a separate default value by changing
flag constants. (The numeric values are C only and can't be set by
users.)
Make the command syntax in the manpage more consistent. Now none of the
allowed choice/flag names are in the command header, and all arguments
are shown with their proper name and quoted with <...>.
Some places in the manpage and the client.h doxygen are updated to
reflect that most commands support named arguments. In addition, try to
improve the documentation of the syntax and need for escaping etc. as
well.
(Or actually most uses of the word "argument" should be "parameter".)
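For C API users, named arguments are passed to mpv_command_node() as an
MPV_FORMAT_NODE_MAP with the command name under "name" and each parameter
under its documented name. A hedged sketch ("subprocess" with its "args"
and "playback_only" parameters is used purely as an example):

    #include <mpv/client.h>

    static void run_subprocess_named(mpv_handle *h)
    {
        mpv_node arg_vals[] = {
            {.format = MPV_FORMAT_STRING, .u.string = "date"},
        };
        mpv_node_list arg_list = {.num = 1, .values = arg_vals};

        char *keys[] = {"name", "args", "playback_only"};
        mpv_node vals[] = {
            {.format = MPV_FORMAT_STRING,     .u.string = "subprocess"},
            {.format = MPV_FORMAT_NODE_ARRAY, .u.list = &arg_list},
            {.format = MPV_FORMAT_FLAG,       .u.flag = 0},
        };
        mpv_node_list map = {.num = 3, .values = vals, .keys = keys};
        mpv_node cmd = {.format = MPV_FORMAT_NODE_MAP, .u.list = &map};

        mpv_node result;
        if (mpv_command_node(h, &cmd, &result) >= 0)
            mpv_free_node_contents(&result);
    }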
Many asynchronous commands are potentially long running operations, such
as loading something from network or running a foreign process.
Obviously it shouldn't just be possible for them to freeze the player if
they don't terminate as expected. Also, there will be situations where
you want to explicitly stop some of those operations. So add
an infrastructure for this.
Commands have to support this explicitly. The next commit uses this to
actually add support to a command.
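For API users, the end result of this is mpv_abort_async_command() in
client.h. Roughly (the command used here is just a placeholder; as said
above, only commands that explicitly support aborting will actually be
interrupted):

    #include <mpv/client.h>

    enum { SLOW_CMD_TOKEN = 123 };

    static void start_slow_command(mpv_handle *h)
    {
        const char *cmd[] = {"loadfile", "https://example.com/stream.mkv", NULL};
        mpv_command_async(h, SLOW_CMD_TOKEN, cmd);
    }

    static void cancel_slow_command(mpv_handle *h)
    {
        // Best effort: the MPV_EVENT_COMMAND_REPLY with this reply_userdata
        // is still delivered, typically with an error status if aborted.
        mpv_abort_async_command(h, SLOW_CMD_TOKEN);
    }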
Named arguments should make it easier to have long-term compatibility,
even if command arguments get added or removed. They're also much nicer
for commands with a large number of arguments, especially if many
arguments are optional.
As of this commit, this cannot be used, because there is no command yet
which supports them. See the following commit.
Both asynchronous and synchronous calls used to be put into the core's
dispatch queue. Also, asynchronous calls were actually synchronous, just
without forcing a wait on the client's thread. This meant that both
kinds of calls were always strictly ordered.
A longer time ago, synchronous calls were changed to simply lock the
core. This could possibly lead to reordering. Recently, some commands
were changed to run on worker threads, which made the order even looser.
Also remove another now incorrect doxygen comment regarding async
commands.
This patch adds
- DRM connector object to atomic context.
- fd property to the drm atomic object as well as a method to read blob type properties.
This makes it possible to ensure that the proper connector is picked up, especially when
specifying it from the command line, and also ensures we're using the right one when
embedding with interop into an application.
That new API was introduced and allows passing several native resources.
This uses that mechanism for drm resources rather than the deprecated
opengl-cb structs.
This patch therefore adds two structs that can be used with the drm atomic interop.
- mpv_opengl_drm_params : which will hold all the drm handles
- mpv_opengl_drm_osd_size : which will hold osd layer size
This commit adds a drm-osd-size=WxH parameter to the command line, which
allows defining the OSD plane dimensions. The OSD can be upscaled to
screen resolution when rendering the OSD at video resolution is too heavy.
This is especially useful for UHD modes on embedded devices where
the GPU cannot handle UHD modes at a decent framerate.
Strictly speaking redundant, but probably helpful.
In particular I want to push MPV_RENDER_PARAM_ADVANCED_CONTROL. Not
enabling this parameter is actually not very sane.
Attempts to enable the following things:
- let a render API user do "proper" audio-sync video timing itself
- make it possible to not re-render repeated frames if the API user has
better mechanisms available (e.g. waiting for a DisplayLink cycle
instead)
- allow the user to delay or skip redraws if it makes sense
Basically this information will be needed by API users who want to be
"clever" about optimizing timing and rendering.
DR (letting the decoder allocate texture memory) requires running the
allocation on the render thread. This is rather hard with the render
API, because the user controls this thread and when it's entered. It was
not possible until now.
This commit adds a bunch of infrastructure to make this possible. We add
a new optional mode (MPV_RENDER_PARAM_ADVANCED_CONTROL) which basically
lets the user's render thread and libmpv agree how this should be done.
Misuse would lead to deadlocks. To make this less likely, strictly
document thread safety/locking issues. In particular, document which
libmpv functions can be called without issues. (The rest has to be
assumed unsafe.)
The worst issue is destruction of the render context while video is
still active. To avoid certain unintended recursive locks (i.e.
deadlocks, unless we'd make the locks recursive), make the update
callback lock separate. Make "killing" the video chain asynchronous, so
we can do extra work while video is being destroyed.
Because losing wakeups is a big deal, setting the update callback now
triggers a wakeup. (It would have been better if the wakeup callback
were a parameter to mpv_render_context_create(), but too late.)
This commit does not add DR yet; the following commit does this.
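What opting in looks like, roughly (GL init parameters abbreviated; the
update callback must only wake up the user's render thread, never call back
into libmpv directly):

    #include <mpv/render_gl.h>

    static void on_update(void *ctx)
    {
        // Signal the user's render thread; that thread later calls
        // mpv_render_context_update() and acts on the returned flags.
        (void)ctx;
    }

    static mpv_render_context *create_ctx(mpv_handle *mpv,
                                          mpv_opengl_init_params *gl_init)
    {
        int advanced = 1;
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, gl_init},
            {MPV_RENDER_PARAM_ADVANCED_CONTROL, &advanced},
            {0}
        };
        mpv_render_context *ctx = NULL;
        if (mpv_render_context_create(&ctx, mpv, params) < 0)
            return NULL;
        mpv_render_context_set_update_callback(ctx, on_update, NULL);
        return ctx;
    }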
Normally, MPV_RENDER_PARAM* arguments are copied, unless documented
otherwise. Of course we can't copy X11 Display or Wayland wl_display
types, but for arguments that are "summarized" in a struct (like
MPV_RENDER_PARAM_OPENGL_FBO), a copy is expected.
Also add some unused infrastructure to make this explicit, and to make
it easier to add parameter types that require a copy.
Untested.
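In practice this means e.g. the FBO description can live on the stack for
the duration of the call; a sketch:

    #include <mpv/render_gl.h>

    // MPV_RENDER_PARAM_OPENGL_FBO data is copied by
    // mpv_render_context_render(), so the struct does not need to outlive
    // the call.
    static void draw_frame(mpv_render_context *ctx, int fbo, int w, int h)
    {
        mpv_opengl_fbo target = {.fbo = fbo, .w = w, .h = h};
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_OPENGL_FBO, &target},
            {0}
        };
        mpv_render_context_render(ctx, params);
    }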
As it turns out, there are multiple libmpv users who saw a need to
use the hook API. The API is kind of shitty and was never meant to be
actually public (it was mostly a hack for the ytdl script).
Introduce a proper API and deprecate the old one. The old one will
probably continue to work for a few releases, but will be removed
eventually.
There are some slight changes to the old API, but if a user followed
the manual properly, it won't break.
Mostly untested. Appears to work with ytdl_hook.
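The new API boils down to this pattern (sketch; "on_load" is just the hook
ytdl_hook uses):

    #include <mpv/client.h>
    #include <string.h>

    static void register_hooks(mpv_handle *h)
    {
        // reply_userdata 0, default priority 0
        mpv_hook_add(h, 0, "on_load", 0);
    }

    static void handle_event(mpv_handle *h, mpv_event *ev)
    {
        if (ev->event_id == MPV_EVENT_HOOK) {
            mpv_event_hook *hook = ev->data;
            if (strcmp(hook->name, "on_load") == 0) {
                // ... e.g. inspect/replace "stream-open-filename" here ...
            }
            // Playback stays blocked until the hook is continued.
            mpv_hook_continue(h, hook->id);
        }
    }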
Hardware decoding things often need access to additional handles from
the windowing system, such as the X11 or Wayland display when using
vaapi. The opengl-cb had nothing dedicated for this, and used the weird
GL_MP_MPGetNativeDisplay GL extension (which was mpv specific and not
officially registered with OpenGL).
This was awkward, and a pain due to having to emulate GL context
behavior (like needing a TLS variable to store context for the pseudo GL
extension function). In addition (and not inherently due to this), we
could pass only one resource from mpv builtin context backends to
hwdecs. It was also all GL specific.
Replace this with a newer mechanism. It works for all RA backends, not
just GL. The API user can explicitly pass the objects at init time via
mpv_render_context_create(). Multiple resources are naturally possible.
The API uses MPV_RENDER_PARAM_* defines, but internally we use strings.
This is done for 2 reasons: 1. trying to leave libmpv and internal
mechanisms decoupled, 2. not having to add public API for some of the
internal resource types (especially D3D/GL interop stuff).
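For example, passing the X11 display now looks roughly like this (sketch;
Wayland etc. work the same way via their own MPV_RENDER_PARAM_* defines):

    #include <mpv/render_gl.h>
    #include <X11/Xlib.h>

    // The windowing system handle is passed at init time instead of being
    // queried through the old GL_MP_MPGetNativeDisplay pseudo-extension.
    static mpv_render_context *create_with_x11(mpv_handle *mpv,
                                               mpv_opengl_init_params *gl_init,
                                               Display *display)
    {
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, gl_init},
            {MPV_RENDER_PARAM_X11_DISPLAY, display},
            {0}
        };
        mpv_render_context *ctx = NULL;
        if (mpv_render_context_create(&ctx, mpv, params) < 0)
            return NULL;
        return ctx;
    }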
To remain sane, drop support for obscure half-working opengl-cb things,
like the DRM interop (was missing necessary things), the RPI window
thing (nobody used it), and obscure D3D interop things (not needed with
ANGLE, others were undocumented). In order not to break ABI and the C
API, we don't remove the associated structs from opengl_cb.h.
The parts which are still needed (in particular DRM interop) need to be
ported to the render API.
It's a WTF that we have something this specific in the API. It could be
argued that we should provide helpers for other language and GUI toolkit
combinations. Obviously that's not going to scale, and it's somewhat
likely that it will bitrot. The rest is said in the API changelog.