This protocol is pretty important since it finally lets us solve the
longstanding issue of fractional scaling in wayland (no more mpv
rendering above the target resolution only to be scaled down). This
protocol can also completely replace the buffer_scale usage we
currently rely on for integer scaling, so hopefully that can be removed
sometime in the future. Note that vo_dmabuf_wayland is omitted from the
fractional scale handling because we want the compositor to handle all
the scaling for that VO.
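For reference, this is roughly what the client-side handling looks like. A
minimal sketch, assuming the wayland-scanner generated header; variable and
helper names are illustrative, not mpv's actual code:
```
#include <wayland-client.h>
#include "fractional-scale-v1-client-protocol.h"

// The compositor reports the preferred scale as the numerator of a
// fraction with denominator 120, so e.g. 180 means a 1.5x scale.
static void preferred_scale(void *data,
                            struct wp_fractional_scale_v1 *fractional_scale,
                            uint32_t scale)
{
    double *out = data;
    *out = scale / 120.0; // render buffers at logical_size * (*out)
}

static const struct wp_fractional_scale_v1_listener scale_listener = {
    .preferred_scale = preferred_scale,
};

// After binding wp_fractional_scale_manager_v1 from the registry:
//   struct wp_fractional_scale_v1 *fs =
//       wp_fractional_scale_manager_v1_get_fractional_scale(manager, surface);
//   wp_fractional_scale_v1_add_listener(fs, &scale_listener, &scale_out);
```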
Fixes #9443.
See https://sw.kovidgoyal.net/kitty/graphics-protocol/
This makes no attempt at querying terminal features or handling
terminal errors, as that would require mpv to pass the response codes
from the terminal to the VO instead of interpreting them as keystrokes
made by the user and acting on them very unpredictably.
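For reference, per the protocol documentation linked above, a frame is
transmitted as an APC escape with key=value control data and a base64
payload. A minimal sketch of the transmit-and-display form (the caller is
assumed to have base64-encoded the pixel data; real payloads over 4096
bytes must be split into chunks with the m= key):
```
#include <stdio.h>

// Transmit and display (a=T) one 24-bit RGB frame (f=24) of size s x v.
static void kitty_show_frame(int w, int h, const char *b64_rgb)
{
    printf("\033_Gf=24,s=%d,v=%d,a=T;%s\033\\", w, h, b64_rgb);
    fflush(stdout);
}
```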
Tested with kitty and konsole.
Fixes #9605.
The new single-pixel-buffer protocol is designed to optimize the case
for using a solid color as an underlay wl_surface. It works the same as
the wl_shm 1x1 pixel trick currently used, but it allows the compositor
to make optimizations with more certainty than the wl_shm trick.
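A minimal sketch of the new request, assuming the generated protocol
header; this creates an opaque black buffer for the underlay surface:
```
#include <stdint.h>
#include <wayland-client.h>
#include "single-pixel-buffer-v1-client-protocol.h"

// Channel values are 32-bit with UINT32_MAX representing 1.0,
// premultiplied alpha; the result is attached like any wl_buffer.
static struct wl_buffer *make_black_buffer(
        struct wp_single_pixel_buffer_manager_v1 *mgr)
{
    return wp_single_pixel_buffer_manager_v1_create_u32_rgba_buffer(
            mgr, 0, 0, 0, UINT32_MAX);
}
```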
The content-type protocol allows mpv to send the compositor a hint
about the type of content being displayed on its surface, so it can
potentially make some sort of optimization. Fundamentally, this is
pretty simple, but since this requires a very new wayland-protocols
version (1.27), we have to mess with the build to add a new define and
add a bunch of if's in here. The protocol itself exposes 4 different
types of content: none, photo, video, and game.
To do that, let's add a new option (wayland-content-type) that lets
users control what hint to send to the compositor. Since the previous
commit adds a VOCTRL that notifies us about the content being displayed,
we can also add an auto value to this option. As you'd expect, the
compositor hint is set to photo if mpv's core detects an image, to
video for other things, and to none for the special case of forcing a
window when there is no video track. For completeness' sake, game is
also allowed as a value for this option, but in practice there
shouldn't be a reason to use that.
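The protocol usage itself is a one-liner per hint. A minimal sketch,
assuming the generated protocol header:
```
#include <wayland-client.h>
#include "content-type-v1-client-protocol.h"

// Tell the compositor the surface is showing video (the other enum
// values are ..._NONE, ..._PHOTO, and ..._GAME).
static void set_video_hint(struct wp_content_type_manager_v1 *mgr,
                           struct wl_surface *surface)
{
    struct wp_content_type_v1 *ct =
        wp_content_type_manager_v1_get_surface_content_type(mgr, surface);
    wp_content_type_v1_set_content_type(ct, WP_CONTENT_TYPE_V1_TYPE_VIDEO);
}
```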
Wayland VO that can display images from either vaapi or drm hwdec
The PR adds the following changes:
1. a context_wldmabuf context with no gl dependencies
2. no-op ra_wldmabuf and dmabuf_interop_wldmabuf objects
no-op because there is no need to map/unmap the drmprime buffer,
and there is no need to manage any textures.
Tested on both x86_64 and rk3399 AArch64.
We've had some annoying names for interops, which we can't simply
rename because that would break config files and command lines. So we
need to put a little more effort in and add a concept of legacy names
that allow us to continue loading them, but with a warning.
The ones I'm renaming here are:
* vaapi-egl -> vaapi (vaapi works with Vulkan too)
* drmprime-drm -> drmprime-overlay (actually describes what it does)
* cuda-nvdec -> cuda (cuda interop is not nvdec specific)
Add xoshiro as a PRNG implementation instead of relying
on srand() and rand() from the C standard library. This,
in particular, lets us avoid platform-defined behavior with
respect to threading.
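For reference, a minimal sketch of the xoshiro256++ generator from
Blackman & Vigna's reference implementation (mpv's chosen variant, state
handling, and seeding may differ):
```
#include <stdint.h>

static uint64_t s[4]; // seeded elsewhere; must not be all zero

static inline uint64_t rotl(uint64_t x, int k)
{
    return (x << k) | (x >> (64 - k));
}

static uint64_t xoshiro256pp_next(void)
{
    uint64_t result = rotl(s[0] + s[3], 23) + s[0];
    uint64_t t = s[1] << 17;
    s[2] ^= s[0];
    s[3] ^= s[1];
    s[1] ^= s[2];
    s[0] ^= s[3];
    s[2] ^= t;
    s[3] = rotl(s[3], 45);
    return result;
}
```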
In the confusing landscape of hardware video decoding APIs, we have had
a long-standing support gap for the v4l2 based APIs implemented for the
various SoCs from Rockchip, Amlogic, Allwinner, etc. While VAAPI is the
de facto default for desktop GPUs, the developers who work on these SoCs
(who are not the vendors!) have preferred to implement kernel APIs
rather than maintain a userspace driver as VAAPI would require.
While there are two v4l2 APIs (m2m and requests), and multiple forks of
ffmpeg where support for those APIs languishes without reaching
upstream, we can at least say that these APIs export frames as DRMPrime
dmabufs, and that they use the ffmpeg drm hwcontext.
With those two constants, it is possible for us to write a
hwdec-interop without worrying about the mess underneath - for the most
part.
Accordingly, this change implements a hwdec-interop for any decoder
that produces frames as DRMPrime dmabufs. The bulk of the heavy
lifting is done by the dmabuf interop code we already had from
supporting vaapi, and which I refactored for reusability in a previous
set of changes.
When we combine that with the fact that we can't probe for supported
formats, the new code in this change is pretty simple.
This change also includes the hwcontext_fns that are required for us to
be able to configure the hwcontext used by `hwdec=drm-copy`. This is
technically unrelated, but it seemed a good time to fill this gap.
From a testing perspective, I have directly tested on a RockPRO64,
while others have tested with different flavours of Rockchip and on
Amlogic, providing m2m coverage.
I have some other SoCs that I need to spin up to test with, but I don't
expect big surprises, and when we inevitably need to account for new
special cases down the line, we can do so - we won't be able to support
every possible configuration blindly.
Annoyingly, libva and libdrm use different structs to describe dmabufs,
and if we are going to support drmprime, we must pick one format and do
some shuffling in the other case.
I've decided to use AVDRMFrameDescriptor as our internal format as this
removes the libva dependency from dmabuf_interop. That means that the
future drmprime hwdec will be able to populate it directly and the
existing hwdec_vaapi needs to copy the struct members around, but
that's cheap and not a concern.
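The shuffling is a field-for-field copy. A minimal sketch against the
two public headers (the helper name is made up, and bounds checks are
omitted):
```
#include <libavutil/hwcontext_drm.h>
#include <va/va_drmcommon.h>

// Translate libva's export descriptor into FFmpeg's descriptor, which
// is what dmabuf_interop consumes internally.
static void va_to_drm_desc(const VADRMPRIMESurfaceDescriptor *va,
                           AVDRMFrameDescriptor *drm)
{
    drm->nb_objects = va->num_objects;
    for (uint32_t i = 0; i < va->num_objects; i++) {
        drm->objects[i].fd = va->objects[i].fd;
        drm->objects[i].size = va->objects[i].size;
        drm->objects[i].format_modifier = va->objects[i].drm_format_modifier;
    }
    drm->nb_layers = va->num_layers;
    for (uint32_t i = 0; i < va->num_layers; i++) {
        drm->layers[i].format = va->layers[i].drm_format;
        drm->layers[i].nb_planes = va->layers[i].num_planes;
        for (uint32_t j = 0; j < va->layers[i].num_planes; j++) {
            drm->layers[i].planes[j].object_index =
                va->layers[i].object_index[j];
            drm->layers[i].planes[j].offset = va->layers[i].offset[j];
            drm->layers[i].planes[j].pitch = va->layers[i].pitch[j];
        }
    }
}
```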
This is the first in a series of changes that will introduce a drmprime
hwdec. As our vaapi hwdec is based around exporting surfaces as
drmprime dmabufs, we've actually got a lot of useful code already in
place in the GL/PL interops. I'm going to reorganise and adjust this
code to make the interops usable with the new hwdec as well.
The first step is to rename the files and functions. There are no
functional or other changes here. They will come next.
This builds off of present_sync which was introduced in a previous
commit to support xorg's present extension in all of the X11 backends
(sans vdpau) in mpv. It turns out there is an Xpresent library that
integrates the xorg present extension with Xlib (which barely anyone
seems to use), so this can be added without too much trouble. The
workflow is to first set up the event by telling Xorg we would like to
receive PresentCompleteNotify (there are others in the extension but
this is the only one we really care about). After that, just call
XPresentNotifyMSC after every buffer swap with a target_msc of 0. Xorg
then returns the last presentation through its usual event loop and we
go ahead and use that information to update mpv's values for vsync
timing purposes. One theoretical weakness of this approach is that the
present event is put on the same queue as the rest of the XEvents. It
would be nicer for it to be placed somewhere else, so we could just
wait on that queue without having to deal with other possible events in
there. In theory, xcb could do that with special events, but it doesn't
really matter in practice.
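A minimal sketch of that workflow with libXpresent (error checks and the
extension-opcode query are simplified):
```
#include <X11/Xlib.h>
#include <X11/extensions/Xpresent.h>

static void present_setup(Display *dpy, Window win)
{
    XPresentSelectInput(dpy, win, PresentCompleteNotifyMask);
}

static void after_swap(Display *dpy, Window win)
{
    // target_msc of 0 requests an event at the next presentation.
    XPresentNotifyMSC(dpy, win, 0, 0, 0, 0);
}

// In the normal XEvent loop (present_opcode from XPresentQueryExtension):
static void handle_event(Display *dpy, XEvent *ev, int present_opcode)
{
    XGenericEventCookie *cookie = &ev->xcookie;
    if (cookie->extension != present_opcode || !XGetEventData(dpy, cookie))
        return;
    if (cookie->evtype == PresentCompleteNotify) {
        XPresentCompleteNotifyEvent *e = cookie->data;
        // e->ust and e->msc feed the vsync timing calculations.
    }
    XFreeEventData(dpy, cookie);
}
```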
Unsurprisingly, this doesn't work on NVIDIA. Well, NVIDIA does actually
receive presentation events, but for whatever reason the calculations
make timings worse, which defeats the purpose. This works perfectly
fine on Mesa, however. Utilizing the previous commit that detects Xrandr
providers, we can enable this mechanism for users that have Mesa and
are not on NVIDIA (to avoid messing up anyone that has a switchable
graphics system or such). Patches welcome if anyone figures out how to
fix this on NVIDIA.
Unlike the EGL/GLX sync extensions, the present extension works with any
graphics API (good for vulkan since its timing extension has been in
development hell). NVIDIA also happens to have zero support for the
EGL/GLX sync extensions, so we can just remove it with no loss. Only
Xorg ever used it and other backends already have their own present
methods. vo_vdpau is a special case that has its own fancy timing
code in its flip_page. This presumably works well, and I have no way of
testing it, so just leave it as it is.
Wayland had some specific code that it used for implementing the
presentation time protocol. It turns out that xorg's present extension
is extremely similar, so it would be silly to duplicate this whole mess
again. Factor this out to separate, independent code and introduce the
mp_present struct which is used for handling the ust/msc values and some
other associated values. Also, add in some helper functions so all the
dirty details live specifically in present_sync. The only
wayland-specific part is actually obtaining ust/msc values. Since only
wayland or xorg are expected to use this, add a conditional to the build
that only adds this file when either one of those are present.
You may observe that sbc is completely omitted. This field existed in
wayland, but was completely unused (presentation time doesn't return
this). Xorg's present extension doesn't use it either, so just get rid
of it altogether. The actual calculation is slightly altered so it is
correct for our purposes. We want to get the presentation event of the
last frame that just occurred (this function executes right after the
buffer swap). The adjustment is to just remove the vsync_duration
subtraction. Also, the overly-complicated queue approach is removed.
It has no actual use in practice (on wayland or xorg). Presentation
statistics are only ever used right after the immediately preceding
swap to update vsync timings, or are thrown away.
This is the new FFmpeg channel layout structure, which now
combines channel count and layout into a single location.
Only unspecified (channel count only) and native (channel layout
mask based) layouts are currently supported for the initial move
towards non-deprecated APIs.
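A minimal sketch of the two supported cases against the new API (values
are arbitrary examples):
```
#include <libavutil/channel_layout.h>

static void example_layouts(void)
{
    // Unspecified: only the channel count is known.
    AVChannelLayout unspec = {
        .order = AV_CHANNEL_ORDER_UNSPEC,
        .nb_channels = 6,
    };

    // Native: built from a channel mask, here 5.1.
    AVChannelLayout native;
    av_channel_layout_from_mask(&native, AV_CH_LAYOUT_5POINT1);

    av_channel_layout_uninit(&unspec);
    av_channel_layout_uninit(&native);
}
```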
This driver makes use of the linux-dmabuf and viewporter interfaces
to enable efficient display of vaapi surfaces, avoiding
any unnecessary colour space conversion, and avoiding scaling
or colour conversion using GPU shader resources.
There's really nothing vulkan-specific about this hwdec wrapper, and it
actually works perfectly fine with an OpenGL-based ra_pl. This is not
hugely important at this time, but I still think it makes sense in case
we ever decide to make vo_gpu_next wrap OpenGL contexts to ra_pl instead
of exposing the underlying ra_gl.
Changes:
- rewrite to use the new internal mpv API;
- code refactoring;
- fix buffer size calculations;
- set buffer size to auto;
- reset(): clean/reinit the device only after errors;
The AO provides a way for mpv to directly submit audio to the PipeWire
audio server.
Doing this directly instead of going through the various compatibility
layers provided by PipeWire has the following advantages:
* It reduces complexity of going through the compatibility layers
* It allows a richer integration between mpv and PipeWire
(for example for metadata)
* Some users report issues with the compatibility layers that do not
occur with the native AO
For now the AO is ordered after all the other relevant AOs, so it will
most probably not be picked up by default.
This is for the following reasons:
* Currently it is not possible to detect if the PipeWire daemon that mpv
connects to is actually driving the system audio.
(https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/1835)
* It gives the AO time to stabilize before it is used by everyone.
Based-on-patch-by: Oschowa <oschowa@web.de>
Based-on-patch-by: Andreas Kempf <aakempf@gmail.com>
Helped-by: Ivan <etircopyhdot@gmail.com>
Not all deprecated symbols were removed. Only three events were removed for now
since these are not used internally.
This bumps the library version to 2.0.
This is done to avoid cluttering vo_gpu_next.c with more ifdeffery and context-specific code
when additional backends are added in the near future.
Eventually gpu_ctx is intended to take the place of ra_ctx to further separate gpu and gpu_next.
The extension is completely arbitrary since ebml_defs.c isn't a real C
file that is actually compiled at any point in time. It's just used as
an include. The reason for changing the extension is that meson needs
to add this to its list of sources for dependency/ordering purposes.
Understandably, meson will try to compile any .c file added to a C
project executable object. Obviously, this compilation will never
succeed, and it shouldn't be compiled anyway. Just make it .inc
instead.
As discussed in #8799, this will eventually replace vo_gpu. However, it
is not yet complete. Currently missing:
- OpenGL contexts
- hardware decoding
- blend-subtitles=video
- VOCTRL_SCREENSHOT
However, it's usable enough to cover most use cases, and as such is
enough to start getting in some crucial testing.
Pretty much identical to filter-regex but with JS expressions and
requires only JS support. Shares the filter-regex-* control options.
The target audience is Windows users - where filter-regex doesn't
work due to missing APIs, but mujs builds cleanly on Windows, and JS
is usually enabled in 3rd party Windows mpv builds.
Lua could have been used with similar effort, however, the JS regex
syntax is more extensive and also much more similar to POSIX.
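As a usage sketch (assuming the new option is named --sub-filter-jsre,
by analogy with --sub-filter-regex): `mpv --sub-filter-jsre='\[.*\]' file.mkv`
would strip subtitle lines matching the JS regex.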
This is the Vulkan equivalent of the drm context for OpenGL, with
the big difference that it's implemented purely in terms of Vulkan
calls and doesn't actually require drm or kms.
The basic idea is to identify a display, mode, and plane on a device,
and then create a display backed surface for the swapchain. In theory,
past that point, everything is the same, and this is in fact the case
on Intel hardware. I can get a video playing on a VT.
On nvidia, naturally, things don't work that way. Instead, nvidia only
implemented the extension for scenarios where a VR application is
stealing a display from a running window system, and not for
standalone scenarios. With additional code, I've got this scenario to
work but that's a separate incremental change.
Other people have tested on AMD, and report roughly the same behaviour
as on Intel.
Note that in this change, the VT will not be correctly restored after
quitting. The only way to restore the VT is to introduce some drm
specific code, which I will illustrate in a separate change.
Changes:
- code refactored;
- mixer options removed;
- new mpv sound API used;
- added sound device detection (mpv --audio-device=help will show all available devices);
- only OSSv4 is supported now;
Tested on FreeBSD 12.2 amd64.
Based on the implementation of ffmpeg's sixel backend output written
by Hayaki Saito
https://github.com/saitoha/FFmpeg-SIXEL/blob/sixel/libavdevice/sixel.c
Sixel is a protocol to display graphics in a terminal. This commit
adds support to play videos on a sixel enabled terminal using libsixel.
With --vo=sixel, the output will be in sixel format.
The input frame will be scaled to the user-specified resolution
(--vo-sixel-width and --vo-sixel-height) using swscale and then
encoded using libsixel and output to the terminal. This method
is CPU-intensive; there are heavy frame drops for videos at 720p and
higher resolutions, and it might require using fewer colors, with a
corresponding drop in quality. The docs list all the supported options
for fine-tuning the output quality.
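For example: `mpv --vo=sixel --vo-sixel-width=800 --vo-sixel-height=480 video.mkv`
(the dimensions here are arbitrary).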
TODO: A few parameters of libsixel, such as the sixel_encode_policy
and the SIXEL_XTERM16 variables, are hardcoded; we might want to
expose them as command line options. Also, the initialization
resolution is not automatic, and if the user doesn't specify the
dimensions, it picks 320x240 as the default resolution, which is not
optimal. So we need to automatically pick the best-fit resolution for
the current open terminal window size.
Fixes an issue with clang not using the -mconsole option if -mwindows
is present, resulting in mpv.com being a GUI program instead of a
console program.
Does not interfere with gcc compilation.
Result without this patch:
```
file .\mpv.com .\mpv.exe
.\mpv.com: PE32+ executable (GUI) x86-64 (stripped to external PDB)
.\mpv.exe: PE32+ executable (GUI) x86-64 (stripped to external PDB)
```
Both executables open the mpv GUI without console output.
Result with this patch:
```
file .\mpv.com .\mpv.exe
.\mpv.com: PE32+ executable (console) x86-64 (stripped to external PDB)
.\mpv.exe: PE32+ executable (GUI) x86-64 (stripped to external PDB)
```
mpv.com properly outputs text to the console instead of instantly
opening a GUI.
(`, for MS Windows` was removed from the end of the file outputs above
to reduce column count.)
https://github.com/m-ab-s/media-autobuild_suite/issues/1794
Signed-off-by: Christopher Degawa <ccom@randomderp.com>
Unused, was terrible garbage. It was (or at least its implementation
was) always a make-shift solution, and just gross bullshit. It is unused
now, so delete it.
Move all backend-independent code parts into their own folder and
files, to simplify adding new backends. The goal is to only extend one
class and add the backend-dependent parts there. Usually only the
(un)init, config and related parts need to be implemented per backend.
Furthermore, all needed windowing and related events are propagated and
can be overwritten. The other backend-dependent part is usually the
surface for rendering, for example the OpenGL or Metal layer.
In the best case, a new backend can be added with only a few hundred
lines.
Add support for reading a byte range from a stream via
the `slice://` protocol.
Syntax is `slice://start[-end]@URL` where end is a maximum
(read until end or eof).
Size suffix support in `m_option` is reused, so suffixes can
be used with start/end.
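For example (file name hypothetical): `mpv 'slice://100M-200M@dump.ts'`
reads the byte range from offset 100M up to 200M (or EOF, whichever
comes first).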
This can be very useful with e.g. large MPEGTS streams with
corruption or time-stamp jumps or other issues in them.
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
This is taken from a somewhat older proof-of-concept script. The basic
idea, and most of the implementation, is still the same. The way the
profiles are actually defined changed.
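For illustration, a conditional profile of the kind this script
evaluates might look like this in mpv.conf (profile name and condition
are made-up examples; the condition is a Lua expression):
```
[small-video]
profile-cond=width < 1280
sharpen=2.0
```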
I still feel bad about this being a Lua script, and running user
expressions as Lua code in a vaguely defined environment, but I guess as
far as balance of effort/maintenance/results goes, this is fine.
It's a bit bloated (the Lua scripting state is at least 150KB or so in
total), so in order to enable this by default, I decided it should
unload itself by default if no auto-profiles are used. (And currently,
it does not actually rescan the profile list if a new config file is
loaded some time later, so the script would do nothing anyway if no auto
profiles were defined.)
This still requires defining inverse profiles for "unapplying" a
profile. Also this is still somewhat racy. Both will probably be
alleviated to some degree in the future.
scaletempo2 is a new audio filter for playing back
audio at modified speed and is based on chromium
commit 51ed77e3f37a9a9b80d6d0a8259e84a8ca635259.
It sounds subjectively better than the existing
implementations, scaletempo and rubberband.
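Usage sketch: `mpv --af=scaletempo2 --speed=1.5 file.mkv` plays at 1.5x
speed with scaletempo2 keeping the pitch.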
This fixes the "run" and "subprocess" commands on Windows, including
youtube-dl support.
Unix-like FD inheritance is emulated on Windows by using an undocumented
data structure[1] that gets passed to the newly created process in
STARTUPINFO.lpReserved2. It consists of two sparse arrays listing the
HANDLE and internal CRT flags corresponding to each FD. This structure
is used and understood primarily by MSVCRT, but there are other runtimes
and frameworks that can write it, like libuv.
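A minimal sketch of building that structure (layout per the article in
[1]; the helper name and flag value are illustrative, not mpv's actual
code):
```
#include <windows.h>
#include <stdlib.h>
#include <string.h>

#define CRT_FOPEN 0x01 // internal CRT flag: "FD is open"

// Packed layout: int count; char flags[count]; HANDLE handles[count];
static void build_fd_block(const HANDLE *handles, int count,
                           STARTUPINFOW *si)
{
    size_t size = sizeof(int) + count * (sizeof(char) + sizeof(HANDLE));
    BYTE *buf = calloc(1, size);
    memcpy(buf, &count, sizeof(int));
    BYTE *flags = buf + sizeof(int);
    BYTE *hnds = flags + count; // not necessarily aligned, hence memcpy
    for (int i = 0; i < count; i++) {
        flags[i] = CRT_FOPEN;
        memcpy(hnds + i * sizeof(HANDLE), &handles[i], sizeof(HANDLE));
    }
    si->cbReserved2 = (WORD)size;
    si->lpReserved2 = buf;
}
```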
The code for creating asynchronous "anonymous" pipes in Windows has been
enhanced and moved into windows_utils.c. This is mainly an artifact of
an unfinished future change to support anonymous IPC clients in Windows.
Right now, it's still only used in subprocess-win.c
[1]: https://www.catch22.net/tuts/undocumented-createprocess
Add env and detach arguments. This means the command.c code must use the
"new" mp_subprocess2(). So also take this as an opportunity to clean up.
win32 support gets broken by it, because it never made the switch to the
newer function.
The new detach parameter makes the "run" command fully redundant, but I
guess we'll keep it for simplicity. But change its implementation to use
mp_subprocess2() (couldn't do this earlier, because win32).
Privately, I'm going to use the "env" argument to add a key binding that
starts a shell with a FILE environment variable set to the currently
playing file, so this is very useful to me.
Note: breaks windows, so for example youtube-dl on windows will not work
anymore. mp_subprocess2() has to be implemented. The old functions are
gone, and subprocess-win.c is not built anymore. It will probably work
on Cygwin.
This can be used to make vo_libmpv render video to a memory buffer. It
only adds a new backend API that takes memory surfaces. All the render
API (such as frame rendering control and so on) is reused.
I'm not quite convinced of the usefulness of this, and until now I
always resisted providing something like this. It only seems to
facilitate inefficient implementation. But whatever.
Unfortunately, this duplicates the software rendering glue code yet
again (like it exists in vo_x11, vo_wlshm, vo_drm, and probably more).
But in theory, these could reuse this backend in the future, just like
vo_gpu could reuse the render_gl API.
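A minimal sketch of the new backend from the client side (error handling
omitted; ctx is a mpv_render_context created with
MPV_RENDER_PARAM_API_TYPE set to MPV_RENDER_API_TYPE_SW):
```
#include <mpv/render.h>

static void render_frame_to_memory(mpv_render_context *ctx, void *pixels,
                                   int w, int h, size_t stride)
{
    int size[2] = {w, h};
    mpv_render_param params[] = {
        {MPV_RENDER_PARAM_SW_SIZE, size},
        {MPV_RENDER_PARAM_SW_FORMAT, "rgb0"},
        {MPV_RENDER_PARAM_SW_STRIDE, &stride},
        {MPV_RENDER_PARAM_SW_POINTER, pixels},
        {0},
    };
    mpv_render_context_render(ctx, params);
}
```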
Fixes: #7852
This is preparation to further cleanups (and eventually actual
improvements) of the audio output code.
AOs are split into two classes: pull and push. Pull AOs let an audio
callback of the native audio API read from a ring buffer. Push AOs
expose a function that works similar to write(), and for which we start
a "feeder" thread. It seems making this split was beneficial, because of
the different data flow, and emulating the one or other in the AOs
directly would have created code duplication (all the "pull" AOs had
their own ring buffer implementation before it was cleaned up).
Unfortunately, both types had completely separate implementations (in
pull.c and push.c). The idea was that little can be shared anyway. But
that's very annoying now, because I want to change the API between AO
and player.
This commit attempts to merge them. I've moved everything from push.c to
pull.c, the trivial entrypoints from ao.c to pull.c, and attempted to
reconcile the differences. It's a mess, but at least there's only one
ring buffer within the AO code now. Everything should work mostly the
same. Pull AOs now always copy the audio data under a lock; before this
commit, all ring buffer access was lock-free (except for the decoder
wakeup callback, which acquired a mutex). In theory, this is "bad", and
people obsessed with lock-free stuff will hate me, but in practice
probably won't matter. The planned change will probably remove this
copying-under-lock again, but who knows when this will happen.
One change now makes the push AOs drop audio, where before only a
warning was logged. This happens only with AOs or drivers that exhibit
unexpected (and now unsupported) behavior.
This is a risky change. Although it's completely trivial conceptually,
there are too many special cases. In addition, I barely tested it, and
I've messed with it in a half-motivated state over a longer time, barely
making any progress, and finishing it under a rush when I already should
have been asleep. Most things seem to work, and I made superficial tests
with alsa, sdl, and encode mode. This should cover most things, but
there are a lot of tricky things that received no coverage. All this
text means you should be prepared to roll back to an older commit and
report your problem.
This mode drops or repeats audio data to adapt to video speed, instead
of resampling it or such. It was added to deal with SPDIF. The
implementation was part of fill_audio_out_buffers() - the entire
function is something whose complexity exploded in my face, and which I
want to clean up, and this is hopefully a first step.
Put it in a filter, and mess with the shitty glue code. It's all sort of
roundabout and illogical, but that can be rectified later. The important
part is that it works much like the resample or scaletempo filters.
For PCM audio, this does not work on samples anymore. This makes it much
worse. But for PCM you can use saner mechanisms that sound better. Also,
something about PTS tracking is wrong. But not wasting more time on
this.
For whatever purpose. If anything, this makes the zimg wrapper cleaner.
The added tests are not particularly exhaustive, but nice to have. This
also makes the scale_zimg.c test pretty useless, because it only tests
repacking (going through the zimg wrapper). In theory, the repack_tests
things could also be used on scalers, but I guess it doesn't matter.
Some things are added over the previous zimg wrapper code. For example,
some fringe formats can now be expanded to 8 bit per component for
convenience.
Add an infrastructure for collecting performance-related data, use it in
some places. Add rendering of them to stats.lua.
There were two main goals: minimal impact on the normal code and normal
playback. So all these stats_* function calls either happen only during
initialization, or return immediately if no stats collection is going
on. That's why it does this lazily adding of stats entries etc. (a first
iteration made each stats entry an API thing, instead of just a single
stats_ctx, but I thought that was getting too intrusive in the "normal"
code, even if everything gets worse inside of stats.c).
You could get most of this information from various profilers (including
the extremely primitive --dump-stats thing in mpv), but this makes it
easier to see the most important information at once (at least in
theory), partially because we know best about the context of various
things.
Not very happy with this. It's all pretty primitive and dumb. At this
point I just wanted to get over with it, without necessarily having to
revisit it later, but with having my stupid statistics.
Somehow the code feels terrible. There are a lot of meh decisions in
there that could be better or worse (but mostly could be better), and it
just sucks but it's also trivial and uninteresting and does the job. I
guess I hate programming. It's so tedious and the result is always shit.
Anyway, enjoy.
Ancient Linux audio output. Apparently it survived until now, because
some BSDs (but not all) had use of this. But these should work with
ao_sdl or ao_openal too (that's why these AOs exist after all). ao_oss
itself has the problem that it's virtually unmaintainable from my point
of view due to all the subtle (or non-subtle) differences. Look at the
ifdef mess and the multiple code paths (that shouldn't exist) in the
removed source code.
I wonder what this even is. I've never heard of anyone using it, and
can't find a corresponding library that actually builds with it. Good
enough to remove.