VIDEO OUTPUT DRIVERS
====================
Video output drivers are interfaces to different video output facilities. The
syntax is:

``--vo=<driver1,driver2,...[,]>``
    Specify a priority list of video output drivers to be used.
    If the list has a trailing ``,``, mpv will fall back on drivers not
    contained in the list.
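As a hypothetical example, the following prefers ``gpu`` but, thanks to the
trailing ``,``, still lets mpv fall back to any other compiled-in driver
(``video.mkv`` is just a placeholder file name)::

    mpv --vo=gpu,xv, video.mkv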
.. note::
See ``--vo=help`` for a list of compiled-in video output drivers.
The recommended output driver is ``--vo=gpu``, which is the default. All
other drivers are for compatibility or special purposes. If the default
does not work, mpv will fall back to other drivers (in the same order as
listed by ``--vo=help``).
Available video output drivers are:
``xv`` (X11 only)
Uses the XVideo extension to enable hardware-accelerated display. This is
the most compatible VO on X, but may be low-quality, and has issues with
OSD and subtitle display.
.. note:: This driver is for compatibility with old systems.
The following global options are supported by this video output:
``--xv-adaptor=<number>``
Select a specific XVideo adapter (check xvinfo results).
``--xv-port=<number>``
Select a specific XVideo port.
``--xv-ck=<cur|use|set>``
Select the source from which the color key is taken (default: cur).
cur
The default takes the color key currently set in Xv.
use
Use but do not set the color key from mpv (use the ``--xv-colorkey``
option to change it).
set
Same as use but also sets the supplied color key.
``--xv-ck-method=<none|man|bg|auto>``
Sets the color key drawing method (default: man).
none
Disables color-keying.
man
Draw the color key manually (reduces flicker in some cases).
bg
Set the color key as window background.
auto
Let Xv draw the color key.
``--xv-colorkey=<number>``
Changes the color key to an RGB value of your choice. ``0x000000`` is
black and ``0xffffff`` is white.
``--xv-buffers=<number>``
Number of image buffers to use for the internal ringbuffer (default: 2).
Increasing this will use more memory, but might help with the X server
not responding quickly enough if video FPS is close to or higher than
the display refresh rate.
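A minimal sketch for this VO, assuming an X11 session (the adaptor and port
numbers are hypothetical; check ``xvinfo`` for the values valid on your
system)::

    mpv --vo=xv --xv-adaptor=0 --xv-port=87 video.mkv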
``x11`` (X11 only)
Shared memory video output driver without hardware acceleration that works
whenever X11 is present.
Since mpv 0.30.0, you may need to use ``--profile=sw-fast`` to get decent
performance.
.. note:: This is a fallback only, and should not be normally used.
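If you do have to fall back to it, a reasonable command line might look like
this (``video.mkv`` is a placeholder)::

    mpv --vo=x11 --profile=sw-fast video.mkv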
``vdpau`` (X11 only)
Uses the VDPAU interface to display and optionally also decode video.
Hardware decoding is used with ``--hwdec=vdpau``.
.. note::
Earlier versions of mpv (and MPlayer, mplayer2) provided sub-options
to tune vdpau post-processing, like ``deint``, ``sharpen``, ``denoise``,
``chroma-deint``, ``pullup``, ``hqscaling``. These sub-options are
deprecated, and you should use the ``vdpaupp`` video filter instead.
The following global options are supported by this video output:
``--vo-vdpau-sharpen=<-1-1>``
(Deprecated. See note about ``vdpaupp``.)
For positive values, apply a sharpening algorithm to the video, for
negative values a blurring algorithm (default: 0).
``--vo-vdpau-denoise=<0-1>``
(Deprecated. See note about ``vdpaupp``.)
Apply a noise reduction algorithm to the video (default: 0; no noise
reduction).
``--vo-vdpau-chroma-deint``
(Deprecated. See note about ``vdpaupp``.)
Makes temporal deinterlacers operate both on luma and chroma (default).
Use no-chroma-deint to solely use luma and speed up advanced
deinterlacing. Useful with slow video memory.
``--vo-vdpau-pullup``
(Deprecated. See note about ``vdpaupp``.)
Try to apply inverse telecine; this requires motion-adaptive temporal
deinterlacing.
``--vo-vdpau-hqscaling=<0-9>``
(Deprecated. See note about ``vdpaupp``.)
0
Use default VDPAU scaling (default).
1-9
Apply high quality VDPAU scaling (needs capable hardware).
``--vo-vdpau-fps=<number>``
Override autodetected display refresh rate value (the value is needed
for framedrop to allow video playback rates higher than display
refresh rate, and for vsync-aware frame timing adjustments). Default 0
means use autodetected value. A positive value is interpreted as a
refresh rate in Hz and overrides the autodetected value. A negative
value disables all timing adjustment and framedrop logic.
``--vo-vdpau-composite-detect``
NVIDIA's current VDPAU implementation behaves somewhat differently
under a compositing window manager and does not give accurate frame
timing information. With this option enabled, the player tries to
detect whether a compositing window manager is active. If one is
detected, the player disables timing adjustments as if the user had
specified ``fps=-1`` (as they would be based on incorrect input). This
means timing is somewhat less accurate than without compositing, but
with the composited mode behavior of the NVIDIA driver, there is no
hard playback speed limit even without the disabled logic. Enabled by
default, use ``--vo-vdpau-composite-detect=no`` to disable.
``--vo-vdpau-queuetime-windowed=<number>`` and ``queuetime-fs=<number>``
Use VDPAU's presentation queue functionality to queue future video
frame changes at most this many milliseconds in advance (default: 50).
See below for additional information.
``--vo-vdpau-output-surfaces=<2-15>``
Allocate this many output surfaces to display video frames (default:
3). See below for additional information.
``--vo-vdpau-colorkey=<#RRGGBB|#AARRGGBB>``
Set the VDPAU presentation queue background color, which in practice
is the colorkey used if VDPAU operates in overlay mode (default:
``#020507``, some shade of black). If the alpha component of this value
is 0, the default VDPAU colorkey will be used instead (which is usually
green).
``--vo-vdpau-force-yuv``
Never accept RGBA input. This means mpv will insert a filter to convert
to a YUV format before the VO. Sometimes useful to force availability
of certain YUV-only features, like video equalizer or deinterlacing.
Using the VDPAU frame queuing functionality controlled by the queuetime
options makes mpv's frame flip timing less sensitive to system CPU load and
allows mpv to start decoding the next frame(s) slightly earlier, which can
reduce jitter caused by individual slow-to-decode frames. However, the
NVIDIA graphics drivers can make other window behavior such as window moves
choppy if VDPAU is using the blit queue (mainly happens if you have the
composite extension enabled) and this feature is active. If this happens on
your system and it bothers you then you can set the queuetime value to 0 to
disable this feature. The settings to use in windowed and fullscreen mode
are separate because there should be no reason to disable this for
fullscreen mode (as the driver issue should not affect the video itself).
You can queue more frames ahead by increasing the queuetime values and the
``output_surfaces`` count (to ensure enough surfaces to buffer video for a
certain time ahead you need at least as many surfaces as the video has
frames during that time, plus two). This could help make video smoother in
some cases. The main downsides are increased video RAM requirements for
the surfaces and laggier display response to user commands (display
changes only become visible some time after they're queued). The graphics
driver implementation may also have limits on the length of maximum
queuing time or number of queued surfaces that work well or at all.
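As an illustration only (values depend on your system), disabling the
presentation queue in windowed mode while keeping the default for fullscreen
could look like::

    mpv --vo=vdpau --hwdec=vdpau --vo-vdpau-queuetime-windowed=0 video.mkv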
``direct3d`` (Windows only)
Video output driver that uses the Direct3D interface.
.. note:: This driver is for compatibility with systems that don't provide
proper OpenGL drivers, and where ANGLE does not perform well.
The following global options are supported by this video output:
``--vo-direct3d-disable-texture-align``
Normally texture sizes are always aligned to 16. With this option
enabled, the video texture will always have exactly the same size as
the video itself.
Debug options. These might be incorrect, might be removed in the future,
might crash, might cause slowdowns, etc. Contact the developers if you
actually need any of these for performance or proper operation.
``--vo-direct3d-force-power-of-2``
Always force textures to power of 2, even if the device reports
non-power-of-2 texture sizes as supported.
``--vo-direct3d-texture-memory=<mode>``
Only affects operation with shaders/texturing enabled, and (E)OSD.
Possible values:
``default`` (default)
Use ``D3DPOOL_DEFAULT``, with a ``D3DPOOL_SYSTEMMEM`` texture for
locking. If the driver supports ``D3DDEVCAPS_TEXTURESYSTEMMEMORY``,
``D3DPOOL_SYSTEMMEM`` is used directly.
``default-pool``
Use ``D3DPOOL_DEFAULT``. (Like ``default``, but never use a
shadow-texture.)
``default-pool-shadow``
Use ``D3DPOOL_DEFAULT``, with a ``D3DPOOL_SYSTEMMEM`` texture for
locking. (Like ``default``, but always force the shadow-texture.)
``managed``
Use ``D3DPOOL_MANAGED``.
``scratch``
Use ``D3DPOOL_SCRATCH``, with a ``D3DPOOL_SYSTEMMEM`` texture for
locking.
``--vo-direct3d-swap-discard``
Use ``D3DSWAPEFFECT_DISCARD``, which might be faster.
Might be slower too, as it must(?) clear every frame.
``--vo-direct3d-exact-backbuffer``
Always resize the backbuffer to window size.
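A hypothetical debugging run that experiments with the texture memory mode
might look like (``video.mkv`` is a placeholder)::

    mpv --vo=direct3d --vo-direct3d-texture-memory=managed video.mkv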
``gpu``
General purpose, customizable, GPU-accelerated video output driver. It
supports extended scaling methods, dithering, color management, custom
shaders, HDR, and more.
See `GPU renderer options`_ for options specific to this VO.
By default, it tries to use fast and fail-safe settings. Use the
``gpu-hq`` profile to use this driver with defaults set to high quality
rendering. The profile can be applied with ``--profile=gpu-hq`` and its
contents can be viewed with ``--show-profile=gpu-hq``.
This VO abstracts over several possible graphics APIs and windowing
contexts, which can be influenced using the ``--gpu-api`` and
``--gpu-context`` options.
Hardware decoding over OpenGL-interop is supported to some degree. Note
that in this mode, some corner cases might not be gracefully handled, and
color space conversion and chroma upsampling are generally in the hands of
the hardware decoder APIs.
``gpu`` makes use of FBOs by default. Sometimes you can achieve better
quality or performance by changing the ``--fbo-format`` option to
``rgb16f``, ``rgb32f`` or ``rgb``. Known problems include Mesa/Intel not
accepting ``rgb16``, Mesa sometimes not being compiled with float texture
support, and some OS X setups being very slow with ``rgb16`` but fast
with ``rgb32f``. If you have problems, you can also try enabling the
``--gpu-dumb-mode=yes`` option.
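As a sketch (the FBO format is one of the examples listed above, not a
recommendation), enabling the high quality profile and a float FBO format
could look like::

    mpv --vo=gpu --profile=gpu-hq --fbo-format=rgb16f video.mkv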
``sdl``
SDL 2.0+ Render video output driver, depending on system with or without
hardware acceleration. Should work on all platforms supported by SDL 2.0.
For tuning, refer to your copy of the file ``SDL_hints.h``.
.. note:: This driver is for compatibility with systems that don't provide
proper graphics drivers.
The following global options are supported by this video output:
``--sdl-sw``
Continue even if a software renderer is detected.
``--sdl-switch-mode``
Instruct SDL to switch the monitor video mode when going fullscreen.
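A minimal example, assuming SDL picks a hardware renderer on your system
(add ``--sdl-sw`` to continue even with a software renderer)::

    mpv --vo=sdl video.mkv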
``vaapi``
Intel VA API video output driver with support for hardware decoding. Note
that there is absolutely no reason to use this, other than compatibility.
This is low quality, and has issues with OSD.
.. note:: This driver is for compatibility with crappy systems. You can
use vaapi hardware decoding with ``--vo=gpu`` too.
The following global options are supported by this video output:
``--vo-vaapi-scaling=<algorithm>``
default
Driver default (mpv default as well).
fast
Fast, but low quality.
hq
Unspecified driver dependent high-quality scaling, slow.
nla
``non-linear anamorphic scaling``
``--vo-vaapi-deint-mode=<mode>``
Select deinterlacing algorithm. Note that by default deinterlacing is
initially always off, and needs to be enabled with the ``d`` key
(default key binding for ``cycle deinterlace``).
This option doesn't apply if libva supports video post processing (vpp).
In this case, the default for ``deint-mode`` is ``no``, and enabling
deinterlacing via user interaction using the methods mentioned above
actually inserts the ``vavpp`` video filter. If vpp is not actually
supported with the libva backend in use, you can use this option to
forcibly enable VO based deinterlacing.
no
Don't allow deinterlacing (default for newer libva).
first-field
Show only first field.
bob
bob deinterlacing (default for older libva).
``--vo-vaapi-scaled-osd=<yes|no>``
If enabled, then the OSD is rendered at video resolution and scaled to
display resolution. By default, this is disabled, and the OSD is
rendered at display resolution if the driver supports it.
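For illustration only (``--hwdec=vaapi`` is assumed to be usable on your
system), forcing VO-based bob deinterlacing might look like::

    mpv --vo=vaapi --hwdec=vaapi --vo-vaapi-deint-mode=bob video.mkv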
``null``
Produces no video output. Useful for benchmarking.
Usually, it's better to disable video with ``--no-video`` instead.
The following global options are supported by this video output:
``--vo-null-fps=<value>``
Simulate display FPS. This artificially limits how many frames the
VO accepts per second.
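For example, a rough decoding benchmark that caps the simulated display at
60 FPS (the number is arbitrary) could be::

    mpv --vo=null --vo-null-fps=60 --no-audio video.mkv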
``caca``
Color ASCII art video output driver that works on a text console.
.. note:: This driver is a joke.
``tct``
Color Unicode art video output driver that works on a text console.
By default depends on support of true color by modern terminals to display
the images at full color range, but 256-color output is also supported (see
below). On Windows it requires an ANSI terminal such as mintty.
Since mpv 0.30.0, you may need to use ``--profile=sw-fast`` to get decent
performance.
Note: the TCT image output is not synchronized with other terminal output
from mpv, which can lead to broken images. The options ``--no-terminal`` or
``--really-quiet`` can help with that.
``--vo-tct-algo=<algo>``
Select how to write the pixels to the terminal.
half-blocks
Uses unicode LOWER HALF BLOCK character to achieve higher vertical
resolution. (Default.)
plain
Uses spaces. Causes vertical resolution to drop twofold, but in
theory works in more places.
``--vo-tct-width=<width>`` ``--vo-tct-height=<height>``
Assume the terminal has the specified character width and/or height.
These default to 80x25 if the terminal size cannot be determined.
``--vo-tct-256=<yes|no>`` (default: no)
Use 256 colors - for terminals which don't support true color.
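A minimal sketch for a true-color terminal, plus a variant for 256-color
terminals (``video.mkv`` is a placeholder)::

    mpv --vo=tct --really-quiet video.mkv
    mpv --vo=tct --vo-tct-256=yes --really-quiet video.mkv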
``sixel``
Sixel graphics video output driver based on libsixel that works on a
console with sixel graphics support enabled, such as ``xterm`` or ``mlterm``.
Additionally some terminals have limitation on the dimensions, so may
not display images bigger than 1000x1000 for example. Make sure that
``img2sixel`` can display images of the corresponding resolution.
You may need to use ``--profile=sw-fast`` to get decent performance.
Note: the Sixel image output is not synchronized with other terminal output
from mpv, which can lead to broken images. The option ``--really-quiet``
can help with that, and is recommended.
``--vo-sixel-diffusion=<algo>``
Selects the diffusion algorithm for dithering used by libsixel.
Can be one of the following values, as per libsixel's documentation.
auto
Choose diffuse type automatically
none
Don't diffuse
atkinson
Diffuse with Bill Atkinson's method. (Default)
fs
Diffuse with Floyd-Steinberg method
jajuni
Diffuse with Jarvis, Judice & Ninke method
stucki
Diffuse with Stucki's method
burkes
Diffuse with Burkes' method
arithmetic
Positionally stable arithmetic dither
xor
Positionally stable arithmetic xor based dither
``--vo-sixel-width=<width>`` ``--vo-sixel-height=<height>``
The output video resolution will be set to the given width and height.
These default to 320x240 if not set. The terminal window must
be bigger than this resolution to have smooth playback.
Additionally the last row will be a blank line and can't be
used to display pixel data.
``--vo-sixel-fixedpalette=<0|1>`` (default: 0)
Use libsixel's built-in static palette using the XTERM256 profile
for dither. Fixed palette uses 256 colors for dithering.
``--vo-sixel-reqcolors=<colors>`` (default: 256)
Set up libsixel to use the required number of colors for the dynamic
palette. This value depends on the console as well. Xterm supports 256.
You can set this to a lower value for faster performance.
This option has no effect if a fixed palette is used.
``--vo-sixel-color-threshold=<threshold>`` (default: 0)
This threshold value is used in dynamic palette mode to
recompute the palette based on scene changes.
``--vo-sixel-offset-top=<top>`` (default: 1)
Video playback will start from the specified row. If this is greater
than 1, that many rows are skipped. This option can be used to shift
the video down in the terminal. If it is greater than the number of
rows in the terminal, it is ignored.
``--vo-sixel-offset-left=<left>`` (default: 1)
Video playback will start from the specified column. If this is greater
than 1, that many columns are skipped. This option can be used to shift
the video to the right in the terminal. If it is greater than the number
of columns in the terminal, it is ignored.
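As an illustrative example (the dimensions are arbitrary and must fit your
terminal)::

    mpv --vo=sixel --vo-sixel-width=640 --vo-sixel-height=480 --really-quiet video.mkv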
``image``
Output each frame into an image file in the current directory. Each file
takes the frame number padded with leading zeros as name.
The following global options are supported by this video output:
``--vo-image-format=<format>``
Select the image file format.
jpg
JPEG files, extension .jpg. (Default.)
jpeg
JPEG files, extension .jpeg.
png
PNG files.
webp
WebP files.
``--vo-image-png-compression=<0-9>``
PNG compression factor (speed vs. file size tradeoff) (default: 7)
``--vo-image-png-filter=<0-5>``
Filter applied prior to PNG compression (0 = none; 1 = sub; 2 = up;
3 = average; 4 = Paeth; 5 = mixed) (default: 5)
``--vo-image-jpeg-quality=<0-100>``
JPEG quality factor (default: 90)
``--vo-image-jpeg-optimize=<0-100>``
JPEG optimization factor (default: 100)
``--vo-image-webp-lossless=<yes|no>``
Enable writing lossless WebP files (default: no)
``--vo-image-webp-quality=<0-100>``
WebP quality (default: 75)
``--vo-image-webp-compression=<0-6>``
WebP compression factor (default: 4)
``--vo-image-outdir=<dirname>``
Specify the directory to save the image files to (default: ``./``).
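For instance, dumping frames as PNG files into a hypothetical ``./frames``
directory (assumed to exist) could look like::

    mpv --vo=image --vo-image-format=png --vo-image-outdir=./frames video.mkv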
``libmpv``
For use with libmpv direct embedding. As a special case, on OS X it
is used like a normal VO within mpv (cocoa-cb). Otherwise useless in any
other context.
(See ``<mpv/render.h>``.)
This also supports many of the options the ``gpu`` VO has, depending on the
backend.
``rpi`` (Raspberry Pi)
Native video output on the Raspberry Pi using the MMAL API.
This is deprecated. Use ``--vo=gpu`` instead, which is the default and
provides the same functionality. The ``rpi`` VO will be removed in
mpv 0.23.0. Its functionality was folded into --vo=gpu, which now uses
RPI hardware decoding by treating it as a hardware overlay (without applying
GL filtering). Also to be changed in 0.23.0: the --fs flag will be reset to
"no" by default (like on the other platforms).
The following deprecated global options are supported by this video output:
``--rpi-display=<number>``
Select the display number on which the video overlay should be shown
(default: 0).
``--rpi-layer=<number>``
Select the dispmanx layer on which the video overlay should be shown
(default: -10). Note that mpv will also use the 2 layers above the
selected layer, to handle the window background and OSD. Actual video
rendering will happen on the layer above the selected layer.
``--rpi-background=<yes|no>``
Whether to render a black background behind the video (default: no).
Normally it's better to kill the console framebuffer instead, which
gives better performance.
``--rpi-osd=<yes|no>``
        Enabled by default. If disabled with ``no``, no OSD layer is created.
        This also means no subtitles will be rendered.
``drm`` (Direct Rendering Manager)
Video output driver using Kernel Mode Setting / Direct Rendering Manager.
    Should be used when one doesn't want to install a full-blown graphical
    environment (e.g. no X). Does not support hardware acceleration (if you
    need this, check the ``drm`` backend for the ``gpu`` VO).
Since mpv 0.30.0, you may need to use ``--profile=sw-fast`` to get decent
performance.
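
    For example, a minimal software-decoded invocation (``video.mkv`` is just
    a placeholder file name) might look like::

        mpv --vo=drm --profile=sw-fast video.mkv
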
The following global options are supported by this video output:
``--drm-connector=[<gpu_number>.]<name>``
        Select the connector to use (usually this is a monitor). If ``<name>``
        is empty or ``auto``, mpv renders the output on the first available
        connector. Use ``--drm-connector=help`` to get a list of available
        connectors. When using multiple graphics cards, use the ``<gpu_number>``
        argument to disambiguate.
(default: empty)
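
        For example, to render on the second GPU's HDMI output (the connector
        name ``HDMI-A-1`` below is only illustrative; use
        ``--drm-connector=help`` to see the actual names on your system)::

            mpv --vo=drm --drm-connector=1.HDMI-A-1 video.mkv
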
``--drm-mode=<preferred|highest|N|WxH[@R]>``
Mode to use (resolution and frame rate).
Possible values:
:preferred: Use the preferred mode for the screen on the selected
connector. (default)
:highest: Use the mode with the highest resolution available on the
selected connector.
:N: Select mode by index.
:WxH[@R]: Specify mode by width, height, and optionally refresh rate.
In case several modes match, selects the mode that comes
first in the EDID list of modes.
Use ``--drm-mode=help`` to get a list of available modes for all active
connectors.
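
        For example, assuming the connector actually exposes a 4K mode at
        60 Hz (check ``--drm-mode=help``), it could be requested explicitly
        with::

            mpv --vo=drm --drm-mode=3840x2160@60 video.mkv
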
``--drm-atomic=<no|auto>``
Toggle use of atomic modesetting. Mostly useful for debugging.
:no: Use legacy modesetting.
:auto: Use atomic modesetting, falling back to legacy modesetting if
not available. (default)
Note: Only affects ``gpu-context=drm``. ``vo=drm`` supports legacy
modesetting only.
``--drm-draw-plane=<primary|overlay|N>``
        Select the DRM plane to which video and OSD are drawn under normal
        circumstances. The plane can be specified as ``primary``, which will
pick the first applicable primary plane; ``overlay``, which will pick
the first applicable overlay plane; or by index. The index is zero
based, and related to the CRTC.
(default: primary)
When using this option with the drmprime-drm hwdec interop, only the OSD
is rendered to this plane.
``--drm-drmprime-video-plane=<primary|overlay|N>``
Select the DRM plane to use for video with the drmprime-drm hwdec
        interop (used by e.g. the rkmpp hwdec on RockChip SoCs, and v4l2 hwdecs
        on various other SoCs). The plane is unused otherwise. This option
accepts the same values as ``--drm-draw-plane``. (default: overlay)
        To be able to successfully play 4K video on various SoCs you might need
        to set ``--drm-draw-plane=overlay --drm-drmprime-video-plane=primary``
        and set ``--drm-draw-surface-size=1920x1080`` to render the OSD at a
        lower resolution (the video, when handled by the hwdec, will be on the
        drmprime-video plane and at full 4K resolution).
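
        A concrete, purely illustrative command line for such a setup (the
        hwdec configuration itself is omitted, since it depends on the
        platform)::

            mpv --drm-draw-plane=overlay --drm-drmprime-video-plane=primary \
                --drm-draw-surface-size=1920x1080 video.mkv
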
``--drm-format=<xrgb8888|xrgb2101010>``
        Select the DRM format to use (default: xrgb8888). This allows you to
        choose the bit depth of the DRM mode. xrgb8888 is your usual 24 bits
        per pixel/8 bits per channel packed RGB format with 8 bits of padding.
        xrgb2101010 is a packed 30 bits per pixel/10 bits per channel RGB
        format with 2 bits of padding.
        There are cases when xrgb2101010 will work with the ``drm`` VO, but not
        with the ``drm`` backend for the ``gpu`` VO. This is because the
        ``gpu`` VO, in addition to requiring xrgb2101010 support in your DRM
        driver, also requires support for it in your EGL driver.
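
        For example, to request a 10 bit mode (assuming both your DRM driver
        and, for the ``gpu`` VO, your EGL driver support it)::

            mpv --vo=gpu --gpu-context=drm --drm-format=xrgb2101010 video.mkv
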
``--drm-draw-surface-size=<[WxH]>``
Sets the size of the surface used on the draw plane. The surface will
then be upscaled to the current screen resolution. This option can be
useful when used together with the drmprime-drm hwdec interop at high
resolutions, as it allows scaling the draw plane (which in this case
only handles the OSD) down to a size the GPU can handle.
When used without the drmprime-drm hwdec interop this option will just
cause the video to get rendered at a different resolution and then
scaled to screen size.
Note: this option is only available with DRM atomic support.
(default: display resolution)
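
        A minimal sketch of standalone use (no drmprime-drm interop), simply
        forcing a 720p draw surface that then gets scaled to the display::

            mpv --vo=drm --drm-draw-surface-size=1280x720 video.mkv
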
``mediacodec_embed`` (Android)
Renders ``IMGFMT_MEDIACODEC`` frames directly to an ``android.view.Surface``.
Requires ``--hwdec=mediacodec`` for hardware decoding, along with
``--vo=mediacodec_embed`` and ``--wid=(intptr_t)(*android.view.Surface)``.
Since this video output driver uses native decoding and rendering routines,
many of mpv's features (subtitle rendering, OSD/OSC, video filters, etc)
are not available with this driver.
To use hardware decoding with ``--vo=gpu`` instead, use
``--hwdec=mediacodec-copy`` along with ``--gpu-context=android``.
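
    For example (illustrative only; this assumes an Android build of mpv with
    a valid window/surface set up)::

        mpv --vo=gpu --gpu-context=android --hwdec=mediacodec-copy video.mkv
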
``wlshm`` (Wayland only)
Shared memory video output driver without hardware acceleration that works
whenever Wayland is present.
Since mpv 0.30.0, you may need to use ``--profile=sw-fast`` to get decent
performance.
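
    For example (``video.mkv`` is again just a placeholder)::

        mpv --vo=wlshm --profile=sw-fast video.mkv
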
.. note:: This is a fallback only, and should not be normally used.