mirror of https://github.com/mpv-player/mpv synced 2025-01-02 13:02:24 +00:00
Commit Graph

10815 Commits

Author SHA1 Message Date
wm4
66810c1550 ao_pulse: reduce requested device buffer size
Same deal as with the previous commit for ALSA.

Untested.
2018-04-15 23:11:33 +03:00
wm4
401bd57d44 ao_alsa: add options for controlling period/buffer size 2018-04-15 23:11:33 +03:00
LAGonauta
e00ca83006 ao/openal: Remove notes on experimentality from the documentation
Also, multi-channel audio should be fast now with the use of the MC
extensions.
2018-04-15 00:57:34 +03:00
LAGonauta
614ad62f89 ao/openal: Add option to set buffering characteristics
One can now set the number of buffers and the buffer size.
This can reduce CPU usage while the total latency stays mostly the same.
Since there are sync mechanisms, A/V sync remains intact and working.

It also modifies the 6.1 channel order, as per the OpenAL spec,
and adds AOPLAY_FINAL_CHUNK support.
2018-04-15 00:57:01 +03:00
LAGonauta
dd357a7d53 ao/openal: Add support for direct channels output
Uses OpenAL Soft's AL_DIRECT_CHANNELS_SOFT extension and can be controlled through
a new CLI option, --openal-direct-channels.
This allows one to send the audio data directly to the desired channel without
effects applied.
2018-04-15 00:57:01 +03:00
Kevin Mitchell
cacb0ad3dc manpage: document vaapi-device
This was left out of e3e2c79 by mistake.
2018-04-08 22:24:04 +03:00
Kevin Mitchell
576dabf654 manpage: move cuda-decode-device with hwdec options 2018-04-08 22:24:04 +03:00
Avi Halachmi (:avih)
9a47023c44
js: implement mp.register_idle
Due to an earlier misinterpretation of the Lua docs as if mp.register_idle
registered a one-shot callback, the JS docs suggested using setTimeout.

But the behavior and Lua docs are such that it's a repeating callback
which fires just before the script thread goes to sleep.

Implement it for JS too.
2018-04-07 16:02:19 -07:00
Avi Halachmi (:avih)
b04f0cad43
js: implement mp.options.read_options 2018-04-07 16:02:19 -07:00
Avi Halachmi (:avih)
9eadc068fa
config: replace config dir lua-settings/ with dir script-opts/
lua-settings/ is still supported, with a deprecation warning.
2018-04-07 16:02:16 -07:00
Tom Yan
e3b3e28deb ao_opensles: remove useless cfg_sample_rate
We should always use the ao-neutral --audio-samplerate option.
2018-04-05 04:35:49 +03:00
wm4
f60826c3a1
client API: add a first class hook API, and deprecate old API
As it turns out, there are multiple libmpv users who saw a need to
use the hook API. The API is kind of shitty and was never meant to be
actually public (it was mostly a hack for the ytdl script).

Introduce a proper API and deprecate the old one. The old one will
probably continue to work for a few releases, but will be removed
eventually.

There are some slight changes to the old API, but if a user followed
the manual properly, it won't break.

Mostly untested. Appears to work with ytdl_hook.
2018-03-26 23:02:23 -07:00
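For illustration, a minimal sketch of how a libmpv client might use the new
hook API, assuming the mpv_hook_add()/mpv_hook_continue() entry points and
the MPV_EVENT_HOOK event as described in client.h; error handling is omitted:

    #include <mpv/client.h>

    /* Sketch: register an "on_load" hook and let playback continue once
     * the client is done with it. */
    static void run_hooks(mpv_handle *h)
    {
        mpv_hook_add(h, 0, "on_load", 50);          /* userdata 0, priority 50 */
        while (1) {
            mpv_event *ev = mpv_wait_event(h, -1);  /* block until an event */
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id == MPV_EVENT_HOOK) {
                mpv_event_hook *hook = ev->data;
                /* ... e.g. rewrite "stream-open-filename" here ... */
                mpv_hook_continue(h, hook->id);     /* unblock the core */
            }
        }
    }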
wm4
6d7cfdfae5 client API: deprecate mpv_get_wakeup_pipe()
I don't think anything even uses it.
2018-03-26 19:47:08 +02:00
wm4
5532d8cffe command: remove an old compatibility hack
Was deprecated 3 releases ago and was spamming warning messages that it'll
be dropped, so it's fine to remove it now.
2018-03-26 19:47:08 +02:00
wm4
4655923d38 manpage: mention how to get multiple video tracks for --lavfi-complex
See #5670.
2018-03-26 19:47:08 +02:00
wm4
52dd38a48a client API: add a new way to pass X11 Display etc. to render API
Hardware decoding things often need access to additional handles from
the windowing system, such as the X11 or Wayland display when using
vaapi. The opengl-cb API had nothing dedicated to this, and used the weird
GL_MP_MPGetNativeDisplay GL extension (which was mpv specific and not
officially registered with OpenGL).

This was awkward, and a pain due to having to emulate GL context
behavior (like needing a TLS variable to store context for the pseudo GL
extension function). In addition (and not inherently due to this), we
could pass only one resource from mpv builtin context backends to
hwdecs. It was also all GL specific.

Replace this with a newer mechanism. It works for all RA backends, not
just GL. The API user can explicitly pass the objects at init time via
mpv_render_context_create(). Multiple resources are naturally possible.

The API uses MPV_RENDER_PARAM_* defines, but internally we use strings.
This is done for 2 reasons: 1. trying to leave libmpv and internal
mechanisms decoupled, 2. not having to add public API for some of the
internal resource types (especially D3D/GL interop stuff).

To remain sane, drop support for obscure half-working opengl-cb things,
like the DRM interop (was missing necessary things), the RPI window
thing (nobody used it), and obscure D3D interop things (not needed with
ANGLE, others were undocumented). In order not to break ABI and the C
API, we don't remove the associated structs from opengl_cb.h.

The parts which are still needed (in particular DRM interop) need to be
ported to the render API.
2018-03-26 19:47:08 +02:00
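For instance, a hedged sketch of how an API user could hand the X11 Display
to the render API at creation time, using the MPV_RENDER_PARAM_* names
introduced here; the surrounding GL setup and error handling are assumed to
exist elsewhere:

    #include <mpv/render_gl.h>
    #include <X11/Xlib.h>

    /* Sketch: pass the window system handle explicitly at init time,
     * instead of the old GL_MP_MPGetNativeDisplay pseudo extension. */
    static mpv_render_context *create_render_ctx(mpv_handle *mpv, Display *x11,
                                                 mpv_opengl_init_params *gl_init)
    {
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, gl_init},
            {MPV_RENDER_PARAM_X11_DISPLAY, x11},    /* e.g. for vaapi hwdec */
            {0}
        };
        mpv_render_context *ctx = NULL;
        if (mpv_render_context_create(&ctx, mpv, params) < 0)
            return NULL;
        return ctx;
    }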
wm4
bfb3a78964 manpage: document that --ao overrides --audio-device
Fixes #5640.
2018-03-15 23:13:53 -07:00
wm4
2c572e2bb1 video: add an option to tune waiting for video timing
Probably mostly useful for the libmpv render API.
2018-03-15 23:13:53 -07:00
wm4
8163b8d390 client API: deprecate qthelper.hpp
It's a WTF that we have something this specific in the API. It could be
argued that we should provide helpers for other language and GUI toolkit
combinations. Obviously that's not going to scale, and it's somewhat
likely that it will bitrot. The rest is said in the API changelog.
2018-03-15 23:13:53 -07:00
Ricardo Constantino
38e5b141c6
DOCS/options: clarify that --end also supports relative time 2018-03-15 14:20:12 +00:00
wm4
2edf00fb94 client API: send MPV_EVENT_SHUTDOWN only once
Before this change, mpv_wait_event() could inconsistently return
multiple MPV_EVENT_SHUTDOWN events to a single mpv_handle, up to the
point of spamming the event queue under certain circumstances. Change
this and just send it exactly once to each mpv_handle.

Some client API users might have weird requirements about destroying
their state asynchronously (and not reacting immediately to the SHUTDOWN
event). This change will help a bit to make this less weird and
surprising.
2018-03-15 00:00:04 -07:00
wm4
4d9c6ab6b9 client API: rename mpv_detach_destroy() to mpv_destroy()
Since this has clearer semantics now, the old name is just clunky and
confusing.
2018-03-15 00:00:04 -07:00
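Taken together with the MPV_EVENT_SHUTDOWN change above, a minimal sketch of
the intended client pattern might look like this (error handling omitted):

    #include <mpv/client.h>

    /* Sketch: each mpv_handle now sees MPV_EVENT_SHUTDOWN exactly once,
     * after which it stops using the handle and releases it. */
    static void event_loop(mpv_handle *h)
    {
        while (1) {
            mpv_event *ev = mpv_wait_event(h, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;                  /* the core is shutting down */
            /* ... handle other events ... */
        }
        mpv_destroy(h);                 /* formerly mpv_detach_destroy() */
    }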
wm4
a7f3cf4737 client API: add mpv_create_weak_client() 2018-03-15 00:00:04 -07:00
wm4
410a1b49ed client API: cleanup mpv_handle termination
This changes how mpv_terminate_destroy() and mpv_detach_destroy()
behave. The doxygen in client.h tries to point out the differences. The
goal is to make this more useful to the API user (making it behave like
refcounting).

This will be refined in follow up commits.

Initialization is unfortunately closely tied to termination, so that
changes as well. This also removes earlier hacks that make sure that
some parts of FFmpeg initialization are run in the playback thread
(instead of the user's thread). This does not matter with standard
FFmpeg, and I have no reason to care about this anymore.
2018-03-15 00:00:04 -07:00
Aman Gupta
b0da883b13 doc: fix formatting of video-frame-info properties 2018-03-11 22:13:12 -07:00
wm4
496b13227b input: minor additions to default key bindings
This adds key bindings for some semi-popular features. It also tries to
clean up some old bindings. For example, w/e for panscan is now changed to
w/W. In all cases, the old bindings are still kept and work, though.

Part of an ongoing attempt to clean up the default key bindings.
See #973 for some context.
2018-03-04 16:27:49 -08:00
wm4
775b86212d video: add option to reduce latency by 1 or 2 frames
The playback start logic explicitly waits until the first frame has been
displayed. Usually this will introduce a wait of 1 vsync. For normal
playback this doesn't matter, but with respect to low latency needs,
this only leads to additional data getting queued up in the demuxer or
network buffers.

Another thing is that the timing logic decodes 1 frame ahead (= 1 frame
extra latency) to determine the exact duration of a frame.

To be fair, there doesn't really seem to be a hard reason why this is
needed. With the current code, enabling the option does lead to A/V
desync sometimes (if the demuxer FPS is too inaccurate), and also frame
drops at playback start in some situations. But this all seems to be
avoidable, if the timing logic were to be rewritten completely, which
should probably happen in the future. Thus the new option comes with the
warning that it can be removed any time. This is also why the option has
"hack" in the name.
2018-03-03 02:38:01 +02:00
wm4
c917992359 manpage: describe how to list/inspect/apply profiles
This is all documented elsewhere in the manpage, but hard to find from
here.
2018-03-03 02:38:01 +02:00
wm4
8288fa6978 options: add a builtin low-latency profile
Well I guess it doesn't help that much.

Also add some stuff that might help to the manpage.

The fundamental problem with some "live" sources (e.g. x11grab) is
actually that the player gets behind initially, and never thinks it has
to catch up. This is also why --untimed can help.
2018-03-03 02:38:01 +02:00
wm4
16eca7139a demux_lavf: add --demuxer-lavf-probe-info=nostreams
Another attempt to make it behave in certain situations.
2018-03-03 02:38:01 +02:00
wm4
4184f8585d DOCS/interface-changes: add note about desyncing audio filters
For example af_loudnorm is a known filter with this behavior.
2018-03-03 02:38:01 +02:00
wm4
b037121430 client API: deprecate opengl-cb API and introduce a replacement API
The purpose of the new API is to make it usable with APIs other than
OpenGL, especially D3D11 and Vulkan. In theory it's now possible to
support other vo_gpu backends, as well as backends that don't use the
vo_gpu code at all.

This also aims to get rid of the dumb mpv_get_sub_api() function. The
life cycle of the new mpv_render_context is a bit different from
mpv_opengl_cb_context, and you explicitly create/destroy the new
context, instead of calling init/uninit on an object returned by
mpv_get_sub_api().

In order to make the render API generic, it's annoyingly EGL-style, and
requires you to pass in API-specific objects to generic functions. This
is to avoid explicit objects like the internal ra API has, because that
sounds more complicated and annoying for an API that's supposed to never
change.

The opengl_cb API will continue to exist for a bit longer, but
internally there are already a few tradeoffs, like reduced
thread-safety.

Mostly untested. Seems to work fine with mpc-qt.
2018-02-28 00:55:06 -08:00
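Below is a rough sketch of the explicit life cycle this introduces (create,
render per frame, free, rather than mpv_get_sub_api() plus init/uninit); the
GL loader callback and the FBO values are placeholders:

    #include <mpv/render_gl.h>

    /* Placeholder for the embedder's GL loader, e.g. something that wraps
     * SDL_GL_GetProcAddress() or eglGetProcAddress(). */
    static void *get_gl_proc(void *ctx, const char *name)
    {
        (void)ctx; (void)name;
        return NULL;
    }

    static void render_once(mpv_handle *mpv)
    {
        mpv_opengl_init_params gl_init = {.get_proc_address = get_gl_proc};
        mpv_render_param init_params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_init},
            {0}
        };
        mpv_render_context *ctx = NULL;
        mpv_render_context_create(&ctx, mpv, init_params); /* explicit create */

        mpv_opengl_fbo fbo = {.fbo = 0, .w = 1280, .h = 720};
        int flip_y = 1;
        mpv_render_param render_params[] = {
            {MPV_RENDER_PARAM_OPENGL_FBO, &fbo},
            {MPV_RENDER_PARAM_FLIP_Y, &flip_y},
            {0}
        };
        mpv_render_context_render(ctx, render_params);     /* draw one frame */

        mpv_render_context_free(ctx);                      /* explicit destroy */
    }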
Akemi
aa974b2aa7 cocoa-cb: make fullscreen resize animation duration configurable 2018-02-28 00:48:44 -08:00
Akemi
938ad6ebc0 cocoa-cb: change border and borderless window styling
The title bar is now within the window bounds instead of outside, same
as QuickTime Player. It supports several standard styles, two dark and
two light ones. Additionally, we have properly rounded corners now and
the borderless window also has a proper window shadow.

Also make the earliest supported macOS version 10.10.

Fixes #4789, #3944
2018-02-28 00:48:44 -08:00
Anton Kindestam
3325c7a912 context_drm_egl: Introduce 30bpp support
This introduces the option --drm-format (currently used only by
context_drm_egl; the vo_drm implementation is pending), which allows you to
pick either an xrgb8888 or an xrgb2101010 visual for --gpu-context=drm.

Requires a recent mesa (18.0.0_rc4 or later) to work.

This also fixes a bug when using --gpu-context=drm on a 30bpp-enabled
mesa (allow_rgb10_configs set to true). Previously it would've set up
an XRGB8888 format at the DRM/GBM level, while a 30bpp EGLConfig would
be picked, resulting in a garbled image.
2018-02-26 23:56:13 -08:00
wm4
fc76d41194 stream_file: add mode for reading appended files
Do this because retrying reading on higher levels (like the demuxer)
usually causes tons of problems. A hack like this is simpler and could
allow removing some of the higher-level retry behavior.

This works by trying to detect whether the file is appended. If we reach
EOF, check if the file size changed compared to the initial value. If it
did, it means the file was appended at least once, and we set the
p->appending flag. If that flag is set, we simply retry reading more
data every time we encounter EOF. The only way to do this is polling,
and we poll at most 10 times, waiting 200ms each time.
2018-02-21 22:57:39 -08:00
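A rough standalone sketch of the polling idea described here, not mpv's
actual stream_file code; the 10-retry/200ms values are the ones from the
message:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Sketch: on EOF, check whether the file grew since it was opened; if so,
     * treat it as "appending" and keep polling for more data. */
    static size_t read_with_append_retry(FILE *f, off_t initial_size,
                                         int *appending, char *buf, size_t n)
    {
        for (int retries = 0;; retries++) {
            size_t r = fread(buf, 1, n, f);
            if (r > 0 || !feof(f))
                return r;
            struct stat st;
            if (!*appending && fstat(fileno(f), &st) == 0 &&
                st.st_size > initial_size)
                *appending = 1;         /* file was appended at least once */
            if (!*appending || retries >= 10)
                return 0;               /* plain EOF, or gave up polling */
            clearerr(f);                /* drop the EOF flag before retrying */
            usleep(200 * 1000);         /* wait 200ms, then poll again */
        }
    }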
Niklas Haas
441e384390 vo_gpu: introduce --target-peak
This solves a number of problems simultaneously:

1. When outputting HLG, this allows tuning the OOTF based on the display
   characteristics.
2. When outputting PQ or other HDR curves, this allows soft-limiting the
   output brightness using the tone mapping algorithm.
3. When outputting SDR, this allows HDR-in-SDR style output, by
   controlling the output brightness directly.

Closes #5521
2018-02-20 22:02:51 +02:00
sfan5
8f9785d128 lua+js: Implement utils.getpid()
Usable for uniquely identifying mpv instances from
subprocesses, controlling mpv with AppleScript, ...

Adds a new mp_getpid() wrapper for cross-platform reasons.
2018-02-13 20:16:01 -08:00
wm4
7b5a2588bd vo: make opengl-cb first in the autoprobing order
This should be helpful for the new OSX Cocoa backend, which uses
opengl-cb internally. Since it comes with a behavior change that could
possibly interfere with libmpv/opengl_cb users, we mark it as explicit
API change.
2018-02-13 17:45:29 -08:00
wm4
4107a8be6c vf_vavpp: select best quality deinterlacing algorithm by default
This switches the default away from "bob" to the best algorithm reported
as supported by the driver. This is convenient for users, and there is
no reason to use something worse by default.

Untested.
2018-02-13 17:45:29 -08:00
wm4
6b2b2b75b9 manpage: remove mention of --vf=eq
This doesn't work anymore.
2018-02-13 17:45:29 -08:00
wm4
d6890c19dd input: add a keybinding to toggle hardware decoding
We sure as hell won't enable hardware decoding by default, but we can
make it more accessible with a key binding.
2018-02-13 17:45:29 -08:00
wm4
830f0aed97 video: make --deinterlace and HW deinterlace filters always deinterlace
Before this, we made deinterlacing dependent on the video codec metadata
(AVFrame.interlaced_frame for libavcodec). So even if --deinterlace=yes
was set, we skipped deinterlacing if the flag wasn't set. This is very
unreliable and there are many streams with flags incorrectly set.

The potential problem is that this might upset people who always enabled
deinterlacing and hoped it worked. But it's likely these people were
screwed by this setting anyway. The new behavior is less tricky and
easier to understand, and thus preferable. Maybe one day we could
introduce a --deinterlace=auto, which does the right thing, but of
course this would be hard to implement (especially with hwdec).

Fixes #5219.
2018-02-13 17:45:29 -08:00
wm4
562f563ff1 DOCS/interface-changes.rst: fix typo 2018-02-13 17:45:29 -08:00
Akemi
c5e4538bc4 cocoa-cb: initial implementation via opengl-cb API
This is meant to replace the old and not properly working vo_gpu/opengl
cocoa backend in the future. The problems are various shortcomings of
Apple's OpenGL implementation and buggy behaviour in certain
circumstances that couldn't be properly worked around. There are also
certain regressions on newer macOS versions from 10.11 onwards:

- awful OpenGL performance with a non-layer-backed context
- a huge amount of dropped frames with an early context flush
- flickering of system elements like the dock or volume indicator
- double buffering not properly working with a non-layer-backed context
- bad performance in fullscreen because of system optimisations

All the problems were caused by using a normal OpenGL context, which
seems somewhat abandoned by Apple, and are fixed by using a layer-backed
OpenGL context instead. Problems that couldn't be fixed could be
properly worked around.

This has all the features our old backend has, sans the wid embedding,
the possibility to disable the automatic GPU switching, and taking
screenshots of the window content. The first was deemed unnecessary by
me for now, since I just use the libmpv API that others can use anyway.
The second is technically not possible atm because we have to pre-allocate
our OpenGL context at a time when the config isn't read yet, so we can't
get the needed property. The third one is a bit tricky because of
deadlocking and it needs to be in sync; hopefully I can work around that
in the future.

This also has at least one additional feature or bit of eye candy: a
properly working fullscreen animation with the native fs. Also, since
this is a direct port of the old backend (of the parts that could be
used), though with adaptations and improvements, it looks a lot cleaner
and easier to understand.

Some credit goes to @pigoz for the initial Swift build support, which
I could improve upon.

Fixes: #5478, #5393, #5152, #5151, #4615, #4476, #3978, #3746, #3739,
#2392, #2217
2018-02-12 04:49:15 -08:00
Akemi
abf2efb107 osx: always deactivate the early opengl flush on macOS
Early flushing only caused problems on macOS, including:
- performance problems and a huge amount of dropped frames
- problems with playing back video files with an fps close to the display
refresh rate
- rendering at twice the rate of the video fps
- the display refresh rate not being properly detected

We always deactivate any early flush on macOS to fix these problems.
2018-02-12 04:49:15 -08:00
Ricardo Constantino
57228b6581
ytdl_hook: add script opt for using manifest URLs
Disabled by default.
This feature was added in 7eb342757, which allowed stream selection
at runtime. The problem with this atm is that FFmpeg will try to demux
the first packet of every track, leading to a noticeable delay when
opening the URL.

This option can be changed to be enabled by default, or removed, when
the HLS/DASH demuxers are improved upstream.
2018-02-11 23:27:37 -08:00
wm4
9f595f3a80 vo_gpu: make screenshots use the GL renderer
Using the GL renderer for color conversion will make sure screenshots
will use the same conversion as normal video rendering. It can do this
for all types of screenshots.

The logic for when to write 16-bit PNGs changes. To approximate the old
behavior, we decide by looking at whether the source video format has more
than 8 bits per component. We apply this logic even for window
screenshots. Also, 16-bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB064. RGB48 is 3 bytes and usually not supported by GPUs for
rendering, so we have to use RGBA64, which forces an alpha channel.

Will break for users who use --target-trc and similar options.

I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.

This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.

Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
2018-02-11 17:45:51 -08:00
Niklas Haas
e3d93fde2f vo_gpu: port HDR tone mapping algorithm from libplacebo
The current peak detection algorithm was very bugged (which contributed
to the excessive cross-frame flicker without long normalization) and
also didn't take into account the frame average brightness level.

The new algorithm both takes into account frame average brightness (in
addition to peak brightness), and also computes the values in a more
stable/correct way. (The old path was basically undefined behavior)

In addition to improving the algorithm, we also switch to hable tone
mapping by default, and try to enable peak computation automatically
whenever possible (compute shaders + SSBOs supported). We also make the
desaturation milder, after extensive testing during libplacebo
development.

I also had to compensate a bit for the representational differences
between mpv and libplacebo (libplacebo treats 1.0 as the reference peak,
but mpv treats it as the nominal peak), but it shouldn't have caused any
problems.

This is still not quite the same as libplacebo, since libplacebo also
allows tagging the desired scene average brightness on the output, and
it also supports reading the scene average brightness from static
metadata (MaxFALL) where available. But those changes are a bit more
involved. It's possible we could also read this from metadata in the
future, but we have problems communicating with AVFrames as it is and I
don't want to touch the mpv colorimetry structs for the time being.
2018-02-05 23:11:18 -08:00
wm4
7019e0dcfe
swresample: limit output size of audio frames
Similar to the previous commit, and for the same reasons. Unlike with
af_scaletempo, resampling does not have a natural frame size, so we set
an arbitrary size limit on output frames. We add a new option to control
this size, although I'm not sure whether anyone will use it, so mark it
for testing only.

Note that we go through some effort to avoid buffering data in
libswresample itself. One reason is that we might have to reinitialize
the resampler completely when changing speed, which drops the buffered
data. Another is that I'm not sure whether the resampler will do the
right thing when applying dynamic speed changes.
2018-02-03 05:01:29 -08:00