If, for example, the physical format is set to stereo, the reported
multichannel layout will actually be stereo. This fixes itself only after
the physical format is changed.
Reduces (but likely does not remove) the danger of rounding intermediate
values down to 8 bit. This is important for cscale, or any other
processing that might store raw YUV values in framebuffers.
Fixes #1918.
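As a hedged illustration (not mpv's actual code; libepoxy is assumed only to
keep the snippet self-contained), an intermediate FBO backed by a 16 bit per
component texture avoids the truncation:

    #include <epoxy/gl.h>

    // Sketch: back an intermediate FBO with a 16 bit per component texture,
    // so values stored between passes are not rounded down to 8 bit.
    static void create_fbo_tex16(GLuint fbo, GLuint tex, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_SHORT, NULL);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
    }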
ao_coreaudio uses AudioUnit - the OSX software mixer. In theory, it
supports multichannel audio just fine. But in practice, this might be
disabled by default, and the user is supposed to select a multichannel
base format in the "Audio MIDI Setup" utility.
This option attempts to change this setting automatically. Some possible
disadvantages and caveats are listed in the manpage additions. It is off
by default, since changing this might be rather bad behavior for a
normal application.
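Conceptually, the automatic change boils down to a CoreAudio property write
like the following sketch (it assumes a suitable stream and multichannel
format were already found during device enumeration; this is not the actual
mpv code):

    #include <CoreAudio/CoreAudio.h>

    // Sketch: switch the physical format of an audio stream. The caller
    // provides the stream ID and the desired format description.
    static OSStatus set_physical_format(AudioStreamID stream,
                                        const AudioStreamBasicDescription *fmt)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioStreamPropertyPhysicalFormat,
            .mScope    = kAudioObjectPropertyScopeGlobal,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        return AudioObjectSetPropertyData(stream, &addr, 0, NULL,
                                          sizeof(*fmt), fmt);
    }

Having to restore the previous format on exit is the kind of caveat the
manpage additions talk about.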
They are not really interesting. At least one user complained about the
log noise resulting from use with shell scripts, which connect and
disconnect immediately.
If, for example, the audio settings are set to 5.1 output, but the
hardware does 8 channels natively (HDMI), the reported channel
layout will have 2 dummy channels. To avoid falling back to stereo,
we have to write audio in this format to the device.
Some audio APIs explicitly require you to add dummy channels. These are
not rendered, and only exist for the sake of the audio API or hardware
strangeness. At least ALSA, Sndio, and CoreAudio seem to have them.
This commit is preparation for using them with ao_coreaudio.
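For illustration, here is what a padded layout looks like in the
5.1-over-HDMI case described above (the identifiers are made up for this
example, not mpv's actual enum names):

    // Hypothetical 8-channel device layout carrying 5.1 audio: the two NA
    // entries are dummy channels that are never rendered.
    enum speaker { FL, FR, FC, LFE, BL, BR, NA };
    static const enum speaker layout_5_1_on_8ch[8] =
        { FL, FR, FC, LFE, BL, BR, NA, NA };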
The result is a bit messy. libavresample/libswresample don't have a good
API for this; avresample_set_channel_mapping() is pretty useless.
Although in theory you can use it to add and remove channels, you
can't set the channel counts. So we do the ordering ourselves by making
sure the audio data is planar, and by swapping the plane pointers. This
requires lots of messiness to get the conversions in place. Also, the
input reordering is still done with the "old" method, and doesn't
support padded channels - hopefully this will never be needed. (I tried
to come up with cleaner solutions, but compared to my other attempts,
the final commit is not that bad.)
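A rough sketch of the plane-swapping part (reduced to the core idea; the
real code additionally has to force planar formats and handle conversions):

    #include <stdint.h>
    #include <string.h>

    #define MAX_CH 16   // illustrative bound on the channel count

    // Reorder planar audio without copying samples: output plane n becomes
    // input plane order[n].
    static void reorder_planes(uint8_t **planes, const int *order, int nch)
    {
        uint8_t *tmp[MAX_CH];
        for (int n = 0; n < nch; n++)
            tmp[n] = planes[order[n]];
        memcpy(planes, tmp, nch * sizeof(planes[0]));
    }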
ca_label_to_mp_speaker_id() checked whether the last entry was >= 0, but
actually this condition was never false, since MP_SPEAKER_ID_UNKNOWN0 is
not negative.
This was a mistake, it should definitely be using the device namespace
rather than the file namespace. As it says in the docs, all pipe names
must start with \\.\pipe\
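For example (the pipe name and buffer parameters are illustrative), creating
the listening end in the device namespace looks like this:

    #include <windows.h>

    // Named pipes must be created as \\.\pipe\<name> (device namespace);
    // note the doubled backslashes in the C string literal.
    static HANDLE create_ipc_pipe(void)
    {
        return CreateNamedPipeA("\\\\.\\pipe\\mpv-socket",
                                PIPE_ACCESS_DUPLEX,
                                PIPE_TYPE_BYTE | PIPE_READMODE_BYTE,
                                PIPE_UNLIMITED_INSTANCES,
                                4096, 4096, 0, NULL);
    }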
Path expansion (like "~/dir/" in config file) was used inconsistently,
so the cache directory wasn't always created correctly. Fix this by
moving the path expansion from load_file() to its callers.
Since commit 7381db60, strings like "~desktop/" were expanded as
platform-specific paths by mpv. Apparently this similarity to standard
Unix shell expansion caused confusion, so change it to "~~desktop/". The
shell doesn't expand this, so it should be better.
And split the Cocoa and Unix cases. Simplify the Cocoa case slightly by
calling mpv_main directly, instead of passing a function pointer. Also
add a comment explaining why Cocoa needs a special case at all.
This unbreaks compiling the command line player and libmpv at the same
time. The problem was that doing so silently disabled the OSX
application thing - but the command line player cannot use the
vo_opengl Cocoa backend without it.
The OSX application code is basically dead in libmpv, but it's not
that much code anyway.
If you want an mpv binary that does not create an OSX application
singleton (and a menu etc.), you must disable cocoa
completely, as cocoa can't be used anyway in this case.
This now stores caches for multiple ICC profiles, potentially all the
user has ever used. The big use case for this is users with multiple
monitors. The old logic would mandate recomputing the LUT and discarding
the cache whenever mpv was dragged from one screen to another.
This also avoids having to save and check the ICC profile itself, since
the file name already uniquely determines it.
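A hedged sketch of the idea: the cache file is keyed on the profile's file
name, so each profile keeps its own entry (get_basename() is a hypothetical
helper, and the ".3dlut" suffix is just an example):

    #include <stdio.h>

    // Hypothetical helper, assumed to return the file name part of a path.
    const char *get_basename(const char *path);

    // Hypothetical naming scheme: one LUT cache file per ICC profile file
    // name, so switching screens switches cache files instead of
    // invalidating a single one.
    static void icc_cache_path(char *buf, size_t size,
                               const char *cache_dir, const char *icc_path)
    {
        snprintf(buf, size, "%s/%s.3dlut", cache_dir, get_basename(icc_path));
    }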
This will essentially make screenshot-tag-colorspace also affect the
"screenshot window" command, where possible.
Unfortunately, it's completely incompatible with icc-profile, due to API
limitations of ffmpeg (we can only give it an enum of well-known
primaries, rather than an actual ICC profile or primaries).
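What ffmpeg does accept, roughly, is per-frame tagging with well-known enum
values; a minimal sketch (the AVFrame fields are real, the BT.709 choice is
just an example):

    #include <libavutil/frame.h>

    // ffmpeg only takes enums of well-known primaries/transfer/matrix here;
    // there is no way to pass an actual ICC profile through this API.
    static void tag_frame_bt709(AVFrame *frame)
    {
        frame->color_primaries = AVCOL_PRI_BT709;
        frame->color_trc       = AVCOL_TRC_BT709;
        frame->colorspace      = AVCOL_SPC_BT709;
    }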
This should take care of the endless complaints about the default
location for screenshots (and will of course create new ones).
If the screenshot-template is set to an absolute path, the directory
won't be used. So this should be reasonably compatible.
So that the user realizes where they come from, or can find them at all.
This was a common complaint, and this is the most lazy solution. Better
suggestions for a default template are welcome.
win32 has a special function for this.
I'm not sure about OSX - it seems ~/Desktop can be hardcoded, and the
OSX GUI actually localizes the _displayed_ path in its UI.
For Unix, there is not much to be done, or is there?
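For win32, a hedged sketch of what the special function looks like (whether
it is this modern shell call or the older CSIDL-based variant is an
implementation detail; error handling trimmed):

    #include <shlobj.h>
    #include <knownfolders.h>

    // Resolve the desktop directory. The returned wide string must be
    // freed by the caller with CoTaskMemFree().
    static PWSTR get_desktop_path(void)
    {
        PWSTR path = NULL;
        if (FAILED(SHGetKnownFolderPath(&FOLDERID_Desktop, 0, NULL, &path)))
            return NULL;
        return path;
    }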
Somewhat less ifdeffery, higher flexibility. Now there are 3 separate
config file resolvers for 3 platforms (unix, win, osx), and they can
still interact with each other somewhat. For example, OSX for now uses
most of Unix, but adds the OSX bundle path.
This can be extended to resolve very specific platform paths, such as
location of the desktop.
Most of the Unix specific code moves to path-unix.c.
The behavior should be the same - if not, it is likely a bug.
(Not sure why it worked without this when I tested the previous
changes.)
Untested, but should be fine. This is equivalent to what is done on e.g.
panscan changes.
I think this used to be quite important, because the ancient VfW support
in MPlayer used to output flipped frames. This code has been dead in mpv
for quite some time (because VfW decoders were removed, and the --flip
option was dropped too), so get rid of it.
Currently, the wayland backend needs extra work to avoid drawing more
often than the wayland frame callback allows. (This is not ideal, but
will be fixed at a later time.)
Unify this with the start_frame callback added for cocoa. Some details
change for the better. For example, if a frame is dropped, and a redraw
is done afterwards, the actually correct frame is redrawn, instead of
whatever was in the textures from before the dropped frame.
With --idle --force-window, or when started from the bundle, the cocoa
code dropped the first frame. This resulted in a black frame on start
sometimes.
The reason was that the live resizing/redrawing code was invoked, which
simply set skip_swap_buffer to true, blocking the display of whatever was
going to be rendered next. Normally this is done so that the following
works:
1. vo_opengl draws a frame, releases the GL lock
2. live resizing kicks in, redraws the frame
3. vo_opengl wants to call SwapBuffers, which would present a stale buffer
and overwrite what the live resizing code just drew
This is solved by setting skip_swap_buffer in 2., and querying it in 3.
Fix this by resetting the skip_swap_buffer flag at a known good point: when
vo_opengl starts drawing a new frame.
The start_frame function returns bool, so that it can be merged with
is_active in a following commit.
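A simplified sketch of the fixed flow (the flag name follows the commit
message; the surrounding structure and present() are made up for
illustration):

    #include <stdbool.h>

    struct gl_ctx { bool skip_swap_buffer; };

    void present(struct gl_ctx *ctx);   // hypothetical: the actual swap

    // Known good point: a new frame is about to be drawn, so clear any
    // stale skip request left over from live resizing.
    static bool start_frame(struct gl_ctx *ctx)
    {
        ctx->skip_swap_buffer = false;
        return true;    // bool, so it can be merged with is_active later
    }

    static void swap_buffers(struct gl_ctx *ctx)
    {
        if (ctx->skip_swap_buffer)
            return;     // live resizing already presented a newer frame
        present(ctx);
    }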
Commit f1746741de changed the drop
logic to have more slack (drop more frames, but less frequently) to prevent
drops due to timing jitter when the clip and screen have similar rates.
However, if the clip has a higher rate than the screen (or just a higher
playback rate), then that policy hurts smoothness, since these "chunked
drops" look worse than dropping one frame at a time.
This patch restores the old drop logic when the playback frame rate is
more than ~5% above the screen refresh rate, which solves this issue.
Fixes #1897
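The resulting policy selection boils down to a rate comparison like this
sketch (names and the exact threshold are illustrative, not the actual
code):

    #include <stdbool.h>

    // Near-identical rates: keep the slack/"chunked" drops that protect
    // against timing jitter. More than ~5% above the display rate: fall
    // back to the old one-frame-at-a-time drops, which look smoother.
    static bool use_chunked_drops(double playback_fps, double display_fps)
    {
        return playback_fps <= display_fps * 1.05;
    }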