JSON IPC works on Windows now, and although the transports for each
platform have similar characteristics, they unfortunately have different
names (Unix domain sockets on Linux/Unix vs. named pipes on Windows.)
Hopefully this change better reflects the purpose of the option too,
since with --input-ipc-server, mpv acts as an IPC server that can
service many simultaneous clients (as opposed to --input-file, which can
only do one-to-one IPC.)
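For illustration, a minimal client of this server on the Unix side could
look like the sketch below (the socket path and the command are arbitrary
examples, not anything this change mandates):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        /* mpv started with --input-ipc-server=/tmp/mpv-socket (example
         * path); any number of such clients may be connected at once. */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/mpv-socket", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            return 1;

        /* The protocol is one JSON object per line. */
        const char *cmd =
            "{ \"command\": [\"get_property\", \"playback-time\"] }\n";
        write(fd, cmd, strlen(cmd));

        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s", buf);  /* JSON reply (and possibly events) */
        }
        close(fd);
        return 0;
    }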
See --lavfi-complex option.
This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interacting with
libavfilter).
Future possible plans might include:
- freely switch tracks by providing some sort of default track graph
label
- automatically enable audio visualization
- automatically mix audio or stack video when multiple tracks are
selected at once (similar to how multiple sub tracks can be selected)
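A minimal usage sketch via the client API (the filter graph string follows
libavfilter syntax, and the [aid1]/[aid2]/[ao] labels are the ones from the
manpage additions; treat the exact graph as an illustration only):

    #include <mpv/client.h>

    /* Mix the first two audio tracks into one output (illustration only;
     * must be set before playback starts, since there is no dynamic
     * reconfiguration yet). */
    static void play_with_mix(mpv_handle *ctx, const char *file)
    {
        mpv_set_option_string(ctx, "lavfi-complex", "[aid1] [aid2] amix [ao]");
        const char *cmd[] = {"loadfile", file, NULL};
        mpv_command(ctx, cmd);
    }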
This is probably the 3rd time the user-visible behavior has changed. This
time, switch back, because not normalizing seems to be the behavior users
actually expect.
Too many problems. Well, actually it's just Linux audio systems which
cause problems, and exclusive audio access on other platforms.
In any case, it seems you have to do some manual configuration if you
want multichannel audio output.
This is labeled as "name of the service in broadcasting (channel name)"
and exported as a generic tag by libavformat demuxers, notably when
handling MPEG-TS streams from DVB.
If you do "mpv /bla/", and then branch out into sub-directories using
playlist navigation, and then use quit-watch-later, then playing
the same directory did not resume from the previous point. This was
because resuming is based on the path hash, so a path prefix can't be
detected when resuming the parent directory.
Solve this by writing each path prefix when playing directories is
involved. (This includes all parent paths, so interestingly, "mpv /"
would also resume in the above example.)
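Roughly, the idea is the following (the function name is made up; the real
code is in mpv's watch-later handling):

    #include <stdio.h>
    #include <string.h>

    /* Illustration: record a resume entry for the played directory and for
     * every parent prefix of it, so resuming still works when playback is
     * later started from a parent directory. */
    static void write_prefix_entries(const char *dir)
    {
        char buf[4096];
        snprintf(buf, sizeof(buf), "%s", dir);
        for (;;) {
            printf("resume entry: %s\n", buf[0] ? buf : "/");
            char *slash = strrchr(buf, '/');
            if (!slash)
                break;
            *slash = '\0';  /* strip the last path component */
        }
    }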
Something like this was requested multiple times, and I want it too.
Requested. It works like --sub-paths. This will also load audio files
from a "audio" sub directory in the config file (because the same code
as for subtitles is used, and it also had such a feature).
Fixes #2632.
This is only for specific Hauppauge cards. Judging from the comments,
it's unclear who is actively using this feature. Get it out of the way.
Anyone who still wants to use this should complain. Keeping this code
would not cause terribly much additional work, and it could be restored
again. (But not if the request comes months later.)
Most of this is explained in the DOCS additions.
This gives us slightly more sanity, because there is less interaction
between the various parts. The goal is getting rid of the video_offset
entirely.
The simplification extends to the user API. In particular, we don't need
to fix missing parts in the API, such as the lack of a seek command
that seeks relative to the start time. All these things are now
transparent.
(If someone really wants to know the real timestamps/start time, new
properties would have to be added.)
They are evil and should be eradicated. Some of these were pretty dumb
anyway.
There are probably some more around in platform specific code or other
code not enabled by default on Linux.
Update msg.c state immediately if a terminal or logging setting is set.
Until now, this was delayed until mp[v]_initialize() was called. When
using the client API, you could easily miss logged error messages, even
when logging was initialized early on by calling
mpv_request_log_messages().
(Properties can't be used for this either, because properties do not
work before mpv_initialize().)
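For reference, the intended client API usage is roughly this (a sketch;
the log level and the option set here are arbitrary):

    #include <stdio.h>
    #include <mpv/client.h>

    int main(void)
    {
        mpv_handle *ctx = mpv_create();
        /* Request log messages before mpv_initialize(), so errors from
         * early option/config handling are not lost. */
        mpv_request_log_messages(ctx, "v");
        mpv_set_option_string(ctx, "vo", "opengl");
        mpv_initialize(ctx);
        for (;;) {
            mpv_event *ev = mpv_wait_event(ctx, -1);  /* wait forever */
            if (ev->event_id == MPV_EVENT_LOG_MESSAGE) {
                mpv_event_log_message *msg = ev->data;
                printf("[%s] %s: %s", msg->prefix, msg->level, msg->text);
            } else if (ev->event_id == MPV_EVENT_SHUTDOWN) {
                break;
            }
        }
        mpv_terminate_destroy(ctx);
        return 0;
    }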
Thanks to rcombs, ffmpeg now properly supports DASH and we can
remove our hacks for it and use it by default whenever
available. If you don't like this for whatever reason, you
can get the "normal" streams back with --ytdl-format=best .
Closes #579. Closes #1321. Closes #2359.
Useless. Sometimes it might be useful to make some extremely broken
files work, but on the other hand --no-correct-pts is sufficient for
these cases.
While we still need some of the code for AVI, the "auto" mode in
particular inflated the size of the code.
The manpage entry explains this.
(Maybe this option could be always enabled and removed. I don't quite
remember what valid use-cases there are for just disabling audio
entirely, other than that this is also needed for audio decoder init
failure.)
The vf_format suboption is replaced with --video-output-levels (a global
option and property). In particular, the parameter is removed from
mp_image_params. The mechanism is moved to the "video equalizer", which
also handles common video output customization like brightness and
contrast controls.
The new code is slightly cleaner, and the top-level option is slightly
more user-friendly than the vf_format sub-option was.
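For instance (a sketch; the value names should be the ones listed in the
manpage):

    #include <mpv/client.h>

    /* Force full-range output levels; "auto" and "limited" should be the
     * other accepted values for --video-output-levels. */
    static void force_full_range(mpv_handle *ctx)
    {
        /* As a startup option ... */
        mpv_set_option_string(ctx, "video-output-levels", "full");
        /* ... or at runtime, since it is also a property. */
        mpv_set_property_string(ctx, "video-output-levels", "full");
    }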
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA
unusable with older FFmpeg versions in some newer mpv releases.
VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
The "PWD" enviornment variable is described by POSIX. We don't go to
length to verify its contents, but just trust it.
This affects the logic for resuming playback.
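Conceptually it boils down to something like this (a sketch, not the
actual mpv code):

    #include <stdlib.h>
    #include <unistd.h>

    /* Prefer $PWD (the logical working directory per POSIX), falling back
     * to getcwd() if it is unset; no attempt is made to validate it. */
    static const char *current_dir(char *buf, size_t size)
    {
        const char *pwd = getenv("PWD");
        if (pwd)
            return pwd;           /* just trust it */
        return getcwd(buf, size); /* physical path as fallback */
    }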
This was completely broken. It was checked manually in some config
loading paths, so it appeared to work. But the intention was always to
completely disable reading from the normal config dir. This logic was
broken in commit 2263f37d.
The manual checks are actually redundant, and are not needed if
--no-config is implemented properly - remove them.
Additionally, the change to load the libmpv defaults from an embedded
profile also failed to set "config=no". The option is marked as not
being settable by a config file, and the libmpv default profile is
parsed as a config file, so this option was rejected. Fix it by removing
the CONF_NOCFG flag. (Alternatively, m_config_set_profile() could be
changed not to set the "config file" flag by default, but I'm not
bothering with this.)
If this mode is enabled, the player tries to strictly synchronize video
to display refresh. It will adjust playback speed to match the display,
so if you play 23.976 fps video on a 24 Hz screen, playback speed is
increased by approximately 1/1000. Audio will be resampled to keep up
with playback.
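The 1/1000 figure follows directly from the nominal rates, since
23.976 fps is really 24000/1001 fps:

    \frac{24}{24000/1001} = \frac{1001}{1000} = 1.001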
This is different from the default sync mode, which will sync video to
audio, with the consequence that video might skip or repeat a frame once
in a while to make video keep up with audio.
This is still unpolished. There are some major problems as well; in
particular, mkv VFR files won't work well. The reason is that Matroska
is terrible and rounds timestamps to milliseconds. This makes it rather
hard to guess the framerate of a section of video that is playing. We
could probably fix this by just accepting jittery timestamps (instead
of explicitly disabling the sync code in this case), but I'm not ready
to accept such a solution yet.
Another issue is that we are extremely reliant on OS video and audio
APIs working in an expected manner, which of course is not too often
the case. Consequently, the new sync mode is a bit fragile.
Since we're on the topic of consistency, I've seen multiple users
complain about the presence of this period, which does not really match
other programs' behavior.
Add --demuxer-max-packets and --demuxer-max-bytes, which control the
maximum size of the packet queue. These can be helpful to avoid
excessive memory usage.
Memory usage is the reason why there's a limit in the first place. If a
file is more or less broken, and audio and video don't line up, the
decoders will fill up the packet queue trying to read more audio or
video, and the maximum sizes are required to avoid unbounded memory
allocation. Being able to override the maximum sizes is useful, either
for restricting memory usage further, or for enlarging the sizes when
attempting to play various broken files.
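As a usage sketch (the values are arbitrary examples; see the manpage for
the accepted ranges):

    #include <stdint.h>
    #include <mpv/client.h>

    /* Cap the demuxer packet queue at roughly 50 MiB and 8000 packets. */
    static void limit_packet_queue(mpv_handle *ctx)
    {
        int64_t max_bytes = 50 * 1024 * 1024;
        mpv_set_option(ctx, "demuxer-max-bytes", MPV_FORMAT_INT64, &max_bytes);
        mpv_set_option_string(ctx, "demuxer-max-packets", "8000");
    }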
Remove --demuxer-readahead-packets and --demuxer-readahead-bytes. These
were a bit useless. They could force a minimum packet queue size, but
controlling the queue size with --demuxer-readahead-secs is much nicer.
It's fairly certain nobody ever used these options.
VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working.
The code needs libavcodec support which was added recently (to FFmpeg git,
libav doesn't support it).
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
Allow setting an arbitrary amount, instead of the fixed 50%.
This is not strictly backwards compatible. The defaults don't change,
but the --cache/--cache-default options now set the readahead portion.
So in practice, users who configured this until now will see the
double the amount of cache being used, _plus_ the 75MB default backbuffer
will be in use.
Probably makes users happy who want bitmap subtitles to show up in the
screen margins, and stops them from doing idiotic crap with vf_expand.
Fixes #2098.
Until now, if a stream wasn't seekable, but the stream cache was enabled
(--cache), we've enabled seeking anyway. The idea was that at least
short seeks would typically fall within the cache. And if not, the user
was out of luck and terrible things happened. In other words, it was
unreliable.
Be stricter about it and remove this behavior. Effectively, this will
for example disable seeking in piped data.
Instead of trying to be clever, add an --force-seekable option, which
will always enable seeking if the user really wants it.
See manpage additions. This is mainly useful for vo_opengl_cb, but can
also be applied to vo_opengl.
On a side note, gl_hwdec_load_api() should stop using a name string, and
instead always use the IDs. This should be cleaned up another time.
Now there's a "canonical" table for mapping the names, that other code
can use, without having to rely too much on option code magic.
Also, use the central HWDEC constants, instead of magic values. (There
used to be semi-ok reasons to do this, but now it makes no sense
anymore.)