This is the first of a series of commits that will change the Cocoa code in
a way that makes it easily embeddable inside parent views. To reach that point,
code must avoid referencing the parent NSWindow since that could be the host
application's window.
--x11-netwm=yes now forces NetWM fullscreen, while --x11-netwm=auto
(detect whether NetWM fullscreen support is available) is the old
behavior and still the default.
See #888.
Apparently this is what users want. When playing at normal speed,
nothing is done. When playing slower than normal, resampling is used
instead, because scaletempo (which does the pitch correction) adds
too many artifacts.
This would play some silence in case video was slower than audio. If
framedropping is already enabled, there's no other way to keep A/V
sync, short of changing audio playback speed (which would give worse
results). The --audiodrop option inserted silence if there was more
than 500ms desync.
This worked somewhat, but I think it was a silly idea after all. Whether
the playback experience is really bad or slightly worse doesn't really
matter. There also was a subtle bug with PTS handling, that apparently
caused A/V desync anyway at ridiculous playback speeds.
Just remove this feature; nobody is going to use it anyway.
E.g. --loop-file=2 will play the file 3 times (one time normally, and 2
repeats).
Minor syntax issue: "--loop-file 5" won't work, you have to use
"--loop-file=5". This is because "--loop-file" still has to work for
compatibility, so the "old" syntax with a space between option name and
value can't work.
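To illustrate (the file name is just a made-up example):
    mpv --loop-file=5 video.mkv    # works: plays the file 6 times total
    mpv --loop-file 5 video.mkv    # does not work: since plain "--loop-file"
                                   # is still accepted, "5" is presumably
                                   # parsed as a separate (file) argument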
It's just confusing; users are encouraged to edit input.conf instead
(changing the argument to the "add" command).
Update input.conf to keep the old behavior.
Until now, you could override only level 3 with --osd-status-msg. Extend
this, and add --osd-msg1 to --osd-msg3 (one for each OSD level). OSD
level 0 always means disable OSD, so that isn't included.
--osd-msg3 corresponds to --osd-status-msg, but they're not exactly the
same. To allow more customization, --osd-msgN do not include the OSD
symbol. The symbol can be manually added with "${osd-sym-cc}". We keep
the "old" option for some short-term compatibility.
--osd-msg1 should be particularly useful; for example you could do:
--osd-msg1='${?pause==yes:${osd-sym-cc}}'
to display a "paused" symbol when paused, and nothing during normal
playback. (Although admittedly, the syntax is quite a bit of work.)
With default settings, this allows you to hit the 100% mark (with
default --softvol-max in the middle) even if you've reached min or max
volume before. This is because 50 is not divisible by 3 (the old default),
but is divisible by 2 (the new default).
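(Concretely: the 50-unit distance between a clamped endpoint and the mark
is covered exactly by 25 steps of 2, while steps of 3 always miss it,
since 50 is not a multiple of 3.)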
Not really sure why there still can be issues with higher --softvol-max
and --volstep=1, but this is where I stop caring.
The memcpy() is actually not enough: the types are incompatible, and no
memcpy, union, etc. will change that. (Although no real compiler will
ever break this.) Attempt to make this theoretically correct by actually
using a struct pointer. It's not the same struct type, but supposedly
it's ok, because all struct pointers always have the same size and
representation in standard C.
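A minimal sketch of the trick (names are made up; not the actual mpv code):
    struct dummy;    /* deliberately never defined; only the pointer
                        type is needed */
    void write_ptr(void *dst, void *ptr)
    {
        /* dst really points to a different struct pointer type, but all
           struct pointers have the same size and representation in
           standard C, so this is (supposedly) well-defined */
        *(struct dummy **)dst = (struct dummy *)ptr;
    }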
Don't worry, your ~/.config/... paths are safe. This merely removes
handling of $XDG_CONFIG_DIRS for global paths.
Maybe there is a better solution for this, like still including the
"traditional" config dir. But I will leave the fine reading of this
(crappy) spec and fixing the code accordingly to someone else. So, if
anyone has interest in getting this behavior back, you will have to
write a patch. This patch should _also_ not break expected behavior.
Fixes #1060.
--hls-bitrate=min/max lets you select the min or max bitrate. That's it.
Something more sophisticated might be possible, but is probably not even
worth the effort.
This catches a few cases which basically call:
m_property_strdup_ro(..., ..., NULL)
which would return NULL strings. This should generally be avoided
(although it's allowed due to reasons), and it seems most callers
actually intend this to mean M_PROPERTY_UNAVAILABLE.
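The intended pattern looks roughly like this (sketch against mpv's property
API; the property name and getter are hypothetical):
    static int mp_property_example(m_option_t *prop, int action, void *arg,
                                   MPContext *mpctx)
    {
        char *name = get_example_name(mpctx);  /* may return NULL */
        if (!name)
            return M_PROPERTY_UNAVAILABLE;     /* instead of a NULL string */
        return m_property_strdup_ro(action, arg, name);
    }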
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
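Schematically (a sketch with made-up names, not the real mp_image_params
fields):
    #include <stdbool.h>
    enum stereo_mode { STEREO_MONO, STEREO_SBS, STEREO_TAB };
    struct video_params_sketch {
        enum stereo_mode stereo_in;   /* as flagged in the file */
        enum stereo_mode stereo_out;  /* what the VO should get */
    };
    /* a conversion filter is inserted only when the modes differ */
    static bool needs_conversion(const struct video_params_sketch *p)
    {
        return p->stereo_in != p->stereo_out;
    }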
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska mode to vf_stereo3d mode are
probably wrong, and some are missing.
Assuming that Matroska modes, vf_stereo3d input modes, and vf_stereo3d
output modes are all the same might be an oversimplification - we'll see.
See issue #1045.
A (hopefully) temporary hack to make stream switching delays tolerable.
It's not clear how this should be handled (either executing a precise
seek on track switching, or always enabling all streams), so get this
issue out of the way for now by picking a rather low value.
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
Add the --cache-secs option, which literally overrides the value of
--demuxer-readahead-secs if the stream cache is active. The default
value is very high (10 seconds), which means it can act as network
cache.
Remove the old behavior of trying to pause once the byte cache runs
low. Instead, do something similar with the demuxer cache. The nice
thing is that we can guess how many seconds of video it has cached,
and we can make better decisions. But for now, apply a relatively
naive heuristic: if the cache is below 0.5 secs, pause, and wait
until at least 2 secs are available.
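In sketch form (made-up names; the thresholds are the ones above):
    #include <stdbool.h>
    /* hysteresis: pause below 0.5s of estimated readahead, resume at 2s */
    static bool update_cache_pause(bool paused_for_cache, double cached_secs)
    {
        if (!paused_for_cache && cached_secs < 0.5)
            return true;               /* start buffering */
        if (paused_for_cache && cached_secs >= 2.0)
            return false;              /* enough data again, resume */
        return paused_for_cache;       /* otherwise keep current state */
    }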
Note that due to timestamp reordering, the estimated cached duration
of video might be inaccurate, depending on the file format. If the
file format has DTS, it's easy, otherwise the duration will seemingly
jump back and forth.
--demuxer-readahead-secs now controls how many seconds the demuxer
should read ahead. This is based on the raw packet
timestamps. It's not always very exact. For example, h264 in Matroska
does not store any linear timestamps (only PTS values which are going
to be reordered by the decoder), so this heuristic is usually off by
several hundred milliseconds.
The decision whether to read ahead is basically OR-ed with the other
--demuxer-readahead-packets options. Change the manpage descriptions
to subtly convey these semantics.
Since the display FPS is currently detected on X11 only (and even there
it's known to be wrong on certain setups), it seems like a good idea to
make this user-configurable.
This mostly uses the same idea as with vo_vdpau.c, but much simplified.
On X11, it tries to get the display framerate with XF86VM, and limits
the frequency of new video frames against it. Note that this is an old
extension, and is confirmed not to work correctly with multi-monitor
setups. But we're using it because it was already around (it is also
used by vo_vdpau).
This attempts to predict the next vsync event by using the time of the
last frame and the display FPS. Even if that goes completely wrong,
the results are still relatively good.
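The prediction itself is trivial (sketch; names made up):
    /* estimate when the display will next flip */
    static double predict_next_vsync(double last_flip_time, double fps)
    {
        return last_flip_time + 1.0 / fps;
    }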
On other systems, or if the X11 code doesn't return a display FPS, a
framerate of 1000 is assumed. This is infinite for all practical
purposes, and means that only frames which are definitely too late are
dropped. This probably has worse results, but is still useful.
"--framedrop=yes" is basically replaced with "--framedrop=decoder". The
old framedropping mode is kept around, and should perhaps be improved.
Dropping on the decoder level is still useful if decoding itself is too
slow.
The VO is run inside its own thread. It also does most of video timing.
The playloop hands the image data and a realtime timestamp to the VO,
and the VO does the rest.
In particular, this allows the playloop to do other things, instead of
blocking for video redraw. But if anything accesses the VO during video
timing, it will block.
This also fixes vo_sdl.c event handling; but that is only a side-effect,
since reimplementing the broken way would require more effort.
Also drop --softsleep. In theory, this option helps if the kernel's
sleeping mechanism is too inaccurate for video timing. In practice, I
haven't ever encountered a situation where it helps, and it just burns
CPU cycles. Worse, it's probably actively harmful, because
it prevents the libavcodec decoder threads from doing real work.
Side note:
Originally, I intended that multiple frames can be queued to the VO. But
this is not done, due to problems with OSD and certain other features.
OSD in particular is simply designed in a way that it can be neither
timed nor copied, so you do have to render it into the video frame
before you can draw the next frame. (Subtitles have no such restriction.
sd_lavc was even updated to fix this.) It seems the right solution to
queuing multiple VO frames is rendering on VO-backed framebuffers, like
vo_vdpau.c does. This requires VO driver support, and is out of scope
of this commit.
As a consequence, the VO has a queue size of 1. The existing video queue
is just needed to compute frame duration, and will be moved out in the
next commit.
Completely useless, and could accidentally be enabled by cycling
framedrop modes. Just get rid of it.
But still allow triggering the old code with --vd-lavc-framedrop, in
case someone asks for it. If nobody does, this new option will be
removed eventually.
The parser can be called with dst (the target) set to NULL if the option
should be verified only. The code didn't respect this, and could result
in crashes when used in config profiles or filter sub-options.
Fixes#981.
Almost nothing was left of it.
The only thing this commit actually removes is support for reading
input commands from stdin. But you can emulate this via:
--input-file=/dev/stdin --input-terminal=no
However, this won't work on Windows. Just use a named pipe.
Useful for Windows stuff. Actually, ENCA support should catch this, but,
well, whatever, everyone seems to hate ENCA.
Detection with BOM is trivial, although it needs some hackery to
integrate it with the existing autodetection support. For one, change
the default value of --sub-codepage to make this easier.
Probably fixes issue #937 (the second part).
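The BOM check itself is a simple prefix match (sketch, not the actual
detection code):
    #include <string.h>
    static const char *detect_bom(const char *buf, size_t len)
    {
        static const struct { const char *sig; size_t n; const char *enc; }
        boms[] = {
            { "\xEF\xBB\xBF", 3, "utf-8" },
            { "\xFF\xFE",     2, "utf-16le" },
            { "\xFE\xFF",     2, "utf-16be" },
        };
        for (size_t i = 0; i < 3; i++) {
            if (len >= boms[i].n && memcmp(buf, boms[i].sig, boms[i].n) == 0)
                return boms[i].enc;
        }
        return NULL;
    }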
This adds a thread to the demuxer which reads packets asynchronously.
It will do so until a configurable minimum packet queue size is
reached. (See options.rst additions.)
For now, the thread is disabled by default. There are some corner cases
that still have to be fixed, such as cache behavior with webradios.
Note that most interaction with the demuxer is still blocking, so if
e.g. the network dies, the player will still freeze. But this change will
make it possible to remove most causes for freezing.
Most of the new code in demux.c actually consists of weird caches to
compensate for thread-safety issues (with the previously single-threaded
design), or to avoid blocking by having to wait on the demuxer thread.
Most of the changes in the player are due to the fact that we must not
access the source stream directly. The demuxer thread already accesses
it, and the stream stuff is not thread-safe.
For timeline stuff (like ordered chapters), we enable the thread for the
current segment only. We also clear its packet queue on seek, so that
the remaining (unconsumed) readahead buffer doesn't waste memory.
Keep in mind that insane subtitles (such as ASS typesetting muxed into
mkv files) will practically disable the readahead, because the total
queue size is considered when checking whether the minimum queue size
was reached.
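The check amounts to something like (sketch; names made up):
    #include <stdbool.h>
    #include <stddef.h>
    /* read another packet only while below the minimum queue duration
       AND the total queue size (all streams together) is under the hard
       limit - which is why bulky subtitle packets can stall readahead */
    static bool want_readahead(double queued_secs, size_t queued_bytes,
                               double min_secs, size_t max_bytes)
    {
        return queued_secs < min_secs && queued_bytes < max_bytes;
    }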
Something like "char *s = ...; isdigit(s[0]);" triggers undefined
behavior, because char can be signed, and thus s[0] can be a negative
value. The is*() functions require unsigned char _or_ EOF. EOF is a
special value outside of unsigned char range, thus the argument to the
is*() functions can't be a char.
This undefined behavior can actually trigger crashes if the
implementation of these functions e.g. uses lookup tables, which are
then indexed with out-of-range values.
Replace all <ctype.h> uses with our own custom mp_is*() functions added
with misc/ctype.h. As a bonus, these functions are locale-independent.
(Although currently, we _require_ C locale for other reasons.)
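For illustration (mp_isdigit shown as it is presumably implemented; the
real misc/ctype.h may differ in detail):
    #include <ctype.h>
    /* locale-independent, and safe for any char value */
    static inline int mp_isdigit(char c)
    {
        return c >= '0' && c <= '9';
    }
    void example(const char *s)
    {
        /* isdigit(s[0]) would be UB if s[0] happens to be negative */
        isdigit((unsigned char)s[0]);  /* the classic portable workaround */
        mp_isdigit(s[0]);              /* the mpv way: no cast needed */
    }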