Basically rewrite all the code supporting the cache (i.e. anything other
than the ringbuffer logic). The underlying design is untouched.
Note that the old cache2.c (on which this code is based) already had a
threading implementation. This was mostly unused on Linux, and had some
problems, such as using shared volatile variables for communication and
uninterruptible timeouts, instead of using locks for synchronization.
This commit does use proper locking, while still retaining the way the
old cache worked. It's basically a big refactor.
Simplify the code too. Since we don't need to copy stream ctrl args
anymore (we're always guaranteed a shared address space now), lots of
annoying code just goes away. Likewise, we don't need to care about
sector sizes. The cache uses the high-level stream API to read from
other streams, and sector sizes are handled transparently.
Before this commit, the cache was franken-hacked on top of the stream
API. You had to use special functions (like cache_stream_fill_buffer()
instead of stream_fill_buffer()), which would access the stream in a
cached manner.
The whole idea about the previous design was that the cache runs in a
thread or in a forked process, while the cache-aware functions made sure
the stream instance looked consistent to the user. If you used the
normal functions instead of the special ones while the cache was
running, you were out of luck.
Make it a bit more reasonable by turning the cache into a stream on its
own. This makes it behave exactly like a normal stream. The stream
callbacks call into the original (uncached) stream to do work. No
special cache functions or redirections are needed. The only different
thing about cache streams is that they are created by special functions,
instead of being part of the auto_open_streams[] array.
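Schematically, the wrapping works about like this (a minimal sketch with made-up
names such as cache_priv and cache_fill_buffer, without the ring buffer or any
actual caching logic):

    /* Simplified stand-ins for mpv's stream types; the real definitions differ. */
    struct stream {
        void *priv;                                       /* per-implementation state */
        int (*fill_buffer)(struct stream *s, char *buf, int len);
    };

    /* Private data of the cache stream: a reference to the wrapped stream.
       (The real cache also keeps the ring buffer and position bookkeeping.) */
    struct cache_priv {
        struct stream *original;
    };

    /* The cache stream's own fill_buffer callback. A real implementation would
       try to satisfy the read from the ring buffer first; this sketch always
       falls through to the original (uncached) stream via the same API. */
    static int cache_fill_buffer(struct stream *cache, char *buf, int len)
    {
        struct cache_priv *c = cache->priv;
        return c->original->fill_buffer(c->original, buf, len);
    }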
To make things simpler, remove the threading implementation, which was
messed into the code. The threading code could perhaps be kept, but I
don't really want to have to worry about this special case. A proper
threaded implementation will be added later.
Remove the cache enabling code from stream_radio.c. Since enabling the
cache involves replacing the old stream with a new one, the code as-is
can't be kept. It would be easy to enable the cache by
requesting a cache size instead (which is also much simpler). But nobody uses
stream_radio.c, I can't even test this thing, and the cache is
probably not really important for it either.
Some code in mplayer.c did stuff like accessing (dvd_priv_t *)st->priv.
Do this indirectly by introducing STREAM_CTRL_GET_DVD_INFO. This is
extremely specific to DVD, so it's not worth abstracting this further.
This is a preparation for turning the cache into an actual stream, which
simply wraps the cached stream. There are other streams which are
accessed in the way DVD was, at least TV/radio/DVB. We assume these
can't be used with the cache. The code doesn't look thread-safe or fork
aware.
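The access pattern changes roughly as in this sketch (the info struct, its
fields, and get_dvd_volume_id() are made up for illustration; only the
STREAM_CTRL_GET_DVD_INFO name comes from this commit):

    #include <stdio.h>

    #define STREAM_OK          1
    #define STREAM_UNSUPPORTED 0
    #define STREAM_CTRL_GET_DVD_INFO 100   /* value is illustrative */

    struct dvd_info {                      /* hypothetical payload */
        char volume_id[32];
    };

    struct stream {
        void *priv;                        /* e.g. dvd_priv_t in stream_dvd.c */
        int (*control)(struct stream *s, int cmd, void *arg);
    };

    /* Old style: mplayer.c reached directly into the stream's private data:
           dvd_priv_t *d = (dvd_priv_t *)st->priv;
       New style: ask the stream via a control; non-DVD streams simply report
       "unsupported" instead of having their priv misinterpreted. */
    static int get_dvd_volume_id(struct stream *st, char *out, size_t out_size)
    {
        struct dvd_info info;
        if (st->control(st, STREAM_CTRL_GET_DVD_INFO, &info) != STREAM_OK)
            return -1;
        snprintf(out, out_size, "%s", info.volume_id);
        return 0;
    }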
show_chapters, show_tracks, and show_playlist are killed and replaced
with the properties chapter-list, track-list, and playlist. The code
and the output of these stay the same; this is just moving a lot of
code around and reducing the number of properties.
The "old" commands will still be supported for a while (to avoid making
everyone angry), so handle them with the legacy layer. Add something to
suppress printing the legacy warnings for these commands.
Slightly better output when printing ${metadata}. Print each metadata
item as "name: value", instead of the raw list. It's still not very
great, though. The old format is still available through ${=metadata}
for things which dare to use the broken slave mode.
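The change in output boils down to the following (the flat key/value array is
just an illustration of how metadata tags are stored):

    #include <stdio.h>

    /* Illustrative tag storage: alternating key/value strings. */
    static const char *tags[] = { "title", "Some Song", "artist", "Some Artist", NULL };

    int main(void)
    {
        /* New ${metadata} output: one "name: value" line per item... */
        for (int i = 0; tags[i] && tags[i + 1]; i += 2)
            printf("%s: %s\n", tags[i], tags[i + 1]);
        /* ...while ${=metadata} keeps printing the old raw list. */
        return 0;
    }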
This adds the property 'clock', which returns the current
local time as the string hh:mm.
Additionally, the key binding 'shift' + 'o' was added to display
the clock as '[hh:mm]'.
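Conceptually the property amounts to formatting the local time with strftime
(a standalone sketch, not the actual property code):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char clock[8];
        time_t now = time(NULL);
        strftime(clock, sizeof(clock), "%H:%M", localtime(&now));  /* "hh:mm" */
        printf("[%s]\n", clock);   /* what the new key binding shows on the OSD */
        return 0;
    }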
This commit addresses some issues users had with the previous
implementation in commit c39efb9. Here are the changes:
* Use Quartz Event Taps to remove Media Key events mpv handles from
the global OS X queue. This prevents conflicts with iTunes. I did this on
the main thread since it is mostly idling. It's the playloop thread that
actually does all the work so there is no danger of blocking the event tap
callback.
* Introduce `--no-media-keys` switch so that users can disable all of mpv's
media key handling at runtime (some prefer iTunes for example).
* Use mpv's bindings so that users can customize what the media keys do via
input.conf. Current bindings are:
MK_PLAY cycle pause
MK_PREV playlist_prev
MK_NEXT playlist_next
An additional benefit of this implementation is that it is completely handled
by the `macosx_events` file instead of `macosx_application`, making the
project organization more straightforward.
This branch heavily refactors the subtitle code (both loading and
rendering), and adds support for a few new formats through FFmpeg.
We don't remove any of the old code yet. There are still some subtleties
related to subreader.c to be resolved: code page detection & conversion,
timing post-processing, UTF-16 subtitle support, support for the -subfps
option. Also, SRT reading and loading ASS via libass should be turned
into proper demuxers. (SRT is needed because Libav's is gravely broken,
and we want ASS loading via libass to cover full libass format support.
Both should be demuxers which are probed _before_ libavformat, so that
all subtitles can be loaded through the demuxer infrastructure, and
libavformat subtitles don't need to be treated in a special way.)
Until now, this happened only when the -no-ass option was used. This
difference in behavior doesn't make much sense, so change it so that
whether -no-ass is used or not doesn't matter. (-no-ass enables the OSD
subtitle renderer, which has the terminal fallback, while the normal
path is video only.)
The changes in set_osd_subtitle() and reinit_video_chain() are for
resetting the state correctly when switching between video/no-video.
Audio and video had their own (very similar) functions to initialize an
AVPacket (ffmpeg's packet struct) from a demux_packet (mplayer's packet
struct). Add a common function for these.
Also use this function for sd_lavc_conv. This is actually a functional
change, as some libavformat subtitle demuxers add weird out-of-band
stuff as side data.
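Such a helper looks roughly like the sketch below (function name and the
demux_packet fields are simplified; only av_new_packet/av_q2d are real
libavcodec/libavutil calls):

    #include <string.h>
    #include <libavcodec/avcodec.h>

    /* Simplified version of mplayer's demux_packet (the real one has more fields). */
    struct demux_packet {
        unsigned char *buffer;
        int len;
        double pts;        /* in seconds */
        double duration;   /* in seconds */
    };

    /* Copy payload and timestamps from a demux_packet into an AVPacket,
       rescaling seconds into the codec/stream time base tb.
       (Handling of missing timestamps is omitted here.) */
    static int init_av_packet(AVPacket *dst, struct demux_packet *src, AVRational tb)
    {
        if (av_new_packet(dst, src->len) < 0)      /* allocates a padded buffer */
            return -1;
        memcpy(dst->data, src->buffer, src->len);
        dst->pts      = (int64_t)(src->pts / av_q2d(tb));
        dst->duration = (int64_t)(src->duration / av_q2d(tb));
        return 0;
    }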
Before this, subtitle packets were returned as data ptr/len pairs, and
mplayer.c got the rest (pts and duration) directly from the demuxer
data structures. Then mplayer.c reassembled the packet data structure
again.
Pass packets directly instead. The mplayer.c side stays a bit awkward,
because the (now by default unused) DVD path keeps getting in the way.
In demux.c there's lots of weird stuff (3 functions that read packets,
really?), but we want to keep the code equivalent for now to avoid
hitting weird issues and corner cases.
The -no-ass option used to disable all use of libass completely. This
doesn't work this way anymore, and the text subtitle path has an
inherent dependency on libass. Currently -no-ass does 3 things:
1. Strip tags and formatting on display, and use a separate renderer for
the result. (Which might be the terminal, or libass via OSD code.)
2. Don't load attached fonts from Matroska files.
3. Use subreader.c instead of libass for reading .ass files.
1. and 2. are ok and (probably) what the user wants, but 3. doesn't
really make sense anymore. subreader.c reads .ass files just fine, but
then does some strange things to them (something about coalescing and
re-adding newlines?), leading to even more broken display with -no-ass.
Instead of fighting with subreader.c, just use libass as loader.
This means subassconvert.c is split in sd_srt.c and sd_microdvd.c. Now
this code is involved in the sub conversion chain like sd_movtext is.
The invocation of the converter in sd_ass.c is removed.
This requires some other changes to make the new sub converter code work
with loading external subtitles. Until now, subtitles loaded via
subreader.c were assumed to be in plaintext, or for some formats, in ASS
(except in -no-ass mode). Then these were added to an ASS_Track. Change
this so that subtitles are always kept in their original format (as far as
decoders/converters for them are available), and pass every sub event
read by subreader.c as a packet to the dec_sub.c subtitle chain.
This removes differences between external/demuxed and -ass/-no-ass code
paths further.
After killing the non-functional AR support in c8fd9e5 I got many complaints, so
this adds AR support back in (and it works). I am using the HIDRemote class by
Felix Schwarz and that part of the code is under the BSD license. I slightly
modified it replacing [NSApplication sharedApplication] with NSApp. The code
of the class is quite complex (probably because it had to deal with all the
edge cases with IOKit) but it works nicely as a black box.
In a later commit I'll remove the deprecation warnings caused by HIDRemote's
usage of Gestalt.
Check out `etc/input.conf` for the default bindings.
Apple Remote functionality is automatically compiled in when cocoa is enabled.
It can be disabled at runtime with the `--no-ar` option.
On OSX with Cocoa enabled keyDown events are now handled with
addLocalMonitorForEventsMatchingMask:handler:. This makes it possible to respond
to events even when there is no VO initialized but the GUI is focused.
Add a basic infrastructure for subtitle converters. These converters
work sort-of like decoders, except that they produce packets instead
of subtitle bitmaps. They are put in front of actual decoders.
Start with sd_movtext. 4 lines of code are blown up to a 55-line file,
but fortunately this is not going to be that bad for the following
converters.
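For a rough idea of what such a converter does, here is a stripped-down sketch
of the mov_text case (the packet struct and function are simplified
placeholders; mov_text packets start with a 16-bit big-endian length of the
UTF-8 text):

    #include <stdlib.h>
    #include <string.h>

    struct sub_packet {            /* simplified; the real code passes demux_packets */
        unsigned char *data;
        int len;
    };

    /* Strip the 2-byte length header and emit a plain-text packet, which the
       next stage (e.g. the ASS decoder) can consume. Returns 0 on success. */
    static int movtext_convert(const struct sub_packet *in, struct sub_packet *out)
    {
        if (in->len < 2)
            return -1;
        int text_len = (in->data[0] << 8) | in->data[1];
        if (text_len > in->len - 2)
            text_len = in->len - 2;             /* clamp against truncated packets */
        out->data = malloc(text_len + 1);
        if (!out->data)
            return -1;
        memcpy(out->data, in->data + 2, text_len);
        out->data[text_len] = '\0';
        out->len = text_len;
        return 0;
    }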
OPT_STRING_VALIDATE actually did nothing. This made -vo opengl crash or
misbehave when passing an invalid value for the lscale, cscale or 3dlut-
size (the only users for this option type).
The code added with this commit was either blatantly forgotten with the
commit introducing this option type, or somehow lost.
Make the sub decoder stuff independent from sh_sub (except for
initialization of course). Sub decoders now access a struct sd only,
instead of getting access to sh_sub. The glue code in dec_sub.c is
similarly independent from osd.
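The decoder-facing interface now looks roughly like this (field and callback
names are simplified; the real struct sd/sd_functions carry more members):

    /* Each subtitle decoder/converter only sees this struct plus a small set
       of callbacks; there is no access to sh_sub or the OSD anymore. */
    struct sd {
        const struct sd_functions *driver;  /* the decoder implementation */
        void *priv;                         /* decoder-private state */
        const char *codec;                  /* codec name from the demuxer header */
        char *extradata;                    /* codec extradata, if any */
        int extradata_len;
    };

    struct sd_functions {
        int  (*init)(struct sd *sd);
        void (*decode)(struct sd *sd, void *data, int len, double pts, double duration);
        void (*uninit)(struct sd *sd);
    };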
Some simplifications are made. For example, the switch_id stuff is
unneeded: the frontend code just has to make sure to call osd_changed()
any time subtitles are switched.
This is also preparation for introducing subtitle converters. It's much
cleaner to completely separate demuxer header/renderer glue/decoders
for this purpose, especially since sub converters might completely
change how demuxer headers have to be interpreted.
Also pass data as demux_packets. Currently, this doesn't help much, but
libavcodec converters might need scary stuff like packet side data, so
it's perhaps better to go with passing packets.
Subtitle files are opened in mplayer.c, not using the demuxer
infrastructure in general. Pretend that this is not the case (outside of
the loading code) by opening a pseudo demuxer that does nothing. One
advantage is that the initialization code is now the same, and there's
no confusion about what the difference between track->stream,
track->sh_sub and mpctx->sh_sub is supposed to be.
This is a bit stupid, and it would be much better if there were proper
subtitle demuxers (there are many in recent FFmpeg, but not Libav). So
for now this is just a transition to a more proper architecture. Look
at demux_sub like an artificial limb: it's ugly, but don't hate it - it
helps you to get on with your life.
This unifies the subtitle rendering path. Now all subtitle rendering
goes through sd_ass.c/sd_lavc.c/sd_spu.c.
Before that commit, the spudec.h functions were used directly in
mplayer.c, which introduced many special cases. Add sd_spu.c, which is
just a small wrapper connecting the new subtitle render API with the
dusty old vobsub decoder in spudec.c.
One detail that changes is that we always pass the palette as extra
data, instead of passing the libdvdread palette as pointer to spudec
directly. This is a bit roundabout, but actually makes the code simpler
and more elegant: the difference between DVD and non-DVD dvdsubs is
reduced.
Ideally, we would just delete spudec.c and use libavcodec's DVD sub
decoder. However, DVD playback with demux_mpg produces packets
incompatible to lavc. There are incompatibilities the other way around
as well: packets from libavformat's vobsub demuxer are incompatible to
spudec.c. So we define a new subtitle codec name for demux_mpg subs,
"dvd_subtitle_mpg", which only sd_spu can decode.
There is actually code in spudec.c to "assemble" fragments into complete
packets, but using the whole spudec.c is easier than trying to move this
code into demux_mpg to fix subtitle packets.
As additional complication, Libav 9.x can't decode DVD subs correctly,
so use sd_spu in that case as well.
It appears demux_mpg doesn't output timestamps for subtitles. The vobsub
code handled this by doing its own PTS calculations. This code is absent
from the normal subtitle decoder path. Copy this code into the normal
path, so that we can unify the subtitle decoder paths in a later commit.
Decoding subtitles with sd_lavc when playing DVD with demux_mpg still
doesn't work.
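The core of that fallback, much simplified (the real calculation in the vobsub
path is more involved; this only shows the idea of giving packets without
timestamps a usable display time):

    /* mpv's "no timestamp" marker; the actual definition is a large negative double. */
    #define MP_NOPTS_VALUE (-1e300)

    /* If the demuxer delivered no PTS for a subtitle packet, derive one from
       the current video PTS so the packet still gets shown at a defined time. */
    static double sub_pts_or_fallback(double packet_pts, double video_pts)
    {
        return packet_pts == MP_NOPTS_VALUE ? video_pts : packet_pts;
    }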
The -no-ass switch used to disable any use of libass for text subtitles.
This is not really the case anymore, because libass is now always
involved when rendering text. The only remaining use of -no-ass is
disabling styling or showing subtitles on the terminal. On the other
hand, the old subtitle rendering path is a big reason why the subtitle
code is still a big mess with an awful number of obscure special cases.
In order to simplify it, remove the old subtitle rendering code, and
always go through sd_ass.c. Basically, we use ASS_Track as central data
structure for storing text subtitles instead of struct sub_data. This
also makes libass mandatory for all text subs, even if they are printed
to the terminal in -no-video mode. (We could add something like sd_text
to avoid this, but it's not worth the trouble.)
struct sub_data and subreader.c are still around, even its ASS/SSA
reader. But struct sub_data is freed right after converting it to
ASS_Track. The internal ASS reader actually can handle some obscure
cases libass can't, like files encoded in UTF-16.
The core deselected all streams on initialization, and then selected the
streams it actually wanted. This was no problem for
demux_mkv/demux_lavf, but old demuxers (like demux_asf) could lose some
packets. The problem is that these demuxers can buffer some data on
initialization, which then is flushed on track switching. Fix this by
explicitly avoiding deselecting a wanted stream.
Most of these are rather questionable; the rest you rarely need to set
manually. You still can set all of them with -lavdopts-o (because
libavcodec has AVOptions for them).
Playing something with "mpv f1.mkv f2.mkv --gapless-audio --volume=20"
caused the volume to be reset when playing a new file. Normally, the
volume should not be reset (unless explicitly requested with per-file
options), and without either --gapless-audio or --volume it works as
expected.
The underlying problem is that volume was saved only when the AO was
uninitialized, and also the volume was always set when starting a file.
Fix this by saving the volume when playback ends, and when the audio
is reinitialized. To make sure the volume is never restored twice or
saved in the wrong situation, introduce INITIALIZED_VOL.
Also note that this volume saving and restoring only happens if the
--volume option is used. mixer.c does its own bookkeeping of volume.
The main reason for this is that the volume option could be reset by
per-file options (see manpage), and mixer.c doesn't know anything
about this stuff. This is probably dumb, and maybe some things could
be simplified. But for now this will work.
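Schematically, the new bookkeeping works like this sketch (flag and struct
names are illustrative, not the actual mpv fields):

    #define INITIALIZED_VOL (1 << 0)     /* illustrative flag bit */

    struct playback_state {
        unsigned initialized_flags;
        float opt_volume;        /* value of the --volume option */
        float current_volume;    /* what the mixer currently reports */
    };

    /* Restore --volume at most once per file. */
    static void maybe_restore_volume(struct playback_state *st)
    {
        if (st->initialized_flags & INITIALIZED_VOL)
            return;                              /* already restored, never do it twice */
        st->current_volume = st->opt_volume;
        st->initialized_flags |= INITIALIZED_VOL;
    }

    /* Save the volume when playback ends or audio is reinitialized, so the
       next file keeps the volume the user last set instead of resetting it. */
    static void save_volume(struct playback_state *st)
    {
        if (st->initialized_flags & INITIALIZED_VOL) {
            st->opt_volume = st->current_volume;
            st->initialized_flags &= ~INITIALIZED_VOL;
        }
    }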
When AAC is streamed over HTTP, using libavformat defaults is
pathetically slow. One solution for that is skipping probing and using
the mimetype to identify that it's AAC instead. This is what we did
before this commit (and ffmpeg does it too, but their logic is too
"inaccessible" for mpv).
This is still pretty fragile though. Make it a bit more robust by
requiring minimal probing. A probescore of 25 is reached after feeding
2 KB to libavformat (instead of > 500 KB for the normal probescore), so
use that. This is done only when streaming AAC from HTTP to reduce the
possibility of weird breakages for other formats.
Also reduce analyzeduration. The default analyzeduration will make
libavformat read lots of data, which makes playback start slow. So we
set analyzeduration to a low value. On the other hand, doing that for
other formats is risky, because there are unspecified effects with
certain "strange" formats (like transport streams). So we do this only
if we're streaming AAC from HTTP as well.
tl;dr libavformat is shit for media players
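In terms of the libavformat API, the tweak amounts to roughly this before
avformat_open_input() (the mime-type check and the numbers are simplified;
probesize and max_analyze_duration are real AVFormatContext fields):

    #include <string.h>
    #include <libavformat/avformat.h>

    static void tune_for_aac_over_http(AVFormatContext *avfc, const char *mime_type)
    {
        if (mime_type && strncmp(mime_type, "audio/aac", 9) == 0) {
            /* A few KB are enough to reach a usable probescore for raw AAC
               (ADTS), instead of waiting for >500 KB of data. */
            avfc->probesize = 8 * 1024;
            /* Don't let libavformat read lots of data just to guess stream
               parameters; that delays playback start over HTTP. */
            avfc->max_analyze_duration = 1 * AV_TIME_BASE;   /* ~1 second */
        }
    }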
This can control whether demux_lavf should use the HTTP mime type to
determine the format, instead of probing the data with the libavformat
API. Do this to allow easier debugging in case the mimetype is
incorrect. (This is done only for AAC streams right now.)
In commit 0e07189, I made the status line always print a newline,
instead of cutting the output at 80 columns (or if stderr is a terminal,
whatever width the terminal reports). This is better in the case the
output goes into a log file or a pipe.
This caused problems for people who want to pipe raw video to mpv, so
change it again. (Not sure why they won't use FIFOs instead.)
Now output untrimmed lines if the slave mode flag is set, which makes
sense to do, too. The current slave mode is still on life support,
though.
GetTimer() is generally replaced with mp_time_us(). Both calls return
microseconds, but the latter uses int64_t, is defined to never wrap,
and never returns 0 or negative values.
GetTimerMS() has no direct replacement. Instead the other functions are
used.
For some code, switch to mp_time_sec(), which returns the time as double
float value in seconds. The returned time is offset to program start
time, so there is enough precision left to deliver microsecond
resolution for at least 100 years. Unless it's cast to a float
(or the CPU reduces precision), which is why we still use mp_time_us()
out of paranoia in places where precision is clearly needed.
Always switch to the correct time types. The whole point of the new timer
calls is that they don't wrap, and storing microseconds in unsigned int
variables would negate this.
In some cases, remove wrap-around handling for time values.
This was used by some VOs to do timing of cursor autohiding, but we
recently moved that out of the VOs. Even though this mechanism might
be a good idea and could be needed again in future (but for what?),
it's unused now. So better just get rid of it.
Make OS specific timer code export a mp_raw_time_us() function, and
add generic implementations of GetTimer()/GetTimerMS() using this
function. New mpv code is supposed to call mp_time_us() in situations
where precision is absolutely needed, or mp_time_sec() otherwise.
Make it so that mp_time_us() will return a value near program start.
We don't set it to 0 though to avoid confusion with relative vs.
absolute time. Instead, pick an arbitrary offset.
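The generic layer then looks roughly like this (clock_gettime stands in for the
OS specific mp_raw_time_us() backends):

    #include <stdint.h>
    #include <time.h>

    /* OS specific backend: raw monotonic time in microseconds
       (shown here with clock_gettime as a stand-in). */
    static uint64_t mp_raw_time_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
    }

    static uint64_t raw_time_offset;

    /* Pick an offset so values start near (but not at) 0: numbers stay small,
       while 0 and negative values remain reserved as "invalid". */
    static void mp_time_init(void)
    {
        raw_time_offset = mp_raw_time_us() - 10 * 1000000;   /* start at ~10 s */
    }

    static int64_t mp_time_us(void)
    {
        return (int64_t)(mp_raw_time_us() - raw_time_offset);
    }

    /* Seconds as double, relative to program start; precise enough to keep
       microsecond resolution for a very long program lifetime. */
    static double mp_time_sec(void)
    {
        return mp_time_us() / 1e6;
    }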
Move the test program in timer-darwin.c to timer.c, and modify it to
work with the generic timer functions.
If VO deinterlacing is unavailable, try to insert vf_yadif.
If vf_lavfi is available, actually use vf_yadif from libavfilter. The
libavfilter version of this filter is faster, more correct, etc., so it
is preferred. Unfortunately vf_yadif obviously doesn't support
VFCTRL_GET/SET_DEINTERLACE, and with the current state of the
libavfilter API, it doesn't look like there is any simple way to
emulate it. Instead, we simply insert the filter with a specific label,
and if deinterlacing is to be disabled, the filter is removed again by
label.
This won't do the right thing if the user inserts any deinterlacing
filter manually (except native vf_yadif, which understands the VFCTRL).
For example, with '-vf lavfi=yadif', pressing 'D' (toggle deinterlacing)
will just insert a second deinterlacer filter. In these cases, the user
is supposed to map a command that toggles his own filter instead of
using 'D' and the deinterlace property.
The same applies if the user wants to pass different parameters to the
deinterlacer filters.
If a complete filter description is passed to -vf-del, search for an
existing filter with the same label or the same name/arguments, and
delete it. The rules for filter entry equality are the same as with
the -vf-toggle option.
E.g.
-vf-add gradfun=123:gradfun=456
-vf-del gradfun=456
does what you would expect.
Can be used to refer to filters by name. Intended to be used when the
filter chain is changed at runtime.
A label can be assigned to a filter by prefixing it with '@name:', where
'name' is a user-chosen identifier. For example, a filter added with
'-vf-add @label1:gradfun=123' can be removed with '-vf-del @label1'.
If a filter with an already existing label is added, the existing filter
is replaced with the new filter (this happens for both -vf-add and
-vf-pre). If a filter is replaced, the new filter takes the position of
the old filter, instead of being appended/prepended to the filter chain
as usual. For -vf-toggle, labels are compared if at least one of the
filters has a label; otherwise they are compared by filter name and
arguments (like before). This means two filters are never considered
equal if one has a label and the other one does not.
This prefers ./ on Windows if-and-only-if the file being searched for
already exists there. (If the mpv directory is non-writable, the result
is still intended behavior.) This change is transparent to most users
because the user has to move the config files there intentionally, and
if anything, not being detected would be the surprising behavior.
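The lookup change is essentially the following check (the paths and the helper
name are made up for illustration):

    #include <stdio.h>
    #include <sys/stat.h>

    /* Prefer "./" for a config file only if that file already exists there;
       otherwise fall back to the usual per-user config directory. */
    static void resolve_config_file(const char *name, char *out, size_t out_size)
    {
        char local[512];
        struct stat st;
        snprintf(local, sizeof(local), "./%s", name);
        if (stat(local, &st) == 0)
            snprintf(out, out_size, "%s", local);                        /* found next to mpv */
        else
            snprintf(out, out_size, "%s/%s", "/path/to/userdir", name);  /* placeholder */
    }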