In my opinion, config.h inclusions should be kept to a minimum. MPlayer
code really liked including config.h everywhere, though, even in often
used header files. Try to reduce this.
Since m_option.h and options.h are extremely often included, a lot of
files have to be changed.
Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
The tmsg stuff was for the internal gettext() based translation system,
which nobody ever attempted to use and thus was removed. mp_gtext() and
set_osd_tmsg() were also for this.
mp_dbg was once enabled in debug mode only, but since we have a log
level for enabling debug messages, it seems utterly useless.
The OSD style settings depend on the PlayRes, simply because all style
values are implicitly scaled by the PlayResY in libass. Also, the OSC
changes the PlayResY in certain situations, so something could go wrong.
But not sure if this actually matters in practice.
This simplifies things, although it is slightly less efficient (probably
uses a bit more memory).
This also happens to fix that the OSC dropped the libass cache on every
frame.
This doesn't have much value. It can't be accessed by anything other than
the actual subtitle renderer (sd_ass.c). sd_ass.c could create the
renderer itself, except that we apparently want to save memory (and some
font loading time) when using ordered chapters or multiple subtitle
tracks.
This readds a more or less completely new dvdnav implementation, though
it's based on the code from before commit 41fbcee. Note that this is
rather basic, and might be broken or not quite usable in many cases.
Most importantly, navigation highlights are not correctly implemented.
This would require changes in the FFmpeg dvdsub decoder (to apply a
different internal CLUT), so supporting it is not really possible right
now. And in fact, I don't think I ever want to support it, because it's
a very small gain for a lot of work. Instead, mpv will display fake
highlights, which are an approximate bounding box around the real
highlights.
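As an illustration of what a "fake highlight" amounts to (a sketch
only, not the actual implementation; the struct and function names are
made up), take the bounding box over all real highlight rectangles and
draw a single translucent box over that area:

    struct rect { int x0, y0, x1, y1; };

    /* Union of n highlight rectangles -> one approximate bounding box,
     * which is then drawn as a single translucent overlay. */
    static struct rect highlight_bbox(const struct rect *r, int n)
    {
        struct rect box = r[0];
        for (int i = 1; i < n; i++) {
            if (r[i].x0 < box.x0) box.x0 = r[i].x0;
            if (r[i].y0 < box.y0) box.y0 = r[i].y0;
            if (r[i].x1 > box.x1) box.x1 = r[i].x1;
            if (r[i].y1 > box.y1) box.y1 = r[i].y1;
        }
        return box;
    }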
Some things, like mouse input or switching audio/subtitle streams via
the dvdnav VM, are not supported.
Might be quite fragile on transitions: if dvdnav initiates a transition,
and doesn't give us enough mpeg data to initialize video playback, the
player will just quit.
This is added only because some users seem to want it. I don't intend to
make mpv a good DVD player, so the very basic minimum will have to do.
How about you just convert your DVD to proper video files?
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the kind of awkward mp_get_av_frame_pkt_ts() function by
mp_pts_from_av(), which simply converts timestamps the way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
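For illustration, the two conversion directions might look roughly like
this (a sketch; mpv's real helpers and its MP_NOPTS_VALUE constant
differ in details, so the names here are stand-ins):

    #include <string.h>
    #include <math.h>
    #include <libavutil/avutil.h>       /* AV_NOPTS_VALUE */
    #include <libavutil/rational.h>     /* AVRational, av_q2d() */

    #define NOPTS (-1e300)              /* stand-in for mpv's "no PTS" value */

    /* mpv double (seconds) -> libavcodec int64_t in units of *tb, if given. */
    static int64_t pts_to_av(double pts, const AVRational *tb)
    {
        if (pts == NOPTS)
            return AV_NOPTS_VALUE;
        if (tb && tb->num > 0 && tb->den > 0)
            return llrint(pts / av_q2d(*tb));
        int64_t res;                    /* old method: reinterpret the bits */
        memcpy(&res, &pts, sizeof(res));
        return res;
    }

    /* libavcodec int64_t -> mpv double (seconds); the inverse of the above. */
    static double pts_from_av(int64_t pts, const AVRational *tb)
    {
        if (pts == AV_NOPTS_VALUE)
            return NOPTS;
        if (tb && tb->num > 0 && tb->den > 0)
            return pts * av_q2d(*tb);
        double res;
        memcpy(&res, &pts, sizeof(res));
        return res;
    }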
This is not needed anymore, because we decided that the PAR of the
decoded video matters, and not the PAR of the filtered video that
arrives at the VO.
Use sh_stream over sh_sub. Use dec_sub (and mpctx->d_sub) instead of the
stream header. This aligns the subtitle code with the recent audio and
video refactoring.
sh_sub still has the decoder context, though. This is because we want to
avoid reinit when switching segments with ordered chapters. (Reinit is
fast, except for creating the ASS_Renderer, which in turn triggers
fontconfig.) Not sure how much this matters, though, because the initial
segment switch will lazily initialize the decoder anyway.
We found that the stretching - although it usually improves the looks of
the fonts - is incorrect.
On DVD, subtitles can cover the full area of the picture, and they have
the same pixel aspect as the movie itself.
Too bad many commercially released DVDs use bitmap fonts made with the
wrong pixel aspect (i.e. assuming 1:1) - --stretch-dvd-subs will make
these prettier then.
The previous code used the output video's pixel aspect for stretching
purposes, breaking rendering with e.g. -vf scale in the chain. Now
subtitles are stretched using the input video's pixel aspect only,
matching the intentions of the original subtitle author.
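The intended math is roughly the following (an illustrative sketch, not
the actual rendering code; "par" means the decoded video's pixel aspect
ratio, width scale over height scale):

    /* Stretch a subtitle bitmap by the decoded video's pixel aspect ratio.
     * E.g. a 720x576 PAL DVD shown as 16:9 has par of roughly 1.42, so the
     * subtitle bitmaps get wider by the same factor as the video does. */
    static void stretch_sub_by_par(int *w, int *h, double par)
    {
        if (par > 1.0)
            *w = (int)(*w * par + 0.5);     /* anamorphic widescreen */
        else if (par > 0.0 && par < 1.0)
            *h = (int)(*h / par + 0.5);     /* tall pixels */
        /* par == 1.0: square pixels, nothing to stretch */
    }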
The configure followed 5 different conventions of defines because the next guy
always wanted to introduce a new, better way to unify it[1]. For a
hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
Everything is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef` because it can help spot typos
and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
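As a contrived example of why define-to-0 helps: with -Wundef, a typo
in the symbol name is diagnosed by the compiler when it is evaluated in
an #if, while the equivalent #ifdef is just silently false:

    #define HAVE_HURR 1
    #define HAVE_DURR 0

    #if HAVE_HURR
    /* hurr-specific code */
    #endif

    #if HAVE_HUR        /* typo: -Wundef warns that HAVE_HUR is not defined */
    #endif

    #ifdef HAVE_HUR     /* same typo: silently skipped, no diagnostic */
    #endif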
DVD subs (rarely) have subtitle events without end timestamp. The
duration is unknown, and they should be displayed until they're
replaced by the next event.
FFmpeg fails hard to make us aware whether duration is unknown or
actually 0, so we can't distinguish between these two cases. It fails
at this twice: AVPacket.duration is set to 0 if duration is unknown,
and AVSubtitle.end_display_time has the same issue.
Add a hack that considers all bitmap subtitles with duration==0 as
events with unknown length. I'd rather accidentally display a hidden
subtitle (if such things exist at all) than not display random
subtitles at all.
See github issue #325.
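The hack boils down to something like this (a sketch; field and
constant names are illustrative, not the actual code):

    #include <stdbool.h>

    #define UNKNOWN_DURATION (-1.0)     /* sentinel: display until replaced */

    /* FFmpeg reports 0 both for "zero duration" and "unknown duration", so
     * for bitmap subtitles treat 0 as unknown and keep the event on screen
     * until the next event replaces it. */
    static double fixup_sub_duration(double pkt_duration, bool is_bitmap_sub)
    {
        if (is_bitmap_sub && pkt_duration == 0)
            return UNKNOWN_DURATION;
        return pkt_duration;
    }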
First, don't try to seek if the result is 0 (i.e. nothing found, or
the subtitle event happens to be exactly on the spot).
Second, since we can never make sure that we actually seek to the exact
subtitle PTS (seeking "snaps" to video PTS), offset the seek by 10ms.
Since most subtitle events are longer than 10ms, this should work fine.
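In code, the idea is roughly this (illustrative only; the real sub-seek
logic lives elsewhere and uses different names):

    #define NOPTS (-1e300)      /* stand-in for "no timestamp" */

    /* Given the PTS of the next subtitle event (or NOPTS if none) and the
     * current position, compute the seek target, or NOPTS for "don't seek". */
    static double sub_seek_target(double event_pts, double refpts)
    {
        if (event_pts == NOPTS || event_pts == refpts)
            return NOPTS;       /* nothing found, or event exactly on the spot */
        /* Seeking snaps to video PTS, so aim 10 ms past the event start;
         * nearly all subtitle events last longer than that. */
        return event_pts + 0.010;
    }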
This is pretty much a hack for the OSC. It will allow it to rely on a
somewhat predictable style, instead of having to overwrite all user
OSD settings manually with override tags.
This is preliminary. There are still tons of issues, and any aspect
of scripting may change in the future. I decided to merge this
(preliminary) work now because it makes it easier to develop it, not
because it's done. lua.rst is clear enough about it (plus some
sarcasm).
This requires linking to Lua. Lua has no official pkg-config file, but
there are distribution specific .pc files, all with different names.
Adding a non-pkg-config based configure test was considered, but we'd
rather not.
One major complication is that libquvi links against Lua too, and if
the Lua version is different from mpv's, you will get a crash as soon
as libquvi uses Lua. (libquvi by design always runs when a file is
opened.) I would consider this the problem of distros and whoever
builds mpv, but to make things easier for users, we add a terrible
runtime test to the configure script, which probes whether libquvi
will crash. This is disabled when cross-compiling, but in that case
we hope the user knows what he is doing.
This code was made inactive some months ago. At this time it wasn't
entirely clear whether this code was still needed, but now I'm pretty
sure it isn't. Even if it is, it didn't work anymore.
Cherry picked from various commits in lua_experiment by ChrisK2.
The metrics of the OSD symbols change slightly, possibly due to the
font editor that was used, and the metrics were not correct to begin
with. (But the real reason seems unknown.) Remove the rescaling of
the OSD font in ASS_USE_OSD_FONT, because the height more or less fits
now. (This change wasn't in the lua_experiment branch.)
Even if a subtitle was explicitly loaded with -sub, it was still auto-
loaded (if auto-loading applied to that file). Fix this by explicitly
checking whether a file is already loaded.
The check is maximally naive and just compares the filenames as strings.
The change in find_subfiles.c is so that "-sub something.ass" happens to
work (auto-loading prepended a "./" to it, so the naive filename
comparison check didn't work).
External vobsubs usually come as .idx/.sub pairs. Loading the .idx file
implicitly loads the .sub file, whereas loading the .sub file will kind
of work, but miss important information such as subtitle resolution. Or
in other words, if the .idx file exists, adding the .sub file as track
is useless and confusing.
Explicitly remove the .sub file from the auto-load subtitle list in
these cases. Standalone .sub files are still loaded.
We also drop that weird logic that excluded .utf8 files from being
loaded if -subcp was in use. I hope the associated use case didn't make
much sense to begin with. If not, we could still implement it properly,
instead of this weird hack.
Use mp_path_exists() to check for existence of a file (which in turn
uses stat()), instead of opening and closing it. The difference is that
if we don't have sufficient permissions to read the subtitle files, we
will loudly complain. Personally, I prefer this behavior.
The way this was added to FFmpeg is less than ideal, because it requires
text parsing in the Matroska demuxer. But in order to use the FFmpeg
webvtt-to-ass converter, we still have to mimic this in some way. We do
this by putting the parsing into sd_lavc_conv.c, before the subtitle
packet is passed to libavcodec. At least this keeps the ugliness out of
unrelated code.
There is some chance that FFmpeg will fix their design eventually.
Instead of rewriting the parsing code, we simply borrow it from FFmpeg's
Matroska demuxer.
Not actually useful. This would break whenever a new text subtitle
format requiring a binary->text transformation was added. (mov_text is
one such format; disable it.) In general, we would have to know which
packet formats are binary, which we don't, so the only reasonable way
to handle this is a whitelist.
Broken UTF-8 in this context means we treat it as UTF-8, but we also
interpret broken UTF-8 sequences as Latin1.
Also, run our own UTF-8 check function before the charset detectors.
This prevents ENCA's UTF-8 check from possibly messing up (like
detecting 7-bit clean UTF-8 as ASCII, or other things). It also takes
care of UTF-8 detection if no charset detector (ENCA, libguess) is
compiled in, and it lets us deal better with cut-off UTF-8 sequences.
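A compact sketch of such a check (not the actual mpv routine): walk the
bytes and verify every multi-byte sequence is well-formed, with an
optional flag so a sequence cut off at the end of the buffer doesn't
count as an error:

    #include <stdbool.h>
    #include <stddef.h>

    /* Returns true if buf[0..len) looks like valid UTF-8. Overlong forms and
     * surrogate ranges are not checked; this is only a plausibility test. */
    static bool check_utf8(const unsigned char *buf, size_t len,
                           bool allow_truncated)
    {
        for (size_t i = 0; i < len; ) {
            unsigned char c = buf[i];
            int cont;                           /* continuation byte count */
            if (c < 0x80)                cont = 0;
            else if ((c & 0xE0) == 0xC0) cont = 1;
            else if ((c & 0xF0) == 0xE0) cont = 2;
            else if ((c & 0xF8) == 0xF0) cont = 3;
            else return false;                  /* invalid lead byte */
            if (cont > 0 && i + cont >= len)
                return allow_truncated;         /* cut off at buffer end */
            for (int k = 1; k <= cont; k++)
                if ((buf[i + k] & 0xC0) != 0x80)
                    return false;               /* bad continuation byte */
            i += cont + 1;
        }
        return true;
    }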
The fix_overlaps_and_gaps() function in dec_sub.c fixes small gaps or
overlaps between subtitle events. However, sometimes it could happen
that the corrected subtitle events could overlap by 1ms due to bad
rounding, making libass shift subtitles to reduce collisions. (The
second subtitle will be shown above the previous one, even if both
subtitles are visible only for 1ms.)
sd_ass.c rounds the timestamps when converting to integers for unknown
reasons. I think it would work fine without that rounding, but since
I have no clue why it rounds, and since it could be needed to ensure
correct timestamps with ASS subtitles demuxed from Matroska, I'd rather
not touch it. So the solution is to use already rounded timestamps to
calculate the new subtitle duration in fix_overlaps_and_gaps().
See github issue #182.
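In essence the change is to do the gap correction on already-rounded
values (a simplified sketch of one pair of events; the real function
iterates over all of them):

    #include <math.h>

    /* Return the new end time of the previous event. The correction is
     * computed from millisecond-rounded values - the same rounding that
     * sd_ass.c applies later - so the two events can no longer end up
     * overlapping by 1 ms after conversion. */
    static double closed_gap_end(double prev_end, double next_start,
                                 long long max_gap_ms)
    {
        long long end_ms   = llrint(prev_end * 1000);
        long long start_ms = llrint(next_start * 1000);
        if (start_ms > end_ms && start_ms - end_ms <= max_gap_ms)
            return start_ms / 1000.0;   /* end exactly where the next starts */
        return prev_end;
    }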
This reverts commit 689a25003f, with some
adjustments to code that was added after that commit.
I just messed up big time. We don't need this, and in fact the commit
confused straight and premultiplied alpha at one point (just a simple
inverted condition due to an oversight), which is why it looked like
it was working.
In commit 2827295 I wrote:
Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures [...]
That was just me messing up and being severely confused by my own bugs.
VA API uses premultiplied alpha, which by the way is nice and
thoughtful of the VA API devs.
Well, this was stupid. But in the end, I'm glad that I could actually
reduce codesize by a good amount again.
This is for VAAPI support. VAAPI does not support premultiplied alpha
for OSD. (Normally, we prefer premultiplied, because it has better
behavior on scaling.)
I'm not sure whether blending in the ASS->RGBA part is correct and I
didn't test it extensively.
In general, this warning can hint to actual bugs. We don't enable it
yet, because it would conflict with some unmerged code, and we should
check with clang too (this commit was done by testing with gcc).
I'm not sure what's correct: stretching the DVD subtitles from storage
aspect ratio to video display aspect ratio, or displaying subtitles
using 1:1 PAR. Until now, DVD subtitles (as well as all other bitmap
subtitles) were always stretched to the video. There are good arguments
why this would be the correct behavior: DVDs were made for playback on
TVs, which display anamorphic video by adjusting the horizontal refresh
rate, and thus wouldn't even be capable of DVD subtitles with square PAR
(other than resampling the subtitles additionally).
However, I haven't seen a sample yet where subtitles do _not_ look
stretched using this method. Rendering them at 1:1 PAR looks better.
Technically, we render them at display PAR (and not 1:1 PAR). Do this
in a way that keeps the subtitle area inside the video frame when
display and video aspect ratios mismatch.
For DVB subtitles, the old method looks more correct, so this is special
cased to DVD subtitles.
I might revert this commit if it turns out that it's a disimprovement.
Partial packet reads were needed because the video/audio parsers were
working on top of them. So it could happen that a parser read a part of
a packet, and returned that to the decoder. With libavformat/libavcodec,
packets are already parsed, and everything is much simpler.
Most of the simplifications in ad_spdif could have been done earlier.
Remove some other stuff as well, like the questionable slave mode start
time reporting (could be replaced by proper code, but we don't bother).
Remove the unused skip_audio_frame() functionality as well (it was used
by old demuxers). Some functions become private to demux.c, like
demux_fill_buffer(). Introduce new packet read functions, which have
simpler semantics. Packets returned from them are owned by the caller,
and all packets in the demux.c packet queue are considered unread.
Remove special code that dropped subtitle packets with size 0. This
used to be needed because such packets triggered special cases in the
old code.
Delete demux_avi, demux_asf, demux_mpg, demux_ts. libavformat does
better than them (except in rare corner cases), and the demuxers have
a bad influence on the rest of the code. Often they don't output
proper packets, and require additional audio and video parsing. Most
work only in --no-correct-pts mode.
Remove them to facilitate further cleanups.
This means the direct libass usage can be removed from command.c, and no
weird hacks for retrieving the ASS_Track are needed.
Also fix a bug when using this feature with ordered chapters.
This was changed as part of commit b44202b as an intended
simplification, but it's actually nicer to have the subtitles
update immediately even if paused.
Should we actually get into trouble for improper handling of
frame-based subtitle formats, this might be the simplest way to
work around this. It's also a bit more intuitive than -subfps, which
might use an unknown, misdetected, or non-sense video FPS.
Still pretty silly, though.
Before this commit, SRT demuxing and display actually happened to work
on Libav. But it was using the libavcodec srt converter (which is
essentially unmaintained in Libav), and timing postprocessing didn't
work. For some background explanations see sd_lavf_srt.c.
Until now, timing and charset recoding postprocessing was applied on
packets as they were output by the demuxer, and then passed to the
decoders. Make it so that postprocessing can happen after some decoders
in special situations.
This code was once part of subreader.c, then traveled to libass, and now
made its way back to the fork of the fork of the original code, MPlayer.
It works pretty much the same as subreader.c, except that we have to
concatenate some packets to do auto-detection. This is rather annoying,
but for all we know the actual source file could be a binary format.
Unlike subreader.c, the iconv context is reopened on each packet. This
is simpler, and with respect to multibyte encodings, more robust.
Reopening is probably not very fast, but I suspect subtitle charset
conversion is not an operation that happens often or has to be fast.
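As an illustration of the per-packet approach (a sketch with the
detected charset assumed known and most error handling omitted):

    #include <iconv.h>
    #include <stdlib.h>

    /* Convert one subtitle packet from 'charset' to UTF-8. Returns a newly
     * allocated string or NULL. The iconv context is opened and closed per
     * packet, so no conversion state leaks from one packet to the next. */
    static char *recode_packet(const char *charset, const char *in, size_t in_len)
    {
        iconv_t cd = iconv_open("UTF-8", charset);
        if (cd == (iconv_t)-1)
            return NULL;
        size_t out_size = in_len * 4 + 1;   /* generous worst case for UTF-8 */
        char *out = malloc(out_size), *outp = out;
        size_t out_left = out_size - 1, in_left = in_len;
        char *inp = (char *)in;
        if (!out || iconv(cd, &inp, &in_left, &outp, &out_left) == (size_t)-1) {
            free(out);
            out = NULL;
        } else {
            *outp = '\0';
        }
        iconv_close(cd);
        return out;
    }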
Also, this auto-detection is disabled for microdvd - this is the only
format we know that has binary data in its packets, but is actually
decoded to text. FFmpeg doesn't really allow us to solve this properly,
because a) the input packets can be binary, and b) the output will be
checked whether it's UTF-8, and if it's not, the output is thrown away
and an error message is printed. We could just recode the decoded
subtitles before sd_ass if it weren't for that.
demux_libass.c allows us to make subtitle format detection part of the
normal file loading process. libass has no probe function, but trying to
load the start of a file (the first 4 KB) is good enough. Hope that
libass can even handle random binary input gracefully without printing
stupid log messages, and that the libass parser doesn't accept too many
non-ASS files as input.
This doesn't handle the -subcp option correctly yet. This will be fixed
later.
subreader.c (before this commit renamed to demux_subreader.c) was
special cased to the -sub option. The plan is using the normal demuxer
codepath for all subtitle formats (so we can prefer libavformat demuxers
for most formats).
There are some subtle changes. The probe size is restricted to 32 KB
(instead of unlimited + giving up after 100 lines of input). For
formats like MicroDVD, the video FPS isn't used anymore, because it's
not available on the subtitle demuxer level. Instead, hardcode it to
23.976 FPS (libavformat seems to do the same). The user can probably
still use -sub-fps to fix the timing. Checking the file extension for
".utf"/".utf8"/".utf-8" is simply removed (seems worthless, was in the
way, and I've never seen this anywhere).
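(The frame-to-time conversion itself is trivial; as a sketch, with the
assumed fallback rate:)

    #define SUB_FALLBACK_FPS 23.976     /* used when the video FPS is unknown */

    /* MicroDVD events are addressed by frame number rather than by time. */
    static double microdvd_frame_to_pts(int frame)
    {
        return frame / SUB_FALLBACK_FPS;
    }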
Actually check the newly added text for whitespace, and not the
uninitialized buffer after it. Also, if an event is only whitespace,
don't add it at all.
sd_ass contains some code that treats subtitle events with duration 0
specially, and adjust their duration so that they will disappear with
the next event.
This is most likely not needed anymore. Some subtitle formats allow
omitting the duration so that the event is visible until the next one,
but both subreader.c as well as libavformat subtitle demuxers already
handle this.
Subtitles embedded in mp4 files (movtext) used to trigger this code. But
these files appear to export subtitle duration correctly (at least
libavcodec's movtext decoder is using this assumption). Since commit
6dbedd2 changed demux_lavf to actually copy the packet duration field,
the code removed with this commit isn't needed anymore for correct
display of movtext subtitles. (The change in sd_movtext is for dropping
empty subtitle events, which would now be "displayed" - libavcodec does
the same.)
On the other hand, this code incorrectly displayed hidden events in .srt
subtitles. See for example the first event in SubRip_capability_tester.srt
(part of FFmpeg's FATE). These intentionally have a duration of 0, and
should not be displayed. (As of this commit, they are still
displayed in external .srt subs because of subreader.c hacks.)
However, we can't be 100% sure that this code is really unneeded, so
just comment the code. Hopefully it can be removed if there are no
regressions after some weeks or months.
Currently, we are filtering libavformat style ASS packets by checking
whether they are prefixed "Dialogue: ". Unfortunately, comment packets
are demuxed too. These start with "Comment: ", so they are not caught.
Change the filtering, and use the codec ID instead. libavformat uses
"ssa" as codec ID for ASS subtitles, while mpv uses "ass". Also, at
least FFmpeg will change the ASS packet format to the same format mpv
and Matroska use, and identify these with "ass" as codec ID, so this
works out nicely.
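Conceptually the filter changes from a text-prefix check on the packet
to a codec-ID check (a simplified sketch; the helper names are made
up):

    #include <stdbool.h>
    #include <string.h>

    /* Old heuristic: a packet is libavformat-style ASS if its text starts
     * with "Dialogue: " - demuxed "Comment: " events slipped through this. */
    static bool is_lavf_ass_old(const char *packet_text)
    {
        return strncmp(packet_text, "Dialogue: ", 10) == 0;
    }

    /* New: decide by codec ID instead. libavformat tags ASS subtitles as
     * "ssa", while mpv and Matroska use "ass" for the plain packet format. */
    static bool is_lavf_ass_new(const char *codec_id)
    {
        return strcmp(codec_id, "ssa") == 0;
    }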
Some of this (fixing timing) is now done in dec_sub.c (although it's
not active for subreader.c code yet - this will be fixed when
subreader.c subs are read through a demuxer wrapper).
Another reason to remove this is that this code doesn't do much good
anymore. libass does handle overlap, and trying to fold overlapping
lines into single subtitle events will prevent libass from handling
this properly.
This fixes the -subfps option (which unfortunately is still useful),
and fixes minor annoying timing errors (which unfortunately still
happen).
Note that none of these affect ASS or image subtitles. ASS is specially
handled: libass loads subtitles as ASS_Track. There are no actual
packets passed around, and sd_ass just uses the ASS_Track.
Disable the --sub-no-text-pp option. It's misleading now and always was
completely useless.
If a subtitle is external, read it completely and add all subtitle
events in advance when the subtitle track is selected. This is done
for text subtitles only. (Note that subreader.c and subtitles loaded
with libass are different and don't have anything to do with this
commit.)
Seems like a completely unnecessary complication. Instead, always add a
1 byte padding (could be extended if a caller needs it), and clear it.
Also add some documentation. There was some, but it was outdated and
incomplete.
This function was called in various places. Most time, it was used
before a seek. In other cases, the purpose was apparently resetting
the EOF flag. As far as I can see, this makes no sense anymore. At
least the stream_reset() calls paired with stream_seek() are completely
pointless. A seek will either seek inside the buffer (and reset the
EOF flag), or do an actual seek and reset all state.
Both converters can output \pos and deal with font sizes, so they assume
a specific script resolution (PlayResX/PlayResY). The implicit
assumption was that a specific resolution was guaranteed. The
MP_ASS_FONT_PLAYRESY constant is connected to this.
Better make it explicit, so that the implicit dependency on
MP_ASS_FONT_PLAYRESY is removed. (Unfortunately, libavcodec sub
converters still don't set PlayResX/PlayResY explicitly, so the value
set by that constant can't be declared as arbitrary yet.)
PlayResY=288 is most likely the SSA natural script resolution (or
something like this?), as well as the libass and VSFilter default.
PlayResX=384 is the fallback value set by libass if PlayResY is set to
288, and PlayResX is unset.
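For reference, making the script resolution explicit on a track is just
a matter of setting the public libass fields (a sketch; the values are
the defaults described above):

    #include <ass/ass.h>

    /* Pin the converters' assumed script resolution instead of relying on
     * the libass fallback (PlayResX=384 when only PlayResY=288 is known). */
    static void set_converter_playres(ASS_Track *track)
    {
        track->PlayResX = 384;
        track->PlayResY = 288;
    }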
The default style is added by mp_ass_default_track(), but not by
ass_new_track(). Considering this, the previous condition at this point
didn't make much sense anymore: the actual (converted) subtitle format
doesn't matter much for what styling should be applied. What matters is
if the subtitle was originally ASS, or if it was converted to it.
Change the code such that the default style is added if there aren't
any, even after reading sub extradata. (The extradata contains the ASS
header, including the style section.) This might change behavior with
scripts that don't define any styles. The change is either with this
commit or with an earlier commit in this branch, depending on the
situation - there are multiple places where default styles are added
in libass API functions, and it's all a big mess.
Other than with very old or broken files (where different behavior
doesn't matter much), the current code should be pretty safe, though.
Audio and video had their own (very similar) functions to initialize an
AVPacket (ffmpeg's packet struct) from a demux_packet (mplayer's packet
struct). Add a common function for these.
Also use this function for sd_lavc_conv. This is actually a functional
change, as some libavformat subtitle demuxers add weird out-of-band
stuff as side data.
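A rough sketch of what such a shared helper looks like (the
demux_packet fields and the function name are assumptions based on the
description above, not the exact mpv code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <libavcodec/avcodec.h>

    /* Minimal demuxer packet, standing in for mpv's demux_packet. */
    struct demux_packet {
        unsigned char *buffer;
        size_t len;
        bool keyframe;
    };

    /* Initialize an AVPacket from a demuxer packet. The buffer is not
     * copied; the demux_packet must stay alive while the AVPacket is used. */
    static void set_av_packet(AVPacket *dst, struct demux_packet *src)
    {
        av_init_packet(dst);
        dst->data = src ? src->buffer : NULL;
        dst->size = src ? (int)src->len : 0;
        if (src && src->keyframe)
            dst->flags |= AV_PKT_FLAG_KEY;
        /* PTS/DTS conversion (see the timebase commit above) is omitted. */
    }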