PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get
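To illustrate, the descriptor lookup change amounts to something like
this (the helper is a made-up example, not code from this commit):

    #include <libavutil/pixdesc.h>

    /* Old: direct table access via av_pix_fmt_descriptors[fmt].
     * New: accessor function. */
    static const char *example_pixfmt_name(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        return desc ? desc->name : NULL;
    }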
This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.
Mostly untested; it does compile with Libav 9.9.
These were confined to the video path, but resetting them even if no
video is available shouldn't really matter. Always resetting them makes
the logic easier to follow, I think.
Recently, the check was moved, so it was printed only for source video
PTS (since that's easier, and filters should normally behave sanely).
But it turns out it's trivial to print a warning in the filter case
too, by reusing the code that normally checks for PTS forward jumps,
without needing any additional code. So, fine, restore the warning in
this case.
pthreads should be available everywhere. Even if not, for environments
without threads a pthread wrapper could be provided that can't actually
start threads, thus disabling features that require threads.
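For illustration, such a wrapper could look roughly like this (entirely
hypothetical; no such wrapper is added in this commit):

    #include <errno.h>

    /* Hypothetical pthread shim for a platform without threads: provide
     * the API, but make thread creation always fail, so features that
     * require real threads disable themselves at runtime. */
    typedef int pthread_t;
    typedef int pthread_attr_t;

    int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                       void *(*start_routine)(void *), void *arg)
    {
        return EAGAIN; /* no thread resources available, ever */
    }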
Make pthreads mandatory in order to simplify build dependencies and to
reduce ifdeffery. (Admittedly, there wasn't much complexity, but maybe
we will use pthreads more in the future, and then it'd become a real
bother.)
Introduce a mp_cmd_def struct to define commands, instead of using
mp_cmd for this. This way each command parameter can be a m_option,
instead of m_option plus some more stuff.
Define the ARG_ macros directly in terms of the OPT_ macros. Not sure if
this makes it easier to read (maybe not, even if it looks simpler), but
at least it makes it easier to add other option types.
Another idea was adding a name for each parameter (so you could have
named parameters), but not today.
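A hedged sketch of the resulting shape (simplified; the actual
definitions live in input/input.c and may differ):

    #define MP_CMD_MAX_ARGS 10

    struct mp_cmd_def {
        int id;                                      // command ID
        const char *name;                            // command name
        const struct m_option args[MP_CMD_MAX_ARGS]; // one m_option per parameter
    };

    // ARG_ macros defined directly in terms of the OPT_ macros:
    #define ARG_INT    OPT_INT("", 0, 0)
    #define ARG_STRING OPT_STRING("", 0, 0)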
The hr-seek code assumes that when seeking the demuxer, the first image
decoded after the seek will have a PTS exactly equal to the demuxer seek
target time, or before that target time. Incorrect timestamps,
implicitly dropped initial frames, or broken files/demuxers can all
break this assumption, and lead to hr-seek missing the seek target.
Generally, this is not much of a problem (the user won't notice being off
by one frame), but it really shows when using the backstep feature. In
this case, backstepping would simply hang.
Add a simple hack that basically forces a minimal value for the
--hr-seek-demuxer-offset option (which is 0 by default) when doing a
backstep-seek. The chosen minimum value is arbitrary. There's no
perfect value, though in general it should perhaps be slightly longer
than the frame time; the chosen value is more than enough for typical
frame rates.
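In code, the hack boils down to something like this (field names and
the concrete minimum are approximations):

    double demux_offset = opts->hr_seek_demuxer_offset;
    if (backstepping)
        demux_offset = MPMAX(demux_offset, 0.05); // arbitrary; > typical frame time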
Using --start with files that use DTS only, or which simply have broken
PTS timestamps, would incorrectly drop frames and possibly not execute
the seek correctly.
Add yet another heuristic to detect this. The intent is that --start and
hr-seeks in general should work correctly, but in order to keep things
fast, we still want to allow frame dropping during hr-seek if there are
no problems doing so. Do this by disabling frame dropping by default,
but re-enabling it if there are no problems found for a while. As a
consequence, --start might be somewhat slower, but normal user
interaction should remain as fast as before.
Note that there's something subtle about the added code: the
has_broken_packet_pts field is checked even before the first packet is
fed to dec_video.c, so the field must not be set to 0 right at the
start. It isn't initialized to 0 anyway, because the heuristic requires
decoding some images before frame dropping can be enabled.
Note 2: it's not clear whether frame dropping during hr-seek really
helps; I didn't benchmark it.
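A simplified sketch of the heuristic (the real code in dec_video.c
differs in the details; the counter name is made up):

    // Start pessimistic: assume broken packet PTS, i.e. no frame
    // dropping during hr-seek. Re-enable once enough frames came
    // through with sane timestamps.
    if (d_video->has_broken_packet_pts &&
        d_video->num_frames_with_ok_pts > 50) // "no problems for a while"
        d_video->has_broken_packet_pts = false;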
The d_video->pts field was a bit strange. The code overwrote it multiple
times (on decoding, on filtering, then once again...), and it wasn't
really clear what purpose this field had exactly. Replace it with the
mpctx->video_next_pts field, which is relatively unambiguous.
Move the decreasing PTS check to dec_video.c. This means it acts on
decoder output, not on filter output. (Just like in the previous commit,
assume the filter chain is sane.) Drop the jitter vs. reset semantics;
the PTS determined by dec_video.c never goes backwards, and demuxer
timestamps don't "jitter".
Also get rid of the PTS check _after_ filters. This means if there's a
video filter which unsets PTS, no warning will be printed. But we assume
that all filters are well-behaved enough by now.
Refactor the PTS handling code to make it cleaner, and to separate the
bits that use PTS sorting.
Add a heuristic to fall back to DTS if the PTS is non-monotonic. This
code is based on what FFmpeg/Libav use for ffplay/avplay and also
best_effort_timestamp (which is only in FFmpeg). Basically, this 1. just
uses the DTS if PTS is unset, and 2. ignores PTS entirely if PTS is non-
monotonic, but DTS is sorted.
The code is pretty much the same as in Libav [1]. I'm not sure if all of
it is really needed, or if it does more than what the paragraph above
mentions. But maybe it's fine to cargo-cult this.
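The core of it, roughly (simplified from the Libav code referenced
below; the counters just track how often each timestamp went
backwards):

    // Track how often PTS and DTS each violate monotonicity:
    if (pkt->pts != MP_NOPTS_VALUE && pkt->pts <= last_pts)
        num_faulty_pts++;
    if (pkt->dts != MP_NOPTS_VALUE && pkt->dts <= last_dts)
        num_faulty_dts++;

    // 1. PTS unset: use DTS. 2. PTS less monotonic than DTS: prefer DTS.
    double pts = pkt->pts;
    if (pts == MP_NOPTS_VALUE || num_faulty_pts > num_faulty_dts)
        pts = pkt->dts;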
This heuristic fixes playback of mpeg4 in ogm, which returns packets
with PTS==DTS, even though the PTS timestamps should follow codec
reordering. This is probably a libavformat demuxer bug, but good luck
trying to fix it.
The way vd_lavc.c returns the frame PTS and DTS to dec_video.c is a bit
inelegant, but maybe better than trying to mess the PTS back into the
decoder callback again.
[1] https://git.libav.org/?p=libav.git;a=blob;f=cmdutils.c;h=3f1c667075724c5cde69d840ed5ed7d992898334;hb=fa515c2088e1d082d45741bbd5c05e13b0500804#l1431
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.
(Some audio decoders actually used to "resync" by reading packets to
resume playback, but that's not the case anymore.)
Also move the function in dec_video.c to the top of the file.
These packets have to be explicitly dropped, because usually libavcodec
uses 0-sized packets to flush delayed frames, meaning just passing
through these packets would have bad consequences.
Normally, libavformat doesn't output 0-sized packets anyway. But I
don't want to take any chances, so instead of deleting the check, just
move it out of the way to demux.c.
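The check itself is trivial; schematically (function names
approximate):

    // libavcodec interprets a 0-sized packet as a request to flush
    // delayed frames, so such packets must not reach the decoder.
    if (pkt && pkt->len == 0) {
        free_demux_packet(pkt);
        pkt = NULL;
    }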
If the value for --cache-on-pause is larger than --cache-min, and the
cache runs below --cache-on-pause, but above --cache-min, the logic
would demand to pause the player and then unpause immediately again.
This doesn't make much sense, and alternating the pause state in each
playloop iteration has negative consequences. Add an explicit check to
avoid this situation.
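Schematically, the added check (option field names approximate):

    // Pause when the cache runs below --cache-on-pause ...
    bool should_pause = cache_fill < opts->cache_on_pause;
    // ... but not if we're already above --cache-min, since the unpause
    // logic would undo it in the very next playloop iteration.
    if (should_pause && cache_fill >= opts->cache_min)
        should_pause = false;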
This means the code that tries to figure out the timestamp from
demuxer and decoder output is now all in dec_video.c. We set the
final timestamp on the returned image (mp_image.pts), as well as
the d_video->pts field.
The way the player uses d_video->pts field is still a bit messy. Maybe
this could be cleaned up later.
It appears PTS sorting was useful only for avi files (and VfW-muxed
mkv). Maybe it was historically also important for decoders with broken
or non-existent PTS reordering (win32 codecs?). But now that we
correctly handle demuxers which output only DTS, it just seems like
dead weight.
Disable it by default. The --pts-association-mode option is now forced
to always use the decoder's PTS value. You can still enable the old
default (auto) or force sorting. But we will probably remove this option
entirely at some point.
Make demux_mkv export timestamps as DTS when it's in VfW mode. This is
needed to get correct timestamps with the new default mode. demux_lavf
already does that.
Having the DTS directly can be useful for restoring PTS values.
The avi file format doesn't actually store PTS values, just DTS. An
older hack explicitly exported the DTS as PTS (ignoring the [I assume]
genpts-generated nonsense PTS), which is not necessary anymore due to
this change.
Now the --no-correct-pts mode is like the normal mode, just with
different timestamp calculations. The semantics should be about the
same as before this commit.
Before this commit, this mode estimated the frame time by subtracting
successive packet PTS values. This is complete nonsense for video
codecs which use reordering. The code compensated the frame times for
this nonsense using the FPS value, but confused the rest of the player
with nonsensical, jumping timestamps. So, all in all, this mode was not
very useful.
Repurpose this mode for fixed frame rate playback. This gives almost the
same behavior as the old mode with forced framerate (--fps option). The
result is simpler and often more robust.
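So the mode now steps timestamps at a fixed rate, roughly (sketch, not
the literal code):

    // Fixed frame rate playback: ignore container timestamps and
    // advance by 1/fps per decoded frame (fps from the demuxer, or
    // the --fps option).
    double frame_time = 1.0 / fps;
    double pts = mpctx->video_next_pts;
    if (pts == MP_NOPTS_VALUE)
        pts = 0;
    mpctx->video_next_pts = pts + frame_time;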
Instead of passing the PTS as separate field, pass it as part of the
usual data structures. Basically, this removes strange artifacts from
the API. (It's not finished, though: the final decoded PTS goes through
strange paths, and filter_video() finally overwrites the decoded
mp_image's pts field with it.)
We also stop using libavcodec's reordered_opaque fields, and use
AVPacket.pts and AVFrame.pkt_pts. This is slightly unorthodox, because
these pts fields are not "really" opaque anymore, yet we treat them as
such. But the end result should be the same, and reordered_opaque is
marked as partially deprecated (it's not clear whether it's really
deprecated).
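Roughly, in vd_lavc.c terms (simplified sketch; timestamp unit
conversion omitted):

    // Let libavcodec carry the timestamps through its reordering:
    pkt.pts = packet_pts;   // demuxer PTS in
    pkt.dts = packet_dts;   // demuxer DTS in
    avcodec_decode_video2(avctx, frame, &got_frame, &pkt);
    if (got_frame) {
        out_pts = frame->pkt_pts; // PTS of the packet this frame stems from
        out_dts = frame->pkt_dts; // DTS of that packet
    }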
This normally shouldn't happen. It does happen with VfW-muxed mkv files,
and would normally happen with avi files (except that we force the
correct association mode using an explicit hack for avi).
This is usually preceded by warnings like:

    Decreasing video pts: 0.125000 < 0.167000

and after the switch, there should be no warnings anymore.
Background: avi likes to use DTS for timestamps instead of PTS.
Basically, there are no proper timestamps in the file, only frame
numbers. And Matroska is insane and stores the made-up DTS instead
of a proper PTS. This will be handled properly later.
Apparently Cocoa precise scrolling generates a lot of spurious events with
a delta that is equal to 0.0. Make sure that they are discarded and not added
to the input queue.
Even though this is only known to happen with Cocoa, I implemented the
check at the core level, since it makes sense in general.
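The core-level check is essentially just (sketch; the predicate name is
made up):

    // A precise-scroll event with a delta of exactly 0.0 carries no
    // information; drop it before it enters the input queue.
    if (is_axis_event(code) && scale == 0.0)
        return;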
Fixes: #310
This is not needed anymore, because we decided that the PAR of the
decoded video matters, and not the PAR of the filtered video that
arrives at the VO.
In theory, we can't really do this, because we don't know when a spdif
frame ends. Spdif transports compressed audio through audio setups that
were originally designed for PCM only (which includes the audio filter
chain, the AO API, most audio output APIs, etc.), and to reach this
goal, spdif pretends to be PCM. Compressed data frames are padded with
zeros, until a certain data rate is reached, which corresponds to a
pseudo-PCM format with 2 bytes per sample and 2 channels at 48000 Hz.
Of course an actual spdif frame is significantly larger than a frame
of the PCM format it pretends to be, so cutting audio data on frame
boundaries (as according to the pseudo-PCM format) merely yields an
incomplete and broken frame, not audio that plays for the desired
duration.
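For reference, the data rate of that pseudo-PCM format (plain
arithmetic, not code from the commit):

    // 48000 Hz * 2 channels * 2 bytes/sample = 192000 bytes/s.
    // Cutting at --end thus picks a byte position in this fake PCM
    // stream, which almost always falls inside a padded compressed
    // frame instead of on a frame boundary.
    int spdif_bytes_per_sec = 48000 * 2 * 2;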
However, sending an incomplete frame might still be much better than the
current behavior, which simply ignores --end/--length (but still lets
the video end at the exact end time).
Should this result in trouble with real spdif receivers, this commit
probably has to be reverted.
If the --fps option was given (MPOpts->force_fps), the demuxer FPS value
was overwritten with the forced value. This was fine, since the demuxer
value wasn't needed anymore. But with the recent changes not to write to
the demuxer stream headers, we don't want to do this anymore. So
maintain the (forced/updated) FPS value in dec_video->fps.
The removed code in loadfile.c is probably redundant, and an artifact
from past refactorings.
Note that sub.c will now always use the demuxer FPS value, instead of
the user override value. I think this is fine, because it used the
demuxer's video size values too. (And it's rare that these values are
used at all.)
The only reason why these structs were dynamically allocated was to
avoid recursive includes in stheader.h, which is (or was) a very central
file included by almost all other files. (If a struct is referenced
via a pointer type only, it can be forward-declared, and the definition
of the struct is not needed.) Now that they're out of stheader.h, this
difference doesn't matter anymore, and the code can be simplified.
Also sneak in some sanity checks.
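For illustration, the forward-declaration pattern in question (generic
C, not actual mpv code):

    struct decoder_ctx;             // forward declaration is enough ...
    struct stream_header {
        struct decoder_ctx *ctx;    // ... for a pointer-typed member
    };
    // Embedding by value would require the full definition (and thus
    // the recursive include this layout was avoiding):
    // struct stream_header { struct decoder_ctx ctx; };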
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
Use sh_stream over sh_sub. Use dec_sub (and mpctx->d_sub) instead of the
stream header. This aligns the subtitle code with the recent audio and
video refactoring.
sh_sub still has the decoder context, though. This is because we want to
avoid reinit when switching segments with ordered chapters. (Reinit is
fast, except for creating the ASS_Renderer, which in turn triggers
fontconfig.) Not sure how much this matters, though, because the initial
segment switch will lazily initialize the decoder anyway.
This is similar to the sh_audio commit.
This is mostly cosmetic in nature, except that it also adds automatic
freeing of the decoder driver's state struct (which was in
sh_video->context, now in dec_video->priv).
Also remove all the stheader.h fields that are not needed anymore.
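The automatic freeing falls out of mpv's talloc hierarchy;
schematically (the state struct name is hypothetical):

    // Allocate the decoder driver's state as a talloc child of
    // dec_video:
    d_video->priv = talloc_zero(d_video, struct driver_state);

    // Freeing the parent later frees the driver state along with it:
    talloc_free(d_video);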
This drops the --pp option, which was probably broken for a while. The
option automatically inserted the "pp" filter. The value passed to it
was ignored (which is probably broken; it always selected maximum
quality).
Inserting this filter can be done simply with --vf=pp, so this is not
needed anymore.
This means most code accessing this struct must now include hwdec.h
instead of dec_video.h. I just put it into dec_video.h at first because
I thought a separate file would be a waste, but it's more proper to do
it this way, as there are too many files which include dec_video.h only
to get the mp_hwdec_info definition.
This was printed before something was decoded, and thus was not really
correct. Also, this code is hilariously broken: the comparison checks
the combined numeric value rather than each byte, so e.g. a format of
0x21000000 passes even though it contains bytes below 0x20.

    /* Assume FOURCC if all bytes >= 0x20 (' ') */
    if (sh_audio->format >= 0x20202020)
        mp_msg(MSGT_IDENTIFY, MSGL_INFO,
               "ID_AUDIO_FORMAT=%.4s\n", (char *)&sh_audio->format);

Time to kill it.
This information can be accessed through properties instead.
sh_audio is supposed to contain file headers, not whatever was decoded.
Fix this, and write the decoded format to separate fields in the decoder
context, the dec_audio.decoded field. (Note that this field is really
only needed to communicate the audio format from decoder driver to the
generic code, so no other code accesses it.)
Move all state that basically changes during decoding or is needed in
order to manage decoding itself into a new struct (dec_audio).
sh_audio (defined in stheader.h) is supposed to be the audio stream
header. This should reflect the file headers for the stream. Putting the
decoder context there is strange design, to say the least.