mirror of https://github.com/mpv-player/mpv synced 2024-12-26 09:02:38 +00:00
Commit Graph

163 Commits

Author SHA1 Message Date
wm4
2c08bf1bd7 Reduce recursive config.h inclusions in headers
In my opinion, config.h inclusions should be kept to a minimum. MPlayer
code really liked including config.h everywhere, though, even in often
used header files. Try to reduce this.
2013-12-18 17:12:21 +01:00
wm4
0112143fda Split mpvcore/ into common/, misc/, bstr/ 2013-12-17 02:39:45 +01:00
wm4
eb15151705 Move options/config related files from mpvcore/ to options/
Since m_option.h and options.h are extremely often included, a lot of
files have to be changed.

Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
2013-12-17 02:07:57 +01:00
wm4
7dc7b900c6 Replace mp_tmsg, mp_dbg -> mp_msg, remove mp_gtext(), remove set_osd_tmsg
The tmsg stuff was for the internal gettext() based translation system,
which nobody ever attempted to use and thus was removed. mp_gtext() and
set_osd_tmsg() were also for this.

mp_dbg was once enabled in debug mode only, but since we have log level
for enabling debug messages, it seems utterly useless.
2013-12-16 20:41:08 +01:00
wm4
88432b817d dec_video: fix handling of timestamp resets
This code tried to pass a still monotonic (even if not strictly
monotonic) PTS to the player, but as a result it remained stuck at
the PTS from before a reset (since the new PTS was lower).
2013-12-12 23:46:27 +01:00
wm4
227d087db6 video: display last frame, drain frames on video reconfig
Until now, the player didn't care to drain frames on video reconfig.
Instead, the VO was reconfigured (i.e. resized) before the queued frames
finished displaying. This can for example be observed by passing
multiple images of different sizes via an mf:// filename. Then the window
would resize one frame before the image with the new size is displayed. With
--vo=vdpau, the effect is worse, because this VO queues more than 1
frame internally.

Fix this by explicitly draining buffered frames before video reconfig.

Raise the display time of the last frame. Otherwise, the last frame
would be shown for a very short time only. This usually doesn't matter,
but helps when playing image files. This is a byproduct of frame
draining, because normally, video timing is based on the frames queued
to the VO, and we can't do that with frames of different size or format.
So we pretend that the frame before the change is the last frame in
order to time it. This code is incorrect though: it tries to use the
framerate, which often doesn't make sense. But it's good enough to test
this code with mf://.
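
A rough sketch of the idea (the helper names here are made up and stand in
for the real decode/VO code paths):

    // Drain: tell the decoder there is no more input until it stops
    // returning frames, and queue those frames to the VO before
    // reconfiguring it.
    struct mp_image *img;
    while ((img = video_decode(d_video, NULL /* drain */)))
        vo_queue_frame(vo, img);
    // Give the frame before the change a full frame duration instead of
    // showing it for a near-zero time (framerate-based, hence only
    // approximately correct):
    double last_frame_duration = fps > 0 ? 1.0 / fps : 0;
    vo_show_queued_frames(vo, last_frame_duration);
    vo_reconfig(vo, &new_params);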
2013-12-10 20:07:39 +01:00
wm4
2f46b23d51 video: move handling of brightness and deinterlacing control
Handling of brightness/gamma/saturation/etc. and deinterlacing is moved
from vf_vo.c to dec_video.c.
2013-12-10 20:07:39 +01:00
wm4
9838bf5565 video: move video filter chain initialization from decoder to player
This should help fixing some issues (like not draining video frames
correctly on reinit), as well as decoupling the decoder, filter chain,
and VO code.

I also wanted to make the hardware video decoding fallback work properly
if software-only video filters are inserted. This currently has the
issue that the fallback is too violent, and throws away a bunch of
demuxer packets needed to restart software decoding properly. But
keeping "backup" packets turned out as too hacky, so I'm not doing this,
at least not yet.
2013-12-10 20:07:39 +01:00
wm4
bb6165342d video: create a separate context for video filter chain
This adds vf_chain, which unlike vf_instance refers to the filter chain
as a whole. This makes the filter API less awkward, and will allow
handling format negotiation better.
2013-12-07 19:32:44 +01:00
wm4
66e20ef8ad video: remove --flip
The --flip option flipped the image upside-down, by trying to use VO
support, or if not available, by inserting a video filter. I'm not sure
why it existed. Maybe it was important in ancient times when VfW based
decoders output an image this way (but even then, flipping an image is a
free operation by negating the stride).
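
(For reference, the "free flip" boils down to something like this;
flip_image() is a made-up helper, and for simplicity it ignores chroma
subsampling, i.e. assumes all planes have the full image height:)

    // Flip vertically without copying: point each plane at its last line
    // and negate the stride, so line 0 of the flipped image is the old
    // last line.
    static void flip_image(struct mp_image *img)
    {
        for (int p = 0; p < 4; p++) {
            if (!img->planes[p])
                continue;
            img->planes[p] += img->stride[p] * (img->h - 1);
            img->stride[p] = -img->stride[p];
        }
    }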

One nice thing about this is that it provided a possible path for
implementing video orientation, which is a feature we should probably
support eventually. The important part is that it would come for free for
VOs that support it, and would work even with hardware decoding.

But for now get rid of it. It's useless, trivial, stands in the way, and
supporting video orientation would require solving other problems first.
2013-12-05 22:58:54 +01:00
wm4
47c4b5c000 vd_lavc: factor out libavcodec thread setup 2013-12-04 23:12:51 +01:00
wm4
0afd121ae6 vd_lavc: don't check required hwdec fields 2013-12-04 23:12:51 +01:00
wm4
8a84da8102 av_common: add timebase parameter to mp_set_av_packet()
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv-style
double timestamps to libavcodec-style int64_t timestamps is used.

Also replace the kind of awkward mp_get_av_frame_pkt_ts() function by
mp_pts_from_av(), which simply converts timestamps in a way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)

Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.

This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
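
For illustration, the two conversion directions look roughly like this
(simplified sketches, not the actual function bodies; MP_NOPTS_VALUE is
mpv's "unknown timestamp" double constant):

    #include <math.h>
    #include <libavutil/avutil.h>   // AVRational, av_q2d(), AV_NOPTS_VALUE

    // mpv double timestamp (seconds) -> libavcodec int64_t in units of *tb.
    // (The real mp_set_av_packet() also handles tb == NULL by
    // reinterpret-casting the double, as described above.)
    static int64_t mp_pts_to_av_sketch(double mp_pts, const AVRational *tb)
    {
        if (mp_pts == MP_NOPTS_VALUE)
            return AV_NOPTS_VALUE;
        return llrint(mp_pts / av_q2d(*tb));
    }

    // ...and back: libavcodec int64_t in units of *tb -> mpv double seconds.
    static double mp_pts_from_av_sketch(int64_t av_pts, const AVRational *tb)
    {
        if (av_pts == AV_NOPTS_VALUE)
            return MP_NOPTS_VALUE;
        return av_pts * av_q2d(*tb);
    }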
2013-12-04 23:12:51 +01:00
Stefano Pigozzi
a74d9c1803 vo_opengl: support for vda hardware decoding
The harder work was done in the previous commits; after that, this feature
comes almost for free.

The only problem is that I can't get the textures created with CGLTexImageIOSurface2D
to download properly, so the code performs the download using some CoreVideo APIs instead.

If someone knows why downloading textures created with CGLTexImageIOSurface2D
doesn't work, please contact me :)
2013-12-02 09:03:31 +01:00
wm4
597b8a3550 Take care of some libavutil deprecations, drop support for FFmpeg 1.0
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get

This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.

Mostly untested; it does compile with Libav 9.9.
2013-11-29 17:39:57 +01:00
wm4
2a316c3506 vdpau: always let decoder output IMGFMT_VDPAU
The old ffmpeg vdpau support code uses separate vdpau pixel formats for
each decoder (pretty much because mplayer's architecture sucked), which
just gets in the way. Force the old decoder's output to IMGFMT_VDPAU,
and remove IMGFMT_IS_VDPAU() where we can remove it.

This should completely remove the differences between the old and new
vdpau decoder outside of the decoder.
2013-11-29 14:19:44 +01:00
wm4
60cd300558 vaapi: remove unused hw image formats, simplify
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.

Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
2013-11-29 14:19:29 +01:00
wm4
0d255f07bf build: make pthreads mandatory
pthreads should be available everywhere. Even if not, for environments
without threads a pthread wrapper could be provided that can't actually
start threads, thus disabling the features that require threads.

Make pthreads mandatory in order to simplify build dependencies and to
reduce ifdeffery. (Admittedly, there wasn't much complexity, but maybe
we will use pthreads more in the future, and then it'd become a real
bother.)
2013-11-28 19:28:38 +01:00
wm4
dc0b2046cd video: add insane hack to work around FFmpeg/Libav insanity
So, FFmpeg/Libav requires us to figure out video timestamps ourselves
(see last 10 commits or so), but the methods it provides for this aren't
even sufficient. In particular, everything that uses AVI-style DTS (avi,
vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal
frame delay is broken. In this case, libavcodec will shift the packet-
to-image correspondence by the codec delay, meaning that with a delay=1,
the first AVFrame.pkt_dts is not 0, but that of the second packet. All
timestamps will appear shifted. The start time (e.g. the time displayed
when doing "mpv file.avi --pause") will not be exactly 0.

(According to Libav developers, this is how it's supposed to work; just
that the first DTS values are normally negative with formats that use
DTS "properly". Who cares if it doesn't work at all with very common
video formats? There's no indication that they'll fix this soon,
either. An elegant workaround is missing too.)

Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV.
Since these timestamps are not reordered, we wouldn't need to sort them,
but it's less code this way (and possibly more robust, should a demuxer
unexpectedly output PTS).

The original intention of all the timestamp changes recently was
actually to get rid of demuxer-specific hacks and the old timestamp
sorting code, but it looks like this didn't work out. Yet another case
where trying to replace native MPlayer functionality with FFmpeg/Libav
led to disadvantages and bugs. (Note that the old PTS sorting code
doesn't and can't handle frame dropping correctly, though.)

Bug reports:

 https://trac.ffmpeg.org/ticket/3178

 https://bugzilla.libav.org/show_bug.cgi?id=600
2013-11-28 15:20:33 +01:00
wm4
d9b5dedfe9 video: warn against non-monotonic PTS instead of decreasing PTS
And by non-monotonic, we mean "strictly non-monotonic".
2013-11-28 15:20:33 +01:00
wm4
3bed78fdfd video: add heuristic to prevent framedrop during hrseek if pts broken
Using --start with files that use DTS only, or which simply have broken
PTS timestamps, would incorrectly drop frames and possibly not execute
the seek correctly.

Add yet another heuristic to detect this. The intent is that --start and
hr-seeks in general should work correctly, but in order to keep things
fast, we still want to allow frame dropping during hr-seek if there are
no problems doing so. Do this by disabling frame dropping by default,
but re-enabling it if there are no problems found for a while. As a
consequence, --start might be somewhat slower, but normal user
interaction should remain as fast as before.
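
In pseudo-C, the heuristic has roughly this shape (the names below are
invented; the real bookkeeping in dec_video.c is more detailed):

    // Assume packet timestamps are unreliable until a number of frames
    // have been decoded without anomalies; only then allow dropping
    // frames during hr-seek.
    if (packet_pts_looks_broken)
        d_video->has_broken_packet_pts = true;
    bool allow_hrseek_framedrop =
        !d_video->has_broken_packet_pts &&
        d_video->num_frames_decoded >= MIN_CLEAN_FRAMES;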

Note that there's something subtle about the added code: the
has_broken_packet_pts field is checked even before the first packet is
fed to dec_video.c, so the field must not be set to 0 right on start.
It isn't initially set to 0 anyway, because the heuristic requires
decoding some images before frame dropping can be enabled.

Note 2: it's not clear whether frame dropping during hr-seek really
helps; I didn't benchmark it.
2013-11-28 15:20:33 +01:00
wm4
aa73ac8db8 video: replace d_video->pts field, change PTS jump checks
The d_video->pts field was a bit strange. The code overwrote it multiple
times (on decoding, on filtering, then once again...), and it wasn't
really clear what purpose this field had exactly. Replace it with the
mpctx->video_next_pts field, which is relatively unambiguous.

Move the decreasing PTS check to dec_video.c. This means it acts on
decoder output, not on filter output. (Just like in the previous commit,
assume the filter chain is sane.) Drop the jitter vs. reset semantics;
the PTS determined by dec_video.c never goes backwards, and demuxer
timestamps don't "jitter".
2013-11-27 21:14:39 +01:00
wm4
5d97ac229a video: if PTS is missing, make something up using the framerate
Also get rid of the PTS check _after_ filters. This means if there's a
video filter which unsets PTS, no warning will be printed. But we assume
that all filters are well-behaved enough by now.
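
The core of it is just this (sketch; prev_pts and fps stand for the stored
previous timestamp and the container framerate):

    // No usable PTS from the decoder: advance by one nominal frame.
    if (pts == MP_NOPTS_VALUE && prev_pts != MP_NOPTS_VALUE && fps > 0)
        pts = prev_pts + 1.0 / fps;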
2013-11-27 21:14:39 +01:00
wm4
f5219720f8 video: refactor PTS code, add fall back heuristic to DTS
Refactor the PTS handling code to make it cleaner, and to separate the
bits that use PTS sorting.

Add a heuristic to fall back to DTS if the PTS is non-monotonic. This
code is based on what FFmpeg/Libav use for ffplay/avplay and also
best_effort_timestamp (which is only in FFmpeg). Basically, this 1. just
uses the DTS if PTS is unset, and 2. ignores PTS entirely if PTS is non-
monotonic, but DTS is sorted.
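
A minimal sketch of that decision (not the actual mpv/Libav code, which
counts "faulty" timestamps rather than using simple one-way flags):

    #include <stdbool.h>

    // last_pts/last_dts must start out as MP_NOPTS_VALUE.
    struct ts_state { double last_pts, last_dts; bool pts_broken, dts_broken; };

    // Pick the timestamp to trust for the current frame; pts/dts are the
    // decoder-reported values, MP_NOPTS_VALUE marks "missing".
    static double pick_pts(struct ts_state *st, double pts, double dts)
    {
        if (pts != MP_NOPTS_VALUE) {
            if (st->last_pts != MP_NOPTS_VALUE && pts <= st->last_pts)
                st->pts_broken = true;          // PTS went non-monotonic
            st->last_pts = pts;
        }
        if (dts != MP_NOPTS_VALUE) {
            if (st->last_dts != MP_NOPTS_VALUE && dts <= st->last_dts)
                st->dts_broken = true;
            st->last_dts = dts;
        }
        if (pts == MP_NOPTS_VALUE)              // 1. PTS unset -> use DTS
            return dts;
        if (st->pts_broken && !st->dts_broken && dts != MP_NOPTS_VALUE)
            return dts;                         // 2. PTS broken, DTS sorted -> use DTS
        return pts;
    }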

The code is pretty much the same as in Libav [1]. I'm not sure if all of
it is really needed, or if it does more than what the paragraph above
mentions. But maybe it's fine to cargo-cult this.

This heuristic fixes playback of mpeg4 in ogm, which returns packets
with PTS==DTS, even though the PTS timestamps should follow codec
reordering. This is probably a libavformat demuxer bug, but good luck
trying to fix it.

The way vd_lavc.c returns the frame PTS and DTS to dec_video.c is a bit
inelegant, but maybe better than trying to mess the PTS back into the
decoder callback again.

[1] https://git.libav.org/?p=libav.git;a=blob;f=cmdutils.c;h=3f1c667075724c5cde69d840ed5ed7d992898334;hb=fa515c2088e1d082d45741bbd5c05e13b0500804#l1431
2013-11-27 21:14:39 +01:00
wm4
1e96f5bcd9 Move some code from player to audio/video reset functions 2013-11-27 21:14:39 +01:00
wm4
f09b2ff661 cosmetics: rename video/audio reset functions
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.

(Some audio decoders actually used to "resync" by reading packets for
resuming playback, but that's not the case anymore.)

Also move the function in dec_video.c to the top of the file.
2013-11-27 21:14:39 +01:00
wm4
f2b5267e88 video: remove commented code 2013-11-27 21:14:39 +01:00
wm4
55070ea85f video: use dts as fallback when determining pts by sorting
This makes the new code equivalent to the old one, which often passed
dts as pts. Also rename some variables to clear things up.
2013-11-27 21:13:42 +01:00
wm4
7a0299478e video: unbreak --no-correct-pts with demuxers that use DTS 2013-11-26 23:43:56 +01:00
wm4
56d3ff33f1 video: move timestamp determination code to dec_video
This means the code that tries to figure out the timestamp from
demuxer and decoder output is now all in dec_video.c. We set the
final timestamp on the returned image (mp_image.pts), as well as
the d_video->pts field.

The way the player uses d_video->pts field is still a bit messy. Maybe
this could be cleaned up later.
2013-11-25 23:16:22 +01:00
wm4
b5b1692593 video: disable PTS sorting fallback by default
It appears PTS sorting was useful only for avi files (and VfW-muxed
mkv). Maybe it was historically also important for decoders with broken
or non-existent PTS reordering (win32 codecs?). But now that we correctly
handle demuxers which output only DTS, it just seems like dead weight.

Disable it by default. The --pts-association-mode option is now forced
to always use the decoder's PTS value. You can still enable the old
default (auto) or force sorting. But we will probably remove this option
entirely at some point.

Make demux_mkv export timestamps as DTS when it's in VfW mode. This is
needed to get correct timestamps with the new default mode. demux_lavf
already does that.
2013-11-25 23:14:54 +01:00
wm4
9f72a9753e demux: export dts from demux_lavf, use it for avi
Having the DTS directly can be useful for restoring PTS values.

The avi file format doesn't actually store PTS values, just DTS. An
older hack explicitly exported the DTS as PTS (ignoring the [I assume]
genpts-generated nonsense PTS), which is not necessary anymore due to
this change.
2013-11-25 23:13:01 +01:00
wm4
d8b59aa17f player: merge no-correct-pts with correct-pts code
Now the --no-correct-pts mode is like the normal mode, just with
different timestamp calculations. The semantics should be about the
same as before this commit.
2013-11-25 23:12:18 +01:00
wm4
88fa420b20 player: change semantics of --no-correct-pts
Before this commit, this mode estimated the frame time by subtracting
successive packet PTS values. This is complete nonsense for video
codecs which use reordering. The code compensated for this using the FPS
value, but it confused the rest of the player with nonsensical, jumping
timestamps. So, all in all, this mode was not very useful.

Repurpose this mode for fixed frame rate playback. This gives almost the
same behavior as the old mode with forced framerate (--fps option). The
result is simpler and often more robust.
2013-11-25 23:10:18 +01:00
wm4
83dc3a81f1 dec_video: fix function signature
Just why...? And why did this take 7 years?
2013-11-25 23:09:40 +01:00
wm4
4205bbf243 video: pass PTS as part of demux_packet/AVPacket and mp_image/AVFrame
Instead of passing the PTS as separate field, pass it as part of the
usual data structures. Basically, this removes strange artifacts from
the API. (It's not finished, though: the final decoded PTS goes through
strange paths, and filter_video() finally overwrites the decoded
mp_image's pts field with it.)

We also stop using libavcodec's reordered_opaque fields, and use
AVPacket.pts and AVFrame.pkt_pts. This is slightly unorthodox, because
these pts fields are not "really" opaque anymore, yet we treat them as
such. But the end result should be the same, and reordered_opaque is
marked as partially deprecated (it's not clear whether it's really
deprecated).
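
Schematically, with the decode API of that time (a fragment; avctx, frame,
data, size and pts_in are placeholders, error handling omitted):

    #include <libavcodec/avcodec.h>

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = data;
    pkt.size = size;
    // Whatever is put here is passed through to the matching output frame
    // (libavcodec only interprets it if pkt_timebase is set):
    pkt.pts  = pts_in;

    int got_frame = 0;
    avcodec_decode_video2(avctx, frame, &got_frame, &pkt);
    if (got_frame) {
        // PTS copied from the packet this (reordered) frame corresponds to:
        int64_t pts_out = frame->pkt_pts;
    }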
2013-11-25 23:08:29 +01:00
wm4
df8d00cc1f vd_lavc: improve a comment 2013-11-24 14:44:58 +01:00
wm4
5a3e01fa80 vd_lavc: when falling back to software, revert filter error status
When mpv is started with some video filters set (--vf is used), and
hardware decoding is requested, and hardware decoding would be possible,
but is prevented due to video filters that accept software formats only,
the fallback didn't work properly sometimes.

This fallback works rather violently: it tries to initialize the filter
chain, and if it fails it throws away the frame decoded using the
hardware, and retries with software. The case that didn't work was when
decoding the current packet didn't immediately lead to a new frame. Then
the filter chain wouldn't be reinitialized, and the playloop would stop
playback as soon as it encounters the error flag.

Fix this by resetting the filter error flag (back to "uninitialized"),
which is a rather violent, but somewhat working solution.

The fallback in general should perhaps be cleaned up later.
2013-11-23 22:28:39 +01:00
wm4
36744a30fb Attempt to fix build on older libavcodec versions 2013-11-23 22:08:18 +01:00
wm4
f90e7ef7ea video: don't overwrite demuxer FPS value
If the --fps option was given (MPOpts->force_fps), the demuxer FPS value
was overwritten with the forced value. This was fine, since the demuxer
value wasn't needed anymore. But with the recent changes not to write to
the demuxer stream headers, we don't want to do this anymore. So
maintain the (forced/updated) FPS value in dec_video->fps.

The removed code in loadfile.c is probably redundant, and an artifact
from past refactorings.

Note that sub.c will now always use the demuxer FPS value, instead of
the user override value. I think this is fine, because it used the
demuxer's video size values too. (And it's rare that these values are
used at all.)
2013-11-23 21:41:40 +01:00
wm4
de68b8f23c video: move handling of container vs. stream AR out of vd_lavc.c
Now the actual decoder doesn't need to care about this anymore, and it's
handled in generic code instead. This simplifies vd_lavc.c, and in
particular we don't need to detect format changes in the old way
anymore.
2013-11-23 21:40:51 +01:00
wm4
4c2fb8f3a2 dec_video: make vf_input and hwdec_info statically allocated
The only reason why these structs were dynamically allocated was to
avoid recursive includes in stheader.h, which is (or was) a very central
file included by almost all other files. (If a struct is referenced via
a pointer type only, it can be forward referenced, and the definition of
the struct is not needed.) Now that they're out of stheader.h, this
difference doesn't matter anymore, and the code can be simplified.

Also sneak in some sanity checks.
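
(The C mechanism being referred to, in isolation:)

    /* A pointer member only needs a forward declaration, so stheader.h
       could get away without including the header defining the struct: */
    struct mp_hwdec_info;
    struct sh_video_old {
        struct mp_hwdec_info *hwdec_info;   /* fine: pointer to incomplete type */
    };
    /* Embedding the struct by value, as dec_video does now, requires the
       full struct definition, i.e. including the header that defines it. */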
2013-11-23 21:39:07 +01:00
wm4
02f96efc50 dec_video: remove "initialized" field
It's redundant.
2013-11-23 21:38:39 +01:00
wm4
904c73d2d2 demux: remove gsh field from sh_audio/sh_video/sh_sub
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.

Also move the "format" field from the specific headers to sh_stream.
2013-11-23 21:37:56 +01:00
wm4
3486302514 video: move decoder context from sh_video into new struct
This is similar to the sh_audio commit.

This is mostly cosmetic in nature, except that it also adds automatic
freeing of the decoder driver's state struct (which was in
sh_video->context, now in dec_video->priv).

Also remove all the stheader.h fields that are not needed anymore.
2013-11-23 21:36:20 +01:00
wm4
25855059af video: remove vf_pp auto-insertion
This drops the --pp option, which was probably broken for a while. The
option automatically inserted the "pp" filter. The value passed to it
was ignored (which is probably broken, it always selected maximal
quality).

Inserting this filter can be done simply with --vf=pp, so this is not
needed anymore.
2013-11-23 21:30:56 +01:00
wm4
acfeb869a3 video: merge vd.c into dec_video.c
I don't feel like the separation ever made sense, and it was hard to
tell which file a given function was in.
2013-11-23 21:28:28 +01:00
wm4
4fa2babacc video: move struct mp_hwdec_info into its own header file
This means most code accessing this struct must now include hwdec.h
instead of dec_video.h. I just put it into dec_video.h at first because
I thought a separate file would be a waste, but it's more proper to do
it this way, as there are too many files which include dec_video.h only
to get the mp_hwdec_info definition.
2013-11-23 21:26:31 +01:00
wm4
de22d2b1ba vf_vavpp: make it work with vo_opengl and software decoding
vo_opengl always loads the hwdec backend lazily, so hwdec_request_api()
has to be called to possibly load it. This makes vf_vavpp work with
software decoding. (Hardware decoding loads the backend before the
filter is initialized, so this case is different.)

Also, the VFCTRL_GET_HWDEC_INFO call doesn't need to be checked. If it
fails, the info will be left blank.
2013-11-22 18:06:34 +01:00
wm4
16233bc546 vdpau_old: enable OpenGL interop
OpenGL interop was essentially disabled, because the decoder didn't
request vdpau device creation from vo_opengl.
2013-11-19 22:15:28 +01:00