Explicitly advancing the playlist with input commands ("playlist_next")
didn't jump back to the first file if the current file was the last one
on the playlist and looping was enabled.
Fix this, and make the behavior the same for explicit input commands
and playback EOF.
Also add a minor feature: if looping is enabled, and the current file is
the first on the playlist, going back one entry jumps to the last
playlist entry (without changing loop count).
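As a rough sketch of the intended navigation logic (hypothetical names
and data structures, not the actual mpv code):

    #include <stdbool.h>
    #include <stddef.h>

    struct playlist_entry { struct playlist_entry *prev, *next; };
    struct playlist { struct playlist_entry *first, *last, *current; };

    // Step one entry forward/backward, wrapping around if looping is
    // enabled. Wrapping backwards does not change the loop count.
    static struct playlist_entry *pl_step(struct playlist *pl, int dir,
                                          bool loop)
    {
        struct playlist_entry *e = pl->current;
        e = dir > 0 ? e->next : e->prev;
        if (!e && loop)
            e = dir > 0 ? pl->first : pl->last;
        return e;  // NULL means end of playlist (stop playback)
    }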
Fixes #22.
Should be dead code. Stream selection is handled either during
demuxer initialization, or via DEMUXER_CTRL_SWITCH_*.
(If there were actually situations where this code did something, it
was probably broken anyway.)
Allow negative times. Timestamps can be negative, and we actually
display negative times for other reasons too, such as when waiting for
the old audio to drain with gapless audio.
Avoid overflows with relatively large time values. (We still don't
handle values too large for int64_t.)
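As an illustration, a display helper in this spirit could look like
the following (a sketch, not the actual code):

    #include <inttypes.h>
    #include <stdio.h>

    // Print [-]H:MM:SS from a time in seconds. Negative values get an
    // explicit sign; intermediate results stay within int64_t range
    // (INT64_MIN itself is still not handled).
    static void print_time(int64_t s)
    {
        const char *sign = "";
        if (s < 0) {
            sign = "-";
            s = -s;
        }
        printf("%s%" PRId64 ":%02" PRId64 ":%02" PRId64 "\n",
               sign, s / 3600, (s / 60) % 60, s % 60);
    }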
This functionality looked smart, but created problems with some kinds
of multi-touch events. Moreover, some events coming from the window
server, like hovering a corner for window resize, didn't cause the
player to wake up immediately.
The "correct", non-hacky way to implement async event polling with
Cocoa would be to have the vanilla Cocoa event loop drive the player,
and to set up mpv's terminal FDs as event sources for the Cocoa event
loop.
Fixes #20
Remove screenshot_force and associated logic. Always try to use the
screenshot video filter before trying to take screenshots with the VO,
which means that --vf=screenshot now takes the role of
--vf=screenshot_force.
(To make this clear, not adding a video filter is still the recommended
way to take screenshots; we just change how VF screenshots are forced.)
Preferring VO over VF and having --vf=screenshot_force used to make
sense when not all VOs supported screenshots, and some VOs had somewhat
broken screenshots (like vo_xv taking screenshots with OSD in it). But
all these issues are fixed now, so just get rid of the cruft.
Move things that are used by vo_xv only into vo_xv, same for vo_x11.
Rename some functions exported by x11_common, like vo_init to
vo_x11_init. Make functions not used outside of x11_common.c private
to that file. Eliminate all global variables defined by x11_common
(except error handler and colormap stuff).
There shouldn't be any functional changes; code is only moved around.
There are some minor simplifications in the X11 init code, as
we completely remove the ability to initialize X11 and X11+VO
separately (see commit b4d9647 "mplayer: do not create X11 state in player frontend"),
and the respective functions are conflated into vo_x11_init() and
vo_x11_uninit().
The "http:" protocol has been switched to use ffmpeg's HTTP
implementation some time ago. One problem with this was that many HTTP
specific options stopped working, because they were obviously
implemented for the internal HTTP implementation only.
Add the missing things. Note that many options will work for ffmpeg
only, as Libav's HTTP implementation is missing these. They will
silently be ignored on Libav.
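The mechanism is essentially forwarding the option values to
libavformat as AVOptions; roughly like this (a sketch using public
FFmpeg API; the mpv-side variable names are made up, and which options
exist depends on the FFmpeg/Libav version):

    #include <libavformat/avformat.h>

    static void open_http(const char *url, const char *useragent,
                          const char *cookies)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "user-agent", useragent, 0);
        av_dict_set(&opts, "cookies", cookies, 0);

        AVFormatContext *avfc = NULL;
        avformat_open_input(&avfc, url, NULL, &opts);
        // Options not consumed by the demuxer/protocol remain in the
        // dict; this is how silently ignored options can be detected.
        av_dict_free(&opts);
    }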
Some options we can't fix:
--ipv4-only-proxy, --prefer-ipv4, --prefer-ipv6
    As far as I can see, not even libavformat internals distinguish
    between IPv4 and IPv6.
--user, --passwd
    ffmpeg probably supports specifying these in the URL directly.
If we detect Libav, always use the old builtin vobsub decoder (in
spudec.c). Note that we do not want to use it for newer ffmpeg, as
spudec.c can't handle the vobsub packets as generated by the .idx
demuxer, and we want to get rid of spudec.c in general anyway.
Now, when backgrounded, mpv keeps playing and outputs messages to
stdout, but the status line is not output.
Background<->foreground transitions are detected via signals and by
polling the process group.
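The foreground check itself boils down to a process group comparison;
the idea as a POSIX sketch (not the exact mpv code):

    #include <unistd.h>

    // True if we are in the terminal's foreground process group. When
    // backgrounded, the terminal's foreground pgrp differs from ours.
    static int mp_is_foreground(void)
    {
        pid_t pgrp = tcgetpgrp(STDERR_FILENO);
        return pgrp == -1 || pgrp == getpgrp();  // not a tty: assume fg
    }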
This was disabled in 4ea60a3 and 70c455a, when all options were still
forced file-local, and the fullscreen setting was annoyingly reset when
switching to the next file. mpv keeps all options by default, so this
isn't needed anymore.
-x/-y were rather useless and obscure. The only use I can see is
forcing a specific aspect ratio without having to calculate the aspect
ratio float value (although --aspect takes values of the form w:h).
This can also be done with --geometry and --no-keepaspect. There was
also a comment that -x/-y is useful for -vm, although I don't see how
that helps either, as it still messes up the aspect ratio.
-xy is mostly obsolete. It does two things: a) set the window width to
a pixel value, b) scale the window size by a factor. a) is already done
by --autofit (--autofit=num does exactly the same thing as --xy=num, if
num >= 8). b) is not all that useful, so we just drop that
functionality.
--autofit=WxH sets the window size to a maximum width and/or height,
without changing the window's aspect ratio.
--autofit-larger=WxH does the same, but only if the video size is
actually larger than the window size that would result from using
--autofit=WxH with the same arguments.
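Conceptually, both options reduce to uniformly scaling the window size
into a bounding box; an illustrative sketch (not the actual
implementation):

    // Scale (*w, *h) to fit within fit_w/fit_h, keeping the aspect
    // ratio. A bound of 0 leaves that dimension unconstrained; *w and
    // *h are assumed > 0.
    static void fit_box(int *w, int *h, int fit_w, int fit_h)
    {
        double sx = fit_w > 0 ? (double)fit_w / *w : 1e30;
        double sy = fit_h > 0 ? (double)fit_h / *h : 1e30;
        double s = sx < sy ? sx : sy;
        if (s < 1e30) {
            *w = (int)(*w * s + 0.5);
            *h = (int)(*h * s + 0.5);
        }
    }

For --autofit-larger, this scaling is applied only when the video is
larger than the given box.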
Now all numbers in the --geometry specification can take percentages.
Rewrite the parsing of --geometry, because adjusting the sscanf() mess
would require adding all the combinations of using and not using %. As
a side effect, using % and pixel values can be freely mixed.
Keep the aspect if only one of width or height is set. This is more
useful in general.
Note: there is one semantic change: --geometry=num used to mean setting
the window X position, but now it means setting the window width.
Apparently this was an mplayer-specific feature (not part of standard X
geometry specifications), and it doesn't look like an overly useful
feature, so we are fine with breaking it.
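For illustration, all of the following are now accepted (arbitrary
example values):

    --geometry=1280x720    # plain pixel sizes
    --geometry=50%x50%     # half the screen size in both dimensions
    --geometry=1280x50%    # pixel and percentage values can be mixed
    --geometry=50%         # width only; height follows the aspect ratio
    --geometry=+30+40      # position only, standard X geometry style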
In general, the new parsing should still adhere to standard X geometry
specification (as used by XParseGeometry()).
This also means the option is verified on program start, not when the VO
is created. The actual code becomes a bit more complex, because the
screen width/height is not available at program start.
The actual parsing code is still the same, with its unusual sscanf()
usage.
Drop queued frames on seek. Reset the internal state of some filters
that seem to need it as well: at least vf_divtc still produced some
frames using the previous PTS.
This fixes weird behavior with some filters on seeking. In particular,
this could lead to A/V desync or apparent lockups due to the PTS of
filtered frames being too far away from audio PTS.
This commit does only the minimal work required to fix these
PTS-related issues. Some filters have state that depends on previously
filtered
frames, and these are not automatically reset with this commit (even
vf_divtc and vf_softpulldown reset the PTS info only). Filters that
actually require a full reset can implement VFCTRL_SEEK_RESET.
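A filter that needs a full reset handles the new control;
schematically (reset_state() is a made-up helper, and the control
signature/return values follow the usual filter conventions):

    static int control(struct vf_instance *vf, int request, void *data)
    {
        switch (request) {
        case VFCTRL_SEEK_RESET:
            // Drop everything derived from previously filtered frames:
            // queued frames, PTS history, temporal filter state, ...
            reset_state(vf->priv);
            return CONTROL_TRUE;
        }
        return CONTROL_UNKNOWN;
    }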
Format changes within a file can happen e.g. in MPEG-TS streams. This
also fixes encoding of such files, because ao_lavc is not capable of
reconfiguring the audio stream.
This printed per-frame statistics into a file, like bitrate or frame
type. Not very useful and accesses obscure AVCodecContext fields
(danger of deprecation/breakage), so get rid of it.
This was a "broken misfeature" according to Libav developers. It wasn't
implemented for modern codecs (like h264), and has been removed from
Libav a while ago (the AVCodecContext field has been marked as
deprecated and its value is ignored). FFmpeg still supports it, but it
isn't very useful for the aforementioned reasons.
Remove the code to enable it.
It's not easy to tell whether the OSD/subs are empty, or if something is
drawn. In general you have to use osd_draw() with a custom callback. If
nothing is visible, the callback is never invoked. (The actual reason
why this is so "hard" is the implementation of osd_libass.c, which
doesn't allow separating rendering and drawing of OSD elements, because
all OSD elements share the same ASS_Renderer.)
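An emptiness check thus ends up looking roughly like this (a sketch;
the osd_draw() parameter list is approximated):

    static void any_cb(void *ctx, struct sub_bitmaps *imgs)
    {
        *(bool *)ctx = true;  // reached only if something is rendered
    }

    static bool osd_is_visible(struct osd_state *osd,
                               struct mp_osd_res res, double pts)
    {
        bool formats[SUBBITMAP_COUNT];
        for (int n = 0; n < SUBBITMAP_COUNT; n++)
            formats[n] = true;  // accept all bitmap formats
        bool visible = false;
        osd_draw(osd, res, pts, 0, formats, any_cb, &visible);
        return visible;
    }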
To simplify avoiding copies, make osd_draw_on_image(), rather than the
caller, call mp_image_make_writeable(). Introduce osd_draw_on_image_p(),
which works like osd_draw_on_image(), but gets the new image allocation
from an image pool. This is supposed to be an optimization, because it
reduces the frequency of large allocations/deallocations for image data.
The result of this is that the frequency of copies needed in conjunction
with vf_sub, screenshots, and vo_lavc (encoding) should be reduced.
vf_sub now always does true pass-through if no subs are shown.
Drop the pts check from vf_sub. This didn't make much sense.
mplayer's video chain traditionally used FourCCs for pixel formats. For
example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the
string 'YV12' interpreted as unsigned int. Additionally, it used to
encode information into the numeric values of some formats. The RGB
formats had their bit depth and endian encoded into the least
significant byte. Extended planar formats (420P10 etc.) had chroma
shift, endian, and component bit depth encoded. (This has been removed
in recent commits.)
Replace the FourCC mess with a simple enum. Remove all the redundant
formats like YV12/I420/IYUV. Replace some image format names by
something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.
Add img_fourcc.h, which contains the old IDs for code that actually uses
FourCCs. Change the way demuxers that output raw video identify the
video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
format. Like the previous hack, this is supposed to avoid the need for
a complete codecs.cfg entry per format, or other lookup tables. (Note
that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT
raw video, but this is still considered better than adding a raw video
decoder - even if trivial, it would be full of annoying lookup tables.)
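For a demuxer producing raw 4:2:0 YUV, this now amounts to (sketch,
field names as described above):

    sh_video->format = MP_FOURCC_RAWVIDEO;  // request the raw video decoder
    sh_video->imgfmt = IMGFMT_420P;         // pixel format of the packet data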
The TV code has not been tested.
Some corrective changes regarding endian and other image format flags
creep in.
Instead of using a callback to "capture" the image next time the filter
function is called, do it the other way around: on every filter
invocation, create a reference to the image, and return it if a
screenshot is requested. This also fixes the 1-frame delay when taking
screenshots with the filter.
This also allows simplifying screenshot.c.
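Schematically, the filter now works like this (a sketch; the priv
layout is made up and the CONTROL_* values are approximated):

    struct vf_priv_s { struct mp_image *current; };  // sketch-only layout

    static struct mp_image *filter(struct vf_instance *vf,
                                   struct mp_image *mpi)
    {
        // Keep a reference to every frame passing through; with
        // refcounted images this is cheap (no copy).
        mp_image_unrefp(&vf->priv->current);
        vf->priv->current = mp_image_new_ref(mpi);
        return mpi;
    }

    static int control(struct vf_instance *vf, int request, void *data)
    {
        if (request == VFCTRL_SCREENSHOT && vf->priv->current) {
            // Return the current frame, not the next one: no 1-frame
            // delay anymore.
            *(struct mp_image **)data = mp_image_new_ref(vf->priv->current);
            return CONTROL_TRUE;
        }
        return CONTROL_UNKNOWN;
    }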
VFCAP_TIMER disables any additional waiting done by mpv in the
playloop. Remove VFCAP_TIMER, but re-use the idea for vo_image and
vo_lavc.
This means --untimed doesn't have to be passed when using --vo=image.
Change the entire filter API to use reference counted images instead
of vf_get_image().
Remove filter "direct rendering". This was useful for vf_expand and (in
rare cases) vf_sub: DR allowed these filters to pass a cropped image to
the filters before them. Then, on filtering, the image was "uncropped",
so that black bars could be added around the image without copying. This
means that in some cases, vf_expand will be slower (-vf gradfun,expand
for example).
Note that another form of DR used for in-place filters has been replaced
by simpler logic. Instead of trying to do DR, filters can check if the
image is writeable (with mp_image_is_writeable()), and do true in-place
if that's the case. This affects filters like vf_gradfun and vf_sub.
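The typical pattern for such filters becomes (sketch; process_pixels()
stands in for the actual filtering):

    static struct mp_image *filter(struct vf_instance *vf,
                                   struct mp_image *mpi)
    {
        // No-op if the buffer is exclusively ours, otherwise this
        // copies; either way the image can then be modified in place.
        mp_image_make_writeable(mpi);
        process_pixels(mpi);
        return mpi;
    }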
Everything has to support strides now. If something doesn't, making a
copy of the image data is required.
Deprecate the hardware specific video codec entries (like ffh264vdpau).
Replace them with the --hwdec switch, which requests that a specific
hardware decoding API should be used. The codecs.conf entries will be
removed at a later time, but for now they are useful for testing and
compatibility.
Instead of --vc=ffh264vdpau, --hwdec=vdpau should be used.
Add a fallback if hardware decoding fails. Most hardware decoders
(including vdpau) support only a subset of h264, and having such a
fallback is supposed to enable a better user experience.
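The fallback boils down to reinitializing the decoder without hwaccel
once hardware decoding errors out; schematically (all names here are
hypothetical):

    if (hwdec_failed(ctx)) {
        // e.g. unsupported h264 profile: reopen with the normal
        // software decoder instead of failing playback entirely.
        uninit_avctx(ctx);
        ctx->hwdec = NULL;
        init_avctx(ctx, ctx->software_decoder);
    }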
Simplify the decoder pixel format handling by making it handle only
the case vd_lavc needs: a video stream always decodes to a single
pixel format.
Remove the handling for multiple pixel formats, and remove the
codecs.conf pixel format declarations that are left.
Remove the handling of "ambiguous" pixel formats like YV12 vs. I420 (via
VDCTRL_QUERY_FORMAT etc.). This is only a problem if the video chain
supports I420, but not YV12, which doesn't seem to be the case anywhere,
and in fact would not have any advantage.
Make the "flip" flag a global per-codec flag, rather than a pixel format
specific flag. (Some ffmpeg decoders still return a flipped image, so
this has to be done manually.) Also fix handling of the flip operation:
do not overwrite the global flip option, and make the --flip option
invert the codec flip option rather than overriding it.
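In effect (sketch):

    // --flip inverts the per-codec flip flag instead of overwriting it.
    bool flip = codec_flags_flip;
    if (opts->flip)
        flip = !flip;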
Slices allowed filtering or drawing video in horizontal bands or
blocks. This allowed working on the video in smaller units. In theory,
this could bring a performance win by lowering cache pressure, as you
didn't have to keep the whole video frame in cache while filtering,
only the slice.
In practice, the slice code path was barely used for the following
reasons:
- Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg
slice callback was disabled, because it can be called from another
thread, and the mplayer video chain is not thread-safe.
- There was nothing that would turn "full" images into appropriate
slices, so slices were rarely used.
- Most filters didn't actually support slices.
On the other hand, supporting slices led to code duplication and more
complex code in general. I did some experiments and didn't find any
actual measurable performance improvements when using slices. Even
ffmpeg removed slice-based filtering from libavfilter in favor of
simpler code.
The most broken thing about the slice code path is that slices can't
be queued, the way it is done for images in vo.c.
This function sucks and apparently is not very portable (at least on
mingw, the configure check fails). Also remove the emulation of that
function from osdep/strsep*, and remove the configure check.
Ensure that even if a seek is inaccurate it will not show video from
outside the defined timeline. Previously, seeking to the beginning of
a segment could show frames from before the start of the segment if
the seek was done in inaccurate mode and the demuxer seeked to an
earlier position. Now hr-seek machinery is used to skip at least the
frames that should not be part of playback timeline at all.
Now external subtitles essentially use the playback time, instead of
the segment time.
This is more useful when using external subtitles with mkv ordered
chapters. The previous behavior is not necessarily incorrect, and e.g.
makes it easier to use subtitles directly extracted from ordered
chapters segments. But we consider the new behavior more useful.
Also see commit 06e3dc8.
This is simpler and more useful. We could add a new switch for the old
functionality, but that would probably be more confusing than helpful.
When passing only a single file on the command line, this commit
shouldn't change behavior.
(Classic mplayer provided both features by duplicating the loop
functionality in the "playtree".)
When the last frame is displayed and a frame step command is issued,
playback ends and advances to the next file. But before this commit,
the next file started playing unpaused. Fix this, and make sure the
pause state is kept.
Before this commit, the --osd-* options (like --osd-font-size etc.)
configured both the OSD and subtitle font. Make them separate, and add
--sub-text-* options (like --sub-text-size etc.). Now --osd-* affects
the OSD font only, and --sub-text-* unstyled text subtitles only.
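For example, to configure them independently (illustrative values):

    --osd-font-size=40     # OSD messages only
    --sub-text-size=55     # unstyled text subtitles only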
This is better than having just the operating system type decide the
wakeup period, as e.g. when compiling for Win32/cygwin, a wakeup period
of 0.5 would work perfectly fine.
Instead, the default wakeup period is now decided only by the
availability of a working select() system call (which is the case on
cygwin but not mingw and MSVC) AND a VO that can provide an event file
descriptor or a similar hack (vo_corevideo). VOs that cannot do either
need polling for event handling, and can now set the wakeup period to
0.02 in the VO code.
Add `mp_find_config_file` to search different known paths and use that in
ass_mp to look for the fontconfig configuration file.
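A call then looks like this (a sketch; the consumer function is made
up, and the returned string is talloc-allocated as noted below):

    char *path = mp_find_config_file("fonts.conf");
    if (path)
        setup_fontconfig(path);  // hypothetical consumer
    talloc_free(path);  // allocated on the NULL talloc context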
Some incidental changes spawned by this feature were:
* Buffer allocation for the strings containing the paths is now performed
with talloc. All of the allocations are done on a NULL context, but it still
improves readability of the code.
* Move the OSX function for lookup inside of a bundle: this code path
is currently not used by the bundle generated with `make osxbundle`.
The plan is to use it again in a future commit to get a fontconfig
config file.
Even if this file is not as bad as other files, I need to add some
stuff so... why not!?
`uncrustify -l C -c TOOLS/uncrustify.cfg --no-backup --replace core/path.h`
`uncrustify -l C -c TOOLS/uncrustify.cfg --no-backup --replace core/path.c`
The header was unchanged by the tool.
Keep the currently displayed subtitles even when the user cycles through
subtitle tracks, and the subtitle is decoded by libavcodec (such as
vobsubs). Do this by not clearing the subtitles on reset(). reset() is
also called on seek, so check the start PTS to decide whether the
subtitle should really be displayed (there's already an end PTS). Note
that sd_ass does
essentially something similar.
The existing code has checks for whether the PTS reported by the
demuxer is invalid (MP_NOPTS_VALUE). I don't know under what
circumstances this can happen, so fall back to the old behavior if the
PTS is invalid.
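The visibility check then amounts to (a sketch with made-up field
names):

    struct sub_entry { double start, end; };

    static bool sub_visible(struct sub_entry *s, double pts)
    {
        if (s->start == MP_NOPTS_VALUE)
            return true;  // invalid start PTS: keep the old behavior
        return pts >= s->start && pts < s->end;
    }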
This slightly improves the display of the current playback time in
files with sparse video packets (like video tracks containing a slow
MJPG slideshow, as in [1]), or in audio files with cover art image
attachments.
While the video PTS is always "stuck" at the last frame displayed or
the last seek, audio is usually continuous. Given sane samplerates and
working audio drivers (to query how much of the current audio buffer has
been played), the audio PTS should always be more reliable.
[1] http://www.podtrac.com/pts/redirect.mp3/traffic.libsyn.com/rtpodcast/Rooster_Teeth_Podcast_191.m4a
ffmpeg pretends that image attachments (such as contained in ID3v2
metadata) are video streams. It injects the attached pictures as packets
into the packet stream received with av_read_frame().
Add the --audio-display option to allow configuring whether attached
pictures should be displayed. The default behavior doesn't change
(images are displayed).
Identify video streams that are actually image attachments with "[P]"
in the terminal output.
Modify the default stream selection such that real video streams are
preferred over attached pictures. (This is just for robustness; I do
not know of any samples where images come before actual video streams,
which could lead to bad default stream selection with the old code.)