libpostproc has been removed from Libav and the library now exists as
a separate project. Because it's not essential, separate it from the
Libav library check and allow compiling without it.
Add helper function pkg_config_add() that checks for the presence of a
package and also adds cflags/ldflags if it is found. Change existing
pkg-config-using feature tests to use that. Also change the freetype
test, which previously used the separate freetype-config binary; using
pkg-config instead helps cross-compiling. Drop other kinds of checks
(such as test compiles) from these tests. It's possible that this
could cause problems on some (broken) systems, but that can't be
verified without user testing.
demux_lavf was returning a fixed size value, determined when the stream
was opened, whenever libavformat queried the file size with
AVSEEK_SIZE. Add code to first query the stream for a possibly changed
value. This at least improves seeking with growing MPEG files;
previously, seeks would never go beyond the part of the file that
existed when the stream was first opened.
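A minimal sketch of the intended seek-callback behavior, assuming
MPlayer's stream_control()/STREAM_CTRL_GET_SIZE API; the real
demux_lavf code may differ in detail:

    static int64_t mp_seek(void *opaque, int64_t pos, int whence)
    {
        demuxer_t *demuxer = opaque;
        stream_t *stream = demuxer->stream;

        if (whence == AVSEEK_SIZE) {
            uint64_t size;
            // Re-query the stream so a growing file reports its current
            // size instead of the value cached when it was opened.
            if (stream_control(stream, STREAM_CTRL_GET_SIZE, &size) == STREAM_OK)
                return size;
            return stream->end_pos - stream->start_pos;
        }
        if (whence == SEEK_CUR)
            pos += stream_tell(stream);
        else if (whence == SEEK_END)
            pos += stream->end_pos;
        else if (whence == SEEK_SET)
            pos += stream->start_pos;
        else
            return -1;
        if (!stream_seek(stream, pos))
            return -1;
        return pos - stream->start_pos;
    }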
The terminal OSD line was written with mp_msg(MSGT_CPLAYER, ...) but
erased with printf(). This meant that disabling MSGT_CPLAYER messages
would prevent the terminal line from being printed, but a line
(probably unrelated) would still be cleared. Change the clearing code
to use mp_msg(MSGT_CPLAYER, ...) too.
Libavcodec started checking that avctx->pix_fmt is set when opening
an encoder. The existing code (originally from vf_screenshot.c) only
set it afterwards, which now made screenshots fail. Fix the code to
set parameters before calling avcodec_open2().
Also fix some minor issues; this seems to make the code work with other
encoders as well, so it could be used to add more libavcodec-based
image writers. Fix a memory leak (missing av_free(avctx)).
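A hedged sketch of the required ordering and cleanup (the PNG codec and
the helper shape are illustrative, not the actual screenshot code):

    static int encode_image(uint8_t *rgb, int w, int h)
    {
        AVCodec *codec = avcodec_find_encoder(CODEC_ID_PNG);
        if (!codec)
            return -1;
        AVCodecContext *avctx = avcodec_alloc_context3(codec);
        // Fill in all parameters before avcodec_open2(); setting pix_fmt
        // only afterwards is what recent libavcodec now rejects.
        avctx->time_base = (AVRational){1, 25};
        avctx->width     = w;
        avctx->height    = h;
        avctx->pix_fmt   = PIX_FMT_RGB24;
        if (avcodec_open2(avctx, codec, NULL) < 0) {
            av_free(avctx);
            return -1;
        }
        /* ... encode rgb and write the output file ... */
        avcodec_close(avctx);
        av_free(avctx);   // this free was missing before (memory leak)
        return 0;
    }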
The vd_ffmpeg decode() function returned without doing anything if the
input packet had size 0. This meant that flushing buffered frames at
EOF did not work. Remove this test. Have the core code skip such
packets coming from the file being played instead (Libav treats
0-sized packets as flush signals anyway, so it's better to assume that
such packets never represent real frames with any codec).
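A hedged sketch of the flush path at EOF (avctx and frame are assumed
to be the decoder context and output frame used in vd_ffmpeg):

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;                  // zero-sized packet = flush request
    int got_frame = 0;
    avcodec_decode_video2(avctx, frame, &got_frame, &pkt);
    if (got_frame) {
        /* ... return the buffered frame to the caller ... */
    }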
Libav has changed back to not modifying avctx->has_b_frames based on
the extra buffering caused by thread use. Add back the code to do the
adjustment on the player side once again.
The timing mode using the buffering info is no longer the default, so
in most cases having this right or not won't matter for playback.
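A hedged sketch of the player-side adjustment described above; the
field names match libavcodec, the surrounding code is assumed:

    int delay = avctx->has_b_frames;
    // Frame-threaded decoding buffers roughly one extra frame per
    // additional thread; account for that on top of the stream's own
    // reorder delay.
    if (avctx->active_thread_type & FF_THREAD_FRAME)
        delay += avctx->thread_count - 1;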
Remove the private bswap and intreadwrite.h implementations and use
libavutil headers instead.
Originally these headers weren't publicly installed by libavutil at
all. That already changed in 2010, but the pure C bswap version in
installed headers was very inefficient. That was recently (2011-12)
improved and now using the public bswap version probably shouldn't
cause noticeable performance problems, at least if using a new enough
compiler.
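Example of the libavutil replacements (a small illustrative helper, not
code from the tree):

    #include <stdint.h>
    #include <libavutil/bswap.h>
    #include <libavutil/intreadwrite.h>

    static uint32_t parse_chunk(const uint8_t *buf, uint8_t *out)
    {
        uint32_t size = AV_RL32(buf);          // 32-bit little-endian read
        uint32_t tag  = AV_RB32(buf + 4);      // 32-bit big-endian read
        AV_WL32(out, av_bswap32(tag));         // byte swap + LE write
        return size;
    }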
Change demux_lavf to use CodecID -> RIFF tag mappings that are now
available through the public Libav API. Previously it used a copy in
ffmpeg_files/taglists.c. That can now be deleted.
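A hedged sketch of the lookup through the public API; whether the
avformat_get_riff_video_tags() getter is available depends on the Libav
version, so treat the exact call as an assumption:

    static unsigned int codec_id_to_riff_tag(enum CodecID id)
    {
        const struct AVCodecTag *tag_tables[] =
            { avformat_get_riff_video_tags(), NULL };
        return av_codec_get_tag(tag_tables, id);
    }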
Change various code to use the latest Libav API. The libavcodec
error_recognition setting has been removed and replaced with different
semantics. I removed the "--lavdopts=er=<value>" option accordingly,
as I don't think it's widely enough used to be worth attempting to
emulate the old option semantics using the new API. A new option with
the new semantics can be added later if needed.
Libav dropped APIs that were necessary with all Libav versions
until quite recently (like setting avctx->age), and it would thus not
be possible to keep compatibility with previous Libav versions without
adding workarounds. The new APIs also had some bugs/limitations in the
recent Libav release 0.8, and it would not work fully (at least some
avcodec options would not be set correctly). Because of those issues,
this commit makes no attempt to maintain compatibility with anything
but the latest Libav git head. Hopefully the required fixes and
improvements will be included in a following Libav point release.
Libav started automatically enabling threaded decoding a while ago.
This is not safe, as it means callbacks can suddenly be called from
other threads, outside our own calls into libavcodec. We need to know when
threading will be used and disable thread-unsafe callbacks in those
cases. Explicitly set thread count to 1 instead of leaving it at 0
(which triggers the autodetection) when we are not requesting more
threads; this should make sure that autodetection on libavcodec side
will not be used.
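A minimal sketch of the change (the variable holding the requested
thread count is an assumption):

    int threads = lavc_param_threads;          // user-requested count
    // Never leave thread_count at 0: that would let libavcodec pick a
    // count itself and possibly run our callbacks from other threads.
    avctx->thread_count = threads > 1 ? threads : 1;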
Now PulseAudio is preferred over ALSA, which in turn is preferred over
OSS. This should give the best results on all systems. On systems with
PulseAudio, we will always use it natively, rather than through the
suboptimal ALSA emulation (which the default ALSA output is normally
redirected to when PulseAudio is active; ALSA hardware devices will
not be, but to use those the user must set AO explicitly in any case,
so changing the defaults makes no difference). The fallback from
ao_pulse to ao_alsa causes no noticeable delay on systems without
PulseAudio. On systems with ALSA, we won't attempt to use OSS anymore.
Also, move OpenAL above SDL. OpenAL should generally work better than
SDL.
When the volume multiplier is 1, the data shouldn't be changed, but the
code actually multiplied each sample with 255/256. Change the factor to
256, and hope there wasn't a good reason for the value 255.
Additionally, don't work on the data if it wouldn't be changed anyway.
This is a micro-optimization.
This doesn't touch the code path for the float format.
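A hedged sketch of the 16-bit integer path (not the actual filter
code):

    #include <stdint.h>

    static void apply_volume_s16(int16_t *samples, int n, int vol)
    {
        if (vol == 256)        // neutral multiplier: data would not change
            return;
        for (int i = 0; i < n; i++) {
            int v = (samples[i] * vol) >> 8;   // 256-based fixed point
            samples[i] = v < INT16_MIN ? INT16_MIN
                       : v > INT16_MAX ? INT16_MAX : v;
        }
    }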
Since the recent OSD redraw changes, every GUI expose event causes the
message "===== PAUSE =====" to be printed on console. This was a bit
annoying, so change it so that it is only printed once when going into
paused mode. It's also printed again if the cache status changes (when
playing URLs), or when the status line is printed during pause mode (when
you seek while paused).
This also removes some minor code duplication.
When the OSD was enabled and the player was paused by executing the
frame_step command, the OSD still displayed the icon indicating
playback. Fix this and always set the proper icon when the pause
state is changed.
stream_ffmpeg was using the libavformat URLContext API. This API has
been deprecated (for public use at least) in libavformat. Switch to
the AVIOContext API.
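A hedged sketch of the replacement calls; the integration with
stream_ffmpeg's callbacks differs in detail:

    static int open_f(stream_t *stream, const char *url)
    {
        AVIOContext *avio = NULL;                        // replaces URLContext
        if (avio_open(&avio, url, AVIO_FLAG_READ) < 0)   // was url_open()
            return STREAM_ERROR;
        stream->priv = avio;
        // The read/seek/close callbacks then use avio_read(), avio_seek()
        // and avio_close() instead of url_read()/url_seek()/url_close().
        return STREAM_OK;
    }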
Update various code using Libav libraries to remove use of API
features that were deprecated at Libav release 0.7. I think this
removes them all with the exception of URLContext functions still used
in stream_ffmpeg.c (at least other uses that generated deprecation
warnings with libraries from 0.7 are removed).
Require versions of the Libav libraries corresponding to Libav release
0.7. These are:
libavutil 51.7.0
libavcodec 53.5.0
libavformat 53.2.0
libswscale 2.0.0
libpostproc 52.0.0
Also disable the fallback to simple header check if these libraries
could not be found with pkg-config; now compiling without pkg-config
support for these always requires explicitly setting --enable-libav
and any needed compiler/linker flags. The simple check would have let
compilation proceed even if a version mismatch was detected.
Recent commits for screenshot support and video redraw changes didn't
handle vdpau driver preemption state correctly, which could make the
player crash if preemption occurred. Fix this and improve preemption
handling a bit otherwise.
Using the "expand" filter makes the image area larger by adding
borders to the video frame. These borders are supposed to be always
black. The filter relied on the borders in its output buffer staying
black without redrawing them for each frame. However, when using
direct rendering, a video filter inserted after vf_expand can draw
into these borders, for example the "unsharp" and "ass" filters. These
changes incorrectly stayed visible in the following video frames.
Fix this by always clearing the borders in vf_expand. In some cases,
this might be more work than necessary, but vf_expand has no way of
detecting whether a subsequent filter draws into the borders or not,
and this avoids fragile assumptions about the existing contents of the
output buffer(s).
This also deals with frame size changes when config() is called again.
Before this commit, remains of the old video were visible if the new
video frame size was smaller than before.
Since we now always clear the borders, there's no more need for the
complicated code that cleared only the regions that were covered by the
OSD. Delete that.
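A hedged sketch of the per-frame clearing using the existing
vf_mpi_clear() helper; the exp_* fields stand for the video rectangle
inside the expanded frame, and the real vf_expand code may differ:

    static void clear_borders(struct vf_instance *vf, mp_image_t *dmpi)
    {
        struct vf_priv_s *p = vf->priv;
        // top and bottom borders (full width)
        vf_mpi_clear(dmpi, 0, 0, dmpi->w, p->exp_y);
        vf_mpi_clear(dmpi, 0, p->exp_y + p->exp_h, dmpi->w,
                     dmpi->h - (p->exp_y + p->exp_h));
        // left and right borders next to the video area
        vf_mpi_clear(dmpi, 0, p->exp_y, p->exp_x, p->exp_h);
        vf_mpi_clear(dmpi, p->exp_x + p->exp_w, p->exp_y,
                     dmpi->w - (p->exp_x + p->exp_w), p->exp_h);
    }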
When config() is called multiple times (e.g. aspect ratio changes
while the same file is playing), the user settings are not honoured,
because config() overwrites them. Don't do that.
This fixes a crash with vo_gl when the switch_ratio slave command is used
while displaying an animated subtitle. switch_ratio will cause a config()
call to vo_gl, which will reset all state, including the EOSD state. Next
time the EOSD is rendered, its change detection will indicate that only
subtitle positions have changed. The render code will attempt to access
the EOSD data structures which have been deleted with config().
Fix this by forcing the change detection to indicate a full change if
config() has been called.
This only happens when doing switch_ratio with vo_gl, and the current
subtitle is only changing positions, i.e. mp_eosd_images_t.changed == 1.
With current typical video sizes, font sizes are large enough that
they don't really need hinting (and particularly so for font sizes in
display-resolution rendered subtitles in fullscreen mode), and hinting
apparently causes problems with some fonts.
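The change amounts to disabling hinting in the libass renderer, roughly
as below; whether this happens through an option default or a direct
call is an implementation detail:

    ass_set_hinting(ass_renderer, ASS_HINTING_NONE);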
Currently there is no way to set the swap interval with a function
that has a signature compatible with other platforms' gl extensions.
Make a wrapper function around the GUI toolkit's method of setting the
swap interval property, and point gl->SwapInterval to it.
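A hedged sketch of the wrapper; the Cocoa-side helper name is an
assumption, and the int-in/int-out signature mirrors the GLX extension
it stands in for:

    static int set_swap_interval_cocoa(int interval)
    {
        vo_cocoa_swap_interval(interval);   // assumed toolkit-level setter
        return 0;
    }

    static void setup_swap_interval(GL *gl)
    {
        gl->SwapInterval = set_swap_interval_cocoa;
    }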
Remove the useless dependency on MPGLContext from cocoa_common, since
it was used just to access the vo struct. Change gl_common to pass the
vo struct directly to all the cocoa_common functions.
Always set the X11 window title properties as UTF-8. This is a bit tricky
for X11 window properties which are not specified to use UTF-8, such as
WM_NAME.
We also properly set WM_ICON_NAME, which means the window caption and the
text used in the task bar (if the WM has one) will be the same on most
window managers. Before this commit, WM_ICON_NAME was always hardcoded to
"MPlayer", even if --title or --use-filename-title was used.
Also update the window title only on reconfigure, like it is done in
mplayer-svn commit 34380.
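A hedged sketch of how the properties are set; the EWMH atoms and the
legacy WM_NAME/WM_ICON_NAME are all written as UTF-8:

    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    static void set_title_utf8(Display *dpy, Window w, const char *title)
    {
        Atom utf8 = XInternAtom(dpy, "UTF8_STRING", False);
        Atom props[] = {
            XInternAtom(dpy, "_NET_WM_NAME", False),       // EWMH, UTF-8
            XInternAtom(dpy, "_NET_WM_ICON_NAME", False),  // EWMH, UTF-8
            XA_WM_NAME,        // legacy; not specified as UTF-8, set anyway
            XA_WM_ICON_NAME,   // legacy; keeps the taskbar text in sync
        };
        for (int i = 0; i < 4; i++)
            XChangeProperty(dpy, w, props[i], utf8, 8, PropModeReplace,
                            (unsigned char *)title, strlen(title));
    }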
This affects only the "new" VO API. The config() title argument was barely
used, and it's hardcoded to "MPlayer" in vf_vo.c. The X11 and the Cocoa
GUI backends, which are the only ones properly supporting window titles,
ignored this argument. Remove the title argument.
Add the vo_get_window_title function. All GUI VOs are supposed to use it
for the window title.
Avoid calling avcodec_close() in uninit() if avcodec_open() failed.
Calling avcodec_close() on a non-open codec context causes a crash
with recent Libav versions.
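A hedged sketch of the guard in uninit() (the flag name is
illustrative):

    if (ctx->avctx_opened)      // set only after avcodec_open2() succeeded
        avcodec_close(avctx);
    av_freep(&avctx);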
Remove tests that are no longer necessary from the hrseek code. As a
result, each field produced by vo_vdpau's framerate-doubling
deinterlace modes is now considered a possible seek target.
Adjust the scheduled time until next frame when changing playback
speed (only affects behavior without audio). The main case where this
makes a difference is when it would take a noticeably long time to
switch frames with the previous speed and you switch to a faster
speed.
Remove code refreshing window contents after events such as resize
from vo_vdpau, vo_gl and vo_xv. Instead have them simply set a flag
indicating that a refresh is needed, and have the player core perform
that refresh by doing an OSD redraw. Also add support for updating the
OSD contents over existing frames during slow-but-not-paused playback.
The VOs now also request a refresh if parameters affecting the picture
change (equalizer settings, colormatrix, VDPAU deinterlacing setting).
Even previously the picture was typically redrawn with the new
settings while paused because new OSD messages associated with setting
changes triggered a redraw, but this did not happen if OSD was turned
off.
A minor imperfection is that window system events can now trigger a
single one-frame step forward with vo_xv if playback was paused before
vo_xv had made a copy of the current image. This could be
fixed but I think it's not important enough to bother.
Previously the core sent VFCTRL_REDRAW_OSD to change OSD contents over
the current frame. Change this to VFCTRL_REDRAW_FRAME followed by
normal EOSD and OSD drawing calls, then vo_flip_page(). The new
version supports changing EOSD contents for libass-rendered subtitles
and simplifies the redraw support code needed per VO. vo_xv doesn't
support EOSD changes because it relies on vf_ass to render EOSD
contents earlier in the filter chain.
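A hedged sketch of the core-side sequence; VFCTRL_REDRAW_FRAME is from
this change, while the drawing calls and their arguments are simplified
placeholders:

    if (vf->control(vf, VFCTRL_REDRAW_FRAME, NULL) == CONTROL_TRUE) {
        // The current frame has been re-rendered by the filter/VO chain;
        // draw EOSD and OSD over it as for a normal frame, then show it.
        vf->control(vf, VFCTRL_DRAW_EOSD, eosd_images);   // placeholder args
        vf->control(vf, VFCTRL_DRAW_OSD, osd);            // placeholder args
        vo_flip_page(vo);
    }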
vo_xv logic is additionally simplified because the previous commit
removed the need to track the status of current and next images
separately (now each frame is guaranteed to become "visible" soon
after we receive it as "next", with no VO code running in the interval
between).
Separate passing a new frame to VOs using the new API into two steps.
The first, vo_draw_image(), happens after a new frame is available
from the filter chain. In contrast to the old behavior, the frame is
not actually rendered yet at this point (though any slice draw calls
may already have reached the VO before this). The second step,
vo_new_frame_imminent(), happens when we're close enough to the
display time of the new frame that we'll commit to flipping it as the
next action and will not change the OSD over the previous frame any
more.
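The resulting call sequence in the playloop looks roughly like this
(function signatures are assumptions based on the description above):

    vo_draw_image(vo, mpi);        // step 1: hand the new frame to the VO,
                                   //         nothing is rendered/shown yet
    /* ... OSD may still be redrawn over the previous frame here ... */
    vo_new_frame_imminent(vo);     // step 2: commit to flipping next
    vo_flip_page(vo);              // display the new frame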
This new behavior fixes a previous problem with vo_vdpau and vo_gl in
the situation where the player is paused after decoding a new frame
but before flipping it; previously changing OSD in that state would
switch to the new frame as a side effect. It would also allow an easy
way to fix extra output files produced with something like "--vo=png
--frames=1" with precise seeking, but this is not done yet.
The code now relies on a new mp_image from the filter chain staying
valid even after the vf_vo put_image() call providing it returns. In
other words decoders/filters must not deallocate or otherwise
invalidate their output frame between passing it forward and returning
from the decode/filter call.
Add native Cocoa code to display an OpenGL window. Some of the code is
based on the OpenGL parts of vo_corevideo but I took the time to remove
old code based on Carbon.
There is autodetection in the configure script, but you can use
--enable-cocoa or --disable-cocoa to force it on or off.
When interpreting a key event, use the "charactersIgnoringModifiers"
method of the event in order to extract Alt+key combinations while
keeping the normal meaning of "key". When the right alt modifier is
pressed use the "characters" method to allow AltGr behavior to be used
to generate different characters.
The ARB shader code generated at the end of the shaders for scaling mode 4
and 5 was something like:
MAD yuv.g, b.r, {0.5}, a.r;
This appears to be semantically equivalent to:
MAD yuv.g, b.rrrr, {0.5, 0, 0, 0}, a.rrrr;
This has the consequence that the result register, yuv.g, will not contain
the value computed by the scale filter, but a.r. a.r is the unchanged
value sampled from the normal texture coordinates, so the filter did
effectively nothing and behaved as if cscale=0 was specified. The basic
mistake here is that yuv.g does not specify a single register, but it
specifies the full vector register yuv, with writing enabled on the g
channel. This means yuv.g will be assigned the g channel of the result
vector computed by the MAD instruction.
The GL_LUMINANCE16 texture format had only 8 bit precision on Mesa
based drivers. This caused heavy degradation of the image when playing
formats with more than 8 bits per pixel, such as 10 bit h264. Use
GL_R16 instead, which at least Mesa and Nvidia drivers actually
implement as 16 bit textures. Since sampling from this texture format
doesn't return anything meaningful in the other color components
(unlike luminance textures), the shader code has to be slightly
changed.
GL_R16 requires the GL_ARB_texture_rg extension. Check for it, and fall
back to the old texture format if it's not available.
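A hedged sketch of the format selection (have_texture_rg stands for the
result of the extension check):

    GLint  internal = have_texture_rg ? GL_R16 : GL_LUMINANCE16;
    GLenum format   = have_texture_rg ? GL_RED : GL_LUMINANCE;
    glTexImage2D(GL_TEXTURE_2D, 0, internal, tex_w, tex_h, 0,
                 format, GL_UNSIGNED_SHORT, NULL);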
The low precision of the GL_LUMINANCE16 format has just been fixed in
upstream Mesa, but it'll take a while before that fix is available in
distros.
The shader code was generated from very long strings with lots of
format specifiers with snprintf calls. It was almost impossible to
quickly tell what variables were inserted where in the shader. Make
this more readable by implementing a kind of simple variable
substitution, which allows replacing the format specifiers in the code
templates with variable names.
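An illustrative sketch of the substitution idea only (not the actual
implementation): "$name" tokens in a template are replaced from a small
name/value table before the program text is assembled, so a template
line such as "MAD yuv.$out, $src, {0.5}, a.$out;" keeps the variable
names visible.

    #include <stddef.h>
    #include <string.h>

    struct sh_var { const char *name; const char *value; };

    static void expand_template(char *out, size_t out_size, const char *tmpl,
                                const struct sh_var *vars)
    {
        size_t len = 0;
        while (*tmpl && len + 1 < out_size) {
            if (*tmpl == '$') {
                const struct sh_var *v;
                for (v = vars; v->name; v++) {
                    size_t n = strlen(v->name);
                    if (strncmp(tmpl + 1, v->name, n) == 0) {
                        size_t vn = strlen(v->value);
                        if (vn > out_size - 1 - len)
                            vn = out_size - 1 - len;
                        memcpy(out + len, v->value, vn);   // insert the value
                        len += vn;
                        tmpl += 1 + n;                     // skip "$name"
                        break;
                    }
                }
                if (v->name)
                    continue;
            }
            out[len++] = *tmpl++;                          // copy literal char
        }
        out[len] = '\0';
    }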