This video output is not useful anymore. It uses Carbon to draw the
mplayer window, and Carbon has been deprecated by Apple since 10.5.
The upcoming 10.8 OSX release should deprecate most of Carbon, so it
doesn't make sense to keep vo_quartz in the codebase when there are
modern and better alternatives (vo_gl and vo_corevideo).
macosx_finder_args used Carbon and was no longer usable on modern
versions of Mac OS X. This feature is very useful for embedding
mplayer in a Mac application bundle.
When using application bundles, the operating system will call the
main function with a single argument that identifies the process
serial number (an additional process identifier on OS X, distinct
from the pid). File open events are then dispatched to the application
through events that must be handled accordingly.
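A minimal sketch of how such a launch can be detected, assuming the
usual "-psn_..." form of the serial number argument (the helper name
is made up):

    #include <string.h>

    // Finder launches bundles as: ./mplayer -psn_0_123456
    static int launched_from_finder(int argc, char **argv)
    {
        return argc == 2 && strncmp(argv[1], "-psn", 4) == 0;
    }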
The Cocoa framework generates only a NS*MouseDown event when handling
the second click of a double click (no NS*MouseUp). In that case, put
a mouse up key event in mplayer2's fifo while handling the MouseDown
Cocoa event.
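A rough sketch of the workaround in C; mplayer_put_key and the
constants stand in for the player's real input API and values:

    // Stand-ins for the real input API (values are made up):
    extern void mplayer_put_key(int code);
    enum { MOUSE_BTN0 = 0x200, MP_KEY_DOWN = 1 << 29 };

    static void handle_mouse_down(int click_count)
    {
        mplayer_put_key(MOUSE_BTN0 | MP_KEY_DOWN);
        // No NS*MouseUp arrives for the second click of a double
        // click, so synthesize the key-up half here.
        if (click_count % 2 == 0)
            mplayer_put_key(MOUSE_BTN0);
    }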
Change the window to accept mouse drag events not only on the title
bar, but also on the rest of the window surface; this includes the
video area.
It looks like changing the window mask resets the behaviour specified
in the delegate method, probably due to some strange interaction with
NSBorderlessWindow. For this reason, call -setPresentationOptions in
the -fullscreen method to remind Cocoa of the behaviour we want.
Add option --cursor-autohide-delay to control the number of milliseconds
with no user interaction before the mouse cursor is hidden.
There are two negative values with useful special meanings:
* A value of -1 prevents the cursor from hiding (useful for users
with multiple displays).
* A value of -2 prevents the cursor from showing upon activity.
The default is 1 second to keep the behaviour consistent with the
past X11 backend implementation.
Remove the vo_mouse_autohide field as it was always true.
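A small sketch of the intended semantics (the helper names are
illustrative, not the actual ones):

    extern void set_cursor_visible(int visible);  // illustrative

    static void update_cursor(int delay_ms, int idle_ms)
    {
        if (delay_ms == -2)
            set_cursor_visible(0);             // never show
        else if (delay_ms == -1)
            set_cursor_visible(1);             // never hide
        else
            set_cursor_visible(idle_ms < delay_ms);
    }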
There were some slight differences between what input.conf mapped, and
what was in input.c def_cmd_binds[]. Make them match.
Add some minor documentation improvements in input.cfg.
Also remove double comments ('##'), because they were confusing.
At least on some keyboards, the key between '0' and 'Enter' on the
key pad is mapped to KP_Separator. Since X11 VOs accept unicode
input, the mplayer keycode this key generated depended on the numlock
state, and with numlock enabled it mapped to an ASCII character. This
is probably not what the user wanted, since two physical keys would
then always map to the same key code.
Map it to KP_DEC.
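In the X11 keysym lookup this amounts to something like the following
sketch (KEY_KPDEC stands in for the player-side key code named above):

    #include <X11/Xlib.h>
    #include <X11/keysym.h>

    #define KEY_KPDEC 0x2000  /* stand-in for the player's key code */

    static int lookup_special_key(KeySym sym)
    {
        switch (sym) {
        case XK_KP_Separator:
            return KEY_KPDEC; // independent of numlock state
        default:
            return 0;         // no special mapping
        }
    }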
This change allows using non-ASCII keys with X11. These keys were
ignored before.
Technically, this creates an invisible, non-interactive input method
context. If creation fails, the code falls back to the old method, which
allows a subset of ASCII only.
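Roughly, the setup looks like this (a sketch; error handling omitted):

    #include <X11/Xlib.h>

    static XIC setup_input_context(Display *dpy, Window win)
    {
        XIM im = XOpenIM(dpy, NULL, NULL, NULL);
        if (!im)
            return NULL;  // caller falls back to XLookupString
        return XCreateIC(im, XNInputStyle,
                         XIMPreeditNothing | XIMStatusNothing,
                         XNClientWindow, win, NULL);
    }

    // In the KeyPress handler, Xutf8LookupString() then yields the
    // key's text as UTF-8 instead of an ASCII-only subset.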
This assumes the terminal uses UTF-8. If invalid UTF-8 is encountered (for
example because the terminal uses a legacy encoding), the code falls back
to the old method and feeds each byte as a key code to the input code.
In theory, UTF-8 input could randomly fail, because the code in getch2.c
doesn't try to fill the input buffer correctly with input sequences
longer than a byte. This is a problem with the design of the existing
code.
This moves all key codes above the highest valid unicode code point
(which is 0x10FFFF). All key codes below MP_KEY_BASE now directly map
to unicode (KEY_ENTER is 13, carriage return). Configuration files
(input.conf) can contain unicode characters in UTF-8 to map non-ASCII
characters/keys.
This shouldn't change anything user visible, except that "direct key
codes" (as used in input.conf) will change their meaning.
Parts of the bstr functions taken from libavutil's GET_UTF8 and
slightly modified.
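For reference, a minimal decoder in the spirit of GET_UTF8 (a sketch;
the real bstr helpers do stricter validation):

    #include <stdint.h>

    // Decode one code point; return bytes consumed or -1 on invalid
    // input (which triggers the byte-wise fallback described above).
    static int decode_utf8(const unsigned char *s, int len, uint32_t *cp)
    {
        if (len < 1)
            return -1;
        uint32_t c = s[0];
        int tail = c < 0x80 ? 0 : c < 0xC2 ? -1 : c < 0xE0 ? 1
                 : c < 0xF0 ? 2 : c < 0xF5 ? 3 : -1;
        if (tail < 0 || len < tail + 1)
            return -1;
        c &= 0x7F >> tail;
        for (int i = 1; i <= tail; i++) {
            if ((s[i] & 0xC0) != 0x80)
                return -1;
            c = (c << 6) | (s[i] & 0x3F);
        }
        *cp = c;
        return tail + 1;
    }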
Setting the WM_NAME/WM_ICON_NAME window properties didn't always work:
apparently there are some characters that can't be represented in the X
STRING or COMPOUND_TEXT encodings, such as U+2013 EN DASH. The function
Xutf8TextListToTextProperty partially converts the string, and returns
a value different from 'Success'. This means vo_x11_set_property_string
didn't set these window properties.
On most modern window managers, this is not a problem, since these use
the _NET_WM_NAME/_NET_ICON_NAME and the UTF8_STRING encoding. Some older
WMs like IceWM don't read these, and the window title remains blank.
It's not clear what exactly we should do in this situation, but fix it
by setting the WM_NAME/WM_ICON_NAME properties with type UTF8_STRING.
This violates the ICCCM, but at least IceWM seems to handle this well.
See also:
http://lists.freedesktop.org/archives/xorg/2004-September/003391.html
http://lists.freedesktop.org/archives/xorg/2004-September/003395.html
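A minimal sketch of the property change, using plain Xlib calls:

    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    static void set_title_utf8(Display *dpy, Window win, const char *title)
    {
        Atom utf8 = XInternAtom(dpy, "UTF8_STRING", False);
        // Type UTF8_STRING instead of STRING/COMPOUND_TEXT: not
        // ICCCM-conforming, but survives characters like U+2013.
        XChangeProperty(dpy, win, XA_WM_NAME, utf8, 8, PropModeReplace,
                        (const unsigned char *)title, strlen(title));
        XChangeProperty(dpy, win, XA_WM_ICON_NAME, utf8, 8, PropModeReplace,
                        (const unsigned char *)title, strlen(title));
    }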
Timeline handling converted the pts values from demuxed subtitles to
timeline scale. Change the code to do most subtitle handling in
original subtitle source pts, and instead convert current playback
timeline pts to those units when deciding which subtitle to show.
The main functionality changes are that now demuxed subtitles which
overlap chapter boundaries are handled correctly (at least for libass
subtitles), and external subtitles are assumed to use the same pts
scale as the current source (this needs improvements later).
Before, a video subtitle that had a duration continuing past the end
of the chapter would continue to be shown for the original duration,
even if the chapter ended and playback switched to a position in the
source where the subtitle shouldn't exist. Now, the subtitle will
correctly end.
Before, external subtitle files were interpreted as specifying pts
values in timeline scale. Now, they're interpreted as specifying pts
values in source file time scale, for _every_ source file. This is
probably more likely to be what the user wants for the "main" source
file in case there is one, but almost certainly not quite right for
multiple source files where the same subs could be shown over
different scenes. If the user wants them to match some main source
file, it's probably still better to have incorrect extra subs for
video from some files than to have every subtitle appearing at the
wrong time. The new code makes it easier to change the interpretation
of the subtitle times, and some configurability should be added in
the future.
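The core of the new approach is converting the current playback pts
into the subtitle source's units rather than converting subtitle pts
to timeline scale; schematically (the timeline_part layout here is
hypothetical):

    struct timeline_part {
        double start;        // where this part begins on the timeline
        double source_start; // matching pts inside the source file
    };

    // Decide which subtitle to show using source-file pts.
    static double playback_pts_to_source(const struct timeline_part *p,
                                         double playback_pts)
    {
        return playback_pts - p->start + p->source_start;
    }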
Direct rendering support in vo_xv (used with --dr) had at least two
problems. First, OSD drawing modified the buffers; this meant that
if the buffers were used for reference frames there would be video
corruption. I don't think "performance optimization" with this level
of drawbacks is appropriate with today's machines any more. Direct
rendering could still be used for non-reference frames, but there's a
second problem: with direct rendering enabled the same buffer is used
for every frame, and with the XShm extension that is used by default
there's no checking that the previous frame has been completely
uploaded to the graphics card before it's overwritten by the next one.
This could be fixed, but as Xv is becoming obsolete I don't see it as
a priority to improve it. Thus I'm simply removing the parts of
functionality that were more likely to break things than improve
playback.
When switching to a timeline part from another file, decoders were
reinitialized after doing the demuxer-level seek. This is necessary
for audio because some decoders read from the demuxer stream during
initialization and the previous stream position before seek could have
been at EOF. However, this initialization sequence could lose the
first subtitles or the first part of the audio.
The problem for subtitles was that the seek itself or audio
initialization could already have buffered subtitle packets from the
new position, and the way subtitles are reinitialized flushes packet
buffers. Thus early subtitles could be lost (even if they were demuxed
- unfortunately demuxers may not know about still active subtitles
earlier in the file, but that's another issue). Fix this by moving
subtitle and video reinitialization before the demuxer seek; they
don't have the problems which prevent that for audio.
Audio initialization can already decode and buffer some output.
However, the seek_reset() call done last would then throw away this
buffered output. Work around this by adding an extra flag to
seek_reset().
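Schematically, the part-switch sequence becomes something like this
(function names are illustrative, not the actual ones):

    static void switch_timeline_part(struct MPContext *mpctx, double target)
    {
        // Sub/video reinit flushes buffers, so do it before the seek
        // (and before audio init) can buffer packets from the new
        // position.
        reinit_subs(mpctx);
        reinit_video(mpctx);
        demuxer_seek(mpctx, target);
        reinit_audio(mpctx);       // may decode and buffer some output
        seek_reset(mpctx, false);  // new flag: keep that buffered output
    }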
Restructure parts of the code in the main play loop. The main
functionality difference is that if a video track ends first, now
audio will continue to be played until it ends too.
Now the process also wakes up less often if there's no need to update
video or audio. This will reduce unnecessary wakeups especially when
paused, but may make handling of input events laggier when fd-based
notifications are not supported (like most input on Windows).
Change the terminal status line to show "???" instead of a huge
negative number if audio or video pts is missing (there was a partial
workaround for audio before, but not video or A-V difference).
Modify the YUV->RGB conversion matrix to take into account the
difference between the same color value being x/255 in an 8-bit texture
and x*256/65535 in a 16-bit texture (actually things are stored as
x*4/65535 for 10-bit color, but that can be ignored here). This 0.4 %
difference in the shader float value could make shades of gray in
10-bit (or generally more than 8 bit) YUV produce RGB values with
green slightly higher than red/blue.
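As a worked check: since 255 * 257 = 65535, x/255 equals x*257/65535
exactly, while the 16-bit upload yields x*256/65535. The ratio is
256/257 ≈ 0.9961, the roughly 0.4 % error described above, so the
conversion needs an extra factor of 257/256 ≈ 1.0039 folded into the
matrix.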
Latest liblivemedia version disables APIs we need. The code still
exists in the library and the changelog says the old interface can be
enabled with "#define RTSPCLIENT_SYNCHRONOUS_INTERFACE". However, the
code on the library side is disabled by default too, and seems to be
disabled in distro packages, so defining that in the player does not
help (just delays the failure until link time). It's possible the
distro packages will be changed to enable this, but since dropping
live555 support is desirable anyway, change configure to disable
support by default at least for now.
The live555 code is the only part of the source that's in C++.
Including C headers in code compiled as C++ has caused issues at
times, so deleting this code would have a maintenance benefit.
Reportedly the rtsp support in Libav has improved, so there should
be less need for live555.
Remove the old EDL implementation that was activated with the --edl
option. It is mostly redundant and inferior compared to the newer
demux_edl support, though currently there's no support for using the
same EDL files with the new implementation and the mute functionality
of the old implementation is not supported. The main reason to remove
the old implementation at this point is that the mute functionality
would conflict with following audio volume handling changes, and
working on the old code would be a wasted effort in the long run as at
some point it would be removed anyway.
The --edlout functionality is kept for now, even though after this
commit there is no code that could directly read its output.
Changing the volume when softvol is enabled or if the audio output driver
doesn't support volume controls causes insertion of the "volume" filter.
This fails with AC3. Since the filter wasn't removed after that, and the
filter chain was in a bogus state, random crashes occurred past this
point.
Fix it by reinitializing the filter chain completely on failure. Volume
controls simply won't work. (This can't be fixed, because AC3 is a
compressed format, and would require additional decoding/encoding passes
in order to support arbitrary volume changes.)
This also affects balance controls.
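Schematically (a fragment; afs is the filter chain, and the af_*
insertion entry point here is illustrative):

    // After a failed insertion of the "volume" filter, rebuild the
    // whole chain instead of leaving it half-initialized:
    if (af_insert_volume(afs) < 0) {
        af_uninit(afs);
        af_init(afs);   // volume/balance controls simply won't work
    }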
Windows uses a legacy codepage for the runtime functions that accept
or return char * strings. Using UTF-8 as the codepage with setlocale()
is explicitly
forbidden.
Work around this by overriding the MSVCRT functions with wrapper
macros that assume UTF-8 and use "proper" API calls like _wopen etc.
to deal with unicode filenames. All code that uses standard functions
that take or return filenames must now include osdep/io.h. stat()
can't be overridden, because MinGW-w64 itself defines "stat" as a
macro. Change code to use mp_stat() instead.
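The wrapper idea in miniature (a sketch; mp_open and the fixed-size
buffer are simplifications, e.g. the O_CREAT mode argument is left
out):

    #include <windows.h>
    #include <fcntl.h>
    #include <io.h>

    static int mp_open(const char *path, int flags)
    {
        wchar_t wpath[MAX_PATH];
        if (!MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, MAX_PATH))
            return -1;
        return _wopen(wpath, flags);  // wide API gets the real name
    }
    #define open(...) mp_open(__VA_ARGS__)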
This is not perfectly clean, but still somewhat sane, and much better
than littering the rest of the mplayer code with MinGW specific hacks.
It's also a bit fragile, but that's actually little different from the
previous situation. Also, MinGW is unlikely to ever include a nice way
of dealing with this.
Some of the code, especially the dshow and windows codec loader parts,
is extremely hacky and likely full of bugs. The goal is merely getting
rid of warnings that could obscure more important warnings and actual
bugs, instead of fixing actual problems. This reduces the number of
warnings from over 500 to almost the same as when compiling on Linux.
Note that many problems stem from using the ancient wine-derived
windows headers. There are some differences from the "proper" windows
headers. Changing the code to compile with the proper headers would be
too much trouble, and it still has to work on Unix.
Some of the changes might actually break compilation on legacy MinGW,
but we don't support that anymore. Always use MinGW-w64, even when
compiling to 32 bit.
Fixes some warnings in the win32 loader code on Linux too.
MinGW maps the "printf" format string archetype to the non-standard
MSVCRT functions, even if __USE_MINGW_ANSI_STDIO is defined and set
to 1. We need "gnu_printf" so that the format string checks match what
vsnprintf and similar functions actually accept and produce correct
warnings.
Since "gnu_printf" isn't necessarily available on other GCC compatible
compilers (such as clang), do this only on MinGW.
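The resulting attribute selection looks roughly like this (the macro
name is an assumption):

    #ifdef __MINGW32__
    #define PRINTF_ATTRIBUTE(fmt, va) \
        __attribute__((format(gnu_printf, fmt, va)))
    #else
    #define PRINTF_ATTRIBUTE(fmt, va) \
        __attribute__((format(printf, fmt, va)))
    #endif

    // Example use: warn against C99 format strings, not MSVCRT's.
    void mp_msg(int mod, int lev, const char *format, ...)
        PRINTF_ATTRIBUTE(3, 4);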
This makes MinGW redirect certain stdio functions (such as the sprintf
family) from the MSVCRT libc to a standard compliant MinGW
implementation.
This fixes a crash in talloc.c when compiling mplayer with MinGW-w64.
The problem is most likely with talloc_vasprintf(), which calls
vsnprintf with a small buffer and checks its return value to find out
how much space the formatted string requires. Without this commit,
vsnprintf would always return -1, and then the code calls abort().
(lachs0r figured out this one.)
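The failing pattern, in essence (a sketch of the probe-then-allocate
idiom that talloc_vasprintf relies on):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    static char *vformat_alloc(const char *fmt, va_list ap)
    {
        va_list ap2;
        va_copy(ap2, ap);
        int len = vsnprintf(NULL, 0, fmt, ap2); // C99: needed length;
        va_end(ap2);                            // plain MSVCRT: -1
        if (len < 0)
            abort();
        char *buf = malloc(len + 1);
        if (buf)
            vsnprintf(buf, len + 1, fmt, ap);
        return buf;
    }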
The _UWIN define causes the mingw headers not to declare deprecated (on
Windows) function names such as open and mkdir. But the code uses these. I
have no idea why this used to work (if it even did), but the original
reason why it was defined seems to have vanished.
If --enable-cross-compile is specified, passing
--target=i686-w64-mingw32 for example will check if
i686-w64-mingw32-gcc can be used. This is only done if the compiler
isn't specified via --cc or the CC environment variable.
The same is done for some other build tools, such as pkg-config.
(Only the C compiler will try to use a fallback in this case.)
This didn't work very well when cross compiling from Linux to Windows:
it tries to execute an .exe file, which succeeds if wine is installed.
As a consequence it detects "no" as the result.
In general this won't work if emulation for the target architecture is
available. Remove it.
When the build wrapper repo scripts run configure they set a custom
PKG_CONFIG_PATH environment variable. Show the value of this in
config.log to make it easier to rerun configure with a tweaked version
of the same parameters. Also show CFLAGS if set, as it's likely to
break things.
Instead of opening avctx in preinit() and setting parameters later,
(re)open it in config() where parameters can be set first. This fixes
a failure to open the codec with new libavcodec versions that check
pix_fmt during avcodec_open2().
Remove "Please check mtrr settings at /proc/mtrr" and "NOTE: Win32
codec DLLs are not supported on your CPU" messages printed at the end
of a configure run. mtrr should be irrelevant on today's machines, and
the DLLs are a lot less important nowadays. Also remove mtrr detection
logic that was only used to decide whether or not to print that
message. Bizarrely, there were --enable-mtrr and --disable-mtrr
options for this too (with no effect except for the message).
vo_xv crashed if existing frames had been lost due to a config() call
in the middle of a file and vo_redraw_frame() was called. Add checks
to reject vo_redraw_frame() unless at least one frame has been flipped
after the last configuration change, so individual VOs do not have
to deal with this case.
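Schematically, in the common VO layer (a sketch; the field and
constant names are as assumed here):

    int vo_redraw_frame(struct vo *vo)
    {
        // Reject redraws until a frame has been flipped after the
        // most recent config() call.
        if (!vo->config_ok || !vo->hasframe)
            return -1;
        return vo->driver->control(vo, VOCTRL_REDRAW_FRAME, NULL);
    }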
libpostproc has been removed from Libav and the library now exists as
a separate project. Because it's not essential, separate it from the
Libav library check and allow compiling without it.
Add helper function pkg_config_add() that checks for the presence of a
package and also adds cflags/ldflags if it is found. Change existing
pkg-config-using feature tests to use that. Also change the freetype
test that used a separate libfreetype-config binary before; using
pkg-config instead helps cross-compiling. Drop other kinds of checks
(such as test compiles) from these tests. It's possible that this
could cause problems on some (broken) systems, but that can't be
verified without user testing.
demux_lavf was returning a static size value when libavformat queried
file size with AVSEEK_SIZE. Add code to query the stream for a
possibly changed value first. This at least improves seeking with
growing MPEG files; before, seeks would never go beyond the part of
the file that existed when the stream was first opened.
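Schematically, in the avio seek callback (a sketch; stream_control and
the STREAM_* names are the player's stream-layer API as assumed here):

    static int64_t mp_seek(void *opaque, int64_t pos, int whence)
    {
        stream_t *stream = opaque;
        if (whence == AVSEEK_SIZE) {
            uint64_t size;
            // Re-query so a growing file is seen at its current size.
            if (stream_control(stream, STREAM_CTRL_GET_SIZE, &size) !=
                    STREAM_OK)
                return -1;
            return size;
        }
        // ... normal seek handling ...
        return -1;
    }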
The terminal OSD line was written with mp_msg(MSGT_CPLAYER, ...) but
erased with printf(). This meant that disabling MSGT_CPLAYER messages
would prevent the terminal line from being printed, but a line
(probably unrelated) would still be cleared. Change the clearing code
to use mp_msg(MSGT_CPLAYER, ...) too.
Libavcodec started checking that avctx->pix_fmt is set when opening
an encoder. The existing code (originally from vf_screenshot.c) only
set it afterwards, which now made screenshots fail. Fix the code to
set parameters before calling avcodec_open2().
Also fix some minor things; this seems to make it work with other
encoders too, and could be used to add more libavcodec based image
writers. Also fix a memory leak (missing av_free(avctx)).
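The reordered setup, roughly (a sketch against the libavcodec API of
that era; surrounding details simplified):

    AVCodec *codec = avcodec_find_encoder(CODEC_ID_PNG);
    AVCodecContext *avctx = avcodec_alloc_context3(codec);
    avctx->width     = width;
    avctx->height    = height;
    avctx->pix_fmt   = PIX_FMT_RGB24;      // set *before* opening
    avctx->time_base = (AVRational){1, 25};
    if (avcodec_open2(avctx, codec, NULL) < 0) {
        av_free(avctx);                    // the leak fixed above
        return -1;
    }
    // ... encode and write the image ...
    avcodec_close(avctx);
    av_free(avctx);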