Make mixer support setting the mute attribute at audio driver level,
if one exists separately from volume. As of this commit, no libao2
driver exposes such an attribute yet; that will be added in later
commits.
Since the mute status can now be set externally, it's no longer
completely obvious when the player should automatically disable mute
when uninitializing an audio output. The implemented behavior is to
turn mute off at uninitialization if we turned it on and haven't
noticed it turn off (by external means) since.
The player tried to disable mute before exiting, so that if mute is
emulated by setting volume to 0 and the volume setting is a
system-global one, we don't leave it at 0. However, the logic doing
this at process exit was flawed, as volume settings are handled by
audio output instances and the audio output that set the mute state
may have been closed earlier. Trying to write reliably working logic
that restores volume at exit only would be tricky, so change the code
to always unmute an audio driver before closing it and restore mute
status if one is opened again later.
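A minimal sketch of this bookkeeping, assuming hypothetical struct
fields and ao_* wrappers rather than the real mixer interface:

    #include <stdbool.h>

    struct ao;                                   /* opaque audio output */
    bool ao_get_mute(struct ao *ao);             /* assumed wrappers around */
    void ao_set_mute(struct ao *ao, bool mute);  /* the driver mute control */

    struct mixer {
        struct ao *ao;
        bool muted;        /* mute state to restore when an AO is reopened */
        bool muted_by_us;  /* we set mute and haven't seen it off since */
    };

    static void mixer_setmute(struct mixer *m, bool mute)
    {
        ao_set_mute(m->ao, mute);
        m->muted = mute;
        m->muted_by_us = mute;
    }

    /* Before closing the AO: don't leave a possibly system-global mute
     * flag behind if we were the ones who turned it on. */
    static void mixer_uninit(struct mixer *m)
    {
        if (m->muted_by_us && ao_get_mute(m->ao))
            ao_set_mute(m->ao, false);
        m->muted_by_us = false;
    }

    /* After an AO is opened again later: bring the mute state back. */
    static void mixer_reinit(struct mixer *m)
    {
        if (m->muted)
            mixer_setmute(m, true);
    }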
Restore the audio balance setting when the audio chain is
reinitialized (also after switching to another file).
Also add a note about the balance code being seriously buggy.
MPlayer volume control was originally implemented with the assumption
that it controls a system-wide volume setting which keeps its value
even if a process closes and reopens the audio device. However, this
is not actually true for --softvol mode or some audio output APIs that
only consider volume as a per-client setting for software mixing. This
could have annoying results, as the volume would be reset to a default
value if the AO was closed and reopened, for example when moving to a
new file or crossing ordered chapter boundaries. Add code to set the
previous volume again after audio reinitialization if the current
audio chain is known to behave this way (softvol active or the AO
driver is known to not keep persistent volume externally).
This also avoids an inconsistency with the mute flag. The frontend
assumed the mute status is persistent across file changes, but it
could be similarly lost.
The audio drivers that are assumed to not keep persistent volume are:
coreaudio, dsound, esd, nas, openal, sdl. None of these changes have
been tested. I'm guessing that ESD and NAS do per-connection
non-persistent volume settings.
Partially based on code by wm4.
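The restore step after reinitialization could look roughly like the
following; field and helper names are made up for illustration:

    #include <stdbool.h>

    struct mixer {
        bool softvol;          /* mixing done in the filter chain */
        bool ao_keeps_volume;  /* false for e.g. esd, nas, sdl */
        float saved_l, saved_r;
        float balance;
    };

    void mixer_setvolume(struct mixer *m, float l, float r);
    void mixer_setbalance(struct mixer *m, float bal);

    /* Called after the audio chain is (re)built, also on file changes. */
    static void mixer_restore(struct mixer *m)
    {
        /* Volume survives a reopen only if the AO stores it externally
         * and we are not doing the mixing ourselves. */
        bool persistent = !m->softvol && m->ao_keeps_volume;
        if (!persistent)
            mixer_setvolume(m, m->saved_l, m->saved_r);
        if (m->balance != 0)
            mixer_setbalance(m, m->balance);
    }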
The volume filter was automatically inserted if setting AO volume
failed. Remove that logic, and instead enable softvol mode fully if
querying current volume (which will happen before any set attempts)
fails. Fully switching to softvol mode is more robust, and any case
where the behavior would differ (i.e. where querying and setting don't
either both always work or both always fail) would have been buggy.
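In outline (the control and return-value names follow libao2; the
wrapper and scaffolding are assumptions):

    #include <stdbool.h>

    struct ao;
    enum aocontrol { AOCONTROL_GET_VOLUME };            /* subset */
    typedef struct ao_control_vol { float left, right; } ao_control_vol_t;
    #define CONTROL_OK 1
    int ao_control(struct ao *ao, enum aocontrol cmd, void *arg);

    /* Probe once, before any set attempt; on failure go fully softvol. */
    static void probe_ao_volume(struct ao *ao, bool *softvol)
    {
        ao_control_vol_t vol;
        if (ao_control(ao, AOCONTROL_GET_VOLUME, &vol) != CONTROL_OK)
            *softvol = true;
    }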
Current volume was always queried from the audio output driver (or
filter in case of --softvol). The only case where it was stored on
mixer level was that when turning off mute, volume was set to the
value it had before mute was activated. Change the mixer code to
always store the current target volume internally. It still checks for
significant changes from external sources and resets the internal
value in that case.
The main functionality changes are:
Volume will now be kept separately from mute status. Increasing or
decreasing volume will now change it relative to the original value
before mute, even if mute is implemented by setting AO level volume to
0. Volume changes no longer automatically disable mute. The exception
is relative changes up (like the volume increase key in default
keybindings); that's the only case which still disables mute.
Keeping the value internally avoids problems with granularity of
possible volume values supported by the AO. Increase/decrease keys could
work asymmetrically or, when specifying a smaller-than-default
--volstep, even fail completely. In one case occurring in practice, if
the AO only supports changing volume in steps of about 2 and rounds
down the requested volume, then volume down key would decrease by 4
but volume up would increase by 2 (previous volume plus or minus the
default change of 3, rounded down to a multiple of 2). Now, the
internal value will keep full precision.
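A sketch with hypothetical names: the stored float keeps full
precision, and the AO only ever sees whatever rounded value it can
represent:

    #include <stdbool.h>

    struct ao;
    void ao_set_volume(struct ao *ao, float vol);  /* assumed wrapper */

    struct mixer {
        struct ao *ao;
        float volume;  /* target volume with full precision, 0..100 */
        bool muted;
    };

    static float clampf(float v, float lo, float hi)
    {
        return v < lo ? lo : v > hi ? hi : v;
    }

    static void mixer_change_volume(struct mixer *m, float step)
    {
        /* Relative changes apply to the stored value, so repeated +3/-3
         * steps stay symmetric even if the AO rounds to multiples of 2. */
        m->volume = clampf(m->volume + step, 0, 100);
        if (step > 0)
            m->muted = false;  /* only a relative change up disables mute */
        if (!m->muted)
            ao_set_volume(m->ao, m->volume);
    }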
Change the audio driver control() command argument from "int" to "enum
aocontrol". Remove unused control types (SET_DEVICE, GET_DEVICE,
QUERY_FORMAT, SET_PLUGIN_DRIVER, SET_PLUGIN_LIST). The QUERY_FORMAT
one looks like there's a possibility such functionality could be
useful in the future, but as ao_oss was the only driver to have an
actual implementation of it, the current code wasn't worth keeping.
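After the change, the interface looks roughly like this (the exact
remaining control set may differ):

    struct ao;

    enum aocontrol {
        AOCONTROL_GET_VOLUME,
        AOCONTROL_SET_VOLUME,
        AOCONTROL_GET_MUTE,    /* driver-level mute, where available */
        AOCONTROL_SET_MUTE,
    };

    /* driver entry point, previously "int control(int cmd, void *arg)" */
    int (*control)(struct ao *ao, enum aocontrol cmd, void *arg);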
Stop trying to read terminal input if a read attempt returns EOF. The
most important case where this matters is when someone runs the player
with stdin redirected from /dev/null and without specifying
--no-consolecontrols. This used to cause 100% CPU load while paused,
as select() would continuously trigger on stdin (the need for
--no-consolecontrols was not apparent to people with older mplayer
versions, as input reading was less efficient and latencies like
hardcoded sleeps kept CPU use well below 100%). Now this will only
cause a "Dead key input" error message.
Make the code read current real time again after drawing OSD. This
ensures time taken in OSD drawing is properly deducted from the
duration of the following sleep. The main practical effect is to avoid
the A-V field on the status line staying at a value a couple of
milliseconds above 0 (depending on VO).
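Schematically, with simplified names (the real loop is more involved):

    #include <unistd.h>

    unsigned int GetTimerMS(void);  /* the player's millisecond timer */
    void draw_osd(void);            /* stand-in for the VO OSD call */

    static void sleep_until_next_frame(int frame_time_ms)
    {
        unsigned int t0 = GetTimerMS();
        draw_osd();                       /* may take a few milliseconds */
        unsigned int now = GetTimerMS();  /* re-read after drawing */
        int remaining = frame_time_ms - (int)(now - t0);
        if (remaining > 0)
            usleep(remaining * 1000);     /* OSD time deducted from sleep */
    }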
Fix a missing check that could sometimes result in video frames being
shown after specified end pts (end of timeline segment or --endpos).
Fix mistaken video EOF detection after aspect change in video stream,
when there is no current valid visible frame but the next frame is
already buffered in VO.
pa_stream_flush() seems to work pretty badly in general. The visible
symptoms included at least old audio continuing for a significant time
after the call, and bogus latency reporting causing temporary video
freezes after a seek. Add some hacks to work around these problems.
The result seems to work most of the time on my machine at least...
For ao_pulse, the current latency is not a good indicator of how soon
the AO requires new data to avoid underflow. Add an internal pipe that
can be used to wake up the input loop from select(), and make the
pulseaudio main loop (which runs in a separate thread) use this
mechanism when pulse requests more data. The wakeup signal currently
contains no information about the reason for the wakeup, but audio
buffers are always filled when the event loop wakes up.
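This is the classic self-pipe pattern. A compressed sketch, with error
handling omitted; the callback registration is the normal libpulse API:

    #include <unistd.h>
    #include <pulse/pulseaudio.h>

    static int wakeup_pipe[2];  /* [0] watched by select(), [1] written here */

    /* Runs on the pulseaudio mainloop thread whenever the server wants
     * more data: just poke the player's input loop awake. */
    static void stream_request_cb(pa_stream *s, size_t nbytes, void *userdata)
    {
        (void)s; (void)nbytes; (void)userdata;
        (void)write(wakeup_pipe[1], &(char){0}, 1);
    }

    static void setup_wakeup(pa_stream *stream)
    {
        (void)pipe(wakeup_pipe);
        pa_stream_set_write_callback(stream, stream_request_cb, NULL);
        /* The input loop adds wakeup_pipe[0] to its select() set; when it
         * becomes readable, it drains the pipe and refills audio buffers. */
    }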
Also, request a latency of 1 second from the Pulseaudio server. The
default is normally significantly lower. We don't need low latency,
while higher latency helps prevent underflows and reduces the need for
wakeups.
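With libpulse, the request goes through pa_buffer_attr, roughly as
below; (uint32_t)-1 leaves a field at the server default, and the
flag/volume arguments are left empty in this sketch:

    #include <stdint.h>
    #include <pulse/pulseaudio.h>

    static void connect_stream(pa_stream *stream, const pa_sample_spec *ss)
    {
        pa_buffer_attr attr = {
            .maxlength = (uint32_t)-1,                  /* server default */
            .tlength   = pa_usec_to_bytes(1000000, ss), /* ~1 s of audio */
            .prebuf    = (uint32_t)-1,
            .minreq    = (uint32_t)-1,
            .fragsize  = (uint32_t)-1,
        };
        pa_stream_connect_playback(stream, NULL, &attr, 0, NULL, NULL);
    }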
If the user moved the window to another screen, fullscreen mode would
still use the original screen. Fix to use the screen the window is
currently on (unless overridden by --xineramascreen).
Change the macosx_finder_args function so that when mplayer2 is
invoked from the Finder in a Mac application bundle, it redirects the
output to ~/Library/Logs/mplayer2.log instead of cluttering the global
system.log.
This doesn't affect terminal use which keeps writing to stdout and
stderr.
The gl video output is faster and has more features than corevideo, so
it should be preferred on Mac OS X.
This doesn't affect GUI compatibility, because GUIs specify the
corevideo video output along with the suboptions for the shared buffer
name to mmap in.
The recommended way to get function pointers to the functions in the
OpenGL library is through dlopen/dlsym/dlclose. This causes problems
in the Cocoa OpenGL backend when -lGL (X11's OpenGL library) is linked
to the binary together with -framework OpenGL.
The linked OpenGL symbols are always from -lGL, causing all the
function pointers to point to null when getFunctions is called against
a Cocoa OpenGL context.
For this reason change the configure autodetection code to disable
the vo_gl X11 backend when cocoa is active.
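For reference, the dlopen/dlsym route mentioned above resolves entry
points against the OpenGL framework explicitly instead of trusting
link-time binding; a sketch using the standard framework path:

    #include <dlfcn.h>

    static void *get_gl_proc(const char *name)
    {
        /* Resolve against the OpenGL framework explicitly instead of
         * trusting whatever -lGL bound into the binary at link time. */
        static void *handle;
        if (!handle)
            handle = dlopen("/System/Library/Frameworks/OpenGL.framework/OpenGL",
                            RTLD_LAZY | RTLD_LOCAL);
        return handle ? dlsym(handle, name) : NULL;
    }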
This video output is not useful anymore. It is based on Carbon to draw
the mplayer window, and Carbon has been deprecated by Apple since 10.5.
The upcoming 10.8 OSX release should deprecate most of Carbon, so it
doesn't make sense to keep vo_quartz in the codebase when there are
modern and better alternatives (vo_gl and vo_corevideo).
macosx_finder_args was using Carbon and wasn't usable any longer on
modern versions of Mac OS X. This functionality is very useful for
embedding mplayer in a Mac application bundle.
When using application bundles, the operating system will call the
main function with only one argument that identifies the process
serial number (this is an additional process identifier in OS X,
distinct from the pid). File open events are then dispatched to the
through events that must be handled accordingly.
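Detecting such a launch boils down to a string check, since the Finder
passes a single argument of the form "-psn_..."; a sketch:

    #include <string.h>
    #include <stdbool.h>

    /* Finder passes something like "-psn_0_123456" as the sole argument. */
    static bool started_from_finder(int argc, char **argv)
    {
        return argc == 2 && strncmp(argv[1], "-psn_", 5) == 0;
    }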
The Cocoa framework generates only a NS*MouseDown event when handling
the second click of a double click (no NS*MouseUp). In that case, put a
mouse up key into mplayer2's input FIFO when processing the MouseDown
Cocoa event.
Change the window to accept mouse drag events not only on the title
bar, but also on the rest of the window surface; this includes the
video area.
It looks like the changing of the window mask resets the behaviour
specified in the delegate method, probably due to some strange
interaction with NSBorderlessWindow. For this reason call
-setPresentationOptions in the -fullscreen method to remind Cocoa of the
behaviour we want.
Add option --cursor-autohide-delay to control the number of milliseconds
with no user interaction before the mouse cursor is hidden.
There are two negative values with useful special meanings:
* A value of -1 prevents the cursor from hiding (useful for users
with multiple displays).
* A value of -2 prevents the cursor from showing upon activity.
The default is 1 second to keep the behaviour consistent with the
previous X11 backend implementation.
Remove the vo_mouse_autohide field as it was always true.
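Example usage (the value is in milliseconds):

    mplayer --cursor-autohide-delay=5000 file.mkv    (hide after 5 s idle)
    mplayer --cursor-autohide-delay=-1 file.mkv      (never hide)
    mplayer --cursor-autohide-delay=-2 file.mkv      (never show on activity)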
There were some slight differences between what input.conf mapped, and
what was in input.c def_cmd_binds[]. Make them match.
Add some minor documentation improvements in input.conf.
Also remove double comments ('##'), because they were confusing.
At least on some keyboards, the key between '0' and 'Enter' on the
key pad is mapped to KP_Separator. Since X11 VOs accept unicode
input, the mplayer keycode this key generates depended on the numlock
state, and with numlock enabled this mapped to an ASCII character.
This is probably not what the user wanted, since two physical keys
will always map to the same key code.
Map it to KP_DEC.
This change allows using non-ASCII keys with X11. These keys were
ignored before.
Technically, this creates an invisible, non-interactive input method
context. If creation fails, the code falls back to the old method, which
allows a subset of ASCII only.
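With Xlib this is along the following lines (error handling trimmed):

    #include <X11/Xlib.h>

    static XIC create_input_context(Display *dpy, Window win)
    {
        /* An invisible, non-interactive input method context; key events
         * are then translated with Xutf8LookupString. */
        XIM xim = XOpenIM(dpy, NULL, NULL, NULL);
        if (!xim)
            return NULL;  /* fall back to the old ASCII-subset path */
        return XCreateIC(xim,
                         XNInputStyle, XIMPreeditNothing | XIMStatusNothing,
                         XNClientWindow, win,
                         NULL);
    }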
This assumes the terminal uses UTF-8. If invalid UTF-8 is encountered (for
example because the terminal uses a legacy encoding), the code falls back
to the old method and feeds each byte as key code to the input code.
In theory, UTF-8 input could randomly fail, because the code in getch2.c
doesn't try to fill the input buffer correctly with input sequences
longer than a byte. This is a problem with the design of the existing
code.
This moves all key codes above the highest valid unicode code point
(which is 0x10FFFF). All key codes below MP_KEY_BASE now directly map
to unicode (KEY_ENTER is 13, carriage return). Configuration files
(input.conf) can contain unicode characters in UTF-8 to map non-ASCII
characters/keys.
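Schematically (the exact constant value is an internal detail; this
sketch assumes 1 << 21, the first power of two above U+10FFFF):

    /* Everything below MP_KEY_BASE is a unicode code point; special keys
     * (cursor and function keys etc.) are allocated above it. */
    #define MP_KEY_BASE (1 << 21)  /* 0x200000, above U+10FFFF */
    #define KEY_ENTER 13           /* plain carriage return */

    static int is_unicode_key(int code)
    {
        return code < MP_KEY_BASE;
    }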
This shouldn't change anything user visible, except that "direct key
codes" (as used in input.conf) will change their meaning.
Parts of the bstr functions taken from libavutil's GET_UTF8 and
slightly modified.
Setting the WM_NAME/WM_ICON_NAME window properties didn't always work:
apparently there are some characters that can't be represented in the X
STRING or COMPOUND_TEXT encodings, such as U+2013 EN DASH. The function
Xutf8TextListToTextProperty partially converts the string, and returns
a value different from 'Success'. This means vo_x11_set_property_string
didn't set these window properties.
On most modern window managers, this is not a problem, since these use
the _NET_WM_NAME/_NET_ICON_NAME and the UTF8_STRING encoding. Some older
WMs like IceWM don't read these, and the window title remains blank.
It's not clear what exactly we should do in this situation, but fix it
by setting the WM_NAME/WM_ICON_NAME properties as UTF8_STRING. This
violates the ICCCM, but at least IceWM seems to handle this well.
See also:
http://lists.freedesktop.org/archives/xorg/2004-September/003391.html
http://lists.freedesktop.org/archives/xorg/2004-September/003395.html
Timeline handling converted the pts values from demuxed subtitles to
timeline scale. Change the code to do most subtitle handling in
original subtitle source pts, and instead convert current playback
timeline pts to those units when deciding which subtitle to show.
The main functionality changes are that now demuxed subtitles which
overlap chapter boundaries are handled correctly (at least for libass
subtitles), and external subtitles are assumed to use the same pts
scale as the current source (this needs improvements later).
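The conversion itself is a per-part offset; a sketch with hypothetical
field names:

    struct timeline_part {
        double start;         /* position of this part on the timeline */
        double source_start;  /* corresponding pts in the source file */
    };

    /* Map current playback (timeline) pts into the subtitle source's own
     * time scale when deciding which subtitle to show. */
    static double timeline_to_source_pts(struct timeline_part *p, double pts)
    {
        return pts - p->start + p->source_start;
    }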
Before, a video subtitle that had a duration continuing past the end
of the chapter would continue to be shown for the original duration,
even if the chapter ended and playback switched to a position in the
source where the subtitle shouldn't exist. Now, the subtitle will
correctly end.
Before, external subtitle files were interpreted as specifying pts
values in timeline scale. Now, they're interpreted as specifying pts
values in source file time scale, for _every_ source file. This is
probably more likely to be what the user wants for the "main" source
file in case there is one, but almost certainly not quite right for
multiple source files where the same subs could be shown over
different scenes. If the user wants them to match some main source
file, it's probably still better to have incorrect extra subs for
video from some files than to have every subtitle appearing at the
wrong time. The new code makes it easier to change the interpretation
of the subtitle times, and some configurability should be added in
the future.
Direct rendering support in vo_xv (used with --dr) had at least two
problems. First, OSD drawing modified the buffers; this meant that
if the buffers were used for reference frames there would be video
corruption. I don't think "performance optimization" with this level
of drawbacks is appropriate with today's machines any more. Direct
rendering could still be used for non-reference frames, but there's a
second problem: with direct rendering enabled the same buffer is used
for every frame, and with the XShm extension that is used by default
there's no checking that the previous frame has been completely
uploaded to the graphics card before it's overwritten by the next one.
This could be fixed, but as Xv is becoming obsolete I don't see it as
a priority to improve it. Thus I'm simply removing the parts of
functionality that were more likely to break things than improve
playback.
When switching to a timeline part from another file, decoders were
reinitialized after doing the demuxer-level seek. This is necessary
for audio because some decoders read from the demuxer stream during
initialization and the previous stream position before seek could have
been at EOF. However, this initialization sequence could lose first
subtitles or first part of audio.
The problem for subtitles was that the seek itself or audio
initialization could already have buffered subtitle packets from the
new position, and the way subtitles are reinitialized flushes packet
buffers. Thus early subtitles could be lost (even if they were demuxed
- unfortunately demuxers may not know about still active subtitles
earlier in the file, but that's another issue). Fix this by moving
subtitle and video reinitialization before the demuxer seek; they
don't have the problems which prevent that for audio.
Audio initialization can already decode and buffer some output.
However, the seek_reset() call done last would then throw away this
buffered output. Work around this by adding an extra flag to
seek_reset().
Restructure parts of the code in the main play loop. The main
functionality difference is that if a video track ends first, now
audio will continue to be played until it ends too.
Now the process also wakes up less often if there's no need to update
video or audio. This will reduce unnecessary wakeups especially when
paused, but may make handling of input events laggier when fd-based
notifications are not supported (like most input on Windows).
Change the terminal status line to show "???" instead of a huge
negative number if audio or video pts is missing (there was a partial
workaround for audio before, but not video or A-V difference).
Modify the YUV->RGB conversion matrix to take into account the
difference between the same color value being x/255 in an 8-bit texture
and x*256/65535 in a 16-bit texture (actually things are stored as
x*4/65535 for 10-bit color, but that can be ignored here). This 0.4 %
difference in the shader float value could make shades of gray in
10-bit (or generally more than 8 bit) YUV produce RGB values with
green slightly higher than red/blue.
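The arithmetic behind the fix: sampling 8-bit data x from an 8-bit
texture yields x/255, while the same data stored shifted into a 16-bit
texture samples as x*256/65535; the ratio between the two is 257/256,
the ~0.4% mentioned above. As a constant:

    /* (x*256/65535) * (65535/65280) == x/255, and 65535/65280 == 257/256,
     * folded into the conversion matrix coefficients. */
    static const float texture16_fix = 65535.0f / 65280.0f;  /* ~1.0039 */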
Latest liblivemedia version disables APIs we need. The code still
exists in the library and the changelog says the old interface can be
enabled with "#define RTSPCLIENT_SYNCHRONOUS_INTERFACE". However, the
code on the library side is disabled by default too, and seems to be
disabled in distro packages, so defining that in the player does not
help (just delays the failure until link time). It's possible the
distro packages will be changed to enable this, but since dropping
live555 support is desirable anyway, change configure to disable
support by default at least for now.
The live555 code is the only part of the source that's in C++.
Including C headers in code compiled as C++ has caused issues at
times, so deleting this code would have a maintenance benefit.
Reportedly the rtsp support in Libav has improved, so there should
be less need for live555.
Remove the old EDL implementation that was activated with the --edl
option. It is mostly redundant and inferior compared to the newer
demux_edl support, though currently there's no support for using the
same EDL files with the new implementation and the mute functionality
of the old implementation is not supported. The main reason to remove
the old implementation at this point is that the mute functionality
would conflict with following audio volume handling changes, and
working on the old code would be a wasted effort in the long run as at
some point it would be removed anyway.
The --edlout functionality is kept for now, even though after this
commit there is no code that could directly read its output.
Changing the volume when softvol is enabled or if the audio output driver
doesn't support volume controls causes insertion of the "volume" filter.
This fails with AC3. Since the filter wasn't removed after that, and the
filter chain was in a bogus state, random crashes occurred past this
point.
Fix it by reinitializing the filter chain completely on failure. Volume
controls simply won't work. (This can't be fixed, because AC3 is a
compressed format, and would require additional decoding/encoding passes
in order to support arbitrary volume changes.)
This also affects balance controls.
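The recovery amounts to rebuilding the chain when the filter insert
fails. The af_* names follow the existing filter API, with signatures
approximated:

    struct af_stream;
    struct af_instance *af_add(struct af_stream *s, char *name);
    void af_uninit(struct af_stream *s);
    int af_init(struct af_stream *s);

    static int setup_volume_filter(struct af_stream *afs)
    {
        if (!af_add(afs, "volume")) {
            /* Compressed formats like AC3 reject the filter; rebuild the
             * chain so playback continues, just without volume control. */
            af_uninit(afs);
            return af_init(afs);
        }
        return 0;
    }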