Add general code to separate the HTML-like attribute=value syntax used
in SRT font tags into attribute and value parts. This simplifies some
of the parsing code, makes detection of malformed input more robust,
and allows warning about unrecognized attributes.
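A minimal sketch of such a splitter (the function name and exact
quoting rules are illustrative, not the actual code added):

    #include <string.h>

    /* Hypothetical sketch: split one "attr=value" token in place.
     * Returns 0 on success, -1 on malformed input. */
    static int split_attribute(char *token, char **attr, char **value)
    {
        char *eq = strchr(token, '=');
        if (!eq || eq == token)
            return -1;              /* no '=' or empty attribute name */
        *eq = '\0';
        *attr = token;
        *value = eq + 1;
        /* Strip optional surrounding double quotes from the value. */
        size_t len = strlen(*value);
        if (len >= 2 && (*value)[0] == '"' && (*value)[len - 1] == '"') {
            (*value)[len - 1] = '\0';
            (*value)++;
        }
        return **value ? 0 : -1;    /* empty value is also malformed */
    }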
The AVFrame data structure may be extended at the end in new
binary-compatible libavcodec versions. Therefore applications should
not depend on the size of the structure. But screenshot.c declared an
"AVFrame pic;" on stack. Change the code to allocate an AVFrame with
avcodec_alloc_frame() instead. The frame is now stored in struct
screenshot_ctx (rather than reallocated each time).
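A sketch of the corrected pattern (struct layout abbreviated; field
and function names illustrative):

    #include <libavcodec/avcodec.h>

    struct screenshot_ctx {
        AVFrame *pic;   /* allocated once, reused for each screenshot */
        /* ... */
    };

    static void screenshot_init(struct screenshot_ctx *ctx)
    {
        /* avcodec_alloc_frame() allocates the frame inside libavcodec,
         * so the application never depends on sizeof(AVFrame), which
         * may grow in binary-compatible library versions. */
        ctx->pic = avcodec_alloc_frame();
    }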
Libass was set to use the file "subfont.ttf" in the user configuration
directory as a default/fallback font. This triggered "Error opening
font" errors from libass if it tried to use the fallback font for some
glyph and the user had not copied/linked any font there (and there is
generally little reason to do that nowadays when using fontconfig).
Check whether the path exists and only set it in ass_set_fonts() if it
does.
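A minimal sketch of the check, assuming the path has already been
built (variable names illustrative; ass_set_fonts() is the real libass
API):

    #include <sys/stat.h>
    #include <ass/ass.h>

    static void setup_fonts(ASS_Renderer *renderer, const char *subfont_path)
    {
        struct stat st;
        const char *default_font = NULL;
        /* Only hand the fallback font to libass if the file exists. */
        if (subfont_path && stat(subfont_path, &st) == 0)
            default_font = subfont_path;
        ass_set_fonts(renderer, default_font, "sans-serif", 1, NULL, 1);
    }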
Frame rate information is mostly irrelevant for playback, but it's
needed at least to convert frame numbers used in some subtitle formats
(like MicroDVD) into timestamps. Libavformat stopped making up a frame
rate if no "reliable" information is available (commit 7929e22bd
"lavf: don't guess r_frame_rate from either stream or codec timebase",
1.5 months ago). This caused a regression with AVI files and MicroDVD
subtitles. Add a heuristic similar to what libavformat used to have,
to make up FPS values which should work at least for the AVI+MicroDVD
use case.
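A rough sketch of such a heuristic (the conditions in the real code
may differ; this only illustrates the AVI-style fallback):

    #include <libavformat/avformat.h>

    static double guess_fps(const AVStream *st)
    {
        if (st->avg_frame_rate.num && st->avg_frame_rate.den)
            return av_q2d(st->avg_frame_rate);
        /* AVI stores one frame per time_base tick, so the inverse of
         * the stream time base is a reasonable made-up frame rate. */
        if (st->time_base.num && st->time_base.den)
            return 1.0 / av_q2d(st->time_base);
        return 0;   /* no usable guess */
    }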
struct station_elem_s had a field "name[8]", but the rest of the code
used PVR_STATION_NAME_SIZE as field size in snprintf and some other
calls accessing the field. Change the field size to
PVR_STATION_NAME_SIZE so it matches the accesses.
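In other words (the macro value here is illustrative):

    #define PVR_STATION_NAME_SIZE 40        /* illustrative value */

    struct station_elem_s {
        char name[PVR_STATION_NAME_SIZE];   /* was: char name[8]; */
        /* ... */
    };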
If digital pass-through is used, the code supported setting the volume
(just mute, actually), but not getting the volume. This could lead to
a stuck mute state in the mplayer frontend. Make the code respond to
volume queries even if digital pass-through is used.
Make mixer support setting the mute attribute at audio driver level,
if one exists separately from volume. As of this commit, no libao2
driver exposes such an attribute yet; that will be added in later
commits.
Since the mute status can now be set externally, it's no longer
completely obvious when the player should automatically disable mute
when uninitializing an audio output. The implemented behavior is to
turn mute off at uninitialization if we turned it on and haven't
noticed it turn off (by external means) since.
The player tried to disable mute before exiting, so that if mute is
emulated by setting volume to 0 and the volume setting is a
system-global one, we don't leave it at 0. However, the logic doing
this at process exit was flawed, as volume settings are handled by
audio output instances and the audio output that set the mute state
may have been closed earlier. Trying to write reliably working logic
that restores volume at exit only would be tricky, so change the code
to always unmute an audio driver before closing it and restore mute
status if one is opened again later.
Restore the audio balance setting when the audio chain is
reinitialized (also after switching to another file).
Also add a note about the balance code being seriously buggy.
MPlayer volume control was originally implemented with the assumption
that it controls a system-wide volume setting which keeps its value
even if a process closes and reopens the audio device. However, this
is not actually true for --softvol mode or some audio output APIs that
only consider volume as a per-client setting for software mixing. This
could have annoying results, as the volume would be reset to a default
value if the AO was closed and reopened, for example when moving to a
new file or crossing ordered chapter boundaries. Add code to set the
previous volume again after audio reinitialization if the current
audio chain is known to behave this way (softvol active or the AO
driver is known to not keep persistent volume externally).
This also avoids an inconsistency with the mute flag. The frontend
assumed the mute status is persistent across file changes, but it
could be similarly lost.
The audio drivers that are assumed to not keep persistent volume are:
coreaudio, dsound, esd, nas, openal, sdl. None of these changes have
been tested. I'm guessing that ESD and NAS do per-connection
non-persistent volume settings.
Partially based on code by wm4.
The volume filter was automatically inserted if setting AO volume
failed. Remove that logic, and instead enable softvol mode fully if
querying current volume (which will happen before any set attempts)
fails. Fully switching to softvol mode is more robust, and any case
where the behavior would differ (the behavior is neither that both
querying/setting always work nor that both always fail) would have
been buggy.
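A sketch of the new logic, with hypothetical names for the mixer
struct and the query helper:

    #include <stdbool.h>

    /* Hypothetical mixer state; names are illustrative. */
    struct mixer {
        bool softvol;
        float vol_l, vol_r;
    };

    /* Stub standing in for the AO volume query; returns < 0 if the
     * driver cannot report a volume. */
    static int ao_query_volume(float *l, float *r)
    {
        (void)l; (void)r;
        return -1;
    }

    static void mixer_reinit(struct mixer *mixer)
    {
        /* The query happens before any set attempt; if it fails,
         * switch the whole chain to softvol instead of inserting a
         * volume filter only after a later set attempt fails. */
        if (!mixer->softvol &&
            ao_query_volume(&mixer->vol_l, &mixer->vol_r) < 0)
            mixer->softvol = true;
    }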
Current volume was always queried from the audio output driver (or
filter in case of --softvol). The only case where it was stored on
mixer level was that when turning off mute, volume was set to the
value it had before mute was activated. Change the mixer code to
always store the current target volume internally. It still checks for
significant changes from external sources and resets the internal
value in that case.
The main functionality changes are:
Volume will now be kept separately from mute status. Increasing or
decreasing volume will now change it relative to the original value
before mute, even if mute is implemented by setting AO level volume to
0. Volume changes no longer automatically disable mute. The exception
is relative changes up (like the volume increase key in default
keybindings); that's the only case which still disables mute.
Keeping the value internally avoids problems with granularity of
possible volume values supported by the AO. Increase/decrease keys
could work asymmetrically, or, when specifying a smaller-than-default
--volstep, even fail completely. In one case occurring in practice, if
the AO only supports changing volume in steps of about 2 and rounds
down the requested volume, then volume down key would decrease by 4
but volume up would increase by 2 (previous volume plus or minus the
default change of 3, rounded down to a multiple of 2). Now, the
internal value will keep full precision.
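A self-contained illustration of the asymmetry from the example above
(an AO that rounds requested volume down to a multiple of 2, driven
with the default --volstep of 3):

    #include <stdio.h>

    /* Models an AO that rounds the requested volume down to a
     * multiple of 2 and only lets us read back the rounded value. */
    static int ao_volume;
    static void ao_set(int req) { ao_volume = req - req % 2; }

    int main(void)
    {
        ao_volume = 50;
        ao_set(ao_volume - 3);                  /* volume down key */
        printf("after down: %d\n", ao_volume);  /* 46: step of 4 */
        ao_set(ao_volume + 3);                  /* volume up key */
        printf("after up:   %d\n", ao_volume);  /* 48: step of 2 */
        /* Storing the target volume in the mixer instead of re-reading
         * it from the AO makes down followed by up return exactly to
         * the starting value. */
        return 0;
    }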
Change the audio driver control() command argument from "int" to "enum
aocontrol". Remove unused control types (SET_DEVICE, GET_DEVICE,
QUERY_FORMAT, SET_PLUGIN_DRIVER, SET_PLUGIN_LIST). QUERY_FORMAT looks
like functionality that could be useful in the future, but as ao_oss
was the only driver with an actual implementation of it, the current
code wasn't worth keeping.
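A sketch of the narrowed enum; the exact remaining set of values is
illustrative (GET_MUTE/SET_MUTE reflect the mute attribute introduced
earlier in this series):

    enum aocontrol {
        AOCONTROL_GET_VOLUME,
        AOCONTROL_SET_VOLUME,
        AOCONTROL_GET_MUTE,
        AOCONTROL_SET_MUTE,
    };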
Stop trying to read terminal input if a read attempt returns EOF. The
most important case where this matters is when someone runs the player
with stdin redirected from /dev/null and without specifying
--no-consolecontrols. This used to cause 100% CPU load while paused,
as select() would continuously trigger on stdin (the need for
--no-consolecontrols was not apparent to people with older mplayer
versions, as input reading was less efficient and latencies like
hardcoded sleeps kept CPU use well below 100%). Now this will only
cause a "Dead key input" error message.
Make the code read current real time again after drawing OSD. This
ensures time taken in OSD drawing is properly deducted from the
duration of the following sleep. The main practical effect is to avoid
the A-V field on the status line staying at a value a couple of
milliseconds above 0 (depending on VO).
Fix a missing check that could sometimes result in video frames being
shown after the specified end pts (end of a timeline segment or
--endpos).
Fix mistaken video EOF detection after an aspect change in the video
stream, when there is no current valid visible frame but the next
frame is already buffered in the VO.
pa_stream_flush() seems to work pretty badly in general. The visible
symptoms included at least old audio continuing for a significant time
after the call, and bogus latency reporting causing temporary video
freezes after a seek. Add some hacks to work around these problems.
The result seems to work most of the time on my machine at least...
For ao_pulse, the current latency is not a good indicator of how soon
the AO requires new data to avoid underflow. Add an internal pipe that
can be used to wake up the input loop from select(), and make the
pulseaudio main loop (which runs in a separate thread) use this
mechanism when pulse requests more data. The wakeup signal currently
contains no information about the reason for the wakeup, but audio
buffers are always filled when the event loop wakes up.
Also, request a latency of 1 second from the Pulseaudio server. The
default is normally significantly lower. We don't need low latency,
and higher latency helps prevent underflows and reduces the need for
wakeups.
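A minimal sketch of the wakeup pipe mechanism (integration with the
pulse mainloop omitted; names illustrative):

    #include <unistd.h>

    static int wakeup_pipe[2];  /* [0]: in select() set, [1]: writer */

    static void wakeup_init(void)
    {
        if (pipe(wakeup_pipe) < 0)
            wakeup_pipe[0] = wakeup_pipe[1] = -1;   /* no wakeups */
    }

    /* Called from the pulseaudio mainloop thread when it wants data;
     * a single write is enough to make select() return in the input
     * loop, and writing to a pipe is thread-safe. */
    static void wakeup_main_loop(void)
    {
        char c = 0;
        (void)write(wakeup_pipe[1], &c, 1);
    }

    /* Called from the input loop once select() reports the read end
     * readable; the content doesn't matter, only the wakeup itself. */
    static void wakeup_drain(void)
    {
        char buf[32];
        (void)read(wakeup_pipe[0], buf, sizeof(buf));
    }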
If the user moved the window to another screen, fullscreen mode would
still use the original screen. Fix to use the screen the window is
currently on (unless overridden by --xineramascreen).
Change the macosx_finder_args function so that when mplayer2 is
invoked from the Finder in a Mac application bundle, it redirects the
output to ~/Library/Logs/mplayer2.log instead of cluttering the global
system.log.
This doesn't affect terminal use which keeps writing to stdout and
stderr.
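A sketch of the redirection (the log path is from the text; the HOME
handling is illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    static void redirect_output_to_logfile(void)
    {
        const char *home = getenv("HOME");
        char path[512];
        if (!home)
            return;
        snprintf(path, sizeof(path), "%s/Library/Logs/mplayer2.log", home);
        freopen(path, "a", stdout);
        freopen(path, "a", stderr);
    }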
The gl video output is faster and has more features than corevideo, so
it should be preferred on Mac OS X.
This doesn't affect GUI compatibility, because GUIs explicitly specify
the corevideo video output along with the suboptions for the shared
buffer name to mmap in.
The recommended way to get function pointers to the functions in the
OpenGL library is through dlopen/dlsym/dlclose. This causes problems
in the Cocoa OpenGL backend when -lGL (X11's OpenGL library) is linked
to the binary together with -framework OpenGL.
The linked OpenGL symbols are always from -lGL, causing all the
function pointers to point to null when getFunctions is called against
a Cocoa OpenGL context.
For this reason change the configure autodetection code to disable
the vo_gl X11 backend when cocoa is active.
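For illustration, a lookup that works correctly against a Cocoa
context resolves symbols from the OpenGL framework binary at runtime
(standard framework path; error handling trimmed):

    #include <dlfcn.h>

    static void *get_gl_proc(const char *name)
    {
        /* Resolving through dlopen/dlsym returns the symbol from the
         * OpenGL implementation actually backing the context, not the
         * -lGL stub that happened to be linked into the binary. */
        void *gl = dlopen("/System/Library/Frameworks/OpenGL.framework/OpenGL",
                          RTLD_LAZY);
        return gl ? dlsym(gl, name) : NULL;
    }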
This video output is not useful anymore. It is based on Carbon to draw
the mplayer window, and Carbon has been deprecated by Apple since 10.5.
The upcoming 10.8 OSX release should deprecate most of Carbon, so it
doesn't make sense to keep vo_quartz in the codebase when there are
modern and better alternatives (vo_gl and vo_corevideo).
macosx_finder_args was using Carbon and was no longer usable on modern
versions of Mac OS X. This functionality is very useful for embedding
mplayer in a Mac application bundle.
When using application bundles, the operating system will call the
main function with only one argument that identifies the process
serial number (an additional process identifier in OS X other than the
PID). File open events are then dispatched to the application
through events that must be handled accordingly.
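The Finder-style invocation can be recognized by its argument shape;
"-psn_" is the standard prefix of the process serial number argument
(detection sketch only; the real code also installs Apple Event
handlers for the file open events):

    #include <string.h>

    static int started_from_finder(int argc, char **argv)
    {
        /* e.g. argv[1] == "-psn_0_1234567" */
        return argc == 2 && strncmp(argv[1], "-psn_", 5) == 0;
    }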
The Cocoa framework generates only a NS*MouseDown event when handling
the second click of a double click (no NS*MouseUp). If that is the
case, put a synthetic mouse-up key event into mplayer2's input FIFO
when handling the MouseDown Cocoa event.
Change the window to accept mouse drag events not only on the title
bar, but also on the rest of the window surface; this includes the
video area.
It looks like changing the window mask resets the behaviour
specified in the delegate method, probably due to some strange
interaction with NSBorderlessWindow. For this reason call
-setPresentationOptions in the -fullscreen method to remind Cocoa of
the behaviour we want.
Add option --cursor-autohide-delay to control the number of milliseconds
with no user interaction before the mouse cursor is hidden.
There are two negative values with useful special meanings:
* A value of -1 prevents the cursor from hiding (useful for users
with multiple displays).
* A value of -2 prevents the cursor from showing upon activity.
The default is 1 second to keep the behaviour consistent with the
previous X11 backend implementation.
Remove the vo_mouse_autohide field as it was always true.
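A sketch of the resulting decision logic (names illustrative; times in
milliseconds):

    #include <stdbool.h>
    #include <stdint.h>

    static bool cursor_should_hide(int64_t now, int64_t last_activity,
                                   int cursor_autohide_delay)
    {
        if (cursor_autohide_delay == -1)
            return false;               /* never hide */
        if (cursor_autohide_delay == -2)
            return true;                /* never show */
        return now - last_activity >= cursor_autohide_delay;
    }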
There were some slight differences between what input.conf mapped, and
what was in input.c def_cmd_binds[]. Make them match.
Add some minor documentation improvements in input.conf.
Also remove doubled comments ('##'), because they were confusing.
At least on some keyboards, the key between '0' and 'Enter' on the
key pad is mapped to KP_Separator. Since X11 VOs accept unicode
input, the mplayer keycode this key generates depended on the numlock
state, and with numlock enabled this mapped to an ASCII character.
This is probably not what the user wanted, since it means two physical
keys will always map to the same key code.
Map it to KP_DEC.
This change allows using non-ASCII keys with X11. These keys were
ignored before.
Technically, this creates an invisible, non-interactive input method
context. If creation fails, the code falls back to the old method, which
allows a subset of ASCII only.
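A sketch of the context creation (standard Xlib calls; error handling
trimmed). If either call fails, the caller keeps the old ASCII-only
lookup path:

    #include <X11/Xlib.h>

    static XIC setup_input_context(Display *dpy, Window w)
    {
        XIM xim = XOpenIM(dpy, NULL, NULL, NULL);
        if (!xim)
            return NULL;
        /* Invisible, non-interactive: no preedit or status area. */
        return XCreateIC(xim,
                         XNInputStyle, XIMPreeditNothing | XIMStatusNothing,
                         XNClientWindow, w,
                         NULL);
    }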
This assumes the terminal uses UTF-8. If invalid UTF-8 is encountered (for
example because the terminal uses a legacy encoding), the code falls back
to the old method and feeds each byte as key code to the input code.
In theory, UTF-8 input could randomly fail, because the code in getch2.c
doesn't try to fill the input buffer correctly with input sequences
longer than a byte. This is a problem with the design of the existing
code.
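A GET_UTF8-style sketch of the decoding step (no overlong/surrogate
validation shown); on -1 the caller falls back to feeding single
bytes:

    /* Decode one UTF-8 code point from [*p, end); advances *p.
     * Returns the code point, or -1 on invalid input. */
    static int decode_utf8(const unsigned char **p, const unsigned char *end)
    {
        if (*p >= end)
            return -1;
        unsigned c = *(*p)++;
        int cont = c < 0x80 ? 0 : c < 0xC0 ? -1 : c < 0xE0 ? 1
                 : c < 0xF0 ? 2 : c < 0xF8 ? 3 : -1;
        if (cont < 0)
            return -1;
        c &= 0x7F >> cont;  /* payload bits of the lead byte */
        for (int i = 0; i < cont; i++) {
            if (*p >= end || (**p & 0xC0) != 0x80)
                return -1;
            c = (c << 6) | (*(*p)++ & 0x3F);
        }
        return (int)c;
    }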
This moves all key codes above the highest valid unicode code point
(which is 0x10FFFF). All key codes below MP_KEY_BASE now directly map
to unicode (KEY_ENTER is 13, carriage return). Configuration files
(input.conf) can contain unicode characters in UTF-8 to map non-ASCII
characters/keys.
This shouldn't change anything user visible, except that "direct key
codes" (as used in input.conf) will change their meaning.
Parts of the bstr functions taken from libavutil's GET_UTF8 and
slightly modified.
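A sketch of the resulting layout (the exact MP_KEY_BASE value is
illustrative; the constraint from the text is only that it lies above
0x10FFFF):

    #define MP_KEY_BASE  (1 << 21)          /* 0x200000 > 0x10FFFF */

    /* Codes below MP_KEY_BASE are Unicode code points: */
    #define KEY_ENTER    13                 /* carriage return */
    /* Special keys live above the Unicode range: */
    #define KEY_LEFT     (MP_KEY_BASE + 0)
    #define KEY_RIGHT    (MP_KEY_BASE + 1)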
Setting the WM_NAME/WM_ICON_NAME window properties didn't always work:
apparently there are some characters that can't be represented in the X
STRING or COMPOUND_TEXT encodings, such as U+2013 EN DASH. The function
Xutf8TextListToTextProperty partially converts the string, and returns
a value different from 'Success'. This means vo_x11_set_property_string
didn't set these window properties.
On most modern window managers, this is not a problem, since these use
the _NET_WM_NAME/_NET_ICON_NAME and the UTF8_STRING encoding. Some older
WMs like IceWM don't read these, and the window title remains blank.
It's not clear what exactly we should do in this situation, but fix it
by setting the WM_NAME/WM_ICON_NAME properties with type UTF8_STRING.
This violates the ICCCM, but at least IceWM seems to handle this well.
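Setting the property with type UTF8_STRING looks roughly like this
(standard Xlib calls; the helper name is illustrative):

    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    static void set_wm_name_utf8(Display *dpy, Window w, const char *title)
    {
        Atom utf8 = XInternAtom(dpy, "UTF8_STRING", False);
        /* Type UTF8_STRING for WM_NAME violates the ICCCM, but WMs
         * that only read WM_NAME (e.g. IceWM) handle it. */
        XChangeProperty(dpy, w, XA_WM_NAME, utf8, 8, PropModeReplace,
                        (const unsigned char *)title, strlen(title));
    }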
See also:
http://lists.freedesktop.org/archives/xorg/2004-September/003391.html
http://lists.freedesktop.org/archives/xorg/2004-September/003395.html
Timeline handling converted the pts values from demuxed subtitles to
timeline scale. Change the code to do most subtitle handling in
original subtitle source pts, and instead convert current playback
timeline pts to those units when deciding which subtitle to show.
The main functionality changes are that now demuxed subtitles which
overlap chapter boundaries are handled correctly (at least for libass
subtitles), and external subtitles are assumed to use the same pts
scale as the current source (this needs improvements later).
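The conversion now goes in the opposite direction, roughly like this
(names and timeline bookkeeping are illustrative); the subtitle shown
is then whichever demuxed event covers the resulting source pts:

    /* Map the current playback (timeline) pts into the pts scale of
     * the source file backing the active timeline part. */
    static double playback_pts_to_source_pts(double playback_pts,
                                             double part_start_playback,
                                             double part_start_source)
    {
        return playback_pts - part_start_playback + part_start_source;
    }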
Before, a video subtitle that had a duration continuing past the end
of the chapter would continue to be shown for the original duration,
even if the chapter ended and playback switched to a position in the
source where the subtitle shouldn't exist. Now, the subtitle will
correctly end.
Before, external subtitle files were interpreted as specifying pts
values in timeline scale. Now, they're interpreted as specifying pts
values in source file time scale, for _every_ source file. This is
probably more likely to be what the user wants for the "main" source
file in case there is one, but almost certainly not quite right for
multiple source files where the same subs could be shown over
different scenes. If the user wants them to match some main source
file, it's probably still better to have incorrect extra subs for
video from some files than to have every subtitle appearing at the
wrong time. The new code makes it easier to change the interpretation
of the subtitle times, and some configurability should be added in
the future.
Direct rendering support in vo_xv (used with --dr) had at least two
problems. First, OSD drawing modified the buffers; this meant that
if the buffers were used for reference frames there would be video
corruption. I don't think "performance optimization" with this level
of drawbacks is appropriate with today's machines any more. Direct
rendering could still be used for non-reference frames, but there's a
second problem: with direct rendering enabled the same buffer is used
for every frame, and with the XShm extension that is used by default
there's no checking that the previous frame has been completely
uploaded to the graphics card before it's overwritten by the next one.
This could be fixed, but as Xv is becoming obsolete I don't see it as
a priority to improve it. Thus I'm simply removing the parts of
functionality that were more likely to break things than improve
playback.
When switching to a timeline part from another file, decoders were
reinitialized after doing the demuxer-level seek. This is necessary
for audio because some decoders read from the demuxer stream during
initialization and the previous stream position before seek could have
been at EOF. However, this initialization sequence could lose the
first subtitles or the first part of the audio.
The problem for subtitles was that the seek itself or audio
initialization could already have buffered subtitle packets from the
new position, and the way subtitles are reinitialized flushes packet
buffers. Thus early subtitles could be lost (even if they were demuxed
- unfortunately demuxers may not know about still active subtitles
earlier in the file, but that's another issue). Fix this by moving
subtitle and video reinitialization before the demuxer seek; they
don't have the problems which prevent that for audio.
Audio initialization can already decode and buffer some output.
However, the seek_reset() call done last would then throw away this
buffered output. Work around this by adding an extra flag to
seek_reset().
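The resulting order is roughly as follows (the function names, the
context type, and the exact flag are illustrative stand-ins for the
real code):

    struct MPContext { int unused; };   /* hypothetical context type */

    static void reinit_subs(struct MPContext *m)           { (void)m; }
    static void reinit_video(struct MPContext *m)          { (void)m; }
    static void reinit_audio(struct MPContext *m)          { (void)m; }
    static void demux_seek_part_start(struct MPContext *m) { (void)m; }
    static void seek_reset(struct MPContext *m, int keep_audio)
    { (void)m; (void)keep_audio; }

    static void timeline_switch_part(struct MPContext *mpctx)
    {
        /* Subtitles and video first: doing this after the seek would
         * flush subtitle packets that the seek (or audio init) had
         * already buffered from the new position. */
        reinit_subs(mpctx);
        reinit_video(mpctx);

        demux_seek_part_start(mpctx);

        /* Audio must follow the seek, since its init may read from
         * the demuxer stream. The new flag keeps seek_reset() from
         * throwing away output that audio init already buffered. */
        reinit_audio(mpctx);
        seek_reset(mpctx, 1 /* keep buffered audio output */);
    }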