Normally, --subcp always forces conversion: the input is converted even
if the UTF-8 check on it succeeds.
Extend --subcp to allow specifying codepages as a fallback if UTF-8
doesn't work. For example, --subcp=utf8:cp1250 will use UTF-8 if the
input looks like UTF-8, and will fall back to cp1250 if the UTF-8 check
fails.
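For example, on the command line (the file name is just illustrative):

    mpv --subcp=utf8:cp1250 file.mkv
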
I think this should actually be the default, but on the other hand, it
changes the semantics of the option, and a user would probably expect
--subcp to force conversion, rather than silently using UTF-8 if that
happens to work.
Don't accept overlong sequences. Don't accept codepoints past the
maximum unicode codepoint. Don't accept the UTF-16 surrogate codepoints.
I'm not sure if there are more codepoints that are defined to be
invalid, but we just want to make libavcodec happy, so this is enough.
(libavcodec's subtitle converter checks for valid UTF-8 and throws up
and dies if it's not - now we want to use bstr_sanitize_utf8_latin1() to
force valid UTF-8, so the strictness of our UTF-8 parser has to match at
least that of libavcodec's check.)
I'm not sure whether the min test is actually 100% correct.
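For reference, a minimal sketch of the rejection rules described above
(this is not the actual parser, just an illustration):

    #include <stdbool.h>
    #include <stdint.h>

    // Validate a decoded codepoint against the rules above. seq_len is
    // the number of bytes the UTF-8 sequence used (1-4).
    static bool codepoint_valid(uint32_t cp, int seq_len)
    {
        // Smallest codepoint that legitimately needs seq_len bytes;
        // anything below it is an overlong encoding.
        static const uint32_t min_cp[5] = {0, 0x00, 0x80, 0x800, 0x10000};
        if (seq_len < 1 || seq_len > 4 || cp < min_cp[seq_len])
            return false;               // overlong sequence
        if (cp >= 0xD800 && cp <= 0xDFFF)
            return false;               // UTF-16 surrogate codepoint
        if (cp > 0x10FFFF)
            return false;               // past the maximum Unicode codepoint
        return true;
    }
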
Note that libavcodec also treats BOM codepoints as invalid. This is
definitely a bug: the BOM is really just "zero-width non-breaking space"
redefined by Microsoft, but it is perfectly valid to appear in the
middle of a string. Official Unicode has merely deprecated the old
usage of the BOM codepoint, and didn't make it illegal. Besides, the
string could be from the start of a file, so this check doesn't make
sense even with libavcodec's insane logic. We don't copy this bug.
Broken UTF-8 in this context means we treat it as UTF-8, but we also
interpret broken UTF-8 sequences as Latin1.
Also, run our own UTF-8 check function before the charset detectors.
This prevents ENCA's UTF-8 check from possibly messing up (like
detecting 7-bit clean UTF-8 as ASCII, or other things). It also takes
care of UTF-8 detection if no charset detector (ENCA, libguess) is
compiled in, and it lets us deal better with cut-off UTF-8 sequences.
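A rough sketch of the resulting order of checks (all three helpers here
are hypothetical stand-ins, not mpv's real API):

    #include <stddef.h>

    // Hypothetical helpers standing in for mpv internals.
    static int my_utf8_check(const unsigned char *buf, size_t len);
    static int have_charset_detector(void);
    static const char *run_detector(const unsigned char *buf, size_t len);

    static const char *detect_charset(const unsigned char *buf, size_t len)
    {
        if (my_utf8_check(buf, len))   // our own strict UTF-8 check runs first
            return "utf-8";
        if (have_charset_detector())   // ENCA/libguess, if compiled in
            return run_detector(buf, len);
        return NULL;                   // no detector: caller falls back
    }
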
If pthreads are enabled, accesses to the input queue are protected by a
mutex. This is useful for platforms like OS X, where events are created
on the Cocoa thread and added to the queue, then dequeued on the
playloop thread.
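A minimal sketch of the locking pattern, assuming a simple fixed-size
queue (names and structures are illustrative, not the actual input code):

    #include <pthread.h>

    struct event { int key; };          // illustrative event type

    #define QUEUE_SIZE 64
    static struct event queue[QUEUE_SIZE];
    static int queue_len;
    static pthread_mutex_t input_lock = PTHREAD_MUTEX_INITIALIZER;

    // Called from the Cocoa thread: enqueue an event under the lock.
    static void queue_add(struct event e)
    {
        pthread_mutex_lock(&input_lock);
        if (queue_len < QUEUE_SIZE)
            queue[queue_len++] = e;
        pthread_mutex_unlock(&input_lock);
    }

    // Called from the playloop thread: dequeue the oldest event.
    static int queue_get(struct event *out)
    {
        pthread_mutex_lock(&input_lock);
        int ok = queue_len > 0;
        if (ok) {
            *out = queue[0];
            for (int n = 1; n < queue_len; n++)
                queue[n - 1] = queue[n];
            queue_len--;
        }
        pthread_mutex_unlock(&input_lock);
        return ok;
    }
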
This is based on the MPlayer VA API patches. To be exact it's based on
a very stripped down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.
This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).
On the other hand, some features not present in the original patch were
added, like screenshot support.
VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and probably rather slow. In some cases, only one of OSD and
subtitles can be shown at a time (because the OSD is drawn first, it is
preferred).
Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.
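For reference, the per-channel blend equations that differ between the
two conventions (illustrative helpers, not the img_convert.c code):

    // Straight alpha: src/dst are plain color values in [0,1], a is alpha.
    static inline float blend_straight(float src, float dst, float a)
    {
        return src * a + dst * (1.0f - a);
    }

    // Premultiplied alpha: src has already been multiplied by its alpha.
    static inline float blend_premultiplied(float src, float dst, float a)
    {
        return src + dst * (1.0f - a);
    }
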
Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of
being accepted.)
Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on an NVIDIA system; however, this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)
Change how the HW decoding stuff is organized, in particular the way
it's initialized. Instead of duplicating the list of supported codecs for
hwaccel decoders, add a probe function which allows each decoder to
report whether it supports a given codec.
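A hypothetical sketch of what such a probe hook could look like (the
real struct and signature in mpv may differ):

    struct hwdec_driver {
        const char *name;
        // Return 0 if this hwaccel can handle the given codec
        // (e.g. "h264"), or a negative value if it can't.
        int (*probe)(const char *codec_name);
    };
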
Add an "auto" choice to the --hwdec option, which automatically enables
hardware decoding if libavcodec and/or the VO supports it.
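For example (the file name is just a placeholder):

    mpv --hwdec=auto file.mkv
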
What mpv prints on the terminal changes a bit. Now it will just print a
single line saying whether hw decoding is used or not (and nothing at all if
no hw decoding at all was requested). The pretty violent fallback from
hw decoding to software decoding is still quite verbose and evil-looking
though.
Support horizontal and vertical axes of input devices.
If the input device supports precise scrolling with an input value, that
value should first be scaled to a standard multiplier, where 1.0 is the
default. The multiplier is then applied to the following commands if
possible (see the sketch below):
* MP_CMD_SEEK
* MP_CMD_SPEED_MULT
* MP_CMD_ADD
All other commands are triggered on every axis event, without changing
the values specified in the config file.
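A sketch of the scaling step (the command IDs are the real names listed
above, but the struct, field, and constant names here are assumptions):

    enum { MP_CMD_SEEK, MP_CMD_SPEED_MULT, MP_CMD_ADD, MP_CMD_OTHER };

    struct cmd { int id; double scale; };

    #define AXIS_STEP 100.0  // assumed raw value of one ordinary scroll step

    static void apply_axis_value(struct cmd *cmd, double axis_value)
    {
        double mult = axis_value / AXIS_STEP;   // 1.0 == the default step
        switch (cmd->id) {
        case MP_CMD_SEEK:
        case MP_CMD_SPEED_MULT:
        case MP_CMD_ADD:
            cmd->scale = mult;   // scale the command's numeric argument
            break;
        default:
            cmd->scale = 1.0;    // other commands fire unchanged per event
            break;
        }
    }
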
"core" is the name used on many Unix systems for core dump files. For
that reason, some tools work under the assumption that a file with this
name is indeed a core dump (for example, autoconf does this).
This commit just renames the files. The following one will change all the
includes to fix compilation. This is done this way because git has an
easier time tracing file changes if there is a pure rename commit.