OpenAL is disabled by default, because it supposedly interferes with
some other configure tests and makes them fail silently.
Previously, --enable-openal followed configure's utterly braindead
semantics and disabled auto detection. However, since OpenAL was
disabled by default, there was no easy way to enable OpenAL at all,
even if it was explicitly requested. Solve this by making
--enable-openal use auto detection.
Remove the following #defines, which should never change in practice:
CONFIG_FAKE_MONO, OUTBURST, FAST_OSD, FAST_OSD_TABLE
The configure script hardcoded these to particular values in config.h.
They could only be changed by manually editing it. I don't think
anyone would want to.
X11_FULLSCREEN
This once did something, but became meaningless years ago, and by now it
was always set to true if the files using it were compiled at all.
Conflicts:
configure
libvo/osd.c
libvo/vo_gl.c
Merged from mplayer2. The OSD defines were already removed in this fork.
Enabling optimization _still_ causes annoyances when using a debugger,
and it increases compilation times too.
Now --enable-debug basically replaces the -O2 flag with -g.
Remove printing of VO modules that were removed earlier, but were enabled
by configure tests that are still needed.
Print "libavcodecs" with the "Codecs: " output.
Probably all of these are supported by libavcodec. Missing things can
be added back.
Also remove qtpalette.h. It was used by demux_mov.c, and should have
been deleted with commit 1fde09db6f.
The main excuse for removing this is that LIVE555 deprecated the API
the mplayer implementation was using. The old API still seems to be
somewhat supported, but must be explicitly enabled at LIVE555
compilation, so mplayer won't necessarily work with any given user
installation. The implementation was also very messy, in C++, and
FFmpeg support is available as an alternative.
Remove it completely.
Support for internal libdvdread has been removed in commit 41fbcee1f5,
but some bits have been missed in Makefile/configure.
Support for libdvdread as a normal library is left unchanged.
While being able to play videos on a framebuffer device would be nice,
I didn't need it, and couldn't even test it (buggy nvidia binary
drivers that disable framebuffers, buggy DirectFB that crashes when
using the X11 backend). It's just dead weight, get rid of it.
vo_directx was very horrible, and by today it's mostly useless. I didn't
remove it, because there was that-guy who told me in amazement how
awesome mplayer was, because it was the only video player fast enough
for fast playback on his system when using vo_directx. Sorry, that-guy.
When the internal mplayer MPEG demuxer was removed (commit 1fde09db),
the default demuxer when using dvdnav was set to libavformat. Now it
turns out that this doesn't work with libavformat. It will terminate
playback right after the audio runs out (instead of looping it like the
video, or whatever it's supposed to do). I'm not sure what exactly the
problem is, but since 1. even mplayer-svn can't handle DVD menus
directly (missing highlights), 2. DVD menus are essentially worthless,
and 3. I don't directly watch DVDs, don't bother with it and remove it.
For basic playback, there's still libdvdread support.
Also, use pkg-config for libdvdread, and drop support for in-tree
libdvdread. Remove support for in-tree libdvdcss as well.
Remove the win32 loader - the win32 emulation layer, as well as the
code for using DirectShow/DMO/VFW codecs. Remove loading of xanim,
QuickTime, and RealMedia codecs.
The win32 emulation layer is based on a very old version of wine.
Apparently, wine code was copied and hacked until it was somehow able
to load a limited collection of binary codecs. It poked around in the
code segment of some known binary codecs to disable unsupported win32
API calls to make them work. Example from module.c:
for (i=0;i<5;i++) RVA(0x19e842)[i]=0x90; // make_new_region ?
for (i=0;i<28;i++) RVA(0x19e86d)[i]=0x90; // call__call_CreateCompatibleDC ?
for (i=0;i<5;i++) RVA(0x19e898)[i]=0x90; // jmp_to_call_loadbitmap ?
for (i=0;i<9;i++) RVA(0x19e8ac)[i]=0x90; // call__calls_OLE_shit ?
for (i=0;i<106;i++) RVA(0x261b10)[i]=0x90; // disable threads
Just to show how utterly insane this code is. You wouldn't want even
your worst enemy to have to maintain this. In fact, it seems nobody
made major changes to this code ever since it was committed.
Most formats can be decoded by libavcodec these days, and the loader
couldn't be used on 64 bit platforms anyway. The same is (probably)
true for the other binary codecs.
General note about how support for win32 codecs could be added back:
It's not possible to replace the win32 loader code by using wine as a
library, because modern wine cannot be linked with native Linux
programs for certain reasons. It would be possible to move DirectShow
video decoding into a separate process linked with wine, like the
CoreAVC-for-Linux patches do. There is also the mplayer-ww fork, which
uses the dshownative library to use DirectShow codecs on Windows.
Since slave mode is not planned to be kept, this VO is useless and I'm
removing it.
This VO was useful for OSX GUIs. Since in Cocoa you can't embed views in
windows from other processes, this VO was writing to a shared buffer with
mmap. The OSX GUIs would then read from the buffer and render the image
with an external renderer.
If in the future we want to support GUIs, we will need to research the
IOSurface framework. This allows sharing kernel-managed image data
across processes and integrates well with OpenGL.
The commit 74df1d8e05 (and f752212c62) replaced the configure
endian check with byte order macros defined by standard headers. It
turns out that MinGW-w64 actually doesn't define these macros in the
sys/types.h system header. (I assumed it does, because a quick test
seemed to work. But that was because gcc -W -Wall doesn't warn against
undefined macros. You need -Wundef for that.) MinGW-w64 has a
sys/param.h header defining these macros, but sys/types.h doesn't
include it, so it's useless without special casing the mplayer code.
Add a hack to configure instead. Define the macros directly, and
assume MinGW-w64 only works on little endian machines.
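A sketch of what that hack amounts to (hypothetical guard and values; it
assumes the usual BSD-style byte order constants and that the MinGW-w64
toolchain defines __MINGW32__):

    #if defined(__MINGW32__) && !defined(BYTE_ORDER)
    /* MinGW-w64 is only used on little endian machines. */
    #define LITTLE_ENDIAN 1234
    #define BIG_ENDIAN    4321
    #define BYTE_ORDER    LITTLE_ENDIAN
    #endif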
The other changes are basically random typos and superficial oversights.
The removed VO and AO took MPEG data and decoded it with V4L2. I'm not
exactly sure what the use of this is today, but get rid of it.
As far as feeding video data to V4L2 is concerned, there are other
ways. For example, there is this script that feeds yuv4mpeg-formatted
raw video data to V4L2:
https://raw.github.com/umlaeute/v4l2loopback/master/examples/yuv4mpeg_to_v4l2.c
The encoding branch by divverent can handle these via libavformat.
Note: for some reason, libav/ffmpeg have a GIF muxer only, and no
demuxer. The gif configure checks needed for the mplayer internal gif
demuxer can't be removed yet.
build: Fix vo directx configure check on Cygwin
Without windows.h included, syntax errors will
arise inside (some versions of) the ddraw.h header.
The wine-based ddraw.h header was reported not to
have this problem.
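A minimal compile test illustrating the fix (sketch only; the actual
check in configure may differ):

    /* windows.h first, so ddraw.h sees the Windows types it expects. */
    #include <windows.h>
    #include <ddraw.h>
    int main(void) { LPDIRECTDRAW dd = 0; (void)dd; return 0; }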
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@34953 b3059339-0415-0410-9bf9-f77b7e298cf2
Author: al
This is required to safely #include inttypes.h in .cpp files.
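For reference, the underlying header behavior (the snippet uses plain C
syntax, but the define only matters when the file is built as C++;
pre-C++11 C libraries hide the PRI*/SCN* macros from C++ unless
__STDC_FORMAT_MACROS is defined before including inttypes.h):

    #define __STDC_FORMAT_MACROS   /* must come before the include in C++ */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t x = 42;
        printf("value: %" PRId64 "\n", x);
        return 0;
    }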
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@34231 b3059339-0415-0410-9bf9-f77b7e298cf2
Author: diego
Adding the standard library locations to the search path is doubtful
behavior and unnecessary at least on FreeBSD.
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@34128 b3059339-0415-0410-9bf9-f77b7e298cf2
Author: diego
This was a horrible little tool to detect the host CPU at build time.
Forgotten in commit 74df1d8e05.
Also remove a forgotten codec-cfg entry in .gitignore.
There are still various other RTSP implementations available, such as
libnemesi, live555, and libav. The mplayer native version was a huge
chunk of old unmaintained code.
This was intended for translating filenames from filesystem charset to
the terminal charset. Modern sane platforms use UTF-8 for everything,
and on Windows we use unicode APIs, so this is not needed anymore.
Remove filename_recode, all uses of it, options and configure checks
related to terminal output charset, and code that tries to determine
the same.
Including <malloc.h>, especially if all you want is malloc(), has no
legitimate uses (on sane platforms at least). Remove the check for it,
and remove all uses in the code.
Remove unused check for alloca().
Most of these demuxers and decoders are provided in better form by
libav, while the mplayer builtin ones are essentially unmaintained. The
only legitimate use case for not using the libav ones was working around
libav bugs or bugs related to the way mplayer uses libav. Instead of
trying to keep dead code alive, development effort should go into
improving libav or the mplayer libav glue code.
Note that the libav demuxers have been preferred over the mplayer builtin
ones for a while in mplayer2. There were some exceptions: playing DVDs
with dvdnav or playing network sources. (That's because some stream
modules and network.c requested explicit file formats, such as
DEMUXER_TYPE_MPEG_PS, which mapped to builtin demuxers.) With this
commit, they are switched to use libav. One caveat is that the requested
format is not passed to libavformat, instead we rely on the auto probing
to select the correct libav demuxer (see code in demux_open_stream()).
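Roughly, the probing path corresponds to the plain libavformat calls
below (illustrative sketch with a hypothetical open_probed() helper, not
the actual glue code; older libav additionally needs av_register_all()
first):

    #include <libavformat/avformat.h>

    /* Passing NULL as the input format lets libavformat probe the
     * container instead of forcing a specific demuxer. */
    static int open_probed(const char *url, AVFormatContext **out)
    {
        *out = NULL;
        int ret = avformat_open_input(out, url, NULL, NULL);
        if (ret < 0)
            return ret;
        return avformat_find_stream_info(*out, NULL);
    }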
This used /dev/rtc for timing. /dev/rtc is root-only by default, and I
have a hard time believing that the standard OS functions are not good
enough. (Even if not, support for POSIX high resolution timers should
be added instead, see clock_gettime() and others.)
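For reference, a minimal sketch of the clock_gettime() alternative
mentioned above (not actual mplayer code):

    #include <stdint.h>
    #include <time.h>

    /* Monotonic time in nanoseconds; no root access needed, unlike /dev/rtc. */
    static uint64_t time_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * UINT64_C(1000000000) + ts.tv_nsec;
    }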
The function mp_dbg() was silent if MP_DEBUG was not defined. MP_DEBUG
was defined only if configure was invoked with --enable-debug. Remove
it and make mp_dbg an alias to mp_msg. This has the advantage that
--enable-debug changes less. (It only adds -g now and disables stripping
the binary on installation.)
Using log levels is the better way to silence annoying debug messages.
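Roughly, the change boils down to a forwarding macro like this (sketch;
the exact definition in mp_msg.h may differ):

    /* Before: compiled to nothing unless MP_DEBUG was defined.
     * After:  always forwards to mp_msg(); use message levels to filter. */
    #define mp_dbg(mod, lev, ...) mp_msg(mod, lev, __VA_ARGS__)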
mplayer tries to catch all signals by default, and displays a "nice"
crash message if a signal is caught. This is mostly useless for
diagnosing problems, and it's extremely fragile. It's likely to cause
more harm than it possibly solves.
Also remove the current_module variable, which was supposed to give a
hint which submodule was being run. This was far from accurate or
useful.
mplayer also caught SIGCHLD, and tried to wait for any children. This
potentially gets rid of zombies, but I'm not sure which ones. The only
places that fork(), cache2.c and unrar_exec.c, seem to wait for their
child processes properly. Just get rid of it.
Note that we don't even catch SIGTERM. Maybe this will have to be added
back in order to re-enable screensavers and such when the user
terminates mplayer with ^C on the terminal.
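If that gets added back, a minimal sketch could look like the following
(hypothetical code, not part of this change; the handler only sets a
flag for the main loop, and ^C delivers SIGINT while a plain kill sends
SIGTERM):

    #include <signal.h>
    #include <string.h>

    static volatile sig_atomic_t terminate_requested;

    static void request_terminate(int sig)
    {
        (void)sig;
        terminate_requested = 1;  /* main loop quits and restores the screensaver */
    }

    static void install_term_handlers(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = request_terminate;
        sigaction(SIGINT, &sa, NULL);
        sigaction(SIGTERM, &sa, NULL);
    }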
Ancient AMD-specific enhancement to the MMX instruction set. Officially
discontinued by AMD.
Note that support for this was already disabled in the previous commit.
This commit removes the actual code.
mplayer had three ways of enabling CPU specific assembler routines:
a) Enable them at compile time; crash if the CPU can't handle it.
b) Enable them at compile time, but let the configure script detect
your CPU. Your binary will only crash if you try to run it on a
different system that has fewer features than yours.
This was the default, I think.
c) Runtime detection.
The implementations of b) and c) suck. a) is not really feasible (it
sucks for users). Remove all code related to this, and use libav's CPU
detection instead. Now the configure script will always enable CPU
specific features, and disable them at runtime if libav reports them
not as available.
One implication is that now the compiler is always expected to handle
SSE (etc.) inline assembly at runtime, unless it's explicitly disabled.
Only checks for x86 CPU specific features are kept, the rest is either
unused or barely used.
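The runtime side maps onto libavutil's CPU API, roughly as in this
sketch (illustrative; the actual integration differs):

    #include <libavutil/cpu.h>

    /* Ask libavutil which features the running CPU really supports. */
    static int cpu_has_sse2(void)
    {
        return (av_get_cpu_flags() & AV_CPU_FLAG_SSE2) != 0;
    }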
Get rid of all the dumb -mcpu, -march etc. flags. Trust the compiler
to select decent settings.
Get rid of support for the following operating systems:
- BSD/OS (some ancient BSD fork)
- QNX (don't care)
- BeOS (dead, Haiku support is still welcome)
- AIX (don't care)
- HP-UX (don't care)
- OS/2 (dead, actual support has been removed a while ago)
Remove the configure code for detecting the endianness. Instead, use
the standard header <endian.h>, which can be used if _GNU_SOURCE or
_BSD_SOURCE is defined. (Maybe these changes should have been in a
separate commit.)
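For reference, the header-based check looks roughly like this on
glibc/BSD-style systems (sketch; in the real build the feature test
macro comes from the compiler flags):

    #define _GNU_SOURCE     /* or _BSD_SOURCE: exposes BYTE_ORDER & co. */
    #include <endian.h>
    #include <stdio.h>

    int main(void)
    {
    #if BYTE_ORDER == BIG_ENDIAN
        puts("big endian");
    #else
        puts("little endian");
    #endif
        return 0;
    }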
Since this is a quite violent code removal orgy, and I'm testing only
on x86 32 bit Linux, expect regressions.
aclib[_template].c contained inline assembler versions of memcpy using
MMX/SSE/3dnow etc. instructions. It's possible that this gave quite a
speedup a decade ago, but it's unlikely to have any use on modern
systems. Also, libc implementations already have their own
optimizations for the native memcpy function.
I did not verify my assumptions with benchmarks, so I could be wrong.
Also note that some platforms have extremely crappy libc
implementations, and it's well possible that these might suffer from a
major performance loss (hello Windows). Unfortunately, I do not care.
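For anyone who wants to check that assumption, a rough standalone
benchmark of the plain libc memcpy() is enough (sketch, not part of the
tree):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        const size_t size = 64u << 20;   /* 64 MiB per copy */
        const int iterations = 50;
        char *src = malloc(size), *dst = malloc(size);
        if (!src || !dst)
            return 1;
        memset(src, 0xAA, size);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iterations; i++)
            memcpy(dst, src, size);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        /* Touch dst so the copies can't be optimized away. */
        if (dst[size - 1] != (char)0xAA)
            return 1;
        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("libc memcpy: %.2f GiB/s\n",
               iterations * (size / (double)(1u << 30)) / sec);
        free(src);
        free(dst);
        return 0;
    }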
mplayer doesn't use yasm, as, or ranlib. They were only needed to
build with internal libav, which is gone.
Also, get rid of nm. nm was used to find out how external symbols are
mangled. Replace this by a platform check in mangle.h. As far as I
know, sane systems don't mangle symbols. Windows prefixes them with
an underscore ("_symbol"). I don't know about OSX.
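A sketch of the kind of platform check meant here (illustrative, not the
exact contents of mangle.h; for the record, Mach-O on OSX also prefixes
C symbols with an underscore, while 64-bit Windows does not):

    /* Prefix used when referring to C symbols from inline asm. */
    #if (defined(_WIN32) && !defined(_WIN64)) || defined(__CYGWIN__) || \
        defined(__APPLE__)
    #define EXTERN_PREFIX "_"
    #else
    #define EXTERN_PREFIX ""
    #endif
    #define MANGLE(sym) EXTERN_PREFIX #sym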