/*
* This file is part of MPlayer.
*
* MPlayer is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* MPlayer is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with MPlayer; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <libavutil/cpu.h>
#include "compat/libav.h"
#include "config.h"
#include "common/cpudetect.h"
/*
 * Background, from the commit "Remove compile time/runtime CPU detection,
 * and drop some platforms" (2012-07-29):
 *
 * mplayer had three ways of enabling CPU-specific assembler routines:
 *   a) Enable them at compile time; crash if the CPU can't handle it.
 *   b) Enable them at compile time, but let the configure script detect
 *      your CPU. Your binary will only crash if you try to run it on a
 *      different system that has fewer features than yours.
 *      This was the default, I think.
 *   c) Runtime detection.
 * The implementations of b) and c) suck; a) is not really feasible (it
 * sucks for users). Remove all code related to this, and use libav's CPU
 * detection instead. Now the configure script will always enable
 * CPU-specific features, and they are disabled at runtime if libav reports
 * them as unavailable. One implication is that the compiler is now always
 * expected to handle SSE (etc.) inline assembly, unless it's explicitly
 * disabled. Only the checks for x86-specific CPU features are kept; the
 * rest was either unused or barely used.
 *
 * Get rid of all the dumb -mcpu, -march etc. flags. Trust the compiler to
 * select decent settings.
 *
 * Drop support for the following operating systems:
 *   - BSD/OS (an ancient BSD fork)
 *   - QNX (don't care)
 *   - BeOS (dead; Haiku support is still welcome)
 *   - AIX (don't care)
 *   - HP-UX (don't care)
 *   - OS/2 (dead; actual support was removed a while ago)
 *
 * Remove the configure code for detecting endianness. Instead, use the
 * standard header <endian.h>, which is available when _GNU_SOURCE or
 * _BSD_SOURCE is defined. (Maybe these changes should have been a
 * separate commit.)
 *
 * Since this is a quite violent code-removal orgy, and I'm testing only on
 * 32-bit x86 Linux, expect regressions.
 */
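
/* To make the pattern above concrete: a minimal sketch (hypothetical, not
 * part of the original file, and guarded out so nothing here is compiled)
 * of building a SIMD routine unconditionally and selecting it at runtime
 * via libavutil's CPU flags. scale_c, scale_sse2 and init_dispatch are
 * made-up names; the SSE2 body is a plain-C stand-in for intrinsics/asm. */
#if 0
#include <libavutil/cpu.h>

static void scale_c(float *v, int n)
{
    for (int i = 0; i < n; i++)
        v[i] *= 2.0f;
}

static void scale_sse2(float *v, int n)
{
    /* a real build would use SSE2 intrinsics or inline asm here */
    for (int i = 0; i < n; i++)
        v[i] *= 2.0f;
}

/* Dispatch through a function pointer that defaults to the portable path. */
static void (*scale)(float *, int) = scale_c;

static void init_dispatch(void)
{
    if (av_get_cpu_flags() & AV_CPU_FLAG_SSE2)
        scale = scale_sse2;
}
#endif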
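
/* The <endian.h> approach mentioned above might look like this (a sketch;
 * it assumes, per the commit message, that _GNU_SOURCE or _BSD_SOURCE is
 * defined before any system header; also guarded out). */
#if 0
#define _GNU_SOURCE
#include <endian.h>

#if BYTE_ORDER == LITTLE_ENDIAN
#define NATIVE_LITTLE_ENDIAN 1   /* hypothetical helper macro */
#else
#define NATIVE_LITTLE_ENDIAN 0
#endif
#endif
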
CpuCaps gCpuCaps;
void GetCpuCaps(CpuCaps *c)
{
memset(c, 0, sizeof(*c));
int flags = av_get_cpu_flags();
#if ARCH_X86
c->hasMMX = flags & AV_CPU_FLAG_MMX;
c->hasMMX2 = flags & AV_CPU_FLAG_MMX2;
c->hasSSE = flags & AV_CPU_FLAG_SSE;
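/* The SSE2SLOW/SSE3SLOW flags mark CPUs where these instruction sets are
   supported but usually no faster than the MMX/SSE code paths, so the
   capability is only reported when the fast variant is present. */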
c->hasSSE2 = (flags & AV_CPU_FLAG_SSE2) && !(flags & AV_CPU_FLAG_SSE2SLOW);
c->hasSSE3 = (flags & AV_CPU_FLAG_SSE3) && !(flags & AV_CPU_FLAG_SSE3SLOW);
c->hasSSSE3 = flags & AV_CPU_FLAG_SSSE3;
#endif
}
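
/* Usage sketch: a hypothetical standalone program (not part of this file,
 * guarded out) that fills the global capability table once at startup and
 * then branches on it. */
#if 0
#include <stdio.h>

#include "common/cpudetect.h"

int main(void)
{
    GetCpuCaps(&gCpuCaps);
#if ARCH_X86
    printf("SSE2: %s, SSSE3: %s\n",
           gCpuCaps.hasSSE2 ? "yes" : "no",
           gCpuCaps.hasSSSE3 ? "yes" : "no");
#endif
    return 0;
}
#endif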