We must not try to remap channels with this. Whatever ALSA gives us,
and whatever we do with it, the result will probably be nonsense.
Untested, as I don't have the required hardware.
This used to be required to work around PulseAudio bugs. Even later, when
the bugs were (partially?) fixed in PulseAudio, I had the feeling the
hacks gave better behavior. On the other hand, I couldn't actually
reproduce any bad behavior without the hacks lately. On top of this, it
seems our hacks sometimes perform much worse than PulseAudio's native
implementation (see #1430).
So disable the hacks by default, but still leave the code and the option
in case it still helps somewhere. Also, being able to blame PulseAudio's
code by using its native API is much easier than trying to debug our own
(mplayer2-derived) hacks.
* bits instead of bytes
* add valid_bits argument
* just pass in the mp_chmap and get the number and wavext channel map from that
* indicate valid bits in waveformat_to_str
* make appropriate accommodations in try_format
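Roughly, the waveformat setup then looks like this (a sketch only; the helper
and its parameters are illustrative, not mpv's exact WASAPI code):

    #include <windows.h>
    #include <mmreg.h>

    // Fill a WAVEFORMATEXTENSIBLE from a channel count/mask, sample rate,
    // container bits and valid bits (e.g. 24 valid bits in a 32 bit container).
    static void set_waveformat_sketch(WAVEFORMATEXTENSIBLE *wf,
                                      int channels, DWORD channel_mask,
                                      int samplerate, int bits, int valid_bits,
                                      GUID subformat)
    {
        wf->Format.wFormatTag      = WAVE_FORMAT_EXTENSIBLE;
        wf->Format.nChannels       = channels;
        wf->Format.nSamplesPerSec  = samplerate;
        wf->Format.wBitsPerSample  = bits;
        wf->Format.nBlockAlign     = channels * bits / 8;
        wf->Format.nAvgBytesPerSec = samplerate * wf->Format.nBlockAlign;
        wf->Format.cbSize          = sizeof(*wf) - sizeof(WAVEFORMATEX);
        wf->Samples.wValidBitsPerSample = valid_bits;
        wf->dwChannelMask = channel_mask;   // WAVEEXT mask derived from mp_chmap
        wf->SubFormat     = subformat;      // e.g. KSDATAFORMAT_SUBTYPE_PCM
    }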
This message is printed when the audio device advertised a channel map,
but couldn't set it - which is probably a dmix bug (we'll never know,
ALSA doesn't take bug reports).
Print the requested map, so that the user can (maybe) make the connection
between this message and the channel map that is actually used, which might
be less confusing. Or at least less useless.
Instead of just failing during channel map selection, try to select a close
layout that makes the most sense and upmix/downmix to that instead of failing
AO initialization. The heuristic is rather simple, and uses the following
steps (a rough sketch follows the list):
1) If mono is required, always prefer stereo to a multichannel upmix.
2) Search for an upmix that is an exact superset of the required channel map.
3) Search for a downmix that is the exact subset of the required channel map.
4) Search for either an upmix or downmix that is the closest (minimum difference
of channels) to the required channel map.
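A simplified sketch of that selection order (the mp_chmap type and the helpers
here are stand-ins for mpv's real ones):

    #include <stdbool.h>
    #include <stdlib.h>

    struct mp_chmap { int num; int speaker[16]; };

    // true if every speaker in "small" also occurs in "big"
    static bool superset(const struct mp_chmap *big, const struct mp_chmap *small)
    {
        for (int i = 0; i < small->num; i++) {
            bool found = false;
            for (int j = 0; j < big->num; j++)
                found |= big->speaker[j] == small->speaker[i];
            if (!found)
                return false;
        }
        return true;
    }

    // Pick a fallback from "avail" (n >= 1 entries) for the required map "req".
    static struct mp_chmap pick_fallback(const struct mp_chmap *req,
                                         const struct mp_chmap *avail, int n)
    {
        const struct mp_chmap *best = NULL;

        // 1) mono input: prefer plain stereo over a multichannel upmix
        if (req->num == 1) {
            for (int i = 0; i < n; i++)
                if (avail[i].num == 2)
                    return avail[i];
        }
        // 2) smallest exact superset of the required map (upmix)
        for (int i = 0; i < n; i++)
            if (superset(&avail[i], req) && (!best || avail[i].num < best->num))
                best = &avail[i];
        // 3) largest exact subset of the required map (downmix)
        if (!best)
            for (int i = 0; i < n; i++)
                if (superset(req, &avail[i]) && (!best || avail[i].num > best->num))
                    best = &avail[i];
        // 4) otherwise, minimum difference in channel count, up or down
        if (!best)
            for (int i = 0; i < n; i++)
                if (!best || abs(avail[i].num - req->num) < abs(best->num - req->num))
                    best = &avail[i];
        return *best;
    }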
There were 3 major errors in the previous code:
1) The kAudioDevicePropertyPreferredChannelLayout selector returns a single
layout, not an array.
2) The check for AudioChannelLayout allocation size was wrong (didn't account
for variable sized struct).
3) Didn't query the kAudioDevicePropertyPreferredChannelsForStereo selector
since I didn't know about its existence.
All of these are fixed.
Might help with #1367
Makes all of overlay_add work on windows/mingw.
Since we now don't explicitly check for mmap() anymore (it's always
present), this also requires us to make af_export.c compile, but I
haven't tested it.
AudioChannelLayout uses a trailing variable sized array so we need to
query CoreAudio for the size of the struct it is going to need (or the
conversion of that particular layout would fail).
Fixes #1366
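For illustration, the query pattern looks roughly like this (device selection
and most error handling omitted; not mpv's exact code):

    #include <CoreAudio/CoreAudio.h>
    #include <stdlib.h>

    static AudioChannelLayout *get_preferred_layout(AudioDeviceID device)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioDevicePropertyPreferredChannelLayout,
            .mScope    = kAudioDevicePropertyScopeOutput,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        // Ask for the size first: the struct ends in a variable-length array
        // of AudioChannelDescription, so its size depends on the layout.
        UInt32 size = 0;
        if (AudioObjectGetPropertyDataSize(device, &addr, 0, NULL, &size) != noErr)
            return NULL;
        AudioChannelLayout *layout = malloc(size);
        if (!layout ||
            AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, layout) != noErr) {
            free(layout);
            return NULL;
        }
        return layout;   // caller frees
    }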
snd_pcm_prepare() was not always called, which could result in an
infinite loop.
Whether snd_pcm_prepare() was actually called depended on whether the
device was a hw device (or on other characteristics, via
snd_pcm_hw_params_can_pause()), and reproducing the bug required a real
suspend (annoying for testing), so it was somewhat tricky to trigger
without knowing these things.
When setting the ALSA channel map, we never actually set the map we got
from ALSA directly, but convert it to mpv's, and then back to ALSA's.
mpv and ALSA use different conventions for mono, and there is already an
exception for ALSA->mpv, but not mpv->ALSA.
This was only added recently (c1e97161) as an attempt to minimize the
bad impact of channel layout device aliases. But use of these was
removed in commit 49df0132. Now this code does pretty much nothing, and
shouldn't be needed anymore. It does something when using spdif, but
this fallback won't work anyway.
The "old" method (before the ALSA channel map API) used device aliases
like "surround51" to set the channel layout. The "interesting" part was
that these devices usually redirect to a hardware device. This means
playing stereo would lead you to the "default" device (dmix), while e.g.
5.1 to "surround51", which automatically takes care of the fact that
dmix can't do 5.1.
This is pretty much nonsense, though. It shouldn't depend on the damn
input media file whether the player is going to use shared access (dmix)
or exclusive access (direct hw device).
As a consequence, by default ao_alsa will do only what dmix can do. If
the user actually wants multichannel, he has to select a suitable hw
device with --audio-device. From there on, the correct speaker mapping
will be ensured via the channel mapping API.
The change is preparation for making multichannel output the default (as
far as supported by the audio output API). Of the common APIs, only ALSA
messes up beyond repair, so I feel like this change is needed.
On ancient alsa-lib versions, only stereo and mono can be played with
this branch.
dmix reports channel layouts it doesn't support. The rest of the
technical part of the story is in the code comment.
This seems to be the only reasonable way to fallback from trying to
initialize certain devices (like dmix) with multichannel audio. We could
probably add support for such padding channels to our audio chain or to
ao_alsa itself, but this would probably be much more work than this
commit.
What dmix does is probably a bug. I've tried to report it to ALSA. They
have a link on their website to a bug tracker, but it's a dead link, and
has been for years. I've posted to alsa-devel, but received no reply.
I'm thus assuming this absolutely retarded behavior is by design, and
nothing will happen to improve upon it.
I'm considering sending Lennart Poettering a "thank you" email, because
with PulseAudio, multichannel audio just works (although some other
things just don't work).
Based on patch by Yuriy Kaminskiy [yumkam gmail].
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@37330 b3059339-0415-0410-9bf9-f77b7e298cf2
Signed-off-by: wm4 <wm4@nowhere>
Whether we print it as warning or error doesn't really matter; we
continue anyway. (I don't actually know what the implications of running
in non-blocking mode are; for what it's worth, when I tested with
explicitly changing to non-blocking, it seemed to work fine anyway, so
don't change that part.)
ALSA returns "FL" as channel layout when trying to play mono. mpv and
libavresample don't like this; in particular, using libavresample to
convert stereo to "FL" fails.
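One plausible shape of the workaround, assuming mpv's chmap.h types (not
necessarily the exact fix):

    // A 1-channel "FL" map coming from ALSA is reinterpreted as mono (FC,
    // mpv's convention) before it reaches the resampler.
    static void sanitize_mono_map(struct mp_chmap *map)
    {
        if (map->num == 1 && map->speaker[0] == MP_SPEAKER_ID_FL)
            map->speaker[0] = MP_SPEAKER_ID_FC;
    }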
If no-block was given, the device would be opened with SND_PCM_NOBLOCK.
Also, after opening, blocking mode was unconditionally enabled anyway
with snd_pcm_nonblock(). Further, if opening with SND_PCM_NOBLOCK
failed, opening was retried without this flag.
This doesn't make any sense to me, and I've never heard of someone using
this suboption. I suspect it has to do with ancient ALSA bugs or API
caveats. Remove it and simplify the code.
ALSA is crap. It's impossible to make multichannel playback just do the
right thing. dmix (the default on most distros) can do stereo only, and
will refuse to play multichannel. On the other hand, if you try like mpv
(and mplayer) to open a multichannel device (like "surround51" etc.),
this will actually open a hardware device, which will either fail if
dmix is active, or block out dmix if opening succeeds.
This commit falls back to "default" (i.e. dmix) if opening a
multichannel device fails, which is a tiny step towards the right
behavior. (Although fixing it fully is impossible.)
This could trigger an assertion when using ao_alsa or ao_coreaudio. The
code was simply assuming the number of channel maps was bounded
statically (which was true at first in both AOs).
Fix by using dynamic memory allocation. It needs to be explicitly
enabled by the AOs by setting a temp context, because otherwise the
memory couldn't be freed. (Or at least this seems to be the most elegant
solution.)
Fixes #1306.
Before it used whatever was in ao->format and changed the bits even
though this might have nothing to do with the actual WAVEFORMAT
negotiated with WASAPI.
For example, if the initial ao->format was a float and we had set the
WAVEFORMAT to s24, this would create a non-existent float24 format.
Worse, it might put a u16 into ao->format when WAVEFORMAT described s16.
WASAPI doesn't support unsigned at all as far as I can tell.
This involved inverting the logic of find_formats, enumerate_devices
and wasapi_fill_VistaBlob. The latter two were trivial as their return
values were not actually checked (to be fixed in a later
commit).
Before, these definitions were incorrectly guarded by an #ifdef, but
since they aren't macros, this would never be true, so if they were ever
added to the mingw headers we would have problems.
Rename the KSDATAFORMAT constants with the same mp prefix for consistency.
Also use DEFINE_GUID rather than defining the bare structure.
...because everything is terrible.
strerror() is not documented as having to be thread-safe by POSIX and
C11. (Which is pretty much bullshit, because both mandate threads and
some form of thread-local storage - so there's no excuse why
implementations couldn't implement this in a thread-safe way. Especially
with C11 this is ridiculous, because there is no way to use threads and
convert error numbers to strings at the same time!)
Since we heavily use threads now, we should avoid unsafe functions like
strerror().
strerror_r() is in POSIX, but GNU/glibc deliberately fucks it up and
gives the function different semantics than the POSIX one. It's a bit of
work to convince this piece of shit to expose the POSIX standard
function, and not the messed up GNU one.
strerror_l() is also in POSIX, but only since the 2008 standard, and
thus is not widespread.
The solution is using avlibc (libavutil, by its official name), which
handles the unportable details for us, mostly. We avoid some pain.
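A minimal sketch of the libavutil route, assuming an errno value is what
needs to be formatted (the wrapper name is illustrative):

    #include <stdio.h>
    #include <libavutil/error.h>

    // av_strerror() writes into a caller-provided buffer, so it is safe to
    // use from multiple threads, unlike plain strerror().
    static const char *mp_strerror_sketch(int errnum, char *buf, size_t size)
    {
        if (av_strerror(AVERROR(errnum), buf, size) < 0)
            snprintf(buf, size, "unknown error %d", errnum);
        return buf;
    }

Typical use: char b[80]; mp_strerror_sketch(errno, b, sizeof(b));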
This seems safer: otherwise, opening the AO could randomly fail if the
audio format happens to not be float.
Unfortunately, this only works if the user does not select a device.
Since ALSA devices are arbitrary strings, including plugins with complex
parameters, it's not trivial (and maybe even impossible) to edit the string
in a way that adds the "plug" plugin.
With --audio-device, it would be safe for users to select either
"default" or one of the "plughw" devices. Everything else seems
questionable.
Use the ALSA channel map API for querying and selecting supported
channel maps.
Since we (probably?) want to be compatible with ALSA versions before the
change, we still try to select the device name by channel map, and open
that device. There's no way to negotiate a channel map before opening,
so we're stuck with this approach. Fortunately, it seems these devices
allow selecting and setting any other supported channel layout, so maybe
this is not an issue at all. In particular, this avoids selecting the
default (dmix) device, which can only do stereo.
Most code is based on Martin Herkt <lachs0r@srsfckn.biz>'s alsa_ng
branch, with heavy modifications.
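For reference, the relevant alsa-lib calls look roughly like this (needs
alsa-lib 1.0.27 or newer; error handling omitted, not mpv's actual code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <alsa/asoundlib.h>

    static void chmap_sketch(snd_pcm_t *pcm)
    {
        // List the layouts the device claims to support.
        snd_pcm_chmap_query_t **maps = snd_pcm_query_chmaps(pcm);
        for (int i = 0; maps && maps[i]; i++) {
            snd_pcm_chmap_t *m = &maps[i]->map;
            printf("%u channels, first position: %s\n",
                   m->channels, snd_pcm_chmap_name(m->pos[0]));
        }
        snd_pcm_free_chmaps(maps);

        // After snd_pcm_hw_params() has fixed the channel count, request a
        // concrete map - plain stereo as an example here.
        snd_pcm_chmap_t *want = calloc(1, sizeof(*want) + 2 * sizeof(unsigned int));
        want->channels = 2;
        want->pos[0] = SND_CHMAP_FL;
        want->pos[1] = SND_CHMAP_FR;
        snd_pcm_set_chmap(pcm, want);
        free(want);
    }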
Don't crash if no fallback channel layout could be found (caller can't
handle NULL return from select_chmap()). Apparently this could never
actually happen, though.
Don't treat snd_pcm_hw_params_set_periods_near() failure as fatal error.
Same deal as with snd_pcm_hw_params_set_buffer_time_near().
Actually free channel maps returned by snd_pcm_get_chmap().
Adjust some messages.
No functional changes.
ALSA_PCM_NEW_HW_PARAMS_API was a pre-ALSA 1.0.0 thing and does nothing
with modern ALSA. It stopped being necessary about 10 years ago.
3 functions are moved to avoid forward references.
If ALSA reports a channel map, and it looks like it makes sense (i.e.
could be converted to mpv channel map, and the channel count matches),
then use that instead of the channel map we are assuming.
This is based on code written by lachs0r (alsa_ng branch).
The caller set up the "start" pointer array using the number of planes,
the encode() function used the number of channels. This copied
uninitialized values for packed formats, which makes Coverity warn.
From what I understand, the division is there to convert the value from
seconds to milliseconds. Hard to tell whether the "rounding" was
intentional or not; I'm betting on "not".
Found by Coverity.
When the audio thread fails to properly init, it signals failure
to the main thread, AND THEN starts to clean up. For this to work,
ao_init callback must not return until the thread's cleanup is finished.
This is correctly handled in the ao_uninit callback by waiting for
the thread to exit, so just call that to clean up the main thread.
I have no idea why I didn't do this in the first place.
dsound was set as default because there were some hard-to-fix problems
with wasapi. These problems have probably been fixed now, so let's try
wasapi as the default again.
Even with change notifications, there are still (rare) cases when the
feed thread gets AUDCLNT_E_DEVICE_INVALIDATED. So handle failures in
thread_feed by requesting ao_reload.
Request a reload on changes to PKEY_AudioEngine_DeviceFormat, device status,
and the default device. Call ao_reload directly in the change_notify
"methods". This requires keeping a device enumerator around for the duration
of execution, rather than just for initially querying devices.
Implement skeleton IMMNotificationClient to watch for changes in the
sound device. This will make recovery possible from changes in shared
mode sample rate, bit depth, "enhancements"/effects, and even graceful
device removal.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd371417%28v=vs.85%29.aspx
Signed-off-by: Kevin Mitchell <kevmitch@gmail.com>
Before, failures, particularly in the thread loop init, could lead to a
bad state for the duration of mpv's execution. Make sure that
everything that was initialized gets properly and safely
uninitialized.
When initialization failed, vo_lavc may cause an irrecoverable state in
the ffmpeg-related structs. Therefore, we reject additional
initialization attempts at least until we know a better way to clean up
the mess.
ao_lavc currently cannot be initialized more than once, yet it's good to
do consistent changes there as well.
Also, clean up uninit-after-failure handling to be less spammy.
The mp_audio_from_avframe() function requires the AVFrame to be
refcounted, and merely increases its refcount while referencing the same
data. For non-refcounted frames, it simply did nothing and potentially
would make the caller pass around a frame with dangling pointers.
(libavcodec should always return refcounted frames, but it's not clear
what other code does; and also the function should simply work, instead
of having weird requirements on its arguments.)
This rewrites the audio decode loop to some degree. Audio filters don't
do refcounted frames yet, so af.c contains a hacky "emulation".
Remove some of the weird heuristic-heavy code in dec_audio.c. Instead of
estimating how much audio we need to filter, we always filter full
frames. Maybe this should be adjusted later: in case filtering increases
the amount of audio data, we should try not to buffer too much
filter output by reducing the input that is fed at once.
For ad_spdif.c and ad_mpg123.c, we don't avoid extra copying yet - it
doesn't seem worth the trouble.
Use a pseudo-filter when changing speed with resampling, instead of
somehow changing a samplerate somewhere. This uses the same underlying
mechanism, but is a bit more structured and cleaner. It also makes some
of the following changes easier.
Since we now always use filters to change audio speed, move most of the
work set_playback_speed() does to recreate_audio_filters().
A helper to allocate refcounted audio frames from a pool. This will
replace the static buffer many audio filters use (af->data), because
such static buffers are incompatible with refcounting.
A first step towards refcounted audio frames.
Amazingly, the API just does what we want, and the code becomes
simpler. We will need to NIH allocation from a pool, though.
If the audio callback suddenly stops, and the AO provides no "reset"
callback, then reset() could deadlock by waiting on the audio callback
forever.
The waiting was needed to enter a consistent state, where the audio
callback guarantees it won't access the ringbuffer. This in turn is
needed because mp_ring_reset() is not concurrency-safe.
This active waiting is unavoidable. But the way it was implemented, the
audio callback had to call ao_read_data() at least once when reset() is
called. Fix this by making ao_read_data() set a flag upon entering and
leaving, which basically turns p->state into some sort of spinlock.
The audio callback actually never needs to spin, because there are only
2 states: playing audio, or playing silence. This might be a bit
surprising, because usually atomic_compare_exchange_strong() requires a
retry-loop idiom for correct operation.
This commit is needed because ao_wasapi can (or will in the future)
randomly stop the audio callback in certain corner cases. Then the
player would hang forever in reset().
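The gist of the mechanism as a simplified sketch (state names and functions
are illustrative, not the actual ao_pull.c code):

    #include <stdatomic.h>

    enum { STATE_NONE, STATE_PLAY, STATE_BUSY };
    static _Atomic int state = STATE_NONE;

    // Audio callback side: enter the "busy" section only if we are playing.
    // There are just two outcomes (produce audio or produce silence), so no
    // retry loop around the compare-exchange is needed.
    static int read_data_sketch(void *buf, int samples)
    {
        (void)buf; (void)samples;
        int expected = STATE_PLAY;
        if (!atomic_compare_exchange_strong(&state, &expected, STATE_BUSY))
            return 0;                       // not playing: caller outputs silence
        int copied = 0;                     // ...would read from the ring buffer...
        atomic_store(&state, STATE_PLAY);   // leave the "busy" section
        return copied;
    }

    // reset() side: spin until the callback is provably outside the ring
    // buffer code; only then is the non-thread-safe mp_ring_reset() safe.
    static void reset_sketch(void)
    {
        for (;;) {
            int expected = STATE_PLAY;
            if (atomic_compare_exchange_strong(&state, &expected, STATE_NONE))
                break;                      // callback was idle, now stopped
            if (atomic_load(&state) == STATE_NONE)
                break;                      // already stopped
            // otherwise the callback is inside read_data_sketch(); keep spinning
        }
    }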
This is what you would expect. Before this commit, each
ao_request_reload() call would just queue a reload command, and then
recreate the AO for the number of times the function was called.
Instead of sending a command, introduce some sort of event retrieval
mechanism. At least for the reload case, use atomics, because we're too
lazy to setup an extra mutex.
The main need I see for this is with libmpv - it would be confusing if
some application showed up as "mpv" on whateverthehell PulseAudio uses
it for (generally it does show up on various PA GUI tools).
The intention is to avoid using the timeout-based fallback.
There's some minor hope that this will help with OpenBSD (see #1239),
although it probably won't.
Some chance that this will cause trouble with obscure OSS
implementations or emulations.
If calling ao->driver->wait() fails, we need to fallback to timeout-
based waiting. But it could be that at this point, the mutex was already
released (and then re-acquired). So we need to recheck the condition in
order to avoid missed wakeups.
This probably wasn't an actually occurring problem, but still could
cause a small race-condition window if the dynamic fallback is actually
used.
Apparently this can "sometimes" return an error. In my opinion, this
should never return an error: neither the semantics of the function,
nor the ALSA documentation or ALSA sample code seem to indicate that
a failure is to be expected. I'm not perfectly sure about this though
(I blame ALSA being a weird, big, underdocumented API).
Since it causes problems for some users, and since there is really no
reason why we should abort on such an error, turn it into a warning.
Fixes#1231.
Since the list associated with --audio-device is supposed to enable
simple user-selection, it doesn't make much sense to include overly
special things like ao_pcm or ao_null in the list. Specifically,
ao_pcm is harmful, because it will just dump all audio to a file
named audiodump.wav in the current working directory. The user can't
choose the filename (it can be customized, but not through this
option), and the working directory might be essentially random,
especially if this is used from a GUI.
Exclude "strange" entries. We reuse the fact that there's already a
simple list ordered by auto-probe priority in order to avoid having to
add an additional flag. This is also why coreaudio_exclusive was moved
above ao_null: ao_null ends auto-probing and marks the start of
"special" outputs, which don't show up on the device, but we want
coreaudio_exclusive to be selectable (I think).
Move it above ao_null, so that it can be selected during auto-probing
(even if it's only last). I see no reason why it should not be included,
and it makes the following commit slightly more elegant. (See
explanations there.)
Especially with other components (libavcodec, OSX stuff), the thread
list can get quite populated. Setting the thread name helps when
debugging.
Since this is not portable, we check the OS variants in waf configure.
old-configure just gets a special-case for glibc, since doing a full
check here would probably be a waste of effort.
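A sketch of the per-OS differences those checks guard against (the wrapper
name is made up):

    #define _GNU_SOURCE            // pthread_setname_np() needs this on glibc
    #include <pthread.h>

    static void set_thread_name_sketch(const char *name)
    {
    #if defined(__APPLE__)
        pthread_setname_np(name);                      // OSX: current thread only
    #elif defined(__NetBSD__)
        pthread_setname_np(pthread_self(), "%s", (void *)name);
    #else
        pthread_setname_np(pthread_self(), name);      // glibc >= 2.12, max 15 chars
    #endif
    }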
While conceptually this sink stuff in PulseAudio does just the right
thing, actually listing the sinks is unbelievable complicated. Not only
is the idea that listing them should happen asynchronously completely
bullshit (who the fuck runs the PulseAudio server on a separate
computer), but the way this is done is full of bullshit too. Why
separate callbacks for each device? Why this obtuse mainloop shit?
Especially the mainloop shit makes it actively worse than doing things
manually with pthread primitives, and the reason for that (different
mainloop implementations for GUIs?) is laughable too. It's like they
chose the most complicated API possible just because they attempted
to "abstract" basic mechanisms in order to handle "everything". While
I don't claim to design the best APIs, this API is fucking terrible
without any excuse. (End of rant.)
All the dumb crap in pa_init_boilerplate() is needed to talk to the
audio server at all. Might also fix some subtle bugs in the init code
(which is strange, because the original file was contributed by the
devil himself).
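To illustrate what "just list the sinks" involves with this API
(threaded-mainloop variant; error handling omitted, not mpv's exact code):

    #include <stdio.h>
    #include <pulse/pulseaudio.h>

    // Called once per sink, then a final time with eol set.
    static void sink_cb(pa_context *c, const pa_sink_info *i, int eol, void *userdata)
    {
        (void)c;
        pa_threaded_mainloop *ml = userdata;
        if (eol) {
            pa_threaded_mainloop_signal(ml, 0);   // wake up the waiting thread
            return;
        }
        printf("%s: %s\n", i->name, i->description);
    }

    static void list_sinks_sketch(pa_threaded_mainloop *ml, pa_context *ctx)
    {
        pa_threaded_mainloop_lock(ml);
        pa_operation *op = pa_context_get_sink_info_list(ctx, sink_cb, ml);
        while (pa_operation_get_state(op) == PA_OPERATION_RUNNING)
            pa_threaded_mainloop_wait(ml);        // signaled from sink_cb()
        pa_operation_unref(op);
        pa_threaded_mainloop_unlock(ml);
    }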
The one in msg.c was mistakenly removed with commit e99a37f6.
I didn't actually test the change in ao_sndio.c (but obviously "ap"
shouldn't be static).
Don't wait after the audio thread has pushed the remaining audio to the
AO. Avoids hard hangs if the heuristic fails completely (could still
happen if get_delay returns absurd values).
CC: @mpv-player/stable
Since the internal AO driver API has no proper way to determine EOF, we
need to guess by querying get_delay. But some AOs (e.g. ao_pulse with
no-latency-hacks set) may never reach 0, maybe because they naively add
the latency to the buffer level. In this case our heuristic can break.
Fix by always using the delay to estimate the EOF time. It's not even
that important - it's mostly used to avoid blocking draining. So this
should be ok.
CC: @mpv-player/stable (maybe)
Unfortunately, ALSA is particularly bad with this, because mpv has to
add all sorts of magic crap to the device name to make things work. The
device selection overrides this, so explicitly selecting devices will
most likely break your audio. This has yet to be solved.
This function is available starting with PulseAudio 2.0, while we only
require 1.0. This broke compilation on Ubuntu 12.04.5 LTS.
Use our own function to calculate the buffer size, which is actually
simpler and needs slightly less code.
Hopefully fixes #1154.
CC: @mpv-player/stable
It was more complicated than it had to be: the audio thread already
determines whether audio has ended, so we can use that. Remove the
separate logic for draining.
Commit 957097 attempted to use PA_STREAM_FAIL_ON_SUSPEND to make
ao_pulse exit if the stream was started suspended.
Unfortunately, PA_STREAM_FAIL_ON_SUSPEND is active even during playback.
If you pause mpv, pulseaudio will close the actual audio device after a
while (or something like this), and unpausing won't work. Instead, it
will spam "Entity killed" error messages.
Undo this change and check for suspended audio manually during init.
CC: @mpv-player/stable
Sometimes, ao_pulse starts in suspended mode, which means playback is
essentially paused in pulseaudio. This gives the impression that mpv is
hanging, since it times video against the audio playback progress, and
audio never makes progress in this state.
I'm not sure if this will help - possibly it does with mixed
pulseaudio/alsa setups. However, if the alsa setup has the pulseaudio
plugin, alsa will hang too. But there's still a chance we get less
blame for pulseaudio messes.
This gets rid of this warning:
Could not update timestamps for skipped samples.
This required an API addition to FFmpeg (otherwise it would have to do
arithmetic on the timestamps itself), so whether it works depends
on the FFmpeg version.
Although the "af" command already could do this, it seems it's better
to introduce a lower level mechanism for now. This avoids some messy
issues, since that code would recursively call reinit_audio_chain().
To be used by the next commit.
There's no real reason why audio_init_filter() should exist. Just use
af_init or af_reinit directly. (We lose a useless message; the same
information is printed in a quite close place with more details.)
Requires less code, and the way the filter chain is marked as having
failed to initialize allows just switching off audio instead of
crashing if trying to insert a volume filter in mixer.c fails, and
recreating the old filter chain fails too.
libsndio has absolutely no mechanism to discard already written audio
(other than SIGKILLing the sound server). sio_stop() will always block
until all audio is played. This is a legitimate design bug.
In theory, we could just not stop it at all, so if the player is e.g.
paused, the remaining audio would be played. When resuming, we would
have to do something to ensure get_delay() returns the right value. But
I couldn't get it to work in all cases.
get_delay needs to report the current audio buffer status. It's
important for A/V sync that this information is current, but functions
which update it were called on play() or get_space() calls only.
This was in bytes, but it's more convenient to use samples (or frames;
in any case the smallest unit of audio that includes all channels).
Remove the ao->bps line too; it will be set after init() returns.
Let codec_tags.c do the messy mapping.
In theory we could simplify further by making demux_mkv.c directly use
codec names instead of the MPlayer-inherited "internal FourCC" business,
but I'd rather not touch this - it would just break things.
For a while, we used this to transfer PCM from demuxer to the filter
chain. We had a special "codec" that mapped what MPlayer used to do
(MPlayer passes the AF sample format over an extra field to ad_pcm,
which specially interprets it).
Do this by providing a mp_set_pcm_codec() function, which describes a
sample format in a generic way, and sets the appropriate demuxer header
fields so that libavcodec interprets it correctly. We use the fact that
libavcodec has separate PCM decoders for each format. These are
systematically named, so we can easily map them.
This has the advantage that we can change the audio filter chain as we
like, without losing features from the "rawaudio" demuxer. In fact, this
commit also gets rid of the audio filter chain formats completely.
Instead have an explicit list of PCM formats. (We could even just have
the user pass libavcodec PCM decoder names directly, but that would be
annoying in other ways.)
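The systematic decoder names make the mapping almost mechanical; a sketch
(the real mp_set_pcm_codec() stores the result in the demuxer's codec header
rather than a string buffer):

    #include <stdbool.h>
    #include <stdio.h>

    // Build a libavcodec PCM decoder name such as "pcm_s16le", "pcm_u8" or
    // "pcm_f32be" from a generic sample format description.
    static void pcm_codec_name(char *buf, size_t size,
                               bool is_signed, bool is_float, int bits, bool is_be)
    {
        snprintf(buf, size, "pcm_%c%d%s",
                 is_float ? 'f' : (is_signed ? 's' : 'u'),
                 bits,
                 bits == 8 ? "" : (is_be ? "be" : "le"));
    }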
Digital pass-through was probably broken. Possibly fix it (no way to
test). This also should make the logic slightly saner.
Fortunately, it's unlikely that anyone who uses OSS has a spdif setup.
Commit 5b5a3d0c broke this. The really funny thing is that this code was
actually always under "#if BYTE_ORDER == BIG_ENDIAN". The breaking
commit just edited this code slightly, but it must have failed to
compile on big endian long before (since over 1 year ago, commit d3fb58).
Should be able to pass through AC3, DTS, and others.
It seems PulseAudio wants players to fallback to PCM on certain events
signaled by the server, but we don't implement that. There's not much
documentation available anyway.
Before this commit, there was AF_FORMAT_AC3 (the original spdif format,
used for AC3 and DTS core), and AF_FORMAT_IEC61937 (used for AC3, DTS
and DTS-HD), which was handled as some sort of superset for
AF_FORMAT_AC3. There also was AF_FORMAT_MPEG2, which used
IEC61937-framing, but still was handled as something "separate".
Technically, all of them are pretty similar, but may use different
bitrates. Since digital passthrough pretends to be PCM (just with
special headers that wrap digital packets), this is easily detectable by
the higher samplerate or higher number of channels, so I don't know why
you'd need a separate "class" of sample formats (AF_FORMAT_AC3 vs.
AF_FORMAT_IEC61937) to distinguish them. Actually, this whole thing is
just a mess.
Simplify this by handling all these formats the same way.
AF_FORMAT_IS_IEC61937() now returns 1 for all spdif formats (even MP3).
All AOs just accept all spdif formats now - whether that works or not is
not really clear (seems inconsistent due to earlier attempts to make
DTS-HD work). But on the other hand, enabling spdif requires manual user
interaction, so it doesn't matter much if initialization fails in
slightly less graceful ways if it can't work at all.
At a later point, we will support passthrough with ao_pulse. It seems
the PulseAudio API wants to know the codec type (or maybe not - feeding
it DTS while telling it it's AC3 works), so add separate formats for each
codec. While this is reminiscent of the earlier chaos, it's stricter, and most
code just uses AF_FORMAT_IS_IEC61937().
Also, modify AF_FORMAT_TYPE_MASK (renamed from AF_FORMAT_POINT_MASK) to
include special formats, so that it always describes the fundamental
sample format type. This also ensures valid AF formats are never 0 (this
was probably broken in one of the earlier commits from today).
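Schematically, the classification now works like this (the bit values below
are made up for illustration, not mpv's actual definitions):

    // Illustrative bit layout (not mpv's real values): every AF format carries
    // its fundamental type in the low bits, so the spdif check is a plain mask
    // test that also matches e.g. wrapped MP3.
    #define AF_FORMAT_TYPE_MASK     0x00f
    #define AF_FORMAT_TYPE_INT      0x001
    #define AF_FORMAT_TYPE_FLOAT    0x002
    #define AF_FORMAT_TYPE_SPECIAL  0x003   // IEC 61937 wrapped bitstreams

    #define AF_FORMAT_IS_IEC61937(f) \
        (((f) & AF_FORMAT_TYPE_MASK) == AF_FORMAT_TYPE_SPECIAL)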
This code tried to play with the format bits, and potentially could
create invalid formats, or reinterpret obscure formats in unexpected
ways.
Also there was an abort() call if the winapi or mpv used a format with
unexpected bit-width. This could probably easily happen; for example,
mpv supports at least one 64 bit format. And what would happen on 8 bit
formats anyway?
Untested.