If sys/soundcard.h is actually linux/soundcard.h, then it supports only
the OSSv3 API. This may happen when OSSLIBDIR == /usr and the
sys/soundcard.h shipped by glibc was never replaced.
However, since fa620ff waf prefers the native implementation, which is
inferior on Linux. To fix this, make waf prefer oss-audio-4front. It's
quite unusual to have 4Front OSS installed where the native
implementation is superior, anyway.
Signed-off-by: bugmen0t <@>
Make the false positives path also undef the 4Front define.
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
Fixes #396
Bug introduced by commit 6fb020f5. It doesn't always happen, since it is
caused by the playloop and cocoa UI code running in separate threads.
Fixes #398.
Uncompressed rar archives can be transparently opened, but the player
doesn't have the direct filename (only something starting with
rar://... instead). This will lead to external subtitles not being
loaded.
This doesn't handle multi-volume rar files, but in that case just use
the --autosub-match=fuzzy option.
Fixes #397 on github.
If only coreaudio was activated and not cocoa, the build failed due to
missing CoreFoundation.
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
Fixes #395
They didn't do anything.
vf_screenshot.c actually did release the previous image, but that's not
really required. At worst you could take a screenshot and get an old
frame when there's no new frame yet.
The --flip option flipped the image upside-down, by trying to use VO
support, or if not available, by inserting a video filter. I'm not sure
why it existed. Maybe it was important in ancient times when VfW-based
decoders output an image this way (but even then, flipping an image is a
free operation, achieved by negating the stride).
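To illustrate the stride trick, here is a minimal sketch with a made-up
image struct (illustration only, not mpv's mp_image):

  #include <stddef.h>
  #include <stdint.h>

  // Hypothetical image struct for illustration only; not mpv's mp_image.
  struct image {
      uint8_t *planes[4];
      int stride[4];        // bytes per line; may be negative
      int plane_height[4];  // lines per plane (differs for subsampled chroma)
      int num_planes;
  };

  // Flip vertically without copying a single pixel: point each plane at its
  // last line and negate the stride, so line n of the flipped image reads
  // line (height - 1 - n) of the original buffer.
  static void flip_image(struct image *img)
  {
      for (int p = 0; p < img->num_planes; p++) {
          img->planes[p] += (ptrdiff_t)img->stride[p] * (img->plane_height[p] - 1);
          img->stride[p] = -img->stride[p];
      }
  }
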
One nice thing about this is that it provided a possible path for
implementing video orientation, which is a feature we should probably
support eventually. The important part is that it would come for free for
VOs that support it, and would work even with hardware decoding.
But for now get rid of it. It's useless, trivial, stands in the way, and
supporting video orientation would require solving other problems first.
In particular, this disables mpeg4. There are some files out there that
use GMC, a rarely used and ineffective feature, which is not
supported by most hardware decoders. In these cases the hw decoder
outputs garbage, while software decoding works perfectly fine. We can't
really fall back to software decoding in these cases, because we don't
know that something is wrong in the first place. I can't see any
advantages of hw decoding of mpeg4, so it's better to disable it.
This can be reproduced with:
mpv short.wav -af 'lavfi="aecho=0.8:0.9:5000|6800:0.3|0.25"'
An audio file that is just 1-2 seconds long should play for 8-9 seconds,
with audible echo towards the end.
The code assumes that when filtering with AF_FILTER_FLAG_EOF set, the
filter will either produce output or has flushed all remaining data. I'm
not really sure whether this works if there are multiple filters with
EOF handling in the chain. To handle it correctly, af_lavfi should retry
filtering if 1. EOF flag is set, 2. there were input samples, and 3. no
output samples were produced. But currently it seems to work well enough
anyway.
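Roughly, the retry could look like this (a sketch only, assuming the
filter entry point takes (af, data, flags) — the flags parameter added
in the commit below — and that struct mp_audio exposes a sample count;
the wrapper name is made up and this is not the actual af_lavfi code):

  #include <stdbool.h>

  // Sketch of the retry described above; not the actual af_lavfi code.
  static int filter_with_eof_retry(struct af_instance *af,
                                   struct mp_audio *data, int flags)
  {
      bool had_input = data->samples > 0;
      int r = af->filter(af, data, flags);
      if (r < 0)
          return r;
      if ((flags & AF_FILTER_FLAG_EOF) && had_input && data->samples == 0) {
          // 1. EOF is set, 2. there was input, 3. no output was produced:
          // filter again with the now empty buffer to drain what the filter
          // still has queued internally.
          r = af->filter(af, data, flags);
      }
      return r;
  }
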
The new signature is closer to how it actually works, and someone who is
not familiar with the API might make fewer fatal mistakes with the new
signature than with the old one. Pretty weird.
Do this to sneak in a flags parameter, which will later be used to flush
remaining data of at least vf_lavfi.
This will make af_channels output a channel layout that is compatible
with any destination layout. Not sure if that's a good idea though,
since the way the AO chooses a layout is perhaps less predictable. On the
other hand, using the old MPlayer standard layouts doesn't make much
sense either. We'll see whether this improves or breaks someone's use
case.
Apparently this stopped working after some planar changes (broken format
negotiation). Radically change option parsing in an incompatible way.
Suggest alternatives to this filter, since it barely has any importance
anymore.
Normally, audio decoders don't have a decoder delay, so the code was
fine. But FFmpeg supports multithreaded decoding for some audio codecs,
which introduces such a delay.
The delay means that we won't get decoded audio for the first few
packets, and that we need to do something to get the trailing audio
still buffered in the decoder when reaching EOF.
Two changes are needed to deal with the delay:
- If EOF is reached, pass a "flush" packet to the decoder to return the
buffered audio. Such a flush packet is automatically set up when
calling mp_set_av_packet() with a NULL packet.
- Use the PTS returned by the decoder, instead of the packet's. This is
important to get correct timestamps for decoded audio. Ignoring this
would result in offsetting the audio playback time by the decoder
delay. Note that we can still use the timestamp of the first packet
to get the timestamp for the start of the audio.
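A sketch of both points, using the libavcodec API of that era
(avcodec_decode_audio4); the mpv-side plumbing is omitted, the helper
name is made up, and the flush packet is built inline here rather than
via mp_set_av_packet():

  #include <math.h>
  #include <libavcodec/avcodec.h>
  #include <libavutil/rational.h>

  // Decode one packet; pass pkt == NULL at EOF to send a flush packet
  // (data = NULL, size = 0), which makes the decoder return audio it still
  // has buffered due to its delay. *out_pts receives the frame PTS in
  // seconds, or NAN if the decoder attached no timestamp.
  static int decode_audio_packet(AVCodecContext *avctx, AVFrame *frame,
                                 AVPacket *pkt, int *got_frame,
                                 double *out_pts)
  {
      AVPacket flush = { .data = NULL, .size = 0 };
      int ret = avcodec_decode_audio4(avctx, frame, got_frame,
                                      pkt ? pkt : &flush);
      if (ret < 0)
          return ret;
      if (*got_frame) {
          // Use the timestamp the decoder attached to the output frame, not
          // the input packet's PTS; with a decoder delay the two differ.
          *out_pts = frame->pkt_pts == AV_NOPTS_VALUE
                     ? NAN : frame->pkt_pts * av_q2d(avctx->pkt_timebase);
      }
      return ret;
  }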