If a frame could be only partially filled with real audio data, the
silence was not written at the correct offset, so the remainder of the
frame could contain garbage.
(This didn't happen in the more common case of playing dummy silence.)
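A minimal sketch of the idea, with hypothetical names (the real code
operates on the AO's own frame buffers):

    #include <string.h>

    /* Hypothetical sketch: pad the unfilled remainder of a frame with silence
     * starting at the correct offset. Writing at the wrong offset (the bug
     * described above) leaves part of the frame containing garbage. */
    static void pad_frame(char *frame, size_t frame_bytes, size_t filled_bytes,
                          char silence_byte)
    {
        if (filled_bytes < frame_bytes)
            memset(frame + filled_bytes, silence_byte, frame_bytes - filled_bytes);
    }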
Used a wrong condition, and I suppose it could crash in some situations.
Change it to lazily initialize the hotplug stuff, like the
audio-device-list property does.
This was matching e.g. both "foo/bar" and "foobar" against "foo", when
only the former should match. This could cause more property
notifications than necessary.
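Roughly, the intended rule looks like this (a standalone sketch with a
hypothetical helper name, not the actual property code):

    #include <stdbool.h>
    #include <string.h>

    /* A prefix should match the property itself and its children
     * ("foo", "foo/bar"), but not merely similarly named ones ("foobar"). */
    static bool prefix_matches(const char *prefix, const char *name)
    {
        size_t len = strlen(prefix);
        if (strncmp(name, prefix, len) != 0)
            return false;
        return name[len] == '\0' || name[len] == '/';
    }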
Absence of license header implies LGPL, as mentioned in the "Copyright"
file. But vaapi.h contains some code taken from the mplayer-vaapi
patch, which was under the typical MPlayer license.
All vo_gl.c related code has been GPL+LGPL dual-licensed. The OSD code
is no exception and is also derived from vo_gl.c. Thus it should have
the same license (although I think technically speaking sub-licensing
it by removing one of the licenses is ok).
Commits 92b27be and f4ce99d removed high-fps logic due to a bug. That bug was
a missing parenthesis around everything after duration >= 0 && ... in the
removed code.
This patch restores the removed code, fixes the bug and then refactors the
code a bit.
This reverts commit f1746741de.
Together with the other revert, this fixes #2023 (the reason being
broken framedrop handling - it was dropping frames when it shouldn't).
read_output_surface() could fail and return NULL.
Also, make sure we don't set the image to a size larger than the
allocated size. Normally this shouldn't happen, but in theory it could
in corner cases; this is for robustness.
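The robustness part amounts to a clamp along these lines (hypothetical
helper; the real code works on the image/surface sizes):

    /* Never report a larger size than what was actually allocated. */
    static void clamp_image_size(int *w, int *h, int alloc_w, int alloc_h)
    {
        if (*w > alloc_w)
            *w = alloc_w;
        if (*h > alloc_h)
            *h = alloc_h;
    }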
This provides a new method for enabling spdif passthrough. The old
method via --ad (--ad=spdif:ac3 etc.) is deprecated. The deprecated
method will probably stop working at some point.
This also supports PCM fallback. One caveat is that it will lose at
least 1 audio packet in doing so. (I don't care enough to prevent this.)
(This is named after the old S/PDIF connector, because it uses the same
underlying technology as far as the higher-level protocol is concerned.
Also, the user should be reminded that passthrough is backwards.)
This effectively deprecates the --ad-spdif-dtshd option by replacing it
with a pseudo decoder. This means ad_spdif will report two decoders,
"dts" and "dts-hd", of which the second simply enables what the option
did. The option itself will actually be deprecated in the next commit.
This makes no sense, because the format can't be converted anyway. It
just sets up the filter chain init code, which will vomit a bunch of
useless and confusing messages. So uninit and fail explicitly when this
happens.
We can't do much in this case, but at least we can avoid calling the
vdpau API functions with too-large sizes. Apparently the API considers
this undefined behavior, and random stuff might happen.
The previous code was not wrong, but I'd claim this makes the code more
robust. If a situation could happen in which the passed surface size is
incorrect, we could have passed too small an image, and
VdpOutputSurfaceGetBitsNative could have randomly overwritten memory.
Check the maximum size of video surfaces, and refuse initialization if
the video is too large for them.
Maybe we could do something more sophisticated, like inserting a
software scaler. On the other hand, this would have a very questionable
benefit, as it would be guaranteed to be too slow.
When starting in paused mode, no audio is written to the device at all,
because writing audio implicitly unpauses the AO. If the file is very
small, and all audio fits within the AO buffer, this accidentally
triggered the EOF condition. (In unpaused mode, it would write all
audio, end playback, and then wait until the AO has everything played.)
This is better, because we now call swr_get_delay() with the output
samplerate, instead of calling it with the input samplerate and then
multiplying the result by the ratio and rounding up.
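For illustration, the difference is roughly the following (a sketch
assuming an initialized SwrContext; not mpv's actual code):

    #include <stdint.h>
    #include <libswresample/swresample.h>

    static int64_t out_delay(struct SwrContext *sctx, int in_rate, int out_rate)
    {
        /* New: ask the resampler for its delay directly in output samples. */
        int64_t new_way = swr_get_delay(sctx, out_rate);

        /* Old: delay in input samples, scaled by the rate ratio and rounded
         * up, which is slightly less accurate. */
        int64_t old_way =
            (swr_get_delay(sctx, in_rate) * out_rate + in_rate - 1) / in_rate;
        (void)old_way;

        return new_way;
    }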
Apparently, just running ./waf and hoping that it will be run with a
Python interpreter doesn't necessarily work. The workaround is pretty
simple and reliable.
The "osd" command cycles between 4 states (OSD level 0-3), which is
probably confusing and inconvenient. OSD levels 0 and 2 are rarely
needed. I would claim there is normally not much of a need to completely
disable OSD by setting level 0 during playback. Level 2 is just slightly
less information than level 3, and I'm not sure why it exists at all.
Change it so that it toggles between level 3 and 1. Note that this
ignores the default OSD level. If the default is 3, the first use of
this key will set it to 3 again. Just assume 1 is the default. If
someone complains, this could be improved.
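In effect, the key now cycles through just two values, as in this sketch
(hypothetical helper; the real binding uses the existing cycle mechanism):

    /* Alternates between OSD levels 3 and 1, starting with 3, regardless of
     * the currently configured default level. */
    static int next_osd_level(void)
    {
        static const int values[] = {3, 1};
        static int pos = 0;
        int level = values[pos];
        pos = (pos + 1) % 2;
        return level;
    }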
There's a short window during loading in which external commands can add
external streams even before the main file is loaded (such as during ytdl
hook execution). The track list is printed every time an external track
is added via commands. This was quite awkward when ytdl was adding
multiple streams, so don't print it at this stage. The tracks are printed
anyway at the end of the loading process.
Listening to kAudioDevicePropertyDeviceHasChanged does not yield any
property change notifications when the device dies. Makes no sense,
but I suppose in CoreAudio logic a dead/removed device can't send
any notifications.
This caused the player to essentially pause playback if the audio
device was removed during playback.
Fix by listening to the kAudioHardwarePropertyDevices property too,
which will actually be sent in this specific case. Then, if
querying the already dead device fails, we know we have to reload.
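Conceptually, the extra listener is installed on the system object,
roughly like this (a sketch with a hypothetical callback; error handling
and the actual device query are omitted):

    #include <CoreAudio/AudioHardware.h>

    static OSStatus device_list_changed(AudioObjectID obj, UInt32 n_addrs,
                                        const AudioObjectPropertyAddress addrs[],
                                        void *ctx)
    {
        /* If querying the currently used device now fails, it is gone and
         * the audio output has to be reloaded. */
        return noErr;
    }

    static OSStatus watch_device_list(void *ctx)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioHardwarePropertyDevices,
            .mScope    = kAudioObjectPropertyScopeGlobal,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        return AudioObjectAddPropertyListener(kAudioObjectSystemObject, &addr,
                                              device_list_changed, ctx);
    }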
In short, instead of letting the coreaudio property listener set atomic
flags (which are then polled), make the property listeners actually
active.
The format change listener used during audio output now simply calls
ao_request_reload() on its own. All code involved is thread-safe, so
there's no need to do it during this audio callback (we assumed the
callback was never run concurrently with itself).
The listener installed temporarily during ca_change_format() is changed
to post a semaphore. Get rid of the weird retry logic and replace it
with a flat loop + timeout. It appears the maximum wait time could be
2500ms; reduce the total timeout to 500ms instead.
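The waiting side then reduces to a single timed loop, roughly as follows
(a generic pthread sketch, not the actual semaphore code; the struct is
assumed to be initialized elsewhere):

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    struct fmt_wait {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        bool changed; /* set by the temporary property listener */
    };

    /* Wait up to 500ms total for the format change notification. */
    static bool wait_format_change(struct fmt_wait *w)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_nsec += 500 * 1000000L;
        deadline.tv_sec += deadline.tv_nsec / 1000000000L;
        deadline.tv_nsec %= 1000000000L;

        pthread_mutex_lock(&w->lock);
        while (!w->changed) {
            if (pthread_cond_timedwait(&w->wakeup, &w->lock, &deadline) != 0)
                break; /* timed out (or error) */
        }
        bool ok = w->changed;
        pthread_mutex_unlock(&w->lock);
        return ok;
    }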
This manually retrieved the remaining audio from the resampler. It
subtly missed a conversion, which could lead to an unsubtle crash.
This could happen if reorder_planes() was supposed to insert NA
channels, and the resampler/actual output format were different.
Simplify it by reusing the normal drain path. One oddness is that
the filter will add an output frame outside of normal filtering,
but that should be fine.
This was missing for the extended deinterlacers.
Unfortunately, these deinterlacers still do not work. The provided future
frame (which is all the deinterlacers want) seems to be correct, though.
One minor behavioral change is that this always keeps the previous frame
for PTS computations. This could be avoided (in order to keep exactly
the same behavior as before), but it seems more elegant and should not
do any harm. (Also, if we really cared about reducing hw frame refs,
a more worthy goal is producing the field output incrementally.)
The mapped data (pointed to by the param variable) is not needed earlier,
so the call can be moved down. Also, this prevents the buffer from
remaining mapped forever if the other vaMapBuffer() call above fails (the
cleanup code forgets to unmap the buffer - this commit makes that
unnecessary).
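The error-path difference boils down to mapping the second buffer only
after the first map has succeeded, e.g. (a hypothetical reduction of the
real sequence):

    #include <va/va.h>

    static VAStatus map_both(VADisplay dpy, VABufferID first, VABufferID second,
                             void **first_data, void **second_data)
    {
        VAStatus st = vaMapBuffer(dpy, first, first_data);
        if (st != VA_STATUS_SUCCESS)
            return st; /* nothing mapped yet, nothing to clean up */

        st = vaMapBuffer(dpy, second, second_data);
        if (st != VA_STATUS_SUCCESS)
            vaUnmapBuffer(dpy, first); /* don't leave the first buffer mapped */
        return st;
    }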
This used a do-while loop, which runs only once, as a replacement for a
cleanup goto. While this is ok, doing a goto directly is easier to
follow and closer to idiomatic C. But mainly, remove it so that the
indentation can be reduced.
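For comparison, the pattern change looks like this (a generic example
with hypothetical helpers, not the actual code):

    #include <stdlib.h>

    static void *acquire_resource(void) { return malloc(16); }
    static int use_resource(void *r) { return r != NULL; }
    static void release_resource(void *r) { free(r); } /* tolerates NULL */

    /* Before: do { ... break; ... } while (0) used only as a cleanup jump.
     * After: a plain goto, which reads more directly and drops one level of
     * indentation. */
    static int do_work(void)
    {
        int ret = -1;
        void *res = acquire_resource();
        if (!res)
            goto done;
        if (!use_resource(res))
            goto done;
        ret = 0;
    done:
        release_resource(res);
        return ret;
    }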
Reconfiguring with the same video size should never cause the window to
resize back to the video size (if the user changed its size). This was
broken and it resized anyway.