Use the "native" underrun detection, instead of guessing by a low cache
duration. The new underrun detection (which was added with the original
commit) might have the problem that it's easy for the playloop to miss
the underrun event. The underrun is actually not stored as state, so if
the demuxer thread adds a new packet before the playloop happens to see
the state, it's as if it never happened. On the other hand, this means
that the network was fast enough, so it should be just fine.
Also, should it happen that we don't know the cached range (the
ts_duration < 0 case), just wait until the demuxer goes idle (i.e.
read_packet() decides to stop). This should pretty much affect only
broken or unusual files, and there might be various things that could
go wrong. But it's more robust in the normal case: this situation also
happens when no packets have been read yet, and we don't want to
consider this a reason to resume playback.
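A minimal sketch of the resulting resume condition, in C (all names
here are assumptions, not the actual identifiers):

    // Resume when enough is buffered; if the cached range is unknown
    // (ts_duration < 0), resume only once the demuxer went idle.
    bool can_resume = demuxer_is_idle ||
                      (ts_duration >= 0 && ts_duration >= min_secs);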
The cache percentage was useless. It showed how much of the total stream
cache was in use, but since the cache size is something huge and
unrelated to the bitrate or network speed, the information content of
the percentage was rather low.
Replace this with printing the duration of the demuxer-cached data, and
the size of the stream cache in KB.
I'm not completely sure about the formatting; suggestions are welcome.
Note that it's not easy to know how much playback time the stream cache
covers, so it's always in bytes.
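For illustration, the new status output could be produced roughly like
this (only a sketch; the exact format is the open question above, and
the variable names are assumptions):

    char cache[32];
    snprintf(cache, sizeof(cache), " Cache: %.1fs+%lldKB",
             demux_cached_secs, (long long)(stream_cache_bytes / 1024));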
The "buffering" logic was active even if the stream cache was disabled.
This is contrary to what the manpage says. It also breaks playback
because of another bug: the demuxer cache holds less than 2 seconds
of data, and thus the resume condition never becomes true.
Explicitly run this code only if the stream cache is enabled. Also, fix
the underlying problem of the breakage, and resume when the demuxer
thread stops reading in any case, not just on EOF.
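A hedged sketch of the fixed logic (all names are assumptions):

    // The buffering logic applies only with the stream cache enabled,
    // and resuming happens whenever the demuxer thread stops reading.
    if (stream_cache_enabled && paused_for_cache) {
        if (cached_secs >= 2.0 || demuxer_stopped_reading)
            resume_playback();
    }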
Broken by previous commit. Unbreaks playback of local files.
Add the --cache-secs option, which literally overrides the value of
--demuxer-readahead-secs if the stream cache is active. The default
value is very high (10 seconds), which means it can act as a network
cache.
Remove the old behavior of trying to pause once the byte cache runs
low. Instead, do something similar with the demuxer cache. The nice
thing is that we can guess how many seconds of video it has cached,
and we can make better decisions. But for now, apply a relatively
naive heuristic: if the cache is below 0.5 secs, pause, and wait
until at least 2 secs are available.
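As a sketch, the heuristic is a simple hysteresis (thresholds from the
text above; the names are assumptions):

    #define CACHE_LOW_SECS  0.5   // below this: pause and buffer
    #define CACHE_HIGH_SECS 2.0   // above this: resume playback

    if (!paused_for_cache && cached_secs < CACHE_LOW_SECS)
        paused_for_cache = true;   // wait for the cache to refill
    else if (paused_for_cache && cached_secs >= CACHE_HIGH_SECS)
        paused_for_cache = false;  // enough readahead again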
Note that due to timestamp reordering, the estimated cached duration
of video might be inaccurate, depending on the file format. If the
file format has DTS, it's easy, otherwise the duration will seemingly
jump back and forth.
The purpose of the unconditional pthread_cond_signal() when reading
cached DEMUXER_CTRLs and STREAM_CTRLs was apparently to update the
stream cache state. Otherwise, the cached fields would never be updated
when the stream is e.g. paused.
The same could be said about other CTRLs, but these aren't as important,
since they are normally updated while reading packet data.
In order to reduce wakeups, make this logic explicit.
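A sketch of what "explicit" means here (the CTRL name and the fields
are hypothetical):

    case DEMUXER_CTRL_GET_CACHE_INFO:   // hypothetical cached CTRL
        // Ask the demux thread to refresh the cached fields, instead
        // of signaling unconditionally for every cached CTRL.
        in->force_cache_update = true;
        pthread_cond_signal(&in->wakeup);
        break;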
Add a new parameter 'p' to the gaussian filter. The new formula uses
a different base, taken from the fmtconv plugin, so that the
new parameter is exactly the same as the one used in Avisynth and
VapourSynth.
The new default value is 2 / log(2) * 10; with this default, the
kernel conforms to the original kernel taken from vector-agg.
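A sketch of the kernel this describes (the shape follows from the text;
the actual source may differ):

    #include <math.h>

    // f(x) = 2^(-(p / 10) * x^2); with the default
    // p = 2 / log(2) * 10 ~= 28.85 this reduces to exp(-2 * x^2),
    // the original vector-agg kernel.
    static double gaussian(double p, double x)
    {
        return pow(2.0, -(p / 10.0) * x * x);
    }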
Add two new options, making it possible for the user to set the radius
for some of the filters that have no fixed radius.
Also add three new filters that support the new radius parameter.
When video format changes, the frame before the frame with the new
format briefly sets video_status to STATUS_DRAINING. This caused the
code that handles the EOF case to kick in, which just pauses the player
when trying to step past the last frame. As a result, trying to
framestep over format changes resulted in pausing the player.
Fix by testing against the correct status.
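A hedged sketch of the fix (the exact status values are assumptions
based on the description):

    // Before: also triggered by the transient STATUS_DRAINING that a
    // format change produces. After: only a real EOF counts.
    if (mpctx->video_status == STATUS_EOF)
        pause_player(mpctx);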
"Shift+X" didn't actually map any key, as opposed to "Shift+x". This is
because shift usually changes the case of a character, so a plain
printable character like "X" simply can never be combined with shift.
But this is not very intuitive. Always remove the shift code from
printable characters. Also, for ASCII, actually apply the case mapping
to uppercase characters if combined with shift. Doing this for unicode
in general would be nice, but that would require lookup tables. In
general, we don't know anyway what character a key produces when
combined with shift - it could be anything, and depends on the keyboard
layout.
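A minimal sketch of the described normalization (MP_KEY_MODIFIER_SHIFT
is mpv's modifier bit; the helper and variable names are assumptions):

    if (is_printable(key)) {
        // For ASCII, apply the case mapping if shift is down...
        if ((mods & MP_KEY_MODIFIER_SHIFT) && key >= 'a' && key <= 'z')
            key += 'A' - 'a';
        // ...and never combine printable characters with shift.
        mods &= ~MP_KEY_MODIFIER_SHIFT;
    }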
The little Lua snippet at #488, as well as the actual implementation,
seems to indicate that not expanding properties is indeed the correct
behavior. Document that.
Signed-off-by: wm4 <wm4@nowhere>
This shouldn't change anything functionally.
Change the A/V desync message. --framedrop is enabled by default now, so
the text must be changed a little. I've never heard of audio outputs
messing up A/V sync recently, so remove that part.
Remove the unused ao_pts field.
Reorder 2 A/V sync related expressions so that they look the same.
Abandon the "old" infrastructure for --input-file (mp_input_add_fd(),
select() loop, non-blocking reads). Replace it with something that
starts a reader thread, using blocking input.
This is for the sake of Windows. Windows is a truly insane operating
system, and there's not even a way to read a pipe in a non-blocking
way, or to wait for new input in an interruptible way (like with
poll()). And unfortunately, some want to use a pipe to send input to
mpv. There are probably (slightly) better IPC mechanisms available
on Windows, but for the sake of platform uniformity, make this work
again for now.
On Vista+, CancelIoEx() could probably be used. But there's no way on
XP. Also, that function doesn't work on wine, making development
harder. We could forcibly terminate the thread, which might work, but
is unsafe. So what we do is start a thread, and if we don't want
the pipe input anymore, we just abandon the thread. The thread might
remain blocked forever, but if we exit the process, the kernel will
forcibly kill it. On Unix, just use poll() to handle this.
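On the Unix side, a reader thread of this kind could look roughly like
this (a sketch only; the wakeup-pipe scheme and all names are
assumptions):

    #include <poll.h>
    #include <unistd.h>

    struct reader_ctx {
        int input_fd;   // the --input-file fd, opened for blocking reads
        int wakeup_fd;  // read end of a pipe used to stop the thread
    };

    void feed_input(struct reader_ctx *ctx, char *buf, size_t len);

    static void *reader_thread(void *arg)
    {
        struct reader_ctx *ctx = arg;
        struct pollfd fds[2] = {
            { .fd = ctx->input_fd,  .events = POLLIN },
            { .fd = ctx->wakeup_fd, .events = POLLIN },
        };
        for (;;) {
            fds[0].revents = fds[1].revents = 0;
            if (poll(fds, 2, -1) <= 0)
                continue;               // interrupted; retry
            if (fds[1].revents)
                break;                  // main thread wants us to quit
            char buf[4096];
            ssize_t r = read(ctx->input_fd, buf, sizeof(buf));
            if (r <= 0)
                break;                  // EOF or error
            feed_input(ctx, buf, (size_t)r);
        }
        return NULL;
    }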
Unfortunately the code is pretty crappy, but it's ok, because it's late
and I wanted to stop working on this an hour ago.
Tested on wine; might not work on a real Windows.
For --input-test, print messages on terminal by default.
Raise the message level for enabling input sections, because the OSC
makes this extremely annoying.
This is a simplification, because it lets us use the AVPacket
functions, instead of handling the details manually.
It also allows the libavcodec rawvideo decoder to use reference
counting, so it doesn't have to memcpy() the full image data. The change
in av_common.c enables this.
This change is somewhat risky, because we rely on the following AVPacket
implementation details and assumptions:
- av_packet_ref() doesn't access the input padding, and just copies the
data. By the API, AVPacket is always padded, and we violate this. The
lavc implementation would have to go out of its way to make this a
real problem, though.
- We hope that the way we make the AVPacket refcountable in av_common.c
is actually supported API-usage (see the sketch below). It's hard to
tell whether it is.
Of course we still use our own "old" demux_packet struct, just so that
libav* API usage is somewhat isolated.
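A sketch of the risky part (whether this counts as supported API usage
is exactly the question above; our field names are assumptions):

    #include <libavcodec/avcodec.h>

    static int make_refcounted(struct demux_packet *dp, AVPacket *pkt)
    {
        // Wrap our data in a temporary, non-refcounted AVPacket;
        // av_packet_ref() then allocates a padded, refcounted copy.
        AVPacket tmp;
        av_init_packet(&tmp);
        tmp.data = dp->buffer;
        tmp.size = dp->len;
        return av_packet_ref(pkt, &tmp);
    }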
If a packet is appended to a stream, and there were already packets
queued, nothing about the state changed, as far as the user (i.e. the
player) is concerned. Thus no wakeup is needed.
The pthread_cond_signal() call following this is not interesting - it
will simply be a NOP if there are actually no waiters.
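As a sketch (the names are assumptions):

    bool was_empty = !ds->head;          // queue state before appending
    append_packet(ds, dp);
    // Only the empty -> non-empty transition changes observable state,
    // so only then does a waiter need to be woken up.
    if (was_empty)
        pthread_cond_signal(&in->wakeup);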
It makes no sense to set the packet duration, because libavcodec doesn't
know the timebase. And in fact, no subtitle decoder accesses the packet
duration, except text subtitle converters, which are not relevant here.
So this code did nothing - drop it.
Also fix a blatantly incorrect comment.
If duration<0, it means the duration is unknown. Disable framedropping,
because end_time makes no sense in this case.
Also, strictly never drop the first frame.
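As a sketch, the drop condition becomes (names are assumptions):

    // duration < 0 means "unknown", so end_time would be meaningless;
    // and the first frame is never dropped.
    bool may_drop = duration >= 0 && !is_first_frame;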
This fixes weird behavior with the cover-art case (for the 100th time).
Commit 846257da introduced an accidental feature: if you kept seeking
(so playback never really resumes), the audio would never be played.
This was nice, but commit 4c25b000 accidentally removed it again (due
to video_next_pts being available earlier than it used to be, so
audio could be played before the player executed the next queued seek).
Implicitly reintroduce the old behavior by not decoding a second
video frame immediately. Usually, the second frame is used to compute
the frame duration needed for accurate framedropping, but since the
first frame after a seek is never dropped, we don't need this.
Now the video code will queue the new frame to the VO immediately, and
since fill_audio_out_buffers() is called in the playloop before
write_video() and execute_queued_seek(), it never gets the chance to
enter STATUS_READY, and seeks will be silent.
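The relevant playloop ordering, reduced to a sketch (the function names
appear in the text; the real loop does much more):

    while (!mpctx->stop_play) {
        fill_audio_out_buffers(mpctx); // audio is not STATUS_READY yet
        write_video(mpctx);            // queues the new frame to the VO
        execute_queued_seek(mpctx);    // runs before audio could start
    }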
This also has a nice side-effect: since the second frame is not decoded
and filtered, seeking becomes slightly faster (back to the same level
as with framedrop disabled).
It seems this still sometimes plays a period of audio when keeping a
seek key down. In my tests, this appeared to happen because the seek
finished before the next key repeat was sent.
In theory, timestamps can be negative, so we shouldn't just return -1
as a special value.
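For instance (MP_NOPTS_VALUE is mpv's constant for an unknown
timestamp; the surrounding code is an assumption):

    // a slightly negative pts is legitimate, so don't overload -1:
    return have_pts ? pts : MP_NOPTS_VALUE;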
Remove the separate code for clearing decode buffers; use the same code
that is used for normal seek reset.
Commit 5afc025c broke this. The reason is that mpctx->delay is updated
when a new video frame is added. This value is also needed to resync
audio, but it will be for the wrong PTS. They must be consistent with
each other, and if they aren't, initial sync will be off by N video
frames, which at the very least results in a worse user experience.
This can be reproduced by for example heavily switching between normal
and 2x speed, or similar.
Fix by readding the video_next_pts field (keeping its use minimal,
instead of reverting the commit that removed it).
If video reaches EOF, and audio is also EOF (or is otherwise not
meaningful, like audio disabled), then the playback position was briefly
set to 0. Fix this by not trying to use a bogus audio PTS.
CC: @mpv-player/stable (maybe)
This simplifies the code, and fixes an odd bug: the second-last frame
was displayed for a very short duration if framedrop was enabled. The
reason was that basically the time difference between the second-last
and last frame was skipped, because at this point EOF was already
signaled. Also see commit b0959488 for a similar issue in the
same code.
This removes the messiness of the next_frame 2-frame queue, and
strictly runs the "new frame" code when a frame is moved to the first
position of the queue, instead of somehow messing with return codes.
This also merges update_video() into video_output_image().
No functional changes. init_vo() is now needed a bit further down, and
moving it keeps definition and use close. adjust_sync() will be used by
a function further up in one of the following commits.
Add two new script environment variables 'video_in_dw' and
'video_in_dh', representing the display resolution of the video. Along
with the video resolution, the sample aspect ratio can be calculated in
scripts.
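Concretely, with the coded resolution taken from the input clip (a
worked formula, assuming the usual definitions, not code from the
change):

    SAR = (video_in_dw / video_in_dh) / (video_in.width / video_in.height)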
Currently it's impossible to change the sample aspect ratio with a
single VapourSynth filter, since the '_SARNum' and '_SARDen' frame
properties from the output clip are ignored. A following 'dsize' filter
is necessary for this purpose.