Or rather, keep hacking it until it somehow works. The problem here was
that trying to avoid calling STREAM_CTRL_GET_CURRENT_TIME too often
didn't really work, so the cache sometimes returned incorrect times.
Also handle the case where looking up the time with an advanced read
position doesn't really work, as well as looking it up when EOF or the
cache end has been reached. In the latter case we have
read_filepos == max_filepos, which is "outside" of the cache, but
querying the time is still valid.
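To illustrate, the relaxed check looks roughly like this (a sketch only;
the field names are modeled on the cache's state and are not necessarily
exact):

    // pos == max_filepos is technically "outside" the cached range, but
    // the time stored for it is still meaningful (EOF/cache end case)
    static bool pos_has_valid_time(struct priv *s, int64_t pos)
    {
        return pos >= s->min_filepos && pos <= s->max_filepos; // <=, not <
    }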
Should also fix the issue where demuxing streams with demux_lavf messed
up the reported playback position if STREAM_CTRL_GET_CURRENT_TIME is
not supported.
This stuff is still not sane, but the way the player tries to fix the
playback time and how the DVD/BD stream inputs return the current time
based on the current byte position isn't sane to begin with. So, let's
leave it at bad hacks.
The two changes that touch s->eof are unrelated and basically of a
cosmetic nature (a separate commit would be too noisy).
Doing this makes the encoder force the same pict type as the original,
which is often not even possible. Rather let the codec decide!
As there is no documented value that means "decoder shall pick", I'd
rather save/restore the default value filled by libavcodec.
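A minimal sketch of the save/restore idea (the function names and call
sites are illustrative, not the actual code):

    #include <libavcodec/avcodec.h>

    static enum AVPictureType default_pict_type;

    // remember whatever default libavcodec filled into the frame
    static void frame_init(AVFrame *frame)
    {
        default_pict_type = frame->pict_type;
    }

    // restore the default instead of forcing the source's picture
    // type, so the encoder picks the type itself
    static void frame_before_encode(AVFrame *frame)
    {
        frame->pict_type = default_pict_type;
    }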
This is useless on the cache side. The sector is needed only to deal
with stream implementations which are not byte addressable, and the
cache is always byte addressable.
Also set a default read_chunk value. (This value is never used unless
you chain multiple caches, but it's cleaner.)
DVD and bluray packet streams carry (essentially) random timestamps,
which don't start at 0, can wrap, etc. libdvdread and libbluray provide
a linear timestamp additionally. This timestamp can be retrieved with
STREAM_CTRL_GET_CURRENT_TIME.
The problem is that this timestamp is bound to the current raw file
position, and the stream cache can be ahead of playback by an arbitrary
amount. This is a big problem for the user, because the displayed
playback time and actual time don't match (depending on cache size),
and relative seeking is broken completely.
Attempt to fix this by saving the linear timestamp every N bytes (where
N = BYTE_META_CHUNK_SIZE = 16 KB). This is a rather crappy hack, but
also very effective.
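Roughly, the idea looks like this (a sketch under the assumption that
the cache keeps one metadata slot per 16 KB chunk of the ring buffer;
the field names are made up):

    #define BYTE_META_CHUNK_SIZE (16 * 1024)

    struct byte_meta {
        double stream_pts; // linear time from STREAM_CTRL_GET_CURRENT_TIME
    };

    // map a byte position in the "infinite" stream to its metadata slot
    static struct byte_meta *meta_at(struct priv *s, int64_t pos)
    {
        size_t slots = s->buffer_size / BYTE_META_CHUNK_SIZE;
        return &s->bm[(pos / BYTE_META_CHUNK_SIZE) % slots];
    }

The cache thread stores the timestamp into the slot while filling the
buffer; the reader looks it up for its current read position.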
A proper solution would probably try to offset the playback time with
the packet PTS, but that would require at least knowing how the PTS can
wrap (e.g. how many bits is the PTS comprised of, and what are the
maximum and reset values). Another solution would be putting the cache
between libdvdread and the filesystem/DVD device, but that can't be done
currently. (Also isn't that the operating system's responsibility?)
This was probably done this way to ensure that after a successful seek,
the reported stream position is the same as the requested seek position.
But it doesn't make too much sense, since both stream->pos and the
stream implementation's internal position will go out of sync.
The stream EOF flag should only be set when trying to read past the end
of the file (relatively similar to unix files). Always clear the EOF
flag on seeking. Trying to set it "properly" (depending on whether data
is available at the seek destination or not) might be an OK idea, but
would
require attention to too many special cases. I suspect before this
commit (and in MPlayer etc. too), the EOF flag wasn't handled
consistently when the stream position was at the end of the file.
Fix one special case in ebml.c and in stream_skip(): the latter couldn't
distinguish between at-EOF and past-EOF either.
EOF should be set when reading more data fails. The stream
implementations have nothing to say here, and should behave correctly
when a read is attempted after EOF has actually been reached.
Even when seeking, a correct EOF flag should be guaranteed. stream_seek()
(or actually stream_seek_long()) calls stream_fill_buffer() at least
once, which also updates the EOF flag.
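In other words, something like the following (a sketch modeled on the
MPlayer-style stream struct; the details are assumptions):

    // only a failed read sets the EOF flag
    static int stream_fill_buffer(stream_t *s)
    {
        int len = s->fill_buffer
                ? s->fill_buffer(s, s->buffer, STREAM_BUFFER_SIZE) : 0;
        if (len <= 0) {
            s->eof = 1;
            s->buf_pos = s->buf_len = 0;
            return 0;
        }
        s->eof = 0;
        s->buf_pos = 0;
        s->buf_len = len;
        s->pos += len;
        return len;
    }

    static int stream_seek_long(stream_t *s, int64_t pos)
    {
        s->eof = 0;                       // always clear EOF on seeking
        // ... perform the actual seek on the underlying stream ...
        return stream_fill_buffer(s) > 0; // refilling re-derives the flag
    }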
This function was called in various places. Most of the time, it was
used before a seek. In other cases, the purpose was apparently resetting
the EOF flag. As far as I can see, this makes no sense anymore. At
least the stream_reset() calls paired with stream_seek() are completely
pointless. A seek will either seek inside the buffer (and reset the
EOF flag), or do an actual seek and reset all state.
This happens with something like "mpv https://www.youtube.com/watch".
The URL is obviously not valid, but the stream layer tries to reconnect.
This commit at least makes it possible to use the terminal to abort
gracefully (other than by killing the process).
Basically rewrite all the code supporting the cache (i.e. anything other
than the ringbuffer logic). The underlying design is untouched.
Note that the old cache2.c (on which this code is based) already had a
threading implementation. This was mostly unused on Linux, and had some
problems, such as using shared volatile variables for communication and
uninterruptible timeouts, instead of using locks for synchronization.
This commit does use proper locking, while still retaining the way the
old cache worked. It's basically a big refactor.
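The communication pattern is roughly the following (illustrative only;
readable_bytes() and copy_from_ringbuffer() are hypothetical helpers,
not the actual API):

    #include <pthread.h>

    static int cache_read(struct priv *s, unsigned char *buf, int len)
    {
        pthread_mutex_lock(&s->mutex);
        while (readable_bytes(s) < len && !s->eof && !s->aborted) {
            pthread_cond_signal(&s->wakeup);          // poke the cache thread
            pthread_cond_wait(&s->wakeup, &s->mutex); // sleep until progress
        }
        int copied = copy_from_ringbuffer(s, buf, len);
        pthread_mutex_unlock(&s->mutex);
        return copied;
    }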
Simplify the code too. Since we don't need to copy stream ctrl args
anymore (we're always guaranteed a shared address space now), lots of
annoying code just goes away. Likewise, we don't need to care about
sector sizes. The cache uses the high-level stream API to read from
other streams, and sector sizes are handled transparently.
demux_lavf probes up to 2 MB of data in the worst case. When the ffmpeg
demuxer is actually opened, the stream is seeked back to 0, and the
previously read data is thrown away.
This wasn't a problem for playback of local files, but it's less than
ideal for playing from slow media (like web streams), and breaks
completely if the media is not seekable (pipes, some web streams).
This new function is intended to allow fixing this. demux_lavf will use
it to put the read probe data back into the buffer.
The simplest way of implementing this function is by making it
transparently extend the normal stream buffer. This makes sure no
existing code is broken by new weird special cases. For simplicity
and to avoid possible performance loss due to extra dereferencing
when accessing the buffer, we just extend the static buffer from
8 KB to 2 MB. Normally, most of these 2 MB will stay uncommitted, so
there's no associated waste of memory. If demux_lavf really reads all
2 MB, though, the memory will be committed and remain allocated even
after it's no longer needed.
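Usage in demux_lavf would look roughly like this (assuming the new
function is called stream_unread_buffer(); sizes and control flow are
simplified):

    static char probe[2 * 1024 * 1024];     // worst-case probe size
    int n = stream_read(stream, probe, sizeof(probe));
    // ... run libavformat probing on probe[0..n-1] ...
    stream_unread_buffer(stream, probe, n); // put the data back; the
                                            // following reads return the
                                            // same bytes, without seeking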
Before this commit, the cache was franken-hacked on top of the stream
API. You had to use special functions (like cache_stream_fill_buffer()
instead of stream_fill_buffer()), which would access the stream in a
cached manner.
The whole idea about the previous design was that the cache runs in a
thread or in a forked process, while the cache-aware functions made sure
the stream instance looked consistent to the user. If you used the
normal functions instead of the special ones while the cache was
running, you were out of luck.
Make it a bit more reasonable by turning the cache into a stream on its
own. This makes it behave exactly like a normal stream. The stream
callbacks call into the original (uncached) stream to do work. No
special cache functions or redirections are needed. The only different
thing about cache streams is that they are created by special functions,
instead of being part of the auto_open_streams[] array.
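Schematically (names are assumptions, and the real callbacks also have
to manage the cache buffer):

    struct priv {
        stream_t *original; // the wrapped, uncached stream
    };

    static int cache_fill_buffer(stream_t *cache, char *buf, int max_len)
    {
        struct priv *p = cache->priv;
        // read via the normal high-level stream API; the helper name
        // here is illustrative
        return stream_read_partial(p->original, buf, max_len);
    }

    static int cache_seek(stream_t *cache, int64_t pos)
    {
        struct priv *p = cache->priv;
        return stream_seek(p->original, pos);
    }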
To make things simpler, remove the threading implementation, which was
tangled into the code. The threading code could perhaps be kept, but I
don't really want to have to worry about this special case. A proper
threaded implementation will be added later.
Remove the cache enabling code from stream_radio.c. Since enabling the
cache involves replacing the old stream with a new one, the code as-is
can't be kept. It would be easily possible to enable the cache by
requesting a cache size (which is also much simpler). But nobody uses
stream_radio.c and I can't even test this thing, and the cache is
probably not really important for it either.
The core didn't use these fields, and use of them was inconsistent
across AOs. Some didn't use them at all. Some only set them; the values
were completely unused by the core. Some made full use of them.
Remove these fields. In places where they are still needed, make them
private AO state.
Remove the --abs option. It set the buffer size for ao_oss and ao_dsound
(being ignored by all other AOs), and was already marked as obsolete. If
it turns out that it's still needed for ao_oss or ao_dsound, their
default buffer sizes could be adjusted, and if even that doesn't help,
AO suboptions could be added in these cases.
Some still do, because they use the value elsewhere in the init
function. ao_portaudio is tricky and reads ao->bps in the stream
thread, which might be started on initialization (not sure about that,
but better safe than sorry).
Currently every single AO implements its own ringbuffer, often with
slightly different semantics. This is an attempt to fix the problem.
I stole some good ideas from ao_portaudio's ringbuffer and went from there.
The main difference is that this one stores wpos and rpos, which are
absolute positions in an "infinite" buffer. To find the actual position
for writing/reading, just apply modulo size.
The producer only modifies wpos while the consumer only modifies rpos. This
makes it pretty easy to reason about and make the operations thread safe by
using barriers (thread safety is guaranteed only in the Single-Producer/Single-
Consumer case).
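A minimal single-producer/single-consumer sketch of that scheme (not
the actual implementation, which uses explicit memory barriers; this
one uses C11 atomics for the same effect):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    struct mp_ring {
        unsigned char *buffer;
        size_t size;
        _Atomic uint64_t wpos; // absolute positions in an "infinite"
        _Atomic uint64_t rpos; // buffer; index = position % size
    };

    // producer side; returns how many bytes were actually written
    static size_t ring_write(struct mp_ring *rb, const unsigned char *src,
                             size_t len)
    {
        uint64_t wpos = atomic_load_explicit(&rb->wpos, memory_order_relaxed);
        uint64_t rpos = atomic_load_explicit(&rb->rpos, memory_order_acquire);
        size_t space = rb->size - (size_t)(wpos - rpos);
        if (len > space)
            len = space;
        for (size_t i = 0; i < len; i++)
            rb->buffer[(wpos + i) % rb->size] = src[i];
        // release: data must be visible before the new wpos is published
        atomic_store_explicit(&rb->wpos, wpos + len, memory_order_release);
        return len;
    }

    // the consumer side mirrors this, advancing rpos instead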
Also adapted ao_coreaudio to use this ringbuffer.
This is hopefully the start of something good. ca_ringbuffer_read and
ca_ringbuffer_write can probably be cleaned up from all the NULL checks
once ao_coreaudio.c gets simplified.
This allows having properties like time-pos in the window title update
properly. There is a danger of this causing significant CPU usage,
depending on the properties used and the window manager.