VO backends which run (or will run) in their own thread have a problem
with vo_mouse_movement() calling vo_control(). Restrict this to the VOs
which actually need it.
win32 does not provide a proper per-window context pointer. Although it
does allow passing a user-chosen value to WM_CREATE/WM_NCCREATE, this
is not enough - the first message doesn't even have to be WM_NCCREATE.
This gets us in trouble later on, so go the easy route and just use a
TLS variable.
__thread is gcc specific, but Windows is a very "special" platform
anyway. We support only MinGW and Cygwin on it, so who cares. (C11
standardizes __thread as _Thread_local; we can use that later.)
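Roughly, the workaround looks like this (a sketch, not the actual code;
the struct and the GWLP_USERDATA handling are assumptions):

    #include <windows.h>

    struct vo_w32_state;

    // Set by the thread that creates the window, before CreateWindow().
    static __thread struct vo_w32_state *w32_thread_context;

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp,
                                    LPARAM lp)
    {
        struct vo_w32_state *w32 =
            (void *)GetWindowLongPtrW(hwnd, GWLP_USERDATA);
        if (!w32) {
            // Messages can arrive before WM_NCCREATE, so the per-window
            // pointer may not be set yet; fall back to the TLS variable.
            w32 = w32_thread_context;
            SetWindowLongPtrW(hwnd, GWLP_USERDATA, (LONG_PTR)w32);
        }
        // ... handle messages using w32 ...
        return DefWindowProcW(hwnd, msg, wp, lp);
    }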
Plan 9 has a very interesting synchronization mechanism, the
rendezvous() call. A good property of this is that you don't need to
explicitly initialize and destroy a barrier object, unlike with e.g.
POSIX barriers (for which initialization is mandatory to begin with).
Upon "meeting", they
can exchange a value.
This mechanism will be nice to synchronize certain stages of
initialization between threads in the following commit.
Unlike Plan 9 rendezvous(), this is not implemented with a hashtable,
because that would require additional effort (especially if you want to
make it actually scale). Unlike the Plan 9 variant, we use intptr_t
instead of void* as type for the value, because I expect that we will be
mostly passing a status code as value and not a pointer. Converting an
integer to void* requires two casts (because the integer needs to be
cast to intptr_t first), while the other way around needs only one cast.
We don't particularly care about performance in this case either. It's
simply not important for our use-case. So a simple linked list is used
for waiters, and on wakeup, all waiters are temporarily woken up.
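To illustrate, a minimal sketch of such a primitive (it follows the
description above - global lock, linked list of waiters, broadcast
wakeup - but is not necessarily the actual implementation):

    #include <pthread.h>
    #include <stdint.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t wakeup = PTHREAD_COND_INITIALIZER;

    struct waiter {
        void *tag;          // address identifying the meeting point
        struct waiter *next;
        intptr_t *value;    // set to NULL once the exchange happened
    };

    static struct waiter *waiters;

    // Wait until another thread calls this with the same tag, then
    // exchange values: each caller returns the other one's value.
    intptr_t mp_rendezvous(void *tag, intptr_t value)
    {
        intptr_t ret = value;
        pthread_mutex_lock(&lock);
        struct waiter **prev = &waiters;
        while (*prev && (*prev)->tag != tag)
            prev = &(*prev)->next;
        if (*prev) {
            // A partner is already waiting: swap values, unlink it, and
            // wake all waiters (non-matching ones go back to sleep -
            // this is the "temporary" wakeup mentioned above).
            struct waiter *other = *prev;
            *prev = other->next;
            intptr_t tmp = *other->value;
            *other->value = ret;
            ret = tmp;
            other->value = NULL;
            pthread_cond_broadcast(&wakeup);
        } else {
            // First one here: enqueue ourselves and wait to be matched.
            struct waiter wait = { tag, waiters, &ret };
            waiters = &wait;
            while (wait.value)
                pthread_cond_wait(&wakeup, &lock);
        }
        pthread_mutex_unlock(&lock);
        return ret;
    }

Callers need no explicit initialization or destruction; the tag is just
an arbitrary address both sides agree on.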
This shouldn't change anything. But it's worth making this explicit,
since it's very subtle and unintuitive: if the X parameter is the
magic value CW_USEDEFAULT, then the Y parameter is used as nCmdShow
parameter to ShowWindow(). And in our case, this is SW_HIDE (0),
because we want to create a hidden window.
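Spelled out in code (a sketch; the window class is assumed to be
registered elsewhere):

    #include <windows.h>

    static HWND create_hidden_window(HINSTANCE hinst)
    {
        // Since x is CW_USEDEFAULT, the y argument is not a coordinate:
        // it is passed as nCmdShow to the implicit first ShowWindow()
        // call. SW_HIDE is 0, so the window is created hidden.
        return CreateWindowExW(0, L"mpv", L"mpv", WS_OVERLAPPEDWINDOW,
                               CW_USEDEFAULT, SW_HIDE,        // x, y
                               CW_USEDEFAULT, CW_USEDEFAULT,  // w, h
                               NULL, NULL, hinst, NULL);
    }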
This looked a bit overcomplicated. We don't care about the window
position (it should always be 0/0, unless the parent program moved it,
which it shouldn't). We don't care about the global on-screen position.
Also, we will just retrieve a WM_SIZE message if our window is resized,
and we don't need to update it manually.
The only thing we have to do is make sure our window fills the parent
window completely.
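Which is just (a sketch; the function name is made up):

    #include <windows.h>

    // Make the child window cover the parent's whole client area.
    static void fit_to_parent(HWND child)
    {
        RECT r;
        GetClientRect(GetParent(child), &r);
        SetWindowPos(child, NULL, 0, 0, r.right, r.bottom,
                     SWP_NOZORDER | SWP_NOACTIVATE);
    }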
CS_OWNDC will make GetDC() always return the same HDC. This might
become a problem when OpenGL rendering and window management are
on different threads. Although I'm not too sure about this; our
code never requests an HDC outside of the OpenGL backend, and it
depends on whether win32 will internally request DCs. But in any
case, this seems to be cleaner.
Move the GL pixelformat setup code to gl_w32.c, where it's actually
needed. This also fixes the fact that SetPixelFormat() should be
called only once, and not every time the video params change.
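Roughly what the once-only setup has to look like (a sketch; the guard
flag and the descriptor fields are assumptions):

    #include <windows.h>
    #include <stdbool.h>

    // SetPixelFormat() may be called at most once per window, so guard
    // it instead of redoing it whenever the video params change.
    static bool setup_pixelformat(HDC dc, bool *done)
    {
        if (*done)
            return true;
        PIXELFORMATDESCRIPTOR pfd = {
            .nSize = sizeof(pfd),
            .nVersion = 1,
            .dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                       PFD_DOUBLEBUFFER,
            .iPixelType = PFD_TYPE_RGBA,
            .cColorBits = 24,
        };
        int fmt = ChoosePixelFormat(dc, &pfd);
        if (!fmt || !SetPixelFormat(dc, fmt, &pfd))
            return false;
        *done = true;
        return true;
    }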
These mostly describe self-explanatory things, and fail to explain the
actually tricky ones. Which means you just waste your time reading
them, and have to figure it out from the code anyway.
Preparation for moving win32 windowing to a separate thread.
The code size is reduced a bit, because some small functions are
inlined, which reduces noise.
The main change is that now most functions use the private struct
directly, instead of accessing it indirectly through vo->w32.
Accesses to vo are minimized.
The final goal is adding some sort of new windowing backend API. It
would be cleaner to use that as context pointer for all functions
(like struct vo was previously used), but since this is work in
progress, we just go with this commit.
ao_null is used to stop autoprobing (if all AOs before it fail to init).
After it come things like ao_pcm, which should never be automatically
selected.
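The idea, sketched as a driver list (the driver names are examples; the
real list lives in the AO registry):

    struct ao_driver;
    extern const struct ao_driver audio_out_alsa, audio_out_oss,
                                  audio_out_null, audio_out_pcm;

    // Autoprobing walks this in order and stops at ao_null, which
    // always initializes successfully. Entries after it can only be
    // selected explicitly.
    static const struct ao_driver *const audio_out_drivers[] = {
        &audio_out_alsa,
        &audio_out_oss,
        &audio_out_null,
        // everything below is never automatically selected
        &audio_out_pcm,
        NULL,
    };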
Remove a certain theoretically possible failure case, and force "some"
fallback.
mp_make_wakeup_pipe() always fails on win32. If this call fails on Linux
(and e.g. ao_alsa is used), this will probably burn CPU since poll()
won't work on the invalid file descriptor, but whatever, the failure
case is obscure enough.
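For reference, what such a helper typically does (the body here is an
assumption, not the actual implementation):

    #include <unistd.h>
    #include <fcntl.h>

    // Returns 0 on success. On win32 there is no pipe-based wakeup
    // mechanism like this, so the call always fails there.
    static int mp_make_wakeup_pipe(int pipes[2])
    {
    #ifdef _WIN32
        pipes[0] = pipes[1] = -1;
        return -1;
    #else
        if (pipe(pipes))
            return -1;
        for (int n = 0; n < 2; n++)
            fcntl(pipes[n], F_SETFL, fcntl(pipes[n], F_GETFL) | O_NONBLOCK);
        return 0;
    #endif
    }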
Actually free the old mmap region when re-adding an overlay of the same
ID without removing it before. (This is explicitly documented as
working.)
Replace the OSD atomically. Before this commit, the overlays were
removed and then re-added to avoid synchronization problems.
Simplify the code: now there is no weird mapping between index and ID.
The OSD sub-bitmap list still needs to be prepared to skip unused IDs
(since each sub-bitmap list entry must be in use), but the code for this
is relatively separated now.
Fixes issue #956.
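The mmap part of the fix boils down to this (the struct layout is
hypothetical):

    #include <stddef.h>
    #include <sys/mman.h>

    struct overlay {
        void *map_start;
        size_t map_size;
    };

    // Re-adding an overlay under an ID that is still in use must free
    // the old mmap region instead of leaking it.
    static void overlay_replace(struct overlay *o, void *map, size_t size)
    {
        if (o->map_start)
            munmap(o->map_start, o->map_size);
        o->map_start = map;
        o->map_size = size;
    }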
Currently entries are added after the current playlist element. This is kinda
confusing, more so given that "loadfile append" appends at the end of the
playlist.
Accidentally broken in b6af44d3. For ad_lavc (and in general), the PTS
was not updated correctly when filtering only parts of audio frames,
and for ad_mpg123 and ad_spdif the PTS was additionally offset by the
frame size.
This could lead to incorrect time display, and possibly broken A/V sync.
Execute the format change based on whether we logically detected EOF
(after filters), instead of when the decode buffer was drained. It's
slightly cleaner. (The requirement of len>0 existed before.)
Don't return an EOF code if there's still buffered data.
Also, don't call demux_stream_eof() in the playloop. There's probably
nothing wrong with it, but it's cleaner not to use it.
Also give AD_EOF its own value, so that a decoding error doesn't drain
audio by causing an EOF condition.
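Sketch of the resulting return codes (the exact values are an
assumption; the point is that EOF and error are now distinct):

    #define AD_OK    0
    #define AD_ERR  -1
    #define AD_EOF  -2   // no longer shares a value with the error code

This way, a decoding error can't be mistaken for EOF and drain the
audio output.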
Move a function call, which does not change semantics.
Write the extra buffer sample count in a more straightforward way; the
old code was not meaningful in any way (anymore).
It's true that the decoder can successfully decode, but return no data
(for various reasons). We don't need to handle this specially, though.
We just let the decoder decode some more data. This doesn't increase the
danger of an endless loop either, because audio_decode() already calls
this function until enough is decoded.
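Schematically (the function names are made up):

    struct dec_audio;
    int buffered_samples(struct dec_audio *da);   // hypothetical
    int decode_more(struct dec_audio *da);        // hypothetical

    // audio_decode() keeps calling the decoder until enough samples
    // are buffered; a successful call that yields no data just means
    // the next iteration decodes more.
    static int audio_decode_sketch(struct dec_audio *da, int minsamples)
    {
        while (buffered_samples(da) < minsamples) {
            int r = decode_more(da);
            if (r < 0)
                return r;   // error or EOF
            // r >= 0 with no new data: simply loop and decode more
        }
        return 0;
    }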
"loadfile filename append-play" will now always append the file to the
playlist, and if nothing is playing yet, start playback. I don't want to
change the semantics of "append" mode, so a new mode is needed.
Probably fixes issue #950.
This allows using external subtitle files with e.g. transport stream
files that don't start at time 0.
Note that if the .ts file has timestamp resets, everything goes south.
But I guess this was already the case before, unless there are external
subtitle files that also include timestamp resets, which is unlikely.
(On the other hand, you could for example expect that it works with
embedded DVB subtitles, that were somehow captured from the same stream
and use the same timestamps.)
Useful for Windows stuff. Actually, ENCA support should catch this, but,
well, whatever, everyone seems to hate ENCA.
Detection with BOM is trivial, although it needs some hackery to
integrate it with the existing autodetection support. For one, change
the default value of --sub-codepage to make this easier.
Probably fixes issue #937 (the second part).
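The BOM check itself is trivial (a sketch; the function name is made
up):

    #include <stddef.h>
    #include <string.h>

    // Returns the codepage implied by a byte order mark, or NULL if
    // there is none (then normal autodetection continues).
    static const char *detect_bom(const unsigned char *buf, size_t len)
    {
        if (len >= 3 && !memcmp(buf, "\xEF\xBB\xBF", 3))
            return "utf-8";
        if (len >= 2 && !memcmp(buf, "\xFF\xFE", 2))
            return "utf-16le";
        if (len >= 2 && !memcmp(buf, "\xFE\xFF", 2))
            return "utf-16be";
        return NULL;
    }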
The video flushing logic was broken: if there are no more packets,
decode_image() will feed flush packets to the decoder. Even if an image
was produced, it will return the demuxer EOF state, and since commit
7083f88c, this EOF state is returned to the caller, which is incorrect.
Revert this part of the change, and explicitly check for VD_WAIT (the
bogus change was intended to forward this error code to the caller).
Also, turn the "r < 1" into something equivalent that doesn't rely on
the exact value of VD_EOF. "r < 0" is ok, because at least here, errors
are always negative.
The Travis guys were nice enough to activate multi-OS support for us (it's a
beta feature). So now we build on OS X as well, to check for OS X specific
breakage.
Later I might investigate further and build with the minimum supported SDK
version so that we don't break older systems by using newer Cocoa features.
This commit mainly moves the initial decoding of data (done to probe the
audio format) to generic code. This will make it easier to make audio
decoding non-blocking in a later commit.
This commit also changes how decoders return data: instead of having
them write the data into a prepared buffer, they return a reference to
an internal buffer (by setting dec_audio.decoded). This makes it
significantly easier to handle audio format changes, since the decoders
don't really need to care anymore.
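Sketched, the new convention (dec_audio.decoded is named above; the
struct contents are simplified):

    // The decoder publishes a reference to its own internal buffer via
    // da->decoded, instead of writing into a caller-prepared buffer.
    // Format changes then just show up in this struct.
    struct mp_audio {
        void *planes[8];    // references the decoder's internal data
        int samples;
        int rate, channels, format;
    };

    struct dec_audio {
        struct mp_audio decoded;   // set by the decoder on success
    };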
If the decoder didn't set a samplerate, it was initialized from the
container samplerate.
This probably didn't make much sense, because it's passed to the
decoder on initialization (so it could definitely use it). It's an
artifact from commit 66a9eb57 (which removed some Matroska-specific non-
sense), and I've never seen it actually happen since it was made into a
warning. Just get rid of it.
This tells the demuxer thread that it should seek, instead of waiting
until the demuxer thread is ready.
Care has to be taken about the state between seek request and actual
seeking: newly demuxed packets have to be discarded. We can't just
flush when doing the actual seek, because the user thread could read
these packets.
I'm wondering if this could lead to issues due to relaxed ordering of
operations. But it should be fine, since seeking influences packet
reading only, and seeking is always strictly done before that.
Currently, this will have no advantages, unless audio is disabled; then
seeking as well as normal playback can be non-blocking.
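The request side, schematically (the names are hypothetical):

    #include <pthread.h>
    #include <stdbool.h>

    struct demux_internal {
        pthread_mutex_t lock;
        bool seeking;       // seek queued, not yet done by the thread
        double seek_pts;
    };

    // User thread: queue the seek instead of waiting until the demuxer
    // thread is ready.
    static void queue_seek(struct demux_internal *in, double pts)
    {
        pthread_mutex_lock(&in->lock);
        in->seeking = true;
        in->seek_pts = pts;
        // flush the packet queue here; in addition, the demuxer thread
        // drops any packet demuxed while in->seeking is still set, so
        // the user thread can't read stale data
        pthread_mutex_unlock(&in->lock);
    }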
The MPlayer style syntax ("-mf fps=10:type=png") was removed a while
ago, and now only the flat variants ("--mf-fps=10" etc.) work.
CC: @mpv-player/stable
Move a condition somewhere else, which makes it conceptually simpler.
Also, the assignment to full_audio_buffers removed with this commit was
dead, and its value never used.
Fatal errors in the video chain (such as failing to initialize it)
disable video decoding. Restart the playloop, instead of just
continuing the current iteration.
The resulting behavior should be the same, but it gets rid of possible
corner cases.
Instead of starting to fill the packet queue as soon as at least one
stream is selected, wait until at least one stream has actually had
new packets requested.
In theory this is cleaner, because it allows you to e.g. do a seek and
then reselect streams without losing packets. Seeking marks all streams
as inactive, and without this new logic, the thread would read new
packets anyway right after seek.
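The new condition, sketched (the field names are assumptions):

    #include <stdbool.h>

    struct sh_stream { bool active; };   // packets were requested

    struct demux_internal {
        int num_streams;
        struct sh_stream **streams;
    };

    // Read ahead only if some stream actually had packets requested.
    // Seeking marks all streams inactive, so the thread won't refill
    // the queue right after a seek.
    static bool read_packet_needed(struct demux_internal *in)
    {
        for (int n = 0; n < in->num_streams; n++) {
            if (in->streams[n]->active)
                return true;
        }
        return false;
    }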
Broken by commit 1301a907. This commit added demuxer threading, and
changed some other things to make them simpler and more orthogonal. One
of these things was notifications about streams that appear during
playback. That's an obscure corner case, but the change made handling of
it as natural as normal initialization.
This didn't work for two reasons:
1. When playing an ordered chapters file where the initial segment was
not from the main file, its streams were added to the track list. So
they were printed twice, and switching to the next segment didn't work,
because the right streams were not selected.
2. EDL, CUE, as well as possibly certain Matroska files don't have any
data or tracks in the "main" demuxer, so normally the first segment is
picked for the track list. This was simply broken.
Fix by sprinkling the code with various hacks.
This called demux_flush(), but that doesn't make any sense with an
asynchronously running demuxer. It would just keep reading and add new
packets again. Explicitly pause the demuxer, so that this can't happen.
Also, when flushing, data will be missing, so the decoders should
always be reinitialized, even if the operation fails.
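The resulting sequence, roughly (demux_pause()/demux_unpause() are
assumed as the pausing mechanism; reinit_decoders() is made up):

    struct MPContext;
    struct demuxer;
    void demux_pause(struct demuxer *d);
    void demux_unpause(struct demuxer *d);
    void demux_flush(struct demuxer *d);
    void reinit_decoders(struct MPContext *mpctx);

    // A plain flush is useless with a threaded demuxer: the thread
    // would immediately demux new packets again. Pause it first, and
    // reinit decoders unconditionally, since data is missing anyway.
    static void flush_and_reinit(struct MPContext *mpctx, struct demuxer *d)
    {
        demux_pause(d);
        demux_flush(d);
        reinit_decoders(mpctx);
        demux_unpause(d);
    }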
This fixes the same symptom as the previous commit, but when the demuxer
thread is enabled. In this case, if nothing was read from the demuxer,
the STREAM_CTRLs weren't updated either. To the player, this looked like
the stream cache was never making progress, so playback was kept paused.
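Schematically, the fix moves the STREAM_CTRL refresh out of the read
path (the names are made up):

    #include <stdbool.h>

    struct demux_internal;
    bool read_packet(struct demux_internal *in);
    void update_cache_ctrls(struct demux_internal *in);
    void wait_for_wakeup(struct demux_internal *in);

    // One iteration of the demuxer thread: refresh the cached
    // STREAM_CTRL state even when nothing was read, so the player sees
    // the cache making progress and can unpause.
    static void thread_iteration(struct demux_internal *in)
    {
        bool any = read_packet(in);
        update_cache_ctrls(in);
        if (!any)
            wait_for_wakeup(in);
    }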