Some files have their first audio start much later into the video (for
whatever reason). Instead of appending large amounts of silence to the audio
buffer (and refusing to sync if the audio to append is "too large"),
just wait until enough video has played.
Regression since commit 261506e3. Internally speaking, playback was
often not properly terminated, and the main part of handle_keep_open()
was executed only once, instead of every time the user tries to seek. This
means playback_pts was not set, and the "current time" was determined by
the seek target PTS.
So fix this aspect of video EOF handling, and also remove the now
unnecessary eof_reached field.
The pause check before calling pause_player() is a lazy workaround for
a strange event feedback loop that happens on EOF with --keep-open.
If an imprecise seek is issued while a precise seek is ongoing, don't
wait up to 300ms (a heuristic which usually improves the user
experience), but instead let the new seek cancel the ongoing one.
Improves responsiveness of the OSC after the previous commit.
Note that we don't do this on "default-precise" seeks, because we
don't know if they're going to be precise or not.
It probably happens relatively often that the first packet (or even the
first N packets) of a stream will fail to decode, but decoding will
eventually succeed at a later point. Before commit 261506e3, this was
handled by an explicit retry loop (although this was also for other
purposes), but was then changed to abort on the first error. This
makes it impossible to decode some audio streams.
Change this so that errors are ignored for the first 50 packets, which
should make it equivalent to the old code.
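As a rough sketch of the idea (the helper and variable names here are
made up, not the actual decoder interface):

    // Tolerate decode errors on the first packets instead of aborting
    // immediately; give up only if errors persist past ~50 packets.
    #define MAX_BAD_INITIAL_PACKETS 50

    int packets_read = 0;
    for (;;) {
        int r = decode_next_packet(da);   // hypothetical helper
        packets_read++;
        if (r >= 0)
            break;                        // first successful decode
        if (packets_read > MAX_BAD_INITIAL_PACKETS)
            return -1;                    // persistent failure: give up
    }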
If you for example use --audio-file, disable the external track, seek,
and enable the external track again, the playback position of the
external file was off, and you would get major A/V desync. This was
actually supposed to work, but broke some time ago (probably commit
2b87415f). It didn't work, because it attempted to seek the stream if it
was already selected, which was always true due to
reselect_demux_streams() being called before that.
Fix by putting the initial selection and the seek together.
Seeking in .ts files (and some other formats) is too unreliable, so
there's a separate code path for this case. But it breaks hr-seek.
Maybe hr-seek could actually be enabled in this case if we're careful
enough about timestamp resets, but for now nothing changes.
With software decoding, images were uploaded to vdpau surfaces as they
were queued to the VO. This makes the code slightly more complicated
(especially later on), and has no advantages - so stop doing it.
The only reason why this was done explicitly was due to attempts to keep
the code equivalent (instead of risking performance regressions). The
original code did this naturally for certain reasons, but now that we
can measure that it has no advantages and just requires extra code, we
can just drop it.
If the actual PTS is not known yet right after a seek, the "time-pos"
property will just return the seek target PTS. For this purpose, trigger
a change event to make the client API update the "time-pos" and related
properties. (MPV_EVENT_TICK triggers this update.)
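For reference, a client that wants to pick these updates up would
typically observe the property; a minimal libmpv sketch (assuming
<mpv/client.h>, <stdio.h>, <string.h> and an already created
mpv_handle *mpv):

    mpv_observe_property(mpv, 0, "time-pos", MPV_FORMAT_DOUBLE);
    for (;;) {
        mpv_event *ev = mpv_wait_event(mpv, -1);
        if (ev->event_id == MPV_EVENT_SHUTDOWN)
            break;
        if (ev->event_id != MPV_EVENT_PROPERTY_CHANGE)
            continue;
        mpv_event_property *prop = ev->data;
        if (prop->format == MPV_FORMAT_DOUBLE &&
            strcmp(prop->name, "time-pos") == 0)
            printf("time-pos: %f\n", *(double *)prop->data);
    }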
Commit 261506e3 made constant seeking feel slower, because a subtle
change in the restart logic makes it now waste time showing another
video frame. The slowdown is about 20%.
(Background: the seek logic explicitly waits until a video frame is
displayed, because this makes it easier for the user to search for
something in the video. Without this logic, the display would freeze
until the user stops giving seek commands.)
Fix this by letting the seek logic issue another seek as soon as the
first video frame is displayed. This will prevent it from showing a
(useless, slow) second frame. Now it seems to be as fast as before the
change.
One side-effect is that the next seek happens after the first video
frame, but _before_ audio is restarted. Seeking is now silent. I guess
this is ok, so we don't do anything about it. Actually, I think whether
this happens is probably random; the seeking logic simply doesn't make
this explicit, so anything can happen.
Usually mp_image_params_guess_csp takes care of finding *some* default
for a video channel, but files with no video (or with extremely broken
configurations) end up leaving the colorspace information as
MP_CSP_PRIM_AUTO, which has no associated primaries.
As a result of this, color managed OSD messages could not display
because they were being color managed to match the (non-existing/absurd)
video channel. With this change, such non-colorspaces will be given
BT.709 primaries as far as color management and the OSD are concerned.
This fixes #961.
CC: @mpv-player/stable
Signed-off-by: wm4 <wm4@nowhere>
This commit makes audio decoding non-blocking. If e.g. the network is
too slow, the playloop will just go to sleep, instead of blocking until
enough data is available.
For video, this was already done with commit 7083f88c. For audio, it's
unfortunately much more complicated, because the audio decoder was used
in a blocking manner. Large changes are required to get around this.
The whole playback restart mechanism must be turned into a state machine,
especially since it has close interactions with video restart. Lots of
video code is thus also changed.
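To illustrate the direction (the state names below are invented for
illustration, not the actual identifiers):

    // The blocking "restart audio now" step becomes explicit states that
    // the playloop can leave and re-enter whenever it has to sleep.
    enum audio_restart_state {
        RESTART_NONE,      // normal playback
        RESTART_DECODING,  // waiting until enough audio is decoded
        RESTART_SYNCING,   // waiting for video, so A/V can be aligned
        RESTART_PLAYING,   // AO started, transition back to NONE
    };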
(For the record, I don't think switching this code to threads would
make this conceptually easier: the code would still have to deal with
external input while blocked, so these in-between states do get visible
[and thus need to be handled] anyway. On the other hand, it certainly
should be possible to modularize this code a bit better.)
This will probably cause a bunch of regressions.
Apparently this switch means all mouse input should be strictly
rejected. Some VO backends (such as X11) explicitly disable all mouse
events if this option is set, but others don't. So check them in
input.c, which increases consistency.
Follow up on commit 760548da. Mouse handling is a bit confusing, because
there are at least 3 coordinate systems associated with it, and it
should be cleaned up. But that is hard, so just apply a hack which gets
the currently-annoying issue (VO backends needing access to the VO) out
of the way.
Add an option that enables using native PulseAudio auto-updated timing
information, instead of the manual calculations added in mplayer2 times.
You can use --ao=pulse:no-latency-hacks to enable the new code. The code
is almost the same as the code that was removed with commit de435ed5,
but I didn't re-add some bits I didn't understand. Likewise, the option
will disable the code added with that commit.
In my tests this seemed to work well, though the A/V sync display looks
funny when seeking.
The default is still the old behavior.
See issue #959.
This was needed by very old PulseAudio (0.9) versions only. Get rid of it.
Unfortunately, I can't cross-check with the original bug report, since
the bug URL leads to this:
Internal Server Error
TracError: IOError: [Errno 2] No such file or directory: '/home/lennart/svn/trac/pulseaudio/VERSION'
The windows message loop now runs in a separate thread. Rendering,
such as with Direct3D or OpenGL, still happens in the main thread.
In particular, this should prevent the video from freezing if the
window is dragged. (The reason was that the message dispatcher won't
return while the dragging is active, so mpv couldn't update the
video at all.)
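Roughly, the split looks like this (a simplified sketch, not the
actual code):

    #include <windows.h>

    static DWORD WINAPI gui_thread(LPVOID arg)
    {
        // ... create the (hidden) window here, using arg as context ...
        (void)arg;
        MSG msg;
        // This thread owns the window; it keeps dispatching messages
        // even while the user drags the window around.
        while (GetMessageW(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        return 0;
    }

    // The main thread only starts the GUI thread and keeps rendering:
    //     CreateThread(NULL, 0, gui_thread, ctx, 0, NULL);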
This is pretty "rough" and just hacked in, and there's no API yet to
make this easier for other backends. It will be cleaned up later
once we're sure that it works, or when we know how exactly it should
work. One oddity is that OpenGL is actually completely managed in the
renderer thread, while e.g. Cocoa (which has its own threading code)
creates the context in the GUI thread, and then lets the renderer
thread access it.
One strange issue is that we now have to stop WM_CLOSE from actually
closing the window. Instead, we wait until the playloop handles the
close command, and requests the VO to shutdown. This is done mainly
because closing the window apparently destroys it, and then WM_USER
can't be handled anymore - which means the playloop has no way to
wake up the GUI thread. It seems you can't really win here... maybe
there's a better way to have a thread receive messages with and
without a window, but I didn't find one yet.
Dragging the window (by clicking into the middle of it) behaves
strangely in wine, but didn't before the change. Reason unknown.
VO backends which run (or will run) in their own thread have a problem
with vo_mouse_movement() calling vo_control(). Restrict this to VOs
which actually need this.
win32 does not provide a proper per-window context pointer. Although it
does allow passing a user-chosen value to WM_CREATE/WM_NCCREATE, this
is not enough - the first message doesn't even have to be WM_NCCREATE.
This gets us in trouble later on, so go the easy route and just use a
TLS variable.
__thread is gcc-specific, but Windows is a very "special" platform
anyway. We support only MinGW and Cygwin on it, so who cares. (C11
standardizes __thread as _Thread_local; we can use that later.)
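A minimal sketch of the pattern (struct and variable names are
invented; the real code is more involved):

    #include <windows.h>

    struct w32_ctx { int dummy; };   // placeholder for the real state

    // Set by the creating thread right before CreateWindow(), and read
    // by the window procedure, which can be entered before WM_NCCREATE
    // has a chance to deliver any user pointer.
    static __thread struct w32_ctx *tls_ctx;

    static LRESULT CALLBACK wndproc(HWND hwnd, UINT msg, WPARAM wp,
                                    LPARAM lp)
    {
        struct w32_ctx *ctx = tls_ctx;
        (void)ctx;                   // ... use ctx here ...
        return DefWindowProcW(hwnd, msg, wp, lp);
    }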
Plan 9 has a very interesting synchronization mechanism, the
rendezvous() call. A good property of this is that you don't need to
explicitly initialize and destroy a barrier object, unlike with e.g.
POSIX barriers (which are mandatory to begin with). Upon "meeting", the
two threads can exchange a value.
This mechanism will be nice to synchronize certain stages of
initialization between threads in the following commit.
Unlike Plan 9 rendezvous(), this is not implemented with a hashtable,
because that would require additional effort (especially if you want to
make it actually scale). Unlike the Plan 9 variant, we use intptr_t
instead of void* as type for the value, because I expect that we will be
mostly passing a status code as value and not a pointer. Converting an
integer to void* requires two casts (because the integer needs to be an
intptr_t first); the other way around, it's only one cast.
We don't particularly care about performance in this case either. It's
simply not important for our use-case. So a simple linked list is used
for waiters, and on wakeup, all waiters are temporarily woken up.
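Usage will look roughly like this (the name and signature are what the
following commit is expected to add, so treat them as an assumption):

    #include <stdint.h>

    // Assumed interface of the new call:
    intptr_t mp_rendezvous(void *tag, intptr_t value);

    static int tag;  // only the address matters; it names the meeting point

    // Thread A:
    //     intptr_t b_status = mp_rendezvous(&tag, a_status);
    // Thread B:
    //     intptr_t a_status = mp_rendezvous(&tag, b_status);
    //
    // Both calls block until the other side arrives; each then returns
    // the value passed by its partner. Nothing has to be initialized or
    // destroyed for this.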
This shouldn't change anything. But it's worth making this explicit,
since it's very subtle and unintuitive: if the X parameter is the
magic value CW_USEDEFAULT, then the Y parameter is used as nCmdShow
parameter to ShowWindow(). And in our case, this is SW_HIDE (0),
because we want to create a hidden window.
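Concretely, the call ends up looking roughly like this (class name,
title, style and instance are placeholders):

    // If x is CW_USEDEFAULT, y is used as the nCmdShow parameter of the
    // window's first ShowWindow() call; SW_HIDE (0) keeps it hidden.
    HWND hwnd = CreateWindowExW(0, L"mpv", L"mpv", WS_OVERLAPPEDWINDOW,
                                CW_USEDEFAULT, SW_HIDE,        // x, y
                                CW_USEDEFAULT, CW_USEDEFAULT,  // w, h
                                NULL, NULL, hinstance, NULL);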
This looked a bit overcomplicated. We don't care about the window
position (it should always be 0/0, unless the parent program moved it,
which it shouldn't). We don't care about the global on-screen position.
Also, we will just retrieve a WM_SIZE message if our window is resized,
and we don't need to update it manually.
The only thing we have to do is to make sure our window fills the parent
window completely.
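Keeping the window sized to the parent then boils down to something
like this (a sketch; error handling omitted):

    RECT r;
    GetClientRect(GetParent(hwnd), &r);
    // The client rect's top-left is always 0/0, so only the size matters.
    SetWindowPos(hwnd, NULL, 0, 0, r.right, r.bottom,
                 SWP_NOZORDER | SWP_NOACTIVATE);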
CS_OWNDC will make GetDC() always return the same HDC. This might
become a problem when OpenGL rendering and window management are
on different threads. Although I'm not too sure about this; our
code never requests a HDC outside of the OpenGL backend, and it
depends on whether win32 will internally request DCs. But in any
case, this seems to be cleaner.
Move the GL pixelformat setup code to gl_w32.c, where it's actually
needed. This also takes care of the fact that SetPixelFormat() should be
called only once, and not every time the video params change.
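The one-time setup in the GL backend then looks roughly like this
(simplified; the attribute values are just illustrative):

    PIXELFORMATDESCRIPTOR pfd = {
        .nSize      = sizeof(pfd),
        .nVersion   = 1,
        .dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                      PFD_DOUBLEBUFFER,
        .iPixelType = PFD_TYPE_RGBA,
        .cColorBits = 24,
    };
    HDC dc = GetDC(hwnd);
    int fmt = ChoosePixelFormat(dc, &pfd);
    // win32 allows setting a window's pixel format only once, so do it
    // during GL context creation, not on every video reconfig.
    SetPixelFormat(dc, fmt, &pfd);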
These mostly describe self-explanatory things, and fail to explain the
actually tricky parts, which means you just waste your time reading them,
and have to figure it out from the code anyway.
Preparation for moving win32 windowing to a separate thread.
The code size is reduced a bit, because some small functions are
inlined, which reduces noise.
The main change is that now most functions use the private struct
directly, instead of accessing it indirectly through vo->w32.
Accesses to vo are minimized.
The final goal is adding some sort of new windowing backend API. It
would be cleaner to use that as context pointer for all functions
(like struct vo was previously used), but since this is work in
progress, we just go with this commit.
ao_null is used to stop autoprobing (if all AOs before it fail to init).
After it come things like ao_pcm, which should never be automatically
selected.
Remove a certain theoretically possible failure case, and force "some"
fallback.
mp_make_wakeup_pipe() always fails on win32. If this call fails on Linux
(and e.g. ao_alsa is used), this will probably burn CPU since poll()
won't work on the invalid file descriptor, but whatever, the failure
case is obscure enough.
Actually free the old mmap region when re-adding an overlay with the same
ID without removing it first. (This is explicitly documented as
working.)
Replace the OSD atomically. Before this commit, the overlays were
removed and then re-added to avoid synchronization problems.
Simplify the code: now there is no weird mapping between index and ID.
The OSD sub-bitmap list still needs to be prepared to skip unused IDs
(since each sub-bitmap list entry must be in use), but the code for this
is relatively separated now.
Fixes issue #956.
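For reference, the mechanism being fixed is the overlay command a
client can issue, roughly like this (the values are made up; see the
manual for the exact overlay_add parameters):

    const char *cmd[] = {"overlay_add",
                         "0",                  // overlay ID (reused on replace)
                         "10", "10",           // position
                         "/tmp/overlay.bgra",  // file mapped for the image data
                         "0", "bgra",          // byte offset and pixel format
                         "320", "240", "1280", // width, height, stride
                         NULL};
    mpv_command(mpv, cmd);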
Currently entries are added after the current playlist element. This is kinda
confusing, more so given that "loadfile append" appends at the end of the
playlist.
Accidentally broken in b6af44d3. For ad_lavc (and in general), the PTS
was not updated correctly when filtering only parts of audio frames,
and for ad_mpg123 and ad_spdif the PTS was additionally offset by the
frame size.
This could lead to incorrect time display, and possibly broken A/V sync.
Execute the format change based on whether we logically detected EOF
(after filters), instead of when the decode buffer was drained. It's
slightly cleaner. (The requirement of len>0 existed before.)
Don't return an EOF code if there's still buffered data.
Also, don't call demux_stream_eof() in the playloop. There's probably
nothing wrong with it, but it's cleaner not to use it.
Also give AD_EOF its own value, so that a decoding error doesn't drain
audio by causing an EOF condition.
Move a function call, which does not change semantics.
Write the extra buffer sample count in a more straightforward way; the
old code was not meaningful in any way (anymore).
It's true that the decoder can successfully decode, but return no data
(for various reasons). We don't need to handle this specially, though.
We just let the decoder decode some more data. This doesn't increase the
danger of an endless loop either, because audio_decode() already calls
this function until enough is decoded.
"loadfile filename append-play" will now always append the file to the
playlist, and if nothing is playing yet, start playback. I don't want to
change the semantics of "append" mode, so a new mode is needed.
Probably fixes issue #950.