If the win32 taskbar progress update is sent before the VO window is
created, then w32_common.c will ignore it because the actual taskbar
object was not created yet. (At least this is what I suspect happens.
The window is already created at this point, but not mapped.)
Hopefully this is fixed by delaying creation until after the window is
created, i.e. the VO has been configured at least once.
Untested (who wants to boot into Windows just to wait until it has
applied all of its stupid updates).
It's also not clear whether update_vo_playback_state() will actually be
called soon enough in all cases. It probably is.
Probably fixes #3482.
The --cache option and cache property conflict, so one of them has to be
renamed. The option is probably used frequently, so initiate
deprecation/rename of the property.
Create the core thread right in mpv_create(), and reduce what
mpv_initialize() does further. This is simpler, and allows the API user
to do more before calling mpv_initialize(). The latter is not the real
goal; rather, we'd like mpv_initialize() reduced to do almost nothing. It
still does a lot, but nothing truly special anymore that is absolutely
required for basic mpv workings.
One thing we want the user to be able to do is changing properties
before mpv_initialize() to reduce the special status of
mpv_set_option().
Make some existing properties behave more like options. This mostly
means they don't deny access if the associated component is not active,
but redirect to the option instead.
One kind of fishy change is that we apply --brightness etc. only if
they're not set to the default value. This won't necessarily work with
--vo=xv, but affects only cases where 1. the Xv adapter has been changed
to non-defaults, and 2. the user tries to reset them with mpv by passing
e.g. --brightness=0. We don't care about Xv, and the noted use-case is
dumb, so this change is acceptable.
These conflict with options of the same name, and prevent a "full"
unification. Not addressed is the "cache" property, and possibly a few
properties that behave differently from their equivalent options.
Now options are accessible through the property list as well, which
unifies them to a degree.
Not all options support runtime changes (meaning affected components
need to be restarted for the options to take effect). Remove from the
manpage those properties which are cleanly mapped to options anyway.
From the user-perspective they're just options available through the
property interface.
Before this commit, all VOs had to toggle the option flag themselves,
now command.c does it.
I can't really comprehend why it required every VO to do this manually.
Maybe it was for rejecting the property/option change if the VO didn't
support a specific capability. But then it could have checked the VOCTRL
result. In any case, I don't care, and successfully changing the
property without doing anything (with some VOs) is fine too. Many things
work this way now, and it's simpler overall.
This change will be useful for cleaning up VO option handling.
Just a minor refactor along the planned option change. This commit will
make it easier to update (i.e. copy) the VO options without copying
_all_ options. For now, behavior should be equivalent, though.
(The VO options were put into a separate struct quite early - when all
global variables were removed from the source code. It wasn't clear
whether the separate struct would have any actual purpose, but it seems
it will now. Awesome, huh.)
Some files not only use rounded timestamps, but they also do it
incorrectly. They may jitter between up to 4 specific frame durations.
In this case, I found a file that mostly used 41ms and 42ms, but also
had 40ms and 43ms outliers (often but not always following each other).
This breaks the assumption of the framerate estimation code that the
frame duration can deviate up to 1ms. If it jitters around 4 possible
frame durations, the maximum deviation is 3ms. Increase it accordingly.
The change might make playback of "true VFR" video via display-sync mode
worse, but it's not like it was particularly good in the first place.
Also, the check whether to use the container FPS needs to be stricter.
In the worst case, num_dur is 1, which doesn't really indicate any
evidence that the framerate is correct. Only if there are "enough"
frames does the deviation check become meaningful. 16 is an arbitrary
value that has been designated "enough" by myself.
Also output the frame duration values for --dump-stats.
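A minimal sketch of the adjusted acceptance test (hypothetical names;
the actual estimation code is structured differently):

    #include <math.h>
    #include <stdbool.h>

    #define MAX_JITTER  0.003   // widened from the old 1ms assumption
    #define MIN_SAMPLES 16      // arbitrary "enough" threshold

    // Trust the container FPS only if enough frame durations were
    // sampled, and all of them stay within the jitter window.
    static bool container_fps_plausible(const double *durations,
                                        int num_dur, double container_dur)
    {
        if (num_dur < MIN_SAMPLES)
            return false;       // num_dur == 1 is no evidence at all
        for (int n = 0; n < num_dur; n++) {
            if (fabs(durations[n] - container_dur) > MAX_JITTER)
                return false;
        }
        return true;
    }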
Normally, OSD can be disabled with --osd-level=0. But this also disables
terminal OSD, and some users want _only_ the terminal OSD. Add
--video-osd=no, which essentially disables the video OSD.
Ideally, it should probably be possible to control terminal and video
OSD levels independently, but that would require separate OSD timers
(and other state) for both components, so don't do it. But because the
current situation isn't too ideal, add a threat to the manpage that this
might be changed in the future.
Fixes #3387.
Cleaner and makes it easier to change the underlying stream.
mp_property_stream_capture() still accesses it directly via
demux_run_on_thread(). This is evil, but still somewhat sane, and it
does not get in the way here.
Not sure if I got all field accesses.
If spdif is enabled, the channel layout has no meaning other than
setting the number of channels. The number of channels must be fixed to
achieve the exact bitrate required.
Fixes #3445.
Doing this required synchronizing with the VO thread, which could lead
to audio dropouts if the VO was frozen (which can happen in practice if
e.g. an opengl_cb user is not doing what the API demands).
Add a way to send asynchronous VOCTRLs, and use that for the playback
state. In theory, it would be better to make this status update a
separate function and to "merge" several queued updates, but that would be
slightly more effort/code, and the update is so infrequent that the
merging would never happen anyway.
The change to vo_destroy() is to make sure all queued asynchronous
requests are finished before making the VO thread exit.
Even though it's only used on MS Windows, it's run on any platform with
any VO, which makes this worse.
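A rough sketch of the shape of such a mechanism (hypothetical names and
layout; the real request handling differs):

    // A queued asynchronous VOCTRL. The VO thread processes and frees
    // entries; vo_destroy() drains the queue before the thread exits.
    struct vo_async_request {
        int request;                    // e.g. the playback state VOCTRL
        void *data;                     // owned by the request
        struct vo_async_request *next;  // simple singly-linked queue
    };

    // Queue a VOCTRL without blocking on the VO thread's current work.
    void vo_control_async(struct vo *vo, int request, void *data);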
When fetching the playlist property, playlist_entry_from_index would be
called for each playlist entry, which traversed a linked list to get the
entry corresponding to the specified index. This was very slow for large
playlists. Since get_playlist_entry is called for each index in order,
it can avoid a full traversal of the linked list by using the next
pointer on the previously requested entry.
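The optimization amounts to remembering the last returned entry (a
sketch; in the real code the cached pointer would live in the caller's
context rather than in static variables):

    static struct playlist_entry *get_playlist_entry(struct playlist *pl,
                                                     int index)
    {
        static struct playlist_entry *prev_entry;
        static int prev_index = -1;
        struct playlist_entry *e;
        if (prev_entry && index == prev_index + 1) {
            e = prev_entry->next;   // O(1) for in-order access
        } else {
            e = pl->first;          // O(n) fallback: walk from the head
            for (int n = 0; e && n < index; n++)
                e = e->next;
        }
        prev_entry = e;
        prev_index = index;
        return e;
    }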
This affects A-B loops and --loop-file, and audio. Instead of dropping
audio by resetting the AO, try to make it seamless by not sending data
after the loop point, and after the seek send new data without a reset.
The code actually kept going out of EOF mode into resync mode and back into
EOF mode when the playloop had to wait after an audio EOF caused by the
endpts. This would break seamless looping (as added by the next commit).
Apply endpts earlier, to ensure the filter_audio() function always
returns AD_EOF in this case.
The idiotic ao_buffer makes this an amazing pain in the ass.
Instead of letting it keep decoding by trying to find a new frame,
"plug" the frame queue by not removing it. (Or actually, by putting
it back instead of discarding it.)
Matters for seamless looping (following commits), and possibly some
other corner cases.
The added function vf_unread_output_frame() is a bit of a sin, but still
reasonable, since its implementation is trivial.
The --image-display-duration option controls how long an image is
displayed. It's also possible to display the image forever (until manual
user interaction stops playback).
With this, the core drops the old method to "drain" video (i.e. waiting
for the last frame duration on end of playback). Instead, we reuse
MPContext.time_frame. The old mechanism was disabled for non-images
anyway.
Fixes #3425.
Change the last parameter from a bool to an int, which is supposed to
take bit-flags. The only flag at this point is MPSEEK_FLAG_DELAY, which
replaces the previous bool parameter. The old false parameter becomes 0,
the old true parameter becomes MPSEEK_FLAG_DELAY.
Since the old "immediate" parameter is now essentially inverted, two
coalesced immediate and delayed seeks end up as delayed instead of
immediate. This change doesn't matter, since there are no relative
immediate seeks anyway.
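At the call sites, the translation looks roughly like this (sketch;
parameter names are illustrative):

    #define MPSEEK_FLAG_DELAY (1 << 0)  // the only defined flag so far

    // old: queue_seek(mpctx, MPSEEK_RELATIVE, amount, exact, false);
    // new: queue_seek(mpctx, MPSEEK_RELATIVE, amount, exact, 0);

    // old: queue_seek(mpctx, MPSEEK_RELATIVE, amount, exact, true);
    // new: queue_seek(mpctx, MPSEEK_RELATIVE, amount, exact,
    //                 MPSEEK_FLAG_DELAY);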
Relative seeks backwards with external audio tracks do not always work
well: it tends to happen that video seeks back further than audio, so
audio will remain silent until the audio's after-seek position is
reached. This happens because we strictly seek both the video and audio
demuxers to the approximate desired target PTS, and then start decoding
from there.
Commit 81358380 removes an older method that was supposed to deal with
this. It was sort of bad, because it could lead to the playback core
freezing by waiting on network.
Ideally, the demuxer layer would probably somehow deal with such seeks,
and do them in a way the audio is seeked after video. Currently this is
infeasible, because the demuxer layer assumes a single demuxer, and
external tracks simply use separate demuxer layers. (MPlayer actually
had a pseudo-demuxer that joined external tracks into a single demuxer,
but this is not flexible enough - and also, the demuxer layer as it
currently exists can't deal with dynamically removing external tracks
either. Maybe some time in the future.)
Instead, add a gross hack, that essentially reseeks the audio if it
detects that it's too far off. The result is actually not too bad,
because we can reuse the mechanism that is used for instant track
switching. This way we can make sure of the right position, without
having to care about certain other issues.
It should be noted that if the audio demuxer is used for other tracks
too, and the demuxer does not support refresh seeking, audio will
probably be off by an even higher amount. But this should be rare.
This code is for resyncing audio-only streams (e.g. switching between
audio tracks if no video track is active). This must not be run if the
video PTS just isn't known yet. (Although the case in which this changes
anything is probably very obscure, if it can even happen. Still, it's a
bit more correct.)
This is a correction to commit 91a3bda6.
In display-sync mode, the very first video frame is idiotically fully
timed, even though audio has not been synced yet at this point, and the
video frame is more like a "preview" frame. But since it's fully timed,
an underflow is detected if audio takes longer than the display time of
the frame (we send the second frame only after audio is done).
The timing code will try to compensate for the determined desync, but it
really shouldn't. So explicitly discard the timing info in this specific
case. On the other hand, if the first frame still hasn't finished
display, we can pretend everything is ok.
This is a hack - ideally, we either would send a frame without timing
info (and then send it again or so when playback starts properly), or we
would add real pause support to the VO, and pause it during syncing.
If an audio track is enabled during playback, then make it resume at the
exact "current position", instead of playing audio before that position.
This was already done for video.
When switching tracks, we normally have the problem that data gets lost
due to readahead buffering. (Which in turn is because we're stubborn and
instruct the demuxers to discard data on unselected streams.) The
demuxer layer has a hack that re-reads discarded buffered data if a
stream is enabled mid-stream, so track switching will seem instant.
A somewhat similar problem is when all tracks of an external file were
disabled - when enabling the first track, we have to seek to the target
position.
Handle these with the same mechanism. Pass the "current time" to the
demuxer's stream switch function, and let the demuxer figure out what to
do. The demuxer will issue a refresh seek (if possible) to update the
new stream, or will issue a "normal" seek if there was no active stream
yet.
One case that changes is when a video/audio stream is enabled on an
external file with only a subtitle stream active, and the demuxer does
not support refresh seeks. This is a fuzzy case, because subtitles are
sparse, and the demuxer might have skipped large amounts of data. We
used to seek (and send the subtitle decoder some subtitle packets
twice). This case is sort of obscure and insane, and the fix would be
questionable, so we simply don't care.
Should mostly fix #3392.
This commit adds an --audio-channel=auto-safe mode, and makes it the
default. This mode behaves like "auto" with most AOs, except with
ao_alsa. The intention is to allow multichannel output by default on
sane APIs. ALSA is not sane in the sense that it's so low level that it will e.g.
configure any layout over HDMI, even if the connected A/V receiver does
not support it. The HDMI fuckup is of course not ALSA's fault, but other
audio APIs normally isolate applications from dealing with this and
require the user to globally configure the correct output layout.
This will help with other AOs too. ao_lavc (encoding) is changed to the
new semantics as well, because it used to force stereo (perhaps because
encoding mode is supposed to produce safe files for crap devices?).
Exclusive mode output on Windows might need to be adjusted accordingly,
as it grants the same kind of low level access as ALSA (requires more
research).
In addition to the things mentioned above, the --audio-channels option
is extended to accept a set of channel layouts. This is supposed to be
the correct way to configure mpv ALSA multichannel output. You need to
put a list of channel layouts that your A/V receiver supports.
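For example, a setup whose receiver handles 5.1 and stereo but not 7.1
might use something like this in mpv.conf (check the manual for the
exact accepted layout names):

    # List the channel layouts the connected A/V receiver supports.
    audio-channels=5.1,stereo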
Pointless anyway. With superficial checking I couldn't find any decoder
which actually outputs this, and AO chmap negotiation would properly
ignore them anyway in most cases.
Assume you use a large value like --audio-delay=20. Then until now the
player would just have seeked normally to a "too late" position, and
played silence for about 20 seconds until audio in the correct time
range is coming again.
Change this by offsetting seeks by the right amount. This works for both
external and muxed files. If a seek isn't precise, then it works only
for external files.
This might cause issues with very large delay options. Hr-seek skipping
could take a lot of time (especially because it affects video too), the
demuxer queue could overflow, and other weird corner cases could appear.
But we just try this on best-effort basis, and if the user uses extreme
values we don't guarantee good behavior.
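The idea, as a rough sketch (hypothetical names; the sign convention
assumed here is that a positive --audio-delay presents audio later than
its timestamps say):

    // Seek the audio demuxer earlier by the configured delay, so the
    // audio data needed at the seek target is actually demuxed instead
    // of being skipped (which used to result in ~20s of silence).
    double audio_target = seek_pts - opts->audio_delay;
    demux_seek(audio_demuxer, audio_target, SEEK_HR);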
Otherwise it behaves dumb. (Although you could argue it shouldn't try to
guess whether speed changes work, but instead simply disable DS if they
don't work.)
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to cleanup this code a bit and make it work like the
video filter code.
As part of this change, make the balance code work like the volume code,
and add an option to back the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and there
are some other oddities, formally deprecate both the old property and the
new option.
Old-style commands using _ as separator (e.g. show_progress) were still
used in some places, including documentation and configuration files.
This commit updates all such instances to the new style (show-progress)
so that commands are easier to find in the manual.
Since it turns out that knowing what exactly a file was tagged with can
be useful for debugging purposes, expose this as a property so I can
check it more easily.
This is mostly useful for sig-peak (since nom-peak is currently entirely
calculated by us), but I added both for consistency.
--deinterlace=auto is the default, and has the obscure semantics that
deinterlacing is disabled, unless the user has manually inserted a
deinterlacing filter.
While in software decoding this doesn't matter, and we will happily
insert 2 yadif filters (if the user has already added one), or not
remove the yadif filter (if deinterlacing is disabled, but the user has
added the filter manually), this is different with hardware deinterlacer
filters. These support VFCTRL_SET_DEINTERLACE for toggling deinterlacing
filtering at runtime. It exists mainly for legacy reasons, and possibly
because it makes switching deinterlacing modes more efficient. It might
also give us an entry point for VO deinterlacing, maybe. For whatever
reasons this mechanism exists, we still support and use it.
This commit fixes that video.c always used VFCTRL_SET_DEINTERLACE to
disable deinterlacing, even if --deinterlace=auto was set. Fix this by
checking the value of the option directly.
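The fix boils down to consulting the option before toggling (a sketch
with illustrative names and an assumed option encoding):

    // Only toggle via VFCTRL if the user explicitly set the option;
    // --deinterlace=auto must leave manually inserted filters alone.
    int opt = mpctx->opts->deinterlace;  // assumed: -1 auto, 0 no, 1 yes
    if (opt >= 0) {
        int enable = opt;
        vf_control_any(vf_chain, VFCTRL_SET_DEINTERLACE, &enable);
    }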
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
Do this because this was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes #3322.
Commit 771a8bf5 added code to avoid unnecessary vf_reconfig() calls for
unrelated reasons, but forgot to consider that it has to be called at
least once if the input format changes. As a consequence it got "stuck"
due to not being able to decode more frames.
vo_frame can have more than 1 frame - the extra frames are future
references, which are sometimes useful for filtering (vo_opengl
interpolation). There's no harm in reducing the number of future frames
sent to the VO to the requested amount, so do that.
Doesn't actually reduce the number of concurrently in use frames in
practice.
If the status line is wider than the reported terminal size, then cut it
off instead of causing the terminal to scroll down for the next line.
This is done in the most primitive way possible, assuming ASCII.
This was actually done in the past as far as I'm aware; do it again.
(Probably differently.)
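In the simplest (ASCII-only) form, this is just a length clamp (sketch):

    #include <stdio.h>
    #include <string.h>

    // Primitive: assume 1 byte == 1 terminal cell (ASCII), and chop the
    // line at the reported width instead of letting the terminal wrap.
    static void print_status_line(char *line, size_t term_width)
    {
        if (strlen(line) > term_width)
            line[term_width] = '\0';
        fprintf(stderr, "%s\r", line);
    }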
Some filters support VFCTRL_SET_DEINTERLACE. This affects most hardware
deinterlace filters. They can be inserted by the user manually, or auto-
inserted by vf.c itself as conversion filter (vf_d3d11vpp). In these
cases, we shouldn't insert or remove filters ourselves, and instead
VFCTRL_SET_DEINTERLACE should be invoked to switch the mode.
This wasn't done correctly in the recently refactored code and could
have broken with --deinterlace. (The refactor only considered switching
via property in this case.) Fix it by making it a proper part of the
filter_reconfig() function, and making set_deinterlacing() (which is
called by the property handler) merely call filter_reconfig() in all
cases to do the real work.
We can even avoid rebuilding the filter chain - though only if no other
auto-filters are inserted. It probably also provides a slightly cleaner
way to implement functionality in the VO while still inserting video
filter fallbacks correctly if required.
The test scenario at hand was hardware decoding a file with d3d11 and
with deinterlacing enabled. The file switches to a non-hardware
decodable format mid-stream. This failed, because it tried to call
vf_reconfig() with the old filters inserted, which was fatal due to
vf_d3d11vpp accepting only hardware input formats.
Fix this by always strictly removing all auto-inserted filters
(including the deinterlacing one), and reconfiguring only after that.
Note that this change is good for other situations too, because we
generally don't want to use a hardware deinterlacer for software
decoding by default. They're not necessarily optimal, and VAAPI VPP even
has incomprehensible deinterlacer bugs specifically with software frames
not coming from a hardware decoder.
Instead of using the "vf" command code (which changes filters at runtime
on user input), use the general filter-insertion code. The latter was
added later, and is more suitable for automatically inserted filters.
The old code failed in particular when using watch-later saving, which
stored the filter list in the resume config file. If a user changed the
hardware decoding mode via command line, the stored filter chain was out
of date and could cause failure due to not working with hardware or
software decoding mode. Storing the deinterlace filter in the filter
list was unavoidable, because it was part of the user state. (The new
code only edits the actually instantiated filters.)
This has two reasons:
1. I tend to add new fields to this metadata, and every time I've done
so I've consistently forgotten to update all of the dozens of places in
which this colorimetry metadata might end up getting used. While most
usages don't really care about most of the metadata, sometimes the
intent was simply to “copy” the colorimetry metadata from one struct to
another. With this being inside a substruct, those lines of code can now
simply read a.color = b.color without having to care about added or
removed fields.
2. It makes the type definitions nicer for upcoming refactors.
In going through all of the usages, I also expanded a few where I felt
that omitting the “young” fields was a bug.
We don't support this anymore.
This tries to exit in a controlled way after command line options are
applied in order to honor logging options and, in case of libmpv, not to
kill the host. Not sure if it would be better to just vomit text to
stderr and call abort().
Working towards refcounted sub images, and also for removing bitmap
packers from VOs.
I'm not sure why we even have this overlay-add command. It was sort of
"needed" before opengl-cb was introduced, and before Lua scripts could
put ASS drawings on OSD without conflicting with the OSC. But now trying
to use it doesn't make too much sense anymore.
Still keep it because we're trying to be nice, but throw performance out
of the window. Now image data is copied 2 more times before displaying
it. This also makes using the command a bit simpler.
Of course we can't just skip updating the OSD if the playloop was woken
up for the purpose of removing OSD after an OSD timer expired.
Fixes e.g. OSD bars sometimes sticking around when seeking while paused.
Normally, OSD is updated every time the playloop is run. This has to be
done, because the OSD may implicitly reference various properties,
without knowing whether they really need to be updated or not. (There's
a property update mechanism, but it's mostly unavailable, because OSD is
special-cased and can not use the client API mechanism properly.)
Normally, these updates are no problem, because the OSD is only actually
printed when the OSD text actually changes.
But commit d23ffd24 added a rate-limiting mechanism, which tries to
limit OSD updates at most every 50ms (or the next video frame). Since it
can't know in advance whether the OSD is going to change or not, this
simply woke up the player every 50ms.
Change this so that the player is updated only as part of general
updates determined through mp_notify(). (This function also notifies the
client API of changed properties.) The desired result is that the player
will not wake up at all in normal idle mode, but still update properties
that can change when paused, such as the cache.
This is mostly a cosmetic change (in the sense of making runtime
behavior just slightly better). It has the slightly more negative
consequence that properties which update implicitly (such as "clock")
will not update periodically anymore.
Instead of having 9 different properties, requiring 18 different
VOCTRLs to read them all, they are now exposed as a single property.
This is not only cleaner (since they're all together) but also allows
querying all 9 of them with only a single VOCTRL (by using
mp.get_property_native).
(The extra factor of 2 was due to an extra query being needed to get the
type, which is now also unnecessary.)
This makes it much easier to access performance metrics from within a
lua script, and also makes it easier to just show a readable, formatted
version via show-text.
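Through the client API, the same data comes back as a single node (a
sketch; the property introduced here is "vo-performance", and the
sub-field names may differ):

    #include <mpv/client.h>

    // Read all VO performance metrics with one property access instead
    // of many individual VOCTRL-backed reads.
    static void query_vo_performance(mpv_handle *h)
    {
        mpv_node node;
        if (mpv_get_property(h, "vo-performance",
                             MPV_FORMAT_NODE, &node) < 0)
            return;
        // node holds a map with all sub-values at once.
        mpv_free_node_contents(&node);
    }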
This comes up often, see e.g. #3220. The issue is that if the stream
input is not seekable, the demuxer is marked as not seekable. But if the
stream cache is enabled, the file still _might_ be seekable to a degree.
We recently disabled seeking in this mode because it can cause very
weird issues, mostly because if stream-layer seeking fails, the demuxers
will arbitrarily misbehave. On the other hand, it can work if the seek
is within the cached range, which is why the user can still enable it
with --force-seeking. There is a weird trade-off between allowing this
and not crapping up too easily, so just informing the user about the
possibility seems best.
For clang, it's enough to just put (void) around usages we are
intentionally ignoring the result of.
Since GCC does not seem to want to respect this decision, we are forced
to disable the warning globally.
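Concretely (sketch):

    #include <unistd.h>

    static void example(int fd, const void *buf, size_t len)
    {
        // Clang treats this as "result intentionally ignored"...
        (void)write(fd, buf, len);
        // ...GCC warns regardless, hence -Wno-unused-result globally.
    }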
This is plumbed through a new VOCTRL, VOCTRL_PERFORMANCE_DATA, and
exposed as properties render-time-last, render-time-avg etc.
All of these numbers are in microseconds, which gives a good precision
range when just outputting them via show-text. (Lua scripts can
obviously still do their own formatting etc.)
Signed-off-by: wm4 <wm4@nowhere>
We now have a video filter that uses the d3d11 video processor, so it
makes no sense to have one in the VO interop code. The VO uses it for
formats not directly supported by ANGLE (so the video data is converted
to a RGB texture, which ANGLE can take in).
Change this so that the video filter is automatically inserted if
needed. Move the code that maps RGB surfaces to its own interop backend.
Add a bunch of new image formats, which are used to enforce the new
constraints, and to automatically insert the filter only when needed.
The added vf mechanism to auto-insert the d3d11vpp filter is very dumb
and primitive, and will work only for this specific purpose. The format
negotiation mechanism in the filter chain is generally not very pretty,
and mostly broken as well. (libavfilter has a different mechanism, and
these mechanisms don't match well, so vf_lavfi uses some sort of hack.
It only works because hwaccel and non-hwaccel formats are strictly
separated.)
The RGB interop is now only used with older ANGLE versions. The only
reason I'm keeping it is because it's relatively isolated (uses only
existing mechanisms and adds no new concepts), and because I want to be
able to compare the behavior of the old code with the new one for
testing. It will be removed eventually.
If ANGLE has NV12 interop, P010 is now handled by converting to NV12
with the video processor, instead of converting it to RGB and using the
old mechanism to import that as a texture.
Main use: deinterlacing.
I'm not sure how to select the deinterlacing mode at all. You can
enumerate the available video processors, but at least on Intel, all of
them either signal support for all deinterlacers, or none (the latter is
apparently used for IVTC). I haven't found anything that actually tells
the processor _which_ algorithm to use.
Another strange detail is how to select top/bottom fields and field
dominance. At least I'm getting quite similar results to vavpp on Linux,
so I'm content with it for now.
Future plans include removing the D3D11 video processor use from the
ANGLE interop code.
This has often been requested for use on OSD. I don't really like having
such "special" properties, but whatever. Hopefully this will be the only
case.
Untested because I'm too damn lazy.
Fixes #2828.
This uses the normal autoprobing rules like "auto", but rejects anything
that isn't flagged as copying data back to system memory.
The chunk in command.c was dead code, so remove it instead of updating
it.
Commit 786f37ae accidentally changed seeking behavior such that
continuous seeking (holding the seek button down) would use the previous
seek target timestamp, instead of the new video timestamp. (This is for
the default mode, seeking to keyframes.)
The result is that the movement on the seekbar is smooth, but the way
the video updates is awkward. Some might actually prefer the new
behavior (and some players effectively show similar behavior), but I
don't. So restore the old behavior.
This is done in two steps:
First: strictly wait for the entire seek process to finish, which will
effectively make the seeking code pick up the new video timestamp
correctly.
This would play audio immediately, which would result in noise during
continuous seeking, which leads to second: explicitly abort the playback
restarting process if this case is detected, and never play audio.
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
Add --taskbar-progress command line option and property which controls taskbar
progress indication rendering in Windows 7+. This option is on by default and
can be toggled during playback.
This option does not affect the creation process of ITaskbarList3. When the
option is turned off the progress bar is just hidden with TBPF_NOPROGRESS.
Closes #2535
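Usage sketch (option/property name as introduced here):

    # mpv.conf: start with taskbar progress indication disabled
    taskbar-progress=no

    # input.conf: toggle it during playback (hypothetical key binding)
    T cycle taskbar-progress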
Introduce hwdec-current and hwdec-interop properties.
Deprecate hwdec-detected, which never made a lot of sense, and which is
replaced by the new properties. hwdec-active also becomes useless, as
hwdec-current is a superset, so it's deprecated too (for now).
For "current" markers on OSD properties like chapter-list. The marker is
now an actual arrow instead of "> ", and non-current entries will have
the same indentation as the current entry.
While I'm not entirely sure about the new look of those lists, it's a
bit better than the visual mess that was before.
Because it's annoying and feels unnatural.
If the B point is set while paused, don't seek. If not paused, it should
properly loop immediately.
In theory there's a chance that it will show at least 1 frame after the
loop point when setting the B point. But let's not care about that.
This fixes backstepping getting "stuck" when e.g. holding down a key
bound to the backstep command. The reason is that even if the backstep
itself is finished, the next backstep might not take the new video PTS
as reference if the hr-seek itself isn't finished yet.
The intention of not waiting for the hr-seek to finish was faster
backstepping by possibly skipping audio decoding. But it probably
doesn't matter enough to make the rest of the code more complex.
As a positive side-effect, this also errors out gracefully for the
extremely unlikely but possible case certain builtin filters are not
available. (This could happen only with crippled libavfilter builds that
can't be used by anything using its public API.)
Another crappy fix for timestamp reset issues. This time, we try to fix
files which have very weird but legitimate frame durations, such as
cdgraphics. It can have many short frames, but once in a while there are
potentially very long frames.
Fixes #3027.
Commit 382bafcb changed the behavior for ab-loop-a. This commit changes
ab-loop-b so that the behavior is symmetric.
Adjust the OSD rendering according to the two changes.
Also fix mentions of the "ab_loop" command to the now preferred
"ab-loop".
The check whether video is ready yet was done only in STATUS_FILLING.
But it also switched to STATUS_READY, which means the next time
fill_audio_out_buffers() was called, audio would actually be started
before video.
In most situations, this bug didn't show up, because it was only
triggered if the demuxer didn't provide video packets quickly enough,
but did for audio packets.
Also log when audio is started.
(I hate fill_audio_out_buffers(), why did I write it?)
Strictly schedule an update in regular intervals as long as either
stream cache or demuxer are prefetching. Don't update just always
because the stream cache is enabled ("idle != -1") or cache-related
properties are observed (mp_client_event_is_registered()).
Also, the "idle" variable was awkard; get rid of it with equivalent
code.
Calculate the buffering percentage in the same code which determines
whether the player is or should be buffering. In particular it can't
happen that percentage and buffering state are slightly out of sync due
to calling DEMUXER_CTRL_GET_READER_STATE and reusing it with the
previously determined buffering state.
Now it's also easier to guarantee that the buffering state is updated
properly.
Add some more verbose output as well.
(Damn I hate this code, why did I write it?)
And remove the same thing from the client API code.
The command.c code has to deal with many specialized M_PROPERTY_SET_*
actions, and we bother with a subset only.
If a mpv_node wrapped a string, the behavior was different from calling
mpv_set_property() with MPV_FORMAT_STRING directly. Change this.
The original intention was to be strict about types if MPV_FORMAT_NODE
is used. But I think the result was less than ideal, and the same change
towards less strict behavior was made to mpv_set_option() ages ago.
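Both of these now behave identically (client API sketch; the "title"
property is used purely as an example):

    #include <mpv/client.h>

    static void set_both_ways(mpv_handle *h)
    {
        // Plain string format:
        const char *s = "test";
        mpv_set_property(h, "title", MPV_FORMAT_STRING, &s);

        // The same string wrapped in a node - now equivalent:
        mpv_node node = {
            .format = MPV_FORMAT_STRING,
            .u.string = (char *)"test",
        };
        mpv_set_property(h, "title", MPV_FORMAT_NODE, &node);
    }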
Commit 57506b27 accidentally broke this. The status (including the
usually always active demuxer cache) should be shown only if the stream
cache is actually enabled.
Instead of having a separate STREAM_CTRL for each, which also requires separate
additional caching in the demuxer. (The demuxer adds an indirection,
since STREAM_CTRLs are not thread-safe.)
Since this includes the cache speed, this should fix #3003.
This would get stuck in reconfiguring the filter chain forever, because
params was mutated ("params.rotate = 0;"). This was used as input for
vf_reconfig(), but the filter chain input must always be equivalent to
the decoder output, or filter chain reconfiguration will be triggered.
The line of code to reset the rotation is from a time when this used to
work differently.
Also remove the unnecessary try_filter() parameter.
This implements the JSON IPC protocol with named pipes, which are
probably the closest Windows equivalent to Unix domain sockets in terms
of functionality. Like with Unix sockets, this will allow mpv to listen
for IPC connections and handle multiple IPC clients at once. A few cross
platform libraries and frameworks (Qt, node.js) use named pipes for IPC
on Windows and Unix sockets on Linux and Unix, so hopefully this will
ease the creation of portable JSON IPC clients.
Unlike the Unix implementation, this doesn't share code with
--input-file, meaning --input-file on Windows won't understand JSON
commands (yet.) Sharing code and removing the separate implementation in
pipe-win32.c is definitely a possible future improvement.
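A minimal Win32 client sketch (the pipe name is hypothetical; it has to
match whatever mpv was told to listen on):

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        HANDLE h = CreateFileA("\\\\.\\pipe\\mpvsocket",
                               GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;
        // One JSON command per line, same protocol as on Unix sockets.
        const char *cmd =
            "{ \"command\": [\"get_property\", \"playback-time\"] }\n";
        DWORD n;
        WriteFile(h, cmd, (DWORD)strlen(cmd), &n, NULL);
        char buf[512];
        if (ReadFile(h, buf, sizeof(buf) - 1, &n, NULL)) {
            buf[n] = '\0';
            printf("%s", buf);  // JSON reply from mpv
        }
        CloseHandle(h);
        return 0;
    }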
Should reflect I/O speed.
This could go into the terminal status line. But I'm not sure how to put
it there, since it already uses too much space, so it's not there yet.
This changes behavior somewhat. The old behavior can be restored by
running "mp.use_suspend=true". It was originally introduced for the OSC,
but I can't reproduce whatever misbehavior I was seeing.
(See mp.suspend()/resume() for explanations what the suspend mechanism
does.)
This pause stuff is bothersome and is needed only for a few corner-
cases. This commit removes it from the demuxer public API and replaces
it with a demux_run_on_thread() function and refactors the code which
needed demux_pause(). The next commit will change the implementation.
Changing the byte stream position without cooperation of the demuxer
seems a bit insane, and is certainly useless. A user should do factor
seeks instead. For formats like ts, this will actually translate to byte
seeks, while treating the rest of the playback chain a bit more
gracefully. With this argument, remove write access to this property.
If someone really complains, proper byte seeks could be added as seek
mode (although I'm going to need a convincing argument for this).
Read access changes too, but in a more subtle way.
No need to have them everywhere. The only exception/annoyance is
MAX_OSD_PARTS, which is now basically duplicated (and at runtime
initialization is checked with an assert()).
Until now, there was only 1 global ASS overlay that could be set by all
scripts. This was often perceived as bug when multiple scripts tried to
set their own ASS overlay.
This was kind of hard to solve because the script could set its own ASS
PlayResX/Y, which makes it impossible to share a single ASS_Renderer for
multiple scripts. The OSC unfortunately makes use of this feature (and
unfortunately can't be fixed because it's a POS), so we're stuck with
this complication.
Implement the worst-case solution and fix this by creating separate ASS
track and renderer objects for each script that wants to set an ASS
overlay.
The z-order is decided by the order the scripts set their text first.
This is essentially random, unless you do it at script init, and you
pass scripts in a specific order. Script initialization is currently
serialized (as a feature), so the first loaded script gets lowest
Z-order.
The Lua script API interestingly remains the same. (And also will remain
undocumented, unsupported, and potentially volatile.)
Do not scale OSD mouse input to the ASS OSD script resolution. The
original idea of this mechanism was that the user doesn't have to care
about the actual resolution of anything, and can just use the OSD
resolution consistently. But this made things worse.
Remove the implicit scaling, and always use the screen resolution.
(Except with --vo=xv, where additional scaling is forced upon
everything.)
Drop get_osd_resolution(). There is no replacement. Rename
get_screen_size() and get_screen_margins() to use "osd" instead of
"screen". For anything but --vo=xv these are equivalent, but with
--vo=xv the OSD resolution has additional implicit scaling.
Add code to osc.lua which emulates the old behavior.
Note that none of the changed functions were public API, so implicit
breakage of scripts which used it is just going to happen.
Subtitles can be preloaded, which means they're fully read and copied
into ASS_Track. This in turn is mainly for the sake of being able to do
subtitle seeking (when it comes down to it, subtitle seeking is the
cause for most trouble here).
Commit a714f8e92 broke preloaded subtitles which have events with
unknown duration, such as some MicroDVD samples. The event list gets
cleared on every seek, so the property of being preloaded obviously gets
lost.
Fix this by moving most of the preloading logic to dec_sub.c. If the
subtitle list gets cleared, they are not considered preloaded anymore,
and the logic for demuxed subtitles is used.
As another minor thing, preloading subtitles neither disabled the
demux stream nor discarded packets. Thus you could get queue
overflows in theory (harmless, but annoying). Fix this by explicitly
discarding packets in preloaded mode.
In summary, now the only difference between preloaded and normal
demuxing are:
1. a seek is issued, and all packets are read on start
2. during playback, discard the packets instead of feeding them to the
subtitle decoder
This is still pretty annoying. It would be nice if the subtitle index
(and maybe a subtitle packet cache for instant subtitle presentation
when seeking back) could be maintained in the demuxer
instead. Half of all file formats with interleaved subtitles have
this anyway (mp4, mkv muxed with newer mkvmerge).
Commit 8d4a179c made subtitle decoders pick up fonts strictly from the
same source file (i.e. the same demuxer).
It breaks some fucked up use-case, and 2 people on this earth complained
about the change because of this. Add it back.
This copies all attached fonts on each subtitle init. I considered
converting attachments to use refcounting, but it'd probably be much
more complex.
Since it's slightly harder to get a list of active demuxers with
duplicate removed, the prev_demuxer variable serves as a hack to achieve
almost the same thing, except in weird corner cases. (In which fonts
could be added twice.)
Was only available via --vd=help and --ad=help (i.e. not at all via
client API). Not bothering with separating audio and video codecs, since
this list isn't all that useful anyway in general. If someone complains,
a type field could be added.
Export a number of container fields, which may or may not be useful in
some scenarios. They are explicitly marked as originating from the
demuxer, in order to make it explicit that they might be unreliable.
I'd actually like to remove all other cases where container information
is exported, but those numerous cases are going to be somewhat hard to
deprecate.
Also, not directly related, export the description of the currently
active decoder. (This has been requested before.)
Ever since a change in mplayer2 or so, relative seeks were translated to
absolute seeks before sending them to the demuxer in most cases. The
only exception in current mpv is DVD seeking.
Remove the SEEK_ABSOLUTE flag; it's now the implied default. SEEK_FACTOR
is kept, because it's sometimes slightly useful for seeking in things
like transport streams. (And maybe mkv files without duration set?)
DVD seeking is terrible because DVD and libdvdnav are terrible, but
mostly because libdvdnav is terrible. libdvdnav does not expose seeking
with seek tables. (Although I know xbmc/kodi use an undocumented API
that is not declared in the headers by dladdr()ing it - I think the
function is dvdnav_jump_to_sector_by_time().) With the current mpv
policy of not giving a shit about DVD, just revert our half-working seek
hacks and always use dvdnav_time_search(). Relative seeking might get
stuck sometimes; in this case --hr-seek=always is recommended.
Adds an always-on mode by internally using a negative hidetimeout, while
forbidding the user to set negative values.
This removes script-message to enable/disable the osc, and instead introduces a
combined 'visibility' control with the values never/auto/always.
It's available via script_opts and script_message as 'osc-visibility'.
As message, it also supports a 'cycle' value.
The del key is bound to cycling the visibility modes.
There were a few issues:
- When it's disabled and then enabled, it was displaying the osc briefly and
then autohide right away. Don't do that.
- When it's enabled and then disabled, it was not removing the osc from screen
if called while the osc is visible (because tick() is responsible for the hide
but it doesn't render() the empty osc when the osc is disabled).
- Due to delayed/async unbinding of mouse events it was possible to show_osc()
after it got disabled e.g. from mouse_move. Prevent this.
_Of course_ the previous commit broke --force-window behavior (like it
does every single time I touch it).
vo_has_frame() gets cleared after a seek, so e.g. stopping playback of a
file and going to the next by keeping the seek key down will enter a
short moment without video at the end of the first file, which will set
the stalled_video variable to true. Prevent it by using the indication
whether the window was properly created (which is probably exactly what
we want here).
This function is also responsible for destroying the window when needed,
and obviously we should never do that while video is active. (This is
the actual bug, although the other change in this commit already hides
the common breakage it caused.)
Some oddity that is not needed anymore. The only thing which still
referenced them was avoiding loading external files more than once,
which is now prevented by checking the list of tracks instead.
When playback of a video ends, and the next file has no video at all (no
cover art or anything), then the window must be cleared.
This also resizes the window forcibly, which is by design.
Fixes #2825.
Especially useful to see what video formats are involved on the various
filter links.
I suspect this function is not available on Libav, so add necessary
ifdeffery preemptively.
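The call in question is presumably guarded roughly like this (sketch;
the configure check name is hypothetical):

    #if HAVE_AVFILTER_GRAPH_DUMP
    #include <libavfilter/avfilter.h>
    #include <libavutil/mem.h>
    #include <stdio.h>

    static void dump_graph(AVFilterGraph *graph)
    {
        char *s = avfilter_graph_dump(graph, NULL);
        if (s) {
            printf("%s\n", s);  // shows formats on all filter links
            av_free(s);
        }
    }
    #endif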
It would make somewhat sense for libcs which don't implement locales at
all, such as Bionic.
Beyond that, setlocale() is specified that it can return NULL, and we
shouldn't crash if that happens.
Unfortunately I see no better solution.
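The defensive part is trivial (sketch):

    #include <locale.h>
    #include <string.h>

    static int locale_is_c(void)
    {
        // setlocale() may return NULL (e.g. on libcs without locale
        // support, such as Bionic) - never strcmp() it unchecked.
        const char *loc = setlocale(LC_ALL, NULL);
        return loc && strcmp(loc, "C") == 0;
    }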
The refresh seek is skipped if the amount of buffered audio is not
overly huge.
Unfortunately softvol af_volume insertion still can cause this issue,
because it's outside of the normal dynamic filter chain changing code.
Move the video refresh call to reinit_video_filters() to make it more
uniform along with the audio code.
This was dumb. Could make it burn 100% CPU and not exit at the end.
(Because it would retry as instructed, instead of terminating playback.)
It also needs to consider EOF as waiting for input. lavfi_process() will
decide if it's really EOF, or if further input might come in the future.
Without this, it would think that it does not need to wait for input,
i.e. that new input will be available immediately.
(Not so fond of the duplication of subtle logic.)
It doesn't provide this function. The code is not really designed to
work without it, so it will probably mess up big time, but at least
make it compile again.
See --lavfi-complex option.
This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interacting with
libavfilter).
Future possible plans might include:
- freely switch tracks by providing some sort of default track graph
label
- automatically enabling audio visualization
- automatically mix audio or stack video when multiple tracks are
selected at once (similar to how multiple sub tracks can be selected)
track can't be NULL at this point, so the if is redundant. Remove it and
unindent the block. Also, make the function check whether the track is
selected at all, which makes it safer and idempotent.
Will be helpful for the coming filter support. I planned on merging
audio/video decoding, but this will have to wait a bit longer, so only
remove the duplicate status codes.
Let's fix broken samples with questionable heuristic without real
reasoning. Until this gets fixed properly, this is a good compromise,
though. A proper fix would properly resync audio and video without
brutally resetting the decoders, but on the other hand not doing the
brutal reset would cause issues in other obscure corner cases that such
resyncing might trigger.
This code is tricky because it has to wakeup the mainloop to make
progressing during syncing audio, but also has to avoid waking it up
when it's not needed. Failure to do so either burns CPU by not ever
going to sleep, or causes apparent "freezes" by going to sleep (and it
will continue if the mainloop is woken up e.g. due to user input).
In this case, simply starting A/V playback with --start=5 and removing
an unrelated wakeup in osd.c can trigger such a "freeze". The unrelated
wakeup did hide this bug, nonetheless it's a bug.
(Can't wait to rewrite this shitty audio resync code. And it's all my
fault.)
We just need to provide an entrypoint for it, and move the main init
code to a separate function. This gets rid of the messy video chain full
reinit in command.c, which completely destroyed and recreated the video
state for the purpose of mid-stream hw/sw switching.
These changes don't make too much sense without context, but are
preparation for later. Then the audio_src/video_src fields will
actually be NULL under some circumstances.
Before this commit, reinit_audio_chain() did 2 things: create all the
management data structures and initialize the decoder, and handling lazy
filter/output init (as well as dealing with format changes). For the
second purpose, it could be called multiple times (even though it wasn't
really idempotent). This was pretty weird, so make them separate
functions. The new function is actually idempotent too.
It also turns out the reinit functions don't have to call themselves
recursively for the spdif PCM fallback.
Regression caused by commit 3b95dd47. Also see commit 4c25b000. We can
either use video_next_pts and add "delay", or we just use video_pts. Any
other combination breaks. The reason the assumption that delay==0 at
this point was wrong is exactly that after displaying the first video
frame (usually done before audio resync), a new frame might be "added"
immediately, resulting in a new video_next_pts and "delay", which will
still amount to video_pts.
Fixes #2770. (The reason why display-sync was blamed in this issue is
because enabling display-sync in the options forces a prefetch by 2
instead of 1 frames for seeks/playback restart, which triggers the
issue, even if display-sync is not actually enabled. In this case,
display-sync is never enabled because the frames have an unusually high
frame duration. This is also what exposed the initial desync issue.)
This seems generally easier when using libmpv (and was already requested
and implemented before: see commit 327a779a; it was reverted some time
later).
With the weird internal logic we have to deal with, in particular the
--softvol=no case (using system volume), and using the audio API's mixer
(--softvol=auto on some systems), we still can't avoid all glitches and
corner cases that complicate this issue so much. The API user is either
recommended to use --softvol=yes or auto, or to watch the new
mixer-active property, and assume the volume/mute properties have
significant values if the mixer is active.
Remaining glitches:
- changing the volume/mute properties has no effect if no internal mixer
is used (--softvol=no) and the mixer is not active; the actual mixer
controls do not change, only the property values
- --volume/--mute do not have an effect on the volume/mute properties
before mixer initialization (the options strictly are only applied
during mixer init)
- volume-max is 100 while the mixer is not active
With the format left untouched, this would just try to reinit with a
spdif format again.
We're not clearing the format in reset_audio_state() so the audio chain
can be recreated any time without having to wait for a frame to be
decoded.
Even though the timing logic is correct, it tends to mess with looping
videos and such in unappreciated ways.
It also has to be admitted that most file formats seem not to properly
define the duration of the last video frame (or libavformat does not
export it in a useful way), so whether or not we should use the demuxer
reported framerate for the last frame is questionable. (Still, why would
you essentially just discard the last frame?)
The timing logic is kept, but disabled for video with "normal" FPS
values. In particular, we want to keep it for displaying images, which
implicitly set the frame duration to 1 second by reporting 1 FPS. It's
also good for slide shows with mf://.
Fixes #2745.