This is a pretty major rewrite of the internal texture binding
mechanic, which makes it more flexible.
In general, the difference between the old and current approaches is
that now, all texture description is held in a struct img_tex, and
textures are only explicitly bound with pass_bind. (Once bound, a
texture unit is assumed to be set in stone and no longer tied to the
img_tex.)
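Roughly, the idea looks like this (a minimal sketch with made-up,
simplified fields - not the actual vo_opengl definitions):

    #include <assert.h>
    #include <stdio.h>

    struct img_tex {              // all state describing one input texture
        unsigned gl_tex;          // GL texture object (placeholder type)
        int w, h;                 // logical size
        float multiplier;         // e.g. for texture expansion
    };

    struct pass_state {           // hypothetical
        struct img_tex bound[16]; // textures, indexed by assigned unit
        int num_bound;
    };

    // Bind tex to the next free texture unit and return the unit index.
    // From here on the unit is fixed and no longer tied to the img_tex.
    static int pass_bind(struct pass_state *p, struct img_tex tex)
    {
        assert(p->num_bound < 16);
        int unit = p->num_bound++;
        p->bound[unit] = tex;
        // the real code would also set up the GL binding here
        return unit;
    }

    int main(void)
    {
        struct pass_state p = {0};
        struct img_tex luma = {
            .gl_tex = 1, .w = 1920, .h = 1080, .multiplier = 1.0f,
        };
        printf("luma bound to unit %d\n", pass_bind(&p, luma));
        return 0;
    }
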
This approach makes the code inside pass_read_video significantly more
flexible and cuts down on the number of weird special cases and
spaghetti logic.
It also brings some improvements, e.g. greatly cutting down on the
number of unnecessary conversion passes inside pass_read_video (most of
which previously existed only to avoid a combinatorial explosion of
code complexity).
Some other notable changes (and potential improvements):
- texture expansion is now *always* handled in pass_read_video, and the
colormatrix never does this anymore. (Which means the code could
probably be removed from the colormatrix generation logic, modulo some
other VOs)
- struct fbo_tex now stores both its "physical" and "logical"
(configured) size, which cuts down on the amount of width/height
baggage on some function calls
- vo_opengl can now technically support textures with different bit
depths (e.g. 10 bit luma, 8 bit chroma) - but the APIs it queries
inside img_format.c don't export this (nor does ffmpeg support it,
really) so the status quo of using the same tex_mul for all planes is
kept.
- dumb_mode is now only needed because of the indirect_fbo being in the
main rendering pipeline. If we reintroduce p->use_indirect and thread
a transform through the entire program this could be skipped where
unnecessary, allowing for the removal of dumb_mode. But I'm not sure
how to do this in a clean way. (Which is part of why it got introduced
to begin with)
- It would be trivial to resurrect source-shader now (it would just be
one extra 'if' inside pass_read_video).
stream->info can be NULL if it's the cache wrapper. To be fair,
stream->info is considered private API anyway. So don't access it, but
check the URL instead.
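Something along these lines (stand-in types, not the real stream
struct; the exact URL prefix to check is just an example):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct stream {            // hypothetical stand-in
        const void *info;      // may be NULL for the cache wrapper
        const char *url;
    };

    static bool stream_is_dvd(const struct stream *s)
    {
        // look at the URL instead of dereferencing s->info
        return s->url && strncmp(s->url, "dvd", 3) == 0;
    }

    int main(void)
    {
        struct stream s = { .info = NULL, .url = "dvd://1" };
        printf("%d\n", stream_is_dvd(&s));
        return 0;
    }
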
Deals with broken mkv subtitle tracks generated by tvheadend. The subs
are srt, but without packet durations.
We need this logic for CCs anyway. CCs in particular will be unaffected
by this change because they are also marked with unknown duration. It
could be that there are actual demuxers outputting CCs - in this case,
we rely on the fact that they don't set a (meaningless) packet duration
(or we'd have to work around that).
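The resulting heuristic is roughly this (hypothetical simplified types,
not mpv's actual subtitle structs):

    #include <stdio.h>

    struct sub_event {        // hypothetical
        double pts;
        double duration;      // < 0 means "unknown" in this sketch
    };

    // End time of 'cur': use its duration if known, otherwise extend it
    // until the next event (or effectively forever if there is none yet).
    static double sub_end_time(const struct sub_event *cur,
                               const struct sub_event *next)
    {
        if (cur->duration >= 0)
            return cur->pts + cur->duration;
        return next ? next->pts : 1e30;
    }

    int main(void)
    {
        struct sub_event a = { .pts = 10.0, .duration = -1 };
        struct sub_event b = { .pts = 12.5, .duration = -1 };
        printf("a ends at %.1f\n", sub_end_time(&a, &b));
        return 0;
    }
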
Commit 8d4a179c made subtitle decoders pick up fonts strictly from the
same source file (i.e. the same demuxer).
This broke some fucked-up use case, and 2 people on this earth
complained about the change because of this. Add the old behavior back.
This copies all attached fonts on each subtitle init. I considered
converting attachments to use refcounting, but it'd probably be much
more complex.
Since it's slightly harder to get a list of active demuxers with
duplicates removed, the prev_demuxer variable serves as a hack to achieve
almost the same thing, except in weird corner cases. (In which fonts
could be added twice.)
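The dedup hack boils down to something like this (simplified stand-in
types, not the actual player structs):

    #include <stdio.h>

    struct demuxer { int id; };                   // opaque stand-in

    static void add_fonts_from(struct demuxer *d) // would feed attachments to libass
    {
        printf("adding fonts from demuxer %d\n", d->id);
    }

    static void add_all_fonts(struct demuxer **track_demuxers, int n)
    {
        struct demuxer *prev_demuxer = NULL;
        for (int i = 0; i < n; i++) {
            if (track_demuxers[i] == prev_demuxer)
                continue;                         // crude duplicate removal
            prev_demuxer = track_demuxers[i];
            add_fonts_from(prev_demuxer);
        }
    }

    int main(void)
    {
        struct demuxer d0 = {0}, d1 = {1};
        struct demuxer *tracks[] = { &d0, &d0, &d1 };
        add_all_fonts(tracks, 3);
        return 0;
    }
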
This reverts commit af66fa8fa5.
The reverted commit caused AVCodecContext.channel_layout to be set,
which, when requesting stereo downmix, makes libavcodec output a stupid
message:
ac3: Channel layout '5.1' with 6 channels does not match specified number of channels 2: ignoring specified channel layout
The same happens with --demuxer=lavf (without this change too).
I'm not quite sure what acrobatics are required to shut up libavcodec,
but for now revert the commit. It was a rather minor, almost cosmetic
issue, which I consider less important than clean CLI terminal output.
Completely pointless abominations that FFmpeg refuses to remove. They
are ancient, long deprecated API which we can't use anymore. They
confused users as well.
Pretend that they don't exist. Due to the way --vd works, they can't
even be forced anymore. The older hack which explicitly rejects these
can be dropped as well.
The goal is reducing log messups (which happen surprisingly often) by
buffering partial lines in mp_log. This is still not 100% reliable, but
better.
The extrabuffers for MSGL_STATUS and MSGL_STATS are not needed anymore,
because a separate mp_log instance can be used if problems really occur.
Also, give up, and replace the snprintf acrobatics with bstr.
mp_log.partial has quite a subtle problem wrt. talloc: talloc parents
cannot be used, because there's no lock around the internal talloc
structures associated with mp_log. Thus it has to be freed manually,
even if this happens through a talloc destructor.
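In other words, something like this (simplified stand-in struct, not
the real mp_log; plain malloc/free here stands in for the manual
freeing described above):

    #include <stdlib.h>

    struct my_log {            // hypothetical stand-in for mp_log
        char *partial;         // heap buffer, deliberately not talloc-parented
    };

    // Would be registered as the destructor for the log object; it has
    // to free the partial-line buffer by hand.
    static void my_log_destructor(void *p)
    {
        struct my_log *log = p;
        free(log->partial);
        log->partial = NULL;
    }

    int main(void)
    {
        struct my_log log = { .partial = malloc(64) };
        my_log_destructor(&log);
        return 0;
    }
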
We want to add a prefix to the ffmpeg log message, so we called mp_msg
multiple times until now. But logging such partial lines is a race
condition, because there's only one internal mp_msg buffer, and no
external mp_msg locks.
Avoid this by building the message on a stack buffer.
I might make an mp_log-local partial line buffer, but even then av_log()
can be called from multiple threads, while targeting the same mp_log.
(Really, ffmpeg's log API needs to be fixed.)
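The approach is roughly this (not the exact mpv code; the prefix and
the buffer size here are placeholders): build the complete line in one
local buffer and hand it to the logger in a single call, so no
partial-line state is shared between threads.

    #include <stdarg.h>
    #include <stdio.h>

    // stand-in for the real logging call; takes one complete line
    static void log_line(const char *line)
    {
        fputs(line, stderr);
    }

    // signature matches FFmpeg's av_log callback type
    static void av_log_cb(void *ptr, int level, const char *fmt, va_list vl)
    {
        (void)ptr; (void)level;
        char buffer[4096];                   // stack buffer; size is an assumption
        int n = snprintf(buffer, sizeof(buffer), "ffmpeg: ");
        if (n >= 0 && (size_t)n < sizeof(buffer))
            vsnprintf(buffer + n, sizeof(buffer) - n, fmt, vl);
        log_line(buffer);                    // one call, no partial lines
    }

    static void call_cb(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        av_log_cb(NULL, 0, fmt, ap);
        va_end(ap);
    }

    int main(void)
    {
        call_cb("demuxing %s\n", "file.mkv");
        return 0;
    }
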
Until now, a rather large stack buffer was used for this, and also a
static buffer in mp_log_root. The latter was added to buffer partial
lines, and the stack buffer was used only for MSGL_STATUS and MSGL_STATS
(I guess because these are the most likely/severe to clash with partial
line buffering).
Make the buffer in mp_log_root dynamically sized, so we don't get
cut-off log lines if the text is excessively large. (The OpenGL extension
list dumped by vo_opengl is such an example.)
Since we still have to support partial line buffering (FFmpeg's log
callbacks leave no other choice), keep the stack buffer. But make it
smaller; there's no way all ~6KB are going to be needed in any
situation.
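A plain-C sketch of such a dynamically sized buffer (the actual change
uses mpv's bstr helpers instead): grow on demand, so even very long
lines don't get truncated.

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct growbuf { char *ptr; size_t len, cap; };   // hypothetical

    static void growbuf_append(struct growbuf *b, const char *fmt, ...)
    {
        va_list ap, cp;
        va_start(ap, fmt);
        va_copy(cp, ap);
        int need = vsnprintf(NULL, 0, fmt, cp);       // measure first
        va_end(cp);
        if (need >= 0) {
            if (b->len + need + 1 > b->cap) {
                b->cap = (b->len + need + 1) * 2;
                b->ptr = realloc(b->ptr, b->cap);     // error handling omitted
            }
            vsnprintf(b->ptr + b->len, b->cap - b->len, fmt, ap);
            b->len += need;
        }
        va_end(ap);
    }

    int main(void)
    {
        struct growbuf b = {0};
        growbuf_append(&b, "GL_EXTENSIONS: %s", "GL_ARB_foo GL_ARB_bar");
        puts(b.ptr);
        free(b.ptr);
        return 0;
    }
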
Was only available via --vd=help and --ad=help (i.e. not at all via
client API). Not bothering with separating audio and video codecs, since
this list isn't all that useful anyway in general. If someone complains,
a type field could be added.
Export a number of container fields, which may or may not be useful in
some scenarios. They are explicitly marked as originating from the
demuxer, in order to make it explicit that they might be unreliable.
I'd actually like to remove all other cases where container information
is exported, but those numerous cases are going to be somewhat hard to
deprecate.
Also, not directly related, export the description of the currently
active decoder. (This has been requested before.)
Use the mp_lavc_set_extradata() function instead of setting up the
extradata manually. This takes care of the corner case when
extradata_len is 0.
This apparently fixes #2888.
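The corner case it handles is roughly this (stand-in struct and helper,
not the real AVCodecContext or mp_lavc_set_extradata()): only allocate
and copy when there actually is extradata, and keep the fields
consistent when the size is 0.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct codec_ctx {                 // stand-in for AVCodecContext
        uint8_t *extradata;
        int extradata_size;
    };

    static int set_extradata(struct codec_ctx *avctx, const void *data, int size)
    {
        avctx->extradata = NULL;
        avctx->extradata_size = 0;
        if (size > 0 && data) {
            // FFmpeg wants zero-padded extradata; pad generously here
            avctx->extradata = calloc(1, (size_t)size + 64);
            if (!avctx->extradata)
                return -1;
            memcpy(avctx->extradata, data, size);
            avctx->extradata_size = size;
        }
        return 0;
    }

    int main(void)
    {
        struct codec_ctx ctx;
        return set_extradata(&ctx, NULL, 0);   // the size == 0 case
    }
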
Hr-seek was often off by one frame due to rounding issues, which have
been traditionally taken care of by adding a "tolerance". Essentially,
frames very close to the seek target PTS are not dropped, even if they
are strictly before the seek target.
Commit 0af53353 accidentally removed this by always removing frames even
if they're within the "tolerance". Fix this by "unsharing" the logic and
making sure the segment code is inactive for normal seeks.
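The tolerance check itself is roughly this (simplified; the exact
threshold value is an assumption):

    #include <stdbool.h>
    #include <stdio.h>

    #define HRSEEK_TOLERANCE 0.005   // seconds; assumed value

    // Drop frames during hr-seek only if they are clearly before the
    // target; frames within the tolerance are kept despite rounding.
    static bool drop_frame_for_hrseek(double frame_pts, double seek_pts)
    {
        return frame_pts < seek_pts - HRSEEK_TOLERANCE;
    }

    int main(void)
    {
        printf("%d\n", drop_frame_for_hrseek(9.999, 10.0));   // kept -> 0
        printf("%d\n", drop_frame_for_hrseek(9.0, 10.0));     // dropped -> 1
        return 0;
    }
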
Ever since a change in mplayer2 or so, relative seeks were translated to
absolute seeks before sending them to the demuxer in most cases. The
only exception in current mpv is DVD seeking.
Remove the SEEK_ABSOLUTE flag; it's now the implied default. SEEK_FACTOR
is kept, because it's sometimes slightly useful for seeking in things
like transport streams. (And maybe mkv files without duration set?)
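The translation amounts to something like this (simplified, not the
actual player code): a relative seek becomes an absolute target before
the demuxer ever sees it.

    #include <stdio.h>

    static double resolve_seek_target(double current_pts, double rel_seconds)
    {
        double target = current_pts + rel_seconds;
        return target < 0 ? 0 : target;   // clamp to the start
    }

    int main(void)
    {
        // seeking -10s from position 42.0 yields an absolute target of 32.0
        printf("%.1f\n", resolve_seek_target(42.0, -10.0));
        return 0;
    }
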
DVD seeking is terrible because DVD and libdvdnav are terrible, but
mostly because libdvdnav is terrible. libdvdnav does not expose seeking
with seek tables. (Although I know xbmc/kodi use an undocumented API,
which they get at by dladdr()ing it since it's not declared in the
headers - I think the function is dvdnav_jump_to_sector_by_time().)
With the current mpv policy of not giving a shit about DVD, just revert
our half-working seek
hacks and always use dvdnav_time_search(). Relative seeking might get
stuck sometimes; in this case --hr-seek=always is recommended.
Adds an always-on mode by internally using a negative hidetimeout and
forbidding the user from setting negative values directly.
This removes script-message to enable/disable the osc, and instead introduces a
combined 'visibility' control with the values never/auto/always.
It's available via script_opts and script_message as 'osc-visibility'.
As message, it also supports a 'cycle' value.
The del key is bound to cycling the visibility modes.
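For illustration (the input.conf line just mirrors what the default del
binding effectively does):

    # command line:
    mpv --script-opts=osc-visibility=always FILE

    # input.conf:
    DEL script-message osc-visibility cycle
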
There were a few issues:
- When it's disabled and then enabled, it was displaying the osc briefly and
then autohiding it right away. Don't do that.
- When it's enabled and then disabled, it was not removing the osc from screen
if called while the osc is visible (because tick() is responsible for the hide
but it doesn't render() the empty osc when the osc is disabled).
- Due to delayed/async unbinding of mouse events it was possible to show_osc()
after it got disabled e.g. from mouse_move. Prevent this.
This eliminates some intermittent pops heard in a HRT MicroStreamer DAC
uncorrelated with user interaction. As a bonus, this resolves #1773, which I
can no longer reproduce as of this commit. Leave the 50ms buffer for shared mode
since that seems to be working quite well.
This is also the way exclusive mode is done in the MSDN example code:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd370844%28v=vs.85%29.aspx
This was originally increased in c545c40 to mitigate glitches that subsequent
refactorings have eliminated.
A COM message loop is apparently totally inappropriate for a low latency
thread. It leads to audio glitches because the thread doesn't wake up fast
enough when it should. It also causes mysterious correlations between the vo
and ao thread (i.e., toggling fullscreen delays audio feed events). Instead use
an mp_dispatch_queue to set/get volume/mute/session display name from the audio
thread. This has the added benefit of obviating the need to marshal the
associated interfaces from the audio thread.
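The pattern, as a self-contained sketch (mpv's real mechanism is
mp_dispatch_queue; this stand-in only handles one request at a time):
the calling thread hands the audio thread a small function to run and
waits for it, instead of marshaling COM interfaces across threads.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct dispatch {               // simplified stand-in
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        void (*fn)(void *);
        void *arg;
        bool done;
    };

    // Called from e.g. the API thread: blocks until the audio thread ran fn.
    static void dispatch_run(struct dispatch *d, void (*fn)(void *), void *arg)
    {
        pthread_mutex_lock(&d->lock);
        d->fn = fn;
        d->arg = arg;
        d->done = false;
        pthread_cond_broadcast(&d->wakeup);
        while (!d->done)
            pthread_cond_wait(&d->wakeup, &d->lock);
        pthread_mutex_unlock(&d->lock);
    }

    // Called by the audio thread whenever it wakes up (e.g. on a feed event).
    static void dispatch_process(struct dispatch *d)
    {
        pthread_mutex_lock(&d->lock);
        if (d->fn) {
            d->fn(d->arg);          // runs on the audio thread
            d->fn = NULL;
            d->done = true;
            pthread_cond_broadcast(&d->wakeup);
        }
        pthread_mutex_unlock(&d->lock);
    }

    // --- tiny demo ---
    static struct dispatch g_disp = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, NULL, NULL, false
    };

    static void set_volume(void *arg) { printf("volume = %d\n", *(int *)arg); }

    static void *audio_thread(void *p)
    {
        (void)p;
        pthread_mutex_lock(&g_disp.lock);   // demo stand-in for the event loop
        while (!g_disp.fn)
            pthread_cond_wait(&g_disp.wakeup, &g_disp.lock);
        pthread_mutex_unlock(&g_disp.lock);
        dispatch_process(&g_disp);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, audio_thread, NULL);
        int vol = 75;
        dispatch_run(&g_disp, set_volume, &vol);
        pthread_join(t, NULL);
        return 0;
    }
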
Don't wait for WASAPI to send another feed event if we detect an underfull
buffer. It seems that WASAPI doesn't always send extra feed events if
something causes rendering to fall behind. This causes every subsequent playback
buffer to under-run until playback is reset. The fix is simply to do a one-shot
double feed when this happens, which allows rendering to catch up with playback.
This was observed to happen when using MsgWaitForMultipleObjects to wait for the
feed event and toggling fullscreen with vo=opengl:backend=win. This commit
improves the behaviour in that specific case and more generally makes exclusive
mode significantly more robust.
This commit also moves the logic to avoid *over*filling the exclusive mode
buffer into thread_feed, right next to the above-described underfill logic.
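Sketched with stand-in helpers (not the actual ao_wasapi code; the real
thing queries the device buffer state via IAudioClient, and the
threshold and numbers here are made up): if the buffer looks underfull
on a feed event, feed twice instead of waiting for another event that
may never come.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    // Stand-ins for querying the device and writing one period of audio.
    static uint32_t device_buffer_frames(void) { return 1024; }
    static uint32_t frames_still_queued(void)  { return 100;  }  // pretend we fell behind
    static void write_one_buffer(void)         { puts("feed"); }

    static void thread_feed_sketch(void)
    {
        // detect an underfull buffer before writing (threshold is an assumption)
        bool underfull = frames_still_queued() < device_buffer_frames() / 2;
        write_one_buffer();
        if (underfull)
            write_one_buffer();   // one-shot double feed to catch up
    }

    int main(void)
    {
        thread_feed_sketch();
        return 0;
    }
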
OpenSL ES is used on Android. At the moment only stereo output is
supported. Two options are supported: 'frames-per-buffer' and
'sample-rate'. To get better latency the user of libmpv should pass
values obtained from AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER)
and AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE).
_Of course_ the previous commit broke --force-window behavior (like it
does every single time I touch it).
vo_has_frame() gets cleared after a seek, so e.g. stopping playback of a
file and going to the next by keeping the seek key down will enter a
short moment without video at the end of the first file, which will set
the stalled_video variable to true. Prevent it by checking whether the
window was properly created instead (which is probably exactly what we
want here).
This function is also responsible for destroying the window when needed,
and obviously we should never do that while video is active. (This is
the actual bug, although the other change in this commit already hides
the common breakage it caused.)