Like dxinterop, this uses StretchRect or RGB conversion. This is unavoidable as
long as we use the dxva2 API, as there is no way to access the raw hardware
decoded Direct3D9 surfaces.
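As a rough illustration of what the copy-back path has to do (a hedged
sketch, not mpv's actual code; names and error handling are simplified),
the decoded surface can only be blitted with StretchRect into another
D3D9 surface, from which the data is then read back or converted:

    #include <d3d9.h>

    // Copy the hardware-decoded surface into a target surface we control.
    // D3DTEXF_NONE requests a plain copy without filtering.
    static HRESULT copy_decoded_surface(IDirect3DDevice9 *dev,
                                        IDirect3DSurface9 *decoded,
                                        IDirect3DSurface9 *target)
    {
        return IDirect3DDevice9_StretchRect(dev, decoded, NULL,
                                            target, NULL, D3DTEXF_NONE);
    }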
This is simpler, because it doesn't have to wait for synchronization
from both threads.
Apart from being simpler/cleaner, this serves vague plans to stop/start
the demuxer thread itself automatically on demand (for the purpose of
reducing unneeded resource usage).
This pause stuff is bothersome and is needed only for a few corner
cases. This commit removes it from the demuxer public API, replaces it
with a demux_run_on_thread() function, and refactors the code which
needed demux_pause(). The next commit will change the implementation.
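A hedged sketch of the intended usage (the actual signature may differ;
do_work and ctx are made-up names): instead of bracketing an operation
with pause/unpause calls, the operation itself is handed to the demuxer
and run synchronized with its thread:

    // Code that previously had to run while the demuxer was paused.
    static void do_work(void *ctx)
    {
        // ... touch demuxer state safely here ...
    }

    // Hypothetical call site, replacing the old pause/work/unpause dance:
    //     demux_run_on_thread(demuxer, do_work, ctx);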
Changing the byte stream position without cooperation of the demuxer
seems a bit insane, and is certainly useless. A user should do factor
seeks instead. For formats like ts, this will actually translate to byte
seeks, while treating the rest of the playback chain a bit more
gracefully. With this argument, remove write access to this property.
If someone really complains, proper byte seeks could be added as a seek
mode (although I'm going to need a convincing argument for this).
Read access changes too, but in a more subtle way.
No need to have them everywhere. The only exception/annoyance is
MAX_OSD_PARTS, which is now basically duplicated (and is checked with
an assert() at runtime initialization).
Until now, there was only 1 global ASS overlay that could be set by all
scripts. This was often perceived as a bug when multiple scripts tried
to set their own ASS overlays.
This was kind of hard to solve because the script could set its own ASS
PlayResX/Y, which makes it impossible to share a single ASS_Renderer for
multiple scripts. The OSC unfortunately makes use of this feature (and
unfortunately can't be fixed because it's a POS), so we're stuck with
this complication.
Implement the worst-case solution and fix this by creating separate ASS
track and renderer objects for each script that wants to set an ASS
overlay.
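A minimal sketch of the worst-case solution in libass terms (not mpv's
actual data structures; the struct and function names here are made up):
each script gets its own ASS_Track and its own ASS_Renderer, since
PlayResX/Y is a per-track property the renderer's scaling depends on:

    #include <ass/ass.h>

    struct script_overlay {
        ASS_Track *track;        // holds this script's events and PlayResX/Y
        ASS_Renderer *renderer;  // renders only this script's track
    };

    static struct script_overlay script_overlay_create(ASS_Library *lib)
    {
        struct script_overlay ov = {
            .track = ass_new_track(lib),
            .renderer = ass_renderer_init(lib),
        };
        return ov;
    }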
The z-order is decided by the order in which the scripts first set their
text. This is essentially random, unless you do it at script init, and
you pass the scripts in a specific order. Script initialization is
currently serialized (as a feature), so the first loaded script gets the
lowest z-order.
The Lua script API interestingly remains the same. (And also will remain
undocumented, unsupported, and potentially volatile.)
Instead of passing an explicit cache to the function, the res parameter
is used. Also, instead of replacing its contents, sub bitmaps are now
appended to it (all assuming the format doesn't actually change).
This is preparation for the following commits.
The default of 1.0 was basically making half the algorithm do nothing,
since it turned off all diagonal contributions. The upstream default is
0.6, and this produces a more reasonable image.
The values were changed to reflect an upstream change in the source for
the super-xBR implementation.
The anti-ringing code was basically not working at all; the new
algorithm _significantly_ improves the result (reduces ringing).
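For context, the classic anti-ringing approach looks roughly like the
following (a generic, hedged sketch; whether the new code does exactly
this is an assumption): the interpolated value is pulled back towards
the min/max of the surrounding source samples, which removes the
over/undershoot that shows up as ringing around sharp edges.

    #include <math.h>

    // 'value' is the interpolated result; a..d are the nearest source
    // samples; 'strength' in [0,1] blends between no and full clamping.
    static float antiring(float value, float a, float b, float c, float d,
                          float strength)
    {
        float lo = fminf(fminf(a, b), fminf(c, d));
        float hi = fmaxf(fmaxf(a, b), fmaxf(c, d));
        float clamped = fminf(fmaxf(value, lo), hi);
        return value + (clamped - value) * strength;
    }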
This is a fresh implementation from scratch that carries significantly
less baggage and verbosity than the previous (ported) version.
The actual values for the masks and such were copied from the
current code. Behavior and performance should be unaffected.
An important difference between the old code and the new code is that
the new code always explicitly samples from the first component, rather
than being able to process multiple planes at once.
Since prescale-luma only affects luma, I deemed this unnecessary. May
change in the future, if prescale-chroma ever gets implemented. But
prescaling multiple planes would be slow to do this way. (Better would
be to generalize it to differently-sized vectors)
Do not scale OSD mouse input to the ASS OSD script resolution. The
original idea of this mechanism was that the user doesn't have to care
about the actual resolution of anything, and can just use the OSD
resolution consistently. But this made things worse.
Remove the implicit scaling, and always use the screen resolution.
(Except with --vo=xv, where additional scaling is forced upon
everything.)
Drop get_osd_resolution(). There is no replacement. Rename
get_screen_size() and get_screen_margins() to use "osd" instead of
"screen". For anything but --vo=xv these are equivalent, but with
--vo=xv the OSD resolution has additional implicit scaling.
Add code to osc.lua which emulates the old behavior.
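The emulation amounts to the old implicit mapping from real OSD/screen
coordinates into the script's ASS PlayRes space; roughly this (a hedged
sketch with made-up names, shown in C rather than Lua):

    // Map a mouse position from OSD/screen coordinates (osd_w x osd_h)
    // into the script's ASS coordinate system (playres_x x playres_y).
    static void screen_to_playres(int mx, int my, int osd_w, int osd_h,
                                  int playres_x, int playres_y,
                                  int *out_x, int *out_y)
    {
        *out_x = mx * playres_x / osd_w;
        *out_y = my * playres_y / osd_h;
    }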
Note that none of the changed functions were public API, so implicit
breakage of scripts which used them is just going to happen.
Deselecting cover art and then reselecting it did not work: the second
time, the cover art picture was not displayed again. (This seems to
break every other month...)
The reason is commit 6640b22a. It mutates the input packet. And it is
correct that we don't own d_video->header->attached_picture at this
point. Fix it by creating a new packet reference.
Subtitles can be preloaded, which means they're fully read and copied
into ASS_Track. This in turn is mainly for the sake of being able to do
subtitle seeking (when it comes down to it, subtitle seeking is the
cause for most trouble here).
Commit a714f8e92 broke preloaded subtitles which have events with
unknown duration, such as some MicroDVD samples. The event list gets
cleared on every seek, so the property of being preloaded obviously gets
lost.
Fix this by moving most of the preloading logic to dec_sub.c. If the
subtitle list gets cleared, they are not considered preloaded anymore,
and the logic for demuxed subtitles is used.
As another minor thing, preloading subtitles neither disabled the demux
stream, nor discarded its packets. Thus you could get queue overflows in
theory (harmless, but annoying). Fix this by explicitly discarding
packets in preloaded mode.
In summary, the only differences between preloaded and normal demuxing
now are:
1. a seek is issued, and all packets are read on start
2. during playback, discard the packets instead of feeding them to the
subtitle decoder
This is still pretty annoying. It would be nice if the subtitle index
(and maybe a subtitle packet cache for instant subtitle presentation
when seeking back) could be maintained in the demuxer instead. Half of
all file formats with interleaved subtitles have this anyway (mp4, mkv
muxed with newer mkvmerge).
Commit 503c6f7f essentially removed timestamps from "laces" (Block sub-
divisions), which means many audio packets will have no timestamp.
There's no reason why the bitrate calculation can't just be delayed to a
point when the next timestamp is known.
Fixes #2903 (no audio bitrate with mkv files).
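A hedged sketch of the idea (the struct and names are made up, not the
actual mpv code): packets without a timestamp only add to a byte
counter, and the rate is computed once a packet with a known timestamp
arrives.

    #include <stddef.h>

    #define NOPTS (-1e300)   // stand-in for mpv's MP_NOPTS_VALUE

    struct rate_state {
        double last_ts;      // last known timestamp, or NOPTS
        double bytes;        // bytes accumulated since last_ts
        double bitrate;      // last computed value, in bits per second
    };

    static void feed_packet(struct rate_state *s, double pts, size_t len)
    {
        s->bytes += len;
        if (pts == NOPTS)
            return;                              // delay the calculation
        if (s->last_ts != NOPTS && pts > s->last_ts)
            s->bitrate = s->bytes * 8 / (pts - s->last_ts);
        s->last_ts = pts;
        s->bytes = 0;
    }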
Although there is logic to prune subtitles as soon as they get too old
in this mode, this is not done for the _currently_ shown subtitles. Thus
explicitly clearing subtitles on seek is required to avoid duplicate
subtitles in certain cases when seeking.
Instead of hard-coding the logic and the planes to skip, factor this out
into a reusable function, and add the number of relevant coordinates to
the texture state.
Since prescale now literally only affects the luma plane (and the
filters are all designed for luma-only operation either way), the option
has been renamed and the documentation updated to clarify this.
This is a pretty major rewrite of the internal texture binding
mechanism, which makes it more flexible.
In general, the difference between the old and new approaches is that
now, all texture description is held in a struct img_tex, and textures
are only bound explicitly with pass_bind. (Once bound, a texture unit is
assumed to be set in stone and no longer tied to the img_tex.)
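A hedged sketch of the concept (the real struct has more fields and may
differ; this is not the actual layout): everything needed to sample from
a texture travels together in one value, and a GL texture unit is only
assigned at the moment the texture is bound for a pass.

    struct img_tex {
        unsigned gl_tex;     // GL texture object (GLuint)
        unsigned gl_target;  // e.g. GL_TEXTURE_2D (GLenum)
        int w, h;            // logical size of the valid texture region
        float multiplier;    // value scaling applied on sampling (tex_mul)
        // ... plus a transform mapping texel coords to sample positions
    };

    // Binding assigns a texture unit and returns its index for the shader:
    //     int unit = pass_bind(p, tex);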
This approach makes the code inside pass_read_video significantly more
flexible and cuts down on the number of weird special cases and
spaghetti logic.
It also has some improvements, e.g. cutting down greatly on the number
of unnecessary conversion passes inside pass_read_video (which was
previously mostly done to cope with the fact that the alternative would
have resulted in a combinatorial explosion of code complexity).
Some other notable changes (and potential improvements):
- texture expansion is now *always* handled in pass_read_video, and the
colormatrix never does this anymore. (Which means the code could
probably be removed from the colormatrix generation logic, modulo some
other VOs)
- struct fbo_tex now stores both its "physical" and "logical"
(configured) size, which cuts down on the amount of width/height
baggage on some function calls (see the sketch after this list)
- vo_opengl can now technically support textures with different bit
depths (e.g. 10 bit luma, 8 bit chroma) - but the APIs it queries
inside img_format.c don't export this (nor does ffmpeg support it,
really), so the status quo of using the same tex_mul for all planes is
kept.
- dumb_mode is now only needed because of the indirect_fbo being in the
main rendering pipeline. If we reintroduce p->use_indirect and thread
a transform through the entire program this could be skipped where
unnecessary, allowing for the removal of dumb_mode. But I'm not sure
how to do this in a clean way. (Which is part of why it got introduced
to begin with)
- It would be trivial to resurrect source-shader now (it would just be
one extra 'if' inside pass_read_video).
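Regarding the fbo_tex point above, a hedged sketch of the idea (field
names are illustrative, not the real layout): the allocated size and the
configured size live in the same struct, so callers no longer pass
width/height separately.

    struct fbo_tex {
        unsigned fbo;        // GL framebuffer object
        unsigned texture;    // backing GL texture
        int tex_w, tex_h;    // "physical" (allocated) texture size
        int lw, lh;          // "logical" (configured) size actually used
    };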
stream->info can be NULL if it's the cache wrapper. To be fair,
stream->info is considered private API anyway. So don't access it, but
check the URL instead.
Deals with broken mkv subtitle tracks generated by tvheadend. The subs
are srt, but without packet durations.
We need this logic for CCs anyway. CCs in particular will be unaffected
by this change because they are also marked with unknown duration. It
could be that there are actual demuxers outputting CCs - in this case,
we rely on the fact that they don't set a (meaningless) packet duration
(or we'd have to work around that).
Commit 8d4a179c made subtitle decoders pick up fonts strictly from the
same source file (i.e. the same demuxer).
This broke some fucked up use-case, and 2 people on this earth
complained about the change because of it. Add the old behavior back.
This copies all attached fonts on each subtitle init. I considered
converting attachments to use refcounting, but it'd probably be much
more complex.
Since it's slightly harder to get a list of active demuxers with
duplicates removed, the prev_demuxer variable serves as a hack to
achieve almost the same thing, except in weird corner cases (in which
fonts could be added twice).
This reverts commit af66fa8fa5.
The reverted commit caused AVCodecContext.channel_layout to be set;
combined with requesting stereo downmix, this makes libavcodec output a
stupid message:
ac3: Channel layout '5.1' with 6 channels does not match specified number of channels 2: ignoring specified channel layout
The same happens with --demuxer=lavf (without this change too).
I'm not quite sure what acrobatics are required to shut up libavcodec,
but for now revert the commit. It was a rather minor, almost cosmetic
issue, which I consider less important than clean CLI terminal output.
Completely pointless abominations that FFmpeg refuses to remove. They
are ancient, long deprecated API which we can't use anymore. They
confused users as well.
Pretend that they don't exist. Due to the way --vd works, they can't
even be forced anymore. The older hack which explicitly rejects these
can be dropped as well.
The goal is reducing log mess-ups (which happen surprisingly often) by
buffering partial lines in mp_log. This is still not 100% reliable, but
better.
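A generic, hedged illustration of the buffering idea (not the actual
mp_log code): text that doesn't end in a newline is kept in a per-log
buffer, and only whole lines are emitted, so a line built from several
print calls can't be split up by other output.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct line_buf { char *data; size_t len; };

    // Usage: struct line_buf b = {0};
    //        log_write(&b, "partial "); log_write(&b, "line\n");
    static void log_write(struct line_buf *b, const char *text)
    {
        size_t n = strlen(text);
        b->data = realloc(b->data, b->len + n);
        memcpy(b->data + b->len, text, n);
        b->len += n;
        char *nl;
        while ((nl = memchr(b->data, '\n', b->len))) {
            size_t line = nl - b->data + 1;
            fwrite(b->data, 1, line, stderr);    // emit one complete line
            memmove(b->data, b->data + line, b->len - line);
            b->len -= line;
        }
    }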
The extra buffers for MSGL_STATUS and MSGL_STATS are not needed anymore,
because a separate mp_log instance can be used if problems really occur.
Also, give up, and replace the snprintf acrobatics with bstr.
mp_log.partial has quite a subtle problem wrt. talloc: talloc parents
cannot be used, because there's no lock around the internal talloc
structures associated with mp_log. Thus it has to be freed manually,
even if this happens through a talloc destructor.
Until now, we called mp_msg multiple times in order to add a prefix to
each ffmpeg log message. But logging such partial lines is a race
condition, because there's only one internal mp_msg buffer, and no
external mp_msg locks.
Avoid this by building the message on a stack buffer.
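A hedged sketch of the approach (not the exact mpv callback; the final
output call is a stand-in for the single mp_msg call): prefix and
message are formatted into one stack buffer first, so they reach the
logger in a single call and can't be interleaved with output from other
threads.

    #include <stdarg.h>
    #include <stdio.h>
    #include <libavutil/log.h>

    // Installed with av_log_set_callback(log_callback).
    static void log_callback(void *ptr, int level, const char *fmt,
                             va_list vl)
    {
        char buf[512];
        const char *prefix =
            ptr ? (*(AVClass **)ptr)->item_name(ptr) : "ffmpeg";
        int n = snprintf(buf, sizeof(buf), "[%s] ", prefix);
        if (n >= 0 && n < (int)sizeof(buf))
            vsnprintf(buf + n, sizeof(buf) - n, fmt, vl);
        fprintf(stderr, "%s", buf);  // stand-in for a single mp_msg() call
    }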
I might add an mp_log-local partial line buffer later, but even then
av_log() can be called from multiple threads while targeting the same
mp_log. (Really, ffmpeg's log API needs to be fixed.)
Until now, a rather large stack buffer was used for this, and also a
static buffer in mp_log_root. The latter was added to buffer partial
lines, and the stack buffer was used only for MSGL_STATUS and MSGL_STATS
(I guess because these are the most likely/severe to clash with partial
line buffering).
Make the buffer in mp_log_root dynamically sized, so we don't get
cut-off log lines if the text is excessively large. (The OpenGL
extension list dumped by vo_opengl is such an example.)
Since we still have to support partial line buffering (FFmpeg's log
callbacks leave no other choice), keep the stack buffer. But make it
smaller; there's no way all ~6KB are going to be needed in any
situation.