Generic statement about how this is not really appropriate, etc., and
only useful for temporary debugging things, and how I commit it anyway
despite violating my own principles (and how I'd reject this change if
it came from you).
Consider e.g. --aid=2 with a file that has only 1 track. Then it would
fall back to selecting track 1. Stop doing this. If no matching track is
found, this will not select any track now.
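Roughly the idea, with made-up names (this is not the actual player code):

    #include <stddef.h>

    struct track { int id; };

    // 'user_id' < 0 means "auto"; an explicit ID that matches nothing now
    // selects no track, instead of falling back to the first one.
    static struct track *select_track(struct track **tracks, int num,
                                      int user_id)
    {
        if (user_id >= 0) {
            for (int n = 0; n < num; n++) {
                if (tracks[n]->id == user_id)
                    return tracks[n];
            }
            return NULL; // previously: fall back to the first track
        }
        return num > 0 ? tracks[0] : NULL; // "auto" case left as before
    }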
Note that the fingerprint stuff (track_layout_hash in the source)
softens the impact of this change. Without the fingerprint,
playing a dual-audio file with the second track selected, and then a
single-audio file, would play the second file without audio. But the
fingerprint resets it due to differences in the track list.
Try to exhaustively document this and the tricky interactions with the
other features. What a damn mess, I think it's simply cursed. Of course
it's still my fault.
See: #7608
Some time ago, properties and options were mostly unified. However, the
track selection properties/options semantics are incompatible with this
change. I'm still trying to handle the fallout.
There are two things that are in the way:
1. Track properties somehow return the runtime selection, not the option
value (all while properties are supposed to be aliases to options
with the same name).
2. The user's track options are not supposed to be changed without
interaction. If a track is auto-selected, the property should return
its ID, but the option value should remain at "auto". Only if the
user actually writes to the property should the option value change. E.g.
playing an audio-only file and then a normal video file should not play
the video file with --vid=no just because the audio file had no video
track.
In addition to each of them being in conflict with the property/option
unification, attempting to fix one of them breaks the other one.
Today, we're trying to fix parts of this and avoiding an unfortunate
case where you can get a conflicting option/property value, and where
trying to select a track does nothing if the track to select has the
same ID as the option value.
This breaks 2. from above in certain situations. See manpage additions.
See: #7608
Unfortunately, attached pictures (from tags etc.) are treated as video
tracks. That meant --sub-create-cc-track added a CC track for them as
well. Stop doing that.
See: #7608
This covers 8 and 16 bit packed RGB formats. It doesn't really help with
any actual use-cases, other than giving the finger to libswscale.
One problem is with different color depths. For example, rgb565 provides
1 bit more resolution to the green channel. zimg can only dither to a
uniform depth. I tried dithering to the highest depth and shifting away
1 bit for the lower channels, but that looked ugly (or I messed up
somewhere), so instead it dithers to the lowest depth, and adjusts the
value range if needed. Testing with bgr4_byte (extreme case with 1/2/1
depths), it looks more "grainy" (ordered dithering artifacts) than
libswscale, but it also looks cleaner and smoother. It doesn't have
libswscale's weird red-shift. So I call it a success.
Big endian formats need to be handled explicitly; the generic big endian
swapper code assumes byte-aligned components.
Unpacking is done with shifts and 3 LUTs. This is symmetric to the
packer. Using a generated palette might be better, but I preferred to
keep the symmetry, and to avoid having to mess with a generated palette
and the pal8 code.
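Purely to illustrate the approach (this is not the actual repacker code),
unpacking something like rgb565 with shifts and 3 LUTs that expand each
component to 8 bit looks roughly like:

    #include <stddef.h>
    #include <stdint.h>

    static uint8_t lut_r[32], lut_g[64], lut_b[32];

    static void init_luts(void)
    {
        for (int i = 0; i < 32; i++)
            lut_r[i] = lut_b[i] = (i * 255 + 15) / 31; // 5 bit -> 8 bit
        for (int i = 0; i < 64; i++)
            lut_g[i] = (i * 255 + 31) / 63;            // 6 bit -> 8 bit
    }

    static void unpack_rgb565(const uint16_t *src, uint8_t *r, uint8_t *g,
                              uint8_t *b, size_t w)
    {
        for (size_t x = 0; x < w; x++) {
            uint16_t v = src[x];
            r[x] = lut_r[(v >> 11) & 0x1F];
            g[x] = lut_g[(v >> 5)  & 0x3F];
            b[x] = lut_b[ v        & 0x1F];
        }
    }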
This uses FFmpeg pixfmt constants directly. I would have preferred
keeping zimg completely separate. But neither do I want to add an IMGFMT
alias for each of these formats, nor do I want to extend our imgfmt
code such that it can provide a complete description of each packed RGB
format (similar to FFmpeg pixdesc).
It also appears that FFmpeg pixdesc as well as the FFmpeg pixfmt doxygen
have an error regarding RGB8: the R/B bit depths are swapped. libswscale
appears to be handling them differently. Not completely sure, as this is
the only packed format case with R/B having different depths (instead
of G, the middle component, where things are symmetric).
One of the extremely annoying dumb things in ffmpeg is that most pixel
formats are available as little endian and big endian variants. (The
sane way would be having native endian formats only.) Usually, most of
the real codecs use native formats only, while non-native formats are
used by fringe raw codecs only. But the PNG encoders and decoders
unfortunately use big endian formats, and since PNG is such a popular
format, this causes problems for us. In particular, the current zimg
wrapper will refuse to work (and mpv will fall back to sws) when writing
non-8 bit PNGs.
So add non-native endian support to zimg. This is done in a fairly
"generic" way (which means lots of potential for bugs). If input is a
"regular" format (and just byte-swapped), the rest happens
automatically, which happens to cover all interesting formats.
Some things could be more efficient; for example, endian-swapping is
done on the data before it's passed to the unpacker. You could make endian
swapping part of the actual unpacking process, which might be slightly
faster. You could avoid copying twice in some cases (such as when
there's no actual repacker, or if alignment needs to be corrected). But
I don't really care. It's reasonably fast for the normal case.
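The core of the generic path is trivially simple; a rough sketch (not the
real wrapper code): swap each 16-bit component into a temporary line, then
run the unchanged native-endian unpacker on that.

    #include <stddef.h>
    #include <stdint.h>

    // The extra copy mentioned above: the native-endian unpacker then runs
    // on 'dst' as if the input had been a "regular" format all along.
    static void swap16_line(uint16_t *dst, const uint16_t *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = (uint16_t)((src[i] >> 8) | (src[i] << 8));
    }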
Not entirely sure whether this is correct. Some (but not many) formats
are covered by the tests, some I tested manually. Some I can't even
test, because libswscale doesn't support them (like nv20*).
This sucks, but is helpful for testing.
Obviously, it would be much nicer if there were a way to specify _all_
scaler options per filter (if the user wanted), instead of always using
the global options. But this is "too hard" for now. For testing, it is
extremely convenient to select the scaler backend, so add this option,
but make clear that it could go away. We'd delete it once there is a
better mechanism for this.
Edition title is already exposed in demux_edition; it was just never
added to the display. If no edition title exists it will fall back
to the edition number.
Keys and lines-to-scroll are configurable, and the scroll keys are only
bound on pages which support scrolling (currently only page 4) - also
during oneshot (like the page-switching keys).
Scroll offset is reset for all pages on any key - except scroll keys, so
that entering or switching to a page resets the scroll, as well as when
"re-entering" the same page or "re-activating" the stats oneshot view.
TODO: print_page(..) is highly associated with extending the oneshot
timer if required. The timer handling can probably move into print_page
and be removed from all the places which boilerplate its management.
The call was hidden very well, via
dvb_streaming_read -> dvb_update_config
-> dvb_streaming_start -> dvb_set_channel,
and broke the stream buffering logic.
Dropping that call does not noticeably slow down channel switches.
This used 1 MB due to building the complete command and property list
when starting the script. These are needed only for auto-completion, so
build them only on demand. Since building them is fast enough, rebuild
them every time.
The key bindings table is not that much, but saves some KBs. Oddly, the
code to build it uses less memory than the table at runtime (???), so
build it on demand as well.
Add 2 tactical collectgarbage() calls as well. This frees unused heap when
it is known that the script is going to be completely inactive until
re-enabled by the user.
The buffer can be larger than the normal size when "peeking" is used
(such as done with some file formats, where a large number of bytes may
need to be "peeked" at the beginning, because FFmpeg). Once normal
operation resumes, it's supposed to free this buffer again. Apparently
this didn't happen as intended, because normal reading had no way to
discard the back buffer before/while resizing the buffer. There's only a
path for discarding the back buffer when actually reading.
It seems like this unfortunately needs 2 code paths for discarding old
data. Just put it into stream_resize_buffer(), where it's rather
non-tricky (discarding can be done by adjusting the copy offset when
moving data to the new allocation). The function now drops old data if
it doesn't fit into the allocation. The caller must ensure that the new
size is sufficient; the function signature changes only so the size of
the implicitly guaranteed kept part can be checked with assert().
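In spirit, the function now does something like this (heavily simplified;
not the actual stream.c code):

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    struct sbuf {
        unsigned char *data;
        size_t alloc;   // allocation size
        size_t len;     // currently buffered bytes
    };

    // 'keep_min' is the implicitly guaranteed kept part mentioned above.
    // (Allocation failure handling omitted for brevity.)
    static void resize_buffer(struct sbuf *b, size_t new_alloc,
                              size_t keep_min)
    {
        unsigned char *nbuf = malloc(new_alloc);
        size_t keep = b->len < new_alloc ? b->len : new_alloc;
        assert(keep >= keep_min);
        // Dropping old back-buffer data is just a matter of adjusting the
        // copy offset into the old allocation: keep only the newest bytes.
        memcpy(nbuf, b->data + (b->len - keep), keep);
        free(b->data);
        b->data = nbuf;
        b->alloc = new_alloc;
        b->len = keep;
    }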
In display-sync mode, the core doesn't need to be woken up every vsync, but
only every time a new actual video frame needs to be queued. So don't
wake up if there are still frames to repeat.
In audio-sync mode, the wakeup is simply redundant, since there's a
separate timer (in->wakeup_pts) to control when to queue a new frame. I
think.
This finally brings the required playloop iterations down to almost the
number of video frames. (As originally intended, really.)
Also a fairly risky change.
The wakeup at the end of VO frame rendering seems redundant, because
after rendering almost no state changes. The player core can queue a new
frame once frame rendering begins, and there's a separate wakeup for
this. The only thing that actually changes is in->rendering. The only
thing that seems to depend on it and can trigger a wakeup is the
vo_still_displaying() function. Change it so that it needs an explicit
call to a new API function, so we can avoid wakeups in the common case.
The vo_still_displaying() code is mostly just moved around due to
locking and for avoiding forward declarations.
Also a somewhat risky change (tasty new bugs).
This should be unnecessary, since the VO itself performs wakeups once a
new frame can be queued. The only situation I can think of where this
might be required are EOF situations (which are always strange).
If I'm wrong, there'll be fun new bugs, probably causing frame drops or
temporary stalls.
When the player core requests new frames from the filter, this is called
external/recursive filtering, which acts slightly differently from when
filters request new data internally. Mainly this is so the external user
doesn't have to call mp_filter_graph_run() just to get a frame. This
causes a number of complications, and the short version is that until
now, mp_filter_graph_run() has unnecessarily returned true in the
current common case, which made the playloop run too often for no
reason.
The problem is that when a mp_pin is read externally, updating the same
mp_pin during recursive filtering flagged external_pending when the
result was written, which made mp_filter_graph_run() return true, which
made the playloop call mp_filter_graph_run() again. This is redundant
because the caller is obviously checking the new state of the mp_pin
immediately.
The only situation in which external_pending really must be set is if
_another_ pin is changed. In theory, we could also unset it if the set
of "external" pins that are not in a signaled state becomes empty, but
we don't track that in a convenient way.
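To illustrate the rule (names invented for the sketch, not taken from
filter.c):

    #include <stdbool.h>

    struct pin { bool external; };

    struct graph {
        struct pin *reading_extern; // pin the external caller reads right now
        bool external_pending;      // tells the user to run the graph again
    };

    // Called when recursive filtering makes a pin readable.
    static void pin_became_readable(struct graph *g, struct pin *p)
    {
        if (p->external && p != g->reading_extern)
            g->external_pending = true;
        // If p == g->reading_extern, the external caller checks the pin's
        // new state immediately anyway, so another wakeup would be redundant.
    }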
This commit removes the redundant signaling, and avoids running the
playloop an additional time for each video and audio frame (as it
actually was planned from the beginning, but duh).
If a filter receives an asynchronous wakeup during filtering, then
process newly pending filters resulting from that as well, before
returning to the user. Might possibly skip some redundant playloop
cycles.
There is an explicit comment in the code about how this shouldn't be
done, but I think it makes no sense. Filters have no business trying to
interrupt the mainloop, and mp_filter_graph_interrupt() provides a
proper mechanism to do this (though intended to be used by the filter
user, not filters).
In this case, init_buffers() was not called, and the unrelated cache
sample buffers were not initialized. It appears they are indeed
completely unrelated, so move their initialization away. Not sure what
exactly the purpose of calling init_buffers() is, maybe clearing old
data when displaying stats again. The new place for initializing the
cache sample buffers should achieve the same anyway.
Fixes: #7597
This should make it behave roughly like when switching from a file to
the next (clearing audio buffers, keeping AO, but closing AO if the
audio format seems to have changed and gapless mode is "weak").
Not necessarily useful, but harmless and may help with #7579 (untested).
Replace use of .min==1 with a proper flag. This is a good idea, because
it has nothing to do with numeric limits (also see commit 9d32d62b61
for how this can go wrong).
With this, m_option.min/max are strictly used for numeric limits.
This was optional, with the intention that normally such options require
a valid format. But there is no reason for this (at least not anymore),
and it's actually more logical to accept "no" in all situations where this
option type is used. This also gets rid of the weird min field special
use.
These used ".min = MP_NOPTS_VALUE" to indicate certain exceptions. This
broke with the recent change to how min/max are handled, which made
setting min or max mean that a value range is used, thus setting max=0.
Fix this by not using a magic value in .min; replace it with a proper
flag.
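Schematically (field and flag names invented for illustration; the real
m_option definitions differ):

    #define OPT_F_ALLOW_UNSET (1u << 0)   // hypothetical flag name

    struct opt_def {
        const char *name;
        double min, max;   // now strictly numeric limits
        unsigned flags;
    };

    // Before: the exception was a magic .min value, which broke once
    // setting .min started to imply "a numeric range is in use":
    //     { .name = "start", .min = MP_NOPTS_VALUE }
    // After: the exception is carried by a flag, and .min/.max stay free
    // for actual ranges:
    static const struct opt_def start_opt = {
        .name  = "start",
        .flags = OPT_F_ALLOW_UNSET,
    };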
Fixes: #7596
This is the proper fix for 1e7802. Turns out the solution is dead
simple: we can still set the allocator with lua_getallocf /
lua_setallocf.
This commit makes memory accounting work on luajit as well.
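For reference, the relevant Lua C API is lua_Alloc together with
lua_getallocf/lua_setallocf. A minimal sketch of a counting allocator (the
accounting struct is illustrative, not mpv's actual code; a closer match
would wrap the allocator returned by lua_getallocf instead of calling
realloc directly):

    #include <lua.h>
    #include <stdlib.h>

    struct mem_stats { size_t bytes; };

    // lua_Alloc-compatible allocator that tracks the total heap size.
    static void *counting_alloc(void *ud, void *ptr, size_t osize,
                                size_t nsize)
    {
        struct mem_stats *st = ud;
        if (ptr)
            st->bytes -= osize; // old size is only valid if ptr != NULL
        if (nsize == 0) {
            free(ptr);
            return NULL;
        }
        st->bytes += nsize;     // (allocation failure ignored for brevity)
        return realloc(ptr, nsize);
    }

    // Install after creating the Lua state.
    static void install_accounting(lua_State *L, struct mem_stats *st)
    {
        lua_setallocf(L, counting_alloc, st);
    }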
This is a stopgap measure. In theory we could maybe poll the memory
usage on luajit, but for now, simply reverting this part of fd3caa26
makes Lua work again. (And we can still collect cpu usage metrics)
Proper solution pending (tm)
While --input-file was removed for justified reasons, wanting to pass
down socket FDs this way is legitimate, useful, and easy to implement.
One odd thing is that
Fixes: #7592
Add an infrastructure for collecting performance-related data, use it in
some places. Add rendering of them to stats.lua.
There were two main goals: minimal impact on the normal code and normal
playback. So all these stats_* function calls either happen only during
initialization, or return immediately if no stats collection is going
on. That's why it does this lazy adding of stats entries etc. (a first
iteration made each stats entry an API thing, instead of just a single
stats_ctx, but I thought that was getting too intrusive in the "normal"
code, even if everything gets worse inside of stats.c).
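The shape of it as a self-contained sketch (not the real stats.c API):
every call bails out immediately when collection is off, and entries are
created lazily by name on first use.

    #include <stdbool.h>
    #include <string.h>

    #define MAX_ENTRIES 64

    struct stat_entry { const char *name; double value; };

    struct stats_ctx {
        bool enabled;
        int num_entries;
        struct stat_entry entries[MAX_ENTRIES];
    };

    static struct stat_entry *find_or_add(struct stats_ctx *ctx,
                                          const char *name)
    {
        for (int n = 0; n < ctx->num_entries; n++) {
            if (strcmp(ctx->entries[n].name, name) == 0)
                return &ctx->entries[n];
        }
        if (ctx->num_entries == MAX_ENTRIES)
            return NULL;
        ctx->entries[ctx->num_entries] = (struct stat_entry){ .name = name };
        return &ctx->entries[ctx->num_entries++];
    }

    void stats_value(struct stats_ctx *ctx, const char *name, double v)
    {
        // Normal playback: a single branch, no work, no allocation.
        if (!ctx->enabled)
            return;
        struct stat_entry *e = find_or_add(ctx, name);
        if (e)
            e->value = v;
    }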
You could get most of this information from various profilers (including
the extremely primitive --dump-stats thing in mpv), but this makes it
easier to see the most important information at once (at least in
theory), partially because we know best about the context of various
things.
Not very happy with this. It's all pretty primitive and dumb. At this
point I just wanted to get over with it, without necessarily having to
revisit it later, but with having my stupid statistics.
Somehow the code feels terrible. There are a lot of meh decisions in
there that could be better or worse (but mostly could be better), and it
just sucks but it's also trivial and uninteresting and does the job. I
guess I hate programming. It's so tedious and the result is always shit.
Anyway, enjoy.
I think that makes more sense.
And also remove the graph from the total cache usage, since that wasn't
very interesting. So there's still a total of 2 graphs.
That's where it comes from after all. The other property does not have
much of a reason to exist anymore, but there's no real reason to remove
it either.
Previously, EGL as provided by a pkg-config was checked for independently
in several places. The effect this had is that --disable-egl would not
actually disable EGL from the build, as this only affected the "egl" option
relied upon by egl-x11. wayland-gl and egl-drm did their own EGL checks.
By making wayland-gl and drm-egl depend on egl instead, we fix this
behaviour and can simplify egl-helpers a bit, as we can now simply
check whether egl or one of the other features providing some non-pc egl
is enabled, instead of checking every single thing that might be pulling
in egl.
Future work could make the "egl" option just be a catchall for any
EGL implementation, so that brcmegl and angle and Android can piggyback
on the egl option as well.
Ancient Linux audio output. Apparently it survived until now, because
some BSDs (but not all) had use of this. But these should work with
ao_sdl or ao_openal too (that's why these AOs exist after all). ao_oss
itself has the problem that it's virtually unmaintainable from my point
of view due to all the subtle (or non-subtle) differences. Look at the
ifdef mess and the multiple code paths (that shouldn't exist) in the
removed source code.
I wonder what this even is. I've never heard of anyone using it, and
can't find a corresponding library that actually builds with it. Good
enough to remove.
It was always marked as "experimental", and had inherent problems that
were never fixed. It was disabled by default, and I don't think anyone
is using it.