The new wavpack packet format (see previous commit) doesn't work with
older libavcodec versions, so disable the new code in this case.
The version numbers are only approximate, since the libavcodec version
wasn't bumped with the wavpack change, but it's close enough.
Libav introduced a silent API breakage by changing what wavpack packets
the libavcodec decoder accepts. Originally, the decoder accepted
Matroska-style wavpack packets. Libav commit 9b6f47c removed this
capability from the libavcodec code, and added code to libavformat's
Matroska demuxer to "rearrange" wavpack packets. Since demux_mkv still
sent Matroska-style packets, playback failed.
Fix this by "rearranging" packets in demux_mkv as well by copying
libavformat's code. (The best kind of fix.)
Tested with [CCCP]_Mega_Lossless_Audio_Test.mkv, as well as with a
sample generated by mkvmerge.
0 is invalid. The intention of the code is to turn off any additional
alignment, so we need 1.
Change a comment: obviously we don't try to set alignment parameters
etc. to handle stride correctly; instead, we do everything by row.
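For illustration, the relevant call looks like this (a sketch, not the
exact mpv code):

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* 1 disables extra alignment */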
This probes and prints the depth of some texture formats with the help
of an FBO. By default it tests the format used for scaling, as well as
the format used for dithering and the 3D LUT (if any of these are
enabled).
The output is visible only with -v. Some representative values are
probed, and the difference between input and output values is printed as
a hex-float. Hex-floats are used because they make the implied precision more
obvious. Originally I wanted to do some more sophisticated guessing of
the implied depth/precision for more user-friendly reporting, but then
I decided that printing raw data is better for debugging, especially if
things go wrong.
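As an example of the notation, a plain C program printing an 8 bit
quantization step and an exact round trip as hex-floats:

    #include <stdio.h>

    int main(void) {
        printf("%a\n", 1.0 / 256.0); /* prints 0x1p-8: one 8 bit step */
        printf("%a\n", 0.0);         /* prints 0x0p+0: exact result */
        return 0;
    }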
This does not try to disable any functionality and does not print any
warnings if the depth is lower than what it should be.
This might be better with dumb shader compilers, which won't vectorize
this into a single vector division on their own, assuming the hardware
has such an instruction. Affects the "bicubic_fast" scale mode only.
The internal texture format GL_RED is typically 8 bit, which is clearly
not good enough for the new dither matrix. The idea was to use a float
texture format, but this was somehow "forgotten". Use GL_R16, since
16 bit textures are more robust, and provide more precision for the
same memory usage.
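A sketch of the upload (not the literal mpv code; tex_size and matrix
are stand-ins):

    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, tex_size, tex_size, 0,
                 GL_RED, GL_FLOAT, matrix); /* 16 bit normalized storage */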
Change how the offset for centering the dither matrix is applied. This
is needed to make it possible to round values up to the target depth.
Before this commit, this changed the output even if the input was exact
and input and output depth were the same, which is not really what you
want. Now it doesn't do that anymore.
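An illustrative formulation of the rounding (not the actual shader
code): with the offset folded into a dither value d in [0,1), an exact
n-bit input k/(2^n-1) maps to itself, since floor(k + d) == k.

    /* needs <math.h>; d must be in [0,1) */
    float quantize(float x, int n, float d) {
        float m = (float)((1 << n) - 1);
        return floorf(x * m + d) / m; /* exact inputs stay exact */
    }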
The core deselected all streams on initialization, and then selected the
streams it actually wanted. This was no problem for
demux_mkv/demux_lavf, but old demuxers (like demux_asf) could lose some
packets. The problem is that these demuxers can buffer some data on
initialization, which then is flushed on track switching. Fix this by
explicitly avoiding deselecting a wanted stream.
Most of these are rather questionable; the rest you rarely need to set
manually. You still can set all of them with -lavdopts-o (because
libavcodec has AVOptions for them).
Playing something with "mpv f1.mkv f2.mkv --gapless-audio --volume=20"
caused the volume to be reset when playing a new file. Normally, the
volume should not be reset (unless explicitly requested with per-file
options), and without either --gapless-audio or --volume it works as
expected.
The underlying problem is that volume was saved only when the AO was
uninitialized, and also the volume was always set when starting a file.
Fix this by saving the volume when playback ends, and when the audio
is reinitialized. To make sure the volume is never restored twice or
saved in the wrong situation, introduce INITIALIZED_VOL.
Also note that this volume saving and restoring only happens if the
--volume option is used. mixer.c does its own bookkeeping of volume.
The main reason for this is that the volume option could be reset by
per-file options (see manpage), and mixer.c doesn't know anything
about this stuff. This is probably dumb, and maybe some things could
be simplified. But for now this will work.
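A sketch of the bookkeeping, not the actual mpv code (save_volume() is
a hypothetical helper):

    if (mpctx->initialized_flags & INITIALIZED_VOL) {
        mpctx->initialized_flags &= ~INITIALIZED_VOL;
        save_volume(mpctx); /* runs at most once per (re)initialization */
    }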
When AAC is streamed over HTTP, using libavformat defaults is
pathetically slow. One solution for that is skipping probing and using
the mimetype to identify that it's AAC instead. This is what we did
before this commit (and ffmpeg does it too, but their logic is too
"inaccessible" for mpv).
This is still pretty fragile though. Make it a bit more robust by
requiring minimal probing. A probescore of 25 is reached after feeding
2 KB to libavformat (instead of > 500 KB for the normal probescore), so
use that. This is done only when streaming AAC from HTTP to reduce the
possibility of weird breakages for other formats.
Also reduce analyzeduration. The default analyzeduration will make
libavformat read lots of data, which makes playback start slow. So we
set analyzeduration to a low value. On the other hand, doing that for
other formats is risky, because there are unspecified effects with
certain "strange" formats (like transport streams). So we do this only
if we're streaming AAC from HTTP as well.
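In terms of the libavformat API, the idea is roughly this (a sketch;
the stream/mimetype checks and exact values are mpv-specific stand-ins):

    if (stream_is_http && mime_type_is_aac) {
        avfc->probesize = 2048;            /* probescore 25 needs ~2 KB */
        avfc->max_analyze_duration = AV_TIME_BASE; /* cut the default */
    }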
tl;dr libavformat is shit for media players
This can control whether demux_lavf should use the HTTP mime type to
determine the format, instead of probing the data with the libavformat
API. This makes it easier to debug cases where the mimetype is
incorrect. (This is done only for AAC streams right now.)
In commit 0e07189, I made the status line always print a newline,
instead of cutting the output at 80 columns (or if stderr is a terminal,
whatever width the terminal reports). This is better when the
output goes into a log file or a pipe.
This caused problems for people who want to pipe raw video to mpv, so
change it again. (Not sure why they won't use FIFOs instead.)
Now output untrimmed lines if the slave mode flag is set, which makes
sense to do, too. The current slave mode is still on life support,
though.
This fixes a bug that caused the application to never leave its
frontmost position.
The idea is stolen from @donmelton who used it in MPlayerShell. Thanks!
This is basically a "do not use" label. We don't remove them yet,
because we still support FFmpeg releases where we cannot use
libavfilter for various reasons. Also, Libav causes pain as usual
due to the lack of ported mplayer filters in its codebase, so not
all filters will be available there.
There's no point duplicating all the text that is already in the man
pages, and synchronizing them is a pain. Place a link to the
github-generated pages instead.
Unfortunately, the anchor '#vo-opengl' does not work. Maybe github's
rst converter just sucks, as the generated HTML actually contains
links using that anchor, but does not generate the anchor itself.
Too bad.
If the image is not writeable, it actually has to be copied
beforehand. This was overlooked when converting the video chain to
reference counted images.
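The fix boils down to this pattern (a sketch):

    mp_image_make_writeable(img); /* copies if the buffer is shared */
    /* img can now safely be modified in place */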
Fix a double free issue. This was overlooked when vf.c was changed to
free filter priv data automatically.
Tests with demux_mkv show that the speed doesn't change (or actually,
it seems to be faster after this change). In any case, there is not
the slightest reason why these should be inline. Functions for which
this will (probably) actually matter, like stream_read_char, are
still left inline.
This was tested with demux_mkv's indexing. For broken files without
index, demux_mkv creates an on-the-fly index. If you seek to a later
part of the file, all data has to be read and parsed until the wanted
position is found. This means demux_mkv will do mostly I/O, calling
stream_read_char() and stream_read(). This should be the most I/O
intensive non-deprecated part of mpv that uses the stream interface.
(demux_lavf has its own buffering.)
Use a different algorithm to generate the dithering matrix. This
looks much better than the previous ordered dither matrix with its
cross-hatch artifacts.
The matrix generation algorithm as well as its implementation was
contributed by Wessel Dankers aka Fruit. The code in dither.c is
his implementation, reformatted and with static global variables
removed by me.
The new matrix is uploaded as a float texture - before this commit, it
was a normal integer fixed point matrix. This means dithering will
be disabled on systems without float textures.
The size of the dithering matrix can be configured, as the matrix is
generated at runtime. The generation of the matrix can take rather
long, and is already unacceptably slow with size 8. The default is 6,
which takes about 100 ms on a Core2 Duo system with dither.c compiled
at -O2, which I consider just about acceptable.
The old ordered dithering is still available and can be selected with
the dither=ordered sub-option. The ordered dither matrix
generation code was moved to dither.c. This function was originally
written by Uoti Urpala.
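For reference, ordered dither matrices follow the classic recursive
Bayer construction. An illustrative (non-mpv) generator for power-of-two
sizes:

    /* Fills m (size x size, row stride 'stride') with values
     * 0..size*size-1, starting from the trivial 1x1 matrix. */
    static void bayer_matrix(int *m, int stride, int size) {
        m[0] = 0;
        for (int sz = 1; sz < size; sz *= 2) {
            for (int y = 0; y < sz; y++) {
                for (int x = 0; x < sz; x++) {
                    int v = m[y * stride + x] * 4;
                    m[y * stride + x]             = v;
                    m[y * stride + x + sz]        = v + 2;
                    m[(y + sz) * stride + x]      = v + 3;
                    m[(y + sz) * stride + x + sz] = v + 1;
                }
            }
        }
    }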
GetTimer() is generally replaced with mp_time_us(). Both calls return
microseconds, but the latter uses int64_t, is defined to never wrap,
and never returns 0 or negative values.
GetTimerMS() has no direct replacement. Instead the other functions are
used.
For some code, switch to mp_time_sec(), which returns the time as a double
float value in seconds. The returned time is offset to program start
time, so there is enough precision left to deliver microsecond
resolution for at least 100 years. Unless it's cast to a float
(or the CPU reduces precision), which is why we still use mp_time_us()
out of paranoia in places where precision is clearly needed.
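Typical usage of the two calls:

    int64_t t0 = mp_time_us();  /* int64_t microseconds, never wraps */
    double now = mp_time_sec(); /* seconds since program start, double */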
Always store time values in the right type. The whole point of the new
timer calls is that they don't wrap, and storing microseconds in unsigned
int variables would negate this.
In some cases, remove wrap-around handling for time values.
This was used by some VOs to do timing of cursor autohiding, but we
recently moved that out of the VOs. Even though this mechanism might
be a good idea and could be needed again in future (but for what?),
it's unused now. So better just get rid of it.
Make OS specific timer code export a mp_raw_time_us() function, and
add generic implementations of GetTimer()/GetTimerMS() using this
function. New mpv code is supposed to call mp_time_us() in situations
where precision is absolutely needed, or mp_time_sec() otherwise.
Make it so that mp_time_us() returns a small value near program start.
We don't set it to 0, though, to avoid confusion between relative and
absolute time. Instead, pick an arbitrary offset.
Move the test program in timer-darwin.c to timer.c, and modify it to
work with the generic timer functions.
Notify the core of mouse movement events. The coordinates are converted to a
coordinate system with the origin in the upper left corner, since Cocoa
has it in the lower left corner.
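The conversion itself is just a vertical flip (a sketch; window_height
and cocoa_y are stand-ins):

    /* Cocoa's origin is the lower left corner; the core expects the
     * upper left corner. */
    int mp_y = (int)(window_height - cocoa_y);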
Use VOCTRL_CHECK_EVENTS instead. Change the remaining VOs to use it.
Only vo_sdl and vo_caca actually need this, and vo_null, vo_lavc, and
vo_image had stubs only.
Instead of having separate callbacks for each backend-handled feature
(like MPGLContext.fullscreen, MPGLContext.border, etc.), pass the
VOCTRL responsible for this directly to the backend. This allows
removing a bunch of callbacks that currently must be set even for
optional/lesser features (like VOCTRL_BORDER).
This requires changes to all VOs using gl_common, as well as all
backends that support gl_common.
Also introduce VOCTRL_CHECK_EVENTS. vo.check_events is now optional.
VO backends can use VOCTRL_CHECK_EVENTS instead of implementing
check_events. This has the advantage that the event handling code in
VOs doesn't have to be duplicated if vo_control() is used.
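The resulting pattern in a VO or backend looks roughly like this (a
sketch, not any specific VO):

    static int control(struct vo *vo, uint32_t request, void *data) {
        switch (request) {
        case VOCTRL_CHECK_EVENTS:
            /* poll and handle windowing system events here */
            return VO_TRUE;
        case VOCTRL_FULLSCREEN:
            /* toggle fullscreen state */
            return VO_TRUE;
        }
        return VO_NOTIMPL;
    }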
The ALSA device was not closed when initialization failed.
The ALSA error handler (set with snd_lib_error_set_handler()) was not
unset when closing ao_alsa. If this is not done, the handler will still
be called when other libraries using ALSA cause errors, even though
ao_alsa was long closed. Since these messages were prefixed with
"[AO_ALSA]", they were misleading and implying ao_alsa was still used.
For some reason, our error handler is still called even after doing
snd_lib_error_set_handler(NULL), which should be impossible. Checking
with a debugger, inserting printf() calls, and reading the alsa-lib
source code all suggest our error handler should not be called, but it still
happens. It's a complete mystery.
Mostly copied from vf_lavfi. The parts that could be shared are minor,
because most code is about setting up audio and video, which are too
different.
This won't work with Libav. I used ffplay.c as a guide, and noticed too
late that their setup methods are incompatible with Libav's. Trying to
make it work with both would be too much effort. The configure test for
av_opt_set_int_list() should disable af_lavfi gracefully when compiling
with Libav.
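For reference, this is the kind of ffplay-style sink setup that needs
av_opt_set_int_list() (a sketch; 'sink' stands for the abuffersink
filter context):

    static const enum AVSampleFormat fmts[] =
        { AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_NONE };
    av_opt_set_int_list(sink, "sample_fmts", fmts,
                        AV_SAMPLE_FMT_NONE, AV_OPT_SEARCH_CHILDREN);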
Due to option parser chaos, you currently can't have a "," as part of
the filter graph string - not even with quoting or escaping. This will
probably be fixed later.
The audio filter chain is not PTS aware. So we have to do some hacks
to make up a fake PTS, and we have to map the output PTS back to the
filter chain's method of tracking PTS changes and buffering, by
adjusting af->delay.