This bug was the result of crappy position detection in the previous code
combined with the commits moving autohide delay out of the cocoa backend and
into the core.
The hit detection was improved and now also takes into account
interactions with the Dock and Menubar. Moreover, VOCTRL_SET_CURSOR_VISIBILITY
now has an effect only if the mouse position matches this improved hit
detection. This means that interactions with the Dock and Menubar are
considered, as well as moving the mouse into another screen.
This removes a bit of ugly code and bookkeeping, which is never bad. `drawRect`
needs to guard against different window instances since in fullscreen the view
is wrapped in a fullscreen window provided by the toolkit (an instance of
NSFullScreenWindow to be precise).
The event handling was moved to the view so that it can still get all the
events when in the fullscreen window. Ideally these should be moved to
some NSResponder subclass within macosx_application and made available even
when no window is present. I refrained from this because "small steps".
At this point 10.6 is pretty old and we don't want to keep supporting old
platforms.
I'm killing all the 10.6 compatibility code before doing more refactorings.
Next commits will also use newer Objective-C syntax such as literals and
@autoreleasepool.
This was broken with 84829a4 "Merge branch 'osd_changes' into master".
The new OSD/subtitle code never respected the --sub-forced-only option,
and the old code implementing this option was removed in fd5c4a1.
This unifies the subtitle rendering path. Now all subtitle rendering
goes through sd_ass.c/sd_lavc.c/sd_spu.c.
Before that commit, the spudec.h functions were used directly in
mplayer.c, which introduced many special cases. Add sd_spu.c, which is
just a small wrapper connecting the new subtitle render API with the
dusty old vobsub decoder in spudec.c.
One detail that changes is that we always pass the palette as extra
data, instead of passing the libdvdread palette as pointer to spudec
directly. This is a bit roundabout, but actually makes the code simpler
and more elegant: the difference between DVD and non-DVD dvdsubs is
reduced.
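A minimal sketch of what "pass the palette as extra data" means here,
assuming mpv's talloc string helpers; the variable names are
illustrative, not the actual code:

    #include <inttypes.h>

    // Serialize the 16-entry libdvdread palette into extradata text
    // (roughly an .idx-style "palette:" line, which spudec can parse),
    // instead of handing spudec a raw pointer into DVD structures:
    char *s = talloc_strdup(NULL, "palette: ");
    for (int i = 0; i < 16; i++)
        s = talloc_asprintf_append(s, "%06"PRIx32"%s",
                                   palette[i] & 0xffffff,
                                   i < 15 ? ", " : "\n");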
Ideally, we would just delete spudec.c and use libavcodec's DVD sub
decoder. However, DVD playback with demux_mpg produces packets
incompatible with lavc. There are incompatibilities the other way around
as well: packets from libavformat's vobsub demuxer are incompatible with
spudec.c. So we define a new subtitle codec name for demux_mpg subs,
"dvd_subtitle_mpg", which only sd_spu can decode.
There is actually code in spudec.c to "assemble" fragments into complete
packets, but using the whole spudec.c is easier than trying to move this
code into demux_mpg to fix subtitle packets.
As an additional complication, Libav 9.x can't decode DVD subs correctly,
so use sd_spu in that case as well.
It appears demux_mpg doesn't output timestamps for subtitles. The vobsub
code handled this by doing its own PTS calculations. This code is absent
from the normal subtitle decoder path. Copy this code into the normal
path, so that we can unify the subtitle decoder paths in a later commit.
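A hedged sketch of the fallback (field names illustrative,
MP_NOPTS_VALUE as in mpv):

    // If the demuxer produced no PTS for the subtitle packet, derive
    // one from the current video position, like the old vobsub path:
    double pts = pkt_pts;
    if (pts == MP_NOPTS_VALUE)
        pts = mpctx->video_pts;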
Decoding subtitles with sd_lavc when playing DVD with demux_mpg still
doesn't work.
This was once needed to handle subtitle packets coming from a demuxer,
where seeking back might repeat previous events. This doesn't happen
anymore, and this code is used to convert complete files. So if there
are any duplicate lines, they must have been duplicated in the file,
and the old subtitle renderer would have shown them twice as well.
Today, checking for duplicate events happens in sd_ass.c (and has for a
while). There's no reason to keep this code, and it actually
causes trouble. Loading big subtitle files is extremely slow because
this makes adding n subtitles O(n^2).
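For illustration, the removed check did roughly this (names
hypothetical), which is linear per added event and thus quadratic over
a whole file:

    // Compare each new event against all previously added ones:
    for (int i = 0; i < track->n_events; i++) {
        if (event_equals(&track->events[i], new_event))
            return;  // duplicate: drop it
    }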
The -no-ass switch used to disable any use of libass for text subtitles.
This is not really the case anymore, because libass is now always
involved when rendering text. The only remaining use of -no-ass is
disabling styling or showing subtitles on the terminal. On the other
hand, the old subtitle rendering path is a big reason why the subtitle
code is still a big mess with an awful number of obscure special cases.
In order to simplify it, remove the old subtitle rendering code, and
always go through sd_ass.c. Basically, we use ASS_Track as central data
structure for storing text subtitles instead of struct sub_data. This
also makes libass mandatory for all text subs, even if they are printed
to the terminal in -no-video mode. (We could add something like sd_text
to avoid this, but it's not worth the trouble.)
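Roughly what storing text subs in an ASS_Track looks like; the libass
event API is real, the sub_data field names are illustrative:

    #include <ass/ass.h>
    #include <string.h>

    int eid = ass_alloc_event(track);
    ASS_Event *e = track->events + eid;
    e->Start    = sub->start * 10;              // centiseconds -> ms
    e->Duration = (sub->end - sub->start) * 10;
    e->Style    = track->default_style;
    e->Text     = strdup(escaped_text);         // text pre-escaped for ASS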
struct sub_data and subreader.c are still around, including its ASS/SSA
reader. But struct sub_data is freed right after converting it to
ASS_Track. The internal ASS reader actually can handle some obscure
cases libass can't, like files encoded in UTF-16.
The new wavpack packet format (see previous commit) doesn't work with
older libavcodec versions, so disable the new code in this case.
The version numbers are only approximate, since the libavcodec version
wasn't bumped with the wavpack change, but it's close enough.
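Illustrative version gate (the exact cutoff is approximate, as said
above; the number here is a guess, not the actual one used):

    #include <libavcodec/version.h>

    #if LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(54, 90, 0)
    #define NEW_WAVPACK_PACKETS 1
    #else
    #define NEW_WAVPACK_PACKETS 0
    #endif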
Libav introduced a silent API breakage by changing which wavpack packets
the libavcodec decoder accepts. Originally, the decoder accepted
Matroska-style wavpack packets. Libav commit 9b6f47c removed this
capability from the libavcodec code, and added code to libavformat's
Matroska demuxer to "rearrange" wavpack packets. Since demux_mkv still
sent Matroska-style packets, playback failed.
Fix this by "rearranging" packets in demux_mkv as well by copying
libavformat's code. (The best kind of fix.)
Tested with [CCCP]_Mega_Lossless_Audio_Test.mkv, as well as with a
sample generated by mkvmerge.
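Heavily abbreviated sketch of what the copied "rearranging" does (the
real field layout is in libavformat's Matroska demuxer):

    // Prepend a reconstructed 32-byte "wvpk" block header to each
    // header-stripped Matroska frame before sending it to libavcodec:
    AV_WL32(dst + 0, MKTAG('w', 'v', 'p', 'k'));
    AV_WL32(dst + 4, srclen + 24);   // ckSize
    AV_WL16(dst + 8, ver);           // WavPack version from track header
    // ...sample counts, flags and CRC follow...
    memcpy(dst + 32, src, srclen);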
0 is invalid. The intention of the code is to turn off any additional
alignment, so we need 1.
Change a comment: obviously we don't try to set alignment parameters
etc. to handle stride correctly, and instead do everything by row.
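Presumably the call in question looks like this (hedged sketch):

    // GL_UNPACK_ALIGNMENT of 1 means "no additional row alignment";
    // 0 is simply not a valid value for this parameter.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);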
This probes and prints the depth of some texture formats with the help
of an FBO. By default it tests the format used for scaling, as well as
the format used for dithering and the 3D LUT (if any of these are
enabled).
The output is visible only with -v. Some representative values are
probed, and the difference of input and output value is printed as hex-
float. Hex-floats are used because they make the implied precision more
obvious. Originally I wanted to do some more sophisticated guessing of
the implied depth/precision for more user-friendly reporting, but then
I decided that printing raw data is better for debugging, especially if
things go wrong.
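Self-contained example of why hex floats are handy here (probe values
made up):

    #include <stdio.h>

    int main(void)
    {
        float in = 0.5f, out = 0.50390625f;  // hypothetical probe values
        // %a prints hex floats; the error is obviously 2^-8, i.e. the
        // texture behaves like an 8 bit format:
        printf("in=%a out=%a diff=%a\n", in, out, out - in);
        // prints (with glibc): in=0x1p-1 out=0x1.02p-1 diff=0x1p-8
        return 0;
    }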
This does not try to disable any functionality and does not print any
warnings if the depth is lower than what it should be.
This might be better with dumb shader compilers, which won't vectorize
this to a single vector-division, assuming the hardware does have such
an instruction. Affects "bicubic_fast" scale mode only.
The internal texture format GL_RED is typically 8 bit, which is clearly
not good enough for the new dither matrix. The idea was to use a float
texture format, but this was somehow "forgotten". Use GL_R16, since
16 bit textures are more robust, and provide more precision for the
same memory usage.
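The upload then looks roughly like this (sketch):

    // 16 bit single-channel internal format instead of the (typically
    // 8 bit) GL_RED default:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, size, size, 0,
                 GL_RED, GL_UNSIGNED_SHORT, matrix_data);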
Change how the offset for centering the dither matrix is applied. This
is needed for making it possible to round up values to the target depth.
Before this commit, this changed the output even if the input was exact
and input and output depth were the same, which is not really what you
want. Now it doesn't do that anymore.
The core deselected all streams on initialization, and then selected the
streams it actually wanted. This was no problem for
demux_mkv/demux_lavf, but old demuxers (like demux_asf) could lose some
packets. The problem is that these demuxers can buffer some data on
initialization, which then is flushed on track switching. Fix this by
explicitly avoiding deselecting a wanted stream.
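A hedged sketch of the fix (demuxer_select_track() is mpv's API, the
stream_is_wanted() helper is hypothetical):

    for (int n = 0; n < demuxer->num_streams; n++) {
        struct sh_stream *sh = demuxer->streams[n];
        // Never deselect a stream the core is about to use, so demuxers
        // that buffer on init don't flush its packets:
        demuxer_select_track(demuxer, sh, stream_is_wanted(sh));
    }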
Most of these are rather questionable; the rest you rarely need to set
manually. You still can set all of them with -lavdopts-o (because
libavcodec has AVOptions for them).
Playing something with "mpv f1.mkv f2.mkv --gapless-audio --volume=20"
caused the volume to be reset when playing a new file. Normally, the
volume should not be reset (unless explicitly requested with per-file
options), and without either --gapless-audio or --volume it works as
expected.
The underlying problem is that volume was saved only when the AO was
uninitialized, and also the volume was always set when starting a file.
Fix this by saving the volume when playback ends, and when the audio
is reinitialized. To make sure the volume is never restored twice or
saved in the wrong situation, introduce INITIALIZED_VOL.
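A sketch of the guarded save; the flag and field names are close to
mpv's, but illustrative:

    if (mpctx->initialized_flags & INITIALIZED_VOL) {
        mpctx->initialized_flags &= ~INITIALIZED_VOL;
        // Save the user volume so it is restored exactly once:
        if (opts->mixer_init_volume >= 0)
            mixer_getbothvolume(&mpctx->mixer, &opts->mixer_init_volume);
    }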
Also note that this volume saving and restoring only happens if the
--volume option is used. mixer.c does its own bookkeeping of volume.
The main reason for this is that the volume option could be reset by
per-file options (see manpage), and mixer.c doesn't know anything
about this stuff. This is probably dumb, and maybe some things could
be simplified. But for now this will work.
When AAC is streamed over HTTP, using libavformat defaults is
pathetically slow. One solution for that is skipping probing and using
the mimetype to identify that it's AAC instead. This is what we did
before this commit (and ffmpeg does it too, but their logic is too
"inaccessible" for mpv).
This is still pretty fragile though. Make it a bit more robust by
requiring minimal probing. A probescore of 25 is reached after feeding
2 KB to libavformat (instead of > 500 KB for the normal probescore), so
use that. This is done only when streaming AAC from HTTP to reduce the
possibility of weird breakages for other formats.
Also reduce analyzeduration. The default analyzeduration will make
libavformat read lots of data, which makes playback start slow. So we
set analyzeduration to a low value. On the other hand, doing that for
other formats is risky, because there are unspecified effects with
certain "strange" formats (like transport streams). So we do this only
if we're streaming AAC from HTTP as well.
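A minimal sketch of the reduced probing with libavformat calls
(threshold handling approximate, variable names illustrative):

    #include <libavformat/avformat.h>

    // buf must be padded with AVPROBE_PADDING_SIZE zero bytes.
    AVProbeData pd = { .filename = "", .buf = buf, .buf_size = 2048 };
    int score = 24;  // accept any format scoring >= 25
    const AVInputFormat *fmt = av_probe_input_format2(&pd, 1, &score);

    // And keep libavformat from reading lots of data up front
    // (value illustrative):
    avfc->max_analyze_duration = 500000;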
tl;dr libavformat is shit for media players
This can control whether demux_lavf should use the HTTP mime type to
determine the format, instead of probing the data with the libavformat
API. Do this to allow easier debugging in case the mimetype is
incorrect. (This is done only for AAC streams right now.)
In commit 0e07189, I made the status line always print a newline,
instead of cutting the output at 80 columns (or if stderr is a terminal,
whatever width the terminal reports). This is better in the case the
output goes into a log file or a pipe.
This caused problems for people who want to pipe raw video to mpv, so
change it again. (Not sure why they won't use FIFOs instead.)
Now output untrimmed lines if the slave mode flag is set, which makes
sense to do, too. The current slave mode is still on life support,
though.
This fixes a bug that caused the application to never leave its frontmost
position.
The idea is stolen from @donmelton who used it in MPlayerShell. Thanks!
This is basically a "do not use" label. We don't remove them yet,
because we still support FFmpeg releases where we cannot use
libavfilter for various reasons. Also, Libav causes pain as usual
due to the lack of ported mplayer filters in its codebase, so not
all filters will be available there.
There's no point duplicating all the text that is already in the man
pages, and synchronizing them is a pain. Place a link to the
github-generated pages instead.
Unfortunately, the anchor '#vo-opengl' does not work. Maybe github's
rst converter just sucks, as the actually generated HTML contains
links using that anchor too, but does not generate the anchor itself.
Too bad.
If the image is not writeable, the image actually has to be copied
beforehand. This was overlooked when converting the video chain to
reference counted images.
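mpv's mp_image API has a helper for exactly this; usage sketch:

    // Copies the image data if the reference is shared, so writing to
    // the planes afterwards is safe:
    mp_image_make_writeable(img);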
Fix a double free issue. This was overlooked when vf.c was changed to
free filter priv data automatically.
Tests with demux_mkv show that the speed doesn't change (or actually,
it seems to be faster after this change). In any case, there is not
the slightest reason why these should be inline. Functions for which
this will (probably) actually matter, like stream_read_char, are
still left inline.
This was tested with demux_mkv's indexing. For broken files without
index, demux_mkv creates an on-the-fly index. If you seek to a later
part of the file, all data has to be read and parsed until the wanted
position is found. This means demux_mkv will do mostly I/O, calling
stream_read_char() and stream_read(). This should be the most I/O
intensive non-deprecated part of mpv that uses the stream interface.
(demux_lavf has its own buffering.)
Use a different algorithm to generate the dithering matrix. This
looks much better than the previous ordered dither matrix with its
cross-hatch artifacts.
The matrix generation algorithm as well as its implementation was
contributed by Wessel Dankers aka Fruit. The code in dither.c is
his implementation, reformatted and with static global variables
removed by me.
The new matrix is uploaded as a float texture - before this commit, it
was a normal integer fixed point matrix. This means dithering will
be disabled on systems without float textures.
The size of the dithering matrix can be configured, as the matrix is
generated at runtime. The generation of the matrix can take rather
long, and is already unacceptable with size 8. The default is at 6,
which takes about 100 ms on a Core2 Duo system with dither.c compiled
at -O2, which I consider just about acceptable.
The old ordered dithering is still available and can be selected with
the dither=ordered sub-option. The ordered dither matrix
generation code was moved to dither.c. This function was originally
written by Uoti Urpala.
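Hedged usage example (dither=ordered is described above; the size
sub-option name is an assumption):

    mpv --vo=opengl:dither=ordered video.mkv         # old matrix
    mpv --vo=opengl:dither-size-fruit=6 video.mkv    # new matrix, size 6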
GetTimer() is generally replaced with mp_time_us(). Both calls return
microseconds, but the latter uses int64_t, is defined to never wrap,
and never returns 0 or negative values.
GetTimerMS() has no direct replacement. Instead the other functions are
used.
For some code, switch to mp_time_sec(), which returns the time as a double
float value in seconds. The returned time is offset to program start
time, so there is enough precision left to deliver microsecond
resolution for at least 100 years. Unless it's cast to a float
(or the CPU reduces precision), which is why we still use mp_time_us()
out of paranoia in places where precision is clearly needed.
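The two calls side by side (signatures as in mpv's osdep/timer.h); as a
check on the claim above, 2^53 microseconds is roughly 285 years:

    int64_t mp_time_us(void);    // microseconds, never wraps, never <= 0
    double  mp_time_sec(void);   // same clock, as double seconds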
Always switch to the correct time. The whole point of the new timer
calls is that they don't wrap, and storing microseconds in unsigned int
variables would negate this.
In some cases, remove wrap-around handling for time values.
This was used by some VOs to do timing of cursor autohiding, but we
recently moved that out of the VOs. Even though this mechanism might
be a good idea and could be needed again in future (but for what?),
it's unused now. So better just get rid of it.
Make OS specific timer code export a mp_raw_time_us() function, and
add generic implementations of GetTimer()/GetTimerMS() using this
function. New mpv code is supposed to call mp_time_us() in situations
where precision is absolutely needed, or mp_time_sec() otherwise.
Make it so that mp_time_us() will return a value near program start.
We don't set it to 0 though to avoid confusion with relative vs.
absolute time. Instead, pick an arbitrary offset.
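A sketch of the layering (the offset value is illustrative):

    uint64_t mp_raw_time_us(void);    // OS-specific, monotonic raw timer

    static uint64_t raw_time_offset;  // recorded once at program start

    int64_t mp_time_us(void)
    {
        // Values start near program start, plus an arbitrary non-zero
        // offset to avoid looking like relative times:
        return mp_raw_time_us() - raw_time_offset + 10000000;
    }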
Move the test program in timer-darwin.c to timer.c, and modify it to
work with the generic timer functions.