ffmpeg's palette format and the internal one used to have different
endianness, but that is not the case anymore. This code was forgotten
when that change was made.
I guess this code was supposed to handle cases like drawing RGBA as ARGB
by offsetting it by 1 byte.
The code didn't make any sense, though. It used to make sense before mpv
switched internal pixel formats from FourCCs to a simple enum. With the
FourCCs, "fmt | 128" selected the big endian version of a format. Of
course this doesn't work this way with the new pixel formats. It just so
happens that there are no formats whose values match
IMGFMT_RGB32|128 or IMGFMT_BGR32|128, so this code was inactive.
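
To illustrate (a rough sketch; the flag name and helper are made up
here, not the historical definitions):

    /* FourCC-era sketch: bit 7 selected the big endian variant of a
     * packed RGB format, so comparing against "fmt | 128" made sense. */
    #define OLD_IMGFMT_BE_FLAG 128

    static int old_is_be_variant(unsigned int img_fmt, unsigned int base_fmt)
    {
        return img_fmt == (base_fmt | OLD_IMGFMT_BE_FLAG);
    }

    /* With the new enum-based formats, "IMGFMT_RGB32 | 128" is just an
     * arbitrary (and here, unused) value, so the check never matched. */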
All involved pixel formats seem to play fine on my setup (though it's
little endian only), and the code strictly matches the mpv pixel formats
against the format of the X image, so I'm not quite sure why this code
was there in the first place.
The original commit that added this was b333ae1 (svn 21602):
Support for different endianness on client and server with -vo x11
Instead of handling colorspaces with VFCTRLs/VOCTRLs, make them part of
the normal video format negotiation. The colorspace is passed down like
other video params with config/reconfig calls.
Forcing colorspaces (via the --colormatrix options and properties) is
handled differently too: if it's changed, completely reinit the video
chain. This is slower and requires a precise seek to the same position
to perform an update, but it's simpler and less bug-prone. Considering
that switching the colorspace at runtime through user interaction is a
rather obscure feature, this is a good change.
The colorspace VFCTRLs and VOCTRLs are still kept. The VOs rely on it,
and would have to be changed to get rid of them. We'll do that later,
and convert them incrementally instead of in one go.
Note that controlling the output range now always works on VO level.
Basically, this means you can't get vf_scale to output full-range YUV
for whatever reason. If that is really wanted, it should be a vf_scale
option. The previous behavior didn't make much sense anyway.
This commit fixes a few bugs (such as playing RGB video and converting
that to YUV with vf_scale - a recent commit broke this and forced the
VO to display YUV as RGB if possible), and might introduce some new
ones.
I'm not sure what's correct: stretching the DVD subtitles from storage
aspect ratio to video display aspect ratio, or displaying subtitles
using 1:1 PAR. Until now, DVD subtitles (as well as all other bitmap
subtitles) were always stretched to the video. There are good arguments
why this would be the correct behavior: DVDs were made for playback on
TVs, which display anamorphic video by adjusting the horizontal refresh
rate, and thus couldn't even display DVD subtitles at square PAR (short
of additionally resampling the subtitles).
However, I haven't seen a sample yet where subtitles do _not_ look
stretched using this method. Rendering them at 1:1 PAR looks better.
Technically, we render them at display PAR (and not 1:1 PAR). Do this
in a way that keeps the subtitle area inside the video frame when
display and video aspect ratios mismatch.
For DVB subtitles, the old method looks more correct, so this is special
cased to DVD subtitles.
I might revert this commit if it turns out to be a disimprovement.
This fixes playback of the sample linked by FFmpeg ticket 2508. The fix
follows ffmpeg commit 6158a3b (although it's not exactly the same).
The problem here is that the file contains an apparently nonsensical
DefaultDuration value. DefaultDuration for audio tracks is used to
derive PTS values for packets that have no timestamps, as can happen
with frames inside a laced block. So the first packet of a SimpleBlock
will have a correct PTS, while the PTS values of the following packets
are calculated using DefaultDuration, and thus are broken.
This leads to seemingly ok playback, but broken A/V sync. Not using the
DefaultDuration value will leave the PTS values of these packets unset,
and the audio decoder can derive them from the output instead.
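
Roughly what the derivation looks like (hypothetical helper, not the
actual demuxer code):

    #include <stddef.h>

    /* Only the block itself carries a timestamp; the remaining laced
     * frames get their PTS extrapolated from DefaultDuration, so a
     * bogus DefaultDuration breaks every PTS but the first. */
    static void derive_laced_pts(double block_pts, double default_duration,
                                 double *pts, size_t nframes)
    {
        for (size_t i = 0; i < nframes; i++)
            pts[i] = block_pts + i * default_duration;
        /* The fix instead leaves pts[1..nframes-1] unset, so the audio
         * decoder can derive them from the decoded output. */
    }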
The fix more or less uses a heuristic to detect the broken case: if the
sample rate is 8 kHz (the Matroska default, so it was probably never
explicitly set), and the codec is AC3 (as in the broken file), don't use
it. I'm not sure why this should be done only for AC3; maybe the muxing
application (mkvmerge v4.9.1) has known issues with AC3. AC3 also
doesn't support 8 kHz as a native sample rate.
(By the way, I'm not sure why we should honor the DefaultDuration at all
for audio. It doesn't seem to be needed. You can't seek to these frames,
and decoders should always be able to produce perfect PTS values by
adding the duration of the decoded audio to the first PTS.)
Matroska has an output sample rate (OutputSamplingFrequency), which in
theory should be forced instead of whatever the decoder outputs. But it
appears no software (other than mplayer2 and mpv until now) actually
respects this. Even worse, there were broken files around, which played
correctly with (in theory) broken software, but not mplayer2/mpv. Hacks
were added to our code to play these files correctly, but they didn't
catch all cases.
Simplify this by doing what everyone else does, and always use the
decoder's sample rate instead. In particular, we try to handle all
sample rate issues like libavformat's Matroska demuxer does.
Calculate the video aspect ratio in vo_config, when we get the window
size. Inside the resize function, calculate the aspect ratio of the
output in order to determine whether we have to change the height or
the width of the video: if the output's ratio is bigger than the
video's, set the width accordingly; if it is smaller, change the
height. But do this only if no resize edges are passed, because edges
indicate that we want to change the window's state rather than perform
a simple resize, and the video should not grow bigger than the
requested size.
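
In pseudo-C, the rule amounts to something like this (simplified,
illustrative names):

    /* Keep the video aspect by adjusting exactly one window dimension.
     * Skip the adjustment when resize edges are set: that is a window
     * state change, and the video must not outgrow the requested size. */
    static void constrain_output_size(double video_aspect, int edges,
                                      int *width, int *height)
    {
        if (edges)
            return;
        double output_aspect = (double)*width / *height;
        if (output_aspect > video_aspect)
            *width = (int)(*height * video_aspect);  /* too wide */
        else
            *height = (int)(*width / video_aspect);  /* too tall */
    }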
This issue hits users way too often. Copy the explanation printed by the
configure script to the README to give it more visibility.
We will fix this properly once we have a new build system.
aspdat.asp is a problem, because it's updated when the VO calls
vo_get_src_dst_rects(). Nothing guarantees that the value has been
updated when the w32 code accesses it.
Instead, use the aspect vo_w32_config() was called with.
From now on, usage of these macros is encouraged over using FFMAX and
FFMIN. FFMAX and FFMIN are perfectly fine, and the added macros are
actually exactly the same as the FFMAX and FFMIN definitions. But they
require including libavutil headers, and certain differences between
Libav and FFmpeg very often introduced breakages if these macros were
somehow not defined because a header was not recursively included.
Defining these macros on our own is the best way to escape this
annoying issue.
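
For reference, such macros are trivially self-contained; a sketch
matching the FFMAX/FFMIN definitions (with the same double-evaluation
caveat as the originals):

    /* Same semantics as FFMAX/FFMIN, but with no libavutil dependency.
     * Like the originals, these may evaluate their arguments twice. */
    #define MPMAX(a, b) ((a) > (b) ? (a) : (b))
    #define MPMIN(a, b) ((a) > (b) ? (b) : (a))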
In my opinion this is unneeded and unclean, which is why I removed it
some time ago. But apparently it is a convenience for BSD users (so
they don't have to use --extra-cflags), so add it back.
Guess the colorspace directly in mpcodecs_reconfig_vo(), instead of in
set_video_colorspace(). The difference is that the latter function just
makes the video filter chain (and VOs) force the detected colorspace,
and then throws it away, while the former is a bit more general and
central. Not really a big difference and it doesn't matter much in
practice, but it guarantees that there is no internal disagreement about
the colorspace.
DVD playback had some trouble with PTS resets: libavformat's genpts
feature would try reading until EOF (worst case) to find a new usable
PTS in case a packet's PTS is not set correctly. Especially with slow
DVD access, this would make the player appear frozen.
Reimplement it partially in demux_lavf.c, and use that code in the DVD
case. This is heavily "inspired" by the code in av_read_frame from
libavformat/utils.c. The difference is that we stop reading if no PTS
has been found after 50 packets (consider this a heuristic). Also, we
don't bother with the PTS wrapping and last-frame-before-EOF handling.
Even with normal PTS wraps, the player frontend will go to hell for the
duration of a frame anyway, and should recover quickly after that.
The terribleness of this commit is mostly that we duplicate libavformat
functionality, and that we suddenly need a packet queue.
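
Schematically, the bounded scan looks something like this (the queue
type and helpers are hypothetical; the real code is in demux_lavf.c):

    #include <libavformat/avformat.h>

    #define MAX_PTS_SCAN_PACKETS 50   /* heuristic cut-off from above */

    struct pkt_queue;   /* hypothetical packet queue */
    AVPacket *queue_read_packet(struct pkt_queue *q, AVFormatContext *avfc);
    void queue_backfill_pts(struct pkt_queue *q, int stream, int64_t pts);

    /* Read ahead into a local queue until a packet of the given stream
     * has a usable PTS, then backfill the queued packets from it.
     * Unlike libavformat's genpts, give up after a bounded number of
     * packets instead of potentially scanning to EOF. */
    static int scan_for_pts(struct pkt_queue *q, AVFormatContext *avfc,
                            int stream)
    {
        for (int n = 0; n < MAX_PTS_SCAN_PACKETS; n++) {
            AVPacket *pkt = queue_read_packet(q, avfc);
            if (!pkt)
                return -1;                           /* EOF or error */
            if (pkt->stream_index == stream && pkt->pts != AV_NOPTS_VALUE) {
                queue_backfill_pts(q, stream, pkt->pts);
                return 0;
            }
        }
        return -1;   /* no PTS found; leave the timestamps unset */
    }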
Since Windows Vista, when running at 144 DPI or higher with composition
switched on, applications that don't declare themselves to be DPI aware
are stretched by the window manager, kind of like low-resolution apps
on OS X.
To avoid this, declare DPI awareness in the manifest. Since mpv is
practically resolution independent this shouldn't cause any trouble. The
'True/PM' value declares per-monitor DPI awareness in Windows 8.1, so
that mpv isn't shrunk when moved from a high-DPI screen to one with a
lower DPI.
Also, avoid compatibility shims by declaring compatibility with all
Windows versions from Vista to 8.1 and add the missing uiAccess
attribute to the requestedExecutionLevel element.
Move the reading loop from read_all_fd_events to read_events. If
got_new_events is set when calling read_events, don't actually wait;
set the timeout to 0 instead.
(Note that not waiting is sort of transparent to the caller: the caller
is just supposed to execute the event loop again, and then it will
actually wait. mplayer.c handles this correctly.)
This might reduce latency with some input sources.
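
In outline (the struct and function names here are illustrative):

    /* Illustrative only: minimal context with the pending-events flag. */
    struct input_ctx { int got_new_events; };

    static void wait_and_read_fds(struct input_ctx *ictx, int time_ms);

    /* If events are already queued, don't block: poll with timeout 0.
     * The caller just runs its event loop again and actually waits on
     * the next call, so not waiting is transparent to it. */
    static void read_events(struct input_ctx *ictx, int time_ms)
    {
        if (ictx->got_new_events)
            time_ms = 0;
        wait_and_read_fds(ictx, time_ms);   /* the moved reading loop */
    }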
Removes some code duplication. Also restructure the input waiting code
a bit: split the select() loop into an input_wait_read() function. On
systems which do not have POSIX select(), this function has an alternate
implementation, which waits unconditionally.
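
A sketch of the select() variant under these assumptions (the FD
bookkeeping is simplified):

    #include <sys/select.h>
    #include <sys/time.h>

    /* Wait up to time_ms for any of the given FDs to become readable;
     * the caller then reads from whichever FDs are ready. On systems
     * without POSIX select(), the alternate implementation just blocks
     * on a read unconditionally. */
    static void input_wait_read(const int *fds, int num_fds, int time_ms)
    {
        fd_set set;
        FD_ZERO(&set);
        int max_fd = -1;
        for (int i = 0; i < num_fds; i++) {
            FD_SET(fds[i], &set);
            if (fds[i] > max_fd)
                max_fd = fds[i];
        }
        struct timeval tv = {
            .tv_sec  = time_ms / 1000,
            .tv_usec = (time_ms % 1000) * 1000,
        };
        select(max_fd + 1, &set, NULL, NULL, &tv);
    }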
This was useless for anything but the raw demuxers. In most cases, this
would most likely lead to display of bogus duration values, because the
bitrates used are per-track, not the total file bitrate. There was
actually no case left where this code was helpful.
Note that demux_lavf has its own code for this using the total file
bitrate. Also, mplayer.c can calculate the playback percentage from
current file position / current file size. This is not removed.