This commit removes the pointers to the separate input and window
structures and embeds them as anonymous structures inside the wayland_state
structure.
This has the disadvantage that the substructures have to be passed to the
listeners, but the advantage is that we don't have to allocate them and
check for NULL pointers.
This makes the code more reliable and easier to follow.
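A minimal sketch of the resulting layout (member names are illustrative,
not the actual vo_wayland_state fields):

    struct wayland_state_sketch {
        struct {                    /* previously a separately allocated struct */
            int width, height;
            int resize_pending;
        } window;
        struct {                    /* previously a separately allocated struct */
            int have_pointer;
            int have_keyboard;
        } input;
    };

Because the substructures are embedded, they live exactly as long as the
wayland_state itself, so there is nothing to allocate or NULL-check.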
vo_wayland_fullscreen handles resizing of the video itself, because the
video could still be in fullscreen mode, and resizing it in gl_wayland
could make it grow or shrink.
There was a MPOpts fullscreen field, a mp_vo_opts.fs field, and
VOFLAG_FULLSCREEN. Remove all these and introduce a
mp_vo_opts.fullscreen flag instead.
When VOs receive VOCTRL_FULLSCREEN, they are supposed to set the
current fullscreen mode to the state in mp_vo_opts.fullscreen. They
also should do this implicitly on config().
VOs which are capable of doing so can update the mp_vo_opts.fullscreen
if the actual fullscreen mode changes (e.g. if the user uses the
window manager controls). If fullscreen mode switching fails, they
can also set mp_vo_opts.fullscreen to the actual state.
Note that the X11 backend does almost none of this, and it has a
private fs flag to store the fullscreen state, instead of getting it
from the WM. (Possibly because it has to deal with broken WMs.)
The fullscreen option has to be checked on config() to deal with
the -fs option, especially with something like:
mpv --fs file1.mkv --{ --no-fs file2.mkv --}
(It should start in fullscreen mode, but go to windowed mode when
playing file2.mkv.)
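A hedged sketch of the pattern the VOs are expected to follow (the types
and the backend call below are placeholders, not mpv's real API):

    #include <stdbool.h>

    struct vo_opts_sketch { bool fullscreen; };
    struct vo_sketch { struct vo_opts_sketch *opts; };

    /* Placeholder for the window-system specific switch; returns the
     * fullscreen state that was actually reached. */
    static bool backend_set_fullscreen(struct vo_sketch *vo, bool fs)
    {
        (void)vo;
        return fs;
    }

    /* Called from config() and from the VOCTRL_FULLSCREEN handler. */
    static void apply_fullscreen(struct vo_sketch *vo)
    {
        bool want = vo->opts->fullscreen;
        vo->opts->fullscreen = backend_set_fullscreen(vo, want);
    }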
Wayland changes by: Alexander Preisinger <alexander.preisinger@gmail.com>
Cocoa changes by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
This was bad, because it was the only aspdat member updated by
vo_get_src_dst_rects() instead of vo_reconfig(). Now it isn't
accessed anymore, so remove it.
Until now, only formats directly supported by OpenGL were supported.
This excludes various permutations of 8-bit RGB[A|0]. But we can simply
permute the color channels in the shader, so do that. This also adds
support for all these weird RGB0 formats.
Note that we could use libavutil's pixfmt list instead of the
mp_packed_formats array, but trying to decipher the pixfmt info would
probably end in pain, so this array with duplicated information is
actually better and shorter.
Note: I didn't actually test whether the alpha components are reproduced
correctly with alpha formats.
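The rough idea, sketched as a table of channel permutations (the entries
are illustrative, not the real mp_packed_formats contents):

    struct packed_fmt_sketch {
        const char *name;   /* stands in for the IMGFMT_* value */
        int components;     /* 3 = RGB with a padding byte, 4 = RGBA */
        char swizzle[5];    /* channel order the shader samples with */
    };

    static const struct packed_fmt_sketch packed_formats_sketch[] = {
        {"rgba", 4, "rgba"},
        {"bgra", 4, "bgra"},
        {"argb", 4, "gbar"},    /* bytes A,R,G,B -> sample .gbar for RGBA */
        {"abgr", 4, "abgr"},
        {"rgb0", 3, "rgba"},    /* padding byte is ignored */
        {"bgr0", 3, "bgra"},
        {0}
    };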
Image parameters like colorspace, color levels, and chroma location are
generally less important, and many filters don't set them correctly.
Force them instead in the generic VF code, which is probably better and
more convenient overall. So we consider this a proper solution, rather
than a dirty hack.
This splits the monolithic mp_image_swscale() function into a bunch of
functions and a context struct. This means it's possible to set
arbitrary parameters (even obscure ones) without getting in the way,
and you don't have to create the context on every call.
This code is preparation for removing duplicated libswscale API usage
from other parts of the code.
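For illustration, the pattern expressed directly with libswscale (mpv's
wrapper sits on top of this; the names below are just a sketch):

    #include <stdint.h>
    #include <libswscale/swscale.h>

    struct scaler_sketch {
        struct SwsContext *sws;
        int flags;                      /* e.g. SWS_LANCZOS, set once */
    };

    /* Create the context once, with all parameters decided up front. */
    static int scaler_init(struct scaler_sketch *s,
                           int sw, int sh, enum AVPixelFormat sfmt,
                           int dw, int dh, enum AVPixelFormat dfmt)
    {
        s->sws = sws_getContext(sw, sh, sfmt, dw, dh, dfmt,
                                s->flags, NULL, NULL, NULL);
        return s->sws ? 0 : -1;
    }

    /* Reuse the context for every frame instead of recreating it per call. */
    static void scaler_run(struct scaler_sketch *s,
                           const uint8_t *const src[], const int src_stride[],
                           int src_h, uint8_t *const dst[],
                           const int dst_stride[])
    {
        sws_scale(s->sws, src, src_stride, 0, src_h, dst, dst_stride);
    }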
libswscale doesn't seem to require this (anymore?), and libavfilter's
vf_scale doesn't do it either. Moreover, this wasn't done for most other
subsampled formats, not even very old ones. So just remove it.
(It'd be quite easy to align on chroma boundaries with all pixel
formats, though.)
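For reference, "aligning on chroma boundaries" just means rounding a luma
dimension up to the chroma subsampling factor, e.g.:

    /* Round size up so it is divisible by the chroma subsampling factor
     * (chroma_shift is 1 per subsampled direction for 4:2:0). */
    static int align_chroma_sketch(int size, int chroma_shift)
    {
        int align = 1 << chroma_shift;
        return (size + align - 1) & ~(align - 1);
    }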
Until now, vf_scale only tried formats listed in the outfmt_list array.
Extend this and try every pixel format supported by mpv if trying
outfmt_list doesn't lead to success.
Also add some checks whether swscale really supports a given input or
output format. This was implicitly done with outfmt_list before.
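The support checks can be expressed with libswscale's own query functions;
roughly like this (a sketch, not the actual vf_scale loop, which iterates
mpv's internal IMGFMT list):

    #include <libswscale/swscale.h>
    #include <libavutil/pixdesc.h>

    /* Return some pixel format swscale can output, provided it can read
     * the input format at all. A real implementation would rank the
     * candidates instead of taking the first match. */
    static enum AVPixelFormat find_usable_outfmt(enum AVPixelFormat srcfmt)
    {
        if (!sws_isSupportedInput(srcfmt))
            return AV_PIX_FMT_NONE;
        const AVPixFmtDescriptor *desc = NULL;
        while ((desc = av_pix_fmt_desc_next(desc))) {
            enum AVPixelFormat fmt = av_pix_fmt_desc_get_id(desc);
            if (sws_isSupportedOutput(fmt))
                return fmt;
        }
        return AV_PIX_FMT_NONE;
    }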
ffmpeg's and the internal palette format used to have different
endianness, but that is not the case anymore. This code was forgotten
when that change was made.
I guess this code was supposed to handle cases like drawing RGBA as ARGB
by offsetting it by 1 byte.
The code didn't make any sense, though. It used to make sense before mpv
switched internal pixel formats from FourCCs to a simple enum. With the
FourCCs, "fmt | 128" selected the big endian version of a format. Of
course this doesn't work this way with the new pixel formats. It just so
happens that there are no formats whose values match
IMGFMT_RGB32|128 or IMGFMT_BGR32|128, so this code was inactive.
All involved pixel formats seem to play fine on my setup (though it's
little endian only), and the code strictly matches the mpv pixel formats
against the format of the X image, so I'm not quite sure why this code
was there in the first place.
The original commit that added this was b333ae1 (svn 21602):
Support for different endianness on client and server with -vo x11
Instead of handling colorspaces with VFCTRLs/VOCTRLs, make them part of
the normal video format negotiation. The colorspace is passed down like
other video params with config/reconfig calls.
Forcing colorspaces (via the --colormatrix options and properties) is
handled differently too: if it's changed, completely reinit the video
chain. This is slower and requires a precise seek to the same position
to perform an update, but it's simpler and less bug-prone. Considering
that switching the colorspace at runtime through user interaction is a
rather obscure feature, this is a good change.
The colorspace VFCTRLs and VOCTRLs are still kept. The VOs rely on them,
and would have to be changed to get rid of them. We'll do that later,
and convert them incrementally instead of in one go.
Note that controlling the output range now always works on VO level.
Basically, this means you can't get vf_scale to output full-range YUV
for whatever reason. If that is really wanted, it should be a vf_scale
option. The previous behavior didn't make too much sense anyway.
This commit fixes a few bugs (such as playing RGB video and converting
that to YUV with vf_scale - a recent commit broke this and forced the
VO to display YUV as RGB if possible), and might introduce some new
ones.
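In very rough terms, the negotiation now carries the colorspace alongside
the other parameters (illustrative struct, not mpv's real one):

    /* The point is that colorspace/levels travel together with size and
     * pixel format through config/reconfig, instead of being pushed
     * separately via VFCTRL/VOCTRL. */
    struct video_params_sketch {
        int w, h;
        int imgfmt;
        int colorspace;     /* e.g. BT.601 vs. BT.709 */
        int colorlevels;    /* limited vs. full range */
    };

    static void reconfig_sketch(const struct video_params_sketch *in,
                                struct video_params_sketch *out)
    {
        *out = *in;         /* a filter may override individual members */
    }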
I'm not sure what's correct: stretching the DVD subtitles from storage
aspect ratio to video display aspect ratio, or displaying subtitles
using 1:1 PAR. Until now, DVD subtitles (as well as all other bitmap
subtitles) were always stretched to the video. There are good arguments
why this would be the correct behavior: DVDs were made for playback on
TVs, which display anamorphic video by adjusting the horizontal refresh
rate, and thus wouldn't even be capable of DVD subtitles with square PAR
(other than resampling the subtitles additionally).
However, I haven't seen a sample yet where subtitles do _not_ look
stretched using this method. Rendering them at 1:1 PAR looks better.
Technically, we render them at display PAR (and not 1:1 PAR). Do this in
a way so that the subtitle area is always inside of the video frame if
display and video aspect ratios mismatch.
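One way to keep the subtitle area inside the video frame is to scale the
subtitle canvas uniformly by the smaller of the two fitting ratios; a
sketch (not the actual renderer code):

    static void fit_sub_area_sketch(int sub_w, int sub_h,
                                    int vid_w, int vid_h, double *out_scale)
    {
        double s = (double)vid_w / sub_w;
        if (s * sub_h > vid_h)
            s = (double)vid_h / sub_h;
        *out_scale = s;     /* uniform scaling preserves display PAR */
    }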
For DVB subtitles, the old method looks more correct, so this is special
cased to DVD subtitles.
I might revert this commit if it turns out that it's a disimprovement.
This fixes playback of the sample linked by FFmpeg ticket 2508. The fix
follows ffmpeg commit 6158a3b (although it's not exactly the same).
The problem here is that the file contains an apparently nonsensical
DefaultDuration value. DefaultDuration for audio tracks is used to
derive PTS values for packets with no timestamps, as can happen with
frames inside a laced block. So the first packet of a SimpleBlock
will have a correct PTS, while the PTS values of the following packets
are calculated using DefaultDuration, and thus are broken.
This leads to seemingly ok playback, but broken A/V sync. Not using the
DefaultDuration value will leave the PTS values of these packets unset,
and the audio decoder can derive them from the output instead.
The fix more or less uses a heuristic to detect the broken case: if the
sample rate is 8 kHz (the Matroska default, so it can be assumed to be
unset), and the codec is AC3 (as in the broken file), don't use it. I'm
not sure why this should be done only for AC3; maybe the muxing
application (mkvmerge v4.9.1) has known issues with AC3. AC3 also
doesn't natively support 8 kHz as a sample rate.
(By the way, I'm not sure why we should honor the DefaultDuration at all
for audio. It doesn't seem to be needed. You can't seek to these frames,
and decoders should always be able to produce perfect PTS values by
adding the duration of the decoded audio to the first PTS.)
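The heuristic itself boils down to something like this (illustrative
struct, not demux_mkv's real one):

    #include <stdbool.h>
    #include <stdint.h>

    struct mkv_audio_track_sketch {
        uint64_t default_duration;      /* 0 = unset */
        int sample_rate;
        bool is_ac3;
    };

    /* Drop a DefaultDuration that is almost certainly bogus: 8 kHz is the
     * Matroska default sample rate (i.e. probably never set explicitly),
     * and the known-broken file was AC3. */
    static void maybe_drop_default_duration(struct mkv_audio_track_sketch *t)
    {
        if (t->default_duration && t->sample_rate == 8000 && t->is_ac3)
            t->default_duration = 0;    /* decoder derives the PTS instead */
    }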
Matroska has an output sample rate (OutputSamplingFrequency), which in
theory should be forced instead of whatever the decoder outputs. But it
appears no software (other than mplayer2 and mpv until now) actually
respects this. Even worse, there were broken files around, which played
correctly with (in theory) broken software, but not mplayer2/mpv. Hacks
were added to our code to play these files correctly, but they didn't
catch all cases.
Simplify this by doing what everyone else does, and always use the
decoder's sample rate instead. In particular, we try to handle all
sample rate issues like libavformat's Matroska demuxer does.
Calculate the aspect ratio in vo_config, when we get the window size.
Inside the resize function we then calculate the aspect ratio of the
output in order to determine whether we have to change the height or the
width of the video.
If the aspect ratio of the output is bigger than that of the video, we
set the width accordingly; if it is smaller, we change the height.
But only if no resize edges are passed, because that indicates that we
want to change the window state instead of doing a simple resize, and
the video should not grow bigger than the requested size.
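The resize logic described above, as a standalone sketch (not the actual
gl_wayland code):

    /* Decide which dimension follows from the other by comparing the
     * aspect ratio of the output area with the video's aspect ratio. */
    static void fit_to_aspect_sketch(int out_w, int out_h, double video_aspect,
                                     int *new_w, int *new_h)
    {
        double out_aspect = (double)out_w / out_h;
        if (out_aspect > video_aspect) {
            /* output is wider than the video: width follows from height */
            *new_h = out_h;
            *new_w = (int)(out_h * video_aspect + 0.5);
        } else {
            /* output is narrower: height follows from width */
            *new_w = out_w;
            *new_h = (int)(out_w / video_aspect + 0.5);
        }
    }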
This issue hits users way too often. Copy the explanation printed by the
configure script to the README to give it more visibility.
We will fix this properly once we have a new build system.
aspdat.asp is a problem, because it's updated when the VO calls
vo_get_src_dst_rects(). Nothing guarantees that the value has been
updated when the w32 code accesses it.
Instead, use the aspect vo_w32_config() was called with.
From now on, usage of these macros is encouraged over using FFMAX and
FFMIN. FFMAX and FFMIN are perfectly fine, and the added macros are
actually exactly the same as the FFMAX and FFMIN definitions. But they
require including libavutil headers, and certain differences between
Libav and FFmpeg very often introduced breakages if these macros were
somehow not defined because a header was not recursively included.
Defining these macros on our own is the best way to escape from this
annoying issue.
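The added macros (MPMAX and MPMIN) have the usual shape, i.e. the same as
the FFMAX/FFMIN definitions, including the double evaluation of the
arguments:

    #define MPMAX(a, b) ((a) > (b) ? (a) : (b))
    #define MPMIN(a, b) ((a) > (b) ? (b) : (a))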
In my opinion this is unneeded and unclean, which is why I removed it
some time ago. But apparently this is a convenience for BSD
users (so they don't have to use --extra-cflags), so add it back.
Guess the colorspace directly in mpcodecs_reconfig_vo(), instead of in
set_video_colorspace(). The difference is that the latter function just
makes the video filter chain (and VOs) force the detected colorspace,
and then throws it away, while the former is a bit more general and
central. Not really a big difference and it doesn't matter much in
practice, but it guarantees that there is no internal disagreement about
the colorspace.
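The guess is based on the video resolution; a minimal sketch (not
necessarily the exact thresholds used in the code):

    enum csp_sketch { CSP_SKETCH_BT_601, CSP_SKETCH_BT_709 };

    /* SD-sized video defaults to BT.601, HD-sized video to BT.709. */
    static enum csp_sketch guess_colorspace_sketch(int w, int h)
    {
        return (w >= 1280 || h > 576) ? CSP_SKETCH_BT_709 : CSP_SKETCH_BT_601;
    }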
DVD playback had some trouble with PTS resets: libavformat's genpts
feature would try reading until EOF (worst case) to find a new usable
PTS in case a packet's PTS is not set correctly. Especially with slow
DVD access, this would make the player appear frozen.
Reimplement it partially in demux_lavf.c, and use that code in the DVD
case. This is heavily "inspired" by the code in av_read_frame from
libavformat/utils.c. The difference is that we stop reading if no PTS
has been found after 50 packets (consider this a heuristic). Also, we
don't bother with the PTS wrapping and last-frame-before-EOF handling.
Even with normal PTS wraps, the player frontend will go to hell for the
duration of a frame anyway, and should recover quickly after that.
The terribleness of this commit is mostly that we duplicate libavformat
functionality, and that we suddenly need a packet queue.
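A hedged sketch of the bounded read-ahead (the queue type and names are
illustrative; demux_lavf's real code differs):

    #include <libavformat/avformat.h>

    #define PTS_PROBE_PACKETS 50

    struct pkt_queue_sketch {
        AVPacket *pkts[PTS_PROBE_PACKETS];
        int num;
    };

    /* Read ahead a limited number of packets; keep them queued so nothing
     * is lost, and stop as soon as one carries a usable PTS, instead of
     * letting libavformat's genpts scan all the way to EOF. */
    static int64_t probe_next_pts(AVFormatContext *avfc,
                                  struct pkt_queue_sketch *q)
    {
        while (q->num < PTS_PROBE_PACKETS) {
            AVPacket *pkt = av_packet_alloc();
            if (!pkt || av_read_frame(avfc, pkt) < 0) {
                av_packet_free(&pkt);
                break;
            }
            q->pkts[q->num++] = pkt;
            if (pkt->pts != AV_NOPTS_VALUE)
                return pkt->pts;
        }
        return AV_NOPTS_VALUE;          /* give up after the probe window */
    }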