Since m_option.h and options.h are included almost everywhere, a lot of
files have to be changed.
Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
The --flip option flipped the image upside-down, by trying to use VO
support, or if not available, by inserting a video filter. I'm not sure
why it existed. Maybe it was important in ancient times when VfW based
decoders output an image this way (but even then, flipping an image is a
free operation by negating the stride).
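For reference, the stride trick looks like this, assuming an mp_image-like
layout with planes[] and stride[] fields:

    // flip vertically without copying pixels: point the plane at its
    // last row and walk backwards
    mpi->planes[0] += (mpi->h - 1) * (ptrdiff_t)mpi->stride[0];
    mpi->stride[0] = -mpi->stride[0];
    // (repeat for each plane, using that plane's own height)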
One nice thing about this is that it provided a possible path for
implementing video orientation, which is a feature we should probably
support eventually. The important part is that it would come for free for
VOs that support it, and would work even with hardware decoding.
But for now get rid of it. It's useless, trivial, stands in the way, and
supporting video orientation would require solving other problems first.
The old ffmpeg vdpau support code uses separate vdpau pixel formats for
each decoder (pretty much because mplayer's architecture sucked), which
just gets into the way. Force the old decoder's output to IMGFMT_VDPAU,
and remove IMGFMT_IS_VDPAU() where we can remove it.
This should completely remove the differences between the old and new
vdpau decoder outside of the decoder.
This means most code accessing this struct must now include hwdec.h
instead of dec_video.h. I just put it into dec_video.h at first because
I thought a separate file would be a waste, but it's more proper to do
it this way, as there are too many files which include dec_video.h only
to get the mp_hwdec_info definition.
The configure script followed 5 different conventions for defines, because
the next guy always wanted to introduce a new, better way to unify them[1].
For a hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
All is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef` because it can help spot typos
and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
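The typo-spotting works because the convention makes every feature check an
#if on a value that is always defined; a sketch:

    #if HAVE_HURR     // always 0 or 1 under the new convention
    ...
    #endif
    #if HAVE_HURRR    // typo: plain cpp silently treats the undefined name
    ...               // as 0, but gcc -Wundef turns it into a warning
    #endif

With the #undef conventions, -Wundef could not be used at all, because
legitimate checks would warn, too.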
This one really did bite me hard (see previous commit), so enable it by
default.
Fix some cases of shadowing throughout the codebase. None of them
changes behavior; all were correct code that merely tripped up the
warning.
vo_vdpau is the only VO which implements VOCTRL_RESET. Redrawing the
last output frame is hard, because the output could consist of several
source video frames with certain types of post-processing
(deinterlacing). Implement redrawing as special case by keeping the
previous video frames aside until at least one new frame is decoded.
This improves the previous commit, but is separate, because it's rather
complicated.
Before, a VO could easily refuse to respond to VOCTRL_REDRAW_FRAME,
which means the VO wouldn't redraw OSD and window contents, and the
player would appear frozen to the user. This was a bit stupid, and made
dealing with some corner cases much harder (think of --keep-open, which
was hard to implement, because the VO gets into this state if there are
no new video frames after a seek reset).
Change this, and require VOs to always react to VOCTRL_REDRAW_FRAME.
There are two aspects of this: First, behavior after a (successful)
vo_reconfig() call, but before any video frame has been displayed.
Second, behavior after a vo_seek_reset().
For the first issue, we define that sending VOCTRL_REDRAW_FRAME after
vo_reconfig() should clear the window with black. This requires minor
changes to some VOs. In particular vaapi makes this horribly
complicated, because OSD rendering is bound to a video surface. We
create a black dummy surface for this purpose.
The second issue is much simpler and works already with most VOs: they
simply redraw whatever has been uploaded previously. The exception is
vdpau, which has a complicated mechanism to track and filter video
frames. The state associated with this mechanism is completely cleared
with vo_seek_reset(), so implementing this to work as expected is not
trivial. For now, we just clear the window with black.
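The resulting VO behavior can be summarized with this sketch (only
VOCTRL_REDRAW_FRAME, vo_reconfig() and the seek-reset case are from this
commit; the helper names are hypothetical):

    case VOCTRL_REDRAW_FRAME:
        if (p->have_image)
            render_last_image(vo); // redraw what was uploaded before
        else
            clear_to_black(vo);    // right after vo_reconfig(), or after
                                   // a seek reset in the vdpau case
        draw_osd(vo);
        return VO_TRUE;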
Apparently this was dropped some years ago, but judging from MPlayer's
handling of this, the original code wasn't so great anyway. The new
code handles clearing of panscan borders correctly, and integrates
better with the YUV path. (Although the VDPAU API sure makes this
annoying with its separate surface types for RGB.)
Note that we create 5 surfaces for some reason - I don't think this
makes too much sense (because we can't use the deinterlacer with RGB
surfaces), but at least it reduces the amount of differences with
the YUV code path.
Clearing the borders is done by drawing a single black pixel over the
window. This sounds pretty dumb, but it appears to work well, and
there is no other API for that. (One could try to use the video mixer
for this purpose, since it has all kinds of features, including
compositing multiple RGBA surfaces and clearing the window background.
But it would require an invisible dummy video surface to make the
video mixer happy, and that's getting too messy.)
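Sketched against the VDPAU API, the approach is roughly this (handle names
are placeholders, error checks omitted):

    // setup: a 1x1 output surface holding one opaque black pixel
    uint32_t black = 0xFF000000;   // bytes B,G,R,A = 0,0,0,255 (little endian)
    const void *src[] = {&black};
    uint32_t pitch[] = {4};
    vdp->output_surface_create(dev, VDP_RGBA_FORMAT_B8G8R8A8, 1, 1, &black_pixel);
    vdp->output_surface_put_bits_native(black_pixel, src, pitch, NULL);

    // per frame: stretch the pixel over the window, then draw the video on top
    VdpRect win = {0, 0, win_w, win_h};
    vdp->output_surface_render_output_surface(output, &win, black_pixel, NULL,
                                              NULL, NULL,
                                              VDP_OUTPUT_SURFACE_RENDER_ROTATE_0);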
When panscan was used, i.e. the video is cropped to make the video fill
the screen if video and screen aspects don't match, screenshots
contained only the visible region of the source video, stretched to
original video size.
The VDPAU default colorkey, although it seems to be driver specific, is
usually green. This is a pretty annoying color, and you usually see it
briefly (as flashes) if the VDPAU window resizes.
Change it to some shade of black. The new default color is close to what
MPlayer picks as colorkey (and apparently it worked well for them):
VdpColor vdp_bg = {0.01, 0.02, 0.03, 0};
Since our OPT_COLOR can set 8 bit colors only, we use '#020507' instead,
which should be the same assuming 8 bit colors.
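(For reference: multiplying each component by 255 and truncating gives
0.01 -> 2 (0x02), 0.02 -> 5 (0x05), 0.03 -> 7 (0x07), hence '#020507'.)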
Obviously, you can't use black, because black is a way too common color,
and would make it too easy to observe the colorkey effect when e.g.
moving a terminal with black background over the video window.
Formally, this sets the "background color" of the presentation queue.
But in practice, this color is also used as colorkey.
This commit doesn't change the VDPAU default yet.
Instead of generating vdpau_template.c with a Perl script, just include
the generated file in git. This is ok because it changes very rarely,
and the script is larger than the output it generates.
It also simplifies the Makefile, and fixes the build. The problem was that
transitive dependencies do not work with generated files: there is no
dependency information yet when building it the first time. I overlooked
this because I didn't delete the .d files for testing (which contained
the correct dependencies, but only _after_ a first successful build).
Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
file is named so because it's written against the "old"
libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").
Add support for the "new" libavcodec vdpau support. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without deprecation grace period,
so we have to support it now. Moreover, there will probably be no Libav
release which supports both, so the transition is even less smooth than
we could hope, and we have to support both the old and new API.)
Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.
Some details might be handled differently. Especially display preemption
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be driver
specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)
Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.
Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size change anymore, we allow allocating surfaces
of arbitrary size, instead of locking them to the size the VO was
configured with.
The non-hwdec code also has slightly different allocation behavior now.
Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
In general, this warning can hint to actual bugs. We don't enable it
yet, because it would conflict with some unmerged code, and we should
check with clang too (this commit was done by testing with gcc).
There was an MPOpts fullscreen field, an mp_vo_opts.fs field, and
VOFLAG_FULLSCREEN. Remove all these and introduce a
mp_vo_opts.fullscreen flag instead.
When VOs receive VOCTRL_FULLSCREEN, they are supposed to set the
current fullscreen mode to the state in mp_vo_opts.fullscreen. They
also should do this implicitly on config().
VOs which are capable of doing so can update the mp_vo_opts.fullscreen
if the actual fullscreen mode changes (e.g. if the user uses the
window manager controls). If fullscreen mode switching fails, they
can also set mp_vo_opts.fullscreen to the actual state.
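In code, the contract looks roughly like this inside a VO's control()
(a sketch; the helper functions are hypothetical):

    case VOCTRL_FULLSCREEN:
        set_fullscreen(vo, vo->opts->fullscreen);      // apply requested state
        if (switch_failed(vo))
            vo->opts->fullscreen = get_fullscreen(vo); // report actual state
        return VO_TRUE;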
Note that the X11 backend does almost none of this, and it has a
private fs flag to store the fullscreen flag, instead of getting it
from the WM. (Possibly because it has to deal with broken WMs.)
The fullscreen option has to be checked on config() to deal with
the -fs option, especially with something like:
mpv --fs file1.mkv --{ --no-fs file2.mkv --}
(It should start in fullscreen mode, but go to windowed mode when
playing file2.mkv.)
Wayland changes by: Alexander Preisinger <alexander.preisinger@gmail.com>
Cocoa changes by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
GetTimer() is generally replaced with mp_time_us(). Both calls return
microseconds, but the latter uses int64_t, is defined to never wrap,
and never returns 0 or negative values.
GetTimerMS() has no direct replacement. Instead the other functions are
used.
For some code, switch to mp_time_sec(), which returns the time as double
float value in seconds. The returned time is offset to program start
time, so there is enough precision left to deliver microsecond
resolution for at least 100 years. Unless it's cast to a float
(or the CPU reduces precision), which is why we still use mp_time_us()
out of paranoia in places where precision is clearly needed.
Always switch to the correct time. The whole point of the new timer
calls is that they don't wrap, and storing microseconds in unsigned int
variables would negate this.
In some cases, remove wrap-around handling for time values.
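Typical usage after the change looks like this (a small sketch; do_work()
is a placeholder):

    int64_t start = mp_time_us();       // int64_t microseconds, no wrapping
    do_work();
    int64_t elapsed_us = mp_time_us() - start;

    double now = mp_time_sec();         // double seconds since program start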
Separate the video output options from the big MPOpts structure and also only
pass the new mp_vo_opts structure to the vo backend.
Move video_driver_list into mp_vo_opts
Removes almost every global variable in vo.h and puts them into a
special struct in MPOpts for video output related options.
Also, remove the pts and refresh rate options/globals completely,
because they were unused.
OPT_BASE_STRUCT defines which struct the OPT_ macros (like OPT_INT etc.)
reference implicitly, since these macros take struct member names but no
struct type. Normally, only cfg-mplayer.h should need this, and other
places shouldn't be bothered with having to #undef it.
(Some files, like demux_lavf.c, still store their options in MPOpts. In
the long term, this should be removed, and handled like e.g. with VO
suboptions instead.)
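For example, a file declaring private options would do something like this
(a sketch; the struct, field, and option names are made up, and the OPT_INT
macro is assumed to take (name, field, flags)):

    #define OPT_BASE_STRUCT struct priv
    static const struct m_option options[] = {
        OPT_INT("buffer-size", buffer_size, 0), // refers to priv.buffer_size
        {0}
    };
    #undef OPT_BASE_STRUCT

Files that define it locally #undef it afterwards, so it doesn't leak into
unrelated code.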
VFCAP_OSD was used to determine at runtime whether the VO supports OSD
rendering. This was mostly unused. vo_direct3d had an option to disable
OSD (it was supposed to allow forcing auto-insertion of vf_ass, but we
removed that anyway). vo_opengl_old could disable OSD rendering when a
very old OpenGL version was detected, and had an option to explicitly
disable it as well.
Remove VFCAP_OSD from everything (and some associated logic). Now the
vo_driver.draw_osd callback can be set to NULL to indicate missing OSD
support (important so that vo_null etc. don't single-step on OSD
redraw), and if OSD support depends on runtime support, the VO's
draw_osd should just do nothing if OSD is not available.
Also, do not access vo->want_redraw directly. Change the want_redraw
reset logic for this purpose, too. (Probably unneeded, vo_flip_page
resets it already.)
create_window is really bad naming, because this function can be called
multiple times, while the name implies that it always creates a new
window. At least the name config_window is not actively misleading.
OPT_MAKE_FLAGS() used to emit two options (one with "no" prefixed),
but that has been long removed by special casing flag options in the
option parser. OPT_FLAG_ON() used to imply that there's no "no-"
prefixed option, but this hasn't been the case for a while either.
(Conceptually, it has been replaced by OPT_FLAG_STORE().)
Remove OPT_FLAG_OFF, which was unused.
Normally, all flag options can be negated by prepending a "no-", for
example "--no-opt" becomes "--opt=no". Some flag options can't actually
be negated, so add a CONF_TYPE_STORE option type to disallow the "no-"
fallback.
Do the same for choice options. Remove the explicit "no-" prefixed
options, add "no" as choice.
Move the handling of automatic "no-" options from parser-mpcmd.c to
m_config.c, and use it in m_config_set_option/m_config_parse_option.
This makes these options available in the config file. It also
simplifies sub-option parsing, because it doesn't need to handle "no-"
anymore.
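As a concrete illustration with a hypothetical flag --opt: the parser
rewrites

    --no-opt

into

    --opt=no

and since the rewriting now lives in m_config.c, the same negation works
for config file entries and suboptions, too. For choice options, "no"
simply becomes one more choice.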
Don't force VOs to pick an arbitrary default Visual and Colormap. They
still can override them if needed. This simplifies the X11 VO interface.
Always create a Colormap for simplicity. Using CopyFromParent fails if
the selected visual is not the same of that of the parent window, which
happens for me with vo_opengl.
vo_vdpau and vo_xv explicitly set CWBorderPixel, do that in x11_common
instead (it was already done for native windows, but not for slave mode
windows).
What gl_common did was incorrect in theory: freeing a colormap while a
window uses it will change the colormap of the window to "None", and
the color mapping for such windows is "undefined".
Some parts for initiating mode switches were duplicated in every VO
supporting X11 (except vo_opengl/gl_common, which didn't support mode
switching). Move this to x11_common.c.
Note that this might be slightly risky: is it really guaranteed that no
VO needed to do "special" setup that depends on X parameters changing
after a mode switch, such as bit depth, visuals etc.? From what I can
see, this shouldn't be the case (X probably can't even change depth on
the fly). Even if this should be a one-way road, VM switching is
generally useless and its implementation buggy, so it can just be
removed should unfixable problems arise.
Move things that are used by vo_xv only into vo_xv, same for vo_x11.
Rename some functions exported by x11_common, like vo_init to
vo_x11_common. Make functions not used outside of x11_common.c private
to that file. Eliminate all global variables defined by x11_common
(except error handler and colormap stuff).
There shouldn't be any functional changes, and only code is moved
around. There are some minor simplifications in the X11 init code, as
we completely remove the ability to initialize X11 and X11+VO
separately (see commit b4d9647 "mplayer: do not create X11 state in player frontend"),
and the respective functions are conflated into vo_x11_init() and
vo_x11_uninit().
Using vdpau on an X server configured to a bit depth of 30 (10 bit per
component) failed finding a visual. The cause was a hack that tried to
normalize the bit depth to 24 if it was not a known depth. It's unknown
why/if this is needed, but the following things speak against it:
- it prevented unusual bit depths like 30 bit from working
- it wasn't needed with normal bit depth like 24 bit
- it's probably copy-pasted from vo_x11 (where this code possibly makes
sense, unlike in vo_vdpau)
Just remove this code and look for a visual with native depth.
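The simpler lookup is essentially this (plain Xlib; the display/screen
variables stand in for whatever the VO has around, error handling omitted):

    XVisualInfo vinfo;
    int depth = DefaultDepth(display, screen);  // native depth, e.g. 24 or 30
    if (!XMatchVisualInfo(display, screen, depth, TrueColor, &vinfo))
        return -1;                              // no usable visual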
mplayer's video chain traditionally used FourCCs for pixel formats. For
example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the
string 'YV12' interpreted as unsigned int. Additionally, it used to
encode information into the numeric values of some formats. The RGB
formats had their bit depth and endian encoded into the least
significant byte. Extended planar formats (420P10 etc.) had chroma
shift, endian, and component bit depth encoded. (This has been removed
in recent commits.)
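For reference, a FourCC just packs four characters into an integer, roughly
as img_fourcc.h does it:

    #define MP_FOURCC(a,b,c,d) \
        ((a) | ((b) << 8) | ((c) << 16) | ((unsigned)(d) << 24))
    // MP_FOURCC('Y','V','1','2') == 0x32315659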
Replace the FourCC mess with a simple enum. Remove all the redundant
formats like YV12/I420/IYUV. Replace some image format names by
something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.
Add img_fourcc.h, which contains the old IDs for code that actually uses
FourCCs. Change the way demuxers, that output raw video, identify the
video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
format. Like the previous hack, this is supposed to avoid the need for
a complete codecs.cfg entry per format, or other lookup tables. (Note
that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT
raw video, but this is still considered better than adding a raw video
decoder - even if trivial, it would be full of annoying lookup tables.)
The TV code has not been tested.
Some corrective changes regarding endian and other image format flags
creep in.
Change the entire filter API to use reference counted images instead
of vf_get_image().
Remove filter "direct rendering". This was useful for vf_expand and (in
rare cases) vf_sub: DR allowed these filters to pass a cropped image to
the filters before them. Then, on filtering, the image was "uncropped",
so that black bars could be added around the image without copying. This
means that in some cases, vf_expand will be slower (-vf gradfun,expand
for example).
Note that another form of DR used for in-place filters has been replaced
by simpler logic. Instead of trying to do DR, filters can check if the
image is writeable (with mp_image_is_writeable()), and do true in-place
if that's the case. This affects filters like vf_gradfun and vf_sub.
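The new in-place pattern amounts to this (a sketch; mp_image_new_copy and
the processing call are assumptions):

    if (!mp_image_is_writeable(mpi)) {
        struct mp_image *copy = mp_image_new_copy(mpi); // private, writeable
        talloc_free(mpi);
        mpi = copy;
    }
    filter_in_place(mpi);   // hypothetical in-place work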
Everything has to support strides now. If something doesn't, making a
copy of the image data is required.
Setting the size of a mp_image must be done with mp_image_set_size()
now. Do this to guarantee that the redundant fields (like chroma_width)
are updated consistently. Replacing the redundant fields by function
calls would probably be better, but there are too many uses of them,
and it would be a bit less convenient.
Most code actually called mp_image_setfmt(), which did this as well.
This commit just makes things a bit more explicit.
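That is, roughly:

    mp_image_set_size(mpi, w, h);  // also updates chroma_width and friends
    // as opposed to: mpi->w = w; mpi->h = h;  // derived fields go stale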
Warning: the video filter chain still sets up mp_images manually,
and vf_get_image() is not updated.
vdpau hardware decoding used the DR (direct rendering) path to let the
decoder query a surface from the VO. Special-case the HW decoding path
instead, to make it separate from DR.
Slices allowed filtering or drawing video in horizontal bands or
blocks. This allowed working on the video in smaller units. In theory,
this could bring a performance win by lowering cache pressure, as you
didn't have to keep the whole video frame in cache while filtering,
only the slice.
In practice, the slice code path was barely used for the following
reasons:
- Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg
slice callback was disabled, because it can be called from another
thread, and the mplayer video chain is not thread-safe.
- There was nothing that would turn "full" images into appropriate
slices, so slices were rarely used.
- Most filters didn't actually support slices.
On the other hand, supporting slices led to code duplication and more
complex code in general. I made some experiments and didn't find any
actual measurable performance improvements when using slices. Even
ffmpeg removed slice-based filtering from libavfilter in favor of
simpler code.
The most broken thing about the slices code path is that slices can't
be queued, like it is done for images in vo.c.
For some reason, libavcodec abuses the slices rendering code path for
hardware decoding: in that case, the only purpose of the draw callback
is to pass a vdpau video surface object to video output. (It is unclear
to me why this had to use the slices code, instead of just returning an
AVFrame with the required vdpau state.)
Make this code separate within mpv, so that the internal slices code
path is not used for hardware decoding. Pass the vdpau state with
VOCTRL_HWDEC_DECODER_RENDER instead.
Remove the mencoder specific VOCTRLs.
Remove VOCTRL_DRAW_IMAGE and always set vo_driver.draw_image in VOs.
Make draw_image mandatory: change some VOs (like vo_x11) to support it,
and remove the image-to-slices fallback in vf_vo.
Remove vo_driver.is_new. This member indicated whether draw_image is
supported unconditionally, which is now always the case.
draw_image_pts is a hack until the video filter chain is changed to
include the PTS as field in mp_image. Then vo_vdpau and vo_lavc will
be changed to use draw_image.
The -zoom option enabled scaling with vo_x11. Remove the -zoom option,
and make its behavior default. Since vo_x11 has to use libswscale for
colorspace conversion anyway, which doesn't do actual extra scaling when
vo_x11 is run in windowed mode, there should be no speed difference with
this change.
The code removed from vf_scale attempted to scale the video to d_width/
d_height, which matters for anamorphic video and the --xy option only.
vo_x11 can handle these natively. The only case for which the removed
vf_scale code could matter is encoding with vo_lavc, but since that
didn't set VOFLAG_SWSCALE, nothing actually changes.
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.
The two commits are separate, because git is bad at tracking renames
and content changes at the same time.
Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
This drops the silly lib prefixes, and attempts to organize the tree in
a more logical way. Make the top-level directory less cluttered as
well.
Renames the following directories:
libaf -> audio/filter
libao2 -> audio/out
libvo -> video/out
libmpdemux -> demux
Split libmpcodecs:
vf* -> video/filter
vd*, dec_video.* -> video/decode
mp_image*, img_format*, ... -> video/
ad*, dec_audio.* -> audio/decode
libaf/format.* is moved to audio/ - this is similar to how mp_image.*
is located in video/.
Move most top-level .c/.h files to core. (talloc.c/.h is left on top-
level, because it's external.) Park some of the more annoying files
in compat/. Some of these are relics from the time mplayer used
ffmpeg internals.
sub/ is not split, because it's too much of a mess (subtitle code is
mixed with OSD display and rendering).
Maybe the organization of core is not ideal: it mixes playback core
(like mplayer.c) and utility helpers (like bstr.c/h). Should the need
arise, the playback core will be moved somewhere else, while core
contains all helper and common code.