This commit adds a new build system based on waf. configure and Makefile
are deprecated effective immediately and will be removed at some point in
the future (they are still available by running ./old-configure).
You can read how the choice of waf came about in `DOCS/waf-buildsystem.rst`.
TL;DR: we couldn't get the same level of abstraction and customization with
other build systems we tried (CMake and autotools).
For guidance on how to build the software now, take a look at README.md
and the cross compilation guide.
CREDITS:
This is a squash of ~250 commits. Some of them are not by me, so here is the
deserved attribution:
- @wm4 contributed some Windows fixes, renamed configure to old-configure
and contributed to the bootstrap script. Also, GNU/Linux testing.
- @lachs0r contributed some Windows fixes and the bootstrap script.
- @Nikoli contributed a lot of testing and discovered many bugs.
- @CrimsonVoid contributed changes to the bootstrap script.
The existing code tried to remove the "extra" profile flags for h264.
FF_PROFILE_H264_INTRA doesn't matter for us at all, because it's set
only for profiles the vdpau/vaapi APIs don't support.
The FF_PROFILE_H264_CONSTRAINED flag, on the other hand, is added to
H264_BASELINE and marks the file as a true subset of H264_MAIN
and H264_HIGH. Removing that flag would select the BASELINE profile,
which appears to be rarely supported by hardware decoders. This means we
accidentally rejected perfectly hardware decodable files. Use MAIN for
it instead.
(vaapi has explicit support for CONSTRAINED_BASELINE, but it seems to be
a new thing, and is not reported as supported where I tried. So don't
bother to check it, and do the same as on vdpau.)
See github issue #204.
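
As a rough illustration (not the actual mpv code), the mapping described
above amounts to something like this, using FFmpeg's FF_PROFILE_* constants:

    #include <libavcodec/avcodec.h>

    // Normalize the profile for hardware decoder selection: treat
    // CONSTRAINED_BASELINE as MAIN instead of stripping the flag,
    // which would yield the rarely supported BASELINE profile.
    static int normalize_h264_profile(int profile)
    {
        if (profile == FF_PROFILE_H264_CONSTRAINED_BASELINE)
            return FF_PROFILE_H264_MAIN;
        return profile;
    }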
When blending OSD and subtitles onto the video, we write bogus alpha
values. This doesn't normally matter, because these values are normally
unused and discarded. But at least on Wayland, the alpha values are used
by the compositor and lead to transparent windows, even with opaque
video, in places where the OSD happens to use transparency.
(Also see github issue #338.)
Until now, the alpha basically contained garbage. The source factor
GL_SRC_ALPHA meant that alpha was multiplied with itself. Use GL_ONE
instead (which is why we have to use glBlendFuncSeparate()). This should
give correct results, even with video that has alpha. (Or at least it's
something close to correct, I haven't thought too hard how the
compositor will blend it, and in fact I couldn't manage to test it.)
If glBlendFuncSeparate() is not available, fall back to glBlendFunc(),
which does the same as the code did before this commit. Technically, we
support GL 1.1, but glBlendFuncSeparate is 1.4, and I guess we should
try not to crash if vo_opengl_old runs on a system with GL 1.1 drivers
only.
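
A minimal sketch of the blend state described above (assuming the GL 1.4
entry point has been resolved, e.g. via the extension loader; names and
structure are illustrative, not the exact vo_opengl_old code):

    #include <stdbool.h>
    #include <GL/gl.h>

    static void setup_osd_blend(bool have_blend_func_separate)
    {
        if (have_blend_func_separate) {
            // GL_ONE as the source alpha factor writes usable alpha to
            // the framebuffer; GL_SRC_ALPHA would multiply alpha with
            // itself.
            glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                                GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        } else {
            // GL 1.1 fallback: the old (bogus alpha) behavior.
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }
    }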
This was supposed to handle preemption better. I still think the current
state isn't very nice, since the decoder can "accidentally" call the
previous render function after preemption (instead of calling the
reloaded function), so there might be issues. But all in all, this
dummy_render function is a bit confusing, and still not entirely
correct, so it's not worth it.
This removes "--hwdec=crystalhd".
I doubt anyone even tried to use this. But even if someone wants to
use it, the decoders can still be explicitly invoked with e.g.:
--vd=lavc:h264_crystalhd
The only advantage our special code provided was fallback to
software decoding. (But I'm not sure how the ffmpeg crystalhd
pseudo-decoder actually behaves.)
Removing this will allow some simplifications as soon as we don't need
vdpau_old.c anymore.
This uses vdpau OpenGL interop to convert a vdpau surface to a texture.
Note that this is a bit weak and primitive. Deinterlacing (or any other
form of vdpau postprocessing) is not supported. vo_opengl chroma scaling
and chroma sample position are not supported. Internally, the vdpau
video surfaces are converted to a RGBA surface first, because using the
video surfaces directly is too complicated. (These surfaces are always
split into separate fields, and the vo_opengl core expects progressive
frames or frames with weaved fields.)
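
Roughly, the GL_NV_vdpau_interop flow looks like this (error handling
omitted; the extension entry points and types from glext.h are assumed to
be available, and the RGBA output surface comes from the vdpau mixer):

    #include <stdint.h>
    #include <GL/gl.h>
    #include <GL/glext.h>   // GL_NV_vdpau_interop definitions

    // Sketch: bind a vdpau RGBA output surface to a GL texture.
    static void map_output_surface(void *vdp_device, void *get_proc_address,
                                   uintptr_t output_surface, GLuint texture)
    {
        glVDPAUInitNV(vdp_device, get_proc_address);
        GLvdpauSurfaceNV s = glVDPAURegisterOutputSurfaceNV(
                (void *)output_surface, GL_TEXTURE_2D, 1, &texture);
        glVDPAUSurfaceAccessNV(s, GL_READ_ONLY);
        glVDPAUMapSurfacesNV(1, &s);    // texture now samples the surface
        // ... render; later: glVDPAUUnmapSurfacesNV(1, &s); ...
    }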
Instead of checking for resolution and image format changes, always
fully reinit on any parameter change. Let init_video do all required
initializations, which simplifies things a little bit.
Change the gl_video/hardware decoding interop API slightly, so that
hwdec initialization gets the full image parameters.
Also make some cosmetic changes.
These formats are helpful for distinguishing surfaces with and without
alpha. Unfortunately, Libav and older versions of FFmpeg don't support
them, so code will break. Fix this by treating these formats specially
on the mpv side, mapping them to RGBA on Libav, and unsetting the alpha
bit in the mp_imgfmt_desc struct.
Before the bitstream_buffers field was deprecated, you had to free it,
otherwise you would leak memory.
(Although vdpau.c uses a new API, they managed to introduce a new
deprecation this quickly. This is a complaint.)
This introduces a memory leak of 12 bytes per file on some
_older_ libavcodec versions. This is minor enough that I don't care.
Video has up to 4 textures, if you include obscure formats with alpha.
This means alpha formats could always overwrite the first scaler
texture, leading to corrupted video display. This problem was recently
brought to light, when commit 571e697 started to explicitly unbind all 4
video textures, which broke rendering for non-alpha formats as well.
Fix this by reserving the correct number of texture units.
VA-API's OpenGL/GLX interop is pretty bad and perhaps slow (it renders an
X11 pixmap into an FBO, and has to go over X11, probably involving one or
more copies), and this code serves more as an example, rather than for
serious use. On the other hand, it might still work much better than
vo_vaapi, even if slightly slower.
Most hardware decoding APIs provide some OpenGL interop. This allows
using vo_opengl, without having to read the video data back from GPU.
This requires adding a backend for each hardware decoding API. (Each
backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
as a set of OpenGL textures.
Add infrastructure to support this. The next commit will add support for
VA-API.
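
The shape of such a backend might look roughly like this (field names are
hypothetical, not the actual mpv API):

    // One entry per hardware decoding API, stored in the backend list.
    struct gl_hwdec_driver {
        const char *api_name;                   // e.g. "vaapi-glx"
        int (*create)(struct gl_hwdec *hw);     // set up API/GL interop
        int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image,
                         GLuint *out_textures); // expose frame as textures
        void (*destroy)(struct gl_hwdec *hw);
    };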
The configure followed 5 different conventions for defines, because the next
guy always wanted to introduce a new, better way to unify them[1]. For a
hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
All is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef`, because it can help spot typos
and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
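
The typo-spotting claim can be made concrete: since every feature macro is
always defined (to 0 or 1), a misspelled check refers to an undefined
macro, which `gcc -Wundef` will flag:

    #define HAVE_HURR 1
    #define HAVE_DURR 0

    #if HAVE_HURR      // fine: compiled in
    #endif
    #if HAVE_HURRR     // typo: undefined macro, -Wundef warns here;
    #endif             // with #undef-style defines it would silently be 0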
There are some Microsoft Windows symbols which are traditionally used by
the mplayer core, because it used to be convenient (avi was the big
format, using binary windows decoders made sense...). So these symbols
have the exact same definition as the Windows one, and if mplayer is
compiled on Windows, the symbols from windows.h are used.
This broke recently just because some files were shuffled around, and
the symbols defined in ms_hdr.h collided with windows.h ones. Since we
don't have windows binary decoders anymore, there's not the slightest
reason our symbols should have the same names. Rename them to reduce the
risk for collision, and to fix the recent regression.
Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
its own version if the windows headers don't define it, and ao_wasapi is
not available on systems where this symbol is missing.
Also reindent ms_hdr.h.
We had some code for checking profiles earlier, which was removed in
commits 2508f38 and adfb71b. These commits mentioned that (working) hw
decoding was sometimes prevented due to profile checking, but I can't
find the samples anymore that showed this behavior. Also, I changed my
opinion, and I think checking the profiles is something that should be
done for better fallback to software decoding behavior.
The checks roughly follow VLC's vdpau profile checks, although we do
not check codec levels. (VLC's profile checks aren't necessarily
completely correct, but they're a welcome help anyway.)
Add a --vd-lavc-check-hw-profile option, which skips the profile check.
This one really did bite me hard (see previous commit), so enable it by
default.
Fix some cases of shadowing throughout the codebase. None of these
change behavior, and all of these were correct code, and just tripped up
the warning.
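
A typical instance of what -Wshadow complains about might look like this
(schematic example, not from the actual diff):

    int get_count(void);
    int get_subcount(int i);
    void use(int x);

    void example(void)
    {
        int count = get_count();
        for (int i = 0; i < count; i++) {
            int count = get_subcount(i);  // shadows the outer 'count':
            use(count);                   // -Wshadow warns, though the
        }                                 // code is correct
    }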
As preparation for resizing the window with input commands in the
following commit.
Since there are already so many functions which somehow resize the
window, add the word "highlevel" to the name of this new function.
We mixed the "old" AVFrame management functions (avcodec_alloc_frame,
avcodec_free_frame) with reference counting. This doesn't work
correctly; you must use av_frame_alloc and av_frame_free. Of course
ffmpeg doesn't warn us about the bad usage, but will just mess up
things silently. (Thanks a lot...)
While the alloc function seems to be 100% compatible, the free function
will do bad things, such as freeing memory that might still be
referenced by another frame. I didn't experience any actual bugs, but
maybe that was pure luck.
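
For reference, the correct reference-counting-aware pattern is:

    #include <libavutil/frame.h>

    static int decode_example(void)
    {
        AVFrame *frame = av_frame_alloc();   // not avcodec_alloc_frame()
        if (!frame)
            return AVERROR(ENOMEM);
        // ... decode into frame; buffers are reference counted ...
        av_frame_free(&frame);   // not avcodec_free_frame(): this drops
        return 0;                // only our reference to the buffers
    }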
This stopped working when the code was changed to create a window even
if --wid is used.
It appears we can't create our own window in this case, because in X11
there is no difference between a window with the root window as parent,
and a window that is managed by the WM. So make this (kind of worthless)
special case use the root window itself.
On systems that provide legacy OpenGL (up to 2.1), but not GL3 and
later, creating a GL3 context will fail. We then revert to legacy GL.
Apparently the error message printed when the GL3 context creation
fails is confusing. We could just silence it, but there's still an X
error ("X11 error: GLXBadFBConfig"), which would be quite hard to
filter out. For one, it would require messing with the X11 error
handler, which doesn't even carry a context pointer (for application
private data), so we don't even want to touch it. Instead, change
the error message to inform the user what's actually happening: a
fallback to an older version of OpenGL.
Trying to toggle the border during fullscreen (with "cycle border")
would leave the window stuck without border, and it couldn't be
restored. This was because vo_x11_decoration() always expected to be
called when toggling the state, which confused the contents of the
olddecor variable. Add got_motif_hints to hopefully prevent this.
Also, when changing the border, don't take fs into account. This may break
on older/broken WMs, but all in all it is in fact more robust and simpler,
because you do not need to update the border state manually when
returning from fullscreen.
The default behavior of weston changed some time ago: it no longer fills the
surface with black for fullscreen windows.
Now let mpv draw the whole screen in fullscreen mode.
Keep track of the default values directly, instead of creating a new
instance of the option struct just to get the defaults.
Also get rid of the special handling of m_obj_desc.init_options.
Instead, handle it purely by the option parser. Originally, I wanted to
handle --vo=opengl-hq and --vo=direct3d_shaders with this (by making
them aliases to the real VOs with a different preset), but since
--vo=opengl-hq=help prints the wrong values (as a consequence of the
simplification), I'm not doing that, and instead use something
different.
The code did not set and unset the current context inside sync sections. I am
not sure if this was an actual problem but this is better since the context is
linked to a single thread. In my brief tests this seems to avoid garbage
showing up in fullscreen.
Instead of removing dragging, we now test whether we should drag the window
or not, because if the OSC shows up, we cannot drag the window: that would
cause mouse events that make the OSC disappear.
Was disabled by default, was never used, internal support was
inconsistent and poor, and there has been virtually no interest in
creating translations.
And I don't even think that a terminal program should be translated.
This is something for (hypothetical) GUIs.
This could cause the OSC to be displayed without mouse interaction: for
example, starting mpv with --fs, and putting the mouse to where the OSC
area is beforehand, would cause the OSC to appear and stay visible. We
don't want that. The simplest solution is not generating artificial
mouse move events from mouse enter events, because they make the OSC
think the mouse was actually moved.
Also see commit 0c7978c, where handling of mouse enter events was added.
This was supposed to fix certain corner cases, but they're not relevant
anymore due to changes in OSC behavior.
Commit 9777047 fixed this as well (by resetting the mouse state on
MOUSE_LEAVE), but all that behavior is reverted with this commit, as it was
perhaps a bad idea. It wasn't very robust, made it hard to distinguish real
events from artificial ones, and finally made the mouse cursor more
often visible than needed. (Now switching between workspaces doesn't
make the cursor visible again when switching to a fullscreened mpv.)
vo_image didn't handle OSD redrawing correctly anymore after OSD
redrawing behavior was changed in commit ed9295c (or maybe it has been a
problem for a longer time, and only showed up now). Basically, flip_page
was called unexpectedly when no image was stored, which made it
crash trying to access the image. This could happen when, for example,
provoking OSD redrawing by pausing while using --vo=image, or by using
this command line: mpv --vo=image '-vf=lavfi="select=not(mod(n\,3))"'
Fix by removing the code that pretends vo_image can redraw OSD, and by
removing the framestepping fallback, which could make bad things happen
if the VO didn't support OSD redrawing. By now, there aren't any real
VOs that can't redraw the OSD properly, so this code is not needed and
just complicates things like vo_image.
This change likely will also be useful for vo_lavc (encoding).
Change talloc destructors so that they can never signal failure, and
don't return a status code. This makes our talloc copy even more
incompatible to upstream talloc, but on the other hand this is
preparation for getting rid of talloc entirely.
(The talloc replacement in the next commit won't allow the talloc_free
equivalent to fail, and the destructor return value would be useless.
But I don't want to change any mpv code either; the idea is that the
talloc replacement commit can be reverted for some time in order to
test whether the talloc replacement introduced a regression.)
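
Under the new convention, a destructor is just a cleanup hook. An
illustrative example (not from the actual diff):

    #include <unistd.h>

    // New style: returns nothing, can never fail.
    static void close_fd_destructor(void *p)
    {
        close(*(int *)p);   // nothing useful to do on failure anyway
    }

    // Old style (upstream talloc): int return, could signal failure:
    //   static int old_destructor(void *p) { /* ... */ return 0; }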
This only shows any differences when mpv isn't frontmost and is in fullscreen.
The Cmd+Tab overlay is still at a higher level, so as to avoid complete
usability fail.
glXGetVisualFromFBConfig() specifies that it can return NULL
if there is no associated X visual. Instead of crashing, let
initialization fail. I'm not sure if this is actually supposed to work
with a fallback visual (passing a NULL visual to vo_x11_config_vo_window
would just do this), but let's play safe for now.
Apparently this can happen when trying to use vo_opengl over a remote
X display.
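
The added check amounts to this (surrounding code assumed):

    #include <stdbool.h>
    #include <GL/glx.h>

    static bool select_visual(Display *dpy, GLXFBConfig fbc,
                              XVisualInfo **out_vi)
    {
        *out_vi = glXGetVisualFromFBConfig(dpy, fbc);
        if (!*out_vi)
            return false;   // e.g. remote X: fail init, don't crash
        return true;
    }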
Reverts a small change made in commit ed9295c. This is needed, because
otherwise mplayer.c/update_video_attached_pic() thinks it never has to
update the picture after initialization. (Maybe there would be more
elegant ways to handle this, but not without adding extra state.)
This commit adds the --force-window option, which will cause mpv always
to create a window when started. This can be useful when pretending that
mpv is a GUI application (which it isn't, but users pretend anyway), and
playing audio files would run mpv in the background without giving a
window to control it.
This doesn't actually create the window immediately: it does so
only after initializing playback and when it is clear that there won't
be any actual video. This could be a problem when starting slow or
completely stuck network streams (mpv would remain frozen in the
background), or if video initialization somehow is stuck forever in
an in-between state (like when the decoder doesn't output a video
frame, but doesn't return an error either). Well, we can pretend only
so much that mpv is a GUI application.
vo_vdpau is the only VO which implements VOCTRL_RESET. Redrawing the
last output frame is hard, because the output could consist of several
source video frames with certain types of post-processing
(deinterlacing). Implement redrawing as a special case by keeping the
previous video frames aside until at least one new frame is decoded.
This improves the previous commit, but is separate, because it's rather
complicated.
Before, a VO could easily refuse to respond to VOCTRL_REDRAW_FRAME,
which means the VO wouldn't redraw OSD and window contents, and the
player would appear frozen to the user. This was a bit stupid, and made
dealing with some corner cases much harder (think of --keep-open, which
was hard to implement, because the VO gets into this state if there are
no new video frames after a seek reset).
Change this, and require VOs to always react to VOCTRL_REDRAW_FRAME.
There are two aspects of this: First, behavior after a (successful)
vo_reconfig() call, but before any video frame has been displayed.
Second, behavior after a vo_seek_reset().
For the first issue, we define that sending VOCTRL_REDRAW_FRAME after
vo_reconfig() should clear the window with black. This requires minor
changes to some VOs. In particular vaapi makes this horribly
complicated, because OSD rendering is bound to a video surface. We
create a black dummy surface for this purpose.
The second issue is much simpler and works already with most VOs: they
simply redraw whatever has been uploaded previously. The exception is
vdpau, which has a complicated mechanism to track and filter video
frames. The state associated with this mechanism is completely cleared
with vo_seek_reset(), so implementing this to work as expected is not
trivial. For now, we just clear the window with black.
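
What a VO is now expected to do could be sketched as follows (names are
hypothetical, not the actual mpv API):

    #include <stdbool.h>

    struct priv { bool have_frame; };
    static void draw_current_frame(struct priv *p);
    static void clear_window_black(struct priv *p);

    // Always handle VOCTRL_REDRAW_FRAME; never refuse it.
    static void redraw_frame(struct priv *p)
    {
        if (p->have_frame)
            draw_current_frame(p);   // redraw last frame + current OSD
        else
            clear_window_black(p);   // after vo_reconfig() before the
    }                                // first frame, or after seek reset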
vo_x11 had a clever trick to implement a video equalizer: it requested a
DirectColor visual. This is an X11 mechanism which allows you to specify
a lookup table for each color channel. Effectively, this is a safe
override for the graphics card's gamma ramp. If X thinks the window
deserves priority over other windows in the system, X will temporarily
switch the gamma ramp so that DirectColor visuals can be displayed as
the application intends. (I'm not sure what the exact policy is, but in
practice, this meant the equalizer worked when the mouse pointer was
inside the window.)
But all in all, this is just lots of useless code for a feature that is
rarely ever useful. Remove it and use the libswscale equalizer instead.
(This comes without a cost, since vo_x11 already uses libswscale.)
One worry was that using DirectColor could have made it work better in
8-bit paletted mode. But this is not the case: there's no difference,
and in both cases, the video looks equally bad.
After rebasing my dev branch it turned out that the code deadlocked on
recursive calls of `vo_control`. Make the locking code a little bit smarter
by always skipping locking/unlocking if we are executing a chunk of
code that is already synchronized with `dispatch_sync`.
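
The idea, reduced to a sketch (names hypothetical; the real code runs on
Cocoa's dispatch queues, and the actual detection mechanism may differ):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t vo_mutex = PTHREAD_MUTEX_INITIALIZER;
    static __thread bool inside_sync_section;  // set around dispatch_sync

    static void smart_lock(void)
    {
        // Skip locking when the caller already runs inside a block
        // synchronized with dispatch_sync, avoiding the deadlock on
        // recursive vo_control calls.
        if (!inside_sync_section)
            pthread_mutex_lock(&vo_mutex);
    }

    static void smart_unlock(void)
    {
        if (!inside_sync_section)
            pthread_mutex_unlock(&vo_mutex);
    }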
Split the code into several files. The GUI elements now each have their own
files and private state. The original code was a mess, in order to respect the
retarded mplayer convention of having everything in a single file.
This commit also seems to fix the long-running bug of artifacts showing up
randomly when going fullscreen with nVidia GPUs.
Don't allocate a VAImage and an mp_image every time. VAImages are cached
in the surfaces themselves, and for mp_image an explicit pool is
created. The retry loop runs only once for each surface now.
This also makes use of vaDeriveImage() if possible.
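
The vaDeriveImage() fast path looks roughly like this (error handling
trimmed; a vaGetImage() fallback is assumed when deriving fails):

    #include <va/va.h>

    // Fast path: derive an image aliasing the surface's own storage.
    static VAStatus read_surface(VADisplay dpy, VASurfaceID surface)
    {
        VAImage image;
        VAStatus st = vaDeriveImage(dpy, surface, &image);
        if (st != VA_STATUS_SUCCESS)
            return st;              // caller falls back to vaGetImage()
        void *data;
        vaMapBuffer(dpy, image.buf, &data);
        // ... access the pixels directly, no separate copy ...
        vaUnmapBuffer(dpy, image.buf);
        vaDestroyImage(dpy, image.image_id);
        return VA_STATUS_SUCCESS;
    }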
Until now, mouse positions were just passed to the core as-is, even if
the mouse coordinates didn't map to any useful coordinate space, like
OSD coordinates. Lua scripting (used by the OSC, the only current user
of mouse input) had to translate mouse coordinates manually to OSD space
using mp_get_osd_mouse_pos(). This actually didn't work correctly in
cases where the mouse coordinates didn't map to the OSD (like vo_xv): the
mouse coordinates the OSC got were correct, but input.c was still
expecting "real" mouse coordinates for mouse areas.
Fix this by converting to OSD coordinates before passing the mouse
position to the core.
Nothing really accesses it. Subtitle initialization actually does in a
somewhat meaningful way, but there the container size is probably fine, as
subtitles were always initialized before the first video frame was
decoded.
Now writing -1 to the 'aspect' property resets the video to the auto
aspect ratio. Returning the aspect from the property becomes a bit more
complicated, because we still try to return the container aspect ratio
if no frame has been decoded yet.
This function would probably be useful in other places too.
I'm not sure why vd.c doesn't apply the aspect if it changes size by
less than 4 pixels. Maybe it's supposed to avoid ugly results with bad
scalers if the difference is too small to be noticed normally.
This time it broke because I didn't actually test compiling vo_vaapi.c,
and it was using a macro from mp_image.h, which implicitly assumed
FFALIGN was available. Screw that too, and copy the definition of
ffmpeg's FFALIGN to MP_ALIGN_UP, and move these macros to mp_common.h.
The code using FFSWAP was moved from vo_vaapi.c to vaapi.c, which didn't
include libavutil/common.h anymore, just libavutil/avutil.h. The header
avutil.h doesn't include common.h recursively in Libav, so it broke
there.
Add FFSWAP as MPSWAP in mp_common.h (copy pasted from ffmpeg) to make
sure this doesn't happen again. (This kind of stuff happens all too
often, so screw libavutil.)
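
The copied macros are essentially ffmpeg's one-liners (reproduced here
from memory; see mp_common.h for the authoritative versions):

    // MP_ALIGN_UP: round x up to a multiple of a power-of-two 'align'.
    #define MP_ALIGN_UP(x, align) (((x) + (align) - 1) & ~((align) - 1))

    // MPSWAP: FFSWAP, copy pasted.
    #define MPSWAP(type, a, b) \
        do { type SWAP_tmp = (b); (b) = (a); (a) = SWAP_tmp; } while (0)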
This code is actually quite inefficient: it reuses the (slow, simple)
screenshot code. It uses an inefficient method to read the image
(vaGetImage() instead of vaDeriveImage()), allocates new memory for
each frame that is read, and it tries all image formats again each
time.
Also, in my tests it always picked NV12 as image format, which is not
ideal if you actually want to filter the video, and vo_xv can't handle
this format without conversion either.
However, a user confirmed that it worked for him, so everything is fine.
This will allow GPU read-back with process_image.
We have to restructure how init_vo() works. Instead of initializing the
VO before process_image is called, rename init_vo() to
update_image_params(), and let it update the params only. Then we really
initialize the VO after process_image.
As a consequence of these changes, already decoded hw frames are
correctly unreferenced if creation of the filter chain fails. This
could trigger assertions on VO uninitialization, because it's not
allowed to reference hw frames past VO lifetime.
Merged from pull request #246 by xylosper. Minor cosmetic changes, some
adjustments (compatibility with older libva versions), and manpage
additions by wm4.
Signed-off-by: wm4 <wm4@nowhere>
Moving the window was convenient, but generated a MOUSE_LEAVE event
when it shouldn't. Now we remove it, because it is still possible
to move the window in weston with MOD+BTN0.
Normally, we need this for Xutf8LookupString(). But we can just fall
back to XLookupString(). In fact, the code for this was already there,
the code was just never tested and was actually crashing when active
(see commit 2115c4a).
XOpenIM can fail to find a valid input method, in which case it
returns NULL. Passing a NULL pointer to XCreateIC would cause a
crash, so fail VO init before that happens.
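
The fix boils down to checking the XOpenIM() result before use
(surrounding code assumed):

    #include <stdbool.h>
    #include <X11/Xlib.h>

    static bool setup_input(Display *dpy, Window w, XIM *im, XIC *ic)
    {
        *im = XOpenIM(dpy, NULL, NULL, NULL);
        if (!*im)
            return false;   // no input method: fail VO init, don't crash
        *ic = XCreateIC(*im,
                        XNInputStyle, XIMPreeditNothing | XIMStatusNothing,
                        XNClientWindow, w,
                        NULL);
        return *ic != NULL;
    }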
Before this commit there was just an error message, but the file descriptor was
still open. Now we close the file descriptor and prevent the error from
repeating endlessly. Also a CLOSE_WIN event is sent, which closes the window eventually if
the action of CLOSE_WIN is set to quit or quit_watch_later.
Improves display of images and video with alpha channel, especially if
the transparent regions contain (supposed to be invisible) garbage
color values.
This is mainly to avoid spurious cursor states due to the mouse moving inside
or outside the window as a result of the window resize (with cmd-0/1/2).
This avoids complex logic and triggers a mouse move so that the player
recomputes the correct cursor state based on the autohide configuration of
the user.
This keeps the state in sync with the current state in cocoa_common. In fact,
the cocoa code in mpv can decide whether it really wants to hide the cursor based on
the result of the `canHideCursor` method (this is so that the cursor is only
hidden when hovering on the video window).
This is supposed to reduce the amount of useless error messages shown
during initialization of vo_opengl. If multiple backends are compiled,
usually only one of them will work. For example, on Linux both X and
Wayland backends can be compiled, but usually either Wayland or X is
running. Then, if Wayland is not running, but X is, trying to initialize
the Wayland backend should not spam the terminal with error messages.
Signed-off-by: Andreas Sinz <andreas.sinz@aon.at>
In init_vo(), if sh->aspect is 0 or last_sample_aspect_ratio is set,
sh->aspect is overwritten. With the software decoding fallback behaviour,
this causes the aspect ratio from the container to be ignored, since
last_sample_aspect_ratio was already set during the first try with hardware
decoding.
The --deinterlace option does on playback start what the "deinterlace"
property normally does at runtime. You could do this before by using the
--vf option or by messing with the vo_vdpau default options, but this
new option is supposed to be a "foolproof" way.
The main motivation for adding this is so that the deinterlace property
can be restored when using the video resume functionality
(quit_watch_later command).
Implementation-wise, this is a bit messy. The video chain is rebuilt in
mpcodecs_reconfig_vo(), where we don't have access to MPContext, so the
usual mechanism for enabling deinterlacing can't be used. Further,
mpcodecs_reconfig_vo() is called by the video decoder, which doesn't
have access to MPContext either. Moving this call to mplayer.c isn't
currently possible either (see below). So we just do this before frames
are filtered, which potentially means setting the deinterlacing every
frame. Fortunately, setting deinterlacing is stable and idempotent, so
this is hopefully not a problem. We also add a counter that is
incremented on each reconfig to reduce the amount of additional work per
frame to nearly zero.
The reason we can't move mpcodecs_reconfig_vo() to mplayer.c is because
of hardware decoding: we need to check whether the video chain works
before we decide that we can use hardware decoding. Changing it so that
this can be decided in advance without building a filter chain sounds
like a good idea and should be done, but we aren't there yet.
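
The per-frame guard described above might be sketched like this (names
hypothetical, not the actual mpv code):

    // Re-apply the deinterlace option only when the filter chain was
    // actually rebuilt, instead of on every filtered frame.
    struct d_video_ctx { int reconfig_counter; };
    struct filter_state { int last_seen_reconfig; };

    static void maybe_set_deinterlacing(struct d_video_ctx *d,
                                        struct filter_state *p,
                                        int opt_deinterlace)
    {
        if (opt_deinterlace >= 0 &&
            d->reconfig_counter != p->last_seen_reconfig)
        {
            // set_deinterlacing(opt_deinterlace);  // idempotent
            p->last_seen_reconfig = d->reconfig_counter;
        }
    }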
Problem: I own the buffer, and I destroyed it while it was still being displayed.
Solution: Add a temporary buffer and destroy it when the next buffer is
attached.
This is mostly related to the fullscreen behaviour. cecbd8864 introduces an
option to make mpv behave like an OS X user would expect. This commit changes
the Cocoa parts of the code to be consistent with the behaviour on X11. Old
behaviour is still available through the option mentioned in cecbd8864.
There is still custom logic in the cocoa backend and it can probably be moved
to core:
* Don't perform autohide if the mouse is down
* Don't perform autohide outside of the video window
Fixes #218 (by accident)
icon_size is the number of array items of type long, not bytes. Change
the type of icon_size to int, because size_t makes you think of byte
quantities too quickly.
As an unrelated change, change the (char *) cast to (unsigned char *),
because it matches the common XChangeProperty idiom better.
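
For illustration (hypothetical variables), the property is set with the
item count, not the byte count:

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    // icon: width, height, then width*height ARGB pixels, as long items.
    static void set_net_wm_icon(Display *dpy, Window w,
                                long *icon, int width, int height)
    {
        int icon_size = 2 + width * height;     // item count, not bytes
        Atom net_wm_icon = XInternAtom(dpy, "_NET_WM_ICON", False);
        XChangeProperty(dpy, w, net_wm_icon, XA_CARDINAL, 32,
                        PropModeReplace, (unsigned char *)icon, icon_size);
    }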
The png files added to etc/ are taken from the link mentioned in commit
303096b, except that they have been converted to 16 bit, sRGB (with
color profile info dropped, if there was one), and transparent pixels
reset for better compression.
The file x11_icon.bin is generated by gen-x11-icon.sh. I'm adding it to
the git repo directly, because the script requires ImageMagick, and we
don't want to make building even more complicated.
The way this is done is basically a compromise between the effort
required in x11_common.c and in gen-x11-icon.sh. Ideally, x11_icon.bin
would be directly in the format as required by _NET_WM_ICON, but trying
to write the binary width/height values from shell would probably be a
nightmare, so here we go.
The zlib code in x11_common.c is lifted from demux_mkv.c, with some
modifications (like accepting a gzip header, because I don't know how to
make gzip write raw compressed data).
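
The relevant zlib trick, for reference: an inflateInit2() windowBits value
of 32 + 15 makes inflate() auto-detect either a zlib or a gzip header. A
minimal sketch:

    #include <zlib.h>

    static long gz_inflate(const unsigned char *in, unsigned in_len,
                           unsigned char *out, unsigned out_len)
    {
        z_stream zs = {0};
        if (inflateInit2(&zs, 32 + 15) != Z_OK)   // zlib or gzip header
            return -1;
        zs.next_in = (unsigned char *)in;
        zs.avail_in = in_len;
        zs.next_out = out;
        zs.avail_out = out_len;
        int r = inflate(&zs, Z_FINISH);
        inflateEnd(&zs);
        return r == Z_STREAM_END ? (long)zs.total_out : -1;
    }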