This replaces translate_key_input with a solution that gives mpv more
control over how keyboard input is converted to unicode. As a result:
- Key up/down events are generated the same way for all keys.
- Dead keys generate their base character instead of being combined with
the following character.
- Many Ctrl and Ctrl+Alt key combinations that were previously broken
are fixed, since it's possible to discover the base keys.
- AltGr doesn't produce special characters when mp_input_use_alt_gr is
false.
This also fixes some AltGr detection logic and adds proper UTF-16
decoding.
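For reference, a minimal sketch of UTF-16 surrogate pair decoding
(illustrative only, not the exact code added here):

    #include <stdint.h>

    // Decode one code point from UTF-16, combining surrogate pairs.
    static uint32_t utf16_next(const uint16_t **in)
    {
        uint32_t c = *(*in)++;
        if (c >= 0xD800 && c <= 0xDBFF && **in >= 0xDC00 && **in <= 0xDFFF)
            c = 0x10000 + ((c - 0xD800) << 10) + (*(*in)++ - 0xDC00);
        return c;
    }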
I hate tabs.
This replaces all tabs in all source files with spaces. The only
exception is old-makefile. The replacement was made by running the
GNU coreutils "expand" command on every file. Since the replacement was
automatic, it's possible that some formatting was destroyed (but
probably only where the formatting assumed tab stops other than
multiples of 8 columns).
mpv was resizing to the same size before it went fullscreen. We don't
need to schedule a resize, because the compositor will send a configure
event with the new dimensions, and that's when we should do it.
The stats were retrieved and written on every encode call, instead of
every encode call that actually returned a packet. ffmpeg.c also does it
this way, so it must be "more correct". Fixes 2-pass encoding.
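A sketch of the corrected pattern (stats_file is illustrative); the
two-pass log entry is written only when a packet was actually produced:

    AVPacket packet = {0};
    av_init_packet(&packet);
    int got_packet = 0;
    if (avcodec_encode_video2(avctx, &packet, frame, &got_packet) < 0)
        return -1;
    // Write the 2-pass stats only for calls that returned a packet:
    if (got_packet && avctx->stats_out)
        fprintf(stats_file, "%s", avctx->stats_out);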
This might be a good idea in order to prevent queuing a frame too far in
the future (causing apparent freezing of the video display), or dropping
an infinite number of frames (also apparent as freezing).
I think at this point this is most of what we can do if the vdpau time
source is unreliable (like with Mesa). There are still inherent race
conditions which can't be fixed.
The strange thing about this code was the shift parameter of the
prev_vs2 function. The parameter is used to handle timestamps before the
last vsync, since the % operator handles negative values incorrectly.
Most callers set shift to 0, and _usually_ pass a timestamp after the
last vsync. One caller sets it to 16, and can pass a timestamp before
the last vsync.
The mystery is why prev_vs2 doesn't just compensate for the % operator
semantics in the simplest way: if the result of the operator is
negative, add the divisor to it. Instead, it adds a huge value to it
(how huge is influenced by shift). If shift is 0, the result of the
function will not be aligned to vsyncs.
I have no idea why it was written in this way. Were there concerns about
certain numeric overflows that could happen in the calculations? But I
can't think of any (the difference between ts and vc->recent_vsync_time
is usually not that huge). Or is there something more clever about it,
which is important for the timing code? I can't think of anything
either.
So scrap it and simplify it.
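For illustration, the simplified calculation looks roughly like this
(struct and field names are approximate):

    static uint64_t prev_vsync(struct vdpctx *vc, uint64_t ts)
    {
        // Snap ts back to the last vsync; fix up the negative
        // remainder that % produces for timestamps before it.
        int64_t offset = ((int64_t)ts - (int64_t)vc->recent_vsync_time)
                         % vc->vsync_interval;
        if (offset < 0)
            offset += vc->vsync_interval;
        return ts - offset;
    }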
vo_vdpau used a somewhat complicated and fragile mechanism to convert
the vdpau time to internal mpv time. This was fragile as in it couldn't
deal well with Mesa's (apparently) random timestamps, which can change
the base offset in multiple situations. It can happen when moving the
mpv window to a different screen, and somehow it also happens when
pausing the player.
It seems this mechanism to synchronize the vdpau time is not actually
needed. There are only 2 places where sync_vdptime() is used (i.e.
returning the current vdpau time interpolated by system time).
The first call is for determining the PTS used to queue a frame. This
also uses convert_to_vdptime(). It's easily replaced by querying the
time directly, and adding the wait time to it (rel_pts_ns in the patch).
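Roughly (wrapper and variable names are approximate):

    VdpTime now = 0;
    vdp->presentation_queue_get_time(vc->flip_queue, &now);
    // rel_pts_ns: how long from now the frame should be displayed
    vdp->presentation_queue_display(vc->flip_queue, surface, 0, 0,
                                    now + rel_pts_ns);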
The second call is pretty odd: it updates the vdpau time a second time
in the same function. From what I can see, this can matter only if
update_presentation_queue_status() is very slow. I'm not sure what to
make of this, because the call merely queries the presentation queue.
Just assume it isn't slow, and that we don't have to update the time.
Another potential issue with this is that we call
VdpPresentationQueueGetTime() every frame now, instead of every 5
seconds and interpolating the other calls via system time. Moreover,
this is per video frame (which can potentially be dropped), and not per
actually displayed frame. Assume this doesn't matter.
This simplifies the code, and should make it more robust on Mesa. But
note that what Mesa does is obviously insane - this is one situation
where you really need a stable time source. There are still plenty of
race condition windows where things can go wrong, although this commit
should drastically reduce the possibility of this.
In my tests, everything worked well. But I have no access to a Mesa
system with vdpau, so it needs testing by others.
See github issues #520, #694, #695.
This commit adds support for automatic selection of color profiles based on
the display where mpv is initialized, and automatically changes the color
profile when display is changed or the profile itself is changed from
System Preferences.
@UliZappe was responsible for the testing and implementation of a lot of
this commit, including the original implementation of
`cocoa_get_icc_profile_path` (see #594).
Fixes #594
Reduce most dependencies on struct mp_csp_details, which was a bad first
attempt at dealing with colorspace stuff. Instead, consistently use
mp_image_params.
Code which retrieves colorspace matrices from csputils.c still uses this
type, though.
There were some bad interactions with the OSC.
For one, dragging the OSC bar, and then moving the mouse outside of the
OSC (while mouse button still held) would suddenly initiate window
dragging. This was because win_drag_button1_down was not reset when
sending a normal mouse event, which means the window dragging code can
become active even after we've basically decided that the preceding
click didn't initiate window dragging.
Second, dragging the window and clicking on the OSC bar after that did
nothing. This was because no mouse button up event was sent to the core,
even though a mouse down event was sent. So make sure the key state is
erased with MP_INPUT_RELEASE_ALL.
We don't check whether the WM supports _NET_WM_MOVERESIZE_MOVE, but
if it doesn't, nothing bad happens. There might be a race condition
when pressing a button, and then moving the mouse and releasing the
button at the same time; then the WM might get the message to initiate
moving the window after the mouse button has been released, in which
case the result will probably be annoying. This could possibly be fixed
by sending _NET_WM_MOVERESIZE_CANCEL on button release, but on the
other hand, we probably won't receive a button release event in this
situation, so ignore this problem.
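For reference, initiating the move looks roughly like this (per the
EWMH spec; the variable names here are illustrative):

    XEvent ev = {0};
    ev.xclient.type = ClientMessage;
    ev.xclient.window = vo_window;
    ev.xclient.message_type = XInternAtom(display, "_NET_WM_MOVERESIZE",
                                          False);
    ev.xclient.format = 32;
    ev.xclient.data.l[0] = x_root;   // pointer position
    ev.xclient.data.l[1] = y_root;
    ev.xclient.data.l[2] = 8;        // _NET_WM_MOVERESIZE_MOVE
    ev.xclient.data.l[3] = 1;        // button 1
    ev.xclient.data.l[4] = 1;        // source: normal application
    XUngrabPointer(display, CurrentTime);
    XSendEvent(display, DefaultRootWindow(display), False,
               SubstructureRedirectMask | SubstructureNotifyMask, &ev);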
The dragging is initiated only when moving the mouse pointer after a
click in order to reduce annoying behavior when the user is e.g.
doubleclicking.
Closes #608.
It's not really needed to be public. Other code can just use mp_image.
The only disadvantage is that the other code needs to call an accessor
to get the VASurfaceID.
Although I at first thought it would be better to have a separate
implementation for hwaccels because the differences to software images
are too large, it turns out you can actually save some code with it.
Note that the old implementation had a small memory management bug. This
got painted over in commit 269c1e1, but is hereby solved properly.
Also note that I couldn't test vf_vavpp.c (due to lack of hardware), and
I hope I didn't accidentally break it.
They were used by ancient libavcodec versions. This also removes the
need to distinguish vdpau image formats at all (since there is only
one), and some code can be simplified.
The window doesn't receive a WM_LBUTTONUP message after it's dragged,
probably because it's swallowed by the modal loop. To stop the button
from sticking, release it manually when the drag is complete.
Mouse buttons can get stuck down if the button is pressed inside the
video window and released outside. Avoid this by capturing mouse input
when a button is pressed.
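A sketch of the window procedure change:

    case WM_LBUTTONDOWN:
        SetCapture(hwnd);   // keep receiving events outside the window
        // ... send the mouse-down event to the core ...
        break;
    case WM_LBUTTONUP:
        if (GetCapture() == hwnd)
            ReleaseCapture();
        // ... send the mouse-up event to the core ...
        break;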
This updates the logic for the new, somewhat unified behavior of SRGB
and 3DLUT since 34bf9be (not that it was particularly correct even after
that change), and checks for the presence of the corresponding
extensions only in the cases in which they're needed.
This commit:
- Changes some of the #define and variable names for clarification and
adds comments where appropriate.
- Unifies :srgb and :icc-profile, making them fit into the same step of
the decoding process and removing the weird interactions between both
of them.
- Makes :icc-profile take precedence over :srgb (to significantly reduce
the number of confusing and useless special cases)
- Moves BT709 decompanding (approximate or actual) to the shader in all
cases, making it happen before upscaling (instead of the old 0.45
gamma function). This is the simpler and more proper way to do it; a
sketch of the exact curve follows this list.
- Enables the approx gamma function to work with :srgb as well due to
this (since they now share the gamma expansion code).
- Renames :icc-approx-gamma to :approx-gamma since it is no longer tied
to the ICC options or LittleCMS.
- Uses gamma 2.4 as input space for the actual 3DLUT. This is now a
pretty arbitrary factor, but I picked 2.4 mainly because a higher pure
power value here seems to produce visually better results with wide
gamut profiles, rather than the previous 1.95 or BT.709.
- Adds the input gamma space to the 3dlut cache header in case we change
it more in the future, or even make it user customizable (though I
don't see why the latter would really be necessary).
- Fixes the OSD's gamma when using :srgb, which was previously still
using the old (0.45) approximation in all cases.
- Updates documentation on :srgb, it was still mentioning the old
behavior from circa a year ago.
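For reference, the exact BT.709 decompanding curve mentioned above, as a
C sketch (the shader uses the GLSL equivalent; constants are from
Rec. 709):

    #include <math.h>

    static float bt709_expand(float v)
    {
        // Inverse of the Rec. 709 OETF; the approx variant replaces
        // this with a pure power function.
        return v < 0.081f ? v / 4.5f
                          : powf((v + 0.099f) / 1.099f, 1.0f / 0.45f);
    }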
This commit should serve to both open up and make the CMS/shader code much
more accessible and less confusing/error-prone and simultaneously also
improve the performance of 3DLUTs with wide gamut color spaces.
I would have liked to make it more modular, but almost all of these
changes are interdependent, save for the documentation updates.
Note: Right now, the "3DLUT takes precedence over SRGB" logic is just
coded into gl_lcms.c's compile_shaders function. Ideally, this should be
done earlier, when parsing the options (by overriding the actual
opts.srgb flag) and output a warning to the user.
Note: I'm not sure how well this works together with real-world
subtitles that may need to be color corrected as well. I'm not sure
whether :approx-gamma needs to apply to subtitles as well. I'll need to
test this on proper files later.
Note: As of now, linear light scaling is still intrinsically tied to
either :srgb or :icc-profile. It would be thinkable to have this as an
extra option, :linear-scaling or similar, that could be used with or
without the two color management options.
Since the AO will run in a thread, and there's lots of shared state with
encoding, we have to add locking.
One case this doesn't handle correctly is the encode_lavc_available()
calls in ao_lavc.c and vo_lavc.c. They don't do much (and usually only
to protect against doing --ao=lavc with normal playback), and changing
it would be a bit messy. So just leave them.
The previous version of the gamma suboption was pretty useless. It could
be used to disable delayed gamma enabling, which is a mechanism to avoid
having to adjust gamma in the shader by default.
Repurpose the suboption and allow setting an exact gamma value with it.
You can already override gamma with the --gamma option as well as the
gamma input property, but these use a weird curve to create the
impression of a linear perceived brightness change when changing the
value. This suboption now allows setting an exact gamma value.
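Conceptually, the suboption now applies a plain power curve with the
user's value, with no perceptual remapping (a sketch; whether the shader
uses g or 1/g here is an implementation detail I'm not asserting):

    #include <math.h>

    static float apply_gamma(float v, float gamma)
    {
        return powf(v, 1.0f / gamma);
    }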
This used to be absolute colorimetric, but relative colorimetric is a
saner default due to the arguments presented in issue #595.
A short summary: In general it doesn't affect much because our eyes
adapt to the white point either way, but if running in windowed mode it
would make the whites seem inconsistent/tinted. For fullscreen
projection it's also undesirable since it reduces the dynamic range
without much benefit (again, since our eyes adapt either way) and it
also breaks calibration against ambient lighting.
This shouldn't change much, since most profile types that aren't 3DLUTs
aren't capable of either of those transforms, and most displays are
calibrated against D65 (same as BT.709 source) either way.
This uses the value of 1.95 as an approximation for the exact gamma
curve, which replicates the behavior of popular video software including
anything in the Apple ecosystem, as per issue #534.
This is the same issue as addressed by 257d9f1, except this time for
the :srgb option as well. (257d9f1 only addressed :icc-profile)
The conditions of the srgb_compand mix() call are also flipped to
prevent an off-by-one error.
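For reference, the sRGB companding curve in question, as a C sketch with
the thresholds from IEC 61966-2-1 (the shader implements this with a
mix() over both branches, which is what the flipped conditions affect):

    #include <math.h>

    static float srgb_compand(float v)
    {
        return v <= 0.0031308f ? 12.92f * v
                               : 1.055f * powf(v, 1.0f / 2.4f) - 0.055f;
    }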
I was unhappy with the old way of handling buffers, especially resizing. But my
original plan to use wl_shm_pool_resize wasn't as good as I initially thought.
I might get back to it.
With the new buffer pools it is now possible to select triple buffering.
The buffer pools are also needed for the upcoming subsurfaces for osd
and subtitles.
I hope this change was worth it.
I could not see any difference whatsoever, but for usage with a 3DLUT
there's zero performance difference, so we might as well follow the spec
to the letter.
Legacy GL context creation (glXCreateContext) explicitly requires a X
visual, while the modern one (glXCreateContextAttribsARB) does not for
some reason. So fail only on the legacy code path if we don't find a
visual. Note that vo_x11_config_vo_window() will select a default visual
if a NULL visual is passed to it.
This fixes issue #504. For some reason, glXChooseFBConfig() will return
a fbconfig with no associated visual. (I'm not sure if this is allowed.
They don't always have a visual, but since GLX_X_RENDERABLE is set
and GLX_DRAWABLE_TYPE is (implicitly) set to GLX_WINDOW_BIT, why would
there be no visual?)
Even worse, a test program seems to show that a 16 bit fbconfig is
selected (instead of 24/32 bit), which doesn't sound nice at all. Since
there _are_ better fbconfigs available, glXChooseFBConfig() should
normally sort them by quality, and return the better ones first. It's
worth noting that this function should also prefer GLX_TRUE_COLOR
over anything else, although this comes last in the sort order.
Whatever is going on, requesting GLX_X_VISUAL_TYPE with GLX_TRUE_COLOR
seems to fix it.
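The request then looks roughly like this:

    int attribs[] = {
        GLX_X_RENDERABLE, True,
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_RENDER_TYPE, GLX_RGBA_BIT,
        GLX_X_VISUAL_TYPE, GLX_TRUE_COLOR, // skips the visual-less config
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        None
    };
    int n;
    GLXFBConfig *fbc = glXChooseFBConfig(display, screen, attribs, &n);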
This was done incorrectly in the previous commit: the fallback size used
the window size as requested with the first config call, which is the
size of the hidden window in the vo_opengl case. (That damn hidden
window again...)
This code essentially does nothing. As far as I could find out, this
actually used to do something. Then it was removed with commit efe7c39f,
leaving some leftover code that didn't do anything useful. This happened
12 years ago!
Also remove a commented debug printf.
vo_opengl creates a hidden X11 window to probe the OpenGL context. It
must do that before creating a visible window, because VO creation and
VO config are separate phases.
There's a race condition involving the hidden window: when starting with
--fs, and then leaving fullscreen, the unfullscreened window is
sometimes set to the aspect ratio of the hidden window. I'm not sure why
the window size itself uses the correct size (but corrupted by the wrong
aspect), but that's perhaps because the window manager is free to ignore
the size hint while honoring the aspect, or something equally messed up.
It turns out this happens because x11_common.c thinks the size of the
hidden window is the size of the unfullscreened window. This in turn
happens because vo_x11_update_geometry() reads the size of the hidden
window when called in vo_x11_fullscreen() (called from
vo_x11_config_vo_window()) when mapping the fullscreen window. At that
point, the window could be mapped, but not necessarily. If it's not
mapped, it will get the size of the unfullscreened window... I think.
One could fix this by actively waiting until the window is mapped. Try
to pick a less hacky approach instead, and never read the window size
until MapNotify is received.
vo_x11_create_window() needs a hack, because we'd possibly set the VO's
size to 0, resulting e.g. in vdpau to fail initialization. (It'll print
error messages until a proper resize is received.)
RGB565 is one of the fastest and most supported formats on low end
consumer devices, but ffmpeg spams warnings when using it. Make it
opt-in instead of opt-out.
The problem seems to have solved itself. I guess the previous changes to
resizing and commit ba101ab made this possible. Consider me happy for removing
that crap.
Still untested, because now it crashes inside of libSDL for unknown
reasons. (This also happens with mpv git from yesterday - probably an
installation problem, or SDL doing weird things it shouldn't be doing.)
The main difference between the old and new callbacks is that the old
callbacks required passing the window size, which is and always was very
inconvenient and confusing, since the window size is already in
vo->dwidth and vo->dheight.
Rename vo_get_src_dst_rects() to mp_get_src_dst_rects() and make it
independent from the VO (it takes a comical amount of parameters now to
pass all required state). Add a convenience wrapper with the name
vo_get_src_dst_rects() to vo.c. Replace all aspdat and vo usages with
immediate parameters.
Functionally, nothing should change, except that the window size is
clamped to a minimum of size 1 much earlier, and some log messages
change the prefix (don't bother with vo.vo_log stuff).
The plan is to make all the code in aspect.c independent from vo.c,
which should make the code easier to understand, will allow removal of
vo->aspdat, and reduces the amount of code that accesses weird mutable
struct vo fields.
vo->aspdat is basically an outdated version of vo->params, plus some
weirdness. Get rid of it, which will allow further cleanups and which
will make multithreading easier (less state to care about).
Also, simplify some VO code by using mp_image_set_attributes() instead
of caring about display size, colorspace, etc. manually. Add the
function osd_res_from_image_params(), which is often needed in the case
OSD renders into an image.
Do two things:
1. add locking to struct osd_state
2. make struct osd_state opaque
While 1. is somewhat simple, 2. is quite horrible. Lots of code accesses
lots of osd_state (and osd_object) members. To make sure everything is
accessed synchronously, I prefer making osd_state opaque, even if it
means adding pretty dumb accessors.
All of this is meant to allow running VOs in their own threads.
Eventually, VOs will request OSD on their own, which means osd_state
will be accessed from foreign threads.
This is a bit of a hack, but in order to prevent TranslateMessage from
seeing WM_KEYDOWN messages that we already know how to decode, move the
decoding logic to the event loop. This should fix #476, since it stops
the generation of extraneous WM_CHAR messages that were triggering more
than one action on keydown.
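A sketch of the changed event loop (decode_key stands in for the actual
decoding logic and is a hypothetical name):

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        // Handle keys we can decode ourselves, so TranslateMessage
        // never synthesizes a duplicate WM_CHAR for them.
        if (msg.message == WM_KEYDOWN && decode_key(&msg))
            continue;
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }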
Doesn't make any sense anymore. X11 (which was mentioned in the manpage)
autodetects it, and everything else ignored the option values.
Since for incomprehensible reasons the backends and vo.c still need to
exchange information about the screensize using the option fields,
they're not removed yet.
For some reason, this made all VO backends both set the screen
resolution in opts->screenwidth/height, and call
aspect_save_screenres(). Remove the latter. Move the code to calculate
the PAR-corrected window size from aspect.c to vo.c, and make it so that
the monitor PAR is recalculated when it makes sense.
When using --monitoraspect, but either the screen width or height or
both are unknown, a fallback is applied. This is a completely useless
obscure corner case that's going to help nobody, so get it out of the
way.
For a long time the cocoa backend set the xinerama_x/y and used dx/dy from the
VO instance. This somewhat worked with some workarounds but wasn't really
what was supposed to be happening. Moreover 27e4360, which touched this
workaround, introduced a regression.
New code doesn't set the xinerama_x/y values so that dx/dy are offsets in the
current screen (not a virtual screen composed of all the screens). The screen
reference detected during VOCTRL_UPDATE_SCREENINFO is also passed down to the
window initialization code.
Fixes #472
On X11, if no wayland compositor is running, wl_list_init() will never
be called. This will cause destroy_display() to segfault when trying to
iterate over the list.
The user_data is passed on add_listener and can later be changed with
set_user_data. But because we don't want to change it later, and because
it is the same object, remove the set_user_data call.
This might be a copy&paste leftover from the initial draft for the wayland
backend.
I added enough logic to never set ontop or fullscreen twice. This commit
also keeps the size of the video if multiple videos are played. If the
ratio differs, the width will be kept at the same size and only the
height changes.
libwayland-client contains the following code [1]:
    runtime_dir = getenv("XDG_RUNTIME_DIR");
    if (!runtime_dir) {
        fprintf(stderr,
                "error: XDG_RUNTIME_DIR not set in the environment.\n");
This means this message will unconditionally and unavoidably be printed
if XDG_RUNTIME_DIR is not set. Since mpv is a terminal program, and we
want to avoid unnecessary output, work it around by not attempting to
use wayland if this environment variable is not set.
[1] http://cgit.freedesktop.org/wayland/wayland/tree/src/wayland-client.c#n636
(cd0dccd01e16fa404e03974d30ded3aebdb1c4bc)
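The workaround itself is a one-liner (the log call is illustrative):

    if (!getenv("XDG_RUNTIME_DIR")) {
        mp_msg(MSGT_VO, MSGL_V, "wayland: XDG_RUNTIME_DIR not set\n");
        return false;
    }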
This commonly happens when initializing vo_opengl on an X11-only system.
Unfortunately, most wl_*_destroy() functions appear not to accept NULL
pointers, making partial deinitialization a pain: you have to add your
own NULL checks everywhere to avoid crashes.
xkb.context is deinitialized separately, because you can initialize it
just fine, even if the rest of input initialization fails.
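The teardown then has to look something like this (field names are
approximate):

    if (wl->compositor)
        wl_compositor_destroy(wl->compositor);
    if (wl->shm)
        wl_shm_destroy(wl->shm);
    if (wl->display) {
        wl_display_flush(wl->display);
        wl_display_disconnect(wl->display);
    }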
Because of this commit there were problems displaying the frames in
their right order.
This reverts commit 96e75d234a.
Conflicts:
video/out/gl_wayland.c
video/out/wayland_common.h
The changes in the vo_wayland_ontop function have no effect on the workaround.
Somehow the problem just disappeared. I guess it is because of the new
control function in gl_wayland.c where the resize happens immediately
after the event dispatch/flush.
Both X11 and Wayland support the same format for drag & drop operations
(text/uri-list), and the code for that was copied from x11_common.c to
wayland_common.c. Factor it out.
This solves the issue where we would not receive any frame events. The
difference to my earlier tests is that now it looks like eglSwapBuffers
uses its own event queue or something similar along those lines, because
the performance is the same as without any redraw callback.
At the moment there are visual glitches when we resize the window. This
happens because in wayland there is a special function for resizing EGL
windows.
To prevent the glitches move the egl_context to the wayland state in
wayland_common.h and add a new control function to gl_wayland.c to wrap the
vo_wayland_control function to check for resize events.
With the new control wrapper the glitches are gone and the resizing is fluid.
The reason for the segmentation fault here was that we couldn't get the
requested minor version. The major version is enough for differentiating
between OpenGL 3 and OpenGL 2. If it fails there is still a fallback to any
version available.
Also add a warning if we use the fallback.
Note that we don't try to be clever about detecting the files as
subtitles: we just check the file extension. We could go all the way and
check the files by opening them with a demuxer, but that would probably
do more bad than good.
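A sketch of the check (the real extension list is longer than this):

    #include <string.h>
    #include <strings.h>
    #include <stdbool.h>

    static bool is_subtitle_file(const char *path)
    {
        const char *ext = strrchr(path, '.');
        static const char *const subs[] = {".srt", ".ass", ".sub", NULL};
        for (int n = 0; ext && subs[n]; n++) {
            if (strcasecmp(ext, subs[n]) == 0)
                return true;
        }
        return false;
    }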
Drag and drop is pretty complicated (just note how the number of Atoms
in use almost doubles), so I'm not sure whether this works everywhere.
This has been written by looking at the specification [1] or what looks
like the specification, and some external example code [2]. (The latter
one has no code license, but we didn't copy any code.)
We completely ignore the "requirement" of the spec that the filename
"must" include username and hostname, e.g. "file://user@host/path/file".
In theory, this is required because X is network transparent, but at
this point the so called network transparency is a complete joke, and
Konqueror for one didn't include hostnames in "file://" URIs.
Tested with konqueror as drop source.
[1] http://www.newplanetsoftware.com/xdnd/
[2] http://www.edwardrosten.com/code/dist/x_clipboard-1.1/paste.cc
This reverts commit 877303aaa9.
The OpenGL 2.1 fallback for vo_opengl didn't work. Two things come
together: 1. trying to create an OpenGL 3.0 context will fail with
a GLXBadFBConfig error, and 2. X errors are fatal by default. Since
the reverted commit removed the X error handler, the mpv process was
killed, instead of continuing for the fallback.
(Note that this commit is not an exact inverse commit, since mp_msg
changed, but it does about the same thing.)
query_format was setting state even if it wasn't the correct thing to
do. Somehow it worked by pure luck (until commit e6e6b88b6d).
Fix the initialization by setting state inside of reconfig.
If the utf8 string used to create the NSString for title was invalid utf8,
-stringWithUTF8String returned nil and triggered an assertion in Cocoa's
framework code.
Sanitize the utf8 string, and if the sanitization wasn't enough, just
avoid crashing by not setting a title.
Fixes #406
How embarrassing...
This code is inactive for all VOs other than vo_vdpau. For vo_vdpau,
this caused various issues, such as stuttering after about an hour of
running mpv; see github issue #403.
Note that this will print a difference even with perfect sync, because
the code queues the frames _between_ vsyncs, probably as an error margin
(though I don't understand why it uses the exact values chosen).
There's a single mp_msg() in path.c, but all path lookup functions seem
to depend on it, so we get a rat-tail of stuff we have to change. This
is probably a good thing though, because we can have the path lookup
functions also access options, so we could allow overriding the default
config path, or ignore the MPV_HOME environment variable, and such
things.
Also take the chance to consistently add talloc_ctx parameters to the
path lookup functions.
Also, this change causes a big mess on configfiles.c. It's the same
issue: everything suddenly needs a (different) context argument. Make it
less wild by providing a mp_load_auto_profiles() function, which
isolates most of it to configfiles.c.
Always pass around mp_log contexts in the option parser code. This of
course affects all users of this API as well.
In stream.c, pass a mp_null_log, because we can't do it properly yet.
This will be fixed later.
Until now, there were two functions to add input sources (stuff like
stdin input, slave mode, lirc, joystick). Unify them to a single
function (mp_input_add_fd()), and make sure the associated callbacks
always have a context parameter.
Change the lirc and joystick code such that they store their state
in a context struct (probably worthless), and use the new mp_msg
replacements (the point of this refactoring).
Additionally, get rid of the ugly USE_FD0_CMD_SELECT etc. ifdeffery in
the terminal handling code.
This ended up a little bit messy. In order to get a mp_log everywhere,
mostly make use of the fact that va_surface already references global
state anyway.
This removes the messages printed on unknown pixel formats.
Passing a mp_log to them would be too messy. Actually, this is a good
change, because in the past we often had trouble with these messages
printed too often (causing terminal spam etc.), and printing warnings or
error messages on the caller sides is much cleaner.
vd_lavc.c had a change earlier to print an error message if a decoder
outputs an unsupported pixel format.
lcms2 has a global message callback for error reporting. If you don't
set this, these error messages are silently thrown away. I think we
still want the error messages, so we have to do dumb stuff to avoid
clashes. This doesn't handle the case if another library in the same
process sets the message callback, but at least this should exclude
possible memory errors when running multiple instances of mpv.
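The dumb stuff amounts to registering a process-global handler (a
sketch; routing errors back to the right mpv instance is the hard part
and is omitted here):

    #include <stdio.h>
    #include <lcms2.h>

    static void lcms2_error_handler(cmsContext ctx, cmsUInt32Number code,
                                    const char *msg)
    {
        // ctx is NULL for globally reported errors.
        fprintf(stderr, "lcms2: %s\n", msg);
    }

    // during init:
    cmsSetLogErrorHandler(lcms2_error_handler);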
This has similar problems as the ALSA message callback, though in theory
we could use the Display handle to find the right mpv instance from the
global callback. It still wouldn't work if another library happens to
set the error handler at the same time. There doesn't seem much of an
advantage overriding the error handler (though it used to be required),
so remove it.
Apparently this has been broken for a year or so. There were three
reasons for the breakage here:
1. The window dragging hack prevented any DOWN event from
passing through since it always returned before we even got
the button.
2. The window style had CS_DBLCLKS in its flags, so we did not
get any DOWN events when the OS had detected a double click
(instead expecting us to handle a DBL event).
3. We never sent any mouse buttons when mouse movement handling
was disabled.
In my opinion, config.h inclusions should be kept to a minimum. MPlayer
code really liked including config.h everywhere, though, even in often
used header files. Try to reduce this.
Since m_option.h and options.h are extremely often included, a lot of
files have to be changed.
Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
The tmsg stuff was for the internal gettext() based translation system,
which nobody ever attempted to use and thus was removed. mp_gtext() and
set_osd_tmsg() were also for this.
mp_dbg was once enabled in debug mode only, but since we have log level
for enabling debug messages, it seems utterly useless.
Add the mp_get_user_path() function, and make it expand special path
prefixes. Use it for some things in mpv which take filenames
(--input-config, --screenshot-template, opengl icc-profile suboption).
This allows accessing files in the mpv config dir without hardcoding the
config path by prefixing the path with ~~/. See the manpage additions
for details.
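A sketch of the prefix handling (helper names and signatures are
approximate; the real code lives in path.c):

    char *mp_get_user_path(void *talloc_ctx, const char *path)
    {
        if (strncmp(path, "~~/", 3) == 0)
            return mp_find_user_config_file(path + 3);
        return talloc_strdup(talloc_ctx, path);
    }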
Use the scaled video size (i.e. as shown on the window) as reference for
zoom. This is the easiest way to fix different width/height scale
factors as they happen when zooming video with a pixel aspect ratio
other than 1:1.
Also fix the unscaled mode, so that it 1. doesn't scale even with
--video-zoom, and 2. doesn't scale by small amounts when the video is
cropped by making the window smaller than the video.
The --flip option flipped the image upside-down, by trying to use VO
support, or if not available, by inserting a video filter. I'm not sure
why it existed. Maybe it was important in ancient times when VfW based
decoders output an image this way (but even then, flipping an image is a
free operation by negating the stride).
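For illustration, such a flip is just pointer arithmetic, no copy:

    // Per plane: point at the last row and make the stride walk upwards.
    plane_ptr += (height - 1) * (ptrdiff_t)stride;
    stride = -stride;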
One nice thing about this is that it provided a possible path for
implementing video orientation, which is a feature we should probably
support eventually. The important part is that it would be for free for
VOs that support it, and would work even with hardware decoding.
But for now get rid of it. It's useless, trivial, stands in the way, and
supporting video orientation would require solving other problems first.
The previous API worked under the assumption that download_image is
always called after map_image. In practice this is true, but it's better
to have a more generic API that doesn't depend on the order in which the
functions are called.
The hwdec driver can be loaded, even if it's not used (e.g. when playing
a file with no hardware decoding after one with it enabled).
Also, check whether dlimage is NULL. Since this will call into the
native hwdec API, there's a chance a driver could fail doing this, so
it's better to check the return value, even if this case currently can't
happen.
mpv was hardcoded to always consider the right Alt key as Alt Gr, but
there are particular combinations of platforms and keyboard layouts
where it's more convenient to treat the right Alt as a keyboard modifier
just like the left one.
Fixes #388
The harder work was done in the previous commits. After that this feature comes
out almost for free.
The only problem is I can't get the textures created with CGLTexImageIOSurface2D
to download properly, thus the code performs download using some CoreVideo APIs.
If someone knows why download of textures created with CGLTexImageIOSurface2D
doesn't work please contact me :)
This adds support for packed YUV formats (YUYV and UYVY) using the
extension GL_APPLE_rgb_422. While supporting these formats on their own
is not that important (considering most video is planar YUV), they are
used for interoperability with IOSurfaces.
The next commit will use these formats to render VDA hardware decoded
frames through IOSurface and OpenGL interoperability.
This allows vo_opengl to use GL_TEXTURE_RECTANGLE textures, either by
enabling it with the 'rectangle-textures' sub-option, or by having a
hwdec backend force it. By default it's off.
The _only_ reason we're adding this is because VDA can export rectangle
textures only.
This is not strictly needed anymore. (On the other hand, it's not really
possible to do hw decoding with vo_null, because the VO is still
responsible for opening the hw decoder API, but that's another story.)
There are some use cases for this. For example, you can use it to set
defaults of automatically inserted filters (like af_lavrresample). It's
also useful if you have a non-trivial VO configuration, and want to use
--vo to quickly change between the drivers without repeating the whole
configuration in the --vo argument.
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks in certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get
This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.
Mostly untested; it does compile with Libav 9.9.
The old ffmpeg vdpau support code uses separate vdpau pixel formats for
each decoder (pretty much because mplayer's architecture sucked), which
just gets in the way. Force the old decoder's output to IMGFMT_VDPAU,
and remove IMGFMT_IS_VDPAU() where we can remove it.
This should completely remove the differences between the old and new
vdpau decoder outside of the decoder.
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.
Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
This is not needed anymore, because we decided that the PAR of the
decoded video matters, and not the PAR of the filtered video that
arrives at the VO.
This means most code accessing this struct must now include hwdec.h
instead of dec_video.h. I just put it into dec_video.h at first because
I thought a separate file would be a waste, but it's more proper to do
it this way, as there are too many files which include dec_video.h only
to get the mp_hwdec_info definition.
This initialized only the load_api and load_api_ctx fields, and left the
other fields as they were. This failed with vf_vavpp, which assumed all
fields are initialized.
This commit adds a new build system based on waf. configure and Makefile
are deprecated effective immediately and someday in the future they will be
removed (they are still available by running ./old-configure).
You can find how the choice for waf came to be in `DOCS/waf-buildsystem.rst`.
TL;DR: we couldn't get the same level of abstraction and customization with
other build systems we tried (CMake and autotools).
For guidance on how to build the software now, take a look at README.md
and the cross compilation guide.
CREDITS:
This is a squash of ~250 commits. Some of them are not by me, so here is the
deserved attribution:
- @wm4 contributed some Windows fixes, renamed configure to old-configure
and contributed to the bootstrap script. Also, GNU/Linux testing.
- @lachs0r contributed some Windows fixes and the bootstrap script.
- @Nikoli contributed a lot of testing and discovered many bugs.
- @CrimsonVoid contributed changes to the bootstrap script.
When blending OSD and subtitles onto the video, we write bogus alpha
values. This doesn't normally matter, because these values are normally
unused and discarded. But at least on Wayland, the alpha values are used
by the compositor and lead to transparent windows even with opaque
video in places where the OSD happens to use transparency.
(Also see github issue #338.)
Until now, the alpha basically contained garbage. The source factor
GL_SRC_ALPHA meant that alpha was multiplied with itself. Use GL_ONE
instead (which is why we have to use glBlendFuncSeparate()). This should
give correct results, even with video that has alpha. (Or at least it's
something close to correct, I haven't thought too hard how the
compositor will blend it, and in fact I couldn't manage to test it.)
If glBlendFuncSeparate() is not available, fall back to glBlendFunc(),
which does the same as the code did before this commit. Technically, we
support GL 1.1, but glBlendFuncSeparate is 1.4, and I guess we should
try not to crash if vo_opengl_old runs on a system with GL 1.1 drivers
only.
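A sketch of the resulting blend setup (the destination alpha factor here
follows standard "over" compositing and is an assumption on my part):

    if (gl->BlendFuncSeparate) {
        // RGB blended as before; alpha accumulated with GL_ONE as
        // source factor instead of GL_SRC_ALPHA.
        gl->BlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                              GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    } else {
        // GL 1.1 fallback: the old behavior.
        gl->BlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }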
This uses vdpau OpenGL interop to convert a vdpau surface to a texture.
Note that this is a bit weak and primitive. Deinterlacing (or any other
form of vdpau postprocessing) is not supported. vo_opengl chroma scaling
and chroma sample position are not supported. Internally, the vdpau
video surfaces are converted to a RGBA surface first, because using the
video surfaces directly is too complicated. (These surfaces are always
split into separate fields, and the vo_opengl core expects progressive
frames or frames with weaved fields.)
Instead of checking for resolution and image format changes, always
fully reinit on any parameter change. Let init_video do all required
initializations, which simplifies things a little bit.
Change the gl_video/hardware decoding interop API slightly, so that
hwdec initialization gets the full image parameters.
Also make some cosmetic changes.
Video has up to 4 textures, if you include obscure formats with alpha.
This means alpha formats could always overwrite the first scaler
texture, leading to corrupted video display. This problem was recently
brought to light, when commit 571e697 started to explicitly unbind all 4
video textures, which broke rendering for non-alpha formats as well.
Fix this by reserving the correct number of texture units.
VA-API's OpenGL/GLX interop is pretty bad and perhaps slow (it renders a
X11 pixmap into a FBO, has to go over X11, and probably involves one or
more copies), and this code serves more as an example, rather than for
serious use. On the other hand, this might work much better than
vo_vaapi, even if slightly slower.