Rename vo_get_src_dst_rects() to mp_get_src_dst_rects() and make it
independent from the VO (it now takes a comical number of parameters to
pass all required state). Add a convenience wrapper with the name
vo_get_src_dst_rects() to vo.c. Replace all aspdat and vo usages with
immediate parameters.
Functionally, nothing should change, except that the window size is
clamped to a minimum of size 1 much earlier, and some log messages
change the prefix (don't bother with vo.vo_log stuff).
The plan is to make all the code in aspect.c independent from vo.c,
which should make the code easier to understand, will allow removal of
vo->aspdat, and reduces the amount of code that accesses weird mutable
struct vo fields.
vo->aspdat is basically an outdated version of vo->params, plus some
weirdness. Get rid of it, which will allow further cleanups and which
will make multithreading easier (less state to care about).
Also, simplify some VO code by using mp_image_set_attributes() instead
of caring about display size, colorspace, etc. manually. Add the
function osd_res_from_image_params(), which is often needed in the case
OSD renders into an image.
Do two things:
1. add locking to struct osd_state
2. make struct osd_state opaque
While 1. is somewhat simple, 2. is quite horrible. Lots of code accesses
lots of osd_state (and osd_object) members. To make sure everything is
accessed synchronously, I prefer making osd_state opaque, even if it
means adding pretty dumb accessors.
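As a rough illustration of the accessor approach (a generic pthread sketch with made-up field and function names, not mpv's actual osd_state members):

    #include <pthread.h>

    // Hypothetical state; in mpv the real struct stays opaque and only
    // accessors like the one below are visible to callers.
    struct osd_state_example {
        pthread_mutex_t lock;
        double vo_pts;
    };

    // "Dumb" accessor: every read goes through the lock, so a VO running
    // in its own thread can query OSD state safely.
    double osd_get_vo_pts(struct osd_state_example *osd)
    {
        pthread_mutex_lock(&osd->lock);
        double pts = osd->vo_pts;
        pthread_mutex_unlock(&osd->lock);
        return pts;
    }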
All of this is meant to allow running VOs in their own threads.
Eventually, VOs will request OSD on their own, which means osd_state
will be accessed from foreign threads.
This is a bit of a hack, but in order to prevent TranslateMessage from
seeing WM_KEYDOWN messages that we already know how to decode, move the
decoding logic to the event loop. This should fix #476, since it stops
the generation of extraneous WM_CHAR messages that were triggering more
than one action on keydown.
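A minimal sketch of the idea (the key-decoding helper is hypothetical; mpv's actual event loop differs):

    #include <windows.h>
    #include <stdbool.h>

    // Hypothetical helper: returns true if mpv decoded the key itself.
    bool handle_mpv_keydown(const MSG *msg);

    static void run_event_loop(void)
    {
        MSG msg;
        while (GetMessageW(&msg, NULL, 0, 0) > 0) {
            // Only let TranslateMessage generate WM_CHAR for keys we did
            // not decode ourselves; this avoids duplicate key actions.
            if (!(msg.message == WM_KEYDOWN && handle_mpv_keydown(&msg)))
                TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
    }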
Doesn't make any sense anymore. X11 (which was mentioned in the manpage)
autodetects it, and everything else ignored the option values.
Since for incomprehensible reasons the backends and vo.c still need to
exchange information about the screensize using the option fields,
they're not removed yet.
For some reason, this made all VO backends both set the screen
resolution in opts->screenwidth/height, and call
aspect_save_screenres(). Remove the latter. Move the code to calculate
the PAR-corrected window size from aspect.c to vo.c, and make it so that
the monitor PAR is recalculated when it makes sense.
When using --monitoraspect, but either the screen width or height or
both are unknown, a fallback is applied. This is a completely useless
obscure corner case that's going to help nobody, so get it out of the
way.
For a long time the cocoa backend set the xinerama_x/y and used dx/dy from the
VO instance. This somewhat worked with some workarounds but wasn't really
what was supposed to be happening. Moreover 27e4360, which touched this
workaround, introduced a regression.
New code doesn't set the xinerama_x/y values so that dx/dy are offsets in the
current screen (not a virtual screen composed of all the screens). The screen
reference detected during VOCTRL_UPDATE_SCREENINFO is also passed down to the
window initialization code.
Fixes #472
Like with the previous commit, this is probably not needed, but it's
unclear whether that really is the case. Most likely, it used to be
needed by some demuxer, and now the only demuxer left that could
_possibly_ trigger this is demux_mkv.c.
Note that mjpeg is the only decoder that reads the extra_huff option,
and nothing in libavformat actually sets the option. So maybe it's
fundamentally not needed anymore.
This case can't happen with the normal realvideo codepath in
demux_mkv.c, because the code would error out if the extradata is too
small, and everything would be broken anyway in the case the vd_lavc.c
condition is actually triggered.
It still might happen with VfW-muxed realvideo in Matroska, though.
Basically, I'm hoping this doesn't matter anyway, and that the vd_lavc.c
code was for other old demuxers, like demux_avi or demux_rm. Following
the commit history, it's not really clear for what demuxer this code
was added.
On X11, if no wayland compositor is running, wl_list_init() will never
be called. This will cause destroy_display() to segfault when trying to
iterate over the list.
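A small sketch of the fix's idea (element type and names are made up for illustration):

    #include <stdlib.h>
    #include <wayland-util.h>

    struct output_entry {           // hypothetical list element
        struct wl_list link;
    };

    static struct wl_list outputs;

    static void init_state(void)
    {
        // Initialize the list unconditionally, before trying to connect to
        // a compositor, so cleanup can iterate it even if connecting fails.
        wl_list_init(&outputs);
    }

    static void destroy_state(void)
    {
        struct output_entry *e, *tmp;
        wl_list_for_each_safe(e, tmp, &outputs, link) {
            wl_list_remove(&e->link);
            free(e);
        }
    }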
The user_data is passed on add_listener and can later be changed with
set_user_data. But because we don't want to change it later and because it is
the same object, remove the set_user_data call.
This might be a copy&paste leftover from the initial draft for the wayland
backend.
I added enough logic to never set ontop or fullscreen twice.
This commit also keeps the size of the video if multiple videos are played.
If the ratio differs, the width is kept the same and only the height
changes.
libwayland-client contains the following code [1]:
    runtime_dir = getenv("XDG_RUNTIME_DIR");
    if (!runtime_dir) {
        fprintf(stderr,
                "error: XDG_RUNTIME_DIR not set in the environment.\n");
This means this message will unconditionally and unavoidably be printed
if XDG_RUNTIME_DIR is not set. Since mpv is a terminal program, and we
want to avoid unnecessary output, work it around by not attempting to
use wayland if this environment variable is not set.
[1] http://cgit.freedesktop.org/wayland/wayland/tree/src/wayland-client.c#n636
(cd0dccd01e16fa404e03974d30ded3aebdb1c4bc)
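The workaround amounts to something like this (a sketch, not mpv's exact code):

    #include <stdlib.h>
    #include <wayland-client.h>

    // Skip Wayland entirely when the compositor socket cannot exist; this
    // avoids libwayland-client's unconditional stderr message.
    static struct wl_display *try_wayland_connect(void)
    {
        if (!getenv("XDG_RUNTIME_DIR"))
            return NULL;
        return wl_display_connect(NULL);
    }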
This commonly happens when initializing vo_opengl on an X11-only system.
Unfortunately, most wl_*_destroy() functions appear not to accept NULL
pointers, making partial deinitialization a pain: you have to add your
own NULL checks everywhere to avoid crashes.
xkb.context is deinitialized separately, because you can initialize it
just fine, even if the rest of input initialization fails.
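For illustration, a partial-deinit helper along these lines (struct and member names are examples only):

    #include <wayland-client.h>

    struct wl_state {                    // illustrative subset of the state
        struct wl_display *display;
        struct wl_registry *registry;
        struct wl_compositor *compositor;
        struct wl_shm *shm;
    };

    static void partial_uninit(struct wl_state *wl)
    {
        // Most wl_*_destroy() functions don't tolerate NULL, so every
        // member has to be checked individually.
        if (wl->shm)
            wl_shm_destroy(wl->shm);
        if (wl->compositor)
            wl_compositor_destroy(wl->compositor);
        if (wl->registry)
            wl_registry_destroy(wl->registry);
        if (wl->display)
            wl_display_disconnect(wl->display);
    }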
Because of this commit there were problems displaying the frames in the
right order.
This reverts commit 96e75d234a.
Conflicts:
    video/out/gl_wayland.c
    video/out/wayland_common.h
The changes in the vo_wayland_ontop function have no effect on the workaround.
Somehow the problem just disappeared. I guess it is because of the new control
function in gl_wayland.c where the resize happens immediately after the event
dispatch/flush.
Both X11 and Wayland support the same format for drag & drop operations
(text/uri-list), and the code for that was copied from x11_common.c to
wayland_common.c. Factor it out.
This solves the issue where we would not receive any frame events. The
difference to my earlier tests is that now it looks like eglSwapBuffers uses
its own event queue or something similar along those lines, because the
performance is the same as without any redraw callback.
At the moment there are visual glitches when we resize the window. This happens
because in wayland there is a special function for resizing EGL windows.
To prevent the glitches move the egl_context to the wayland state in
wayland_common.h and add a new control function to gl_wayland.c to wrap the
vo_wayland_control function to check for resize events.
With the new control wrapper the glitches are gone and the resizing is fluid.
The reason a segmentation fault happened here was that we couldn't get the
requested minor version. The major version is enough for differentiating
between OpenGL 3 and OpenGL 2. If it fails there is still a fallback to any
version available.
Also add a warning if we use the fallback.
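A sketch of the version-selection idea (using the generic EGL_CONTEXT_CLIENT_VERSION attribute; mpv's actual gl_wayland.c code may differ):

    #include <stdio.h>
    #include <EGL/egl.h>

    static EGLContext create_gl_context(EGLDisplay dpy, EGLConfig cfg)
    {
        eglBindAPI(EGL_OPENGL_API);   // desktop GL, not GLES

        // Ask for the major version only; requiring an exact minor version
        // can fail even though a perfectly usable context exists.
        EGLint gl3[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, gl3);
        if (ctx != EGL_NO_CONTEXT)
            return ctx;

        fprintf(stderr, "warning: falling back to GL 2\n");
        EGLint gl2[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, gl2);
    }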
Note that we don't try to be clever about detecting the files as
subtitles: we just check the file extension. We could go all the way and
check the files by opening them with a demuxer, but that would probably
do more harm than good.
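Conceptually the check is no more than this (extension list shortened for illustration):

    #include <stdbool.h>
    #include <string.h>
    #include <strings.h>

    static bool looks_like_subtitle(const char *filename)
    {
        // Purely extension-based: no demuxer probing of the file contents.
        static const char *const exts[] = {"srt", "ass", "ssa", "sub", NULL};
        const char *dot = strrchr(filename, '.');
        if (!dot)
            return false;
        for (int n = 0; exts[n]; n++) {
            if (strcasecmp(dot + 1, exts[n]) == 0)
                return true;
        }
        return false;
    }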
Drag and drop is pretty complicated (just note how the number of Atoms
in use almost doubles), so I'm not sure whether this works everywhere.
This has been written by looking at the specification [1] or what looks
like the specification, and some external example code [2]. (The latter
one has no code license, but we didn't copy any code.)
We completely ignore the "requirement" of the spec. that the filename
"must" include username and hostname, e.g. "file://user@host/path/file".
In theory, this is required because X is network transparent, but at
this point the so called network transparency is a complete joke, and
Konqueror for one didn't include hostnames in "file://" URIs.
Tested with konqueror as drop source.
[1] http://www.newplanetsoftware.com/xdnd/
[2] http://www.edwardrosten.com/code/dist/x_clipboard-1.1/paste.cc
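For reference, the text/uri-list handling boils down to something like this (a standalone sketch, not the x11_common.c code):

    #include <stdio.h>
    #include <string.h>

    static void handle_uri_line(char *line)
    {
        // One URI per line; lines starting with '#' are comments.
        if (line[0] == '#' || line[0] == '\0')
            return;
        line[strcspn(line, "\r\n")] = '\0';   // strip trailing CR/LF
        if (strncmp(line, "file://", 7) == 0) {
            // Skip the optional user@host part; keep only the path.
            char *path = strchr(line + 7, '/');
            if (path)
                line = path;
        }
        printf("dropped: %s\n", line);
    }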
Set the flag CODEC_FLAG_OUTPUT_CORRUPT by default. Note that there is
also CODEC_FLAG2_SHOW_ALL, which is older, but this seems to be ffmpeg
only.
Note that whether you want this enabled depends on the user. Some might
prefer that only good frames are output, while others want the decoder
to try as hard as possible to output _anything_. Since mplayer/mpv is
rather the kind of player that tries hard instead of being "clever", set
the new default to override libavcodec's default.
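The change itself is essentially a one-liner on decoder init (sketch; newer FFmpeg spells the flag AV_CODEC_FLAG_OUTPUT_CORRUPT):

    #include <libavcodec/avcodec.h>

    static void set_corrupt_default(AVCodecContext *avctx)
    {
        // Hand out frames even if they may be corrupted, instead of
        // dropping them; the user can still override this with an option.
        avctx->flags |= CODEC_FLAG_OUTPUT_CORRUPT;
    }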
A nice way to test this is switching video tracks. Since mpv doesn't
wait for the next key frame, it'll start feeding the decoder with a
packet from the middle of the stream.
This reverts commit 877303aaa9.
The OpenGL 2.1 fallback for vo_opengl didn't work. Two things come
together: 1. trying to create an OpenGL 3.0 context will fail with
a GLXBadFBConfig error, and 2. X errors are fatal by default. Since
the reverted commit removed the X error handler, the mpv process was
killed, instead of continuing for the fallback.
(Note that this commit is not an exact inverse commit, since mp_msg
changed, but it does about the same thing.)
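The general pattern the revert restores looks roughly like this (a sketch; mpv's handler also logs the error text):

    #include <stdbool.h>
    #include <X11/Xlib.h>

    static bool ctx_error;

    // Non-fatal handler: record the error instead of letting Xlib's
    // default handler terminate the process.
    static int x11_error_handler(Display *dpy, XErrorEvent *ev)
    {
        ctx_error = true;
        return 0;
    }

    // Usage around GLX context creation:
    //   XErrorHandler old = XSetErrorHandler(x11_error_handler);
    //   /* glXCreateContextAttribsARB(...) may raise GLXBadFBConfig */
    //   XSync(dpy, False);            // make sure a pending error arrives
    //   XSetErrorHandler(old);
    //   if (ctx_error) { /* fall back to a legacy 2.1 context */ }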
query_format was setting state even if it wasn't the correct thing to do. Somehow
it worked by pure luck (until commit e6e6b88b6d).
Fix the initialization by setting state inside of reconfig.
If the utf8 string used to create the NSString for title was invalid utf8,
-stringWithUTF8String returned nil and triggered an assertion in Cocoa's
framework code.
Sanitize the utf8 string, and if the sanitization wasn't enough, just avoid
crashing by not setting a title.
Fixes #406
How embarrassing...
This code is inactive for all VOs other than vo_vdpau. For vo_vdpau,
this caused various issues, such as stuttering after about an hour of
running mpv; see github issue #403.
Note that this will print a difference even with perfect sync, because
the code queues the frames _between_ vsync, probably for error margin
(though I don't understand why it uses the exact values chosen).
There's a single mp_msg() in path.c, but all path lookup functions seem
to depend on it, so we get a rat-tail of stuff we have to change. This
is probably a good thing though, because we can have the path lookup
functions also access options, so we could allow overriding the default
config path, or ignore the MPV_HOME environment variable, and such
things.
Also take the chance to consistently add talloc_ctx parameters to the
path lookup functions.
Also, this change causes a big mess on configfiles.c. It's the same
issue: everything suddenly needs a (different) context argument. Make it
less wild by providing a mp_load_auto_profiles() function, which
isolates most of it to configfiles.c.
Always pass around mp_log contexts in the option parser code. This of
course affects all users of this API as well.
In stream.c, pass a mp_null_log, because we can't do it properly yet.
This will be fixed later.
Until now, there were two functions to add input sources (stuff like
stdin input, slave mode, lirc, joystick). Unify them to a single
function (mp_input_add_fd()), and make sure the associated callbacks
always have a context parameter.
Change the lirc and joystick code such that they store their state
in a context struct (probably worthless), and use the new mp_msg
replacements (the point of this refactoring).
Additionally, get rid of the ugly USE_FD0_CMD_SELECT etc. ifdeffery in
the terminal handling code.
This ended up a little bit messy. In order to get a mp_log everywhere,
mostly make use of the fact that va_surface already references global
state anyway.
This removes the messages printed on unknown pixel formats.
Passing a mp_log to them would be too messy. Actually, this is a good
change, because in the past we often had trouble with these messages
printed too often (causing terminal spam etc.), and printing warnings or
error messages on the caller sides is much cleaner.
vd_lavc.c had a change earlier to print an error message if a decoder
outputs an unsupported pixel format.
This requires the caller to provide a mp_log in order to see error
messages. Unfortunately we don't do this in most places, but I guess we
have to live with it.
lcms2 has a global message callback for error reporting. If you don't
set this, these error messages are silently thrown away. I think we
still want the error messages, so we have to do dumb stuff to avoid
clashes. This doesn't handle the case if another library in the same
process sets the message callback, but at least this should exclude
possible memory errors when running multiple instances of mpv.
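The hookup itself is just the global callback registration (a sketch; mpv routes the text into its own log rather than stderr):

    #include <stdio.h>
    #include <lcms2.h>

    static void lcms2_error_handler(cmsContext ctx, cmsUInt32Number code,
                                    const char *msg)
    {
        fprintf(stderr, "lcms2: %s\n", msg);
    }

    static void install_lcms2_handler(void)
    {
        // Global per-process callback; without it lcms2 errors are
        // silently thrown away.
        cmsSetLogErrorHandler(lcms2_error_handler);
    }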
This has similar problems as the ALSA message callback, though in theory
we could use the Display handle to find the right mpv instance from the
global callback. It still wouldn't work if another library happens to
set the error handler at the same time. There doesn't seem much of an
advantage overriding the error handler (though it used to be required),
so remove it.
Apparently this has been broken for a year or so. There were three
reasons for the breakage here:
1. The window dragging hack prevented any DOWN event from
passing through since it always returned before we even got
the button.
2. The window style had CS_DBLCLKS in its flags, so we did not
get any DOWN events when the OS had detected a double click
(instead expecting us to handle a DBL event).
3. We never sent any mouse buttons when mouse movement handling
was disabled.
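One way to address point 2 is to register the window class without CS_DBLCLKS (a sketch with illustrative names; whether mpv drops the flag or handles the DBLCLK messages instead isn't shown here):

    #include <windows.h>

    static ATOM register_mpv_class(HINSTANCE inst, WNDPROC wndproc)
    {
        // Without CS_DBLCLKS the OS delivers a plain WM_LBUTTONDOWN for
        // every press instead of turning the second one into WM_LBUTTONDBLCLK.
        WNDCLASSEXW wc = {
            .cbSize = sizeof(WNDCLASSEXW),
            .style = CS_HREDRAW | CS_VREDRAW,
            .lpfnWndProc = wndproc,
            .hInstance = inst,
            .lpszClassName = L"mpv-window",
        };
        return RegisterClassExW(&wc);
    }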
In my opinion, config.h inclusions should be kept to a minimum. MPlayer
code really liked including config.h everywhere, though, even in often
used header files. Try to reduce this.
Since m_option.h and options.h are extremely often included, a lot of
files have to be changed.
Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
The tmsg stuff was for the internal gettext() based translation system,
which nobody ever attempted to use and thus was removed. mp_gtext() and
set_osd_tmsg() were also for this.
mp_dbg was once enabled in debug mode only, but since we have a log level
for enabling debug messages, it seems utterly useless.
Add the mp_get_user_path() function, and make it expand special path
prefixes. Use it for some things in mpv which take filenames
(--input-config, --screenshot-template, opengl icc-profile suboption).
This allows accessing files in the mpv config dir without hardcoding the
config path by prefixing the path with ~~/. Details see manpage
additions.
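Conceptually the expansion works like this (a standalone sketch, not mpv's mp_get_user_path() implementation, which knows more prefixes and uses talloc):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    // "~~/foo" -> "<config dir>/foo"; anything else is returned unchanged.
    static char *expand_user_path(const char *path, const char *confdir)
    {
        if (strncmp(path, "~~/", 3) != 0)
            return strdup(path);
        size_t len = strlen(confdir) + 1 + strlen(path + 3) + 1;
        char *res = malloc(len);
        if (res)
            snprintf(res, len, "%s/%s", confdir, path + 3);
        return res;
    }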
Use the scaled video size (i.e. as shown on the window) as reference for
zoom. This is the easiest way to fix different width/height scale
factors as they happen when zooming video with a pixel aspect ratio
other than 1:1.
Also fix the unscaled mode, so that it 1. doesn't scale even with
--video-zoom, and 2. doesn't scale by small amounts when the video is
cropped by making the window smaller than the video.
This code tried to pass a still monotonic (even if not strictly
monotonic) PTS to the player, but as a result it remained stuck on
the PTS before a reset (since the PTS was lower).
Until now, the player didn't care to drain frames on video reconfig.
Instead, the VO was reconfigured (i.e. resized) before the queued frames
finished displaying. This can for example be observed by passing
multiple images with different size as mf:// filename. Then the window
would resize one frame before the image with the new size is displayed. With
--vo=vdpau, the effect is worse, because this VO queues more than 1
frame internally.
Fix this by explicitly draining buffered frames before video reconfig.
Raise the display time of the last frame. Otherwise, the last frame
would be shown for a very short time only. This usually doesn't matter,
but helps when playing image files. This is a byproduct of frame
draining, because normally, video timing is based on the frames queued
to the VO, and we can't do that with frames of different size or format.
So we pretend that the frame before the change is the last frame in
order to time it. This code is incorrect though: it tries to use the
framerate, which often doesn't make sense. But it's good enough to test
this code with mf://.
This gets rid of the vf_vo pseudo-filter. It ends the idea of MPlayer's
architecture that the VO is just a (terminating) video filter. It didn't
really work for us with respect to video timing (the "end" of the video
chain isn't really made for video timing, and making it do so would be
awkward), and now we're removing it entirely. We will be able to fix
some things, such as properly draining video on reconfiguration.
I don't think this has any reason to exist. It's likely that this used
to be required by the old direct rendering infrastructure. (See
git blame output.)
This should help fixing some issues (like not draining video frames
correctly on reinit), as well as decoupling the decoder, filter chain,
and VO code.
I also wanted to make the hardware video decoding fallback work properly
if software-only video filters are inserted. This currently has the
issue that the fallback is too violent, and throws away a bunch of
demuxer packets needed to restart software decoding properly. But
keeping "backup" packets turned out as too hacky, so I'm not doing this,
at least not yet.
Libav 9 still uses the unprefixed PIX_FMT_... symbols, but they will
probably be removed some time in the future.
There are some other deprecations we have yet to take care of, but
there are no clear replacements yet.
Remove the inconsistent, duplicated, and insufficient scale filter
insertion code, and do it in one place instead. This also compensates
for the earlier removal of vf_match_csp() (which was in fact duplicated
code).
The algorithm to determine where to insert a filter etc. is probably the
same, though it also comes with some changes that should make debugging
easier when trying to figure out why a chain is failing to configure.
Add an "in" pseudo filter, which makes insertion of conversion filters
easier. Also change the vf->reconfig signature. At a later point, I'll
probably change format negotiation such that the generic filter code
will choose the output format, so having separate in and out params will
be useful.
Reason: I never liked it being recursive. Generally, this seems to
cause more problems than it solves, and is less flexible for access
outside of the chain.
I don't think we need these flags anymore. Simplify the code and get rid
of the vf_format struct.
There still is the vf_format.configured field, but this can be replaced
by checking for a valid image format.
This adds vf_chain, which unlike vf_instance refers to the filter chain
as a whole. This makes the filter API less awkward, and will allow
handling format negotiation better.
This function improves automatic filter insertion, but this really
should be done by the generic filter code. Remove vf_match_csp() and all
code using it as preparation for that.
This commit temporarily makes handling of filter insertion worse for
now, but it will be fixed with the following commits.
They didn't do anything.
vf_screenshot.c actually did release the previous image, but that's not
really required. At worst you could take a screenshot and get an old
frame when there's no new frame yet.
The --flip option flipped the image upside-down, by trying to use VO
support, or if not available, by inserting a video filter. I'm not sure
why it existed. Maybe it was important in ancient times when VfW based
decoders output an image this way (but even then, flipping an image is a
free operation by negating the stride).
One nice thing about this is that it provided a possible path for
implementing video orientation, which is a feature we should probably
support eventually. The important part is that it would be for free for
VOs that support it, and would work even with hardware decoding.
But for now get rid of it. It's useless, trivial, stands in the way, and
supporting video orientation would require solving other problems first.
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the kind of awkward mp_get_av_frame_pkt_ts() function by
mp_pts_from_av(), which simply converts timestamps in a way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
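The conversion idea, sketched (names and exact signatures in mpv may differ; the no-timebase branch keeps the old reinterpret-cast behavior):

    #include <math.h>
    #include <stdint.h>
    #include <string.h>
    #include <libavutil/avutil.h>
    #include <libavutil/rational.h>

    static double pts_from_av(int64_t av_pts, const AVRational *tb)
    {
        if (av_pts == AV_NOPTS_VALUE)
            return NAN;                    // mpv uses MP_NOPTS_VALUE here
        if (tb && tb->num && tb->den)
            return av_pts * (double)tb->num / tb->den;
        double d;
        memcpy(&d, &av_pts, sizeof(d));    // legacy: double stored in int64_t
        return d;
    }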
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
The vf_eq context contains a very large lookup table, and the method of
setting default values caused the vf_eq context to be included in the
compiled code.
All filters now either use the generic option parser, or don't have
options. This finally finishes a transition started in 2003 (see git
commit 33b62af947).
Why are MPlayer devs so monumentally lazy? Sorry, but this takes the
cake. You had 10 years.
Mostly backwards compatible, we don't change much because we just want
to get rid of the legacy option string handling.
You can't pass an aspect as first argument anymore.
Apparently you can get this with: stereo3d=ab[2]{l,r}:sbs[2]{l,r}
So it seems the filter is redundant and can be removed.
Also see FFmpeg commit 2f11aa141a01.
Unfortunately, this forces filtering both luma and chroma, because
otherwise we'd have to deal with libavfilter's vf_noise weird handling
of YUV vs. RGB formats. If we were to filter luma only, for example, it would
filter only red in RGB mode, because it goes by component and there's no way to
distinguish YUV and RGB by just using the filter's options.