The code removed from handle_input_and_seek_coalesce() did two things:
1. If there's a queued seek, stop accepting non-seek commands, and delay
them to the next playloop iteration.
2. If a seek is executing (i.e. the seek was unqueued, and now it's
trying to decode and display the first video frame), stop accepting
seek commands (and in fact all commands that were queued after the
first seek command). This logic is disabled if seeking started more
than 300ms ago (to avoid starvation).
I'm not sure why 1. would be needed. It's still possible that a command
executed immediately after a seek command sees a "seeking in progress"
state, because 1. affects queued seeks only, and not seeks in progress.
Drop this code, since it can easily lead to input starvation, and I'm
not aware of any disadvantages.
The logic in 2. is good to make seeking behave much better, as it
guarantees that the video display is updated frequently. Keep the core
idea, but implement it differently. Now this logic is applied to seeks
only. Commands after the seek can execute freely, and like with 1., I
don't see a reason why they couldn't. However, in some cases, seeks are
supposed to be executed instantly, so queue_seek() needs an additional
parameter to signal the need for immediate update.
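A minimal sketch of what this could look like (hypothetical struct and
field names; the real queue_seek() in playloop.c differs):

    struct seek_params {
        enum seek_type type;
        double amount;
        bool exact;
        bool immediate;     // true: execute on the next playloop iteration
    };

    // Hedged sketch, not mpv's actual code: the new flag merely tells the
    // playloop whether it may delay executing the queued seek until the
    // frame from the previous seek has been displayed.
    static void queue_seek(struct MPContext *mpctx, enum seek_type type,
                           double amount, bool exact, bool immediate)
    {
        mpctx->seek = (struct seek_params) {
            .type = type,
            .amount = amount,
            .exact = exact,
            .immediate = immediate,
        };
    }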
One nice thing is that commands like sub_seek automatically profit from
the seek delay logic. On the other hand, hitting chapter seek multiple
times still does not update the video on chapter boundaries (as it
should).
Note that the main goal of this commit is actually to simplify the
input processing logic and to allow all commands to be executed
immediately.
Legacy GL context creation (glXCreateContext) explicitly requires an X
visual, while the modern one (glXCreateContextAttribsARB) does not for
some reason. So fail only on the legacy code path if we don't find a
visual. Note that vo_x11_config_vo_window() will select a default visual
if a NULL visual is passed to it.
This fixes issue #504. For some reason, glXChooseFBConfig() will return
an fbconfig with no associated visual. (I'm not sure if this is allowed.
They don't always have a visual, but since GLX_X_RENDERABLE is set
and GLX_DRAWABLE_TYPE is (implicitly) set to GLX_WINDOW_BIT, why would
there be no visual?)
Even worse, a test program seems to show that a 16 bit fbconfig is
selected (instead of 24/32 bit), which doesn't sound nice at all. Since
there _are_ better fbconfigs available, glXChooseFBConfig() should
normally sort them by quality, and return the better ones first. It's
worth noting that this function should also prefer GLX_TRUE_COLOR
over anything else, although this comes last in the sort order.
Whatever is going on, requesting GLX_X_VISUAL_TYPE with GLX_TRUE_COLOR
seems to fix it.
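Roughly, the attribute list now looks like this (a hedged sketch, with
display/screen assumed to be available; the actual attributes in the
GLX backend differ):

    int attribs[] = {
        GLX_X_RENDERABLE, True,
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_X_VISUAL_TYPE, GLX_TRUE_COLOR,   // the newly added request
        GLX_RED_SIZE, 1,
        GLX_GREEN_SIZE, 1,
        GLX_BLUE_SIZE, 1,
        GLX_DOUBLEBUFFER, True,
        None
    };
    int n;
    GLXFBConfig *fbcs = glXChooseFBConfig(display, screen, attribs, &n);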
There is some logic to discard packets from streams that are not
selected. Run the metadata update code before this, just to make 100%
sure that no metadata updates can be lost when streams are deselected.
(I'm not sure why this logic would be needed, since both libavformat and
the generic demuxer code do this already. But a quick test shows that
av_read_frame() can return a packet from a stream even if the stream has
AVStream.discard set to AVDISCARD_ALL. This happened after stream
switching. Maybe libavformat doesn't discard already queued packets.)
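In effect, the ordering is now roughly this (a sketch with hypothetical
helper names and an assumed avfc/demuxer context, not the actual
demux_lavf.c code):

    AVPacket pkt;
    while (av_read_frame(avfc, &pkt) >= 0) {
        AVStream *st = avfc->streams[pkt.stream_index];

        update_metadata(demuxer);            // run this first, unconditionally

        if (st->discard == AVDISCARD_ALL) {  // stream not selected
            av_packet_unref(&pkt);
            continue;
        }
        add_packet_to_stream(demuxer, &pkt);
    }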
Instead of printing lines like:
Demuxer info GENRE changed to Alternative Rock
Just output all tags once they change. The assumption is that it's rare
for only a single tag to change; in the common case, all tags change at once.
This changes tag updates to use polling. This could be fixed later,
although the ICY stuff makes it a bit painful, so maybe it will remain
this way.
Also remove DEMUXER_CTRL_UPDATE_INFO. This was intended to check for tag
updates, but now we use a different approach.
Use an arbitrary constant instead, which is as good as PATH_MAX.
This helps us to avoid having to think about pull request #523.
Also fix a case where a potentially signed char was passed to isspace().
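The isspace() issue in miniature: passing a plain char with a negative
value (e.g. a UTF-8 byte >= 0x80 on a platform where char is signed) is
undefined behavior, so cast through unsigned char first.

    #include <ctype.h>

    static int is_ws(char c)
    {
        // The cast avoids UB when char is signed and c is negative.
        return isspace((unsigned char)c);
    }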
This was done incorrectly in the previous commit: the fallback size used
the window size as requested with the first config call, which is the
size of the hidden window in the vo_opengl case. (That damn hidden
window again...)
This code essentially does nothing. As far as I could find out, this
actually used to do something. Then it was removed with commit efe7c39f,
leaving some leftover code that didn't do anything useful. This happened
12 years ago!
Also remove a commented debug printf.
vo_opengl creates a hidden X11 window to probe the OpenGL context. It
must do that before creating a visible window, because VO creation and
VO config are separate phases.
There's a race condition involving the hidden window: when starting with
--fs, and then leaving fullscreen, the unfullscreened window is
sometimes set to the aspect ratio of the hidden window. I'm not sure why
the window itself ends up with the correct size (just corrupted by the wrong
aspect), but that's perhaps because the window manager is free to ignore
the size hint while honoring the aspect, or something equally messed up.
It turns out this happens because x11_common.c thinks the size of the
hidden window is the size of the unfullscreened window. This in turn
happens because vo_x11_update_geometry() reads the size of the hidden
window when called in vo_x11_fullscreen() (called from
vo_x11_config_vo_window()) when mapping the fullscreen window. At that
point, the window could be mapped, but not necessarily. If it's not
mapped, it will get the size of the unfullscreened window... I think.
One could fix this by actively waiting until the window is mapped. Try
to pick a less hacky approach instead, and never read the window size
until MapNotify is received.
vo_x11_create_window() needs a hack, because we'd possibly set the VO's
size to 0, causing e.g. vdpau to fail initialization. (It'll print
error messages until a proper resize is received.)
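A hedged sketch of the new approach (the actual x11_common.c event
handling is more involved, and the names here are illustrative):

    static void handle_x11_event(struct vo_x11_state *x11, XEvent *ev)
    {
        switch (ev->type) {
        case MapNotify:
            x11->window_hidden = false;      // from now on, sizes are trustworthy
            vo_x11_update_geometry(x11);
            break;
        case ConfigureNotify:
            if (!x11->window_hidden)         // ignore sizes until MapNotify arrived
                vo_x11_update_geometry(x11);
            break;
        }
    }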
This fixes a weird bug with aspect ratio handling. It has to do with
float handling: with -std=gnu99, gcc implicitly enables broken non-
standard semantics giving float variables excess precision. This can for
example make code like this fail in theory: "float a = 0.1; assert(a == a);"
While standard C allows excess precision _within_ expressions, it
requires truncation when storing float values in variables of types
"float" or "double". The "gnu99" mode breaks this. It can be unbroken by
using "c99", or by specifying -fexcess-precision=standard. The former
seems less likely to break compilers other than modern gcc. Note that
-ffloat-store would also fix this, but also makes float expressions less
efficient and less precise for no reason.
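The problem in miniature (whether this actually misbehaves depends on
the target, x87 vs. SSE math, and the optimization level):

    #include <assert.h>

    int main(void)
    {
        float a = 0.1;   // 0.1 is not exactly representable as float
        // With gnu99 excess-precision semantics on x87, one use of 'a' may be
        // read back truncated from memory while the other is still held in an
        // 80-bit register, so even this comparison can fail in theory.
        assert(a == a);
        return 0;
    }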
The code that mistakenly fails because of this is dec_video.c line 393.
It caused the container aspect to be ignored in some or all situations,
depending on how the compiler optimizes. For example, on gcc-4.6 with -Os,
the aspect is always ignored.
In the future, we should probably just get rid of storing aspects as floats.
Note that this still happens at the stream level, so we can't have
nice high-level behavior restricting seeking. Instead, if a seek leads
to the demuxer requesting data outside of the cached range, the seek
will simply fail. This might confuse the demuxer, and the resulting
behavior is not necessarily useful.
Note that this also doesn't try to skip data on a forward seek. This
would just freeze the stream with slow unseekable streams.
One nice thing is that stream.h has a separate function for merely
skipping data (separate from seeking forward), which is pretty useful
in this case: we want skipping of data to work, even if we reject
seeking forward by skipping data as too expensive. This probably is
or will be useful for demux_mkv.c.
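A hedged sketch of the distinction (the names don't match stream.h
exactly, and cache_read() is a hypothetical helper):

    // Seeking within the cache: fail if the target is not in the cached range.
    static bool cache_seek(struct stream_cache *c, int64_t pos)
    {
        if (pos < c->min_filepos || pos > c->max_filepos)
            return false;        // outside cached range -> seek fails
        c->read_filepos = pos;
        return true;
    }

    // Skipping: always allowed; just read and discard data until the target.
    static bool cache_skip(struct stream_cache *c, int64_t len)
    {
        char buf[4096];
        while (len > 0) {
            int chunk = len < (int64_t)sizeof(buf) ? (int)len : (int)sizeof(buf);
            int r = cache_read(c, buf, chunk);
            if (r <= 0)
                return false;
            len -= r;
        }
        return true;
    }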
Usually, you have to call pthread_cond_timedwait() in a loop (because it
can wake up sporadically). If this function is used by another higher
level function, which uses a relative timeout, we actually have to
reduce the timeout on each iteration - or, simpler, compute the
"deadline" at the beginning of the function, and always pass the same
absolute time to the waiting function.
Might be unsafe if the system time is changed. On the other hand, this
is a fundamental race condition with these APIs.
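For illustration, the pattern looks roughly like this (a sketch, not
mpv's actual wrapper):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    // Wait for *flag to become true, for at most timeout_sec seconds.
    static bool wait_for_flag(pthread_mutex_t *lock, pthread_cond_t *cond,
                              bool *flag, double timeout_sec)
    {
        // Compute the absolute deadline once, up front.
        struct timespec dl;
        clock_gettime(CLOCK_REALTIME, &dl);
        dl.tv_sec += (time_t)timeout_sec;
        dl.tv_nsec += (long)((timeout_sec - (double)(time_t)timeout_sec) * 1e9);
        if (dl.tv_nsec >= 1000000000L) {
            dl.tv_sec++;
            dl.tv_nsec -= 1000000000L;
        }

        pthread_mutex_lock(lock);
        while (!*flag) {
            // A spurious wakeup just re-enters the wait with the same absolute
            // deadline, so the remaining time shrinks automatically.
            if (pthread_cond_timedwait(cond, lock, &dl) == ETIMEDOUT)
                break;
        }
        bool ok = *flag;
        pthread_mutex_unlock(lock);
        return ok;
    }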
This used to work; I'm not sure when or why it regressed. When setting
AVProbeData.filename to NULL, libavformat will crash in rtp_probe() by
unconditionally accessing the string.
We used to set the filename to NULL to prevent probing by file extension
when we don't deem it necessary. Using an empty string also works for
this purpose.
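In effect (a sketch with an assumed probe_buf/probe_size; the actual
code in demux_lavf.c looks different):

    AVProbeData avpd = {
        .filename = "",        // was NULL; rtp_probe() can now safely inspect it
        .buf      = probe_buf,
        .buf_size = probe_size,
    };
    AVInputFormat *fmt = av_probe_input_format(&avpd, 1);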
Try to make it more intuitive by not requiring hex values. The new way
uses float values in the range 0.0-1.0, separated by '/' (':' was
suggested, but that wouldn't allow color options in sub-options).
Example: --osd-color=1.0/0.0/0.0/0.75
Using the range 0.0-1.0 has the advantage that it could be easily
extended to colors beyond 8 bit.
See the manpage for details.
Suggestions for alternative syntax or value ranges are welcome, but be
quick with it.
This avoids stray newlines when:
1. Some (non-status line) text was output
2. Then an empty status line is output
According to the logic, 2. should print an empty line to show the blank
status line. Don't do that, and instead output nothing in this case.
This caused problems with mpv_identify.sh, and also looked ugly when
using --quiet.
Larger sizes can introduce overflows, depending on the image format. In
the worst case, something larger than 16000x16000 with 8 bytes per pixel
will overflow 31 bits.
Maybe there should be a proper failure path instead of a hard crash, but
not yet. I imagine anything that sets a higher image size than a known
working size should be forced to call a function to check the size (much
like in ffmpeg/libavutil).
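Such a check could look roughly like this (a hypothetical helper, not
something mpv currently has):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool image_size_ok(int w, int h)
    {
        if (w <= 0 || h <= 0)
            return false;
        // Even at 8 bytes per pixel the allocation must fit in a signed 32-bit int.
        return (int64_t)w * 8 * h <= INT_MAX;
    }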
Not everything in the OSD path handles 0x0 sized sub-bitmaps well. At
least the code implementing --sub-gray had severe problems with it.
Fix this by skipping such bitmaps.
RGB565 is one of the fastest and most widely supported formats on low-end
consumer devices, but ffmpeg spams warnings when using it. Make it opt-in instead of
opt-out.
The problem seems to have solved itself. I guess the previous changes to
resizing and commit ba101ab made this possible. Consider me happy for removing
that crap.
Windows applications that use LoadLibrary are vulnerable to DLL
preloading attacks if a malicious DLL with the same name as a system DLL
is placed in the current directory. mpv had some code to avoid this in
ao_wasapi.c. This commit just moves it to main.c, since there's no
reason it can't be used process-wide.
This change can affect how plugins are loaded in AviSynth, but it
shouldn't be a problem since MPC-HC also does this and it's a very
popular AviSynth client.
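Presumably this is the usual Win32 mitigation, i.e. something along
these lines (hedged; describing the standard approach, not quoting
mpv's code):

    #include <windows.h>

    static void harden_dll_search_path(void)
    {
        // An empty string removes the current directory from the default DLL
        // search order for the whole process.
        SetDllDirectoryW(L"");
    }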
Windows users expect this when a program crashes. Without it, the
program just disappears. Also change the SetErrorMode call to use macros
instead of a hardcoded constant.
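For illustration, using the SEM_* macros rather than a raw number (the
exact flag combination mpv passes may differ):

    #include <windows.h>

    static void set_error_mode(void)
    {
        // Hedged example: leaving out SEM_NOGPFAULTERRORBOX keeps the normal
        // Windows crash dialog enabled when the program crashes.
        SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX);
    }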
If, for some reason, the subtitle renderer attempts to render a
subtitle before SD_CTRL_SET_VIDEO_PARAMS was called, it passed a
value calculated from invalid (unset) values. This can happen with --vf=sub
and --start. The crash happens if 1. there was a subtitle packet that
falls into the timestamp of the rendered video frame, 2. the playloop
hasn't informed the subtitle decoder about the video resolution yet
(normally unneeded, because that is used for weird corner cases only,
so this code is a bit fuzzy), and 3. something actually requests a
frame to be drawn from the subtitle renderer, like with vf_sub.
The actual crash was due to passing NaN as pixel aspect to libass,
which then created glyphs with ridiculous sizes, involving a few
integer overflows and unchecked mallocs.
The sd_lavc.c and sd_spu.c cases probably don't crash, but I'm not
sure, and it's better to fix them anyway.
Not bothering with sd_spu.c, this crap is for compatibility and will
be removed soon.
Note that this would have been no problem, had the code checked whether
SD_CTRL_SET_VIDEO_PARAMS was actually called. This commit adds such a
check (although it basically checks after using the parameters).
Regression since 49caa0a7 and 633fde4a.
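The added guard amounts to something like this (a sketch with
illustrative names; the real check lives in the subtitle renderer code):

    static double sub_pixel_aspect(const struct mp_image_params *p)
    {
        // If SD_CTRL_SET_VIDEO_PARAMS never arrived, these are all 0, and the
        // computed pixel aspect would be NaN. Fall back to a sane default.
        if (p->w <= 0 || p->h <= 0 || p->d_w <= 0 || p->d_h <= 0)
            return 1.0;
        return ((double)p->d_w / p->d_h) / ((double)p->w / p->h);
    }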
This generally affects mp3 files that don't have any (or many) mp3
frames in the first 2 MB. 2 MB is the maximum probe size, and
libavformat returns a low probescore even if we give it the full 2 MB.
Trying to probe a larger buffer (or even the full file) doesn't work for
mysterious reasons.
The workaround is to accept a very weak probescore if the format is
detected as mp3 and we have already probed 2 MB.
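Sketched out, the workaround looks roughly like this (illustrative
names and a hypothetical accept_format() helper, not the actual
demux_lavf.c variables):

    int min_score = AVPROBE_SCORE_MAX / 4;
    if (score < min_score && probed_size >= 2 * 1024 * 1024 &&
        fmt && strcmp(fmt->name, "mp3") == 0)
    {
        // Accept the weak detection: we already looked at the full probe
        // buffer, and libavformat did identify it as mp3, just not confidently.
        min_score = score;
    }
    if (score >= min_score)
        accept_format(demuxer, fmt);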
Restructure it a bit, so we can use the format hack list even if no mime
type applies. Shouldn't change anything functionally yet. Preparation
for the next commit.