Normally, the size of an image plane is assumed to be stride*height. But
in theory, if stride is larger than width*bpp, the last line might not
be padded, simply because it's not necessary. FFmpeg's and mpv's image
allocators always guarantee that this padding exists (it wastes an
insignificant amount of memory to avoid such subtle issues), but some
other libraries might not.
I suspect one such case might be Xv via vo_xv (see #1698), although my X
server appears to provide full padding. In any case, handling it can't
hurt.
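To illustrate, here is a minimal sketch of the smallest size such a
plane can legally have (the helper name and signature are illustrative,
not mpv's actual API). Reading stride*height bytes from such a plane
would overread the last line by stride - width*bpp bytes.

    #include <stddef.h>

    /* Hypothetical helper: the minimum number of bytes a plane can
     * occupy. All lines except the last are stride bytes apart; the
     * last line only needs width*bpp bytes if the allocator skips the
     * trailing padding. */
    static size_t plane_min_size(int stride, int height, int width, int bpp)
    {
        if (height <= 0)
            return 0;
        return (size_t)stride * (height - 1) + (size_t)width * bpp;
    }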
There's literally no reason why these functions have to be inline (even
if they were performance-critical, the function call overhead wouldn't
matter).
Uninline them and move them to mp_image.c. Drop the header file and fix
all uses of it.
The libavformat rtmp protocol's "timeout" option has two problems:
1) Unlike all other protocols, it's in seconds and not microseconds
2) It enables "listen" mode, which breaks playback
Make the --network-timeout option do nothing in the rtmp case.
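A minimal sketch of the idea, assuming the timeout is passed to
libavformat through an AVDictionary; the helper and the protocol check
are illustrative, not mpv's actual code:

    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>
    #include <libavutil/dict.h>

    /* Hypothetical helper: skip the "timeout" option for rtmp-style
     * URLs, where libavformat interprets it in seconds and it also
     * enables "listen" mode, which breaks playback. */
    static void set_network_timeout(AVDictionary **opts, const char *url,
                                    int64_t timeout_us)
    {
        if (strncmp(url, "rtmp", 4) == 0)
            return;
        char buf[32];
        snprintf(buf, sizeof(buf), "%" PRId64, timeout_us);
        av_dict_set(opts, "timeout", buf, 0);
    }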
Fixes #1704.
For some reason there were two points in the code that warned against
non-monotonic video PTS. The one in video.c triggered on PTS going
backwards or making large jumps forward, while the one in dec_video.c
triggered on PTS going backwards or not changing. Merge them into a
single check, which warns against all of these cases.
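Roughly, the merged check could look like the following sketch (the
threshold, names, and warning text are illustrative, not the exact mpv
code):

    #include <stdio.h>

    #define NOPTS_VALUE (-1e300)  /* stand-in for mpv's MP_NOPTS_VALUE */

    /* Warn on PTS going backwards, not changing, or jumping too far
     * forward. */
    static void check_video_pts(double *last_pts, double pts)
    {
        if (*last_pts != NOPTS_VALUE &&
            (pts <= *last_pts || pts - *last_pts > 100 /* seconds */))
            fprintf(stderr, "Suspicious video pts: %f -> %f\n",
                    *last_pts, pts);
        *last_pts = pts;
    }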
There was a somewhat obscure optimization in the OSD and subtitle
rendering path: if only the position of the sub-images changed, and not
the actual image data, uploading of the image data could be skipped. In
theory, this could speed up things like scrolling subtitles.
But it turns out that even in the rare cases where subtitles have such
scrolls or axis-aligned movement, modern libass rarely signals this
kind of change. Possibly this is because of sub-pixel handling and the
like, which breaks this.
As such, it's a worthless optimization that just introduces additional
complexity and subtle bugs (especially in cases where libass does the
opposite and incorrectly signals only a position change, which has
happened before). Remove this optimization, and rename bitmap_pos_id
to change_id.
To handle seeking correctly, we need to flush the filter. libavfilter
does not support flushing, so we destroy and recreate it. We also need
to handle resume-after-EOF, because the mpv audio code sends an EOF
before and after seeking (the latter happens because the player drains
the filter chain in a generic way, which "causes" EOF).
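A sketch of the destroy-and-recreate approach on seek; the surrounding
structure is hypothetical, but avfilter_graph_free() and
avfilter_graph_alloc() are the real libavfilter calls:

    #include <libavfilter/avfilter.h>

    /* On seek: there is no flush API, so throw the old graph away and
     * build a new one. */
    static int reset_filter_graph(AVFilterGraph **graph)
    {
        avfilter_graph_free(graph);    /* frees it and sets *graph to NULL */
        *graph = avfilter_graph_alloc();
        if (!*graph)
            return -1;
        /* ... re-create and re-link the filters exactly as on initial
         * setup, then call avfilter_graph_config() again ... */
        return 0;
    }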
This played e.g. a 1264x722 file as 1264x720. There was some code which
dropped the aspect ratio if the video (in its original resolution)
wasn't scaled by more than 4 pixels. Commit 5f3c3f8c introduced this
(although I'm not really sure what the code it replaced did).
Just remove this "feature".
We now update uniforms every time, so we should try to reduce the number
of uniforms to avoid performance penalties. (Originally, some caching
was planned, but it looks like it would be too complicated to implement
compared to the expected gains.)
OPT_REPLACED can't specify option values or multiple options. Change to
OPT_REMOVED. Also, target-prim doesn't have an srgb option. BT.709 uses
sRGB primaries, so use it instead.
The default scaling was slightly too low, which could cause buffer
underruns in some cases.
This should improve the result when using tscale filters other than
oversample. The oversample case should be unaffected.
This adds extra debugging output for buffer underruns, to help track
down possible queueing issues. It also inverts the numeric output for
tscale=oversample to make more sense, without changing the logic.
This moves the color management code out of pass_render_main (which is
now dedicated solely to up/downscaling and hence renamed pass_scale_main)
and into a new function, which gets called from pass_draw_to_screen
instead.
This makes more sense from a logical standpoint, and also means that we
interpolate in linear RGB before color management, rather than after
it, which is significantly better for color accuracy and probably also
for interpolation quality.
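In outline, the pass order changes roughly as follows (a sketch only;
pass_interpolate, pass_colormanage and the empty bodies are stand-ins,
not the actual vo_opengl code):

    struct gl_video;  /* renderer state, treated as opaque here */

    static void pass_scale_main(struct gl_video *p)  { /* up/downscaling only */ }
    static void pass_interpolate(struct gl_video *p) { /* blend frames in linear RGB */ }
    static void pass_colormanage(struct gl_video *p) { /* color management */ }

    static void pass_draw_to_screen(struct gl_video *p)
    {
        pass_colormanage(p);  /* now happens here, right before presenting */
    }

    static void draw(struct gl_video *p)
    {
        pass_scale_main(p);   /* previously pass_render_main */
        pass_interpolate(p);  /* interpolation before color management */
        pass_draw_to_screen(p);
    }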
This replaces the old smoothmotion code with a more flexible tscale
option, which essentially allows any scaler to be used for
interpolating frames. (The actual "smoothmotion" scaler, which behaves
identically to the old code, does not currently exist, but it will be
re-added in a later commit.)
The only odd thing is that larger filters require a larger queue size
offset, which is currently set dynamically, since a larger offset
introduces some issues when pausing or framestepping. Filters with a
lower radius are not affected as much, so this is identical to the old
smoothmotion if the smoothmotion interpolator is used.
Also the size is now a simple #define that can easily be changed later.
This is done for smoothmotion, which might want to blend more than 4
frames at once, depending on the setting.
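For illustration, a sketch of a queue sized by a single constant (the
names here are illustrative, not mpv's actual identifiers):

    struct mp_image;

    /* One place to bump the queue size if an interpolator needs to
     * blend more than 4 frames at once. */
    #define VO_MAX_QUEUE_FRAMES 8

    struct frame_queue {
        struct mp_image *frames[VO_MAX_QUEUE_FRAMES];
        int num_frames;
    };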
Since the gl_rework merge, this started to print some OpenGL errors when
using vdpau hardware decoding with vo_opengl smoothmotion. This happens
because some hwdec unmap_image calls were not paired with a map_image
call. Unlike the old vo_opengl, the new code does not guarantee this
pairing, out of convenience (it would be a pain to track this exactly).
It was triggered by smoothmotion, because not every rendered frame
actually has a new input video frame (i.e. there is no map_image call,
but unmap_image was called anyway).
Solve this by handling unmapping differently in the vdpau code. The next
commit will remove the unmap_image callback completely.
Fixes#1687.
This caused complaints, because the fps was effectively rounded to
microsecond boundaries by way of the vsync interval (it seemed
convenient to store only the vsync interval). So store the fps as a
float too, and let the "display-fps" property return it directly.