Check the return value when setting any video mixer attribute and
print an error message if the operation failed. Also simplify code by
changing update_csc_matrix() to use the utility function added for
this.
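A minimal sketch of the kind of check this adds; vdp_video_mixer_set_attribute_values and vdp_get_error_string stand for the function pointers obtained through VdpGetProcAddress, and the helper name is illustrative rather than the actual MPlayer code:

    #include <stdio.h>
    #include <vdpau/vdpau.h>

    /* Function pointers filled in elsewhere via VdpGetProcAddress. */
    extern VdpVideoMixerSetAttributeValues *vdp_video_mixer_set_attribute_values;
    extern VdpGetErrorString *vdp_get_error_string;

    /* Set one mixer attribute (e.g. VDP_VIDEO_MIXER_ATTRIBUTE_CSC_MATRIX)
     * and report a failure instead of silently ignoring it. */
    static int set_mixer_attribute(VdpVideoMixer mixer,
                                   VdpVideoMixerAttribute attribute,
                                   const void *value)
    {
        const void *values[1] = { value };
        VdpStatus st = vdp_video_mixer_set_attribute_values(mixer, 1,
                                                            &attribute, values);
        if (st != VDP_STATUS_OK) {
            fprintf(stderr, "vdpau: error setting mixer attribute: %s\n",
                    vdp_get_error_string(st));
            return -1;
        }
        return 0;
    }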
Remove the help text explaining -vo vdpau suboptions that was printed
in case of parsing errors. It did perhaps have some value, but there
are also reasons to remove it: it was printed in an ugly manner in the
middle of output, most other MPlayer options do not have such internal
help texts either, and it was detailed enough that it required
maintaining documentation about the options in two separate places
(the man page and the help message).
Part of the code is currently under #ifdef to allow compilation with
older VDPAU library versions; that can be removed later.
Partially based on a patch by Carl Eugen Hoyos.
Add a property to select YUV colorspace. Currently implemented only in
vo_vdpau and vo_xv. Allows switching between BT.601, BT.709 and
SMPTE-240M (vdpau only).
The xv support uses the "XV_ITURBT_709" attribute. At least my NVIDIA
card supports that; I don't know whether other xv implementations do.
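For the xv side, the switch boils down to setting that port attribute; a hedged sketch (the 0/1 encoding is how the NVIDIA driver interprets the attribute, other drivers may differ or lack it entirely):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xvlib.h>

    /* Select BT.709 (1) or BT.601 (0) on ports that expose the attribute. */
    static int xv_set_itu709(Display *dpy, XvPortID port, int bt709)
    {
        Atom atom = XInternAtom(dpy, "XV_ITURBT_709", True);
        if (atom == None)
            return -1;              /* attribute not known to the server */
        return XvSetPortAttribute(dpy, port, atom, bt709) == Success ? 0 : -1;
    }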
Bind the colorspace switch to the 'c' key by default. 'c' is currently
used by vo_sdl for a fullscreen mode change feature, but at the moment
that does not conflict, and if it does in the future then vo_sdl can
change.
VDPAU part based on a patch from Lauri Mylläri <lauri.myllari@gmail.com>
Main things added are custom frame dropping for VDPAU to work around
the display FPS limit, frame timing adjustment to avoid jitter when
video frame times keep falling near vsyncs, and use of VDPAU's timing
feature to keep one future frame queued in advance.
NVIDIA's VDPAU implementation refuses to change the displayed frame
more than once per vsync. This set a limit on how much video could be
sped up, and caused problems for nearly all videos on low-FPS video
projectors (playing 24 FPS video on a 24 FPS projector would not work
reliably as MPlayer may need to slightly speed up the video for AV
sync). This commit adds a framedrop mechanism that drops some frames
so that no more than one is sent for display per vsync. The code
tries to select the dropped frames smartly, selecting the best one to
show for each vsync. Because of the timing features needed, the drop
functionality currently does not work if the correct-pts option is
disabled.
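An illustrative sketch of the selection idea, not the actual scheduler code (names and structure are made up): among the queued frames due before the next vsync, show the one whose ideal display time is closest to that vsync and drop the rest.

    #include <math.h>

    /* frame_times[] holds the ideal display times of the queued frames in
     * order; returns the index of the frame to show for the vsync at
     * vsync_time, or -1 if nothing is due yet. Earlier queued frames that
     * are not picked are dropped. */
    static int pick_frame_for_vsync(const double *frame_times, int count,
                                    double vsync_time, double vsync_interval)
    {
        double next_vsync = vsync_time + vsync_interval;
        int best = -1;
        for (int i = 0; i < count && frame_times[i] < next_vsync; i++) {
            if (best < 0 || fabs(frame_times[i] - vsync_time)
                            < fabs(frame_times[best] - vsync_time))
                best = i;
        }
        return best;
    }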
The code also adjusts frame timing slightly to avoid jitter. For
example, if you play 24 FPS video content on a 72 FPS display, a frame
would normally be shown for 3 vsyncs; but if the frame times happen to
fall near vsyncs and jitter between just before and just after them,
frames could alternate between 2 and 4 vsyncs. The code changes frame
timing by up to one quarter of a vsync interval to avoid this.
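One possible way to realize that adjustment (an assumption, not the exact code): snap frame times that fall within a quarter vsync interval of a vsync boundary onto that boundary, so small jitter can no longer flip a frame to the other side of the vsync.

    #include <math.h>

    static double stabilize_frame_time(double frame_time, double vsync_start,
                                       double vsync_interval)
    {
        double phase = fmod(frame_time - vsync_start, vsync_interval);
        if (phase < 0)
            phase += vsync_interval;
        if (phase < 0.25 * vsync_interval)
            return frame_time - phase;                    /* snap back */
        if (phase > 0.75 * vsync_interval)
            return frame_time + (vsync_interval - phase); /* snap forward */
        return frame_time;
    }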
The above functionality depends on having reliable vsync timing
information available. The display refresh rate is not directly
provided by the VDPAU API. The current code uses information from the
XF86VidMode extension if available; I'm not sure how common it is for
that information to be inaccurate. The refresh rate can be specified
manually if necessary.
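The XF86VidMode-based estimate amounts to deriving the refresh rate from the current mode line, roughly like this (sketch; assumes the extension is present and the reported values are accurate):

    #include <X11/Xlib.h>
    #include <X11/extensions/xf86vmode.h>

    /* Returns the refresh rate in Hz, or 0 on failure. */
    static double get_vsync_rate(Display *dpy, int screen)
    {
        XF86VidModeModeLine ml;
        int dotclock;                               /* reported in kHz */
        if (!XF86VidModeGetModeLine(dpy, screen, &dotclock, &ml))
            return 0;
        if (!ml.htotal || !ml.vtotal)
            return 0;
        return dotclock * 1000.0 / (ml.htotal * ml.vtotal);
    }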
After the changes in this commit MPlayer now always tries to keep one
frame queued for future display using VDPAU's internal timing
mechanism (though no more than 50 ms into the future). This should make
video playback somewhat more robust against timing inaccuracies caused
by system load.
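The queuing itself uses VDPAU's presentation queue clock; a sketch of the 50 ms cap described above (vdp_queue_get_time/vdp_queue_display stand for the VdpPresentationQueueGetTime/VdpPresentationQueueDisplay pointers obtained via VdpGetProcAddress):

    #include <vdpau/vdpau.h>

    extern VdpPresentationQueueGetTime *vdp_queue_get_time;
    extern VdpPresentationQueueDisplay *vdp_queue_display;

    static VdpStatus queue_frame(VdpPresentationQueue queue,
                                 VdpOutputSurface surface,
                                 VdpTime target_time)
    {
        VdpTime now;
        vdp_queue_get_time(queue, &now);
        /* VdpTime is in nanoseconds; never schedule more than 50 ms ahead. */
        if (target_time > now + 50ULL * 1000 * 1000)
            target_time = now + 50ULL * 1000 * 1000;
        /* Passing 0 for clip width/height displays the whole surface. */
        return vdp_queue_display(queue, surface, 0, 0, target_time);
    }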
Clean up code related to frame buffering and generate pts information
also for the next frame in the output queue. The timing information
will be used in a following framedrop patch.
This commit adds one frame of buffering delay in vo_vdpau and
increases the number of buffered vdpau video surfaces from 3 to 4. The
delay increase makes it more important to fix remaining code in
MPlayer that doesn't deal well with filter/VO delay; OTOH it should
help any decoding/filtering parallelism in the underlying VDPAU
implementation as now filtering a frame for display can happen while
the next one is being decoded.
check_events() first checked for a RESIZE event and called resize() if
needed, and then queued a frame to be reshown if in pause state and
the event was either RESIZE or EXPOSE. The most obvious problems with
the code were:
- resize() already called flip_page() internally, so the code in
check_events could lead to _two_ frames being queued.
- The call in resize() didn't depend on pause status, so the
behavior was inconsistent.
- The code in check_events() actually queued the wrong output
surface. It showed the same surface as flip_page() would show
_next_, while it should have shown the previously shown one. This
typically led to the screen contents changing to a previous
state, as the new surface had not been initialized and had
contents from a previous use.
Fix the double update. Make resize(), too, update the video
immediately only when paused (this also affects changing to/from
fullscreen), and otherwise leave the old window contents alone until
the next frame. Queue the right frame in check_events(). Also make
resize() a bit more careful to only show contents if they were
successfully updated (though a case where we're paused without content
to show shouldn't normally happen).
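The corrected flow, with hypothetical names rather than the real vo_vdpau identifiers: resize() handles its own redraw (and only when paused), and an expose while paused re-shows the previously displayed surface instead of the one flip_page() would use next.

    enum { EVENT_RESIZE = 1, EVENT_EXPOSE = 2 };   /* stand-ins for VO_EVENT_* */

    /* Provided elsewhere in this sketch. */
    void resize(void);
    void show_output_surface(int surface_index);

    static int paused, have_visible_frame, last_shown_surface;

    static void handle_events(int events)
    {
        if (events & EVENT_RESIZE)
            resize();                /* redraws internally, only when paused */
        else if ((events & EVENT_EXPOSE) && paused && have_visible_frame)
            show_output_surface(last_shown_surface);
    }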
Add interfaces to allow VO drivers to add or remove frames from the
video stream and to alter timestamps. Currently this functionality
only works in correct-pts mode. Use the new functionality in
vo_vdpau to properly support frame-adding deinterlace modes.
Frames added by the VDPAU deinterlacing code are now properly timed.
Before, every second frame was always shown immediately (probably at
the next monitor refresh) after the previous one, even if you were
watching things in slow motion, and framestepping didn't stop at them
at all. When seeking, the deinterlacing algorithm is no longer fed a
mix of frames from the old and new positions.
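For the timing of the added frames, one natural choice (an assumption here, not necessarily the exact formula used) is to place each inserted field halfway between the surrounding frame timestamps:

    /* pts of the field inserted between two source frames. */
    static double added_field_pts(double cur_pts, double next_pts)
    {
        return cur_pts + 0.5 * (next_pts - cur_pts);
    }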
As a side effect of the changes, a problem with resize events was also
fixed. Resizing calls video_to_output_surface() to render the frame at
the new resolution, but previously this function also modified the list
of history frames, so resizing could produce an image different from
the original one and also corrupt subsequent frames because they saw
the wrong history. Now the function has no such side effects. There
are more resize-related problems, though, that will be fixed in a
later commit.
The deint_mpi[] list of reserved frames is increased from 2 to 3
entries for reasons related to the above. Having 2 entries is enough
when you initially get a new frame in draw_image() because then you'll
have those two entries plus the new one for a total of 3 (the code
relied on the oldest mpi implicitly staying reserved for the duration
of the call even after usage count was decreased). However if you want
to be able to reproduce the rendering outside draw_image(), relying on
the explicitly reserved list only, then it needs to store 3 entries.
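A sketch of what "no side effects" means in practice: rendering reads only from the explicitly reserved history and never modifies it, so the same call can be repeated after a resize. The 3-entry layout follows the description above; the exact past/future split and rectangle handling in vo_vdpau may differ.

    #include <stddef.h>
    #include <vdpau/vdpau.h>

    extern VdpVideoMixerRender *vdp_video_mixer_render;

    /* history[0] is the oldest reserved frame, history[2] the newest. */
    static VdpStatus render_field(VdpVideoMixer mixer,
                                  const VdpVideoSurface history[3],
                                  VdpVideoMixerPictureStructure field,
                                  VdpOutputSurface target)
    {
        VdpVideoSurface past[2]   = { history[1], history[0] }; /* newest first */
        VdpVideoSurface future[1] = { VDP_INVALID_HANDLE };
        return vdp_video_mixer_render(mixer, VDP_INVALID_HANDLE, NULL,
                                      field,
                                      2, past,             /* past frames */
                                      history[2],          /* current frame */
                                      0, future,           /* no future frames */
                                      NULL,                /* full source rect */
                                      target, NULL, NULL,
                                      0, NULL);            /* no extra layers */
    }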
Add code to reinitialize all VDPAU objects if a display preemption
condition occurs. Reinitializing them in the middle of playback will
cause video corruption at least until the next keyframe when using
hardware decoding, but decoding does seem to recover after a keyframe.
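VDPAU reports this condition both through VDP_STATUS_DISPLAY_PREEMPTED return values and through a callback; a sketch of the detection side only (the actual reinitialization path is MPlayer-specific and omitted):

    #include <vdpau/vdpau.h>

    extern VdpPreemptionCallbackRegister *vdp_preemption_callback_register;

    static int is_preempted;   /* checked before rendering; triggers reinit */

    static void preemption_callback(VdpDevice device, void *context)
    {
        (void)device;
        (void)context;
        is_preempted = 1;
    }

    static VdpStatus watch_preemption(VdpDevice device)
    {
        return vdp_preemption_callback_register(device, preemption_callback,
                                                NULL);
    }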
Create a single large bitmap surface for EOSD objects and pack all the
bitmap rectangles inside that. The old code created a separate bitmap
surface for every bitmap and then resized the cached surfaces when
drawing later frames. The number of surfaces could be large (at least
about 2000 for one sample subtitle script) so this was very
inefficient. The old code also used a very simple strategy for pairing
existing surfaces to new bitmaps; it could resize tiny surfaces to
hold large glyphs while using existing large surfaces to hold tiny
glyphs, and as a result allocate arbitrarily more total surface area
than was necessary.
The new code only supports using a single surface, freeing it and
allocating a larger one if necessary. It would be possible to support
multiple surfaces in case of hitting the maximum bitmap surface size,
but I'll wait to see if that is actually needed before implementing
it. NVIDIA seems to support bitmap surface sizes up to 8192x8192, so
it would take either a really pathological subtitle script rendered at
a high resolution or an implementation with lower limits before
multiple surfaces would be necessary.
The packing algorithm should successfully pack the bitmaps into a
surface of size w*h as long as the total area of the bitmaps does not
exceed 16/17 * (w - max_bitmap_width) * (h - max_bitmap_height), so there
should be no totally catastrophic failure cases. The 16/17 factor
comes from approximate sorting used in the algorithm. On average
performance should be better than this minimum guaranteed level.
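The general shape of such a packer, as a simplified sketch (this is not the exact algorithm in vo_vdpau): bitmaps roughly sorted by height are placed left to right on a "shelf", and a new shelf is opened whenever the next bitmap no longer fits.

    struct rect { int x, y, w, h; };

    /* Assigns x/y positions inside a w*h surface; returns 0 on success,
     * -1 if the surface is too small for this input. */
    static int pack_bitmaps(struct rect *bitmaps, int count, int w, int h)
    {
        int shelf_y = 0, shelf_h = 0, x = 0;
        for (int i = 0; i < count; i++) {
            if (x + bitmaps[i].w > w) {        /* start a new shelf */
                shelf_y += shelf_h;
                shelf_h = 0;
                x = 0;
            }
            if (bitmaps[i].w > w || shelf_y + bitmaps[i].h > h)
                return -1;
            bitmaps[i].x = x;
            bitmaps[i].y = shelf_y;
            x += bitmaps[i].w;
            if (bitmaps[i].h > shelf_h)
                shelf_h = bitmaps[i].h;
        }
        return 0;
    }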
Add a template file that contains a single listing of various
information needed about the VDPAU interface functions, and is then
included multiple times to create required declarations and tables.
Previously some of the information needed to be duplicated for each of
those uses.
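This is the usual "X-macro" include trick; illustrative file and macro names, with only a few example entries:

    /* vdpau_template.inc: one line per VDPAU function. */
    VDP_FUNCTION(VdpGetErrorString,           VDP_FUNC_ID_GET_ERROR_STRING,           get_error_string)
    VDP_FUNCTION(VdpVideoMixerCreate,         VDP_FUNC_ID_VIDEO_MIXER_CREATE,         video_mixer_create)
    VDP_FUNCTION(VdpPresentationQueueDisplay, VDP_FUNC_ID_PRESENTATION_QUEUE_DISPLAY, presentation_queue_display)

    /* In the C file, the same list is included twice with different
     * definitions of VDP_FUNCTION: */
    #include <stddef.h>
    #include <stdint.h>
    #include <vdpau/vdpau.h>

    struct vdp_functions {
    #define VDP_FUNCTION(vdp_type, id, name) vdp_type *name;
    #include "vdpau_template.inc"
    #undef VDP_FUNCTION
    };

    static const struct vdp_function {
        uint32_t id;
        int offset;
    } vdp_func_table[] = {
    #define VDP_FUNCTION(vdp_type, id, name) {id, offsetof(struct vdp_functions, name)},
    #include "vdpau_template.inc"
    #undef VDP_FUNCTION
    };

The table is then walked once at init time, calling VdpGetProcAddress for each id and storing the result at the corresponding struct offset.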
The GUI is badly designed and too closely coupled to the internal
details of other code. The GUI code is in bad shape and unmaintained
for years. There is no indication that anyone would maintain it in the
future either. Even if someone did volunteer to implement a better
integrated GUI, having the current code in the tree probably wouldn't
help much. So get rid of it.
It leads to VDPAU errors after video aspect ratio changes.
Patch by Stephen Warren.
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@29276 b3059339-0415-0410-9bf9-f77b7e298cf2
Many VOs kept track of pause status, but reset the value when their
config() function was called. However it can be called while playback
stays in pause mode. Modify the VOs to not change anything in
config(). Also send the VO either VOCTRL_PAUSE or VOCTRL_RESUME when
playback of a new file starts, to make sure it has the right status.
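On the player side this amounts to something like the following after a file (re)starts; VOCTRL_PAUSE/VOCTRL_RESUME are the real requests, the helper itself is illustrative:

    #include "libvo/video_out.h"   /* struct vo, vo_control(), VOCTRL_* */

    static void sync_vo_pause_state(struct vo *vo, int paused)
    {
        vo_control(vo, paused ? VOCTRL_PAUSE : VOCTRL_RESUME, NULL);
    }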
This avoids creating a window when hardware decoding is certain to
fail.
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@29040 b3059339-0415-0410-9bf9-f77b7e298cf2
with a small value for max_reference_frames.
This does not make automatic recovery by falling back to the software
decoder possible, but it lets MPlayer fail more gracefully on (actually
existing) buggy hardware that does not support certain H.264 widths
with hardware-accelerated decoding (784, 864, 944, 1024, 1808 and 1888
pixels on NVIDIA G98), and when the user tries to hardware-decode more
samples at the same time than supported.
Might break playback of H.264 intra-only samples on hardware with very
little video memory.
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@29027 b3059339-0415-0410-9bf9-f77b7e298cf2
custom color space conversion matrices in VDPAU.
Patch by Grigori Goronzy, greg A chown D ath D cx
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@28760 b3059339-0415-0410-9bf9-f77b7e298cf2
Make temporal deinterlacing the default when pressing "D" to activate
the deinterlacer.
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@28744 b3059339-0415-0410-9bf9-f77b7e298cf2
Patch by Grigori G (greg <at> chown ath cx) with minor cosmetic changes by me.
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@28710 b3059339-0415-0410-9bf9-f77b7e298cf2
Deinterlacing cannot yet be toggled at runtime, and actually it does
not seem to work at all...
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@28673 b3059339-0415-0410-9bf9-f77b7e298cf2
YV12 - since VDPAU only has functions to upload the full frame at
once, there is no point in supporting draw_slice for it.
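Uploading then happens in one call per frame; a sketch (note the U/V swap: MPlayer planes are Y, U, V while VDPAU's YV12 layout expects Y, V, U):

    #include <stdint.h>
    #include <vdpau/vdpau.h>

    extern VdpVideoSurfacePutBitsYCbCr *vdp_video_surface_put_bits_y_cb_cr;

    static VdpStatus upload_yv12(VdpVideoSurface surface,
                                 uint8_t *planes[3], uint32_t strides[3])
    {
        const void *data[3]    = { planes[0], planes[2], planes[1] };
        uint32_t    pitches[3] = { strides[0], strides[2], strides[1] };
        return vdp_video_surface_put_bits_y_cb_cr(surface,
                                                  VDP_YCBCR_FORMAT_YV12,
                                                  data, pitches);
    }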
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@28646 b3059339-0415-0410-9bf9-f77b7e298cf2
Convert vo_x11_border (used in vo_gl/gl2 through the vo_gl_border
macro) to use a wrapper macro in old-style VOs which do not provide a
VO object argument. Before this, the function had an explicit global_vo
argument in vo_gl/gl2. The new vo_vdpau uses it too, so use the same
mechanism as for most other functions.
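The mechanism, with approximate names (the guard macro here is made up): the function itself takes a VO object, and old-style VOs see a same-named macro that supplies global_vo.

    struct vo;
    void vo_x11_border(struct vo *vo);

    #ifdef OLD_STYLE_VO                       /* hypothetical guard */
    /* Function-like macros do not expand recursively, so this is safe. */
    #define vo_x11_border() vo_x11_border(global_vo)
    #endif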