mpv/TOOLS/vdpau_functions.py

#!/usr/bin/env python3
# Generate vdpau_template.c
functions = """
# get_error_string should be first so that the function lookup loop already
# has it available to print errors for the other functions
get_error_string
bitmap_surface_create
bitmap_surface_destroy
bitmap_surface_put_bits_native
bitmap_surface_query_capabilities
decoder_create
decoder_destroy
decoder_render
device_destroy
generate_csc_matrix GenerateCSCMatrix # "CSC" is fully capitalized in the VDPAU name
output_surface_create
output_surface_destroy
output_surface_get_bits_native
output_surface_put_bits_indexed
output_surface_put_bits_native
output_surface_render_bitmap_surface
output_surface_render_output_surface
preemption_callback_register
presentation_queue_block_until_surface_idle
presentation_queue_create
presentation_queue_destroy
presentation_queue_display
presentation_queue_get_time
presentation_queue_query_surface_status
presentation_queue_target_create_x11
presentation_queue_target_destroy
video_mixer_create
video_mixer_destroy
video_mixer_query_feature_support
video_mixer_render
video_mixer_set_attribute_values
video_mixer_set_feature_enables
video_surface_create
video_surface_destroy
video_surface_put_bits_y_cb_cr
"""
print("""
/* List the VDPAU functions used by MPlayer.
* Generated by vdpau_functions.py.
* First argument on each line is the VDPAU function type name,
* second macro name needed to get function address,
* third name MPlayer uses for the function.
*/
""")
for line in functions.splitlines():
    parts = line.split('#')[0].strip().split()
    if not parts:
        continue  # empty/comment line
    if len(parts) > 1:
        # An explicit VDPAU name is given when simple capitalization is wrong
        mp_name, vdpau_name = parts
    else:
        mp_name = parts[0]
        # Derive the VDPAU type name from the MPlayer name: foo_bar -> FooBar
        vdpau_name = ''.join(part.capitalize() for part in mp_name.split('_'))
    macro_name = mp_name.upper()
    print('VDP_FUNCTION(Vdp%s, VDP_FUNC_ID_%s, %s)' % (vdpau_name, macro_name, mp_name))
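
# The generated vdpau_template.c is intended to be consumed as an X-macro
# list: each C file that includes it first defines VDP_FUNCTION() to expand
# the entries however it needs. A minimal sketch, assuming a hypothetical
# struct vdp_functions (the real field layout lives in the VDPAU VO code):
#
#   struct vdp_functions {
#   #define VDP_FUNCTION(vdp_type, macro_name, mp_name) vdp_type *mp_name;
#   #include "vdpau_template.c"
#   #undef VDP_FUNCTION
#   };
#
# A second expansion with a different VDP_FUNCTION() definition can then pair
# each VDP_FUNC_ID_* macro with its struct field and fill in the pointers via
# VdpGetProcAddress().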