Modeled after the old playlist_parser.c, but actually new code, and it
works a bit differently.
Demuxers (and sometimes streams) are the components that should be used
to open files and to determine the file format. This was already done
for subtitles, but playlists still used a separate code path.
af_str2fmt_short(), which is used by the command line option parser,
allowed passing a hex number. The user could set arbitrary integers as
internal audio formats, even formats which don't exist or make no sense.
This is not very useful, so get rid of it.
Having to use -1 for that is generally quite annoying.
Audio formats are created from bitmasks, and it can't be ruled out that
0 is a valid format. Fix this by adjusting AF_FORMAT_I so that it
is never 0. Along with AF_FORMAT_F and the special formats, all valid
formats are covered and guaranteed to be non-0.
It's possible that this commit will cause some regressions, as the
check for invalid audio formats changes a bit.
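A minimal sketch of the idea, with made-up bit positions (the real masks
in audio/format.h differ): keep the sample-type field non-zero for every
valid format, so 0 can safely mean "no format".

```c
/* Sketch only: hypothetical bit layout, not the actual audio/format.h.
 * The sample-point type occupies a field whose value is never 0, so
 * every format built by OR-ing these masks is guaranteed non-0. */
#define AF_FORMAT_I (1 << 16)  /* integer sample formats */
#define AF_FORMAT_F (2 << 16)  /* float sample formats */

/* 0 can now mean "unset/invalid", instead of having to use -1: */
#define AF_FORMAT_UNKNOWN 0
```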
A wayland output based on shared memory. This video output is useful for
X11-free systems, because the current libGL in mesa provides GLX symbols
and therefore drags in an X11 dependency. It is also useful for embedded
systems where the wayland backend for EGL is not implemented, like the
Raspberry Pi.
At the moment only RGB formats are supported, because there is still no
compositor which supports planar formats like yuv420p. The most used
compositor at the moment, weston, supports only BGR0, BGRA and BGR16 (565).
The BGR16 format is the fastest to convert and render, without any
noticeable difference compared to the BGR32 formats. For this reason the
current (very basic) auto-detection code prefers the BGR16 format. The
weston source code also indicates that the preferred format is BGR16
(RGB565).
There are 2 options:
* default-format (yes|no): use the BGR32 format
* alpha (yes|no): output images and videos with transparency
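Assuming mpv's usual VO suboption syntax, these would presumably be
enabled with e.g. `--vo=wayland:alpha` or `--vo=wayland:default-format`.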
The information obtained from the shm listener isn't used by anything,
and is now also wrong on the wayland git master branch, because the new
shm formats need a different way of saving the supported formats.
Moves a good chunk of the resizing code to wayland_common.c. This makes it
possible to share it with future video drivers.
It doesn't resize immediately; it calculates the new position and size
and then schedules a resizing event. This removes the ugly callback and
void pointer from the wayland data structure.
Regression since 18b6c01d92. That commit changed the colorspace handling to
always reinit the video output. Since the CVPixelBuffers are lazily created,
VOCTRL_SET_YUV_COLORSPACE was always called when the CVPixelBufferRef was NULL.
Since CoreVideo functions do not complain when called on NULL, no one
noticed that CVBufferSetAttachment, which stored the color matrix
metadata, was called on NULL.
Until now, video output levels (an obscure feature, like using TV screens
that require RGB output in limited range, similar to YUV) still required
handling of VOCTRL_SET_YUV_COLORSPACE. Simplify this, and use the new
mp_image_params code. This gets rid of some code. VOCTRL_SET_YUV_COLORSPACE
is not needed at all anymore in VOs that use the reconfig callback. The
result of VOCTRL_GET_YUV_COLORSPACE is now used only for the
colormatrix-related properties (basically, for display on the OSD). For
other VOs, VOCTRL_SET_YUV_COLORSPACE will be sent only once after config
instead of twice.
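For context, the idea is that the levels now travel inside the image
parameters that reach the VO via reconfig; a rough sketch (the field
names approximate the actual mp_image_params, which may differ):

```c
// Illustrative sketch, not the literal mp_image.h definition; the enums
// mirror mpv's csputils.h.
enum mp_csp { MP_CSP_AUTO, MP_CSP_BT_601, MP_CSP_BT_709 };
enum mp_csp_levels { MP_CSP_LEVELS_AUTO, MP_CSP_LEVELS_TV, MP_CSP_LEVELS_PC };

struct mp_image_params {
    int imgfmt;                      // pixel format (IMGFMT_*)
    int w, h;                        // image size
    int d_w, d_h;                    // display size (aspect-corrected)
    enum mp_csp colorspace;          // YUV matrix (BT.601, BT.709, ...)
    enum mp_csp_levels colorlevels;  // input range of the video
    enum mp_csp_levels outputlevels; // desired output range (the levels
                                     // feature described above)
};
// VOs using the reconfig callback receive these with every format change,
// so no separate VOCTRL_SET_YUV_COLORSPACE round-trip is needed.
```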
This affects VOs which just reuse the mp_image from draw_image() to
return screenshots. The aspect of these images is never different
from the aspect the screenshots should be, so there's no reason to
adjust the aspect in these cases.
Other VOs still need it in order to restore the original image
attributes.
This requires some changes to the video filter code to make sure that
the aspect in the passed mp_images is consistent.
The changes in mplayer.c and vd_lavc.c are (probably) not strictly
needed for this commit, but contribute to consistency.
mp_image_set_params() doesn't check whether the colorspace parameters
are consistent (e.g. setting YUV colorspaces with RGB formats), and
shouldn't need to.
The way this was added to FFmpeg is less than ideal, because it requires
text parsing in the Matroska demuxer. But in order to use the FFmpeg
webvtt-to-ass converter, we still have to mimic this in some way. We do
this by putting the parsing into sd_lavc_conv.c, before the subtitle
packet is passed to libavcodec. At least this keeps the ugliness out of
unrelated code.
There is some chance that FFmpeg will fix their design eventually.
Instead of rewriting the parsing code, we simply borrow it from FFmpeg's
Matroska demuxer.
cocoa_common contains some locking calls to support video outputs that
implement live resizing (at the moment only vo=opengl).
These should not be used unless the VO declares it is multithreaded by
registering the resize_redraw callback used for live resizing.
Fixes #200
Use the new MP_ macros for some AOs instead of mp_msg.
Not all AOs are converted, and some only partially. In some cases, some
additional cosmetic changes are made.
Otherwise, this would just try to demux a good chunk of the file, even
though the operation can't succeed anyway.
This caused some pretty strange issues, where perfectly valid use cases
would print a "Too many packets in the demuxer packet queue..." message.
The rawaudio demuxer read one frame per packet, basically a few bytes,
which caused insane overhead. (I found this when I couldn't play raw
audio without dropouts when using -v, which printed a line per packet
read.)
Fix this and read 1 second of audio per packet. This is a regression
since cfa5712 (merging of demux_rawaudio and demux_rawvideo).
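A sketch of the sizing logic under the names used here (simplified, not
the literal demux_raw.c code):

```c
// Sketch: size each demux packet for raw audio so that one packet holds
// roughly one second of audio instead of a single sample frame.
static int raw_audio_packet_size(int samplerate, int channels,
                                 int bytes_per_sample)
{
    int frame_size = channels * bytes_per_sample; // e.g. 4 for s16 stereo
    return frame_size * samplerate;               // ~1 second per packet
    // The old behavior was the equivalent of: return frame_size;
}
```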
Instead of always skipping in STREAM_BUFFER_SIZE blocks, allow an
arbitrary size. This allows - in theory - faster forward seeking in
pipes.
(Maybe not a very significant change, but it reduces the number of
things that depend on STREAM_BUFFER_SIZE for no good reason. Though
we still use that value as minimum read size.)
stream_file.c contains some code meant for forward seeking with pipes.
This simply reads data until the seek position is reached. Move this
code to stream.c. This stops stream_file from doing strange things
(messing with stream internals), and removes the code duplication too.
We also make stream_seek_long() use the new skip code. This is shorter
and much easier to follow than the old code, which was rather convoluted.
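A minimal sketch of the shared skip-by-reading loop, with simplified
signatures (the real stream.c code differs):

```c
#include <stdint.h>

struct stream;
// Assumed helper: reads up to 'max' bytes, returns bytes read, <=0 on EOF.
int stream_read(struct stream *s, char *buf, int max);

// Skip 'len' bytes forward by reading and discarding. 'len' can be any
// size; the buffer size only bounds each individual read and is no
// longer the skip granularity.
static int stream_skip_read(struct stream *s, int64_t len)
{
    while (len > 0) {
        char buf[8192]; // stands in for STREAM_BUFFER_SIZE (minimum read)
        int chunk = len < (int64_t)sizeof(buf) ? (int)len : (int)sizeof(buf);
        int r = stream_read(s, buf, chunk);
        if (r <= 0)
            return 0; // EOF or error before reaching the target position
        len -= r;
    }
    return 1;
}
```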
Decoding H264 using Video Decode Acceleration used the custom 'vda_h264_dec'
decoder in FFmpeg.
The Good: This new implementation has some advantages over the previous one:
- It works with Libav: vda_h264_dec never got into Libav since they prefer
client applications to use the hwaccel API.
- It is way more efficient: in my tests this implementation yields a
reduction in CPU usage of roughly 50% compared to using `vda_h264_dec`,
and 65-75% compared to h264 software decoding. This is mainly because
`vo_corevideo` was adapted to perform direct rendering of the
`CVPixelBufferRefs` created by the Video Decode Acceleration API Framework.
The Bad:
- `vo_corevideo` is required to use VDA decoding acceleration.
- only works with new enough versions of ffmpeg/libav (needs AVFrame
reference counting). That is FFmpeg 2.0+ and Libav's git master currently.
The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video
texture. On one hand this makes the code simple, since Apple's OpenGL
implementation actually supports this out of the box. It would be nice to
support other output image formats and choose the best format depending
on the input, or at least make it configurable. My tests indicate that
CPU usage actually increases with a 420p IMGFMT output, which is not what
I would have expected.
NOTE: There is a small memory leak with old versions of FFmpeg and with Libav
since the CVPixelBufferRef is not automatically released when the AVFrame is
deallocated. This can cause leaks inside libavcodec for decoded frames that
are discarded before mpv wraps them inside a refcounted mp_image (this only
happens on seeks).
For frames that enter mpv's refcounting facilities, this is not a
problem, since we rewrap the CVPixelBufferRef in our mp_image, which
properly forwards CVPixelBufferRetain/CVPixelBufferRelease calls to the
underlying CVPixelBufferRef.
So, for FFmpeg, use something more recent than `b3d63995`. For Libav, the
patch was posted to the dev ML in July and has been in review since then,
apparently because the proposed fix is rather hacky.
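The forwarding mentioned above amounts to taking a CoreVideo reference
when the wrapper is created and dropping it in the wrapper's destructor.
A sketch with hypothetical wrapper names (mpv's actual mp_image plumbing
differs):

```c
#include <stdlib.h>
#include <CoreVideo/CoreVideo.h>

// Hypothetical wrapper type; mpv actually stores this inside mp_image.
struct pbuf_wrapper {
    CVPixelBufferRef pbuf;
};

static struct pbuf_wrapper *pbuf_wrap(CVPixelBufferRef pbuf)
{
    struct pbuf_wrapper *w = calloc(1, sizeof(*w));
    w->pbuf = CVPixelBufferRetain(pbuf); // take our own reference
    return w;
}

static void pbuf_unwrap(struct pbuf_wrapper *w)
{
    CVPixelBufferRelease(w->pbuf); // drop the reference on free
    free(w);
}
```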
Regression since ff3b98d11c. The window positioning code relied on the
visibleFrame's height without taking into account the dock's presence.
Also moved the constraining code to the proper method that overrides the
original NSWindow behaviour. This avoids having to check for the presence
of a border, since the constraining is performed by Cocoa only for titled
windows.
Fixes #190
Too many people thought "D:" really meant number of dropped frames. But
it's actually the number of frames where the playloop thought it'd be
a good idea to drop them. Of course this does nothing if frame dropping
is disabled, but even with normal frame dropping, this doesn't indicate
whether a frame was _really_ dropped. (Looks like libavcodec doesn't
even give us this information reliably? The decode function can return
no frame in case of codec delay due to threading and such.)
Originally, the objective of this commit was changing --edition to be
1-based, but this was cancelled. I'm still leaving the change to
demux_mkv.c though, which is now only of cosmetic nature.
We don't need to store the offsets of the options corresponding to the
properties, because the option-property bridge knows about this already.
The check against 1000 was for the case where e.g. the --brightness option
is not used. Always overwrite the option value instead, both when
querying and setting the property. (This is needed to make the settings
persistent even if vf_eq is used and the video chain is reinitialized.)
This commit assumes that VFCTRL_SET_EQUALIZER is always paired with
VFCTRL_GET_EQUALIZER (likewise for VOCTRL), which is the case.
Using -vf eq and changing brightness, contrast, etc. using key bindings
with e.g. "add brightness 1" didn't work well: with step width 1, the
property gets easily "stuck". This is a rounding problem: e.g. setting
gamma to 3 would actually make it report that gamma is set to 2, so
the "add" command will obviously never reach 3 with a step width of 1.
Fix this by storing the parameters as integers.
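A tiny self-contained demonstration of the round-trip, with hypothetical
scaling (vf_eq's real conversion differs, but fails the same way):

```c
#include <stdio.h>

int main(void)
{
    // "set gamma 3": stored as a scaled float parameter internally.
    float param = 3 / 100.0f;             // 0.0299999993f, not exactly 0.03
    // Reading the property back converts to int, truncating toward zero.
    int reported = (int)(param * 100.0);  // 2.9999999... -> 2
    printf("set 3, property now reports %d\n", reported); // prints 2
    // "add gamma 1" computes reported + 1 = 3, which maps back to 2
    // again, so the property never advances. Storing the original
    // integer avoids the lossy round-trip entirely.
    return 0;
}
```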
This was broken in cac7702, which effectively changed these properties to
use the value as reported by vf_eq, instead of the previously set value,
for the "add" command. This was more robust, but not entirely correct
either, so we keep the new behavior and make vf_eq report its parameters
more accurately.
The display, window, keyboard and cursor structures are now cleanly and
logically separated. This could also prevent a future bug where no shm
format is set when the cursor image is loaded (which has never happened
so far).
Add --video-align-x/y, --video-pan-x/y, --video-scale options and
properties. See the additions to the manpage for description and
semantics.
These transformations are intentionally done on top of panscan. Unlike
the (now removed) --panscanrange option, this doesn't affect the default
panscan behavior. (Although panscan itself becomes kind of useless if
the new options are used.)
This option allowed you to extend the range of the panscan controls, so
that you could essentially use it to scale the video. This will be
replaced by a separate option to set the zoom factor directly.
See github issue #194.
Unfortunately, this breaks the property that going back in the playlist
always works as expected. This changes, because the playlist_prev
command will work on the reshuffled playlist, instead of loading the
previously played files in order. If this ever becomes an issue, I
might revert this commit.
Now the code does the same as the original MPlayer VAAPI patch, instead
of trying to map the profiles exactly.
See previous commit for justification and discussion.
Instead, do what MPlayer did all these years. It worked for them, so
there's probably no reason to change this.
Apparently fixes playback with some files, where the VDPAU decoder does
not formally support a profile, but decoding works with a more powerful
profile anyway.
Though note that MPlayer did this because it couldn't do it in a better
way (no decoder reported profiles available when creating the VDPAU
decoder), so it's not entirely clear whether this is a good idea. An
alternative implementation might try to map the profiles exactly, and
fall back if the exact profile is not available. But this hacky solution
works too.