We found that the stretching - although it usually improves the looks of
the fonts - is incorrect.
On DVD, subtitles can cover the full area of the picture, and they have
the same pixel aspect as the movie itself.
Too bad many commercially released DVDs use bitmap fonts made with the
wrong pixel aspect (i.e. assuming 1:1); --stretch-dvd-subs will make
these look prettier in that case.
This removes "--hwdec=crystalhd".
I doubt anyone even tried to use this. But even if someone wants to
use it, the decoders can still be explicitly invoked with e.g.:
--vd=lavc:h264_crystalhd
The only advantage our special code provided was fallback to
software decoding. (But I'm not sure how the ffmpeg crystalhd
pseudo-decoder actually behaves.)
Removing this will allow some simplifications as soon as we don't need
vdpau_old.c anymore.
This uses vdpau OpenGL interop to convert a vdpau surface to a texture.
Note that this is a bit weak and primitive. Deinterlacing (or any other
form of vdpau postprocessing) is not supported. vo_opengl chroma scaling
and chroma sample position are not supported. Internally, the vdpau
video surfaces are converted to an RGBA surface first, because using the
video surfaces directly is too complicated. (These surfaces are always
split into separate fields, and the vo_opengl core expects progressive
frames or frames with weaved fields.)
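For reference, here is a minimal sketch (not the actual code in this
commit) of the GL_NV_vdpau_interop sequence this relies on, assuming
the RGBA output surface has already been produced by the video mixer;
error handling is omitted and, in real code, the glVDPAU* entry points
are loaded as extension function pointers:

    #include <stdint.h>
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <vdpau/vdpau.h>

    static GLvdpauSurfaceNV map_output_surface(VdpDevice device,
                                               VdpGetProcAddress *get_proc_address,
                                               VdpOutputSurface surface,
                                               GLuint texture)
    {
        /* Once per GL context in real code; shown inline for brevity. */
        glVDPAUInitNV((void *)(uintptr_t)device, (void *)get_proc_address);

        /* Register the RGBA output surface as backing for 'texture'. */
        GLvdpauSurfaceNV gl_surface =
            glVDPAURegisterOutputSurfaceNV((void *)(uintptr_t)surface,
                                           GL_TEXTURE_2D, 1, &texture);

        /* While mapped, vo_opengl can sample the texture like any other. */
        glVDPAUMapSurfacesNV(1, &gl_surface);
        return gl_surface;
    }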
VA-API's OpenGL/GLX interop is pretty bad and perhaps slow (it renders
an X11 pixmap into an FBO, and has to go over X11, probably involving
one or more copies), and this code serves more as an example, rather
than for serious use. On the other hand, this might work much better
than vo_vaapi, even if slightly slower.
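As a rough sketch of what the libva GLX interop boils down to (again
not the actual code, and with error handling omitted): a texture is
wrapped into a libva GLX surface once, and every frame is copied into
it with vaCopySurfaceGLX(), which internally goes through an X11
pixmap - hence the extra copies mentioned above.

    #include <GL/gl.h>
    #include <va/va.h>
    #include <va/va_glx.h>

    static void *copy_frame_to_texture(VADisplay dpy, VASurfaceID surface,
                                       GLuint texture)
    {
        void *gl_surface = NULL;
        /* Bind the GL texture to a libva GLX surface (once per texture). */
        vaCreateSurfaceGLX(dpy, GL_TEXTURE_2D, texture, &gl_surface);
        /* Copy the decoded frame into the texture; goes over X11/GLX. */
        vaCopySurfaceGLX(dpy, gl_surface, surface, VA_FRAME_PICTURE);
        return gl_surface;   /* released later with vaDestroySurfaceGLX() */
    }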
Most hardware decoding APIs provide some OpenGL interop. This allows
using vo_opengl without having to read the video data back from the GPU.
This requires adding a backend for each hardware decoding API. (Each
backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
as a set of OpenGL textures.
Add infrastructure to support this. The next commit will add support for
VA-API.
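A hypothetical sketch of what such a backend interface could look like
(the names gl_hwdec_driver, create, map_image and destroy are
illustrative, not mpv's actual declarations): each hardware decoding
API fills in these hooks and hands vo_opengl a set of textures per
frame.

    #include <GL/gl.h>

    struct gl_hwdec;     /* per-backend state, owned by vo_opengl    */
    struct mp_image;     /* decoded frame carrying the API's surface */

    struct gl_hwdec_driver {
        /* Set up API-specific GL interop state (called once). */
        int (*create)(struct gl_hwdec *hw);
        /* Expose the frame's hardware surface as GL textures. */
        int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image,
                         GLuint *out_textures);
        /* Release all interop state. */
        void (*destroy)(struct gl_hwdec *hw);
    };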
We had some code for checking profiles earlier, which was removed in
commits 2508f38 and adfb71b. These commits mentioned that (working) hw
decoding was sometimes prevented due to profile checking, but I can't
find the samples anymore that showed this behavior. Also, I changed my
opinion, and I think checking the profiles is something that should be
done so that fallback to software decoding works better.
The checks roughly follow VLC's vdpau profile checks, although we do
not check codec levels. (VLC's profile checks aren't necessarily
completely correct, but they're a welcome help anyway.)
Add a --vd-lavc-check-hw-profile option, which allows skipping the
profile check.
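To illustrate the kind of check meant here, a hypothetical H.264-only
mapping (not the actual table used by mpv or VLC): the profile
reported by libavcodec is translated to a vdpau decoder profile, and
anything unknown falls back to software decoding.

    #include <libavcodec/avcodec.h>
    #include <vdpau/vdpau.h>

    static int map_h264_profile(int ff_profile, VdpDecoderProfile *out)
    {
        switch (ff_profile) {
        case FF_PROFILE_H264_CONSTRAINED_BASELINE:
        case FF_PROFILE_H264_BASELINE:
            *out = VDP_DECODER_PROFILE_H264_BASELINE;
            return 0;
        case FF_PROFILE_H264_MAIN:
            *out = VDP_DECODER_PROFILE_H264_MAIN;
            return 0;
        case FF_PROFILE_H264_HIGH:
            *out = VDP_DECODER_PROFILE_H264_HIGH;
            return 0;
        default:
            return -1;   /* unknown profile -> use software decoding */
        }
    }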
Roughly follows MPlayer svn commits 36492 and 36493. We also remove
the volume peak reporting. (There are much better libavfilter filters
for this, I think.)
af_format is the old audio conversion filter. It could do all possible
conversions supported by the audio chain. However, ever since the
addition of af_lavrresample, most conversions are done by
libav/swresample, and af_format is used as fallback.
Separate out the fallback cases and remove af_format. af_convert24 does
24 bit <-> 32 bit conversions, while af_convertsignendian does sign and
endian conversions. Maybe the way the conversions are split sounds a bit
odd, but the former changes the size of the audio data, while the latter
works fully in-place, so at least the buffer management differs.
This requires a quite complicated algorithm to make sure all these
"partial" conversion filters can actually get from one format to
another. E.g. s24le->s32be always requires convertsignendian and
convert24, but af.c has no idea what the intermediate format should
be. So I added a graph search (trying every possible format and
filter) to determine required format and filter. When I wrote this,
it seemed this was still better than messing everything into
af_lavrresample, but maybe this is overkill and I'll change my
opinion. For now, it seems nice to get rid of af_format though.
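A rough sketch of how such a search could look (this is not the actual
af.c code; MAX_FMTS, the hop struct and the can_convert hook are made
up for illustration): formats are nodes, the partial conversion filters
are edges, and a breadth-first search over every format/filter
combination yields the first filter to insert and the intermediate
format to request from it.

    #define MAX_FMTS 64   /* hypothetical bound on the format enum */

    /* Whether a given filter can convert 'from' directly into 'to';
     * in af.c terms this would query af_convert24,
     * af_convertsignendian, af_lavrresample, and so on. */
    typedef int (*can_convert_fn)(int from, int to);

    struct hop { int filter; int fmt; };

    static int find_conversion(int src, int dst,
                               can_convert_fn *filters, int n_filters,
                               struct hop *first_hop)
    {
        if (src == dst)
            return 0;                     /* already in the right format */

        int visited[MAX_FMTS] = {0};
        struct hop via[MAX_FMTS];         /* predecessor for backtracking */
        int queue[MAX_FMTS], head = 0, tail = 0;

        visited[src] = 1;
        queue[tail++] = src;

        while (head < tail) {
            int cur = queue[head++];
            for (int f = 0; f < n_filters; f++) {
                for (int next = 0; next < MAX_FMTS; next++) {
                    if (visited[next] || !filters[f](cur, next))
                        continue;
                    visited[next] = 1;
                    via[next] = (struct hop){ .filter = f, .fmt = cur };
                    if (next == dst) {
                        /* Walk back to the hop taken directly after src:
                         * that is the filter to insert first, and the
                         * format to request from it. */
                        int fmt = dst;
                        while (via[fmt].fmt != src)
                            fmt = via[fmt].fmt;
                        *first_hop = (struct hop){ via[fmt].filter, fmt };
                        return 0;
                    }
                    queue[tail++] = next;
                }
            }
        }
        return -1;                        /* no filter chain reaches dst */
    }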
The AC3->IEC61937 conversion isn't supported anymore, but I don't think
this is needed anywhere. Most AOs test all formats explicitly, or use
the AF_FORMAT_IS_IEC61937() macro (which includes AC3).
One positive consequence of this change is that conversions always
include dithering (done by libav/swresample), instead of possibly going
through af_format, which doesn't do anything fancy.
Rename af_force to af_format. It's essentially compatible with command
line uses of af_format. We retain a compatibility alias for af_force.
Now that talloc has been removed, the license can be switched back to
GPLv2+. Actually, there never was a GPLv2+ licensed MPlayer (fork or
not) until now; the removal of some GPLv2-only code is what makes this
possible. Rewrite the Copyright file to explain the reasons for the licenses
MPlayer and forks use. The old Copyright file didn't contain anything
interesting anymore, and all information it contained is available at
other places in the source tree.
The reason for the license change itself is that it should improve
interoperability with differently licensed code in general.
This essentially reverts commit 1752808.
The argument for this change is that --loop should set how often the
file is played, not the number of additional repeats.
Based on pull request 277, with additions to the manpage and removal
of "--loop=0".
Signed-off-by: wm4 <wm4@nowhere>
This commit adds the --force-window option, which will cause mpv always
to create a window when started. This can be useful when pretending that
mpv is a GUI application (which it isn't, but users pretend anyway),
where playing audio files would otherwise run mpv in the background
without a window to control it.
This doesn't actually create the window immediately: it does so only
after initializing playback and when it is clear that there won't
be any actual video. This could be a problem when starting slow or
completely stuck network streams (mpv would remain frozen in the
background), or if video initialization somehow is stuck forever in
an in-between state (like when the decoder doesn't output a video
frame, but doesn't return an error either). Well, we can pretend only
so much that mpv is a GUI application.
This is preliminary. There are still tons of issues, and any aspect
of scripting may change in the future. I decided to merge this
(preliminary) work now because it makes it easier to develop it, not
because it's done. lua.rst is clear enough about it (plus some
sarcasm).
This requires linking to Lua. Lua has no official pkg-config file, but
there are distribution-specific .pc files, all with different names.
Adding a non-pkg-config based configure test was considered, but we'd
rather not.
One major complication is that libquvi links against Lua too, and if
the Lua version is different from mpv's, you will get a crash as soon
as libquvi uses Lua. (libquvi by design always runs when a file is
opened.) I would consider this the problem of distros and whoever
builds mpv, but to make things easier for users, we add a terrible
runtime test to the configure script, which probes whether libquvi
will crash. This is disabled when cross-compiling, but in that case
we hope the user knows what he is doing.
This code is actually quite inefficient: it reuses the (slow, simple)
screenshot code. It uses an inefficient method to read the image
(vaGetImage() instead of vaDeriveImage()), allocates new memory for
each frame that is read, and it tries all image formats again each
time.
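For comparison, a hedged sketch of the faster libva path hinted at
above (not code from this commit; error handling omitted):
vaDeriveImage() exposes the surface's own memory as a VAImage where the
driver supports it, so no per-frame allocation or extra copy is needed,
while the screenshot code always allocates an image and copies into it
with vaGetImage().

    #include <va/va.h>

    static void *map_frame(VADisplay dpy, VASurfaceID surface, VAImage *image)
    {
        void *data = NULL;
        /* Fast path: wrap the surface memory directly in a VAImage. */
        if (vaDeriveImage(dpy, surface, image) == VA_STATUS_SUCCESS) {
            vaMapBuffer(dpy, image->buf, &data);
            return data;   /* caller: vaUnmapBuffer() + vaDestroyImage() */
        }
        /* Otherwise one would fall back to vaCreateImage()/vaGetImage(),
         * which is what the reused screenshot code does every frame. */
        return NULL;
    }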
Also, in my tests it always picked NV12 as image format, which is not
ideal if you actually want to filter the video, and vo_xv can't handle
this format without conversion either.
However, a user confirmed that it worked for him, so everything is fine.
Merged from pull request #246 by xylosper. Minor cosmetic changes, some
adjustments (compatibility with older libva versions), and manpage
additions by wm4.
Signed-off-by: wm4 <wm4@nowhere>
By default, libavformat uses UDP for rtsp playback. This doesn't work
very well. Apparently the reason is that the buffer sizes libavformat
chooses for UDP are way too small, and switching to TCP gets rid of this
issue entirely (thanks go to Reimar Döffinger for figuring this out).
In theory, you can set buffer sizes as libavformat options, but that
doesn't seem to help.
Add an option to select the rtsp transport, and make TCP the default.
Also remove an outdated comment from stream.c.
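For reference, forcing the TCP transport through libavformat amounts to
setting the demuxer's "rtsp_transport" option; a minimal sketch (not
mpv's actual option plumbing):

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int open_rtsp_over_tcp(AVFormatContext **ctx, const char *url)
    {
        AVDictionary *opts = NULL;
        /* Interleave RTP over the RTSP TCP connection instead of UDP. */
        av_dict_set(&opts, "rtsp_transport", "tcp", 0);
        int r = avformat_open_input(ctx, url, NULL, &opts);
        av_dict_free(&opts);
        return r;
    }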
Allows for example: --status-msg='${?pause==yes:(Paused) } ...' to
emulate the normal terminal status line. It's useful in other situations
too.
I'm a bit worried about extending this mini-DSL, and sure hope nobody
will implement a generic formula evaluator at some point in the future.
But for now we're probably safe.
Note that this is intentionally never done if the AO or softvolume is
different, or if the current volume control method is thought to control
system wide volume (such as ALSA) or otherwise user controllable (such
as PulseAudio). The intention is to keep things robust and to avoid
messing with the user's audio settings as far as possible, while still
providing the ability to resume volume if it makes sense.
The reverted commit broke text subtitles without embedded fonts. Will
look for a better solution later. Revert it for now, since I'm starting
to get bug reports.
This reverts commit 4a9f618d9f.
Improves display of images and video with alpha channel, especially if
the transparent regions contain (supposedly invisible) garbage
color values.
This is to avoid the 30s hang while mpv caches fonts. In practice, all
the fonts an average user is going to use are embedded in mkv files, so
there is no reason to build fontconfig's cache for all of OS X's system
directories.
I might add something similar for terminal usage, but I am highly undecided.
These would have to be updated manually all the time. Replacing them
automatically would be possible, but additional work, and would force
regeneration of the manpage way too often.
We decided that we don't need these fields.