I don't think we need these flags anymore. Simplify the code and get rid
of the vf_format struct.
There still is the vf_format.configured field, but this can be replaced
by checking for a valid image format.
This adds vf_chain, which unlike vf_instance refers to the filter chain
as a whole. This makes the filter API less awkward, and will allow
handling format negotiation better.
vf_match_csp() improves automatic filter insertion, but this really
should be done by the generic filter code. Remove vf_match_csp() and all
code using it in preparation for that.
This commit temporarily makes handling of filter insertion worse, but
it will be fixed with the following commits.
They didn't do anything.
vf_screenshot.c actually did release the previous image, but that's not
really required. At worst you could take a screenshot and get an old
frame when there's no new frame yet.
The --flip option flipped the image upside-down, by trying to use VO
support, or if not available, by inserting a video filter. I'm not sure
why it existed. Maybe it was important in ancient times when VfW based
decoders output an image this way (but even then, flipping an image is a
free operation by negating the stride).
One nice thing about this is that it provided a possible path for
implementing video orientation, which is a feature we should probably
support eventually. The important part is that it would be for free for
VOs that support it, and would work even with hardware decoding.
But for now get rid of it. It's useless, trivial, stands in the way, and
supporting video orientation would require solving other problems first.
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the somewhat awkward mp_get_av_frame_pkt_ts() function
with mp_pts_from_av(), which simply converts timestamps the same way the
old function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
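A rough sketch of the conversion idea (these are not the actual mpv
helpers; the function names and the MP_NOPTS_VALUE definition are
assumptions made for illustration):
    #include <stdint.h>
    #include <string.h>
    #include <math.h>
    #include <libavutil/avutil.h>
    #include <libavutil/rational.h>
    #define MP_NOPTS_VALUE (-1e300)   /* assumed "no timestamp" sentinel */
    /* mpv double -> libavcodec int64_t */
    static int64_t pts_to_av(double mp_pts, const AVRational *tb)
    {
        if (mp_pts == MP_NOPTS_VALUE)
            return AV_NOPTS_VALUE;
        if (tb && tb->num > 0)
            return llrint(mp_pts / av_q2d(*tb)); /* timebase set: convert */
        /* no timebase: old behavior, reinterpret the double bits */
        int64_t res;
        memcpy(&res, &mp_pts, sizeof(res));
        return res;
    }
    /* libavcodec int64_t -> mpv double */
    static double pts_from_av(int64_t av_pts, const AVRational *tb)
    {
        if (av_pts == AV_NOPTS_VALUE)
            return MP_NOPTS_VALUE;
        if (tb && tb->num > 0)
            return av_pts * av_q2d(*tb);
        double res;
        memcpy(&res, &av_pts, sizeof(res));
        return res;
    }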
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
The vf_eq context contains a very large lookup table, and the way the
default values were set caused the whole vf_eq context to be embedded in
the compiled binary.
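To illustrate the kind of bloat (a made-up example, not the actual
vf_eq code):
    struct vf_eq_priv {
        int brightness, contrast;
        unsigned char lut[256 * 256];  /* large table, computed at runtime */
    };
    /* Bad: the whole struct, table included, is emitted into .rodata just
     * to carry two small option defaults. */
    static const struct vf_eq_priv priv_defaults = { .contrast = 1 };
    /* Better: keep only the option defaults static, and allocate/compute
     * the table when the filter is actually configured. */
    static const struct { int brightness, contrast; } opt_defaults = { 0, 1 };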
All filters now either use the generic option parser, or don't have
options. This finally finishes a transition started in 2003 (see git
commit 33b62af947).
Why are MPlayer devs so monumentally lazy? Sorry, but this takes the
cake. You had 10 years.
Mostly backwards compatible, we don't change much because we just want
to get rid of the legacy option string handling.
You can't pass an aspect as first argument anymore.
Apparently you can get this with: stereo3d=ab[2]{l,r}:sbs[2]{l,r}
So it seems the filter is redundant and can be removed.
Also see FFmpeg commit 2f11aa141a01.
Unfortunately, this forces filtering both luma and chroma, because
otherwise we'd have to deal with libavfilter's vf_noise and its weird
handling of YUV vs. RGB formats. If we filtered luma only, for example,
it would filter only red in RGB mode, because the filter goes by
component and there's no way to distinguish YUV and RGB through the
filter's options alone.
Also remove the ability to disable deinterlacing at runtime. You can
still disable deinterlacing at runtime by using the ``D`` key and its
automatic filter insertion/removal.
This will allow old filters to run libavfilter instead by calling
vf_lw_set_graph(), which turns the filter into a wrapper, using a given
libavfilter graph.
Later commits use that to automatically "reroute" a bunch of filters to
libavfilter. We want to get rid of the old MPlayer filter code, because
it's bad and unmaintained, but we still don't want to force everyone to
use vf_lavfi, so this solution will do for a while.
The previous API worked under the assumption that download_image is
always called after map_image. In practice this is true, but it's better
to have a more generic API that doesn't depend on the order in which the
functions are called.
The hwdec driver can be loaded, even if it's not used (e.g. when playing
a file with no hardware decoding after one with it enabled).
Also, check whether dlimage is NULL. Since this calls into the native
hwdec API, there's a chance a driver could fail doing this; it's better
to check the return value, even if this case currently can't happen.
mpv was hardcoded to always consider the right Alt key as Alt Gr, but
there are particular combinations of platforms and keyboard layouts
where it's more convenient to treat the right Alt as a keyboard modifier
just like the left one.
Fixes #388
The harder work was done in the previous commits. After that this feature comes
out almost for free.
The only problem is I can't get the textures created with
CGLTexImageIOSurface2D to download properly, so the code performs the
download using some CoreVideo APIs.
If someone knows why download of textures created with CGLTexImageIOSurface2D
doesn't work please contact me :)
This adds support for packed YUV formats (YUYV and UYVY) using the
extension GL_APPLE_rgb_422. While supporting these formats on their own
is not that important (considering most video is planar YUV), they are
used for interoperability with IOSurfaces.
The next commit will use these formats to render VDA hardware decoded
frames through IOSurface and OpenGL interoperability.
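Roughly what the upload looks like (a sketch assuming the extension is
present; which of the two type tokens, GL_UNSIGNED_SHORT_8_8_APPLE or
its _REV_ variant, matches YUYV vs. UYVY ordering is left out here):
    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>
    /* Packed 4:2:2 upload: one texel per pixel, two bytes per texel. */
    static void upload_packed_422(GLuint tex, int w, int h, const void *data)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                     GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE, data);
    }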
This allows vo_opengl to use GL_TEXTURE_RECTANGLE textures, either by
enabling it with the 'rectangle-textures' sub-option, or by having a
hwdec backend force it. By default it's off.
The _only_ reason we're adding this is because VDA can export rectangle
textures only.
We got a crash in libavutil when encoding with Y8 (GRAY8). The reason
was that libavutil was copying a Y8 image allocated by us, and expected
a palette. This is because GRAY8 is a PSEUDOPAL format. It's not clear
what PSEUDOPAL means, and it makes literally no sense at all. However,
it does expect a palette allocated for some formats that are not
paletted, and libavutil crashed when trying to access the non-existent
palette.
This is not strictly needed anymore. (On the other hand, it's not really
possible to do hw decoding with vo_null, because the VO is still
responsible for opening the hw decoder API, but that's another story.)
There are some use cases for this. For example, you can use it to set
defaults of automatically inserted filters (like af_lavrresample). It's
also useful if you have a non-trivial VO configuration, and want to use
--vo to quickly change between the drivers without repeating the whole
configuration in the --vo argument.
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get
This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.
Mostly untested; it does compile with Libav 9.9.
Sometimes, vf_pullup hung on seek. This was because it never was
properly reset. Old timestamps messed up the timestamp calculations,
which made the player show frames for a ridiculously long time, which
was perceived as pausing or hanging.
The old ffmpeg vdpau support code uses separate vdpau pixel formats for
each decoder (pretty much because mplayer's architecture sucked), which
just gets in the way. Force the old decoder's output to IMGFMT_VDPAU,
and remove IMGFMT_IS_VDPAU() where we can remove it.
This should completely remove the differences between the old and new
vdpau decoder outside of the decoder.
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.
Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
pthreads should be available anywhere. Even if not, for environments
without threads a pthread wrapper could be provided that can't actually
start threads, thus disabling features that require threads.
Make pthreads mandatory in order to simplify build dependencies and to
reduce ifdeffery. (Admittedly, there wasn't much complexity, but maybe
we will use pthreads more in the future, and then it'd become a real
bother.)
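A hypothetical sketch of such a wrapper (not part of mpv; only the
thread-creation entry point is shown):
    /* pthread_stub.h -- made-up shim for a platform without threads */
    #include <errno.h>
    typedef struct { int dummy; } pthread_t;
    typedef struct { int dummy; } pthread_attr_t;
    static inline int pthread_create(pthread_t *th, const pthread_attr_t *attr,
                                     void *(*fn)(void *), void *arg)
    {
        (void)th; (void)attr; (void)fn; (void)arg;
        return ENOSYS;  /* creation always fails, so threaded features
                           disable themselves at runtime */
    }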
So, FFmpeg/Libav requires us to figure out video timestamps ourselves
(see last 10 commits or so), but the methods it provides for this aren't
even sufficient. In particular, everything that uses AVI-style DTS (avi,
vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal
frame delay is broken. In this case, libavcodec will shift the packet-
to-image correspondence by the codec delay, meaning that with a delay=1,
the first AVFrame.pkt_dts is not 0, but that of the second packet. All
timestamps will appear shifted. The start time (e.g. the time displayed
when doing "mpv file.avi --pause") will not be exactly 0.
(According to Libav developers, this is how it's supposed to work; just
that the first DTS values are normally negative with formats that use
DTS "properly". Who cares if it doesn't work at all with very common
video formats? There's no indication that they'll fix this soon,
either. An elegant workaround is missing too.)
Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV.
Since these timestamps are not reordered, we wouldn't need to sort them,
but it's less code this way (and possibly more robust, should a demuxer
unexpectedly output PTS).
The original intention of all the timestamp changes recently was
actually to get rid of demuxer-specific hacks and the old timestamp
sorting code, but it looks like this didn't work out. Yet another case
where trying to replace native MPlayer functionality with FFmpeg/Libav
led to disadvantages and bugs. (Note that the old PTS sorting code
doesn't and can't handle frame dropping correctly, though.)
Bug reports:
https://trac.ffmpeg.org/ticket/3178
https://bugzilla.libav.org/show_bug.cgi?id=600
Using --start with files that use DTS only, or which simply have broken
PTS timestamps, would incorrectly drop frames and possibly not execute
the seek correctly.
Add yet another heuristic to detect this. The intent is that --start and
hr-seeks in general should work correctly, but in order to keep things
fast, we still want to allow frame dropping during hr-seek if there are
no problems doing so. Do this by disabling frame dropping by default,
but re-enabling it if there are no problems found for a while. As a
consequence, --start might be somewhat slower, but normal user
interaction should remain as fast as before.
Note that there's something subtle about the added code: the
has_broken_packet_pts field is checked even before the first packet is
fed to dec_video.c, so the field must not be set to 0 right on start.
It's not initially set to 0 anyway, because the heuristic requires
decoding some images before enabling frame drop.
Note 2: it's not clear whether frame dropping during hr-seek really
helps; I didn't benchmark it.
The d_video->pts field was a bit strange. The code overwrote it multiple
times (on decoding, on filtering, then once again...), and it wasn't
really clear what purpose this field had exactly. Replace it with the
mpctx->video_next_pts field, which is relatively unambiguous.
Move the decreasing PTS check to dec_video.c. This means it acts on
decoder output, not on filter output. (Just like in the previous commit,
assume the filter chain is sane.) Drop the jitter vs. reset semantics;
the PTS determined by dec_video.c never goes backwards, and demuxer
timestamps don't "jitter".
Also get rid of the PTS check _after_ filters. This means if there's a
video filter which unsets PTS, no warning will be printed. But we assume
that all filters are well-behaved enough by now.
Refactor the PTS handling code to make it cleaner, and to separate the
bits that use PTS sorting.
Add a heuristic to fall back to DTS if the PTS is non-monotonic. This
code is based on what FFmpeg/Libav use for ffplay/avplay and also
best_effort_timestamp (which is only in FFmpeg). Basically, this 1. just
uses the DTS if PTS is unset, and 2. ignores PTS entirely if PTS is non-
monotonic, but DTS is sorted.
The code is pretty much the same as in Libav [1]. I'm not sure if all of
it is really needed, or if it does more than what the paragraph above
mentions. But maybe it's fine to cargo-cult this.
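A self-contained sketch of that heuristic (struct and function names
are made up; the real logic lives in the player's decoding path):
    #include <stdint.h>
    #include <libavutil/avutil.h>  /* AV_NOPTS_VALUE */
    struct pts_guess {
        int64_t last_pts, last_dts;   /* initialize both to INT64_MIN */
        int faulty_pts, faulty_dts;   /* how often each was non-increasing */
    };
    static int64_t guess_pts(struct pts_guess *s, int64_t pts, int64_t dts)
    {
        if (dts != AV_NOPTS_VALUE) {
            s->faulty_dts += dts <= s->last_dts;
            s->last_dts = dts;
        }
        if (pts != AV_NOPTS_VALUE) {
            s->faulty_pts += pts <= s->last_pts;
            s->last_pts = pts;
        }
        /* Use the reordered PTS only if it misbehaves no more often than
         * the DTS; otherwise (or if PTS is unset) fall back to the DTS. */
        if ((s->faulty_pts <= s->faulty_dts || dts == AV_NOPTS_VALUE) &&
            pts != AV_NOPTS_VALUE)
            return pts;
        return dts;
    }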
This heuristic fixes playback of mpeg4 in ogm, which returns packets
with PTS==DTS, even though the PTS timestamps should follow codec
reordering. This is probably a libavformat demuxer bug, but good luck
trying to fix it.
The way vd_lavc.c returns the frame PTS and DTS to dec_video.c is a bit
inelegant, but maybe better than trying to mess the PTS back into the
decoder callback again.
[1] https://git.libav.org/?p=libav.git;a=blob;f=cmdutils.c;h=3f1c667075724c5cde69d840ed5ed7d992898334;hb=fa515c2088e1d082d45741bbd5c05e13b0500804#l1431
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.
(Some audio decoders actually used to "resync" by reading packets for
resuming playback, but that's not the case anymore.)
Also move the function in dec_video.c to the top of the file.
When using PIC on x86 (e.g. with hardened toolchains) the ebx register
is reserved and cannot be used in assembly code.
For vf_eq we allow the compiler to use memory as input.
For vf_noise we temporarily borrow the ebp register.
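A tiny made-up example of the memory-operand idea (not the actual
filter code); with PIC on 32-bit x86, %ebx holds the GOT pointer and
must not be claimed by the asm:
    static int add_bias(int luma, int bias)
    {
        int out;
        /* "rm" lets the compiler pass 'bias' via memory, so no extra fixed
         * register (such as %ebx) is needed by the asm. */
        __asm__ (
            "movl %1, %0\n\t"
            "addl %2, %0\n\t"
            : "=&r"(out)
            : "r"(luma), "rm"(bias)
            : "cc");
        return out;
    }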
This fixes #361.
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
This means the code that tries to figure out the timestamp from
demuxer and decoder output is now all in dec_video.c. We set the
final timestamp on the returned image (mp_image.pts), as well as
the d_video->pts field.
The way the player uses d_video->pts field is still a bit messy. Maybe
this could be cleaned up later.
It appears PTS sorting was useful only for avi files (and VfW-muxed
mkv). Maybe it was historically also important for decoders with broken
or non-existent PTS reordering (win32 codecs?). But now that we
correctly handle demuxers which output only DTS, it just seems like dead
weight.
Disable it by default. The --pts-association-mode option is now forced
to always use the decoder's PTS value. You can still enable the old
default (auto) or force sorting. But we will probably remove this option
entirely at some point.
Make demux_mkv export timestamps as DTS when it's in VfW mode. This is
needed to get correct timestamps with the new default mode. demux_lavf
already does that.
Having the DTS directly can be useful for restoring PTS values.
The avi file format doesn't actually store PTS values, just DTS. An
older hack explicitly exported the DTS as PTS (ignoring the [I assume]
genpts-generated nonsense PTS), which is not necessary anymore due to
this change.
Now the --no-correct-pts mode is like the normal mode, just with
different timestamp calculations. The semantics should be about the
same as before this commit.
Before this commit, this mode estimated the frame time by subtracting
successive packet PTS values. This is complete nonsense for video
codecs which use reordering. The code compensated frame times for this
nonsense using the FPS value, but confused the rest of the player with
nonsensically jumping timestamps. So, all in all this mode is not very
useful.
Repurpose this mode for fixed frame rate playback. This gives almost the
same behavior as the old mode with forced framerate (--fps option). The
result is simpler and often more robust.
Instead of passing the PTS as separate field, pass it as part of the
usual data structures. Basically, this removes strange artifacts from
the API. (It's not finished, though: the final decoded PTS goes through
strange paths, and filter_video() finally overwrites the decoded
mp_image's pts field with it.)
We also stop using libavcodec's reordered_opaque fields, and use
AVPacket.pts and AVFrame.pkt_pts. This is slightly unorthodox, because
these pts fields are not "really" opaque anymore, yet we treat them as
such. But the end result should be the same, and reordered_opaque is
marked as partially deprecated (it's not clear whether it's really
deprecated).
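A minimal sketch of that usage (not the actual vd_lavc.c code; error
handling trimmed):
    #include <stdint.h>
    #include <libavcodec/avcodec.h>
    /* Decode one packet; on success return the PTS/DTS of the packet the
     * output frame came from, as carried through by libavcodec. */
    static int decode_one(AVCodecContext *avctx, AVPacket *pkt, AVFrame *frame,
                          int64_t *out_pts, int64_t *out_dts)
    {
        int got_frame = 0;
        int ret = avcodec_decode_video2(avctx, frame, &got_frame, pkt);
        if (ret < 0 || !got_frame)
            return ret < 0 ? ret : 0;
        *out_pts = frame->pkt_pts;  /* pts of the corresponding input packet */
        *out_dts = frame->pkt_dts;  /* dts of that packet */
        return 1;
    }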
This is not needed anymore, because we decided that the PAR of the
decoded video matters, and not the PAR of the filtered video that
arrives at the VO.
When mpv is started with some video filters set (--vf is used), and
hardware decoding is requested, and hardware decoding would be possible,
but is prevented due to video filters that accept software formats only,
the fallback didn't work properly sometimes.
This fallback works rather violently: it tries to initialize the filter
chain, and if it fails it throws away the frame decoded using the
hardware, and retries with software. The case that didn't work was when
decoding the current packet didn't immediately lead to a new frame. Then
the filter chain wouldn't be reinitialized, and the playloop would stop
playback as soon as it encounters the error flag.
Fix this by resetting the filter error flag (back to "uninitialized"),
which is a rather violent, but somewhat working solution.
The fallback in general should perhaps be cleaned up later.
If the --fps option was given (MPOpts->force_fps), the demuxer FPS value
was overwritten with the forced value. This was fine, since the demuxer
value wasn't needed anymore. But with the recent changes not to write to
the demuxer stream headers, we don't want to do this anymore. So
maintain the (forced/updated) FPS value in dec_video->fps.
The removed code in loadfile.c is probably redundant, and an artifact
from past refactorings.
Note that sub.c will now always use the demuxer FPS value, instead of
the user override value. I think this is fine, because it used the
demuxer's video size values too. (And it's rare that these values are
used at all.)
Now the actual decoder doesn't need to care about this anymore, and it's
handled in generic code instead. This simplifies vd_lavc.c, and in
particular we don't need to detect format changes in the old way
anymore.
The only reason why these structs were dynamically allocated was to
avoid recursive includes in stheader.h, which is (or was) a very central
file included by almost all other files. (If a struct is referenced via
a pointer type only, it can be forward referenced, and the definition of
the struct is not needed.) Now that they're out of stheader.h, this
difference doesn't matter anymore, and the code can be simplified.
Also sneak in some sanity checks.
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
This is similar to the sh_audio commit.
This is mostly cosmetic in nature, except that it also adds automatic
freeing of the decoder driver's state struct (which was in
sh_video->context, now in dec_video->priv).
Also remove all the stheader.h fields that are not needed anymore.
This drops the --pp option, which was probably broken for a while. The
option automatically inserted the "pp" filter. The value passed to it
was ignored (which is probably broken, it always selected maximal
quality).
Inserting this filter can be done simply with --vf=pp, so this is not
needed anymore.
This means most code accessing this struct must now include hwdec.h
instead of dec_video.h. I just put it into dec_video.h at first because
I thought a separate file would be a waste, but it's more proper to do
it this way, as there are too many files which include dec_video.h only
to get the mp_hwdec_info definition.
vo_opengl always loads the hwdec backend lazily, so hwdec_request_api()
has to be called to possibly load it. This makes vf_vavpp work with
software decoding. (Hardware decoding loads the backend before the
filter is initialized, so this case is different.)
Also, the VFCTRL_GET_HWDEC_INFO call doesn't need to be checked. If it
fails, the info will be left blank.
This initialized only the load_api and load_api_ctx fields, and left the
other fields as they were. This failed with vf_vavpp, which assumed all
fields are initialized.
This commit adds a new build system based on waf. configure and Makefile
are deprecated effective immediately and someday in the future they will be
removed (they are still available by running ./old-configure).
You can find how the choice for waf came to be in `DOCS/waf-buildsystem.rst`.
TL;DR: we couldn't get the same level of abstraction and customization with
other build systems we tried (CMake and autotools).
For guidance on how to build the software now, take a look at README.md
and the cross compilation guide.
CREDITS:
This is a squash of ~250 commits. Some of them are not by me, so here is the
deserved attribution:
- @wm4 contributed some Windows fixes, renamed configure to old-configure
and contributed to the bootstrap script. Also, GNU/Linux testing.
- @lachs0r contributed some Windows fixes and the bootstrap script.
- @Nikoli contributed a lot of testing and discovered many bugs.
- @CrimsonVoid contributed changes to the bootstrap script.
The existing code tried to remove the "extra" profile flags for h264.
FF_PROFILE_H264_INTRA doesn't matter for us at all, because it's set
only for profiles the vdpau/vaapi APIs don't support.
The FF_PROFILE_H264_CONSTRAINED flag, on the other hand, is added to
H264_BASELINE and marks the file as a real subset of H264_MAIN and
H264_HIGH. Removing that flag would select the BASELINE profile, which
appears to be rarely supported by hardware decoders. This means we
accidentally rejected perfectly hardware decodable files. Use MAIN for
it instead.
(vaapi has explicit support for CONSTRAINED_BASELINE, but it seems to be
a new thing, and is not reported as supported where I tried. So don't
bother to check it, and do the same as on vdpau.)
See github issue #204.
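A rough sketch of the resulting profile mapping (illustrative only, not
the exact mpv code):
    #include <libavcodec/avcodec.h>
    static int normalize_h264_profile(int profile)
    {
        /* Constrained baseline is a true subset of MAIN/HIGH; hardware
         * decoders rarely report BASELINE support, so ask for MAIN. */
        if (profile == FF_PROFILE_H264_CONSTRAINED_BASELINE)
            return FF_PROFILE_H264_MAIN;
        return profile;
    }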
When blending OSD and subtitles onto the video, we write bogus alpha
values. This doesn't normally matter, because these values are normally
unused and discarded. But at least on Wayland, the alpha values are used
by the compositor and lead to transparent windows even with opaque
video in places where the OSD happens to use transparency.
(Also see github issue #338.)
Until now, the alpha basically contained garbage. The source factor
GL_SRC_ALPHA meant that alpha was multiplied with itself. Use GL_ONE
instead (which is why we have to use glBlendFuncSeparate()). This should
give correct results, even with video that has alpha. (Or at least it's
something close to correct, I haven't thought too hard how the
compositor will blend it, and in fact I couldn't manage to test it.)
If glBlendFuncSeparate() is not available, fall back to glBlendFunc(),
which does the same as the code did before this commit. Technically, we
support GL 1.1, but glBlendFuncSeparate is 1.4, and I guess we should
try not to crash if vo_opengl_old runs on a system with GL 1.1 drivers
only.
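Roughly, the blend setup described above (a sketch with direct GL
calls; mpv actually goes through its GL function loader):
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>
    /* have_separate: whether glBlendFuncSeparate (GL 1.4) is available */
    static void setup_osd_blend(int have_separate)
    {
        if (have_separate) {
            /* Color: normal alpha blending. Alpha: written with GL_ONE, so
             * the framebuffer stays opaque wherever the video is opaque. */
            glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                                GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        } else {
            /* Old behavior: alpha gets multiplied with itself. */
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }
        glEnable(GL_BLEND);
    }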
This was supposed to handle preemption better. I still think the current
state isn't very nice, since the decoder can "accidentally" call the
previous render function after preemption (instead of calling the
reloaded function), so there might be issues. But all in all, this
dummy_render function is a bit confusing, and still not entirely
correct, so it's not worth it.
This removes "--hwdec=crystalhd".
I doubt anyone even tried to use this. But even if someone wants to
use it, the decoders can still be explicitly invoked with e.g.:
--vd=lavc:h264_crystalhd
The only advantage our special code provided was fallback to
software decoding. (But I'm not sure how the ffmpeg crystalhd
pseudo-decoder actually behaves.)
Removing this will allow some simplifications as soon as we don't need
vdpau_old.c anymore.
This uses vdpau OpenGL interop to convert a vdpau surface to a texture.
Note that this is a bit weak and primitive. Deinterlacing (or any other
form of vdpau postprocessing) is not supported. vo_opengl chroma scaling
and chroma sample position are not supported. Internally, the vdpau
video surfaces are converted to a RGBA surface first, because using the
video surfaces directly is too complicated. (These surfaces are always
split into separate fields, and the vo_opengl core expects progressive
frames or frames with weaved fields.)
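A very rough sketch of the GL_NV_vdpau_interop calls involved (this is
not the actual hwdec code; it assumes the extension and its prototypes
are available, and that the RGBA VdpOutputSurface has already been
rendered into by the video mixer):
    #define GL_GLEXT_PROTOTYPES
    #include <stdint.h>
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <vdpau/vdpau.h>
    static GLvdpauSurfaceNV register_surface(VdpDevice dev,
                                             VdpGetProcAddress *get_proc,
                                             VdpOutputSurface surf, GLuint tex)
    {
        glVDPAUInitNV((void *)(uintptr_t)dev, (void *)get_proc);
        GLvdpauSurfaceNV s = glVDPAURegisterOutputSurfaceNV(
                (void *)(uintptr_t)surf, GL_TEXTURE_2D, 1, &tex);
        glVDPAUSurfaceAccessNV(s, GL_READ_ONLY);
        glVDPAUMapSurfacesNV(1, &s);  /* 'tex' now samples the vdpau surface */
        return s;
    }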
Instead of checking for resolution and image format changes, always
fully reinit on any parameter change. Let init_video do all required
initializations, which simplifies things a little bit.
Change the gl_video/hardware decoding interop API slightly, so that
hwdec initialization gets the full image parameters.
Also make some cosmetic changes.
These formats are helpful for distinguishing surfaces with and without
alpha. Unfortunately, Libav and older versions of FFmpeg don't support
them, so code will break. Fix this by treating these formats specially
on the mpv side, mapping them to RGBA on Libav, and unsetting the alpha
bit in the mp_imgfmt_desc struct.
Before the bitstream_buffers field was deprecated, you had to free it,
otherwise you would leak memory.
(Although vdpau.c uses a new API, they managed to introduce a new
deprecation this quickly. This is a complaint.)
This introduces a memory leak of 12 bytes per file on every file on some
_older_ libavcodec versions. This is minor enough that I don't care.
Video has up to 4 textures, if you include obscure formats with alpha.
This means alpha formats could always overwrite the first scaler
texture, leading to corrupted video display. This problem was recently
brought to light, when commit 571e697 started to explicitly unbind all 4
video textures, which broke rendering for non-alpha formats as well.
Fix this by reserving the correct number of texture units.
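The fix amounts to something like this (a sketch; names and unit
numbers are illustrative):
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>
    /* Up to 4 video planes (alpha formats): give each plane its own
     * texture unit, and only hand out units >= 4 to scaler textures. */
    static void bind_video_planes(const GLuint tex[4], int num_planes)
    {
        for (int n = 0; n < num_planes; n++) {
            glActiveTexture(GL_TEXTURE0 + n);
            glBindTexture(GL_TEXTURE_2D, tex[n]);
        }
        glActiveTexture(GL_TEXTURE0);
    }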
VA-API's OpenGL/GLX interop is pretty bad and perhaps slow (it renders
an X11 pixmap into a FBO, and has to go over X11, probably involving one
or more copies), and this code serves more as an example, rather than
for serious use. On the other hand, this might work much better than
vo_vaapi, even if slightly slower.
Most hardware decoding APIs provide some OpenGL interop. This allows
using vo_opengl, without having to read the video data back from GPU.
This requires adding a backend for each hardware decoding API. (Each
backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
as a set of OpenGL textures.
Add infrastructure to support this. The next commit will add support for
VA-API.
The configure followed 5 different conventions of defines because the
next guy always wanted to introduce a new, better way to unify them[1].
For a hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
All is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef` because it can help spot
typos and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
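For instance (a hedged example, reusing the made-up feature names from
above):
    /* config.h style: every feature is defined, to either 1 or 0 */
    #define HAVE_HURR 1
    #define HAVE_DURR 0
    /* Usage: plain #if everywhere. With -Wundef, a typo like HAVE_DUR is
     * diagnosed instead of silently evaluating to 0 ("feature disabled"). */
    #if HAVE_HURR
    int hurr_init(void);
    #endif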
There are some Microsoft Windows symbols which are traditionally used by
the mplayer core, because it used to be convenient (avi was the big
format, using binary windows decoders made sense...). So these symbols
have the exact same definition as the Windows one, and if mplayer is
compiled on Windows, the symbols from windows.h are used.
This broke recently just because some files were shuffled around, and
the symbols defined in ms_hdr.h collided with windows.h ones. Since we
don't have windows binary decoders anymore, there's not the slightest
reason our symbols should have the same names. Rename them to reduce the
risk for collision, and to fix the recent regression.
Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
its own version if the windows headers don't define it, and ao_wasapi is
not available on systems where this symbol is missing.
Also reindent ms_hdr.h.
We had some code for checking profiles earlier, which was removed in
commits 2508f38 and adfb71b. These commits mentioned that (working) hw
decoding was sometimes prevented due to profile checking, but I can't
find the samples anymore that showed this behavior. Also, I changed my
opinion, and I think checking the profiles is something that should be
done for better fallback to software decoding behavior.
The checks roughly follow VLC's vdpau profile checks, although we do
not check codec levels. (VLC's profile checks aren't necessarily
completely correct, but they're a welcome help anyway.)
Add a --vd-lavc-check-hw-profile option, which skips the profile check.
This one really did bite me hard (see previous commit), so enable it by
default.
Fix some cases of shadowing throughout the codebase. None of these
change behavior, and all of these were correct code, and just tripped up
the warning.
As preparation for resizing the window with input commands in the
following commit.
Since there are already so many functions which somehow resize the
window, add the word "highlevel" to the name of this new function.
We mixed the "old" AVFrame management functions (avcodec_alloc_frame,
avcodec_free_frame) with reference counting. This doesn't work
correctly; you must use av_frame_alloc and av_frame_free. Of course
ffmpeg doesn't warn us about the bad usage, but will just mess up
things silently. (Thanks a lot...)
While the alloc function seems to be 100% compatible, the free function
will do bad things, such as freeing memory that might still be
referenced by another frame. I didn't experience any actual bugs, but
maybe that was pure luck.
This stopped working when the code was changed to create a window even
if --wid is used.
It appears we can't create our own window in this case, because in X11
there is no difference between a window with the root window as parent,
and a window that is managed by the WM. So make this (kind of worthless)
special case use the root window itself.
On systems that provide legacy OpenGL (up to 2.1), but not GL3 and
later, creating a GL3 context will fail. We then revert to legacy GL.
Apparently the error message printed when the GL3 context creation
fails is confusing. We could just silence it, but there's still a X
error ("X11 error: GLXBadFBConfig"), which would be quite hard to
filter out. For one, it would require messing with the X11 error
handler, which doesn't even carry a context pointer (for application
private data), so we don't even want to touch it. Instead, change
the error message to inform the user what's actually happening: a
fallback to an older version of OpenGL.