Commit Graph

364 Commits

Author SHA1 Message Date
wm4 c731513efa wscript: fix typo 2016-11-22 15:54:44 +01:00
wm4 07b6969ba4 vf_vdpaurb: remove this filter
Was deprecated, superseded by --hwdec=vdpau-copy.
2016-11-22 15:54:44 +01:00
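For reference, selecting the replacement mode mentioned above would look roughly like this (the file name is just a placeholder):

    mpv --hwdec=vdpau-copy video.mkv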
Aman Gupta 3f5b41dfa3 audio/out: add AudioUnit output driver for iOS 2016-11-01 16:25:40 +01:00
Avi Halachmi (:avih) 02d2c2cc97 vo_tct: optional custom size, support non-posix with 80x25 default
Also, replace the UTF-8 half-block character in the source code with a C escape.
2016-10-25 00:03:01 +11:00
rr- dd02369c32 vo_tct: introduce modern caca alternative 2016-10-20 14:59:54 +02:00
Philip Langdale b83bfea05d hwdec_cuda: Rename config variable to be more consistent
'cuda-gl' isn't right - you can turn this on without any GL and
get some non-zero benefit (with the cuda-copy hwaccel). So
'cuda-hwaccel' seems more consistent with everything else.
2016-09-16 14:26:30 +02:00
wm4 2b0c620b22 player: move builtin profiles to a separate file
Move the embedded string with the builtin profiles to a separate
builtin.conf file. This makes it easier to read and edit, and you can
also check it for errors with --include=etc/builtin.conf. (Normally
errors are hidden intentionally, because there's no way to output error
messages this early, and because some options might not be present on
all platforms or with all configurations.)
2016-09-15 14:50:38 +02:00
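As a rough illustration of the error check described above (path and file name are placeholders):

    mpv --include=etc/builtin.conf test.mkv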
wm4 0ccceecdc6 vo_opengl: mali fbdev support
Minimal support just for testing.

Only the window surface creation (including size determination) is
really platform specific, so this could be some generic thing with
platform-specific support as some sort of sub-driver, but on the other
hand I don't see much of a need for such a thing.

While most of the fbdev usage is done by the EGL driver, using this
fbdev ioctl is apparently the only way to get the display resolution.
2016-09-13 18:26:06 +02:00
wm4 274e71ee8b vo_opengl: add hw overlay support and use it for RPI
This overlay support specifically skips the OpenGL rendering chain, and
uses GL rendering only for OSD/subtitles. This is for devices which
don't have performant GL support.

hwdec_rpi.c contains code ported from vo_rpi.c. vo_rpi.c is going to be
deprecated. I left in the code for uploading sw surfaces (as it might
be slightly more efficient for rendering sw decoded video), although
it's dead code for now.
2016-09-12 19:58:58 +02:00
Philip Sequeira 1e684e3cca build: recompile zsh completion if zsh.pl changes 2016-09-10 21:46:03 +02:00
Philip Langdale 2048ad2b8a hwdec/opengl: Add support for CUDA and cuvid/NvDecode
Nvidia's "NvDecode" API (up until recently called "cuvid" is a cross
platform, but nvidia proprietary API that exposes their hardware
video decoding capabilities. It is analogous to their DXVA or VDPAU
support on Windows or Linux but without using platform specific API
calls.

As a rule, you'd rather use DXVA or VDPAU as these are more mature
and well supported APIs, but on Linux, VDPAU is falling behind the
hardware capabilities, and there's no sign that nvidia are making
the investments to update it.

Most concretely, this means that there is no VP8/9 or HEVC Main10
support in VDPAU. On the other hand, NvDecode does export vp8/9 and
partial support for HEVC Main10 (more on that below).

ffmpeg already has support in the form of the "cuvid" family of
decoders. Due to the design of the API, it is best exposed as a full
decoder rather than an hwaccel. As such, there are decoders like
h264_cuvid, hevc_cuvid, etc.

These decoders support two output paths today - in both cases, NV12
frames are returned, either in CUDA device memory or regular system
memory.

In the case of the system memory path, the decoders can be used
as-is in mpv today with a command line like:

mpv --vd=lavc:h264_cuvid foobar.mp4

Doing this will take advantage of hardware decoding, but the cost
of the memcpy to system memory adds up, especially for high
resolution video (4K etc).

To avoid that, we need an hwdec that takes advantage of CUDA's
OpenGL interop to copy from device memory into OpenGL textures.

That is what this change implements.

The process is relatively simple as only basic device context
acquisition needs to be done by us - the CUDA buffer pool is managed
by the decoder - thankfully.

The hwdec looks a bit like the vdpau interop one - the hwdec
maintains a single set of plane textures and each output frame
is repeatedly mapped into these textures to pass on.

The frames are always in NV12 format, at least until 10-bit output
support emerges.

The only slightly interesting part of the copying process is that
CUDA works by associating PBOs, so we need to define these for
each of the textures.

TODO Items:
* I need to add a download_image function for screenshots. This
  would do the same copy to system memory that the decoder's
  system memory output does.
* There are items to investigate on the ffmpeg side. There appears
  to be a problem with timestamps for some content.

Final note: I mentioned HEVC Main10. While there is no 10bit output
support, NvDecode can return dithered 8bit NV12 so you can take
advantage of the hardware acceleration.

This particular mode requires compiling ffmpeg with a modified
header (or possibly the CUDA 8 RC) and is not upstream in ffmpeg
yet.

Usage:

You will need to specify vo=opengl and hwdec=cuda.

Note that hwdec=auto will probably not work as it will try to use
vdpau first.

mpv --hwdec=cuda --vo=opengl foobar.mp4

If you want to use filters that require frames in system memory,
just use the decoder directly without the hwdec, as documented
above.
2016-09-08 16:06:12 +02:00
wm4 a9a55ea7f2 misc: add some annoying mpv_node helpers
Sigh.

Some parts of mpv essentially duplicate this code (with varying levels
of triviality) - this can be fixed "later".
2016-08-28 19:39:05 +02:00
Paul B Mahol e2a54bb1ca audio/filter: remove delay audio filter
A similar filter is available in libavfilter.
2016-08-12 19:45:39 +02:00
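A hedged sketch of using the libavfilter counterpart via af_lavfi instead (the adelay values are illustrative only):

    mpv --af='lavfi=[adelay=1500|1500]' file.mkv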
Aman Gupta 7ca4a453e0 client API: add stream_cb API for user-defined stream implementations
Based on #2630. Some heavy changes by committer.

Signed-off-by: wm4 <wm4@nowhere>
2016-08-07 19:33:20 +02:00
Jan Ekström aaf6e3b58d wscript: add proper non-version'd SONAME for Android
This seems to have become a requirement since API target 23+, and
matches what FFmpeg does.
2016-07-30 00:02:40 +02:00
Chris Mayo f95cde60ff build: add --htmldir option
Defaults to docdir but makes it possible to install html documentation
separately.
2016-07-30 00:02:40 +02:00
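A possible configure invocation using the new option (the prefix and path are arbitrary examples):

    ./waf configure --prefix=/usr --htmldir=/usr/share/doc/mpv/html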
wm4 77e1e8e38e audio: refactor mixer code and delete mixer.c
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to clean up this code a bit and make it work like the
video filter code.

As part of this change, make the balance code work like the volume code,
and add an option backing the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and there
are some other issues, formally deprecate both the old property and the
new option.
2016-07-17 19:21:28 +02:00
wm4 17c5738cb4 d3d: merge angle_common.h into d3d.h
OK, this was dumb. The file didn't have much to do with ANGLE, and the
functionality can simply be moved to d3d.c. That file contains helpers
for decoding, but can always be present (on Windows) since it doesn't
access any D3D specific libavcodec APIs. Thus it doesn't need to be
conditionally built like the actual hwaccel wrappers.
2016-06-28 20:07:56 +02:00
Bin Jin 67a6203ce0 vo_opengl: remove prescaling framework with superxbr prescaler
Signed-off-by: wm4 <wm4@nowhere>
2016-06-18 19:17:28 +02:00
Bin Jin 61bc96518a vo_opengl: remove nnedi3 prescaler 2016-06-18 19:16:27 +02:00
wm4 262ceca731 vo_opengl: fix d3d11 hardware decoding probing on Windows 7
Although D3D11 video decoding is unsupported on Windows 7, the
associated APIs almost work. Where they fail is texture creation, where
we try to create D3D11_BIND_DECODER surfaces. So specifically try to
detect this situation.

One issue is that once the hwdec interop is created, the damage is done,
and it can't use another backend (because currently only 1 hwdec backend
is supported). So that's where we prevent attempts to use it.

It still can fail when trying to use d3d11va-copy (since that doesn't
require an interop backend), but at that point we don't care anymore -
dxva2(-copy) is tried before that anyway.
2016-06-09 11:18:36 +02:00
Ilya Tumaykin b1dd3beda9 build: don't install tests, only build them
Considering tests are usually executed right after a successful build,
there's no reason to keep them around for later.
2016-05-29 20:02:47 +02:00
wm4 0348cd080f video: remove d3d11 video processor use from OpenGL interop
We now have a video filter that uses the d3d11 video processor, so it
makes no sense to have one in the VO interop code. The VO uses it for
formats not directly supported by ANGLE (so the video data is converted
to a RGB texture, which ANGLE can take in).

Change this so that the video filter is automatically inserted if
needed. Move the code that maps RGB surfaces to its own interop backend.
Add a bunch of new image formats, which are used to enforce the new
constraints, and to automatically insert the filter only when needed.

The added vf mechanism to auto-insert the d3d11vpp filter is very dumb
and primitive, and will work only for this specific purpose. The format
negotiation mechanism in the filter chain is generally not very pretty,
and mostly broken as well. (libavfilter has a different mechanism, and
these mechanisms don't match well, so vf_lavfi uses some sort of hack.
It only works because hwaccel and non-hwaccel formats are strictly
separated.)

The RGB interop is now only used with older ANGLE versions. The only
reason I'm keeping it is because it's relatively isolated (uses only
existing mechanisms and adds no new concepts), and because I want to be
able to compare the behavior of the old code with the new one for
testing. It will be removed eventually.

If ANGLE has NV12 interop, P010 is now handled by converting to NV12
with the video processor, instead of converting it to RGB and using the
old mechanism to import that as a texture.
2016-05-29 19:00:55 +02:00
wm4 49f73eaf7b vf_d3d11vpp: add a D3D11 video processor filter
Main use: deinterlacing.

I'm not sure how to select the deinterlacing mode at all. You can
enumerate the available video processors, but at least on Intel, all of
them either signal support for all deinterlacers, or none (the latter is
apparently used for IVTC). I haven't found anything that actually tells
the processor _which_ algorithm to use.

Another strange detail is how to select top/bottom fields and field
dominance. At least I'm getting quite similar results to vavpp on Linux,
so I'm content with it for now.

Future plans include removing the D3D11 video processor use from the
ANGLE interop code.
2016-05-28 19:28:08 +02:00
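A minimal sketch of combining the new filter with D3D11 hardware decoding (file name is a placeholder; further filter sub-options are omitted):

    mpv --hwdec=d3d11va --vf=d3d11vpp file.mkv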
wm4 15a5d33b79 vf_vavpp: move frame handling to separate file
Move the handling of the future/past frames and the associated dataflow
rules to a separate source file.

While this on its own seems rather questionable and just inflates the
code, I intend to reuse it for other filters. The logic is annoying
enough that it shouldn't be duplicated a bunch of times.

(I considered other ways of sharing this logic, such as an uber-
deinterlace filter, which would access the hardware deinterlacer via a
different API. Although that sounds like kind of the right approach,
this would have other problems, so let's not, at least for now.)
2016-05-25 23:51:24 +02:00
wm4 39b64fb176 video: merge dxva2 source files
video/dxva2.c exported only 2 functions, both used only by
video/decode/dxva2.c.

The same was already done for d3d11.
2016-05-17 11:05:51 +02:00
Niklas Haas 7c3d78fd82 vo_opengl: support external user hooks
This allows users to add their own near-arbitrary hooks to the vo_opengl
processing pipeline, greatly enhancing the flexibility of user shaders.
This enables, among other things, user shaders such as CrossBilateral,
SuperRes, LumaSharpen and many more.

To make parsing the user shaders easier, shaders are now loaded as
bstrs, and the hooks are set up during video reconfig instead of on
every single frame.
2016-05-15 20:42:02 +02:00
wm4 84ccebd9b9 vo_opengl: reorganize texture format handling
This merges all knowledge about texture format into a central table.

Most of the work done here is actually identifying which formats exactly
are supported by OpenGL(ES) under which circumstances, and keeping this
information in the format table in a somewhat declarative way. (Although
only to the extent needed by mpv.) In particular, ES and float formats
are a horrible mess.

Again, this is a big refactor that might cause regressions on "obscure"
configurations.
2016-05-12 21:22:28 +02:00
wm4 fd82e14888 build: merge d3d11va and dxva2 hwaccel checks
We don't have any reason to disable either. Both are loaded dynamically
at runtime anyway. There is also no reason why dxva2 would disappear
from libavcodec any time soon.
2016-05-11 15:40:31 +02:00
wm4 fde20d10bc vo_opengl: angle: dynamically load ANGLE
ANGLE is _really_ annoying to build. (Requires special toolchain and a
recent MSVC version.) This results in various issues with people
having trouble building mpv against ANGLE (apparently linking it
against a prebuilt binary doesn't count, or using binaries from
potentially untrusted sources is not wanted).

Dynamically loading ANGLE is going to be a huge convenience. This commit
implements this, with special focus on keeping it source compatible to
a normal build with ANGLE linked at build-time.
2016-05-11 15:39:29 +02:00
wm4 46fff8d31a video: refactor how VO exports hwdec device handles
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.

The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)

This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.

Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.

The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.

This is a big refactor. Some things may be untested and could be broken.
2016-05-09 20:03:22 +02:00
wm4 dff33893f2 d3d11va: store texture/subindex in IMGFMT_D3D11VA plane pointers
Basically this gets rid of the need for the accessors in d3d11va.h, and
the code can be cleaned up a little bit.

Note that libavcodec only defines an ID3D11VideoDecoderOutputView pointer
in the last plane pointers, but it tolerates/passes through the other
plane pointers we set.
2016-04-27 14:06:50 +02:00
wm4 3706918311 vo_opengl: D3D11VA + ANGLE interop
This uses ID3D11VideoProcessor to convert the video to a RGBA surface,
which is then bound to ANGLE. Currently ANGLE does not provide any way
to bind nv12 surfaces directly, so this will have to do.

ID3D11VideoContext1 would give us slightly more control about the
colorspace conversion, though it's still not good, and not available
in MinGW headers yet.

The video processor is created lazily, because we need to have the coded
frame size, of which AVFrame and mp_image have no concept. Doing the
creation lazily is less of a pain than somehow hacking the coded frame
size into mp_image.

I'm not really sure how ID3D11VideoProcessorInputView is supposed to
work. We recreate it on every frame, which is simple and hopefully
doesn't affect performance.
2016-04-27 13:49:47 +02:00
wm4 7e3d8e7134 vd_lavc: simplify RPI and Mediacodec wrappers
Use the recently added lavc_suffix mechanism to select the wrapper
decoder.

With all hwdec callbacks being optional, and RPI/Mediacodec having only
dummy callbacks, all the callbacks can be removed as well.

The result is that the vd_lavc_hwdec struct for both of them is tiny.
It's better to move them to vd_lavc.c directly, because they are so
trivial and small.
2016-04-25 12:23:38 +02:00
Kevin Mitchell a7110862c8 vd_lavc: add d3d11va hwdec
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
api. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
2016-03-30 09:01:27 -07:00
Jan Ekström 5843392db5 Add a mediacodec decoder hwdec wrapper
Does the same thing as the rpi one - makes fallback possible by
pretending that h264_mediacodec is a hwdec.
2016-03-25 21:35:59 +01:00
James Ross-Gowan 5bf473d5ca ipc: add Windows implementation with named pipes
This implements the JSON IPC protocol with named pipes, which are
probably the closest Windows equivalent to Unix domain sockets in terms
of functionality. Like with Unix sockets, this will allow mpv to listen
for IPC connections and handle multiple IPC clients at once. A few cross
platform libraries and frameworks (Qt, node.js) use named pipes for IPC
on Windows and Unix sockets on Linux and Unix, so hopefully this will
ease the creation of portable JSON IPC clients.

Unlike the Unix implementation, this doesn't share code with
--input-file, meaning --input-file on Windows won't understand JSON
commands (yet.) Sharing code and removing the separate implementation in
pipe-win32.c is definitely a possible future improvement.
2016-03-23 23:15:20 +11:00
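For illustration, the JSON messages exchanged over the pipe follow the same IPC protocol as on Unix; a request and a possible reply might look like this (property and values chosen arbitrarily):

    { "command": ["get_property", "playback-time"] }
    { "error": "success", "data": 12.4 }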
wm4 3c59f54ec8 build: be less strict about line endings
This is a shitty hack, but also not terribly offensive.
2016-03-11 17:22:50 +01:00
Kevin Mitchell 8ff09f3217 vo_opengl: add dxva2 interop to angle backend
Like dxinterop, this uses StretchRect or RGB conversion. This is unavoidable as
long as we use the dxva2 API, as there is no way to access the raw hardware
decoded Direct3D9 surfaces.
2016-03-10 15:49:55 -08:00
Jashandeep Sohi d54b60c63d build: install symbolic SVG icon 2016-03-10 22:47:54 +01:00
wm4 9972847265 demux: add null demuxer
It's useless, but can be used for fancy --lavfi-complex nonsense.
2016-03-04 23:51:55 +01:00
Ilya Zhuravlev 72aea5a12b ao: initial OpenSL ES support
OpenSL ES is used on Android. At the moment only stereo output is
supported. Two options are supported: 'frames-per-buffer' and
'sample-rate'. To get better latency the user of libmpv should pass
values obtained from AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER)
and AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE).
2016-02-27 00:00:36 +01:00
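Assuming the new driver is registered under the name "opensles", selecting it might look roughly like this:

    mpv --ao=opensles file.mkv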
Emmanuel Gil Peyrot 87b09a180a wscript: don’t install the encoding profiles with encoding disabled 2016-02-22 20:51:06 +01:00
Kevin Mitchell 0d40140668 wscript: remove dxva2-dxinterop configure test
Wasn't really necessary as it was equivalent to gl-dxinterop.
2016-02-17 10:17:52 -08:00
Kevin Mitchell 2d1f42089c vo_opengl: dxinterop: add dxva2 passthrough
Use dxva2 surface to fill RGB IDirect3DSurface9 shared with opengl via
DXRegisterObjectNV.
2016-02-17 09:07:12 -08:00
wm4 0af5335383 Rewrite ordered chapters and timeline stuff
This uses a different method to piece segments together. The old
approach basically changes to a new file (with a new start offset) any
time a segment ends. This meant waiting for audio/video end on segment
end, and then changing to the new segment all at once. It had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.

The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.

The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)

To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing
  concatenating files with completely different codecs for the sake
  of EDL (which also uses the timeline infrastructure). A "lighter"
  approach would try to make use of decoder mechanism to update e.g.
  the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
  normal playback core mechanisms like hr-seek, but now the playback
  core doesn't need to care about these things.

These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)

There is code duplication between audio and video decoder common code.
This is awful and could be shareable - but this will happen later.

Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
2016-02-15 21:04:07 +01:00
Kevin Mitchell 06f1e934db dxva2: use mp_image pool for d3d surfaces
This is required so that the individual surfaces can pass beyond the dxva2
decoder and be passed to the vo.

This also adds additional data to mp_image->planes[0] for IMGFMT_DXVA2, which is
required for maintaining and releasing the surface even if the decoder code is
uninited.

The IDirectXVideoDecoder itself is encapsulated together with its surface pool
and configuration in a dxva2_decoder structure whose creation and destruction is
managed by talloc.
2016-02-14 11:01:12 -08:00
Jan Ekström 8af7112802 wscript_build: disable SONAME generation when building for Android
Android is well-known for not supporting SONAME'd libraries. All
libraries imported into an APK have to end with '.so'.
2016-02-10 21:29:44 +01:00
Jan Ekström ff0112e08d Initial Android support
* Adds an 'android' feature, which is automatically detected.
* Android has a broken strnlen, so a wrapper is added from FreeBSD.
2016-02-10 21:29:36 +01:00
wm4 c0de087ba1 player: add complex filter graph support
See --lavfi-complex option.

This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interaction with
libavfilter).

Future possible plans might include:
- freely switch tracks by providing some sort of default track graph
  label
- automatically enabling audio visualization
- automatically mix audio or stack video when multiple tracks are
  selected at once (similar to how multiple sub tracks can be selected)
2016-02-05 23:19:56 +01:00
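As a hedged example of the kind of graph the new option accepts (mixing two audio tracks; the aid/ao labels follow mpv's convention):

    mpv --lavfi-complex='[aid1] [aid2] amix [ao]' file.mkv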
wm4 45345d9c41 build: make libavfilter mandatory
The complex filter support that will be added makes much more complex
use of libavfilter, and I'm not going to bother with adding hacks to
keep libavfilter optional.
2016-02-05 23:17:33 +01:00
wm4 796d5266f1 build: fix mpv.conf installation 2016-01-11 22:13:16 +01:00
wm4 31a4547187 ao_wasapi: move out some utility functions
Note that hresult_to_str() (coming from wasapi_explain_err()) is mostly
wasapi-specific, but since HRESULT error codes are unique, it can be
extended for any other use.
2016-01-11 16:24:13 +01:00
wm4 3e90a5fe81 ao_dsound: remove this audio output
It existed for XP-compatibility only. There was also a time where
ao_wasapi caused issues, but we're relatively confident that ao_wasapi
works better than, or at least as well as, ao_dsound on Windows Vista and
later.
2016-01-06 13:52:15 +01:00
Chris Mayo d0cd7fa31b build: add option of html manual 2016-01-05 11:24:08 +01:00
James Ross-Gowan 7558d1ed7b win32: input: use Vista CancelIoEx
libwaio was added due to the complete inability to cancel synchronous
I/O cleanly using the public Windows API in Windows XP. Even calling
TerminateThread on the thread performing I/O was a bad solution, because
the TerminateThread function in XP would leak the thread's stack.

In Vista and up, however, this is no longer a problem. CancelIoEx can
cancel synchronous I/O running on other threads, allowing the thread to
exit cleanly, so replace libwaio usage with native Vista API functions.

It should be noted that this change also removes the hack added in
8a27025 for preventing a deadlock that only seemed to happen in Windows
XP. KB2009703 says that Vista and up are not affected by this, due to a
change in the implementation of GetFileType, so the hack should not be
needed anymore.
2015-12-20 21:06:02 +11:00
wm4 4cc1861378 vo_opengl: prefix per-backend source files with context_ 2015-12-19 14:14:12 +01:00
wm4 6154c1d06d vo_opengl: split backend code from common.c to context.c
Now common.c only contains the code for the function loader, while
context.c contains the backend loader/dispatcher.

Not calling it "backend.c", because the central struct is called
MPGLContext.
2015-12-19 14:14:12 +01:00
wm4 32cd85bc7e vo_opengl: x11egl: retrieve framebuffer depth
This is used for dithering, although I'm not aware of anyone who got
higher than 8 bit depth support to work on Linux.

Also put this into egl_helpers.c. Since EGL is pseudo-portable at best I
have no hope that the EGL context creation code in all the backends can
be fully shared. But some self-contained functionality can definitely be
shared.
2015-12-19 14:14:12 +01:00
wm4 00135a87f0 sub: rename sd_lavc_conv.c to lavc_conv.c
The previous commit turned sd_lavc_conv from a sd_driver to
free-standing functions. Do the rename to reflect this change
separately to avoid confusing git's content tracking. (Or did
git solve this, making separating renames and content changes
unnecessary?)
2015-12-18 03:59:52 +01:00
wm4 e798cf1ff6 sub: remove sd_srt.c
The FFmpeg subtitle converter does the same. There used to be some
deficiencies in FFmpeg's code, but it seems at least some of them have
been fixed. There also used to be the timestamp issue (see previous
commit messages), but this doesn't matter anymore. So no reason to
keep this code - get rid of it.
2015-12-15 21:05:48 +01:00
wm4 cab942acae sub: remove sd_microdvd.c
This can be dropped for the same reasons as in the previous commits. It
removes MicroDVD conversion support on Libav, although MicroDVD files
couldn't be read in the first place ever since demux_subreader.c was
removed.
2015-12-15 21:05:48 +01:00
wm4 c7985fe5f7 sub: remove sd_lavf_srt.c
This restored timestamps when demuxing srt subtitles in Libav, which
was important for avoiding slightly overlapping subtitles. Since the
way this works was changed, there is no real reason to maintain proper
timestamps anymore on this level - this can be dropped without issues.
2015-12-15 21:05:48 +01:00
wm4 29226e6a99 sub: remove sd_movtext.c
libavcodec's movtext-to-ass converter does the same and has more
features. On Libav, this commit disables mp4 subtitle display.
2015-12-15 21:05:48 +01:00
James Ross-Gowan 3d12312806 vo_opengl: add dxinterop backend
WGL_NV_DX_interop is widely supported by Nvidia and AMD drivers. It
allows a texture to be shared between Direct3D and WGL, so that
rendering can be done with WGL and presentation can be done with
Direct3D. This should allow us to work around some persistent WGL
issues, such as dropped frames with some driver/OS combos, drivers that
buffer frames to increase performance at the cost of latency, and the
inability to disable exclusive fullscreen mode when using WGL to render
to a fullscreen window.

The addition of a DX_interop backend might also enable some cool
Direct3D-specific enhancements in the future, such as using the
GetPresentStatistics API to get accurate frame presentation timestamps.

Note that due to a driver bug, this backend is currently broken on
Intel. It will appear to work as long as the window is not resized too
often, but after a few changes of size it will be unable to share the
newly created renderbuffer with GL. See:
https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/562051
2015-12-11 18:29:18 +01:00
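Assuming the backend is selectable by name like the other vo_opengl backends, it could presumably be forced as follows:

    mpv --vo=opengl:backend=dxinterop file.mkv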
wm4 a6521f3941 demux: remove old subtitle parser
All of these are supported by FFmpeg now. It was disabled by default
too (with FFmpeg).

If compiled against Libav, mpv will lose the ability to read some
subtitle formats (but the most important ones, srt and ass, still should
work).
2015-12-10 22:53:37 +01:00
wm4 475fe453cc stream: drop PVR support
This is only for specific Hauppauge cards. According to the comments in
the relevant issue, it's not clear who is actively using this feature.
Get it out of the way.

Anyone who still wants to use this should complain. Keeping this code
would not cause terribly much additional work, and it could be restored
again. (But not if the request comes months later.)
2015-12-10 22:53:02 +01:00
Christian Hesse 30f18b8dc7 build: install scalable svg icon as well 2015-12-01 10:58:35 +01:00
James Ross-Gowan b7d614aa0d vo_opengl: win32: test for exclusive mode
This is a hack, but unfortunately the DwmGetCompositionTimingInfo
heuristic does not work in all cases (with multiple-monitors on Windows
8.1 and even with a single monitor in Windows 10.) See the comment in
mp_w32_is_in_exclusive_mode() for more details.

It should go without saying that if any better method of doing this
reveals itself, this hack should be dropped.
2015-11-26 00:38:03 +11:00
James Ross-Gowan e76ec2c963 vo_opengl: add initial ANGLE support
ANGLE is a GLES2 implementation for Windows that uses Direct3D 11 for
rendering, enabling vo_opengl to work on systems with poor OpenGL
drivers and bypassing some of the problems with native GL, such as VSync
in fullscreen mode.

Unfortunately, using GLES2 means that most of vo_opengl's advanced
features will not work; however, ANGLE is under rapid development and
GLES3 support is supposed to be coming soon.
2015-11-18 23:07:33 +11:00
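Similarly, assuming the ANGLE path is exposed as a vo_opengl backend name, it could presumably be forced with:

    mpv --vo=opengl:backend=angle file.mkv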
wm4 384b13c5fd demux_libass: remove this demuxer
This loaded external .ass files via libass. libavformat's .ass reader is
now good enough, so use that instead.

Apparently libavformat still doesn't support fonts embedded into text
.ass files, but support for this has been accidentally broken in mpv for
a while anyway. (And only 1 person complained.)
2015-11-11 21:28:20 +01:00
rr- c3f2ef5491 vo_opengl: add DRM EGL backend
Notes:

- Unfortunately the only way to talk to EGL from within DRM I could find
  involves linking with GBM (generic buffer management for Mesa.)
  Because of this, I'm pretty sure it won't work with proprietary NVidia
  drivers, but then again, last time I checked NVidia didn't offer
  proper screen resolution for VT.

- VT switching doesn't seem to work at all. It's worth mentioning that
  using vo_drm before the introduction of the VT switcher had an anomaly
  where the user could switch to another VT and input text to it, while video
  played on top of that VT. However, that isn't the case with drm_egl:
  I can't switch to other VT during playback like this. This makes me
  think that it's either a limitation coming from my firmware or from
  EGL/KMS itself rather than a bug with my code. Nonetheless, I still
  left (untestable) VT switching code in place, in case it's useful to
  someone else.

- The mode_id, connector_id and device_path should be configurable for
  power users and people who wish to watch videos on nonprimary screen.
  Unfortunately I didn't see anything that would allow OpenGL backends
  to register their own set of options. At the same time, adding them to
  global namespace is pointless.

- A few dozens of lines could be shared with vo_drm (setting up VT
  switching, most of code behind page flipping). I don't have any strong
  opinion on this.

- Sometimes I get minor visual glitches. I'm not sure if there's a race
  condition of some sort, an uninitialized variable (doubtful), or a
  buggy driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)

- .config and .control are very minimal.

Signed-off-by: wm4 <wm4@nowhere>
2015-11-08 15:00:15 +01:00
James Ross-Gowan 647b360a0a w32: use DisplayConfig API to retrieve correct monitor refresh rate
This is based on an older patch by James Ross-Gowan. It was rebased and
cleaned up. Also, the DWM API usage present in the older patch was
removed, because DWM reports nonsense rates at least on Windows 8.1
(they are rounded to integers, just like with the old GDI API - except
the GDI API had a good excuse, as it could report only integers).

Signed-off-by: wm4 <wm4@nowhere>
2015-11-06 19:53:18 +01:00
Bin Jin 27dc834f37 vo_opengl: implement NNEDI3 prescaler
Implement NNEDI3, a neural network based deinterlacer.

The shader is reimplemented in GLSL and supports both 8x4 and 8x6
sampling window now. This allows the shader to be licensed
under LGPL2.1 so that it can be used in mpv.

The current implementation supports uploading the NN weights (up to
51kb with the placebo setting) in two different ways: via a uniform
buffer object or by hard-coding them into the shader source. UBO
requires OpenGL 3.1, which only guarantees 16kb per block. But I find
that 64kb seems to be a default setting for recent cards/drivers (which
nnedi3 is targeting), so I think we're fine here (with the default
nnedi3 setting the size of the weights is 9kb). Hard-coding into the
shader requires OpenGL 3.3, for the
"intBitsToFloat()" built-in function. This is necessary to precisely
represent these weights in GLSL. I tried several human-readable
floating-point number formats (with really high precision, as needed
for single-precision floats), but for some reason they did not work
nicely; bad pixels (with NaN values) could be produced with some
weight sets.

We could also add support for uploading these weights with a texture,
just for compatibility reasons (e.g. upscaling a still image with a
low-end graphics card). But as I tested, it's rather slow even with a
1D texture (we would probably have to use a 2D texture due to dimension
size limitations). Since there is always a better choice for NNEDI3
upscaling of still images (the vapoursynth plugin), it's not implemented
in this commit. If this turns out to be a popular demand from users,
it should be easy to add later.

For those who want to optimize the performance a bit further, the
bottlenecks seem to be:
1. the overhead of uploading and accessing these weights (in particular,
   the shader code will be regenerated for each frame; it's on the CPU
   though).
2. "dot()" performance in the main loop.
3. "exp()" performance in the main loop, there are various fast
   implementation with some bit tricks (probably with the help of the
   intBitsToFloat function).

The code was tested with an Nvidia card and driver (355.11) on Linux.

Closes #2230
2015-11-05 17:38:20 +01:00
Bin Jin 4c43c30421 vo_opengl: add Super-xBR filter for upscaling
Add the Super-xBR filter for image doubling, and the prescaling framework
to support it.

The shader code was ported from the MPDN Extensions project, with
modifications to process luma only.

This commit is largely inspired by code from #2266, with
`gl_transform_trans()` authored by @haasn taken directly.
2015-11-05 17:38:20 +01:00
wm4 d27d91715f build: remove explicit checks for VPP
This was once done when old versions of libva without VPP had to be
supported. Some parts of it were removed earlier already.
2015-10-17 14:25:09 +02:00
wm4 ebb43f5176 Revert "vo_x11: remove this video output"
This reverts commit d11184a256.

Unfortunately, there was a lot of unexpected resistance.

Do note that this is still extremely slow, crappy, etc.

Note that vo_x11.c was further edited. Compared to the removed vo_x11.c,
an additional ~200 lines of code was removed in order to simplify it. I
tried to strip it down as much as possible. In particular, support for
odd non-32 bit formats (24, 16, 15, 8 bit) is dropped.

Closes #2300.
2015-09-30 22:52:22 +02:00
wm4 853fe788f8 vo_opengl: rename hwdec_vda.c to hwdec_osx.c
It doesn't deal with VDA at all anymore. Rename it to hwdec_osx.c. Not
using hwdec_videotoolbox.c, because that would give it the longest
source path in this project yet. (Also, this code isn't even
VideoToolbox-specific, other than the name of the pixel format used.)
2015-09-28 22:03:14 +02:00
wm4 1dd7b7bddc video: remove VDA support
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA not
useable with older FFmpeg versions in some newer mpv releases.

VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
2015-09-28 22:03:14 +02:00
wm4 710872bc22 vaapi: remove dependency on X11
There are at least 2 ways of using VAAPI without X11 (Wayland, DRM).
Remove the X11 requirement from the decoder part and the EGL interop.
This will be used by a following commit, which adds Wayland support.

The worst about this is the decoder part, which includes a bad hack for
using the decoder without any VO interop (also known as "vaapi-copy"
mode). Separate the X11 parts so that they're self-contained. For the
EGL interop code we do something similar (it's kept slightly simpler,
because it essentially only has to translate between our silly
MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
2015-09-27 21:33:15 +02:00
wm4 0ae8aebb89 video: refactor GPU memcpy usage
Make the GPU memcpy from the dxva2 code generally useful to other parts
of the player.

We need to check at configure time whether SSE intrinsics work at all.
(At least in this form, they won't work on clang, for example. It also
won't work on non-x86.)

Introduce a mp_image_copy_gpu(), and make the dxva2 code use it. Do some
awkward stuff to share the existing code used by mp_image_copy(). I'm
hoping that FFmpeg will sooner or later provide a function like this, so
we can remove most of this again. (There is a patch, but it has been
stuck in limbo forever.)

All this is used by the following commit.
2015-09-25 19:18:16 +02:00
wm4 8d8a2045bd vo_opengl: support new VAAPI EGL interop
Should work much better than the old GLX interop code. Requires Mesa 11,
and explicitly selecting the X11 EGL backend with:

    --vo=opengl:backend=x11egl

Should it turn out that the new interop works well, we will try to
autodetect EGL by default.

This code still uses some bad assumptions, like expecting surfaces to be
in NV12. (This is probably ok, because virtually all HW will use this
format. But we should at least check this on init or so, instead of
failing to render an image if our assumption doesn't hold up.)

This repo was a lot of help: https://github.com/gbeauchesne/ffvademo
The kodi code was also helpful (the magic FourCC it uses for
EGL_LINUX_DRM_FOURCC_EXT are nowhere documented, and
EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does
not actually exist).

(This is the 3rd VAAPI GL interop that was implemented in this player.)
2015-09-25 00:26:19 +02:00
wm4 8782354e6d player: rename and move find_subfiles.c
This was in sub/, because the code used to be specific to subtitles. It
was extended to automatically load external audio files too, and moving
the file and renaming it was long overdue.
2015-09-20 18:05:06 +02:00
wm4 7020908691 video/filter: remove some vf_lavfi wrappers
I see no point in keeping these around. Keeping wrappers for some select
libavfilter filters just because MPlayer had these filters is not a good
reason.

Ultimately, all real filtering work should go to libavfilter, and users
should get used to using vf_lavfi directly. We might even not require
the awful double-nested syntax for using libavfilter one day.

vf_rotate, vf_yadif, vf_stereo3d are kept because mpv uses them
internally. (They all extend the lavfi filters or change their
defaults.) vf_mirror is kept for symmetry with vf_flip. vf_gradfun and
vf_pullup are probably semi-popular, so I won't remove them yet - only
after some more discussion.
2015-09-11 23:47:00 +02:00
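To illustrate using vf_lavfi directly with the double-nested syntax mentioned above (hflip is just an arbitrary libavfilter filter):

    mpv --vf='lavfi=[hflip]' file.mkv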
wm4 d96f6edf49 vf: vf_stereo3d depends on libavfilter 2015-09-11 18:03:16 +02:00
Niklas Haas eb56807b41 vo_opengl: move self-contained shader routines to a separate file
This is mostly to cut down somewhat on the amount of code bloat in
video.c by moving out helper functions (including scaler kernels and
color management routines) to a separate file.

It would certainly be possible to move out more functions (eg. dithering
or CMS code) with some extra effort/refactoring, but this is a start.

Signed-off-by: wm4 <wm4@nowhere>
2015-09-09 18:17:44 +02:00
Niklas Haas 44eda2177d vo_opengl: remove gl_ prefixes from files in video/out/opengl
This is a bit redundant with the name of the directory itself, and not
in line with existing naming conventions.
2015-09-09 18:09:31 +02:00
Niklas Haas deebc55014 vo_opengl: move gl_* files to their own subdir
This is mainly just to keep things a bit more organized and separated
inside the codebase.
2015-09-09 18:09:25 +02:00
wm4 d04d2380e3 audio/filter: remove af_bs2b too
Some users still use this filter, so the filter was going to be kept.
But I overlooked that libavfilter provides this filter. Remove the
redundant wrapper from mpv. Something like --af=lavfi=bs2b should work
and give exactly the same results.
2015-09-04 00:23:39 +02:00
wm4 091bfa3abf audio/filter: remove some useless filters
All of these filters are considered not useful anymore by us. Some have
replacements in libavfilter (useable through af_lavfi).

af_center, af_extrastereo, af_karaoke, af_sinesuppress, af_sub,
af_surround, af_sweep: pretty simple and useless filters which probably
nobody ever wants.

af_ladspa: has a replacement in libavfilter.

af_hrtf: the algorithm doesn't work properly on most sources, and the
implementation was buggy and complicated. (The filter was inherited from
MPlayer; but even in mpv times we had to apply fixes that fixed major
issues with added noise.) There is a ladspa filter if you still want to
use it.

af_export: I'm not even sure what this is supposed to do. Possibly it
was meant for GUIs rendering audio visualizations, but it couldn't
really work well. For example, the size of the audio depended on the
samplerate (fixed number of samples only), and it couldn't retrieve the
complete audio, only fragments. If this is really needed for GUIs, mpv
should add native visualization, or a proper API for it.
2015-09-03 23:55:36 +02:00
wm4 58ba2a9087 vo_rpi: use EGL to render subtitles
Slightly faster than using the dispmanx mess (perhaps to a large extent
due to the rather stupid C-only unoptimized ASS->RGBA blending code).

Do this by reusing vo_opengl's subtitle renderer, and vo_opengl's RPI
backend.
2015-08-18 23:01:09 +02:00
wm4 2b280f4522 stream: libarchive wrapper for reading compressed archives
This works similar to the existing .rar support, but uses libarchive.
libarchive supports a number of formats, including zip and (most of)
rar.

Unfortunately, seeking does not work too well. Most libarchive readers
do not support seeking, so it's emulated by skipping data until the
target position. On backwards seek, the file is reopened. This works
fine on a local machine (and if the file is not too large), but will
perform not so well over network connection.

This is disabled by default for now. One reason is that we try
libarchive on every file we open, before trying libavformat, and I'm not
sure if I trust libarchive that much yet. Another reason is that this
breaks multivolume rar support. While libarchive supports seeking in
rar, and (probably) supports multivolume archives, our use of
libarchive (probably) does not. I don't care about multivolume rar, but
vocal users do.
2015-08-17 00:55:26 +02:00
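Since the feature is off by default, a build would presumably have to opt in at configure time, along the lines of:

    ./waf configure --enable-libarchive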
wm4 8d66bd76e2 video: remove old vdpau hwaccel API usage
While the "old" libavcodec vdpau API is not deprecated (only the very-
old API is), it's still relatively complicated code that badly
duplicates the much simpler newer vdpau code. It exists only for the
sake of older FFmpeg releases; get rid of it.
2015-08-10 00:07:35 +02:00
Sebastien Zwickert 31b5a211f4 hwdec: add VideoToolbox support
VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working.
The code needs libavcodec support which was added recently (to FFmpeg git,
libav doesn't support it).

Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
2015-08-05 17:47:30 +02:00
wm4 f792f56440 player: remove higher-level remains of DVD/BD menu support
Nobody wanted to restore this, so it gets the boot.

If anyone still wants to volunteer to restore menu support, this would
be welcome. (I might even try it myself if I feel masochistic and like
wasting a lot of time for nothing.) But if it does get restored, it
should be done differently. There were many stupid things about how it
was done. For example, it somehow tried to pull mp_nav_events through
all the layers (including needing to "buffer" them in the demuxer),
which was needlessly complicated. It could be done simpler.

This code was already inactive, so this commit actually changes nothing.
Also keep in mind that normal DVD/BD playback still works.
2015-08-03 23:49:14 +02:00
wm4 e0c55cbfea audio: remove af_dummy
Was used internally once; has no function anymore.
2015-08-01 21:20:55 +02:00
Stefano Pigozzi 7ba403002f build: fix windows compilation after clean
broken since 4730e0aaba
2015-07-17 09:41:09 +02:00
Stefano Pigozzi 4730e0aaba build: make mpv.rc depend on version.h 2015-07-15 15:24:31 +02:00
Philip Langdale 4b0b9b515b vf_vdpaurb: Add a new filter for reading back vdpau decoded frames
Normally, vdpau decoded frames are passed directly to a suitable
vo (vo_vdpau or vo_opengl) without ever touching system memory. This
is efficient for output purposes, but prevents any of the regular
filters from being used with such frames.

This new filter implements a read-back step to pull the frames back
into system memory where they can be acted on by other filters.
Eventually the frames will be sent to the vo as if they were normal
software-decoded frames.

Note that a vdpau compatible vo must still be used to ensure that
the decoder is properly initialised.

Signed-off-by: wm4 <wm4@nowhere>
2015-07-11 10:44:34 +02:00
wm4 561416597e client API, dxva2: add a workaround for OpenGL fullscreen issues
This is basically a hack for drivers which prevent the mpv DXVA2 decoder
glue from working if OpenGL is in fullscreen mode.

Since it doesn't add any "hard" new API to the client API, some of the
code would be required for a true zero-copy hw decoding pipeline, and
since it isn't too much code after all, this is probably acceptable.
2015-07-03 16:38:12 +02:00
wm4 d11184a256 vo_x11: remove this video output
It only causes additional maintenance work.

Even if you wanted to have a fallback, it's probably better to use
--vo=sdl or so.
2015-06-26 17:17:34 +02:00
wm4 552dc0d564 af_convert24: remove this filter 2015-06-16 22:40:37 +02:00
wm4 831d7c3c40 audio: remove S8, U16, U24, U32 formats
They are useless. Not only are they rarely used, but libavcodec
doesn't even output them, as it has no such sample formats for
decoded audio.

Even if it should happen that we actually still need them (e.g. if doing
direct hardware output), there are better solutions. Swapping the sign
is a fast and lossless operation and can be done inplace, so AO actually
needing it could do this directly.

If you wonder why we keep U8 instead of S8: because libavcodec does it.
2015-06-16 21:11:59 +02:00
wm4 6f5a10542c vdpau: add support for the "new" libavcodec vdpau API
Yet another of these dozens of hwaccel changes. This time, libavcodec
provides utility functions, which initialize the vdpau decoder and map
codec profiles. So a lot of work the API user had to do falls away.

This also will give us support for high bit depth profiles, and possibly
HEVC once libavcodec supports it.
2015-05-28 21:56:13 +02:00
wm4 0699a6c598 video: rename vdpau.c to vdpau_old.c
vdpau.c will be used for new code.
2015-05-28 21:56:09 +02:00
wm4 0ed1719e1a demux_cue: move cue parser to a separate file
Preparation for the next commit.
2015-05-19 21:36:21 +02:00
wm4 934109a35b ao_coreaudio: move channel mapping code to a separate file
Move all of the channel map retrieval/negotiation code to a separate
file. This will (probably) be helpful when extending
ao_coreaudio_exclusive.c.

Nothing else changes, other than some minor cosmetics and renaming,
and changing some details for decoupling it from the ao_coreaudio.c
internals.
2015-05-05 21:47:19 +02:00
wm4 1e7831070f build: move main-fn files to osdep
And split the Cocoa and Unix cases. Simplify the Cocoa case slightly by
calling mpv_main directly, instead of passing a function pointer. Also
add a comment explaining why Cocoa needs a special case at all.
2015-05-02 18:59:58 +02:00
wm4 19a5b20752 cocoa: always compile OSX application code with cocoa
This unbreaks compiling command line player and libmpv at the same
time. The problem was that doing so silently disabled the OSX
application thing - but the command line player can not use the
vo_opengl Cocoa backend without it.

The OSX application code is basically dead in libmpv, but it's not
that much code anyway.

If you want a mpv binary that does not create an OSX application
singleton (and creates a menu etc.), you must disable cocoa
completely, as cocoa can't be used anyway in this case.
2015-05-02 18:09:56 +02:00
wm4 d3a3cfe54c path: refactor
Somewhat less ifdeffery, higher flexibility. Now there are 3 separate
config file resolvers for 3 platforms (unix, win, osx), and they can
still interact with each other somewhat. For example, OSX for now uses
most of Unix, but adds the OSX bundle path.

This can be extended to resolve very specific platform paths, such as
location of the desktop.

Most of the Unix specific code moves to path-unix.c.

The behavior should be the same - if not, it is likely a bug.
2015-05-01 21:51:10 +02:00
xylosper ebe2c2b6d1 build: fix libavfilter dependency for vf_mirror
Since e207c24b32, vf_mirror requires
libavfilter.

Signed-off-by: wm4 <wm4@nowhere>
2015-04-20 17:08:29 +02:00
Marcin Kurczewski 34d5b73fbb vo_drm: extract vt_switcher to drm_common 2015-04-19 21:18:15 +02:00
wm4 f4292ebf52 vf_screenshot: remove this filter
It's entirely useless, especially now that vo.c handles screenshots in a
generic way, and requires no special VO support. There are some
potential weird use-cases, but actually I've never seen it being used.
2015-04-16 22:16:04 +02:00
Marcin Kurczewski 7ee18376a9 vo_drm: add KMS/DRM renderer support
Signed-off-by: wm4 <wm4@nowhere>
2015-04-16 21:07:25 +02:00
wm4 d55c41501f subprocess: move implementation for detached subprocesses 2015-04-15 22:43:02 +02:00
James Ross-Gowan ac7ecbe30c win32: use a platform-specific unicode entry-point
Add a platform-specific entry-point for Windows. This will allow some
platform-specific initialization to be added without the need for ugly
ifdeffery in main.c.

As an immediate advantage, mpv can now use a unicode entry-point and
convert the command line arguments to UTF-8 before passing them to
mpv_main, so osdep_preinit can be simplified a little bit.
2015-04-11 14:27:25 +10:00
wm4 8fff125422 RPI support
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.

Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.

This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.

vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.

This doesn't correctly reject extended h264 profiles and thus doesn't
fallback to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.

This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (which it hopefully will not be on
desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding.) So I can live with this compromise
of having different defaults depending on the platform.

Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
2015-03-29 16:09:56 +02:00
wm4 1ad4a62336 demux: fix rar support for files containing DTS audio tracks
With a recent cleanup, rar support was stuffed into demux_playlist.c
(because "opening" rar files pretty much just lists archive contents and
adds them to a playlist using a special rar:// protocol, which will
actually access the rar file contents).

Since demux_playlist.c is probed _after_ demux_lavf.c (and should/must
be), libavformat was given the chance to detect DTS streams embedded
within the rar file. This is not really what we want, and a regression
compared to what happened before rar listing was moved to demux_playlist.c.

Fix it by moving the rar listing into its own pseudo-demuxer, and let it
probe before demux_lavf.c.

(Yes, this feature still has users.)
2015-03-24 21:29:09 +01:00
wm4 9b5a7241e8 input: remove Linux joystick support
Why did this exist in the first place? Other than being completely
useless, this even caused some regressions in the past. For example,
there was the case of a laptop exposing its accelerometer as joystick
device, which led to extremely fun things due to the default mappings of
axis movement being mapped to seeking.

I suppose those who really want to use their joystick to control a media
player (???) can configure it as mouse device or so.
2015-03-24 16:04:44 +01:00
wm4 1e659a9f0f input: remove classic LIRC support
It's much easier to configure remotes as X11 input devices.
2015-03-24 16:04:44 +01:00
wm4 d5318e5e09 audio: remove internal libmpg123 wrapper
We've been preferring the libavcodec mp3 decoder for half a year now.
There is likely no benefit at all for using the libmpg123 one. It's just
a maintenance burden, and tricks users into thinking it's a required
dependency.
2015-03-24 16:04:44 +01:00
wm4 e74a4d5bc0 vo_opengl: refactor shader generation (part 1)
The basic idea is to use dynamically generated shaders instead of a
single monolithic file + a ton of ifdefs. Instead of having to setup
every aspect of it separately (like compiling shaders, setting uniforms,
performing the actual rendering steps, the GLSL parts), we generate the
GLSL on the fly, and perform the rendering at the same time. The GLSL
is regenerated every frame, but the actual compiled OpenGL-level shaders
are cached, which makes it fast again. Almost all logic can be in a
single place.

The new code is significantly more flexible, which allows us to improve
the code clarity, performance and add more features easily.

This commit is incomplete. It drops almost all previous code, and
readds only the most important things (some of them actually buggy).
The next commit will complete it - it's separate to preserve authorship
information.
2015-03-12 23:20:20 +01:00
wm4 ae6019cbc9 build: simplify windows checks
There's no reason to do finegrained checks for libraries which always
must be present. It also reduces the number of extra dependencies.
2015-03-11 23:44:34 +01:00
wm4 89bc2975e9 audio: change playback speed directly in resampler
Although the libraries we use for resampling (libavresample and
libswresample) do not support changing sampelrate on the fly, this makes
it easier to make sure no audio buffers are implicitly dropped. In fact,
this commit adds additional code to drain the resampler explicitly.

Changing speed twice without feeding audio in-between made it crash
with libavresample in certain cases (libswresample is fine). This is
probably a libavresample bug. Hopefully it will be fixed; I also
attempted to work around the situation that crashes it. (It seems to
point in the direction of random memory corruption, though.)
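
For reference, the explicit drain boils down to flushing the resampler's
internal FIFO; a minimal sketch with libswresample (not the actual
filter code, the helper name is made up):

    #include <stdint.h>
    #include <libswresample/swresample.h>

    /* Flush any samples still buffered inside the resampler into 'out',
     * so nothing is silently dropped before the speed is changed again.
     * Returns the number of samples written, or a negative error code. */
    static int drain_resampler(struct SwrContext *swr,
                               uint8_t **out, int max_out_samples)
    {
        return swr_convert(swr, out, max_out_samples, NULL, 0);
    }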
2015-03-02 19:09:44 +01:00
wm4 1e44c811f3 demux_edl: move implementation
Same deal as with demux_cue, and a separate commit for the same reasons.
2015-02-17 23:48:39 +01:00
wm4 7f03f46882 demux_cue: move implementation
Move the implementation, of which most was in tl_cue.c, to demux_cue.c.
Currently, this is illogical, because tl_cue.c still accesses MPContext.
This is going to change, and then it will be better if everything is in
demux_cue.c. This is only a separate commit to distinguish code movement
and actual work; the next commit will do the actual work.
2015-02-17 23:48:03 +01:00
wm4 edc0007e74 matroska: move timeline code to demux/
Separate from previous commit, because git is bad at tracking file
renames when the file contents are also changed.
2015-02-17 23:47:37 +01:00
wm4 a0a089f6a4 player: use a separate context for timeline loader stuff
Instead of accessing MPContext in player/timeline/*, create a separate
context struct, which the timeline loaders fill out. It turns out that
there's not much in the (way too big) MPContext that these need to access.

One major PITA is managing (and closing) the set of open demuxers. The
problem is that we need a list of all demuxers to make sure no unneeded
streams are enabled.

This adds a callback to the demuxer_desc struct, with the intention of
leaving it to the demuxer to call the right loader, instead of
explicitly checking the demuxer type and dispatching manually in common
code. I also considered making the timeline part of the demuxer state,
but decided against: it's too much of a mess wrt. memory management and
threading, and also doesn't make it clear who owns the child demuxers.
With the struct timeline decoupled from the demuxer state, it's at least
somewhat clear that the child demuxers are independent from the "main"
demuxer.
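
Schematically, the hook looks something like this (field names are
illustrative, not the real demuxer_desc definition): common code just
invokes the callback, and the demuxer decides which timeline loader
applies.

    struct timeline;
    struct demuxer;

    struct demuxer_desc {
        const char *name;
        int (*open)(struct demuxer *demuxer);
        /* New: called by common code; the demuxer itself runs the matching
         * timeline loader (EDL, CUE, Matroska ordered chapters). */
        void (*load_timeline)(struct timeline *tl);
    };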

The actual changes to player/timeline/* are separated in the following
commits, because they're quite verbose. Some artifacts will be removed
later as soon as there's only 1 timeline loading mechanism.
2015-02-17 23:46:12 +01:00
wm4 2522bff565 video/filters: simplify libavfilter bridge
Remove the confusing crap that allowed a filter using the libavfilter
bridge to be compiled without libavfilter. Instead, compile the wrappers
only if libavfilter is enabled at compile time.

The only filter which still requires it is vf_stereo3d (unfortunately).
Special-case this one. (The whole filter and how it interacts with lavfi
is pure braindeath anyway.)
2015-02-11 17:35:58 +01:00
wm4 b6ab34fc98 af_rubberband: pitch correction with librubberband
If "--af=rubberband" is used, librubberband will be used to speed up or
slow down audio with pitch correction.

This still has some problems: the audio delay is not calculated
correctly, so the audio position jitters around by a few milliseconds.
This will probably ruin video timing.
2015-02-11 00:29:12 +01:00
wm4 3583559164 vo_opengl: move utility functions from loader to a separate file
gl_common.c contained the function loader (which is big) and additional
utility functions (not so big, but will grow when moving more out of
gl_video.c). Just split them. There are no changes other than some
modifications to comments.
2015-01-28 19:40:46 +01:00
wm4 b473477fc5 vf_ilpack: remove this filter
This was apparently useful for correct interlaced scaling (although I
don't know anyone who used this). It was rarely used (if at all), had an
inconvenient output format (packed YUV), and now has a better solution
in libavfilter (using the libavfilter "scale" filter via vf_lavfi).
There is no reason to keep this filter any longer.
2015-01-27 19:13:51 +01:00
wm4 86bba0dc5b vf_divtc: remove this filter
Better solutions are available in vf_vapoursynth and vf_lavfi. The only
user I know who used this is now using vf_vapoursynth.
2015-01-27 19:10:13 +01:00
wm4 97f32fdb23 vf_phase: remove this filter
If you really want it, it's in libavfilter and can be used via vf_lavfi.
2015-01-27 19:06:59 +01:00
wm4 82e3d06f09 vf_swapuv: remove this filter
It's entirely useless. I left it in for a while, because the analog TV
code had a transitional bug that could switch chroma planes, but it was
fixed long ago. It's also available in libavfilter.
2015-01-27 19:04:02 +01:00
wm4 c92f4a1126 vf_softpulldown: remove this filter
Apparently it was completely broken and essentially did nothing. This
was broken sometime in early mpv or mplayer2 times.

Get rid of it. If you _really_ need it, wait until FFmpeg ports it from
MPlayer, which will happen very soon.
2015-01-27 18:58:16 +01:00
wm4 69e5dd9bce vf_pullup: remove builtin implementation
Now it requires libavfilter. The wrapper is left in place, so FFmpeg
users will not notice any change. On Libav, the filter stops working.
2015-01-27 18:53:28 +01:00
wm4 a0ed62fc1d build: remove bogus client API examples build
The symlink trick made waf go crazy (deleting source files, getting
tangled up in infinite recursion... I wish I was joking). This means we
still can't build the client API examples in a reasonable way using the
include files of the local repository (instead of globally installed
headers). Not building them at all is better than deleting source files.

Instead, provide some manual instructions on how to build each example
(except for the Qt examples, which provide qmake project files).
2015-01-23 15:32:23 +01:00
wm4 724f722d7f vo_opengl_old: remove this VO
At this point, there is probably no hardware left that doesn't do
OpenGL 2.1, and at the same time is fast enough to handle video.
2015-01-20 21:15:04 +01:00
wm4 d7dfbc8610 player: use libavutil API to get number of CPUs
Our own code was introduced when FFmpeg didn't provide this API (or
maybe didn't even have a way to determine the CPU count). But now,
av_cpu_count() is available for all FFmpeg/Libav versions we support,
and there's no reason to have our own code.

libavutil's code seems to be slightly more sophisticated than ours, and
it's possible that the detected CPU count is different on some platforms
after this change.
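
For illustration, the replacement amounts to a single libavutil call:

    #include <stdio.h>
    #include <libavutil/cpu.h>

    int main(void)
    {
        /* av_cpu_count() reports the number of logical CPUs. */
        printf("detected CPUs: %d\n", av_cpu_count());
        return 0;
    }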
2015-01-05 12:34:34 +01:00
wm4 4ed0907f2d build: try to make examples build both in-tree and out-of-tree
The examples simple.c and cocoabasic.m can be compiled without
installing libmpv. But also, they didn't use the correct include path
libmpv programs normally use, so they couldn't be built with a properly
installed system-libmpv. That's pretty bad for examples, which are
supposed to show how to use libmpv correctly.

So do some bullshit that symlinks libmpv to a "mpv" include directory
under the build directory. This name-mismatch is a direct consequence of
the bullshit done in 499a6758 (requested in #539 for dumb reasons). (We
don't want to name the client API headers directory "mpv", because that
would be too unspecific, and clashes with having the mpv binary in the
same directory.)

If you have spaces or other "unusual" characters in your paths, the
build will break, because I couldn't find out where waf hides its
function to escape shell parameters (or a way to invoke programs
without involving the shell). Such a thing doesn't seem to be
documented, nor do they seem to have a clear way to do this in
their code.

This also doesn't compile the Qt examples, because everything becomes
even more terrible from there on.
2015-01-02 00:00:03 +01:00
wm4 da4f722579 DOCS/client_api_examples: move all examples into their own subdirs
Also get rid of shared.h; it actually doesn't have much value. Just copy
the tiny function it contained into the 2 files which used it.
2015-01-01 23:08:15 +01:00
wm4 bafb9b2271 win32: add native wrappers for pthread functions
Off by default, use --enable-win32-internal-pthreads.

This probably still needs a lot more testing. It also won't work on
Windows XP.
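
The general shape of such a wrapper, sketched here with Win32 SRW locks
(illustrative names, not the actual osdep code; SRW locks and condition
variables are Vista and later, which is presumably why Windows XP is
out):

    #include <windows.h>

    /* Illustrative mutex wrapper on top of SRW locks. */
    typedef SRWLOCK mp_win32_mutex;

    #define MP_WIN32_MUTEX_INITIALIZER SRWLOCK_INIT

    static int mp_win32_mutex_lock(mp_win32_mutex *mutex)
    {
        AcquireSRWLockExclusive(mutex);
        return 0;
    }

    static int mp_win32_mutex_unlock(mp_win32_mutex *mutex)
    {
        ReleaseSRWLockExclusive(mutex);
        return 0;
    }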
2015-01-01 15:10:42 +01:00
wm4 8eaa63689a demux_mf: move mf.c contents to demux_mf.c
There's no reason why parts of this demuxer would be in a separate
source file. The existence of this code is already somewhat questionable
anyway, so it may as well be dumped into a single file.

Even stranger is that demux.c included mf.h for no reason (it was an
artifact from 2002, when the architecture was less clean).
2014-12-29 23:09:50 +01:00
wm4 4075518011 ao_portaudio: remove this audio output
It's just completely useless. We have good native support for all 3
desktop platforms, and ao_sdl or ao_openal as fallbacks.
2014-12-29 18:53:12 +01:00
Stefano Pigozzi 20f6cf3ec6 build: fix linking with --enable-static-build 2014-12-29 18:30:24 +01:00
Stefano Pigozzi 54aea7d5de chmap_sel: add multichannel fallback heuristic
Instead of just failing during channel map selection, try to select a close
layout that makes the most sense, and upmix/downmix to that instead of
failing AO initialization. The heuristic is rather simple, and uses the
following steps (roughly sketched in code after the list):

1) If mono is required, always prefer stereo to a multichannel upmix.
2) Search for an upmix that is an exact superset of the required channel map.
3) Search for a downmix that is an exact subset of the required channel map.
4) Search for either an upmix or downmix that is the closest (minimum difference
   of channels) to the required channel map.
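
A rough sketch of steps 2-4 in C (hypothetical types and helpers, not
the real chmap_sel code; step 1, the mono special case, is left out):

    #include <stdbool.h>
    #include <stdlib.h>

    struct chmap { int num; int speakers[16]; };

    static bool map_contains(const struct chmap *map, int speaker)
    {
        for (int n = 0; n < map->num; n++) {
            if (map->speakers[n] == speaker)
                return true;
        }
        return false;
    }

    /* true if 'big' contains every speaker of 'small' */
    static bool is_superset(const struct chmap *big, const struct chmap *small)
    {
        for (int n = 0; n < small->num; n++) {
            if (!map_contains(big, small->speakers[n]))
                return false;
        }
        return true;
    }

    /* Pick an index into avail[] for the requested layout, or -1. */
    static int select_fallback(const struct chmap *req,
                               const struct chmap *avail, int num_avail)
    {
        for (int n = 0; n < num_avail; n++) {     /* step 2: superset upmix */
            if (is_superset(&avail[n], req))
                return n;
        }
        for (int n = 0; n < num_avail; n++) {     /* step 3: subset downmix */
            if (is_superset(req, &avail[n]))
                return n;
        }
        int best = -1, best_diff = 1000;          /* step 4: closest size */
        for (int n = 0; n < num_avail; n++) {
            int diff = abs(avail[n].num - req->num);
            if (diff < best_diff) {
                best_diff = diff;
                best = n;
            }
        }
        return best;
    }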
2014-12-29 17:56:53 +01:00
wm4 3fdb6be316 win32: add mmap() emulation
Makes all of overlay_add work on windows/mingw.

Since we no longer explicitly check for mmap() (it's always present),
this also requires us to make af_export.c compile, but I haven't
tested it.
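
Conceptually the emulation maps mmap() onto Win32 file-mapping objects;
a simplified sketch for the anonymous read/write case (not the actual
osdep code, the helper name is made up):

    #include <windows.h>
    #include <stddef.h>

    /* Emulate an anonymous, readable/writable mmap() of 'length' bytes. */
    static void *mmap_anon_rw(size_t length)
    {
        HANDLE map = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, (DWORD)length,
                                        NULL);
        if (!map)
            return NULL;
        void *p = MapViewOfFile(map, FILE_MAP_WRITE, 0, 0, length);
        /* The view keeps the mapping object alive; unmapping the view
         * later (UnmapViewOfFile) releases everything. */
        CloseHandle(map);
        return p;
    }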
2014-12-26 17:30:10 +01:00
wm4 fb855b8659 client API: expose OpenGL renderer
This adds API to libmpv that lets host applications use the mpv opengl
renderer. This is a more flexible (and possibly more portable)
alternative to foreign window embedding (via --wid).

This assumes that methods like context sharing and multithreaded OpenGL
rendering are infeasible, and that a way is needed to integrate it with
an application that uses a single thread to render everything.

Add an example that does this with QtQuick/qml. The example is
relatively lazy, but still shows how relatively simple the integration
is. The FBO indirection could probably be avoided, but would require
more work (and would probably lead to worse QtQuick integration, because
it would have to ignore transformations like rotation).

Because this makes mpv directly use the host application's OpenGL
context, there is no platform specific code involved in mpv, except
for hw decoding interop.

main.qml is derived from some Qt example.

The following things are still missing:
- a way to do better video timing
- expose GL renderer options, allow changing them at runtime
- support for color equalizer controls
- support for screenshots
2014-12-09 17:59:04 +01:00
wm4 231d5a32a9 build: showqscale was removed 2014-12-04 10:39:11 +01:00