Commit Graph

240 Commits

wm4 15a5d33b79 vf_vavpp: move frame handling to separate file
Move the handling of the future/past frames and the associated dataflow
rules to a separate source file.

While this on its own seems rather questionable and just inflates the
code, I intend to reuse it for other filters. The logic is annoying
enough that it shouldn't be duplicated a bunch of times.

(I considered other ways of sharing this logic, such as an uber-
deinterlace filter, which would access the hardware deinterlacer via a
different API. Although that sounds like kind of the right approach,
this would have other problems, so let's not, at least for now.)
2016-05-25 23:51:24 +02:00
wm4 39b64fb176 video: merge dxva2 source files
video/dxva2.c exported only 2 functions, both used only by
video/decode/dxva2.c.

The same was already done for d3d11.
2016-05-17 11:05:51 +02:00
Niklas Haas 7c3d78fd82 vo_opengl: support external user hooks
This allows users to add their own near-arbitrary hooks to the vo_opengl
processing pipeline, greatly enhancing the flexibility of user shaders.
This enables, among other things, user shaders such as CrossBilateral,
SuperRes, LumaSharpen and many more.

To make parsing the user shaders easier, shaders are now loaded as
bstrs, and the hooks are set up during video reconfig instead of on
every single frame.
2016-05-15 20:42:02 +02:00
wm4 84ccebd9b9 vo_opengl: reorganize texture format handling
This merges all knowledge about texture format into a central table.

Most of the work done here is actually identifying which formats exactly
are supported by OpenGL(ES) under which circumstances, and keeping this
information in the format table in a somewhat declarative way. (Although
only to the extent needed by mpv.) In particular, ES and float formats
are a horrible mess.

Again this is a big refactor that might cause regression on "obscure"
configurations.
2016-05-12 21:22:28 +02:00
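
The table-driven approach described above can be sketched roughly like this (struct layout, flag names and entries are illustrative, not mpv's actual definitions):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical declarative format table: each entry records per-format
     * capabilities so "is this format usable on GL/GLES?" becomes a table
     * lookup instead of ad-hoc checks scattered through the code. */
    enum { CAP_GL21 = 1, CAP_GL3 = 2, CAP_GLES2 = 4, CAP_GLES3 = 8, CAP_FLOAT = 16 };

    struct fmt_entry {
        const char *name;   /* mpv-side format name */
        int components;     /* channel count */
        int bytes;          /* bytes per component */
        int caps;           /* which GL variants support it */
    };

    static const struct fmt_entry formats[] = {
        {"r8",      1, 1, CAP_GL21 | CAP_GL3 | CAP_GLES2 | CAP_GLES3},
        {"r16",     1, 2, CAP_GL21 | CAP_GL3},               /* no 16-bit int formats on ES */
        {"rgba16f", 4, 2, CAP_GL3 | CAP_GLES3 | CAP_FLOAT},  /* float formats are messy on ES */
        {0}
    };

    static const struct fmt_entry *find_fmt(const char *name, int need_caps)
    {
        for (int i = 0; formats[i].name; i++) {
            if (!strcmp(formats[i].name, name) && (formats[i].caps & need_caps))
                return &formats[i];
        }
        return NULL;
    }

    int main(void)
    {
        const struct fmt_entry *f = find_fmt("rgba16f", CAP_GLES3);
        printf("rgba16f on GLES3: %s\n", f ? "supported" : "unsupported");
        return 0;
    }
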
wm4 fd82e14888 build: merge d3d11va and dxva2 hwaccel checks
We don't have any reason to disable either. Both are loaded dynamically
at runtime anyway. There is also no reason why dxva2 would disappear
from libavcodec any time soon.
2016-05-11 15:40:31 +02:00
wm4 fde20d10bc vo_opengl: angle: dynamically load ANGLE
ANGLE is _really_ annoying to build. (Requires special toolchain and a
recent MSVC version.) This results in various issues with people
having trouble building mpv against ANGLE (apparently linking it
against a prebuilt binary doesn't count, or using binaries from
potentially untrusted sources is not wanted).

Dynamically loading ANGLE is going to be a huge convenience. This commit
implements this, with special focus on keeping it source compatible to
a normal build with ANGLE linked at build-time.
2016-05-11 15:39:29 +02:00
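
A minimal sketch of the dynamic-loading approach on Win32 (the DLL name is ANGLE's usual libEGL.dll; error handling and the full symbol table are omitted):

    #include <windows.h>
    #include <stdio.h>

    /* Sketch: load ANGLE's libEGL.dll at runtime instead of linking against it,
     * and resolve the needed entry points by name. Keeping the resolved symbols
     * under their usual names (typically via a table plus a macro) keeps the
     * rest of the code source compatible with a build-time-linked ANGLE. */
    int main(void)
    {
        HMODULE egl = LoadLibraryW(L"libEGL.dll");
        if (!egl) {
            printf("ANGLE runtime not found\n");
            return 1;
        }
        FARPROC p_eglGetDisplay = GetProcAddress(egl, "eglGetDisplay");
        if (!p_eglGetDisplay) {
            printf("missing eglGetDisplay\n");
            return 1;
        }
        /* In real code, every used EGL/GLES2 symbol would be resolved this way
         * and cast to a function pointer of the proper prototype before use. */
        printf("ANGLE loaded, eglGetDisplay at %p\n", (void *)p_eglGetDisplay);
        FreeLibrary(egl);
        return 0;
    }
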
wm4 46fff8d31a video: refactor how VO exports hwdec device handles
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.

The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)

This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.

Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.

The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.

This is a big refactor. Some things may be untested and could be broken.
2016-05-09 20:03:22 +02:00
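
The accessor pattern described above might look roughly like the following sketch (names and layout are illustrative, not mpv's actual mp_hwdec_devices API):

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    /* Opaque device registry with locked accessors, plus a single generic
     * "ctx" pointer per device instead of per-API fields. */
    struct hwdec_ctx {
        const char *driver_name; /* e.g. "vaapi", "d3d11" */
        void *ctx;               /* API-specific handle owned by the VO backend */
    };

    struct hwdec_devices {
        pthread_mutex_t lock;
        struct hwdec_ctx dev;    /* in reality possibly more than one device */
        int present;
    };

    struct hwdec_devices *hwdec_devices_create(void)
    {
        struct hwdec_devices *devs = calloc(1, sizeof(*devs));
        pthread_mutex_init(&devs->lock, NULL);
        return devs;
    }

    /* Called by the VO/backend that owns the device. */
    void hwdec_devices_add(struct hwdec_devices *devs, struct hwdec_ctx dev)
    {
        pthread_mutex_lock(&devs->lock);
        devs->dev = dev;
        devs->present = 1;
        pthread_mutex_unlock(&devs->lock);
    }

    /* Called by the decoder, possibly from another thread. */
    void *hwdec_devices_load(struct hwdec_devices *devs, const char *driver)
    {
        void *res = NULL;
        pthread_mutex_lock(&devs->lock);
        if (devs->present && strcmp(devs->dev.driver_name, driver) == 0)
            res = devs->dev.ctx;
        pthread_mutex_unlock(&devs->lock);
        return res;
    }
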
wm4 dff33893f2 d3d11va: store texture/subindex in IMGFMT_D3D11VA plane pointers
Basically this gets rid of the need for the accessors in d3d11va.h, and
the code can be cleaned up a little bit.

Note that libavcodec only defines an ID3D11VideoDecoderOutputView pointer
in the last plane pointers, but it tolerates/passes through the other
plane pointers we set.
2016-04-27 14:06:50 +02:00
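
The plane-pointer packing could be sketched like this (the exact plane assignments are illustrative; only the "view in the last plane" part is dictated by libavcodec, as noted above):

    #include <stdint.h>

    /* Keep the D3D11 texture pointer and array-slice index directly in the
     * image's plane pointers, so no separate accessor struct is needed. */
    struct fake_mp_image { void *planes[4]; };

    static void wrap_d3d11_frame(struct fake_mp_image *img,
                                 void *texture,      /* ID3D11Texture2D* */
                                 intptr_t subindex,  /* array slice used by the decoder */
                                 void *decoder_view) /* what libavcodec put in the last plane */
    {
        img->planes[0] = texture;
        img->planes[1] = (void *)subindex;
        img->planes[3] = decoder_view; /* passed through untouched */
    }

    static void unwrap_d3d11_frame(struct fake_mp_image *img,
                                   void **texture, intptr_t *subindex)
    {
        *texture = img->planes[0];
        *subindex = (intptr_t)img->planes[1];
    }
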
wm4 3706918311 vo_opengl: D3D11VA + ANGLE interop
This uses ID3D11VideoProcessor to convert the video to a RGBA surface,
which is then bound to ANGLE. Currently ANGLE does not provide any way
to bind nv12 surfaces directly, so this will have to do.

ID3D11VideoContext1 would give us slightly more control over the
colorspace conversion, though it's still not good, and not available
in MinGW headers yet.

The video processor is created lazily, because we need the coded frame
size, of which AVFrame and mp_image have no concept. Doing the
creation lazily is less of a pain than somehow hacking the coded frame
size into mp_image.

I'm not really sure how ID3D11VideoProcessorInputView is supposed to
work. We recreate it on every frame, which is simple and hopefully
doesn't affect performance.
2016-04-27 13:49:47 +02:00
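
The per-frame conversion step could look roughly like this (a sketch only: the processor, enumerator and output view are assumed to have been created lazily once the coded frame size is known, and error handling is minimal):

    #define COBJMACROS
    #include <d3d11.h>

    /* Create an input view for the decoder texture (recreated every frame, as
     * described above) and convert it into the RGBA output view that is shared
     * with ANGLE. */
    static HRESULT convert_frame(ID3D11VideoDevice *vdev,
                                 ID3D11VideoContext *vctx,
                                 ID3D11VideoProcessor *vproc,
                                 ID3D11VideoProcessorEnumerator *venum,
                                 ID3D11Texture2D *decoder_tex, UINT subindex,
                                 ID3D11VideoProcessorOutputView *out_view)
    {
        D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC vdesc = {
            .ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D,
            .Texture2D = { .ArraySlice = subindex },
        };
        ID3D11VideoProcessorInputView *in_view = NULL;
        HRESULT hr = ID3D11VideoDevice_CreateVideoProcessorInputView(
            vdev, (ID3D11Resource *)decoder_tex, venum, &vdesc, &in_view);
        if (FAILED(hr))
            return hr;

        D3D11_VIDEO_PROCESSOR_STREAM stream = {
            .Enable = TRUE,
            .pInputSurface = in_view,
        };
        hr = ID3D11VideoContext_VideoProcessorBlt(vctx, vproc, out_view, 0, 1, &stream);
        ID3D11VideoProcessorInputView_Release(in_view);
        return hr;
    }
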
wm4 7e3d8e7134 vd_lavc: simplify RPI and Mediacodec wrappers
Use the recently added lavc_suffix mechanism to select the wrapper
decoder.

With all hwdec callbacks being optional, and RPI/Mediacodec having only
dummy callbacks, all the callbacks can be removed as well.

The result is that the vd_lavc_hwdec struct for both of them is tiny.
It's better to move them to vd_lavc.c directly, because they are so
trivial and small.
2016-04-25 12:23:38 +02:00
Kevin Mitchell a7110862c8 vd_lavc: add d3d11va hwdec
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
api. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
2016-03-30 09:01:27 -07:00
Jan Ekström 5843392db5 Add a mediacodec decoder hwdec wrapper
Does the same thing as the rpi one - makes fallback possible by
pretending that h264_mediacodec is a hwdec.
2016-03-25 21:35:59 +01:00
James Ross-Gowan 5bf473d5ca ipc: add Windows implementation with named pipes
This implements the JSON IPC protocol with named pipes, which are
probably the closest Windows equivalent to Unix domain sockets in terms
of functionality. Like with Unix sockets, this will allow mpv to listen
for IPC connections and handle multiple IPC clients at once. A few cross
platform libraries and frameworks (Qt, node.js) use named pipes for IPC
on Windows and Unix sockets on Linux and Unix, so hopefully this will
ease the creation of portable JSON IPC clients.

Unlike the Unix implementation, this doesn't share code with
--input-file, meaning --input-file on Windows won't understand JSON
commands (yet). Sharing code and removing the separate implementation in
pipe-win32.c is definitely a possible future improvement.
2016-03-23 23:15:20 +11:00
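
A bare-bones sketch of the server side (the pipe name and single-client loop are illustrative; the real implementation is asynchronous and handles multiple clients):

    #include <windows.h>
    #include <stdio.h>

    /* Create a named pipe, wait for one client, and read newline-terminated
     * JSON commands from it. */
    int main(void)
    {
        HANDLE pipe = CreateNamedPipeW(
            L"\\\\.\\pipe\\mpvsocket",            /* example name only */
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE)
            return 1;

        if (ConnectNamedPipe(pipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED) {
            char buf[4096];
            DWORD n;
            while (ReadFile(pipe, buf, sizeof(buf) - 1, &n, NULL) && n > 0) {
                buf[n] = '\0';
                printf("received: %s", buf); /* would be parsed as a JSON command */
            }
            DisconnectNamedPipe(pipe);
        }
        CloseHandle(pipe);
        return 0;
    }
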
wm4 3c59f54ec8 build: be less strict about line endings
This is a shitty hack, but also not terribly offensive.
2016-03-11 17:22:50 +01:00
Kevin Mitchell 8ff09f3217 vo_opengl: add dxva2 interop to angle backend
Like dxinterop, this uses StretchRect or RGB conversion. This is unavoidable as
long as we use the dxva2 API, as there is no way to access the raw hardware
decoded Direct3D9 surfaces.
2016-03-10 15:49:55 -08:00
Jashandeep Sohi d54b60c63d build: install symbolic SVG icon 2016-03-10 22:47:54 +01:00
wm4 9972847265 demux: add null demuxer
It's useless, but can be used for fancy --lavfi-complex nonsense.
2016-03-04 23:51:55 +01:00
Ilya Zhuravlev 72aea5a12b ao: initial OpenSL ES support
OpenSL ES is used on Android. At the moment only stereo output is
supported. Two options are supported: 'frames-per-buffer' and
'sample-rate'. To get better latency the user of libmpv should pass
values obtained from AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER)
and AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE).
2016-02-27 00:00:36 +01:00
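
A sketch of how a libmpv client on Android might pass those values (the values are hard-coded here, and the exact sub-option syntax is an assumption; check the --ao documentation for the authoritative form):

    #include <mpv/client.h>
    #include <stdio.h>

    int main(void)
    {
        mpv_handle *ctx = mpv_create();
        if (!ctx)
            return 1;
        /* In a real app these numbers would come from
         * AudioManager.getProperty(...) on the Java side. */
        mpv_set_option_string(ctx, "ao",
            "opensles:frames-per-buffer=256:sample-rate=48000");
        if (mpv_initialize(ctx) < 0) {
            printf("mpv init failed\n");
            return 1;
        }
        mpv_terminate_destroy(ctx);
        return 0;
    }
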
Emmanuel Gil Peyrot 87b09a180a wscript: don’t install the encoding profiles with encoding disabled 2016-02-22 20:51:06 +01:00
Kevin Mitchell 0d40140668 wscript: remove dxva2-dxinterop configure test
Wasn't really necessary as it was equivalent to gl-dxinterop.
2016-02-17 10:17:52 -08:00
Kevin Mitchell 2d1f42089c vo_opengl: dxinterop: add dxva2 passthrough
Use dxva2 surface to fill RGB IDirect3DSurface9 shared with opengl via
DXRegisterObjectNV.
2016-02-17 09:07:12 -08:00
wm4 0af5335383 Rewrite ordered chapters and timeline stuff
This uses a different method to piece segments together. The old
approach basically changed to a new file (with a new start offset) any
time a segment ended. This meant waiting for audio/video to reach the
segment end, and then changing to the new segment all at once. It had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.

The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.

The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)

To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing
  concatenating files with completely different codecs for the sake
  of EDL (which also uses the timeline infrastructure). A "lighter"
  approach would try to make use of decoder mechanism to update e.g.
  the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
  normal playback core mechanisms like hr-seek, but now the playback
  core doesn't need to care about these things.

These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)

There is code duplication between audio and video decoder common code.
This is awful and could be shareable - but this will happen later.

Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
2016-02-15 21:04:07 +01:00
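
The "clip decoded data to segment boundaries" step can be sketched as follows (names and fields are illustrative, not the actual mpv code):

    #include <stdbool.h>

    struct segment { double start, end; }; /* timeline range covered by a segment */

    /* Video: frames with timestamps outside [start, end) are simply dropped,
     * so the playback core never sees data belonging to another segment. */
    static bool keep_video_frame(const struct segment *seg, double pts)
    {
        return pts >= seg->start && pts < seg->end;
    }

    /* Audio: a frame overlapping a boundary is trimmed instead of dropped.
     * Returns whether anything is left after trimming. */
    static bool trim_audio_frame(const struct segment *seg, double pts,
                                 int samples, int samplerate,
                                 int *skip_front, int *new_count)
    {
        double end_pts = pts + (double)samples / samplerate;
        int skip = 0, count = samples;
        if (pts < seg->start)
            skip = (int)((seg->start - pts) * samplerate + 0.5);
        if (end_pts > seg->end)
            count = (int)((seg->end - pts) * samplerate + 0.5);
        count -= skip;
        if (count < 0)
            count = 0;
        *skip_front = skip;
        *new_count = count;
        return count > 0;
    }
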
Kevin Mitchell 06f1e934db dxva2: use mp_image pool for d3d surfaces
This is required so that the individual surfaces can pass beyond the dxva2
decoder and be passed to the vo.

This also adds additional data to mp_image->planes[0] for IMGFMT_DXVA2, which is
required for maintaining and releasing the surface even if the decoder code is
uninited.

The IDirectXVideoDecoder itself is encapsulated together with its surface pool
and configuration in a dxva2_decoder structure whose creation and destruction is
managed by talloc.
2016-02-14 11:01:12 -08:00
Jan Ekström 8af7112802 wscript_build: disable SONAME generation when building for Android
Android is well-known for not supporting SONAME'd libraries. All
libraries imported into an APK have to end with '.so'.
2016-02-10 21:29:44 +01:00
Jan Ekström ff0112e08d Initial Android support
* Adds an 'android' feature, which is automatically detected.
* Android has a broken strnlen, so a wrapper is added from FreeBSD (see
  the sketch below).
2016-02-10 21:29:36 +01:00
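
A minimal version of such a strnlen fallback (the FreeBSD original differs in details):

    #include <stddef.h>

    /* Count bytes up to the first NUL, reading at most maxlen bytes. */
    static size_t fallback_strnlen(const char *s, size_t maxlen)
    {
        size_t len;
        for (len = 0; len < maxlen; len++) {
            if (s[len] == '\0')
                break;
        }
        return len;
    }
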
wm4 c0de087ba1 player: add complex filter graph support
See --lavfi-complex option.

This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interacting with
libavfilter).

Future possible plans might include:
- freely switch tracks by providing some sort of default track graph
  label
- automatically enabling audio visualization
- automatically mix audio or stack video when multiple tracks are
  selected at once (similar to how multiple sub tracks can be selected)
2016-02-05 23:19:56 +01:00
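
A sketch of a libmpv client enabling a complex filter graph (the graph itself uses libavfilter syntax; the [vid1]/[vid2]/[vo] labels follow the naming scheme described in the --lavfi-complex documentation):

    #include <mpv/client.h>

    int main(void)
    {
        mpv_handle *ctx = mpv_create();
        if (!ctx)
            return 1;
        /* Stack two video tracks vertically and send the result to the VO. */
        mpv_set_option_string(ctx, "lavfi-complex", "[vid1] [vid2] vstack [vo]");
        if (mpv_initialize(ctx) < 0)
            return 1;
        mpv_terminate_destroy(ctx);
        return 0;
    }
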
wm4 45345d9c41 build: make libavfilter mandatory
The complex filter support that will be added makes much more complex
use of libavfilter, and I'm not going to bother with adding hacks to
keep libavfilter optional.
2016-02-05 23:17:33 +01:00
wm4 796d5266f1 build: fix mpv.conf installation 2016-01-11 22:13:16 +01:00
wm4 31a4547187 ao_wasapi: move out some utility functions
Note that hresult_to_str() (coming from wasapi_explain_err()) is mostly
wasapi-specific, but since HRESULT error codes are unique, it can be
extended for any other use.
2016-01-11 16:24:13 +01:00
wm4 3e90a5fe81 ao_dsound: remove this audio output
It existed for XP-compatibility only. There was also a time where
ao_wasapi caused issues, but we're relatively confident that ao_wasapi
works better than, or at least as well as, ao_dsound on Windows Vista and
later.
2016-01-06 13:52:15 +01:00
Chris Mayo d0cd7fa31b build: add option of html manual 2016-01-05 11:24:08 +01:00
James Ross-Gowan 7558d1ed7b win32: input: use Vista CancelIoEx
libwaio was added due to the complete inability to cancel synchronous
I/O cleanly using the public Windows API in Windows XP. Even calling
TerminateThread on the thread performing I/O was a bad solution, because
the TerminateThread function in XP would leak the thread's stack.

In Vista and up, however, this is no longer a problem. CancelIoEx can
cancel synchronous I/O running on other threads, allowing the thread to
exit cleanly, so replace libwaio usage with native Vista API functions.

It should be noted that this change also removes the hack added in
8a27025 for preventing a deadlock that only seemed to happen in Windows
XP. KB2009703 says that Vista and up are not affected by this, due to a
change in the implementation of GetFileType, so the hack should not be
needed anymore.
2015-12-20 21:06:02 +11:00
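
The core of the Vista+ approach can be sketched like this (a self-contained toy: one thread blocks in a synchronous ReadFile on a pipe, and the main thread cancels it with CancelIoEx so the reader exits cleanly):

    #define _WIN32_WINNT 0x0600   /* Vista+, needed for CancelIoEx */
    #include <windows.h>
    #include <stdio.h>

    static HANDLE pipe_read;

    static DWORD WINAPI reader_thread(void *arg)
    {
        char buf[256];
        DWORD n;
        (void)arg;
        /* Blocks, since nothing is ever written to the pipe. */
        if (!ReadFile(pipe_read, buf, sizeof(buf), &n, NULL) &&
            GetLastError() == ERROR_OPERATION_ABORTED)
            printf("read cancelled, thread exiting cleanly\n");
        return 0;
    }

    int main(void)
    {
        HANDLE pipe_write;
        CreatePipe(&pipe_read, &pipe_write, NULL, 0);
        HANDLE t = CreateThread(NULL, 0, reader_thread, NULL, 0, NULL);
        Sleep(1000);                     /* pretend we decided to shut down */
        CancelIoEx(pipe_read, NULL);     /* cancels the synchronous ReadFile above */
        WaitForSingleObject(t, INFINITE);
        CloseHandle(t);
        CloseHandle(pipe_read);
        CloseHandle(pipe_write);
        return 0;
    }
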
wm4 4cc1861378 vo_opengl: prefix per-backend source files with context_ 2015-12-19 14:14:12 +01:00
wm4 6154c1d06d vo_opengl: split backend code from common.c to context.c
Now common.c only contains the code for the function loader, while
context.c contains the backend loader/dispatcher.

Not calling it "backend.c", because the central struct is called
MPGLContext.
2015-12-19 14:14:12 +01:00
wm4 32cd85bc7e vo_opengl: x11egl: retrieve framebuffer depth
This is used for dithering, although I'm not aware of anyone who got
higher than 8 bit depth support to work on Linux.

Also put this into egl_helpers.c. Since EGL is pseudo-portable at best I
have no hope that the EGL context creation code in all the backends can
be fully shared. But some self-contained functionality can definitely be
shared.
2015-12-19 14:14:12 +01:00
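
The query itself is small; a sketch (assuming the display and the chosen EGLConfig are already available from context creation):

    #include <EGL/egl.h>
    #include <stdio.h>

    /* Ask the chosen config for its per-channel sizes; the smallest channel
     * is what matters for dithering. */
    static int egl_fb_depth(EGLDisplay display, EGLConfig config)
    {
        EGLint r = 0, g = 0, b = 0;
        eglGetConfigAttrib(display, config, EGL_RED_SIZE, &r);
        eglGetConfigAttrib(display, config, EGL_GREEN_SIZE, &g);
        eglGetConfigAttrib(display, config, EGL_BLUE_SIZE, &b);
        printf("framebuffer depth: R%dG%dB%d\n", (int)r, (int)g, (int)b);
        int depth = r < g ? (int)r : (int)g;
        return depth < b ? depth : (int)b;
    }
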
wm4 00135a87f0 sub: rename sd_lavc_conv.c to lavc_conv.c
The previous commit turned sd_lavc_conv from a sd_driver to
free-standing functions. Do the rename to reflect this change
separately to avoid confusing git's content tracking. (Or did
git solve this, making separating renames and content changes
unnecessary?)
2015-12-18 03:59:52 +01:00
wm4 e798cf1ff6 sub: remove sd_srt.c
The FFmpeg subtitle converter does the same. There used to be some
deficiencies in FFmpeg's code, but it seems at least some of them have
been fixed. There also used to be the timestamp issue (see previous
commit messages), but this doesn't matter anymore. So no reason to
keep this code - get rid of it.
2015-12-15 21:05:48 +01:00
wm4 cab942acae sub: remove sd_microdvd.c
This can be dropped for the same reasons as in the previous commits. It
removes MicroDVD conversion support on Libav, although MicroDVD files
couldn't be read in the first place ever since demux_subreader.c was
removed.
2015-12-15 21:05:48 +01:00
wm4 c7985fe5f7 sub: remove sd_lavf_srt.c
This restored timestamps when demuxing srt subtitles in Libav, which
was important for avoiding slightly overlapping subtitles. Since the
way this works was changed, there is no real reason to maintain proper
timestamps anymore on this level - this can be dropped without issues.
2015-12-15 21:05:48 +01:00
wm4 29226e6a99 sub: remove sd_movtext.c
libavcodec's movtext-to-ass converter does the same and has more
features. On Libav, this commit disables mp4 subtitle display.
2015-12-15 21:05:48 +01:00
James Ross-Gowan 3d12312806 vo_opengl: add dxinterop backend
WGL_NV_DX_interop is widely supported by Nvidia and AMD drivers. It
allows a texture to be shared between Direct3D and WGL, so that
rendering can be done with WGL and presentation can be done with
Direct3D. This should allow us to work around some persistent WGL
issues, such as dropped frames with some driver/OS combos, drivers that
buffer frames to increase performance at the cost of latency, and the
inability to disable exclusive fullscreen mode when using WGL to render
to a fullscreen window.

The addition of a DX_interop backend might also enable some cool
Direct3D-specific enhancements in the future, such as using the
GetPresentStatistics API to get accurate frame presentation timestamps.

Note that due to a driver bug, this backend is currently broken on
Intel. It will appear to work as long as the window is not resized too
often, but after a few changes of size it will be unable to share the
newly created renderbuffer with GL. See:
https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/562051
2015-12-11 18:29:18 +01:00
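
The interop flow can be sketched roughly as follows (the function pointer typedefs and the GL_RENDERBUFFER/access constants normally come from glext.h/wglext.h; device and renderbuffer setup, error handling and cleanup are omitted, and in real code the device/object registration happens once at init, not per frame):

    #include <windows.h>
    #include <GL/gl.h>

    #define GL_RENDERBUFFER             0x8D41   /* normally from glext.h */
    #define WGL_ACCESS_WRITE_DISCARD_NV 0x0002   /* normally from wglext.h */

    typedef HANDLE (WINAPI *PFNWGLDXOPENDEVICENVPROC)(void *dxDevice);
    typedef HANDLE (WINAPI *PFNWGLDXREGISTEROBJECTNVPROC)(HANDLE hDevice, void *dxObject,
                                                          GLuint name, GLenum type, GLenum access);
    typedef BOOL   (WINAPI *PFNWGLDXLOCKOBJECTSNVPROC)(HANDLE hDevice, GLint count, HANDLE *hObjects);
    typedef BOOL   (WINAPI *PFNWGLDXUNLOCKOBJECTSNVPROC)(HANDLE hDevice, GLint count, HANDLE *hObjects);

    /* Register a D3D9 surface as a GL renderbuffer, lock it while GL renders
     * into it, then unlock it so Direct3D can present it. */
    static void render_via_dxinterop(void *d3d9_device, void *d3d9_surface, GLuint rbo)
    {
        PFNWGLDXOPENDEVICENVPROC     pwglDXOpenDeviceNV =
            (PFNWGLDXOPENDEVICENVPROC)wglGetProcAddress("wglDXOpenDeviceNV");
        PFNWGLDXREGISTEROBJECTNVPROC pwglDXRegisterObjectNV =
            (PFNWGLDXREGISTEROBJECTNVPROC)wglGetProcAddress("wglDXRegisterObjectNV");
        PFNWGLDXLOCKOBJECTSNVPROC    pwglDXLockObjectsNV =
            (PFNWGLDXLOCKOBJECTSNVPROC)wglGetProcAddress("wglDXLockObjectsNV");
        PFNWGLDXUNLOCKOBJECTSNVPROC  pwglDXUnlockObjectsNV =
            (PFNWGLDXUNLOCKOBJECTSNVPROC)wglGetProcAddress("wglDXUnlockObjectsNV");

        HANDLE dev = pwglDXOpenDeviceNV(d3d9_device);
        HANDLE obj = pwglDXRegisterObjectNV(dev, d3d9_surface, rbo, GL_RENDERBUFFER,
                                            WGL_ACCESS_WRITE_DISCARD_NV);

        pwglDXLockObjectsNV(dev, 1, &obj);
        /* ... GL rendering into the renderbuffer/FBO happens here ... */
        pwglDXUnlockObjectsNV(dev, 1, &obj);
        /* ... Direct3D then presents the surface (StretchRect/Present) ... */
    }
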
wm4 a6521f3941 demux: remove old subtitle parser
All of these are supported by FFmpeg now. The old parser was disabled
by default too (when building against FFmpeg).

If compiled against Libav, mpv will lose the ability to read some
subtitle formats (but the most important ones, srt and ass, still should
work).
2015-12-10 22:53:37 +01:00
wm4 475fe453cc stream: drop PVR support
This is only for specific Hauppauge cards, and it's unclear who, if
anyone, is actively using this feature. Get it out of the way.

Anyone who still wants to use this should complain. Keeping this code
would not cause terribly much additional work, and it could be restored
again. (But not if the request comes months later.)
2015-12-10 22:53:02 +01:00
Christian Hesse 30f18b8dc7 build: install scalable svg icon as well 2015-12-01 10:58:35 +01:00
James Ross-Gowan b7d614aa0d vo_opengl: win32: test for exclusive mode
This is a hack, but unfortunately the DwmGetCompositionTimingInfo
heuristic does not work in all cases (with multiple monitors on Windows
8.1, and even with a single monitor in Windows 10). See the comment in
mp_w32_is_in_exclusive_mode() for more details.

It should go without saying that if any better method of doing this
reveals itself, this hack should be dropped.
2015-11-26 00:38:03 +11:00
James Ross-Gowan e76ec2c963 vo_opengl: add initial ANGLE support
ANGLE is a GLES2 implementation for Windows that uses Direct3D 11 for
rendering, enabling vo_opengl to work on systems with poor OpenGL
drivers and bypassing some of the problems with native GL, such as VSync
in fullscreen mode.

Unfortunately, using GLES2 means that most of vo_opengl's advanced
features will not work; however, ANGLE is under rapid development and
GLES3 support is supposed to be coming soon.
2015-11-18 23:07:33 +11:00
wm4 384b13c5fd demux_libass: remove this demuxer
This loaded external .ass files via libass. libavformat's .ass reader is
now good enough, so use that instead.

Apparently libavformat still doesn't support fonts embedded into text
.ass files, but support for this has been accidentally broken in mpv for
a while anyway. (And only 1 person complained.)
2015-11-11 21:28:20 +01:00
rr- c3f2ef5491 vo_opengl: add DRM EGL backend
Notes:

- Unfortunately the only way to talk to EGL from within DRM I could find
  involves linking with GBM (generic buffer management for Mesa).
  Because of this, I'm pretty sure it won't work with proprietary NVidia
  drivers, but then again, last time I checked NVidia didn't offer
  proper screen resolution for VT.

- VT switching doesn't seem to work at all. It's worth mentioning that
  using vo_drm before the introduction of the VT switcher had an anomaly
  where the user could switch to another VT and input text to it, while
  video played on top of that VT. However, that isn't the case with
  drm_egl: I can't switch to another VT during playback like this. This
  makes me
  think that it's either a limitation coming from my firmware or from
  EGL/KMS itself rather than a bug with my code. Nonetheless, I still
  left (untestable) VT switching code in place, in case it's useful to
  someone else.

- The mode_id, connector_id and device_path should be configurable for
  power users and people who wish to watch videos on a non-primary screen.
  Unfortunately I didn't see anything that would allow OpenGL backends
  to register their own set of options. At the same time, adding them to
  the global namespace is pointless.

- A few dozen lines could be shared with vo_drm (setting up VT
  switching, most of the code behind page flipping). I don't have any strong
  opinion on this.

- Sometimes I get minor visual glitches. I'm not sure if there's a race
  condition of some sort, an uninitialized variable (doubtful), or a
  buggy driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)

- .config and .control are very minimal.

Signed-off-by: wm4 <wm4@nowhere>
2015-11-08 15:00:15 +01:00
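
A sketch of the GBM/EGL bring-up mentioned in the first point above (mode setting, page flipping and VT handling are omitted; the device path is just an example):

    #include <fcntl.h>
    #include <unistd.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    /* Open the DRM device, create a GBM device/surface, and hand the GBM
     * device to EGL as the native display. */
    static int init_drm_egl(const char *drm_path, int w, int h)
    {
        int fd = open(drm_path, O_RDWR | O_CLOEXEC);   /* e.g. "/dev/dri/card0" */
        if (fd < 0)
            return -1;

        struct gbm_device *gbm = gbm_create_device(fd);
        struct gbm_surface *gbm_surf = gbm_surface_create(
            gbm, w, h, GBM_FORMAT_XRGB8888,
            GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

        EGLDisplay display = eglGetDisplay((EGLNativeDisplayType)gbm);
        EGLint major, minor;
        if (!eglInitialize(display, &major, &minor)) {
            close(fd);
            return -1;
        }

        /* eglChooseConfig / eglCreateContext / eglCreateWindowSurface(gbm_surf)
         * would follow, then gbm_surface_lock_front_buffer() per frame for
         * drmModePageFlip(). */
        (void)gbm_surf;
        return fd;
    }
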
James Ross-Gowan 647b360a0a w32: use DisplayConfig API to retrieve correct monitor refresh rate
This is based on an older patch by James Ross-Gowan. It was rebased and
cleaned up. Also, the DWM API usage present in the older patch was
removed, because DWM reports nonsense rates at least on Windows 8.1
(they are rounded to integers, just like with the old GDI API - except
the GDI API had a good excuse, as it could report only integers).

Signed-off-by: wm4 <wm4@nowhere>
2015-11-06 19:53:18 +01:00
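
A sketch of the DisplayConfig query (matching a path to a specific monitor is omitted; needs user32 and Windows 7 or later):

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Enumerate active display paths and print their exact refresh rates as
     * rational numbers, which is what makes this API preferable to GDI. */
    int main(void)
    {
        UINT32 num_paths = 0, num_modes = 0;
        if (GetDisplayConfigBufferSizes(QDC_ONLY_ACTIVE_PATHS, &num_paths, &num_modes))
            return 1;

        DISPLAYCONFIG_PATH_INFO *paths = calloc(num_paths, sizeof(*paths));
        DISPLAYCONFIG_MODE_INFO *modes = calloc(num_modes, sizeof(*modes));
        if (QueryDisplayConfig(QDC_ONLY_ACTIVE_PATHS, &num_paths, paths,
                               &num_modes, modes, NULL))
            return 1;

        for (UINT32 i = 0; i < num_paths; i++) {
            DISPLAYCONFIG_RATIONAL rr = paths[i].targetInfo.refreshRate;
            if (rr.Denominator)
                printf("path %u: %.3f Hz\n", (unsigned)i,
                       (double)rr.Numerator / rr.Denominator);
        }
        free(paths);
        free(modes);
        return 0;
    }
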
Bin Jin 27dc834f37 vo_opengl: implement NNEDI3 prescaler
Implement NNEDI3, a neural network based deinterlacer.

The shader is reimplemented in GLSL and supports both 8x4 and 8x6
sampling windows now. This allows the shader to be licensed
under LGPL2.1 so that it can be used in mpv.

The current implementation supports uploading the NN weights (up to
51kb with the placebo setting) in two different ways: via a uniform
buffer object, or by hard-coding them into the shader source. UBO
requires OpenGL 3.1, which only guarantees 16kb per block. But 64kb
seems to be a common default for recent cards/drivers (which nnedi3 is
targeting), so I think we're fine here (with the default nnedi3 setting
the size of the weights is 9kb). Hard-coding into the shader requires
OpenGL 3.3, for the "intBitsToFloat()" built-in function. This is
necessary to precisely represent these weights in GLSL. I tried several
human-readable floating point number formats (with very high precision,
as needed for single precision floats), but for some reason they did
not work reliably; bad pixels (with NaN values) could be produced with
some weight sets.

We could also add support for uploading these weights with a texture,
just for compatibility reasons (e.g. upscaling a still image with a
low-end graphics card). But as I tested, it's rather slow even with a
1D texture (we would probably have to use a 2D texture due to dimension
size limitations). Since there is always a better choice for NNEDI3
upscaling of still images (the vapoursynth plugin), it's not implemented
in this commit. If it turns out to be in popular demand, it should be
easy to add later.

For those who want to optimize the performance a bit further, the
bottlenecks seem to be:
1. the overhead of uploading and accessing these weights (in
   particular, the shader code is regenerated for each frame, though
   that happens on the CPU).
2. "dot()" performance in the main loop.
3. "exp()" performance in the main loop; there are various fast
   implementations using bit tricks (probably with the help of the
   intBitsToFloat function).

The code was tested with an NVIDIA card and driver (355.11) on Linux.

Closes #2230
2015-11-05 17:38:20 +01:00
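
The hard-coding path described above relies on emitting each weight as its exact bit pattern; a small sketch of that generator step (standalone, with made-up demo weights):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Emit a GLSL array where every float weight is written as
     * intBitsToFloat(<bits>), reproducing the value bit-exactly; as noted in
     * the commit above, human-readable float literals did not round-trip
     * reliably and could produce NaN pixels. */
    static void emit_weights_glsl(FILE *out, const float *weights, int count)
    {
        fprintf(out, "const float weights[%d] = float[](\n", count);
        for (int i = 0; i < count; i++) {
            uint32_t bits;
            memcpy(&bits, &weights[i], sizeof(bits)); /* bit-exact reinterpretation */
            fprintf(out, "    intBitsToFloat(%d)%s\n", (int32_t)bits,
                    i + 1 < count ? "," : "");
        }
        fprintf(out, ");\n");
    }

    int main(void)
    {
        const float demo[4] = {0.123456789f, -1.5f, 3.0e-5f, 42.0f};
        emit_weights_glsl(stdout, demo, 4);
        return 0;
    }
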