Now --ass-use-margins doesn't apply to normal subtitles anymore. This is
probably the inverse of the mpv behavior users expected so far, and
thus a breaking change, so rename the option so that the user at least
has a chance to look up the option and decide whether the new behavior is
wanted or not.
The basic idea here is:
- plain text subtitles should have a certain useful default behavior,
like actually using margins
- ASS subtitles should never be broken by default
- ASS subtitles should look and behave like plaintext subtitles if
the --ass-style-override=force option is used
This also subtly changes --sub-scale-with-window and adds the
--ass-scale-with-window option. Since this one isn't so important, don't
bother with compatibility.
You can set in which "corner" the OSD and subtitles are shown. I'd
prefer it a bit more general (so you could set the alignment using
a factor), but the libass API does not provide this.
Requested. See manpage additions.
This also makes the magical loop_times constants slightly saner, but
shouldn't change the semantics of any existing --loop option values.
Not very important for the command line player; but GUI applications
will want to know about this.
This only adds the internal API; support for specific audio outputs
comes later.
This reuses the ao struct as context for the hotplug event listener,
similar to how the "old" device listing API did. This is probably a bit
unclean and confusing. One argument for reusing it is that otherwise
rewriting parts of ao_pulse would be required (because the PulseAudio
API requires so damn much boilerplate). Another is that --ao-defaults is
applied to the hotplug dummy ao struct, which automatically applies such
defaults even to the hotplug context.
Notification works through the property observation mechanism in the
client API. The notification chain is a bit complicated: the AO notifies
the player, which in turn notifies the clients, which in turn will
actually retrieve the device list. (It still has the advantage that it's
slightly cleaner, since the AO stuff doesn't need to know about client
API issues.)
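A rough sketch of the client side (assuming the device list is exposed
through the "audio-device-list" property, as with the existing device
listing API):

    #include <stdio.h>
    #include <mpv/client.h>

    int main(void)
    {
        mpv_handle *mpv = mpv_create();
        if (!mpv || mpv_initialize(mpv) < 0)
            return 1;

        // Get notified whenever the device list changes (e.g. hotplug).
        mpv_observe_property(mpv, 0, "audio-device-list", MPV_FORMAT_NONE);

        while (1) {
            mpv_event *ev = mpv_wait_event(mpv, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id == MPV_EVENT_PROPERTY_CHANGE) {
                // Only now retrieve the actual list, as described above.
                char *list = mpv_get_property_string(mpv, "audio-device-list");
                printf("devices: %s\n", list ? list : "(none)");
                mpv_free(list);
            }
        }
        mpv_terminate_destroy(mpv);
        return 0;
    }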
The weird handling of atomic flags in ao.c is because we still don't
require real atomics from the compiler. Otherwise we'd just use atomic
bitwise operations.
In my opinion the artifacts created by af_scaletempo on extreme slowdown
(50% or so) are too bothersome - but users disagree. So use
af_scaletempo on any speed changes, not just on speedup.
librubberband exports a big load of options. Normally, the default
settings (whether they're librubberband defaults or our defaults) should
be sufficient, but since I'm not so sure about this, making it
configurable allows others to figure it out for me.
If "--af=rubberband" is used, librubberband will be used to speed up or
slow down audio with pitch correction.
This still has some problems: the audio delay is not calculated
correctly, so the audio position jitters around by a few milliseconds.
This will probably ruin video timing.
This reverts commit a33b46194c.
It turns out FFmpeg really considers this a bug, and fixed it by making
the decoder output the correct pixel format.
Fixes #1565. Reverts the fix for #1528, though it should work fine with
a recent git master FFmpeg.
Make it accept "," as separator, instead of only ":". Do this by using
the key-value-list parser. Before this, the option was stored as a
string, with the option parser verifying that the option value was
correct. Now it's stored pre-parsed, although the log levels still
require separate verification and parsing-on-use to some degree (which
is why the msg-level option type doesn't go away).
Because the internal type changes, the client API "native" type also
changes. This could be prevented with some more effort, but I don't
think it's worth it - if MPV_FORMAT_STRING is used, it still works the
same, just with a different separator on read accesses.
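For illustration (module names and levels are just examples; reading the
value back through "options/msg-level" is an assumption about how the
option is mirrored as a property):

    #include <mpv/client.h>

    // Sketch; "mpv" is assumed to be an mpv_handle created beforehand.
    static void set_log_levels(mpv_handle *mpv)
    {
        // Key-value list syntax, now using "," as separator:
        mpv_set_option_string(mpv, "msg-level", "all=warn,ffmpeg=error");

        // Reading it back as a string also uses "," between entries.
        char *levels = mpv_get_property_string(mpv, "options/msg-level");
        mpv_free(levels);
    }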
This introduces a new option linear-scaling, which is now implied by
srgb, icc-profile and sigmoid-upscaling.
Notably, this means (sigmoidized) linear upscaling is now enabled by
default in opengl-hq mode. The impact should be negligible, and there
has been no observation of negative side effects of sigmoidized scaling,
so it feels safe to do so.
Autoload external audio files only if there's at least a video track
(which is not coverart pseudo-video).
Enable external audio file autoloading by default. Now that we actively
avoid doing stupid things like loading an external audio file for an
audio-only file, this should be fine.
Additionally, don't autoload subtitles if a subtitle is played.
Although you currently can't play subtitles without audio or video,
it's disturbing and stupid that the player might load subtitle files
with a different extension and then fail.
Giving this such a prominent place is not really appropriate anymore.
Most people seeing this would probably expect a release changelog, not
something about MPlayer.
Since the page still could be useful for former MPlayer users (in
particular to avoid confusion with renamed options etc.), still keep
it in the DOCS directory.
This shouldn't exist and for the most part is meant to be used by the
ytdl Lua script, but let's document it anyway. Since the Lua API handles
all the details, it's considered much more "stable" than the raw API,
which is why the raw API wasn't documented.
In ancient times, this was needed because it was not default, and many
VOs had problems with it. But it was always default in mpv, and all VOs
are required to deal with it. Also, running --fixed-vo=no is not useful
and just creates weird corner cases. Get rid of it.
The comment explains why I have been so doubtful about adding this. The
Apple docs
say CGDisplayModeGetRefreshRate is supposed to work only for CRTs, but it
doesn't, and actually works for LCD TVs connected over HDMI and external
displays (at least that's what I'm told, I don't have the hardware to test).
Maybe Apple docs are incorrect.
Since AFAIK Apple doesn't want to give us a better API – maybe in the fear we
might be able to actually write some useful software instead of "apps" –
I decided not to care as well and commit this.
This reverts the default behavior introduced in commit 93feffad. Way too
often libavcodec will return RGB data that has an alpha channel as per
pixel format, but actually contains garbage.
On the other hand, this will actually render garbage color values in
e.g. PNG files (for pixels with alpha==0, the color value should be
essentially ignored, which is what the old alpha blend mode did).
This "fixes" #1528, which is probably a decoder bug (or far less likely,
a broken file).
Make the lazy gamma initialization less weird, and make the default
value of the "gamma" sub-option 1.0. This means --vo=opengl:help will
list the actual default value.
Also change the lower bound to 0.1 - avoids a division by zero (I don't
know how shaders handle NaN, but it's probably not a good idea to give
them this value).
These commands are counterparts of sub_add/sub_remove/sub_reload, which
work for external audio files.
Signed-off-by: wm4 <wm4@nowhere>
(minor simplification)
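For illustration, loading an external audio file through the client API
could look roughly like this (the command name is as introduced here;
whether the "-" or "_" spelling is accepted may depend on the mpv
version):

    #include <mpv/client.h>

    // Assumed: "mpv" is an initialized mpv_handle and a file is playing.
    static void add_external_audio(mpv_handle *mpv, const char *path)
    {
        const char *cmd[] = {"audio_add", path, NULL};
        mpv_command(mpv, cmd);

        // The counterparts work the same way:
        //   {"audio_remove", NULL}  and  {"audio_reload", NULL}
    }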
These were derived from dividing our assumed video gamut (1.961) by some
typical screen values (2.2 for dimly lit and 2.4 for pitch black):
1.961/2.4 = 0.8170833333333334 ~= 0.8
1.961/2.2 = 0.8913636363636364 ~= 0.9
This is somewhat imperfect, because detection of hw decoding APIs is
mostly done on demand, and often avoided if not necessary. (For example,
we know very well that there are no hw decoders for certain codecs.)
This also requires every hwdec backend to identify itself (see hwdec.h
changes).
This does what it's documented to do.
The implementation reuses the code in mpv_detach_destroy(). Due to the
way async requests currently work, just sending a synchronous dummy
request (like a "ignore" command) would be enough to ensure
synchronization, but this code will continue to work even if this
changes.
The line "ctx->event_mask = 0;" is removed, but it shouldn't be needed.
(If a client is somehow very slow to terminate, this could silence an
annoying queue overflow message, but all in all it does nothing.)
Calling mpv_wait_async_requests() and mpv_wait_event() concurrently is
in theory allowed, so change pthread_cond_signal() to
pthread_cond_broadcast() to avoid missed wakeups.
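A minimal sketch of the intended usage (handle name assumed):

    #include <mpv/client.h>

    // Fire an asynchronous command, then block until all async requests
    // belonging to this handle have finished.
    static void flush_async(mpv_handle *mpv)
    {
        const char *cmd[] = {"ignore", NULL};
        mpv_command_async(mpv, 0, cmd);

        // Returns only after the reply for the command above (and any
        // other pending async request) has been added to the event queue.
        mpv_wait_async_requests(mpv);
    }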
As requested in issue #1542.
This was apparently useful for correct interlaced scaling (although I
don't know anyone who used this). It was rarely used (if at all), had an
inconvenient output format (packed YUV), and now has a better solution
in libavfilter (using the libavfilter "scale" filter via vf_lavfi).
There is no reason to keep this filter any longer.
It's entirely useless. I left it in for a while, because the analog TV
code had a transitional bug that could switch chroma planes, but it was
fixed long ago. It's also available in libavfilter.
If a file is unseekable (consider e.g. a http server without resume
functionality), but the stream cache is active, the player will enable
seeking anyway. Until now, client API users couldn't know that this
happens, and it has implications for how well seeking will work. So add a
property which exports whether this situation applies.
Fixes #1522.
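A client could query it roughly like this (assuming the property is
exposed as "partially-seekable"; check the manpage of the version in
use):

    #include <stdio.h>
    #include <mpv/client.h>

    // Assumed: "mpv" is an initialized handle with a file loaded.
    static void check_seekability(mpv_handle *mpv)
    {
        int partial = 0;
        if (mpv_get_property(mpv, "partially-seekable",
                             MPV_FORMAT_FLAG, &partial) >= 0)
            printf("seeking works only through the cache: %s\n",
                   partial ? "yes" : "no");
    }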
This allows getting the log at all with --no-terminal and without having
to retrieve log messages manually with the client API. The log level is
hardcoded to -v. A higher log level would lead to too much log output
(huge file sizes and latency issues due to waiting on the disk), and
isn't too useful in general anyway. For debugging, the terminal can be
used instead.
The previous default ("no") seemed to be equivalent to "min" in practice
(though it might depend on the website, which is even worse).
Better just select the best stream by default.
This queries the _ICC_PROFILE property on the root window. It also tries
to reload the ICC when it changes, or if the mpv window changes the
monitor. (If multiple monitors are covered, mpv will randomly select one
of them.)
The official spec is a dead link on freedesktop.org, so don't blame me
for any bugs.
Note that this assumes that Xinerama screen numbers match the way mpv
enumerates the xrandr monitors. Although there is some chance that this
matches, it most likely doesn't, and we actually have to do complicated
things to map the screen numbers. If it turns out that this is required,
I will fix it as soon as someone with a suitable setup for testing the
fix reports it.
Seems like several people agree that it's a good filter for downscaling.
Setting this option by default may also prevent people from accidentally
using an unsuitable filter for downscaling by setting "scale" without
being aware of the implications (maybe). On the other hand,
this change is not strictly backwards compatible for the same reasons.
Also, allow disabling this option with scale-down="" (before this, not
setting it was the only way to do this - not possible anymore if it's
set by default). This is what the change in handle_scaler_opt() does.
New command `mouse <x> <y> [<button> [single|double]]` is introduced.
This will update mouse position with given coordinate (`<x>`, `<y>`),
and additionally, send single-click or double-click event if `<button>`
is given.
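For illustration, sending the new command through the client API might
look like this:

    #include <mpv/client.h>

    // Move the mouse pointer to (100, 50) and send a single-click for
    // button 0 ("mpv" is assumed to be an initialized mpv_handle).
    static void send_click(mpv_handle *mpv)
    {
        const char *cmd[] = {"mouse", "100", "50", "0", "single", NULL};
        mpv_command(mpv, cmd);
    }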
vo.c queried the VO at initialization whether it wants to be updated on
every display frame, or every video frame. If the smoothmotion option
was changed at runtime, the rendering mode in vo.c wasn't updated.
Just let vo_opengl set the mode directly. Abuse the existing
vo_set_flip_queue_offset() function for this.
Also add a comment suggesting the use of --display-fps to the manpage,
which doesn't have anything to do with the rest of this commit, but is
important to make smoothmotion run well.
Repurpose demuxer->filetype for this. It used to be used to print a
human readable format description; change it to a symbolic format name
and export it as property.
Unfortunately, libavformat has its own weird conventions, which are
reflected through the new property, e.g. the .mp4 case mentioned in the
manpage.
Fixes #1504.
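For illustration (assuming the property is exposed as "file-format"):

    #include <stdio.h>
    #include <mpv/client.h>

    // Assumed: "mpv" is an initialized handle with a file loaded.
    static void print_file_format(mpv_handle *mpv)
    {
        char *fmt = mpv_get_property_string(mpv, "file-format");
        if (fmt)
            printf("demuxer format: %s\n", fmt); // e.g. "mov,mp4,m4a,3gp,3g2,mj2"
        mpv_free(fmt);
    }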
The symlink trick made waf go crazy (deleting source files, getting
tangled up in infinite recursion... I wish I was joking). This means we
still can't build the client API examples in a reasonable way using the
include files of the local repository (instead of globally installed
headers). Not building them at all is better than deleting source files.
Instead, provide some manual instructions how to build each example
(except for the Qt examples, which provide qmake project files).
SmoothMotion is a way to time and blend frames made popular by MadVR. Its
intended behaviour is to remove stuttering caused by mismatches between the
display refresh rate and the video fps, while preserving the video's original
artistic qualities (no soap opera effect). It's supposed to make 24fps video
playback on 60hz monitors as close as possible to a 24hz monitor.
Instead of drawing a frame once its pts has passed the vsync time, we
redraw at the display refresh rate, and if we detect that the vsync falls
between two frames, we interpolate them (depending on their position
relative to the vsync).
We actually interpolate as few frames as possible to avoid a blur effect
as much as possible. For example, if we were to play back a 1fps video on
a 60hz monitor, we would blend on at most 1 vsync for each frame (while
the other 59 vsyncs would be rendered as-is).
Frame interpolation is always done before scaling and in linear light when
possible (an ICC profile is used, or :srgb is used).
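As a purely illustrative sketch (made-up names, not the actual vo_opengl
code), the contribution of the newer frame for a vsync that falls between
two frames could be computed like this:

    // Given the presentation times of the previous and next frame and the
    // time of the upcoming vsync, return how much of the next frame to
    // blend over the previous one.
    static double blend_coefficient(double prev_pts, double next_pts,
                                    double vsync_time)
    {
        if (vsync_time <= prev_pts)
            return 0.0;  // vsync before the pair: show prev only
        if (vsync_time >= next_pts)
            return 1.0;  // vsync after the pair: show next only
        return (vsync_time - prev_pts) / (next_pts - prev_pts);
    }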
These aliases were removed in commit 1ec77214. Add a notice to the
manpage how to get these back. Apparently, "lanczos2" and "lanczos3"
were the only interesting aliases possibly used by someone, so the
description is limited to these two.
These are now auto-detected sanely, and enabled whenever it would be a
performance or quality gain (which is pretty much everything except
bilinear/bilinear scaling).
Perhaps notably, with the absence of scale_sep, there's no more way to
use convolution filters on hardware without FBOs, but I don't think
there's hardware in existence that doesn't have FBOs but is still fast
enough to run the fallback (slow) 2D convolution filters, so I don't
think it's a net loss.
This is better even for non-separable. The only exception is when using
bilinear for both lscale and cscale. I've fixed the
documentation/comments to make more sense.
This is not quite the same thing as madVR's antiringing algorithm, but
it essentially does something similar.
Porting madVR's approach to elliptic coordinates will take some amount
of thought.
This also fixes the maximum range to 16.0, which was previously set to
32.0 and incorrectly documented as 8.0. 16 taps should be more than
anybody will ever need, but it's the highest radius that's supported by
all affected filters.
Before this, we merely printed a message to the terminal. Now the API
user can determine this properly. This might be important for API users
which somehow maintain complex state, which all has to be invalidated if
(state-changing) events are missing due to an overflow.
This also forces the client API user to empty the event queue, which is
good, because otherwise the event queue would reach the "filled up"
state immediately again due to further asynchronous events being added
to the queue.
Also add some minor improvements to mpv_wait_event() documentation, and
some other minor cosmetic changes.
Fixes #1472.
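A sketch of how a client might handle it (what exactly has to be
re-queried depends on the application):

    #include <stdio.h>
    #include <mpv/client.h>

    // Assumed: "mpv" is an initialized mpv_handle.
    static void run_event_loop(mpv_handle *mpv)
    {
        while (1) {
            mpv_event *ev = mpv_wait_event(mpv, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id == MPV_EVENT_QUEUE_OVERFLOW) {
                // Events were dropped: any state derived from past events
                // may be stale, so re-read the relevant properties.
                fprintf(stderr, "event queue overflowed, resyncing state\n");
            }
        }
    }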
(Maybe these options should have been named --autofit-max and
--autofit-min, but since --autofit-larger already exists, use
--autofit-smaller for symmetry.)
The "\\" escape was rendered as "\" on the website. I'm hoping quoting
this in ``...`` will render it correctly.
Also add an example for show_text, which awkwardly does not require
escaping the "\".
After finding out more about how video mastering is done in the real
world it dawned upon me why the "hack" we figured out in #534 looks so
much better.
Since mastering studios have historically been using only CRTs, the
practice adopted for backwards compatibility was to simulate CRT
responses even on modern digital monitors, a practice so ubiquitous that
the ITU-R formalized it in Rec. BT.1886 as precisely gamma 2.40.
As such, we finally have enough proof to get rid of the option
altogether and just always do that.
The value 1.961 is a rounded version of my experimentally obtained
approximation of the BT.709 curve, which resulted in a value of around
1.9610336. This is the closest average match to the source brightness
while preserving the nonlinear response of the BT.1886 ideal monitor.
For playback in dark environments, it's expected that the gamma shift
should be reproduced by a user controlled setting, up to a maximum of
1.224 (2.4/1.961) for a pitch black environment.
More information:
https://developer.apple.com/library/mac/technotes/tn2257/_index.html
The Qt example already does this. I hoped this was restricted to
QApplication only, but apparently Qt repeated this mistake with
QGuiApplication (QGuiApplication was specifically added for QtQuick at a
much later point, even though QApplication inherits from it).
Seems to work with GtkSocket and passing the gtk_socket_get_id() value
via "wid" option to mpv.
One caveat is that using <tab> to move input focus from mpv to GTK does
not work. It seems we would have to interpret <tab> ourselves in this
case. I'm not sure if we really should do this - it would probably
require emulating some other typical conventions too. I'm not sure if an
embedder could do something about this on the toolkit level, but in
theory it would be possible, so leave it as is for now.
Remove the "all" special-behavior, and instead interpret trailing "*"
characters. --display-tags=all is replaced by --display-tags=* as a
special-case of the new behavior.
See #1404.
Note that the most straight-forward value for matchlen in the normal
case would be INT_MAX, because it should be using the entire string.
I used keylen+1 instead, because glibc seems to handle this case
incorrectly:
snprintf(buf, sizeof(buf), "%.*s", INT_MAX, "hello");
The result is empty, instead of just containing the string argument.
This might be a glibc bug; it works with other libcs (even MinGW-w64).
Make their meaning more exact, and don't pretend that there's a
reasonable definition for "bits-per-pixel". Also make unset fields
unavailable.
average_depth still might be inconsistent: for example, 10 bit 4:2:0 is
identified as 24 bits, but RGB 4:4:4 as 12 bits. So YUV formats
seemingly drop the per-component padding, while RGB formats do not.
Internally it's consistent though: 10 bit YUV components are read as
16 bit, and the padding must be 0 (it's basically like an odd fixed-
point representation, rather than a bitfield).
bpp (bits per pixel) and depth (bit depth of a color component) can
technically be calculated from the pixel format, but doing so on the
client side would require a large amount of format information.
These sub-properties are provided for convenience.
We still keep the window pointer, because we want to call
QQuickWindow::resetOpenGLState() (which runs on the rendering thread
only). Interesting mess...
This avoids issues when upscaling directly in linear light, and is the
recommended way to upscale images according to ImageMagick.
The default slope of 6.5 offers a reasonable compromise between
ringing artifacts eliminated and ringing artifacts introduced by
sigmoid-upscaling. Same goes for the default center of 0.75.
The previous implementation of opengl-cb kept only the latest flipped
frame. This can cause massive frame drops, because rendering is done
asynchronously and only the latest frame can be rendered.
This commit introduces a frame queue and related options to opengl-cb.
frame-queue-size: the maximum size of the frame queue (1-100, default: 1)
frame-drop-mode: behavior when the frame queue is full (pop, clear, default: pop)
The frame queue holds delayed frames and drops frames when it overflows,
using one of the following methods:
'pop' mode: drops the oldest frames that overflow the queue.
'clear' mode: drops all frames in the queue and clears it.
With the default options (frame-queue-size=1:frame-drop-mode=pop),
opengl-cb effectively behaves the same way as the previous implementation.
For frame-queue-size > 1, opengl-cb tries to call update() without waiting
for the next flip_page(), in order to consume queued frames.
Signed-off-by: wm4 <wm4@nowhere>
mpv can be built natively on a Windows machine using MSYS2. Add detailed
instructions on how to build and merge them with the existing
instructions for cross-compilation.
This one avoids use of an FBO. It's less flexible, because it works
around the whole QML rendering API. It seems to be the only way to get
OpenGL rendering without any indirections, though.
Parts of this example were inspired by Qt's "Squircle" example.
Also add a README file with a short description of each example, to
reduce the initial confusion.
This used to be required to workaround PulseAudio bugs. Even later, when
the bugs were (partially?) fixed in PulseAudio, I had the feeling the
hacks gave better behavior. On the other hand, I couldn't actually
reproduce any bad behavior without the hacks lately. On top of this, it
seems our hacks sometimes perform much worse than PulseAudio's native
implementation (see #1430).
So disable the hacks by default, but still leave the code and the option
in case it still helps somewhere. Also, being able to blame PulseAudio's
code by using its native API is much easier than trying to debug our own
(mplayer2-derived) hacks.
Was already possible before by injecting the magic PID
8192 into channels.conf; the flag makes this much more
usable, and we also have it documented.
Useful not only for debugging, but also for incomplete
channels.conf (mplayer format...), multi-channel
recording, or channels which do dynamic PID switching.
full-transponder is also useful for channels which switch PIDs on-the-fly.
ffmpeg can handle this, but it needs the full stream with all PIDs.
--sub-scale-by-window=no attempts to keep subs always at the same pixel
size.
The implementation is a bit all over the place, because it compensates
already done scaling by an inverse scale factor, but it will probably do
its job.
Fixes#1424. (The semantics and name of --sub-scale-with-window are
kept, and this adds a new option - the name is confusingly similar, but
it's actually analogue to --osd-scale-by-window.)
This adds an "auto" choice to the concurrent-frames suboption, and makes
it the default.
I'm not so sure about making this the default, though. It could lead to
excessive buffering with large CPU counts. But we'll see.
Options which take colors accept two variants. The first is "r/g/b/a",
the second is "#AARRGGBB". Since they put alpha at different places,
it's probably better to document the second variant explicitly. (It's a
bit strange that they put alpha in different places, but on the other
hand, it's kind of natural. The second variant should probably be
considered deprecated.)
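For illustration (option name and values are just examples; any option
that takes a color behaves the same way):

    #include <mpv/client.h>

    // The same opaque red in both syntaxes ("mpv" is an mpv_handle).
    static void set_color_examples(mpv_handle *mpv)
    {
        mpv_set_option_string(mpv, "osd-color", "1.0/0.0/0.0/1.0"); // r/g/b/a
        mpv_set_option_string(mpv, "osd-color", "#FFFF0000");       // #AARRGGBB
    }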
This is basically a hack; but apparently a needed one, since many
vapoursynth filters insist on having a FPS set.
We need to apply the FPS override before creating the filters. Also
change some terminal output related to the FPS value.
While there's no actual need to get rid of these, I want to make sure
nobody actually needs this stuff, and removing it is the best way to
get to know this. We still can revert this commit if it turns out there
is a significant need for this stuff.
The final goal is removing vo_opengl_old entirely. Add a warning, which
basically announces this intention.
The examples simple.c and cocoabasic.m can be compiled without
installing libmpv. But also, they didn't use the correct include path
libmpv programs normally use, so they couldn't be built with a properly
installed system-libmpv. That's pretty bad for examples, which are
supposed to show how to use libmpv correctly.
So do some bullshit that symlinks libmpv to a "mpv" include directory
under the build directory. This name-mismatch is a direct consequence of
the bullshit done in 499a6758 (requested in #539 for dumb reasons). (We
don't want to name the client API headers directory "mpv", because that
would be too unspecific, and clashes with having the mpv binary in the
same directory.)
If you have spaces or other "unusual" characters in your paths, the
build will break, because I couldn't find out where waf hides its
function to escape shell parameters (or a way to invoke programs
without involving the shell). Neither is such a thing documented, nor
do they seem to have a clear way to do this in
their code.
This also doesn't compile the Qt examples, because everything becomes
even more terrible from there on.
C++ is the worst language ever, and allows throwing any type, even if it
doesn't make sense. In this case, we were throwing char*, which the
runtime typically treats as opaque, instead of printing it as message if
such an exception was not caught.
Do so by using mp_subprocess(). Although this uses completely different
code on Unix too, you shouldn't notice a difference. A less nice thing
is that this reserves an entire thread while the command is running
(which wastes some memory for stack, at least). But this is probably
still the simplest way, and the fork() trick is apparently not
implementable with posix_subprocess().
This may or may not be useful for client API users.
Fold this API extension into the previous API bump. The previous bump
was only yesterday, so it's ok.
Until now, calling mpv_opengl_cb_uninit_gl() at a "bad moment" could
make the whole thing explode. The API user was asked to avoid such
situations by calling it only in "good moments". But this was probably a
bit too subtle and could easily be overlooked.
Integrate the approach the qml example uses directly into the
implementation. If the OpenGL context is to be uninitialized, forcefully
disable video, and block until this is done.
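After this change, a typical teardown sequence in the host application
can be as simple as the following sketch (handle names assumed):

    #include <mpv/client.h>
    #include <mpv/opengl_cb.h>

    // "mpv" and "gl_ctx" are the handles created by the application
    // earlier. uninit_gl() may now be called without picking a "good
    // moment"; it blocks until video is torn down.
    static void shutdown_player(mpv_handle *mpv, mpv_opengl_cb_context *gl_ctx)
    {
        mpv_opengl_cb_uninit_gl(gl_ctx);
        mpv_terminate_destroy(mpv);
    }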
Use queued signals instead of QEvent for the wakeup notification. This
is slightly nicer, and reduces the chance that the event (QEvent::User)
could clash with other code using the same event.
Also switch to modern connect() syntax.
Destruction (e.g. when closing the window) was a bit broken. This commit
fixes some possible crashes, and should make lifetime management
relatively sane. (Still a bit complex, though. Maybe this code should be
moved into a tiny library.)
QtQuick runs the renderer on a separate thread. This thread is rather
loosely connected to the main thread. The loose separation is enforced
by the API, which also makes coordination of initialization and
destruction harder. Throw refcounting at the problem, which fixes it.
The refcounting wrapper introduced in the previous commit is used for
this.
Also contains some general cleanups.
This attempts to increase user-friendliness by excluding useless tags.
It should be especially helpful with mp4 files, because the FFmpeg mp4
demuxer adds tons of completely useless information to the metadata.
Fixes #1403.
Until now, these options took effect only at program start. This could
be confusing when e.g. doing "mpv list.m3u --shuffle". Make them always
take effect when a playlist is loaded either via a playlist file, or
with the "loadlist" command.
Essentially, don't make it the mmap() argument, and just add it to the
memory address. This hides tricky things like alignment requirements
from the user.
Strictly speaking, this is not entirely backwards compatible: this adds
the regression that you can't access past 2 or 4 GB of a file on 32 bit
systems anymore. But I doubt anyone cared about this.
In theory, we could be clever, and just align the offset manually and
pass that to mmap(). This would also be transparent to the user, but
minimally more effort, so this is left as an exercise to the reader.
Makes all of overlay_add work on windows/mingw.
Since we now don't explicitly check for mmap() anymore (it's always
present), this also requires us to make af_export.c compile, but I
haven't tested it.
I'm hoping this is generally more compatible, and it works with GLES.
This probably has not much of an effect on desktop GL. It also switches
only the default format for --vo=opengl, not --vo=opengl-hq.
"-hq" already uses GL_RGBA16, though since it's a sized format, the
story is a bit different, and it won't work on GLES either.
Also clarify the statement about what we expect to happen by default.
It's well possible that distros at some point will fix their ALSA
configuration, and e.g. enable the upmix plugin by default.
This should work well with most audio APIs, except ALSA. A long-winded
explanation is provided how to make ALSA multichannel output work.
All other AOs should have no such problems. Of course it's possible
that previously unknown issues arise, because I assume that enabling
multichannel audio is actually relatively rare.
This also disables codec downmix by default, which could change the
audio output due to different mixing in the codec and libavresample.
Fixes #1313.
Obscure feature, and I've never heard of anyone using it.
The anaglyph effects can be reproduced with vf_stereo3d. The only thing
that can't be reproduced with it is "quadbuffer", which requires special
and expensive hardware.
- --lua and --lua-opts change to --script and --script-opts
- 'lua' default script dirs change to 'scripts'
- DOCS updated
- 'lua-settings' dir was _not_ modified
The old lua-based names/dirs still work, but display a warning.
Signed-off-by: wm4 <wm4@nowhere>
This was requested.
It seems libdvdread can't get the duration for titlesets other than the
currently opened title. The data structures contain dangling pointers
for these, and MPlayer works around this by opening every title
separately for the purpose of dumping the title list.
The --keep-open behavior was recently changed to act only on the last
file due to user requests (see commit 735a9c39). But the old behavior
was useful too, so bring it back as an additional mode.
Fixes #1332 (or rather, should help with it).
I think that's expected; mpv shouldn't draw anything while no video is
active. This doesn't blend transparently, though.
Also document the vo_opengl_cb thing.
This adds API to libmpv that lets host applications use the mpv opengl
renderer. This is a more flexible (and possibly more portable) option to
foreign window embedding (via --wid).
This assumes that methods like context sharing and multithreaded OpenGL
rendering are infeasible, and that a way is needed to integrate it with
an application that uses a single thread to render everything.
Add an example that does this with QtQuick/qml. The example is
relatively lazy, but still shows how relatively simple the integration
is. The FBO indirection could probably be avoided, but would require
more work (and would probably lead to worse QtQuick integration, because
it would have to ignore transformations like rotation).
Because this makes mpv directly use the host application's OpenGL
context, there is no platform specific code involved in mpv, except
for hw decoding interop.
main.qml is derived from some Qt example.
The following things are still missing:
- a way to do better video timing
- expose GL renderer options, allow changing them at runtime
- support for color equalizer controls
- support for screenshots
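Very roughly, the integration on the application side looks like the
following sketch (helper names are made up, and the exact render entry
point and its parameters should be checked against opengl_cb.h of the
mpv version in use):

    #include <stddef.h>
    #include <mpv/client.h>
    #include <mpv/opengl_cb.h>

    static void *get_proc_address(void *ctx, const char *name)
    {
        // A real host must return the GL function pointer for "name",
        // e.g. via glXGetProcAddress/eglGetProcAddress/wglGetProcAddress.
        (void)ctx; (void)name;
        return NULL;
    }

    static void on_mpv_redraw(void *ctx)
    {
        // Wake up the host's render loop; it should then call the render
        // entry point from opengl_cb.h on its render thread.
        (void)ctx;
    }

    static mpv_opengl_cb_context *setup_gl(mpv_handle *mpv)
    {
        mpv_opengl_cb_context *gl =
            mpv_get_sub_api(mpv, MPV_SUB_API_OPENGL_CB);
        mpv_opengl_cb_set_update_callback(gl, on_mpv_redraw, NULL);
        // Must be called with the host's GL context current.
        mpv_opengl_cb_init_gl(gl, NULL, get_proc_address, NULL);
        mpv_set_option_string(mpv, "vo", "opengl-cb");
        return gl;
    }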
If no-block was given, the device would be opened with SND_PCM_NOBLOCK.
Also, after opening, blocking mode was unconditionally enabled anyway
with snd_pcm_nonblock(). Further, if opening with SND_PCM_NOBLOCK
failed, opening was retried without this flag.
This doesn't make any sense to me, and I've never heard of someone using
this suboption. I suspect it has to do with ancient ALSA bugs or API
caveats. Remove it and simplify the code.
This is an ancient filter, and we assume it's not useful anymore.
If you really want this, it's still available in libavfilter (e.g. via
--vf=lavfi=[pp...]). The disadvantage is that mpv doesn't pass through
QP information to libavfilter. (This was probably the reason vf_pp still
was part of mpv - it was slightly easier to pass QP internally.)
By now, input.conf is actually just a small part of input handling.
Rename the section to something else ("command interface" was the
first reasonable thing that came to mind).
Also fix a minor typo further down.
Yep, Lua is so crappy that the stdlib doesn't provide anything like
this.
Repurposes the undocumented mp.format_table() function and moves it to
mp.utils.
Makeshift solution for working around certain fontconfig issues.
With --use-text-osd=no, libass and fontconfig won't be initialized, and
fontconfig won't block everything with scanning for fonts.
The fact that it's a generic command prefix that is parsed even when
using the client API is a bit unclean (because this flag makes sense
for actual key-bindings only), but it's less code this way.
This command was actually requested on IRC ages ago, but I forgot about
it.
The main purpose is that the decoding state can be reset without issuing
a seek, in particular in situations where you can't seek.
This restarts decoding from the middle of the packet stream; since it
discards the packet buffer intentionally, and the decoder will typically
not output "incomplete" frames until it has recovered, it can skip a
large amount of data.
It doesn't clear the byte stream cache - I'm not sure if it should.
It's passed with the '--format' option to youtube-dl.
If it isn't set, we don't pass '--format best' so that youtube-dl can
use the options from its configuration file.
Signed-off-by: wm4 <wm4@nowhere>
This sub-option was turned into a flag when the sub-option parser was
changed to the generic one (probably accidentally). Turn it into a
proper choice-option.
Also, adjust what the options do. Though none of this probably makes
much sense; the default should work, and if it doesn't, the GPU/driver
is probably beyond help.
Probably needs to be polished a bit more. Also, might require a key
binding that can set/clear the loop points in a more intuitive way.
For now, something like this can be put into input.conf to use it:
ctrl+y set ab-loop-a ${time-pos} # set A
ctrl+x set ab-loop-b ${time-pos} # set B
ctrl+c set ab-loop-a no # clear (mostly)
Fixes #1241.
Due to the current code structure, the "current" entry and the entry
which is playing can be different. This is probably silly, but still
try to mark the entries correctly.
Refs #1260.
This actually doesn't even write/return the new sub-property, because
I dislike the idea of dumping that field for every single playlist
entry, even though it's "needed" only for one.
Fixes #1260.
Make the changes started in commit c827ae5f more elaborate, and provide
an option to control the amount of data read before the seek-target. To
achieve this, rewrite the loop that finds the lowest still acceptable
target cluster. It is now searched by time instead of file position. The
behavior (both with and without preroll option) may be different from
before this change, although it shouldn't be worse.
The change to demux_mkv_read_cues() fixes a bug: when seeking after playing
normally, the code would erroneously assume that durations are set. This
doesn't happen if the first operation after loading was a seek instead
of playback.
This might be interesting for GUIs and such.
It's probably still a little bit insufficient. For example, the filter
and audio/video output lists are not available through this.
Following the discussion in #1253.
The events won't be removed for a while, though. (Or maybe never, unless
we run out of bits for the uint64_t event mask.)
This is not a real change (the events still work, and the alternative
mechanisms were established a few API revisions earlier), but for the
sake of notifying API users, update DOCS/client-api-changes.rst.
The main need I see for this is with libmpv - it would be confusing if
some application showed up as "mpv" on whateverthehell PulseAudio uses
it for (generally it does show up on various PA GUI tools).
Call VOCTRL_GET_DISPLAY_NAMES when the property is
requested. The VO should return the names of the displays that the mpv
window is covering. For example, with x11 vos, xrandr names LVDS1,
HDMI1, etc.
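A client could watch this roughly as follows (assuming the property is
exposed as "display-names"):

    #include <stdio.h>
    #include <mpv/client.h>

    // Assumed: "mpv" is an initialized mpv_handle with a window open.
    static void watch_displays(mpv_handle *mpv)
    {
        mpv_observe_property(mpv, 0, "display-names", MPV_FORMAT_NONE);
        while (1) {
            mpv_event *ev = mpv_wait_event(mpv, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id == MPV_EVENT_PROPERTY_CHANGE) {
                char *names = mpv_get_property_string(mpv, "display-names");
                printf("displays: %s\n", names ? names : "(unavailable)");
                mpv_free(names);
            }
        }
    }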