(I have no idea why there are different modes.)
Instead of risking dropping frames too early, give it some margin. Since
there are situations where this could deadlock, wait with a timeout. This can
happen if e.g. the API user refuses to render anything, or if
uninitialization is happening.
OpenSSL and GnuTLS are still causing this problem (although FFmpeg could
be blamed as well - but not really). In particular, it was happening to
libmpv users and in cases where the pseudo-gui profile is used. This was
because all signal handling is in the terminal code, so if the terminal is
disabled, the handler won't be set. This was obviously a questionable shortcut.
Avoid further problems by always blocking the signal. This is done even
for libmpv, despite our policy of not messing with global state.
Explicitly document this in the libmpv docs. It turns out that a version
bump to 1.17 was forgotten for the addition of MPV_FORMAT_BYTE_ARRAY, so
document that change as part of 1.16.
This creates the window before the first file is loaded. This was
requested a bunch of times, but on the other hand a change to make this
behavior the default was reverted some time ago, because other users
hated it.
Reduces (but likely does not remove) the danger of rounding intermediate
values down to 8 bit. This is important for cscale, or any other
processing that might store raw YUV values in framebuffers.
Fixes #1918.
ao_coreaudio uses AudioUnit - the OSX software mixer. In theory, it
supports multichannel audio just fine. But in practice, this might be
disabled by default, and the user is supposed to select a multichannel
base format in the "Audio MIDI Setup" utility.
This option attempts to change this setting automatically. Some possible
disadvantages and caveats are listed in the manpage additions. It is off
by default, since changing this might be rather bad behavior for a
normal application.
Since commit 7381db60, strings like "~desktop/" were expanded as
platform-specific paths by mpv. Apparently this similarity to standard
Unix shell expansion caused confusion, so change it to "~~desktop/". The
shell doesn't expand this, so it should be better.
This now stores caches for multiple ICC profiles, potentially all the
user has ever used. The big use case for this is for users with multiple
monitors. The old logic would mandate recomputing the LUT and discarding
the cache whenever dragging mpv from one screen to another.
This also avoids having to save and check the ICC profile itself, since
the file name already uniquely determines it.
This should take care of the endless complaints about the default
location for screenshots (and will of course create new ones).
If the screenshot-template is set to an absolute path, the directory
won't be used. So this should be reasonably compatible.
So that the user realizes where they come from, or can find them at all.
This was a common complaint, and this is the most lazy solution. Better
suggestions for a default template are welcome.
win32 has a special function for this.
I'm not sure about OSX - it seems ~/Desktop can be hardcoded, and the
OSX GUI actually localizes the _displayed_ path in its UI.
For Unix, there is not much to be done, or is there?
Now the rescan_external_files command will by default reselect the audio
and subtitle streams. This should be more intuitive.
Client API users and Lua scripts might break, but can be fixed in a
backward-compatible way by setting the mode explicitly.
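For illustration, a minimal client API sketch of pinning the old behavior by
passing the mode explicitly (command and mode names as documented in the
manpage):

    #include <mpv/client.h>

    // Rescan external files but keep the current track selection, so existing
    // client code keeps its old behavior; "reselect" is the new default mode.
    static int rescan_keep_selection(mpv_handle *ctx)
    {
        const char *cmd[] = {"rescan-external-files", "keep-selection", NULL};
        return mpv_command(ctx, cmd);
    }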
This was in the "Window" section. It has absolutely nothing to do with
windows. Move it to the "Miscellaneous" section instead. The "--mc"
option, which has a similar function, was already there.
The build failed because rst2pdf apparently has problems with
page breaks. In this case, the link to the ALSA upmix guide was
causing a page break in an admonition block. My guess is that
rst2pdf screws up when it can’t fill at least one line of text
following a page break, so I worked around this by making that
paragraph a little longer. Seems to do the trick.
I also shortened the URL using GitHub’s service because it was
causing some rather unsightly formatting in the manpage output.
Maybe we should just build HTML instead of a PDF.
Since joystick support was removed and this is a difference from mplayer, it
should be included in the document listing the changes from mplayer.
It will help new users who were using mplayer's joystick support to
look for alternatives when switching to mpv. It will also be helpful for
people who had problems with the joystick support in mplayer (for
example, because it incorrectly recognized other input devices as joysticks)
to know that those problems won't persist in mpv.
Approximate time of video buffered in the demuxer, in seconds. Same as
`demuxer-cache-duration`, but returns the last timestamp of buffered
data in the demuxer.
Signed-off-by: wm4 <wm4@nowhere>
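A small client API sketch of reading both cache properties (error handling
kept minimal):

    #include <stdio.h>
    #include <mpv/client.h>

    // Both calls can fail (negative return) if no file is loaded or the
    // demuxer cache is inactive.
    static void print_cache_state(mpv_handle *ctx)
    {
        double duration = 0, cache_time = 0;
        if (mpv_get_property(ctx, "demuxer-cache-duration", MPV_FORMAT_DOUBLE,
                             &duration) >= 0)
            printf("buffered amount: %.1f s\n", duration);
        if (mpv_get_property(ctx, "demuxer-cache-time", MPV_FORMAT_DOUBLE,
                             &cache_time) >= 0)
            printf("buffered up to pts: %.1f s\n", cache_time);
    }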
This will be used in the following commit, which adds screenshot_raw.
The reasoning is that this will be better for binding scripting
languages.
One could special-case the screenshot_raw commit and define fixed
semantics for passing through a pointer using the current API, like
formatting a pointer as string. But that would be ridiculous and
unclean.
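As a sketch of how a client might consume such a byte array once the
screenshot_raw command from the follow-up commit exists (the result field
names follow the manpage description and are assumptions here):

    #include <stdio.h>
    #include <string.h>
    #include <mpv/client.h>

    // Run "screenshot-raw" and locate the image data, which is returned as an
    // MPV_FORMAT_BYTE_ARRAY entry in a node map ("data", next to "w", "h",
    // "stride", "format").
    static int dump_raw_screenshot(mpv_handle *ctx)
    {
        mpv_node name = { .format = MPV_FORMAT_STRING,
                          .u.string = (char *)"screenshot-raw" };
        mpv_node_list args = { .num = 1, .values = &name };
        mpv_node cmd = { .format = MPV_FORMAT_NODE_ARRAY, .u.list = &args };

        mpv_node result;
        int err = mpv_command_node(ctx, &cmd, &result);
        if (err < 0)
            return err;

        if (result.format == MPV_FORMAT_NODE_MAP) {
            for (int n = 0; n < result.u.list->num; n++) {
                if (!strcmp(result.u.list->keys[n], "data") &&
                    result.u.list->values[n].format == MPV_FORMAT_BYTE_ARRAY)
                {
                    mpv_byte_array *ba = result.u.list->values[n].u.ba;
                    printf("got %zu bytes of raw image data\n", ba->size);
                }
            }
        }
        mpv_free_node_contents(&result); // also releases the byte array
        return 0;
    }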
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
Useful for dealing with libavfilter's terrible graph syntax.
Not strictly backwards compatible (for example "[a[b]" fails now - the
"[" within the quote is interpreted now). But hopefully it's obscure
enough not to warrant any kind of compatibility hacks.
It's entirely useless, especially now that vo.c handles screenshots in a
generic way, and requires no special VO support. There are some
potential weird use-cases, but actually I've never seen it being used.
The old behavior does not make too much sense after all. If the user doesn't
want the file to be overwritten, they can check this manually.
This is a change in behavior - let's hope nobody actually relied on it.
libavcodec makes it impossible to distinguish between dropped frames
(requested with AVCodecContext.skip_frame) and cases when the decoder simply
does not return a frame by default (such as with VP9, which has invisible
reference frames).
This confuses users when decoding VP9 video. It's basically a cosmetic
issue, so just paint it over by ignoring them if framedropping is
disabled.
It seems this choice was never documented. "always" is actually older
than "yes", so just declare it a compatibility value for "yes". (Also
move it before "always" in the C code to make this clear.)
I tried to find that option by searching for terms like “cover art”
and got nothing. I imagine most users would look for similar terms.
Hope this helps.
The gender-specific pronoun is changed, since we shouldn't assume the
gender of the user.
The sentence itself is also changed to be more correct in general.
This could help in cases where the DWM (the Windows desktop compositor) adds
another layer of buffering, which could mess up the SwapBuffers timing.
Signed-off-by: wm4 <wm4@nowhere>
This seems to come up often. I guess '.' vs. ':' for Lua calls is
confusing, and this part of the scripting API is the only one which
requires using it.
There still might be FFmpeg demuxers which mess up if audio is disabled
(like it happened to the FLV demuxer), but these are bugs and shouldn't
happen.
This merges all of the scaler-related options into a single
configuration struct, and also cleans up the way they're passed through
the code. (For example, the scaler index is no longer threaded through
pass_sample, just the scaler configuration itself, and there's no longer
duplication of the params etc.)
In addition, this commit makes scale-down more principled, and turns it
into a scaler in its own right - so there's no longer an ugly separation
between scale and scale-down in the code.
Finally, the radius stuff has been made more proper - filters always
have a radius now (there's no more radius -1), and get a new .resizable
attribute instead for when it's tunable.
User-visible changes:
1. scale-down has been renamed dscale and now has its own set of config
options (dscale-param1, dscale-radius) etc., instead of reusing
scale-param1 (which was arguably a bug).
2. The default radius is no longer fixed at 3, but instead uses that
filter's preferred radius by default. (Scalers with a default radius
other than 3 include sinc, gaussian, box and triangle)
3. scale-radius etc. now goes down to 0.5, rather than 1.0. 0.5 is the
smallest radius that theoretically makes sense, and indeed it's used
by at least one filter (nearest).
Apart from that, it should just be internal changes only.
Note that this sets up for the refactor discussed in #1720, which would
be to merge scaler and window configurations (include parameters etc.)
into a single, simplified string. In the code, this would now basically
just mean getting rid of all the OPT_FLOATRANGE etc. lines related to
scalers and replacing them by a single function that parses a string and
updates the struct scaler_config as appropriate.
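Purely as a hypothetical illustration of that direction (none of these names
are actual mpv code), such a parser could look roughly like this:

    #include <stdlib.h>
    #include <string.h>

    // Hypothetical sketch: parse "lanczos:param1=0.5:radius=3" into a scaler
    // configuration. Struct and field names are invented; they only mirror
    // the sub-options mentioned above.
    struct scaler_config_sketch {
        char name[32];
        double param1;
        double radius;
    };

    static int parse_scaler_string(const char *spec,
                                   struct scaler_config_sketch *cfg)
    {
        *cfg = (struct scaler_config_sketch){0};

        const char *sep = strchr(spec, ':');
        size_t name_len = sep ? (size_t)(sep - spec) : strlen(spec);
        if (!name_len || name_len >= sizeof(cfg->name))
            return -1;
        memcpy(cfg->name, spec, name_len);
        cfg->name[name_len] = '\0';

        while (sep) {
            const char *kv = sep + 1;
            sep = strchr(kv, ':');
            if (!strncmp(kv, "param1=", 7))
                cfg->param1 = atof(kv + 7); // atof() stops at the next ':'
            else if (!strncmp(kv, "radius=", 7))
                cfg->radius = atof(kv + 7);
            else
                return -1; // unknown sub-option
        }
        return 0;
    }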
This makes the core much more elegant, reusable, reconfigurable and also
allows us to more easily add aliases for specific configurations.
Furthermore, this lets us apply a generic blur factor / window function
to arbitrary filters, so we can finally "mix and match" in order to
fine-tune windowing functions.
A few notes are in order:
1. The current system for configuring scalers is ugly and rapidly
getting unwieldy. I modified the man page to make it a bit more
bearable, but long-term we have to do something about it; especially
since...
2. There's currently no way to affect the blur factor or parameters of
the window functions themselves. For example, I can't actually
fine-tune the kaiser window's param1, since there's simply no way to
do so in the current API - even though filter_kernels.c supports it
just fine!
3. This removes some lesser used filters (especially those which are
purely window functions to begin with). If anybody asks, you can get
e.g. the old behavior of scale=hanning by using
scale=box:scale-window=hanning:scale-radius=1 (and yes, the result is
just as terrible as that sounds - which is why nobody should have
been using them in the first place).
4. This changes the semantics of the "triangle" scaler slightly - it now
has an arbitrary radius. This can possibly produce weird results for
people who were previously using scale-down=triangle, especially if
in combination with scale-radius (for the usual upscaling). The
correct fix for this is to use scale-down=bilinear_slow instead,
which is an alias for triangle at radius 1.
In regards to the last point, in future I want to make it so that
filters have a filter-specific "preferred radius" (for the ones that
are arbitrarily tunable), once the configuration system for filters has
been redesigned (in particular in a way that will let us separate scale
and scale-down cleanly). That way, "triangle" can simply have the
preferred radius of 1 by default, while still being tunable. (Rather
than the default radius being hard-coded to 3 always)
Remove the colorspace-related top-level options, add them to vf_format.
They are rather obscure and not needed often, so it's better to get them
out of the way. In particular, this gets rid of the semi-complicated
logic in command.c (most of which was needed for OSD display and the
direct feedback from the VO). It removes the duplicated color-related
name mappings.
This removes the ability to write the colormatrix and related
properties. Since filters can be changed at runtime, there's no loss of
functionality, except that you can't cycle automatically through the
color constants anymore (but who needs to do this).
This also changes the type of the mp_csp_names and related variables, so
they can directly be used with OPT_CHOICE. This probably ended up a bit
awkward, for the sake of not adding a new option type which would have
used the previous format.
It was "by design" possible to make mpv crash if the parameters didn't
make enough sense, like "format=rgb24:yuv420p". While forcing the format
has some minor (rather questionable) use for debugging, allowing it to
crash is just stupid.
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (and hopefully it will remain disabled
on desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding.) So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
This has a number of user-visible changes:
1. A new flag blend-subtitles (default on for opengl-hq) to control this
behavior.
2. The OSD itself will not be color managed or affected by
gamma controls. To get subtitle CMS/gamma, blend-subtitles must be
used.
3. When enabled, this will make subtitles be cleanly interpolated by
:interpolation, and also dithered etc. (just like the normal output).
Signed-off-by: wm4 <wm4@nowhere>
Bilinear scaling is not a suitable default for something named "hq"; the
whole reason this was done in the past was because cscale used to be
obscenely slow. This is no longer the case, with cscale being nearly
free.
Why did this exist in the first place? Other than being completely
useless, this even caused some regressions in the past. For example,
there was the case of a laptop exposing its accelerometer as joystick
device, which led to extremely fun things due to axis movement being
mapped to seeking by default.
I suppose those who really want to use their joystick to control a media
player (???) can configure it as mouse device or so.
This replaces the old smoothmotion code by a more flexible tscale
option, which essentially allows any scaler to be used for interpolating
frames. (The actual "smoothmotion" scaler which behaves identical to the
old code does not currently exist, but it will be re-added in a later commit)
The only odd thing is that larger filters require a larger queue size
offset, which is currently set dynamically as it introduces some issues
when pausing or framestepping. Filters with a lower radius are not
affected as much, so this is identical to the old smoothmotion if the
smoothmotion interpolator is used.
I think this is what I always missed ever since I found the MPlayer
cache options: a way to enable the cache on local files with the default
settings, whatever they are.
Requested change in behavior.
Note that we set the assumed "infinite" display_fps to 1e6, which
conveniently lets vo_get_vsync_interval() return a dummy value of 1,
which can be easily checked against, and still avoids doing math with
float INFs.
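Roughly the arithmetic involved (a sketch; units assumed to be microseconds
and the helper name invented):

    #include <stdint.h>

    // With display_fps clamped to 1e6 for the "unknown/infinite" case, the
    // computed interval degenerates to 1, which callers can treat as a dummy
    // value instead of having to deal with a float INF.
    static int64_t vsync_interval_us(double display_fps)
    {
        if (display_fps <= 0)
            display_fps = 1e6;
        return (int64_t)(1e6 / display_fps); // 1e6 / 1e6 == 1
    }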
This adds stuff related to gamma, linear light, sigmoid, BT.2020-CL,
etc, as well as color management. Also adds a new gamma function (gamma22).
This adds new parameters to configure the CMS settings, in particular
letting us target simple colorspaces without requiring usage of a 3DLUT.
This adds smoothmotion. Mostly working, but it's still sensitive to
timing issues. It's based on an actual queue now, but the queue size
is kept small to avoid larger amounts of latency.
Also makes “upscale before blending” the default strategy.
This is justified because the "render after blending" thing doesn't seem
to work consistently anyway (it introduces stutter due to the way vsync
timing works, or something), so this behavior is a bit closer to master
and makes pausing/unpausing less weird/jumpy.
This adds the remaining scalers, including bicubic_fast, sharpen3,
sharpen5, polar filters and antiringing. Apparently, sharpen3/5 also
consult scale-param1, which was undocumented in master.
This also implements cropping and chroma transformation, plus
rotation/flipping. These are inherently part of the same logic, although
it's a bit rough around the edges in some cases, mainly due to the fallback
code paths (for bilinear scaling without indirection).
Move the command line parsing and some other things to the common init
routine shared between command line player and client API. This means
they're using almost exactly the same code now.
The main intended side effect is that the client API will load mpv.conf;
though still only if config loading is enabled.
(The cplayer still avoids creating an extra thread, passes a command
line, and prints an exit status to the terminal. It also has some
different defaults.)
This gets rid of the need for a second (or more) parameters; instead it
can be all in one parameter. The (now) redundant parameter is still
parsed for compatibility, though.
The way the flags make each other conflict is a bit tricky: they have
overlapping bits, and the option parser disallows setting already set
bits.
This automatically sets the gamma option depending on lighting conditions
measured from the computer's ambient light sensor.
sRGB – arguably the “sibling” to BT.709 for still images – has a reference
viewing environment defined in its specification (IEC 61966-2-1:1999, see
http://www.color.org/chardata/rgb/srgb.xalter). According to this data, the
assumed ambient illuminance is 64 lux. This is the illuminance where the gamma
that results from ICC color management is correct.
On the other hand, BT.1886 formalizes the gamma level for dim environments
to be 2.40, and Apple resources (WWDC12: 2012 Session 523: Best practices for
color management) define the BT.1886 dim environment at 16 lux.
So the logic we apply is:
* >= 64 lux -> 1.961 gamma
* <= 16 lux -> 2.400 gamma
* 16 lux < x < 64 lux -> logarithmic rescale of lux to gamma. The human
perception of illuminance roughly follows a logarithmic scale of lux [1].
[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/dd319008%28v=vs.85%29.aspx
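A sketch of that mapping (the function name is invented; the constants are
the reference points listed above):

    #include <math.h>

    // Clamp to the reference points and interpolate on a logarithmic lux
    // scale in between: 2.400 at <= 16 lux, 1.961 at >= 64 lux.
    static double lux_to_gamma(double lux)
    {
        const double lo_lux = 16.0, hi_lux = 64.0;
        const double lo_gamma = 2.400, hi_gamma = 1.961;

        if (lux <= lo_lux)
            return lo_gamma;
        if (lux >= hi_lux)
            return hi_gamma;

        double t = (log(lux) - log(lo_lux)) / (log(hi_lux) - log(lo_lux));
        return lo_gamma + t * (hi_gamma - lo_gamma);
    }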
Breaks vo_opengl by default. I'm not able to fix this myself, because I
have no clue about the overcomplicated color management logic. Also,
while this is apparently caused by commit fbacd5, the following commits
all depend on it, so revert them too.
This reverts the following commits:
e141caa97d653b0dd529729c8b3f64fbacd5de31
Fixes #1636.
Just use makeFirstResponder on the mpv events view from client code
if you need the built-in keyboard events (this is easier for dealing with view
nesting).
This relies on upstream support in lavc, and will hence basically not
work at all. The intent is to get support for writing this information
into ffmpeg's PNG encoders etc.
Now that we have fast stream switching, we can bump these sizes, as the
queues cause no delay in switching anymore.
Of course, the fast stream switching works for mkv and mp4 only. Other
formats will incur a quite terrible delay especially in network mode,
which this commit changes to 10 seconds. Let's see if someone
complains...
The way I interpreted it, it seemed like this was not default behavior
and could be enabled with --audio-pitch-correction - it should be made
clearer that this is actually *the default behavior*.
This is based on pretty much the same (somewhat naive) logic right now.
I'm not convinced that the extra logic that e.g. madVR includes is worth
enough to warrant heavily confusing the logic for it.
This shouldn't slow down the logic at all in any sane shader compiler,
and indeed it doesn't on any shader compiler that I tested.
Note that this currently doesn't affect cscale at all, due to the weird
implementation details of that.
This option allows the user to pass non-supported options directly to
youtube-dl, such as "--proxy URL", "--username USERNAME" and
"--password PASSWORD".
There is no sanity checking, so it's possible to break things (e.g.
if you pass "--version", mpv exits with a random JSON error).
Signed-off-by: wm4 <wm4@nowhere>
Hopefully, this will really clear up how the thing is supposed to work
(and that it's not SVP, nor MVTools).
I also removed instances of the word "interpolation", since that's a
term that's easily misleading.
Finally, I expanded on smoothmotion-threshold since the purpose/meaning
was a bit confusing.
This is done mainly for consistency, since all of the EWA filters share
similar properties and it's important to distinguish them for
documentation purposes.
This is a variation of ewa_lanczos that is sinc-windowed instead of
jinc-windowed. Results are pretty similar, but the logic is simpler.
This could potentially replace the ugly ewa_lanczos code.
It's hard to tell, but from comparing stills I think this one has
slightly less ringing than regular ewa_lanczos.
Now --ass-use-margins doesn't apply to normal subtitles anymore. This is
probably the inverse of the mpv behavior users expected so far, and
thus a breaking change, so rename the option so that the user at least has
a chance to look up the option and decide whether the new behavior is
wanted or not.
The basic idea here is:
- plain text subtitles should have a certain useful default behavior,
like actually using margins
- ASS subtitles should never be broken by default
- ASS subtitles should look and behave like plaintext subtitles if
the --ass-style-override=force option is used
This also subtly changes --sub-scale-with-window and adds the --ass-
scale-with-window option. Since this one isn't so important, don't
bother with compatibility.
You can set in which "corner" the OSD and subtitles are shown. I'd
prefer it a bit more general (so you could set the alignment using
a factor), but the libass API does not provide this.
Requested. See manpage additions.
This also makes the magical loop_times constants slightly saner, but
shouldn't change the semantics of any existing --loop option values.
Not very important for the command line player; but GUI applications
will want to know about this.
This only adds the internal API; support for specific audio outputs
comes later.
This reuses the ao struct as context for the hotplug event listener,
similar to how the "old" device listing API did. This is probably a bit
unclean and confusing. One argument for reusing it is that otherwise
rewriting parts of ao_pulse would be required (because the PulseAudio
API requires so damn much boilerplate). Another is that --ao-defaults is
applied to the hotplug dummy ao struct, which automatically applies such
defaults even to the hotplug context.
Notification works through the property observation mechanism in the
client API. The notification chain is a bit complicated: the AO notifies
the player, which in turn notifies the clients, which in turn will
actually retrieve the device list. (It still has the advantage that it's
slightly cleaner, since the AO stuff doesn't need to know about client
API issues.)
The weird handling of atomic flags in ao.c is because we still don't
require real atomics from the compiler. Otherwise we'd just use atomic
bitwise operations.
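From the client's side, this is ordinary property observation; a minimal
sketch:

    #include <stdio.h>
    #include <string.h>
    #include <mpv/client.h>

    // Subscribe once; every hotplug event then arrives as a property change
    // notification for "audio-device-list", at which point the client
    // re-reads the list.
    static void watch_audio_devices(mpv_handle *ctx)
    {
        mpv_observe_property(ctx, 0, "audio-device-list", MPV_FORMAT_NODE);

        while (1) {
            mpv_event *ev = mpv_wait_event(ctx, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id == MPV_EVENT_PROPERTY_CHANGE) {
                mpv_event_property *prop = ev->data;
                if (!strcmp(prop->name, "audio-device-list"))
                    printf("audio device list changed - re-query it here\n");
            }
        }
    }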
In my opinion the artifacts created by af_scaletempo on extreme slowdown
(50% or so) are too bothersome - but users disagree. So use
af_scaletempo on any speed changes, not just on speedup.
librubberband exports a big load of options. Normally, the default
settings (whether they're librubberband defaults or our defaults) should
be sufficient, but since I'm not so sure about this, making it
configurable allows others to figure it out for me.
If "--af=rubberband" is used, librubberband will be used to speed up or
slow down audio with pitch correction.
This still has some problems: the audio delay is not calculated
correctly, so the audio position jitters around by a few milliseconds.
This will probably ruin video timing.
This reverts commit a33b46194c.
It turns out FFmpeg really considers this a bug, and fixed it by making
the decoder output the correct pixel format.
Fixes #1565. Reverts the fix for #1528, though it should work fine with
a recent git master FFmpeg.
Make it accept "," as separator, instead of only ":". Do this by using
the key-value-list parser. Before this, the option was stored as a
string, with the option parser verifying that the option value was
correct. Now it's stored pre-parsed, although the log levels still
require separate verification and parsing-on-use to some degree (which
is why the msg-level option type doesn't go away).
Because the internal type changes, the client API "native" type also
changes. This could be prevented with some more effort, but I don't
think it's worth it - if MPV_FORMAT_STRING is used, it still works the
same, just with a different separator on read accesses.
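For example, a client that only deals with the string form keeps working,
just with the new separator:

    #include <mpv/client.h>

    // Setting the option as a string still works; only the native type is now
    // a key/value list, and reading it back as MPV_FORMAT_STRING yields
    // ","-separated pairs.
    static int set_log_levels(mpv_handle *ctx)
    {
        return mpv_set_option_string(ctx, "msg-level", "all=warn,vo=v,ao/alsa=debug");
    }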
This introduces a new option linear-scaling, which is now implied by
srgb, icc-profile and sigmoid-upscaling.
Notably, this means (sigmoidized) linear upscaling is now enabled by
default in opengl-hq mode. The impact should be negligible, and there
has been no observation of negative side effects of sigmoidized scaling,
so it feels safe to do so.
Autoload external audio files only if there's at least a video track
(which is not coverart pseudo-video).
Enable external audio file autoloading by default. Now that we actively
avoid doing stupid things like loading an external audio file for an
audio-only file, this should be fine.
Additionally, don't autoload subtitles if a subtitle is played.
Although you currently can't play subtitles without audio or video,
it's disturbing and stupid that the player might load subtitle files
with different extension and then fail.
Giving this such a prominent place is not really appropriate anymore.
Most people seeing this would probably expect a release changelog, not
something about MPlayer.
Since the page still could be useful for former MPlayer users (in
particular to avoid confusion with renamed options etc.), still keep
it in the DOCS directory.
This shouldn't exist and for the most part is meant to be used by the
ytdl Lua script, but let's document it anyway. Since the Lua API handles
all the details, it's considered much more "stable" than the raw API,
which is why the raw API wasn't documented.
In ancient times, this was needed because it was not default, and many
VOs had problems with it. But it was always default in mpv, and all VOs
are required to deal with it. Also, running --fixed-vo=no is not useful
and just creates weird corner cases. Get rid of it.
Comment explains why I have been so doubtful at adding this. The Apple docs
say CGDisplayModeGetRefreshRate is supposed to work only for CRTs, but it
doesn't, and actually works for LCD TVs connected over HDMI and external
displays (at least that's what I'm told, I don't have the hardware to test).
Maybe Apple docs are incorrect.
Since AFAIK Apple doesn't want to give us a better API – maybe in the fear we
might be able to actually write some useful software instead of "apps" –
I decided not to care as well and commit this.
This reverts the default behavior introduced in commit 93feffad. Way too
often libavcodec will return RGB data that has an alpha channel as per
pixel format, but actually contains garbage.
On the other hand, this will actually render garbage color values in
e.g. PNG files (for pixels with alpha==0, the color value should be
essentially ignored, which is what the old alpha blend mode did).
This "fixes" #1528, which is probably a decoder bug (or far less likely,
a broken file).
Make the lazy gamma initialization less weird, and make the default
value of the "gamma" sub-option 1.0. This means --vo=opengl:help will
list the actual default value.
Also change the lower bound to 0.1 - avoids a division by zero (I don't
know how shaders handle NaN, but it's probably not a good idea to give
them this value).
These commands are counterparts of sub_add/sub_remove/sub_reload which
work for external audio files.
Signed-off-by: wm4 <wm4@nowhere>
(minor simplification)
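A minimal sketch of using the new command from the client API (the flags
mirror sub_add):

    #include <mpv/client.h>

    // Load an external audio file and select it immediately; "auto" would add
    // it without selecting it.
    static int add_external_audio(mpv_handle *ctx, const char *path)
    {
        const char *cmd[] = {"audio-add", path, "select", NULL};
        return mpv_command(ctx, cmd);
    }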
These were derived from dividing our assumed video gamma (1.961) by some
typical screen values (2.2 for dimly lit and 2.4 for pitch black):
1.961/2.4 = 0.8170833333333334 ~= 0.8
1.961/2.2 = 0.8913636363636364 ~= 0.9
This is somewhat imperfect, because detection of hw decoding APIs is
mostly done on demand, and often avoided if not necessary. (For example,
we know very well that there are no hw decoders for certain codecs.)
This also requires every hwdec backend to identify itself (see hwdec.h
changes).
This does what it's documented to do.
The implementation reuses the code in mpv_detach_destroy(). Due to the
way async requests currently work, just sending a synchronous dummy
request (like an "ignore" command) would be enough to ensure
synchronization, but this code will continue to work even if this
changes.
The line "ctx->event_mask = 0;" is removed, but it shouldn't be needed.
(If a client is somehow very slow to terminate, this could silence an
annoying queue overflow message, but all in all it does nothing.)
Calling mpv_wait_async_requests() and mpv_wait_event() concurrently is
in theory allowed, so change pthread_cond_signal() to
pthread_cond_broadcast() to avoid missed wakeups.
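Typical usage from a client, sketched:

    #include <mpv/client.h>

    // Queue an asynchronous command, then block until mpv has finished
    // processing all async requests made on this handle. Replies are still
    // delivered as events; they just aren't pending anymore afterwards.
    static void flush_async_requests(mpv_handle *ctx)
    {
        const char *cmd[] = {"frame-step", NULL};
        mpv_command_async(ctx, 0, cmd);
        mpv_wait_async_requests(ctx);
    }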
As requested in issue #1542.
This was apparently useful for correct interlaced scaling (although I
don't know anyone who used this). It was rarely used (if at all), had an
inconvenient output format (packed YUV), and now has a better solution
in libavfilter (using the libavfilter "scale" filter via vf_lavfi).
There is no reason to keep this filter any longer.
It's entirely useless. I left it in for a while, because the analog TV
code had a transitional bug that could switch chroma planes, but it was
fixed long ago. It's also available in libavfilter.
If a file is unseekable (consider e.g. a http server without resume
functionality), but the stream cache is active, the player will enable
seeking anyway. Until now, client API users couldn't know that this
happens, and it has implications on how well seeking will work. So add a
property which exports whether this situation applies.
Fixes #1522.
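A sketch of distinguishing the two cases via the client API:

    #include <mpv/client.h>

    // "seekable" says whether seeking is enabled at all; "partially-seekable"
    // says it's only enabled because the stream cache fakes it, so seeks
    // outside the cached range may not work well.
    static void check_seekability(mpv_handle *ctx, int *seekable, int *partial)
    {
        *seekable = *partial = 0;
        mpv_get_property(ctx, "seekable", MPV_FORMAT_FLAG, seekable);
        mpv_get_property(ctx, "partially-seekable", MPV_FORMAT_FLAG, partial);
    }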
This allows getting the log at all with --no-terminal and without having
to retrieve log messages manually with the client API. The log level is
hardcoded to -v. A higher log level would lead to too much log output
(huge file sizes and latency issues due to waiting on the disk), and
isn't too useful in general anyway. For debugging, the terminal can be
used instead.
The previous default ("no") seemed to be equivalent to "min" in practice
(though it might depend on the website, which is even worse).
Better just select the best stream by default.
This queries the _ICC_PROFILE property on the root window. It also tries
to reload the ICC when it changes, or if the mpv window changes the
monitor. (If multiple monitors are covered, mpv will randomly select one
of them.)
The official spec is a dead link on freedesktop.org, so don't blame me
for any bugs.
Note that this assumes that Xinerama screen numbers match the way mpv
enumerates the xrandr monitors. Although there is some chance that this
matches, it most likely doesn't, and we actually have to do complicated
things to map the screen numbers. If it turns out that this is required,
I will fix it as soon as someone with a suitable setup for testing the
fix reports it.
Seems like several people agree that it's a good filter for downscaling.
Setting this option by default may also prevent people from accidentally
using an unsuitable filter for downscaling by setting "scale" and
without being aware of the implications (maybe). On the other hand,
this change is not strictly backwards compatible for the same reasons.
Also, allow disabling this option with scale-down="" (before this, not
setting it was the only way to do this - not possible anymore if it's
set by default). This is what the change in handle_scaler_opt() does.
New command `mouse <x> <y> [<button> [single|double]]` is introduced.
This will update mouse position with given coordinate (`<x>`, `<y>`),
and additionally, send single-click or double-click event if `<button>`
is given.
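For example, from the client API (button 0 is the left mouse button):

    #include <mpv/client.h>

    // Equivalent of the input command: mouse 200 150 0 double
    static int send_double_click(mpv_handle *ctx)
    {
        const char *cmd[] = {"mouse", "200", "150", "0", "double", NULL};
        return mpv_command(ctx, cmd);
    }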
vo.c queried the VO at initialization whether it wants to be updated on
every display frame, or every video frame. If the smoothmotion option
was changed at runtime, the rendering mode in vo.c wasn't updated.
Just let vo_opengl set the mode directly. Abuse the existing
vo_set_flip_queue_offset() function for this.
Also add a comment suggesting the use of --display-fps to the manpage,
which doesn't have anything to do with the rest of this commit, but is
important to make smoothmotion run well.
Repurpose demuxer->filetype for this. It used to be used to print a
human readable format description; change it to a symbolic format name
and export it as property.
Unfortunately, libavformat has its own weird conventions, which are
reflected through the new property, e.g. the .mp4 case mentioned in the
manpage.
Fixes #1504.
The symlink trick made waf go crazy (deleting source files, getting
tangled up in infinite recursion... I wish I was joking). This means we
still can't build the client API examples in a reasonable way using the
include files of the local repository (instead of globally installed
headers). Not building them at all is better than deleting source files.
Instead, provide some manual instructions how to build each example
(except for the Qt examples, which provide qmake project files).
SmoothMotion is a way to time and blend frames made popular by madVR. Its
intended behaviour is to remove stuttering caused by mismatches between the
display refresh rate and the video fps, while preserving the video's original
artistic qualities (no soap opera effect). It's supposed to make 24fps video
playback on 60hz monitors as close as possible to a 24hz monitor.
Instead of drawing a frame only once, when its pts has passed the vsync time, we
redraw at the display refresh rate, and if we detect the vsync is between two
frames we interpolate them (depending on their position relative to the vsync).
We actually interpolate as few frames as possible to avoid a blur effect as
much as possible. For example, if we were to play back a 1fps video on a 60hz
monitor, we would blend on at most 1 vsync for each frame (while the other 59
vsyncs would be rendered as is).
Frame interpolation is always done before scaling and in linear light when
possible (an ICC profile is used, or :srgb is used).
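The blending weight is essentially just the vsync's position between the two
frames' timestamps; a sketch (not the actual renderer code):

    // Weight of the "next" frame when mixing two adjacent video frames for a
    // vsync at vsync_pts: 0.0 shows only the previous frame, 1.0 only the
    // next one, anything in between blends them (ideally in linear light).
    static double blend_weight(double prev_pts, double next_pts, double vsync_pts)
    {
        if (next_pts <= prev_pts || vsync_pts <= prev_pts)
            return 0.0;
        if (vsync_pts >= next_pts)
            return 1.0;
        return (vsync_pts - prev_pts) / (next_pts - prev_pts);
    }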
These aliases were removed in commit 1ec77214. Add a notice to the
manpage how to get these back. Apparently, "lanczos2" and "lanczos3"
were the only interesting aliases possibly used by someone, so the
description is limited to these two.
These are now auto-detected sanely; and enabled whenever it would be a
performance or quality gain (which is pretty much everything except
bilinear/bilinear scaling).
Perhaps notably, with the absence of scale_sep, there's no more way to
use convolution filters on hardware without FBOs, but I don't think
there's hardware in existence that doesn't have FBOs but is still fast
enough to run the fallback (slow) 2D convolution filters, so I don't
think it's a net loss.
This is better even for non-separable. The only exception is when using
bilinear for both lscale and cscale. I've fixed the
documentation/comments to make more sense.
This is not quite the same thing as madVR's antiringing algorithm, but
it essentially does something similar.
Porting madVR's approach to elliptic coordinates will take some amount
of thought.
This also fixes the maximum range to 16.0, which was previously set to
32.0 and incorrectly documented as 8.0. 16 taps should be more than
anybody will ever need, but it's the highest radius that's supported by
all affected filters.
Before this, we merely printed a message to the terminal. Now the API
user can determine this properly. This might be important for API users
which somehow maintain complex state, which all has to be invalidated if
(state-changing) events are missing due to an overflow.
This also forces the client API user to empty the event queue, which is
good, because otherwise the event queue would reach the "filled up"
state immediately again due to further asynchronous events being added
to the queue.
Also add some minor improvements to mpv_wait_event() documentation, and
some other minor cosmetic changes.
Fixes #1472.
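In the event loop this looks roughly like:

    #include <mpv/client.h>

    // If the event queue overflowed, events (including property change
    // notifications) were lost, so any mirrored state must be re-read from
    // mpv instead of being reconstructed from past events.
    static void run_event_loop(mpv_handle *ctx)
    {
        while (1) {
            mpv_event *ev = mpv_wait_event(ctx, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                return;
            if (ev->event_id == MPV_EVENT_QUEUE_OVERFLOW) {
                // Re-query all properties this client caches here.
            }
        }
    }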
(Maybe these options should have been named --autofit-max and
--autofit-min, but since --autofit-larger already exists, use
--autofit-smaller for symmetry.)
The "\\" escape was rendered as "\" on the website. I'm hoping quoting
this in ``...`` will render it correctly.
Also add an example for show_text, which awkwardly does not require
escaping the "\".
After finding out more about how video mastering is done in the real
world it dawned upon me why the "hack" we figured out in #534 looks so
much better.
Since mastering studios have historically been using only CRTs, the
practice adopted for backwards compatibility was to simulate CRT
responses even on modern digital monitors, a practice so ubiquitous that
the ITU-R formalized it in Rec. BT.1886 to be precisely gamma 2.40.
As such, we finally have enough proof to get rid of the option
altogether and just always do that.
The value 1.961 is a rounded version of my experimentally obtained
approximation of the BT.709 curve, which resulted in a value of around
1.9610336. This is the closest average match to the source brightness
while preserving the nonlinear response of the BT.1886 ideal monitor.
For playback in dark environments, it's expected that the gamma shift
should be reproduced by a user controlled setting, up to a maximum of
1.224 (2.4/1.961) for a pitch black environment.
More information:
https://developer.apple.com/library/mac/technotes/tn2257/_index.html
The Qt example already does this. I hoped this was restricted to
QApplication only, but apparently Qt repeated this mistake with
QGuiApplication (QGuiApplication was specifically added for QtQuick at a
much later point, even though QApplication inherits from it).
Seems to work with GtkSocket and passing the gtk_socket_get_id() value
via "wid" option to mpv.
One caveat is that using <tab> to move input focus from mpv to GTK does
not work. It seems we would have to interpret <tab> ourselves in this
case. I'm not sure if we really should do this - it would probably
require emulating some other typical conventions too. I'm not sure if an
embedder could do something about this on the toolkit level, but in
theory it would be possible, so leave it as is for now.
Remove the "all" special-behavior, and instead interpret trailing "*"
characters. --display-tags=all is replaced by --display-tags=* as a
special-case of the new behavior.
See #1404.
Note that the most straight-forward value for matchlen in the normal
case would be INT_MAX, because it should be using the entire string.
I used keylen+1 instead, because glibc seems to handle this case
incorrectly:
snprintf(buf, sizeof(buf), "%.*s", INT_MAX, "hello");
The result is empty, instead of just containing the string argument.
This might be a glibc bug; it works with other libcs (even MinGW-w64).
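A standalone illustration of the workaround (the glibc behavior is as
reported above; other libcs print the full string in both cases):

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *key = "hello";
        int keylen = (int)strlen(key);
        char buf[64];

        // Intended meaning: "no length limit". With the glibc version in
        // question this reportedly produced an empty string.
        snprintf(buf, sizeof(buf), "%.*s", INT_MAX, key);
        printf("INT_MAX precision:  '%s'\n", buf);

        // Workaround: any precision larger than the string works the same.
        snprintf(buf, sizeof(buf), "%.*s", keylen + 1, key);
        printf("keylen+1 precision: '%s'\n", buf);
        return 0;
    }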
Make their meaning more exact, and don't pretend that there's a
reasonable definition for "bits-per-pixel". Also make unset fields
unavailable.
average_depth still might be inconsistent: for example, 10 bit 4:2:0 is
identified as 24 bits, but RGB 4:4:4 as 12 bits. So YUV formats
seemingly drop the per-component padding, while RGB formats do not.
Internally it's consistent though: 10 bit YUV components are read as
16 bit, and the padding must be 0 (it's basically like an odd fixed-
point representation, rather than a bitfield).
bpp (bits-per-pixel) and depth (bit depth of a color component) can
technically be calculated from the pixel format, but this requires a massive
amount of information to be duplicated on the client side.
These sub-properties are provided for convenience.
We still keep the window pointer, because we want to call
QQuickWindow::resetOpenGLState() (which runs on the rendering thread
only). Interesting mess...
This avoids issues when upscaling directly in linear light, and is the
recommended way to upscale images according to imagemagick.
The default slope of 6.5 offers a reasonable compromise between
ringing artifacts eliminated and ringing artifacts introduced by
sigmoid-upscaling. Same goes for the default center of 0.75.
The previous implementation of opengl-cb kept only the latest flipped frame.
This can cause massive frame drops, because rendering is done asynchronously
and only the latest frame can be rendered.
This commit introduces a frame queue and related options to opengl-cb.
frame-queue-size: the maximum size of the frame queue (1-100, default: 1)
frame-drop-mode: behavior when the frame queue is full (pop, clear, default: pop)
The frame queue holds delayed frames and, if it overflows, drops frames
according to the selected mode:
'pop' mode: drops the oldest frames that overflow the queue.
'clear' mode: drops all frames in the queue and clears it.
With the default options (frame-queue-size=1:frame-drop-mode=pop),
opengl-cb effectively behaves the same way as the previous implementation.
For frame-queue-size > 1, opengl-cb tries to call update() without waiting
for the next flip_page() in order to consume queued frames.
Signed-off-by: wm4 <wm4@nowhere>
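A rough sketch of the two drop modes on a simple array-backed queue (types
and names invented; real code would also have to release the dropped frames):

    #include <string.h>

    // Illustration only - not the opengl-cb implementation.
    struct frame_queue {
        void *frames[100];
        int num;       // frames currently queued
        int capacity;  // configured frame-queue-size (1-100)
    };

    // "pop": make room by discarding the oldest queued frame.
    static void push_frame_pop(struct frame_queue *q, void *frame)
    {
        if (q->num == q->capacity) {
            memmove(&q->frames[0], &q->frames[1],
                    (q->num - 1) * sizeof(q->frames[0]));
            q->num--;
        }
        q->frames[q->num++] = frame;
    }

    // "clear": throw away everything that was queued and start over.
    static void push_frame_clear(struct frame_queue *q, void *frame)
    {
        if (q->num == q->capacity)
            q->num = 0;
        q->frames[q->num++] = frame;
    }

    // With capacity == 1 (the default), both modes reduce to "keep only the
    // latest frame", matching the previous behavior.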
mpv can be built natively on a Windows machine using MSYS2. Add detailed
instructions on how to build and merge them with the existing
instructions for cross-compilation.
This one avoids the use of an FBO. It's less flexible, because it works
around the whole QML rendering API. It seems to be the only way to get
OpenGL rendering without any indirections, though.
Parts of this example were inspired by Qt's "Squircle" example.
Also add a README file with a short description of each example, to
reduce the initial confusion.
This used to be required to workaround PulseAudio bugs. Even later, when
the bugs were (partially?) fixed in PulseAudio, I had the feeling the
hacks gave better behavior. On the other hand, I couldn't actually
reproduce any bad behavior without the hacks lately. On top of this, it
seems our hacks sometimes perform much worse than PulseAudio's native
implementation (see #1430).
So disable the hacks by default, but still leave the code and the option
in case it still helps somewhere. Also, being able to blame PulseAudio's
code by using its native API is much easier than trying to debug our own
(mplayer2-derived) hacks.
Was already possible before by injecting the magic PID
8192 into channels.conf, the flag makes this much more
usable and we also have it documented.
Useful not only for debugging, but also for incomplete
channels.conf (mplayer format...), multi-channel
recording, or channels which do dynamic PID switching.
full-transponder is also useful for channels which switch PIDs on-the-fly.
ffmpeg can handle this, but it needs the full stream with all PIDs.
--sub-scale-by-window=no attempts to keep subs always at the same pixel
size.
The implementation is a bit all over the place, because it compensates
already done scaling by an inverse scale factor, but it will probably do
its job.
Fixes #1424. (The semantics and name of --sub-scale-with-window are
kept, and this adds a new option - the name is confusingly similar, but
it's actually analogue to --osd-scale-by-window.)