There's no need to call wl_display_flush() since all the client-side
buffered data has already been flushed prior to polling the fd.
Instead, only check for POLLIN and the usual POLLERR/POLLHUP.
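A minimal sketch of the intended polling (the display pointer and the
timeout are assumed from context; the helper name is made up):

    #include <poll.h>
    #include <wayland-client.h>

    // Sketch only: wait for incoming events. The client-side buffer was
    // already flushed, so POLLOUT is not requested.
    static int wait_for_display_events(struct wl_display *display, int timeout_ms)
    {
        struct pollfd fd = {
            .fd = wl_display_get_fd(display),
            .events = POLLIN,
        };
        int r = poll(&fd, 1, timeout_ms);
        if (r > 0 && (fd.revents & (POLLERR | POLLHUP)))
            return -1; // display error or disconnect
        return r;      // >0: events readable, 0: timeout
    }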
Don't just cause vo_opengl to update the ICC profile every time the
window is moved. Instead, explicitly check if the screen was changed.
Mostly untested.
Some client API users simply don't like such filenames. For their sake,
don't return them, but return a dummy filename instead. (Returning a
latin1-ized version would work too, but is slightly more work.)
Also remove the "\n" from the replacement dummy filename. This was
accidental.
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to clean up this code a bit and make it work like the
video filter code.
As part of this change, make the balance code work like the volume code,
and add an option backing the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and
there are some other problems with it, formally deprecate both the old
property and the new option.
Since mixer->ao is always NULL now (it was really just forgotten to be
removed), the uninit call never actually cleared the af field, leaving
a dangling pointer that could be accessed by volume control.
The only difference this makes is whether VA_FRAME_PICTURE or
VA_BOTTOM_FIELD is passed to VAProcPipelineParameterBuffer.flags for
progressive frames (that should be force-deinterlaced). VA-VPP doesn't
really seem to care, and we can get rid of mp_refqueue_is_interlaced()
entirely. It could be argued it's better to pass field flags instead of
the progressive flag.
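Roughly, the choice in question looks like this (a hedged sketch; the
surrounding variable names are made up, only the VA flags are real):

    #include <va/va.h>
    #include <va/va_vpp.h>

    // Sketch: pick the flags for one frame. With this change, frames that
    // should be force-deinterlaced get a field flag even if they are not
    // flagged as interlaced; the result goes into
    // VAProcPipelineParameterBuffer.flags.
    static unsigned int pick_flags(int deint_enabled, int top_field_first)
    {
        if (!deint_enabled)
            return VA_FRAME_PICTURE;
        return top_field_first ? VA_TOP_FIELD : VA_BOTTOM_FIELD;
    }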
"Real" frame flag vs. what we pretend it to be. It always used the real
flag, and thus never deinterlaced unflagged frames, even if the
suboption was set to "no".
The hw_subfmt field roughly corresponds to the field
AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type
AVPixelFormat (instead of the underlying hardware format), so it's a
good idea to switch to this too, in preparation.
Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-
specific number. VDPAU and Direct3D11 already used mp_imgfmt, but
Videotoolbox and VAAPI had to be switched.
One somewhat user-visible change is that the verbose log will now always
show the hw_subfmt as an image format, instead of as a nonsensical number.
(In the end it would be good if we could switch to AVHWFramesContext
completely, but the upstream API is incomplete and doesn't cover
Direct3D11 and Videotoolbox.)
Old-style commands using _ as separator (e.g. show_progress) were still
used in some places, including documentation and configuration files.
This commit updates all such instances to the new style (show-progress)
so that commands are easier to find in the manual.
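For example, an input.conf binding changes like this (illustrative):

    # old style:
    P show_progress
    # new style:
    P show-progress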
Since it turns out that knowing what exactly a file was tagged with can
be useful for debugging purposes, expose this as a property so I can
check it more easily.
This is mostly useful for sig-peak (since nom-peak is currently entirely
calculated by us), but I added both for consistency.
--deinterlace=auto is the default, and has the obscure semantics that
deinterlacing is disabled, unless the user has manually inserted a
deinterlacing filter.
While in software decoding this doesn't matter, and we will happily
insert 2 yadif filters (if the user has already added one), or not
remove the yadif filter (if deinterlacing is disabled, but the user has
added the filter manually), this is different with hardware deinterlacer
filters. These support VFCTRL_SET_DEINTERLACE for toggling deinterlacing
filtering at runtime. It exists mainly for legacy reasons, and possibly
because it makes switching deinterlacing modes more efficient. It might
also give us an entry-point for VO deinterlacing, maybe. For whatever
reasons this mechanism exists, we still support and use it.
This commit fixes that video.c always used VFCTRL_SET_DEINTERLACE to
disable deinterlacing, even if --deinterlace=auto was set. Fix this by
checking the value of the option directly.
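A hedged sketch of the corrected check (names are approximate, not the
actual code; "auto" is assumed to be a negative sentinel here):

    // Only an explicitly set --deinterlace=yes/no should be forwarded to
    // the filter via VFCTRL_SET_DEINTERLACE; "auto" must leave a manually
    // inserted (hardware) deinterlacer alone.
    if (opts->deinterlace >= 0)                      // assumed: -1 means "auto"
        set_deinterlacing(mpctx, opts->deinterlace); // wraps VFCTRL_SET_DEINTERLACE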
For some reason, the lack of version info was preventing mpv from
appearing in the Default Programs dialog. Re-add it, but don't set the
string version numbers from version.h, because that's what was causing
trouble when the version info was removed. Like the binary version
numbers, these are now hardcoded to 2.0.0.0, which probably doesn't
matter.
The new version info block is also slightly different to the old one. It
fills out all the binary VERSIONINFO fields and makes better use of
macros. It also removes the \000 line terminators from the string
version info, since as far as I can tell, this was just cargo-culting
for an old broken version of the Microsoft resource compiler, and
binutils' windres terminates the strings properly without them.
This should get mpv working on Windows 7 machines without hardware
accelerated graphics adapters. It already worked on Windows 8 and up
because those systems would silently fall back to WARP if there was no
graphics hardware installed.
The normal MPGL_CAP_SW flag is not set, so unlike other opengl backends,
this will choose a software adapter even if opengl:sw is not specified.
The reason for this is that, unlike on Linux, where vo_xv and vo_x11 can
be used, mpv on Windows does not have any VO to fall back on when
hardware acceleration isn't available, so if software adapters are
rejected, the user won't see any video output when using the default
settings. WARP
seems to perform quite well, so it should be used in this case.
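The fallback idea, stripped of the backend specifics, is roughly this
(a generic D3D11 illustration, not the actual ANGLE code path):

    #include <d3d11.h>

    // Try a hardware device first; if none is available (e.g. Windows 7
    // without a usable GPU driver), fall back to the WARP software
    // rasterizer.
    static HRESULT create_device(ID3D11Device **dev)
    {
        HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                       NULL, 0, D3D11_SDK_VERSION, dev, NULL, NULL);
        if (FAILED(hr))
            hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_WARP, NULL, 0,
                                   NULL, 0, D3D11_SDK_VERSION, dev, NULL, NULL);
        return hr;
    }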
The cache reader thread actually unlocks the mutex protecting the
underlying stream while reading from it. That's why other code goes out
of its way to run certain stream operations on the cache thread. Do the
same.
We could make this simpler by creating a mechanism that would "park" the
cache thread and make it wait for the lock (while we have it) in order
to gain exclusive access. This could be done in the future.
Eagerly execute seeks to the underlying stream in the cache seek
entrypoint itself. While asynchronous execution is a goal of the cache,
it doesn't matter too much for seeks. They always were executed within
the lock, so the reader was blocked anyway. It's not necessary to ensure
async. execution here either, because seeks are relatively rare, and the
demuxer can just stay blocked for a while.
Fixes: mpv http://samples.mplayerhq.hu/V-codecs/DIV5/ayaneshk-test.avi
This code was supposed to adjust existing conversion filters (to make
them output a different format). But the code was just broken,
apparently a refactoring accident. It accessed af instead of af->prev.
The bug tended to add new conversion filters, even if an existing one
could have been used. (Can be tested by inserting a dummy lavrresample
filter followed by a format filter which forces conversion.)
In addition, it's probably better to return the actual error code if
reinitializing the filter fails. It would then respect an AF_FALSE
return value, which means format negotiation failed, instead of a
generic error.
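Schematically, the fix amounts to this (field and function names are only
indicative of the audio filter chain, not a literal excerpt):

    // The existing conversion filter sits *before* the current filter, so
    // its output format must be adjusted on af->prev, not on af itself.
    struct af_instance *conv = af->prev;
    mp_audio_set_format(conv->data, new_format); // request a different output
    int r = af_reinit(s);
    if (r != AF_OK)
        return r; // propagate AF_FALSE (failed negotiation) instead of a generic error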
af_volume has a volumedb sub-option, which allows the user to set an
explicit volume. Until recently, the player read back this value and
used it as initial softvol volume. But now it just overwrites it.
Instead of overwriting it, multiply the different gain values. Above
all, this will do the right thing if only softvol is used, or if the
user only adds the af_volume filter manually.
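In other words, gains combine multiplicatively in linear scale, which is
additively in dB; a toy example (illustrative names only):

    #include <math.h>

    // Combine the player's softvol gain with af_volume's volumedb
    // sub-option instead of overwriting one with the other.
    static float combined_gain(float softvol_db, float volumedb)
    {
        float total_db = softvol_db + volumedb; // multiply == add in dB
        return powf(10.0f, total_db / 20.0f);   // back to a linear factor
    }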
Normally, you want downmixing to happen first thing in the filter chain.
This is reflected in codec downmixing, which feeds the filter chain
downmixed audio in the first place. Doing this has the advantage of
needing less data to process. But the main motivation is that if there
is a drc filter in the chain, you want it to process the downmixed
audio.
Add an idiotic heuristic to achieve this. It tries to detect whether the
audio was indeed automatically downmixed (or upmixed). To detect what
the output format is going to be, it builds the filter chain normally,
and then retries with the heuristic applied (and for extra paranoia,
retries without the heuristic again if it fails to successfully rebuild
the filter chain for unknown reasons). This is simple and will work in
almost all cases.
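The retry logic is roughly the following (a schematic sketch with made-up
helper names; see also the caveats below):

    // 1) build the chain normally to learn the output channel layout,
    // 2) if the layout change looks like an automatic down-/upmix, rebuild
    //    with the conversion moved to the start of the chain,
    // 3) if that rebuild fails for unknown reasons, retry without the
    //    heuristic so we at least end up with a working chain.
    int r = af_reinit(s);
    if (r == AF_OK && looks_like_auto_remix(s)) {   // hypothetical check
        set_downmix_first(s, true);                 // hypothetical toggle
        if (af_reinit(s) != AF_OK) {
            set_downmix_first(s, false);
            r = af_reinit(s);
        }
    }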
Doing it in a more complete way is rather hard, because filters are so
generic. For example, we know absolutely nothing about the behavior of
af_lavfi, which creates an opaque filter graph with libavfilter. We
don't know why a filter would e.g. change the channel layout on its
output. (Our heuristic bails out in this case.) We're also slave to the
lowest common denominator of how our format negotiation works, and how
libavfilter's works.
In theory, we could make this mechanism explicit by introducing a
special dummy filter. The filter chain would then try to convert between
input and output formats at the dummy filter, which would give the user
more control over how downmix happens. On the other hand, the user could
just insert explicit conversion filters instead, so this would probably
have questionable value.
The algorithm and functionality are the same, but the code becomes much
simpler and easier to follow.
The assumption that there is only 1 conversion filter (lavrresample)
helps with the simplification, but the main change is to use the same
code for format/channels/rate. Get rid of the different AF_CONTROL_SET_*
controls, and change the af->data parameters directly. (af->data is
badly named, but essentially is a placeholder for the output format.)
Also, instead of trying to use the af_reinit() loop to init inserted
conversion filters or filters with changed output formats, do it inline,
and move the common code to a filter_reinit() function. This gets rid of
the awful retry variable.
In general, this should not change any runtime behavior.
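Illustratively, handling an inserted conversion filter inline now amounts
to something like this (a sketch; filter_reinit() is the helper mentioned
above, the remaining names are approximate):

    // Write the desired output parameters into the conversion filter's
    // af->data placeholder and reinitialize just that filter, instead of
    // going through AF_CONTROL_SET_* controls and the af_reinit() loop.
    mp_audio_set_format(conv->data, target.format);
    mp_audio_set_channels(conv->data, &target.channels);
    conv->data->rate = target.rate;
    if (filter_reinit(conv) != AF_OK)
        return AF_ERROR;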
This happens to be better for the af_volume filter (for softvol), and
saves some code too. It's "better" because you want to affect the
final filtered audio, such as after a manually inserted drc filter.
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
Do this because this was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes #3322.
I made this call up because I sort of thought it made sense. I'm now
convinced that it does not, and that it may even be actively harmful. I'm
still quite in the dark about how the Direct3D 11 video API is supposed
to work with synchronization, but at least for normal graphics this call
would not make much sense.
When selecting a device that simply doesn't exist with --audio-device,
AudioUnit will still initialize and start playback without complaining.
But it will never call the audio render callback, which leads to audio
playback simply not progressing.
I couldn't find a way to get AudioUnit to report an error at all, so
here's a crappy hack that takes care of this in most cases. We assume
that all devices have a kAudioDevicePropertyDeviceIsAlive property.
Invalid devices will error when querying the property (with 'obj!' as
status code).
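The check amounts to something like this (a sketch against the public
CoreAudio API; error handling is simplified):

    #include <stdbool.h>
    #include <CoreAudio/CoreAudio.h>

    // Query kAudioDevicePropertyDeviceIsAlive; for a nonexistent device the
    // query itself fails (typically with kAudioHardwareBadObjectError, 'obj!').
    static bool device_is_usable(AudioDeviceID dev)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioDevicePropertyDeviceIsAlive,
            .mScope    = kAudioObjectPropertyScopeGlobal,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        UInt32 alive = 0;
        UInt32 size = sizeof(alive);
        OSStatus err = AudioObjectGetPropertyData(dev, &addr, 0, NULL, &size, &alive);
        return err == noErr && alive;
    }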
This is not the correct fix, because we try to second-guess AudioUnit's
behavior by accessing a lower-level API. Suggestions welcome.
Although it appears to be accepted by the function, MSGL_STATUS messages
are never passed to the client API. Consequently "status" has the same
meaning as "v" and is useless.