Both AVFrame.pts and AVFrame.pkt_pts have existed for a long time. Until
now, decoders always returned the pts via the pkt_pts field, while the
pts field was used for encoding and libavfilter only. Recently, pkt_pts
was deprecated, and pts was switched to always carry the pts.
This means we have to be careful not to accidentally use the wrong
field, depending on the libavcodec version. We have to explicitly check
the version numbers. Of course the version numbers are completely
idiotic, because the pkg-config and library names are the
same for FFmpeg and Libav, so we have to deal with this explicitly as
well.
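Roughly, the check boils down to something like the sketch below. The
MP_FRAME_PTS name and the version cut-off are illustrative only, not the
actual names/values used; the micro >= 100 trick (FFmpeg uses micro
versions >= 100, Libav stays below) is the distinguishing part.
    #include <libavcodec/version.h>
    #include <libavutil/version.h>
    #include <libavutil/frame.h>

    /* The (57, 64, 100) cut-off is a placeholder for the real version at
     * which FFmpeg switched decoders to returning the pts in AVFrame.pts. */
    #if LIBAVCODEC_VERSION_MICRO >= 100 && \
        LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 64, 100)
    // New FFmpeg: decoders return the pts in AVFrame.pts.
    #define MP_FRAME_PTS(f) ((f)->pts)
    #else
    // Libav, or older FFmpeg: the decoded pts is still in AVFrame.pkt_pts.
    #define MP_FRAME_PTS(f) ((f)->pkt_pts)
    #endif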
Change a few other defaults accordingly:
- seekbarstyle=bar looks better with bottombar.
- Bigger scalewindowed and scalefullscreen make bottom/topbar more readable.
POSIX leaves poll() behavior on directories unspecified. While on
Linux, it seems to behave the same way as regular files (always
return immediately), this is not guaranteed. At least with OSX
10.12, it seems to wait, which essentially means that opening
directories will "hang".
Fixes #3530 and #3649.
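A minimal sketch of the kind of workaround this implies (the helper
names are made up, not taken from the stream code): check whether the fd
refers to a directory and treat it as always readable instead of handing
it to poll().
    #include <poll.h>
    #include <sys/stat.h>

    static int fd_is_directory(int fd)
    {
        struct stat st;
        return fstat(fd, &st) == 0 && S_ISDIR(st.st_mode);
    }

    static int wait_readable(int fd, int timeout_ms)
    {
        // POSIX leaves poll() on directories unspecified; OSX 10.12 blocks.
        if (fd_is_directory(fd))
            return 1; // treat as always readable
        struct pollfd p = { .fd = fd, .events = POLLIN };
        return poll(&p, 1, timeout_ms) > 0;
    }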
Keyboard input in the console still isn't quite as flexible as it is in
the video window. Ctrl+<letter> and Ctrl+LEFT/RIGHT work, but
Ctrl+Alt+<letter> and Ctrl+<number> do not. Also, in the new Windows 10
console, a bunch of Ctrl keystrokes including Ctrl+UP/DOWN are handled
by the console window and not passed to the application.
Unlike in w32_common.c, we can't really translate keyboard input
ourselves, because the keyboard layout of the console window (in
conhost.exe) doesn't necessarily match the keyboard layout of mpv's
console input thread. However, using ToUnicode as a fallback when the
console doesn't return a Unicode value could be a possible future
improvement.
Fixes #3625
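A rough sketch of what such a ToUnicode fallback could look like (this
is not code from mpv; the key-state handling is simplified and, as noted
above, it would use the input thread's layout, not conhost's):
    #include <windows.h>

    static wchar_t translate_key_fallback(const KEY_EVENT_RECORD *key)
    {
        // Build a minimal key state from the console event's modifier flags.
        BYTE state[256] = {0};
        if (key->dwControlKeyState & SHIFT_PRESSED)
            state[VK_SHIFT] = 0x80;
        if (key->dwControlKeyState & (LEFT_CTRL_PRESSED | RIGHT_CTRL_PRESSED))
            state[VK_CONTROL] = 0x80;
        if (key->dwControlKeyState & (LEFT_ALT_PRESSED | RIGHT_ALT_PRESSED))
            state[VK_MENU] = 0x80;

        wchar_t buf[4];
        int n = ToUnicode(key->wVirtualKeyCode, key->wVirtualScanCode,
                          state, buf, 4, 0);
        return n == 1 ? buf[0] : 0; // 0: no single translated character
    }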
The original version of this code in getch2-win.c fetched 128 console
events at once. This was probably to maximize the chance of getting a
key event if there were other events in the buffer, because it returned
the value of the first key event it found and ignored all others. Since
that code was written, it has been modified to receive console input in
an event-based way using an input thread, so it is probably not
necessary to fetch so many events at once any more. Also, I'm not sure
what it would have done if there were more than 128 events in the
console input buffer. It's possible that fetching multiple events at a
time also had performance advantages, but I can't find any other
programs that do this. Even libuv just fetches one console event at a
time.
Change read_input() to fetch only one event at a time and to consume all
available events before returning to WaitForMultipleObjects. Also remove
some outdated comments and pass the console handle through to the input
thread instead of calling GetStdHandle multiple times (I think this is
theoretically more correct because it is possible for the handles
returned by GetStdHandle to be changed by other threads.)
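The resulting read loop looks roughly like this (sketch only;
handle_key() is a hypothetical stand-in for the actual key handling):
    #include <windows.h>

    // Stand-in for the real key handling (hypothetical).
    static void handle_key(const KEY_EVENT_RECORD *key)
    {
        (void)key; // translate and forward the key press here
    }

    // Fetch one event at a time and drain the buffer before going back to
    // WaitForMultipleObjects, which signals the handle again on new input.
    static void drain_console_input(HANDLE in)
    {
        DWORD avail = 0;
        while (GetNumberOfConsoleInputEvents(in, &avail) && avail > 0) {
            INPUT_RECORD rec;
            DWORD read = 0;
            if (!ReadConsoleInputW(in, &rec, 1, &read) || read == 0)
                break;
            if (rec.EventType == KEY_EVENT && rec.Event.KeyEvent.bKeyDown)
                handle_key(&rec.Event.KeyEvent);
        }
    }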
If the "default" device refuses to be opened as spdif device (i.e. it
errors due to the AES0 etc. parameters), we were falling back to the
iec958 device. This is needed on some systems for smooth operation with
PCM vs. spdif.
Now change it to try "hdmi" before "iec958", which supposedly helps in
other situations.
Better suggestions welcome. Apparently kodi does this too, although I
didn't check directly.
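The fallback order then amounts to something like this (illustrative
only; ao_alsa is more involved, and the real device strings presumably
also carry the AES parameters):
    #include <alsa/asoundlib.h>

    // Try "default" first, then "hdmi", then "iec958".
    static snd_pcm_t *open_spdif_pcm(void)
    {
        static const char *const candidates[] = {"default", "hdmi", "iec958"};
        for (int i = 0; i < 3; i++) {
            snd_pcm_t *pcm = NULL;
            if (snd_pcm_open(&pcm, candidates[i],
                             SND_PCM_STREAM_PLAYBACK, 0) >= 0)
                return pcm;
            // Open failed; fall through to the next candidate.
        }
        return NULL;
    }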
- Change connector selection to accept human readable names (such as
eDP-1, HDMI-A-2) rather than arbitrary numbers; see the sketch after
this list.
- Change GPU selection to accept GPU number rather than device paths.
- Merge connector and GPU selection into one --drm-connector.
- Add support for --drm-connector=help.
- Add support for --drm-* in EGL backend.
- Refactor KMS; reduce state sharing across drm_common.
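For the connector-by-name selection, the matching reduces to roughly
this sketch (not the actual drm_common code; only two connector types
are mapped, for brevity):
    #include <stdio.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static const char *connector_type_name(uint32_t type)
    {
        switch (type) {
        case DRM_MODE_CONNECTOR_HDMIA: return "HDMI-A";
        case DRM_MODE_CONNECTOR_eDP:   return "eDP";
        default:                       return "Unknown";
        }
    }

    // Find a connector whose kernel-style name ("HDMI-A-2", "eDP-1", ...)
    // matches the requested string.
    static drmModeConnector *find_connector(int fd, const char *wanted)
    {
        drmModeRes *res = drmModeGetResources(fd);
        if (!res)
            return NULL;
        drmModeConnector *found = NULL;
        for (int i = 0; i < res->count_connectors && !found; i++) {
            drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);
            if (!c)
                continue;
            char name[64];
            snprintf(name, sizeof(name), "%s-%u",
                     connector_type_name(c->connector_type),
                     c->connector_type_id);
            if (strcmp(name, wanted) == 0)
                found = c;
            else
                drmModeFreeConnector(c);
        }
        drmModeFreeResources(res);
        return found;
    }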
The glFlush() call was made optional recently, since it's not needed in
most cases. On OSX it is still needed, though, since we removed
kCGLPFADoubleBuffer from the context creation, so the glFlush() call was
added to the cocoa backend only.
The CGLFlushDrawable() call can be safely removed, since it only does
something when a double buffered context is used. Also fixes a small
typo.
Fixes #3627.
In "dumb mode" (where most features are disabled and which only performs
some basic rendering) we explicitly copy a set of whitelisted options,
and leave all the other options at their default values. Add the new
--opengl-early-flush option to this whitelist. Also remove an option
field accidentally added in the commit adding --opengl-early-flush.
Don't require special property code for handling updates, and simply use
the UPDATE_AUDIO flag instead. Also make runtime changes to
--audio-client-name take effect.
Now a reload requested by an AO behaves in exactly the same way as
changing an AO-related option (like --audio-channels or
--audio-exclusive). This is good for testing and uniform behavior. (You
could go as far as saying it's a necessity, because the spotty and
obscure AO reload behavior is hard to reproduce and thus hard to test at
all.)
This affects changing audio configuration options. Explicitly flush and
uninitialize the audio output first before doing the rest. This should
ensure all state attached to the audio output is discarded and not used
during the reconfiguration.
Also add a comment to the reinit_audio_filters() call. It doesn't
necessarily restore the audio chain fully, but makes sure a seek is
issued if the amount of buffered audio discarded was large enough to
cause "problems".
It seems this can cause issues on certain platforms, so it's better to
disable it by default. The original reason for this isn't overly
justified, and display-sync mode should get rid of the need for it
anyway.
The new option is meant for testing, and will probably be removed if
nobody comes up and reports that enabling the option actually improves
anything.
The player tries to avoid splitting frames with spdif (sample alignment
stuff). This can in certain corner cases with certain drivers lead to
the situation that ao_get_space() returns a number higher than 0 and
lower than the audio frame size. The playloop will round this down to 0
bytes and do nothing, leading to a missed wakeup. This can lead to
underruns or playback completely getting stuck.
It can be reproduced by playing AC3 passthrough with no video and:
--ao=null --ao-null-buffer=0.256 --ao-null-outburst=6100
This commit attempts to fix it by allowing the playloop to write some
additional data (to get a complete frame), which will be buffered within
the AO ringbuffer even if the audio device doesn't want it.
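A hand-wavy illustration of the rounding problem and the fix (the names
are made up, not taken from the playloop):
    #include <stdbool.h>

    // With spdif, audio is written in whole frames only. Before the fix, a
    // "space" value between 1 and frame_samples-1 rounded down to 0, so
    // nothing was written and no further wakeup happened.
    static int samples_to_write(int space_samples, int frame_samples,
                                bool spdif_frames_only)
    {
        int n = space_samples;
        if (spdif_frames_only) {
            n -= n % frame_samples;     // only whole frames
            if (n == 0 && space_samples > 0)
                n = frame_samples;      // fix: write one full frame anyway;
                                        // the AO ringbuffer holds whatever
                                        // the device can't take yet
        }
        return n;
    }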
If we really want client API users to use mpv_set_property() instead of
mpv_set_option(), then compatibility handling of deprecated options
should be included. Otherwise, there's the danger that client API users
either break too early (and without a warning), or mpv_set_option() will
continue to have a reason to exist.
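Illustrative client snippet only ("old-option-name" is a placeholder for
any deprecated option, not a specific one): the expectation is that this
call hits the same compatibility handling as mpv_set_option_string()
would.
    #include <mpv/client.h>

    static int set_deprecated_option(mpv_handle *mpv)
    {
        // Should be mapped/warned about just like with mpv_set_option().
        return mpv_set_property_string(mpv, "old-option-name", "value");
    }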
Reduces code duplication between OpenGL backend and DRM VO.
(The control() for OpenGL backend isn't sufficiently similar to the
VO's control() to consider merging it as a whole - I extracted only the
FPS code.)
Rename the text subtitle options from --sub-text-* to --sub-*, and the
--ass-* options to --sub-ass-*.
The intention is that the common sub options use the --sub- prefix,
while the ASS-specific options are seen as a special variant of the sub
options.
The OSD options that work like the --sub- options are still named
--osd-.
The man page is updated, including a short note about the rename of the
--sub-text-* and --ass-* options to --sub-* and --sub-ass-*.
This is potentially incompatible if a program used negative timestamps
to deal with timestamp resets, which could lead to mpv
producing and using negative timestamps.
"seek -10 absolute" will seek to 10 seconds before the end. This more or less
matches the --start option, and negative seeks were otherwise useless (they just
clipped to 0).
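Minimal illustration of the new semantics (not the actual seek code):
    // A negative absolute seek target is taken relative to the end of the
    // file; the result is still clipped to 0.
    static double resolve_absolute_seek(double target, double duration)
    {
        if (target < 0)
            target += duration;   // "seek -10 absolute" -> duration - 10
        return target < 0 ? 0 : target;
    }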
Regression since commit bbcd0b6a. This code is just cursed, because it's
a fragile state machine with no proper tests, and could be done in
a much simpler way. Without doubt this change will cause a regression in
some ridiculous corner case as well.
Fixes #3610 (the cause of it, not the behavior it resulted in).
When the vaapi decoder is used in copy mode, it creates a dummy
display to render to. In theory, this should support hardware
decoding on a separate GPU that is not actually connected to
any output (like an iGPU which supports more formats than the
external GPU to which the monitor is connected).
However, before this change, only X11 displays were supported as
dummy displays. This caused some graphics drivers (namely
intel-driver) to core dump when they were not actually used as an X11
module.
This change introduces support for DRM libva displays, which
allows vaapi-copy to run on cards that are not actually
rendering the X11 output.
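Opening such a DRM-backed libva display can be sketched roughly as
below, assuming libva-drm is available; the render node path is just an
example, and the real code may probe for a suitable device instead of
hardcoding one.
    #include <fcntl.h>
    #include <unistd.h>
    #include <va/va.h>
    #include <va/va_drm.h>

    // Open a DRM-backed VADisplay for decoding only (no X11 involved).
    static VADisplay open_drm_va_display(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0)
            return NULL;
        VADisplay dpy = vaGetDisplayDRM(fd);
        int major, minor;
        if (!dpy || vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
            close(fd);
            return NULL;
        }
        return dpy; // the fd must stay open for the display's lifetime
    }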
Move the screensaver enable/disable determination to a central place,
and call it if the stop-screensaver property is changed.
Also, do not stop the screensaver when in idle mode (i.e. no file is
loaded).
Fixes #3615.