The video code can deal fine with feeding software image formats to
hwdec interop drivers. In RPI's case, this is preferable for
performance, working around OpenGL bugs (see RPI firmware issue #666),
and because OpenGL rendering doesn't bring many advantages anyway, since
the RPI supports GLES 2.0 only.
Maybe a way to force the normal video path is needed later. But
currently, this can be tested by just not loading the hwdec interop
driver.
If you run command-line mpv and set --hwdec to something that does
not load the RPI interop layer, you'll even have to use --hwdec-preload
manually to get it enabled.
This was intended to put the GL layer above the standard console. (But
actually that was done already, and the oddness I'm seeing seems to
be an unrelated bug.)
The code for expanding the ~~ prefix used mp_find_config_file(), which
strictly looks for _existing_ files in any config path (i.e. not just
the user-local one, but also system-wide config). If no such file
exists, it simply returns NULL, which makes the code below just return
the literal, unexpanded path.
Change this so that it'll resolve the path to the user-local config
directory instead.
Requested in #3591.
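A minimal sketch of the new behavior, assuming mpv-internal signatures roughly
like the ones discussed above (the path helpers below are hypothetical, not the
actual patch):

    // Illustrative only: expand "~~/<rest>". mp_find_config_file() is the
    // real function mentioned above; get_user_config_dir() and join_path()
    // are hypothetical helpers.
    static char *expand_config_prefix(void *ctx, struct mpv_global *global,
                                      const char *rest)
    {
        // Prefer an existing file from any config path (user or system).
        char *res = mp_find_config_file(ctx, global, rest);
        if (!res) {
            // Old behavior: fall through and return the literal "~~/..." path.
            // New behavior: resolve against the user-local config directory.
            res = join_path(ctx, get_user_config_dir(global), rest);
        }
        return res;
    }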
This should make display-names usable on Windows. It returns a list of
GDI monitor names like "\\.\DISPLAY1". Since it may be useful to get the
monitor that Windows considers associated with the window (with
MonitorFromWindow), this will always be returned as the first entry in the list.
This monitor is the one used for display-fps and icc-profile-auto.
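For reference, the Win32 side of this looks roughly as follows (a simplified
sketch, not mpv's actual code; the helper name is made up, and a real
implementation would likely use the wide-character APIs and convert to UTF-8):

    #include <windows.h>
    #include <stdio.h>

    // Get the GDI device name (e.g. "\\.\DISPLAY1") of the monitor that
    // Windows associates with a window.
    static void get_window_display_name(HWND window, char *buf, size_t size)
    {
        HMONITOR mon = MonitorFromWindow(window, MONITOR_DEFAULTTOPRIMARY);
        MONITORINFOEXA mi = { .cbSize = sizeof(mi) };
        if (GetMonitorInfoA(mon, (MONITORINFO *)&mi))
            snprintf(buf, size, "%s", mi.szDevice);
        else
            buf[0] = '\0';
    }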
Seems like a valid use-case. Not sure if I like it calling back into the
config code. Care has to be taken not to let the config path resolving
code deadlock (which is why the locking details in the msg.c code are
changed).
Fixes #3591.
I'm not sure if this option affects anything or if it's a placebo,
especially since the VO thread is now registered with MMCSS. Still, I
think --priority=high may have helped back when I used mplayer2 on a
netbook. It's also possible that encoding-mode users would want to set
--priority=idle.
Anyway, it was one of the last M_OPT_FIXED options, so fix that.
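For context, the named --priority values boil down to Win32 process priority
classes, roughly like this (an illustrative mapping, not the actual option
code):

    #include <windows.h>
    #include <string.h>

    // Map a --priority value to a Win32 priority class and apply it.
    static void apply_priority(const char *name)
    {
        DWORD cls = NORMAL_PRIORITY_CLASS;
        if (!strcmp(name, "high"))
            cls = HIGH_PRIORITY_CLASS;
        else if (!strcmp(name, "idle"))
            cls = IDLE_PRIORITY_CLASS;
        SetPriorityClass(GetCurrentProcess(), cls);
    }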
We always want to use __declspec(selectany) to declare GUIDs, but
manually including <initguid.h> in every file that used GUIDs was
error-prone. Since all <initguid.h> does is define INITGUID and include
<guiddef.h>, we can remove all references to <initguid.h> and just
compile with -DINITGUID to get the same effect.
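For illustration, this is what the mechanism amounts to (the GUID value below
is made up):

    // Compiled with -DINITGUID, DEFINE_GUID from <guiddef.h> expands to an
    // actual GUID definition (with __declspec(selectany)) instead of an
    // extern declaration, so no separate GUID library is needed for it.
    #include <guiddef.h>

    DEFINE_GUID(EXAMPLE_GUID, /* made-up value, for illustration only */
                0x12345678, 0x1234, 0x5678,
                0x9a, 0xbc, 0xde, 0xf0, 0x12, 0x34, 0x56, 0x78);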
Also, this partially reverts 622bcb0 by re-adding libuuid.a to the
build, since apparently some GUIDs (such as GUID_NULL) are not declared
in the source file, even when INITGUID is set.
AVIOContext.seekable is actually a bitfield. Currently, it has only
AVIO_SEEKABLE_NORMAL defined, but it might be extended with a hint for
non-byte seekability. Thus we should check it correctly.
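The correct check is a flag test rather than a plain boolean test, roughly:

    #include <libavformat/avio.h>

    // seekable is a bitfield: test the byte-seek flag explicitly instead of
    // treating any non-zero value as "byte-seekable".
    static int is_byte_seekable(const AVIOContext *avio)
    {
        return !!(avio->seekable & AVIO_SEEKABLE_NORMAL);
    }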
This should still allow user-set default options to override built-in
pseudo-gui while respecting user-set pseudo-gui options.
Pros:
- user option in default profile overrides built-in pseudo-gui's options
Ex: a user-set screenshot-directory overrides the built-in pseudo-gui's value
- user can "fix" pseudo-gui if some option like "force-window=no" is set
in the default profile by setting "force-window=yes" in [pseudo-gui]
- `mpv --profile=pseudo-gui` will work as before
Cons:
- --show-profile=pseudo-gui won't display the built-in's options
Original idea from wm4.
Documentation edits mostly by wm4.
Signed-off-by: wm4 <wm4@nowhere>
If the PTS goes backwards (whether it's a timestamp reset or some other
problem), we would just use 0 as frame duration. (At least until the logic
for detecting divergence with the timestamps gets active.)
Trust the demuxer framerate in these cases instead, if it's available. I
think this improves behavior slightly with some broken files.
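A minimal sketch of this fallback (a hypothetical helper, not the actual
player code):

    // If the PTS difference is unusable (backwards or zero), fall back to
    // the demuxer-reported frame rate instead of a zero frame duration.
    static double guess_frame_duration(double pts, double prev_pts, double fps)
    {
        double dur = pts - prev_pts;
        if (dur <= 0 && fps > 0)
            dur = 1.0 / fps;
        return dur > 0 ? dur : 0;
    }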
During init, it will first call mp_load_builtin_scripts(), and then load
the builtin scripts again via mp_load_scripts().
This was harmless (a second attempt won't load it again if the first one
was successful), but it's unnecessary, and also looks confusing if the
scripts failed to load the first time.
demux_lavf.c forces seeking to be reported as supported if the
STREAM_CTRL_HAS_AVSEEK query succeeds. But it always succeeds
with current FFmpeg versions. (Seems like Libav commit cae448cf broke
this in early 2016.)
Now we can't determine via private API whether the underlying protocol
supports read_seek anymore. The affected protocols (mostly rtmp) also
set seekable=0, meaning they signal they're not seekable, even though
read_seek would work. (My guess is that this can't be fixed because even
though seekable is in theory a combination of elaborate flags [of which
only 1 is defined, AVIO_SEEKABLE_NORMAL], a seekable!=0 always means
it's byte-seekable in some way.)
So the FFmpeg API is being garbage _again_, and all we can do is
determine this via the protocol name and a whitelist.
Should fix the behavior reported in #1701.
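A sketch of the whitelist idea described above (the exact protocol list is a
guess, not necessarily what mpv ships):

    #include <string.h>

    // Protocols that support read_seek() despite reporting seekable == 0.
    static int protocol_has_avseek(const char *proto)
    {
        static const char *const whitelist[] = {
            "rtmp", "rtmpe", "rtmps", "rtmpt", "rtmpte", "rtmpts", NULL
        };
        for (int n = 0; proto && whitelist[n]; n++) {
            if (strcmp(proto, whitelist[n]) == 0)
                return 1;
        }
        return 0;
    }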
Secondary subtitle streams (to be shown at the top of the screen alongside
the main subtitle stream) were displayed with normal alignment. This is because
we tell libass to override the alignment style (a relatively recent
change, see commit 2f1eb49e). This would behave differently with old
libass versions too.
To escape the mess, just set the alignment explicitly with an override
tag instead of modifying the style.
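The override-tag approach amounts to something like this (simplified; "\an8"
is the ASS tag for top-center alignment, and the exact tag and placement mpv
uses may differ):

    #include <stdio.h>

    // Prepend an ASS alignment override tag to the event text instead of
    // modifying the style's Alignment field.
    static void format_secondary_sub(char *out, size_t size, const char *text)
    {
        snprintf(out, size, "{\\an8}%s", text);
    }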
This should normally happen only if memory allocation for the state
fails, which should be extremely rare. But with LuaJIT on OSX, it can
happen if the magic compiler flags required by LuaJIT were not passed to
the mpv compilation. Print an error to reduce confusion.
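Assuming the state is created with luaL_newstate() (which returns NULL on
allocation failure), the added check is essentially this (the message wording
is illustrative):

    #include <lua.h>
    #include <lauxlib.h>
    #include <stdio.h>

    static lua_State *create_lua_state(void)
    {
        lua_State *L = luaL_newstate();
        if (!L) {
            // Allocating the Lua state failed; with LuaJIT on OSX this can
            // also mean the required build flags were missing.
            fprintf(stderr, "Could not initialize Lua.\n");
        }
        return L;
    }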
The intention of M_OPT_FIXED is to make options not runtime-changeable,
so trying to set them at runtime will always error. This is not wanted
for --profile and --include, for which there is no reason to block them
at runtime.
Fixes #3581.
When switching a subtitle track, the subtitle wasn't necessarily
updated, especially when playback was paused.
Some awfully subtle and complex interactions here.
First off (and not so subtle), the subtitle decoder will read packets
only on explicit update_subtitles() calls, which, if video is active,
happen only when a new video frame is shown. (A simple video frame
redraw doesn't trigger this.) So call it explicitly. But only if
playback is "initialized", i.e. not when it does initial track selection
and decoder init, during which no packets should be read.
The second issue is that the demuxer thread simply will not read new
packets just because a track was switched, especially if playback is
paused. That's fine, but if a refresh seek is to be done, it really
should do this. So if there's either 1. a refresh seek requested, or 2.
a refresh seek ongoing, then read more packets.
Note that it's entirely possible that we overflow the packet queue with
this in unpredictable weird corner cases, but the queue limit will still
be enforced, so this shouldn't make the situation worse.
This was dumb and could return something like "{name=123}" as an array.
Also, fix the error message if a key is not a string. lua_typename()
takes a type directly, not a stack item.
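The corrected pattern for the error message (a standalone illustration, not
the actual mpv code):

    #include <lua.h>
    #include <lauxlib.h>

    // lua_typename() expects a type constant (LUA_TSTRING, ...), not a
    // stack index, so translate the index with lua_type() first.
    static void check_key_is_string(lua_State *L, int key_index)
    {
        if (lua_type(L, key_index) != LUA_TSTRING) {
            luaL_error(L, "key must be a string, got %s",
                       lua_typename(L, lua_type(L, key_index)));
        }
    }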
The last commit was fine - just making some enhancements.
Rename the function to parse_node_chapters(), since it really doesn't
have much to do with Lua.
Don't use len<0 as a check for whether it makes sense to set chapters, and
instead check for mpctx->demuxer (that includes the possibility to set
chapters e.g. within a preload hook, but after chapters are initialized
from the demuxer).
Return M_PROPERTY_ERROR instead of M_PROPERTY_OK if the mpv_node has the
wrong type.
It's ok if a chapter has no title, so change the checks accordingly.
Remove a Yoda condition.
Notify that chapter metadata might have changed with mp_notify() (the
chapter list itself is already taken care of by generic code).
Fix leaking the metadata allocations of the new chapter list.
Obviously, in the vast majority of cases, there's only one device
in the system, but doing this means we're more likely to get a
usable device in the multi-device case.
CUDA would support decoding on one device and displaying on another,
but the peer memory handling is not transparent, and I have no way
to test it, so I can't really write it.
The documentation around this stuff is poor, but I found an NVIDIA
sample that demonstrates how to use the interop API most efficiently.
(https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st)
Key lessons are:
1) You can register the texture itself and have CUDA write to it,
thereby skipping an additional copy through the PBO.
2) You don't have to be mapped when you do the copy - once you get a
mapped pointer, it remains valid. Magic!
This lets us throw out the PBOs as well as much of the explicit
alignment and stride handling.
CPU usage is slightly (~3%) lower for 4K content in one test case,
so it makes a detectable difference, and presumably saves memory.
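Roughly, the pattern from that sample looks like this (a sketch against the
CUDA driver API; error handling omitted, and the texture/plane parameters are
placeholders):

    #include <stddef.h>
    #include <GL/gl.h>
    #include <cuda.h>
    #include <cudaGL.h>

    // Register the GL texture with CUDA once and fetch the backing CUarray.
    // Per point 2) above, the array can still be used as a copy target
    // after unmapping.
    static CUarray map_texture_once(GLuint texture)
    {
        CUgraphicsResource res;
        CUarray array;
        cuGraphicsGLRegisterImage(&res, texture, GL_TEXTURE_2D,
                                  CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD);
        cuGraphicsMapResources(1, &res, 0);
        cuGraphicsSubResourceGetMappedArray(&array, res, 0, 0);
        cuGraphicsUnmapResources(1, &res, 0);
        return array;
    }

    // Per decoded frame: copy a plane from the decoder's device memory
    // straight into the texture's array, with no PBO involved.
    static void copy_plane(CUarray dst, CUdeviceptr src, size_t pitch,
                           size_t width_bytes, size_t height)
    {
        CUDA_MEMCPY2D cpy = {
            .srcMemoryType = CU_MEMORYTYPE_DEVICE,
            .srcDevice     = src,
            .srcPitch      = pitch,
            .dstMemoryType = CU_MEMORYTYPE_ARRAY,
            .dstArray      = dst,
            .WidthInBytes  = width_bytes,
            .Height        = height,
        };
        cuMemcpy2D(&cpy);
    }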
Seems like this confused users quite often.
Instead of --profile=pseudo-gui, --player-operation-mode=pseudo-gui now
has to be used to invoke pseudo GUI mode. The old way still works, and
still behaves in the old way.
I would have been fine with this, but now I want to add another flag,
and the duplication would become messier than having a strange
function for deduplication.
The property observation mechanism turns properties into integer IDs for
fast comparison. This means if two properties get the same ID, they will
receive the same notifications. Use this to make properties under
options/ receive notifications. The option-property bridge marks
top-level properties with the same name as the options.
This still might not work in cases where the C code sets values on option
structs directly.
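From the client API side, the visible effect is that observing a property
under "options/" now delivers change notifications like the corresponding
top-level property does; for example (the specific property name here is just
an example):

    #include <mpv/client.h>

    static void observe_option(mpv_handle *h)
    {
        // Change notifications for "options/volume" should now fire when
        // the top-level "volume" property changes.
        mpv_observe_property(h, 0, "options/volume", MPV_FORMAT_NONE);
    }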