Directory-opening never worked on Windows because MSVCRT's open()
doesn't open directories and its fstat() doesn't recognise directory
handles. These are just MSVCRT restrictions, and the Windows API itself
has no problem with opening directories as file objects, so reimplement
mpv's mp_open and mp_stat to use the Windows API directly. This should
fix directory playback.
This also populates the st_dev and st_ino fields of struct stat, so
filesystem loop checking in demux_playlist.c should now work on Windows.
Fixes #4711
The new_segment field was used to inform the decoder data flow handler
of timeline boundaries, which are used for ordered chapters etc. (anything
that sets demuxer_desc.load_timeline). This broke seeking with the
demuxer cache enabled. The demuxer is expected to set the new_segment
field after every seek or segment boundary switch, so the cached packets
basically contained incorrect values for this, and the decoders were not
initialized correctly.
Fix this by getting rid of the flag completely. Let the decoders instead
compare the segment information by content, which is hopefully enough.
(In theory, two segments with the same information could perhaps appear in
broken-ish corner cases, or in an attempt to simulate looping, and such.
I preferred the simple solution over others, such as generating unique
and stable segment IDs.)
We still add a "segmented" field to make it explicit whether segments
are used, instead of doing something silly like testing arbitrary other
segment fields for validity.
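Schematically, the by-content comparison amounts to something like
this (a hedged sketch; the exact field set is illustrative):

    // Two packets belong to the same segment iff all their segment
    // information matches by value - no unique segment IDs needed.
    static bool same_segment(struct demux_packet *a, struct demux_packet *b)
    {
        return a->segmented == b->segmented &&
               a->start == b->start && a->end == b->end &&
               a->codec == b->codec;
    }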
Cached seeking with timeline stuff is still slightly broken even with
this commit: the seek logic is not aware of the overlap that segments
can have, and the timestamp clamping that needs to be performed in
theory to account for the fact that a packet might contain a frame that
is always clipped off by segment handling. This can be fixed later.
Needed for a failed experiment; leaving it in anyway, because it causes
no harm and might be less awkward if struct virtual_stream is ever
extended in the future.
This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME
format in FFmpeg, which allows decoders to provide an AVDRMFrameDescriptor
struct.
That struct holds dmabuf fds and information allowing zerocopy rendering
using KMS / DRM Atomic.
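As a rough sketch of what a consumer of such frames sees (not actual
mpv code; inspect_frame is a hypothetical helper):

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext_drm.h>

    static void inspect_frame(const AVFrame *frame)
    {
        // With AV_PIX_FMT_DRM_PRIME, data[0] points to the descriptor.
        const AVDRMFrameDescriptor *desc =
            (const AVDRMFrameDescriptor *)frame->data[0];
        for (int n = 0; n < desc->nb_objects; n++) {
            // Each object is a dmabuf fd that KMS/DRM or EGL can
            // import without copying the frame data.
            int fd = desc->objects[n].fd;
            size_t size = desc->objects[n].size;
            (void)fd; (void)size;
        }
    }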
This has been tested on a RockChip ROCK64 device.
Although seeking past the cached range will trigger a low-level seek, a
seek into the region between the cache end and the last video key frame
would simply seek to that key frame. This meant that you could get
"stuck" at the end of the file instead of terminating playback when
trying to seek past the end.
One change is that we fix this by _actually_ allowing SEEK_FORWARD to
seek past the last video keyframe in find_seek_target().
In that case, or otherwise seeking to cache buffer end, it could happen
that we set ds->reader_head=NULL if the seek target is after the current
packet. We allow this, because the end of the cached region is defined
by the existence of "any" packet, not necessarily a key frame. Seeking
there still makes sense, because we know that there are going to be more
packets (or EOF) that satisfy the seek target.
The problem is that just resuming demuxing with reader_head==NULL will
simply return any packets that come its way, even non-keyframe ones.
Some decoders will produce ugly soup in this case. (In practice, this
was not a problem, because seeking at the end of the cached region was
rare before this commit, and also some decoders like h264 will skip
broken frames by default anyway.)
So the other change of this commit is to enable key frame skipping.
As a nasty implementation detail, we use a separate flag, instead of
setting reader_head to the first key frame encountered (reader_head being
NULL can happen after a normal seek or on playback start, and then we
want to mirror the underlying demuxer behavior, for better or worse).
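The idea, as a hedged sketch (the flag and variable names are
illustrative, not necessarily the actual ones):

    // After a seek to the cache buffer end left reader_head==NULL,
    // drop packets until the next key frame, so decoders don't get
    // fed mid-GOP garbage.
    if (ds->skip_to_keyframe) {
        if (!dp->keyframe)
            return; // discard this packet
        ds->skip_to_keyframe = false;
    }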
This change is relatively untested, so if it breaks, you get to keep
the pieces.
Seems like most code dealing with this was for setting it in redundant
cases. Now SEEK_BACKWARD is redundant, and SEEK_FORWARD is the odd one
out.
Also fix that SEEK_FORWARD was not correctly unset in try_seek_cache().
In demux_mkv_seek(), make the arbitrary decision that a video stream is
not required for the subtitle prefetch logic to be active. We might want
subtitles with long duration even with audio only playback, or if the
file is used as an external subtitle.
If a packet uses segmentation, the codec field must be set. Copying the
codec field was forgotten as an oversight, which is why this crashed.
It showed up only now, because demux_copy_packet() was not used in the
main demux path until recently.
Fixes #5027.
Since we divide by it in a couple of places and compositors can be crazy,
it's better to be safe than sorry.
Also check it at cursor spawn during init (pointless, since this is done
again on cursor entry, but it's more correct).
It seems the cursor's position was not properly adjusted when scaled.
Hence, bring back correct buffer scaling to make the cursor look fine.
Also, the cursor surface now gets created sooner, which is better.
This improves upon the previous commit, and partially rewrites it (and
other code). It does:
- disable seeking within the cache by default, and add an option to
control it
- mess with the buffer estimation reporting code, which will most likely
lead to funny regressions even if the new features are not enabled
- add a back buffer to the packet cache
- enhance the seek code so you can seek into the back buffer
- unnecessarily change a bunch of other stuff for no reason
- fuck up everything and vomit ponies and rainbows
This should actually be pretty usable. One thing we should add is
properties to report the proper buffer state. Then the OSC could show a
nice buffer range. Also configuration of the buffers could be made
simpler. Once this has been tested enough, it can be enabled by default,
and might replace the stream cache's byte ringbuffer.
In addition it may or may not be possible to keep other buffer ranges
when seeking outside of the current range, but that would be much more
complex.
Move the ignore_eof field to the internal demux_stream struct. This is
relatively messy, because the internal struct exists only once the
stream is created, and after that setting the ignore_eof flag is a race
condition. We could bother with adding demux_add_sh_stream() parameters
for this, but let's not. So in theory a tiny race condition is
introduced, which can never be triggered since all demux API functions
are called by the playback thread only anyway.
Fix that ts_offset was accessed without the lock held (this was
introduced much earlier, by myself).
Introduce an alternative way of avoiding the annoying "EOF reached"
messages: don't reset the EOF flags for CC streams when a CC packet is
added. This makes the second commit in the PR that added the original
fix unnecessary.
As another cosmetic change, merge the checks in cached_demux_control()
into a single if().
In the future, the CC pseudo-stream should probably be replaced with an
entire pseudo-demuxer or such, which would avoid some of the messiness
(or maybe not, we don't know yet).
In the extreme case, reading 1 byte would wake up the cache to make the
cache thread read 1 byte. This would be extremely inefficient. This will
not normally happen in our cache implementation, but it's still present
to a lesser degree. Normally you'd set a predefined "cache too low"
boundary, after which you would restart reading. For some reason
something like this is already present using a hardcoded value
(FILL_LIMIT - I don't even know the deeper reason why this exists). So
use that to reduce wakeups.
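Conceptually (a hedged sketch; everything except FILL_LIMIT is made
up):

    // Wake the cache thread only once the remaining buffer drops
    // below the refill boundary, not after every single read call.
    if (buffered_bytes(cache) < FILL_LIMIT)
        pthread_cond_signal(&cache->wakeup);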
This doesn't fix redundant wakeups on EOF, which is especially visible
if something keeps retrying reads on EOF (like in an endless loop).
Regression since ec6e8a31e0. Removing the explicit else case made the
conversion to premultiplied alpha apply unconditionally. We want to
scale with premultiplied alpha, but we don't want to multiply with
alpha again on top of it.
Fixes #4983, hopefully.
This should be functionally identical to rgba16f, since the formats only
differ in their representation on the CPU, but it could be useful for RA
backends that don't expose rgba16f, like Vulkan. It's definitely useful
for the WIP D3D11 backend.
With video paused, changing the brightness controls (or similar) would
sometimes not rerender the video frame. So the OSD would redraw, but the
video wouldn't change. This is caused by output caching, and a redraw
request is free to return the cached frame. Change it such that the
cached frame is invalidated if any of the options or the equalizer change.
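Schematically (a hedged sketch; the field name is illustrative):

    // Any option or equalizer change drops the cached output, so the
    // next redraw request actually rerenders the video frame.
    static void mark_options_changed(struct gl_video *p)
    {
        p->output_fbo_valid = false;
    }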
In theory, gl_video_reset_surfaces() could be called if the equalizer
changes - this would apparently force interpolation to redraw all
frames. But this looks kind of crappy when changing the equalizer during
playback. It'll "eventually" use the correct settings anyway, and when
paused interpolation is off.
This was phased out, and by now was used only by vdpau. Drop the
mechanism and the vdpau special code, which means screenshots won't
include the vf_vdpaupp processing anymore. (I don't care enough about
vdpau, it's on its way out.)
The mechanism introduced in b135af6842 assumed AVHWFramesContext would
be enough. Apparently it's not - the intended use with Rockchip (not
Rokchip btw.) requires accessing actual frame data in order to access
the AVDRMFrameDescriptor struct.
Just pass the entire mp_image to the new function. This is more
flexible, although it slightly worries me that it will be less reusable
for things which require setting up mp_image_params before any real
frames are processed (such as filters).
The same should happen with any other side data that matters to mpv,
otherwise filters will drop it.
(No, don't try to argue that mpv should use AVFrame. That won't work.)
ffmpeg_garbage() is copy&paste from frame_new_side_data() in FFmpeg
(roughly feed201849b8f91), because it's not public API. The name
reflects my opinion about FFmpeg's API.
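For illustration, copying a single side-data entry by hand looks
roughly like this (a sketch of the general idea, with A53 closed
captions as the example, not the actual ffmpeg_garbage() body):

    #include <string.h>
    #include <libavutil/frame.h>

    static void copy_a53_cc(AVFrame *dst, const AVFrame *src)
    {
        AVFrameSideData *sd =
            av_frame_get_side_data(src, AV_FRAME_DATA_A53_CC);
        if (!sd)
            return;
        AVFrameSideData *new_sd =
            av_frame_new_side_data(dst, AV_FRAME_DATA_A53_CC, sd->size);
        if (new_sd)
            memcpy(new_sd->data, sd->data, sd->size);
    }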
In mp_image_to_av_frame(), change the too-fragile
*new_ref = (struct mp_image){0};
into explicitly zeroing out the fields that are "transferred" to the
created AVFrame.
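Schematically, that means going from wiping the whole struct to
clearing only the moved references (the field list is illustrative):

    // Only the references now owned by the created AVFrame are
    // cleared; all other mp_image fields keep their values.
    for (int n = 0; n < MP_MAX_PLANES; n++) {
        new_ref->planes[n] = NULL;
        new_ref->bufs[n] = NULL;
    }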
Merge mp_image_copy_fields_to_av_frame() into mp_image_from_av_frame(),
same for the other direction.
There isn't any good reason to keep them separate, and the refcounting
handling makes it only more awkward.
It seems this will be useful for Rockchip DRM hwcontext integration.
DRM hwcontexts have additional internal structure which can be different
depending on the decoder, and which is not part of the generic hwcontext
API. Rockchip has 1 layer, which EGL interop happens to translate to an
RGB texture, while VAAPI (mapped as a DRM hwcontext) will use multiple
layers. Both will use sw_format=nv12, and thus are indistinguishable on
the mp_image_params level. But this is needed to initialize the EGL
mapping and the vo_gpu video renderer correctly.
We hope that the layer count is enough to tell whether EGL will
translate the data to an RGB texture (vs. 2 textures resembling raw
nv12 data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.
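A hedged sketch of that heuristic (illustrative, not the actual
interop code; which case sets the flag is an assumption here):

    #include <libavutil/hwcontext_drm.h>

    // Guess from the layer count whether EGL interop will yield one
    // RGB texture (1 layer) or textures resembling raw nv12 planes.
    const AVDRMFrameDescriptor *desc =
        (const AVDRMFrameDescriptor *)frame->data[0];
    if (desc->nb_layers == 1)
        params->hw_flags |= MP_IMAGE_HW_FLAG_OPAQUE; // polarity assumed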
This commit adds the flag, infrastructure to set it, and an "example"
for D3D11.
The D3D11 addition is quite useless at this point. But later we want to
get rid of d3d11_update_image_attribs() anyway, while we still need a
way to force d3d11vpp filter insertion, so maybe it has some
justification (who knows). In any case it makes testing this easier.
Obviously it also adds some basic support for triggering the opaque
format for decoding, which will use a driver-specific format, but which
is not supported in shaders. The opaque flag is not used to determine
whether d3d11vpp needs to be inserted, though.
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.
We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
This was confusing at best. Change it to output the actual choices.
(Seems like in the end it's always me who has to clean up other people's
bullshit.)
Context names were not unique - but they should be, so fix it. The whole
point of the original --opengl-backend option was to side-step the
tricky auto-detection, so you know exactly what you get. The goal of
this commit is to make --gpu-context work the same way. Fix the
non-unique names by appending "vk" to the names.
Keep in mind that this was not suitable for selecting the "UI" backend
anyway, since "x11" would force GLX, whereas people on not-NVIDIA
actually want "x11egl". Users trying to use --gpu-context=x11 to force
the X11 backend would always end up with GLX, which would at least break
VAAPI hardware decoding for them. Basically the idea that this option
could select the "UI" type is completely broken - it selects an
implementation, which implies a UI. Selecting the UI type would
require a separate mechanism. (Although in theory this separate
mechanism could be part of the --gpu-context option - in any case,
someone would have to implement it.)
To achieve help output that can actually be understood, just duplicate
the code. Most of that code is duplicated anyway, and trying to share
just the list code at the cost of making the output unreadable doesn't
make much sense. If we wanted to save code/effort, we could
just remove the help output altogether.
--gpu-api has non-unique entries, and it would be nice to group them
(e.g. list all OpenGL capable contexts with "opengl"), but C makes this
simple idea too much of a pain, so don't do it.
Also remove a stray tab from the android entry on the manpage.
If the chroma location is missing, vo_gpu will use centered chroma.
Select a better chroma location by default: normally, it will always be
MPEG video chroma location. If full levels are used, use JPEG chroma
location, because that sort of sounds like it could make sense as it
might coincide with JPEG being decoded.
See e.g. #4804.
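The heuristic boils down to something like this (a hedged sketch using
mpv-style enum names; the exact condition is illustrative):

    // Default to MPEG video chroma siting; use centered (JPEG) siting
    // if full range suggests the source might be JPEG.
    if (params->chroma_location == MP_CHROMA_AUTO) {
        params->chroma_location =
            params->color.levels == MP_CSP_LEVELS_PC
                ? MP_CHROMA_CENTER : MP_CHROMA_LEFT;
    }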
Unfortunately I'm also adding the full text of the LGPL license text,
because the GPL one was already present in this repository, and I don't
want to imply that the GPL somehow has priority.