Until now, bstr_xappend_vasprintf() always called vsnprintf() twice:
once to determine how much output the call would produce, and a second
time to actually output the data to the (possibly resized) target
memory.
Change this so that it tries to output to the already allocated memory
first, and repeats the call only if allocation is required.
This is especially helpful, as bstr_xappend_vasprintf() is designed to
avoid reallocation when building strings. Usually, the second
vsnprintf() will happen only at the beginning, when the buffer hasn't
been extended to its largest needed size yet.
Not sure if there is a need to optimize this; but see the next commit.
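Roughly, the new pattern looks like this (a minimal sketch using an
invented growable-buffer type, not the actual bstr code):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Hypothetical growable string buffer; field names are illustrative.
    // Assumes b->start is already allocated (error checking omitted).
    struct buf {
        char *start;
        size_t len; // bytes used, excluding the terminating NUL
        size_t cap; // bytes allocated
    };

    static void buf_append_vfmt(struct buf *b, const char *fmt, va_list ap)
    {
        va_list copy;
        va_copy(copy, ap); // keep a copy in case a second pass is needed

        // First pass: write directly into the space already allocated.
        size_t avail = b->cap - b->len;
        int n = vsnprintf(b->start + b->len, avail, fmt, ap);
        if (n >= 0 && (size_t)n >= avail) {
            // Truncated: grow exactly once, then repeat the call.
            b->cap = b->len + n + 1;
            b->start = realloc(b->start, b->cap);
            vsnprintf(b->start + b->len, n + 1, fmt, copy);
        }
        if (n > 0)
            b->len += n;
        va_end(copy);
    }

Once the buffer has reached its working size, the common case is a
single vsnprintf() call and no allocation at all.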
JSON IPC works on Windows now, and although the transports for each
platform have similar characteristics, they unfortunately have different
names (Unix domain sockets on Linux/Unix vs. named pipes on Windows).
Hopefully this change better reflects the purpose of the option too,
since with --input-ipc-server, mpv acts as an IPC server that can
service many simultaneous clients (as opposed to --input-file, which can
only do one-to-one IPC).
This implements the JSON IPC protocol with named pipes, which are
probably the closest Windows equivalent to Unix domain sockets in terms
of functionality. As with Unix sockets, this will allow mpv to listen
for IPC connections and handle multiple IPC clients at once. A few
cross-platform libraries and frameworks (Qt, node.js) use named pipes
for IPC on Windows and Unix sockets on Linux and Unix, so hopefully this
will
ease the creation of portable JSON IPC clients.
Unlike the Unix implementation, this doesn't share code with
--input-file, meaning --input-file on Windows won't understand JSON
commands (yet). Sharing code and removing the separate implementation in
pipe-win32.c is definitely a possible future improvement.
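For reference, a bare-bones blocking named pipe server looks roughly
like this (illustration only: the pipe name is made up, and unlike the
real implementation this serves one client at a time):

    #include <windows.h>

    int main(void)
    {
        for (;;) {
            HANDLE pipe = CreateNamedPipeA(
                "\\\\.\\pipe\\mpv-example", // made-up pipe name
                PIPE_ACCESS_DUPLEX,
                PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
                PIPE_UNLIMITED_INSTANCES, // allow many instances
                4096, 4096, 0, NULL);
            if (pipe == INVALID_HANDLE_VALUE)
                return 1;

            // Block until a client connects, then echo its data back.
            if (ConnectNamedPipe(pipe, NULL) ||
                GetLastError() == ERROR_PIPE_CONNECTED) {
                char buf[4096];
                DWORD got = 0, put = 0;
                while (ReadFile(pipe, buf, sizeof(buf), &got, NULL) && got)
                    WriteFile(pipe, buf, got, &put, NULL);
            }
            DisconnectNamedPipe(pipe);
            CloseHandle(pipe);
        }
    }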
Glitches when resizing are still possible, but are reduced. Other VOs
could support this too, but don't need to do so.
(Totally avoiding glitches would be much more effort, and probably not
worth the trouble. How about you just watch the video the player is
playing, instead of spending your time resizing the window?)
Should reflect I/O speed.
This could go into the terminal status line, but that line already uses
too much space and I'm not sure how to fit it in, so it's not there yet.
The old algorithm produced results which were not uniformly distributed,
i.e. some particular shuffles were preferred over others.
The new algorithm is an implementation of the Fisher-Yates shuffle which
is guaranteed to shuffle uniformly given a sufficiently uniform rand()
and ignoring potential floating-point errors.
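For reference, the core of the algorithm (a generic sketch, not the
exact playlist code; as noted above, the modulo bias of rand() is
ignored):

    #include <stdlib.h>

    // Fisher-Yates: every one of the n! permutations is equally likely,
    // assuming a sufficiently uniform rand().
    static void shuffle(int *a, size_t n)
    {
        if (n < 2)
            return;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1); // uniform index in [0, i]
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }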
Signed-off-by: wm4 <wm4@nowhere>
Until now, we have tried to create a GL 3.0 context. The main reason for
this is that many Mesa-based drivers did not support anything better.
But some drivers (Mesa AMD) will not report a higher OpenGL version,
because their compatibility mode is restricted. While later GL features
are reported as extensions just fine, there doesn't seem to be a way to
determine or enable higher GLSL versions.
Add some more shitty hacks to try to deal with this messed up situation,
and try to probe each interesting GL version separately (starting with
3.3, then 3.2 etc.). Other backends might suffer from similar problems,
but they will have to deal with it on their own.
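The probing boils down to a loop like this (a sketch: create_context()
stands in for the backend's real context-creation entry point, and the
tail of the version list is illustrative):

    #include <stddef.h>

    // Hypothetical backend hook: tries to create a context for one
    // specific version, returns NULL on failure.
    extern void *create_context(int major, int minor);

    static const int gl_versions[][2] = {
        {3, 3}, {3, 2}, {3, 1}, {3, 0}, {2, 1},
    };

    static void *probe_gl_context(void)
    {
        for (size_t i = 0; i < sizeof(gl_versions) / sizeof(gl_versions[0]); i++) {
            void *ctx = create_context(gl_versions[i][0], gl_versions[i][1]);
            if (ctx)
                return ctx; // first version the driver accepts wins
        }
        return NULL;
    }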
Probably fixes #2938, or maybe not.
Prevents an infinite loop of configure events and set_fullscreen
requests on Weston and other compositors that respect the protocol.
Fixes #2817
This reverts commit eb6b2b6e50.
This colorspace has historically been used as a calibration target for
most digital projectors and sees some involvement in the UltraHD
standards, so it's a useful addition to mpv.
This changes behavior somewhat. The old behavior can be restored by
setting "mp.use_suspend=true". It was originally introduced for the OSC,
but I can't reproduce whatever misbehavior I was seeing.
(See mp.suspend()/resume() for an explanation of what the suspend
mechanism does.)
converted_imgfmt will be used by the renderer logic to build an
appropriate shader chain. It doesn't influence the format of any
textures. Thus it doesn't matter whether the hw video surface is mapped
as RGB or RGBA. What matters is whether the video actually contains
alpha or not. Since virtually all hardware decoders do not support alpha
in any way, this can be hardcoded as "no alpha".
This avoids unnecessary GPU work.
This also gets rid of the somewhat hard-to-read texture swizzle setup
and turns it into something dumber.
Assumes that we don't create any FBOs with 2-channel formats. (Only the
video source textures are handled by this commit.)
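For context, the removed setup was of this general shape (a generic GL
texture swizzle illustration, not mpv's actual code):

    #include <epoxy/gl.h> // any loader providing GL 3.3 enums works

    // Remap a two-channel (GL_RG) texture so shaders see the components
    // in the expected places (GL 3.3 / ARB_texture_swizzle).
    static void set_rg_swizzle(void)
    {
        static const GLint swizzle[4] = {GL_RED, GL_GREEN, GL_ZERO, GL_ONE};
        glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);
    }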
This is particularly useful for Opus, which allows only a fairly
restrictive set of samplerates. If the codec doesn't provide a list of
samplerates, just continue to try the requested one and hope for the
best.
Fixes #2957
This function chooses the best match to a given samplerate from a provided
list. This can be used, for example, by the ao to decide what samplerate to use
for output.
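A sketch of what such a selection might look like (the policy shown,
i.e. the smallest rate at or above the request and otherwise the largest
rate below it, is an assumption for illustration):

    // Returns 0 if the list is empty.
    static int select_best_samplerate(int samplerate, const int *rates, int n)
    {
        int best = 0;
        for (int i = 0; i < n; i++) {
            int r = rates[i];
            if (r >= samplerate && (best < samplerate || r < best))
                best = r; // smallest rate at or above the request
            else if (best < samplerate && r > best)
                best = r; // otherwise, the largest rate below it
        }
        return best;
    }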
* Use the update-core command
* Add --check-c-compiler=gcc to be safe
* Add warning about potential pitfalls of adding C:\msys2\mingw64\bin to %PATH%
* Recommend winpty
* Add note about ANGLE
Previously, gl->DXOpenDeviceNV was called twice when using dxva2 with
dxinterop. AMD drivers refused to allow this. With this commit,
context_dxinterop sets its own implementation of MPGetNativeDisplay,
which can return either an IDirect3DDevice9Ex or a
dxinterop_device_HANDLE depending on the "name" request string.
hwdec_dxva2gldx then requests both of these, avoiding the need to call
gl->DXOpenDeviceNV a second time.
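The dispatch-by-name pattern looks roughly like this (a sketch: the
struct and the exact request strings are assumptions based on the
description above):

    #include <string.h>

    struct priv {
        void *d3d9_device;      // the IDirect3DDevice9Ex
        void *dxinterop_device; // the HANDLE from gl->DXOpenDeviceNV
    };

    // Return a different native object depending on what the caller asks
    // for, so the device only has to be opened once.
    static void *get_native_display(struct priv *p, const char *name)
    {
        if (strcmp(name, "IDirect3DDevice9Ex") == 0)
            return p->d3d9_device;
        if (strcmp(name, "dxinterop_device_HANDLE") == 0)
            return p->dxinterop_device;
        return NULL; // unknown request
    }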
Drag&drop mechanisms typically support multiple types for the drop data.
Move most of the logic for deciding which types are accepted and
preferred to event.c, where the data is also interpreted.
(Maybe sorting the types by assigning scores is over-engineered, since
they're already sorted by preference, but it's actually not much more
code.)
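The scoring idea reduces to something like this (type names and scores
are invented for illustration):

    #include <string.h>

    static const struct { const char *type; int score; } drop_types[] = {
        {"text/uri-list", 2}, // preferred: a list of URLs/paths
        {"text/plain",    1}, // fallback: treat the text as one URL/path
    };

    // Higher score wins; -1 means the type is not accepted at all.
    static int drop_type_score(const char *type)
    {
        for (size_t i = 0; i < sizeof(drop_types) / sizeof(drop_types[0]); i++) {
            if (strcmp(type, drop_types[i].type) == 0)
                return drop_types[i].score;
        }
        return -1;
    }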
Not very interesting/meaningful yet, but preparation for the next
commit.
Reduces VO access and makes the code more self-contained. (One day the
windowing backend code should not access the VO anymore. We're just not
quite there yet.)
Instead of displaying it only on playback start (or after switching
tracks), always display it even after a seek.
This helps with --lavfi-complex. You can now overlay e.g. audio
visualizations over cover art, and it won't break after a seek.
The downside is that this might make seeks with huge cover art slower.
There is also a glitch on seeking: since cover art pictures always
have timestamp 0, the playback time will be 0 for a moment after a
seek, and then revert to the audio PTS (as video is considered EOF).
This is also
due to how lavfi's overlay filter behaves. (I'm not sure how to tell
lavfi that it's just a single frame.)
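(For example, something like
--lavfi-complex='[aid1]asplit[ao][a];[a]showvolume[t];[vid1][t]overlay[vo]'
should overlay a volume meter on the cover art; untested, but asplit,
showvolume and overlay are standard libavfilter filters.)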
Like dxinterop, this uses StretchRect for RGB conversion. This is
unavoidable as long as we use the dxva2 API, as there is no way to
access the raw data of hardware-decoded Direct3D9 surfaces.
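The conversion step amounts to a single StretchRect call (a minimal
sketch; the surfaces and the filter choice are assumptions):

    #define COBJMACROS
    #include <d3d9.h>

    // Copy/convert a decoded surface to an RGB render target; StretchRect
    // performs the pixel format conversion on the GPU.
    static HRESULT convert_surface(IDirect3DDevice9 *dev,
                                   IDirect3DSurface9 *src, // decoded surface
                                   IDirect3DSurface9 *dst) // RGB target
    {
        return IDirect3DDevice9_StretchRect(dev, src, NULL, dst, NULL,
                                            D3DTEXF_NONE);
    }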