Doesn't really seem to be of much use. Get rid of the remaining uses of
it.
Concerning vo_opengl_old, it seems uninitGl() works fine even if called
before initialization.
This was a minor code duplication between vf_vdpaupp.c and vo_vdpau.c.
(In theory, we could always require using vf_vdpaupp with vo_vdpau, but
I think it's better if vo_vdpau can work standalone.)
Also remove MSGL_SMODE and friends.
Note: The indent in options.rst was added to work around a bug in
ReportLab that causes the PDF manual build to fail.
Change how the video decoding loop works. The structure should now be a
bit easier to follow. The interactions on format changes are (probably)
simpler. This also aligns the decoding loop with future planned changes,
such as moving various things to separate threads.
vf_fix_img_params() takes care of overwriting image parameters that are
normally not set correctly by filters. But this makes no sense for input
images. So instead, check that the input is correct.
It still has to be done for the first input image, because that's used
to handle some overrides (see video_reconfig_filters()).
These replace vf_read_output_frame(), although we still emulate that
function. This change is preparation for another commit (and this is
basically just to reduce the diff and improve the signal/noise ratio of
that commit).
Basically, if we feed the filter a new image even after the EOF state
has been reached (e.g. because the input stream "recovered"), we want
the filter to restart, instead of returning an error forever.
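
Roughly, the intended logic (a sketch with hypothetical field and helper
names, not the actual filter chain code):

    // Hypothetical sketch: leave the EOF state when new input arrives,
    // instead of rejecting all further frames.
    static int filter_frame(struct vf_instance *vf, struct mp_image *mpi)
    {
        if (vf->priv->eof && mpi) {
            // The input stream "recovered" after EOF: restart the filter.
            filter_reset(vf);       // hypothetical helper
            vf->priv->eof = false;
        }
        if (!mpi)
            vf->priv->eof = true;   // NULL input signals EOF/draining
        return do_filter(vf, mpi);  // hypothetical helper
    }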
Some non-deinterlacing filters (potentially denoising) also use
additional frames for filtering. The vdpau docs suggest providing at
least 1 future and 2 past _fields_, which means we need to provide 1
past frame (the future field is already the other field of the current
field, and both fields are in the same frame).
We can easily achieve this by buffering an additional frame in the non-
deint case.
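
To illustrate the field/frame arithmetic, a sketch of the mixer call for
the top field of the current frame (prev, cur, mixer, output, dst_rect
are assumed names; the constants and argument order are from the VDPAU
API):

    // Both past fields (bottom + top of the previous frame) live in
    // prev; the future field (bottom of the current frame) lives in cur.
    VdpVideoSurface past[2]   = {prev, prev};
    VdpVideoSurface future[1] = {cur};
    vdp->video_mixer_render(mixer, VDP_INVALID_HANDLE, NULL,
                            VDP_VIDEO_MIXER_PICTURE_STRUCTURE_TOP_FIELD,
                            2, past,    // past fields
                            cur,        // frame with the current field
                            1, future,  // future field
                            NULL, output, NULL, &dst_rect, 0, NULL);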
Remove the special casing of vo_vdpau vs. other VOs. Replace the
complicated interaction between vo.c and vo_vdpau.c with a simple queue
in vo.c. VOs other than vdpau are handled by setting the length of the
queue to 1 (this is essentially what waiting_mpi was).
Note that vo_vdpau.c seems to have buffered only 1 or 2 frames into the
future, while the remaining 3 or 4 frames were past frames. So the new
code buffers 2 frames (vo_vdpau.c requests this queue length by setting
vo->max_video_queue to 2). It should probably be investigated why
vo_vdpau.c kept so many past frames.
The field vo->redrawing is removed. I'm not really sure what that would
be needed for; it seems pointless.
Future directions include making the interface between playloop and VO
simpler, as well as making rendering a frame a single operation, as
opposed to the weird 3-step sequence of rendering, drawing OSD, and
flipping.
The previous commits changed vo_vdpau so that these options are set by
vf_vdpaupp, and the corresponding vo_vdpau options were ignored. But for
compatibility, keep the "old" options working.
The value of this is questionable - maybe the vo_vdpau options should
just be removed. For now, at least demonstrate that it's possible.
The "deint" suboption still doesn't work, because the framerate doubling
logic required for some deint modes was moved to vf_vdpaupp. This
requires more elaborate workarounds.
This is slightly incomplete: the mixer options, such as sharpen and
especially deinterlacing, are ignored. This also breaks automatic
enabling of deinterlacing with 'D' or --deinterlace. These issues will be
fixed later in the following commits.
Note that we keep all the custom vdpau queue stuff. This will also be
simplified later.
This uses mp_vdpau_mixer_render(). The benefit is that it makes vdpau
deinterlacing just work. One additional minor advantage is that the
video mixer creation code is factored out (although that is a double-
edged sword).
This factors out some code from vo_vdpau.c, especially deinterlacing
handling. The intention is to use this for vo_vdpau.c to make the logic
significantly easier, and to use it for vo_opengl (gl_hwdec_vdpau.c) to
allow selecting deinterlace and postprocessing modes.
As of this commit, the filter actually does nothing, since both vo_vdpau
and vo_opengl treat the generated images as normal vdpau images. This
will change in the following commits.
It might have been nice not to do this so that metadata could
accumulate across seeks, but it seems libavfilter loses its copy
anyway on recreate_graph.
lavfi would segfault due to a NULL dereference if it was asked for its
metadata and none had been allocated (oops). This happens with Libav,
which has no concept of filter metadata.
Commit 5e4e248 added a mp_image_params field to mp_image, and moved many
parameters to that struct. display_w/h was left redundant with
mp_image_params.d_w/d_h. These fields were supposed to be always in
sync, but it seems some code forgot to do this correctly, such as
vf_fix_img_params() or mp_image_copy_attributes(). This led to the
problem in github issue #756, because display_w/_h could become
incorrect.
It turns out that most code didn't use the old fields anyway. Just
remove them. Note that mp_image_params.d_w/d_h are supposed to be always
valid, so the additional checks for 0 shouldn't be needed. Remove these
checks as well.
Fixes #756.
In theory, returning the screenshot with original pixel aspect would
allow avoiding scaling them with image formats that support non-square
pixels, but in practice this isn't used anyway (nothing seems to
understand e.g. jpeg aspect ratio tags).
We only support them for input. The frame properties of output frames
are ignored (except frame durations).
Properties not set for now: _ChromaLocation, _Field, _FieldBased
Set _DurationNum/_DurationDen on each VS frame, instead of
_AbsoluteTime. The duration is the difference between the timestamp of
the frame and the next frame, and when receiving filtered VS frames, we
convert them back to an absolute PTS by summing them.
We pass the timestamps with microsecond resolution. mpv uses double for
timestamps internally, so we don't know the "real" timebase or FPS. VS
on the other hand uses fractions for frame durations. We can't pass
through the numbers exactly, but microseconds ought to be enough to be
even safe from accumulating rounding errors.
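
A sketch of the conversion, assuming the public VSAPI property setters:

    #include <math.h>
    #include <stdint.h>
    #include <VapourSynth.h>

    static int64_t gcd64(int64_t a, int64_t b)
    {
        while (b) { int64_t t = a % b; a = b; b = t; }
        return a;
    }

    // Store a duration in seconds (the difference of two mpv double
    // timestamps) as _DurationNum/_DurationDen in microseconds.
    static void set_vs_duration(const VSAPI *vsapi, VSMap *props, double dur)
    {
        int64_t num = llrint(dur * 1e6), den = 1000000;
        int64_t g = gcd64(num, den);
        if (g > 1) { num /= g; den /= g; }
        vsapi->propSetInt(props, "_DurationNum", num, paReplace);
        vsapi->propSetInt(props, "_DurationDen", den, paReplace);
    }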
Since this leaks video images, and the player keeps feeding new images
to the filter even if it fails, this would probably have disastrous
consequences.
Or in other words, add support for properly draining remaining frames
from video filters. vf_yadif is buffering at least one frame, and the
buffered frame was not retrieved on EOF.
For most filters, ignore this for now, and just adjust them to the
changed semantics of filter_ext. But for vf_lavfi (used by vf_yadif),
real support is implemented. libavfilter handles this simply by passing
a NULL frame to av_buffersrc_add_frame(), so we just have to make
mp_to_av() handle NULL arguments.
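
On the lavfi side this reduces to something like the following (a
sketch; the real mp_to_av() conversion is omitted):

    #include <libavfilter/buffersrc.h>

    // Feeding NULL to the buffer source puts the graph into draining
    // mode, after which av_buffersink_get_frame() still returns the
    // frames a filter like vf_yadif has buffered, and finally AVERROR_EOF.
    static int feed(AVFilterContext *src, AVFrame *frame)
    {
        return av_buffersrc_add_frame(src, frame); // frame == NULL on EOF
    }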
In load_next_vo_frame(), we first try to output a frame buffered in the
VO, then the filter, and then (if EOF is reached and there's still no
new frame) the VO again, with draining enabled. I guess this was
implemented slightly incorrectly before, because the filter chain still
could have had remaining output frames.
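
An outline of that order, with hypothetical helper names:

    // Hypothetical outline of load_next_vo_frame(): VO buffer first,
    // then the filter chain, then the VO again with draining enabled.
    static bool load_next_vo_frame(struct MPContext *mpctx, bool eof)
    {
        if (vo_next_frame(mpctx->video_out, false))
            return true;             // 1. frame already buffered in the VO
        if (filter_output_frame(mpctx))
            return true;             // 2. the filter chain had output left
        if (eof && vo_next_frame(mpctx->video_out, true))
            return true;             // 3. drain the VO on EOF
        return false;
    }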
This extracts the scheduling logic into a single function, which makes it
easier to keep consistent.
Additionally make sure we don't schedule sync operations from a sync operation
itself since that could cause deadlocks (even if it should not be happening
with the current code).
Previously, the window could be made to completely exit the screen with a
combination of moving it close to an edge and halving its size (via cmd+0).
This commit addresses the problem in the simplest way possible by
constraining the window to the closest edge in these edge cases.
This fixes a couple of issues with the Cocoa `--native-fs` mode, primarily:
- A ghost titlebar at the top of the screen in full screen:
This was caused by the window constraining code kicking in during
fullscreen. Simply returning the unconstrained rect from the constraining
method fixes the problem.
- Incorrect behavior when using the titlebar buttons to enter/exit
fullscreen, as opposed to the OSD button.
This was caused by mpv's internal fullscreen state going out of sync with
the NSWindow's one. This was the case because `toggleFullScreen:`
completely bypassed the normal event flow that mpv expects.
Signed-off-by: Ryan Goulden <percontation@gmail.com>
Change style for mpv, simplify and refactor some of the constraining code.
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
I thought the "_" in "_AbsoluteTime" was part of the documentation
markup.
This still doesn't help us with VS filters that change timing;
apparently you must use frame durations instead.
Make the filter apply the pixel aspect ratio of the input to the output.
This is more useful than forcing 1:1 PAR when playing anamorphic video
such as DVDs.
VapourSynth itself actually allows passing through the aspect ratio, but
it's in a not very useful form for us: it's per video-frame instead of
constant (i.e. in VSVideoInfo). As long as we don't have a way to allow a
filter to spontaneously change output parameters, we can't use this.
(And I don't really feel like making this possible.)
This reverts commit 6e34b0ec1f.
There has always been an error message "proxy already has a listener", and
until now I couldn't figure out where it was coming from. The display
interface already has a listener, and we can't overwrite it. Remove the
code and avoid this error message.
Conflicts:
video/out/wayland_common.c
Add the event FD after preinit, remove it before destroy. There's no
need to do it on vo_config, and there's no need to remove the event
FD when vo_config fails.
Unfortunately, if a VO can't display something as intended, all we can
do is complain to the user and leave it at that. But it's still better than
silently displaying things differently with different VOs.
For now, this is used for rotation only. Other things that we should
check include colorspace and colorlevels stuff.
When using rotation with hw decoding, and the VO does not support
rotation, an attempt is made to insert vf_rotate. This will go wrong, and
after that it can't recover because a vf_scale filter was autoinserted.
Just removing all autoinserted filters before reconfig fixes this.
This turned out much more complicated than I thought. It's not just a
matter of adjusting the texture coordinates, but you also have to
consider separated scaling and panscan clipping, which make everything
complicated.
This actually still doesn't clip 100% correctly, but the bug is only
visible when rotating (or flipping with --vf=flip), and using something
like --video-pan-x/y at the same time.
For rotation, we assume that the source image will be rotated within the
VO, so the aspect/panscan code needs to calculate its param using
rotated coordinates. VOs which support rotation natively can use this.
This couldn't rotate by 180°. Add this, and also make the parameter
take degrees, instead of magic numbers.
For now, drop the flipping stuff. You can still flip with --vf=flip or
--vf=mirror. Drop the landscape/portrait stuff - I think this is
something almost nobody will use. If it turns out that we need some of
these things, they can be readded later.
Make it use libavfilter. Its vf_transpose implementation looks pretty
simple, except that it uses slice threading and should be much faster.
Fix all include statements of the form:
#include "libav.../..."
These come from MPlayer times, when FFmpeg was somehow part of the
MPlayer build tree, and this form was needed to prefer the local files
over system FFmpeg.
In some cases, the include statement wasn't needed or could be replaced
with mpv defined symbols.
Not needed anymore. I'm not opposed to having asm, but inline asm is too
much of a pain, and it was planned long ago to eventually get rid of all
inline asm uses.
For the record, the inline asm use that was removed with the previous
commits was almost worthless. It was confined to video filters, and most
video filtering is now done with libavfilter. Some mpv filters (like
vf_pullup) actually redirect to libavfilter if possible.
If asm is added in the future, it should happen in the form of external
files.
No change in speed (or even slightly faster, though I tested with
progressive solid color video only), and normally we use libavfilter's
vf_pullup anyway.
I didn't test the speed, but by default, this filter diverts to
libavfilter already. So this would help only if libavfilter is disabled,
or libavfilter doesn't have vf_noise (like on Libav). For these cases,
we still provide the (possibly but not necessarily) slower C
implementation of vf_noise.
This makes it multiple times slower. However, the output format (packed
YUV) isn't handled efficiently by anything to begin with, and I have no
clue why we even have this filter. I guess it's one of those filters which
find some use sometimes, but are not of higher importance, which
justifies removing the faster inline asm.
This replaces translate_key_input with a solution that gives mpv more
control over how keyboard input is converted to unicode. As a result:
- Key up/down events are generated the same way for all keys.
- Dead keys generate their base character instead of being combined with
the following character.
- Many Ctrl and Ctrl+Alt key combinations that were previously broken
are fixed, since it's possible to discover the base keys.
- AltGr doesn't produce special characters when mp_input_use_alt_gr is
false.
This also fixes some logic to do with detecting AltGr and adds proper
UTF-16 decoding.
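
For reference, the surrogate-pair step of the UTF-16 decoding is tiny (a
sketch, not the exact code):

    #include <stdint.h>

    // Combine a UTF-16 surrogate pair into a Unicode code point. High
    // surrogates are 0xD800-0xDBFF, low surrogates 0xDC00-0xDFFF.
    static uint32_t utf16_combine(uint16_t hi, uint16_t lo)
    {
        return 0x10000 + (((uint32_t)(hi - 0xD800) << 10) |
                          (uint32_t)(lo - 0xDC00));
    }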
This collects statistics and other things. The option dumps raw data
into a file. A script to visualize this data is included too.
Litter some of the player code with calls that generate these
statistics.
In general, this will be helpful to debug timing dependent issues, such
as A/V sync problems. Normally, one could argue that this is the task of
a real profiler, but then we'd have a hard time including extra
information like audio/video PTS differences. We could also just
hardcode all statistics collection and processing in the player code,
but then we'd end up with something like mplayer's status line, which
was cluttered and required a centralized approach (i.e. getting the data
to the status line; so it was all in mplayer.c). Some players can
visualize such statistics on OSD, but that sounds even more complicated.
So the approach added with this commit sounds sensible.
The stats-conv.py script is rather primitive at the moment and its
output is semi-ugly. It uses matplotlib, so it could probably be
extended to do a lot; it's not a dead-end.
qscale export has been completely removed from Libav 10, and FFmpeg has
an alternative API, so this code does nothing and only causes
deprecation warnings on Libav.
We were relying on vsscript_freeScript() to take care of proper
termination. But it doesn't do that: it doesn't wait for the filters to
finish and exit at all. Instead, it just destroys all objects, which
causes the worker threads to crash sometimes.
Also, we're supposed to wait for the frame callback to finish before
freeing the associated node.
Handle this by explicitly waiting as far as we can. Probably fixes
crashes on seeking, although VapourSynth itself might also need some
work to make this case completely stable.
The most user visible change is that "420p" is now displayed as
"yuv420p". This is what FFmpeg uses (almost), and is also less confusing
since "420p" is often confused with "420 pixels vertical resolution".
In general, we return the FFmpeg pixel format name. We still use our own
old mechanism to keep a list of exceptions to provide compatibility for
a while.
Also, never return NULL for image format names. If the format is unset
(0/IMGFMT_NONE), return "none". If the format has no name (probably
never happens, FFmpeg seems to guarantee that a name is set), return
"unknown".
Before this commit, the filter attempted to keep the vsscript state
(p->se) even when the script was reloaded. Change it to destroy the
script state too on reloading. Now no workaround for LoadPlugin is
necessary, and this also fixes a weird theoretical race condition when
destroying and recreating the mpv source filter.
I hate tabs.
This replaces all tabs in all source files with spaces. The only
exception is old-makefile. The replacement was made by running the
GNU coreutils "expand" command on every file. Since the replacement was
automatic, it's possible that some formatting was destroyed (but perhaps
only if it was assuming that the end of a tab does not correspond to
aligning the end to multiples of 8 spaces).
mpv was resizing to the same size before it went to fullscreen. We don't
need to schedule a resize, because the compositor will send a configure
event with the new dimensions, and that's when we should do it.
Mainly meant to apply simple VapourSynth filters to video at runtime.
This has various restrictions, which are listed in the manpage.
Additionally, this actually copies video frames when converting frame
references from mpv to VapourSynth, and a second time when going from
VapourSynth to mpv. This is inefficient and could probably be easily
improved. But for now, this is simpler, and in fact I'm not sure if
we can even reference VapourSynth frames after the core has been
destroyed.
The stats were retrieved and written on every encode call, instead of
every encode call that actually returned a packet. ffmpeg.c also does it
this way, so it must be "more correct". Fixes 2-pass encoding.
We needed this because the OSD rendering path used GBRP for RGB
rendering, and not all swscale versions supported this conversion. But
recently we've dropped support for very old ffmpeg/libav versions, so
this isn't needed anymore.
This might be a good idea in order to prevent queuing a frame too far in
the future (causing apparent freezing of the video display), or dropping
an infinite number of frames (also apparent as freezing).
I think at this point this is most of what we can do if the vdpau time
source is unreliable (like with Mesa). There are still inherent race
conditions which can't be fixed.
The strange thing about this code was the shift parameter of the
prev_vs2 function. The parameter is used to handle timestamps before the
last vsync, since the % operator handles negative values incorrectly.
Most callers set shift to 0, and _usually_ pass a timestamp after the
last vsync. One caller sets it to 16, and can pass a timestamp before
the last vsync.
The mystery is why prev_vs2 doesn't just compensate for the % operator
semantics in the most simple way: if the result of the operator is
negative, add the divisor to it. Instead, it adds a huge value to it
(how huge is influenced by shift). If shift is 0, the result of the
function will not be aligned to vsyncs.
I have no idea why it was written in this way. Were there concerns about
certain numeric overflows that could happen in the calculations? But I
can't think of any (the difference between ts and vc->recent_vsync_time
is usually not that huge). Or is there something more clever about it,
which is important for the timing code? I can't think of anything
either.
So scrap it and simplify it.
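
The simple compensation looks like this (a sketch using the names from
the description above):

    // Align ts to the previous vsync. The C '%' operator can return a
    // negative value when ts is before the last known vsync, so
    // compensate directly instead of adding a huge shift-dependent offset.
    static int64_t prev_vsync(int64_t ts, int64_t recent_vsync_time,
                              int64_t vsync_interval)
    {
        int64_t offset = (ts - recent_vsync_time) % vsync_interval;
        if (offset < 0)
            offset += vsync_interval;
        return ts - offset;             // always vsync-aligned
    }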
vo_vdpau used a somewhat complicated and fragile mechanism to convert
the vdpau time to internal mpv time. This was fragile in that it couldn't
deal well with Mesa's (apparently) random timestamps, which can change
the base offset in multiple situations. It can happen when moving the
mpv window to a different screen, and somehow it also happens when
pausing the player.
It seems this mechanism to synchronize the vdpau time is not actually
needed. There are only 2 places where sync_vdptime() is used (i.e.
returning the current vdpau time interpolated by system time).
The first call is for determining the PTS used to queue a frame. This
also uses convert_to_vdptime(). It's easily replaced by querying the
time directly, and adding the wait time to it (rel_pts_ns in the patch).
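
Schematically (vdp, queue, surface, and rel_pts_ns are assumed names for
the function table, presentation queue, output surface, and computed
wait time):

    // Query the vdpau time directly and add the relative wait, instead
    // of interpolating a cached vdpau time via the system clock.
    VdpTime now; // nanoseconds
    vdp->presentation_queue_get_time(queue, &now);
    VdpTime pts = now + rel_pts_ns;
    vdp->presentation_queue_display(queue, surface, 0, 0, pts);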
The second call is pretty odd: it updates the vdpau time a second time
in the same function. From what I can see, this can matter only if
update_presentation_queue_status() is very slow. I'm not sure what to
make of this, because the call merely queries the presentation
queue. Just assume it isn't slow, and that we don't have to update the
time.
Another potential issue with this is that we call VdpPresentationQueueGetTime()
every frame now, instead of every 5 seconds and interpolating the other
calls via system time. Moreover, this is per video frame (which can
potentially be dropped), and not per actually displayed frame. Assume
this doesn't matter.
This simplifies the code, and should make it more robust on Mesa. But
note that what Mesa does is obviously insane - this is one situation
where you really need a stable time source. There are still plenty of
race condition windows where things can go wrong, although this commit
should drastically reduce the possibility of this.
In my tests, everything worked well. But I have no access to a Mesa
system with vdpau, so it needs testing by others.
See github issues #520, #694, #695.
This commit adds support for automatic selection of color profiles based on
the display where mpv is initialized, and automatically changes the color
profile when the display is changed or the profile itself is changed in
System Preferences.
@UliZappe was responsible for the testing and implementation of a lot of this
commit, including the original implementation of `cocoa_get_icc_profile_path`
(See #594).
Fixes #594
Reduce most dependencies on struct mp_csp_details, which was a bad first
attempt at dealing with colorspace stuff. Instead, consistently use
mp_image_params.
Code which retrieves colorspace matrices from csputils.c still uses this
type, though.
This is pretty obscure, so it didn't matter much. It still breaks
switching output levels at runtime, because the video output is not
reinitialized with the new params.
There were some bad interactions with the OSC.
For one, dragging the OSC bar, and then moving the mouse outside of the
OSC (while mouse button still held) would suddenly initiate window
dragging. This was because win_drag_button1_down was not reset when
sending a normal mouse event, which means the window dragging code can
become active even after we've basically decided that the preceding
click didn't initiate window dragging.
Second, dragging the window and clicking on the OSC bar after that did
nothing. This was because no mouse button up event was sent to the core,
even though a mouse down event was sent. So make sure the key state is
erased with MP_INPUT_RELEASE_ALL.
We don't check whether the WM supports _NET_WM_MOVERESIZE_MOVE, but
if it doesn't, nothing bad happens. There might be a race condition
when pressing a button, and then moving the mouse and releasing the
button at the same time; then the WM might get the message to initiate
moving the window after the mouse button has been released, in which
case the result will probably be annoying. This could possibly be fixed
by sending _NET_WM_MOVERESIZE_CANCEL on button release, but on the
other hand, we probably won't receive a button release event in this
situation, so ignore this problem.
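
For reference, initiating the interactive move follows the EWMH recipe
roughly like this (a sketch; the real code differs in details):

    #include <X11/Xlib.h>

    #define NET_WM_MOVERESIZE_MOVE 8        // EWMH: movement only

    static void wm_start_move(Display *dpy, Window root, Window win,
                              int x_root, int y_root)
    {
        XEvent ev = {0};
        ev.xclient.type = ClientMessage;
        ev.xclient.window = win;
        ev.xclient.message_type =
            XInternAtom(dpy, "_NET_WM_MOVERESIZE", False);
        ev.xclient.format = 32;
        ev.xclient.data.l[0] = x_root;
        ev.xclient.data.l[1] = y_root;
        ev.xclient.data.l[2] = NET_WM_MOVERESIZE_MOVE;
        ev.xclient.data.l[3] = Button1;
        ev.xclient.data.l[4] = 1;           // source: normal application
        XUngrabPointer(dpy, CurrentTime);   // the WM takes over the drag
        XSendEvent(dpy, root, False,
                   SubstructureRedirectMask | SubstructureNotifyMask, &ev);
    }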
The dragging is initiated only when moving the mouse pointer after a
click in order to reduce annoying behavior when the user is e.g.
doubleclicking.
Closes #608.
VAAPI has some ambiguous image formats, like VA_FOURCC_I420,
VA_FOURCC_IYUV, VA_FOURCC_YV12 (the latter exactly the same as the first
two, just with swapped planes). There is potentially a problem when one
specific VAAPI format was picked, and converting it to a mpv format and
back to a VAAPI FourCC would result in a numerically different format
(even if it's actually the same). Then it could e.g. happen that
functions like va_surface_upload() reallocate the underlying VAImage,
which would be inefficient. Change the code so that this can't happen.
(Probably not a problem in practice with the current VAAPI usage.)
It's not really needed to be public. Other code can just use mp_image.
The only disadvantage is that the other code needs to call an accessor
to get the VASurfaceID.
Although I at first thought it would be better to have a separate
implementation for hwaccels because the differences to software images
are too large, it turns out you can actually save some code with it.
Note that the old implementation had a small memory management bug. This
got painted over in commit 269c1e1, but is hereby solved properly.
Also note that I couldn't test vf_vavpp.c (due to lack of hardware), and
I hope I didn't accidentally break it.
The plan is to get rid of the custom VAAPI and possibly VDPAU surface
allocators.
Add custom surface allocation, because hwaccel surfaces are allocated
completely differently from software surfaces.
Add optional LRU allocation, which is (probably) helpful for hwaccel,
but (probably) less optimal for software surfaces.
mp_image_pool_get_no_alloc() is specifically for VAAPI, which can't
allocate new decoder surfaces after decoder init.
They were used by ancient libavcodec versions. This also removes the
need to distinguish vdpau image formats at all (since there is only
one), and some code can be simplified.
Image formats used to be FourCCs, so unsigned int was better. But now
it's annoying and the only difference is that unsigned int is more to
type than int.
Instead of doing it on every seek (libavcodec calls get_format on every
seek), reinitialize the decoder only if the video resolution changes.
Note that this may be relatively naive, since we e.g. (or: in
particular) don't check for profile changes. But it's not worse than the
state before the get_format change, and at least it paints over the
current vaapi breakage (issue #646).
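
A sketch of the resulting get_format callback (the priv fields and the
reinit helper are hypothetical):

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat get_format(struct AVCodecContext *avctx,
                                         const enum AVPixelFormat *fmt)
    {
        struct priv *p = avctx->opaque;
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            if (fmt[i] != p->hwdec_fmt)
                continue;
            // Reinit only on resolution changes, not on every seek.
            // (Profile changes are not checked, as noted above.)
            if (avctx->width != p->init_w || avctx->height != p->init_h) {
                reinit_hw_decoder(p, avctx);    // hypothetical helper
                p->init_w = avctx->width;
                p->init_h = avctx->height;
            }
            return fmt[i];
        }
        return avcodec_default_get_format(avctx, fmt); // software fallback
    }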
This follows the (only slowly progressing) plan to replace all internal
video filters with libavfilter.
All that's left in vf_gradfun.c is the weird wrapper around vf_lavfi.c.
This "sometimes" crashed when seeking. The fault apparently lies in
libavcodec: the decoder returns an unreferenced frame! This is
completely insane, but somehow I'm apparently still expected to
work around this. As a reaction, I will drop Libav 9 support in the
next commit. (While this commit will go into release/0.3.)
The window doesn't receive a WM_LBUTTONUP message after it's dragged,
probably because it's swallowed by the modal loop. To stop the button
from sticking, release it manually when the drag is complete.
Mouse buttons can get stuck down if the button is pressed inside the
video window and released outside. Avoid this by capturing mouse input
when a button is pressed.
Apparently the "right" place to initialize the hardware decoder is in
the libavcodec get_format callback.
This doesn't change vda.c and vdpau_old.c, because I don't have OSX, and
vdpau_old.c is probably going to be removed soon (if Libav ever manages
to release Libav 10). So for now the init_decoder callback added with
this commit is optional.
This also means vdpau.c and vaapi.c don't have to manage and check the
image parameters anymore.
This change is probably needed for when libavcodec's VDA support gets a
new iteration of its API.
This updates the logic for the new, somewhat unified behavior of SRGB
and 3DLUT since 34bf9be (not that it was particularly correct even after
that change), and checks for the presence of corresponding extensions only in
the cases in which they're needed.
This commit:
- Changes some of the #define and variable names for clarification and
adds comments where appropriate.
- Unifies :srgb and :icc-profile, making them fit into the same step of
the decoding process and removing the weird interactions between both
of them.
- Makes :icc-profile take precedence over :srgb (to significantly reduce
the number of confusing and useless special cases)
- Moves BT709 decompanding (approximate or actual) to the shader in all
cases, making it happen before upscaling (instead of the old 0.45
gamma function). This is the simpler and more proper way to do it.
- Enables the approx gamma function to work with :srgb as well due to
this (since they now share the gamma expansion code).
- Renames :icc-approx-gamma to :approx-gamma since it is no longer tied
to the ICC options or LittleCMS.
- Uses gamma 2.4 as input space for the actual 3DLUT; this is now a
pretty arbitrary factor, but I picked 2.4 mainly because a higher pure
power value here seems to produce visually better results with wide
gamut profiles, rather than the previous 1.95 or BT.709.
- Adds the input gamma space to the 3dlut cache header in case we change
it more in the future, or even make it user customizable (though I
don't see why the latter would really be necessary).
- Fixes the OSD's gamma when using :srgb, which was previously still
using the old (0.45) approximation in all cases.
- Updates documentation on :srgb, it was still mentioning the old
behavior from circa a year ago.
This commit should serve to open up the CMS/shader code and make it much
more accessible and less confusing/error-prone, and simultaneously also
improve the performance of 3DLUTs with wide gamut color spaces.
I would have liked to make it more modular, but almost all of these
changes are interdependent, save for the documentation updates.
Note: Right now, the "3DLUT takes precedence over SRGB" logic is just
coded into gl_lcms.c's compile_shaders function. Ideally, this should be
done earlier, when parsing the options (by overriding the actual
opts.srgb flag), and a warning should be output to the user.
Note: I'm not sure how well this works together with real-world
subtitles that may need to be color corrected as well. I'm not sure
whether :approx-gamma needs to apply to subtitles as well. I'll need to
test this on proper files later.
Note: As of now, linear light scaling is still intrinsically tied to
either :srgb or :icc-profile. It would be thinkable to have this as an
extra option, :linear-scaling or similar, that could be used with or
without the two color management options.
Since the AO will run in a thread, and there's lots of shared state with
encoding, we have to add locking.
One case this doesn't handle correctly are the encode_lavc_available()
calls in ao_lavc.c and vo_lavc.c. They don't do much (and usually only
to protect against doing --ao=lavc with normal playback), and changing
it would be a bit messy. So just leave them.
The previous version of the gamma suboption was pretty useless. It could
be used to disable delayed gamma enabling, which is a mechanism to avoid
having to adjust gamma in the shader by default.
Repurpose the suboption and allow setting an exact gamma value with it.
You can already override gamma with the --gamma option as well as the
gamma input property, but these use a weird curve to create the
impression of a linear perceived brightness change when changing the
value. This suboption now allows setting an exact gamma value.
This used to be absolute colorimetric, but relative colorimetric is a
saner default due to the arguments presented in issue #595.
A short summary: In general it doesn't affect much because our eyes
adapt to the white point either way, but if running in windowed mode it
would make the whites seem inconsistent/tinted. For fullscreen
projection it's also undesirable since it reduces the dynamic range
without much benefit (again, since our eyes adapt either way) and it
also breaks calibration against ambient lighting.
This shouldn't change much, since most profile types that aren't 3DLUTs
aren't capable of either of those transforms, and most displays are
calibrated against D65 (same as BT.709 source) either way.
This uses the value of 1.95 as an approximation for the exact gamma
curve, which replicates the behavior of popular video software including
anything in the Apple ecosystem, as per issue #534.
This is the same issue as addressed by 257d9f1, except this time for
the :srgb option as well. (257d9f1 only addressed :icc-profile)
The conditions of the srgb_compand mix() call are also flipped to
prevent an off-by-one error.
I was unhappy with the old way of handling buffers, especially resizing. But my
original plan to use wl_shm_pool_resize wasn't as good as I initially thought.
I might get back to it.
With the new buffer pools it is now possible to select triple buffering. The
buffer pools are also needed for the upcoming subsurfaces for OSD and subtitles.
I hope this change was worth it.
I could not see any difference whatsoever, but for usage with a 3DLUT
there's zero performance difference, so we might as well follow the spec
to the letter.
Legacy GL context creation (glXCreateContext) explicitly requires an X
visual, while the modern one (glXCreateContextAttribsARB) does not for
some reason. So fail only on the legacy code path if we don't find a
visual. Note that vo_x11_config_vo_window() will select a default visual
if a NULL visual is passed to it.
This fixes issue #504. For some reason, glXChooseFBConfig() will return
a fbconfig with no associated visual. (I'm not sure if this is allowed.
They don't always have a visual, but since GLX_X_RENDERABLE is set
and GLX_DRAWABLE_TYPE is (implicitly) set to GLX_WINDOW_BIT, why would
there be no visual?)
Even worse, a test program seems to show that a 16 bit fbconfig is
selected (instead of 24/32 bit), which doesn't sound nice at all. Since
there _are_ better fbconfigs available, glXChooseFBConfig() should
normally sort them by quality, and return the better ones first. It's
worth noting that this function should also prefer GLX_TRUE_COLOR
over anything else, although this comes last in the sort order.
Whatever is going on, requesting GLX_X_VISUAL_TYPE with GLX_TRUE_COLOR
seems to fix it.
This was done incorrectly in the previous commit: the fallback size used
the window size as requested with the first config call, which is the
size of the hidden window in the vo_opengl case. (That damn hidden
window again...)
This code essentially does nothing. As far as I could find out, this
actually used to do something. Then it was removed with commit efe7c39f,
leaving some leftover code that didn't do anything useful. This happened
12 years ago!
Also remove a commented debug printf.
vo_opengl creates a hidden X11 window to probe the OpenGL context. It
must do that before creating a visible window, because VO creation and
VO config are separate phases.
There's a race condition involving the hidden window: when starting with
--fs, and then leaving fullscreen, the unfullscreened window is
sometimes set to the aspect ratio of the hidden window. I'm not sure why
the window size itself uses the correct size (but corrupted by the wrong
aspect), but that's perhaps because the window manager is free to ignore
the size hint while honoring the aspect, or something equally messed up.
It turns out this happens because x11_common.c thinks the size of the
hidden window is the size of the unfullscreened window. This in turn
happens because vo_x11_update_geometry() reads the size of the hidden
window when called in vo_x11_fullscreen() (called from
vo_x11_config_vo_window()) when mapping the fullscreen window. At that
point, the window could be mapped, but not necessarily. If it's not
mapped, it will get the size of the unfullscreened window... I think.
One could fix this by actively waiting until the window is mapped. Try
to pick a less hacky approach instead, and never read the window size
until MapNotify is received.
vo_x11_create_window() needs a hack, because we'd possibly set the VO's
size to 0, which would e.g. cause vdpau to fail initialization. (It'll print
error messages until a proper resize is received.)
Larger sizes can introduce overflows, depending on the image format. In
the worst case, something larger than 16000x16000 with 8 bytes per pixel
will overflow 31 bits.
Maybe there should be a proper failure path instead of a hard crash, but
not yet. I imagine anything that sets a higher image size than a known
working size should be forced to call a function to check the size (much
like in ffmpeg/libavutil).
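
Such a check could look like this (a sketch in the spirit of libavutil's
av_image_check_size()):

    #include <stdbool.h>
    #include <stdint.h>

    // Reject sizes whose allocation could overflow 31 bits. With 8
    // bytes per pixel, 16000 * 16000 * 8 is still just below 2^31;
    // anything much larger is not.
    static bool mp_image_check_size(int w, int h, int max_bytes_per_pixel)
    {
        if (w <= 0 || h <= 0 || w > 16000 || h > 16000)
            return false;
        return (int64_t)w * h * max_bytes_per_pixel < ((int64_t)1 << 31);
    }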
RGB565 is one of the fastest and most supported formats on low end consumer
devices, but ffmpeg spams warnings when using it. Make it opt-in instead of
opt-out.
The problem seems to have solved itself. I guess the previous changes to
resizing and commit ba101ab made this possible. Consider me happy for removing
that crap.
Still untested, because now it crashes inside of libSDL for unknown
reasons. (This also happens with mpv git from yesterday - probably an
installation problem, or SDL doing weird things it shouldn't be doing.)
The main difference between the old and new callbacks is that the old
callbacks required passing the window size, which is and always was very
inconvenient and confusing, since the window size is already in
vo->dwidth and vo->dheight.
Rename vo_get_src_dst_rects() to mp_get_src_dst_rects() and make it
independent from the VO (it takes a comical amount of parameters now to
pass all required state). Add a convenience wrapper with the name
vo_get_src_dst_rects() to vo.c. Replace all aspdat and vo usages with
immediate parameters.
Functionally, nothing should change, except that the window size is
clamped to a minimum of size 1 much earlier, and some log messages
change the prefix (don't bother with vo.vo_log stuff).
The plan is to make all the code in aspect.c independent from vo.c,
which should make the code easier to understand, will allow removal of
vo->aspdat, and reduces the amount of code that accesses weird mutable
struct vo fields.
vo->aspdat is basically an outdated version of vo->params, plus some
weirdness. Get rid of it, which will allow further cleanups and which
will make multithreading easier (less state to care about).
Also, simplify some VO code by using mp_image_set_attributes() instead
of caring about display size, colorspace, etc. manually. Add the
function osd_res_from_image_params(), which is often needed in the case
OSD renders into an image.
Do two things:
1. add locking to struct osd_state
2. make struct osd_state opaque
While 1. is somewhat simple, 2. is quite horrible. Lots of code accesses
lots of osd_state (and osd_object) members. To make sure everything is
accessed synchronously, I prefer making osd_state opaque, even if it
means adding pretty dumb accessors.
All of this is meant to allow running VO in their own threads.
Eventually, VOs will request OSD on their own, which means osd_state
will be accessed from foreign threads.
This is a bit of a hack, but in order to prevent TranslateMessage from
seeing WM_KEYDOWN messages that we already know how to decode, move the
decoding logic to the event loop. This should fix #476, since it stops
the generation of extraneous WM_CHAR messages that were triggering more
than one action on keydown.
Doesn't make any sense anymore. X11 (which was mentioned in the manpage)
autodetects it, and everything else ignored the option values.
Since for incomprehensible reasons the backends and vo.c still need to
exchange information about the screensize using the option fields,
they're not removed yet.
For some reason, this made all VO backends both set the screen
resolution in opts->screenwidth/height, and call
aspect_save_screenres(). Remove the latter. Move the code to calculate
the PAR-corrected window size from aspect.c to vo.c, and make it so that
the monitor PAR is recalculated when it makes sense.