This code calculates the source/display video rectangle for scaling with
most VOs. It's responsible for clipping the display rectangle against
the screen and adjusting the source rectangle accordingly.
Until now, it assumed that the video was centered on the screen. Change
this so that any rectangle is possible. Basically, the clipping is now
done separately for each of the two opposite sides (e.g. left and right),
instead of handling both at the same time.
The rounding behavior slightly changes. It seems to be slightly better
than before. On the other hand, the video is not strictly centered
anymore (due to different rounding on either side). When using panscan
controls, the video can "jitter" by 1 or 2 pixels around the center as
the panscan value is changed.
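As a rough sketch of the idea (identifiers are made up, not the actual
code), per-side clipping on one axis could look like this:

    #include <math.h>

    /* Clip one axis of the display rectangle [*d0, *d1) against the
     * screen [scr0, scr1), handling each side independently, and move
     * the matching source edge by the clipped amount converted to
     * source coordinates. The independent per-side rounding is also
     * why the image can jitter slightly around the center. */
    static void clip_axis(int scr0, int scr1, int *d0, int *d1,
                          int *s0, int *s1)
    {
        double scale = (double)(*s1 - *s0) / (*d1 - *d0);
        if (*d0 < scr0) {
            *s0 += lrint((scr0 - *d0) * scale);
            *d0 = scr0;
        }
        if (*d1 > scr1) {
            *s1 -= lrint((*d1 - scr1) * scale);
            *d1 = scr1;
        }
    }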
When the displayed image is cropped in Y direction (like using panscan
controls when playing 4:3 video on a 16:9 monitor), and separated
scaling is used, the texture size for the FBO holding the intermediate
result was calculated incorrectly. This could lead to artifacts, which
were quite apparent with extreme scale factors.
Actually, the size of that texture is OK, but the texture shouldn't be
used to hold the complete scaled image. Instead, it should be used for
the visible part of the image only. Because separate scaling works by
scaling in Y direction first, it's still fine to scale the image on the
full image width on the first pass. This helps avoid artifacts on
the left/right border of the image when scaling in X direction, as the
scaler will try to fetch pixels from beyond the border. (The left border
is still kind of fine, but the right border will fetch garbage, unless
the texture is strictly sized, or explicit clamping is added to the
shader. Too much trouble, so using the full image width is simpler.)
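A minimal sketch of the resulting sizing logic (identifiers are
illustrative, not the actual ones):

    /* The first pass scales in Y only, so the FBO keeps the full source
     * width (avoids the X pass fetching garbage past the right border),
     * while its height only needs to cover the visible part. */
    static void fbo_size(int src_w, int dst_visible_h,
                         int *fbo_w, int *fbo_h)
    {
        *fbo_w = src_w;         /* full image width for the Y pass */
        *fbo_h = dst_visible_h; /* only the visible portion */
    }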
Also fix some issues with no-npot mode, which enables use of power-of-2
textures. Maybe this mode isn't really useful anymore (modern hardware
is faster with smaller non-power-of-2 textures), but keep it for now.
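For reference, no-npot mode implies rounding texture sizes up with the
usual generic helper (shown for illustration):

    /* Round x up to the next power of two, for hardware/modes that
     * only allow power-of-2 texture sizes. */
    static int next_pow2(int x)
    {
        int r = 1;
        while (r < x)
            r *= 2;
        return r;
    }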
This is quite similar to the previous commit.
Untested. I'm not sure if this is how it's supposed to work. At least
--no-stop-screensaver should work in any case.
Use the recently introduced screensaver VOCTRLs to control the
screensaver in the X11 backend. This means the behavior when paused
changes: the old code always kept the screensaver disabled, but now the
screensaver is reenabled on pausing.
Rename the --stop-xscreensaver option to --stop-screensaver and make it
more generic. Now it affects all backends that respond to the
screensaver VOCTRLs.
This is slightly better because VOCTRL_RESUME/VOCTRL_PAUSE are usually
needed by VOs to know whether video is actually being played (for
whatever reason), and they wouldn't be passed to the backend's VOCTRL
handler, like vo_x11_control().
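As a hedged sketch (the VOCTRL names and the helper function are
assumptions based on this description, not the actual code), the backend
side amounts to something like:

    #include <stdbool.h>

    enum {
        VOCTRL_KILL_SCREENSAVER = 1, /* assumed names, see above */
        VOCTRL_RESTORE_SCREENSAVER,
    };

    static void set_screensaver(bool enabled) { /* talk to X11 here */ }

    static int x11_screensaver_control(int request)
    {
        switch (request) {
        case VOCTRL_KILL_SCREENSAVER:    /* playing: keep screen on */
            set_screensaver(false);
            return 1;
        case VOCTRL_RESTORE_SCREENSAVER: /* paused: let it kick in */
            set_screensaver(true);
            return 1;
        }
        return 0;
    }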
Also try to make sure that these flags (both pause state and screensaver
state) are set consistently in some corner cases. For example, it seems
enabling video in the middle of playing a file while the player is
paused would not set the paused flag.
If codec initialization fails, destroy the VO instead of keeping it
around to make sure the state is consistent.
Framestepping is implemented by unpausing the player for the duration of
a frame. Remove the special handling of VOCTRL_PAUSE/RESUME in these
cases. It was most likely needed because these VOCTRLs used to be
important for screen redrawing (blatant guess), which is now handled
completely differently. The only potentially bad side-effect is that the
screensaver will be disabled/reenabled for the duration of one frame.
`enterFullScreenMode:withOptions:` creates another window, so set the level on
both the windowed window and the current window.
Also remove NSFullScreenModeWindowLevel as it seems to be superfluous.
Now it properly hits the "0 times displayed" case when frames get
skipped; this means the candidate frame for the case that the next frame is
"long" is set properly.
Now that Cocoa's input handling is done on a separate thread from the playloop
it is ridiculously simple to have longer asynchronous sleeps when paused.
On OSX with Cocoa enabled keyDown events are now handled with
addLocalMonitorForEventsMatchingMask:handler:. This makes it possible to
respond to events even when there is no VO initialized but the GUI is focused.
Originally, the header wasn't supposed to contain random compatibility
stuff, but now all that is printed with -v. Add a hack to skip it and
to reduce the noise.
This could lead to quite visible artifacts when using an appropriate ICC
profile and float FBOs. The float FBOs allow storing out-of-range values,
and my guess is that the rest of the processing chain elevated these out
of range values, resulting in artifacts.
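The description suggests the fix amounts to clamping values back into the
nominal range before they are processed further; purely as a hedged
illustration of that idea (not necessarily the actual change):

    #include <math.h>

    /* Clamp a color component into [0, 1], so out-of-range values that
     * a float FBO can store don't get amplified by later processing
     * steps. */
    static float clamp01(float v)
    {
        return fminf(fmaxf(v, 0.0f), 1.0f);
    }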
Autohide the menubar and/or dock only if they are present in the screen the
player is going to go fullscreen into. I thought the GUI would handle this for
me when I switched 0057aa476 but lack of hardware to test made me embarrass
myself yet again.
I reimplemented this feature with nicer code and behaviour. The code now
checks separately whether to hide the menubar and the dock, while the old
code used a single check, possibly hiding stuff without need.
Added the key checks as category additions to NSScreen for readability.
This takes an approach similar to the wayland OpenGL backend. The
VOFLAG_HIDDEN flag doesn't actually mean "hide the window"; it is only ever
used to detect the available OpenGL extensions. On OSX it's possible to
accomplish this task just by creating the OpenGL context without attaching
it to a drawable.
Using `enterFullScreenMode:withOptions:` with a screen handle other than the
current screen doesn't hide the current window in the current screen. This is
a bug in Cocoa (preparing an isolated test case and sending the rdar later).
To work around this, manually hide/show the window that the toolkit should
hide/show for us.
This bug was the result of crappy position detection in the previous code
combined with the commits moving the autohide delay out of the cocoa backend
and
into the core.
The hit detection was improved and now also takes into account interactions
with the Dock and Menubar. Moreover, VOCTRL_SET_CURSOR_VISIBILITY now has an
effect only if the mouse position matches this improved hit detection. This
means that interactions with the Dock and Menubar are considered, as well as
moving the mouse inside another screen.
This removes a bit of ugly code and bookkeeping, which is never bad.
`drawRect` needs to guard against different window instances, since in
fullscreen the view is wrapped in a fullscreen window provided by the
toolkit (an instance of NSFullScreenWindow to be precise).
The event handling was moved to the view so that it can still get all the
events when in the fullscreen window. Ideally these should be moved to
some NSResponder subclass within macosx_application and made available even
when no window is present. I refrained from this because "small steps".
At this point 10.6 is pretty old and we don't want to support old platforms.
I'm killing all the 10.6 compatibility code before doing more refactorings.
Next commits will also use newer Objective-C syntax such as literals and
@autoreleasepool.
0 is invalid. The intention of the code is to turn off any additional
alignment, so we need 1.
Change a comment: obviously we don't try to set alignment parameters
etc. to handle stride correctly, and instead do everything by row.
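Concretely, the relevant call is just:

    #include <GL/gl.h>

    /* GL_UNPACK_ALIGNMENT accepts 1, 2, 4, or 8; 0 is invalid. Passing
     * 1 turns any extra row alignment off (the GL default is 4). */
    static void disable_unpack_alignment(void)
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    }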
This probes and prints the depth of some texture formats with the help
of an FBO. By default it tests the format used for scaling, as well as
the format used for dithering and the 3D LUT (if any of these are
enabled).
The output is visible only with -v. Some representative values are
probed, and the difference of input and output value is printed as hex-
float. Hex-floats are used because they make the implied precision more
obvious. Originally I wanted to do some more sophisticated guessing of
the implied depth/precision for more user-friendly reporting, but then
I decided that printing raw data is better for debugging, especially if
things go wrong.
This does not try to disable any functionality and does not print any
warnings if the depth is lower than what it should be.
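A rough sketch of the probing loop (the render/readback helper is
assumed, not the actual implementation):

    #include <stdio.h>

    float render_and_readback(int internal_format, float value);

    /* Push a few representative values through an FBO with the format
     * under test and print input vs. output as hex-floats ("%a"),
     * which make the effective precision directly visible. */
    static void probe_format(int internal_format)
    {
        static const float probes[] = {0.0f, 1.0f / 255.0f, 0.5f, 1.0f};
        for (int i = 0; i < 4; i++) {
            float out = render_and_readback(internal_format, probes[i]);
            printf("%a -> diff %a\n", (double)probes[i],
                   (double)(out - probes[i]));
        }
    }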
This might be better with dumb shader compilers, which won't vectorize
this to a single vector-division, assuming the hardware does have such
an instruction. Affects "bicubic_fast" scale mode only.
The internal texture format GL_RED is typically 8 bit, which is clearly
not good enough for the new dither matrix. The idea was to use a float
texture format, but this was somehow "forgotten". Use GL_R16, since
16 bit textures are more robust, and provide more precision for the
same memory usage.
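The upload amounts to something like this (variable names illustrative):

    #include <GL/gl.h>

    /* Store the float dither matrix in a 16 bit normalized
     * single-channel texture (GL_R16): enough precision for the
     * matrix, unlike plain GL_RED, which is typically 8 bit. */
    static void upload_dither_matrix(const float *matrix, int size)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, size, size, 0,
                     GL_RED, GL_FLOAT, matrix);
    }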
Change how the offset for centering the dither matrix is applied. This
is needed for making it possible to round up values to the target depth.
Before this commit, this changed the output even if the input was exact
and input and output depth were the same, which is not really what you
want. Now it doesn't do that anymore.
There's no point duplicating all the text that is already in the man
pages, and synchronizing them is a pain. Place a link to the github
generated pages instead.
Unfortunately, the anchor '#vo-opengl' does not work. Maybe github's
rst converter just sucks, as the actually generated HTML contains
links using that anchor too, but does not generate the anchor itself.
Too bad.
Use a different algorithm to generate the dithering matrix. This
looks much better than the previous ordered dither matrix with its
cross-hatch artifacts.
The matrix generation algorithm as well as its implementation was
contributed by Wessel Dankers aka Fruit. The code in dither.c is
his implementation, reformatted and with static global variables
removed by me.
The new matrix is uploaded as a float texture - before this commit, it
was a normal integer fixed point matrix. This means dithering will
be disabled on systems without float textures.
The size of the dithering matrix can be configured, as the matrix is
generated at runtime. The generation of the matrix can take rather
long, and is already unacceptable with size 8. The default is at 6,
which takes about 100 ms on a Core2 Duo system with dither.c compiled
at -O2, which I consider just about acceptable.
The old ordered dithering is still available and can be selected by
using the dither=ordered sub-option. The ordered dither matrix
generation code was moved to dither.c. This function was originally
written by Uoti Urpala.
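For reference, the textbook recursive construction of an ordered (Bayer)
matrix looks like this; shown for illustration, not necessarily the exact
code in dither.c:

    /* Build a size x size Bayer matrix in-place (size must be a power
     * of two). Each step expands the current block b into
     * [[4b, 4b+2], [4b+3, 4b+1]]. */
    static void make_ordered_matrix(int *m, int size)
    {
        m[0] = 0;
        for (int sz = 1; sz < size; sz *= 2) {
            for (int y = 0; y < sz; y++) {
                for (int x = 0; x < sz; x++) {
                    int v = m[y * size + x] * 4;
                    m[y * size + x] = v;
                    m[y * size + x + sz] = v + 2;
                    m[(y + sz) * size + x] = v + 3;
                    m[(y + sz) * size + x + sz] = v + 1;
                }
            }
        }
    }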
GetTimer() is generally replaced with mp_time_us(). Both calls return
microseconds, but the latter uses int64_t, is defined to never wrap,
and never returns 0 or negative values.
GetTimerMS() has no direct replacement. Instead the other functions are
used.
For some code, switch to mp_time_sec(), which returns the time as double
float value in seconds. The returned time is offset to program start
time, so there is enough precision left to deliver microsecond
resolution for at least 100 years. Unless it's cast to a float
(or the CPU reduces precision), which is why we still use mp_time_us()
out of paranoia in places where precision is clearly needed.
Always switch to the correct type. The whole point of the new timer
calls is that they don't wrap, and storing microseconds in unsigned int
variables would negate this.
In some cases, remove wrap-around handling for time values.
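Typical usage after the change (prototypes assumed from the description:
microseconds as int64_t, seconds as double relative to program start):

    #include <inttypes.h>
    #include <stdio.h>

    int64_t mp_time_us(void); /* assumed prototypes, see above */
    double mp_time_sec(void);

    static void do_work(void) { /* hypothetical workload */ }

    static void time_something(void)
    {
        int64_t start = mp_time_us();
        do_work();
        printf("took %"PRId64" us, now at %f s\n",
               mp_time_us() - start, mp_time_sec());
    }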
This was used by some VOs to do timing of cursor autohiding, but we
recently moved that out of the VOs. Even though this mechanism might
be a good idea and could be needed again in the future (but for what?),
it's unused now. So better just get rid of it.