The use of filters prior to PNG compression can greatly improve
compression ratio, with "mixed" (ImageMagick calls it "adaptive")
typically achieving the best results.
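For illustration only (not necessarily the code path this change touches), selecting the "mixed" filter heuristic through libavcodec's PNG encoder could look roughly like this; the "pred" private option name is an assumption about the FFmpeg encoder, not something taken from this change:

```c
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

// Sketch: open libavcodec's PNG encoder with per-row ("mixed") filter
// selection. Error handling mostly omitted; the "pred" option name is
// assumed.
static AVCodecContext *open_png_encoder(int w, int h)
{
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_PNG);
    AVCodecContext *avctx = avcodec_alloc_context3(codec);
    if (!avctx)
        return NULL;
    avctx->width = w;
    avctx->height = h;
    avctx->pix_fmt = AV_PIX_FMT_RGB24;
    avctx->time_base = (AVRational){1, 25};
    // "mixed" picks the best filter per row, which usually gives the
    // best compression ratio.
    av_opt_set(avctx->priv_data, "pred", "mixed", 0);
    if (avcodec_open2(avctx, codec, NULL) < 0) {
        avcodec_free_context(&avctx);
        return NULL;
    }
    return avctx;
}
```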
This is quite similar to the previous commit.
Untested. I'm not sure if this is how it's supposed to work. At least
--no-stop-screensaver should work in any case.
Use the recently introduced screensaver VOCTRLs to control the
screensaver in the X11 backend. This means the behavior when paused
changes: the old code always kept the screensaver disabled, but now the
screensaver is reenabled on pausing.
Rename the --stop-xscreensaver option to --stop-screensaver and make it
more generic. Now it affects all backends that respond to the
screensaver VOCTRLs.
This is slightly better because VOCTRL_RESUME/VOCTRL_PAUSE are usually
needed by VOs to know whether video is actually being played (for
whatever reason), and they wouldn't be passed to the backend's VOCTRL
handler, like vo_x11_control().
Also try to make sure that these flags (both pause state and screensaver
state) are set consistently in some corner cases. For example, it seems
enabling video in the middle of playing a file while the player is
paused would not set the paused flag.
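As a rough sketch of the mechanism (the VOCTRL names are assumed to be VOCTRL_KILL_SCREENSAVER/VOCTRL_RESTORE_SCREENSAVER, and the helpers are hypothetical), a backend VOCTRL handler like vo_x11_control() would react along these lines:

```c
// Hedged sketch of a backend VOCTRL handler; backend_disable_screensaver()
// and backend_enable_screensaver() are hypothetical helpers standing in
// for the X11-specific calls.
static int example_control(struct vo *vo, int request, void *arg)
{
    switch (request) {
    case VOCTRL_KILL_SCREENSAVER:
        backend_disable_screensaver(vo);
        return VO_TRUE;
    case VOCTRL_RESTORE_SCREENSAVER:   // sent e.g. when playback pauses
        backend_enable_screensaver(vo);
        return VO_TRUE;
    }
    return VO_NOTIMPL;
}
```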
If codec initialization fails, destroy the VO instead of keeping it
around to make sure the state is consistent.
Framestepping is implemented by unpausing the player for the duration of
a frame. Remove the special handling of VOCTRL_PAUSE/RESUME in these
cases. It was most likely needed because these VOCTRLs used to be
important for screen redrawing (blatant guess), which is now handled
completely differently. The only potentially bad side-effect is that the
screensaver will be disabled/reenabled for the duration of one frame.
`enterFullScreenMode:withOptions:` creates another window, so set the level on
both the windowed window and the current window.
Also remove NSFullScreenModeWindowLevel as it seems to be superfluous.
Now it properly hits the "0 times displayed" case when frames get
skipped; this means the candidate frame for the case that the next frame
is "long" is set properly.
This branch heavily refactors the subtitle code (both loading and
rendering), and adds support for a few new formats through FFmpeg.
We don't remove any of the old code yet. There are still some subtleties
related to subreader.c to be resolved: code page detection & conversion,
timing post-processing, UTF-16 subtitle support, support for the -subfps
option. Also, SRT reading and loading ASS via libass should be turned
into proper demuxers. (SRT is needed because Libav's is gravely broken,
and we want ASS loading via libass to cover full libass format support.
Both should be demuxers which are probed _before_ libavformat, so that
all subtitles can be loaded through the demuxer infrastructure, and
libavformat subtitles don't need to be treated in a special way.)
Audio and video had their own (very similar) functions to initialize an
AVPacket (ffmpeg's packet struct) from a demux_packet (mplayer's packet
struct). Add a common function for these.
Also use this function for sd_lavc_conv. This is actually a functional
change, as some libavformat subtitle demuxers add weird out-of-band
stuff as side-data.
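A minimal sketch of the idea (the real demux_packet and the real helper have more fields and also handle timestamps and side data; the names here are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>
#include <libavcodec/avcodec.h>

// Simplified stand-in for mplayer's packet struct.
struct demux_packet {
    unsigned char *buffer;
    size_t len;
    bool keyframe;
};

// One shared helper instead of near-identical audio and video copies.
static void init_av_packet(AVPacket *pkt, struct demux_packet *mpkt)
{
    av_init_packet(pkt);
    pkt->data = mpkt ? mpkt->buffer : NULL;
    pkt->size = mpkt ? (int)mpkt->len : 0;
    pkt->flags = (mpkt && mpkt->keyframe) ? AV_PKT_FLAG_KEY : 0;
}
```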
Now that Cocoa's input handling is done on a separate thread from the playloop
it is ridiculously simple to have longer asynchronous sleeps when paused.
On OSX with Cocoa enabled keyDown events are now handled with
addLocalMonitorForEventsMatchingMask:handler:. This makes it possible to respond
to events even when there is no VO initialized but the GUI is focused.
Originally, the header wasn't supposed to contain random compatibility
stuff, but now all that is printed with -v. Add a hack to skip it and
to reduce the noise.
This could lead to quite visible artifacts when using an appropriate ICC
profile and float FBOs. The float FBOs allow storing out of range values, and
my guess is that the rest of the processing chain elevated these out of
range values, resulting in artifacts.
Autohide the menubar and/or dock only if they are present on the screen the
player is going to go fullscreen into. I thought the GUI would handle this for
me when I made the switch in 0057aa476, but lack of hardware to test on made me
embarrass myself yet again.
I reimplemented this feature with nicer code and behaviour. The code now checks
separately whether to hide the menubar and the dock, while the old code used
a single check, possibly hiding stuff without need.
Added the key checks as some category additions to NSScreen for readability.
This takes an approach similar to the wayland OpenGL backend. The VOFLAG_HIDDEN
flag doesn't mean "hide the window"; it is only ever used to detect the
available OpenGL extensions. On OSX it's possible to accomplish this just by
creating the OpenGL context without attaching it to a drawable.
Using `enterFullScreenMode:withOptions:` with a screen handle other than the
current screen doesn't hide the current window in the current screen. This is a
bug in Cocoa (preparing an isolated test case and sending the rdar later).
To work around this, manually hide/show the window that the toolkit should
hide/show for us.
This bug was the result of crappy position detection in the previous code
combined with the commits moving autohide delay out of the cocoa backend and
into the core.
The hit detection was improved and now also takes into account interactions
with the Dock and Menubar. Moreover, VOCTRL_SET_CURSOR_VISIBILITY now has an
effect only if the mouse position matches this improved hit detection. This
means that interaction with the Dock and Menubar is considered, as well as
moving the mouse into another screen.
This removes a bit of ugly code and bookkeeping, which is never bad. `drawRect`
needs to guard against different window instances, since in fullscreen the view
is wrapped in a fullscreen window provided by the toolkit (an instance of
NSFullScreenWindow to be precise).
The event handling was moved to the view so that it can still get all the
events when in the fullscreen window. Ideally these should be moved to
some NSResponder subclass within macosx_application and made available even
when no window is present. I refrained from this because "small steps".
At this point 10.6 is pretty old and we don't want to keep supporting old platforms.
I'm killing all the 10.6 compatibility code before doing more refactorings.
Next commits will also use newer Objective-C syntax such as literals and
@autoreleasepool.
0 is invalid. The intention of the code is to turn off any additional
alignment, so we need 1.
Change a comment: obviously we don't try to set alignment parameters
etc. to handle stride correctly, and instead do everything by row.
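Concretely, this boils down to something like the following (a trivial sketch):

```c
#include <GL/gl.h>

// GL_UNPACK_ALIGNMENT accepts only 1, 2, 4 or 8; 0 is a GL error.
// 1 disables any extra row padding assumptions, which is what we want
// when the image is uploaded row by row anyway.
static void gl_set_byte_alignment(void)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
}
```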
This probes and prints the depth of some texture formats with the help
of a FBO. By default it tests the format used for scaling, as well as
the format used for dithering and the 3D LUT (if any of these are
enabled).
The output is visible only with -v. Some representative values are
probed, and the difference of input and output value is printed as hex-
float. Hex-floats are used because they make the implied precision more
obvious. Originally I wanted to do some more sophisticated guessing of
the implied depth/precision for more user-friendly reporting, but then
I decided that printing raw data is better for debugging, especially if
things go wrong.
This does not try to disable any functionality and does not print any
warnings if the depth is lower than what it should be.
This might be better with dumb shader compilers, which won't vectorize
this to a single vector-division, assuming the hardware does have such
an instruction. Affects "bicubic_fast" scale mode only.
The internal texture format GL_RED is typically 8 bit, which is clearly
not good enough for the new dither matrix. The idea was to use a float
texture format, but this was somehow "forgotten". Use GL_R16, since
16 bit textures are more robust, and provide more precision for the
same memory usage.
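A hedged sketch of such a texture upload (assumes a GL version with ARB_texture_rg/GL_R16; this is an illustration, not the actual gl_video code):

```c
#include <stdint.h>
#include <GL/gl.h>   // GL_R16 may additionally require <GL/glext.h>

// Upload an NxN single-channel dither matrix with 16 bit internal
// precision instead of the default (usually 8 bit) GL_RED.
static GLuint upload_dither_matrix(const uint16_t *matrix, int size)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, size, size, 0,
                 GL_RED, GL_UNSIGNED_SHORT, matrix);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    return tex;
}
```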
Change how the offset for centering the dither matrix is applied. This
is needed for making it possible to round up values to the target depth.
Before this commit, this changed the output even if the input was exact
and input and output depth were the same, which is not really what you
want. Now it doesn't do that anymore.
Most of these are rather questionable, the rest you rarely need to set
manually. You still can set all of them with -lavdopts-o (because
libavcodec has AVOptions for them).
There's no point duplicating all the text that is already in the man
pages, and synchronizing them is a pain. Place a link to the github
generated pages instead.
Unfortunately, the anchor '#vo-opengl' does not work. Maybe github's
rst converter just sucks, as the actually generated HTML contains
links using that anchor too, but does not generate the anchor itself.
Too bad.
If the image is not writeable, the image actually has to be copied
beforehand. This was overlooked when converting the video chain to
reference counted images.
Fix a double free issue. This was overlooked when vf.c was changed to
free filter priv data automatically.
Use a different algorithm to generate the dithering matrix. This
looks much better than the previous ordered dither matrix with its
cross-hatch artifacts.
The matrix generation algorithm as well as its implementation was
contributed by Wessel Dankers aka Fruit. The code in dither.c is
his implementation, reformatted and with static global variables
removed by me.
The new matrix is uploaded as float texture - before this commit, it
was a normal integer fixed point matrix. This means dithering will
be disabled on systems without float textures.
The size of the dithering matrix can be configured, as the matrix is
generated at runtime. The generation of the matrix can take rather
long, and is already unacceptable with size 8. The default is at 6,
which takes about 100 ms on a Core2 Duo system with dither.c compiled
at -O2, which I consider just about acceptable.
The old ordered dithering is still available and can be selected by
putting the dither=ordered sub-option. The ordered dither matrix
generation code was moved to dither.c. This function was originally
written by Uoti Urpala.
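For reference, the classic bit-manipulation construction of such an ordered (Bayer) matrix looks roughly like this; this is only an illustration of what the generator computes, not the code that was moved to dither.c:

```c
// Fill a (2^n x 2^n) ordered dither matrix with thresholds in [0, 1).
// Entry (x, y) interleaves the bits of (x ^ y) and y; because the low
// bits are processed first and shifted up, the result is the usual
// bit-reversed interleave that spreads thresholds maximally.
static void make_ordered_matrix(float *out, int n)
{
    int size = 1 << n;
    for (int y = 0; y < size; y++) {
        for (int x = 0; x < size; x++) {
            unsigned v = 0;
            for (int bit = 0; bit < n; bit++) {
                v = (v << 2) | ((((x ^ y) >> bit) & 1) << 1)
                             | ((y >> bit) & 1);
            }
            out[y * size + x] = (v + 0.5f) / (size * size);
        }
    }
}
```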
GetTimer() is generally replaced with mp_time_us(). Both calls return
microseconds, but the latter uses int64_t, is defined to never wrap,
and never returns 0 or negative values.
GetTimerMS() has no direct replacement. Instead the other functions are
used.
For some code, switch to mp_time_sec(), which returns the time as double
float value in seconds. The returned time is offset to program start
time, so there is enough precision left to deliver microsecond
resolution for at least 100 years. Unless it's cast to a float
(or the CPU reduces precision), which is why we still use mp_time_us()
out of paranoia in places where precision is clearly needed.
Always switch to the correct type. The whole point of the new timer
calls is that they don't wrap, and storing microseconds in unsigned int
variables would negate this.
In some cases, remove wrap-around handling for time values.
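A hedged sketch of the intended usage (declarations repeated here only to keep the example self-contained; they normally come from mpv's timer header):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

// Assumed declarations from mpv's osdep timer header.
int64_t mp_time_us(void);
double mp_time_sec(void);

// Keep microsecond timestamps in int64_t (not unsigned int) and simply
// subtract; no wrap-around handling is needed.
static void time_something(void (*work)(void))
{
    int64_t start = mp_time_us();
    work();
    int64_t elapsed = mp_time_us() - start;
    printf("took %" PRId64 " us (now %.6f s since program start)\n",
           elapsed, mp_time_sec());
}
```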
This was used by some VOs to do timing of cursor autohiding, but we
recently moved that out of the VOs. Even though this mechanism might
be a good idea and could be needed again in future (but for what?),
it's unused now. So better just get rid of it.
Notify the core of mouse movement events. The coordinates are converted to a
coordinate system with the origin in upper left corner, since Cocoa has it in
the lower left corner.
Use VOCTRL_CHECK_EVENTS instead. Change the remaining VOs to use it.
Only vo_sdl and vo_caca actually need this, and vo_null, vo_lavc, and
vo_image had stubs only.
Instead of having separate callbacks for each backend-handled feature
(like MPGLContext.fullscreen, MPGLContext.border, etc.), pass the
VOCTRL responsible for this directly to the backend. This allows
removing a bunch of callbacks, that currently must be set even for
optional/lesser features (like VOCTRL_BORDER).
This requires changes to all VOs using gl_common, as well as all
backends that support gl_common.
Also introduce VOCTRL_CHECK_EVENTS. vo.check_events is now optional.
VO backends can use VOCTRL_CHECK_EVENTS instead of implementing
check_events. This has the advantage that the event handling code in
VOs doesn't have to be duplicated if vo_control() is used.
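Schematically, the idea looks like this (a hedged sketch; `backend_control` is a hypothetical name for whatever forwarding entry point the backend provides):

```c
#include <stdint.h>

// Instead of per-feature callbacks, the VO's control() simply forwards
// unhandled requests (VOCTRL_FULLSCREEN, VOCTRL_BORDER,
// VOCTRL_CHECK_EVENTS, ...) to the windowing backend in one place.
static int control(struct vo *vo, uint32_t request, void *data)
{
    switch (request) {
    case VOCTRL_REDRAW_FRAME:
        redraw_frame(vo);               // VO-specific handling (sketch)
        return VO_TRUE;
    default:
        return backend_control(vo, request, data);
    }
}
```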
Can be used to refer to filters by name. Intended to be used when the
filter chain is changed at runtime.
A label can be assigned to a filter by prefixing it with '@name:', where
'name' is a user-chosen identifier. For example, a filter added with
'-vf-add @label1:gradfun=123' can be removed with '-vf-del @label1'.
If a filter with an already existing label is added, the existing filter
is replaced with the new filter (this happens for both -vf-add and
-vf-pre). If a filter is replaced, the new filter takes the position of
the old filter, instead of being appended/prepended to the filter chain
as usual. For -vf-toggle, labels are compared if at least one of the
filters has a label; otherwise they are compared by filter name and
arguments (like before). This means two filters are never considered
equal if one has a label and the other one does not.
libavcodec generally shouldn't have this problem anymore (if libavcodec
ever had it). All other video decoders are gone. In any case, if this
commit actually causes regressions, these are libavcodec bugs and should
be fixed there instead.
This happens only if an option actually allocates memory (like strings).
Change filter API such that vf->priv is free'd by vf.c instead by the
filters. vf.c will free the option values as well.
Add the "vf" command, which allows changing the video filter chain at
runtime. For example, the 'y' key could be bound to toggle deinterlacing
by adding 'y vf toggle yadif' to the input.conf.
Reconfiguring the video filter chain normally resets the VO, so that it
will be "stuck" until a new video frame is rendered. To mitigate this, a
seek to the current position is issued when the filter chain is changed.
This is done only if playback is paused, because normal playback will
show an actual new frame quickly enough.
If vdpau hardware decoding is used, filter insertion (whether it fails
or not) will break the video for a while. This is because vo_vdpau
resets decoding related things on vo_config().
This tried to use ctx->pic (last decoded AVFrame) for the frame bounds.
However, if av_frame_unref() on the AVFrame is called, the function will
reset _all_ AVFrame fields, even those which are not involved with
memory management. As result, mpcodecs_config_vo() was called with 0
width/height, which made the function exit early, instead of
reconfiguring the filter chain.
Go back to using mpcodecs_config_vo() directly. (That's what it did
before this VDCTRL was originally introduced; the original reason for
it disappeared.)
vo_vdpau actually reads past the image allocation when displaying a
non-mod 2 420p image. The vdpau API specifies that VdpVideoSurfacePutBitsYCbCr()
requires a height that is a multiple of 4, and surface allocations are
automatically rounded.
So allocate video images with rounded height. libavutil does the same,
so images coming directly from the decoder or from libavfilter are no
problem. (libavutil does this alignment explicitly, not just because the
decoded image size is aligned to macroblocks.)
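The rounding itself is trivial; a sketch of what "rounded height" means here:

```c
#include <stddef.h>

#define ALIGN_UP(x, a) (((x) + (a) - 1) / (a) * (a))

// Allocate a plane with the height rounded up to a multiple of 4, so
// readers that assume mod-4 surfaces never run past the allocation
// (e.g. 717 rows are allocated as 720).
static size_t plane_alloc_size(int stride, int height)
{
    return (size_t)stride * ALIGN_UP(height, 4);
}
```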
This changes the code so that it does the same as MPlayer, mplayer2
and mpv before ref-counted AVFrame. The problem is that get_buffer2
is called with aligned frame dimensions, while get_buffer didn't. This
breaks the mpv video frame size change detection.
Followup to 8df7127. This refines the condition for front ordering to account
for the minimized or hidden state, where the window should go to the front only
as a consequence of user interaction.
When using --fs, `vo_cocoa_fullscreen` was called from the primary thread. This
occurred inside `vo_cocoa_config_window` (which is scheduled for execution on
the primary thread with libdispatch).
All of this caused spam from NSRecursiveLock in standard output.
When going into and out of fullscreen, the player lost information about the
movableByWindowBackground behaviour. There were some hacks in place to fix it
but they were broken with the recent native fullscreen changes in 74c15ec6.
This commit removes the problem at the root and removes the hacks. The delegate
method `isMovableByWindowBackground` seems to be called after setFrame and
setPresentationOptions so change fs state in opts before that.
Fixes #83
This adds Mission Control fullscreen functionality to mpv. Since this doesn't
play well with many of mpv's features disable it by default. Users can activate
this feature by using `--native-fs` when starting mpv.
Fixes #34
This commit is a followup on the previous one and uses a solution I like more
since it totally decouples the Cocoa code from mpv's core and tries to emulate
a generic Cocoa application's lifecycle as much as possible without fighting
the framework.
mpv's main is executed in a pthread while the main thread runs the native cocoa
event loop.
All of the thread safety is mainly accomplished with additional logic in
cocoa_common so as not to increase complexity in the cross-platform parts of the
code.
Schedule mpv's playloop as a high frequency timer inside the main Cocoa event
loop. This has the benefit of allowing access to the menus and resizing of the
window without blocking playback, and allows removing countless hacks from the
code that involved manually pumping the event loop as well as manually
simulating some of Cocoa's default behaviours.
A huge improvement consists in removing NSApplicationLoad. This is a C function
defined in the Cocoa header and implements a minimal OSX application under the
hood so that you can use the Cocoa GUI toolkit from C/C++ without having to
respect the Cocoa standards in terms of application initialization. This was
bad because the behaviour implemented by NSApplicationLoad was hard to customize
and had several gotchas especially in the menu department.
mpv was changed to be just a nib-less application. All of the Cocoa GUI is
still generated in code, but the event handling is now not dissimilar to what is
present in a stock Mac application.
As a part of reviewing the initialization process, I also removed all of
`osdep/macosx_finder_args`. The useful parts of the code were moved to
`osdep/macosx_application`, which has the broader responsibility of managing
the full lifecycle of the Cocoa application. As a consequence, the
`--enable-macosx-finder` configure switch was killed as well, as this feature
is always enabled.
Another change the users will notice is that when using a bundle the `--quiet`
option will be inserted much earlier in the initialization process. This results
in mpv not spamming mpv.log anymore with all the initialization outputs.
gl_video_resize_redraw() simply resizes and redraws (but without
invoking swapGlBuffers()). The VO is not involved in any way, so this
can simply be called from inside the mpgl lock from any thread.
Requires a minor refactor of the GL OSD code in order to redraw without
an OSD object.
To simplify things, we just assume that all OpenGL calls as well as
all calls into gl_video must be locked. Currently, also assume that
anything GUI related must be locked as well (stuff like VOCTRL_BORDER).
In its current state, this commit does nothing, but it will allow us to
move the Cocoa GUI out of the playloop, as well as possibly implementing
better framedropping.
Some OpenGL implementations on some platforms require that a context
is current only on one thread. For this reason, mpgl_lock() and
mpgl_unlock() take care of this as well for convenience.
Each backend that needs thread safety should provide its own locking strategy
inside of `set_current`.
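The intended call pattern, as a hedged sketch (the exact signatures of mpgl_lock/mpgl_unlock and gl_video_resize_redraw are assumed here):

```c
// Any thread may resize and redraw, as long as it holds the mpgl lock,
// which also takes care of making the GL context current where needed.
static void redraw_at_new_size(struct MPGLContext *ctx, struct gl_video *p,
                               int w, int h)
{
    mpgl_lock(ctx);
    gl_video_resize_redraw(p, w, h);   // redraws, but no buffer swap
    mpgl_unlock(ctx);
}
```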
This fixes 2 bugs:
* Resizing very fast breaks the aspect of the window and the width and height
don't match with the video anymore
* Pressing 'f' for fullscreen very fast can overwrite the backup variables for
the previous width and height.
Also includes a better aspect calculation with fluid resizing.
These were found by the cppcheck and scan-build static analyzers. Most
of these aren't interesting (the 2 previous commits fix some interesting
cases found by these analyzers), and they don't nearly fix all warnings.
(Most of the unfixed warnings are spam, things MPlayer never cared
about, or false positives.)
This caused all formats with fewer than 8 bits per component to be marked as
little endian. (Normally, only some messed up packed RGB formats are
endian-specific in this case.)
This allows using the vdpau decoders with -vd without having to use
the -hwdec switch (basically like in mplayer).
Note that this way of selecting the hardware decoder is still
deprecated. libavcodec went away from adding special decoder entries
for hardware decoding, and instead makes use of the "hwaccel"
architecture, where hardware decoders use the same decoder names as
the software decoders. The old vdpau special decoders will probably
be deprecated and removed in the future.
YCgCo can be manually selected, but will also be used if the decoder
reports YCgCo. To make things more fun, files are sometimes marked
incorrectly, which will display such broken files incorrectly starting
with this commit.
Useful for the j2k decoder.
Matrix taken from http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
(XYZ to sRGB, whitepoint D65)
Gamma conversion follows what libswscale does (2.6 in, 2.2 out).
If linear RGB is used internally for scaling, the gamma conversion
will be undone by setting the exponent to 1. Unfortunately, the two
gamma values don't compensate each other exactly (2.2 vs. 1/0.45 = 2.22...),
so output is a little bit incorrect in sRGB or color-managed mode. But
for now try hard to match libswscale output, which may or may not be
correct.
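For reference, a sketch of the conversion described above, using the commonly cited D65 XYZ-to-sRGB matrix and the libswscale-style gammas (2.6 in, 2.2 out); this is an illustration, not the shader code itself:

```c
#include <math.h>

static const float xyz_to_srgb[3][3] = {
    { 3.2404542f, -1.5371385f, -0.4985314f},
    {-0.9692660f,  1.8760108f,  0.0415560f},
    { 0.0556434f, -0.2040259f,  1.0572252f},
};

// Convert one gamma-encoded XYZ sample to gamma-encoded RGB.
static void xyz_to_rgb_pixel(const float xyz[3], float rgb[3])
{
    float lin_xyz[3];
    for (int i = 0; i < 3; i++)
        lin_xyz[i] = powf(xyz[i], 2.6f);                   // input gamma 2.6
    for (int i = 0; i < 3; i++) {
        float lin = 0;
        for (int j = 0; j < 3; j++)
            lin += xyz_to_srgb[i][j] * lin_xyz[j];
        rgb[i] = powf(fmaxf(lin, 0.0f), 1.0f / 2.2f);      // output gamma 2.2
    }
}
```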
We only print an error message when POLLERR or POLLHUP occurs, as then
something did go horribly wrong and the server will either deal with it or
crash.
Also add POLLOUT to the events.
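Roughly (a sketch of the described behavior, not the actual backend code):

```c
#include <poll.h>
#include <stdio.h>

// Wait on the display connection fd; ask for POLLOUT as well, and only
// print an error on POLLERR/POLLHUP, since the connection handling
// elsewhere will either deal with the failure or crash anyway.
static void wait_on_display_fd(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLOUT };
    if (poll(&pfd, 1, timeout_ms) < 0)
        return;
    if (pfd.revents & (POLLERR | POLLHUP))
        fprintf(stderr, "error or hangup on display connection\n");
}
```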