Commit 2c2c1203 sorted the output of --list-options, but the same code
is also used for listing sub-options, such as --vo=scale:help. For sub-
options, the order actually matters.
Until now, --list-options printed options in random order. There was
literally no logic to the order; they just appeared as they were
declared. So just sort them.
Note that we can't sort them in advance, because for certain things
internal to m_config, the order actually matters.
Also, we're using strcasecmp(), which is bad (locale-dependent), but
this is output intended for human consumption, so it's not a problem.
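For illustration, the sort might look like this minimal sketch, assuming
a plain array of option name strings (the real code sorts m_config's
internal list):

    #include <stdlib.h>
    #include <strings.h>

    /* Sketch: case-insensitive comparator for qsort(). strcasecmp()
     * is POSIX and locale-dependent, which is fine for display. */
    static int cmp_option_name(const void *a, const void *b)
    {
        return strcasecmp(*(const char *const *)a,
                          *(const char *const *)b);
    }

    static void sort_option_names(const char **names, size_t n)
    {
        qsort(names, n, sizeof(names[0]), cmp_option_name);
    }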
This used to display the property type, but it was not always correct or
even available. The way the property mechanism works, we can know this
only at runtime.
Otherwise, the client API user could not know why playback was stopped.
Regarding the fact that 0 is used both for normal EOF and EOF on error:
this is because mplayer traditionally did not distinguish these, and in
general it's hard to tell the real reason. (There are various weird
corner cases which make it hard.)
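For example, a client might read the reason like this (a hedged sketch;
mpv_event_end_file and MPV_EVENT_END_FILE are the real client API names,
the handling around them is illustrative):

    #include <stdio.h>
    #include <mpv/client.h>

    /* Sketch: reason 0 covers both normal EOF and EOF on error, as
     * explained above. */
    void handle_event(mpv_event *ev)
    {
        if (ev->event_id == MPV_EVENT_END_FILE) {
            mpv_event_end_file *ef = ev->data;
            printf("playback stopped, reason=%d\n", ef->reason);
        }
    }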
Although this is something really basic, Lua's standard library doesn't
provide anything like this. Probably because there are too many ways to
do it right or wrong.
This code tries to be really careful when dealing with mixed
arrays/maps, e.g. when a table has integer keys starting from 1, making
it look like an array, but then also has other keys.
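A minimal sketch of such a check via the Lua C API (the function name
and exact rules are illustrative, assuming Lua 5.1/LuaJIT as used by
mpv):

    #include <lua.h>
    #include <lauxlib.h>

    /* Return 1 if the table at absolute stack index t looks like an
     * array: all keys are numbers, and the key count matches the
     * sequence length. Mixed tables (integer keys 1..n plus other
     * keys) fail one of the checks and are treated as maps. */
    static int table_is_array(lua_State *L, int t)
    {
        int count = 0;
        lua_pushnil(L);
        while (lua_next(L, t)) {
            lua_pop(L, 1);                 /* drop value, keep key */
            if (lua_type(L, -1) != LUA_TNUMBER) {
                lua_pop(L, 1);             /* drop key, bail out */
                return 0;
            }
            count++;
        }
        return count == (int)lua_objlen(L, t); /* keys must be 1..n */
    }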
The stats were retrieved and written on every encode call, instead of
every encode call that actually returned a packet. ffmpeg.c also does it
this way, so it must be "more correct". Fixes 2-pass encoding.
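The corrected pattern, roughly (a sketch; avcodec_encode_video2() is the
libavcodec call of that era, passlog_file and the wrapper are
illustrative):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    /* Sketch: only write 2-pass stats when a packet was produced. */
    static int encode_frame(AVCodecContext *avctx, AVPacket *packet,
                            const AVFrame *frame, FILE *passlog_file)
    {
        int got_packet = 0;
        int err = avcodec_encode_video2(avctx, packet, frame, &got_packet);
        if (err < 0)
            return err;
        if (got_packet && avctx->stats_out && passlog_file)
            fprintf(passlog_file, "%s", avctx->stats_out);
        return got_packet;
    }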
Our own tables have room for only 8 chars, so these sequences must be
rejected. It seems strings of length 8 are still OK, because the code
uses memcmp and not strcmp, so these are still allowed.
Based on mplayer-svn commit r37129.
I have some doubts that short reads are even allowed/possible for
/dev/js*; does someone know for sure?
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@37132 b3059339-0415-0410-9bf9-f77b7e298cf2
We needed this because the OSD rendering path used GBRP for RGB
rendering, and not all swscale versions supported this conversion. But
recently we've dropped support for very old ffmpeg/libav versions, so
this isn't needed anymore.
This was broken at some unknown point (even before the recent cache
changes). There are several problems:
- stream_dvd returning a random stream position, confusing the cache
layer (cached data and stream data lost their 1:1 correspondence by
position)
- this also confused the mechanism added with commit a9671524, which
basically triggered random seeking (although this was not the only
problem)
- demux_lavf requesting seeks in the stream layer, which resulted in
seeks in the cache or the real stream
Fix this by completely removing byte-based seeking from stream_dvd. This
already works fine for stream_dvdnav and stream_bluray. Now all these
streams do time-based seeks, and pretend to be infinite streams of data,
and the rest of the player simply doesn't care about the stream byte
positions.
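Roughly, a seek on these streams now goes through the time-based stream
control; a hedged sketch (STREAM_CTRL_SEEK_TO_TIME and STREAM_OK are the
real mpv names, the surrounding code is illustrative):

    /* Sketch: seek by timestamp instead of byte position. */
    double pts = target_pts;
    if (stream_control(stream, STREAM_CTRL_SEEK_TO_TIME, &pts) != STREAM_OK)
        MP_WARN(demuxer, "Time-based seek failed.\n");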
resize_cache() checks the size itself and clamps the size to the valid
range if necessary, so we don't need these checks. In fact, the checks
are different. Also, output the cache size after clamping, instead of
before.
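I.e. the caller reduces to something like this sketch (resize_cache() is
the function named above; the field and message are illustrative):

    /* Sketch: let resize_cache() clamp, then report what it chose. */
    resize_cache(s, requested_size);
    MP_INFO(s, "Cache size: %" PRId64 " KiB\n", s->buffer_size / 1024);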
Use NtQueryVolumeInformationFile instead of GetDriveType for detecting
remote filesystems on Windows. This has the advantage of working
directly on the file handle instead of needing a path and it works
unmodified in Cygwin where the previous code wouldn't understand Cygwin
paths or symlinks.
There is some risk in using NtQueryVolumeInformationFile, since it's an
internal function whose behaviour could change at any time, or which
could be removed in a future version of Windows. However, it's
documented[1] in the WDK and it's used successfully by Cygwin, so it
should be fine. If
it's removed, the code should fail gracefully by treating all files as
local.
[1]: http://msdn.microsoft.com/en-us/library/windows/hardware/ff567070.aspx
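A hedged sketch of the check (NtQueryVolumeInformationFile,
FILE_FS_DEVICE_INFORMATION and FILE_REMOTE_DEVICE are the documented WDK
names; depending on the toolchain, the declarations may need to be
provided manually and ntdll linked or loaded at runtime):

    #include <stdbool.h>
    #include <windows.h>
    #include <winternl.h>

    static bool is_remote_file(HANDLE h)
    {
        IO_STATUS_BLOCK io;
        FILE_FS_DEVICE_INFORMATION info = {0};
        NTSTATUS st = NtQueryVolumeInformationFile(h, &io, &info,
                sizeof(info), FileFsDeviceInformation);
        if (!NT_SUCCESS(st))
            return false; /* fail gracefully: treat the file as local */
        return !!(info.Characteristics & FILE_REMOTE_DEVICE);
    }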
Signed-off-by: wm4 <wm4@nowhere>
Merge the cache_read function into cache_fill_buffer, since there's
not much reason to keep them separate. Also, simply call read_buffer()
to see if there's any readable data, instead of checking for the
condition manually.
The only tricky part is keeping the cache contents, which is made simple
by allocating the new cache while still keeping the old cache around,
and then copying the old data.
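In outline (a sketch, assuming a flat buffer; the real cache is a ring
buffer with more bookkeeping, and all names are illustrative):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    struct cache {
        unsigned char *buffer;
        size_t size, used;
    };

    /* Sketch: allocate the new buffer while the old one is still
     * alive, copy the cached data over, then swap. */
    static bool cache_resize_keep(struct cache *c, size_t new_size)
    {
        unsigned char *nbuf = malloc(new_size);
        if (!nbuf)
            return false;               /* old cache stays in use */
        size_t keep = c->used < new_size ? c->used : new_size;
        memcpy(nbuf, c->buffer, keep);  /* copy the old cached data */
        free(c->buffer);
        c->buffer = nbuf;
        c->size = new_size;
        c->used = keep;
        return true;
    }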
To explain the "Don't use this when playing DVD or Bluray." comment: the
cache also associates timestamps to blocks of bytes, but throws away the
timestamps on seek. Thus you will experience strange behavior after
resizing the cache until the old cached region is exhausted.
The only difference is that the MP_DBG message is not printed anymore if
the current user read position is outside of the current cache range.
(In order to handle seek_limit==0 gracefully in the normal case of
linear reading, change the comparison from ">=" to ">".)
Until now, this could never happen, because new data was simply always
appended to the end of the cache. But for making stream cache resizing
easier, doing it this way seems advantageous. It also makes it harder to
make the internal state inconsistent. (Before this change it could
happen that cache and stream position went out of sync if the read
position was adjusted "inappropriately".)
Until now, cache_read() (which calls read_buffer()) could return short
reads. This was a simplification allowed by the stream interface. But
for cache resizing, it will be more practical to make read_buffer() do
a full read.
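A full read just loops over the short-read primitive until done; a
minimal sketch (names illustrative):

    static int read_buffer_full(stream_t *s, char *dst, int len)
    {
        int total = 0;
        while (total < len) {
            int r = read_buffer_partial(s, dst + total, len - total);
            if (r <= 0)
                break;      /* EOF or error: return the short count */
            total += r;
        }
        return total;
    }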
Seems like a good idea. One possible bad effect would be slowing down
uncached controls, but they're already slow. The good thing is that
many controls make intrusive changes to the stream (at least controls
which do write accesses), so the cached parameters should be updated.
Some of these property implementations already send notifications on
their own, but most don't. This takes care of them.
Of course this still doesn't handle all property changes - this is
impossible without special-casing each property that can change on its
own.
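Conceptually the change boils down to this sketch (mp_notify_property()
and M_PROPERTY_OK are mpv's internal names; the surrounding code is
illustrative):

    /* Sketch: after a successful property write, notify observers. */
    int r = mp_property_do(name, M_PROPERTY_SET, &val, mpctx);
    if (r == M_PROPERTY_OK)
        mp_notify_property(mpctx, name);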
This might be a good idea in order to prevent queuing a frame too far in
the future (causing apparent freezing of the video display), or dropping
an infinite number of frames (also apparent as freezing).
I think at this point this is most of what we can do if the vdpau time
source is unreliable (like with Mesa). There are still inherent race
conditions which can't be fixed.
The strange thing about this code was the shift parameter of the
prev_vs2 function. The parameter is used to handle timestamps before the
last vsync, since the % operator handles negative values incorrectly.
Most callers set shift to 0, and _usually_ pass a timestamp after the
last vsync. One caller sets it to 16, and can pass a timestamp before
the last vsync.
The mystery is why prev_vs2 doesn't just compensate for the % operator
semantics in the most simple way: if the result of the operator is
negative, add the divisor to it. Instead, it adds a huge value to it
(how huge is influenced by shift). If shift is 0, the result of the
function will not be aligned to vsyncs.
I have no idea why it was written in this way. Were there concerns about
certain numeric overflows that could happen in the calculations? But I
can't think of any (the difference between ts and vc->recent_vsync_time
is usually not that huge). Or is there something more clever about it,
which is important for the timing code? I can't think of anything
either.
So scrap it and simplify it.
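The simplified version then reduces to something like this sketch (the
names are illustrative; the point is the explicit compensation for C's
truncating "%" on negative values):

    #include <stdint.h>

    /* Sketch: align ts down to the previous vsync, compensating for
     * truncating "%" when ts lies before recent_vsync_time. */
    static uint64_t prev_vsync(uint64_t vsync_interval,
                               uint64_t recent_vsync_time, uint64_t ts)
    {
        int64_t offset = (int64_t)(ts - recent_vsync_time)
                         % (int64_t)vsync_interval;
        if (offset < 0)
            offset += (int64_t)vsync_interval; /* make remainder positive */
        return ts - offset;                    /* aligned to a vsync */
    }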
vo_vdpau used a somewhat complicated and fragile mechanism to convert
the vdpau time to internal mpv time. This was fragile in that it couldn't
deal well with Mesa's (apparently) random timestamps, which can change
the base offset in multiple situations. It can happen when moving the
mpv window to a different screen, and somehow it also happens when
pausing the player.
It seems this mechanism to synchronize the vdpau time is not actually
needed. There are only 2 places where sync_vdptime() is used (i.e.
returning the current vdpau time interpolated by system time).
The first call is for determining the PTS used to queue a frame. This
also uses convert_to_vdptime(). It's easily replaced by querying the
time directly, and adding the wait time to it (rel_pts_ns in the patch).
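Sketch of that replacement (presentation_queue_get_time/_display mirror
the real VDPAU entry points VdpPresentationQueueGetTime/Display via
mpv's function-pointer table; rel_pts_ns is from the patch, the rest is
illustrative):

    VdpTime now;
    vdp->presentation_queue_get_time(vc->flip_queue, &now);
    /* Queue at "current vdpau time + wait time" directly, with no
     * conversion between vdpau time and mpv's internal time. */
    vdp->presentation_queue_display(vc->flip_queue, surface, 0, 0,
                                    now + rel_pts_ns);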
The second call is pretty odd: it updates the vdpau time a second time
in the same function. From what I can see, this can matter only if
update_presentation_queue_status() is very slow. I'm not sure what to
make of this, because the call merely queries the presentation
queue. Just assume it isn't slow, and that we don't have to update the
time.
Another potential issue with this is that we now call
VdpPresentationQueueGetTime() every frame, instead of every 5 seconds
and interpolating the other calls via system time. Moreover, this
happens per video frame (which can potentially be dropped), and not per
actually displayed frame. Assume this doesn't matter.
This simplifies the code, and should make it more robust on Mesa. But
note that what Mesa does is obviously insane - this is one situation
where you really need a stable time source. There are still plenty of
race condition windows where things can go wrong, although this commit
should drastically reduce the possibility of this.
In my tests, everything worked well. But I have no access to a Mesa
system with vdpau, so it needs testing by others.
See github issues #520, #694, #695.
This turned out ridiculously complex. I think it will have to be
simplified some day. The main reasons for the complexity are:
- filtering properties by forcing clients to observe individual
properties explicitly
(to avoid spamming clients with changes they don't want)
- optional retrieval of property value with the notification
(the basic idea was that this is more user friendly)
- allowing the client to specify a format in which the value
should be retrieved
(because if a property changes its type, the client API couldn't
convert it properly, and compatibility would break)
I don't know yet which of these are important, and everything could
change. In particular, the interface and semantics should be adjusted
to reduce the implementation complexity.
While I consider the API complete, there could (and probably will) be
bugs left. Also while the implementation is complete, it's inefficient.
The complexity of the property matching is O(a*b*c) with a clients,
b observed properties, and c properties changing at once. I threw away
an earlier implementation using bitmasks, because it was too unwieldy.
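For reference, minimal client-side use of the new interface might look
like this (mpv_observe_property, MPV_EVENT_PROPERTY_CHANGE and
mpv_event_property are the real client API names; error handling is
omitted):

    mpv_observe_property(ctx, 0, "pause", MPV_FORMAT_FLAG);
    while (1) {
        mpv_event *ev = mpv_wait_event(ctx, -1);
        if (ev->event_id == MPV_EVENT_PROPERTY_CHANGE) {
            mpv_event_property *prop = ev->data;
            if (prop->format == MPV_FORMAT_FLAG)
                printf("pause: %d\n", *(int *)prop->data);
        }
    }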
Remove the use of mp_ring and use a simple array and a bunch of
variables instead. This is way less awkward.
The change in reserve_reply fixes incorrect tracking of free events.