This allowed moving the input stream layer across the network, so the
user could remotely play anything that mplayer could play. For example,
playing a DVD located on a remote server (say, with the host name
"remotehost1") could be done by starting the netstream server on that
remote server, and then running:
mplayer mpst://remotehost1/dvd://
This would open the DVD on the remote host and transfer the raw DVD
sector reads over the network. It works the same for other protocols, and
all accesses to the stream layer are marshaled over the network. It's
comparable to the way the cache layer (--cache) works.
It is of questionable use and was most likely barely used at all. There's
lots of potential for breakage, because it doesn't translate the stream
CTRLs to network packets. Just get rid of it.
The server used to be in TOOLS/netstream.c, and was accidentally removed
earlier.
This function sucks and apparently is not very portable (at least on
mingw, the configure check fails). Also remove the emulation of that
function from osdep/strsep*, and remove the configure check.
I have no idea when or how this broke, but _wstati64() is the function
we want anyway (64 bit filesize). Possibly this was a mingw-w64 bug.
It's unknown why "wstat()" just doesn't work in this case; it's not
defined by MSDN, so mingw could define it however it needs.
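For reference, a minimal sketch of the intended call, assuming a mingw
toolchain (the wrapper name, path handling and error checks are
illustrative only):

    #include <stdint.h>
    #include <sys/stat.h>   /* struct _stati64, _wstati64() on mingw */
    #include <wchar.h>

    /* Illustrative sketch: query a 64 bit file size via the wide-char
     * stat variant. Not the actual osdep code. */
    static int64_t file_size_w(const wchar_t *path)
    {
        struct _stati64 st;
        if (_wstati64(path, &st) != 0)
            return -1;          /* stat failed */
        return st.st_size;      /* 64 bit with the *i64 variant */
    }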
vsscanf() is in POSIX, C99, mingw, etc. Further, the implementation in
osdep/vsscanf.c was completely broken, and if it worked, it worked only
by chance.
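For reference, a minimal sketch of the standard call the removed emulation
tried to provide (the wrapper name is made up):

    #include <stdarg.h>
    #include <stdio.h>

    /* Typical use of the standard C99/POSIX vsscanf(): forward a va_list
     * to scanf-style parsing. No osdep emulation needed. */
    static int parse(const char *buf, const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        int n = vsscanf(buf, fmt, ap);
        va_end(ap);
        return n;               /* number of converted items */
    }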
The check determined whether the argument for .align is in bytes, or
log2(bytes). Apparently it's always in bytes for ELF i386 systems, and
this check is used for x86 inline assembler only. Even if this
assumption should be wrong, it likely won't cause much damage: the
existing code uses it only in the form ".align 4", which means in the
worst case it will try to align to 16 bytes, which doesn't cause any
problems (unless the object file format does not support such a high
alignment).
Update the filters that used this.
Quoting the GNU as manual:
For other systems, including ppc, i386 using a.out format, arm and
strongarm, it is the number of low-order zero bits the location counter
must have after advancement. For example `.align 3' advances the
location counter until it is a multiple of 8. If the location counter is
already a multiple of 8, no change is needed.
Change the only usage of HAVE_BUILTIN_EXPECT, demux.h, to use an #ifdef
instead. In theory, a configure check is better, but nobody does it this
way anyway, and we seek to reduce the configure script.
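A sketch of the #ifdef approach; the macro names are illustrative, not
necessarily what demux.h actually uses:

    /* Select __builtin_expect() with a compiler #ifdef instead of a
     * configure-time HAVE_BUILTIN_EXPECT check. */
    #ifdef __GNUC__
    #define MP_EXPECT(x, c) __builtin_expect((x), (c))
    #else
    #define MP_EXPECT(x, c) (x)
    #endif

    #define likely(x)   MP_EXPECT(!!(x), 1)
    #define unlikely(x) MP_EXPECT(!!(x), 0)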
mixer_setvolume() accepts float values for volume, but used the
integer function av_clip() to limit range, losing the fractional part
as a side effect. Change the code to use av_clipf() instead. For most
uses this shouldn't make any real difference; actual AO volume
settings may not have that much precision anyway.
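For illustration (the values and wrapper are made up, not the mixer code):

    #include <libavutil/common.h>

    /* av_clip() operates on int, so a fractional volume like 52.5 is
     * truncated on the way in; av_clipf() keeps the fraction. */
    static float clamp_volume(float vol)
    {
        int   truncated = av_clip((int)vol, 0, 100);   /* 52.5 -> 52   */
        float exact     = av_clipf(vol, 0.0f, 100.0f); /* 52.5 -> 52.5 */
        (void)truncated;
        return exact;
    }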
af_volnorm can process either int16_t or float audio data. The float
version used 0 to INT_MAX as full value range, when it should be 0 to
1. This effectively disabled the filter (due to all input being
considered to fall in the silence range). Fix.
Reported by Tobias Jacobi <liquid.acid@gmx.net>.
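A simplified sketch of why the wrong full-scale value disables the filter
(not the actual af_volnorm code; the threshold fraction is illustrative):

    /* With float samples in [-1.0, 1.0], a silence threshold derived from
     * INT_MAX classifies every frame as silence, so the normalizer never
     * adjusts the gain. Deriving it from 1.0 gives a meaningful test. */
    #define SILENCE_FRACTION 0.01f   /* illustrative: 1% of full scale */

    static int is_silence(float avg_level, float full_scale)
    {
        return avg_level < full_scale * SILENCE_FRACTION;
    }
    /* is_silence(level, (float)INT_MAX) -> always true for float audio (bug)
     * is_silence(level, 1.0f)           -> meaningful comparison (fix)     */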
Something produces corrupt Matroska files with audio tracks that have
SamplingFrequency set to 44100 and OutputSamplingFrequency to 96000,
when the correct playback rate is 44100. Add a special case for this
44100/96000 combination and override it to 44100/44100; it's unlikely
that anyone would ever want to use this 44100/96000 combination for
real in valid files.
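A sketch of such an override; the struct and field names are assumptions,
not necessarily the demuxer's actual ones:

    struct mkv_audio_track { double a_sfreq, a_osfreq; };

    static void fix_bogus_srate(struct mkv_audio_track *track)
    {
        if (track->a_sfreq == 44100 && track->a_osfreq == 96000) {
            /* Known-broken muxer output: force the correct playback rate. */
            track->a_osfreq = 44100;
        }
    }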
Ensure that even if a seek is inaccurate it will not show video from
outside the defined timeline. Previously, seeking to the beginning of
a segment could show frames from before the start of the segment if
the seek was done in inaccurate mode and the demuxer seeked to an
earlier position. Now hr-seek machinery is used to skip at least the
frames that should not be part of playback timeline at all.
Now external subtitles essentially use the playback time, instead of
the segment time.
This is more useful when using external subtitles with mkv ordered
chapters. The previous behavior is not necessarily incorrect, and e.g.
makes it easier to use subtitles directly extracted from ordered
chapters segments. But we consider the new behavior more useful.
Also see commit 06e3dc8.
This is simpler and more useful. We could add a new switch for the old
functionality, but that would probably be more confusing than helpful.
When passing only a single file to the command line, this commit
shouldn't change behavior.
(Classic mplayer provided both features by duplicating the loop
functionality in the "playtree".)
When the last frame is displayed, and a frame step command is issued,
playback ends and advances to the next file. But before this commit,
the next file started playing unpaused. Fix this, and make sure pause
is kept.
Looks like unicode support was broken with this simple `fonts.conf`. Copy more
(all) of fontconfig's default `fonts.conf`.
Fixes #13
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
This causes trouble when a hw device is used:
pcm_hw.c:514:(snd_pcm_hw_delay) SNDRV_PCM_IOCTL_DELAY failed (-77): File descriptor in bad state
when running mpv test.mkv --ao=alsa:device=iec958,alsa and pausing
during playback.
Historically, mplayer usually did not call snd_pcm_delay() (which is
called by get_delay()) while paused, so this problem never showed up.
But at least mpv has changes that cause get_delay() to be called when
updating the status line (see commit 3f949cf).
It's possible that calling snd_pcm_delay() is not always legal when the
audio is paused, and it at least fails with the error message mentioned
above if the device is a hardware device. Change get_delay() to return
the last delay before the audio was paused. The intention is to get a
continuous playback status display, even when pausing or frame stepping;
otherwise we could just return the audio buffer fill status in
get_delay(), or even just 0, when paused.
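A rough sketch of the approach; ao_alsa's real struct and function names
differ, and the sample rate conversion is only illustrative:

    #include <alsa/asoundlib.h>

    struct priv {
        snd_pcm_t *alsa;
        float delay_before_pause;
        int paused;
    };

    static float get_delay(struct priv *p)
    {
        if (p->paused)
            return p->delay_before_pause;   /* last known value */
        snd_pcm_sframes_t frames = 0;
        if (snd_pcm_delay(p->alsa, &frames) < 0)
            frames = 0;                     /* treat errors as no delay */
        return frames / 48000.0f;           /* assume 48 kHz for the sketch */
    }

    static void audio_pause(struct priv *p)
    {
        p->delay_before_pause = get_delay(p);  /* snapshot before pausing */
        p->paused = 1;
        snd_pcm_pause(p->alsa, 1);
    }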
Setting some subtitle options may lead to incorrect rendering of complex
ASS subtitle scripts, such as displaced signs or visual artifacts. The
user should be made aware that this can happen.
In theory, libass could make using some of these options relatively
safe, but it doesn't.
Note that there are potentially many more options that could in theory
break subtitle rendering, but add a warning only to the most fragile
ones.
Before this commit, the --osd-* options (like --osd-font-size etc.)
configured both the OSD and subtitle font. Make them separate, and add
--sub-text-* options (like --sub-text-size etc.). Now --osd-* affects
the OSD font only, and --sub-text-* unstyled text subtitles only.
They were more or less grouped by usefulness, but since everything
else in the manpage is sorted alphabetically, it's better to be
consistent and sort these options as well.
The premultiplied-alpha hack is changed:
- The first stage now uses a colormod of black with an unmodified
texture. This saves on applying the AND mask of 0xFF000000 to keep
alpha only.
- The second stage no longer uses an AND mask, but only an OR mask of
0xFF000000 to cancel out alpha.
- The texture uploads are no longer done using SDL_LockTexture,
SDL_ConvertPixels, SDL_UnlockTexture when the mpv pixel format matches
the OSD's pixel format. Instead, SDL_UpdateTexture is used, which
saves a copy when using the "opengl" renderer.
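A hedged sketch of the direct-upload path when the pixel formats already
match (error handling omitted; not the actual vo_sdl code):

    #include <SDL2/SDL.h>

    static void upload(SDL_Texture *tex, Uint32 tex_fmt, int w, int h,
                       Uint32 src_fmt, const void *src, int src_pitch)
    {
        if (src_fmt == tex_fmt) {
            /* Formats match: one-step upload, no intermediate copy. */
            SDL_UpdateTexture(tex, NULL, src, src_pitch);
        } else {
            /* Fallback: convert into the locked texture as before. */
            void *dst; int dst_pitch;
            SDL_LockTexture(tex, NULL, &dst, &dst_pitch);
            SDL_ConvertPixels(w, h, src_fmt, src, src_pitch,
                              tex_fmt, dst, dst_pitch);
            SDL_UnlockTexture(tex);
        }
    }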
If ffmpeg returns AVERROR_PROTOCOL_NOT_FOUND, print a warning that
ffmpeg should be compiled with network support. Note that stream_lavf.c
itself includes a whitelist of directly supported ffmpeg protocols, so
it can't happen that a completely unknown/madeup protocol triggers
this message. (Unless the ffmpeg:// or lavf:// prefixes are used.)
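A minimal sketch of the check; the open call and warning text are
illustrative, not mpv's actual code or message:

    #include <libavformat/avio.h>
    #include <libavutil/error.h>
    #include <stdio.h>

    static int open_url(AVIOContext **avio, const char *url)
    {
        int r = avio_open(avio, url, AVIO_FLAG_READ);
        if (r == AVERROR_PROTOCOL_NOT_FOUND)
            fprintf(stderr, "Protocol not found. Make sure ffmpeg/Libav is "
                            "compiled with networking support.\n");
        return r;
    }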
Strictly speaking, 444P is higher quality than YV12, but doing this is
not very useful when playing 10 bit video with -vo opengl-old on a GPU
that doesn't support 16 bit textures.
For 9-15 bit material, cutting off the lower bits leads to significant
quality reduction, because these formats leave the most significant bits
unused (e.g. 10 bit padded to 16 bit, transferred as 8 bit -> only
2 bits left). 16 bit formats can still be played like this, as cutting
the lower bits merely reduces quality in this case.
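A small illustration of the bit loss described above:

    #include <stdint.h>

    /* A 10 bit sample is stored in the low bits of a 16 bit word, so the
     * top 6 bits are unused. Uploading only the high byte keeps just 2 of
     * the 10 significant bits. */
    static uint8_t truncate_to_8bit(uint16_t sample10_in_16)
    {
        return sample10_in_16 >> 8;   /* e.g. 0x03FF -> 0x03 */
    }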
This problem was encountered with the following GPU/driver combination:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) 915GM x86/MMX/SSE2
OpenGL version string: 1.4 Mesa 9.0.1
It appears 16 bit support is rather common on GPUs, so testing the
actual texture depth wasn't needed until now. (There are some other Mesa
GPU/driver combinations which support 16 bit only when using RG textures
instead of LUMINANCE_ALPHA. This is due to OpenGL driver bugs.)
The extension checking logic was broken, which reported OpenGL 3 if the
OpenGL .so exported OpenGL 3-only symbols, even if the reported OpenGL
version is below 3.0. Fix it and simplify the code a bit. Also never
fail hard if required functions are not found. The caller should check
the capability flags instead. Give up on the idea that we should print
a warning if essential functions are not found (makes loading of ancient
legacy-only extensions easier).
This was experienced with the following version strings:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) 915GM x86/MMX/SSE2
OpenGL version string: 1.4 Mesa 9.0.1
(Possibly reports a very old version because it has no GLSL support,
and thus isn't even GL 2.0 compliant.)
Uses the same trick as the planarization code to turn per-sample memcpy
calls into mov instructions. This speeds up decoding of a ~25min 48000Hz
2ch floatle audio file from 3.8s to 2.7s.
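A hedged sketch of the trick (not the actual code): give memcpy() a
compile-time constant size per sample format, so the compiler can lower it
to plain loads/stores instead of a library call:

    #include <stdint.h>
    #include <string.h>

    static void copy_samples(uint8_t *dst, const uint8_t *src,
                             size_t nsamples, size_t sample_size)
    {
        switch (sample_size) {
        case 4:
            for (size_t i = 0; i < nsamples; i++)
                memcpy(dst + i * 4, src + i * 4, 4);   /* constant size -> mov */
            break;
        default:
            for (size_t i = 0; i < nsamples; i++)
                memcpy(dst + i * sample_size, src + i * sample_size,
                       sample_size);                   /* generic fallback */
            break;
        }
    }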
Change from gamma 2.2 to the slightly more precise 1/0.45 as per BT.709.
https://www.itu.int/rec/R-REC-BT.709-5-200204-I/en mentions a value of
γ=0.45 for the conceptual non-linear precorrection of video signals.
This is approximately the inverse of 2.22, and not 2.20 as the code had
been using until now.
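The arithmetic behind the constant (the identifier is illustrative):

    /* BT.709 specifies an encoding exponent of 0.45, so the decoding
     * exponent is its reciprocal: 1/0.45 = 2.222..., i.e. ~2.22, not 2.20. */
    static const float bt709_decode_gamma = 1.0f / 0.45f;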