So, I'll describe how this stuff works.

The main modules:

1. stream.c: this is the input layer. It reads the input media (file, stdin,
   VCD, DVD, network etc.). What it has to provide: appropriate buffering by
   sector, seek and skip functions, and reading by bytes or by blocks of any
   size. The stream_t structure (stream.h) describes the input stream
   (file or device).

   There is a stream cache layer (cache2.c); it's a wrapper for the stream
   API. It does fork(), then emulates the stream driver in the parent process
   and the stream user in the child process, proxying between them through a
   big preallocated memory chunk used as a FIFO buffer.
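
   Below is a minimal sketch of how a caller might use this layer. The
   function names follow stream.h (stream_read(), stream_seek()), but the
   prototypes here are paraphrased, so treat them as assumptions and check
   the header:

      /* forward declaration just for this sketch; the real struct is in stream.h */
      typedef struct stream stream_t;
      extern int stream_read(stream_t *s, char *mem, int total);
      extern int stream_seek(stream_t *s, long long pos);

      /* read 'len' bytes starting at byte offset 'pos' of the input */
      static int read_block_at(stream_t *s, long long pos, char *buf, int len)
      {
          if (!stream_seek(s, pos))          /* seeking is part of the stream API */
              return 0;
          return stream_read(s, buf, len);   /* byte-exact reads, buffered by sector */
      }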
2. demuxer.c: this does the demultiplexing (separating) of the input into
   audio, video and dvdsub channels, and reads them as buffered packets.
   demuxer.c is basically a framework which is the same for all input
   formats, and there is a parser for each of them (mpeg-es, mpeg-ps, avi,
   avi-ni, asf); these live in the demux_*.c files.
   The structure is demuxer_t. There is only one demuxer.
2.a. demux_packet_t, that is DP.
   Contains one chunk (avi) or packet (asf, mpg). Since their sizes differ,
   they are stored in memory as a linked list.
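
   An illustrative shape of such a packet node is shown below. The real
   demux_packet_t in demuxer.h has more fields (pts, file position, flags,
   ...), so take this only as a sketch of the linked-list idea:

      typedef struct demux_packet {
          int len;                     /* size of this chunk/packet          */
          unsigned char *buffer;       /* its payload, allocated to 'len'    */
          struct demux_packet *next;   /* next packet queued on the same DS  */
      } demux_packet_t;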
2.b. demuxer stream, that is DS.
   Struct: demux_stream_t
   Every channel (a/v/s) has one. This contains the packets for the stream
   (see 2.a). For now, there can be 3 for each demuxer:
   - audio (d_audio)
   - video (d_video)
   - DVD subtitle (d_dvdsub)
2.c. stream header. There are 2 types (for now): sh_audio_t and sh_video_t.
   This contains every parameter essential for decoding, such as input/output
   buffers, the chosen codec, fps, etc. There is one for each stream in
   the file: at least one for video, another one if sound is present,
   and if there are more streams, there'll be one structure for each.
   These are filled according to the header (avi/asf), or demux_mpg.c
   creates them (mpg) when it finds a new stream. When a new stream is
   found, the ====> Found audio/video stream: <id> message is displayed.

   The chosen stream header and its demuxer stream are connected together
   (ds->sh and sh->ds) to simplify usage, so it's enough to pass either the
   ds or the sh, depending on the function.
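
   The mutual link looks roughly like this (types and members are sketched
   here only to show the idea; the real definitions are in demuxer.h and the
   stream-header declarations):

      typedef struct demux_stream_s demux_stream_t;
      typedef struct sh_video_s sh_video_t;
      struct demux_stream_s { sh_video_t *sh;     /* ...more fields... */ };
      struct sh_video_s     { demux_stream_t *ds; /* ...more fields... */ };

      static sh_video_t *header_of(demux_stream_t *d_video)
      {
          return d_video->sh;   /* and d_video->sh->ds == d_video again */
      }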
   For example: we have an asf file with 6 streams inside it, 1 audio and 5
   video. While reading the header, 6 sh structs are created, 1 audio and
   5 video. When it starts reading packets, it chooses the streams of the
   first audio and video packet found, and sets the sh pointers of d_audio
   and d_video accordingly. From then on it reads only these streams.
   Of course the user can force choosing a specific stream with the
   -vid and -aid switches.
   A good example for this is DVD, where the English stream is not
   always the first, so every VOB can have a different language :)
   That's when we have to use, for example, the -aid 128 switch.
   Now, how does this reading work?
   - demuxer.c/demux_read_data() is called; it gets how many bytes to read,
     where to put them (memory address), and from which DS. The codecs
     call this.
   - It checks whether the given DS's buffer contains something; if so, it
     reads from there as much as needed. If there isn't enough, it calls
     ds_fill_buffer(), which:
   - checks whether the given DS has buffered packets (DP's); if so, it
     moves the oldest into the buffer and goes on reading. If the list is
     empty, it calls demux_fill_buffer():
   - this calls the parser for the input format, which reads the file
     onward and moves the packets it finds into their streams' buffers.
     If we'd like an audio packet but only a bunch of video packets are
     available, then sooner or later the
       DEMUXER: Too many (%d in %d bytes) audio packets in the buffer
     error shows up. A short sketch of the call from a codec's point of
     view follows this list.
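
   Sketch of the codec-side call (the demux_read_data() prototype is
   paraphrased from demuxer.h; the type is forward-declared only so the
   snippet stands alone):

      typedef struct demux_stream_s demux_stream_t;
      extern int demux_read_data(demux_stream_t *ds, unsigned char *mem, int len);

      /* ask for 4 bytes from the audio DS; demux_read_data() refills the DS
         buffer (ds_fill_buffer -> demux_fill_buffer) behind the scenes */
      static int read_audio_header(demux_stream_t *d_audio, unsigned char *buf)
      {
          int got = demux_read_data(d_audio, buf, 4);
          return got == 4;   /* anything less means EOF or a demuxing problem */
      }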
2.d. video.c: this file/function handles the reading and assembling of the
   video frames. Each call to video_read_frame() should read and return a
   single video frame and its duration in seconds (float).
   The implementation is split into 2 big parts - reading from mpeg-like
   streams and reading from one-frame-per-chunk files (avi, asf, mov).
   Then it calculates the duration, either from a fixed FPS value or from
   the PTS difference measured before and after reading the frame.
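
   A sketch of such a call (the prototype is paraphrased from video.c and
   may not match the header exactly, so check it before relying on it):

      typedef struct sh_video_s sh_video_t;
      extern int video_read_frame(sh_video_t *sh_video, float *frame_time,
                                  unsigned char **start, int force_fps);

      static int next_frame(sh_video_t *sh_video, unsigned char **start, float *dur)
      {
          int in_size = video_read_frame(sh_video, dur, start, 0 /* no forced fps */);
          return in_size;   /* size of the compressed frame, negative on EOF */
      }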
2.e. other utility functions: there is some useful code there, like the
   AVI muxer or the mp3 header parser, but let's leave them for now.
So everything is OK up to this point. It can all be found in the libmpdemux/
library. It should compile outside of the MPlayer tree; you just have to
implement a few simple functions, like mp_msg() to print messages, etc.
See libmpdemux/test.c for an example.

See also formats.txt for a description of common media file formats and
their implementation details in libmpdemux.
Now, let's go on:

3. mplayer.c - ooh, he's the boss :)
   Its main purpose is connecting the other modules and maintaining A/V
   sync.

   The given stream's current position is in the 'timer' field of the
   corresponding stream header (sh_audio / sh_video).
   The structure of the playing loop:

       while(not EOF) {
           fill audio buffer (read & decode audio) + increase a_frame
           read & decode a single video frame + increase v_frame
           sleep  (wait until a_frame>=v_frame)
           display the frame
           apply A-V PTS correction to a_frame
           handle events (keys, lirc etc.) -> pause, seek, ...
       }
   When playing (a/v), it increases these variables by the duration of the
   played a/v:
   - with audio this is played bytes / sh_audio->o_bps
     Note: i_bps = number of compressed bytes for one second of audio
           o_bps = number of uncompressed bytes for one second of audio
                   (this is = bps*samplerate*channels)
   - with video this is usually == 1.0/fps, but I have to note that fps
     doesn't really matter for video; for example asf doesn't have it,
     instead there is "duration", and it can change per frame.
     MPEG2 has "repeat_count", which delays the frame by 1-2.5 ...
     Maybe only AVI and MPEG1 have fixed fps.
   (A small sketch of these timer updates follows this list.)
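
   The timer updates just listed, spelled out as a sketch (the struct here
   is trimmed to the one field used; sh_audio->o_bps is the real field):

      typedef struct sh_audio_s { int o_bps; /* ...more fields... */ } sh_audio_t;

      /* audio: convert the bytes just played into seconds */
      static double audio_increment(sh_audio_t *sh_audio, int played_bytes)
      {
          return played_bytes / (double)sh_audio->o_bps;
      }

      /* video: 1.0/fps for fixed-fps formats, otherwise the per-frame duration
         the container reports (asf "duration", MPEG2 repeat_count, ...) */
      static double video_increment(double fps, double frame_duration)
      {
          return frame_duration > 0 ? frame_duration : 1.0 / fps;
      }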
   So everything works fine as long as the audio and video are in perfect
   sync, since the audio runs on its own and provides the timing, and once
   the time of a frame has passed, the next frame is displayed.
   But what if these two aren't synchronized in the input file?
   PTS correction kicks in. The input demuxers read the PTS (presentation
   timestamp) of the packets, and from it we can see whether the streams
   are synchronized. Then MPlayer can correct a_frame, within a given
   maximal bound (see the -mc option). The running sum of the corrections
   can be found in c_total.
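
   A sketch of the bounded correction (names follow the text above; the
   max_pts_correction parameter stands for the -mc value):

      static float correct_av(float a_pts, float v_pts, float max_pts_correction,
                              float *a_frame, float *c_total)
      {
          float delta = a_pts - v_pts;                /* measured A-V difference    */
          if (delta >  max_pts_correction) delta =  max_pts_correction;
          if (delta < -max_pts_correction) delta = -max_pts_correction;
          *a_frame += delta;                          /* nudge the audio timer      */
          *c_total += delta;                          /* running sum of corrections */
          return delta;
      }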
   Of course this is not everything; several things still complicate it.
   For example the soundcard's delay, which has to be corrected by
   MPlayer! The audio delay is the sum of all of these:
   - bytes read since the last timestamp:
     t1 = d_audio->pts_bytes/sh_audio->i_bps
   - if Win32/ACM, the bytes stored in the audio input buffer:
     t2 = a_in_buffer_len/sh_audio->i_bps
   - uncompressed bytes in the audio output buffer:
     t3 = a_buffer_len/sh_audio->o_bps
   - not yet played bytes stored in the soundcard's (or DMA's) buffer:
     t4 = get_audio_delay()/sh_audio->o_bps
   (These four terms are added up in the sketch after this list.)
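
   The four terms added up, as a sketch (struct shapes are trimmed to the
   fields used here; the real computation in mplayer.c uses its own
   variables):

      typedef struct demux_stream_s { int pts_bytes;    /* ... */ } demux_stream_t;
      typedef struct sh_audio_s     { int i_bps, o_bps; /* ... */ } sh_audio_t;

      static float audio_delay_seconds(demux_stream_t *d_audio, sh_audio_t *sh_audio,
                                       int a_in_buffer_len, /* ACM input buffer      */
                                       int a_buffer_len,    /* decoded output buffer */
                                       int card_buffered)   /* get_audio_delay()     */
      {
          return d_audio->pts_bytes / (float)sh_audio->i_bps    /* t1 */
               + a_in_buffer_len    / (float)sh_audio->i_bps    /* t2 */
               + a_buffer_len       / (float)sh_audio->o_bps    /* t3 */
               + card_buffered      / (float)sh_audio->o_bps;   /* t4 */
      }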
   From this we can calculate what PTS we need for the audio that has just
   been played, and after comparing this with the video's PTS, we have
   the difference!
   Life didn't get simpler with AVI. There's the "official" timing
   method, the BPS-based one: the header contains how many compressed
   audio bytes or chunks belong to one second of frames.
   In the AVI stream header there are 2 important fields, dwSampleSize
   and the dwRate/dwScale pair:
   - If dwSampleSize is 0, then it's a VBR stream, so its bitrate isn't
     constant. It means that 1 chunk stores 1 sample, and dwRate/dwScale
     gives the chunks/sec value.
   - If dwSampleSize is >0, then it's constant bitrate, and the time can
     be measured this way: time = (bytepos/dwSampleSize) / (dwRate/dwScale)
     (so the sample's number is divided by the samplerate). Now the audio
     can be handled as a stream which can be cut into chunks, but it can
     also be a single chunk.
   (Both cases are spelled out in the sketch after this list.)
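
   Both AVI timing cases in one sketch (dwSampleSize, dwRate and dwScale
   come from the AVI stream header; chunk_no and bytepos describe the
   position being timed):

      static double avi_audio_time(unsigned dwSampleSize, unsigned dwRate,
                                   unsigned dwScale, long chunk_no, long bytepos)
      {
          double rate = (double)dwRate / (double)dwScale;  /* chunks or samples per second */
          if (dwSampleSize == 0)
              return chunk_no / rate;                      /* VBR: 1 chunk == 1 sample  */
          return (bytepos / (double)dwSampleSize) / rate;  /* CBR: sample number / rate */
      }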
   The other method can be used only for interleaved files: a timestamp
   (PTS) value can be calculated from the order of the chunks.
   The PTS of the video chunks is simple: chunk number / fps (i.e. the
   chunk number times the frame duration).
   An audio chunk gets the same PTS as the preceding video chunk.
   We also have to pay attention to the so-called "audio preload", that is,
   there is a delay between the audio and video streams. This is usually
   0.5-1.0 sec, but it can be completely different.
   The exact value used to be measured by hand, but now demux_avi.c
   handles it: at the first audio chunk after the first video chunk, it
   calculates the A/V difference and takes this as the audio preload.
3.a. audio playback:

   Some words on audio playback:
   It's not the playing that is hard, but:
   1. knowing when to write into the buffer without blocking
   2. knowing how much of what we wrote has been played
   The first is needed for audio decoding and for keeping the buffer full
   (so the audio will never skip). The second is needed for correct
   timing, because some soundcards delay by as much as 3-7 seconds, which
   can't be ignored.
   To solve this, OSS gives several possibilities:
   - ioctl(SNDCTL_DSP_GETODELAY): tells how many unplayed bytes are in
     the soundcard's buffer -> perfect for timing, but not all drivers
     support it :(
   - ioctl(SNDCTL_DSP_GETOSPACE): tells how much we can write into the
     soundcard's buffer without blocking. If the driver doesn't support
     GETODELAY, we can use this to work out the delay.
   - select(): should tell whether we can write into the buffer without
     blocking. Unfortunately it doesn't say how much we could write :((
     Also, it doesn't work (or works badly) with some drivers.
     It is only used if none of the above works.
   (See the sketch after this list for how the two ioctl()s are used.)
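
   The two ioctl()s above as they are typically used with OSS (these are
   standard OSS calls; error handling trimmed for brevity):

      #include <sys/ioctl.h>
      #include <sys/soundcard.h>

      /* bytes queued in the card that have not been played yet */
      static int oss_buffered_bytes(int fd)
      {
          int unplayed = 0;
          if (ioctl(fd, SNDCTL_DSP_GETODELAY, &unplayed) != -1)
              return unplayed;

          /* fallback: total buffer size minus the free space reported by GETOSPACE */
          {
              audio_buf_info info;
              if (ioctl(fd, SNDCTL_DSP_GETOSPACE, &info) != -1)
                  return info.fragstotal * info.fragsize - info.bytes;
          }
          return -1;   /* neither ioctl is supported by this driver */
      }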
4. Codecs. These consist of libmpcodecs/* and separate files or libs,
   for example liba52, libmpeg2, xa/*, alaw.c, opendivx/*, loader, mp3lib.

   mplayer.c doesn't call them directly, but through the dec_audio.c and
   dec_video.c files, so mplayer.c doesn't have to know anything about
   the codecs.
   libmpcodecs contains a wrapper for every codec; some of them include the
   codec function implementation itself, some call functions from other
   files included with MPlayer, and some call optional external libraries.

   File naming convention in libmpcodecs:
   ad_*.c - audio decoder (called through dec_audio.c)
   vd_*.c - video decoder (called through dec_video.c)
   ve_*.c - video encoder (used by mencoder)
   vf_*.c - video filter (see option -vf)
   On this topic, see also:
   libmpcodecs.txt - the structure of the codec-filter path, with explanation
   dr-methods.txt  - direct rendering, MPI buffer management for video codecs
   codecs.conf.txt - how to write/edit the codec configuration file (codecs.conf)
   codec-devel.txt - Mike's hints about codec development - a bit OUTDATED
   hwac3.txt       - about S/PDIF audio passthrough
5. libvo: this displays the frames.

   For details on this, read libvo.txt.
6. libao2: this controls audio playback.

6.a audio plugins

   For details on these, read libao2.txt.