mpv/player/playloop.c

/*
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with mpv. If not, see <http://www.gnu.org/licenses/>.
 */

#include <assert.h>
#include <inttypes.h>
#include <math.h>
#include <stdbool.h>
#include <stddef.h>
#include "client.h"
#include "command.h"
#include "config.h"
#include "core.h"
#include "mpv_talloc.h"
#include "screenshot.h"
#include "audio/out/ao.h"
#include "common/common.h"
#include "common/encode.h"
#include "common/msg.h"
#include "common/playlist.h"
#include "common/recorder.h"
#include "common/stats.h"
#include "demux/demux.h"
#include "filters/f_decoder_wrapper.h"
#include "filters/filter_internal.h"
#include "input/input.h"
#include "misc/dispatch.h"
#include "options/m_config_frontend.h"
#include "options/m_property.h"
#include "options/options.h"
#include "osdep/terminal.h"
#include "osdep/timer.h"
#include "stream/stream.h"
#include "sub/dec_sub.h"
#include "sub/osd.h"
#include "video/out/vo.h"
// Wait until mp_wakeup_core() is called, since the last time
// mp_wait_events() was called.
void mp_wait_events(struct MPContext *mpctx)
{
    mp_client_send_property_changes(mpctx);
    stats_event(mpctx->stats, "iterations");
    bool sleeping = mpctx->sleeptime > 0;
    if (sleeping)
        MP_STATS(mpctx, "start sleep");
    mp_dispatch_queue_process(mpctx->dispatch, mpctx->sleeptime);
    mpctx->sleeptime = INFINITY;
    if (sleeping)
        MP_STATS(mpctx, "end sleep");
}
// Set the timeout used when the playloop goes to sleep. This means the
// playloop will re-run as soon as the timeout elapses (or earlier).
// mp_set_timeout(c, 0) is essentially equivalent to mp_wakeup_core(c).
void mp_set_timeout(struct MPContext *mpctx, double sleeptime)
{
    if (mpctx->sleeptime > sleeptime) {
        mpctx->sleeptime = sleeptime;
        int64_t abstime = mp_add_timeout(mp_time_us(), sleeptime);
        mp_dispatch_adjust_timeout(mpctx->dispatch, abstime);
    }
}
// Cause the playloop to run. This can be called from any thread. If called
// from within the playloop itself, it will be run immediately again, instead
// of going to sleep in the next mp_wait_events().
void mp_wakeup_core(struct MPContext *mpctx)
{
    mp_dispatch_interrupt(mpctx->dispatch);
}
// Opaque callback variant of mp_wakeup_core().
void mp_wakeup_core_cb(void *ctx)
{
    struct MPContext *mpctx = ctx;
    mp_wakeup_core(mpctx);
}
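// Lock/unlock the playback core for access from another thread. This is built
// on the dispatch queue's lock, so the playloop does not run concurrently
// while the lock is held.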
void mp_core_lock(struct MPContext *mpctx)
{
    mp_dispatch_lock(mpctx->dispatch);
}
void mp_core_unlock(struct MPContext *mpctx)
{
    mp_dispatch_unlock(mpctx->dispatch);
}
// Process any queued user input.
static void mp_process_input(struct MPContext *mpctx)
{
    int processed = 0;
    for (;;) {
        mp_cmd_t *cmd = mp_input_read_cmd(mpctx->input);
        if (!cmd)
            break;
        run_command(mpctx, cmd, NULL, NULL, NULL);
        processed = 1;
    }
    mp_set_timeout(mpctx, mp_input_get_delay(mpctx->input));
    if (processed)
        mp_notify(mpctx, MP_EVENT_INPUT_PROCESSED, NULL);
}
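// Return the time in seconds that has passed since the previous call, and
// update the internal reference timestamp.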
double get_relative_time(struct MPContext *mpctx)
{
    int64_t new_time = mp_time_us();
    int64_t delta = new_time - mpctx->last_time;
    mpctx->last_time = new_time;
    return delta * 0.000001;
}
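// Recompute whether playback is actively progressing (not paused, restart
// complete, not stopping, not at EOF). If the state changed, update the
// screensaver state and notify clients via MP_EVENT_CORE_IDLE.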
void update_core_idle_state(struct MPContext *mpctx)
{
    bool eof = mpctx->video_status == STATUS_EOF &&
               mpctx->audio_status == STATUS_EOF;
    bool active = !mpctx->paused && mpctx->restart_complete &&
                  !mpctx->stop_play && mpctx->in_playloop && !eof;
    if (mpctx->playback_active != active) {
        mpctx->playback_active = active;
        update_screensaver_state(mpctx);
        mp_notify(mpctx, MP_EVENT_CORE_IDLE, NULL);
    }
}
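// Effective pause state: paused by the user, or paused to wait for the cache.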
bool get_internal_paused(struct MPContext *mpctx)
{
    return mpctx->opts->pause || mpctx->paused_for_cache;
}
// The value passed here is the new value for mpctx->opts->pause
void set_pause_state(struct MPContext *mpctx, bool user_pause)
{
    struct MPOpts *opts = mpctx->opts;
    opts->pause = user_pause;
    bool internal_paused = get_internal_paused(mpctx);
    if (internal_paused != mpctx->paused) {
        mpctx->paused = internal_paused;
        if (mpctx->ao)
            ao_set_paused(mpctx->ao, internal_paused);
        if (mpctx->video_out)
            vo_set_paused(mpctx->video_out, internal_paused);
        mpctx->osd_function = 0;
        mpctx->osd_force_update = true;
        mp_wakeup_core(mpctx);
        if (internal_paused) {
            mpctx->step_frames = 0;
            mpctx->time_frame -= get_relative_time(mpctx);
        } else {
            (void)get_relative_time(mpctx); // ignore time that passed during pause
        }
    }
    update_core_idle_state(mpctx);
    m_config_notify_change_opt_ptr(mpctx->mconfig, &opts->pause);
}
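// Re-derive the effective pause state from the user pause option and the
// cache state, e.g. after paused_for_cache has changed.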
void update_internal_pause_state(struct MPContext *mpctx)
{
    set_pause_state(mpctx, mpctx->opts->pause);
}
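// Tell the VO (asynchronously) whether to re-enable the screensaver: allowed
// when playback is inactive or --stop-screensaver is disabled, unless the
// option is set to always stop it.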
void update_screensaver_state(struct MPContext *mpctx)
{
    if (!mpctx->video_out)
        return;
    bool saver_state = (!mpctx->playback_active || !mpctx->opts->stop_screensaver) &&
                       mpctx->opts->stop_screensaver != 2;
    vo_control_async(mpctx->video_out, saver_state ? VOCTRL_RESTORE_SCREENSAVER
                                                   : VOCTRL_KILL_SCREENSAVER, NULL);
}
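// Step one frame forward (unpause so the next frame gets rendered) or
// backward (exact backstep seek, then pause).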
void add_step_frame(struct MPContext *mpctx, int dir)
{
    if (!mpctx->vo_chain)
        return;
    if (dir > 0) {
        mpctx->step_frames += 1;
        set_pause_state(mpctx, false);
    } else if (dir < 0) {
        if (!mpctx->hrseek_active) {
            queue_seek(mpctx, MPSEEK_BACKSTEP, 0, MPSEEK_VERY_EXACT, 0);
            set_pause_state(mpctx, true);
        }
    }
}
// Clear some playback-related fields on file loading or after seeks.
void reset_playback_state(struct MPContext *mpctx)
{
    mp_filter_reset(mpctx->filter_root);
    reset_video_state(mpctx);
    reset_audio_state(mpctx);
    reset_subtitle_state(mpctx);
    for (int n = 0; n < mpctx->num_tracks; n++) {
        struct track *t = mpctx->tracks[n];
        // (Often, but not always, this is redundant and also done elsewhere.)
        if (t->dec)
            mp_decoder_wrapper_set_play_dir(t->dec, mpctx->play_dir);
        if (t->d_sub)
            sub_set_play_dir(t->d_sub, mpctx->play_dir);
    }
    // May need unpause first
    if (mpctx->paused_for_cache)
        update_internal_pause_state(mpctx);
    mpctx->hrseek_active = false;
    mpctx->hrseek_lastframe = false;
    mpctx->hrseek_backstep = false;
    mpctx->current_seek = (struct seek_params){0};
    mpctx->playback_pts = MP_NOPTS_VALUE;
    mpctx->step_frames = 0;
    mpctx->ab_loop_clip = true;
    mpctx->restart_complete = false;
    mpctx->paused_for_cache = false;
    mpctx->cache_buffer = 100;
    mpctx->cache_update_pts = MP_NOPTS_VALUE;
    encode_lavc_discontinuity(mpctx->encode_lavc_ctx);
    update_internal_pause_state(mpctx);
    update_core_idle_state(mpctx);
}
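// Execute a seek request: translate it into a demuxer seek target and flags
// and perform the demuxer-level seek. For exact seeks, the demuxer target is
// moved slightly before the requested position so that decoding can skip up
// to the precise target (hr-seek).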
static void mp_seek(MPContext *mpctx, struct seek_params seek)
{
    struct MPOpts *opts = mpctx->opts;
    if (!mpctx->demuxer || !seek.type || seek.amount == MP_NOPTS_VALUE)
        return;
    bool hr_seek_very_exact = seek.exact == MPSEEK_VERY_EXACT;
    double current_time = get_playback_time(mpctx);
    if (current_time == MP_NOPTS_VALUE && seek.type == MPSEEK_RELATIVE)
        return;
    if (current_time == MP_NOPTS_VALUE)
        current_time = 0;
    double seek_pts = MP_NOPTS_VALUE;
    int demux_flags = 0;
    switch (seek.type) {
    case MPSEEK_ABSOLUTE:
        seek_pts = seek.amount;
        break;
    case MPSEEK_BACKSTEP:
        seek_pts = current_time;
        hr_seek_very_exact = true;
        break;
    case MPSEEK_RELATIVE:
        demux_flags = seek.amount > 0 ? SEEK_FORWARD : 0;
        seek_pts = current_time + seek.amount;
        break;
    case MPSEEK_FACTOR: ;
        double len = get_time_length(mpctx);
        if (len >= 0)
            seek_pts = seek.amount * len;
        break;
    default: MP_ASSERT_UNREACHABLE();
    }
    double demux_pts = seek_pts;
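    // Decide whether to use a precise (hr-) seek, based on the requested
    // exactness, the --hr-seek option, and whether a (non-sparse) video chain
    // exists.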
    bool hr_seek = seek.exact != MPSEEK_KEYFRAME && seek_pts != MP_NOPTS_VALUE &&
        (seek.exact >= MPSEEK_EXACT || opts->hr_seek == 1 ||
         (opts->hr_seek >= 0 && seek.type == MPSEEK_ABSOLUTE) ||
         (opts->hr_seek == 2 && (!mpctx->vo_chain || mpctx->vo_chain->is_sparse)));
    if (seek.type == MPSEEK_FACTOR || seek.amount < 0 ||
        (seek.type == MPSEEK_ABSOLUTE && seek.amount < mpctx->last_chapter_pts))
        mpctx->last_chapter_seek = -2;
    // Under certain circumstances, prefer SEEK_FACTOR.
    if (seek.type == MPSEEK_FACTOR && !hr_seek &&
        (mpctx->demuxer->ts_resets_possible || seek_pts == MP_NOPTS_VALUE))
    {
        demux_pts = seek.amount;
        demux_flags |= SEEK_FACTOR;
    }
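    // Negative play direction: request backward demuxing from the demuxer layer.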
int play_dir = opts->play_dir;
if (play_dir < 0)
demux_flags |= SEEK_SATAN;
2016-02-28 18:43:07 +00:00
if (hr_seek) {
double hr_seek_offset = opts->hr_seek_demuxer_offset;
// Always try to compensate for possibly bad demuxers in "special"
// situations where we need more robustness from the hr-seek code, even
// if the user doesn't use --hr-seek-demuxer-offset.
// The value is arbitrary, but should be "good enough" in most situations.
if (hr_seek_very_exact)
hr_seek_offset = MPMAX(hr_seek_offset, 0.5); // arbitrary
for (int n = 0; n < mpctx->num_tracks; n++) {
double offset = 0;
if (!mpctx->tracks[n]->is_external)
offset += get_track_seek_offset(mpctx, mpctx->tracks[n]);
hr_seek_offset = MPMAX(hr_seek_offset, -offset);
}
demux_pts -= hr_seek_offset * play_dir;
demux_flags = (demux_flags | SEEK_HR) & ~SEEK_FORWARD;
// For HR seeks in backward playback mode, the correct seek rounding
// direction is forward instead of backward.
if (play_dir < 0)
demux_flags |= SEEK_FORWARD;
}
if (!mpctx->demuxer->seekable)
demux_flags |= SEEK_CACHED;
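// Block new packet reads until the playback state has been reset below;
// reading is unblocked again via demux_block_reading().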
demux_flags |= SEEK_BLOCK;
if (!demux_seek(mpctx->demuxer, demux_pts, demux_flags)) {
if (!mpctx->demuxer->seekable) {
MP_ERR(mpctx, "Cannot seek in this stream.\n");
MP_ERR(mpctx, "You can force it with '--force-seekable=yes'.\n");
}
return;
}
mpctx->play_dir = play_dir;
// Seek external, extra files too:
for (int t = 0; t < mpctx->num_tracks; t++) {
struct track *track = mpctx->tracks[t];
if (track->selected && track->is_external && track->demuxer) {
double main_new_pos = demux_pts;
if (!hr_seek || track->is_external)
main_new_pos += get_track_seek_offset(mpctx, track);
if (demux_flags & SEEK_FACTOR)
main_new_pos = seek_pts;
demux_seek(track->demuxer, main_new_pos,
demux_flags & (SEEK_SATAN | SEEK_BLOCK));
}
}
if (!(seek.flags & MPSEEK_FLAG_NOFLUSH))
clear_audio_output_buffers(mpctx);
reset_playback_state(mpctx);
if (mpctx->recorder)
mp_recorder_mark_discontinuity(mpctx->recorder);
demux_block_reading(mpctx->demuxer, false);
for (int t = 0; t < mpctx->num_tracks; t++) {
struct track *track = mpctx->tracks[t];
if (track->selected && track->demuxer)
demux_block_reading(track->demuxer, false);
}
/* Use the target time as "current position" for further relative
* seeks etc until a new video frame has been decoded */
mpctx->last_seek_pts = seek_pts;
if (hr_seek) {
mpctx->hrseek_active = true;
mpctx->hrseek_backstep = seek.type == MPSEEK_BACKSTEP;
mpctx->hrseek_pts = seek_pts * mpctx->play_dir;
// allow decoder to drop frames before hrseek_pts
bool hrseek_framedrop = !hr_seek_very_exact && opts->hr_seek_framedrop;
MP_VERBOSE(mpctx, "hr-seek, skipping to %f%s%s\n", mpctx->hrseek_pts,
hrseek_framedrop ? "" : " (no framedrop)",
mpctx->hrseek_backstep ? " (backstep)" : "");
for (int n = 0; n < mpctx->num_tracks; n++) {
struct track *track = mpctx->tracks[n];
struct mp_decoder_wrapper *dec = track->dec;
if (dec && hrseek_framedrop)
mp_decoder_wrapper_set_start_pts(dec, mpctx->hrseek_pts);
}
}
if (mpctx->stop_play == AT_END_OF_FILE)
mpctx->stop_play = KEEP_PLAYING;
mpctx->start_timestamp = mp_time_sec();
mp_wakeup_core(mpctx);
mp_notify(mpctx, MPV_EVENT_SEEK, NULL);
mp_notify(mpctx, MPV_EVENT_TICK, NULL);
update_ab_loop_clip(mpctx);
mpctx->current_seek = seek;
}
// This combines consecutive seek requests.
void queue_seek(struct MPContext *mpctx, enum seek_type type, double amount,
enum seek_precision exact, int flags)
{
struct seek_params *seek = &mpctx->seek;
mp_wakeup_core(mpctx);
if (mpctx->stop_play == AT_END_OF_FILE)
mpctx->stop_play = KEEP_PLAYING;
switch (type) {
case MPSEEK_RELATIVE:
seek->flags |= flags;
if (seek->type == MPSEEK_FACTOR)
return; // Well... not common enough to bother doing better
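// Coalesce: accumulate relative amounts and keep the most precise mode.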
seek->amount += amount;
seek->exact = MPMAX(seek->exact, exact);
if (seek->type == MPSEEK_NONE)
seek->exact = exact;
if (seek->type == MPSEEK_ABSOLUTE)
return;
seek->type = MPSEEK_RELATIVE;
return;
case MPSEEK_ABSOLUTE:
case MPSEEK_FACTOR:
case MPSEEK_BACKSTEP:
*seek = (struct seek_params) {
.type = type,
.amount = amount,
.exact = exact,
.flags = flags,
};
return;
case MPSEEK_NONE:
*seek = (struct seek_params){ 0 };
return;
}
MP_ASSERT_UNREACHABLE();
}
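// Execute a seek queued with queue_seek(), unless it should be delayed so
// that a frame from a previous seek can be displayed first.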
void execute_queued_seek(struct MPContext *mpctx)
{
if (mpctx->seek.type) {
bool queued_hr_seek = mpctx->seek.exact != MPSEEK_KEYFRAME;
// Let explicitly imprecise seeks cancel precise seeks:
if (mpctx->hrseek_active && !queued_hr_seek)
mpctx->start_timestamp = -1e9;
// If the user seeks continuously (keeps arrow key down) try to finish
// showing a frame from one location before doing another seek (instead
// of never updating the screen).
if ((mpctx->seek.flags & MPSEEK_FLAG_DELAY) &&
mp_time_sec() - mpctx->start_timestamp < 0.3)
{
// Wait until a video frame is available and has been shown.
if (mpctx->video_status < STATUS_PLAYING)
return;
// On A/V hr-seeks, always wait for the full result, to avoid corner
// cases when seeking past EOF (we want it to determine that EOF
// actually happened, instead of overwriting it with the new seek).
if (mpctx->hrseek_active && queued_hr_seek && mpctx->vo_chain &&
mpctx->ao_chain && !mpctx->restart_complete)
return;
}
mp_seek(mpctx, mpctx->seek);
mpctx->seek = (struct seek_params){0};
}
}
// NOPTS (i.e. <0) if unknown
double get_time_length(struct MPContext *mpctx)
{
struct demuxer *demuxer = mpctx->demuxer;
return demuxer && demuxer->duration >= 0 ? demuxer->duration : MP_NOPTS_VALUE;
}
// Return approximate PTS of first frame played. This can be completely wrong
// for a number of reasons in a number of situations.
double get_start_time(struct MPContext *mpctx, int dir)
{
double res = 0;
if (mpctx->demuxer) {
if (!mpctx->opts->rebase_start_time)
res += mpctx->demuxer->start_time;
if (dir < 0)
res += MPMAX(mpctx->demuxer->duration, 0);
}
return res;
}
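// Current position in seconds (in the user/forward time domain), or the last
// seek target if no playback PTS is known yet. NOPTS if nothing is loaded.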
double get_current_time(struct MPContext *mpctx)
{
if (!mpctx->demuxer)
return MP_NOPTS_VALUE;
if (mpctx->playback_pts != MP_NOPTS_VALUE)
return mpctx->playback_pts * mpctx->play_dir;
return mpctx->last_seek_pts;
}
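// Like get_current_time(), but clamp the fallback seek time to the known
// duration while a seek is still in progress.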
double get_playback_time(struct MPContext *mpctx)
{
double cur = get_current_time(mpctx);
// During seeking, the time corresponds to the last seek time - apply some
// cosmetics to it.
if (cur != MP_NOPTS_VALUE && mpctx->playback_pts == MP_NOPTS_VALUE) {
double length = get_time_length(mpctx);
if (length >= 0)
cur = MPCLAMP(cur, 0, length);
}
return cur;
}
// Return playback position in 0.0-1.0 ratio, or -1 if unknown.
double get_current_pos_ratio(struct MPContext *mpctx, bool use_range)
{
struct demuxer *demuxer = mpctx->demuxer;
if (!demuxer)
return -1;
double ans = -1;
double start = 0;
double len = get_time_length(mpctx);
if (use_range) {
double startpos = get_play_start_pts(mpctx);
double endpos = get_play_end_pts(mpctx);
if (endpos > MPMAX(0, len))
endpos = MPMAX(0, len);
if (endpos < startpos)
endpos = startpos;
start = startpos;
len = endpos - startpos;
}
double pos = get_current_time(mpctx);
if (len > 0)
ans = MPCLAMP((pos - start) / len, 0, 1);
if (ans < 0) {
int64_t size = demuxer->filesize;
if (size > 0 && demuxer->filepos >= 0)
ans = MPCLAMP(demuxer->filepos / (double)size, 0, 1);
}
if (use_range) {
if (mpctx->opts->play_frames > 0)
ans = MPMAX(ans, 1.0 -
mpctx->max_frames / (double) mpctx->opts->play_frames);
}
return ans;
}
// 0-100, -1 if unknown
int get_percent_pos(struct MPContext *mpctx)
{
double pos = get_current_pos_ratio(mpctx, false);
return pos < 0 ? -1 : (int)round(pos * 100);
}
// -2 is no chapters, -1 is before first chapter
int get_current_chapter(struct MPContext *mpctx)
{
if (!mpctx->num_chapters)
return -2;
double current_pts = get_current_time(mpctx);
int i;
for (i = 0; i < mpctx->num_chapters; i++)
if (current_pts < mpctx->chapters[i].pts)
break;
return MPMAX(mpctx->last_chapter_seek, i - 1);
}
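// Return a talloc'd display string for the chapter, e.g. "(2) Title",
// or "(2) of 5" if the chapter has no title.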
char *chapter_display_name(struct MPContext *mpctx, int chapter)
{
char *name = chapter_name(mpctx, chapter);
char *dname = NULL;
if (name) {
dname = talloc_asprintf(NULL, "(%d) %s", chapter + 1, name);
} else if (chapter < -1) {
dname = talloc_strdup(NULL, "(unavailable)");
} else {
int chapter_count = get_chapter_count(mpctx);
if (chapter_count <= 0)
dname = talloc_asprintf(NULL, "(%d)", chapter + 1);
else
dname = talloc_asprintf(NULL, "(%d) of %d", chapter + 1,
chapter_count);
}
return dname;
}
// returns NULL if chapter name unavailable
char *chapter_name(struct MPContext *mpctx, int chapter)
{
if (chapter < 0 || chapter >= mpctx->num_chapters)
return NULL;
return mp_tags_get_str(mpctx->chapters[chapter].metadata, "title");
}
// returns the start of the chapter in seconds (NOPTS if unavailable)
double chapter_start_time(struct MPContext *mpctx, int chapter)
{
if (chapter == -1)
return 0;
if (chapter >= 0 && chapter < mpctx->num_chapters)
return mpctx->chapters[chapter].pts;
return MP_NOPTS_VALUE;
}
int get_chapter_count(struct MPContext *mpctx)
{
return mpctx->num_chapters;
}
// If the current playback position (or seek target) falls before the B
// position, actually make playback loop when reaching the B point. The
// intention is that you can seek out of the ab-loop range.
void update_ab_loop_clip(struct MPContext *mpctx)
{
double pts = get_current_time(mpctx);
double ab[2];
mpctx->ab_loop_clip = pts != MP_NOPTS_VALUE &&
get_ab_loop_times(mpctx, ab) &&
pts * mpctx->play_dir <= ab[1] * mpctx->play_dir;
}
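// Redraw the OSD on demand, but not if it will be redrawn as part of normal
// video display anyway, and not immediately during seeks (too slow).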
static void handle_osd_redraw(struct MPContext *mpctx)
{
if (!mpctx->video_out || !mpctx->video_out->config_ok)
return;
// If we're playing normally, let OSD be redrawn naturally as part of
// video display.
if (!mpctx->paused) {
if (mpctx->sleeptime < 0.1 && mpctx->video_status == STATUS_PLAYING)
return;
}
// Don't redraw immediately during a seek (makes it significantly slower).
bool use_video = mpctx->vo_chain && !mpctx->vo_chain->is_sparse;
if (use_video && mp_time_sec() - mpctx->start_timestamp < 0.1) {
mp_set_timeout(mpctx, 0.1);
return;
}
bool want_redraw = osd_query_and_reset_want_redraw(mpctx->osd) ||
vo_want_redraw(mpctx->video_out);
if (!want_redraw)
return;
vo_redraw(mpctx->video_out);
}
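// Clear AO/VO underrun flags and wake up the core if any of them was set.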
static void clear_underruns(struct MPContext *mpctx)
{
if (mpctx->ao_chain && mpctx->ao_chain->underrun) {
mpctx->ao_chain->underrun = false;
mp_wakeup_core(mpctx);
}
if (mpctx->vo_chain && mpctx->vo_chain->underrun) {
mpctx->vo_chain->underrun = false;
mp_wakeup_core(mpctx);
}
}
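// Query the demuxer cache state and decide whether playback should pause
// for rebuffering (--cache-pause and --cache-pause-initial handling).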
static void handle_update_cache(struct MPContext *mpctx)
{
bool force_update = false;
struct MPOpts *opts = mpctx->opts;
if (!mpctx->demuxer || mpctx->encode_lavc_ctx) {
clear_underruns(mpctx);
return;
}
double now = mp_time_sec();
struct demux_reader_state s;
demux_get_reader_state(mpctx->demuxer, &s);
mpctx->demux_underrun |= s.underrun;
int cache_buffer = 100;
bool use_pause_on_low_cache = opts->cache_pause && mpctx->play_dir > 0;
if (!mpctx->restart_complete) {
// Audio or video is restarting, and initial buffering is enabled. Make
// sure we actually restart them in paused mode, so no audio gets
// dropped and video technically doesn't start yet.
use_pause_on_low_cache &= opts->cache_pause_initial &&
(mpctx->video_status == STATUS_READY ||
mpctx->audio_status == STATUS_READY);
}
bool is_low = use_pause_on_low_cache && !s.idle &&
s.ts_duration < opts->cache_pause_wait;
// Enter buffering state only if there actually was an underrun (or if
// initial caching before playback restart is used).
bool need_wait = is_low;
if (is_low && !mpctx->paused_for_cache && mpctx->restart_complete) {
// Wait only if an output underrun was registered. (Or if there is no
// underrun detection.)
bool output_underrun = false;
if (mpctx->ao_chain)
output_underrun |= mpctx->ao_chain->underrun;
if (mpctx->vo_chain)
output_underrun |= mpctx->vo_chain->underrun;
        // Output underruns can be sporadic (unrelated to the demuxer buffer
        // state, e.g. caused by slow decoding), so additionally require a
        // past demuxer underrun as indication that the output underrun was
        // likely caused by the demuxer running out of data.
need_wait = mpctx->demux_underrun && output_underrun;
}
// Let the underrun flag "stick" around until the cache has fully recovered.
// See logic where demux_underrun is used.
if (!is_low)
mpctx->demux_underrun = false;
if (mpctx->paused_for_cache != need_wait) {
mpctx->paused_for_cache = need_wait;
update_internal_pause_state(mpctx);
force_update = true;
if (mpctx->paused_for_cache)
mpctx->cache_stop_time = now;
}
if (!mpctx->paused_for_cache)
clear_underruns(mpctx);
if (mpctx->paused_for_cache) {
cache_buffer =
100 * MPCLAMP(s.ts_duration / opts->cache_pause_wait, 0, 0.99);
mp_set_timeout(mpctx, 0.2);
}
// Also update cache properties.
bool busy = !s.idle;
if (fabs(mpctx->cache_update_pts - mpctx->playback_pts) >= 1.0)
busy = true;
if (busy || mpctx->next_cache_update > 0) {
if (mpctx->next_cache_update <= now) {
mpctx->next_cache_update = busy ? now + 0.25 : 0;
force_update = true;
}
if (mpctx->next_cache_update > 0)
mp_set_timeout(mpctx, mpctx->next_cache_update - now);
}
if (mpctx->cache_buffer != cache_buffer) {
if ((mpctx->cache_buffer == 100) != (cache_buffer == 100)) {
if (cache_buffer < 100) {
MP_VERBOSE(mpctx, "Enter buffering (buffer went from %d%% -> %d%%) [%fs].\n",
mpctx->cache_buffer, cache_buffer, s.ts_duration);
} else {
double t = now - mpctx->cache_stop_time;
MP_VERBOSE(mpctx, "End buffering (waited %f secs) [%fs].\n",
t, s.ts_duration);
}
} else {
MP_VERBOSE(mpctx, "Still buffering (buffer went from %d%% -> %d%%) [%fs].\n",
mpctx->cache_buffer, cache_buffer, s.ts_duration);
}
mpctx->cache_buffer = cache_buffer;
force_update = true;
}
if (s.eof && !busy)
prefetch_next(mpctx);
if (force_update) {
mpctx->cache_update_pts = mpctx->playback_pts;
mp_notify(mpctx, MP_EVENT_CACHE_UPDATE, NULL);
}
}
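// Return the current cache buffering state as a percentage (0-100), or -1 if
// no demuxer is active.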
int get_cache_buffering_percentage(struct MPContext *mpctx)
{
return mpctx->demuxer ? mpctx->cache_buffer : -1;
}
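// Hide the mouse cursor after the configured period of inactivity, honoring
// the special cases (always visible, always hidden, hide in fullscreen only).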
static void handle_cursor_autohide(struct MPContext *mpctx)
{
struct MPOpts *opts = mpctx->opts;
struct vo *vo = mpctx->video_out;
if (!vo)
return;
bool mouse_cursor_visible = mpctx->mouse_cursor_visible;
double now = mp_time_sec();
unsigned mouse_event_ts = mp_input_get_mouse_event_counter(mpctx->input);
if (mpctx->mouse_event_ts != mouse_event_ts) {
mpctx->mouse_event_ts = mouse_event_ts;
mpctx->mouse_timer = now + opts->cursor_autohide_delay / 1000.0;
mouse_cursor_visible = true;
}
if (mpctx->mouse_timer > now) {
mp_set_timeout(mpctx, mpctx->mouse_timer - now);
} else {
mouse_cursor_visible = false;
}
if (opts->cursor_autohide_delay == -1)
mouse_cursor_visible = true;
if (opts->cursor_autohide_delay == -2)
mouse_cursor_visible = false;
if (opts->cursor_autohide_fs && !opts->vo->fullscreen)
mouse_cursor_visible = true;
if (mouse_cursor_visible != mpctx->mouse_cursor_visible)
vo_control(vo, VOCTRL_SET_CURSOR_VISIBILITY, &mouse_cursor_visible);
mpctx->mouse_cursor_visible = mouse_cursor_visible;
}
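// Poll user-visible VO events (resize, window state, DPI, focus) and forward
// them as core notifications.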
static void handle_vo_events(struct MPContext *mpctx)
{
struct vo *vo = mpctx->video_out;
int events = vo ? vo_query_and_reset_events(vo, VO_EVENTS_USER) : 0;
if (events & VO_EVENT_RESIZE)
mp_notify(mpctx, MP_EVENT_WIN_RESIZE, NULL);
if (events & VO_EVENT_WIN_STATE)
mp_notify(mpctx, MP_EVENT_WIN_STATE, NULL);
if (events & VO_EVENT_DPI)
mp_notify(mpctx, MP_EVENT_WIN_STATE2, NULL);
if (events & VO_EVENT_FOCUS)
mp_notify(mpctx, MP_EVENT_FOCUS, NULL);
}
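// Handle --sstep (periodic forward seeks), and stop or pause playback once
// video reaches EOF while --frames or frame stepping is active.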
static void handle_sstep(struct MPContext *mpctx)
{
struct MPOpts *opts = mpctx->opts;
if (mpctx->stop_play || !mpctx->restart_complete)
return;
if (opts->step_sec > 0 && !mpctx->paused) {
set_osd_function(mpctx, OSD_FFW);
queue_seek(mpctx, MPSEEK_RELATIVE, opts->step_sec, MPSEEK_DEFAULT, 0);
}
if (mpctx->video_status >= STATUS_EOF) {
if (mpctx->max_frames >= 0 && !mpctx->stop_play)
mpctx->stop_play = AT_END_OF_FILE; // force EOF even if audio left
if (mpctx->step_frames > 0 && !mpctx->paused)
set_pause_state(mpctx, true);
}
}
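// Handle A-B loops and --loop-file: on end of file, queue a seek back to the
// loop start instead of ending playback.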
static void handle_loop_file(struct MPContext *mpctx)
{
struct MPOpts *opts = mpctx->opts;
if (mpctx->stop_play != AT_END_OF_FILE)
return;
double target = MP_NOPTS_VALUE;
enum seek_precision prec = MPSEEK_DEFAULT;
double ab[2];
if (get_ab_loop_times(mpctx, ab) && mpctx->ab_loop_clip) {
if (opts->ab_loop_count > 0) {
opts->ab_loop_count--;
m_config_notify_change_opt_ptr(mpctx->mconfig, &opts->ab_loop_count);
}
target = ab[0];
prec = MPSEEK_EXACT;
} else if (opts->loop_file) {
if (opts->loop_file > 0) {
opts->loop_file--;
m_config_notify_change_opt_ptr(mpctx->mconfig, &opts->loop_file);
}
target = get_start_time(mpctx, mpctx->play_dir);
}
if (target != MP_NOPTS_VALUE) {
if (!mpctx->shown_aframes && !mpctx->shown_vframes) {
MP_WARN(mpctx, "No media data to loop.\n");
return;
}
mpctx->stop_play = KEEP_PLAYING;
set_osd_function(mpctx, OSD_FFW);
mark_seek(mpctx);
        // Assumes execute_queued_seek() runs before the next attempt to
        // decode or filter audio/video.
queue_seek(mpctx, MPSEEK_ABSOLUTE, target, prec, MPSEEK_FLAG_NOFLUSH);
}
}
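// Try to seek to the last video frame (e.g. to display it after playback has
// reached the end).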
void seek_to_last_frame(struct MPContext *mpctx)
{
if (!mpctx->vo_chain)
return;
if (mpctx->hrseek_lastframe) // exit if we already tried this
return;
MP_VERBOSE(mpctx, "seeking to last frame...\n");
    // Seek to approximately the end of the file. Usually this lands a few
    // seconds before the actual end.
double end = MP_NOPTS_VALUE;
if (mpctx->play_dir > 0) {
end = get_play_end_pts(mpctx);
if (end == MP_NOPTS_VALUE)
end = get_time_length(mpctx);
} else {
end = get_start_time(mpctx, 1);
}
mp_seek(mpctx, (struct seek_params){
.type = MPSEEK_ABSOLUTE,
.amount = end,
.exact = MPSEEK_VERY_EXACT,
});
// Make it exact: stop seek only if last frame was reached.
if (mpctx->hrseek_active) {
mpctx->hrseek_pts = INFINITY * mpctx->play_dir;
mpctx->hrseek_lastframe = true;
}
}
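// Implement --keep-open: instead of ending playback at EOF, keep showing the
// last video frame and optionally pause.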
static void handle_keep_open(struct MPContext *mpctx)
{
struct MPOpts *opts = mpctx->opts;
if (opts->keep_open && mpctx->stop_play == AT_END_OF_FILE &&
(opts->keep_open == 2 ||
(!playlist_get_next(mpctx->playlist, 1) && opts->loop_times == 1)))
{
mpctx->stop_play = KEEP_PLAYING;
if (mpctx->vo_chain) {
if (!vo_has_frame(mpctx->video_out)) { // EOF not reached normally
seek_to_last_frame(mpctx);
mpctx->audio_status = STATUS_EOF;
mpctx->video_status = STATUS_EOF;
}
}
if (opts->keep_open_pause) {
if (mpctx->ao && ao_is_playing(mpctx->ao))
return;
set_pause_state(mpctx, true);
}
}
}
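// Send a notification whenever the current chapter changes.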
static void handle_chapter_change(struct MPContext *mpctx)
{
int chapter = get_current_chapter(mpctx);
if (chapter != mpctx->last_chapter) {
mpctx->last_chapter = chapter;
mp_notify(mpctx, MP_EVENT_CHAPTER_CHANGE, NULL);
}
}
// Execute a forceful refresh of the VO window. This clears the window from
// the previous video. It also creates/destroys the VO on demand.
// It tries to make the change only in situations where the window is
// definitely needed or not needed, or if the force parameter is set (the
// latter also decides whether to clear an existing window, because there's
// no way to know if this has already been done or not).
int handle_force_window(struct MPContext *mpctx, bool force)
{
// True if we're either in idle mode, or loading of the file has finished.
// It's also set via force in some stages during file loading.
bool act = mpctx->stop_play || mpctx->playback_initialized || force;
// On the other hand, if a video track is selected, but no video is ever
// decoded on it, then create the window.
bool stalled_video = mpctx->playback_initialized && mpctx->restart_complete &&
mpctx->video_status == STATUS_EOF && mpctx->vo_chain &&
!mpctx->video_out->config_ok;
// Don't interfere with real video playback
if (mpctx->vo_chain && !stalled_video)
return 0;
if (!mpctx->opts->force_vo) {
if (act && !mpctx->vo_chain)
uninit_video_out(mpctx);
return 0;
}
if (mpctx->opts->force_vo != 2 && !act)
return 0;
if (!mpctx->video_out) {
struct vo_extra ex = {
.input_ctx = mpctx->input,
.osd = mpctx->osd,
.encode_lavc_ctx = mpctx->encode_lavc_ctx,
.wakeup_cb = mp_wakeup_core_cb,
.wakeup_ctx = mpctx,
};
mpctx->video_out = init_best_video_out(mpctx->global, &ex);
if (!mpctx->video_out)
goto err;
mpctx->mouse_cursor_visible = true;
}
if (!mpctx->video_out->config_ok || force) {
struct vo *vo = mpctx->video_out;
// Pick whatever works
int config_format = 0;
uint8_t fmts[IMGFMT_END - IMGFMT_START] = {0};
vo_query_formats(vo, fmts);
for (int fmt = IMGFMT_START; fmt < IMGFMT_END; fmt++) {
if (fmts[fmt - IMGFMT_START]) {
config_format = fmt;
break;
}
}
int w = 960;
int h = 480;
struct mp_image_params p = {
.imgfmt = config_format,
.w = w, .h = h,
.p_w = 1, .p_h = 1,
};
if (vo_reconfig(vo, &p) < 0)
goto err;
struct track *track = mpctx->current_track[0][STREAM_VIDEO];
update_content_type(mpctx, track);
update_screensaver_state(mpctx);
vo_set_paused(vo, true);
vo_redraw(vo);
mp_notify(mpctx, MPV_EVENT_VIDEO_RECONFIG, NULL);
}
return 0;
err:
mpctx->opts->force_vo = 0;
m_config_notify_change_opt_ptr(mpctx->mconfig, &mpctx->opts->force_vo);
uninit_video_out(mpctx);
MP_FATAL(mpctx, "Error opening/initializing the VO window.\n");
return -1;
}
// Potentially needed by some Lua scripts, which assume TICK always comes.
static void handle_dummy_ticks(struct MPContext *mpctx)
{
if ((mpctx->video_status != STATUS_PLAYING &&
mpctx->video_status != STATUS_DRAINING) ||
mpctx->paused)
{
if (mp_time_sec() - mpctx->last_idle_tick > 0.050) {
mpctx->last_idle_tick = mp_time_sec();
mp_notify(mpctx, MPV_EVENT_TICK, NULL);
}
}
}
// Update current playback time.
static void handle_playback_time(struct MPContext *mpctx)
{
if (mpctx->vo_chain &&
!mpctx->vo_chain->is_sparse &&
mpctx->video_status >= STATUS_PLAYING &&
mpctx->video_status < STATUS_EOF)
{
mpctx->playback_pts = mpctx->video_pts;
} else if (mpctx->audio_status >= STATUS_PLAYING &&
mpctx->audio_status < STATUS_EOF)
{
mpctx->playback_pts = playing_audio_pts(mpctx);
} else if (mpctx->video_status == STATUS_EOF &&
mpctx->audio_status == STATUS_EOF)
{
double apts =
mpctx->ao_chain ? mpctx->ao_chain->last_out_pts : MP_NOPTS_VALUE;
double vpts = mpctx->video_pts;
double mpts = MP_PTS_MAX(apts, vpts);
if (mpts != MP_NOPTS_VALUE)
mpctx->playback_pts = mpts;
}
}
// We always make sure audio and video buffers are filled before actually
// starting playback. This code handles starting them at the same time.
static void handle_playback_restart(struct MPContext *mpctx)
{
struct MPOpts *opts = mpctx->opts;
if (mpctx->audio_status < STATUS_READY ||
mpctx->video_status < STATUS_READY)
return;
handle_update_cache(mpctx);
if (mpctx->video_status == STATUS_READY) {
mpctx->video_status = STATUS_PLAYING;
get_relative_time(mpctx);
mp_wakeup_core(mpctx);
MP_DBG(mpctx, "starting video playback\n");
}
if (mpctx->audio_status == STATUS_READY) {
// If a new seek is queued while the current one finishes, don't
// actually play the audio, but resume seeking immediately.
if (mpctx->seek.type && mpctx->video_status == STATUS_PLAYING) {
handle_playback_time(mpctx);
mpctx->seek.flags &= ~MPSEEK_FLAG_DELAY; // immediately
execute_queued_seek(mpctx);
return;
}
audio_start_ao(mpctx);
}
if (!mpctx->restart_complete) {
mpctx->hrseek_active = false;
mpctx->restart_complete = true;
mpctx->current_seek = (struct seek_params){0};
handle_playback_time(mpctx);
mp_notify(mpctx, MPV_EVENT_PLAYBACK_RESTART, NULL);
update_core_idle_state(mpctx);
if (!mpctx->playing_msg_shown) {
if (opts->playing_msg && opts->playing_msg[0]) {
char *msg =
mp_property_expand_escaped_string(mpctx, opts->playing_msg);
struct mp_log *log = mp_log_new(NULL, mpctx->log, "!term-msg");
mp_info(log, "%s\n", msg);
talloc_free(log);
talloc_free(msg);
}
if (opts->osd_playing_msg && opts->osd_playing_msg[0]) {
char *msg =
mp_property_expand_escaped_string(mpctx, opts->osd_playing_msg);
set_osd_msg(mpctx, 1, opts->osd_playing_msg_duration ?
opts->osd_playing_msg_duration : opts->osd_duration,
"%s", msg);
talloc_free(msg);
}
}
mpctx->playing_msg_shown = true;
mp_wakeup_core(mpctx);
update_ab_loop_clip(mpctx);
MP_VERBOSE(mpctx, "playback restart complete @ %f, audio=%s, video=%s%s\n",
mpctx->playback_pts, mp_status_str(mpctx->audio_status),
mp_status_str(mpctx->video_status),
get_internal_paused(mpctx) ? " (paused)" : "");
// To avoid strange effects when using relative seeks, especially if
// there are no proper audio & video timestamps (seeks after EOF).
double length = get_time_length(mpctx);
if (mpctx->last_seek_pts != MP_NOPTS_VALUE && length >= 0)
mpctx->last_seek_pts = MPCLAMP(mpctx->last_seek_pts, 0, length);
// Continuous seeks past EOF => treat as EOF instead of repeating seek.
if (mpctx->seek.type == MPSEEK_RELATIVE && mpctx->seek.amount > 0 &&
mpctx->video_status == STATUS_EOF &&
mpctx->audio_status == STATUS_EOF)
mpctx->seek = (struct seek_params){0};
}
}
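// End playback when both audio and video have reached EOF, unless a seek is
// still queued or the last video frame is being shown while paused.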
static void handle_eof(struct MPContext *mpctx)
{
if (mpctx->seek.type)
return; // for proper keep-open operation
/* Don't quit while paused and we're displaying the last video frame. On the
* other hand, if we don't have a video frame, then the user probably seeked
* outside of the video, and we do want to quit. */
bool prevent_eof =
mpctx->paused && mpctx->video_out && vo_has_frame(mpctx->video_out);
/* It's possible for the user to simultaneously switch both audio
* and video streams to "disabled" at runtime. Handle this by waiting
* rather than immediately stopping playback due to EOF.
*/
if ((mpctx->ao_chain || mpctx->vo_chain) && !prevent_eof &&
mpctx->audio_status == STATUS_EOF &&
mpctx->video_status == STATUS_EOF &&
!mpctx->stop_play)
{
mpctx->stop_play = AT_END_OF_FILE;
}
}
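// Run one iteration of the main playback loop: fill the audio and video
// outputs, handle playback restart, EOF, looping and seeking, process input
// and events, and schedule the next wakeup.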
void run_playloop(struct MPContext *mpctx)
{
if (encode_lavc_didfail(mpctx->encode_lavc_ctx)) {
mpctx->stop_play = PT_ERROR;
return;
}
update_demuxer_properties(mpctx);
handle_cursor_autohide(mpctx);
handle_vo_events(mpctx);
handle_command_updates(mpctx);
if (mpctx->lavfi && mp_filter_has_failed(mpctx->lavfi))
mpctx->stop_play = AT_END_OF_FILE;
fill_audio_out_buffers(mpctx);
write_video(mpctx);
handle_playback_restart(mpctx);
handle_playback_time(mpctx);
handle_dummy_ticks(mpctx);
update_osd_msg(mpctx);
if (mpctx->video_status == STATUS_EOF)
update_subtitles(mpctx, mpctx->playback_pts);
handle_each_frame_screenshot(mpctx);
handle_eof(mpctx);
handle_loop_file(mpctx);
handle_keep_open(mpctx);
handle_sstep(mpctx);
update_core_idle_state(mpctx);
execute_queued_seek(mpctx);
if (mpctx->stop_play)
return;
handle_osd_redraw(mpctx);
if (mp_filter_graph_run(mpctx->filter_root))
mp_wakeup_core(mpctx);
mp_wait_events(mpctx);
handle_update_cache(mpctx);
mp_process_input(mpctx);
handle_chapter_change(mpctx);
handle_force_window(mpctx, false);
}
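// Run one iteration of the idle loop (no file playing): process input and
// events, and keep the OSD, cache state and cursor handling up to date.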
void mp_idle(struct MPContext *mpctx)
{
handle_dummy_ticks(mpctx);
mp_wait_events(mpctx);
mp_process_input(mpctx);
handle_command_updates(mpctx);
handle_update_cache(mpctx);
handle_cursor_autohide(mpctx);
handle_vo_events(mpctx);
update_osd_msg(mpctx);
handle_osd_redraw(mpctx);
}
// Waiting for the slave master to send us a new file to play.
void idle_loop(struct MPContext *mpctx)
{
// ================= idle loop (STOP state) =========================
bool need_reinit = true;
while (mpctx->opts->player_idle_mode && mpctx->stop_play == PT_STOP) {
if (need_reinit) {
uninit_audio_out(mpctx);
handle_force_window(mpctx, true);
mp_wakeup_core(mpctx);
mp_notify(mpctx, MPV_EVENT_IDLE, NULL);
need_reinit = false;
}
mp_idle(mpctx);
}
}