ceph/qa/workunits/cephadm
Sage Weil 92b49094e7 cephadm: avoid triggering old podman bug
This ticket suggests that (1) the root cause is an orphaned exec that
corrupts the container state (due to, e.g., ssh dropping or a timeout),
(2) -f may sometimes be needed to recover, and (3) newer podman versions
fix it.

  https://github.com/containers/libpod/issues/3226

Way back in 26f9fe54cb we found that using -f on the first attempt was a
Bad Idea, so we'd rather avoid it.

Instead, just avoid triggering the bug.
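Since the libpod issue is fixed in newer podman releases, one way to decide
whether the workaround is needed is a simple version gate. The sketch below
is hypothetical (not from the commit): the helper name and the assumed
fixed-in version "1.6.0" are illustrative assumptions, not values stated in
the ticket or the commit.

```shell
# Hypothetical helper: succeeds if the given podman version is at least
# the (assumed) release that fixed libpod issue #3226. "1.6.0" here is an
# illustrative placeholder, not the actual fixed-in version.
podman_has_exec_fix() {
    local ver="$1" fixed="1.6.0"
    # sort -V orders version strings numerically; if the assumed fixed
    # version sorts first (or equal), $ver is new enough.
    [ "$(printf '%s\n' "$fixed" "$ver" | sort -V | head -n1)" = "$fixed" ]
}

# Example: only use the workaround-free path on fixed versions.
if podman_has_exec_fix "$(podman version --format '{{.Client.Version}}' 2>/dev/null || echo 0)"; then
    echo "podman is new enough; no workaround needed"
else
    echo "old podman; avoid triggering the orphaned-exec bug"
fi
```

The same `sort -V` comparison idiom works for any dotted version string, which is why test scripts often prefer it over parsing version components by hand.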

Signed-off-by: Sage Weil <sage@redhat.com>
2020-02-12 13:59:23 -06:00
test_cephadm.sh cephadm: avoid triggering old podman bug 2020-02-12 13:59:23 -06:00
test_repos.sh qa/workunits/cephadm/test_repos: don't try to use the refspec 2020-02-08 07:33:47 -06:00