Mirror of https://github.com/ceph/ceph (synced 2025-02-22 18:47:18 +00:00)
pre-single-major.yaml kernel doesn't have any of the monitor client
fixes that came in 4.6.  If the connection is closed, it closes the
session and retries only after 10 seconds.  On top of that, there is
nothing to prevent it from picking the same monitor when reconnecting.
This means that when given both v1 and v2 ports (which look like two
different monitors), it is susceptible to mount_timeout (60 seconds):

  $ sudo rbd map img
  rbd: sysfs write failed
  In some cases useful info is found in syslog - try "dmesg | tail".
  rbd: map failed: (5) Input/output error

  [ 822.242313] libceph: mon0 172.21.15.132:3300 socket closed (con state CONNECTING)
  [ 832.265494] libceph: mon0 172.21.15.132:3300 socket closed (con state CONNECTING)
  [ 842.296175] libceph: mon0 172.21.15.132:3300 socket closed (con state CONNECTING)
  [ 852.326924] libceph: mon0 172.21.15.132:3300 socket closed (con state CONNECTING)
  [ 862.357611] libceph: mon0 172.21.15.132:3300 socket closed (con state CONNECTING)
  [ 872.388373] libceph: mon0 172.21.15.132:3300 socket closed (con state CONNECTING)
  [ 882.676136] libceph: mon0 172.21.15.132:3300 socket closed (con state CONNECTING)

Unlike newer kernels that return ETIMEDOUT, it returns EIO.  Newer
kernels are much more aggressive about retries and will pick a
different monitor when reconnecting, hence they are always able to
establish the session in time.

Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
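For scale, the timestamps in the dmesg excerpt are roughly 10 seconds apart, so a 60-second mount_timeout leaves room for only about seven attempts against the same unusable port. The following back-of-the-envelope sketch is not kernel code; it only restates that arithmetic using the constants quoted in the commit message:

# Illustration of the failure window described in the commit message above.
# Not kernel code: it only shows that with a ~10 s reconnect delay and a
# 60 s mount_timeout, a client that keeps dialing the same (v2) port gets
# roughly seven attempts before the map fails.
RETRY_DELAY = 10      # seconds between reconnect attempts (pre-4.6 monitor client)
MOUNT_TIMEOUT = 60    # seconds before the map attempt gives up

attempt_times = list(range(0, MOUNT_TIMEOUT + 1, RETRY_DELAY))
print(f"{len(attempt_times)} attempts, at t = {attempt_times} s")
# Prints: 7 attempts, at t = [0, 10, 20, 30, 40, 50, 60] s -- consistent with
# the seven "socket closed (con state CONNECTING)" lines in the log above.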
Directory listing:

archs
btrfs
cephfs
client
clusters
config
crontab
debug
distros
erasure-code
libceph
machine_types
mds
mon/bootstrap
msgr
nightlies
objectstore
objectstore_cephfs
overrides
packages
qa_scripts
rbd
releases
rgw_frontend
rgw_pool_type
standalone
suites
tasks
timezone
workunits
.gitignore
find-used-ports.sh
loopall.sh
Makefile
README
run_xfstests_qemu.sh
run_xfstests-obsolete.sh
run_xfstests.sh
run-standalone.sh
runallonce.sh
runoncfuse.sh
runonkclient.sh
setup-chroot.sh
tox.ini
valgrind.supp
ceph-qa-suite
-------------

clusters/    - some predefined cluster layouts
suites/      - the test suite definitions

The suites directory has a hierarchical collection of tests.  This can
be freeform, but generally follows the convention of

  suites/<test suite name>/<test group>/...

A test is described by a yaml fragment.

A test can exist as a single .yaml file in the directory tree.  For example:

  suites/foo/one.yaml
  suites/foo/two.yaml

is a simple group of two tests.

A directory with a magic '+' file represents a test that combines all
other items in the directory into a single yaml fragment.  For example:

  suites/foo/bar/+
  suites/foo/bar/a.yaml
  suites/foo/bar/b.yaml
  suites/foo/bar/c.yaml

is a single test consisting of a + b + c.

A directory with a magic '%' file represents a test matrix formed from
all other items in the directory.  For example,

  suites/baz/%
  suites/baz/a.yaml
  suites/baz/b/b1.yaml
  suites/baz/b/b2.yaml
  suites/baz/c.yaml
  suites/baz/d/d1.yaml
  suites/baz/d/d2.yaml

is a 4-dimensional test matrix.  Two dimensions (a, c) are trivial (one
item each), so this is really 2x2 = 4 tests, which are

  a + b1 + c + d1
  a + b1 + c + d2
  a + b2 + c + d1
  a + b2 + c + d2

A directory with a magic '$' file represents a test where one of the
other items is chosen randomly.  For example,

  suites/foo/$
  suites/foo/a.yaml
  suites/foo/b.yaml
  suites/foo/c.yaml

is a single test.  It will be either a.yaml, b.yaml or c.yaml.  This can
be used in conjunction with the '%' file in other directories to run a
series of tests without causing an unwanted increase in the total
number of jobs run.

Symlinks are okay.

The teuthology code can be found in https://github.com/ceph/teuthology.git
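To make the '%' convolution concrete, here is a minimal sketch of the expansion. It is not teuthology's actual matrix builder; it assumes a simplified layout in which every entry under a '%' directory is either a single .yaml fragment or a subdirectory of .yaml fragments:

#!/usr/bin/env python3
"""Sketch of how a '%' directory expands into a test matrix.

Illustrative only -- not teuthology's real implementation.  Each entry in
the '%' directory is treated as one dimension: a bare .yaml file is a
dimension with a single choice, a subdirectory is a dimension whose
choices are the .yaml files it contains.
"""
import itertools
import os


def expand_percent_dir(path):
    """Return the tests (lists of yaml fragment paths) for a '%' directory."""
    dimensions = []
    for name in sorted(os.listdir(path)):
        if name in ('%', '+', '$'):
            continue  # magic marker files are not fragments themselves
        entry = os.path.join(path, name)
        if os.path.isdir(entry):
            # a subdirectory is one dimension; each .yaml inside is a choice
            choices = [os.path.join(entry, f)
                       for f in sorted(os.listdir(entry))
                       if f.endswith('.yaml')]
            dimensions.append(choices)
        elif name.endswith('.yaml'):
            # a bare .yaml file is a trivial dimension with one choice
            dimensions.append([entry])
    # the matrix is the cross product of all dimensions
    return [list(combo) for combo in itertools.product(*dimensions)]


if __name__ == '__main__':
    # For the suites/baz example above this prints four tests:
    #   a.yaml + b1.yaml + c.yaml + d1.yaml, a.yaml + b1.yaml + c.yaml + d2.yaml,
    #   a.yaml + b2.yaml + c.yaml + d1.yaml, a.yaml + b2.yaml + c.yaml + d2.yaml
    for test in expand_percent_dir('suites/baz'):
        print(' + '.join(test))

The real suite-building and scheduling logic lives in the teuthology repository linked above.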