mirror of https://github.com/ceph/ceph
0e2814d81e
Otherwise, we may fail while racing with a workload that deletes a pool:

2015-09-23T15:01:52.855 INFO:tasks.workunit.client.1.vpm128.stdout:[ RUN ] LibRadosTwoPoolsPP.PromoteSnapTrimRace
2015-09-23T15:01:53.892 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .rgw pg_num'
2015-09-23T15:01:54.206 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .rgw.gc pg_num'
2015-09-23T15:01:54.462 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .users.uid pg_num'
2015-09-23T15:01:54.696 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .users.email pg_num'
2015-09-23T15:01:55.006 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .users pg_num'
2015-09-23T15:01:55.296 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .rgw.buckets.index pg_num'
2015-09-23T15:01:55.523 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .log pg_num'
2015-09-23T15:01:55.752 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .usage pg_num'
2015-09-23T15:01:56.188 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get .rgw.buckets.extra pg_num'
2015-09-23T15:01:56.625 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get test-rados-api-vpm128-17360-6 pg_num'
2015-09-23T15:01:56.928 INFO:teuthology.orchestra.run.vpm176:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get test-rados-api-vpm128-17360-13 pg_num'
2015-09-23T15:01:57.193 INFO:teuthology.orchestra.run.vpm176.stderr:Error ENOENT: unrecognized pool 'test-rados-api-vpm128-17360-13'
2015-09-23T15:01:57.206 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
...

Signed-off-by: Sage Weil <sage@redhat.com>
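The failure mode is that another workload deletes one of the pools between the time it was listed and the time its pg_num is queried, so the 'ceph osd pool get' call fails with ENOENT. A minimal standalone sketch of tolerating that race is shown below; the helper name and error handling here are assumptions for illustration, not the actual teuthology code touched by this commit.

import subprocess


def get_pg_num_if_pool_exists(pool):
    """Return the pool's pg_num, or None if the pool was deleted meanwhile.

    A racing workload may remove the pool after it was listed, in which case
    'ceph osd pool get' reports "Error ENOENT: unrecognized pool ..." (as in
    the log above); treat that as "pool is gone" instead of failing the run.
    """
    proc = subprocess.run(
        ['ceph', 'osd', 'pool', 'get', pool, 'pg_num'],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        if 'ENOENT' in proc.stderr:
            return None  # pool vanished while we were racing with it
        raise RuntimeError(
            'ceph osd pool get %s pg_num failed: %s' % (pool, proc.stderr.strip())
        )
    # stdout looks like "pg_num: 64"
    return int(proc.stdout.split(':', 1)[1])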
buildpackages
cephfs
tests
util
__init__.py
admin_socket.py
apache.conf.template
autotest.py
blktrace.py
boto.cfg.template
buildpackages.py
calamari_nosetests.py
calamari_setup.py
ceph_client.py
ceph_deploy.py
ceph_fuse.py
ceph_manager.py
ceph_objectstore_tool.py
ceph.py
cephfs_test_runner.py
cifs_mount.py
cram.py
devstack.py
die_on_err.py
divergent_priors2.py
divergent_priors.py
dump_stuck.py
ec_lost_unfound.py
filestore_idempotent.py
kclient.py
locktest.py
logrotate.conf
lost_unfound.py
manypools.py
mds_creation_failure.py
mds_journal_migration.py
mds_scrub_checks.py
mds_thrash.py
metadata.yaml
mod_fastcgi.conf.template
mod_proxy_fcgi.tcp.conf.template
mod_proxy_fcgi.uds.conf.template
mon_clock_skew_check.py
mon_recovery.py
mon_thrash.py
multibench.py
object_source_down.py
omapbench.py
osd_backfill.py
osd_failsafe_enospc.py
osd_recovery.py
peer.py
peering_speed_test.py
populate_rbd_pool.py
qemu.py
rados.py
radosbench.py
radosgw_admin_rest.py
radosgw_admin.py
radosgw_agent.py
rbd_fsx.py
rbd.py
recovery_bench.py
reg11184.py
rep_lost_unfound_delete.py
repair_test.py
rest_api.py
restart.py
rgw_logsocket.py
rgw.py
s3readwrite.py
s3roundtrip.py
s3tests.py
samba.py
scrub_test.py
scrub.py
tgt.py
thrash_pool_snaps.py
thrashosds.py
userdata_setup.yaml
userdata_teardown.yaml
watch_notify_same_primary.py
watch_notify_stress.py
workunit.py