mirror of
https://github.com/ceph/ceph
synced 2024-12-25 12:54:16 +00:00
01d48a270a
The existing logic is for ceph-deploy osd create --zap-disk to zap the data device before preparing it. However, it does not zap the journal device (see http://tracker.ceph.com/issues/13291). If ceph-deploy osd create fails, a fallback zaps both the data device and the journal device and tries to prepare again. This could work if device preparation and activation were synchronous and caught every error that an unclean journal device could cause. However, activation is asynchronous, and it is entirely possible for a device to be prepared successfully and then fail to activate in the background.

Instead, the data and journal devices are now always zapped before calling ceph-deploy osd create. The logic is simpler and the overhead is low.

http://tracker.ceph.com/issues/13000

Fixes: #13000

Signed-off-by: Loic Dachary <loic@dachary.org>
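The change described above can be sketched as follows. This is an illustrative outline only, not the actual task code: `zap` and `prepare` are hypothetical callables standing in for the `ceph-deploy disk zap` and `ceph-deploy osd create` invocations.

```python
def create_osd(data_dev, journal_dev, zap, prepare):
    """Zap both devices up front, then prepare the OSD.

    Because activation is asynchronous, a prepare that appears to
    succeed can still fail later in the background due to a stale
    journal. Zapping both the data and journal devices before every
    prepare avoids that class of failure, instead of relying on a
    zap-and-retry fallback after a failed prepare.
    """
    zap(data_dev)      # always zap the data device first
    zap(journal_dev)   # ...and the journal device, unconditionally
    prepare(data_dev, journal_dev)


# Record the call order to show zapping always precedes preparation.
calls = []
create_osd(
    "/dev/sdb", "/dev/sdc",
    zap=lambda dev: calls.append(("zap", dev)),
    prepare=lambda d, j: calls.append(("prepare", d, j)),
)
print(calls)
```

The point of the restructuring is that the zap step is no longer conditional on a prepare failure, so there is no window where an unclean journal can break a background activation.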
buildpackages/
cephfs/
tests/
util/
__init__.py
admin_socket.py
apache.conf.template
autotest.py
blktrace.py
boto.cfg.template
buildpackages.py
calamari_nosetests.py
calamari_setup.py
ceph_client.py
ceph_deploy.py
ceph_fuse.py
ceph_manager.py
ceph_objectstore_tool.py
ceph.py
cephfs_test_runner.py
cifs_mount.py
cram.py
devstack.py
die_on_err.py
divergent_priors2.py
divergent_priors.py
dump_stuck.py
ec_lost_unfound.py
filestore_idempotent.py
kclient.py
locktest.py
logrotate.conf
lost_unfound.py
manypools.py
mds_creation_failure.py
mds_journal_migration.py
mds_scrub_checks.py
mds_thrash.py
metadata.yaml
mod_fastcgi.conf.template
mod_proxy_fcgi.tcp.conf.template
mod_proxy_fcgi.uds.conf.template
mon_clock_skew_check.py
mon_recovery.py
mon_thrash.py
multibench.py
object_source_down.py
omapbench.py
osd_backfill.py
osd_failsafe_enospc.py
osd_recovery.py
peer.py
peering_speed_test.py
populate_rbd_pool.py
qemu.py
rados.py
radosbench.py
radosgw_admin_rest.py
radosgw_admin.py
radosgw_agent.py
rbd_fsx.py
rbd.py
recovery_bench.py
reg11184.py
rep_lost_unfound_delete.py
repair_test.py
rest_api.py
restart.py
rgw_logsocket.py
rgw.py
s3readwrite.py
s3roundtrip.py
s3tests.py
samba.py
scrub_test.py
scrub.py
tgt.py
thrash_pool_snaps.py
thrashosds.py
userdata_setup.yaml
userdata_teardown.yaml
watch_notify_same_primary.py
watch_notify_stress.py
workunit.py