ceph/src
Sage Weil 5709fb9122 osd: go active even if mon only accepted our v1 addr
We may bind to both v1 and v2 addrs, but we need to behave gracefully and
still go active if the mon only recognized our v1 addr.  Whether the mon
sees our v2 addr depends on whether we connected to it via the v1 or v2
port, and the mon may not be binding to v2 at all (yet, or ever).

Add a legacy_equals to entity_addrvec_t, and use it instead of
probably_equals for the OSD boot checks.  probably_equals returns true
when the IP address portion of the address is empty, which should never
happen in the OSD boot case since we have learned our real IP long before
we try to send osd_boot.

Signed-off-by: Sage Weil <sage@redhat.com>
2018-12-21 15:30:18 -06:00
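The sketch below illustrates the comparison the commit describes. The types are minimal stand-ins, not the real Ceph definitions (the actual entity_addrvec_t lives under src/msg/), and the exact semantics of legacy_equals here are an assumption: treat a mon-acked addrvec that carries only our v1 addr as matching by comparing legacy (v1) entries instead of requiring the full vectors to be identical.

```cpp
// Simplified stand-in types, not Ceph's actual msg_types.h definitions.
#include <iostream>
#include <string>
#include <vector>

enum class addr_type { v1, v2 };

struct entity_addr_t {
  addr_type type = addr_type::v1;
  std::string ip_port;   // e.g. "10.0.0.5:6801"

  bool operator==(const entity_addr_t& o) const {
    return type == o.type && ip_port == o.ip_port;
  }
};

struct entity_addrvec_t {
  std::vector<entity_addr_t> v;

  // First v1 (legacy) entry, or an empty addr if there is none.
  entity_addr_t legacy_addr() const {
    for (const auto& a : v)
      if (a.type == addr_type::v1)
        return a;
    return {};
  }

  bool operator==(const entity_addrvec_t& o) const { return v == o.v; }

  // Hypothetical legacy_equals: exact match, or both sides carry the same
  // non-empty legacy (v1) addr, ignoring any v2 entries.
  bool legacy_equals(const entity_addrvec_t& o) const {
    if (*this == o)
      return true;
    const entity_addr_t a = legacy_addr(), b = o.legacy_addr();
    return !a.ip_port.empty() && a == b;
  }
};

int main() {
  // What the OSD bound: both a v2 and a v1 addr.
  entity_addrvec_t mine{{{addr_type::v2, "10.0.0.5:6800"},
                         {addr_type::v1, "10.0.0.5:6801"}}};
  // What the mon acked: only the v1 addr (we reached it over the v1 port).
  entity_addrvec_t from_mon{{{addr_type::v1, "10.0.0.5:6801"}}};

  // A strict comparison would keep the OSD from going active;
  // the legacy comparison accepts the v1-only ack.
  std::cout << "exact match:  " << (mine == from_mon) << "\n";           // 0
  std::cout << "legacy match: " << mine.legacy_equals(from_mon) << "\n"; // 1
}
```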
arch
auth
bash_completion
blkin@f24ceec055
c-ares@fd6124c74d
ceph-volume
civetweb@6062892715
client
cls
common
compressor
crimson
crush
crypto
dmclock@5c82208802
doc
erasure-code
global
googletest@fdb8504792
include
isa-l@7e1a337433
java
journal
json_spirit
key_value_store
kv
librados
libradosstriper
librbd
log
lua@1fce39c639
mds
messages
mgr
mon
mount
msg
objclass
objsync
ocf
os
osd
osdc
perfglue
powerdns
pybind
rapidjson@f54b0e47a0
rbd_fuse
rbd_replay
rgw
rocksdb@37828c548a
script
seastar@6f5dcc4414
spdk@fd292c568f
telemetry
test
tools
tracing
xxHash@1f40c6511f
zstd@f4340f46b2
.gitignore
btrfs_ioc_test.c
ceph_common.sh
ceph_fuse.cc
ceph_mds.cc
ceph_mgr.cc
ceph_mon.cc
ceph_osd.cc
ceph_release
ceph_syn.cc
ceph_ver.c
ceph_ver.h.in.cmake
ceph-clsinfo
ceph-coverage.in
ceph-crash.in
ceph-create-keys
ceph-debugpack.in
ceph-osd-prestart.sh
ceph-post-file.in
ceph-rbdnamer
ceph-run
ceph.conf.twoosds
ceph.in
cls_acl.cc
cls_crypto.cc
CMakeLists.txt
cmonctl
etc-rbdmap
init-ceph.in
init-radosgw
init-rbdmap
krbd.cc
libcephfs.cc
librados-config.cc
loadclass.sh
logrotate.conf
mount.fuse.ceph
mrgw.sh
mrun
mstart.sh
mstop.sh
multi-dump.sh
perf_histogram.h
ps-ceph.pl
push_to_qemu.pl
rbd-replay-many
rbdmap
README
sample.ceph.conf
stop.sh
TODO
valgrind.supp
vstart.sh
yasm-wrapper

Sage Weil <sage@newdream.net>
Ceph - scalable distributed storage system