Previously we did not bother with locking for accept() because we were
not visible to any other threads. However, we need to close accepting
Pipes from mark_down_all(), which means we need to handle interference.
Fix up the locking so that we hold pipe_lock when looking at Pipe state
and verify that we are still in the ACCEPTING state any time we retake
the lock.
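A minimal sketch of the retake-and-recheck pattern; std::mutex stands
in for Ceph's Mutex, and accept_step() is a hypothetical condensation
of the real accept() path:

    #include <mutex>

    struct Pipe {
      enum State { STATE_ACCEPTING, STATE_CLOSED };
      State state = STATE_ACCEPTING;
      std::mutex pipe_lock;

      bool do_blocking_io() { return true; }  // placeholder for network I/O

      int accept_step() {
        std::unique_lock<std::mutex> l(pipe_lock);
        if (state != STATE_ACCEPTING)
          return -1;               // mark_down_all() got to us first
        l.unlock();                // never hold pipe_lock across blocking I/O
        do_blocking_io();
        l.lock();
        if (state != STATE_ACCEPTING)
          return -1;               // recheck every time we retake the lock
        return 0;
      }
    };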
Signed-off-by: Sage Weil <sage@inktank.com>
New pipes exist in a sort of limbo before we know who the peer is and
add them to rank_pipe. Keep a list of them in accepting_pipes for that
period.
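A sketch of the bookkeeping; lock and Pipe here are simplified
stand-ins for the real messenger members:

    #include <mutex>
    #include <set>

    struct Pipe { void stop() {} };   // stand-in for the real Pipe

    std::mutex lock;
    std::set<Pipe*> accepting_pipes;  // pipes whose peer is not yet known

    void add_accept_pipe(Pipe *p) {
      std::lock_guard<std::mutex> l(lock);
      accepting_pipes.insert(p);      // tracked until added to rank_pipe
    }

    void mark_down_all() {
      std::lock_guard<std::mutex> l(lock);
      for (Pipe *p : accepting_pipes)
        p->stop();                    // limbo pipes are now reachable too
      accepting_pipes.clear();
      // ... also close the pipes registered in rank_pipe ...
    }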
Signed-off-by: Sage Weil <sage@inktank.com>
We can have a situation where:
- we have a pipe to a peer
- pipe goes to standby (on peer)
- we rebind to a new port
- ....
- we rebind again to the same old port
- we connect to peer
and get reattached to the ancient pipe from two instances back. Avoid that
by picking a new nonce each time we rebind.
Add 1,000,000 each time so that the port is still legible in the printed
output.
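In sketch form (the starting value and member placement are
illustrative):

    #include <cstdint>

    uint64_t nonce = 12345;     // illustrative; derived once at startup

    void rebind() {
      nonce += 1000000;         // fresh identity per rebind; the low
                                // digits (the port) stay readable
      // ... stop the accepter, mark down old pipes, bind anew ...
    }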
Signed-off-by: Sage Weil <sage@inktank.com>
If we are shutting down all old connections and binding to new ports,
we want to avoid a sequence like:
- close all previous connections
- new connection comes in on old port
- rebind to new ports
-> connection from old port leaks through
As a first step, close all connections after we shut down the old
accepter and before we start the new one.
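In sketch form, the resulting order of operations (the names follow
the text; the bodies are elided):

    void accepter_stop() { /* stop listening on the old port */ }
    void mark_down_all() { /* close every previous connection */ }
    void accepter_start(int port) { /* listen on the new port */ }

    void rebind(int new_port) {
      accepter_stop();            // 1. nothing new can arrive on the old port
      mark_down_all();            // 2. all old connections are gone
      accepter_start(new_port);   // 3. only now accept on the new port
    }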
Signed-off-by: Sage Weil <sage@inktank.com>
We need to maintain the invariant that no sub queue in out_q is ever
empty. Fix discard_requeued_up_to() to avoid creating an entry unless
we know one is already present.
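A sketch of the guarded lookup, assuming out_q maps priority to a list
of messages (the real signature differs):

    #include <cstdint>
    #include <list>
    #include <map>

    struct Message;
    std::map<int, std::list<Message*> > out_q;  // no empty sub queues

    void discard_requeued_up_to(int prio, uint64_t seq) {
      auto p = out_q.find(prio);   // look up without creating an entry
      if (p == out_q.end())
        return;                    // out_q[prio] would insert an empty list
      (void)seq;                   // elided: drop messages up to seq
      if (p->second.empty())
        out_q.erase(p);            // keep the invariant on the way out
    }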
This bug leads to an incorrect reconnect attempt when
- we accept a pipe (lossless peer)
- they send some stuff, maybe
- fault
- we initiate reconnect, even though we have nothing queued
In particular, we shouldn't reconnect because we aren't checking for
resets, and the fact that our out_seq is 0 while the peer's might be
something else entirely will trigger asserts later.
This fixes at least one source of #5626, and possibly #5517.
Backport: cuttlefish
Signed-off-by: Sage Weil <sage@inktank.com>
We were returning success without waiting if the pending pool state had
the snap.
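A sketch of the corrected flow, with p the committed and pp the pending
pool state; the helper names are illustrative stand-ins for the
monitor's reply/commit machinery:

    #include <string>

    struct Pool {
      bool snap_exists(const std::string&) const { return false; }
      void add_snap(const std::string&) {}
    };

    bool reply_success()   { return true;  }  // short-circuit reply
    bool wait_for_commit() { return false; }  // defer until committed

    bool prepare_pool_snap(Pool *p, Pool *pp, const std::string& name) {
      if (p->snap_exists(name))
        return reply_success();   // committed already: safe to reply now
      if (!pp->snap_exists(name))
        pp->add_snap(name);       // stage it in the pending map
      return wait_for_commit();   // pending only: must wait, not reply
    }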
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Joao Eduardo Luis <joao.luis@inktank.com>
A monc/mon connection fault or the dup command test flag may mean an
extra osd id is created that isn't actually up; reorder so that doesn't
screw up 'osd ls'.
Signed-off-by: Sage Weil <sage@inktank.com>
This bug prevented resurrection of ancestor pgs where
necessary.
Fixes: #5269
This may result in pg A being created just before pg B
is resurrected and split into A and B, resulting in one
or the other operation getting an EEXIST.
Backport: cuttlefish
Signed-off-by: Samuel Just <sam.just@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Monitor commands need to be idempotent. This helps us test that by
simply issuing any successful command a second time, so that we notice
when a dup submission fails.
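A sketch of the hook; handle_command() and the flag below are
illustrative stand-ins, not the real option names:

    #include <cassert>
    #include <string>

    int handle_command(const std::string&) { return 0; }  // stand-in

    void maybe_dup(const std::string& cmd, int r, bool dup_command_test) {
      if (r == 0 && dup_command_test) {
        int r2 = handle_command(cmd);  // reissue the identical command
        assert(r2 == 0);               // idempotent commands succeed twice
      }
    }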
Signed-off-by: Sage Weil <sage@inktank.com>
Make these methods work in terms of device *names*, not paths, and fix up
the only direct list_partitions() caller to do the same.
Signed-off-by: Sage Weil <sage@inktank.com>
Ensure that the snap does in fact exist before we try to remove it. This
avoids a crash where we get two dup rmsnap requests (due to thrashing, or
a reconnect, or something), the committed (p) value does have the snap, but
the uncommitted (pp) does not. This fails the old test such that we try
to remove it from pp again, and assert.
Restructure the flow so that it is easier to distinguish the committed
short return from the uncommitted return (which must still wait for the
commit).
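In sketch form, with p the committed and pp the uncommitted pool as in
the text (pool_t is a simplified stand-in for pg_pool_t, and
prepare_rmsnap is illustrative):

    #include <cassert>
    #include <cerrno>
    #include <cstdint>
    #include <set>

    typedef uint64_t snapid_t;

    struct pool_t {
      std::set<snapid_t> snaps;
      void remove_snap(snapid_t s) { assert(snaps.count(s)); snaps.erase(s); }
    };

    // p: committed state, pp: uncommitted (pending) state
    int prepare_rmsnap(pool_t& p, pool_t& pp, snapid_t s) {
      if (!p.snaps.count(s) && !pp.snaps.count(s))
        return -ENOENT;            // nothing to remove anywhere
      if (pp.snaps.count(s))
        pp.remove_snap(s);         // only touch pp if it still has the snap
      return 0;                    // reply only after the commit
    }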
0> 2013-07-16 14:21:27.189060 7fdf301e9700 -1 osd/osd_types.cc: In function 'void pg_pool_t::remove_snap(snapid_t)' thread 7fdf301e9700 time 2013-07-16 14:21:27.187095
osd/osd_types.cc: 662: FAILED assert(snaps.count(s))
ceph version 0.66-602-gcd39d8a (cd39d8a6727d81b889869e98f5869e4227b50720)
1: (pg_pool_t::remove_snap(snapid_t)+0x6d) [0x7ad6dd]
2: (OSDMonitor::prepare_command(MMonCommand*)+0x6407) [0x5c1517]
3: (OSDMonitor::prepare_update(PaxosServiceMessage*)+0x1fb) [0x5c41ab]
4: (PaxosService::dispatch(PaxosServiceMessage*)+0x937) [0x598c87]
5: (Monitor::handle_command(MMonCommand*)+0xe56) [0x56ec36]
6: (Monitor::_ms_dispatch(Message*)+0xd1d) [0x5719ad]
7: (Monitor::handle_forward(MForward*)+0x821) [0x572831]
8: (Monitor::_ms_dispatch(Message*)+0xe44) [0x571ad4]
9: (Monitor::ms_dispatch(Message*)+0x32) [0x588c52]
10: (DispatchQueue::entry()+0x549) [0x7cf1d9]
11: (DispatchQueue::DispatchThread::entry()+0xd) [0x7060fd]
12: (()+0x7e9a) [0x7fdf35165e9a]
13: (clone()+0x6d) [0x7fdf334fcccd]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Joao Eduardo Luis <joao.luis@inktank.com>
'thrash_map' is only set if we are the leader, so we would only thrash
and propose the pending value while we are the leader anyway. However,
we should keep the 'is_leader()' check, not only for clarity's sake (an
unfamiliar reader may cry OMGBUG, prompting a patch much like this one),
but also because we may lose a subsequent election and become a peon
while still holding a 'thrash_map' value > 0 -- and we really don't want
to propose while being a peon.
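In sketch form (is_leader() matches the real check; the other names
here are illustrative):

    bool is_leader() { return true; }  // stand-in for Monitor::is_leader()
    int thrash_map = 1;                // > 0 only set while we led
    void thrash() {}
    void propose_pending() {}

    void maybe_thrash() {
      if (thrash_map > 0 && is_leader()) {  // may have since lost an election
        thrash();
        propose_pending();                  // a peon must never propose
      }
    }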
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Do not leak these.
Fixes: #5643
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
Reviewed-by: Joao Eduardo Luis <joao.luis@inktank.com>
The send_latest() helper may put a message on the waiting_for_map list
if we are not readable, but currently send_to_waiting() is only called
from update_from_paxos(), and it is possible to go unreadable without a
subsequent map update ever arriving to flush that list.
Instead, share the map when we are active. Do the same for check_subs(),
which is also about sharing the *new* map. Leave
share_map_with_random_osd() and process_failures() which are not
concerned with whether this is the latest map or not.
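A sketch of the new placement; on_active() is the real PaxosService
hook, while the bodies here are stand-ins:

    void send_to_waiting() {}  // flush the waiting_for_map list
    void check_subs() {}       // push the new map to subscribers

    // called once the service becomes active (i.e. readable again)
    void on_active() {
      send_to_waiting();       // runs even without a fresh map update
      check_subs();
    }

This ties the flush to readability rather than to map updates.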
This problem surfaced when we changed the timing of refresh relative to
paxos commit, since update_from_paxos() is now not normally called while
readable; see f1ce8d7c95 and
c711203c0d.
Fixes: #5643
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
Reviewed-by: Joao Eduardo Luis <joao.luis@inktank.com>
Fixes: #5439
ECANCELED there means that we lost a race to write the object. We
should treat it as a successful write. This is reviving an old behavior
that was changed inadvertently.
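In sketch form (do_write() is a stand-in for the rgw write path):

    #include <cerrno>

    int do_write() { return -ECANCELED; }  // stand-in for the write path

    int write_object() {
      int r = do_write();
      if (r == -ECANCELED)
        r = 0;     // lost the race, but the object was written: success
      return r;
    }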
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
This was necessary when ceph-disk-udev didn't create the by-partuuid (and
other) symlinks for us, but now it is fragile and error-prone. (It also
appears to be broken on a certain customer RHEL VM.) See
d7f7d61351.
Instead, just use the by-partuuid symlinks that we spent all that ugly
effort generating.
Backport: cuttlefish
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Dan Mick <dan.mick@inktank.com>
The previous debug message output the function's name, as our debug
messages often do. This was, however, a source of bewilderment, as users
would see it in their logs and think their stores needed conversion.
Changing this message is trivial enough, and it will make Ceph users
happier log readers.
Backport: cuttlefish
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
We are supposed to have umount'ed the store and set the pointer to NULL.
We should not tolerate any other case on init().
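A sketch of what init() should insist on; the store pointer follows the
text, everything else here is assumed:

    #include <cassert>
    #include <cstddef>

    struct Store;            // stand-in for the monitor's store type
    Store *store = NULL;     // umount() must have reset this

    void init() {
      assert(store == NULL); // any other state means a missing umount
      // ... open and mount the store ...
    }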
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
We already open the store in ceph_mon.cc, before we start the conversion.
Given that we are unable to reproduce this every time a conversion is
triggered, we are led to believe the double open causes a race in leveldb
that leaves 'store.db/LOCK' locked when we reach the open this patch
removes. Regardless, reopening the db here is pointless, as we already
did so by the time we reach Monitor::StoreConverter::convert().
Fixes: #5640
Backport: cuttlefish
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>