It is useful to map OSDs to PGs and vice-versa; pg dump gives that
information, but buried in a lot of other output. This is the same dump
as pg dump pgs, but omits everything except the pgid, state, and the
up and acting OSD sets.
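For illustration only, a minimal sketch of what the brief dump keeps per
PG; the struct, field names and output format below are invented, not the
actual PGMap code:

  #include <iostream>
  #include <string>
  #include <vector>

  // Hypothetical record: what the brief dump keeps -- pgid, state, and the
  // up/acting OSD sets. Everything else from the full pg dump is dropped.
  struct pg_brief {
    std::string pgid;            // e.g. "2.1f"
    std::string state;           // e.g. "active+clean"
    std::vector<int> up;         // up OSD set
    std::vector<int> acting;     // acting OSD set
  };

  static void print_osds(const std::vector<int>& osds) {
    std::cout << '[';
    for (size_t i = 0; i < osds.size(); ++i)
      std::cout << (i ? "," : "") << osds[i];
    std::cout << ']';
  }

  int main() {
    pg_brief pg{"2.1f", "active+clean", {3, 1, 4}, {3, 1, 4}};
    std::cout << pg.pgid << ' ' << pg.state << ' ';
    print_osds(pg.up);
    std::cout << ' ';
    print_osds(pg.acting);
    std::cout << '\n';
    return 0;
  }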
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
We need to hold mds_lock here.
Normally the con also holds a reference, but an ill-timed connection reset
could drop it.
Fixes: #5883
Backport: dumpling, cuttlefish
Signed-off-by: Sage Weil <sage@inktank.com>
Fixes: #5875
ops logging should (at this point) only include object store related
operations.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
If we run across a user-supplied parameter that doesn't validate against
a non-required descriptor, it may be that it's a valid entry for a later
descriptor...or it may be that it's supposed to match. We can't really tell.
A possible heuristic would be to call it invalid-for-sure if we're at the
end of the descriptor list, but that's not very generic.
Warn about it and try to drive on anyway.
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Check that the number of validated arguments matches the number of
required arguments in the signature. Also, sort all possible matches by
length of signature. This way "ceph osd crush set" and
"ceph osd crush set <args>" can both work while still insisting that
extra args or too few args are errors.
Also, restructure and factor out some of the work of validate() to make
its inner loop smaller and hopefully more comprehensible.
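For illustration, a rough self-contained model of the count check and the
signature sorting described above. This is not the real validation code
(which does much more); the Descriptor type, the example signatures and
the matching rule are all invented:

  #include <algorithm>
  #include <iostream>
  #include <string>
  #include <vector>

  // Invented model of a command signature: each word is a descriptor,
  // which is either required or optional.
  struct Descriptor {
    std::string name;
    bool required;
  };
  using Signature = std::vector<Descriptor>;

  // A candidate matches only if the supplied args cover every required
  // descriptor and do not exceed the signature: too few or extra args
  // are both errors.
  static bool matches(const Signature& sig, const std::vector<std::string>& args) {
    size_t required = 0;
    for (const auto& d : sig)
      if (d.required)
        ++required;
    return args.size() >= required && args.size() <= sig.size();
  }

  int main() {
    // Two overlapping candidates, in the spirit of "osd crush set" vs
    // "osd crush set <args>" (descriptor names are made up).
    std::vector<Signature> candidates = {
      {{"osd", true}, {"crush", true}, {"set", true},
       {"id", true}, {"weight", true}, {"loc", false}},
      {{"osd", true}, {"crush", true}, {"set", true}},
    };
    // Sort candidates by signature length so there is a deterministic
    // preference order when more than one could match.
    std::sort(candidates.begin(), candidates.end(),
              [](const Signature& a, const Signature& b) { return a.size() < b.size(); });

    std::vector<std::string> args = {"osd", "crush", "set"};
    for (const auto& sig : candidates) {
      if (matches(sig, args)) {
        std::cout << "matched signature of length " << sig.size() << "\n";
        break;
      }
    }
    return 0;
  }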
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Locker::issue_caps() does not issue new caps to a stale client;
CInode::encode_inodestat() should apply the same logic.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
commit 0071b8e75b (mds: stay in SCAN state in file_eval) makes
Locker::file_eval() ignore locks in the LOCK_SCAN state. If no request
changes the lock state, the lock can get stuck in LOCK_SCAN forever.
This can cause client reads/writes to hang, because a lock in the
LOCK_SCAN state does not allow Frw caps.
The fix is to make LOCK_SCAN an unstable state. Thanks to the
CInode::STATE_RECOVERING check in Locker::eval_gather(), the lock
stays in the SCAN state while the file is being recovered, and
transitions to a stable state once the recovery finishes.
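A toy model of that behaviour: SCAN is a real lock-state name, but the
classes and helpers below are invented, and the real logic lives in
Locker::eval_gather() and the MDS lock tables:

  #include <iostream>

  // Invented, simplified lock/inode model; only the control flow matters.
  enum class LockState { SCAN, SYNC };      // SYNC stands in for the stable states

  struct Inode {
    bool recovering = false;                // models CInode::STATE_RECOVERING
  };

  struct FileLock {
    LockState state = LockState::SCAN;
    bool is_stable() const { return state != LockState::SCAN; }  // SCAN is now unstable
  };

  // Models the relevant slice of eval_gather(): unstable locks get
  // re-evaluated, but while the file is still recovering the lock is left
  // in SCAN; once recovery finishes it moves on to a stable state.
  void eval_gather(FileLock& lock, const Inode& in) {
    if (lock.is_stable())
      return;
    if (in.recovering)
      return;                               // stay in SCAN until recovery is done
    lock.state = LockState::SYNC;
  }

  int main() {
    Inode in;
    FileLock lock;

    in.recovering = true;
    eval_gather(lock, in);
    std::cout << "stable while recovering: " << lock.is_stable() << "\n";  // 0

    in.recovering = false;
    eval_gather(lock, in);
    std::cout << "stable after recovery:   " << lock.is_stable() << "\n";  // 1
    return 0;
  }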
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
If we find the lock state is LOCK_LOCK_XLOCK when cancelling an xlock,
set the lock state to LOCK_XLOCKDONE and call Locker::eval_gather().
This makes sure the lock will eventually transition to a stable state
(LOCK_XLOCKDONE's next state is stable).
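Roughly, the cancel path described above. LOCK_LOCK_XLOCK, LOCK_XLOCKDONE
and LOCK_LOCK are real state names; the types and helpers are invented
for illustration:

  // Invented, simplified model of the cancel path.
  enum class State { LOCK_LOCK_XLOCK, LOCK_XLOCKDONE, LOCK_LOCK };

  struct Lock {
    State state = State::LOCK_LOCK_XLOCK;
  };

  // Stand-in for Locker::eval_gather(): LOCK_XLOCKDONE's next state is
  // stable, so once we put the lock there the machinery can finish the job.
  void eval_gather(Lock& lock) {
    if (lock.state == State::LOCK_XLOCKDONE)
      lock.state = State::LOCK_LOCK;        // a stable state
  }

  void cancel_xlock(Lock& lock) {
    if (lock.state == State::LOCK_LOCK_XLOCK) {
      lock.state = State::LOCK_XLOCKDONE;   // switch to a state with a stable successor
      eval_gather(lock);                    // and let the usual machinery take over
    }
  }

  int main() {
    Lock l;
    cancel_xlock(l);
    return l.state == State::LOCK_LOCK ? 0 : 1;
  }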
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
For acquiring/cancelling an xlock, the lock state transitions for
dentry locks and other types of locks are the same, so I think the
"type != CEPH_LOCK_DN" check doesn't make sense.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
If the lock state is LOCK_XLOCKDONE, the xlocker can hold the GSHARED
cap, so when finishing an xlock we may need to revoke that cap.
In most cases Locker::_finish_xlock() directly sets the lock state to
LOCK_LOCK or LOCK_EXCL, which hides the issue. If 'num_rdlock > 0'
or 'num_wrlock > 0' when finishing the xlock, the issue shows up:
the lock gets stuck in LOCK_XLOCKDONE forever.
The fix is to always call Locker::_finish_xlock() when the xlock count
reaches zero. _finish_xlock() checks whether it can change the lock
state to LOCK_EXCL immediately; if not, it uses Locker::eval_gather()
to transition the lock state.
Another change in this patch is to avoid changing the lock state to
LOCK_LOCK directly, because a lock in the LOCK_XLOCKDONE state allows
the GSHARED cap, while a lock in the LOCK_LOCK state does not.
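A self-contained sketch of that flow. The state names mirror the real
ones, but the fields and helpers are invented, and cap revocation is only
hinted at:

  #include <iostream>

  // Invented, simplified model of the fix.
  enum class State { LOCK_XLOCKDONE, LOCK_EXCL };

  struct Lock {
    State state = State::LOCK_XLOCKDONE;
    int num_xlock = 1;
    int num_rdlock = 0;
    int num_wrlock = 0;
    bool transition_started = false;  // stands in for eval_gather() kicking the state machine
  };

  // Stand-in for Locker::eval_gather(): begins the move out of LOCK_XLOCKDONE;
  // in the real code this is what ends up revoking the xlocker's GSHARED cap
  // and later settles the lock in a stable state.
  void eval_gather(Lock& l) { l.transition_started = true; }

  // Stand-in for Locker::_finish_xlock(): go straight to LOCK_EXCL if we can,
  // otherwise let eval_gather() drive the transition.
  void finish_xlock(Lock& l) {
    if (l.num_rdlock == 0 && l.num_wrlock == 0)
      l.state = State::LOCK_EXCL;
    else
      eval_gather(l);
  }

  // The fix: always route through finish_xlock() once the xlock count hits
  // zero, instead of sometimes leaving the lock sitting in LOCK_XLOCKDONE.
  void xlock_finish(Lock& l) {
    if (--l.num_xlock == 0)
      finish_xlock(l);
  }

  int main() {
    Lock l;
    l.num_rdlock = 1;                 // the case that used to get stuck
    xlock_finish(l);
    std::cout << "transition started: " << l.transition_started << "\n";  // 1
    return 0;
  }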
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
There are several issues in Capability::confirm_receipt():
1. Receiving a client caps message with 'seq == last_sent' does not
mean we have finished revoking caps; the client can send a caps
message that only flushes dirty metadata.
2. When receiving a client caps message with 'seq == N', we should
forget pending revocations whose seq numbers are less than N. This is
because, when revoking caps, we create a revoke_info structure, set
its seq number to 'last_sent', and then increase 'last_sent' (see the
sketch after this list).
3. When the client actively releases caps (by request), the code only
works for the 'seq == last_sent' case. If there are pending
revocations, we should update them as if the release message had been
received before we revoked the corresponding caps.
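A minimal sketch of the bookkeeping the first two points imply. The
revoke_info/last_sent idea comes from the description above; the code
itself is an invented simplification, not the actual Capability
implementation:

  #include <cstdint>
  #include <deque>
  #include <iostream>

  // Invented simplification of a capability's revocation bookkeeping. Each
  // pending revocation remembers the seq it was sent with; last_sent is
  // bumped every time we revoke.
  struct RevokeInfo {
    uint64_t seq;
    unsigned before;                  // caps held before this revocation
  };

  struct Capability {
    uint64_t last_sent = 0;
    unsigned pending_caps = 0;        // caps we currently want the client to hold
    std::deque<RevokeInfo> revokes;   // pending (unconfirmed) revocations

    void revoke(unsigned new_caps) {
      revokes.push_back({last_sent, pending_caps});  // revoke carries the old last_sent
      ++last_sent;
      pending_caps = new_caps;
    }

    // Point 2: a client caps message carrying 'seq' confirms every pending
    // revocation whose seq is below it, so those can be forgotten.
    // Point 1: 'seq == last_sent' alone proves nothing about revocation;
    // the message may just be flushing dirty metadata.
    void confirm_receipt(uint64_t seq) {
      while (!revokes.empty() && revokes.front().seq < seq)
        revokes.pop_front();
    }
  };

  int main() {
    Capability cap;
    cap.pending_caps = 0xff;
    cap.revoke(0x0f);                 // pending revocation with seq 0
    cap.revoke(0x03);                 // pending revocation with seq 1
    cap.confirm_receipt(1);           // confirms only the first revocation
    std::cout << "pending revocations: " << cap.revokes.size() << "\n";  // 1
    return 0;
  }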
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Otherwise, we might serve a pull before we start_flush in the
ReplicaActive constructor.
Fixes: #5799
Signed-off-by: Samuel Just <sam.just@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Fixes: #5831
This commit moves the CORS handling code around. Previously we were
unnecessarily reading the CORS headers for every request, whether
they were needed or not; that code is now only called when needed.
While at it, cleaned up the layering a bit so as not to mix
S3-specific code with the generic functionality (except for
debugging).
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
It is not easy to figure out what a PG temp is just by reading the
code, although it is easy to understand with an example.
Signed-off-by: Loic Dachary <loic@dachary.org>
Merge back what we have in the (open)SUSE ceph spec file for JUnit:
add the missing Requires; the package is named junit4 on some SUSE
versions.
Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
This patch adds two BuildRequires to the ceph.spec file that are
needed to build the RPMs under Fedora. Danny Al-Gaaf commented that
the snappy-devel dependency should actually be added to the
leveldb-devel package; I will try to get that fixed too. In the
meantime, this patch makes sure Ceph builds on Fedora.
Signed-off-by: Erik Logtenberg <erik@logtenberg.eu>