This lets us have e.g. /etc/ceph/ceph.client.admin.keyring that is
owned by root:admin and mode u=rw,g=r,o= without making every non-root
run of the command line tools complain and fail.
This is what the Chef cookbook has been doing for a while already.
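For illustration, a minimal sketch of setting up that layout (assumes a POSIX
system and an existing "admin" group; not part of this change):

    #include <grp.h>
    #include <sys/stat.h>
    #include <unistd.h>

    // Make the admin keyring owned by root:admin with mode u=rw,g=r,o=
    // so members of the admin group can read it without being root.
    int set_keyring_perms(const char *path)
    {
      struct group *g = getgrnam("admin");   // look up the admin group's gid
      if (!g)
        return -1;
      if (chown(path, 0 /* root */, g->gr_gid) < 0)
        return -1;
      return chmod(path, S_IRUSR | S_IWUSR | S_IRGRP);  // 0640
    }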
If the following sequence of events occurred,
a clone could be created of an unprotected snapshot:
1. A: begin clone - check that snap foo is protected
2. B: rbd unprotect snap foo
3. B: check that all pools have no clones of foo
4. B: unprotect snap foo
5. A: finish creating clone of foo, add it as a child
To stop this from happening, check at the beginning and end of
cloning that the parent snapshot is protected. If it is not,
or checking protection status fails (possibly because the parent
snapshot was removed), remove the clone and return an error.
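Roughly, the pattern is the following (hypothetical helpers, not the exact
librbd code):

    #include <cerrno>

    struct ParentSnap {
      // Reads the protection status; fails if the snapshot no longer exists.
      int protection_status(bool *is_protected) const;
    };
    int create_clone_and_add_child(const ParentSnap &parent);
    int remove_clone();

    int clone_checked(const ParentSnap &parent)
    {
      bool prot = false;
      int r = parent.protection_status(&prot);
      if (r < 0 || !prot)              // check 1: refuse to start the clone
        return r < 0 ? r : -EINVAL;

      r = create_clone_and_add_child(parent);
      if (r < 0)
        return r;

      r = parent.protection_status(&prot);
      if (r < 0 || !prot) {            // check 2: parent unprotected or removed
        remove_clone();                // roll back and report an error
        return r < 0 ? r : -EINVAL;
      }
      return 0;
    }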
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
These iterate over all pools and check for children of a
particular snapshot.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Dan Mick <dan.mick@inktank.com>
Prevent the 'bucket link' command from overwriting the index of an
existing bucket. Corrects bug 2935:
http://tracker.newdream.net/issues/2935
Signed-off-by: caleb miles <caleb.miles@inktank.com>
Fixes: #3057
Since we read usage in chunks, we need to clear the
usage map before reading the next chunk; otherwise
we would aggregate the old data as well.
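For example, the read loop needs to look roughly like this (placeholder names,
not the actual rgw code):

    #include <cstdint>
    #include <map>
    #include <string>

    struct UsageEntry { uint64_t bytes = 0; uint64_t ops = 0; };

    // Reads one chunk into 'usage'; sets 'truncated' if more chunks remain.
    bool read_usage_chunk(std::map<std::string, UsageEntry> &usage, bool &truncated);
    void aggregate(const std::map<std::string, UsageEntry> &usage);

    void read_all_usage()
    {
      std::map<std::string, UsageEntry> usage;
      bool truncated = true;
      while (truncated) {
        usage.clear();                 // the fix: drop the previous chunk first
        if (!read_usage_chunk(usage, truncated))
          break;
        aggregate(usage);              // otherwise old entries get counted again
      }
    }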
Backport: argonaut
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
If requeue is false, we won't have cleared out waiting_for_ondisk; adjust
assert placement as appropriate. Also, make sure we handle the requeue
and !op case properly (although I'm not sure offhand if/when it would
come up).
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>
If we don't wait for the callback, the finisher may cleanup the callback
context before the callback is actually invoked, causing a
use-after-free error.
This fixes #3048.
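A generic sketch of the pattern (not the actual Finisher/Context API): block
until the queued callback has run before letting its context be freed.

    #include <condition_variable>
    #include <mutex>

    struct WaitableContext {
      std::mutex m;
      std::condition_variable cv;
      bool done = false;

      void finish() {                  // invoked by the finisher thread
        std::lock_guard<std::mutex> l(m);
        done = true;
        cv.notify_all();
      }
      void wait() {                    // invoked by whoever queued the context
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [this] { return done; });
      }
    };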
Signed-off-by: Mike Ryan <mike.ryan@inktank.com>
If we don't wait for the callback, the finisher may cleanup the callback
context before the callback is actually invoked, causing a
use-after-free error.
This fixes #3048.
Signed-off-by: Mike Ryan <mike.ryan@inktank.com>
If the mon session drops, we get an EAGAIN callback, which we already
correctly ignore. (Clean this up and comment so it's clearer what is
going on.)
Fix ms_handle_connect() to resubmit those requests.
Noticed while fixing #3049.
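Illustratively (placeholder types and names, not the actual Objecter code),
the resubmit looks like:

    #include <cstdint>
    #include <map>

    struct VersionCheck { unsigned have_epoch; /* callback, etc. */ };

    void send_version_check(uint64_t tid, const VersionCheck &check);  // (re)send to the mon

    std::map<uint64_t, VersionCheck> pending_version_checks;  // keyed by tid

    void ms_handle_connect()
    {
      // Requests dropped with EAGAIN by the old session are sent again on the
      // new session so their callers eventually get a real answer.
      for (const auto &p : pending_version_checks)
        send_version_check(p.first, p.second);
    }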
Signed-off-by: Sage Weil <sage@inktank.com>
If our map get_version check needs to be retried, tell the
is_latest_map() callers instead of returning 0 ("no").
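Sketch of the reporting change (hypothetical signature): a retryable failure
is passed through as -EAGAIN rather than collapsed into a definite answer.

    #include <cerrno>
    #include <functional>

    void report_map_check(int get_version_result, bool have_latest,
                          std::function<void(int err, bool is_latest)> onfinish)
    {
      if (get_version_result == -EAGAIN) {
        onfinish(-EAGAIN, false);   // "try again", not a real "no"
        return;
      }
      onfinish(get_version_result, have_latest);
    }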
Fixes: #3049
Signed-off-by: Sage Weil <sage@inktank.com>
We should requeue the dups along with the originals. This avoids
situations where, after requeue, the dups are reordered with respect to
each other. For example:
- client sends A, B, C
- osd receives A
- connection drops
- client sends A', B', C'
- osd puts A' in waiting_for_ondisk, starts B' and C'
- on_change() requeues everything
Final queue order (before this patch) is
A, B', C', A'
After this patch, the resulting queue order is
A, A', B', C'
Or somewhat more generally, it might be:
A, A', B, B', B'', C', C'', D'', ....
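A toy model of the requeue (not the OSD code): each original goes back onto
the front of the queue together with its dups, instead of the dups being
appended at the tail.

    #include <deque>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int main()
    {
      std::deque<std::string> queue = {"B'", "C'"};              // ops already back on the queue
      std::map<std::string, std::vector<std::string>> waiting =  // op -> the dups queued behind it
          {{"A", {"A'"}}};

      // Requeue in reverse so the earliest original ends up at the very front,
      // immediately followed by its dups.
      for (auto it = waiting.rbegin(); it != waiting.rend(); ++it) {
        queue.insert(queue.begin(), it->second.begin(), it->second.end());
        queue.push_front(it->first);
      }

      for (const auto &op : queue)
        std::cout << op << ' ';   // prints: A A' B' C'
      std::cout << '\n';
    }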
Fixes (another source of): #2947
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>
Binary flag arguments were being mishandled during parsing.
Also, fix argument parsing and update the radosgw-admin
CLI test reference.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
Add a garbage collector thread that is responsible for cleaning
up clutter. When removing an object, store info about the
leftovers in a special gc map (via the rgw objclass). New
radosgw-admin commands list the objects pending gc and run the
gc process manually. Also, gc processors can run in parallel;
however, each handles a single gc shard at a time (synchronized
using the lock objclass).
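Roughly, each gc worker does the following per shard (placeholder helpers,
not the actual rgw code):

    #include <string>
    #include <vector>

    struct GcEntry { std::string oid; };

    bool try_lock_shard(int shard);              // lock objclass: claim the shard
    void unlock_shard(int shard);
    std::vector<GcEntry> list_shard(int shard);  // entries queued at object removal
    void remove_leftovers(const GcEntry &e);     // delete the leftover rados objects

    void gc_process_one_shard(int shard)
    {
      if (!try_lock_shard(shard))   // another gc worker owns this shard; skip it
        return;
      for (const GcEntry &e : list_shard(shard))
        remove_leftovers(e);
      unlock_shard(shard);
    }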
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
We don't shut down all threads, and the surviving ones fight with
exit()'s teardown. Kludge until we have a clean shutdown process.
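One way to sidestep that teardown (illustrative only; the exact mechanism is
not spelled out above) is to terminate without running atexit handlers and
static destructors:

    #include <unistd.h>

    [[noreturn]] void shutdown_kludge(int status)
    {
      // _exit() ends the process immediately, so surviving threads cannot
      // race against global destructors the way they do with exit().
      _exit(status);
    }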
Signed-off-by: Sage Weil <sage@inktank.com>