The following
./ceph osd pool create data-cache 8 8
./ceph osd tier add data data-cache
./ceph osd tier cache-mode data-cache writeback
./ceph osd tier set-overlay data data-cache
./rados -p data create foo
./rados -p data stat foo
results in
error stat-ing data/foo: No such file or directory
even though foo exists in the data-cache pool, as it should. STAT
checks for (exists && !is_whiteout()), but the whiteout flag isn't
cleared on CREATE as it is on WRITE and WRITEFULL. The problem is
that, for newly created 0-sized cache pool objects, the CREATE handler
in do_osd_ops() doesn't get a chance to queue an OP_TOUCH, and so the
logic in prepare_transaction() considers CREATE to be a read and
therefore doesn't clear the whiteout flag. Fix it by allowing the
CREATE handler to queue OP_TOUCH at all times, mimicking WRITE and
WRITEFULL behaviour.
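With the fix applied, the same sequence succeeds:
./rados -p data stat foo   # now reports a zero-sized object instead of ENOENT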
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
When getting a REJECT from a backfill target, tell the targets that
have already been GRANTed to go back to the RepNotRecovering state by
sending a REJECT to them.
Fixes: #7922
Signed-off-by: David Zafman <david.zafman@inktank.com>
Fixes: #7978
We tried to move to the next placement rule, but we were already at the
last one, so we ended up looping forever.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
find_object_context provides some niceties which we don't need since we know
the oid of the clones. Problematically, it also returns ENOENT if the snap
requested happens to have been removed. Even in such a case, the clone may
well still exist for other snaps. Rather than modify find_object_context to
avoid this situation for this caller, we'll simply do it inline in do_op.
Fixes: #7858
Signed-off-by: Samuel Just <sam.just@inktank.com>
Head eviction implies that no clones are present. Also, add an
exists flag to SnapSetContext so that an ssc left over from a recent
eviction doesn't prevent a snap read from activating the promotion
machinery.
Fixes: #7858
Signed-off-by: Samuel Just <sam.just@inktank.com>
This will make the OSD randomly reject backfill reservation requests. This
exercises the failure code paths but does not break overall behavior
because the primary will back off and retry later.
This should help us reproduce #7922.
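This is controlled by a debug config option; assuming the name follows
the description (osd_debug_reject_backfill_probability, default 0), it
could be enabled in a vstart cluster's ceph.conf along these lines:
[osd]
        osd debug reject backfill probability = 0.3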
Signed-off-by: Sage Weil <sage@inktank.com>
Create a custom profile with ruleset-failure-domain=osd. (The default
ruleset-failure-domain=host won't do because this script assumes, and
only works when, all OSDs are on the same host.) While at it, set k and
m explicitly to avoid troubles in the future.
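Concretely, the setup amounts to something like the following (profile
name and k/m values illustrative):
./ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=osd
./ceph osd pool create ecpool 12 12 erasure myprofile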
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Unmap rbd images when stopping the whole cluster. Not doing so results
in images that cannot be unmapped until the same cluster is brought
back up. Issue a warning if we fail to unmap all images.
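The cleanup boils down to unmapping whatever is still mapped before the
daemons go away, e.g. (device path illustrative):
./rbd showmapped
./rbd unmap /dev/rbd0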
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Command tracing here doesn't bring any value and simply pollutes the
terminal, as the script always runs to completion.
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Set ruleset-failure-domain=osd so that
./ceph osd pool create ecpool 12 12 erasure
./rados --pool ecpool put SOMETHING /etc/group
works by default. When using a vstart cluster the default failure
domain (host) won't work because all OSDs are in "localhost".
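The resulting profile can be verified with something like (output
abridged and approximate):
./ceph osd erasure-code-profile get default
k=2
m=1
ruleset-failure-domain=osd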
Signed-off-by: Loic Dachary <loic@dachary.org>
If we shut down, clear out all of the lockdep state. This ensures that if
we start up again on another cct, we will not be confused by old type ids
and dependency state.
Possibly contributed to #7965.
Signed-off-by: Sage Weil <sage@inktank.com>
If we have already registered a cct for lockdep, do not accept another one.
We already check that the cct matches when we shut down. This way we
will run for the life span of a single cct and no longer.
Fixes: #7965
Signed-off-by: Sage Weil <sage@inktank.com>
When we make an existing pool a tier, we start copying the snap metadata
from the base tier. That includes removed_snaps. In order for the OSD
to recognize that this value is changing for the first time, we need to
set snap_epoch, or else the OSD doesn't update its in-memory PGPool
with the removed snaps and we eventually hit an assertion failure
because PGPool::cached_removed_snaps is incorrect (e.g., empty).
Fix this by bumping snap_epoch when we add the new tier.
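A repro in the spirit of the cache-tier sequence above (pool names
reused for illustration) is to remove a snap from the base pool before
attaching the tier, so that removed_snaps is non-empty when copied:
./ceph osd pool mksnap data snap1
./ceph osd pool rmsnap data snap1
./ceph osd pool create data-cache 8 8
./ceph osd tier add data data-cache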
Fixes: #7915
Signed-off-by: Sage Weil <sage@inktank.com>
* Require "$remote_fs" since it guarantees /usr availability
  (the rbd executable is in /usr/bin/rbd).
* Speed up init.d rbd mapping on machines acting as MON/OSD
  by starting rbdmap after /etc/init.d/ceph (when possible) and
  shutting down rbd before ceph (see the LSB header sketch below).
* Map rbd devices before starting X (helpful when /home is mounted
  from rbd).
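A minimal LSB header sketch capturing this ordering (the exact
dependency names are a guess, not the shipped file):
### BEGIN INIT INFO
# Provides:          rbdmap
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Should-Start:      ceph
# Should-Stop:       ceph
# X-Start-Before:    $x-display-manager
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO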
Files in a dirfrag are usually processed in the order of readdir
results. Files at the beginning of the directory are more likely to be
used in the future than files at the end.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
For a cross-authority rename, the MDS first freezes the source inode's
authpin. This happens while the source dentry isn't locked, so by the
time the inode's authpin becomes frozen, the source dentry may have
changed and be linked to a different inode.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>