cls/rgw: Clean up the "magic string" usage in the cls layer for RGW.
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: Adam C. Emerson <aemerson@redhat.com>
539385b143
introduced a regression preventing directory-backed OSDs from starting
at boot time.
For device-backed OSDs the boot sequence starts with ceph-disk@.service
and proceeds to
systemctl enable --runtime ceph-osd@.service
where --runtime ensures ceph-osd@12 is removed when the machine
reboots, so that it does not compete with the ceph-disk@/dev/sdb1 unit
at boot time.
However, directory-backed OSDs rely solely on the ceph-osd@.service unit
to start at boot time and will therefore fail to boot.
The --runtime flag is now set selectively, for device-backed OSDs only.
Fixes: http://tracker.ceph.com/issues/19628
Signed-off-by: Loic Dachary <loic@dachary.org>
Now that we send these to the cluster log, we must
whitelist them in the tests that exercise those
unhealthy states.
Fixes: http://tracker.ceph.com/issues/19551
Signed-off-by: John Spray <john.spray@redhat.com>
There could be some pg(s) still being created when we upgrade to
luminous, and the pools holding them may not have changed (in the sense
of pg_pool_t::last_change) between the upgrade and our scan for
creating pgs. In that case, the existing update_pending_creatings()
will fail to collect the pgs that were being created before the upgrade.
With this change, the creating_pgs in the PGMap are also used to update
the OSDMonitor's creating_pgs whenever the PGMap is updated.
But we should stop updating from the PGMap once the upgrade completes,
i.e. stop dispatching MSG_PGSTATS messages to the PGMonitor once the
quorum and all OSDs are luminous.
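A minimal sketch of that merge, using simplified stand-in types (PgId,
LegacyPGMap and PendingCreatings are hypothetical, not the real
pg_t/PGMap/OSDMonitor structures):

    #include <cstdint>
    #include <set>
    #include <tuple>

    // Hypothetical simplified stand-ins; the real Ceph types carry
    // much more state than a bare pool/seed pair.
    struct PgId {
      uint64_t pool;
      uint32_t seed;
      bool operator<(const PgId &o) const {
        return std::tie(pool, seed) < std::tie(o.pool, o.seed);
      }
    };

    struct LegacyPGMap {              // pre-luminous PGMonitor view
      std::set<PgId> creating_pgs;
    };

    struct PendingCreatings {         // OSDMonitor's new tracker
      std::set<PgId> pgs;
    };

    // Fold any pgs the old PGMap still considers "creating" into the
    // OSDMonitor's pending set, so pgs created before the upgrade are
    // not missed when their pool's last_change predates the scan.
    void merge_legacy_creating_pgs(const LegacyPGMap &pgmap,
                                   PendingCreatings *pending) {
      pending->pgs.insert(pgmap.creating_pgs.begin(),
                          pgmap.creating_pgs.end());
    }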
Fixes: http://tracker.ceph.com/issues/19584
Signed-off-by: Kefu Chai <kchai@redhat.com>
Some of the finisher contexts would try to call into the Objecter.
We are mostly protected from this by mds_lock plus the stopping
flag, but at the Filer level there is no mds_lock, so in the
case of file size probing we have a problem.
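For illustration, the kind of guard that mds_lock and the stopping flag
give us elsewhere looks roughly like this (Daemon and probe_objecter
are hypothetical names, not the actual MDS or Filer code):

    #include <functional>
    #include <mutex>

    struct Daemon {
      std::mutex lock;                         // stands in for mds_lock
      bool stopping = false;
      std::function<void(int)> probe_objecter; // would call into the Objecter

      // Finisher-style completion: must not touch the Objecter once
      // the daemon is shutting down.
      void on_probe_finished(int result) {
        std::lock_guard<std::mutex> l(lock);
        if (stopping)
          return;                              // Objecter may already be gone
        probe_objecter(result);
      }
    };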
Fixes: http://tracker.ceph.com/issues/19204
Signed-off-by: John Spray <john.spray@redhat.com>
We get ENOENT when a pool doesn't exist. This can
happen because we don't prevent people from deleting
former CephFS data pools whose files may not have
had their metadata flushed yet.
Fixes: http://tracker.ceph.com/issues/19401
Signed-off-by: John Spray <john.spray@redhat.com>
If a readdir expire event turns out to be older than last_readdir,
just reschedule it (but really we should just discard it, as
another expire event must already be in the queue).
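Roughly the staleness check being described, with hypothetical
stand-in types rather than the real RGW ones:

    #include <chrono>

    using TimePoint = std::chrono::steady_clock::time_point;

    struct ExpireEvent { TimePoint queued_at; };      // stand-in type
    struct DirHandle   { TimePoint last_readdir; };   // stand-in type

    // An expire event queued before the most recent readdir is stale:
    // a newer expire event must already be in the queue, so it could
    // be dropped instead of rescheduled.
    bool expire_event_is_stale(const ExpireEvent &ev, const DirHandle &dir) {
      return ev.queued_at < dir.last_readdir;
    }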
Fixes: http://tracker.ceph.com/issues/19625
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Previously, when we got a beacon that updated the health
metrics for an MDS, the user would just see mysterious-looking
cluster log messages indicating a rising fsmap epoch number.
It would be good to do this for health messages in general at
some point, but for now just do it for the MDS ones.
Fixes: http://tracker.ceph.com/issues/19551
Signed-off-by: John Spray <john.spray@redhat.com>
We were previously tearing MgrClient down only when not
holding a rank, leading to it trying to continue
running after the monclient was shut down.
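A rough sketch of the intended teardown order, using simplified
stand-in types rather than the real MgrClient/MonClient classes:

    struct MgrClientLike { void shutdown() {} };  // stand-in for MgrClient
    struct MonClientLike { void shutdown() {} };  // stand-in for MonClient

    // Tear the mgr client down unconditionally, and before the mon
    // client; previously this was skipped when holding a rank.
    void daemon_shutdown(MgrClientLike &mgrc, MonClientLike &monc) {
      mgrc.shutdown();
      monc.shutdown();
    }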
Fixes: http://tracker.ceph.com/issues/19566
Signed-off-by: John Spray <john.spray@redhat.com>