When removing a directory path from mirroring, cephfs-mirror records
this state in thread-local storage. The replayer thread backs off
in the midst of mirroring the directory snapshots for this directory path.
However, the canceled state is never cleared, causing the thread
to incorrectly assume that other directory paths (which are picked up
by this thread) also need backing off; it therefore marks these
directory paths as failed (unable to synchronize snapshots).
The fix is to store this state in the directory-specific store, which is
allocated when a thread picks up a directory path for synchronization.
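As a rough illustration of the change, here is a minimal Python sketch
(the actual code is C++; all names below are invented for illustration):

    import threading

    _tls = threading.local()  # before: the canceled flag lived here and
                              # leaked across directory paths served by
                              # the same replayer thread

    class DirRegistry:
        # per-directory state, allocated when a replayer thread picks up
        # a directory path for synchronization
        def __init__(self, dir_path):
            self.dir_path = dir_path
            self.canceled = False  # goes away with the registry itself

    def cancel_sync(registry):
        # fix: record cancellation on the directory-specific store, not
        # on the thread, so other paths handled by this thread are unaffected
        registry.canceled = True

    def sync_snapshots(registry):
        if registry.canceled:
            return  # back off only for this directory path
        # ... mirror snapshots for registry.dir_path ...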
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Without this, the updater thread can start processing other queued
contexts when a mirror instance has failed or been blocklisted,
resulting in unexpected behavior.
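A minimal sketch of the guard, in Python for brevity (the actual code
is C++; all names below are illustrative):

    import threading
    from collections import deque

    class Updater:
        def __init__(self):
            self.cond = threading.Condition()
            self.stopping = False  # set when the instance fails or is blocklisted
            self.queue = deque()

        def queue_update(self, ctx):
            with self.cond:
                self.queue.append(ctx)
                self.cond.notify()

        def stop(self):
            with self.cond:
                self.stopping = True
                self.cond.notify()

        def run(self):
            while True:
                with self.cond:
                    while not self.queue and not self.stopping:
                        self.cond.wait()
                    if self.stopping:
                        return  # don't drain queued contexts on a dead instance
                    ctx = self.queue.popleft()
                ctx()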
Signed-off-by: Venky Shankar <vshankar@redhat.com>
During teuthology tests, the initial cluster bootstrap often starts up
the mons but doesn't include all of them in the initial quorum, due to
mon startup misalignment and random delays. Provide a short grace period
during which we do not raise a MON_DOWN alert even though the quorum is
not complete.
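A hedged sketch of the grace-period logic in Python (the real check
lives in the C++ health code; the knob name and default below are
invented for illustration):

    import time

    MON_DOWN_GRACE = 60  # seconds; illustrative, not the actual option

    class QuorumHealthCheck:
        def __init__(self):
            self.bootstrap_time = time.monotonic()

        def check(self, num_mons, num_in_quorum):
            if num_in_quorum >= num_mons:
                return None
            # suppress the alert while slow-to-start mons are still joining
            if time.monotonic() - self.bootstrap_time < MON_DOWN_GRACE:
                return None
            return 'MON_DOWN'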
Fixes: https://tracker.ceph.com/issues/43584
Signed-off-by: Sage Weil <sage@newdream.net>
* refs/pull/42073/head:
doc/mgr/nfs: fix 'export apply', pool name
PendingReleaseNotes: document workaround for NFS storage change
qa/tasks/mgr/test_orchestrator_cli: fix test
qa/suites/orch/cephadm/mgr-nfs-upgrade: add test for nfs migration
mgr/cephadm: migrate nfs grace file
mgr/nfs: migrate pre-pacific nfs.ganesha-foo clusters to nfs.foo
doc/cephfs/fs-nfs-exports: document new export apply capabilities
qa/tasks/cephfs/test_nfs: define NFS_POOL_NAME
mgr/nfs: use NFS_POOL_NAME in test_nfs.py
mgr/nfs: test export apply on JSON list
mgr/nfs: add test for ganesha conf apply/import
qa/tasks/cephfs/test_nfs: retry mount a few times
mgr/cephadm: migrate all legacy nfs exports to new .nfs pool
mgr/nfs: adjust cephfs export caps if necessary
python-common: don't accept pool/ns for NFSServiceSpec
mgr/orchestrator: drop rados_config_location ServiceDescription property
mgr/cephadm: move rados_config_location() out of NFSServiceSpec
mgr/nfs: change nfs pool to .nfs
mgr/nfs/export: accept a JSON or ganesha EXPORT config
mgr/nfs: allow 'nfs export apply' to take a list of exports
python-common: remove pool + namespace from NFSServiceSpec
mgr/nfs: use fixed pool + ns
mgr/rook: use fixed pool + ns
mgr/dashboard: use fixed pool + ns
mgr/cephadm: always use fixed pool and namespace
mgr/nfs: adjust test to match pool name
mgr/nfs: always create ganesha pool with well-defined name
Reviewed-by: Varsha Rao <varao@redhat.com>
The legacy-option-headers target is only marked as a dependency of
common-objs and common-common-objs. Because those targets are OBJECT
libraries, their dependencies aren't inherited by the targets that link
common-objs or common-common-objs.
This adds the dependencies manually, so that changes to the config
yaml files will cause legacy-option-headers to regenerate the headers.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
mgr/cephadm/iscsi: check if dashboard is enabled
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Sebastian Wagner <sewagner@redhat.com>
crimson/common/log: print out logger.debug() when log level >=6
Reviewed-by: Mark Nelson <mnelson@redhat.com>
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
Reviewed-by: Chunmei Liu <chunmei.liu@intel.com>
crimson/os: give AlienStore its own debug subsystem.
Reviewed-by: Mark Nelson <mnelson@redhat.com>
Reviewed-by: Chunmei Liu <chunmei.liu@intel.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
Otherwise, we install the new podman at the end, and the
container-selinux-policy package install triggers a bunch of selinux
errors.
Fixes: https://tracker.ceph.com/issues/50151
Signed-off-by: Sage Weil <sage@newdream.net>
It may take a moment for a ganesha daemon to (re)configure itself with a
new export. If a mount fails, retry a couple of times.
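Something along these lines, as a hedged sketch of the retry (the qa
task is Python; the helper below is invented for illustration):

    import time

    def mount_with_retries(mount_fn, attempts=3, delay=5):
        # ganesha may still be (re)configuring itself with the new export,
        # so tolerate a few transient mount failures before giving up
        for i in range(attempts):
            try:
                return mount_fn()
            except Exception:
                if i == attempts - 1:
                    raise
                time.sleep(delay)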
Signed-off-by: Sage Weil <sage@newdream.net>
Migrate all past NFS pools, whether they were created by mgr/nfs or by
the dashboard, to the new mgr/nfs .nfs pool.
Since this migration relies on RADOS being available, we have to be a bit
careful here: we only attempt the migration from serve(), not during
module init.
After the exports are re-imported, we destroy existing ganesha daemons so
that new ones will get recreated. This ensures the (new) daemons have
cephx keys to access the new pool.
Note that no attempt is made to clean up the old NFS pools. This is out
of paranoia: if something goes wrong, the old NFS configuration data will
still be there.
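As a rough sketch of the serve()-time gating (mgr modules are Python;
the helper names below are hypothetical):

    def serve(self):
        while self.run:
            if not self.migration_done:
                try:
                    # RADOS is reachable by the time serve() runs,
                    # unlike during module init
                    self.migrate_legacy_nfs_pools()   # hypothetical helper
                    # destroy/recreate daemons so the new ones get cephx
                    # keys for the new pool
                    self.redeploy_ganesha_daemons()   # hypothetical helper
                    self.migration_done = True
                except Exception as e:
                    self.log.warning(f'NFS migration deferred: {e}')
            self.event.wait(self.sleep_interval)
            self.event.clear()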
Signed-off-by: Sage Weil <sage@newdream.net>
If we are importing an old export, we may find that the cephx user
exists but with the wrong caps. Adjust the caps in that case!
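A minimal sketch of the caps adjustment via mon commands (illustrative;
the function and flow are assumptions, though 'auth get' and 'auth caps'
are real mon commands):

    import json

    def ensure_cephx_caps(mon_command, entity, want_caps):
        # want_caps: e.g. {'mon': 'allow r', 'osd': 'allow rw ...'}
        ret, out, err = mon_command({'prefix': 'auth get',
                                     'entity': entity,
                                     'format': 'json'})
        if ret != 0:
            return  # user doesn't exist yet; creation handled elsewhere
        current = json.loads(out)[0].get('caps', {})
        if current != want_caps:
            # flatten {'mon': 'allow r', ...} into ['mon', 'allow r', ...]
            caps = [s for k, v in want_caps.items() for s in (k, v)]
            mon_command({'prefix': 'auth caps',
                         'entity': entity,
                         'caps': caps})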
Signed-off-by: Sage Weil <sage@newdream.net>