This fixes errors caused by remounts done by some tests (e.g.
test_recovery_pool.py) where the fs name is not given.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
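
A minimal sketch of the idea (helper and parameter names are assumed,
not the actual qa code): when a remount is requested without an fs
name, fall back to the name used at mount time:

    # Hypothetical sketch, not the real qa/tasks code: default the fs
    # name on remount when the test did not pass one.
    def remount(mount, fs_name=None):
        # Fall back to the name recorded when the fs was first mounted.
        fs_name = fs_name if fs_name is not None else mount.cephfs_name
        mount.umount_wait()
        mount.mount(cephfs_name=fs_name)
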
The MDS may not be on the same machine where the cluster command is run.
Fixes: http://tracker.ceph.com/issues/24858
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
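
A hedged sketch of the idea, with teuthology-style names assumed rather
than taken from the actual change: resolve a mon's remote and run the
cluster command there instead of on the MDS's host:

    # Hypothetical teuthology-style sketch: the MDS daemon may live on
    # a different machine than the one that can run cluster commands.
    def run_cluster_cmd(ctx, *args):
        (remote,) = ctx.cluster.only('mon.a').remotes.keys()
        return remote.run(args=['sudo', 'ceph'] + list(args))
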
* refs/pull/21885/head:
qa: update cluster log health warning message
qa: add tests for client features
mds: evict clients that lack required features
mds: cleanup MDSRank::evict_client
mds: infer client version by client metadata and connection's features
mds: introduce "ceph fs set <fs_name> min_compat_client <release_name>"
mds: tell client why it's rejected
mds: introduce cephfs' own feature bits
mds: make Server::prepare_force_open_sessions() update client metadata
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
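
For illustration, a minimal sketch of driving the new command from a
test (fs and release names are examples; eviction behavior per the
commits above):

    import subprocess

    def set_min_compat_client(fs_name, release):
        # Clients older than <release> are rejected at session open and
        # existing too-old sessions are evicted (see commits above).
        subprocess.check_call(
            ['ceph', 'fs', 'set', fs_name, 'min_compat_client', release])

    set_min_compat_client('cephfs', 'mimic')
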
* refs/pull/22455/head:
qa/ceph-volume: add a test for put_object_versioned()
ceph-volume-client: allow atomic updates for RADOS objects
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
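
A hedged sketch of the atomic-update idea using the librados Python
bindings (function shape assumed, not the exact ceph-volume-client
code):

    import rados

    def put_object_versioned(cluster, pool, name, data, version=None):
        # Compare-and-swap write: assert_version fails the whole op if
        # the object changed since `version` was read, so the caller
        # can re-read and retry instead of clobbering a concurrent
        # update.
        ioctx = cluster.open_ioctx(pool)
        try:
            with rados.WriteOpCtx() as op:
                if version is not None:
                    op.assert_version(version)
                op.write_full(data)
                ioctx.operate_write_op(op, name)
        finally:
            ioctx.close()
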
Just make the caller happy; there is no easy way to support a timeout.
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Fixes: https://tracker.ceph.com/issues/24053
* refs/pull/21712/head:
qa/tasks/cephfs: add test for renewing stale session
client: invalidate caps and leases when session becomes stale
client: fix race in concurrent readdir
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
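
An illustrative toy sketch of the rule being tested (toy types, not the
real client code): going stale drops caps and leases, and renewal must
re-acquire them from the MDS:

    class ToySession(object):
        """Toy model of the rule; not the real Client code."""
        def __init__(self):
            self.state = 'open'
            self.caps = set()
            self.leases = set()

        def mark_stale(self):
            # Drop caps and leases: the MDS may have revoked them while
            # we were unreachable, so cached state cannot be trusted.
            self.state = 'stale'
            self.caps.clear()
            self.leases.clear()

        def renew(self, granted_caps):
            # Renewal re-acquires capabilities from the MDS explicitly
            # rather than resurrecting the pre-stale set.
            self.state = 'open'
            self.caps = set(granted_caps)
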
* refs/pull/21374/head:
qa: add test for snap format upgrade
mds: initialize SnapServer::snaprealm_v2_since after journal replay
mds: properly distinguish cap update from snap flush
mds: update dev document of cephfs snapshot
doc: add release notes for cephfs snapshot
mds: allow snapshot by default for new filesystem
mds: close past parents after snaprealm format gets converted
mds: automatically allow multi-active MDS after scrubbing all inodes
mds: don't mark primary dentry damaged if inode has been repaired
mds: upgrade snaprealm format during scrub
mds: allow scrubbing mdsdir
mds: cleanup scrub code
mds: show health warning if multimds with old format snapshots
mds: automatically allow multi-active MDS after removing all old snapshots
mds: disallow multi-active MDS if snapshot was ever created by pre-mimic mds
mds: validate SnapInfo::long_name before using it
mds: don't bump snaptable last_snap when renaming snapshot
mds: properly save snaptable after upgrading version
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
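
A hedged sketch of the resulting upgrade flow (daemon id and asok verbs
assumed from this series; check the release notes for your version):
scrub the tree and the mdsdir so old-format snaprealms are converted,
after which multi-active MDS is permitted again:

    import subprocess

    def upgrade_snap_format(mds_id):
        # Recursively scrub the root and the mds-private directory; the
        # scrub converts old-format snaprealms as it walks the tree.
        for path in ('/', '~mdsdir'):
            subprocess.check_call([
                'ceph', 'daemon', 'mds.{0}'.format(mds_id),
                'scrub_path', path, 'force', 'recursive', 'repair'])
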
* refs/pull/21065/head:
qa/cephfs: test if evicted client unmounts without hanging
qa/tasks: allow custom timeout for umount_wait()
client: don't hang when MDS sessions are evicted
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
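
A minimal sketch of what the umount_wait() timeout buys, using plain
subprocess rather than the teuthology helper: an evicted client that
can no longer reach the MDS fails the unmount quickly instead of
hanging the test:

    import subprocess

    def umount_wait(mountpoint, timeout=900):
        # Bound the unmount: subprocess.run() raises TimeoutExpired if
        # the evicted client's unmount never completes.
        subprocess.run(['sudo', 'umount', mountpoint],
                       timeout=timeout, check=True)
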
* refs/pull/16608/head:
qa: whitelist mds down wrn during cephfs testing
mds: add config to disable fragmentation
qa: add max_mds thrash test
qa: mds_thrash updates for new max_mds behavior
doc: update upgrade procedure and release notes
qa: add test for cluster resizing
qa: remove use of mds deactivate
cephfs: add new down/joinable fs flags
mds: evict all clients if last mds shutting down
cephfs: deprecate ceph mds deactivate
cephfs: kill allow_dirfrags
cephfs: Kill allow_multimds
cephfs: Change behavior of cluster_down flag
mon/FSCommands: Set extra MDS to standby
cephfs: Health check changes
mon/MDSMonitor: Remove command support for legacy syntax
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
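
A hedged sketch of the new resize flow (fs name is an example): the
cluster is sized purely via max_mds, with no deactivate step:

    import subprocess

    def set_max_mds(fs_name, n):
        subprocess.check_call(
            ['ceph', 'fs', 'set', fs_name, 'max_mds', str(n)])

    set_max_mds('cephfs', 2)   # grow to two active ranks
    set_max_mds('cephfs', 1)   # shrink: rank 1 stops and its daemon
                               # returns to standby automatically
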
As dirfrags are now standard in CephFS, remove the machinery for
tracking and enabling this feature.
"ceph fs set <fs> allow_dirfrags" is now deprecated and prints a
warning message.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
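
For illustration (fs name is an example), the deprecated setting is
still accepted but now only warns:

    import subprocess

    # Prints a deprecation warning and has no effect; fragmentation is
    # unconditionally enabled.
    subprocess.call(
        ['ceph', 'fs', 'set', 'cephfs', 'allow_dirfrags', 'true'])
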