* refs/pull/24292/head:
qa: add test for rctime on root inode
mds: set rctime on new system inode
mds: small refactor
Reviewed-by: Zheng Yan <zyan@redhat.com>
Otherwise a bug preventing an asok operation from completing will cause the
entire job to fail.
Fixes: http://tracker.ceph.com/issues/36335
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
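A minimal sketch of the idea, in the style of the Python qa tasks; the helper
name and default timeout are assumptions, not the actual diff:

    import subprocess

    def asok_command(asok_path, command, timeout=300):
        # A stuck daemon now surfaces as subprocess.TimeoutExpired for
        # this one check instead of blocking the whole teuthology job.
        return subprocess.check_output(
            ["ceph", "--admin-daemon", asok_path] + list(command),
            timeout=timeout)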
* refs/pull/21566/head:
test: add test for mds drop cache command
mds: command to trim mds cache and client caps
mds: implement journal flush as asynchronous context execution
mds: cleanup some asok commands
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Otherwise QA sits forever waiting for the kclient to umount when there is a
problem.
Fixes: http://tracker.ceph.com/issues/36184
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
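A hedged sketch of the escalation (the helper name follows the
qa/tasks/cephfs mount helpers, but the body is an assumption):

    import subprocess

    def umount_wait(mountpoint, timeout=900):
        try:
            # bound the normal unmount; a wedged kernel client would
            # otherwise block here indefinitely
            subprocess.check_call(["sudo", "umount", mountpoint],
                                  timeout=timeout)
        except subprocess.TimeoutExpired:
            # escalate to a lazy unmount, which detaches the mount even
            # while busy, so the job fails cleanly instead of hanging
            subprocess.check_call(["sudo", "umount", "-l", mountpoint])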
Also, fix a bunch of quirky journal_tool invocations that pass the
"--rank" argument as part of the command rather than passing it as a
function argument.
Fixes: https://tracker.ceph.com/issues/24780
Signed-off-by: Venky Shankar <vshankar@redhat.com>
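A sketch of the calling convention being fixed; the wrapper below is
hypothetical, but mirrors how the qa helpers build the cephfs-journal-tool
command line:

    import subprocess

    def journal_tool(args, rank, fs_name="cephfs"):
        # "--rank" is always derived from the function argument, so
        # callers can no longer smuggle it into `args`
        cmd = ["cephfs-journal-tool", "--rank",
               "{0}:{1}".format(fs_name, rank)] + list(args)
        return subprocess.check_output(cmd)

    # call sites now read:
    #     journal_tool(["event", "get", "list"], rank=0)
    # rather than:
    #     journal_tool(["--rank", "0", "event", "get", "list"])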
This fixes errors caused by the remount done by some tests
(test_recovery_pool.py) where the fs name is not given.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
The MDS may not be on the same machine where the cluster command is run.
Fixes: http://tracker.ceph.com/issues/24858
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
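A hedged illustration of the distinction (helper name assumed): "ceph tell"
is routed through the cluster and reaches the daemon from any node, whereas
"ceph daemon" needs the admin socket on the MDS host:

    import subprocess

    def mds_command(mds_id, command):
        # works from wherever the test runs; a local
        # "ceph daemon mds.<id> ..." would not
        return subprocess.check_output(
            ["ceph", "tell", "mds.{0}".format(mds_id)] + list(command))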
* refs/pull/21885/head:
qa: update cluster log health warning message
qa: add tests for client features
mds: evict clients that lack required features
mds: cleanup MDSRank::evict_client
mds: infer client version by client metadata and connection's features
mds: introduce "ceph fs set <fs_name> min_compat_client <release_name>"
mds: tell client why it's rejected
mds: introduce cephfs' own feature bits
mds: make Server::prepare_force_open_sessions() update client metadata
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/22455/head:
qa/ceph-volume: add a test for put_object_versioned()
ceph-volume-client: allow atomic updates for RADOS objects
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
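A sketch of the atomic-update pattern, assuming the librados Python bindings
expose WriteOpCtx with an assert_version operation (check the bindings for
the exact name): make the write conditional on the object version observed
at read time, so a concurrent writer produces a clean error rather than a
lost update.

    import rados

    def put_object_versioned(ioctx, oid, data, version=None):
        with rados.WriteOpCtx() as op:
            if version is not None:
                # fails if the object's version has moved on, i.e.
                # someone wrote it between our read and this write
                op.assert_version(version)
            op.write_full(data)
            ioctx.operate_write_op(op, oid)

Callers would pair this with a read that also captures the object version
(e.g. ioctx.read() followed by ioctx.get_last_version()) and retry on
failure.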
Just make the caller happy; there is no easy way to support a timeout.
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Fixes: https://tracker.ceph.com/issues/24053
* refs/pull/21712/head:
qa/tasks/cephfs: add test for renewing stale session
client: invalidate caps and leases when session becomes stale
client: fix race in concurrent readdir
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/21374/head:
qa: add test for snap format upgrade
mds: initialize SnapServer::snaprealm_v2_since after journal replay
mds: properly distinguish cap update from snap flush
mds: update dev document of cephfs snapshot
doc: add release notes for cephfs snapshot
mds: allow snapshot by default for new filesystem
mds: close past parents after snaprealm format gets converted
mds: automatically allow multi-active MDS after scrubbing all inodes
mds: don't mark primary dentry damaged if inode has been repaired
mds: upgrade snaprealm format during scrub
mds: allow scrubbing mdsdir
mds: cleanup scrub code
mds: show health warning if multimds with old format snapshots
mds: automatically allow multi-active MDS after removing all old snapshots
mds: disallow multi-active MDS if snapshot was ever created by pre-mimic mds
mds: validate SnapInfo::long_name before using it
mds: don't bump snaptable last_snap when renaming snapshot
mds: properly save snaptable after upgrading version
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/21065/head:
qa/cephfs: test if evicted client unmounts without hanging
qa/tasks: allow custom timeout for umount_wait()
client: don't hang when MDS sessions are evicted
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>