Just make the caller happy; there is no easy way to support a timeout.
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Fixes: https://tracker.ceph.com/issues/24053
* refs/pull/21712/head:
qa/tasks/cephfs: add test for renewing stale session
client: invalidate caps and leases when session becomes stale
client: fix race in concurrent readdir
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/21374/head:
qa: add test for snap format upgrade
mds: initialize SnapServer::snaprealm_v2_since after journal replay
mds: properly distinguish cap update from snap flush
mds: update dev document of cephfs snapshot
doc: add release notes for cephfs snapshot
mds: allow snapshot by default for new filesystem
mds: close past parents after snaprealm format gets converted
mds: automatically allow multi-active MDS after scrubbing all inodes
mds: don't mark primary dentry damaged if inode has been repaired
mds: upgrade snaprealm format during scrub
mds: allow scrubbing mdsdir
mds: cleanup scrub code
mds: show health warning if multimds with old format snapshots
mds: automatically allow multi-active MDS after removing all old snapshots
mds: disallow multi-active MDS if snapshot was ever created by pre-mimic mds
mds: validate SnapInfo::long_name before using it
mds: don't bump snaptable last_snap when renaming snapshot
mds: properly save snaptable after upgrading version
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/21065/head:
qa/cephfs: test if evicted client unmounts without hanging
qa/tasks: allow custom timeout for umount_wait()
client: don't hang when MDS sessions are evicted
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/16608/head:
qa: whitelist mds down wrn during cephfs testing
mds: add config to disable fragmentation
qa: add max_mds thrash test
qa: mds_thrash updates for new max_mds behavior
doc: update upgrade procedure and release notes
qa: add test for cluster resizing
qa: remove use of mds deactivate
cephfs: add new down/joinable fs flags
mds: evict all clients if last mds shutting down
cephfs: deprecate ceph mds deactivate
cephfs: kill allow_dirfrags
cephfs: Kill allow_multimds
cephfs: Change behavior of cluster_down flag
mon/FSCommands: Set extra MDS to standby
cephfs: Health check changes
mon/MDSMonitor: Remove command support for legacy syntax
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
As dirfrags are now standard in CephFS, remove the machinery for
tracking and enabling this feature.
`ceph fs set <fs> allow_dirfrags` is now deprecated and prints a warning
message.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
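A minimal sketch of what the deprecation looks like from a script (the
filesystem name "cephfs" and the exact warning wording are assumptions;
only the deprecation itself comes from this change):

    import subprocess

    # Issue the now-deprecated command; dirfrags stay enabled regardless.
    result = subprocess.run(
        ["ceph", "fs", "set", "cephfs", "allow_dirfrags", "true"],
        capture_output=True, text=True)

    # The command still succeeds but is expected to print a deprecation
    # warning (wording assumed) rather than toggle anything.
    print(result.returncode)
    print(result.stderr)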
* refs/pull/16779/head:
mds: cleanup MDCache::open_snaprealms()
mds: make sure snaptable version > 0
mds: don't consider CEPH_INO_LOST_AND_FOUND as base inode
mds: replace MAX() with std::max()
tools/cephfs: make cephfs-data-scan create snaprealm for base inodes
qa/cephfs: don't run TestSnapshots.test_kill_mdstable on kclient
qa/cephfs: adjust check of 'cephfs-table-tool all show snap' output
mds: don't warn about unconnected snaprealms in cluster log
mds: update CInode/CDentry's first according to global snapshot seq
qa/cephfs: add tests for snapclient cache
qa/cephfs: add tests for snaptable transaction
mds: add asok command that dumps cached snap infos
qa/cephfs: add tests for multimds snapshot
client: don't mark snap directory complete when its dirstat is empty
qa/workunits/snaps: add snaprealm split test
mds: make sure mds has uptodate mdsmap before checking 'allows_snaps'
client: fix incorrect snaprealm when adding caps
qa/workunits/snaps: add hardlink snapshot test
mds: add incompat feature and bump protocol for snapshot changes
mds: detach inode with single hardlink from global snaprealm
mds: record hardlink snaps in inode's snaprealm
mds: attach inode with multiple hardlinks to dummy global snaprealm
mds: cleanup rename code
mds: ensure xlocker has uptodate lock state
mds: simplify SnapRealm::build_snap_{set,trace}
mds: record global last_created/last_destroyed in snaptable
mds: pop projected snaprealm before inode's parent changes
mds: keep isnap lock in sync state
mds: handle mksnap vs resolve_snapname race
mds: cleanup snaprealm past parents open check
mds: rollback snaprealms when rolling back slave request
mds: send updated snaprealms along with slave requests
mds: explicit notification for snap update
mds: send snap related messages centrally during mds recovery
mds: synchronize snaptable caches when mds recovers
mds: introduce MDCache::maybe_finish_slave_resolve()
mds: notify all mds about prepared snaptable update
mds: record snaps in old snaprealm when moving inode into new snaprealm
mds: cache snaptable in snapclient
mds: recover snaptable client when mds enters resolve state
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/20132/head:
qa/cephfs: update TestDamage for open file table
mds: allow storing open file table in multiple omaps
mds: differentiate Anchor types to clarify purpose
mds: add perf counter for 'open ino' operation
mds: protect open file table against partial omap update
mds: add dirfrags whose child inodes have caps to open file table
mds: don't try prefetching destroyed inodes
mds: don't try opening inodes that haven't been created
mds: don't re-requeue open files to head of log
mds: use open file table to speed up mds recovery
mds: introduce open file table
mds: track how many clients/mds want caps for each inode
mds: cleanup MDCache::opening_inodes access
mds: cleanup CInode/CDir states definition
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/19263/head:
qa: ignore bad backtrace cluster wrn
qa/cephfs: Add tests to validate scrub functionality
cephfs: Add option to load invalid metadata from disk
cephfs: Reset scrub data when inodes move
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This reverts commit 3189ba19a7, reversing
changes made to b7620de020.
Despite the change in JSON format being positive, the unfortunate side effect
is that it broke upgrade testing (because the QA framework must handle
mdsmap["info"] transitioning from an object to a list) and the ceph-mgr.
Fixes: http://tracker.ceph.com/issues/22527
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
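A minimal sketch of the compatibility shim the QA framework needs for this
transition (the helper name is illustrative, not an actual teuthology
function):

    # Accept mdsmap["info"] in either the old object form (a dict keyed by
    # gid) or the new list form, and always return a list of MDS entries.
    def mds_info_entries(mdsmap):
        info = mdsmap.get("info", {})
        if isinstance(info, dict):
            return list(info.values())  # old format: {"gid_4123": {...}, ...}
        return list(info)               # new format: [{...}, ...]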
* refs/pull/19369/head:
qa: update handling of fs status format
PendingReleaseNotes: add note for format change
mds/MDSMap: use array_section for mds stat
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Zheng Yan <zyan@redhat.com>
Reviewed-by: Xiaoxi Chen <xiaoxchen@ebay.com>
These configs were used for initialization, but it is more appropriate to
require setting these file system attributes via `ceph fs set`. This is similar
to what was already done with max_mds. New variables have been added to `fs
set` where they were missing.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
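A minimal sketch of driving these attributes through the CLI at runtime
instead of via boot-time configs (the filesystem name and the specific
attributes shown are illustrative examples, not an exhaustive list):

    import subprocess

    def fs_set(fs_name, var, val):
        # `ceph fs set <fs> <var> <val>` is the runtime replacement for the
        # removed initialization configs.
        subprocess.check_call(["ceph", "fs", "set", fs_name, var, str(val)])

    fs_set("cephfs", "max_mds", 2)               # already handled this way
    fs_set("cephfs", "allow_new_snaps", "true")  # example attribute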
* refs/pull/18274/head:
mds: fold mds_revoke_cap_timeout into mds_session_timeout
client: add new delegation testcases
client: add delegation support for cephfs
common: remove data_dir_option from common_preinit and global_pre_init
Reviewed-by: Gregory Farnum <gfarnum@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Right now, we have two different timeout settings -- one for when the
client is just not responding at all (mds_session_timeout), and one for
when the client is otherwise responding but isn't returning caps in a
timely fashion (mds_revoke_cap_timeout).
The default settings on them are equivalent (60s), but only
mds_session_timeout is communicated via the mdsmap. The
mds_revoke_cap_timeout is known only to the MDS. Neither timeout results
in anything other than warnings in the current codebase.
There is a third setting (mds_session_autoclose) that is also
communicated via the mdsmap. Exceeding that value (default of 300s)
could eventually result in the client being blacklisted from the
cluster. The code to implement that doesn't exist yet, however.
The current codebase doesn't do any real sanity checking of these
timeouts, so the potential for admins to get them wrong is rather high.
It's hard to concoct a use-case where we'd want to warn about these
events at different intervals.
Simplify this by just removing the mds_revoke_cap_timeout setting and
replacing its use in the code with mds_session_timeout. With that, the
client can at least determine when warnings might start showing up in
the MDS's logs.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
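A minimal sketch of reading the timeouts that are communicated to clients
via the mdsmap (field names follow the usual JSON dump of the FSMap and
should be treated as an assumption here):

    import json
    import subprocess

    # Dump the FSMap as JSON and report the per-filesystem timeouts.
    dump = json.loads(
        subprocess.check_output(["ceph", "fs", "dump", "--format=json"]))
    for fs in dump.get("filesystems", []):
        mdsmap = fs["mdsmap"]
        print(mdsmap.get("fs_name"),
              "session_timeout:", mdsmap.get("session_timeout"),
              "session_autoclose:", mdsmap.get("session_autoclose"))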
* refs/pull/18192/head:
qa/cephfs: test ec data pool
qa/suites/fs/basic_functional/clusters: more osds
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>