Since kraken, Ceph enforces a 1:1 correspondence between CRUSH rulesets and
CRUSH rules, so a ruleset and a rule are effectively the same thing,
although the term "ruleset" still survives, notably in the CRUSH rule
itself, where the field effectively denotes the rule's number.
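For example, in a decompiled CRUSH map the ruleset field in a rule now
simply mirrors the rule's numeric ID (illustrative rule, not taken from
the documentation being updated):

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }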
This commit updates the documentation to more faithfully reflect the current
state of the code.
Fixes: http://tracker.ceph.com/issues/20559
Signed-off-by: Nathan Cutler <ncutler@suse.com>
* refs/pull/17678/head:
mon/AuthMonitor: improve error message
mon/OSDMonitor: disallow "all" as a key or value name
cephfs, mon/AuthMonitor, OSD/osdcap: make 'all' a synonym for '*'
vstart.sh: Create an admin user for each CephFS
mon/AuthMonitor: Allow * wildcard for filesystem name
OSD/OSDCap: Allow namespace and pool tag to be combined
OSD/OSDCap: Namespace globbing
mon/AuthMonitor: Use new osd auth caps for ceph fs authorize
OSD/auth caps: Add OSD auth caps based on pool tag
mon/FSCommands: Tag pools used for cephfs by default
mon/OSDMonitor: Add key/value arguments for pool tagging
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/18274/head:
mds: fold mds_revoke_cap_timeout into mds_session_timeout
client: add new delegation testcases
client: add delegation support for cephfs
common: remove data_dir_option from common_preinit and global_pre_init
Reviewed-by: Gregory Farnum <gfarnum@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Define the string 'all' to be a synonym for the wildcard '*'. This
avoids confusion when auth caps (typically with 'ceph fs authorize')
are left unquoted and '*' is expanded by the shell.
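For example (a sketch combining this with the '*' filesystem name from
the same series), the following are intended to be equivalent, but the
second needs no quoting because there is no literal '*' for the shell
to expand:

    ceph fs authorize '*' client.foo / rw
    ceph fs authorize all client.foo / rw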
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Right now, we have two different timeout settings -- one for when the
client is just not responding at all (mds_session_timeout), and one for
when the client is otherwise responding but isn't returning caps in a
timely fashion (mds_revoke_cap_timeout).
The default settings on them are equivalent (60s), but only the
mds_session_timeout is communicated via the mdsmap. The
mds_revoke_cap_timeout is known only to the MDS. Neither timeout results
in anything other than warnings in the current codebase.
There is also a third setting (mds_session_autoclose) that is also
communicated via the MDSmap. Exceeding that value (default of 300s)
could eventually result in the client being blacklisted from the
cluster. The code to implement that doesn't exist yet, however.
The current codebase doesn't do any real sanity checking of these
timeouts, so the potential for admins to get them wrong is rather high.
It's hard to concoct a use-case where we'd want to warn about these
events at different intervals.
Simplify this by removing the mds_revoke_cap_timeout setting and replacing
its use in the code with mds_session_timeout. With that, the
client can at least determine when warnings might start showing up in
the MDS' logs.
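A minimal ceph.conf sketch, shown with the default value for illustration:

    [mds]
        # one timeout now covers both unresponsive sessions and
        # clients that are slow to return caps
        mds session timeout = 60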
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Change 'ceph fs authorize' to grant OSD auth caps by pool tag
instead of by the current data pools. This makes:
ceph fs authorize cephfs_a client.foo /bar rw
equivalent to:
ceph auth get-or-create client.foo mon 'allow r' mds 'allow rw path=/bar' osd 'allow rw tag cephfs data=cephfs_a'
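The tag itself is pool application metadata; for a manually created data
pool, the tagging step might look like this (the pool name is hypothetical):

    ceph osd pool application set cephfs_a_data cephfs data cephfs_a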
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Remove the misleading reference to this from the client
eviction page; it was never the right option to mention
there (my mistake).
Demote the option from LEVEL_ADVANCED to LEVEL_DEV as it
is hard to imagine a good reason for the user to change it.
Set a hard minimum of one hour, to make it harder to
corrupt a system by setting it close to zero.
Remove the legacy definition of the field while we're at it.
Fixes: http://tracker.ceph.com/issues/21821
Signed-off-by: John Spray <john.spray@redhat.com>
The "mds blacklist interval" setting has no effect on the time that
the "ceph osd blacklist" command will use by default. Clarify this in
the docs.
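For example, an explicit blacklist entry takes its duration from the
command (or the mon's default), not from "mds blacklist interval"; the
client address here is illustrative:

    ceph osd blacklist add 192.168.0.1:0/3710147553 3600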
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
* refs/remotes/upstream/pull/16036/head:
mds: improve cap min/max ratio descriptions
mds: fix whitespace
mds: cap client recall to min caps per client
mds: fix conf types
mds: fix whitespace
doc/cephfs: add client min cache and max cache ratio describe
mds: adding tunable features for caps_per_client
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Zheng Yan <zyan@redhat.com>
* refs/remotes/upstream/pull/17657/head:
mds: optimize MDCache::rejoin_scour_survivor_replicas()
mds: fix MDSCacheObject::clear_replica_map
mds: support limiting cache by memory
common: refactor of lru
mds: resolve unsigned coercion compiler warning
common: use safer uint64_t for list size
common: add bytes2str pretty print function
mds: check if waiting is allocated before use
mds: go back to compact_map for replicas
mds: use mempool for cache objects
mds: cleanup replica_map access
common: add alloc_ptr smart pointer
common: add warning on base class use of mempool
common: use atomic uint64_t for counter
Reviewed-by: Zheng Yan <zyan@redhat.com>
* refs/remotes/upstream/pull/17608/head:
doc/cephfs/posix: put posix notes in perspective
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This introduces two config parameters:
mds_cache_memory_limit: Sets the soft maximum of the cache to the given
byte count. (Like mds_cache_size, this doesn't actually limit the maximum
size of the cache. It just dictates the steady-state size.)
mds_cache_reservation: This replaces mds_health_cache_threshold everywhere
except the Beacon heartbeat sent to the mons. The idea here is to specify a
reservation of memory (5% by default) for operations and the MDS tries to
always maintain that reservation. So, the MDS will recall caps from clients
when it begins dipping into its reservation of memory.
mds_cache_size still limits the cache by inode count, but now defaults to 0
(i.e. unlimited). The new preferred way of specifying cache limits is by memory
size. The default is 1GB.
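A ceph.conf sketch using the defaults described above:

    [mds]
        # soft memory target for the cache, in bytes (1GB)
        mds cache memory limit = 1073741824
        # fraction of the limit reserved for operations (5%)
        mds cache reservation = .05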
Fixes: http://tracker.ceph.com/issues/20594
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1464976
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
CephFS now supports seekdir efficiently; the divergence was
fixed by https://github.com/ceph/ceph/pull/14317.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Add a short description and a command example for setting the
allow_multimds flag, and add a <fs_name> placeholder to all
'ceph fs set' commands.
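For example (a sketch of the documented commands):

    ceph fs set <fs_name> allow_multimds true
    ceph fs set <fs_name> max_mds 2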
Signed-off-by: Jan Fajerski <jfajerski@suse.com>
Add a description of max_file_size to the CephFS admin docs.
Thanks to John Spray <jspray@redhat.com> on ceph-users for this
information.
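For example, to allow files up to 2 TiB (a sketch; the value is in bytes):

    ceph fs set <fs_name> max_file_size 2199023255552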
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>