As dirfrags are now standard in CephFS, remove the machinery for
tracking and enabling this feature.
ceph fs set <fs> allow_dirfrags is now deprecated and prints a warning
message.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Setting the cluster_down flag will now move all active MDS daemons to
standby, and clearing it will restore the previous max_mds. Changing
max_mds while the cluster_down flag is set will clear the flag
automatically.
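For example (a hedged sketch; cephfs_a is a placeholder file system
name, and the flag is assumed to take a boolean):

ceph fs set cephfs_a cluster_down true   # all active MDS become standby
ceph fs set cephfs_a cluster_down false  # previous max_mds is restored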
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
* refs/pull/20927/head:
doc: use actual entity address for clarity
doc: make minor grammatical rectifications
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This parameter has the following two advantages:
1. With FUSE (version > 2.8), the default single-I/O write size is
128 KiB (controlled by max_write). Testing with bs=4M in FIO, each
4 MiB write is split into 4 * 1024 KiB / 128 KiB = 32 I/O operations.
Raising max_write to 4 MiB reduces this to a single operation, which
greatly improves the write performance of CephFS under a FUSE mount.
Note that this requires support in both libfuse and the kernel FUSE
driver.
2. Conversely, the single-I/O write size can be capped by setting
max_write to less than 128 KiB.
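As a hedged illustration (the option name fuse_max_write and its
command-line form are assumptions based on this change; verify against
your ceph-fuse version):

ceph-fuse --fuse_max_write 4194304 /mnt/cephfs   # allow 4 MiB writes
fio --name=seqwrite --rw=write --bs=4M --size=1G --directory=/mnt/cephfs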
Signed-off-by: huanwen ren <ren.huanwen@zte.com.cn>
These configs were used for initialization, but it is more appropriate
to require setting these file system attributes via `ceph fs set`. This
is similar to what was already done with max_mds. New variables have
been added to `fs set` where they were missing.
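For example (a hedged sketch; cephfs_a and the attribute names are
placeholders, and the set of attributes varies by release):

ceph fs set cephfs_a max_mds 2
ceph fs set cephfs_a standby_count_wanted 1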
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Since kraken, Ceph enforces a 1:1 correspondence between CRUSH ruleset
and CRUSH rule, so effectively ruleset and rule are the same thing,
although the term "ruleset" still survives - notably in the CRUSH rule
itself, where it effectively denotes the rule's numeric ID.
This commit updates the documentation to more faithfully reflect the current
state of the code.
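This can be observed in a rule dump (a hedged sketch; the rule name and
ids are placeholders, and the output is abbreviated):

ceph osd crush rule dump replicated_rule
# ... "rule_id": 0, "ruleset": 0, ...   <- since kraken, always equal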
Fixes: http://tracker.ceph.com/issues/20559
Signed-off-by: Nathan Cutler <ncutler@suse.com>
* refs/pull/17678/head:
mon/AuthMonitor: improve error message
mon/OSDMonitor: disallow "all" as a key or value name
cephfs, mon/AuthMonitor, OSD/osdcap: make 'all' a synonym for '*'
vstart.sh: Create an admin user for each CephFS
mon/AuthMonitor: Allow * wildcard for filesystem name
OSD/OSDCap: Allow namespace and pool tag to be combined
OSD/OSDCap: Namespace globbing
mon/AuthMonitor: Use new osd auth caps for ceph fs authorize
OSD/auth caps: Add OSD auth caps based on pool tag
mon/FSCommands: Tag pools used for cephfs by default
mon/OSDMonitor: Add key/value arguments for pool tagging
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/18274/head:
mds: fold mds_revoke_cap_timeout into mds_session_timeout
client: add new delegation testcases
client: add delegation support for cephfs
common: remove data_dir_option from common_preinit and global_pre_init
Reviewed-by: Gregory Farnum <gfarnum@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Define the string 'all' to be a synonym for the wildcard '*'. This
avoids confusion in the event that auth caps (typically with
ceph fs authorize) are left unquoted and '*' is expanded by the shell.
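For example, the following are now equivalent (client name, path, and
permissions are placeholders):

ceph fs authorize all client.foo / rw
ceph fs authorize '*' client.foo / rw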
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Right now, we have two different timeout settings -- one for when the
client is just not responding at all (mds_session_timeout), and one for
when the client is otherwise responding but isn't returning caps in a
timely fashion (mds_revoke_cap_timeout).
The default settings on them are equivalent (60s), but only the
mds_session_timeout is communicated via the MDSMap. The
mds_revoke_cap_timeout is known only to the MDS. Neither timeout results
in anything other than warnings in the current codebase.
There is a third setting (mds_session_autoclose) that is also
communicated via the MDSMap. Exceeding that value (default of 300s)
could eventually result in the client being blacklisted from the
cluster. The code to implement that doesn't exist yet, however.
The current codebase doesn't do any real sanity checking of these
timeouts, so the potential for admins to get them wrong is rather high.
It's hard to concoct a use-case where we'd want to warn about these
events at different intervals.
Simplify this by removing the mds_revoke_cap_timeout setting and
replacing its use in the code with mds_session_timeout. With that, the
client can at least determine when warnings might start showing up in
the MDS' logs.
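A hedged sketch of tuning the surviving knob at runtime (injectargs
changes the value without a restart; the value is in seconds):

ceph tell mds.* injectargs '--mds_session_timeout 120'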
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Change 'ceph fs authorize' to grant osd auth caps by pool tag instead
of by enumerating the current data pools. This makes:
ceph fs authorize cephfs_a client.foo /bar rw
now equivalent to:
ceph auth get-or-create client.foo mon 'allow r' mds 'allow rw path=/bar' osd 'allow rw tag cephfs data=cephfs_a'
Signed-off-by: Douglas Fuller <dfuller@redhat.com>