doc: discuss the standard multi-tenant CephFS security model
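At its core, the model relies on path-restricted client capabilities;
a minimal sketch (file system, client, and path names hypothetical):

$ ceph fs authorize cephfs client.tenant1 /tenant1 rw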
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Ramana Raja <rraja@redhat.com>
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Ken Dreyer <kdreyer@redhat.com>
mgr/stats: change in structure of perf_stats output
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
Drop the output of this command from the docs entirely, as it is
difficult to maintain there: even a minor change in the command
output invalidates the docs.
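The current output shape can instead be obtained from a live cluster;
for example (assuming the stats module is active):

$ ceph fs perf stats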
Fixes: https://tracker.ceph.com/issues/56162
Signed-off-by: Neeraj Pratap Singh <neesingh@redhat.com>
doc/cephfs: note regarding start time time zone
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Dhairya Parmar <dparmar@redhat.com>
Rationale: This doc has many threshold limits for split and
merge that are described only in terms such as:
"A directory fragment is eligible for splitting
when its size exceeds `mds_bal_split_size`
(default 10000)". We need to clarify what 10000 actually
means. This applies to all other such entries in this
doc.
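For instance, the effective threshold can be inspected on a live
cluster (an illustrative invocation; 10000 is the default value):

$ ceph config get mds mds_bal_split_size
10000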
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
If any clones are in a pending or in-progress state, show
them in the 'fs subvolume snapshot info' command output. The
field exists only when there are clones in a pending or
in-progress state.
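Illustrative output with hypothetical volume, subvolume, snapshot,
and clone names (other fields elided):

$ ceph fs subvolume snapshot info cephfs subvol1 snap1
{
    ...
    "pending_clones": [
        {
            "name": "clone_1"
        }
    ]
}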
Fixes: https://tracker.ceph.com/issues/55041
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
mon: verify data pool is not already in use by any file system
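For illustration (pool and file system names hypothetical), reusing a
data pool that already backs a file system should now be rejected:

$ ceph fs new newfs newfs_metadata cephfs_data
# expected to fail with EINVAL, since 'cephfs_data' already backs
# an existing file system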
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
Reviewed-by: Neeraj Pratap Singh <neesingh@redhat.com>
Reviewed-by: Milind Changire <mchangir@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
The 'size' shown in the output of the snapshot info command relies on
rstats, which report an incorrect snapshot size: they track the size
of the subvolume from which the snapshot was taken rather than the
size of the snapshot itself. Hence, having the 'size' field in the
output of 'snapshot info' doesn't make sense until rstats are fixed.
Fixes: https://tracker.ceph.com/issues/55822
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
Add documentation for subvolume group quotas along with the
'subvolumegroup resize' and 'subvolumegroup info'
commands.
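An illustrative synopsis of the two commands (argument placeholders
follow the style used elsewhere in these docs):

$ ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
$ ceph fs subvolumegroup info <vol_name> <group_name>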
Fixes: https://tracker.ceph.com/issues/53509
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Set custom metadata on the snapshot as a key-value pair using::
$ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists, the old value will be replaced by the new value.
note: The key_name and value should be strings of ASCII characters (as specified in Python's string.printable). The key_name is case-insensitive and is always stored in lower case.
note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using::
$ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed when it would otherwise fail because the metadata key does not exist.
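A worked end-to-end example with hypothetical names (volume 'cephfs',
subvolume 'subvol1', snapshot 'snap1'; the 'ls' output shape is
illustrative):

$ ceph fs subvolume snapshot metadata set cephfs subvol1 snap1 purpose backup
$ ceph fs subvolume snapshot metadata get cephfs subvol1 snap1 purpose
backup
$ ceph fs subvolume snapshot metadata ls cephfs subvol1 snap1
{"purpose": "backup"}
$ ceph fs subvolume snapshot metadata rm cephfs subvol1 snap1 purpose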
Fixes: https://tracker.ceph.com/issues/55401
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>