mgr/stats: change in structure of perf_stats o/p
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
Dropping the output of this command from the docs entirely, as it is
difficult to maintain there: any minor change in the command output
invalidates the docs.
Fixes: https://tracker.ceph.com/issues/56162
Signed-off-by: Neeraj Pratap Singh <neesingh@redhat.com>
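For reference, the command whose sample output was dropped can still be
run directly; its output shape varies across releases, which is exactly
why it is no longer reproduced in the docs (a sketch)::
$ ceph fs perf stats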
doc/cephfs: note regarding start time time zone
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Dhairya Parmar <dparmar@redhat.com>
Rationale: There are many threshold limits for split and
merge in this doc that are stated like:
"A directory fragment is eligible for splitting
when its size exceeds `mds_bal_split_size`
(default 10000)". We need to clarify what 10000 actually
means. This applies to all other such entries in this
doc.
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
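As a sketch of the intended clarification (these thresholds are counts
of directory entries, not bytes), the limit can be inspected and tuned
with the standard config commands; the value below is illustrative only::
$ ceph config get mds mds_bal_split_size
$ ceph config set mds mds_bal_split_size 20000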
If any clones of a snapshot are in the pending or in-progress
state, show them in the 'fs subvolume snapshot info'
command output. The field exists only when clones are
in the pending or in-progress state.
Fixes: https://tracker.ceph.com/issues/55041
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
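A sketch of the resulting output (names hypothetical; the exact field
name, e.g. a 'pending_clones' list, follows the implementation rather
than this message)::
$ ceph fs subvolume snapshot info vol1 subvol1 snap1
{
    ...
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_1"
        }
    ]
}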
mon: verify data pool is not already in use by any file system
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
Reviewed-by: Neeraj Pratap Singh <neesingh@redhat.com>
Reviewed-by: Milind Changire <mchangir@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
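Illustratively (names hypothetical), creating a second file system that
reuses another file system's data pool should now be rejected by the
monitor instead of silently succeeding::
$ ceph fs new fs2 fs2_metadata fs1_data
(rejected: fs1_data is already in use as a data pool by fs1)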
The 'size' shown in the output of the snapshot info command relies on
rstats, which report an incorrect snapshot size: they track the size of
the subvolume from which the snapshot has been taken, not the size of
the snapshot itself. Hence, keeping the 'size' field in the output of
'snapshot info' doesn't make sense until rstats are fixed.
Fixes: https://tracker.ceph.com/issues/55822
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
Add documentation for subvolume group quotas along
with the 'subvolumegroup resize' and 'subvolumegroup info'
commands.
Fixes: https://tracker.ceph.com/issues/53509
Signed-off-by: Kotresh HR <khiremat@redhat.com>
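For context, a sketch of the likely command shapes (sizes in bytes;
exact options as per the added docs)::
$ ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
$ ceph fs subvolumegroup info <vol_name> <group_name>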
Set custom metadata on the snapshot as a key-value pair using::
$ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists, the old value will be replaced by the new value.
note: The key_name and value should be strings of ASCII characters (as specified in Python's string.printable). The key_name is case-insensitive and always stored in lower case.
note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using::
$ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed even if the metadata key does not exist; it would otherwise fail.
Fixes: https://tracker.ceph.com/issues/55401
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
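A hypothetical end-to-end run of the above (names, keys, and output
shapes are illustrative only)::
$ ceph fs subvolume snapshot metadata set vol1 subvol1 snap1 owner team-a
$ ceph fs subvolume snapshot metadata get vol1 subvol1 snap1 owner
team-a
$ ceph fs subvolume snapshot metadata ls vol1 subvol1 snap1
{"owner": "team-a"}
$ ceph fs subvolume snapshot metadata rm vol1 subvol1 snap1 owner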
Set custom metadata on the subvolume as a key-value pair using::
$ ceph fs subvolume metadata set <vol_name> <subvol_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists, the old value will be replaced by the new value.
note: The key_name and value should be strings of ASCII characters (as specified in Python's string.printable). The key_name is case-insensitive and always stored in lower case.
note: Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the subvolume using the metadata key::
$ ceph fs subvolume metadata get <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the subvolume using::
$ ceph fs subvolume metadata ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the subvolume using the metadata key::
$ ceph fs subvolume metadata rm <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed even if the metadata key does not exist; it would otherwise fail.
Fixes: https://tracker.ceph.com/issues/54472
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
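And the equivalent subvolume-level flow (again with hypothetical names
and indicative output)::
$ ceph fs subvolume metadata set vol1 subvol1 purpose builds
$ ceph fs subvolume metadata ls vol1 subvol1
{"purpose": "builds"}
$ ceph fs subvolume metadata rm vol1 subvol1 purpose --force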
Description: 1) Add a note about using cephadm for setting up the
cluster and MDS daemons, and also mention the use of the ceph
orchestrator if one needs to set up MDS daemons manually
(see the sketch below).
2) Changed the term `data point` to `directory` in
point 1 under the "Adding an MDS" section for better
clarity.
Fixes: https://tracker.ceph.com/issues/54551
Signed-off-by: dparmar18 <dparmar@redhat.com>
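For instance, the orchestrator route mentioned in 1) reduces to a single
placement command (file system name and placement spec hypothetical)::
$ ceph orch apply mds mycephfs --placement="3 host1 host2 host3"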
Also, the sample cephfs-top image in the doc is outdated. Update that!
Fixes: http://tracker.ceph.com/issues/48619
Signed-off-by: Venky Shankar <vshankar@redhat.com>
The `fs volume rename` command renames the volume, i.e., the
orchestrator MDS service, the file system, and the data and
metadata pools of the file system.
Fixes: https://tracker.ceph.com/issues/51162
Signed-off-by: Ramana Raja <rraja@redhat.com>
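A sketch of the invocation (volume names hypothetical; the confirmation
flag reflects that the rename touches the MDS service and pools)::
$ ceph fs volume rename vol_old vol_new --yes-i-really-mean-it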
* refs/pull/44315/head:
doc/cephfs: mds default cache memory limit is now 4GB
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
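For reference, the option in question is mds_cache_memory_limit,
expressed in bytes; the value below is the new 4 GiB default::
$ ceph config set mds mds_cache_memory_limit 4294967296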