doc/cephfs: note regarding start time time zone
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Dhairya Parmar <dparmar@redhat.com>
Rationale: There are many threshold limits for split and
merge in this doc that read like:
"A directory fragment is eligible for splitting
when its size exceeds `mds_bal_split_size`
(default 10000)". We need to clarify what 10000 actually
means. This applies to all other such entries in this
doc.
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
If any clones are in a pending or in-progress state, then
show these clones in the 'fs subvolume snapshot info'
command output. This field exists only if clones are
in a pending or in-progress state.
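For illustration, a sketch of what the output might look like
(volume, subvolume, snapshot, and clone names are hypothetical;
the exact field names are an assumption based on this change):
$ ceph fs subvolume snapshot info cephfs subvol01 snap01
{
    ...
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone01"
        }
    ]
}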
Fixes: https://tracker.ceph.com/issues/55041
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
mon: verify data pool is not already in use by any file system
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
Reviewed-by: Neeraj Pratap Singh <neesingh@redhat.com>
Reviewed-by: Milind Changire <mchangir@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
The 'size' shown in the output of the snapshot info command relies on
rstats, which report an incorrect snapshot size: they track the size
of the subvolume from which the snapshot was taken rather than the
size of the snapshot itself. Hence, having the 'size' field in the
output of 'snapshot info' doesn't make sense until rstats is fixed.
Fixes: https://tracker.ceph.com/issues/55822
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
Add documentation for subvolume group quotas along
with the 'subvolumegroup resize' and 'subvolumegroup info'
commands.
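A sketch of the synopses for the two documented commands (argument
names assumed to match the other subvolume commands in this doc):
$ ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
$ ceph fs subvolumegroup info <vol_name> <group_name>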
Fixes: https://tracker.ceph.com/issues/53509
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Set custom metadata on the snapshot as a key-value pair using::
$ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists, the old value will be replaced by the new value.
note: The key_name and value should be strings of ASCII characters (as specified in Python's string.printable). The key_name is case-insensitive and is always stored in lower case.
note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using::
$ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed even if the metadata key does not exist; without it, the command would fail.
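For example (volume, subvolume, snapshot, key, and value names are
hypothetical; the ls output shown is indicative):
$ ceph fs subvolume snapshot metadata set cephfs subvol01 snap01 owner team-a
$ ceph fs subvolume snapshot metadata get cephfs subvol01 snap01 owner
team-a
$ ceph fs subvolume snapshot metadata ls cephfs subvol01 snap01
{"owner": "team-a"}
$ ceph fs subvolume snapshot metadata rm cephfs subvol01 snap01 owner --force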
Fixes: https://tracker.ceph.com/issues/55401
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
Set custom metadata on the subvolume as a key-value pair using::
$ ceph fs subvolume metadata set <vol_name> <subvol_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists, the old value will be replaced by the new value.
note: The key_name and value should be strings of ASCII characters (as specified in Python's string.printable). The key_name is case-insensitive and is always stored in lower case.
note: Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the subvolume using the metadata key::
$ ceph fs subvolume metadata get <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the subvolume using::
$ ceph fs subvolume metadata ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the subvolume using the metadata key::
$ ceph fs subvolume metadata rm <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed even if the metadata key does not exist; without it, the command would fail.
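For example (names and values are hypothetical):
$ ceph fs subvolume metadata set cephfs subvol01 owner team-a
$ ceph fs subvolume metadata ls cephfs subvol01
{"owner": "team-a"}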
Fixes: https://tracker.ceph.com/issues/54472
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
Description: 1) Add a note about using cephadm for setting up the
cluster and MDS daemons; also mention the use of the ceph
orchestrator if one needs to set up MDS daemons manually.
2) Changed the term `data point` to `directory` in
point 1 under the "Adding an MDS" section for better
clarity.
Fixes: https://tracker.ceph.com/issues/54551
Signed-off-by: dparmar18 <dparmar@redhat.com>
Also, the sample cephfs-top image in the doc is outdated. Update that!
Fixes: http://tracker.ceph.com/issues/48619
Signed-off-by: Venky Shankar <vshankar@redhat.com>
The `fs volume rename` command renames the volume, i.e., the
orchestrator MDS service, the file system, and the data and
metadata pools of the file system.
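For illustration, a sketch (volume names are hypothetical; the
confirmation flag is assumed to be required, as is usual for
destructive volumes commands):
$ ceph fs volume rename vol01 vol02 --yes-i-really-mean-it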
Fixes: https://tracker.ceph.com/issues/51162
Signed-off-by: Ramana Raja <rraja@redhat.com>
* refs/pull/44315/head:
doc/cephfs: mds default cache memory limit is now 4GB
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
Problem:
The MDS only stores the symlink inode's backtrace
information in the data pool. During disaster
recovery of the metadata pool by scanning data
pool, the symlinks are recreated as regular files.
Solution:
This patch stores the symlink target on the first
data object as an xattr for recovery.
MDS option:
The MDS option 'mds_symlink_recovery' is introduced,
and it is enabled by default. Enabling the option
stores the symlink target.
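The option can be toggled through the standard config interface;
for example, to disable it:
$ ceph config set mds mds_symlink_recovery false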
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Fixes: https://tracker.ceph.com/issues/46166
... so that such links can be included in alert warnings.
Additionally, document some other health warnings. Credit to @pcuzner
for pointing out that not all health warnings have been documented.
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Since 58eaa237b0, an MDS is only promoted if it is compatible with the
file system's compat set. Users may see a persistent failed state even
if they have enough standby daemons.
Signed-off-by: 胡玮文 <huww98@outlook.com>
* refs/pull/42584/head:
doc: fix `daemon status` interface (exclude file system name)
test: adjust mirroring tests for `daemon status` change
mgr/mirroring: `daemon status` command does not require file system name
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Currently, to recover a file system after recovering the monitor
store, you need to stop all the MDSs; create an FSMap with defaults
using the `fs new` command; execute the `fs reset` command to get the
file system's rank 0 into the existing-but-failed state; and then
restart the MDSs.
Add a 'recover' flag to the `fs new` command that sets the file
system's rank 0 to the existing-but-failed state, and sets the file
system's 'joinable' setting to False. Using the `fs new` command with
the 'recover' flag gets rid of the steps to stop all the MDSs and
execute the `fs reset` command when recovering the file system after
recovering the monitor store.
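A sketch of the reduced flow (pool names are hypothetical):
$ ceph fs new cephfs cephfs_metadata cephfs_data --recover
$ ceph fs set cephfs joinable true   # once recovery is complete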
Fixes: https://tracker.ceph.com/issues/51716
Signed-off-by: Ramana Raja <rraja@redhat.com>
The change to doing sync I/Os when we're in NEARFULL conditions
apparently caught some folks by surprise. Add something to clarify that
to the kclient debugging docs.
Also, remove the incomplete sentence that follows it, which contains no
useful information.
Fixes: https://tracker.ceph.com/issues/49406
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Better maintainability this way. Also drop the unsupported options:
- journaler batch interval
- journaler batch max
Signed-off-by: Kefu Chai <kchai@redhat.com>
... monitor stores using OSDs. The steps are valid only for recovering
file systems with a single active MDS.
Partially-fixes: https://tracker.ceph.com/issues/51341
Signed-off-by: Ramana Raja <rraja@redhat.com>
A file system will need to be recreated when monitor databases are
lost and rebuilt. Some applications (e.g., CSI) expect the recovered
file system to have the same ID as before. Allow creating a file
system with a specific ID to help in such scenarios. This can now be
done with the `fs new` command using the 'fscid' argument and the
'force' flag. As a corollary, newer file systems will no longer always
have increasing IDs.
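A sketch of recreating a file system with a specific ID (names and
the ID value are hypothetical):
$ ceph fs new cephfs cephfs_metadata cephfs_data --fscid 27 --force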
Fixes: https://tracker.ceph.com/issues/51340
Signed-off-by: Ramana Raja <rraja@redhat.com>
* refs/pull/41574/head:
qa/tasks/vstart_runner: add LocalCluster.run
qa/tasks/cephfs/test_nfs: fiddle with sudo
mgr/nfs/export: some cleanup, minor refactoring
mgr/nfs/cluster: remove unused @cluster_setter
nfs/mgr: fix help message case
doc/cephfs/fs-nfs-export: add note about export update behavior
mgr/nfs: move user create/delete into helper
mgr/nfs: refactor _delete_user helper
mgr/nfs: refactor create_export_from_dict() helper
mgr/nfs: keep 'nfs export get' around for backward-compat
mgr/nfs: rename method
qa/tasks/cephfs/test_nfs: test new export via apply
doc/cephfs/fs-nfs-export: be consistent with cluster_id and _ vs -
mgr/nfs: addr -> client_addr for 'nfs export create ...'
mgr/nfs: fix tests
mgr/nfs: 'nfs export get' -> 'nfs export info'
mgr/nfs: binding -> pseudo_path
mgr/nfs: more revisions based on review
mgr/nfs: adjust NFSExceptoin errno arg
doc/cephfs: update 'nfs export {get,apply}' docs
mgr/nfs: merge FSExport back into ExportMgr
doc/radosgw/nfs: document mgr/nfs way to add/remove rgw exports
mgr/nfs: merge 'nfs export {update,import}' -> 'nfs export apply'
mgr/nfs: test export creation and list
mgr/nfs: test export_update (+ fixes)
mgr/nfs: test Export.validate(); several fixes
mgr/nfs: test that export <-> block+dict conversions go both ways
mgr/nfs: clean up test a bit
mgr/nfs/export: fix export validation
mgr/nfs/export: fix tests
mgr/nfs: handle option addr/client block in create_export()
mgr/nfs: allow multiple addrs for new exports
mgr/nfs: fix/finish rgw export
mgr/nfs/module: clusterid -> cluster_id
mgr/nfs/export: fix export_update_1 to type check
mgr/nfs/cluster: fix type error
mgr/nfs/export: wrap long lines
mgr/nfs: ExportMgr._delete_export only works for cephfs for now
mgr/nfs: Remove pool_ns from NFSCluster
mgr/nfs: Remove ExportMgr.rados_namespace
mgr/nfs: flake8
mgr/nfs: Add type checking
mgr/nfs: Add __eq__ method to Export
mgr/nfs: Add some compatibility to mgr/dashboard
mgr/nfs: Fix whitespace handling
mgr/nfs: Copy unit tests from mgr/dashboard
mgr/nfs: partially implement rgw export support
mgr/nfs: abstract FSAL; add RGWFSAL
mgr/nfs: refactor to merge 'update' and 'import' code
mgr/nfs: add 'nfs export import' command
mgr/nfs: refactor 'nfs export update' and export validation
mgr/nfs: fix _fetch_export to distinguish between clusters
mgr/nfs: move export ganesha conf translation into caller
mgr/nfs: name nfs cephfs client key 'nfs.{cluster_id}.{export_id}'
mgr/nfs: add --addr to 'nfs export create'
mgr/nfs: add --squash to 'nfs export create'
mgr/nfs/export_utils: include false but non-None items in config
vstart.sh: enable nfs module
mgr/cephadm: nfs: drop attr_expiration_time from top-level config
mgr/cephadm: remove Dir_Chunk = 0
Reviewed-by: Michael Fritch <mfritch@suse.com>
The fs_name of the relevant MDSMap is set to the new name. Also,
the application tags of the data pools and the metadata pool of
the file system are set to the new name.
Fixes: https://tracker.ceph.com/issues/47276
Signed-off-by: Ramana Raja <rraja@redhat.com>
This was using an obscure syntax that worked at one time and wasn't
documented (AFAIK).
Fixes: https://tracker.ceph.com/issues/51182
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This documentation was for the old code, the new code (by Zheng)
fragments the directory and distributes those fragments.
Fixes: https://tracker.ceph.com/issues/51187
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This command is very awkward to implement unless all service spec
fields are always required. That will soon mean both the placement
*and* the virtual_ip (if any), making it much less convenient for a
human to use.
Instead, let them update the yaml, or adjust the nfs and/or ingress
specs directly. I don't think this command is needed.
Signed-off-by: Sage Weil <sage@newdream.net>
At the time NFS support was added, this limitation applied.
However, in
b3d97f8157
and
1cfe7e2df9
we added support for multiple filesystems and started mixing
the fscid into the filehandle.
Signed-off-by: Sage Weil <sage@newdream.net>
* refs/pull/40885/head:
doc: document cephfs-mirror configuration options
cephfs-mirror: use sensible mount timeout when mounting local/remote fs
test: add tests for settting mount timeout
pybind/cephfs: add interface to set mount timeout
libcephfs: add interface to set mount timeout
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Instead, admins should specify specific features to require.
Fixes: https://tracker.ceph.com/issues/50819
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* format global options using the option directive
* fix the header, so man/conf.py is able to parse
the description
* define a "Synopsis" section to be consistent with other manpages.
* drop references to the glossary using "term", as the manpage does
not have references to glossary entries.
Signed-off-by: Kefu Chai <kchai@redhat.com>
this change addresses the following warnings:
/home/jenkins-build/build/workspace/ceph-pr-docs/doc/cephfs/mds-config-ref.rst:2: WARNING: duplicate confval_option description of mds_cache_memory_limit, other instance in cephfs/cache-configuration
/home/jenkins-build/build/workspace/ceph-pr-docs/doc/cephfs/mds-config-ref.rst:2: WARNING: duplicate confval_option description of mds_cache_reservation, other instance in cephfs/cache-configuration
Signed-off-by: Kefu Chai <kchai@redhat.com>
Recently, nfs-related code was moved out of the volumes plugin[1].
So using the name volume/nfs for the interface is no longer
appropriate.
[1] https://github.com/ceph/ceph/pull/40526
Signed-off-by: Varsha Rao <varao@redhat.com>
Correct the defaults following 8df2388b9fb66e1606f47c095ecf0b5c71a1941e.
Related-to: https://tracker.ceph.com/issues/48403
Signed-off-by: Dan van der Ster <daniel.vanderster@cern.ch>
* refs/pull/40526/head:
spec: add nfs to spec file
mgr/nfs: Don't enable nfs module by default
mgr/nfs: check for invalid chars in cluster id
mgr/nfs: Use CLICommand wrapper
mgr/nfs: reorg nfs files
mgr/nfs: Check if transport or protocol are list instance
mgr/nfs: reorg cluster class and common helper methods
mgr/nfs: move common export helper methods to ExportMgr class
mgr/nfs: move validate methods into new ValidateExport class
mgr/nfs: add custom exception module
mgr/nfs: create new module for export utils
mgr/nfs: rename fs dir to export
mgr/volumes/nfs: Move nfs code out of volumes plugin
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
* refs/pull/40411/head:
doc: add note about removal of the `cephfs` nfs cluster type
mgr/volumes/nfs: drop `type` param during cluster create
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Varsha Rao <varao@redhat.com>
PR #37600 introduced support for configuring both `cephfs` and `rgw`
exports using a single nfs-ganesha cluster.
Fixes: https://tracker.ceph.com/issues/50369
Signed-off-by: Michael Fritch <mfritch@suse.com>
* refs/pull/39939/head:
cephfs: ceph-dokan - properly log the mounted root
cephfs: Update ceph-dokan "--removable" flag
cephfs: document using multiple fs on Windows
cephfs: provide additional volume details on Windows
cephfs: add ceph-dokan unmap command
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This change updates the ceph-dokan documentation, showing how
a non-default Ceph filesystem can be mounted.
Fixes: https://tracker.ceph.com/issues/49662
Signed-off-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
At the moment, Windows CephFS mounts can only be removed by
terminating the daemon (e.g. sending CTRL-C) or through the
Windows mount manager if the "-o -m" parameters were passed
when the mapping was created.
This change adds the "ceph-dokan unmap" command, which takes
the mountpoint as input.
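A sketch of the new command (assuming, as with the map command, that
the -l flag names the mountpoint):
ceph-dokan.exe unmap -l x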
Fixes: https://tracker.ceph.com/issues/49662
Signed-off-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
* refs/pull/40305/head:
doc/cephfs/nfs: Add note about cephadm NFS-Ganesha daemon port
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
* refs/pull/40145/head:
doc: add note about disabling standby-replay during upgrades
qa: add test for standby-replay disable
mon: fail standby-replay daemons when flag is turned off
Reviewed-by: Sidharth Anupkrishnan <sanupkri@redhat.com>
Most of the Windows documentation is currently included in the
README.windows.rst file.
To make it more accessible, we're moving most of it to the
"doc/" folder, adding the following pages:
* Installing Ceph on Windows
* RBD on Windows
* Windows troubleshooting
We'll keep the build and manual install instructions in
README.windows.rst. Note that ceph-dokan already has a separate
doc page.
Signed-off-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
* refs/pull/38913/head:
qa/tasks/cephfs/nfs: Add tests for updating fs exports
mgr/volumes/nfs: Handle rook restart error
doc/cephfs/nfs: Add about update export interface
mgr/volumes/nfs: Add command to update cephfs exports
pybind/volumes/nfs: set mds caps according to user specified access type
mgr/volumes/module: Remove unused json module
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This change documents ceph-dokan, describing the prerequisites,
usage and limitations.
Some of this was mentioned in README.windows.rst but is now being
moved to the Ceph doc pages.
Signed-off-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
Add the subvolume evict command, which evicts the subvolume mounts
that were mounted using a particular auth ID.
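A sketch of the synopsis (argument names assumed to match the other
subvolume commands):
$ ceph fs subvolume evict <vol_name> <subvol_name> <auth_id> [--group_name <subvol_group_name>]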
Fixes: https://tracker.ceph.com/issues/44928
Signed-off-by: Kotresh HR <khiremat@redhat.com>
* refs/pull/38769/head:
doc/cephfs: add data pool-MDS instructions link
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
- This commit adds a link to the "Create a Ceph
File System" page. The link that it adds is to the
"Adding a data pool to the MDS" subsection of the
file layouts page.
- s/mds/file system/
Fixes: https://tracker.ceph.com/issues/48531
Signed-off-by: Zac Dover <zac.dover@gmail.com>
Ceph config option names may use spaces, underscores, or (by one
reference) hyphens as interstitial separators. Most usage within the
doc tree uses underscores, though example conf files and especially
structured lists of options mostly use spaces. Mostly. Underscores
help differentiate the config names from surrounding text, and
moreover facilitate scripting, grep, awk, etc., and match their form
in src/common/options.cc.
This PR conforms these occurrences of option names to use interstitial
underscores instead of spaces.
Fixes: https://tracker.ceph.com/issues/48301
Signed-off-by: Anthony D'Atri <anthony.datri@gmail.com>
After PR #37608, the ceph health detail output message changed when
an MDS has slow requests, so update the doc to match the new output.
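For example, the new-style message looks roughly like this
(illustrative output only):
$ ceph health detail
HEALTH_WARN 1 MDSs report slow requests
[WRN] MDS_SLOW_REQUEST: 1 MDSs report slow requests
    mds.a(mds.0): 1 slow requests are blocked > 30 secs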
Signed-off-by: haoyixing <haoyixing@kuaishou.com>
* refs/pull/36554/head:
mgr/volumes: Make number of cloner threads configurable
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Shyamsundar R <srangana@redhat.com>
Implement a root_squash mode in MDS auth caps to deny operations for
clients with uid=0 or gid=0 that need write access. It's mainly to
prevent operations such as an accidental `sudo rm -rf /path`.
The root squash mode can be enforced in one of the following ways in
the MDS caps:
'allow rw root_squash'
(across file systems)
or
'allow rw fsname=a root_squash'
(on a file system)
or
'allow rw fsname=a path=/vol/group/subvol00 root_squash'
(on a file system path)
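A sketch of creating a client with such a cap (the mon and osd caps
shown are typical CephFS client caps, not part of this change):
$ ceph auth get-or-create client.user1 \
    mds 'allow rw fsname=a root_squash' \
    mon 'allow r' \
    osd 'allow rw tag cephfs data=a'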
Fixes: https://tracker.ceph.com/issues/42451
Signed-off-by: Ramana Raja <rraja@redhat.com>
The number of cloner threads is set to 4 and can't be
configured. This patch makes the number of cloner threads
configurable via the mgr config option "max_concurrent_clones".
On an increase in the number of cloner threads, it will just
spawn the difference of threads between the existing number of
cloner threads and the new configuration. It will not cancel
the running cloner threads.
On a decrease in the number of cloner threads, the cases are as follows.
1. If all cloner threads are waiting for a job:
In this case, all threads are notified and the required number of
threads are terminated.
2. If all the cloner threads are processing a job:
In this case, the condition is validated for each thread after
the current job is finished, and the thread is terminated if the
condition for the required number of cloner threads is not satisfied.
3. If a few cloner threads are processing and others are waiting:
The threads which are waiting are notified to validate the
number of threads required. If terminating those doesn't satisfy the
required number of threads, the remaining threads are terminated
upon completion of their existing job.
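For example, to raise the number of cloner threads (the value is
illustrative):
$ ceph config set mgr mgr/volumes/max_concurrent_clones 6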
Fixes: https://tracker.ceph.com/issues/46892
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Add new auth caps to restrict client access based on fsname. To
specify this, for example:
mds 'allow rw fsname=cephfs1'
This will restrict client access to the file system named "cephfs1"
only. Messages to active MDSs assigned to any other file system will
be dropped. Standby MDSs not associated with a file system will
accept messages from clients. To allow multiple file systems, create
the MDS cap as follows:
mds 'allow rw fsname=cephfs1, allow rw fsname=cephfs2'
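A sketch of creating a client restricted to one file system (the mon
and osd caps shown are typical CephFS client caps, not part of this
change):
$ ceph auth get-or-create client.user1 \
    mds 'allow rw fsname=cephfs1' \
    mon 'allow r' \
    osd 'allow rw tag cephfs data=cephfs1'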
Fixes: http://tracker.ceph.com/issues/15070
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Add a 'fsname' clause to mon auth caps to restrict a client's view
of the FSMap. Example:
mon 'allow rw fsname=cephfs2'
This would restrict the client's view of the FSMap to the MDSMap for
cephfs2. Any MDS allocated to a different filesystem will be invisible.
Global standby daemons are always visible. To allow multiple CephFSs,
add multiple caps:
mon 'allow rw fsname=cephfs1, allow rw fsname=cephfs2'
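A sketch of updating an existing client to use the restricted mon cap
(cap strings as above; the osd cap is the typical CephFS pool-tag
cap, not part of this change):
$ ceph auth caps client.user1 \
    mon 'allow rw fsname=cephfs2' \
    mds 'allow rw fsname=cephfs2' \
    osd 'allow rw tag cephfs data=cephfs2'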
Fixes: http://tracker.ceph.com/issues/15070
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Since RADOS is an acronym, albeit a somewhat difficult-to-remember
one, it is customary to write it in ALL-CAPS.
Signed-off-by: Nathan Cutler <ncutler@suse.com>
The document seemed to be wanting to refer to the software as "NFS
Ganesha", but was failing to do so in some places.
Signed-off-by: Nathan Cutler <ncutler@suse.com>