- clean up language
- move config hierarchy to the bottom (this is an implementation detail
that is only useful if managing ganesha externally)
Signed-off-by: Sage Weil <sage@newdream.net>
This PR makes minor alterations to the
text at the beginning of the RADOS Guide.
Most notably, the monitor daemon has been
added to the list of types of daemons that
constitute a Ceph cluster.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
Change the notification behavior for multipart uploads, update the
related test cases, and add the corresponding documentation changes.
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
It is better for maintainability this way. Also drop the unsupported options:
- journaler batch interval
- journaler batch max
Signed-off-by: Kefu Chai <kchai@redhat.com>
doc/dev/perf_counters: update docs to include more context about perf counter usage
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
as per
https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html
> Like py:currentmodule, this directive produces no output. Instead, it
> serves to notify Sphinx that all following option directives document
> options for the program called name.
> ...
> The program name may contain spaces (in case you want to document
> subcommands like svn add and svn commit separately).
and to avoid warnings like:
doc/man/8/ceph-volume.rst:424: WARNING: Duplicate explicit target name:
"cmdoption-ceph-volume-h".
we should specify a different "program" for each set of options.
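For illustration, a minimal sketch of this pattern (the subcommand names
and option text below are examples only, not the exact man page content):

  .. program:: ceph-volume lvm

  .. option:: -h, --help

     Show the help message for the lvm subcommand.

  .. program:: ceph-volume simple

  .. option:: -h, --help

     Show the help message for the simple subcommand.

With distinct program names, Sphinx generates distinct option targets, so
the duplicate "cmdoption-ceph-volume-h" warning goes away.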
Signed-off-by: Kefu Chai <kchai@redhat.com>
This option prevents the mgr from loading the specified modules, which
can be handy when debugging issues with always_on_modules.
Also document mgr_standby_modules.
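A minimal ceph.conf sketch of how this could be used (the option name
mgr_disabled_modules is assumed here, since the message does not spell it
out, and the module name is only an example):

  [mgr]
  # assumed name of the new option; the listed modules are never loaded
  mgr_disabled_modules = telemetry
  # controls whether modules are also started on standby mgr daemons
  mgr_standby_modules = false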
Signed-off-by: Kefu Chai <kchai@redhat.com>
The Perf Counters docs, although informative, are lacking for users and developers who are wondering what they can do with their perf counter data. I wrote an extra paragraph here that outlines some ways in which the counters can be used, including diagnosing problems in a cluster and identifying workload patterns.
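For example, counters can be inspected over the daemon's admin socket
(a sketch; osd.0 is just a placeholder daemon name):

  ceph daemon osd.0 perf schema   # list each counter's type and description
  ceph daemon osd.0 perf dump     # dump current counter values as JSON

Comparing two dumps taken a few seconds apart shows which operations
dominate the current workload.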
Signed-off-by: Laura Flores <lflores@redhat.com>
Review comments are addressed.
Documentation for the newly added IAM policies is added to authentication.rst.
A test case failure caused by an incorrect IAM policy is fixed.
Signed-off-by: Rahul Dev Parashar <rahul.dev@flipkart.com>
The positions of two words are interchanged:
scans each cluster in the host ----> scans each host in the cluster
Signed-off-by: "Wang,Fei" <wf.ab@126.com>
... monitor stores using OSDs. The steps are valid only for recovering
file systems with a single active MDS.
Partially-fixes: https://tracker.ceph.com/issues/51341
Signed-off-by: Ramana Raja <rraja@redhat.com>
* refs/pull/42073/head:
doc/mgr/nfs: fix 'export apply', pool name
PendingReleaseNotes: document workaround for NFS storage change
qa/tasks/mgr/test_orchestrator_cli: fix test
qa/suites/orch/cephadm/mgr-nfs-upgrade: add test for nfs migration
mgr/cephadm: migrate nfs grace file
mgr/nfs: migrate pre-pacific nfs.ganesha-foo clusters to nfs.foo
doc/cephfs/fs-nfs-exports: document new export apply capabilities
qa/tasks/cephfs/test_nfs: define NFS_POOL_NAME
mgr/nfs: use NFS_POOL_NAME in test_nfs.py
mgr/nfs: test export apply on JSON list
mgr/nfs: add test for ganesha conf apply/import
qa/tasks/cephfs/test_nfs: retry mount a few times
mgr/cephadm: migrate all legacy nfs exports to new .nfs pool
mgr/nfs: adjust cephfs export caps if necessary
python-common: don't accept pool/ns for NFSServiceSpec
mgr/orchestrator: drop rados_config_location ServiceDescription property
mgr/cephadm: move rados_config_location() out of NFSServiceSpec
mgr/nfs: change nfs pool to .nfs
mgr/nfs/export: accept a JSON or ganesha EXPORT config
mgr/nfs: allow 'nfs export apply' to take a list of exports
python-common: remove pool + namespace from NFSServiceSpec
mgr/nfs: used fixed pool + ns
mgr/rook: used fixed pool + ns
mgr/dashboard: use fixed pool + ns
mgr/cephadm: always use fixed pool and namespace
mgr/nfs: adjust test to match pool name
mgr/nfs: always create ganesha pool with well-defined name
Reviewed-by: Varsha Rao <varao@redhat.com>
crimson/common/log: print out logger.debug() when log level >=6
Reviewed-by: Mark Nelson <mnelson@redhat.com>
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
Reviewed-by: Chunmei Liu <chunmei.liu@intel.com>
* refs/pull/42041/head:
mgr/restful: ignore min/max_size
test/crush: drop min/max_size refs
qa/workunits/mon/pool_ops: remove test for min/max_size check
qa: scrub a few remaining mentions of ruleset
qa/standalone/mon/osd-*: fix tests
PendingReleaseNotes: note min/max_size removal
mgr/dashboard: remove max/min_size and ruleset
mon/OSDMonitor: fix calls to CrushTester
crush: eliminate min_size and max_size
test/cli/crushtool: renumber rulesets in test maps
crushtool: require min/max or num-rep for --test
crush: remove last traces of 'ruleset'
test/cli/crushtool: use 'id' instead of 'ruleset' in crush inputs
crushtool: take --min-rep and --max-rep explicitly
crush/CrushTester: drop --ruleset
doc: scrub 'ruleset' from docs
src/erasure-code: rule, not ruleset
mon/OSDMonitor: remove check_crush_rule() callers
mon/OSDMonitor: rule, not ruleset
crushtool: remove check for overlapped rules
crush/CrushWrapper: get_osd_pool_default_crush_replicated_ruleset -> rule
crush: remove find_rule()
mon/OSDMonitor: use pool's crush rule directly
osd/OSDMap: drop checks for ruleset == ruleid
osd/OSDMap: use pool's crush rule_id directly
mon/PGMap: use pool's crush_rule directly
mon/OSDMonitor: remove crush ruleset->rule rewrite
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Avan Thakkar <athakkar@redhat.com>