Some parts of the documentation regarding
the bulk flag have typos.
Command for creating a pool
was: `ceph osd create test_pool --bulk`
should be: `ceph osd pool create test_pool --bulk`
Command for setting bulk value in a pool
was: `ceph osd pool set test_pool bulk=<true/false/1/0>`
should be: `ceph osd pool set test_pool bulk <true/false/1/0>`
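For example, the corrected commands can be combined with `ceph osd pool get`
to confirm the flag took effect (an illustrative session only; `test_pool` is
just a placeholder name):

    # create a pool with the bulk flag set from the start
    ceph osd pool create test_pool --bulk
    # toggle the flag on an existing pool
    ceph osd pool set test_pool bulk true
    # read the current value back
    ceph osd pool get test_pool bulk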
Also removed some trailing whitespace.
Changed `complements` to `complement`.
https://tracker.ceph.com/issues/54485
Signed-off-by: Kamoltat <ksirivad@redhat.com>
This PR repairs a link to a PDF. The link was broken
when the PDF assets were moved during the restructure
of the ceph.io website in 2021.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
* refs/pull/44054/head:
doc/rados/operations: document pg_num_max
mgr: set max of 32 pgs for .mgr pool
mgr/dashboard: expect pg_num_max property for pools
mon/OSDMonitor: add option --pg-num-max arg for create pool
mon/OSDMonitor: disallow setting pg_num < min or > max
mgr/pg_autoscaler: apply pg_num_max
mon: add pg_num_max pool property
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
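A minimal sketch of how the pg_num_max property from the merge above might be
used from the CLI (the pool name and values here are placeholders, and the
exact `--pg-num-max` spelling of the create-time flag is assumed from the
commit subjects):

    # create a pool capped at 32 PGs, the same limit applied to the .mgr pool
    ceph osd pool create small_pool --pg-num-max 32
    # the cap can also be adjusted later through the pool property
    ceph osd pool set small_pool pg_num_max 32
    # setting pg_num above pg_num_max (or below pg_num_min) is now rejected
    ceph osd pool set small_pool pg_num 64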
Add Release Notes, remove the `profile`-related
material from the autoscaler documentation,
and replace it with the `bulk` flag.
Signed-off-by: Kamoltat <ksirivad@redhat.com>
Creating a pool with `--bulk` allows the
pg_autoscaler to use the `scale-down` mode.
Creating pool:
`ceph osd pool create <pool-name> --bulk`
Get var:
`ceph osd pool get <pool-name> bulk`
Set var:
`ceph osd pool set <pool-name> bulk <true/false/1/0>`
Removed `autoscale_profile` and incorporated the bulk flag
into the calculation of `final_pg_target` for each pool.
`bin/ceph osd pool autoscale-status` no longer has a
`PROFILE` column; it has a `BULK` column instead.
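Abridged, illustrative output (most columns elided):

    $ bin/ceph osd pool autoscale-status
    POOL       SIZE  ...  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
    test_pool     0  ...      32              on         True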
Signed-off-by: Kamoltat <ksirivad@redhat.com>
This PR adds a deployment scenarios section to the cephadm docs to document the `single-host-defaults` flag and to explain how to deploy in an isolated environment.
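For instance, the flag is passed at bootstrap time (a sketch; the monitor IP
is a placeholder):

    # bootstrap a cluster with settings suited to a single-host deployment
    cephadm bootstrap --mon-ip 192.168.0.1 --single-host-defaults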
Signed-off-by: Melissa Li <melissali@redhat.com>
The pg_autoscale module now starts all pools
with the scale-up profile by default.
Added tests in workunits/mon/pg_autoscaler.sh
to verify that pools are created with the
scale-up profile by default.
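A rough sketch of the kind of check involved (not the actual test code),
relying on the PROFILE column that `ceph osd pool autoscale-status` reported
at the time:

    # create a pool without any profile-related arguments...
    ceph osd pool create foo 1
    # ...and confirm it reports the scale-up profile by default
    ceph osd pool autoscale-status | grep foo | grep scale-up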
Updated documentation and release notes to
reflect the change in the default behavior
of the pg_autoscale profile.
Fixes: https://tracker.ceph.com/issues/53309
Signed-off-by: Kamoltat <ksirivad@redhat.com>
This is the editorial syntax and elegance PR for the "Bootstrap Options"
section in the "Configuring Ceph" chapter of the RADOS Guide.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
Update the steps in the mclock config reference document to manually
override an OSD's max IOPS capacity. Provide information on the alternative
ways to override the osd_mclock_max_capacity_iops_[hdd,ssd] options for
an OSD.
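One such override is an explicit per-OSD config setting (a sketch; the OSD
id and the IOPS figure are placeholders):

    # override the measured IOPS capacity for a specific SSD-backed OSD
    ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 21500
    # read the value back
    ceph config get osd.0 osd_mclock_max_capacity_iops_ssd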
Fixes: https://tracker.ceph.com/issues/52025
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
This PR updates the text in the RADOS Guide
(the Ceph Storage Cluster Guide) that appears
at the beginning of the "Storage Devices"
chapter. I did the following:
- rewrote some of the sentences so that
they read more like written text than like
spoken language
- added "Ceph Manager" to the list of daemons
that a Ceph cluster comprises
- that's about it.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
This PR makes minor improvements to the
syntax of the sentences in the "FileStore"
material in the Configuration chapter of
the RADOS manual.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
This PR makes minor alterations to the
text at the beginning of the RADOS Guide.
Most notably, the monitor daemon has been
added to the list of types of daemons that
constitute a Ceph cluster.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
... monitor stores using OSDs. The steps are valid only for recovering
file systems with a single active MDS.
Partially-fixes: https://tracker.ceph.com/issues/51341
Signed-off-by: Ramana Raja <rraja@redhat.com>
* refs/pull/42041/head:
mgr/restful: ignore min/max_size
test/crush: drop min/max_size refs
qa/workunits/mon/pool_ops: remove test for min/max_size check
qa: scrub a few remaining mentions of ruleset
qa/standalone/mon/osd-*: fix tests
PendingReleaseNotes: note min/max_size removal
mgr/dashboard: remove max/min_size and ruleset
mon/OSDMonitor: fix calls to CrushTester
crush: eliminate min_size and max_size
test/cli/crushtool: renumber rulesets in test maps
crushtool: require min/max or num-rep for --test
crush: remove last traces of 'ruleset'
test/cli/crushtool: use 'id' instead of 'ruleset' in crush inputs
crushtool: take --min-rep and --max-rep explicitly
crush/CrushTester: drop --ruleset
doc: scrub 'ruleset' from docs
src/erasure-code: rule, not ruleset
mon/OSDMonitor: remove check_crush_rule() callers
mon/OSDMonitor: rule, not ruleset
crushtool: remove check for overlapping rules
crush/CrushWrapper: get_osd_pool_default_crush_replicated_ruleset -> rule
crush: remove find_rule()
mon/OSDMonitor: use pool's crush rule directly
osd/OSDMap: drop checks for ruleset == ruleid
osd/OSDMap: use pool's crush rule_id directly
mon/PGMap: use pool's crush_rule directly
mon/OSDMonitor: remove crush ruleset->rule rewrite
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Avan Thakkar <athakkar@redhat.com>
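With rulesets gone, a crushtool test run names the rule by id and gives the
replica count explicitly, along the lines of (a sketch; the map file, rule id,
and counts are placeholders):

    # exercise rule 0 of a compiled map with 3 replicas
    crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings
    # or give an explicit range with the new flags
    crushtool -i crushmap.bin --test --rule 0 --min-rep 2 --max-rep 3 --show-statistics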