mirror of https://github.com/ceph/ceph
synced 2025-01-11 21:50:26 +00:00

doc: replace spaces with underscores in config option names

Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>

parent 3a21a91528
commit 98ac8e1130
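Background for this commit: Ceph's configuration parser has long treated spaces, dashes, and underscores in option names as interchangeable, with underscores as the canonical spelling; this change standardizes the docs on that form. A minimal Python sketch of that normalization (the function is illustrative, not Ceph's actual parser):

```python
def normalize_option_name(name: str) -> str:
    """Canonicalize a Ceph-style config option name.

    Ceph accepts ``mon host``, ``mon-host`` and ``mon_host`` as the same
    option; the canonical (and now documented) spelling uses underscores.
    Illustrative sketch only, not Ceph's real implementation.
    """
    return name.strip().lower().replace(" ", "_").replace("-", "_")


# every old spelling touched by this diff maps onto the new one
print(normalize_option_name("mon host"))                   # mon_host
print(normalize_option_name("osd pool default min size"))  # osd_pool_default_min_size
```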
@@ -205,10 +205,10 @@ app must supply a monitor address, a username and an authentication key
 RADOS provides a number of ways for you to set the required values. For
 the monitor and encryption key settings, an easy way to handle them is to ensure
 that your Ceph configuration file contains a ``keyring`` path to a keyring file
-and at least one monitor address (e.g., ``mon host``). For example::
+and at least one monitor address (e.g., ``mon_host``). For example::
 
     [global]
-    mon host = 192.168.1.1
+    mon_host = 192.168.1.1
     keyring = /etc/ceph/ceph.client.admin.keyring
 
 Once you create the handle, you can read a Ceph configuration file to configure
@@ -1,6 +1,6 @@
 [global]
 fsid = {cluster-id}
-mon_initial_ members = {hostname}[, {hostname}]
+mon_initial_members = {hostname}[, {hostname}]
 mon_host = {ip-address}[, {ip-address}]
 
 #All clusters have a front-side public network.
@@ -19,13 +19,13 @@ auth_client_required = cephx
 #and placement groups.
 osd_journal_size = {n}
 osd_pool_default_size = {n} # Write an object n times.
-osd_pool_default_min size = {n} # Allow writing n copy in a degraded state.
-osd_pool_default_pg num = {n}
-osd_pool_default_pgp num = {n}
+osd_pool_default_min_size = {n} # Allow writing n copy in a degraded state.
+osd_pool_default_pg_num = {n}
+osd_pool_default_pgp_num = {n}
 
 #Choose a reasonable crush leaf type.
 #0 for a 1-node cluster.
 #1 for a multi node cluster in a single rack
 #2 for a multi node, multi chassis cluster with multiple hosts in a chassis
 #3 for a multi node cluster with hosts across racks, etc.
 osd_crush_chooseleaf_type = {n}
@@ -5,7 +5,7 @@
 The Filestore back end is no longer the default when creating new OSDs,
 though Filestore OSDs are still supported.
 
-``filestore debug omap check``
+``filestore_debug_omap_check``
 
 :Description: Debugging check on synchronization. Expensive. For debugging only.
 :Type: Boolean
@@ -207,13 +207,13 @@ these under ``[mon]`` or under the entry for a specific monitor.
 .. code-block:: ini
 
     [global]
-    mon host = 10.0.0.2,10.0.0.3,10.0.0.4
+    mon_host = 10.0.0.2,10.0.0.3,10.0.0.4
 
 .. code-block:: ini
 
     [mon.a]
     host = hostname1
-    mon addr = 10.0.0.10:6789
+    mon_addr = 10.0.0.10:6789
 
 See the `Network Configuration Reference`_ for details.
 
@@ -257,8 +257,8 @@ configuration option. For example,
 .. code-block:: ini
 
     [osd.0]
-    public addr = {host-public-ip-address}
-    cluster addr = {host-cluster-ip-address}
+    public_addr = {host-public-ip-address}
+    cluster_addr = {host-cluster-ip-address}
 
 .. topic:: One NIC OSD in a Two Network Cluster
 
@@ -5,10 +5,10 @@
 # copies--reset the default values as shown in 'osd_pool_default_size'.
 # If you want to allow Ceph to accept an I/O operation to a degraded PG,
 # set 'osd_pool_default_min_size' to a number less than the
-# 'osd pool default size' value.
+# 'osd_pool_default_size' value.
 
 osd_pool_default_size = 3 # Write an object 3 times.
-osd_pool_default_min size = 2 # Accept an I/O operation to a PG that has two copies of an object.
+osd_pool_default_min_size = 2 # Accept an I/O operation to a PG that has two copies of an object.
 
 # Ensure you have a realistic number of placement groups. We recommend
 # approximately 100 per OSD. E.g., total number of OSDs multiplied by 100
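The "approximately 100 PGs per OSD" comment in the hunk above corresponds to the commonly cited sizing rule: total OSDs times 100, divided by the replica count, rounded up to the next power of two. A hedged Python sketch of that arithmetic (illustrative only; modern clusters can let the pg_autoscaler choose instead):

```python
def suggested_pg_num(num_osds: int, pool_size: int,
                     target_pgs_per_osd: int = 100) -> int:
    """Rough pg_num estimate: (OSDs * ~100) / replicas, rounded up to
    the next power of two. A sketch of the documented rule of thumb,
    not an official Ceph tool."""
    raw = num_osds * target_pgs_per_osd / pool_size
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num


print(suggested_pg_num(num_osds=9, pool_size=3))  # 512
```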
@@ -896,7 +896,7 @@ To make this warning go away, you have two options:
 2. You can make the warning go away without making any changes to CRUSH by
    adding the following option to your ceph.conf ``[mon]`` section::
 
-     mon warn on legacy crush tunables = false
+     mon_warn_on_legacy_crush_tunables = false
 
    For the change to take effect, you will need to restart the monitors, or
    apply the option to running monitors with::
@@ -339,7 +339,7 @@ still write a new object to a ``degraded`` placement group if it is ``active``.
 If an OSD is ``down`` and the ``degraded`` condition persists, Ceph may mark the
 ``down`` OSD as ``out`` of the cluster and remap the data from the ``down`` OSD
 to another OSD. The time between being marked ``down`` and being marked ``out``
-is controlled by ``mon osd down out interval``, which is set to ``600`` seconds
+is controlled by ``mon_osd_down_out_interval``, which is set to ``600`` seconds
 by default.
 
 A placement group can also be ``degraded``, because Ceph cannot find one or more
@@ -366,13 +366,13 @@ the fault is resolved.
 
 Ceph provides a number of settings to balance the resource contention between
 new service requests and the need to recover data objects and restore the
-placement groups to the current state. The ``osd recovery delay start`` setting
+placement groups to the current state. The ``osd_recovery_delay_start`` setting
 allows an OSD to restart, re-peer and even process some replay requests before
-starting the recovery process. The ``osd
-recovery thread timeout`` sets a thread timeout, because multiple OSDs may fail,
-restart and re-peer at staggered rates. The ``osd recovery max active`` setting
+starting the recovery process. The ``osd_recovery_thread_timeout``
+sets a thread timeout, because multiple OSDs may fail,
+restart and re-peer at staggered rates. The ``osd_recovery_max_active`` setting
 limits the number of recovery requests an OSD will entertain simultaneously to
-prevent the OSD from failing to serve . The ``osd recovery max chunk`` setting
+prevent the OSD from failing to serve . The ``osd_recovery_max_chunk`` setting
 limits the size of the recovered data chunks to prevent network congestion.
 
 
@@ -401,12 +401,12 @@ backfill can proceed.
 Ceph provides a number of settings to manage the load spike associated with
 reassigning placement groups to an OSD (especially a new OSD). By default,
 ``osd_max_backfills`` sets the maximum number of concurrent backfills to and from
-an OSD to 1. The ``backfill full ratio`` enables an OSD to refuse a
+an OSD to 1. The ``backfill_full_ratio`` enables an OSD to refuse a
 backfill request if the OSD is approaching its full ratio (90%, by default) and
 change with ``ceph osd set-backfillfull-ratio`` command.
-If an OSD refuses a backfill request, the ``osd backfill retry interval``
+If an OSD refuses a backfill request, the ``osd_backfill_retry_interval``
 enables an OSD to retry the request (after 30 seconds, by default). OSDs can
-also set ``osd backfill scan min`` and ``osd backfill scan max`` to manage scan
+also set ``osd_backfill_scan_min`` and ``osd_backfill_scan_max`` to manage scan
 intervals (64 and 512, by default).
 
 
@@ -453,7 +453,7 @@ include:
   are waiting for an OSD with the most up-to-date data to come back ``up``.
 - **Stale**: Placement groups are in an unknown state, because the OSDs that
   host them have not reported to the monitor cluster in a while (configured
-  by ``mon osd report timeout``).
+  by ``mon_osd_report_timeout``).
 
 To identify stuck placement groups, execute the following::
 
@@ -74,22 +74,22 @@ particular daemons are set under the daemon section in your configuration file
 .. code-block:: ini
 
     [global]
-    debug ms = 1/5
+    debug_ms = 1/5
 
     [mon]
-    debug mon = 20
-    debug paxos = 1/5
-    debug auth = 2
+    debug_mon = 20
+    debug_paxos = 1/5
+    debug_auth = 2
 
     [osd]
-    debug osd = 1/5
-    debug filestore = 1/5
-    debug journal = 1
-    debug monc = 5/20
+    debug_osd = 1/5
+    debug_filestore = 1/5
+    debug_journal = 1
+    debug_monc = 5/20
 
     [mds]
-    debug mds = 1
-    debug mds balancer = 1
+    debug_mds = 1
+    debug_mds_balancer = 1
 
 
 See `Subsystem, Log and Debug Settings`_ for details.
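The ``1/5``-style values in the debug settings above encode two levels: the log-file level and the in-memory (gathered) level, while a single number applies to both. A small Python sketch of that split (an illustration of the documented format, not Ceph's actual parser):

```python
def parse_debug_level(value: str) -> tuple:
    """Split a Ceph debug setting such as ``1/5`` into its two parts:
    the log-file level and the in-memory gather level. A bare number
    like ``20`` applies to both. Illustrative sketch only."""
    parts = value.split("/")
    log_level = int(parts[0])
    memory_level = int(parts[1]) if len(parts) > 1 else log_level
    return log_level, memory_level


print(parse_debug_level("1/5"))  # (1, 5)
print(parse_debug_level("20"))   # (20, 20)
```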
@@ -557,8 +557,8 @@ related to your issue. This may not be an easy task for someone unfamiliar
 with troubleshooting Ceph. For most situations, setting the following options
 on your monitors will be enough to pinpoint a potential source of the issue::
 
-  debug mon = 10
-  debug ms = 1
+  debug_mon = 10
+  debug_ms = 1
 
 If we find that these debug levels are not enough, there's a chance we may
 ask you to raise them or even define other debug subsystems to obtain infos
@@ -243,9 +243,9 @@ No Free Drive Space
 
 Ceph prevents you from writing to a full OSD so that you don't lose data.
 In an operational cluster, you should receive a warning when your cluster's OSDs
-and pools approach the full ratio. The ``mon osd full ratio`` defaults to
+and pools approach the full ratio. The ``mon_osd_full_ratio`` defaults to
 ``0.95``, or 95% of capacity before it stops clients from writing data.
-The ``mon osd backfillfull ratio`` defaults to ``0.90``, or 90 % of
+The ``mon_osd_backfillfull_ratio`` defaults to ``0.90``, or 90 % of
 capacity above which backfills will not start. The
 OSD nearfull ratio defaults to ``0.85``, or 85% of capacity
 when it generates a health warning.
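The three thresholds in the hunk above form a simple ladder: warn at nearfull (0.85), stop new backfills at backfillfull (0.90), and block client writes at full (0.95). A simplified Python sketch of that classification (the function and its name are illustrative, not Ceph's internal logic):

```python
def capacity_health(used_ratio: float,
                    nearfull: float = 0.85,
                    backfillfull: float = 0.90,
                    full: float = 0.95) -> str:
    """Classify OSD fullness against the documented default ratios.
    A simplified sketch of the documented thresholds only."""
    if used_ratio >= full:
        return "full"          # clients blocked from writing
    if used_ratio >= backfillfull:
        return "backfillfull"  # new backfills refused
    if used_ratio >= nearfull:
        return "nearfull"      # health warning raised
    return "ok"


print(capacity_health(0.80))  # ok
print(capacity_health(0.92))  # backfillfull
```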
@@ -456,7 +456,7 @@ Blocked Requests or Slow Requests
 
 If a ``ceph-osd`` daemon is slow to respond to a request, messages will be logged
 noting ops that are taking too long. The warning threshold
-defaults to 30 seconds and is configurable via the ``osd op complaint time``
+defaults to 30 seconds and is configurable via the ``osd_op_complaint_time``
 setting. When this happens, the cluster log will receive messages.
 
 Legacy versions of Ceph complain about ``old requests``::
@@ -589,7 +589,7 @@ You can clear the flags with::
 Two other flags are supported, ``noin`` and ``noout``, which prevent
 booting OSDs from being marked ``in`` (allocated data) or protect OSDs
 from eventually being marked ``out`` (regardless of what the current value for
-``mon osd down out interval`` is).
+``mon_osd_down_out_interval`` is).
 
 .. note:: ``noup``, ``noout``, and ``nodown`` are temporary in the
    sense that once the flags are cleared, the action they were blocking
|
@ -28,11 +28,11 @@ Ceph daemon may cause a deadlock due to issues with the Linux kernel itself
|
||||
configuration, in spite of the limitations as described herein.
|
||||
|
||||
If you are trying to create a cluster on a single node, you must change the
|
||||
default of the ``osd crush chooseleaf type`` setting from ``1`` (meaning
|
||||
default of the ``osd_crush_chooseleaf_type`` setting from ``1`` (meaning
|
||||
``host`` or ``node``) to ``0`` (meaning ``osd``) in your Ceph configuration
|
||||
file before you create your monitors and OSDs. This tells Ceph that an OSD
|
||||
can peer with another OSD on the same host. If you are trying to set up a
|
||||
1-node cluster and ``osd crush chooseleaf type`` is greater than ``0``,
|
||||
1-node cluster and ``osd_crush_chooseleaf_type`` is greater than ``0``,
|
||||
Ceph will try to peer the PGs of one OSD with the PGs of another OSD on
|
||||
another node, chassis, rack, row, or even datacenter depending on the setting.
|
||||
|
||||
@@ -49,12 +49,12 @@ Fewer OSDs than Replicas
 
 If you have brought up two OSDs to an ``up`` and ``in`` state, but you still
 don't see ``active + clean`` placement groups, you may have an
-``osd pool default size`` set to greater than ``2``.
+``osd_pool_default_size`` set to greater than ``2``.
 
 There are a few ways to address this situation. If you want to operate your
 cluster in an ``active + degraded`` state with two replicas, you can set the
-``osd pool default min size`` to ``2`` so that you can write objects in
-an ``active + degraded`` state. You may also set the ``osd pool default size``
+``osd_pool_default_min_size`` to ``2`` so that you can write objects in
+an ``active + degraded`` state. You may also set the ``osd_pool_default_size``
 setting to ``2`` so that you only have two stored replicas (the original and
 one replica), in which case the cluster should achieve an ``active + clean``
 state.
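The interaction of ``osd_pool_default_size`` and ``osd_pool_default_min_size`` described above can be sketched as a tiny state function: a PG is clean with all replicas up, degraded but writeable with at least ``min_size`` replicas, and blocked below that. This is a deliberate simplification of Ceph's actual peering behaviour, for illustration only:

```python
def pg_io_state(up_replicas: int, size: int, min_size: int) -> str:
    """Simplified view of how pool size and min_size gate writes.
    Not Ceph's real peering state machine; a sketch of the behaviour
    the surrounding text describes."""
    if up_replicas >= size:
        return "active+clean"
    if up_replicas >= min_size:
        return "active+degraded"
    return "inactive"


# two OSDs up with size=3, min_size=2: writes proceed in a degraded state
print(pg_io_state(2, size=3, min_size=2))  # active+degraded
```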
@@ -66,7 +66,7 @@ state.
 Pool Size = 1
 -------------
 
-If you have the ``osd pool default size`` set to ``1``, you will only have
+If you have the ``osd_pool_default_size`` set to ``1``, you will only have
 one copy of the object. OSDs rely on other OSDs to tell them which objects
 they should have. If a first OSD has a copy of an object and there is no
 second copy, then no second OSD can tell the first OSD that it should have
@@ -363,7 +363,7 @@ If your cluster is up, but some OSDs are down and you cannot write data,
 check to ensure that you have the minimum number of OSDs running for the
 placement group. If you don't have the minimum number of OSDs running,
 Ceph will not allow you to write data because there is no guarantee
-that Ceph can replicate your data. See ``osd pool default min size``
+that Ceph can replicate your data. See ``osd_pool_default_min_size``
 in the `Pool, PG and CRUSH Config Reference`_ for details.
 
 
@@ -45,8 +45,8 @@ Enable Cache
 
 To enable the PWL cache, set the following configuration settings::
 
-    rbd persistent cache mode = {cache-mode}
-    rbd plugins = pwl_cache
+    rbd_persistent_cache_mode = {cache-mode}
+    rbd_plugins = pwl_cache
 
 Value of {cache-mode} can be ``rwl``, ``ssd`` or ``disabled``. By default the
 cache is disabled.