Merge PR #20688 into wip-sage-testing-20180306.131906

* refs/pull/20688/head:
	doc: Drop the output of pg query
	doc: Update Monitoring OSDs and PGs

Reviewed-by: Lenz Grimmer <lgrimmer@suse.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Sage Weil 2018-03-06 07:19:46 -06:00
commit f4bb81553c
2 changed files with 20 additions and 127 deletions


@@ -51,3 +51,4 @@ jtlayton Jeff Layton <jlayton@redhat.com>
 yuriw Yuri Weinstein <yweins@redhat.com>
 jecluis João Eduardo Luís <joao@suse.de>
 yunfeiguan Yunfei Guan <yunfei.guan@xtaotech.com>
+LenzGr Lenz Grimmer <lgrimmer@suse.com>


@@ -66,10 +66,10 @@ running, too. To see if all OSDs are running, execute::

     ceph osd stat

-The result should tell you the map epoch (eNNNN), the total number of OSDs (x),
-how many are ``up`` (y) and how many are ``in`` (z). ::
+The result should tell you the total number of OSDs (x),
+how many are ``up`` (y), how many are ``in`` (z) and the map epoch (eNNNN). ::

-    eNNNN: x osds: y up, z in
+    x osds: y up, z in; epoch: eNNNN

 If the number of OSDs that are ``in`` the cluster is more than the number of
 OSDs that are ``up``, execute the following command to identify the ``ceph-osd``
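To make the new format concrete: on a small healthy cluster the status line above might render as follows (the OSD counts and epoch are illustrative values, not captured from a real cluster)::

    3 osds: 3 up, 3 in; epoch: e87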
@@ -79,14 +79,12 @@ daemons that are not running::

 ::

-    dumped osdmap tree epoch 1
-    # id    weight  type name       up/down reweight
-    -1      2       pool openstack
-    -3      2               rack dell-2950-rack-A
-    -2      2                       host dell-2950-A1
-    0       1                               osd.0   up      1
-    1       1                               osd.1   down    1
+    #ID CLASS WEIGHT  TYPE NAME             STATUS REWEIGHT PRI-AFF
+    -1        2.00000 pool openstack
+    -3        2.00000     rack dell-2950-rack-A
+    -2        2.00000         host dell-2950-A1
+     0    ssd 1.00000             osd.0         up  1.00000 1.00000
+     1    ssd 1.00000             osd.1       down  1.00000 1.00000
 .. tip:: The ability to search through a well-designed CRUSH hierarchy may help
    you troubleshoot your cluster by identifying the physical locations faster.
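As the tip suggests, a well-laid-out hierarchy lends itself to plain text filtering; a minimal sketch, reusing the sample rack name from the output above (``grep`` is just one convenient way to narrow the tree to one branch)::

    ceph osd tree | grep -A 3 'rack dell-2950-rack-A'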
@@ -142,7 +140,7 @@ The result should tell you the osdmap epoch (eNNN), the placement group number
 ({pg-num}), the OSDs in the Up Set (up[]), and the OSDs in the acting set
 (acting[]). ::

-    osdmap eNNN pg {pg-num} -> up [0,1,2] acting [0,1,2]
+    osdmap eNNN pg {raw-pg-num} ({pg-num}) -> up [0,1,2] acting [0,1,2]

 .. note:: If the Up Set and Acting Set do not match, this may be an indicator
    that the cluster is rebalancing itself or of a potential problem with
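Filled in with a hypothetical placement group id, the ``ceph pg map`` invocation this hunk documents might look like the following under the new output format (epoch, pg id and OSD numbers are illustrative)::

    ceph pg map 1.6c
    osdmap e13 pg 1.6c (1.6c) -> up [1,0] acting [1,0]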
@@ -207,16 +205,16 @@ placement groups, execute::

     ceph pg stat

-The result should tell you the placement group map version (vNNNNNN), the total
-number of placement groups (x), and how many placement groups are in a
-particular state such as ``active+clean`` (y). ::
+The result should tell you the total number of placement groups (x), how many
+placement groups are in a particular state such as ``active+clean`` (y) and the
+amount of data stored (z). ::

-    vNNNNNN: x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail
+    x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail

 .. note:: It is common for Ceph to report multiple states for placement groups.

-In addition to the placement group states, Ceph will also echo back the amount
-of data used (aa), the amount of storage capacity remaining (bb), and the total
+In addition to the placement group states, Ceph will also echo back the amount of
+storage capacity used (aa), the amount of storage capacity remaining (bb), and the total
 storage capacity for the placement group. These numbers can be important in a
 few cases:
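With illustrative numbers substituted for the placeholders (not taken from a real cluster), the new ``ceph pg stat`` line might read::

    128 pgs: 128 active+clean; 205 bytes data, 108 MB used, 18 GB / 20 GB avail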
@@ -255,113 +253,7 @@ To query a particular placement group, execute the following::

 Ceph will output the query in JSON format.

-.. code-block:: javascript
-
-   {
-     "state": "active+clean",
-     "up": [
-       1,
-       0
-     ],
-     "acting": [
-       1,
-       0
-     ],
-     "info": {
-       "pgid": "1.e",
-       "last_update": "4'1",
-       "last_complete": "4'1",
-       "log_tail": "0'0",
-       "last_backfill": "MAX",
-       "purged_snaps": "[]",
-       "history": {
-         "epoch_created": 1,
-         "last_epoch_started": 537,
-         "last_epoch_clean": 537,
-         "last_epoch_split": 534,
-         "same_up_since": 536,
-         "same_interval_since": 536,
-         "same_primary_since": 536,
-         "last_scrub": "4'1",
-         "last_scrub_stamp": "2013-01-25 10:12:23.828174"
-       },
-       "stats": {
-         "version": "4'1",
-         "reported": "536'782",
-         "state": "active+clean",
-         "last_fresh": "2013-01-25 10:12:23.828271",
-         "last_change": "2013-01-25 10:12:23.828271",
-         "last_active": "2013-01-25 10:12:23.828271",
-         "last_clean": "2013-01-25 10:12:23.828271",
-         "last_unstale": "2013-01-25 10:12:23.828271",
-         "mapping_epoch": 535,
-         "log_start": "0'0",
-         "ondisk_log_start": "0'0",
-         "created": 1,
-         "last_epoch_clean": 1,
-         "parent": "0.0",
-         "parent_split_bits": 0,
-         "last_scrub": "4'1",
-         "last_scrub_stamp": "2013-01-25 10:12:23.828174",
-         "log_size": 128,
-         "ondisk_log_size": 128,
-         "stat_sum": {
-           "num_bytes": 205,
-           "num_objects": 1,
-           "num_object_clones": 0,
-           "num_object_copies": 0,
-           "num_objects_missing_on_primary": 0,
-           "num_objects_degraded": 0,
-           "num_objects_unfound": 0,
-           "num_read": 1,
-           "num_read_kb": 0,
-           "num_write": 3,
-           "num_write_kb": 1
-         },
-         "stat_cat_sum": {
-         },
-         "up": [
-           1,
-           0
-         ],
-         "acting": [
-           1,
-           0
-         ]
-       },
-       "empty": 0,
-       "dne": 0,
-       "incomplete": 0
-     },
-     "recovery_state": [
-       {
-         "name": "Started\/Primary\/Active",
-         "enter_time": "2013-01-23 09:35:37.594691",
-         "might_have_unfound": [
-         ],
-         "scrub": {
-           "scrub_epoch_start": "536",
-           "scrub_active": 0,
-           "scrub_block_writes": 0,
-           "finalizing_scrub": 0,
-           "scrub_waiting_on": 0,
-           "scrub_waiting_on_whom": [
-           ]
-         }
-       },
-       {
-         "name": "Started",
-         "enter_time": "2013-01-23 09:35:31.581160"
-       }
-     ]
-   }
-
-The following subsections describe common states in greater detail.
+The following subsections describe the common pg states in detail.
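With the verbose JSON sample dropped from the doc, individual fields can still be pulled out of a live query; a minimal sketch, assuming the ``jq`` utility is installed and using a hypothetical pg id::

    ceph pg 1.e query | jq '.state, .up, .acting'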
 Creating
 --------
@@ -571,7 +463,7 @@ calculates how to map the object to a `placement group`_, and then calculates
 how to assign the placement group to an OSD dynamically. To find the object
 location, all you need is the object name and the pool name. For example::

-    ceph osd map {poolname} {object-name}
+    ceph osd map {poolname} {object-name} [namespace]

 .. topic:: Exercise: Locate an Object
@@ -593,7 +485,7 @@ location, all you need is the object name and the pool name. For example::

 Ceph should output the object's location. For example::

-    osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]
+    osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up ([1,0], p0) acting ([1,0], p0)

 To remove the test object, simply delete it using the ``rados rm`` command.
 For example::
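Putting the steps together, one end-to-end pass through the exercise might look like this (the pool and object names follow the doc's own example; the pg id and OSD numbers printed will depend on the cluster)::

    rados put test-object-1 testfile.txt --pool=data
    ceph osd map data test-object-1
    rados rm test-object-1 --pool=data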