From eb290eebbd1462d4dcdeb69a6b258a16a16cb7fb Mon Sep 17 00:00:00 2001
From: Jos Collin
Date: Fri, 2 Mar 2018 10:31:54 +0530
Subject: [PATCH 1/3] doc: Update Monitoring OSDs and PGs

Updated 'Monitoring OSDs and PGs' doc with:
* Latest command output
* Misc doc fixes

Signed-off-by: Jos Collin
---
 doc/rados/operations/monitoring-osd-pg.rst | 38 ++++++++++------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/doc/rados/operations/monitoring-osd-pg.rst b/doc/rados/operations/monitoring-osd-pg.rst
index 0107e341d1c..294dd2941fe 100644
--- a/doc/rados/operations/monitoring-osd-pg.rst
+++ b/doc/rados/operations/monitoring-osd-pg.rst
@@ -66,10 +66,10 @@ running, too. To see if all OSDs are running, execute::
 
     ceph osd stat
 
-The result should tell you the map epoch (eNNNN), the total number of OSDs (x),
-how many are ``up`` (y) and how many are ``in`` (z). ::
+The result should tell you the total number of OSDs (x),
+how many are ``up`` (y), how many are ``in`` (z), and the map epoch (eNNNN). ::
 
-    eNNNN: x osds: y up, z in
+    x osds: y up, z in; epoch: eNNNN
 
 If the number of OSDs that are ``in`` the cluster is more than the number of
 OSDs that are ``up``, execute the following command to identify the ``ceph-osd``
@@ -79,14 +79,12 @@ daemons that are not running::
 
 ::
 
-    dumped osdmap tree epoch 1
-    # id    weight  type name       up/down reweight
-    -1      2       pool openstack
-    -3      2       rack dell-2950-rack-A
-    -2      2       host dell-2950-A1
-    0       1               osd.0   up      1
-    1       1               osd.1   down    1
-
+    #ID CLASS WEIGHT  TYPE NAME             STATUS REWEIGHT PRI-AFF
+     -1       2.00000 pool openstack
+     -3       2.00000 rack dell-2950-rack-A
+     -2       2.00000 host dell-2950-A1
+      0   ssd 1.00000      osd.0            up     1.00000  1.00000
+      1   ssd 1.00000      osd.1            down   1.00000  1.00000
 
 .. tip:: The ability to search through a well-designed CRUSH hierarchy may
    help you troubleshoot your cluster by identifying the physical locations
   faster.
@@ -142,7 +140,7 @@ The result should tell you the osdmap epoch (eNNN), the placement group number
 ({pg-num}), the OSDs in the Up Set (up[]), and the OSDs in the acting set
 (acting[]). ::
 
-    osdmap eNNN pg {pg-num} -> up [0,1,2] acting [0,1,2]
+    osdmap eNNN pg {raw-pg-num} ({pg-num}) -> up [0,1,2] acting [0,1,2]
 
 .. note:: If the Up Set and Acting Set do not match, this may be an indicator
    that the cluster is rebalancing itself or of a potential problem with
@@ -207,16 +205,16 @@ placement groups, execute::
 
     ceph pg stat
 
-The result should tell you the placement group map version (vNNNNNN), the total
-number of placement groups (x), and how many placement groups are in a
-particular state such as ``active+clean`` (y). ::
+The result should tell you the total number of placement groups (x), how many
+placement groups are in a particular state such as ``active+clean`` (y), and the
+amount of data stored (z). ::
 
-    vNNNNNN: x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail
+    x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail
 
 .. note:: It is common for Ceph to report multiple states for placement groups.
 
-In addition to the placement group states, Ceph will also echo back the amount
-of data used (aa), the amount of storage capacity remaining (bb), and the total
+In addition to the placement group states, Ceph will also echo back the amount of
+storage capacity used (aa), the amount of storage capacity remaining (bb), and the total
 storage capacity for the placement group.
These numbers can be important in a few cases:
@@ -571,7 +569,7 @@ calculates how to map the object to a `placement group`_, and then calculates
 how to assign the placement group to an OSD dynamically. To find the object
 location, all you need is the object name and the pool name. For example::
 
-    ceph osd map {poolname} {object-name}
+    ceph osd map {poolname} {object-name} [namespace]
 
 .. topic:: Exercise: Locate an Object
@@ -593,7 +591,7 @@ location, all you need is the object name and the pool name. For example::
 
    Ceph should output the object's location. For example::
 
-    osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]
+    osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up ([1,0], p0) acting ([1,0], p0)
 
    To remove the test object, simply delete it using the ``rados rm``
    command. For example::

From afe657c2feef81ddfd6669d64ba7371e226d5c1a Mon Sep 17 00:00:00 2001
From: Jos Collin
Date: Fri, 2 Mar 2018 10:34:47 +0530
Subject: [PATCH 2/3] doc: Drop the output of pg query

Drop the output of pg query from the doc, because it is:
* Very old. The doc has only one-fourth of the latest 'pg query' output.
* Updating the doc with the latest 'pg query' output would be huge.
* Too difficult to maintain in the doc and keep in sync with changes to the
  actual 'pg query' output.

However, we can insert parts of the output in the doc if necessary.

Signed-off-by: Jos Collin
---
 doc/rados/operations/monitoring-osd-pg.rst | 108 +--------------------
 1 file changed, 1 insertion(+), 107 deletions(-)

diff --git a/doc/rados/operations/monitoring-osd-pg.rst b/doc/rados/operations/monitoring-osd-pg.rst
index 294dd2941fe..f0bd25e885b 100644
--- a/doc/rados/operations/monitoring-osd-pg.rst
+++ b/doc/rados/operations/monitoring-osd-pg.rst
@@ -253,113 +253,7 @@ To query a particular placement group, execute the following::
 
 Ceph will output the query in JSON format.
 
-.. code-block:: javascript
-
-   {
-     "state": "active+clean",
-     "up": [
-       1,
-       0
-     ],
-     "acting": [
-       1,
-       0
-     ],
-     "info": {
-       "pgid": "1.e",
-       "last_update": "4'1",
-       "last_complete": "4'1",
-       "log_tail": "0'0",
-       "last_backfill": "MAX",
-       "purged_snaps": "[]",
-       "history": {
-         "epoch_created": 1,
-         "last_epoch_started": 537,
-         "last_epoch_clean": 537,
-         "last_epoch_split": 534,
-         "same_up_since": 536,
-         "same_interval_since": 536,
-         "same_primary_since": 536,
-         "last_scrub": "4'1",
-         "last_scrub_stamp": "2013-01-25 10:12:23.828174"
-       },
-       "stats": {
-         "version": "4'1",
-         "reported": "536'782",
-         "state": "active+clean",
-         "last_fresh": "2013-01-25 10:12:23.828271",
-         "last_change": "2013-01-25 10:12:23.828271",
-         "last_active": "2013-01-25 10:12:23.828271",
-         "last_clean": "2013-01-25 10:12:23.828271",
-         "last_unstale": "2013-01-25 10:12:23.828271",
-         "mapping_epoch": 535,
-         "log_start": "0'0",
-         "ondisk_log_start": "0'0",
-         "created": 1,
-         "last_epoch_clean": 1,
-         "parent": "0.0",
-         "parent_split_bits": 0,
-         "last_scrub": "4'1",
-         "last_scrub_stamp": "2013-01-25 10:12:23.828174",
-         "log_size": 128,
-         "ondisk_log_size": 128,
-         "stat_sum": {
-           "num_bytes": 205,
-           "num_objects": 1,
-           "num_object_clones": 0,
-           "num_object_copies": 0,
-           "num_objects_missing_on_primary": 0,
-           "num_objects_degraded": 0,
-           "num_objects_unfound": 0,
-           "num_read": 1,
-           "num_read_kb": 0,
-           "num_write": 3,
-           "num_write_kb": 1
-         },
-         "stat_cat_sum": {
-
-         },
-         "up": [
-           1,
-           0
-         ],
-         "acting": [
-           1,
-           0
-         ]
-       },
-       "empty": 0,
-       "dne": 0,
-       "incomplete": 0
-     },
-     "recovery_state": [
-       {
-         "name": "Started\/Primary\/Active",
-         "enter_time": "2013-01-23 09:35:37.594691",
-         "might_have_unfound": [
-
-         ],
-         "scrub": {
-           "scrub_epoch_start": "536",
-           "scrub_active": 0,
-           "scrub_block_writes": 0,
-           "finalizing_scrub": 0,
-           "scrub_waiting_on": 0,
-           "scrub_waiting_on_whom": [
-
-           ]
-         }
-       },
-       {
-         "name": "Started",
-         "enter_time": "2013-01-23 09:35:31.581160"
-       }
-     ]
-   }
-
-
-
-The following subsections describe common states in greater detail.
+The following subsections describe the common pg states in detail.
 
 Creating
 --------

From 2a978666fc078cefc44a6d2df945dd790cffa4e2 Mon Sep 17 00:00:00 2001
From: Sage Weil
Date: Tue, 6 Mar 2018 07:19:45 -0600
Subject: [PATCH 3/3] githubmap: update contributors

Signed-off-by: Sage Weil
---
 .githubmap | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.githubmap b/.githubmap
index bee2e2a3605..fb49d73fbd7 100644
--- a/.githubmap
+++ b/.githubmap
@@ -51,3 +51,4 @@ jtlayton Jeff Layton
 yuriw Yuri Weinstein
 jecluis João Eduardo Luís
 yunfeiguan Yunfei Guan
+LenzGr Lenz Grimmer
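---

For reviewers who want to exercise the commands these patches re-document, a
minimal console session is sketched below. It is illustrative only and not part
of the patch series: the pool name (``data``), the object name
(``test-object-1``), and all output values are assumptions modeled on the
examples in the diffs above, not captured from a real cluster. ::

    # Confirm how many OSDs are up and in; output format per PATCH 1/3
    $ ceph osd stat
    2 osds: 2 up, 2 in; epoch: e537

    # Summarize placement group states and data usage
    $ ceph pg stat
    8 pgs: 8 active+clean; 205 bytes data, 12 MB used, 1 GB / 2 GB avail

    # Locate an object by pool and name; the namespace argument is optional
    $ ceph osd map data test-object-1
    osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up ([1,0], p0) acting ([1,0], p0)

    # Remove the test object when done
    $ rados rm test-object-1 --pool=data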