cephfs: remove vestiges of mds deactivate

Fixes: http://tracker.ceph.com/issues/24001

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Patrick Donnelly 2018-09-18 15:29:00 -07:00
parent b2a9c6082d
commit f113fa80a9
8 changed files with 16 additions and 30 deletions

View File

@@ -21,6 +21,9 @@
The secrets can be set using the 'rbd mirror pool peer add' and
'rbd mirror pool peer set' actions.
+* The `ceph mds deactivate` is fully obsolete and references to it in the docs
+have been removed or clarified.
>=13.1.0
--------

View File

@@ -77,8 +77,8 @@ and enters the MDS cluster.
up:stopping
-When a rank is deactivated (stopped), the monitors command an active MDS to
-enter the ``up:stopping`` state. In this state, the MDS accepts no new client
+When a rank is stopped, the monitors command an active MDS to enter the
+``up:stopping`` state. In this state, the MDS accepts no new client
connections, migrates all subtrees to other ranks in the file system, flush its
metadata journal, and, if the last rank (0), evict all clients and shutdown
(see also :ref:`cephfs-administration`).
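
A quick way to see this state in practice (a sketch only; the grep pattern and the assumption that a rank is currently being drained are not part of this commit)::

    # While a rank is being drained, the MDS map lists it as up:stopping
    # until its metadata has been handed off to the remaining ranks.
    ceph fs dump | grep stopping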

View File

@@ -80,20 +80,20 @@ Reducing the number of ranks is as simple as reducing ``max_mds``:
...
# fsmap e10: 1/1/1 up {0=a=up:active}, 2 up:standby
-The cluster will automatically deactivate extra ranks incrementally until
-``max_mds`` is reached.
+The cluster will automatically stop extra ranks incrementally until ``max_mds``
+is reached.
See :doc:`/cephfs/administration` for more details which forms ``<role>`` can
take.
-Note: deactivated ranks will first enter the stopping state for a period of
+Note: stopped ranks will first enter the stopping state for a period of
time while it hands off its share of the metadata to the remaining active
daemons. This phase can take from seconds to minutes. If the MDS appears to
be stuck in the stopping state then that should be investigated as a possible
bug.
If an MDS daemon crashes or is killed while in the ``up:stopping`` state, a
-standby will take over and the cluster monitors will against try to deactivate
+standby will take over and the cluster monitors will against try to stop
the daemon.
When a daemon finishes stopping, it will respawn itself and go back to being a
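
For illustration only (the rank names, epochs, and standby counts below are made up, mirroring the ``fsmap`` sample above)::

    # Shrink from two active ranks to one; the monitors stop rank 1 on
    # their own, so no explicit deactivate step is needed.
    ceph fs set <fs_name> max_mds 1
    # The fsmap then passes through something like:
    #   fsmap e11: 2/2/1 up {0=a=up:active,1=b=up:stopping}
    #   fsmap e12: 1/1/1 up {0=a=up:active}, 2 up:standby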

View File

@@ -18,7 +18,7 @@ The proper sequence for upgrading the MDS cluster is:
ceph fs set <fs_name> max_mds 1
-2. Wait for cluster to deactivate non-zero ranks where only rank 0 is active and the rest are standbys.
+2. Wait for cluster to stop non-zero ranks where only rank 0 is active and the rest are standbys.
::
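
One way to script that wait (a sketch; the ``ceph status`` fsmap format varies between releases, so the pattern may need adjusting)::

    # Poll until the fsmap reports a single active rank and nothing stopping.
    while ! ceph status | grep -q '1/1/1 up'; do
        sleep 5
    done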

View File

@@ -33,7 +33,7 @@ Synopsis
| **ceph** **log** *<logtext>* [ *<logtext>*... ]
-| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *repaired* ] ...
+| **ceph** **mds** [ *compat* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *repaired* ] ...
| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...
@@ -371,12 +371,6 @@ Usage::
ceph mds compat show
-Subcommand ``deactivate`` stops mds.
-Usage::
-ceph mds deactivate <role>
Subcommand ``fail`` forces mds to status fail.
Usage::

View File

@@ -917,7 +917,6 @@ function test_mon_mds()
fail_all_mds $FS_NAME
ceph mds compat show
-expect_false ceph mds deactivate 2
ceph fs dump
ceph fs get $FS_NAME
for mds_gid in $(get_mds_gids $FS_NAME) ; do

View File

@@ -707,7 +707,7 @@ bool MDSMonitor::prepare_beacon(MonOpRequestRef op)
const auto &fs = pending.get_filesystem(fscid);
mon->clog->info() << info.human_name() << " finished "
<< "deactivating rank " << info.rank << " in filesystem "
<< "stopping rank " << info.rank << " in filesystem "
<< fs->mds_map.fs_name << " (now has "
<< fs->mds_map.get_num_in_mds() - 1 << " ranks)";
@@ -1299,11 +1299,7 @@ int MDSMonitor::filesystem_command(
string whostr;
cmd_getval(g_ceph_context, cmdmap, "role", whostr);
if (prefix == "mds deactivate") {
ss << "This command is deprecated because it is obsolete;"
<< " to deactivate one or more MDS, decrease max_mds appropriately"
<< " (ceph fs set <fsname> max_mds)";
} else if (prefix == "mds set_state") {
if (prefix == "mds set_state") {
mds_gid_t gid;
if (!cmd_getval(g_ceph_context, cmdmap, "gid", gid)) {
ss << "error parsing 'gid' value '"
@@ -1787,15 +1783,15 @@ bool MDSMonitor::maybe_resize_cluster(FSMap &fsmap, fs_cluster_id_t fscid)
mds_rank_t target = in - 1;
const auto &info = mds_map.get_info(target);
if (mds_map.is_active(target)) {
dout(1) << "deactivating " << target << dendl;
mon->clog->info() << "deactivating " << info.human_name();
dout(1) << "stopping " << target << dendl;
mon->clog->info() << "stopping " << info.human_name();
fsmap.modify_daemon(info.global_id,
[] (MDSMap::mds_info_t *info) {
info->state = MDSMap::STATE_STOPPING;
});
return true;
} else {
dout(20) << "skipping deactivate on " << target << dendl;
dout(20) << "skipping stop of " << target << dendl;
return false;
}
}

View File

@@ -376,12 +376,6 @@ class TestMDS(TestArgparse):
assert_equal({}, validate_command(sigdict, ['mds', 'compat',
'show', 'toomany']))
-def test_deactivate(self):
-self.assert_valid_command(['mds', 'deactivate', 'someone'])
-assert_equal({}, validate_command(sigdict, ['mds', 'deactivate']))
-assert_equal({}, validate_command(sigdict, ['mds', 'deactivate',
-'someone', 'toomany']))
def test_set_state(self):
self.assert_valid_command(['mds', 'set_state', '1', '2'])
assert_equal({}, validate_command(sigdict, ['mds', 'set_state']))