mirror of
https://github.com/ceph/ceph
synced 2024-12-17 17:05:42 +00:00
Merge pull request #16498 from Songweibin/wip-doc-rbd-trash-cli
doc: add rbd new trash cli and cleanups in release-notes.rst
This commit is contained in:
commit
f0b5900025
@ -383,6 +383,21 @@ Commands
   --io-total.  Defaults are: --io-size 4096, --io-threads 16, --io-total 1G,
   --io-pattern seq.

:command:`trash ls` [*pool-name*]
  List all entries from the trash.

:command:`trash mv` *image-spec*
  Move an image to the trash. Images, even ones actively in use by
  clones, can be moved to the trash and deleted at a later time.

:command:`trash rm` *image-id*
  Delete an image from the trash. If the image's deferment time has not
  expired, it cannot be removed unless the force option is used. An image
  that is actively in use by clones or that has snapshots cannot be
  removed.

:command:`trash restore` *image-id*
  Restore an image from the trash.

Image and snap specs
====================
@ -561,6 +576,30 @@ To release a lock::

    rbd lock remove mypool/myimage mylockid client.2485

To list images from trash::

    rbd trash ls mypool

To defer delete an image (use *--delay* to set the delay time; default is 0)::

    rbd trash mv mypool/myimage

To delete an image from trash (be careful!)::

    rbd trash rm mypool/myimage-id

To force delete an image from trash (be careful!)::

    rbd trash rm mypool/myimage-id --force

To restore an image from trash::

    rbd trash restore mypool/myimage-id

To restore an image from trash and rename it::

    rbd trash restore mypool/myimage-id --image mynewimage

Availability
============
@ -88,7 +88,21 @@ but replace ``{poolname}`` with the name of the pool::

For example::

    rbd ls swimmingpool

To list deferred delete block devices in the ``rbd`` pool, execute the
following::

    rbd trash ls

To list deferred delete block devices in a particular pool, execute the
following, but replace ``{poolname}`` with the name of the pool::

    rbd trash ls {poolname}

For example::

    rbd trash ls swimmingpool

Retrieving Image Information
============================
@ -131,21 +145,77 @@ To remove a block device, execute the following, but replace ``{image-name}``
with the name of the image you want to remove::

    rbd rm {image-name}

For example::

    rbd rm foo

To remove a block device from a pool, execute the following, but replace
``{image-name}`` with the name of the image to remove and replace
``{pool-name}`` with the name of the pool::

    rbd rm {pool-name}/{image-name}

For example::

    rbd rm swimmingpool/bar

To defer delete a block device from a pool, execute the following, but
replace ``{image-name}`` with the name of the image to move and replace
``{pool-name}`` with the name of the pool::

    rbd trash mv {pool-name}/{image-name}

For example::

    rbd trash mv swimmingpool/bar

To remove a deferred block device from a pool, execute the following, but
replace ``{image-id}`` with the id of the image to remove and replace
``{pool-name}`` with the name of the pool::

    rbd trash rm {pool-name}/{image-id}

For example::

    rbd trash rm swimmingpool/2bf4474b0dc51

.. note::

   * You can move an image to the trash even if it has snapshot(s) or is
     actively in use by clones, but it cannot be removed from the trash
     while in that state.

   * You can use *--delay* to set the deferment time (default is 0). If the
     deferment time has not expired, the image cannot be removed unless you
     use the force option.

Restoring a Block Device Image
==============================

To restore a deferred delete block device in the rbd pool, execute the
following, but replace ``{image-id}`` with the id of the image::

    rbd trash restore {image-id}

For example::

    rbd trash restore 2bf4474b0dc51

To restore a deferred delete block device in a particular pool, execute
the following, but replace ``{image-id}`` with the id of the image and
replace ``{pool-name}`` with the name of the pool::

    rbd trash restore {pool-name}/{image-id}

For example::

    rbd trash restore swimmingpool/2bf4474b0dc51

You can also use *--image* to rename the image when restoring it. For
example::

    rbd trash restore swimmingpool/2bf4474b0dc51 --image new-name
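Putting these commands together, a typical deferred-deletion workflow might
look like the following sketch; the pool name, image name, and the 30-second
*--delay* value are illustrative::

    rbd trash mv swimmingpool/bar --delay 30       # defer deletion by 30 seconds
    rbd trash ls swimmingpool                      # note the image id of 'bar'
    rbd trash restore swimmingpool/{image-id}      # restore it, or ...
    rbd trash rm swimmingpool/{image-id} --force   # ... delete before the delay expires

Without *--force*, the final ``rbd trash rm`` would fail until the 30-second
deferment period has elapsed.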
.. _create a pool: ../../rados/operations/pools/#create-a-pool
.. _Storage Pools: ../../rados/operations/pools
@ -129,7 +129,7 @@ Major Changes from Kraken

 * Improved discard handling when the object map feature is enabled.
 * rbd CLI ``import`` and ``copy`` commands now detect sparse and
   preserve sparse regions.
-* Images and Snapshots will now include a creation timestamp
+* Images and Snapshots will now include a creation timestamp.

 - *CephFS*:
@ -195,7 +195,7 @@ Major Changes from Kraken

   for applying changes to entire subtrees.  For example, ``ceph
   osd down `ceph osd ls-tree rack1```.
 - ``ceph osd {add,rm}-{noout,noin,nodown,noup}`` allow the
-  `noout`, `nodown`, `noin`, and `noup` flags to be applied to
+  `noout`, `noin`, `nodown`, and `noup` flags to be applied to
   specific OSDs.
 - ``ceph log last [n]`` will output the last *n* lines of the cluster
   log.
@ -332,7 +332,7 @@ Upgrade from Jewel or Kraken

 #. Upgrade monitors by installing the new packages and restarting the
    monitor daemons.  Note that, unlike prior releases, the ceph-mon
-   daemons *must* be upgraded first.::
+   daemons *must* be upgraded first::

      # systemctl restart ceph-mon.target
@ -356,7 +356,7 @@ Upgrade from Jewel or Kraken

    If you are upgrading from kraken, you may already have ceph-mgr
    daemons deployed.  If not, or if you are upgrading from jewel, you
    can deploy new daemons with tools like ceph-deploy or ceph-ansible.
-   For example,::
+   For example::

      # ceph-deploy mgr create HOST
@ -371,12 +371,12 @@ Upgrade from Jewel or Kraken

      ...

 #. Upgrade all OSDs by installing the new packages and restarting the
-   ceph-osd daemons on all hosts.::
+   ceph-osd daemons on all hosts::

      # systemctl restart ceph-osd.target

    You can monitor the progress of the OSD upgrades with the new
-   ``ceph versions`` or ``ceph osd versions`` command.::
+   ``ceph versions`` or ``ceph osd versions`` command::

      # ceph osd versions
      {
@ -385,12 +385,12 @@ Upgrade from Jewel or Kraken

      }

 #. Upgrade all CephFS daemons by upgrading packages and restarting
-   daemons on all hosts.::
+   daemons on all hosts::

      # systemctl restart ceph-mds.target

 #. Upgrade all radosgw daemons by upgrading packages and restarting
-   daemons on all hosts.::
+   daemons on all hosts::

      # systemctl restart radosgw.target
@ -5824,7 +5824,7 @@ Upgrading from Hammer

 * For all distributions that support systemd (CentOS 7, Fedora, Debian
   Jessie 8.x, OpenSUSE), ceph daemons are now managed using native systemd
-  files instead of the legacy sysvinit scripts.  For example,::
+  files instead of the legacy sysvinit scripts.  For example::

     systemctl start ceph.target       # start all daemons
     systemctl status ceph-osd@12      # check status of osd.12
@ -5865,7 +5865,7 @@ Upgrading from Hammer

      ceph-deploy install --stable jewel HOST

-#. Stop the daemon(s).::
+#. Stop the daemon(s)::

      service ceph stop    # fedora, centos, rhel, debian
      stop ceph-all        # ubuntu
@ -5875,7 +5875,7 @@ Upgrading from Hammer

      chown -R ceph:ceph /var/lib/ceph
      chown -R ceph:ceph /var/log/ceph

-#. Restart the daemon(s).::
+#. Restart the daemon(s)::

      start ceph-all               # ubuntu
      systemctl start ceph.target  # debian, centos, fedora, rhel
@ -9436,7 +9436,7 @@ Upgrading from Hammer

 * For all distributions that support systemd (CentOS 7, Fedora, Debian
   Jessie 8.x, OpenSUSE), ceph daemons are now managed using native systemd
-  files instead of the legacy sysvinit scripts.  For example,::
+  files instead of the legacy sysvinit scripts.  For example::

     systemctl start ceph.target       # start all daemons
     systemctl status ceph-osd@12      # check status of osd.12
@ -9476,7 +9476,7 @@ Upgrading from Hammer

      ceph-deploy install --stable infernalis HOST

-#. Stop the daemon(s).::
+#. Stop the daemon(s)::

      service ceph stop    # fedora, centos, rhel, debian
      stop ceph-all        # ubuntu
@ -9486,7 +9486,7 @@ Upgrading from Hammer

      chown -R ceph:ceph /var/lib/ceph
      chown -R ceph:ceph /var/log/ceph

-#. Restart the daemon(s).::
+#. Restart the daemon(s)::

      start ceph-all               # ubuntu
      systemctl start ceph.target  # debian, centos, fedora, rhel
@ -10127,7 +10127,7 @@ Upgrading from Hammer

 * For all distributions that support systemd (CentOS 7, Fedora, Debian
   Jessie 8.x, OpenSUSE), ceph daemons are now managed using native systemd
-  files instead of the legacy sysvinit scripts.  For example,::
+  files instead of the legacy sysvinit scripts.  For example::

     systemctl start ceph.target       # start all daemons
     systemctl status ceph-osd@12      # check status of osd.12
@ -10166,7 +10166,7 @@ Upgrading from Hammer

      ceph-deploy install --stable infernalis HOST

-#. Stop the daemon(s).::
+#. Stop the daemon(s)::

      service ceph stop    # fedora, centos, rhel, debian
      stop ceph-all        # ubuntu
@ -10176,7 +10176,7 @@ Upgrading from Hammer

      chown -R ceph:ceph /var/lib/ceph
      chown -R ceph:ceph /var/log/ceph

-#. Restart the daemon(s).::
+#. Restart the daemon(s)::

      start ceph-all               # ubuntu
      systemctl start ceph.target  # debian, centos, fedora, rhel
@ -18733,7 +18733,7 @@ Please refer to the document `Upgrading from Argonaut to Bobtail`_ for details.

   Upgrading a cluster without adjusting the Ceph configuration will
   likely prevent the system from starting up on its own.  We recommend
   first modifying the configuration to indicate that authentication is
-  disabled, and only then upgrading to the latest version.::
+  disabled, and only then upgrading to the latest version::

     auth client required = none
     auth service required = none