doc: remove leveldb support from doc

Signed-off-by: luo rixin <luorixin@huawei.com>
luo rixin 2023-03-23 20:04:59 +08:00
parent e4c1a5fe2f
commit 6da360833a
8 changed files with 8 additions and 13 deletions


@@ -94,8 +94,7 @@ defaulted to ON. To build without the RADOS Gateway:
Another example below is building with debugging and alternate locations
for a couple of external dependencies:
-cmake -DLEVELDB_PREFIX="/opt/hyperleveldb" \
--DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
+cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
..
Ceph has several bundled dependencies such as Boost, RocksDB and Arrow. By
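
A minimal sketch of the resulting debug configure step, assuming a fresh ``build`` directory inside a Ceph source checkout; the directory name and the trailing ``make`` are illustrative and not part of this diff:

    mkdir build && cd build
    # updated invocation from the hunk above, without the removed LEVELDB_PREFIX option
    cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" ..
    make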


@@ -63,7 +63,7 @@ To install Ceph with RPMs, execute the following steps:
#. Install pre-requisite packages::
-sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
+sudo yum install snappy gdisk python-argparse gperftools-libs
Once you have added either release or development packages, or added a
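
As a hedged follow-up, the prerequisites from the updated line can be checked after installation with ``rpm -q``; nothing here is prescribed by the diff itself:

    # verify the pre-requisite packages are installed
    rpm -q snappy gdisk python-argparse gperftools-libs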


@@ -161,7 +161,7 @@ Usage::
compact
-------
-Causes compaction of monitor's leveldb storage.
+Causes compaction of monitor's RocksDB storage.
Usage::
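
As a hedged illustration only (this is not the man page's own usage line, and the monitor id ``a`` is a placeholder), a compaction can be requested from the command line:

    ceph tell mon.a compact      # request compaction over the network
    ceph daemon mon.a compact    # same, via the local admin socket (referenced later in this commit)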


@@ -38,7 +38,7 @@ the monitor services are written by the Ceph Monitor to a single Paxos
instance, and Paxos writes the changes to a key/value store for strong
consistency. Ceph Monitors are able to query the most recent version of the
cluster map during sync operations, and they use the key/value store's
-snapshots and iterators (using leveldb) to perform store-wide synchronization.
+snapshots and iterators (using RocksDB) to perform store-wide synchronization.
.. ditaa::
/-------------\ /-------------\
@@ -265,7 +265,7 @@ Data
Ceph provides a default path where Ceph Monitors store data. For optimal
performance in a production Ceph Storage Cluster, we recommend running Ceph
-Monitors on separate hosts and drives from Ceph OSD Daemons. As leveldb uses
+Monitors on separate hosts and drives from Ceph OSD Daemons. As RocksDB uses
``mmap()`` for writing the data, Ceph Monitors flush their data from memory to disk
very often, which can interfere with Ceph OSD Daemon workloads if the data
store is co-located with the OSD Daemons.
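
A hedged way to act on the separate-drives advice is to check which device backs the monitor store; the default data path and the monitor id ``a`` below are assumptions, not part of this diff:

    # show the device and free space behind the monitor's data directory
    df -h /var/lib/ceph/mon/ceph-a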


@@ -127,8 +127,8 @@ Monitor databases might grow in size when there are placement groups that have
not reached an ``active+clean`` state in a long time.
This alert might also indicate that the monitor's database is not properly
-compacting, an issue that has been observed with some older versions of leveldb
-and rocksdb. Forcing a compaction with ``ceph daemon mon.<id> compact`` might
+compacting, an issue that has been observed with some older versions of
+RocksDB. Forcing a compaction with ``ceph daemon mon.<id> compact`` might
shrink the database's on-disk size.
This alert might also indicate that the monitor has a bug that prevents it from
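
A hedged way to see whether a forced compaction actually shrank the store, assuming the default data path and a monitor named ``a``:

    du -sh /var/lib/ceph/mon/ceph-a/store.db   # on-disk size before compaction
    ceph daemon mon.a compact
    du -sh /var/lib/ceph/mon/ceph-a/store.db   # on-disk size afterwards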


@@ -290,8 +290,6 @@ to their default level or to a level suitable for normal operations.
+--------------------------+-----------+--------------+
| ``rocksdb`` | 4 | 5 |
+--------------------------+-----------+--------------+
-| ``leveldb`` | 4 | 5 |
-+--------------------------+-----------+--------------+
| ``fuse`` | 1 | 5 |
+--------------------------+-----------+--------------+
| ``mgr`` | 2 | 5 |
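
As a hedged example of applying the table above (the monitor id ``a`` is a placeholder), the ``rocksdb`` subsystem can be made more verbose and later restored to the listed default:

    ceph tell mon.a config set debug_rocksdb 5/5   # temporarily raise the log level
    ceph tell mon.a config set debug_rocksdb 4/5   # back to the default shown in the table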


@@ -418,7 +418,7 @@ Monitor Store Failures
Symptoms of store corruption
----------------------------
-Ceph monitor stores the :term:`Cluster Map` in a key/value store such as LevelDB. If
+Ceph monitor stores the :term:`Cluster Map` in a key/value store such as RocksDB. If
a monitor fails due to the key/value store corruption, following error messages
might be found in the monitor log::
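
As a hedged aid only (the exact error text is not reproduced here, and the default log path and monitor id ``a`` are assumptions), the monitor log can be searched for store-related entries:

    grep -i rocksdb /var/log/ceph/ceph-mon.a.log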


@@ -132,8 +132,6 @@ Footnotes
to how Extended Attributes associate with a POSIX file. An object's omap
is not physically located in the object's storage, but its precise
implementation is invisible and immaterial to RADOS Gateway.
-In Hammer, LevelDB is used to store omap data within each OSD; later releases
-default to RocksDB but can be configured to use LevelDB.
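
As a hedged aside, the omap of any RADOS object can be inspected directly with the ``rados`` tool; ``<pool>`` and ``<object>`` are placeholders:

    rados -p <pool> listomapkeys <object>   # list an object's omap keys
    rados -p <pool> listomapvals <object>   # list keys together with their values
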
[2] Before the Dumpling release, the 'bucket.instance' metadata did not
exist and the 'bucket' metadata contained its information. It is possible