Mirror of https://github.com/ceph/ceph
Synced 2024-12-19 09:57:05 +00:00
Commit 662c69e525
Previously, ceph-disk-* would only let you use a journal that was a file inside the OSD data directory. With this change, you can run

    ceph-disk-prepare /dev/sdb /dev/sdb

to put the journal on a second partition of the same disk as the OSD data (which might save some file system overhead), or, more interestingly,

    ceph-disk-prepare /dev/sdb /dev/sdc

which creates a new partition on /dev/sdc to use as the journal. The size of the partition is decided by $osd_journal_size, and /dev/sdc must be a GPT-format disk. Multiple OSDs may share the same journal disk (using separate partitions); this way, a single fast SSD can serve as the journal for multiple spinning disks.

The second use case currently requires parted, so a Recommends: for parted has been added to the Debian packaging.

Closes: #3078
Closes: #3079

Signed-off-by: Tommi Virtanen <tv@inktank.com>
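The workflow described above can be sketched as a shell session. This is an illustrative sketch only: the device names (/dev/sdb, /dev/sdc, …) and the journal size value are assumptions, and it requires the ceph-disk tooling from this commit plus real GPT-capable disks.

```shell
# Sketch only: assumes the ceph-disk-* tools from this commit are installed
# and that /dev/sdb.. are unused disks (device names are illustrative).

# Journal as a second partition on the same disk as the OSD data:
ceph-disk-prepare /dev/sdb /dev/sdb

# Journal as a new partition on a separate GPT-format disk (e.g. an SSD);
# the partition size is taken from $osd_journal_size:
ceph-disk-prepare /dev/sdb /dev/sdc

# Multiple spinning disks sharing the same SSD as journal device,
# each OSD getting its own journal partition on /dev/sdc:
ceph-disk-prepare /dev/sdd /dev/sdc
ceph-disk-prepare /dev/se /dev/sdc
```

The journal partition size would come from a ceph.conf setting along these lines (10240 MB is an illustrative value, not one taken from this commit):

```shell
# ceph.conf fragment
# [osd]
# osd journal size = 10240
```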
source
.gitignore
ceph-common.install
ceph-fs-common.install
ceph-fuse.install
ceph-mds.install
ceph-resource-agents.install
ceph.dirs
ceph.docs
ceph.install
ceph.lintian-overrides
ceph.postrm
changelog
compat
control
copyright
libcephfs1.install
libcephfs-dev.install
librados2.install
librados-dev.install
librbd1.install
librbd-dev.install
python-ceph.install
radosgw.dirs
radosgw.install
rest-bench.install
rules
watch