diff --git a/doc/_themes/ceph/static/nature.css_t b/doc/_themes/ceph/static/nature.css_t
index f02a23298a3..019f8dc1240 100644
--- a/doc/_themes/ceph/static/nature.css_t
+++ b/doc/_themes/ceph/static/nature.css_t
@@ -314,3 +314,22 @@ p.breathe-sectiondef-title {
     font-weight: bold;
     border-bottom: thin solid #5E6A71;
 }
+
+.columns-2,
+.columns-3 {
+  display: flex;
+}
+
+.columns-2 > div,
+.columns-3 > div {
+  flex: 1;
+  padding: 0 10px 10px 0;
+}
+
+.columns-2 > div {
+  width: 50%;
+}
+
+.columns-3 > div {
+  width: 33.33%;
+}
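Sphinx's ``container`` directive renders as a plain ``<div>`` carrying the
directive's argument as a CSS class, so the markup these new rules target
looks roughly like this (a sketch; docutils adds its own ``docutils
container`` classes, and the exact class list may vary by version)::

    <div class="columns-3 docutils container">
      <div class="column docutils container">...</div>
      <div class="column docutils container">...</div>
      <div class="column docutils container">...</div>
    </div>

``display: flex`` on the wrapper lays the child ``<div>`` elements out in a
row, and ``flex: 1`` makes them share the row equally; since ``flex: 1``
implies ``flex-basis: 0``, the explicit 50% and 33.33% widths mostly act as a
fallback. Note that ``.columns-3 > div`` matches any direct child ``<div>``,
so the ``column`` class itself is not load-bearing in the CSS.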
diff --git a/doc/index.rst b/doc/index.rst
index 954ee9da305..1c7710edfe7 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -5,74 +5,72 @@
 Ceph uniquely delivers **object, block, and file storage in one unified
 system**.
 
-.. raw:: html
+.. container:: columns-3
 
-   <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Ceph Object Store</h3>
+   .. container:: column
 
-- RESTful Interface
-- S3- and Swift-compliant APIs
-- S3-style subdomains
-- Unified S3/Swift namespace
-- User management
-- Usage tracking
-- Striped objects
-- Cloud solution integration
-- Multi-site deployment
-- Multi-site replication
+      .. raw:: html
 
-.. raw:: html
+         <h3>Ceph Object Store</h3>
 
-   </td><td><h3>Ceph Block Device</h3>
+      - RESTful Interface
+      - S3- and Swift-compliant APIs
+      - S3-style subdomains
+      - Unified S3/Swift namespace
+      - User management
+      - Usage tracking
+      - Striped objects
+      - Cloud solution integration
+      - Multi-site deployment
+      - Multi-site replication
 
+   .. container:: column
 
-- Thin-provisioned
-- Images up to 16 exabytes
-- Configurable striping
-- In-memory caching
-- Snapshots
-- Copy-on-write cloning
-- Kernel driver support
-- KVM/libvirt support
-- Back-end for cloud solutions
-- Incremental backup
-- Disaster recovery (multisite asynchronous replication)
+      .. raw:: html
 
-.. raw:: html
+         <h3>Ceph Block Device</h3>
 
-   </td><td><h3>Ceph File System</h3>
+      - Thin-provisioned
+      - Images up to 16 exabytes
+      - Configurable striping
+      - In-memory caching
+      - Snapshots
+      - Copy-on-write cloning
+      - Kernel driver support
+      - KVM/libvirt support
+      - Back-end for cloud solutions
+      - Incremental backup
+      - Disaster recovery (multisite asynchronous replication)
 
+   .. container:: column
+
-- POSIX-compliant semantics
-- Separates metadata from data
-- Dynamic rebalancing
-- Subdirectory snapshots
-- Configurable striping
-- Kernel driver support
-- FUSE support
-- NFS/CIFS deployable
-- Use with Hadoop (replace HDFS)
+      .. raw:: html
 
-.. raw:: html
+         <h3>Ceph File System</h3>
 
-   </td></tr><tr><td>
+      - POSIX-compliant semantics
+      - Separates metadata from data
+      - Dynamic rebalancing
+      - Subdirectory snapshots
+      - Configurable striping
+      - Kernel driver support
+      - FUSE support
+      - NFS/CIFS deployable
+      - Use with Hadoop (replace HDFS)
 
-See `Ceph Object Store`_ for additional details.
+.. container:: columns-3
 
-.. raw:: html
+   .. container:: column
 
-   </td><td>
+      See `Ceph Object Store`_ for additional details.
 
-See `Ceph Block Device`_ for additional details.
+   .. container:: column
 
-.. raw:: html
+      See `Ceph Block Device`_ for additional details.
 
-   </td><td>
+   .. container:: column
 
-See `Ceph File System`_ for additional details.
+      See `Ceph File System`_ for additional details.
 
-.. raw:: html
-
-   </td></tr></tbody></table>
-
 Ceph is highly reliable, easy to manage, and free. The power of Ceph can
 transform your company's IT infrastructure and your ability to manage vast
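Distilled, the layout pattern this file now uses is the following (a sketch;
docutils only nests the blocks correctly if each level's body is indented
consistently, three spaces per level here, and the ``<h3>`` headings stay in
``raw:: html``, presumably to keep them out of the RST section hierarchy and
thus out of the TOC)::

    .. container:: columns-3

       .. container:: column

          .. raw:: html

             <h3>Column heading</h3>

          - First feature
          - Second feature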
diff --git a/doc/rados/index.rst b/doc/rados/index.rst
index 27d1daad14e..0b38371715e 100644
--- a/doc/rados/index.rst
+++ b/doc/rados/index.rst
@@ -13,62 +13,63 @@
 Ceph Monitor and two Ceph OSD Daemons for data replication. The Ceph File
 System, Ceph Object Storage and Ceph Block Devices read data from and write
 data to the Ceph Storage Cluster.
 
-.. raw:: html
+.. container:: columns-3
 
-   <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Config and Deploy</h3>
+   .. container:: column
 
-Ceph Storage Clusters have a few required settings, but most configuration
-settings have default values. A typical deployment uses a deployment tool
-to define a cluster and bootstrap a monitor. See `Deployment`_ for details
-on ``cephadm.``
+      .. raw:: html
 
-.. toctree::
-   :maxdepth: 2
+         <h3>Config and Deploy</h3>
 
-   Configuration <configuration/index>
-   Deployment <../cephadm/index>
+      Ceph Storage Clusters have a few required settings, but most configuration
+      settings have default values. A typical deployment uses a deployment tool
+      to define a cluster and bootstrap a monitor. See `Deployment`_ for details
+      on ``cephadm``.
 
-.. raw:: html
+      .. toctree::
+         :maxdepth: 2
 
-   </td><td><h3>Operations</h3>
+         Configuration <configuration/index>
+         Deployment <../cephadm/index>
 
-Once you have deployed a Ceph Storage Cluster, you may begin operating
-your cluster.
+   .. container:: column
 
-.. toctree::
-   :maxdepth: 2
-
-   Operations <operations/index>
+      .. raw:: html
 
-.. toctree::
-   :maxdepth: 1
+         <h3>Operations</h3>
 
-   Man Pages <man/index>
+      Once you have deployed a Ceph Storage Cluster, you may begin operating
+      your cluster.
 
+      .. toctree::
+         :maxdepth: 2
 
-.. toctree::
-   :hidden:
-
-   troubleshooting/index
+         Operations <operations/index>
 
-.. raw:: html
+      .. toctree::
+         :maxdepth: 1
 
-   </td><td><h3>APIs</h3>
+         Man Pages <man/index>
 
-Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_ and/or the
-`Ceph File System`_. You may also develop applications that talk directly to
-the Ceph Storage Cluster.
+      .. toctree::
+         :hidden:
 
-.. toctree::
-   :maxdepth: 2
+         troubleshooting/index
 
-   APIs <api/index>
+   .. container:: column
 
-.. raw:: html
-
-   </td></tr></tbody></table>
+      .. raw:: html
+
+         <h3>APIs</h3>
+
+      Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_ and/or the
+      `Ceph File System`_. You may also develop applications that talk directly to
+      the Ceph Storage Cluster.
+
+      .. toctree::
+         :maxdepth: 2
+
+         APIs <api/index>
 
 .. _Ceph Block Devices: ../rbd/
 .. _Ceph File System: ../cephfs/
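The bootstrap flow the "Config and Deploy" column alludes to looks like this
with ``cephadm`` (a sketch; the address is a placeholder, and the available
flags vary by release)::

    # On the first host: create a new cluster and bring up its initial
    # monitor (and a manager) bound to the given IP.
    cephadm bootstrap --mon-ip 192.168.0.1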
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
index 8d7c79887f7..a0549ea9837 100644
--- a/doc/start/intro.rst
+++ b/doc/start/intro.rst
@@ -56,34 +56,34 @@
 Ceph Storage Cluster to scale, rebalance, and recover dynamically.
 
 .. _REST API: ../../mgr/restful
 
-.. raw:: html
+.. container:: columns-2
 
-   <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>
+   .. container:: column
 
-To begin using Ceph in production, you should review our hardware
-recommendations and operating system recommendations.
+      .. raw:: html
 
-.. toctree::
-   :maxdepth: 2
+         <h3>Recommendations</h3>
 
-   Hardware Recommendations <hardware-recommendations>
-   OS Recommendations <os-recommendations>
+      To begin using Ceph in production, you should review our hardware
+      recommendations and operating system recommendations.
 
+      .. toctree::
+         :maxdepth: 2
 
-.. raw:: html
+         Hardware Recommendations <hardware-recommendations>
+         OS Recommendations <os-recommendations>
 
-   </td><td><h3>Get Involved</h3>
+   .. container:: column
 
-   You can avail yourself of help or contribute documentation, source
-   code or bugs by getting involved in the Ceph community.
+      .. raw:: html
 
-.. toctree::
-   :maxdepth: 2
+         <h3>Get Involved</h3>
 
-   get-involved
-   documenting-ceph
+      You can get help, or contribute documentation, source code, or bug
+      reports, by getting involved in the Ceph community.
 
-.. raw:: html
+      .. toctree::
+         :maxdepth: 2
 
-   </td></tr></tbody></table>
+         get-involved
+         documenting-ceph
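To check the reworked two- and three-column layout, the docs can be rebuilt
locally with Ceph's doc build script (a sketch, assuming the standard source
tree; the output path may differ by branch)::

    ./admin/build-doc
    # then open build-doc/output/html/index.html in a browser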