==============================
 Block Devices and Nomad
==============================

Like Kubernetes, Nomad can use Ceph Block Devices through `ceph-csi`_,
which allows it to dynamically provision RBD images or to import
existing ones.

Every version of Nomad can use `ceph-csi`_; here we'll describe the
latest version available at the time of writing, Nomad v1.1.2.

To use Ceph Block Devices with Nomad, you must install and configure
``ceph-csi`` within your Nomad environment. The following diagram
depicts the Nomad/Ceph technology stack.

.. ditaa::
            +-------------------------+-------------------------+
            |       Container         |        ceph--csi        |
            |                         |          node           |
            |          ^              |            ^            |
            |          |              |            |            |
            +----------+--------------+-------------------------+
            |          |                           |            |
            |          v                           |            |
            |        Nomad                         |            |
            |                                      |            |
            +---------------------------------------------------+
            | ceph--csi                                         |
            | controller                                        |
            +--------+------------------------------------------+
                     |                                |
                     | configures               maps |
                     v                                v
            +------------------------+ +------------------------+
            |                        | | rbd--nbd               |
            | Kernel Modules         | +------------------------+
            |                        | | librbd                 |
            +------------------------+-+------------------------+
            |                  RADOS Protocol                   |
            +------------------------+-+------------------------+
            |          OSDs          | |        Monitors        |
            +------------------------+ +------------------------+

.. note::
   Nomad has many task drivers, but we'll only use a Docker container
   in this example.

.. important::
   ``ceph-csi`` uses the RBD kernel modules by default, which may not
   support all Ceph `CRUSH tunables`_ or `RBD image features`_.

Create a Pool
=============

By default, Ceph block devices use the ``rbd`` pool. Create a pool for
Nomad persistent storage. Ensure that your Ceph cluster is running,
then create the pool::

   $ ceph osd pool create nomad

See `Create a Pool`_ for details on specifying the number of placement
groups for your pools, and `Placement Groups`_ for details on the number
of placement groups you should set for your pools.

A newly created pool must be initialized prior to use. Use the ``rbd``
tool to initialize the pool::

   $ rbd pool init nomad

Configure ceph-csi
==================

Setup Ceph Client Authentication
--------------------------------

Create a new user for Nomad and ``ceph-csi``. Execute the following and
record the generated key::

   $ ceph auth get-or-create client.nomad mon 'profile rbd' osd 'profile rbd pool=nomad' mgr 'profile rbd pool=nomad'
   [client.nomad]
      key = AQAlh9Rgg2vrDxAARy25T7KHabs6iskSHpAEAQ==

Configure Nomad
---------------

By default, Nomad doesn't allow containers to use privileged mode. Edit
the Nomad configuration file by adding this configuration block to
`/etc/nomad.d/nomad.hcl`::

   plugin "docker" {
     config {
       allow_privileged = true
     }
   }

Nomad must have the `rbd` kernel module loaded. Check whether it is
already loaded::

   $ lsmod | grep rbd
   rbd                    94208  2
   libceph               364544  1 rbd

If it is not loaded, load it::

   $ sudo modprobe rbd

Then restart Nomad.
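Loading the module with ``modprobe`` does not persist across reboots,
and the Docker plugin change only takes effect once Nomad restarts. The
following is a minimal sketch of both steps, assuming a systemd-based
host on which Nomad is managed by a systemd unit named ``nomad``
(adjust to your environment)::

   # Register the rbd module with systemd-modules-load so it is loaded
   # automatically at boot, then load it for the current boot.
   $ echo rbd | sudo tee /etc/modules-load.d/rbd.conf
   $ sudo modprobe rbd

   # Restart Nomad so the allow_privileged change is picked up.
   # Assumes Nomad runs as a systemd unit named "nomad".
   $ sudo systemctl restart nomad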
Create ceph-csi controller and plugin nodes
===========================================

The `ceph-csi`_ plugin requires two components:

- **Controller plugin**: communicates with the provider's API.
- **Node plugin**: executes tasks on the client.

.. note::
   We'll set the ceph-csi version in these files; see `ceph-csi release`_
   for other versions.

Configure controller plugin
---------------------------

The controller plugin requires the Ceph monitor addresses of the Ceph
cluster. Collect both the Ceph cluster unique `fsid` and the monitor
addresses::

   $ ceph mon dump
   <...>
   fsid b9127830-b0cc-4e34-aa47-9d1a2e9949a8
   <...>
   0: [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] mon.a
   1: [v2:192.168.1.2:3300/0,v1:192.168.1.2:6789/0] mon.b
   2: [v2:192.168.1.3:3300/0,v1:192.168.1.3:6789/0] mon.c

Generate a `ceph-csi-plugin-controller.nomad` file similar to the example
below, substituting the `fsid` for "clusterID", and the monitor addresses
for "monitors"::

   job "ceph-csi-plugin-controller" {
     datacenters = ["dc1"]
     group "controller" {
       network {
         port "metrics" {}
       }
       task "ceph-controller" {
         template {
           data = <