.. _kubernetes-dev:

=======================================
Hacking on Ceph in Kubernetes with Rook
=======================================

.. warning::

    This is *not* official user documentation for setting up production
    Ceph clusters with Kubernetes. It is aimed at developers who want
    to hack on Ceph in Kubernetes.

This guide is aimed at Ceph developers getting started with running
Ceph in a Kubernetes environment. It assumes that you may be hacking on Rook,
Ceph or both, so everything is built from source.

TL;DR for hacking on MGR modules
================================

Make your changes to the Python code base and then from Ceph's
``build`` directory, run::

    ../src/script/kubejacker/kubejacker.sh '192.168.122.1:5000'

where ``'192.168.122.1:5000'`` is a local docker registry and
Rook's ``CephCluster`` CR uses ``image: 192.168.122.1:5000/ceph/ceph:latest``.

1. Build a kubernetes cluster
=============================

Before installing Ceph/Rook, make sure you've got a working kubernetes
cluster with some nodes added (i.e. ``kubectl get nodes`` shows you something).
The rest of this guide assumes that your development workstation has network
access to your kubernetes cluster, such that ``kubectl`` works from your
workstation.

`There are many ways <https://kubernetes.io/docs/setup/>`_
to build a kubernetes cluster: here we include some tips/pointers on where
to get started.

`kubic-terraform-kvm <https://github.com/kubic-project/kubic-terraform-kvm>`_
might also be an option.

Or `host your own <https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/>`_ with
``kubeadm``.

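If you go the ``kubeadm`` route, a minimal single-node bootstrap looks
roughly like the following. This is a sketch, not a tested recipe: the pod
network CIDR, the choice of flannel as the CNI plugin, and the taint key
(``master`` on older Kubernetes, ``control-plane`` on newer releases) are
all assumptions that depend on your environment::

    # On the designated master node
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # Make kubectl work for your (non-root) user
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Install a pod network add-on (flannel shown as one example)
    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

    # For a single-node cluster, allow pods to be scheduled on the master
    kubectl taint nodes --all node-role.kubernetes.io/master-

Afterwards ``kubectl get nodes`` should report your node as ``Ready``.
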
Some Tips
---------

Here are some tips for a smoother ride with

``kubeadm``:

- If you have previously added any yum/deb repos for kubernetes packages,
  disable them before trying to use the packages.cloud.google.com repository.
  If you don't, you'll get quite confusing conflicts.
- Even if your distro already has docker, make sure you're installing
  a version from docker.com that is within the range mentioned in the
  kubeadm install instructions. Especially, note that the docker in CentOS 7 and 8
  will *not* work.

``minikube``:

- Start up minikube by passing the local docker registry address::

      minikube start --driver=docker --insecure-registry='192.168.122.1:5000'

Hosted elsewhere
----------------

If you do not have any servers to hand, you might try a pure
container provider such as Google Compute Engine. Your mileage may
vary when it comes to what kinds of storage devices are visible
to your kubernetes cluster.

Make sure you check how much it's costing you before you spin up a big cluster!

2. Run a docker registry
========================

Run this somewhere accessible from both your workstation and your
kubernetes cluster (i.e. so that ``docker push/pull`` just works everywhere).
This is likely to be the same host you're using as your kubernetes master.

1. Install the ``docker-distribution`` package.
2. If you want to configure the port, edit ``/etc/docker-distribution/registry/config.yml``
3. Enable the registry service:

   ::

     systemctl enable docker-distribution
     systemctl start docker-distribution

You may need to mark the registry as **insecure**.

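Because this registry speaks plain HTTP, the docker daemon on each node (and
on your workstation) must be told to trust it. One common way, assuming your
daemon reads ``/etc/docker/daemon.json`` (the path can differ on some
systems), is to add the registry there and restart docker::

    {
        "insecure-registries" : ["192.168.122.1:5000"]
    }

You can then smoke-test the registry with a trivial push, for example::

    docker pull busybox
    docker tag busybox 192.168.122.1:5000/busybox
    docker push 192.168.122.1:5000/busybox
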
3. Build Rook
=============

.. note::

    Building Rook is **not required** to make changes to Ceph.

Install Go if you don't already have it.

Download the Rook source code:

::

    go get github.com/rook/rook

    # Ignore this warning, as Rook is not a conventional go package
    can't load package: package github.com/rook/rook: no Go files in /home/jspray/go/src/github.com/rook/rook

You will now have a Rook source tree in ``~/go/src/github.com/rook/rook`` -- you may
be tempted to clone it elsewhere, but your life will be easier if you
leave it in your GOPATH.

Run ``make`` in the root of your Rook tree to build its binaries and containers:

::

    make
    ...
    === saving image build-9204c79b/ceph-amd64
    === docker build build-9204c79b/ceph-toolbox-base-amd64
    sha256:653bb4f8d26d6178570f146fe637278957e9371014ea9fce79d8935d108f1eaa
    === docker build build-9204c79b/ceph-toolbox-amd64
    sha256:445d97b71e6f8de68ca1c40793058db0b7dd1ebb5d05789694307fd567e13863
    === caching image build-9204c79b/ceph-toolbox-base-amd64

You can use ``docker image ls`` to see the resulting built images. The
images you care about are the ones with tags ending "ceph-amd64" (used
for the Rook operator and Ceph daemons) and "ceph-toolbox-amd64" (used
for the "toolbox" container where the CLI is run).

4. Build Ceph
=============

.. note::

    Building Ceph is **not required** to make changes to MGR modules
    written in Python.

The Rook containers and the Ceph containers are independent now. Note that
Rook's Ceph client libraries need to communicate with the Ceph cluster,
therefore a compatible major version is required.

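If you have not built Ceph from source before, the usual sequence from the
root of a Ceph checkout is sketched below; see the build instructions in the
Ceph README for details, as the dependency-installation step varies by
distro::

    ./install-deps.sh
    ./do_cmake.sh
    cd build
    ninja
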
You can run a Registry docker container with access to your Ceph source
tree using a command like:

::

    docker run -i -v /my/ceph/src:/my/ceph/src -p 192.168.122.1:5000:5000 -t --name registry registry:2

Once you have built Ceph, you can inject the resulting binaries into
the Rook container image using the ``kubejacker.sh`` script (run from
your build directory but from *outside* your build container).

5. Run Kubejacker
=================

``kubejacker`` needs access to your docker registry. Execute the script
to build a docker image containing your latest Ceph binaries:

::

    build$ ../src/script/kubejacker/kubejacker.sh "<host>:<port>"

Now you've got your freshly built Rook and your freshly built Ceph
combined into a single container image, ready to run. Next time you change
something in Ceph, you can re-run this to update your image and restart your
kubernetes containers. If you change something in Rook, then re-run the Rook
build, and the Ceph build too.

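One way to make running daemons pick up a re-pushed ``:latest`` image
(assuming the pods' image pull policy allows re-pulling) is simply to delete
the pods and let Kubernetes recreate them, for example for the MGR::

    kubectl -n rook-ceph delete pod -l app=rook-ceph-mgr
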
6. Run a Rook cluster
=====================

Please refer to `Rook's documentation <https://rook.io/docs/rook/master/ceph-quickstart.html>`_
for setting up a Rook operator, a Ceph cluster and the toolbox.

The Rook source tree includes example .yaml files in
``cluster/examples/kubernetes/ceph/``. Copy these into
a working directory, and edit as necessary to configure
the setup you want:

- Ensure that ``spec.cephVersion.image`` points to your docker registry::

      spec:
        cephVersion:
          allowUnsupported: true
          image: 192.168.122.1:5000/ceph/ceph:latest

Then, load the configuration into the kubernetes API using ``kubectl``:

::

    kubectl apply -f ./cluster-test.yaml

Use ``kubectl -n rook-ceph get pods`` to check that the operator
pod, the Ceph daemons and the toolbox are coming up.

Once everything is up and running,
you should be able to open a shell in the toolbox container and
run ``ceph status``.

If your mon services start but the rest don't, it could be that they're
unable to form a quorum due to a Kubernetes networking issue: check that
containers in your Kubernetes cluster can ping containers on other nodes.

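A quick way to test cross-node pod connectivity (the pod name and IP here
are placeholders; substitute real values from
``kubectl -n rook-ceph get pods -o wide``) is to exec into a pod on one node
and ping a pod IP on another node::

    kubectl -n rook-ceph exec -it <pod-on-node-a> -- ping <ip-of-pod-on-node-b>
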
Cheat sheet
===========

Open a shell in your toolbox container::

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- bash

Inspect the Rook operator container's logs::

    kubectl -n rook-ceph logs -l app=rook-ceph-operator

Inspect the ceph-mgr container's logs::

    kubectl -n rook-ceph logs -l app=rook-ceph-mgr