This avoids any need for the script to be present on the remote host.
We introduce a config option to indicate where the script should be
read from, since the location varies between a vstart environment (source
dir) and a real install (/usr/sbin).
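A minimal sketch of that lookup, assuming a hypothetical option name (`ceph_daemon_path`) and a plain dict of module config; the real option name and accessor may differ:

```python
# Sketch: resolve where the ceph-daemon script should be read from,
# via a config option with a packaged-install fallback. The option
# name 'ceph_daemon_path' is illustrative, not the module's actual one.
import os

DEFAULT_SCRIPT_PATH = '/usr/sbin/ceph-daemon'

def resolve_script_path(config):
    """Return the local path the script is read from before being
    shipped to the remote host."""
    path = config.get('ceph_daemon_path') or DEFAULT_SCRIPT_PATH
    if not os.path.isabs(path):
        raise ValueError('script path must be absolute: %s' % path)
    return path
```

In a vstart environment the option would point into the source dir; unset, it falls back to the install location.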
Signed-off-by: Sage Weil <sage@redhat.com>
Allow someone to run this script by prepending injected_{args,stdin}
definitions to the top and then piping the whole thing to a python3 binary.
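A rough model of the injection scheme, run locally here for illustration (in practice the payload is piped to python3 on the remote host); the header format is an assumption based on the injected_{args,stdin} names above:

```python
# Sketch: prepend injected_argv / injected_stdin definitions to the
# script text, then pipe the whole payload to a python3 binary.
import subprocess

def build_payload(script_text, argv, stdin_data=None):
    header = 'injected_argv = %r\n' % (argv,)
    if stdin_data is not None:
        header += 'injected_stdin = %r\n' % (stdin_data,)
    return header + script_text

# Usage: a trivial script that consumes its injected args.
script = 'print(injected_argv[0])'
out = subprocess.run(['python3', '-'],
                     input=build_payload(script, ['ls']).encode(),
                     capture_output=True).stdout.decode()
```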
Signed-off-by: Sage Weil <sage@redhat.com>
Caveats:
- this doesn't clean out /etc/ceph/*, since we don't know which is the
  last daemon to go, or whether the user wants to keep the config around
  for using the ceph CLI on this host
- leaves behind /var/lib/ceph/bootstrap-* keys, even after all daemons
have been converted.
Signed-off-by: Sage Weil <sage@redhat.com>
Three basic steps:
1- ceph-volume lvm prepare
2- ceph-volume lvm list
3- for each osd, ceph-daemon deploy (which calls c-v activate inside the
new container)
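The three steps above can be modeled as the command lines involved; the flags shown are a simplified stand-in for what ceph-daemon actually passes, not the exact invocation:

```python
# Sketch of the three-step flow: prepare the device, list the
# resulting OSDs, then deploy a container per OSD (which runs
# ceph-volume activate inside it).
def osd_deploy_commands(device, fsid):
    prepare = ['ceph-volume', 'lvm', 'prepare', '--data', device]
    list_osds = ['ceph-volume', 'lvm', 'list', '--format', 'json']

    # for each (osd_id, osd_fsid) reported by the list step:
    def deploy(osd_id, osd_fsid):
        return ['ceph-daemon', 'deploy', '--fsid', fsid,
                '--name', 'osd.%s' % osd_id,
                '--osd-fsid', osd_fsid]

    return prepare, list_osds, deploy
```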
Signed-off-by: Sage Weil <sage@redhat.com>
Don't assume it is the hostname (with OSDs, it's not!).
Also, just pass arbitrary args down, instead of special-casing the
network option.
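A sketch of the generalization, with illustrative function and flag names (not the module's actual API): extra arguments are appended verbatim instead of being translated from one special-cased option.

```python
# Sketch: forward arbitrary extra args down to the deploy call
# rather than special-casing a single option like the network.
def deploy_daemon(name, extra_args=None):
    cmd = ['ceph-daemon', 'deploy', '--name', name]
    cmd += extra_args or []   # e.g. a network option, verbatim
    return cmd
```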
Signed-off-by: Sage Weil <sage@redhat.com>
E.g., 'ceph config get osd debug_osd' returns the config value that
would apply to a generic OSD (from either the osd or global section of
the config).
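A minimal model of the lookup order this enables; the real mon config code handles more levels (host masks, crush location, etc.) than this two-section fallback:

```python
# Sketch: for a generic 'who' (e.g. 'osd'), return the value from
# that section if set, else fall back to the global section.
def config_get(who, key, config_db):
    for section in (who, 'global'):
        if key in config_db.get(section, {}):
            return config_db[section][key]
    return None
```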
Signed-off-by: Sage Weil <sage@redhat.com>
- Use a single instance of the config and identity files for the whole
module. There's no need to create these for *every* connection--it just
pollutes /tmp.
- Drop the SSHConnection wrapper, since the temp files are tied to the
daemon lifecycle now.
- Prefix the tmp files so I can tell what is going on.
- Always connect to root@host, to avoid remoto's localhost detection
feature. This ensures we have a consistent connection model and user.
(The daemon might be running as user ceph and try to connect to localhost,
but end up running the command as the wrong user and/or inside the
ceph-mgr container.)
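A sketch of the module-lifetime temp files and the connection target; the prefix strings are illustrative and the remoto connection setup itself is omitted:

```python
# Sketch: one set of prefixed temp files for the whole module
# lifetime (not per connection), and an explicit root@host target.
import tempfile

def make_identity_files(ssh_config_text, key_text):
    conf = tempfile.NamedTemporaryFile(prefix='ceph-mgr-ssh-conf-')
    conf.write(ssh_config_text.encode())
    conf.flush()
    key = tempfile.NamedTemporaryFile(prefix='ceph-mgr-ssh-identity-')
    key.write(key_text.encode())
    key.flush()
    return conf, key

def connection_target(host):
    # always root@host, so remoto never takes its localhost shortcut
    return 'root@' + host
```

The files live as long as the daemon does and are cleaned up when the objects are garbage-collected, rather than being recreated (and leaked) per connection.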
Signed-off-by: Sage Weil <sage@redhat.com>
This is sufficient to deploy an OSD that is based on ceph-volume lvm.
YMMV if it's not an lvm-based OSD.
Run the OSD container privileged so we can open the raw block device.
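The privileged run can be sketched as the container arguments involved; the flags here are a simplified stand-in for what ceph-daemon generates, not the exact invocation:

```python
# Sketch: run the OSD container privileged, with the host's /dev
# mapped in, so the raw block device can be opened from inside.
def osd_container_args(image, osd_id):
    return ['podman', 'run', '--rm', '--privileged',
            '-v', '/dev:/dev',
            '--name', 'ceph-osd-%s' % osd_id,
            image]
```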
Signed-off-by: Sage Weil <sage@redhat.com>
This lets you start up a 'generic' container of a particular class,
without a data mount, but with the appropriate other mounts and privilege
levels.
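A sketch of class-based mounts with no data-directory bind; the mount sets and labels are illustrative, not the exact ones ceph-daemon uses:

```python
# Sketch: choose mounts by daemon class. A 'generic' container of a
# class gets the shared mounts for that class, but no per-daemon
# data directory bind mount.
def generic_mounts(fsid, daemon_type):
    mounts = {
        '/var/log/ceph/%s' % fsid: '/var/log/ceph:z',
        '/etc/ceph/ceph.conf': '/etc/ceph/ceph.conf:z',
    }
    if daemon_type == 'osd':
        # OSD-class containers also need the host's devices
        mounts['/dev'] = '/dev'
    return mounts
```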
Signed-off-by: Sage Weil <sage@redhat.com>
When activating a bluestore OSD inside a container, we want to be able
to make the osd dir metadata persistent.
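A sketch of the bind mount that makes this possible, assuming the standard /var/lib/ceph/&lt;fsid&gt;/osd.&lt;id&gt; host layout; what activate writes to the osd dir inside the container then lands on the host:

```python
# Sketch: bind the host's osd data dir over the container's osd dir
# during 'ceph-volume lvm activate', so the metadata it writes there
# survives the container.
def osd_data_bind(fsid, osd_id):
    host_dir = '/var/lib/ceph/%s/osd.%s' % (fsid, osd_id)
    return ['-v', '%s:/var/lib/ceph/osd/ceph-%s' % (host_dir, osd_id)]
```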
Signed-off-by: Sage Weil <sage@redhat.com>