Mirror of https://github.com/ceph/ceph (synced 2025-03-11 02:39:05 +00:00)
Merge PR #40734 into master
* refs/pull/40734/head:
  mgr/cephadm: make prometheus scrape ingress haproxy
  doc/cephadm: remove big warning about stability
  doc/cepham/compatibility: rgw-ha -> ingress; note possibility of breaking changes
  mgr/cephadm: ingress: add optional virtual_interface_networks
  doc/cephadm/rgw: clean up example spec
  mgr/cephadm/services/ingress: less verbose about prepare_create
  doc/cephadm/rgw: add note about which ethernet interface is used
  cephadm: make keepalived unit fiddle sysctl settings
  mgr/orchestrator: report external endpoints from 'orch ls'
  mgr/orchestrator: drop - when no ports
  doc/cephadm/rgw: update docs for ingress service
  mgr/cephadm: use per_host_daemon feature in scheduler
  mgr/cephadm/schedule: add per_host_daemon_type support
  mgr/cephadm: HA_RGW -> Ingress
  mgr/cephadm: include daemon_type in DaemonPlacement
  mgr/cephadm: update list-networks to report interface names too
  mgr/orchestrator: streamline 'orch ps' PORTS formatting
  mgr/cephadm/schedule: handle multiple ports per daemon
  mgr/cephadm/utils: resolve_ip(): prefer IPv4

Reviewed-by: Sebastian Wagner <swagner@suse.com>
commit 8cc5dc75d7
@@ -38,11 +38,11 @@ changes in the near future:

- RGW

Cephadm support for the following features is still under development:
Cephadm support for the following features is still under development and may see breaking
changes in future releases:

- RGW-HA
- Ingress
- Cephadm exporter daemon
- cephfs-mirror

In case you encounter issues, see also :ref:`cephadm-pause`.
@@ -20,11 +20,6 @@ either via the Ceph command-line interface (CLI) or via the dashboard (GUI).

``cephadm`` is new in Ceph release v15.2.0 (Octopus) and does not support older
versions of Ceph.

.. note::

   Cephadm is new. Please read about :ref:`cephadm-stability` before
   using cephadm to deploy a production system.

.. toctree::
    :maxdepth: 1
@@ -88,62 +88,40 @@ specification. See :ref:`multisite` for more information of setting up multisit

High availability service for RGW
=================================

This service allows the user to create a high avalilability RGW service
providing a minimun set of configuration options.
The *ingress* service allows you to create a high availability endpoint
for RGW with a minumum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

The orchestrator will deploy and configure automatically several HAProxy and
Keepalived containers to assure the continuity of the RGW service while the
Ceph cluster will have at least 1 RGW daemon running.

The next image explains graphically how this service works:
If SSL is used, then SSL must be configured and terminated by the ingress service
and not RGW itself.

.. image:: ../images/HAProxy_for_RGW.svg

There are N hosts where the HA RGW service is deployed. This means that we have
an HAProxy and a keeplived daemon running in each of this hosts.
Keepalived is used to provide a "virtual IP" binded to the hosts. All RGW
clients use this "virtual IP" to connect with the RGW Service.
There are N hosts where the ingress service is deployed. Each host
has a haproxy daemon and a keepalived daemon. A virtual IP is
automatically configured on only one of these hosts at a time.

Each keeplived daemon is checking each few seconds what is the status of the
HAProxy daemon running in the same host. Also it is aware that the "master" keepalived
daemon will be running without problems.
Each keepalived daemon checks every few seconds whether the haproxy
daemon on the same host is responding. Keepalived will also check
that the master keepalived daemon is running without problems. If the
"master" keepalived daemon or the active haproxy is not responding,
one of the remaining keepalived daemons running in backup mode will be
elected as master, and the virtual IP will be moved to that node.

If the "master" keepalived daemon or the Active HAproxy is not responding, one
of the keeplived daemons running in backup mode will be elected as master, and
the "virtual ip" will be moved to that node.

The active HAProxy also acts like a load balancer, distributing all RGW requests
The active haproxy acts like a load balancer, distributing all RGW requests
between all the RGW daemons available.

**Prerequisites:**

* At least two RGW daemons running in the Ceph cluster
* Operating system prerequisites:
  In order for the Keepalived service to forward network packets properly to the
  real servers, each router node must have IP forwarding turned on in the kernel.
  So it will be needed to set this system option::

    net.ipv4.ip_forward = 1

  Load balancing in HAProxy and Keepalived at the same time also requires the
  ability to bind to an IP address that are nonlocal, meaning that it is not
  assigned to a device on the local system. This allows a running load balancer
  instance to bind to an IP that is not local for failover.
  So it will be needed to set this system option::

    net.ipv4.ip_nonlocal_bind = 1

  Be sure to set properly these two options in the file ``/etc/sysctl.conf`` in
  order to persist this values even if the hosts are restarted.
  These configuration changes must be applied in all the hosts where the HAProxy for
  RGW service is going to be deployed.

* An existing RGW service, without SSL. (If you want SSL service, the certificate
  should be configured on the ingress service, not the RGW service.)

**Deploy of the high availability service for RGW**

Use the command::

    ceph orch apply -i <service_spec_file>
    ceph orch apply -i <ingress_spec_file>

**Service specification file:**
@@ -151,97 +129,82 @@ It is a yaml format file with the following properties:

.. code-block:: yaml

    service_type: ha-rgw
    service_id: haproxy_for_rgw
    service_type: ingress
    service_id: rgw.something             # adjust to match your existing RGW service
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      virtual_ip_interface: <string> # ex: eth0
      virtual_ip_address: <string>/<string> # ex: 192.168.20.1/24
      frontend_port: <integer> # ex: 8080
      ha_proxy_port: <integer> # ex: 1967
      ha_proxy_stats_enabled: <boolean> # ex: true
      ha_proxy_stats_user: <string> # ex: admin
      ha_proxy_stats_password: <string> # ex: true
      ha_proxy_enable_prometheus_exporter: <boolean> # ex: true
      ha_proxy_monitor_uri: <string> # ex: /haproxy_health
      keepalived_password: <string> # ex: admin
      ha_proxy_frontend_ssl_certificate: <optional string> ex:
        [
          "-----BEGIN CERTIFICATE-----",
          "MIIDZTCCAk2gAwIBAgIUClb9dnseOsgJWAfhPQvrZw2MP2kwDQYJKoZIhvcNAQEL",
          ....
          "-----END CERTIFICATE-----",
          "-----BEGIN PRIVATE KEY-----",
          ....
          "sCHaZTUevxb4h6dCEk1XdPr2O2GdjV0uQ++9bKahAy357ELT3zPE8yYqw7aUCyBO",
          "aW5DSCo8DgfNOgycVL/rqcrc",
          "-----END PRIVATE KEY-----"
        ]
      ha_proxy_frontend_ssl_port: <optional integer> # ex: 8090
      ha_proxy_ssl_dh_param: <optional integer> # ex: 1024
      ha_proxy_ssl_ciphers: <optional string> # ex: ECDH+AESGCM:!MD5
      ha_proxy_ssl_options: <optional string> # ex: no-sslv3
      haproxy_container_image: <optional string> # ex: haproxy:2.4-dev3-alpine
      keepalived_container_image: <optional string> # ex: arcts/keepalived:1.2.2
      backend_service: rgw.something      # adjust to match your existing RGW service
      virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
      frontend_port: <integer>            # ex: 8080
      monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
      virtual_interface_networks: [ ... ] # optional: list of CIDR networks
      ssl_cert: |                         # optional: SSL certificate and key
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        -----BEGIN PRIVATE KEY-----
        ...
        -----END PRIVATE KEY-----
where the properties of this service specification are:

* ``service_type``
    Mandatory and set to "ha-rgw"
    Mandatory and set to "ingress"
* ``service_id``
    The name of the service.
    The name of the service. We suggest naming this after the service you are
    controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
    The hosts where it is desired to run the HA daemons. An HAProxy and a
    Keepalived containers will be deployed in these hosts.
    The RGW daemons can run in other different hosts or not.
* ``virtual_ip_interface``
    The physical network interface where the virtual ip will be binded
* ``virtual_ip_address``
    The virtual IP ( and network ) where the HA RGW service will be available.
    All your RGW clients must point to this IP in order to use the HA RGW
    service .
    The hosts where it is desired to run the HA daemons. An haproxy and a
    keepalived container will be deployed on these hosts. These hosts do not need
    to match the nodes where RGW is deployed.
* ``virtual_ip``
    The virtual IP (and network) in CIDR format where the ingress service will be available.
* ``virtual_interface_networks``
    A list of networks to identify which ethernet interface to use for the virtual IP.
* ``frontend_port``
    The port used to access the HA RGW service
* ``ha_proxy_port``
    The port used by HAProxy containers
* ``ha_proxy_stats_enabled``
    If it is desired to enable the statistics URL in HAProxy daemons
* ``ha_proxy_stats_user``
    User needed to access the HAProxy statistics URL
* ``ha_proxy_stats_password``
    The password needed to access the HAProxy statistics URL
* ``ha_proxy_enable_prometheus_exporter``
    If it is desired to enable the Promethes exporter in HAProxy. This will
    allow to consume RGW Service metrics from Grafana.
* ``ha_proxy_monitor_uri``:
    To set the API endpoint where the health of HAProxy daemon is provided
* ``keepalived_password``:
    The password needed to access keepalived daemons
* ``ha_proxy_frontend_ssl_certificate``:
    SSl certificate. You must paste the content of your .pem file
* ``ha_proxy_frontend_ssl_port``:
    The https port used by HAProxy containers
* ``ha_proxy_ssl_dh_param``:
    Value used for the `tune.ssl.default-dh-param` setting in the HAProxy
    config file
* ``ha_proxy_ssl_ciphers``:
    Value used for the `ssl-default-bind-ciphers` setting in HAProxy config
    file.
* ``ha_proxy_ssl_options``:
    Value used for the `ssl-default-bind-options` setting in HAProxy config
    file.
* ``haproxy_container_image``:
    HAProxy image location used to pull the image
* ``keepalived_container_image``:
    Keepalived image location used to pull the image
    The port used to access the ingress service.
* ``ssl_cert``:
    SSL certificate, if SSL is to be enabled. This must contain the both the certificate and
    private key blocks in .pem format.
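For orientation, a quick sanity check of such a spec file before running ``ceph orch apply`` might look like the following sketch (the required-field set is taken from the property list above; this is not an official validator)::

    import yaml

    REQUIRED = {'backend_service', 'virtual_ip', 'frontend_port', 'monitor_port'}

    with open('ingress.yaml') as f:          # hypothetical spec file
        spec = yaml.safe_load(f)

    missing = REQUIRED - set(spec.get('spec', {}))
    assert spec.get('service_type') == 'ingress', 'service_type must be "ingress"'
    assert not missing, f'missing spec fields: {missing}'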
**Useful hints for the RGW Service:**
**Selecting ethernet interfaces for the virtual IP:**

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
across hosts (and/or reboots). Instead, cephadm will select
interfaces based on other existing IP addresses that are already
configured.

Normally, the virtual IP will be configured on the first network
interface that has an existing IP in the same subnet. For example, if
the virtual IP is 192.168.0.80/24 and eth2 has the static IP
192.168.0.40/24, cephadm will use eth2.

In some cases, the virtual IP may not belong to the same subnet as an existing static
IP. In such cases, you can provide a list of subnets to match against existing IPs,
and cephadm will put the virtual IP on the first network interface to match. For example,
if the virtual IP is 192.168.0.80/24 and we want it on the same interface as the machine's
static IP in 10.10.0.0/16, you can use a spec like::

    service_type: ingress
    service_id: rgw.something
    spec:
      virtual_ip: 192.168.0.80/24
      virtual_interface_networks:
        - 10.10.0.0/16
      ...

A consequence of this strategy is that you cannot currently configure the virtual IP
on an interface that has no existing IP address. In this situation, we suggest
configuring a "dummy" IP address is an unroutable network on the correct interface
and reference that dummy network in the networks list (see above).
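The matching rule described above boils down to a subnet comparison; a stand-alone sketch of the idea (interface names and addresses here are made up)::

    import ipaddress

    virtual_ip = ipaddress.ip_interface('192.168.0.80/24')
    existing = {'eth0': '10.10.3.7/16', 'eth2': '192.168.0.40/24'}  # per-host state

    # pick the first interface whose existing address lives in the same subnet
    chosen = next((iface for iface, addr in existing.items()
                   if ipaddress.ip_interface(addr).network == virtual_ip.network),
                  None)
    print(chosen)  # -> eth2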
**Useful hints for ingress:**

* Good to have at least 3 RGW daemons
* Use at least 3 hosts for the HAProxy for RGW service
* In each host an HAProxy and a Keepalived daemon will be deployed. These
  daemons can be managed as systemd services
* Use at least 3 hosts for the ingress
@@ -810,6 +810,14 @@ class Keepalived(object):
        ]
        return envs

    @staticmethod
    def get_prestart():
        return (
            '# keepalived needs IP forwarding and non-local bind\n'
            'sysctl net.ipv4.ip_forward=1\n'
            'sysctl net.ipv4.ip_nonlocal_bind=1\n'
        )

    def extract_uid_gid_keepalived(self):
        # better directory for this?
        return extract_uid_gid(self.ctx, file_path='/var/lib')
@@ -2712,6 +2720,8 @@ def deploy_daemon_units(
            ceph_iscsi = CephIscsi.init(ctx, fsid, daemon_id)
            tcmu_container = ceph_iscsi.get_tcmu_runner_container()
            _write_container_cmd_to_bash(ctx, f, tcmu_container, 'iscsi tcmu-runnter container', background=True)
        elif daemon_type == Keepalived.daemon_type:
            f.write(Keepalived.get_prestart())

        _write_container_cmd_to_bash(ctx, f, c, '%s.%s' % (daemon_type, str(daemon_id)))
@@ -3428,7 +3438,10 @@ def prepare_mon_addresses(
    if not ctx.skip_mon_network:
        # make sure IP is configured locally, and then figure out the
        # CIDR network
        for net, ips in list_networks(ctx).items():
        for net, ifaces in list_networks(ctx).items():
            ips: List[str] = []
            for iface, ls in ifaces.items():
                ips.extend(ls)
            if ipaddress.ip_address(unwrap_ipv6(base_ip)) in \
                    [ipaddress.ip_address(ip) for ip in ips]:
                mon_network = net
@@ -4533,7 +4546,7 @@ def command_logs(ctx):


def list_networks(ctx):
    # type: (CephadmContext) -> Dict[str,List[str]]
    # type: (CephadmContext) -> Dict[str,Dict[str,List[str]]]

    # sadly, 18.04's iproute2 4.15.0-2ubun doesn't support the -j flag,
    # so we'll need to use a regex to parse 'ip' command output.
@@ -4556,17 +4569,20 @@ def _list_ipv4_networks(ctx: CephadmContext):


def _parse_ipv4_route(out):
    r = {}  # type: Dict[str,List[str]]
    p = re.compile(r'^(\S+) (.*)scope link (.*)src (\S+)')
    r = {}  # type: Dict[str,Dict[str,List[str]]]
    p = re.compile(r'^(\S+) dev (\S+) (.*)scope link (.*)src (\S+)')
    for line in out.splitlines():
        m = p.findall(line)
        if not m:
            continue
        net = m[0][0]
        ip = m[0][3]
        iface = m[0][1]
        ip = m[0][4]
        if net not in r:
            r[net] = []
        r[net].append(ip)
            r[net] = {}
        if iface not in r[net]:
            r[net][iface] = []
        r[net][iface].append(ip)
    return r
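To illustrate the updated parser: the new pattern also captures the interface (``dev <iface>``) and the result is keyed by network and then interface. A minimal sketch against one sample route line::

    import re

    p = re.compile(r'^(\S+) dev (\S+) (.*)scope link (.*)src (\S+)')
    sample = '192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown'

    r = {}  # {network: {interface: [ips]}}
    for m in p.findall(sample):
        net, iface, ip = m[0], m[1], m[4]
        r.setdefault(net, {}).setdefault(iface, []).append(ip)
    print(r)  # {'192.168.122.0/24': {'virbr0': ['192.168.122.1']}}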
@@ -4580,27 +4596,39 @@ def _list_ipv6_networks(ctx: CephadmContext):


def _parse_ipv6_route(routes, ips):
    r = {}  # type: Dict[str,List[str]]
    r = {}  # type: Dict[str,Dict[str,List[str]]]
    route_p = re.compile(r'^(\S+) dev (\S+) proto (\S+) metric (\S+) .*pref (\S+)$')
    ip_p = re.compile(r'^\s+inet6 (\S+)/(.*)scope (.*)$')
    iface_p = re.compile(r'^(\d+): (\S+): (.*)$')
    for line in routes.splitlines():
        m = route_p.findall(line)
        if not m or m[0][0].lower() == 'default':
            continue
        net = m[0][0]
        if '/' not in net:  # only consider networks with a mask
            continue
        iface = m[0][1]
        if net not in r:
            r[net] = []
            r[net] = {}
        if iface not in r[net]:
            r[net][iface] = []

    iface = None
    for line in ips.splitlines():
        m = ip_p.findall(line)
        if not m:
            m = iface_p.findall(line)
            if m:
                # drop @... suffix, if present
                iface = m[0][1].split('@')[0]
            continue
        ip = m[0][0]
        # find the network it belongs to
        net = [n for n in r.keys()
               if ipaddress.ip_address(ip) in ipaddress.ip_network(n)]
        if net:
            r[net[0]].append(ip)
            assert(iface)
            r[net[0]][iface].append(ip)

    return r
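The practical effect is that ``list_networks()`` now reports, per host, a mapping of network to interface to addresses; a sketch of that shape and the kind of lookup the orchestrator performs on it (values are made up)::

    from typing import Dict, List, Optional

    networks: Dict[str, Dict[str, List[str]]] = {
        '192.168.0.0/24': {'eth2': ['192.168.0.40']},
        '10.10.0.0/16': {'eth1': ['10.10.3.7']},
    }

    def first_ip_in_subnet(subnet: str) -> Optional[str]:
        # flatten the per-interface lists and pick the lowest address
        ips = [ip for addrs in networks.get(subnet, {}).values() for ip in addrs]
        return sorted(ips)[0] if ips else None

    print(first_ip_in_subnet('10.10.0.0/16'))  # -> 10.10.3.7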
@@ -191,11 +191,11 @@ default via 192.168.178.1 dev enxd89ef3f34260 proto dhcp metric 100
195.135.221.12 via 192.168.178.1 dev enxd89ef3f34260 proto static metric 100
        """,
        {
            '10.4.0.1': ['10.4.0.2'],
            '172.17.0.0/16': ['172.17.0.1'],
            '192.168.39.0/24': ['192.168.39.1'],
            '192.168.122.0/24': ['192.168.122.1'],
            '192.168.178.0/24': ['192.168.178.28']
            '10.4.0.1': {'tun0': ['10.4.0.2']},
            '172.17.0.0/16': {'docker0': ['172.17.0.1']},
            '192.168.39.0/24': {'virbr1': ['192.168.39.1']},
            '192.168.122.0/24': {'virbr0': ['192.168.122.1']},
            '192.168.178.0/24': {'enxd89ef3f34260': ['192.168.178.28']}
        }
    ), (
        """
@@ -214,10 +214,10 @@ default via 10.3.64.1 dev eno1 proto static metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
        """,
        {
            '10.3.64.0/24': ['10.3.64.23', '10.3.64.27'],
            '10.88.0.0/16': ['10.88.0.1'],
            '172.21.3.1': ['172.21.3.2'],
            '192.168.122.0/24': ['192.168.122.1']}
            '10.3.64.0/24': {'eno1': ['10.3.64.23', '10.3.64.27']},
            '10.88.0.0/16': {'cni-podman0': ['10.88.0.1']},
            '172.21.3.1': {'tun0': ['172.21.3.2']},
            '192.168.122.0/24': {'virbr0': ['192.168.122.1']}}
    ),
])
def test_parse_ipv4_route(self, test_input, expected):
@@ -227,61 +227,144 @@ default via 10.3.64.1 dev eno1 proto static metric 100
    (
        """
::1 dev lo proto kernel metric 256 pref medium
fdbc:7574:21fe:9200::/64 dev wlp2s0 proto ra metric 600 pref medium
fdd8:591e:4969:6363::/64 dev wlp2s0 proto ra metric 600 pref medium
fde4:8dba:82e1::/64 dev eth1 proto kernel metric 256 expires 1844sec pref medium
fe80::/64 dev eno1 proto kernel metric 100 pref medium
fe80::/64 dev br-3d443496454c proto kernel metric 256 linkdown pref medium
fe80::/64 dev tun0 proto kernel metric 256 pref medium
fe80::/64 dev wlp2s0 proto kernel metric 600 pref medium
default dev tun0 proto static metric 50 pref medium
default via fe80::2480:28ec:5097:3fe2 dev wlp2s0 proto ra metric 20600 pref medium
fe80::/64 dev br-4355f5dbb528 proto kernel metric 256 pref medium
fe80::/64 dev docker0 proto kernel metric 256 linkdown pref medium
fe80::/64 dev cni-podman0 proto kernel metric 256 linkdown pref medium
fe80::/64 dev veth88ba1e8 proto kernel metric 256 pref medium
fe80::/64 dev vethb6e5fc7 proto kernel metric 256 pref medium
fe80::/64 dev vethaddb245 proto kernel metric 256 pref medium
fe80::/64 dev vethbd14d6b proto kernel metric 256 pref medium
fe80::/64 dev veth13e8fd2 proto kernel metric 256 pref medium
fe80::/64 dev veth1d3aa9e proto kernel metric 256 pref medium
fe80::/64 dev vethe485ca9 proto kernel metric 256 pref medium
        """,
        """
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fe80::225:90ff:fee5:26e8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
6: br-3d443496454c: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 state DOWN
    inet6 fe80::42:23ff:fe9d:ee4/64 scope link
       valid_lft forever preferred_lft forever
7: br-4355f5dbb528: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:6eff:fe35:41fe/64 scope link
       valid_lft forever preferred_lft forever
8: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 state DOWN
    inet6 fe80::42:faff:fee6:40a0/64 scope link
       valid_lft forever preferred_lft forever
11: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 state UNKNOWN qlen 100
    inet6 fe80::98a6:733e:dafd:350/64 scope link stable-privacy
       valid_lft forever preferred_lft forever
28: cni-podman0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 state DOWN qlen 1000
    inet6 fe80::3449:cbff:fe89:b87e/64 scope link
       valid_lft forever preferred_lft forever
31: vethaddb245@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::90f7:3eff:feed:a6bb/64 scope link
       valid_lft forever preferred_lft forever
33: veth88ba1e8@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::d:f5ff:fe73:8c82/64 scope link
       valid_lft forever preferred_lft forever
35: vethbd14d6b@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::b44f:8ff:fe6f:813d/64 scope link
       valid_lft forever preferred_lft forever
37: vethb6e5fc7@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::4869:c6ff:feaa:8afe/64 scope link
       valid_lft forever preferred_lft forever
39: veth13e8fd2@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::78f4:71ff:fefe:eb40/64 scope link
       valid_lft forever preferred_lft forever
41: veth1d3aa9e@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::24bd:88ff:fe28:5b18/64 scope link
       valid_lft forever preferred_lft forever
43: vethe485ca9@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::6425:87ff:fe42:b9f0/64 scope link
       valid_lft forever preferred_lft forever
        """,
        {
            "fe80::/64": {
                "eno1": [
                    "fe80::225:90ff:fee5:26e8"
                ],
                "br-3d443496454c": [
                    "fe80::42:23ff:fe9d:ee4"
                ],
                "tun0": [
                    "fe80::98a6:733e:dafd:350"
                ],
                "br-4355f5dbb528": [
                    "fe80::42:6eff:fe35:41fe"
                ],
                "docker0": [
                    "fe80::42:faff:fee6:40a0"
                ],
                "cni-podman0": [
                    "fe80::3449:cbff:fe89:b87e"
                ],
                "veth88ba1e8": [
                    "fe80::d:f5ff:fe73:8c82"
                ],
                "vethb6e5fc7": [
                    "fe80::4869:c6ff:feaa:8afe"
                ],
                "vethaddb245": [
                    "fe80::90f7:3eff:feed:a6bb"
                ],
                "vethbd14d6b": [
                    "fe80::b44f:8ff:fe6f:813d"
                ],
                "veth13e8fd2": [
                    "fe80::78f4:71ff:fefe:eb40"
                ],
                "veth1d3aa9e": [
                    "fe80::24bd:88ff:fe28:5b18"
                ],
                "vethe485ca9": [
                    "fe80::6425:87ff:fe42:b9f0"
                ]
            }
        }
    ),
    (
        """
::1 dev lo proto kernel metric 256 pref medium
2001:1458:301:eb::100:1a dev ens20f0 proto kernel metric 100 pref medium
2001:1458:301:eb::/64 dev ens20f0 proto ra metric 100 pref medium
fd01:1458:304:5e::/64 dev ens20f0 proto ra metric 100 pref medium
fe80::/64 dev ens20f0 proto kernel metric 100 pref medium
default proto ra metric 100
        nexthop via fe80::46ec:ce00:b8a0:d3c8 dev ens20f0 weight 1
        nexthop via fe80::46ec:ce00:b8a2:33c8 dev ens20f0 weight 1 pref medium
        """,
        """
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fdd8:591e:4969:6363:4c52:cafe:8dd4:dc4/64 scope global temporary dynamic
       valid_lft 86394sec preferred_lft 14394sec
    inet6 fdbc:7574:21fe:9200:4c52:cafe:8dd4:dc4/64 scope global temporary dynamic
       valid_lft 6745sec preferred_lft 3145sec
    inet6 fdd8:591e:4969:6363:103a:abcd:af1f:57f3/64 scope global temporary deprecated dynamic
       valid_lft 86394sec preferred_lft 0sec
    inet6 fdbc:7574:21fe:9200:103a:abcd:af1f:57f3/64 scope global temporary deprecated dynamic
       valid_lft 6745sec preferred_lft 0sec
    inet6 fdd8:591e:4969:6363:a128:1234:2bdd:1b6f/64 scope global temporary deprecated dynamic
       valid_lft 86394sec preferred_lft 0sec
    inet6 fdbc:7574:21fe:9200:a128:1234:2bdd:1b6f/64 scope global temporary deprecated dynamic
       valid_lft 6745sec preferred_lft 0sec
    inet6 fdd8:591e:4969:6363:d581:4321:380b:3905/64 scope global temporary deprecated dynamic
       valid_lft 86394sec preferred_lft 0sec
    inet6 fdbc:7574:21fe:9200:d581:4321:380b:3905/64 scope global temporary deprecated dynamic
       valid_lft 6745sec preferred_lft 0sec
    inet6 fe80::1111:2222:3333:4444/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fde4:8dba:82e1:0:ec4a:e402:e9df:b357/64 scope global temporary dynamic
       valid_lft 1074sec preferred_lft 1074sec
    inet6 fde4:8dba:82e1:0:5054:ff:fe72:61af/64 scope global dynamic mngtmpaddr
       valid_lft 1074sec preferred_lft 1074sec
12: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 state UNKNOWN qlen 100
    inet6 fe80::cafe:cafe:cafe:cafe/64 scope link stable-privacy
2: ens20f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:1458:301:eb::100:1a/128 scope global dynamic noprefixroute
       valid_lft 590879sec preferred_lft 590879sec
    inet6 fe80::2e60:cff:fef8:da41/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
        """,
        {
            "::1": ["::1"],
            "fdbc:7574:21fe:9200::/64": ["fdbc:7574:21fe:9200:4c52:cafe:8dd4:dc4",
                                         "fdbc:7574:21fe:9200:103a:abcd:af1f:57f3",
                                         "fdbc:7574:21fe:9200:a128:1234:2bdd:1b6f",
                                         "fdbc:7574:21fe:9200:d581:4321:380b:3905"],
            "fdd8:591e:4969:6363::/64": ["fdd8:591e:4969:6363:4c52:cafe:8dd4:dc4",
                                         "fdd8:591e:4969:6363:103a:abcd:af1f:57f3",
                                         "fdd8:591e:4969:6363:a128:1234:2bdd:1b6f",
                                         "fdd8:591e:4969:6363:d581:4321:380b:3905"],
            "fde4:8dba:82e1::/64": ["fde4:8dba:82e1:0:ec4a:e402:e9df:b357",
                                    "fde4:8dba:82e1:0:5054:ff:fe72:61af"],
            "fe80::/64": ["fe80::1111:2222:3333:4444",
                          "fe80::cafe:cafe:cafe:cafe"]
            '2001:1458:301:eb::/64': {
                'ens20f0': [
                    '2001:1458:301:eb::100:1a'
                ],
            },
            'fe80::/64': {
                'ens20f0': ['fe80::2e60:cff:fef8:da41'],
            },
            'fd01:1458:304:5e::/64': {
                'ens20f0': []
            },
        }
    )])
    ),
])
def test_parse_ipv6_route(self, test_routes, test_ips, expected):
    assert cd._parse_ipv6_route(test_routes, test_ips) == expected
@@ -263,7 +263,7 @@ class HostCache():
        self.last_facts_update = {}  # type: Dict[str, datetime.datetime]
        self.osdspec_previews = {}  # type: Dict[str, List[Dict[str, Any]]]
        self.osdspec_last_applied = {}  # type: Dict[str, Dict[str, datetime.datetime]]
        self.networks = {}  # type: Dict[str, Dict[str, List[str]]]
        self.networks = {}  # type: Dict[str, Dict[str, Dict[str, List[str]]]]
        self.last_device_update = {}  # type: Dict[str, datetime.datetime]
        self.last_device_change = {}  # type: Dict[str, datetime.datetime]
        self.daemon_refresh_queue = []  # type: List[str]
@@ -309,7 +309,7 @@ class HostCache():
                    orchestrator.DaemonDescription.from_json(d)
                for d in j.get('devices', []):
                    self.devices[host].append(inventory.Device.from_json(d))
                self.networks[host] = j.get('networks', {})
                self.networks[host] = j.get('networks_and_interfaces', {})
                self.osdspec_previews[host] = j.get('osdspec_previews', {})
                for name, ts in j.get('osdspec_last_applied', {}).items():
                    self.osdspec_last_applied[host][name] = str_to_datetime(ts)
@@ -358,8 +358,12 @@ class HostCache():
                return True
        return False

    def update_host_devices_networks(self, host, dls, nets):
        # type: (str, List[inventory.Device], Dict[str,List[str]]) -> None
    def update_host_devices_networks(
            self,
            host: str,
            dls: List[inventory.Device],
            nets: Dict[str, Dict[str, List[str]]]
    ) -> None:
        if (
                host not in self.devices
                or host not in self.last_device_change
@@ -438,7 +442,7 @@ class HostCache():
            for d in self.devices[host]:
                j['devices'].append(d.to_json())
        if host in self.networks:
            j['networks'] = self.networks[host]
            j['networks_and_interfaces'] = self.networks[host]
        if host in self.daemon_config_deps:
            for name, depi in self.daemon_config_deps[host].items():
                j['daemon_config_deps'][name] = {
@@ -25,7 +25,7 @@ from ceph.deployment import inventory
from ceph.deployment.drive_group import DriveGroupSpec
from ceph.deployment.service_spec import \
    NFSServiceSpec, ServiceSpec, PlacementSpec, assert_valid_host, \
    HostPlacementSpec
    HostPlacementSpec, IngressSpec
from ceph.utils import str_to_datetime, datetime_to_str, datetime_now
from cephadm.serve import CephadmServe
from cephadm.services.cephadmservice import CephadmDaemonDeploySpec
@@ -47,9 +47,9 @@ from . import utils
from .migrations import Migrations
from .services.cephadmservice import MonService, MgrService, MdsService, RgwService, \
    RbdMirrorService, CrashService, CephadmService, CephfsMirrorService
from .services.ingress import IngressService
from .services.container import CustomContainerService
from .services.iscsi import IscsiService
from .services.ha_rgw import HA_RGWService
from .services.nfs import NFSService
from .services.osd import OSDRemovalQueue, OSDService, OSD, NotFoundError
from .services.monitoring import GrafanaService, AlertmanagerService, PrometheusService, \
@@ -421,7 +421,7 @@ class CephadmOrchestrator(orchestrator.Orchestrator, MgrModule,
            OSDService, NFSService, MonService, MgrService, MdsService,
            RgwService, RbdMirrorService, GrafanaService, AlertmanagerService,
            PrometheusService, NodeExporterService, CrashService, IscsiService,
            HA_RGWService, CustomContainerService, CephadmExporter, CephfsMirrorService
            IngressService, CustomContainerService, CephadmExporter, CephfsMirrorService
        ]

        # https://github.com/python/mypy/issues/8993
@@ -1577,12 +1577,14 @@ class CephadmOrchestrator(orchestrator.Orchestrator, MgrModule,
                    events=self.events.get_for_service(spec.service_name()),
                    created=self.spec_store.spec_created[nm],
                    deleted=self.spec_store.spec_deleted.get(nm, None),
                    virtual_ip=spec.get_virtual_ip(),
                    ports=spec.get_port_start(),
                )
                if service_type == 'nfs':
                    spec = cast(NFSServiceSpec, spec)
                    sm[nm].rados_config_location = spec.rados_config_location()
                if spec.service_type == 'ha-rgw':
                    # ha-rgw has 2 daemons running per host
                if spec.service_type == 'ingress':
                    # ingress has 2 daemons running per host
                    sm[nm].size *= 2

        # factor daemons into status
@@ -1952,16 +1954,38 @@ class CephadmOrchestrator(orchestrator.Orchestrator, MgrModule,
            previews_for_specs.update({host: osd_reports})
        return previews_for_specs

    def _calc_daemon_deps(self, daemon_type: str, daemon_id: str) -> List[str]:
        need = {
            'prometheus': ['mgr', 'alertmanager', 'node-exporter'],
            'grafana': ['prometheus'],
            'alertmanager': ['mgr', 'alertmanager'],
        }
    def _calc_daemon_deps(self,
                          spec: Optional[ServiceSpec],
                          daemon_type: str,
                          daemon_id: str) -> List[str]:
        deps = []
        for dep_type in need.get(daemon_type, []):
            for dd in self.cache.get_daemons_by_service(dep_type):
                deps.append(dd.name())
        if daemon_type == 'haproxy':
            # because cephadm creates new daemon instances whenever
            # port or ip changes, identifying daemons by name is
            # sufficient to detect changes.
            if not spec:
                return []
            ingress_spec = cast(IngressSpec, spec)
            assert ingress_spec.backend_service
            daemons = self.cache.get_daemons_by_service(ingress_spec.backend_service)
            deps = [d.name() for d in daemons]
        elif daemon_type == 'keepalived':
            # because cephadm creates new daemon instances whenever
            # port or ip changes, identifying daemons by name is
            # sufficient to detect changes.
            if not spec:
                return []
            daemons = self.cache.get_daemons_by_service(spec.service_name())
            deps = [d.name() for d in daemons if d.daemon_type == 'haproxy']
        else:
            need = {
                'prometheus': ['mgr', 'alertmanager', 'node-exporter', 'ingress'],
                'grafana': ['prometheus'],
                'alertmanager': ['mgr', 'alertmanager'],
            }
            for dep_type in need.get(daemon_type, []):
                for dd in self.cache.get_daemons_by_type(dep_type):
                    deps.append(dd.name())
        return sorted(deps)

    @forall_hosts
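A simplified, self-contained view of the new dependency rules in ``_calc_daemon_deps`` (illustrative only; the real code works on DaemonDescription objects rather than plain strings)::

    from typing import Dict, List

    def calc_deps(daemon_type: str,
                  backend_rgw_daemons: List[str],
                  ingress_daemons: Dict[str, str]) -> List[str]:
        if daemon_type == 'haproxy':
            # haproxy is redeployed when the set of backend RGW daemons changes
            return sorted(backend_rgw_daemons)
        if daemon_type == 'keepalived':
            # keepalived follows the haproxy daemons of its own ingress service
            return sorted(n for n, t in ingress_daemons.items() if t == 'haproxy')
        return []

    print(calc_deps('haproxy', ['rgw.foo.a', 'rgw.foo.b'], {}))
    print(calc_deps('keepalived', [],
                    {'haproxy.foo.x': 'haproxy', 'keepalived.foo.y': 'keepalived'}))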
@@ -2017,11 +2041,10 @@ class CephadmOrchestrator(orchestrator.Orchestrator, MgrModule,
                self.cephadm_services[service_type].config(spec, daemon_id)
                did_config = True

            port = spec.get_port_start()
            daemon_spec = self.cephadm_services[service_type].make_daemon_spec(
                host, daemon_id, network, spec,
                # NOTE: this does not consider port conflicts!
                ports=[port] if port else None)
                ports=spec.get_port_start())
            self.log.debug('Placing %s.%s on host %s' % (
                daemon_type, daemon_id, host))
            args.append(daemon_spec)
@@ -2106,7 +2129,7 @@ class CephadmOrchestrator(orchestrator.Orchestrator, MgrModule,
            'mgr': PlacementSpec(count=2),
            'mds': PlacementSpec(count=2),
            'rgw': PlacementSpec(count=2),
            'ha-rgw': PlacementSpec(count=2),
            'ingress': PlacementSpec(count=2),
            'iscsi': PlacementSpec(count=1),
            'rbd-mirror': PlacementSpec(count=2),
            'cephfs-mirror': PlacementSpec(count=1),
@@ -2169,7 +2192,7 @@ class CephadmOrchestrator(orchestrator.Orchestrator, MgrModule,
        return self._apply(spec)

    @handle_orch_error
    def apply_ha_rgw(self, spec: ServiceSpec) -> str:
    def apply_ingress(self, spec: ServiceSpec) -> str:
        return self._apply(spec)

    @handle_orch_error
@@ -12,42 +12,46 @@ T = TypeVar('T')


class DaemonPlacement(NamedTuple):
    daemon_type: str
    hostname: str
    network: str = ''   # for mons only
    name: str = ''
    ip: Optional[str] = None
    port: Optional[int] = None
    ports: List[int] = []

    def __str__(self) -> str:
        res = self.hostname
        res = self.daemon_type + ':' + self.hostname
        other = []
        if self.network:
            other.append(f'network={self.network}')
        if self.name:
            other.append(f'name={self.name}')
        if self.port:
            other.append(f'{self.ip or "*"}:{self.port}')
        if self.ports:
            other.append(f'{self.ip or "*"}:{self.ports[0] if len(self.ports) == 1 else ",".join(map(str, self.ports))}')
        if other:
            res += '(' + ' '.join(other) + ')'
        return res

    def renumber_port(self, n: int) -> 'DaemonPlacement':
    def renumber_ports(self, n: int) -> 'DaemonPlacement':
        return DaemonPlacement(
            self.daemon_type,
            self.hostname,
            self.network,
            self.name,
            self.ip,
            (self.port + n) if self.port is not None else None
            [p + n for p in self.ports],
        )

    def matches_daemon(self, dd: DaemonDescription) -> bool:
        if self.daemon_type != dd.daemon_type:
            return False
        if self.hostname != dd.hostname:
            return False
        # fixme: how to match against network?
        if self.name and self.name != dd.daemon_id:
            return False
        if self.port:
            if [self.port] != dd.ports:
        if self.ports:
            if self.ports != dd.ports:
                return False
        if self.ip != dd.ip:
            return False
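The new ``__str__`` prefixes the daemon type and can render several ports; a stand-alone sketch of that formatting (``Placement`` here is an illustrative stand-in, not the real DaemonPlacement)::

    from typing import List, NamedTuple, Optional

    class Placement(NamedTuple):
        daemon_type: str
        hostname: str
        ip: Optional[str] = None
        ports: List[int] = []

        def __str__(self) -> str:
            res = f'{self.daemon_type}:{self.hostname}'
            if self.ports:
                p = self.ports[0] if len(self.ports) == 1 else ','.join(map(str, self.ports))
                res += f'({self.ip or "*"}:{p})'
            return res

    print(Placement('haproxy', 'host1', ports=[443, 1967]))  # haproxy:host1(*:443,1967)
    print(Placement('keepalived', 'host2'))                  # keepalived:host2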
@@ -60,19 +64,23 @@ class HostAssignment(object):
                 spec,  # type: ServiceSpec
                 hosts: List[orchestrator.HostSpec],
                 daemons: List[orchestrator.DaemonDescription],
                 networks: Dict[str, Dict[str, List[str]]] = {},
                 networks: Dict[str, Dict[str, Dict[str, List[str]]]] = {},
                 filter_new_host=None,  # type: Optional[Callable[[str],bool]]
                 allow_colo: bool = False,
                 primary_daemon_type: Optional[str] = None,
                 per_host_daemon_type: Optional[str] = None,
                 ):
        assert spec
        self.spec = spec  # type: ServiceSpec
        self.primary_daemon_type = primary_daemon_type or spec.service_type
        self.hosts: List[orchestrator.HostSpec] = hosts
        self.filter_new_host = filter_new_host
        self.service_name = spec.service_name()
        self.daemons = daemons
        self.networks = networks
        self.allow_colo = allow_colo
        self.port_start = spec.get_port_start()
        self.per_host_daemon_type = per_host_daemon_type
        self.ports_start = spec.get_port_start()

    def hosts_by_label(self, label: str) -> List[orchestrator.HostSpec]:
        return [h for h in self.hosts if label in h.labels]
@@ -116,6 +124,35 @@ class HostAssignment(object):
                f'Cannot place {self.spec.one_line_str()}: No matching '
                f'hosts for label {self.spec.placement.label}')

    def place_per_host_daemons(
            self,
            slots: List[DaemonPlacement],
            to_add: List[DaemonPlacement],
            to_remove: List[orchestrator.DaemonDescription],
    ) -> Tuple[List[DaemonPlacement], List[DaemonPlacement], List[orchestrator.DaemonDescription]]:
        if self.per_host_daemon_type:
            host_slots = [
                DaemonPlacement(daemon_type=self.per_host_daemon_type,
                                hostname=hostname)
                for hostname in set([s.hostname for s in slots])
            ]
            existing = [
                d for d in self.daemons if d.daemon_type == self.per_host_daemon_type
            ]
            slots += host_slots
            for dd in existing:
                found = False
                for p in host_slots:
                    if p.matches_daemon(dd):
                        host_slots.remove(p)
                        found = True
                        break
                if not found:
                    to_remove.append(dd)
            to_add += host_slots

        return slots, to_add, to_remove

    def place(self):
        # type: () -> Tuple[List[DaemonPlacement], List[DaemonPlacement], List[orchestrator.DaemonDescription]]
        """
@@ -137,7 +174,7 @@ class HostAssignment(object):
        def expand_candidates(ls: List[DaemonPlacement], num: int) -> List[DaemonPlacement]:
            r = []
            for offset in range(num):
                r.extend([dp.renumber_port(offset) for dp in ls])
                r.extend([dp.renumber_ports(offset) for dp in ls])
            return r

        # consider enough slots to fulfill target count-per-host or count
@@ -151,11 +188,11 @@ class HostAssignment(object):
            per_host = 1 + ((count - 1) // len(candidates))
            candidates = expand_candidates(candidates, per_host)

        # consider active daemons first
        # consider active (primary) daemons first
        daemons = [
            d for d in self.daemons if d.is_active
            d for d in self.daemons if d.is_active and d.daemon_type == self.primary_daemon_type
        ] + [
            d for d in self.daemons if not d.is_active
            d for d in self.daemons if not d.is_active and d.daemon_type == self.primary_daemon_type
        ]

        # sort candidates into existing/used slots that already have a
@@ -185,7 +222,7 @@ class HostAssignment(object):
        # If we don't have <count> the list of candidates is definitive.
        if count is None:
            logger.debug('Provided hosts: %s' % candidates)
            return candidates, others, to_remove
            return self.place_per_host_daemons(candidates, others, to_remove)

        # The number of new slots that need to be selected in order to fulfill count
        need = count - len(existing)
@@ -194,17 +231,19 @@ class HostAssignment(object):
        if need <= 0:
            to_remove.extend(existing[count:])
            del existing_slots[count:]
            return existing_slots, [], to_remove
            return self.place_per_host_daemons(existing_slots, [], to_remove)

        # ask the scheduler to select additional slots
        to_add = others[:need]
        logger.debug('Combine hosts with existing daemons %s + new hosts %s' % (
            existing, to_add))
        return existing_slots + to_add, to_add, to_remove
        return self.place_per_host_daemons(existing_slots + to_add, to_add, to_remove)

    def find_ip_on_host(self, hostname: str, subnets: List[str]) -> Optional[str]:
        for subnet in subnets:
            ips = self.networks.get(hostname, {}).get(subnet, [])
            ips: List[str] = []
            for iface, ips in self.networks.get(hostname, {}).get(subnet, {}).items():
                ips.extend(ips)
            if ips:
                return sorted(ips)[0]
        return None
@@ -212,18 +251,21 @@ class HostAssignment(object):
    def get_candidates(self) -> List[DaemonPlacement]:
        if self.spec.placement.hosts:
            ls = [
                DaemonPlacement(hostname=h.hostname, network=h.network, name=h.name,
                                port=self.port_start)
                DaemonPlacement(daemon_type=self.primary_daemon_type,
                                hostname=h.hostname, network=h.network, name=h.name,
                                ports=self.ports_start)
                for h in self.spec.placement.hosts
            ]
        elif self.spec.placement.label:
            ls = [
                DaemonPlacement(hostname=x.hostname, port=self.port_start)
                DaemonPlacement(daemon_type=self.primary_daemon_type,
                                hostname=x.hostname, ports=self.ports_start)
                for x in self.hosts_by_label(self.spec.placement.label)
            ]
        elif self.spec.placement.host_pattern:
            ls = [
                DaemonPlacement(hostname=x, port=self.port_start)
                DaemonPlacement(daemon_type=self.primary_daemon_type,
                                hostname=x, ports=self.ports_start)
                for x in self.spec.placement.filter_matching_hostspecs(self.hosts)
            ]
        elif (
@@ -231,7 +273,8 @@ class HostAssignment(object):
                or self.spec.placement.count_per_host is not None
        ):
            ls = [
                DaemonPlacement(hostname=x.hostname, port=self.port_start)
                DaemonPlacement(daemon_type=self.primary_daemon_type,
                                hostname=x.hostname, ports=self.ports_start)
                for x in self.hosts
            ]
        else:
@@ -245,8 +288,9 @@ class HostAssignment(object):
            for p in orig:
                ip = self.find_ip_on_host(p.hostname, self.spec.networks)
                if ip:
                    ls.append(DaemonPlacement(hostname=p.hostname, network=p.network,
                                              name=p.name, port=p.port, ip=ip))
                    ls.append(DaemonPlacement(daemon_type=self.primary_daemon_type,
                                              hostname=p.hostname, network=p.network,
                                              name=p.name, ports=p.ports, ip=ip))
                else:
                    logger.debug(
                        f'Skipping {p.hostname} with no IP in network(s) {self.spec.networks}'
@@ -254,10 +298,14 @@ class HostAssignment(object):

        if self.filter_new_host:
            old = ls.copy()
            ls = [h for h in ls if self.filter_new_host(h.hostname)]
            for h in list(set(old) - set(ls)):
                logger.info(
                    f"Filtered out host {h.hostname}: could not verify host allowed virtual ips")
            ls = []
            for h in old:
                if self.filter_new_host(h.hostname):
                    ls.append(h)
                else:
                    logger.info(
                        f"Filtered out host {h.hostname}: could not verify host allowed virtual ips")
            if len(old) > len(ls):
                logger.debug('Filtered %s down to %s' % (old, ls))

        # shuffle for pseudo random selection
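The heart of ``per_host_daemon_type`` is simply "one extra daemon per distinct host chosen for the primary type"; a toy sketch with (daemon_type, hostname) tuples standing in for DaemonPlacement objects::

    from typing import List, Tuple

    def add_per_host_daemons(slots: List[Tuple[str, str]],
                             per_host_type: str) -> List[Tuple[str, str]]:
        # collect the hosts that received a primary-daemon slot (e.g. haproxy)
        hosts = {hostname for _, hostname in slots}
        return slots + [(per_host_type, h) for h in sorted(hosts)]

    haproxy_slots = [('haproxy', 'host1'), ('haproxy', 'host2')]
    print(add_per_host_daemons(haproxy_slots, 'keepalived'))
    # -> [('haproxy', 'host1'), ('haproxy', 'host2'),
    #     ('keepalived', 'host1'), ('keepalived', 'host2')]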
@@ -14,14 +14,14 @@ except ImportError:

from ceph.deployment import inventory
from ceph.deployment.drive_group import DriveGroupSpec
from ceph.deployment.service_spec import ServiceSpec, HA_RGWSpec, CustomContainerSpec
from ceph.deployment.service_spec import ServiceSpec, IngressSpec, CustomContainerSpec
from ceph.utils import str_to_datetime, datetime_now

import orchestrator
from orchestrator import OrchestratorError, set_exception_subject, OrchestratorEvent, \
    DaemonDescriptionStatus, daemon_type_to_service, service_to_daemon_types
    DaemonDescriptionStatus, daemon_type_to_service
from cephadm.services.cephadmservice import CephadmDaemonDeploySpec
from cephadm.schedule import HostAssignment, DaemonPlacement
from cephadm.schedule import HostAssignment
from cephadm.utils import forall_hosts, cephadmNoImage, is_repo_digest, \
    CephadmNoImage, CEPH_TYPES, ContainerInspectInfo
from mgr_module import MonCommandFailed
@@ -559,9 +559,14 @@ class CephadmServe:
            hosts=self.mgr._hosts_with_daemon_inventory(),
            daemons=daemons,
            networks=self.mgr.cache.networks,
            filter_new_host=matches_network if service_type == 'mon'
            else virtual_ip_allowed if service_type == 'ha-rgw' else None,
            filter_new_host=(
                matches_network if service_type == 'mon'
                else virtual_ip_allowed if service_type == 'ingress'
                else None
            ),
            allow_colo=svc.allow_colo(),
            primary_daemon_type=svc.primary_daemon_type(),
            per_host_daemon_type=svc.per_host_daemon_type(),
        )

        try:
@@ -587,68 +592,68 @@ class CephadmServe:
        self.log.debug('Hosts that will receive new daemons: %s' % slots_to_add)
        self.log.debug('Daemons that will be removed: %s' % daemons_to_remove)

        if service_type == 'ha-rgw':
            spec = self.update_ha_rgw_definitive_hosts(spec, all_slots, slots_to_add)

        for slot in slots_to_add:
            for daemon_type in service_to_daemon_types(service_type):
                # first remove daemon on conflicting port?
                if slot.port:
                    for d in daemons_to_remove:
                        if d.hostname != slot.hostname or d.ports != [slot.port]:
                            continue
                        if d.ip and slot.ip and d.ip != slot.ip:
                            continue
                        self.log.info(
                            f'Removing {d.name()} before deploying to {slot} to avoid a port conflict'
                        )
                        # NOTE: we don't check ok-to-stop here to avoid starvation if
                        # there is only 1 gateway.
                        self._remove_daemon(d.name(), d.hostname)
                        daemons_to_remove.remove(d)
                        break
            # first remove daemon on conflicting port?
            if slot.ports:
                for d in daemons_to_remove:
                    if d.hostname != slot.hostname:
                        continue
                    if not (set(d.ports or []) & set(slot.ports)):
                        continue
                    if d.ip and slot.ip and d.ip != slot.ip:
                        continue
                    self.log.info(
                        f'Removing {d.name()} before deploying to {slot} to avoid a port conflict'
                    )
                    # NOTE: we don't check ok-to-stop here to avoid starvation if
                    # there is only 1 gateway.
                    self._remove_daemon(d.name(), d.hostname)
                    daemons_to_remove.remove(d)
                    break

                # deploy new daemon
                daemon_id = self.mgr.get_unique_name(
                    daemon_type,
                    slot.hostname,
                    daemons,
                    prefix=spec.service_id,
                    forcename=slot.name)
            # deploy new daemon
            daemon_id = self.mgr.get_unique_name(
                slot.daemon_type,
                slot.hostname,
                daemons,
                prefix=spec.service_id,
                forcename=slot.name)

                if not did_config:
                    svc.config(spec, daemon_id)
                    did_config = True
            if not did_config:
                svc.config(spec, daemon_id)
                did_config = True

                daemon_spec = svc.make_daemon_spec(
                    slot.hostname, daemon_id, slot.network, spec, daemon_type=daemon_type,
                    ports=[slot.port] if slot.port else None,
                    ip=slot.ip,
                )
                self.log.debug('Placing %s.%s on host %s' % (
                    daemon_type, daemon_id, slot.hostname))
            daemon_spec = svc.make_daemon_spec(
                slot.hostname, daemon_id, slot.network, spec,
                daemon_type=slot.daemon_type,
                ports=slot.ports,
                ip=slot.ip,
            )
            self.log.debug('Placing %s.%s on host %s' % (
                slot.daemon_type, daemon_id, slot.hostname))

                try:
                    daemon_spec = svc.prepare_create(daemon_spec)
                    self._create_daemon(daemon_spec)
                    r = True
                except (RuntimeError, OrchestratorError) as e:
                    self.mgr.events.for_service(spec, 'ERROR',
                                                f"Failed while placing {daemon_type}.{daemon_id} "
                                                f"on {slot.hostname}: {e}")
                    # only return "no change" if no one else has already succeeded.
                    # later successes will also change to True
                    if r is None:
                        r = False
                    continue
            try:
                daemon_spec = svc.prepare_create(daemon_spec)
                self._create_daemon(daemon_spec)
                r = True
            except (RuntimeError, OrchestratorError) as e:
                self.mgr.events.for_service(
                    spec, 'ERROR',
                    f"Failed while placing {slot.daemon_type}.{daemon_id} "
                    f"on {slot.hostname}: {e}")
                # only return "no change" if no one else has already succeeded.
                # later successes will also change to True
                if r is None:
                    r = False
                continue

                # add to daemon list so next name(s) will also be unique
                sd = orchestrator.DaemonDescription(
                    hostname=slot.hostname,
                    daemon_type=daemon_type,
                    daemon_id=daemon_id,
                )
                daemons.append(sd)
            # add to daemon list so next name(s) will also be unique
            sd = orchestrator.DaemonDescription(
                hostname=slot.hostname,
                daemon_type=slot.daemon_type,
                daemon_id=daemon_id,
            )
            daemons.append(sd)

        # remove any?
        def _ok_to_stop(remove_daemons: List[orchestrator.DaemonDescription]) -> bool:
@@ -700,7 +705,7 @@ class CephadmServe:
                else:
                    dd.is_active = False

                deps = self.mgr._calc_daemon_deps(dd.daemon_type, dd.daemon_id)
                deps = self.mgr._calc_daemon_deps(spec, dd.daemon_type, dd.daemon_id)
                last_deps, last_config = self.mgr.cache.get_daemon_last_config_deps(
                    dd.hostname, dd.name())
                if last_deps is None:
@@ -786,29 +791,6 @@ class CephadmServe:
            # FIXME: we assume the first digest here is the best
            self.mgr.set_container_image(entity, image_info.repo_digests[0])

    # ha-rgw needs definitve host list to create keepalived config files
    # if definitive host list has changed, all ha-rgw daemons must get new
    # config, including those that are already on the correct host and not
    # going to be deployed
    def update_ha_rgw_definitive_hosts(
            self,
            spec: ServiceSpec,
            hosts: List[DaemonPlacement],
            add_hosts: List[DaemonPlacement]
    ) -> HA_RGWSpec:
        spec = cast(HA_RGWSpec, spec)
        hostnames = [p.hostname for p in hosts]
        add_hostnames = [p.hostname for p in add_hosts]
        if not (set(hostnames) == set(spec.definitive_host_list)):
            spec.definitive_host_list = hostnames
            ha_rgw_daemons = self.mgr.cache.get_daemons_by_service(spec.service_name())
            for daemon in ha_rgw_daemons:
                if daemon.hostname in hostnames and daemon.hostname not in add_hostnames:
                    assert daemon.hostname is not None
                    self.mgr.cache.schedule_daemon_action(
                        daemon.hostname, daemon.name(), 'reconfig')
        return spec

    def _create_daemon(self,
                       daemon_spec: CephadmDaemonDeploySpec,
                       reconfig: bool = False,
@@ -839,12 +821,12 @@ class CephadmServe:
                self._deploy_cephadm_binary(daemon_spec.host)

            if daemon_spec.daemon_type == 'haproxy':
                haspec = cast(HA_RGWSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
                haspec = cast(IngressSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
                if haspec.haproxy_container_image:
                    image = haspec.haproxy_container_image

            if daemon_spec.daemon_type == 'keepalived':
                haspec = cast(HA_RGWSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
                haspec = cast(IngressSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
                if haspec.keepalived_container_image:
                    image = haspec.keepalived_container_image
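The port-conflict test above moves from comparing a single port for equality to checking for any overlap between the port sets; in isolation::

    def conflicts(existing_ports, slot_ports) -> bool:
        # any shared port on the same host means the old daemon must go first
        return bool(set(existing_ports or []) & set(slot_ports or []))

    print(conflicts([8080, 1967], [1967]))  # True: one port overlaps
    print(conflicts(None, [443]))           # False: nothing to conflict with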
@@ -1,7 +1,7 @@
import errno
import json
import re
import logging
import re
from abc import ABCMeta, abstractmethod
from typing import TYPE_CHECKING, List, Callable, TypeVar, \
    Optional, Dict, Any, Tuple, NewType, cast
@@ -118,6 +118,12 @@ class CephadmService(metaclass=ABCMeta):
    def allow_colo(self) -> bool:
        return False

    def per_host_daemon_type(self) -> Optional[str]:
        return None

    def primary_daemon_type(self) -> str:
        return self.TYPE

    def make_daemon_spec(
            self, host: str,
            daemon_id: str,
@@ -394,7 +400,7 @@ class CephService(CephadmService):
        """
        # despite this mapping entity names to daemons, self.TYPE within
        # the CephService class refers to service types, not daemon types
        if self.TYPE in ['rgw', 'rbd-mirror', 'cephfs-mirror', 'nfs', "iscsi", 'ha-rgw']:
        if self.TYPE in ['rgw', 'rbd-mirror', 'cephfs-mirror', 'nfs', "iscsi", 'ingress']:
            return AuthEntity(f'client.{self.TYPE}.{daemon_id}')
        elif self.TYPE == 'crash':
            if host == "":
@@ -832,15 +838,15 @@ class RgwService(CephService):
                   force: bool = False,
                   known: Optional[List[str]] = None  # output argument
                   ) -> HandleCommandResult:
        # if load balancer (ha-rgw) is present block if only 1 daemon up otherwise ok
        # if load balancer (ingress) is present block if only 1 daemon up otherwise ok
        # if no load balancer, warn if > 1 daemon, block if only 1 daemon
        def ha_rgw_present() -> bool:
            running_ha_rgw_daemons = [
                daemon for daemon in self.mgr.cache.get_daemons_by_type('ha-rgw') if daemon.status == 1]
        def ingress_present() -> bool:
            running_ingress_daemons = [
                daemon for daemon in self.mgr.cache.get_daemons_by_type('ingress') if daemon.status == 1]
            running_haproxy_daemons = [
                daemon for daemon in running_ha_rgw_daemons if daemon.daemon_type == 'haproxy']
                daemon for daemon in running_ingress_daemons if daemon.daemon_type == 'haproxy']
            running_keepalived_daemons = [
                daemon for daemon in running_ha_rgw_daemons if daemon.daemon_type == 'keepalived']
                daemon for daemon in running_ingress_daemons if daemon.daemon_type == 'keepalived']
            # check that there is at least one haproxy and keepalived daemon running
            if running_haproxy_daemons and running_keepalived_daemons:
                return True
@@ -853,7 +859,7 @@ class RgwService(CephService):

        # if reached here, there is > 1 rgw daemon.
        # Say okay if load balancer present or force flag set
        if ha_rgw_present() or force:
        if ingress_present() or force:
            return HandleCommandResult(0, warn_message, '')

        # if reached here, > 1 RGW daemon, no load balancer and no force flag.
@@ -1,132 +0,0 @@
import logging
from typing import List, cast, Tuple, Dict, Any

from ceph.deployment.service_spec import HA_RGWSpec

from .cephadmservice import CephadmDaemonDeploySpec, CephService
from ..utils import resolve_ip

logger = logging.getLogger(__name__)


class HA_RGWService(CephService):
    TYPE = 'ha-rgw'

    class rgw_server():
        def __init__(self, hostname: str, address: str, port: int):
            self.name = hostname
            self.ip = address
            self.port = port

    def prepare_create(self, daemon_spec: CephadmDaemonDeploySpec) -> CephadmDaemonDeploySpec:
        assert daemon_spec.daemon_type == 'haproxy' or daemon_spec.daemon_type == 'keepalived'
        if daemon_spec.daemon_type == 'haproxy':
            return self.haproxy_prepare_create(daemon_spec)
        else:
            return self.keepalived_prepare_create(daemon_spec)

    def generate_config(self, daemon_spec: CephadmDaemonDeploySpec) -> Tuple[Dict[str, Any], List[str]]:
        assert daemon_spec.daemon_type == 'haproxy' or daemon_spec.daemon_type == 'keepalived'

        if daemon_spec.daemon_type == 'haproxy':
            return self.haproxy_generate_config(daemon_spec)
        else:
            return self.keepalived_generate_config(daemon_spec)

    def haproxy_prepare_create(self, daemon_spec: CephadmDaemonDeploySpec) -> CephadmDaemonDeploySpec:
        assert daemon_spec.daemon_type == 'haproxy'

        daemon_id = daemon_spec.daemon_id
        host = daemon_spec.host
        spec = cast(HA_RGWSpec, self.mgr.spec_store[daemon_spec.service_name].spec)

        logger.info('Create daemon %s on host %s with spec %s' % (
            daemon_id, host, spec))

        daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)

        return daemon_spec

    def keepalived_prepare_create(self, daemon_spec: CephadmDaemonDeploySpec) -> CephadmDaemonDeploySpec:
        assert daemon_spec.daemon_type == 'keepalived'

        daemon_id = daemon_spec.daemon_id
        host = daemon_spec.host
        spec = cast(HA_RGWSpec, self.mgr.spec_store[daemon_spec.service_name].spec)

        logger.info('Create daemon %s on host %s with spec %s' % (
            daemon_id, host, spec))

        daemon_spec.final_config, daemon_spec.deps = self.keepalived_generate_config(daemon_spec)

        return daemon_spec

    def haproxy_generate_config(self, daemon_spec: CephadmDaemonDeploySpec) -> Tuple[Dict[str, Any], List[str]]:
        spec = cast(HA_RGWSpec, self.mgr.spec_store[daemon_spec.service_name].spec)

        rgw_daemons = self.mgr.cache.get_daemons_by_type('rgw')
        rgw_servers = []
        for daemon in rgw_daemons:
            assert daemon.hostname is not None
            rgw_servers.append(self.rgw_server(
                daemon.name(),
                resolve_ip(daemon.hostname),
                daemon.ports[0] if daemon.ports else 80
            ))

        # virtual ip address cannot have netmask attached when passed to haproxy config
        # since the port is added to the end and something like 123.123.123.10/24:8080 is invalid
        virtual_ip_address = spec.virtual_ip_address
        if "/" in str(spec.virtual_ip_address):
            just_ip = str(spec.virtual_ip_address).split('/')[0]
            virtual_ip_address = just_ip

        ha_context = {'spec': spec, 'rgw_servers': rgw_servers,
                      'virtual_ip_address': virtual_ip_address}

        haproxy_conf = self.mgr.template.render('services/haproxy/haproxy.cfg.j2', ha_context)
|
||||
|
||||
config_files = {
|
||||
'files': {
|
||||
"haproxy.cfg": haproxy_conf,
|
||||
}
|
||||
}
|
||||
if spec.ha_proxy_frontend_ssl_certificate:
|
||||
ssl_cert = spec.ha_proxy_frontend_ssl_certificate
|
||||
if isinstance(ssl_cert, list):
|
||||
ssl_cert = '\n'.join(ssl_cert)
|
||||
config_files['files']['haproxy.pem'] = ssl_cert
|
||||
|
||||
return config_files, []
|
||||
|
||||
def keepalived_generate_config(self, daemon_spec: CephadmDaemonDeploySpec) -> Tuple[Dict[str, Any], List[str]]:
|
||||
host = daemon_spec.host
|
||||
|
||||
spec = cast(HA_RGWSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
|
||||
|
||||
all_hosts = spec.definitive_host_list
|
||||
|
||||
# set state. first host in placement is master all others backups
|
||||
state = 'BACKUP'
|
||||
if all_hosts[0] == host:
|
||||
state = 'MASTER'
|
||||
|
||||
# remove host, daemon is being deployed on from all_hosts list for
|
||||
# other_ips in conf file and converter to ips
|
||||
all_hosts.remove(host)
|
||||
other_ips = [resolve_ip(h) for h in all_hosts]
|
||||
|
||||
ka_context = {'spec': spec, 'state': state,
|
||||
'other_ips': other_ips,
|
||||
'host_ip': resolve_ip(host)}
|
||||
|
||||
keepalived_conf = self.mgr.template.render(
|
||||
'services/keepalived/keepalived.conf.j2', ka_context)
|
||||
|
||||
config_file = {
|
||||
'files': {
|
||||
"keepalived.conf": keepalived_conf,
|
||||
}
|
||||
}
|
||||
|
||||
return config_file, []
|
src/pybind/mgr/cephadm/services/ingress.py (new file, 219 lines)
@ -0,0 +1,219 @@
|
||||
import ipaddress
|
||||
import logging
|
||||
import random
|
||||
import string
|
||||
from typing import List, Dict, Any, Tuple, cast, Optional
|
||||
|
||||
from ceph.deployment.service_spec import IngressSpec
|
||||
from cephadm.utils import resolve_ip
|
||||
|
||||
from cephadm.services.cephadmservice import CephadmDaemonDeploySpec, CephService
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class IngressService(CephService):
|
||||
TYPE = 'ingress'
|
||||
|
||||
def primary_daemon_type(self) -> str:
|
||||
return 'haproxy'
|
||||
|
||||
def per_host_daemon_type(self) -> Optional[str]:
|
||||
return 'keepalived'
|
||||
|
||||
def prepare_create(
|
||||
self,
|
||||
daemon_spec: CephadmDaemonDeploySpec,
|
||||
) -> CephadmDaemonDeploySpec:
|
||||
if daemon_spec.daemon_type == 'haproxy':
|
||||
return self.haproxy_prepare_create(daemon_spec)
|
||||
if daemon_spec.daemon_type == 'keepalived':
|
||||
return self.keepalived_prepare_create(daemon_spec)
|
||||
assert False, "unexpected daemon type"
|
||||
|
||||
def generate_config(
|
||||
self,
|
||||
daemon_spec: CephadmDaemonDeploySpec
|
||||
) -> Tuple[Dict[str, Any], List[str]]:
|
||||
if daemon_spec.daemon_type == 'haproxy':
|
||||
return self.haproxy_generate_config(daemon_spec)
|
||||
else:
|
||||
return self.keepalived_generate_config(daemon_spec)
|
||||
assert False, "unexpected daemon type"
|
||||
|
||||
def haproxy_prepare_create(
|
||||
self,
|
||||
daemon_spec: CephadmDaemonDeploySpec,
|
||||
) -> CephadmDaemonDeploySpec:
|
||||
assert daemon_spec.daemon_type == 'haproxy'
|
||||
|
||||
daemon_id = daemon_spec.daemon_id
|
||||
host = daemon_spec.host
|
||||
spec = cast(IngressSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
|
||||
|
||||
logger.debug('prepare_create haproxy.%s on host %s with spec %s' % (
|
||||
daemon_id, host, spec))
|
||||
|
||||
daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
|
||||
|
||||
return daemon_spec
|
||||
|
||||
def haproxy_generate_config(
|
||||
self,
|
||||
daemon_spec: CephadmDaemonDeploySpec,
|
||||
) -> Tuple[Dict[str, Any], List[str]]:
|
||||
spec = cast(IngressSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
|
||||
assert spec.backend_service
|
||||
daemons = self.mgr.cache.get_daemons_by_service(spec.backend_service)
|
||||
deps = [d.name() for d in daemons]
|
||||
|
||||
# generate password?
|
||||
pw_key = f'{spec.service_name()}/monitor_password'
|
||||
password = self.mgr.get_store(pw_key)
|
||||
if password is None:
|
||||
if not spec.monitor_password:
|
||||
password = ''.join(random.choice(string.ascii_lowercase) for _ in range(20))
|
||||
self.mgr.set_store(pw_key, password)
|
||||
else:
|
||||
if spec.monitor_password:
|
||||
self.mgr.set_store(pw_key, None)
|
||||
if spec.monitor_password:
|
||||
password = spec.monitor_password
|
||||
|
||||
haproxy_conf = self.mgr.template.render(
|
||||
'services/ingress/haproxy.cfg.j2',
|
||||
{
|
||||
'spec': spec,
|
||||
'servers': [
|
||||
{
|
||||
'name': d.name(),
|
||||
'ip': d.ip or resolve_ip(str(d.hostname)),
|
||||
'port': d.ports[0],
|
||||
} for d in daemons if d.ports
|
||||
],
|
||||
'user': spec.monitor_user or 'admin',
|
||||
'password': password,
|
||||
'ip': daemon_spec.ip or '*',
|
||||
'frontend_port': daemon_spec.ports[0] if daemon_spec.ports else spec.frontend_port,
|
||||
'monitor_port': daemon_spec.ports[1] if daemon_spec.ports else spec.monitor_port,
|
||||
}
|
||||
)
|
||||
config_files = {
|
||||
'files': {
|
||||
"haproxy.cfg": haproxy_conf,
|
||||
}
|
||||
}
|
||||
if spec.ssl_cert:
|
||||
ssl_cert = spec.ssl_cert
|
||||
if isinstance(ssl_cert, list):
|
||||
ssl_cert = '\n'.join(ssl_cert)
|
||||
config_files['files']['haproxy.pem'] = ssl_cert
|
||||
|
||||
return config_files, sorted(deps)
|
||||
|
||||
def keepalived_prepare_create(
|
||||
self,
|
||||
daemon_spec: CephadmDaemonDeploySpec,
|
||||
) -> CephadmDaemonDeploySpec:
|
||||
assert daemon_spec.daemon_type == 'keepalived'
|
||||
|
||||
daemon_id = daemon_spec.daemon_id
|
||||
host = daemon_spec.host
|
||||
spec = cast(IngressSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
|
||||
|
||||
logger.debug('prepare_create keepalived.%s on host %s with spec %s' % (
|
||||
daemon_id, host, spec))
|
||||
|
||||
daemon_spec.final_config, daemon_spec.deps = self.keepalived_generate_config(daemon_spec)
|
||||
|
||||
return daemon_spec
|
||||
|
||||
def keepalived_generate_config(
|
||||
self,
|
||||
daemon_spec: CephadmDaemonDeploySpec,
|
||||
) -> Tuple[Dict[str, Any], List[str]]:
|
||||
spec = cast(IngressSpec, self.mgr.spec_store[daemon_spec.service_name].spec)
|
||||
assert spec.backend_service
|
||||
|
||||
# generate password?
|
||||
pw_key = f'{spec.service_name()}/keepalived_password'
|
||||
password = self.mgr.get_store(pw_key)
|
||||
if password is None:
|
||||
if not spec.keepalived_password:
|
||||
password = ''.join(random.choice(string.ascii_lowercase) for _ in range(20))
|
||||
self.mgr.set_store(pw_key, password)
|
||||
else:
|
||||
if spec.keepalived_password:
|
||||
self.mgr.set_store(pw_key, None)
|
||||
if spec.keepalived_password:
|
||||
password = spec.keepalived_password
|
||||
|
||||
daemons = self.mgr.cache.get_daemons_by_service(spec.service_name())
|
||||
deps = sorted([d.name() for d in daemons if d.daemon_type == 'haproxy'])
|
||||
|
||||
host = daemon_spec.host
|
||||
hosts = sorted(list(set([str(d.hostname) for d in daemons])))
|
||||
|
||||
# interface
|
||||
bare_ip = str(spec.virtual_ip).split('/')[0]
|
||||
interface = None
|
||||
for subnet, ifaces in self.mgr.cache.networks.get(host, {}).items():
|
||||
if ifaces and ipaddress.ip_address(bare_ip) in ipaddress.ip_network(subnet):
|
||||
interface = list(ifaces.keys())[0]
|
||||
logger.info(
|
||||
f'{bare_ip} is in {subnet} on {host} interface {interface}'
|
||||
)
|
||||
break
|
||||
if not interface and spec.networks:
|
||||
# hmm, try spec.networks
|
||||
for subnet, ifaces in self.mgr.cache.networks.get(host, {}).items():
|
||||
if subnet in spec.networks:
|
||||
interface = list(ifaces.keys())[0]
|
||||
logger.info(
|
||||
f'{spec.virtual_ip} will be configured on {host} interface '
|
||||
f'{interface} (which has guiding subnet {subnet})'
|
||||
)
|
||||
break
|
||||
if not interface:
|
||||
interface = 'eth0'
|
||||
|
||||
# script to monitor health
|
||||
script = '/usr/bin/false'
|
||||
for d in daemons:
|
||||
if d.hostname == host:
|
||||
if d.daemon_type == 'haproxy':
|
||||
assert d.ports
|
||||
port = d.ports[1] # monitoring port
|
||||
script = f'/usr/bin/curl http://{d.ip or "localhost"}:{port}/health'
|
||||
assert script
|
||||
|
||||
# set state. first host in placement is master all others backups
|
||||
state = 'BACKUP'
|
||||
if hosts[0] == host:
|
||||
state = 'MASTER'
|
||||
|
||||
# remove host, daemon is being deployed on from hosts list for
|
||||
# other_ips in conf file and converter to ips
|
||||
hosts.remove(host)
|
||||
other_ips = [resolve_ip(h) for h in hosts]
|
||||
|
||||
keepalived_conf = self.mgr.template.render(
|
||||
'services/ingress/keepalived.conf.j2',
|
||||
{
|
||||
'spec': spec,
|
||||
'script': script,
|
||||
'password': password,
|
||||
'interface': interface,
|
||||
'state': state,
|
||||
'other_ips': other_ips,
|
||||
'host_ip': resolve_ip(host),
|
||||
}
|
||||
)
|
||||
|
||||
config_file = {
|
||||
'files': {
|
||||
"keepalived.conf": keepalived_conf,
|
||||
}
|
||||
}
|
||||
|
||||
return config_file, deps
|
@@ -8,6 +8,7 @@ from mgr_module import HandleCommandResult
 from orchestrator import DaemonDescription
 from ceph.deployment.service_spec import AlertManagerSpec
 from cephadm.services.cephadmservice import CephadmService, CephadmDaemonDeploySpec
+from cephadm.services.ingress import IngressSpec
 from mgr_util import verify_tls, ServerConfigException, create_self_signed_cert

 logger = logging.getLogger(__name__)
@@ -245,10 +246,25 @@ class PrometheusService(CephadmService):
                 addr = self.mgr.inventory.get_addr(dd.hostname)
                 alertmgr_targets.append("'{}:9093'".format(addr.split(':')[0]))

+        # scrape haproxies
+        haproxy_targets = []
+        for dd in self.mgr.cache.get_daemons_by_type('ingress'):
+            if dd.service_name() in self.mgr.spec_store:
+                spec = cast(IngressSpec, self.mgr.spec_store[dd.service_name()].spec)
+                assert dd.hostname is not None
+                deps.append(dd.name())
+                if dd.daemon_type == 'haproxy':
+                    addr = self.mgr.inventory.get_addr(dd.hostname)
+                    haproxy_targets.append({
+                        "url": f"'{addr.split(':')[0]}:{spec.monitor_port}'",
+                        "service": dd.service_name(),
+                    })
+
         # generate the prometheus configuration
         context = {
             'alertmgr_targets': alertmgr_targets,
             'mgr_scrape_list': mgr_scrape_list,
+            'haproxy_targets': haproxy_targets,
             'nodes': nodes,
         }
         r = {
@@ -0,0 +1,62 @@
+# {{ cephadm_managed }}
+global
+    log 127.0.0.1 local2
+    chroot /var/lib/haproxy
+    pidfile /var/lib/haproxy/haproxy.pid
+    maxconn 8000
+    daemon
+    stats socket /var/lib/haproxy/stats
+{% if spec.ssl_cert %}
+{% if spec.ssl_dh_param %}
+    tune.ssl.default-dh-param {{ spec.ssl_dh_param }}
+{% endif %}
+{% if spec.ssl_ciphers %}
+    ssl-default-bind-ciphers {{ spec.ssl_ciphers | join(':') }}
+{% endif %}
+{% if spec.ssl_options %}
+    ssl-default-bind-options {{ spec.ssl_options | join(' ') }}
+{% endif %}
+{% endif %}
+
+defaults
+    mode http
+    log global
+    option httplog
+    option dontlognull
+    option http-server-close
+    option forwardfor except 127.0.0.0/8
+    option redispatch
+    retries 3
+    timeout http-request 1s
+    timeout queue 20s
+    timeout connect 5s
+    timeout client 1s
+    timeout server 1s
+    timeout http-keep-alive 5s
+    timeout check 5s
+    maxconn 8000
+
+frontend stats
+    bind {{ ip }}:{{ monitor_port }}
+    stats enable
+    stats uri /stats
+    stats refresh 10s
+    stats auth {{ user }}:{{ password }}
+    http-request use-service prometheus-exporter if { path /metrics }
+    monitor-uri /health
+
+frontend frontend
+{% if spec.ssl_cert %}
+    bind {{ ip }}:{{ frontend_port }} ssl crt /var/lib/haproxy/haproxy.pem
+{% else %}
+    bind {{ ip }}:{{ frontend_port }}
+{% endif %}
+    default_backend backend
+
+backend backend
+    option forwardfor
+    balance static-rr
+    option httpchk HEAD / HTTP/1.0
+{% for server in servers %}
+    server {{ server.name }} {{ server.ip }}:{{ server.port }} check weight 100
+{% endfor %}
@@ -0,0 +1,32 @@
+# {{ cephadm_managed }}
+vrrp_script check_backend {
+    script "{{ script }}"
+    weight -20
+    interval 2
+    rise 2
+    fall 2
+}
+
+vrrp_instance VI_0 {
+  state {{ state }}
+  priority 100
+  interface {{ interface }}
+  virtual_router_id 51
+  advert_int 1
+  authentication {
+      auth_type PASS
+      auth_pass {{ password }}
+  }
+  unicast_src_ip {{ host_ip }}
+  unicast_peer {
+    {% for ip in other_ips %}
+    {{ ip }}
+    {% endfor %}
+  }
+  virtual_ipaddress {
+    {{ spec.virtual_ip }} dev {{ interface }}
+  }
+  track_script {
+      check_backend
+  }
+}
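For reference, a minimal sketch (not part of the patch) of rendering this template outside the mgr. The template path and all sample values here are assumptions; the context keys mirror what keepalived_generate_config() passes in.

from types import SimpleNamespace
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('src/pybind/mgr/cephadm/templates'))  # hypothetical checkout path
tmpl = env.get_template('services/ingress/keepalived.conf.j2')
print(tmpl.render(
    cephadm_managed='This file is generated by cephadm.',
    spec=SimpleNamespace(virtual_ip='10.0.0.20/8'),            # hypothetical VIP
    script='/usr/bin/curl http://localhost:8888/health',       # haproxy monitor check
    password='secret',
    interface='eth0',
    state='MASTER',
    other_ips=['10.0.0.2'],
    host_ip='10.0.0.1',
))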
@@ -20,6 +20,7 @@ scrape_configs:
     {% for mgr in mgr_scrape_list %}
       - '{{ mgr }}'
     {% endfor %}
+
 {% if nodes %}
   - job_name: 'node'
     static_configs:
@@ -29,3 +30,13 @@ scrape_configs:
         instance: '{{ node.hostname }}'
     {% endfor %}
 {% endif %}
+
+{% if haproxy_targets %}
+  - job_name: 'haproxy'
+    static_configs:
+    {% for haproxy in haproxy_targets %}
+      - targets: [{{ haproxy.url }}]
+        labels:
+          instance: '{{ haproxy.service }}'
+    {% endfor %}
+{% endif %}
@@ -148,7 +148,8 @@ class TestCephadm(object):
                     'service_id': 'r.z',
                     'service_name': 'rgw.r.z',
                     'service_type': 'rgw',
-                    'status': {'created': mock.ANY, 'running': 1, 'size': 1},
+                    'status': {'created': mock.ANY, 'running': 1, 'size': 1,
+                               'ports': [80]},
                 }
             ]
             for o in out:
@ -6,7 +6,7 @@ from typing import NamedTuple, List, Dict
|
||||
import pytest
|
||||
|
||||
from ceph.deployment.hostspec import HostSpec
|
||||
from ceph.deployment.service_spec import ServiceSpec, PlacementSpec, ServiceSpecValidationError
|
||||
from ceph.deployment.service_spec import ServiceSpec, PlacementSpec, ServiceSpecValidationError, IngressSpec
|
||||
|
||||
from cephadm.module import HostAssignment
|
||||
from cephadm.schedule import DaemonPlacement
|
||||
@ -187,21 +187,48 @@ test_explicit_scheduler_results = [
|
||||
]
|
||||
|
||||
|
||||
@pytest.mark.parametrize("dp,n,result",
|
||||
[ # noqa: E128
|
||||
(
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1', ports=[80]),
|
||||
0,
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1', ports=[80]),
|
||||
),
|
||||
(
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1', ports=[80]),
|
||||
2,
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1', ports=[82]),
|
||||
),
|
||||
(
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1', ports=[80, 90]),
|
||||
2,
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1', ports=[82, 92]),
|
||||
),
|
||||
])
|
||||
def test_daemon_placement_renumber(dp, n, result):
|
||||
assert dp.renumber_ports(n) == result
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
'dp,dd,result',
|
||||
[
|
||||
(
|
||||
DaemonPlacement(hostname='host1'),
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1'),
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
True
|
||||
),
|
||||
(
|
||||
DaemonPlacement(hostname='host1', name='a'),
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1', name='a'),
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
True
|
||||
),
|
||||
(
|
||||
DaemonPlacement(hostname='host1', name='a'),
|
||||
DaemonPlacement(daemon_type='mon', hostname='host1', name='a'),
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
False
|
||||
),
|
||||
(
|
||||
DaemonPlacement(daemon_type='mgr', hostname='host1', name='a'),
|
||||
DaemonDescription('mgr', 'b', 'host1'),
|
||||
False
|
||||
),
|
||||
@ -364,7 +391,7 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(hosts=['smithi060']),
|
||||
['smithi060'],
|
||||
[],
|
||||
['smithi060'], ['smithi060'], []
|
||||
['mgr:smithi060'], ['mgr:smithi060'], []
|
||||
),
|
||||
# all_hosts
|
||||
NodeAssignmentTest(
|
||||
@ -375,7 +402,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
DaemonDescription('mgr', 'b', 'host2'),
|
||||
],
|
||||
['host1', 'host2', 'host3'], ['host3'], []
|
||||
['mgr:host1', 'mgr:host2', 'mgr:host3'],
|
||||
['mgr:host3'],
|
||||
[]
|
||||
),
|
||||
# all_hosts + count_per_host
|
||||
NodeAssignmentTest(
|
||||
@ -386,8 +415,8 @@ class NodeAssignmentTest(NamedTuple):
|
||||
DaemonDescription('mds', 'a', 'host1'),
|
||||
DaemonDescription('mds', 'b', 'host2'),
|
||||
],
|
||||
['host1', 'host2', 'host3', 'host1', 'host2', 'host3'],
|
||||
['host3', 'host1', 'host2', 'host3'],
|
||||
['mds:host1', 'mds:host2', 'mds:host3', 'mds:host1', 'mds:host2', 'mds:host3'],
|
||||
['mds:host3', 'mds:host1', 'mds:host2', 'mds:host3'],
|
||||
[]
|
||||
),
|
||||
# count that is bigger than the amount of hosts. Truncate to len(hosts)
|
||||
@ -397,7 +426,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(count=4),
|
||||
'host1 host2 host3'.split(),
|
||||
[],
|
||||
['host1', 'host2', 'host3'], ['host1', 'host2', 'host3'], []
|
||||
['mgr:host1', 'mgr:host2', 'mgr:host3'],
|
||||
['mgr:host1', 'mgr:host2', 'mgr:host3'],
|
||||
[]
|
||||
),
|
||||
# count that is bigger than the amount of hosts; wrap around.
|
||||
NodeAssignmentTest(
|
||||
@ -405,8 +436,8 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(count=6),
|
||||
'host1 host2 host3'.split(),
|
||||
[],
|
||||
['host1', 'host2', 'host3', 'host1', 'host2', 'host3'],
|
||||
['host1', 'host2', 'host3', 'host1', 'host2', 'host3'],
|
||||
['mds:host1', 'mds:host2', 'mds:host3', 'mds:host1', 'mds:host2', 'mds:host3'],
|
||||
['mds:host1', 'mds:host2', 'mds:host3', 'mds:host1', 'mds:host2', 'mds:host3'],
|
||||
[]
|
||||
),
|
||||
# count + partial host list
|
||||
@ -418,7 +449,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
DaemonDescription('mgr', 'b', 'host2'),
|
||||
],
|
||||
['host3'], ['host3'], ['mgr.a', 'mgr.b']
|
||||
['mgr:host3'],
|
||||
['mgr:host3'],
|
||||
['mgr.a', 'mgr.b']
|
||||
),
|
||||
# count + partial host list (with colo)
|
||||
NodeAssignmentTest(
|
||||
@ -426,10 +459,12 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(count=3, hosts=['host3']),
|
||||
'host1 host2 host3'.split(),
|
||||
[
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
DaemonDescription('mgr', 'b', 'host2'),
|
||||
DaemonDescription('mds', 'a', 'host1'),
|
||||
DaemonDescription('mds', 'b', 'host2'),
|
||||
],
|
||||
['host3', 'host3', 'host3'], ['host3', 'host3', 'host3'], ['mgr.a', 'mgr.b']
|
||||
['mds:host3', 'mds:host3', 'mds:host3'],
|
||||
['mds:host3', 'mds:host3', 'mds:host3'],
|
||||
['mds.a', 'mds.b']
|
||||
),
|
||||
# count 1 + partial host list
|
||||
NodeAssignmentTest(
|
||||
@ -440,7 +475,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
DaemonDescription('mgr', 'b', 'host2'),
|
||||
],
|
||||
['host3'], ['host3'], ['mgr.a', 'mgr.b']
|
||||
['mgr:host3'],
|
||||
['mgr:host3'],
|
||||
['mgr.a', 'mgr.b']
|
||||
),
|
||||
# count + partial host list + existing
|
||||
NodeAssignmentTest(
|
||||
@ -450,7 +487,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
[
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
],
|
||||
['host3'], ['host3'], ['mgr.a']
|
||||
['mgr:host3'],
|
||||
['mgr:host3'],
|
||||
['mgr.a']
|
||||
),
|
||||
# count + partial host list + existing (deterministic)
|
||||
NodeAssignmentTest(
|
||||
@ -460,7 +499,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
[
|
||||
DaemonDescription('mgr', 'a', 'host1'),
|
||||
],
|
||||
['host1'], [], []
|
||||
['mgr:host1'],
|
||||
[],
|
||||
[]
|
||||
),
|
||||
# count + partial host list + existing (deterministic)
|
||||
NodeAssignmentTest(
|
||||
@ -470,7 +511,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
[
|
||||
DaemonDescription('mgr', 'a', 'host2'),
|
||||
],
|
||||
['host1'], ['host1'], ['mgr.a']
|
||||
['mgr:host1'],
|
||||
['mgr:host1'],
|
||||
['mgr.a']
|
||||
),
|
||||
# label only
|
||||
NodeAssignmentTest(
|
||||
@ -478,7 +521,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(label='foo'),
|
||||
'host1 host2 host3'.split(),
|
||||
[],
|
||||
['host1', 'host2', 'host3'], ['host1', 'host2', 'host3'], []
|
||||
['mgr:host1', 'mgr:host2', 'mgr:host3'],
|
||||
['mgr:host1', 'mgr:host2', 'mgr:host3'],
|
||||
[]
|
||||
),
|
||||
# label + count (truncate to host list)
|
||||
NodeAssignmentTest(
|
||||
@ -486,7 +531,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(count=4, label='foo'),
|
||||
'host1 host2 host3'.split(),
|
||||
[],
|
||||
['host1', 'host2', 'host3'], ['host1', 'host2', 'host3'], []
|
||||
['mgr:host1', 'mgr:host2', 'mgr:host3'],
|
||||
['mgr:host1', 'mgr:host2', 'mgr:host3'],
|
||||
[]
|
||||
),
|
||||
# label + count (with colo)
|
||||
NodeAssignmentTest(
|
||||
@ -494,8 +541,8 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(count=6, label='foo'),
|
||||
'host1 host2 host3'.split(),
|
||||
[],
|
||||
['host1', 'host2', 'host3', 'host1', 'host2', 'host3'],
|
||||
['host1', 'host2', 'host3', 'host1', 'host2', 'host3'],
|
||||
['mds:host1', 'mds:host2', 'mds:host3', 'mds:host1', 'mds:host2', 'mds:host3'],
|
||||
['mds:host1', 'mds:host2', 'mds:host3', 'mds:host1', 'mds:host2', 'mds:host3'],
|
||||
[]
|
||||
),
|
||||
# label only + count_per_hst
|
||||
@ -504,10 +551,10 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(label='foo', count_per_host=3),
|
||||
'host1 host2 host3'.split(),
|
||||
[],
|
||||
['host1', 'host2', 'host3', 'host1', 'host2', 'host3',
|
||||
'host1', 'host2', 'host3'],
|
||||
['host1', 'host2', 'host3', 'host1', 'host2', 'host3',
|
||||
'host1', 'host2', 'host3'],
|
||||
['mds:host1', 'mds:host2', 'mds:host3', 'mds:host1', 'mds:host2', 'mds:host3',
|
||||
'mds:host1', 'mds:host2', 'mds:host3'],
|
||||
['mds:host1', 'mds:host2', 'mds:host3', 'mds:host1', 'mds:host2', 'mds:host3',
|
||||
'mds:host1', 'mds:host2', 'mds:host3'],
|
||||
[]
|
||||
),
|
||||
# host_pattern
|
||||
@ -516,7 +563,9 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(host_pattern='mgr*'),
|
||||
'mgrhost1 mgrhost2 datahost'.split(),
|
||||
[],
|
||||
['mgrhost1', 'mgrhost2'], ['mgrhost1', 'mgrhost2'], []
|
||||
['mgr:mgrhost1', 'mgr:mgrhost2'],
|
||||
['mgr:mgrhost1', 'mgr:mgrhost2'],
|
||||
[]
|
||||
),
|
||||
# host_pattern + count_per_host
|
||||
NodeAssignmentTest(
|
||||
@ -524,8 +573,8 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(host_pattern='mds*', count_per_host=3),
|
||||
'mdshost1 mdshost2 datahost'.split(),
|
||||
[],
|
||||
['mdshost1', 'mdshost2', 'mdshost1', 'mdshost2', 'mdshost1', 'mdshost2'],
|
||||
['mdshost1', 'mdshost2', 'mdshost1', 'mdshost2', 'mdshost1', 'mdshost2'],
|
||||
['mds:mdshost1', 'mds:mdshost2', 'mds:mdshost1', 'mds:mdshost2', 'mds:mdshost1', 'mds:mdshost2'],
|
||||
['mds:mdshost1', 'mds:mdshost2', 'mds:mdshost1', 'mds:mdshost2', 'mds:mdshost1', 'mds:mdshost2'],
|
||||
[]
|
||||
),
|
||||
# label + count_per_host + ports
|
||||
@ -534,10 +583,10 @@ class NodeAssignmentTest(NamedTuple):
|
||||
PlacementSpec(count=6, label='foo'),
|
||||
'host1 host2 host3'.split(),
|
||||
[],
|
||||
['host1(*:80)', 'host2(*:80)', 'host3(*:80)',
|
||||
'host1(*:81)', 'host2(*:81)', 'host3(*:81)'],
|
||||
['host1(*:80)', 'host2(*:80)', 'host3(*:80)',
|
||||
'host1(*:81)', 'host2(*:81)', 'host3(*:81)'],
|
||||
['rgw:host1(*:80)', 'rgw:host2(*:80)', 'rgw:host3(*:80)',
|
||||
'rgw:host1(*:81)', 'rgw:host2(*:81)', 'rgw:host3(*:81)'],
|
||||
['rgw:host1(*:80)', 'rgw:host2(*:80)', 'rgw:host3(*:80)',
|
||||
'rgw:host1(*:81)', 'rgw:host2(*:81)', 'rgw:host3(*:81)'],
|
||||
[]
|
||||
),
|
||||
# label + count_per_host + ports (+ xisting)
|
||||
@ -550,10 +599,10 @@ class NodeAssignmentTest(NamedTuple):
|
||||
DaemonDescription('rgw', 'b', 'host2', ports=[80]),
|
||||
DaemonDescription('rgw', 'c', 'host1', ports=[82]),
|
||||
],
|
||||
['host1(*:80)', 'host2(*:80)', 'host3(*:80)',
|
||||
'host1(*:81)', 'host2(*:81)', 'host3(*:81)'],
|
||||
['host1(*:80)', 'host3(*:80)',
|
||||
'host2(*:81)', 'host3(*:81)'],
|
||||
['rgw:host1(*:80)', 'rgw:host2(*:80)', 'rgw:host3(*:80)',
|
||||
'rgw:host1(*:81)', 'rgw:host2(*:81)', 'rgw:host3(*:81)'],
|
||||
['rgw:host1(*:80)', 'rgw:host3(*:80)',
|
||||
'rgw:host2(*:81)', 'rgw:host3(*:81)'],
|
||||
['rgw.c']
|
||||
),
|
||||
# cephadm.py teuth case
|
||||
@ -565,7 +614,7 @@ class NodeAssignmentTest(NamedTuple):
|
||||
DaemonDescription('mgr', 'y', 'host1'),
|
||||
DaemonDescription('mgr', 'x', 'host2'),
|
||||
],
|
||||
['host1(name=y)', 'host2(name=x)'],
|
||||
['mgr:host1(name=y)', 'mgr:host2(name=x)'],
|
||||
[], []
|
||||
),
|
||||
])
|
||||
@ -731,7 +780,7 @@ def test_node_assignment3(service_type, placement, hosts,
|
||||
|
||||
class NodeAssignmentTest4(NamedTuple):
|
||||
spec: ServiceSpec
|
||||
networks: Dict[str, Dict[str, List[str]]]
|
||||
networks: Dict[str, Dict[str, Dict[str, List[str]]]]
|
||||
daemons: List[DaemonDescription]
|
||||
expected: List[str]
|
||||
expected_add: List[str]
|
||||
@ -748,19 +797,70 @@ class NodeAssignmentTest4(NamedTuple):
|
||||
networks=['10.0.0.0/8'],
|
||||
),
|
||||
{
|
||||
'host1': {'10.0.0.0/8': ['10.0.0.1']},
|
||||
'host2': {'10.0.0.0/8': ['10.0.0.2']},
|
||||
'host3': {'192.168.0.0/16': ['192.168.0.1']},
|
||||
'host1': {'10.0.0.0/8': {'eth0': ['10.0.0.1']}},
|
||||
'host2': {'10.0.0.0/8': {'eth0': ['10.0.0.2']}},
|
||||
'host3': {'192.168.0.0/16': {'eth0': ['192.168.0.1']}},
|
||||
},
|
||||
[],
|
||||
['host1(10.0.0.1:80)', 'host2(10.0.0.2:80)',
|
||||
'host1(10.0.0.1:81)', 'host2(10.0.0.2:81)',
|
||||
'host1(10.0.0.1:82)', 'host2(10.0.0.2:82)'],
|
||||
['host1(10.0.0.1:80)', 'host2(10.0.0.2:80)',
|
||||
'host1(10.0.0.1:81)', 'host2(10.0.0.2:81)',
|
||||
'host1(10.0.0.1:82)', 'host2(10.0.0.2:82)'],
|
||||
['rgw:host1(10.0.0.1:80)', 'rgw:host2(10.0.0.2:80)',
|
||||
'rgw:host1(10.0.0.1:81)', 'rgw:host2(10.0.0.2:81)',
|
||||
'rgw:host1(10.0.0.1:82)', 'rgw:host2(10.0.0.2:82)'],
|
||||
['rgw:host1(10.0.0.1:80)', 'rgw:host2(10.0.0.2:80)',
|
||||
'rgw:host1(10.0.0.1:81)', 'rgw:host2(10.0.0.2:81)',
|
||||
'rgw:host1(10.0.0.1:82)', 'rgw:host2(10.0.0.2:82)'],
|
||||
[]
|
||||
),
|
||||
NodeAssignmentTest4(
|
||||
IngressSpec(
|
||||
service_type='ingress',
|
||||
service_id='rgw.foo',
|
||||
frontend_port=443,
|
||||
monitor_port=8888,
|
||||
virtual_ip='10.0.0.20/8',
|
||||
backend_service='rgw.foo',
|
||||
placement=PlacementSpec(label='foo'),
|
||||
networks=['10.0.0.0/8'],
|
||||
),
|
||||
{
|
||||
'host1': {'10.0.0.0/8': {'eth0': ['10.0.0.1']}},
|
||||
'host2': {'10.0.0.0/8': {'eth1': ['10.0.0.2']}},
|
||||
'host3': {'192.168.0.0/16': {'eth2': ['192.168.0.1']}},
|
||||
},
|
||||
[],
|
||||
['haproxy:host1(10.0.0.1:443,8888)', 'haproxy:host2(10.0.0.2:443,8888)',
|
||||
'keepalived:host1', 'keepalived:host2'],
|
||||
['haproxy:host1(10.0.0.1:443,8888)', 'haproxy:host2(10.0.0.2:443,8888)',
|
||||
'keepalived:host1', 'keepalived:host2'],
|
||||
[]
|
||||
),
|
||||
NodeAssignmentTest4(
|
||||
IngressSpec(
|
||||
service_type='ingress',
|
||||
service_id='rgw.foo',
|
||||
frontend_port=443,
|
||||
monitor_port=8888,
|
||||
virtual_ip='10.0.0.20/8',
|
||||
backend_service='rgw.foo',
|
||||
placement=PlacementSpec(label='foo'),
|
||||
networks=['10.0.0.0/8'],
|
||||
),
|
||||
{
|
||||
'host1': {'10.0.0.0/8': {'eth0': ['10.0.0.1']}},
|
||||
'host2': {'10.0.0.0/8': {'eth1': ['10.0.0.2']}},
|
||||
'host3': {'192.168.0.0/16': {'eth2': ['192.168.0.1']}},
|
||||
},
|
||||
[
|
||||
DaemonDescription('haproxy', 'a', 'host1', ip='10.0.0.1',
|
||||
ports=[443, 8888]),
|
||||
DaemonDescription('keepalived', 'b', 'host2'),
|
||||
DaemonDescription('keepalived', 'c', 'host3'),
|
||||
],
|
||||
['haproxy:host1(10.0.0.1:443,8888)', 'haproxy:host2(10.0.0.2:443,8888)',
|
||||
'keepalived:host1', 'keepalived:host2'],
|
||||
['haproxy:host2(10.0.0.2:443,8888)',
|
||||
'keepalived:host1'],
|
||||
['keepalived.c']
|
||||
),
|
||||
])
|
||||
def test_node_assignment4(spec, networks, daemons,
|
||||
expected, expected_add, expected_remove):
|
||||
@ -770,6 +870,8 @@ def test_node_assignment4(spec, networks, daemons,
|
||||
daemons=daemons,
|
||||
allow_colo=True,
|
||||
networks=networks,
|
||||
primary_daemon_type='haproxy' if spec.service_type == 'ingress' else spec.service_type,
|
||||
per_host_daemon_type='keepalived' if spec.service_type == 'ingress' else None,
|
||||
).place()
|
||||
|
||||
got = [str(p) for p in all_slots]
|
||||
|
@@ -668,37 +668,24 @@ def test_custom_container_spec_config_json():
         assert key not in config_json


-def test_HA_RGW_spec():
-    yaml_str = """service_type: ha-rgw
-service_id: haproxy_for_rgw
+def test_ingress_spec():
+    yaml_str = """service_type: ingress
+service_id: rgw.foo
 placement:
   hosts:
     - host1
     - host2
     - host3
 spec:
-  virtual_ip_interface: eth0
-  virtual_ip_address: 192.168.20.1/24
+  virtual_ip: 192.168.20.1/24
+  backend_service: rgw.foo
   frontend_port: 8080
-  ha_proxy_port: 1967
-  ha_proxy_stats_enabled: true
-  ha_proxy_stats_user: admin
-  ha_proxy_stats_password: admin
-  ha_proxy_enable_prometheus_exporter: true
-  ha_proxy_monitor_uri: /haproxy_health
-  keepalived_password: admin
+  monitor_port: 8081
 """
     yaml_file = yaml.safe_load(yaml_str)
     spec = ServiceSpec.from_json(yaml_file)
-    assert spec.service_type == "ha-rgw"
-    assert spec.service_id == "haproxy_for_rgw"
-    assert spec.virtual_ip_interface == "eth0"
-    assert spec.virtual_ip_address == "192.168.20.1/24"
+    assert spec.service_type == "ingress"
+    assert spec.service_id == "rgw.foo"
+    assert spec.virtual_ip == "192.168.20.1/24"
     assert spec.frontend_port == 8080
-    assert spec.ha_proxy_port == 1967
-    assert spec.ha_proxy_stats_enabled is True
-    assert spec.ha_proxy_stats_user == "admin"
-    assert spec.ha_proxy_stats_password == "admin"
-    assert spec.ha_proxy_enable_prometheus_exporter is True
-    assert spec.ha_proxy_monitor_uri == "/haproxy_health"
-    assert spec.keepalived_password == "admin"
+    assert spec.monitor_port == 8081
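As a companion to the YAML above, a hedged sketch (assuming the ceph python modules are importable) of building the equivalent spec object directly; the values mirror the test.

from ceph.deployment.service_spec import IngressSpec, PlacementSpec

spec = IngressSpec(
    service_id='rgw.foo',
    backend_service='rgw.foo',
    virtual_ip='192.168.20.1/24',
    frontend_port=8080,
    monitor_port=8081,
    placement=PlacementSpec(hosts=['host1', 'host2', 'host3']),
)
spec.validate()                # raises ServiceSpecValidationError if a required field is missing
print(spec.service_name())     # 'ingress.rgw.foo'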
@@ -106,7 +106,13 @@ def is_repo_digest(image_name: str) -> bool:

 def resolve_ip(hostname: str) -> str:
     try:
-        return socket.getaddrinfo(hostname, None, flags=socket.AI_CANONNAME, type=socket.SOCK_STREAM)[0][4][0]
+        r = socket.getaddrinfo(hostname, None, flags=socket.AI_CANONNAME,
+                               type=socket.SOCK_STREAM)
+        # pick first v4 IP, if present
+        for a in r:
+            if a[0] == socket.AF_INET:
+                return a[4][0]
+        return r[0][4][0]
     except socket.gaierror as e:
         raise OrchestratorError(f"Cannot resolve ip for host {hostname}: {e}")

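A small standalone sketch (not part of the patch) of the IPv4-preference behaviour the new code implements: getaddrinfo() may list AAAA records first, so the first AF_INET entry is preferred, with a fallback to whatever came first.

import socket

def resolve_ip_v4_first(hostname: str) -> str:
    addrs = socket.getaddrinfo(hostname, None, flags=socket.AI_CANONNAME,
                               type=socket.SOCK_STREAM)
    for family, _type, _proto, _canon, sockaddr in addrs:
        if family == socket.AF_INET:
            return sockaddr[0]   # first IPv4 address wins
    return addrs[0][4][0]        # otherwise fall back to the first result

print(resolve_ip_v4_first('localhost'))   # typically '127.0.0.1'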
@ -31,7 +31,7 @@ import yaml
|
||||
|
||||
from ceph.deployment import inventory
|
||||
from ceph.deployment.service_spec import ServiceSpec, NFSServiceSpec, RGWSpec, \
|
||||
ServiceSpecValidationError, IscsiServiceSpec, HA_RGWSpec
|
||||
ServiceSpecValidationError, IscsiServiceSpec, IngressSpec
|
||||
from ceph.deployment.drive_group import DriveGroupSpec
|
||||
from ceph.deployment.hostspec import HostSpec
|
||||
from ceph.utils import datetime_to_str, str_to_datetime
|
||||
@ -450,7 +450,7 @@ class Orchestrator(object):
|
||||
'prometheus': self.apply_prometheus,
|
||||
'rbd-mirror': self.apply_rbd_mirror,
|
||||
'rgw': self.apply_rgw,
|
||||
'ha-rgw': self.apply_ha_rgw,
|
||||
'ingress': self.apply_ingress,
|
||||
'host': self.add_host,
|
||||
'cephadm-exporter': self.apply_cephadm_exporter,
|
||||
}
|
||||
@ -596,8 +596,8 @@ class Orchestrator(object):
|
||||
"""Update RGW cluster"""
|
||||
raise NotImplementedError()
|
||||
|
||||
def apply_ha_rgw(self, spec: HA_RGWSpec) -> OrchResult[str]:
|
||||
"""Update ha-rgw daemons"""
|
||||
def apply_ingress(self, spec: IngressSpec) -> OrchResult[str]:
|
||||
"""Update ingress daemons"""
|
||||
raise NotImplementedError()
|
||||
|
||||
def apply_rbd_mirror(self, spec: ServiceSpec) -> OrchResult[str]:
|
||||
@ -687,8 +687,8 @@ def daemon_type_to_service(dtype: str) -> str:
|
||||
'mds': 'mds',
|
||||
'rgw': 'rgw',
|
||||
'osd': 'osd',
|
||||
'haproxy': 'ha-rgw',
|
||||
'keepalived': 'ha-rgw',
|
||||
'haproxy': 'ingress',
|
||||
'keepalived': 'ingress',
|
||||
'iscsi': 'iscsi',
|
||||
'rbd-mirror': 'rbd-mirror',
|
||||
'cephfs-mirror': 'cephfs-mirror',
|
||||
@ -712,7 +712,7 @@ def service_to_daemon_types(stype: str) -> List[str]:
|
||||
'mds': ['mds'],
|
||||
'rgw': ['rgw'],
|
||||
'osd': ['osd'],
|
||||
'ha-rgw': ['haproxy', 'keepalived'],
|
||||
'ingress': ['haproxy', 'keepalived'],
|
||||
'iscsi': ['iscsi'],
|
||||
'rbd-mirror': ['rbd-mirror'],
|
||||
'cephfs-mirror': ['cephfs-mirror'],
|
||||
@ -810,8 +810,6 @@ class DaemonDescription(object):
|
||||
# The type of service (osd, mon, mgr, etc.)
|
||||
self.daemon_type = daemon_type
|
||||
|
||||
assert daemon_type not in ['HA_RGW', 'ha-rgw']
|
||||
|
||||
# The orchestrator will have picked some names for daemons,
|
||||
# typically either based on hostnames or on pod names.
|
||||
# This is the <foo> in mds.<foo>, the ID that will appear
|
||||
@ -856,9 +854,7 @@ class DaemonDescription(object):
|
||||
def get_port_summary(self) -> str:
|
||||
if not self.ports:
|
||||
return ''
|
||||
return ' '.join([
|
||||
f"{self.ip or '*'}:{p}" for p in self.ports
|
||||
])
|
||||
return f"{self.ip or '*'}:{','.join(map(str, self.ports or []))}"
|
||||
|
||||
def name(self) -> str:
|
||||
return '%s.%s' % (self.daemon_type, self.daemon_id)
|
||||
@ -1027,7 +1023,9 @@ class ServiceDescription(object):
|
||||
deleted: Optional[datetime.datetime] = None,
|
||||
size: int = 0,
|
||||
running: int = 0,
|
||||
events: Optional[List['OrchestratorEvent']] = None) -> None:
|
||||
events: Optional[List['OrchestratorEvent']] = None,
|
||||
virtual_ip: Optional[str] = None,
|
||||
ports: List[int] = []) -> None:
|
||||
# Not everyone runs in containers, but enough people do to
|
||||
# justify having the container_image_id (image hash) and container_image
|
||||
# (image name)
|
||||
@ -1057,12 +1055,20 @@ class ServiceDescription(object):
|
||||
|
||||
self.events: List[OrchestratorEvent] = events or []
|
||||
|
||||
self.virtual_ip = virtual_ip
|
||||
self.ports = ports
|
||||
|
||||
def service_type(self) -> str:
|
||||
return self.spec.service_type
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return f"<ServiceDescription of {self.spec.one_line_str()}>"
|
||||
|
||||
def get_port_summary(self) -> str:
|
||||
if not self.ports:
|
||||
return ''
|
||||
return f"{self.virtual_ip or '?'}:{','.join(map(str, self.ports or []))}"
|
||||
|
||||
def to_json(self) -> OrderedDict:
|
||||
out = self.spec.to_json()
|
||||
status = {
|
||||
@ -1074,6 +1080,8 @@ class ServiceDescription(object):
|
||||
'running': self.running,
|
||||
'last_refresh': self.last_refresh,
|
||||
'created': self.created,
|
||||
'virtual_ip': self.virtual_ip,
|
||||
'ports': self.ports if self.ports else None,
|
||||
}
|
||||
for k in ['last_refresh', 'created']:
|
||||
if getattr(self, k):
|
||||
|
@ -561,11 +561,14 @@ class OrchestratorCli(OrchestratorClientMixin, MgrModule,
|
||||
else:
|
||||
now = datetime_now()
|
||||
table = PrettyTable(
|
||||
['NAME', 'RUNNING', 'REFRESHED', 'AGE',
|
||||
'PLACEMENT',
|
||||
],
|
||||
[
|
||||
'NAME', 'PORTS',
|
||||
'RUNNING', 'REFRESHED', 'AGE',
|
||||
'PLACEMENT',
|
||||
],
|
||||
border=False)
|
||||
table.align['NAME'] = 'l'
|
||||
table.align['PORTS'] = 'l'
|
||||
table.align['RUNNING'] = 'r'
|
||||
table.align['REFRESHED'] = 'l'
|
||||
table.align['AGE'] = 'l'
|
||||
@ -586,6 +589,7 @@ class OrchestratorCli(OrchestratorClientMixin, MgrModule,
|
||||
|
||||
table.add_row((
|
||||
s.spec.service_name(),
|
||||
s.get_port_summary(),
|
||||
'%d/%d' % (s.running, s.size),
|
||||
refreshed,
|
||||
nice_delta(now, s.created),
|
||||
@ -649,7 +653,7 @@ class OrchestratorCli(OrchestratorClientMixin, MgrModule,
|
||||
table.add_row((
|
||||
s.name(),
|
||||
ukn(s.hostname),
|
||||
s.get_port_summary() or '-',
|
||||
s.get_port_summary(),
|
||||
status,
|
||||
nice_delta(now, s.last_refresh, ' ago'),
|
||||
nice_delta(now, s.created),
|
||||
|
@ -428,8 +428,8 @@ class ServiceSpec(object):
|
||||
"""
|
||||
KNOWN_SERVICE_TYPES = 'alertmanager crash grafana iscsi mds mgr mon nfs ' \
|
||||
'node-exporter osd prometheus rbd-mirror rgw ' \
|
||||
'container cephadm-exporter ha-rgw cephfs-mirror'.split()
|
||||
REQUIRES_SERVICE_ID = 'iscsi mds nfs osd rgw container ha-rgw '.split()
|
||||
'container cephadm-exporter ingress cephfs-mirror'.split()
|
||||
REQUIRES_SERVICE_ID = 'iscsi mds nfs osd rgw container ingress '.split()
|
||||
MANAGED_CONFIG_OPTIONS = [
|
||||
'mds_join_fs',
|
||||
]
|
||||
@ -444,7 +444,7 @@ class ServiceSpec(object):
|
||||
'osd': DriveGroupSpec,
|
||||
'iscsi': IscsiServiceSpec,
|
||||
'alertmanager': AlertManagerSpec,
|
||||
'ha-rgw': HA_RGWSpec,
|
||||
'ingress': IngressSpec,
|
||||
'container': CustomContainerSpec,
|
||||
}.get(service_type, cls)
|
||||
if ret == ServiceSpec and not service_type:
|
||||
@ -576,9 +576,12 @@ class ServiceSpec(object):
|
||||
n += '.' + self.service_id
|
||||
return n
|
||||
|
||||
def get_port_start(self) -> Optional[int]:
|
||||
def get_port_start(self) -> List[int]:
|
||||
# If defined, we will allocate and number ports starting at this
|
||||
# point.
|
||||
return []
|
||||
|
||||
def get_virtual_ip(self) -> Optional[str]:
|
||||
return None
|
||||
|
||||
def to_json(self):
|
||||
@ -749,8 +752,8 @@ class RGWSpec(ServiceSpec):
|
||||
self.rgw_frontend_type = rgw_frontend_type
|
||||
self.ssl = ssl
|
||||
|
||||
def get_port_start(self) -> Optional[int]:
|
||||
return self.get_port()
|
||||
def get_port_start(self) -> List[int]:
|
||||
return [self.get_port()]
|
||||
|
||||
def get_port(self) -> int:
|
||||
if self.rgw_frontend_port:
|
||||
@ -855,97 +858,72 @@ class AlertManagerSpec(ServiceSpec):
|
||||
yaml.add_representer(AlertManagerSpec, ServiceSpec.yaml_representer)
|
||||
|
||||
|
||||
class HA_RGWSpec(ServiceSpec):
|
||||
class IngressSpec(ServiceSpec):
|
||||
def __init__(self,
|
||||
service_type: str = 'ha-rgw',
|
||||
service_type: str = 'ingress',
|
||||
service_id: Optional[str] = None,
|
||||
config: Optional[Dict[str, str]] = None,
|
||||
networks: Optional[List[str]] = None,
|
||||
placement: Optional[PlacementSpec] = None,
|
||||
virtual_ip_interface: Optional[str] = None,
|
||||
virtual_ip_address: Optional[str] = None,
|
||||
backend_service: Optional[str] = None,
|
||||
frontend_port: Optional[int] = None,
|
||||
ha_proxy_port: Optional[int] = None,
|
||||
ha_proxy_stats_enabled: Optional[bool] = None,
|
||||
ha_proxy_stats_user: Optional[str] = None,
|
||||
ha_proxy_stats_password: Optional[str] = None,
|
||||
ha_proxy_enable_prometheus_exporter: Optional[bool] = None,
|
||||
ha_proxy_monitor_uri: Optional[str] = None,
|
||||
ssl_cert: Optional[str] = None,
|
||||
ssl_dh_param: Optional[str] = None,
|
||||
ssl_ciphers: Optional[List[str]] = None,
|
||||
ssl_options: Optional[List[str]] = None,
|
||||
monitor_port: Optional[int] = None,
|
||||
monitor_user: Optional[str] = None,
|
||||
monitor_password: Optional[str] = None,
|
||||
enable_stats: Optional[bool] = None,
|
||||
keepalived_password: Optional[str] = None,
|
||||
ha_proxy_frontend_ssl_certificate: Optional[str] = None,
|
||||
ha_proxy_frontend_ssl_port: Optional[int] = None,
|
||||
ha_proxy_ssl_dh_param: Optional[str] = None,
|
||||
ha_proxy_ssl_ciphers: Optional[List[str]] = None,
|
||||
ha_proxy_ssl_options: Optional[List[str]] = None,
|
||||
virtual_ip: Optional[str] = None,
|
||||
virtual_interface_networks: Optional[List[str]] = [],
|
||||
haproxy_container_image: Optional[str] = None,
|
||||
keepalived_container_image: Optional[str] = None,
|
||||
definitive_host_list: Optional[List[str]] = None
|
||||
):
|
||||
assert service_type == 'ha-rgw'
|
||||
super(HA_RGWSpec, self).__init__('ha-rgw', service_id=service_id,
|
||||
placement=placement, config=config,
|
||||
networks=networks)
|
||||
|
||||
self.virtual_ip_interface = virtual_ip_interface
|
||||
self.virtual_ip_address = virtual_ip_address
|
||||
assert service_type == 'ingress'
|
||||
super(IngressSpec, self).__init__(
|
||||
'ingress', service_id=service_id,
|
||||
placement=placement, config=config,
|
||||
networks=networks
|
||||
)
|
||||
self.backend_service = backend_service
|
||||
self.frontend_port = frontend_port
|
||||
self.ha_proxy_port = ha_proxy_port
|
||||
self.ha_proxy_stats_enabled = ha_proxy_stats_enabled
|
||||
self.ha_proxy_stats_user = ha_proxy_stats_user
|
||||
self.ha_proxy_stats_password = ha_proxy_stats_password
|
||||
self.ha_proxy_enable_prometheus_exporter = ha_proxy_enable_prometheus_exporter
|
||||
self.ha_proxy_monitor_uri = ha_proxy_monitor_uri
|
||||
self.ssl_cert = ssl_cert
|
||||
self.ssl_dh_param = ssl_dh_param
|
||||
self.ssl_ciphers = ssl_ciphers
|
||||
self.ssl_options = ssl_options
|
||||
self.monitor_port = monitor_port
|
||||
self.monitor_user = monitor_user
|
||||
self.monitor_password = monitor_password
|
||||
self.keepalived_password = keepalived_password
|
||||
self.ha_proxy_frontend_ssl_certificate = ha_proxy_frontend_ssl_certificate
|
||||
self.ha_proxy_frontend_ssl_port = ha_proxy_frontend_ssl_port
|
||||
self.ha_proxy_ssl_dh_param = ha_proxy_ssl_dh_param
|
||||
self.ha_proxy_ssl_ciphers = ha_proxy_ssl_ciphers
|
||||
self.ha_proxy_ssl_options = ha_proxy_ssl_options
|
||||
self.virtual_ip = virtual_ip
|
||||
self.virtual_interface_networks = virtual_interface_networks or []
|
||||
self.haproxy_container_image = haproxy_container_image
|
||||
self.keepalived_container_image = keepalived_container_image
|
||||
# placeholder variable. Need definitive list of hosts this service will
|
||||
# be placed on in order to generate keepalived config. Will be populated
|
||||
# when applying spec
|
||||
self.definitive_host_list = [] # type: List[str]
|
||||
|
||||
def get_port_start(self) -> List[int]:
|
||||
return [cast(int, self.frontend_port),
|
||||
cast(int, self.monitor_port)]
|
||||
|
||||
def get_virtual_ip(self) -> Optional[str]:
|
||||
return self.virtual_ip
|
||||
|
||||
def validate(self) -> None:
|
||||
super(HA_RGWSpec, self).validate()
|
||||
super(IngressSpec, self).validate()
|
||||
|
||||
if not self.virtual_ip_interface:
|
||||
if not self.backend_service:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: No Virtual IP Interface specified')
|
||||
if not self.virtual_ip_address:
|
||||
'Cannot add ingress: No backend_service specified')
|
||||
if not self.frontend_port:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: No Virtual IP Address specified')
|
||||
if not self.frontend_port and not self.ha_proxy_frontend_ssl_certificate:
|
||||
'Cannot add ingress: No frontend_port specified')
|
||||
if not self.monitor_port:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: No Frontend Port specified')
|
||||
if not self.ha_proxy_port:
|
||||
'Cannot add ingress: No monitor_port specified')
|
||||
if not self.virtual_ip:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: No HA Proxy Port specified')
|
||||
if not self.ha_proxy_stats_enabled:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: Ha Proxy Stats Enabled option not set')
|
||||
if not self.ha_proxy_stats_user:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: No HA Proxy Stats User specified')
|
||||
if not self.ha_proxy_stats_password:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: No HA Proxy Stats Password specified')
|
||||
if not self.ha_proxy_enable_prometheus_exporter:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: HA Proxy Enable Prometheus Exporter option not set')
|
||||
if not self.ha_proxy_monitor_uri:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: No HA Proxy Monitor Uri specified')
|
||||
if not self.keepalived_password:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: No Keepalived Password specified')
|
||||
if self.ha_proxy_frontend_ssl_certificate:
|
||||
if not self.ha_proxy_frontend_ssl_port:
|
||||
raise ServiceSpecValidationError(
|
||||
'Cannot add ha-rgw: Specified Ha Proxy Frontend SSL ' +
|
||||
'Certificate but no SSL Port')
|
||||
'Cannot add ingress: No virtual_ip provided')
|
||||
|
||||
|
||||
class CustomContainerSpec(ServiceSpec):
|
||||