* refs/pull/41084/head:
test: test to verify dir path removal when no mirror daemons are running
pybind/mirroring: advance state machine from stalled state
pybind/mirroring: start from correct state during policy init
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/41286/head:
qa/suites/orch/rook: disable centos for now
qa/suites/orch/rook/smoke: initial smoke suite
qa/tasks/rook: ROOK_HOSTPATH_REQUIRES_PRIVILEGED=true on centos
qa/tasks/rook: simplify shutdown
qa/tasks/rook: archive logs
qa/tasks/rook: more orderly cluster teardown
qa/tasks/rook: deploy ceph via rook on top of kubernetes
qa/tasks/kubeadm: install kubernetes with kubeadm
qa/suites: move rados/cephadm -> orch/cephadm; symlink
qa/tasks/cephadm: add whitespace between functions
qa/tasks/cephadm: clean up ctx.manager setup
Reviewed-by: Sébastien Han <seb@redhat.com>
For some reason, deleting common.yaml sometimes fails. It's not
clear why, but since we tear down kubernetes anyway, this cleanup
isn't strictly needed.
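The pattern is simply to log and swallow the failure (a minimal
sketch in Python; the function and message names here are
illustrative, not the actual task code):

    import logging
    import subprocess

    log = logging.getLogger(__name__)

    def cleanup_common_yaml():
        # common.yaml deletion occasionally fails; since kubernetes is
        # torn down right afterwards, treat the failure as non-fatal.
        try:
            subprocess.run(['kubectl', 'delete', '-f', 'common.yaml'],
                           check=True, capture_output=True, text=True)
        except subprocess.CalledProcessError as e:
            log.warning('ignoring failure to delete common.yaml: %s', e)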
Signed-off-by: Sage Weil <sage@newdream.net>
This assumes that k8s is installed and kubectl works.
The ceph container image to use is selected in the same way the
cephadm task does it.
All scratch devices are consumed as OSDs.
A ceph.conf and client.admin keyring are deployed on all test
nodes, so normal tasks should work (if/when packages are installed).
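The net effect on each test node is roughly the following (a local
sketch only; the task itself writes these files through teuthology
remotes, and the paths are the standard ones the ceph CLI expects):

    import pathlib

    def install_client_files(conf_text, keyring_text,
                             etc_ceph='/etc/ceph'):
        # Put a minimal ceph.conf and the client.admin keyring where
        # ceph CLI tools look for them, so ordinary tasks can reach
        # the rook-deployed cluster once packages are installed.
        base = pathlib.Path(etc_ceph)
        base.mkdir(parents=True, exist_ok=True)
        (base / 'ceph.conf').write_text(conf_text)
        keyring = base / 'ceph.client.admin.keyring'
        keyring.write_text(keyring_text)
        keyring.chmod(0o600)  # keyring must not be world-readable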
Fixes: https://tracker.ceph.com/issues/47507
Signed-off-by: Sage Weil <sage@newdream.net>
- install k8s with kubeadm
- initial support for flannel only
- remove taint from bootstrap/master node
- create PVs for all scratch_devs + a 'scratch' SC
- kubeadm.kubectl task
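The per-device PV creation looks roughly like this (a hedged
sketch; the PV name, capacity, and sizing here are placeholders,
not what the task actually computes):

    import json
    import subprocess

    def create_scratch_pv(node, dev, name):
        # One local PersistentVolume per scratch device, pinned to the
        # node that owns the device, in the 'scratch' StorageClass.
        pv = {
            'apiVersion': 'v1',
            'kind': 'PersistentVolume',
            'metadata': {'name': name},
            'spec': {
                'storageClassName': 'scratch',
                'capacity': {'storage': '100Gi'},  # placeholder size
                'accessModes': ['ReadWriteOnce'],
                'volumeMode': 'Block',
                'local': {'path': dev},
                'nodeAffinity': {'required': {'nodeSelectorTerms': [{
                    'matchExpressions': [{
                        'key': 'kubernetes.io/hostname',
                        'operator': 'In',
                        'values': [node],
                    }],
                }]}},
            },
        }
        subprocess.run(['kubectl', 'apply', '-f', '-'],
                       input=json.dumps(pv), text=True, check=True)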
Signed-off-by: Sage Weil <sage@newdream.net>
* refs/pull/39550/head:
mgr/cephadm: induce retune of osd memory on osd creation
qa/tasks/cephadm.conf: autotune osd memory by default
mgr/cephadm: do not autotune when _no_autotune_memory label is present
mgr/cephadm: autotune osd memory
common: add osd_memory_target_autotune
mgr/cephadm: report memory usage, request (limit) in 'orch ps'
doc/cephadm/host-management: document _admin group
mgr/orchestrator: fix help formatting
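The autotune idea, in simplified form (a sketch only; cephadm's
real accounting also subtracts memory reserved for non-OSD daemons
on the host before dividing, and the 0.7 ratio and 4 GiB floor
shown here are illustrative defaults):

    def autotune_osd_memory_target(total_mem, num_osds,
                                   ratio=0.7,
                                   floor=4 * 1024**3):
        # Give OSDs a fixed fraction of host memory, split evenly
        # across the OSDs on that host; never drop below the default
        # osd_memory_target floor.
        if num_osds == 0:
            return 0
        per_osd = int(total_mem * ratio / num_osds)
        return max(per_osd, floor)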
Reviewed-by: Adam King <adking@redhat.com>
Without this, a traceback is seen in the mgr logs. This also solves
one part of the issue; the other half (failing tests) will be
resolved by PR #40885.
Fixes: https://tracker.ceph.com/issues/50224
Signed-off-by: Venky Shankar <vshankar@redhat.com>
* refs/pull/40962/head:
test: add test to validate snap synchronization with parent directory snapshots
cephfs-mirror: ignore parent directory snapshots when building snap map
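The filtering rule is roughly the following (a sketch, assuming
the usual CephFS convention that snapshots taken on an ancestor
directory surface in a subdirectory's .snap with a leading
underscore, e.g. '_<snapname>_<inode>'):

    def local_snap_candidates(snap_names):
        # Skip ancestor-directory snapshots when building the snap
        # map: only snapshots taken on the mirrored directory itself
        # are candidates for synchronization.
        return [name for name in snap_names
                if not name.startswith('_')]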
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/40941/head:
qa/suites/rados/cephadm/smoke-roleless: test client-keyring
qa/tasks/cephadm.py: adjust client.admin key mode; place on all hosts
cephadm: distribute client.admin keyring+conf to label:_admin on bootstrap
doc/cephadm: document the default 'admin' label
mgr/cephadm: 'ceph orch client-keyring ...' commands to manage keyring files
mgr/cephadm: reimplement ceph.conf pushing
mgr/cephadm: use _write_remote_file for ceph.conf
mgr/cephadm: _write_remote_file helper
mgr/cephadm: add placementspec for which hosts get ceph.conf
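The _write_remote_file helper follows the usual
write-to-temp-then-rename pattern, so a host never sees a
half-written ceph.conf or keyring. A local sketch of that pattern
(the real helper runs over the mgr's connection to the remote
host, as root):

    import os
    import tempfile

    def write_file_atomically(path, content, mode=0o600, uid=0, gid=0):
        # Write into a temp file in the destination directory, fix up
        # permissions and ownership, then rename into place atomically.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
        try:
            with os.fdopen(fd, 'wb') as f:
                f.write(content)
            os.chmod(tmp, mode)
            os.chown(tmp, uid, gid)  # requires privileges
            os.rename(tmp, path)
        except BaseException:
            os.unlink(tmp)
            raise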
Reviewed-by: Sebastian Wagner <swagner@suse.com>
Reviewed-by: Adam King <adking@redhat.com>
When listing available snapshot schedules, we should not return an
error when there are none. We should just return 0 with an empty dict.
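In mgr module terms, the success path looks roughly like this
(illustrative only, not the module's actual handler; mgr command
handlers return a (retcode, stdout, stderr) tuple):

    import json

    def handle_snap_schedule_list(schedules):
        # An empty result is not an error: return success with an
        # empty JSON dict rather than a non-zero return code.
        if not schedules:
            return 0, json.dumps({}), ''
        return 0, json.dumps(schedules), ''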
Fixes: https://tracker.ceph.com/issues/49837
Signed-off-by: Sébastien Han <seb@redhat.com>
* refs/pull/40526/head:
spec: add nfs to spec file
mgr/nfs: Don't enable nfs module by default
mgr/nfs: check for invalid chars in cluster id
mgr/nfs: Use CLICommand wrapper
mgr/nfs: reorg nfs files
mgr/nfs: Check if transport or protocol is a list instance
mgr/nfs: reorg cluster class and common helper methods
mgr/nfs: move common export helper methods to ExportMgr class
mgr/nfs: move validate methods into new ValidateExport class
mgr/nfs: add custom exception module
mgr/nfs: create new module for export utils
mgr/nfs: rename fs dir to export
mgr/volumes/nfs: Move nfs code out of volumes plugin
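The "check for invalid chars in cluster id" change amounts to a
simple character whitelist applied up front (a sketch; the exact
accepted set and the exception type in mgr/nfs may differ):

    import re

    def check_cluster_id(cluster_id):
        # Allow only characters that are safe in service names and
        # rados object names; reject anything else before deploying.
        if not re.fullmatch(r'[A-Za-z0-9._-]+', cluster_id):
            raise ValueError(
                f'cluster id {cluster_id!r} contains invalid characters')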
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
* refs/pull/40888/head:
qa/tasks/cephadm: ignore --keep-logs failure
qa/tasks/cephadm: use yaml.dump_all()
qa/suites/rados/cephadm/smoke-*: use cephadm.wait_for_service
qa/tasks/cephadm: tear down cluster before gathering logs
qa/suites/rados/cephadm/smoke-roleless: test rgw-ingress
mgr/cephadm: remove virtual_ip check during scheduling
mgr/orchestrator: orch ls: leave off virtual_ip prefixlen
qa/tasks/cephadm: add wait_for_service
qa/tasks/cephadm: allow skip_monitor_stack=true
qa/tasks/cephadm: do subst_vip for cephadm.shell and .apply
qa/tasks/vip: add vip task to allocate virtual IPs
qa/suites/rados/cephadm/smoke-roleless: add rgw-ingress test case
qa/tasks/cephadm: shell: take 'all-roles' or 'all-hosts'
qa/tasks/cephadm: let cephadm.shell take string or list
Reviewed-by: Sebastian Wagner <swagner@suse.com>
We don't always stop all services, because teuthology doesn't know about
things it didn't start. Use rm-cluster to tear things down, but do not
remove the logs themselves. After we get the logs, we'll clean up completely.
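In other words, teardown before log collection boils down to the
following (a sketch; rm-cluster's --keep-logs flag leaves
/var/log/ceph in place for the archive step, and full cleanup runs
afterwards):

    import subprocess

    def teardown_before_log_gather(fsid):
        # Stop and remove everything cephadm started, but keep the
        # logs on disk so they can still be archived.
        subprocess.run(
            ['sudo', 'cephadm', 'rm-cluster',
             '--fsid', fsid, '--force', '--keep-logs'],
            check=True)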
Signed-off-by: Sage Weil <sage@newdream.net>
* refs/pull/40411/head:
doc: add note about removal of the `cephfs` nfs cluster type
mgr/volumes/nfs: drop `type` param during cluster create
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Varsha Rao <varao@redhat.com>
* refs/pull/40412/head:
vstart_runner: reuse code in LocalRemoteProcess
Reviewed-by: Xiubo Li <xiubli@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/39660/head:
qa: Update the mdsmap schema in mgr/dashboard/test_health.py
doc: add lsflags command to Administrative Commands document
qa: test fs lsflags command
mon: add command to print fs flags
mds: print each flag value
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
PR #37600 introduced support for configuring both `cephfs` and `rgw`
exports using a single nfs-ganesha cluster.
Fixes: https://tracker.ceph.com/issues/50369
Signed-off-by: Michael Fritch <mfritch@suse.com>