Put the vault token file in a location that ceph can read.
Make it readable only by ceph.
On rhel8 (and indeed, any vanilla rhel machine), $HOME is liable to be
mode 700. This means the ceph user can't read things in that user's
directory. This causes radosgw to emit the confusing message "ERROR:
Vault token file ... not found" even though the teuthology log will
plainly show it was created and made readable by ceph.
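As a hedged sketch of the intent (the path and helper below are
hypothetical; the actual teuthology task computes its own location):

    import os
    import shutil

    def install_vault_token(vault_token: str,
                            token_path: str = '/etc/ceph/vault-root-token'):
        # $HOME on a vanilla RHEL box is often mode 0700, so anything under
        # it is invisible to the ceph user; use a ceph-readable path instead.
        with open(token_path, 'w') as f:
            f.write(vault_token)
        shutil.chown(token_path, user='ceph', group='ceph')
        os.chmod(token_path, 0o600)  # readable only by the ceph user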
Fixes: http://tracker.ceph.com/issues/51539
Signed-off-by: Marcus Watts <mwatts@redhat.com>
* refs/pull/38752/head:
qa: enable dynamic debug support to kclient
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
PR #43239 has modified the behavior of ECBackend::get_hash_info().
Modify the standalone scrub test to match.
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
* refs/pull/43510/head:
qa/suites/orch/cephadm/upgrade: smoke test for 'orch upgrade ls'
mgr/cephadm: make upgrade ls output structured
mgr/cephadm: add 'orch upgrade ls' to list available versions
Reviewed-by: Sebastian Wagner <sewagner@redhat.com>
mgr/dashboard: move NFS_GANESHA_SUPPORTED_FSALS to mgr_module.py
Importing from the nfs module throws an AttributeError because, as a side effect, the dashboard module impersonates the nfs module.
https://gist.github.com/varshar16/61ac26426bbe5f5f562ebb14bcd0f548
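A hedged sketch of the shape of the fix (the constant's value is quoted
from memory and may differ):

    # mgr_module.py -- the shared constant now lives here:
    NFS_GANESHA_SUPPORTED_FSALS = ['CEPH', 'RGW']

    # dashboard code -- imports it without ever touching the nfs module:
    from mgr_module import NFS_GANESHA_SUPPORTED_FSALS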
mgr/dashboard: 'Create NFS export' form: list clusters from nfs module
mgr/dashboard: frontend+backend cleanups for NFS export
Removed all code and references related to daemons. Cleaned up the UI and
adapted the unit tests for the nfs-export create form with the CEPHFS
backend. Cleaned up the export list/get/create/set/delete endpoints.
mgr/dashboard: rm set-ganesha ref + update docs
Remove existing set-ganesha-clusters-rados-pool-namespace references as
they are no longer required. The nfs documentation in the dashboard docs
is also updated to reflect the current nfs status.
mgr/dashboard: add nfs-export e2e test coverage
mgr/dashboard: 'Create NFS export' form: remove RGW user id field.
- Improve bucket typeahead behavior.
- Increase version for bucket list endpoint.
- Some refactoring.
mgr/dashboard: 'Create NFS export' form: allow RGW backend only when default realm is selected.
When RGW multisite is configured, the NFS module can only handle buckets in the default realm.
mgr/dashboard: 'Create service' form: fix NFS service creation.
After https://github.com/ceph/ceph/pull/42073, NFS pool and namespace are not customizable.
mgr/dashboard: 'Create NFS export' form: add bucket validation.
- Allow only existing buckets.
- Refactoring:
- Moved bucket validator from bucket form to cd-validators.ts
- Split the bucket validator in two: a bucket name validator and a bucket existence validator (which checks either existence or non-existence).
mgr/dashboard: 'Create NFS export' form: path validation refactor: allow only existing paths.
Fixes: https://tracker.ceph.com/issues/46493
Fixes: https://tracker.ceph.com/issues/51479
Signed-off-by: Alfonso Martínez <almartin@redhat.com>
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
* we use setup_iscsi_client.py to deploy iscsi client services,
configuring the initiator and multipath; this is done by the qa task
ceph_iscsi_client
* qa/cephadm: adds the remotes' IP addresses to the iscsi gateway
* rename pool name: iscsi >> datapool, which we usually use for tests
and which expresses the type of the pool more clearly.
Signed-off-by: Deepika Upadhyay <dupadhya@redhat.com>
The osd backfill reservation does not take compression into account, so
we need to operate with "uncompressed" bytes when calculating the
nearfull ratio.
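A toy illustration of the idea (this is not the OSD code; the names and
the 0.85 default are assumptions):

    def backfill_would_exceed_nearfull(bytes_used: int,
                                       bytes_total: int,
                                       compress_ratio: float,  # compressed/uncompressed, <= 1.0
                                       incoming_bytes: int,
                                       nearfull_ratio: float = 0.85) -> bool:
        # Incoming backfill bytes are counted before compression, so scale
        # current usage back to uncompressed bytes to compare like with like.
        uncompressed_used = bytes_used / compress_ratio
        return (uncompressed_used + incoming_bytes) / bytes_total > nearfull_ratio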
Signed-off-by: Mykola Golub <mgolub@suse.com>
* refs/pull/43425/head:
qa: import CommandFailedError from exceptions not run
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
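The import fix named above, as a minimal sketch (the old import path is
inferred, not quoted from the diff):

    from teuthology.exceptions import CommandFailedError

    # previously:
    # from teuthology.orchestra.run import CommandFailedError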
pybind/mgr/cephadm: set allow_standby_replay during CephFS upgrade
Reviewed-by: Sage Weil <sage@newdream.net>
Reviewed-by: Sebastian Wagner <sewagner@redhat.com>
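A hedged sketch of the idea (the real cephadm code differs in structure;
`mgr` and `fs_name` are assumed to be in scope):

    # Disable standby-replay before upgrading the MDS daemons so no
    # standby-replay daemon holds a rank during the upgrade.
    mgr.check_mon_command({
        'prefix': 'fs set',
        'fs_name': fs_name,
        'var': 'allow_standby_replay',
        'val': '0',
    })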
* refs/pull/43049/head:
mgr/rook: apply mds using placement spec and osd_pool_default_size
mgr/rook: factor out replica/failureDomain calc
Reviewed-by: Juan Miguel Olmo <jolmomar@redhat.com>
Add host section of the cluster creation workflow.
1. Fix a bug in the modal where going forward one step in the wizard and coming back opens up the add host modal.
2. Rename Create Cluster to Expand Cluster as per the discussions
3. Add a skip confirmation modal to warn the user when they try to skip
the cluster creation
4. Adapted all the tests
5. Did some UI improvements like fixing and aligning the styles and
colors
- Used a routed modal for the host addition form
- Renamed 'Create' to 'Add' in the host form
Fixes: https://tracker.ceph.com/issues/51517
Fixes: https://tracker.ceph.com/issues/51640
Fixes: https://tracker.ceph.com/issues/50336
Fixes: https://tracker.ceph.com/issues/50565
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
Signed-off-by: Nizamudeen A <nia@redhat.com>
A module option called CLUSTER_STATUS has two values: INSTALLED
and POST_INSTALLED. When CLUSTER_STATUS is INSTALLED, the
create-cluster wizard is shown after the initial login. After the cluster
creation is successful, this option is set to POST_INSTALLED.
Also adds the e2e tests for the Review section.
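A hedged sketch of how such an option can be declared in a mgr module
(the dashboard's actual implementation may differ):

    from mgr_module import Option

    MODULE_OPTIONS = [
        Option(name='CLUSTER_STATUS',
               type='str',
               default='INSTALLED',
               enum_allowed=['INSTALLED', 'POST_INSTALLED'],
               desc='Show the expand-cluster wizard on first login until '
                    'the cluster is marked POST_INSTALLED'),
    ]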
Fixes: https://tracker.ceph.com/issues/50336
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Signed-off-by: Nizamudeen A <nia@redhat.com>
This commit has been causing scheduled jobs to request e.g. aarch64
smithi machines, which don't exist. The dispatcher then tries to find
them forever, requiring the dispatcher to be killed and restarted. The
queue will sit idle until someone notices the problem.
Signed-off-by: Zack Cerza <zack@redhat.com>
This commit changes the apply_mds command in the rook orchestrator
to support some placement specs and also sets the replica size according
to the osd_pool_default_size ceph option.
This commit also adds `orch apply mds` to the QA to test if the command
runs.
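A hedged sketch of such a smoke test (the actual QA task code is
assumed; 'myfs' is a hypothetical fs name):

    import subprocess

    # Just check that the rook orchestrator accepts the command.
    subprocess.check_call([
        'ceph', 'orch', 'apply', 'mds', 'myfs',
        '--placement=2',  # placement spec: two MDS daemons
    ])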
Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
Python 2.7 reached the end of its life; this PR fixes a teuthology task
error under Python 3.x.
Fixes: https://tracker.ceph.com/issues/52878
Signed-off-by: Dai Zhiwei <daizhiwei3@huawei.com>
Using an nvme loop device makes the LVs look like "real" disks,
which means we can exercise all of the normal code paths for
provisioning, deprovisioning, and zapping.
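A hedged sketch of the configfs steps behind such a loop device (the
subsystem name and backing LV are made-up examples; the actual qa task
differs). Run as root:

    import os
    import subprocess

    def _write(path, value):
        with open(path, 'w') as f:
            f.write(value)

    def expose_lv_as_nvme(backing_dev='/dev/vg_test/lv0', subsys='lv-loop'):
        # nvmet provides the target side, nvme-loop the local transport.
        subprocess.check_call(['modprobe', 'nvmet'])
        subprocess.check_call(['modprobe', 'nvme-loop'])

        base = '/sys/kernel/config/nvmet'
        sdir = f'{base}/subsystems/{subsys}'
        os.mkdir(sdir)
        _write(f'{sdir}/attr_allow_any_host', '1')
        os.mkdir(f'{sdir}/namespaces/1')
        _write(f'{sdir}/namespaces/1/device_path', backing_dev)
        _write(f'{sdir}/namespaces/1/enable', '1')

        pdir = f'{base}/ports/1'
        os.mkdir(pdir)
        _write(f'{pdir}/addr_trtype', 'loop')
        os.symlink(sdir, f'{pdir}/subsystems/{subsys}')

        # The LV now appears as a regular /dev/nvme*n* device.
        subprocess.check_call(['nvme', 'connect', '-t', 'loop', '-n', subsys])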
Signed-off-by: Sage Weil <sage@newdream.net>
* refs/pull/43163/head:
qa: fsync dir for asynchronous creat on stray tests
qa: refactor and generalize create_n_files
qa: only set frag confs for workloads
mds: improve debugging for fragment size check
Reviewed-by: Ramana Raja <rraja@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
modified: qa/standalone/erasure-code/test-erasure-code-plugins.sh
new file: qa/suites/rados/thrash-erasure-code-isa/arch/aarch64.yaml
Signed-off-by: Dai Zhiwei <daizhiwei3@huawei.com>
Use the enhanced create_n_files to dedup code. Also split the large test
into three.
Fixes: https://tracker.ceph.com/issues/52606
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>