For directories, DACs are always overridable. For files, read/write
DACs are always overridable, but executable DACs are overridable only
when at least one exec bit is set.
File and directory DAC overriding was handled the same way for root,
which is incorrect. This patch fixes DAC overriding for root as
described above.
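For illustration, with the fix root on a CephFS mount behaves roughly
like this (paths are illustrative):
$ chmod 000 /mnt/cephfs/file
$ cat /mnt/cephfs/file     # read/write DACs overridden for root: succeeds
$ /mnt/cephfs/file         # no exec bit set at all: denied even for root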
Fixes: https://tracker.ceph.com/issues/55313
Signed-off-by: Kotresh HR <khiremat@redhat.com>
When setting the cache mode to proxy in order to remove a writeback
cache, as described in the official Ceph documentation, an error
occurred:
[root@controller-1 root]# ceph osd tier cache-mode cachepool proxy
Invalid command: proxy not in writeback|readproxy|readonly|none
osd tier cache-mode writeback|readproxy|readonly|none [--yes-i-really-mean-it]:
specify the caching mode for cache tier
According to the official documentation: since a writeback cache may
have modified data, you must take steps to ensure that you do not lose
any recent changes to objects in the cache before you disable and
remove it. Change the cache mode to proxy so that new and modified
objects are flushed to the backing storage pool.
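The sequence the documentation describes is roughly the following
(`cachepool` follows the example above, `basepool` is illustrative):
$ ceph osd tier cache-mode cachepool proxy
$ rados -p cachepool cache-flush-evict-all
$ ceph osd tier cache-mode cachepool none
$ ceph osd tier remove-overlay basepool
$ ceph osd tier remove basepool cachepool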
Fixes: https://tracker.ceph.com/issues/54576
Signed-off-by: tan changzhi <544463199@qq.com>
Due to the lack of Windows support in Teuthology, the test case adopts
the following workaround:
* Deploy a bare-metal machine with `ubuntu_latest.yaml` and
  configure it with libvirt KVM.
* Create a libvirt VM and provision it with Windows Server 2019, using
the official ISO from Microsoft.
* Configure SSH in the Windows VM, and run the tests remotely via SSH.
The implementation of the test case consists of workunit scripts.
`qa/workunits/windows/test_rbd_wnbd.py` is the main Python script
for testing basic Ceph-on-Windows functionality. It is executed in the
libvirt VM configured with Windows Server 2019.
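For example, the workunit might be driven from the Linux host roughly
like this (user name and invocation details are illustrative):
$ ssh Administrator@<windows-vm> "python.exe test_rbd_wnbd.py"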
Co-authored-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
Co-authored-by: Daniel Vincze <dvincze@cloudbasesolutions.com>
Signed-off-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
- Avoid crashing when a call out to an external program returns a
  non-zero exit status that is expected.
There are programs that communicate information other than
success/failure through their exit status. E.g. `systemctl status`
will return different exit codes depending on the actual status of the
units in question. In cases where this is expected, crashing with a
RuntimeError exception is inappropriate and should be avoided.
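For example, `systemctl status` exits non-zero for a unit that is
merely inactive (unit name is illustrative):
$ systemctl status some-stopped.service; echo $?
3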
Fixes: https://tracker.ceph.com/issues/55117
Signed-off-by: Moritz Röhrich <moritz.rohrich@suse.com>
These tests are supposed to validate that we don't accept invalid IPs,
but they left out the "add" subcommand, so they're all failing on that
instead!
Signed-off-by: Greg Farnum <gfarnum@redhat.com>
Check that reshard prunes olh entries with empty name.
Fixes: https://tracker.ceph.com/issues/54500
Signed-off-by: Dan van der Ster <daniel.vanderster@cern.ch>
The default pids-limit (docker 4096, podman 2048) prevents some
customizations from working (e.g. a high number of HTTP threads on
RGW) and limits the number of LUNs per iSCSI target.
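For illustration, both runtimes expose this limit as a run flag; the
exact "unlimited" value differs between runtime versions:
$ podman run --pids-limit=-1 ...
$ docker run --pids-limit=-1 ...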
Fixes: https://tracker.ceph.com/issues/52898
Signed-off-by: Teoman ONAY <tonay@redhat.com>
Commit 08df6e0fd0 ("qa/workunits/rbd: expand LevelSpec parsing
coverage") didn't account for images with a separate data pool. This
was missed because of small-cache-pool.yaml breakage.
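For reference, an image with a separate data pool can be created like
this (pool names are illustrative):
$ rbd create --size 1G --data-pool repdata foo/bar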
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
mgr/rbd_support: cast pool_id from int to str when collecting LevelSpec
Reviewed-by: Mykola Golub <mgolub@suse.com>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Invoke "rbd mirror snapshot schedule ls -R" and "rbd mirror snapshot
schedule status" commands on all levels, consistently. In particular,
make sure that an image level schedule is listed for a recursive query
at the pool level both before and after the schedule kicks in:
$ rbd create --size 1G --mirror-image-mode snapshot -p foo bar
$ rbd mirror snapshot schedule add -p foo --image bar 1m
$ rbd mirror snapshot schedule ls -p foo -R
POOL  NAMESPACE  IMAGE  SCHEDULE
foo              bar    every 1m
<wait for schedule to become visible in status>
$ rbd mirror snapshot schedule ls -p foo -R
POOL  NAMESPACE  IMAGE  SCHEDULE
foo              bar    every 1m
Also, make sure that pool and image level status queries work:
$ rbd mirror snapshot schedule status -p foo
SCHEDULE TIME        IMAGE
2022-03-04 07:14:00  foo/bar
$ rbd mirror snapshot schedule status -p foo --image bar
SCHEDULE TIME        IMAGE
2022-03-04 07:14:00  foo/bar
Both of these issues are fixed by the previous commit.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Test the commands:
`osd pool create <pool> --target_size_ratio <float>`
`osd pool set <pool> target_size_ratio <float>`
`osd pool get <pool> target_size_ratio`
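For example (pool name and ratios are illustrative):
$ ceph osd pool create foo --target_size_ratio 0.5
$ ceph osd pool set foo target_size_ratio 0.2
$ ceph osd pool get foo target_size_ratio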
Signed-off-by: Kamoltat <ksirivad@redhat.com>
mon/ConfigMonitor: fix config get key with whitespace
Reviewed-by: Ronen Friedman <rfriedma@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Commit fea6fdff4c ("mgr/rbd_support: level_spec passed to some
commands is not optional") is wrong. While it is true that a valid
level_spec is needed to create a LevelSpec instance, an empty string
is very much a valid level spec -- it signifies "all levels".
This wasn't caught because within Ceph these commands are wrapped by
the rbd CLI, which injects an empty string in get_level_spec_args().
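For illustration (the direct mgr command form is assumed here), both
of the following should mean "all levels":
$ ceph rbd mirror snapshot schedule list
$ rbd mirror snapshot schedule ls -R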
Fixes: https://tracker.ceph.com/issues/54058
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
With dynamic bucket index resharding, when the average number of
objects per shard exceeds the configured value, that bucket is
scheduled for resharding. That bucket may receive more new objects
before the resharding takes place. As a result, the existing code
re-calculates the number of new shards just prior to resharding,
rather than wasting a resharding opportunity with too low a value.
The same holds true for a user-scheduled resharding.
A user reported confusion that the number reported in `radosgw-admin
reshard list` wasn't the number that the reshard operation ultimately
used. This commit makes it clear that the new number of shards is
"tentative", and test_rgw_reshard.py is updated to reflect the altered
output.
Additionally, this commit modernizes the "reshard list" subcommand and
makes it more efficient.
Signed-off-by: J. Eric Ivancich <ivancich@redhat.com>
Update the mon/config.sh workunit to include set/get/rm commands with
and without whitespace.
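For example (option name and value are illustrative):
$ ceph config set client.rgw rgw_keystone_accepted_roles "admin, Member"
$ ceph config get client.rgw rgw_keystone_accepted_roles
$ ceph config rm client.rgw rgw_keystone_accepted_roles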
Fixes: https://tracker.ceph.com/issues/44092
Signed-off-by: Nitzan Mordechai <nmordech@redhat.com>
* refs/pull/44054/head:
doc/rados/operations: document pg_num_max
mgr: set max of 32 pgs for .mgr pool
mgr/dashboard: expect pg_num_max property for pools
mon/OSDMonitor: add option --pg-num_max arg for create pool
mon/OSDMonitor: disallow setting pg_num < min or > max
mgr/pg_autoscaler: apply pg_num_max
mon: add pg_num_max pool property
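For illustration (assuming pg_num_max is exposed like other pool
properties):
$ ceph osd pool set foo pg_num_max 32
$ ceph osd pool get foo pg_num_max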
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
I removed the `02-hosts-inventory.e2e` file because it duplicates one
of the tests in the `01-hosts.e2e` file, and I fixed the error in that
file.
Also, in the inventory Identify test, we test for an element not being
visible. According to the latest Cypress docs, this should be
`not.exist` instead of `not.visible`, since the cd-modal will not even
be present in the DOM.
Fixes: https://tracker.ceph.com/issues/53499
Signed-off-by: Nizamudeen A <nia@redhat.com>
Set and unset the noautoscale flag and evaluate whether the results
are what we expected. Also evaluate whether the flag is correct when
we create new pools.
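A sketch of the commands exercised (assuming the flag is managed via
`osd pool set/unset/get`):
$ ceph osd pool set noautoscale
$ ceph osd pool unset noautoscale
$ ceph osd pool get noautoscale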
Signed-off-by: Kamoltat <ksirivad@redhat.com>