RGW: Multisite: Verify that the synced object is identical to the source
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
Reviewed-by: Casey Bodley <cbodley@redhat.com>
This follows b162541ac21e965a304ee6ffe604c43f22fa96c4.
The balancer was turned on by default in
d4fbaf7, as a result of which we might see
PG_AVAILABILITY health warnings when pg-upmap-items are applied.
Fixes: https://tracker.ceph.com/issues/45802
Signed-off-by: Neha Ojha <nojha@redhat.com>
this was added to test that admin apis forward relevant requests to the
master zone, but radosgw_admin_rest.py tries to create an admin user
with 'radosgw-admin user create'. this fails with:
Please run the command on master zone. Performing this operation on
non-master zone leads to inconsistent metadata between zones
Are you sure you want to go ahead? (requires --yes-i-really-mean-it)
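for reference, a minimal sketch of the invocation that hits this prompt
(the uid and display name are illustrative, not taken from
radosgw_admin_rest.py):

    # hedged sketch of the failing setup step; uid/display-name are
    # illustrative, not the values the test actually uses
    import subprocess

    subprocess.run([
        'radosgw-admin', 'user', 'create',
        '--uid', 'admin-api-user',
        '--display-name', 'Admin API User',
        # on a non-master zone this requires '--yes-i-really-mean-it'
    ], check=True)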
Signed-off-by: Casey Bodley <cbodley@redhat.com>
If we get a SIGINT or SIGTERM or are deleted from the OSDMap, do a fast
shutdown by exiting immediately. This has a few important benefits:
- We immediately stop responding (binding) to any sockets, which means
other OSDs will immediately decide we are down (and dead!). This
minimizes IO interruption.
- We avoid the complex "clean" shutdown process, which is historically a
source of bugs.
In reality, the only purpose of the "clean" shutdown is to try to tear down
everything in memory so we can do memory leak checking with valgrind. Set
this option to false for valgrind QA runs so we can still do that.
Note that with the new read leases in octopus, we rely on the default
behavior that an ECONNREFUSED is taken to mean that the OSD is fully dead,
so that we don't have to wait for any leases to time out. This works in
sane environments with normal IP networks, but that behavior could
conceivably be a bad idea if there are some weird network shenanigans
going on. If osd_fast_fail_on_connection_refused were disabled, then this
fast shutdown procedure might be *worse* than the clean shutdown because
we would have to wait for the heartbeat timeout.
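As a conceptual sketch of the fast path (written in Python for brevity;
the real handler lives in the C++ OSD and the names here are
illustrative), the fast shutdown amounts to exiting from the signal
handler without any teardown:

    # Conceptual sketch only: on SIGINT/SIGTERM, exit immediately
    # instead of running the clean shutdown.
    import os
    import signal

    def fast_shutdown(signum, frame):
        # os._exit() skips all cleanup; sockets close immediately, so
        # peers see ECONNREFUSED and mark this daemon down right away
        # instead of waiting for read leases or heartbeat timeouts.
        os._exit(0)

    signal.signal(signal.SIGINT, fast_shutdown)
    signal.signal(signal.SIGTERM, fast_shutdown)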
Signed-off-by: Sage Weil <sage@redhat.com>
radosgw now uses 512 frontend threads by default, and valgrind won't
start with its default --max-threads=500.
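a hedged sketch of the kind of invocation this implies (the 1024 value
and the daemon arguments are illustrative, not necessarily what the
suite passes):

    # raise valgrind's thread cap above rgw's 512 frontend threads;
    # valgrind's default is --max-threads=500
    import subprocess

    cmd = [
        'valgrind', '--tool=memcheck',
        '--max-threads=1024',  # illustrative value, > 512
        'radosgw', '-f',       # illustrative daemon args
    ]
    subprocess.run(cmd, check=True)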
Fixes: http://tracker.ceph.com/issues/25214
Signed-off-by: Casey Bodley <cbodley@redhat.com>
This utilizes the recent feature in teuthology [1] to skip hidden files in
suites when building the job matrix.
The idea of this change is to enable referring to the top-level qa directory in a
position-independent way such that copies of a suite to another location do not
break any symlinks.
[1] https://github.com/ceph/teuthology/pull/1185
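A rough sketch of the matrix-building behavior this relies on (function
name assumed; this is not teuthology's actual code):

    # Hidden entries are skipped when walking a suite, so e.g. a '.qa'
    # symlink back to the top-level qa directory never enters the
    # job matrix.
    import os

    def suite_files(path):
        for entry in sorted(os.listdir(path)):
            if entry.startswith('.'):
                continue  # skip hidden files/dirs such as .qa symlinks
            full = os.path.join(path, entry)
            if os.path.isdir(full):
                yield from suite_files(full)
            else:
                yield full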
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
the multisite tests run manual trim operations with radosgw-admin, which
can race with internal log trimming to produce test failures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
added a qa/rgw_frontend directory for civetweb.yaml and the new
beast.yaml. the rgw suites for multifs and singleton now symlink
rgw_frontend/civetweb.yaml. the multisite, tempest and verify suites
symlink rgw_frontend to test both. this doubles the number of jobs in
those suites
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Valgrind runs itself on forked children, and does its cleanup when they
complete, and this is slow... slow enough that it frequently makes the
test time out.
Valgrind lets you ignore child *processes* that you exec, but I can't
find a way to skip forked children in the same address space.
Work around this by skipping this validation when running under valgrind.
Fixes: http://tracker.ceph.com/issues/20602
Signed-off-by: Sage Weil <sage@redhat.com>
This reverts 693bd238510e69569cc3461f84b04c8667bc11da, which was
added in response to http://tracker.ceph.com/issues/18126. But
we updated the Ubuntu packages in sepia so it should be good to go.
Signed-off-by: Greg Farnum <gfarnum@redhat.com>