Problem:
The test failed in the EC pool configuration because PGs were not
going active+clean (our fault for over-thrashing and checking the
wrong thing). The PGs also could not go active because we thrashed
below min_size in the EC pool config, leaving too few shards in the
acting set, so the wait_for_recovery check failed. Moreover, when we
revived OSDs we did not add them back into the cluster, which throws
off the true live_osds count in the test.
Solution:
Instead of randomly choosing OSDs to thrash, we randomly select a PG
from each pool and thrash the OSDs in that PG's acting set until we
reach min_size, then check whether the PG is still active. After that
we revive all the OSDs and check that the PG recovered cleanly, as in
the sketch below. We removed unnecessary options such as `min_dead`,
`min_live`, and `min_out`, and refactored the code that assigns k and
m for the EC pools for better readability.
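
A minimal sketch of the new per-PG thrashing flow. The manager helper
names (`get_pgs`, `get_pg_acting_set`, `kill_osd`, `revive_osd`,
`wait_for_clean`, etc.) are illustrative stand-ins, not the exact
ceph_manager API:

```python
import random

# Hedged sketch; helper names below are assumptions, not the real API.
def thrash_pg_acting_set(manager, pool):
    pg = random.choice(manager.get_pgs(pool))      # one random PG per pool
    acting = manager.get_pg_acting_set(pg)
    min_size = manager.get_pool_min_size(pool)

    killed = []
    for osd in acting[min_size:]:                  # thrash down to min_size
        manager.kill_osd(osd)
        manager.mark_down_osd(osd)
        killed.append(osd)

    assert manager.pg_is_active(pg), \
        "PG %s went inactive at min_size" % (pg,)

    for osd in killed:                             # revive and re-add so the
        manager.revive_osd(osd)                    # live_osds count stays true
        manager.mark_in_osd(osd)
    manager.wait_for_clean()                       # PG should recover cleanly
```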
Fixes: https://tracker.ceph.com/issues/59172
Signed-off-by: Kamoltat <ksirivad@redhat.com>
dummy onodes don't have a collection; consequently, we need to be able
to create a blob without a collection.
Signed-off-by: Pere Diaz Bou <pere-altea@hotmail.com>
Collection is a member of Blob after the previous changes, so
rehome_blob now needs to move both the Blob and the SharedBlob.
Signed-off-by: Pere Diaz Bou <pdiabou@redhat.com>
Instead of setting the blob's collection to match the shared blob's,
we now verify that they are already in the same collection.
Signed-off-by: Pere Diaz Bou <pdiabou@redhat.com>
split_cache requires moving a SharedBlob from one collection to
another. Therefore, with the new refactor, the collection inside the
Blob needs to be moved too.
Signed-off-by: Pere Diaz Bou <pdiabou@redhat.com>
Instead of creating a SharedBlob for each Blob instance, we move the
necessary fields, such as Collection, into Blob and create a
SharedBlob only when necessary.
Signed-off-by: Pere Diaz Bou <pdiabou@redhat.com>
Fixes https://tracker.ceph.com/issues/64270
Issue:
======
Accessing the Object->Users->Roles tab causes a 500 internal server
error. This is due to the "PermissionPolicies" field attached to a
role, which the backend was not handling for rgw roles.
Fix:
====
Added "PermissionPolicies" as the valid field in backend and updated
frontend to render the attached policy in formatted JSON
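
Roughly, the backend part of the fix amounts to accepting the extra
field when serializing role attributes. The sketch below is
illustrative only; `ALLOWED_ROLE_FIELDS` and `sanitize_rgw_role` are
hypothetical names, not the actual dashboard controller code:

```python
# Illustrative only; names are assumptions, not the real dashboard code.
ALLOWED_ROLE_FIELDS = {
    'RoleId', 'RoleName', 'Path', 'Arn', 'CreateDate',
    'MaxSessionDuration', 'AssumeRolePolicyDocument',
    'PermissionPolicies',  # newly accepted so roles with attached
                           # policies no longer trigger a 500
}

def sanitize_rgw_role(role: dict) -> dict:
    # keep only the fields the API knows how to handle
    return {k: v for k, v in role.items() if k in ALLOWED_ROLE_FIELDS}
```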
Signed-off-by: Afreen <afreen23.git@gmail.com>
CephFS clone creation has a default limit of 4 parallel clones at a
time; the remaining clone create requests are queued. This makes
CephFS cloning very slow when a large number of clones are being
created. After this patch, clone requests are no longer accepted once
they exceed the `max_concurrent_clones` config value.
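
A minimal sketch of the intended behavior, assuming hypothetical
volume-client helpers (`get_config`, `get_pending_clones_count`, and
`start_clone` are illustrative names; the real change lives in the
mgr volumes module):

```python
import errno

# Hedged sketch; helper names are assumptions, not the real mgr/volumes API.
def request_clone(volume_client, fs_name, clone_spec):
    limit = int(volume_client.get_config('max_concurrent_clones'))  # default 4
    pending = volume_client.get_pending_clones_count(fs_name)
    if pending >= limit:
        # reject instead of queueing, so callers can back off and retry
        return -errno.EAGAIN, "", "cloner threads are busy, try again later"
    return volume_client.start_clone(fs_name, clone_spec)
```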
Fixes: https://tracker.ceph.com/issues/59714
Signed-off-by: Neeraj Pratap Singh <neesingh@redhat.com>
mgr/prometheus: fix orch check to prevent Prometheus from crashing
Reviewed-by: Avan Thakkar <athakkar@redhat.com>
Reviewed-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
move the QatAccel instance out of the Compressor base class and into
the zlib and lz4 compressors that can use it
this avoids linking QAT into the ceph-common library; only the plugins
that actually need it link against it
LZ4Compressor.cc had to be added to hold the new static variable
Signed-off-by: Casey Bodley <cbodley@redhat.com>