* refs/pull/43974/head:
qa: disable metrics on kernel client during upgrade
Reviewed-by: Xiubo Li <xiubli@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
The pg_autoscaler module will now start all pools out
with the scale-up profile by default.
Added tests in workunits/mon/pg_autoscaler.sh
to verify that newly created pools default to
the scale-up profile.
Updated the documentation and release notes to
reflect the change in the default behavior
of the pg_autoscaler profile.
Fixes: https://tracker.ceph.com/issues/53309
Signed-off-by: Kamoltat <ksirivad@redhat.com>
v16.2.4 MDS triggers an assert from these messages.
Also: add latest pacific for extra coverage.
Fixes: https://tracker.ceph.com/issues/53293
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
os/bluestore: improve usability for bluestore/bluefs perf counters
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>
Reviewed-by: Laura Flores <lflores@redhat.com>
rgw/CloudTransition: Transition objects to cloud endpoint
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
Reviewed-by: Casey Bodley <cbodley@redhat.com>
osd/scrub: mark PG as being scrubbed, from scrub initiation to Inactive state
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
To avoid the overhead of using coroutines during lifecycle transition,
the RGWRESTStream* APIs are used to transition objects to the remote cloud.
Also handled a few optimizations and cleanups, stated below:
* Store the list of cloud target buckets as part of LCWorker instead
  of making it global. This list is maintained for the duration of
  RGWLC::process(), after which it is discarded.
* Refactor the code to remove coroutine-based class definitions which are
  no longer needed, and use direct function calls instead.
* Check for cloud-transitioned objects using tier-type, and return an error
  if they are accessed in the RGWGetObj, RGWCopyObj and RGWPutObj ops.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Similar to "get_resource()", add an API "send_resource()" to send
PUT/POST/DELETE stream requests on RGWRestConn.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
With commit#81ad226, AWS auth v4 requires the region name for the
remote endpoint connection. Include it in the tier parameters.
& misc fixes
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
For versioned and locked objects, semantics similar to those of LifecycleExpiration apply, as stated below -
If bucket versioning is enabled and the object transitioned to cloud is the
- current version: irrespective of the "retain_object" config option value, the object is not deleted; instead a delete marker is created on the source RGW server.
- noncurrent version: it is deleted or retained based on the "retain_object" config option value.
If the object is locked, and is the
- current version: it is transitioned to cloud, after which it is made noncurrent with a delete marker created.
- noncurrent version: the transition is skipped.
Also misc rebase fixes and cleanup -
* Rename the config option to "retain_head_object"
to reflect its function: keeping the head object after
transitioning to cloud, if enabled.
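A sketch of how the renamed option could be enabled through the existing tier-config CLI; the zonegroup, placement, and storage-class names here are illustrative, not taken from this change:

```shell
# Illustrative: keep the head (metadata stub) object on the source RGW
# after the object data has been transitioned to the cloud endpoint.
radosgw-admin zonegroup placement modify \
  --rgw-zonegroup default \
  --placement-id default-placement \
  --storage-class CLOUDTIER \
  --tier-config=retain_head_object=true
```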
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
If an object is locked, skip its transition to cloud.
@todo: Do we need special checks for bucket versioning too?
If current, instead of deleting the data, do we need to create
a delete marker? And what about the case where retain_object is set to true?
& misc rebase fixes
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Currently, transition is supported only to cloud providers
that are compatible with AWS/S3. Hence change the tier-type to
cloud-s3 to configure the S3-style endpoint details.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
If the object is versioned, append the object's versionID to the
target object name, to avoid objects being overwritten after
transition to cloud.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Also, to avoid object name collisions across buckets
after cloud transition, add the bucket name to the object prefix.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Tier-type storage classes should not be allowed to have data pools.
& a few other fixes/cleanups stated below -
* If the tier_targets are not configured, do not dump them in
the 'zonegroup get' command.
* If not configured, a bucket with the following naming convention -
"rgwx-$zonegroup-$storage_class-cloud-bucket"
- is created by default in the remote cloud endpoint to transition objects to.
* Rename the config option 'tier_storage_class' to 'target_storage_class'.
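A sketch of setting the renamed option via tier-config; the zonegroup, placement, storage-class names and the remote class value are illustrative:

```shell
# Illustrative: place objects transitioned under this storage class into
# a specific storage class on the remote cloud endpoint.
radosgw-admin zonegroup placement modify \
  --rgw-zonegroup default \
  --placement-id default-placement \
  --storage-class CLOUDTIER \
  --tier-config=target_storage_class=STANDARD_IA
```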
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
As per https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html,
a GET operation may fail with an "InvalidObjectState" error if the
object is in the GLACIER or DEEP_ARCHIVE storage class and not restored.
The same can apply to cloud-tiered objects. However, STAT/HEAD requests
shall still return the stored metadata.
& misc fixes
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Add a class to fetch headers from the remote endpoint and verify whether
the object is already tiered.
& a few other fixes stated below -
* Erase the data in the head of a cloud-transitioned object.
* The 'placement rm' command should erase the tier_config details.
* A new option was added to the object manifest to denote whether the
object is tiered in multiparts.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Store the status of the multipart upload parts to verify that the object
hasn't changed during the transition; if it has, abort the upload.
Also avoid re-creating target buckets -
It is not ideal to try creating the target bucket for every object
transitioned to cloud. To avoid that, cache the bucket creations in a
map, with an expiry period of '2*lc_debug_interval' set for each
entry.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Added a new option "retain_object" in tier_config which determines
whether a cloud-tiered object is deleted or its head object is
retained. By default the value is false, i.e. the objects get
deleted.
XXX: verify that if an object is locked (ATTR_RETENTION), the
transition is not processed. Also check whether the transition takes
place separately for each version.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
After transitioning the object to cloud, the following updates are made
to the existing object:
* In the bi entry, change the object category to CloudTiered.
* Update the cloud-tier details (like endpoint, keys etc.) in the object manifest.
* Mark the tail objects expired, to be deleted by GC.
TODO:
* Update all the cloud config details, including multiparts.
* Check if any other object metadata needs to be changed.
* Optimize to avoid using read_op again to read attrs.
* Check mtime to resolve conflicts when multiple zones try to transition the same object.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
If the storage class configured is a cloud tier, transition
the objects to the configured remote endpoint.
If the object size exceeds the multipart size limit (say 5M),
upload the object in multiple parts.
As part of the transition, map RGW attributes to HTTP attributes,
including ACLs.
A new attribute (x-amz-meta-source: rgw) is added to denote
that the object was transitioned from an RGW source.
Added two new tier-config options to configure the multipart size -
* multipart_sync_threshold - the object size limit beyond which the
object is transitioned in multiparts
* multipart_min_part_size - the minimum size of a multipart upload part
The default value for both options is 32M, and the minimum supported
value is 5M.
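A sketch of tuning the two options via tier-config; the zonegroup, placement, and storage-class names are illustrative, and the values (64M, in bytes) are examples:

```shell
# Illustrative: raise both the multipart threshold and the part size to
# 64M (values in bytes; defaults are 32M, minimum supported is 5M).
radosgw-admin zonegroup placement modify \
  --rgw-zonegroup default \
  --placement-id default-placement \
  --storage-class CLOUDTIER \
  --tier-config=multipart_sync_threshold=67108864,multipart_min_part_size=67108864
```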
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
As mentioned in https://docs.google.com/document/d/1IoeITPCF64A5W-UA-9Y3Vp2oSfz3xVQHu31GTu3u3Ug/edit,
the tier storage class is configured at the zonegroup level.
So the existing CLI "radosgw-admin zonegroup placement add <id> --storage-class <class>" is
used to add tier storage classes as well, but with the extra tier-config options mentioned below -
--tier-type : "cloud"
--tier-config : <key>=<value>[,<key>=<value>,...]
These tier options are already defined for configuring the cloud sync module and are reused here.
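A sketch of the resulting invocations, reusing the endpoint/credential key names from the cloud sync module; the zonegroup, placement, storage-class names, endpoint, and credentials are all illustrative:

```shell
# Illustrative: add a tier storage class at the zonegroup level ...
radosgw-admin zonegroup placement add \
  --rgw-zonegroup default \
  --placement-id default-placement \
  --storage-class CLOUDTIER \
  --tier-type cloud

# ... then point it at a remote S3-compatible endpoint via tier-config.
radosgw-admin zonegroup placement modify \
  --rgw-zonegroup default \
  --placement-id default-placement \
  --storage-class CLOUDTIER \
  --tier-config=endpoint=http://cloud.example.com:80,access_key=XXXX,secret=YYYY
```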
TODO:
* Add multipart options (if any, like part size, threshold)
* Document
* Test upgrade/downgrade
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Delay the second (nested) PeeringEvent::start to let the first finish.
This avoids interruptor nesting, which would cause the local
interrupt_cond to differ from the global interrupt_cond.
Signed-off-by: chunmei-liu <chunmei.liu@intel.com>
Restore the ability to run the radosgw_admin.py unit standalone--improved
to use the vstart_runner hooks.
The local rgwadmin(...) wrapper was suggested as a cleanup in review by Casey.
Fixes: https://tracker.ceph.com/issues/52837
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
This fixes a regression introduced by the omap upgrade fix: https://github.com/ceph/ceph/pull/43687
The problem was that we always skipped the first omap entry.
This worked fine for objects that have an omap header key;
for objects without a header key, we skipped the first actual omap key.
Fixes: https://tracker.ceph.com/issues/53260
Signed-off-by: Adam Kupczyk <akupczyk@redhat.com>
crimson/os/seastore/lba_manager: do full merge if the donor node is *AT* its minimum capacity
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Chunmei Liu <chunmei.liu@intel.com>