Tier-type storage classes should not be allowed to have data
pools
& a few other fixes/cleanups stated below -
* If the tier_targets are not configured, do not dump them in
the 'zonegroup get' output.
* If not configured, a bucket following the naming convention below -
"rgwx-$zonegroup-$storage_class-cloud-bucket"
is created by default in the remote cloud endpoint to transition objects to.
* Rename config option 'tier_storage_class' to 'target_storage_class'.
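The default target-bucket naming convention above can be sketched as follows (Python for illustration; the helper name is hypothetical, the actual change lives in RGW's C++ code):

```python
def default_cloud_bucket(zonegroup, storage_class):
    # Hypothetical helper mirroring the default naming convention
    # "rgwx-$zonegroup-$storage_class-cloud-bucket" described above.
    return f"rgwx-{zonegroup}-{storage_class}-cloud-bucket"
```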
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
As per https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html,
a GET operation may fail with an "InvalidObjectState" error if the
object is in the GLACIER or DEEP_ARCHIVE storage class and has not been
restored. The same applies to cloud-tiered objects. However, STAT/HEAD
requests shall return the metadata stored.
& misc fixes
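The GET vs. STAT/HEAD behaviour described above can be sketched like this (Python for illustration; the function, the "CLOUDTIER" class name, and the boolean restored flag are assumptions, not RGW's actual API):

```python
def object_read_result(op, storage_class, restored):
    # GET on an unrestored GLACIER/DEEP_ARCHIVE (or cloud-tiered) object
    # fails with InvalidObjectState; STAT/HEAD still returns the metadata.
    archived = storage_class in ("GLACIER", "DEEP_ARCHIVE", "CLOUDTIER")
    if op == "GET" and archived and not restored:
        return "InvalidObjectState"
    return "OK"
```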
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Add a class to fetch headers from the remote endpoint and verify whether
the object is already tiered.
& A few other fixes stated below -
* Erase the data in the head of a cloud-transitioned object
* The 'placement rm' command should erase tier_config details
* A new option is added in the object manifest to denote whether the
object is tiered in multiple parts
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Store the status of the multipart upload parts to verify that the object
hasn't changed during the transition; if it has, abort the upload.
Also avoid re-creating target buckets -
It's not ideal to try creating the target bucket for every object
transitioned to the cloud. To avoid that, cache the bucket creations in
a map with an expiry period of '2*lc_debug_interval' for each
entry.
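The bucket-creation cache described above can be sketched as follows (Python for illustration; the class and member names are hypothetical, and the expiry argument stands in for 2*lc_debug_interval):

```python
import time

class BucketCreationCache:
    """Caches recent target-bucket creations so the remote create call
    is skipped while an entry is still within its expiry period."""

    def __init__(self, expiry_secs):
        self.expiry = expiry_secs   # e.g. 2 * lc_debug_interval
        self.created = {}           # bucket name -> creation timestamp

    def needs_create(self, bucket, now=None):
        now = time.time() if now is None else now
        ts = self.created.get(bucket)
        if ts is not None and now - ts < self.expiry:
            return False            # created recently; skip remote call
        self.created[bucket] = now  # record the (re-)creation
        return True
```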
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Added a new option "retain_object" to tier_config which determines
whether a cloud-tiered object is deleted or its head object is
retained. By default the value is false, i.e., the objects get
deleted.
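The effect of the option can be sketched like this (Python for illustration; only the tier-config key "retain_object" and its default come from this change, the rest is assumed):

```python
def post_transition_action(tier_config):
    # "retain_object" defaults to false: the source object is deleted
    # after the cloud transition; when true, its head object is kept.
    if tier_config.get("retain_object", False):
        return "retain-head"
    return "delete"
```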
XXX: verify that if an object is locked (ATTR_RETENTION), the transition is
not processed. Also check whether the transition takes place separately for
each version.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
After transitioning an object to the cloud, the following updates are made
to the existing object:
* In the bi entry, change the object category to CloudTiered
* Update the cloud-tier details (like endpoint, keys etc.) in the object manifest
* Mark the tail objects expired so that they are deleted by gc
TODO:
* Update all the cloud config details, including multiparts
* Check if any other object metadata needs to be changed
* Optimize to avoid using read_op again to read attrs
* Check mtime to resolve conflicts when multiple zones try to transition the object
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
If the configured storage class is of cloud tier type, transition
the objects to the configured remote endpoint.
If the object size exceeds the multipart size limit (say 5M),
upload the object in multiple parts.
As part of the transition, map RGW attributes to HTTP attrs,
including ACLs.
A new attribute (x-amz-meta-source: rgw) is added to denote
that the object was transitioned from an RGW source.
Added two new tier-config options to configure multipart sizes -
* multipart_sync_threshold - the object size limit beyond which
the object is transitioned in multiple parts
* multipart_min_part_size - the minimum size of a multipart upload part
The default value for both options is 32M and the minimum value supported
is 5M.
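The sizing rules above can be sketched as follows (Python for illustration; only the option names, defaults, and the 5M minimum come from this change; the function and the clamp-to-minimum behaviour are assumptions):

```python
MIN_SUPPORTED = 5 * 1024 ** 2  # 5M: minimum supported for both options

def plan_transition(obj_size,
                    multipart_sync_threshold=32 * 1024 ** 2,
                    multipart_min_part_size=32 * 1024 ** 2):
    # Clamp both tier-config options to the supported minimum, then
    # upload in multiple parts only once the threshold is exceeded.
    threshold = max(multipart_sync_threshold, MIN_SUPPORTED)
    part_size = max(multipart_min_part_size, MIN_SUPPORTED)
    if obj_size <= threshold:
        return 1, obj_size                  # single upload
    parts = -(-obj_size // part_size)       # ceiling division
    return parts, part_size
```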
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
As mentioned in https://docs.google.com/document/d/1IoeITPCF64A5W-UA-9Y3Vp2oSfz3xVQHu31GTu3u3Ug/edit,
the tier storage class will be configured at the zonegroup level.
So the existing CLI "radosgw-admin zonegroup placement add <id> --storage-class <class>" will be
used to add tier storage classes as well, but with the extra tier-config options mentioned below -
--tier-type : "cloud"
--tier-config : [<key,value>,]
These tier options are already defined to configure the cloud sync module and are reused here.
TODO:
* Add multipart options (if any, like part size, threshold)
* Document
* Test upgrade/downgrade
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
This fixes a regression introduced by the fix to omap upgrade: https://github.com/ceph/ceph/pull/43687
The problem was that we always skipped the first omap entry.
This worked fine for objects having an omap header key;
for objects without a header key we skipped the first actual omap key.
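The bug can be sketched like this (Python for illustration; the names are hypothetical, the actual fix is in the C++ omap-upgrade code):

```python
def upgraded_omap_keys(entries, has_header_key):
    # The buggy code unconditionally dropped entries[0], assuming it was
    # the omap header key; for objects without one, that silently lost
    # the first real omap key. Skip only when a header key is present.
    start = 1 if has_header_key else 0
    return entries[start:]
```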
Fixes: https://tracker.ceph.com/issues/53260
Signed-off-by: Adam Kupczyk <akupczyk@redhat.com>
crimson/os/seastore/lba_manager: do full merge if the donor node is *AT* its minimum capacity
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Chunmei Liu <chunmei.liu@intel.com>
mgr/dashboard: Device health status is not getting listed under hosts section
Reviewed-by: Aashish Sharma <aasharma@redhat.com>
Reviewed-by: Avan Thakkar <athakkar@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Nizamudeen A <nia@redhat.com>
mon: MonMap: do not increase mon_info_t's compatv in stretch mode, really
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Laura Flores <lflores@redhat.com>
doc/radosgw/nfs: add note about NFSv3 deprecation
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
Reviewed-by: Sage Weil <sage@newdream.net>
Reviewed-by: Varsha Rao <rvarsha016@gmail.com>
This was supposed to be fixed a year ago in commit
2e3643647b, but it set compat_v to 4 instead of all
the way back to 1 as it should have.
Our testing for stretch mode in these areas is just not very thorough -- the
kernel only supports compat_v 1 and apparently nobody's noticed the issue
since then? :/
As the prior commit says, you can't set locations without being gated on a
server feature bit, so simply cancelling this enforcement is completely safe.
Fixes: https://tracker.ceph.com/issues/53237
Signed-off-by: Greg Farnum <gfarnum@redhat.com>
Whenever an index transaction uses remove_objs for complete(), it also
needs to pass them for cancel() to avoid leaking index entries.
Signed-off-by: Casey Bodley <cbodley@redhat.com>