Running Keystone with WSGIChunkedRequest=On is not supported.
We have to make sure that we set the Content-Length header when getting
an admin token and when checking revoked tokens, otherwise Keystone
returns an HTTP 411 error.
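For illustration only, the general shape of the fix at the HTTP client
level (a libcurl-style sketch; the helper and payload are placeholders,
not the RGW code):
    #include <curl/curl.h>
    #include <string>

    // Send a POST with an explicit Content-Length so Keystone (mod_wsgi
    // without chunked-request support) does not answer with HTTP 411.
    void post_with_length(CURL *curl, const char *url, const std::string &body) {
      struct curl_slist *headers = nullptr;
      std::string cl = "Content-Length: " + std::to_string(body.size());
      headers = curl_slist_append(headers, cl.c_str());
      headers = curl_slist_append(headers, "Content-Type: application/json");

      curl_easy_setopt(curl, CURLOPT_URL, url);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
      curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)body.size());
      curl_easy_perform(curl);
      curl_slist_free_all(headers);
    }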
Fixes: #11473
Backport: Hammer, Firefly
Signed-off-by: Hervé Rousseau <hroussea@cern.ch>
If the client releases the AioCompletion while librbd is waiting
to acquire the exclusive lock, the memory associated with the
completion will be freed too early.
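A self-contained sketch of the ownership rule involved (illustrative
types, not the librbd internals): the library needs its own reference on
the completion while the request is parked behind the lock:
    #include <atomic>
    #include <functional>
    #include <vector>

    // Illustrative completion with simple reference counting.
    struct Completion {
      std::atomic<int> nref{1};
      void get() { nref.fetch_add(1); }
      void put() { if (nref.fetch_sub(1) == 1) delete this; }
    };

    std::vector<std::function<void()>> lock_waiters;  // run once the lock is held

    void queue_behind_lock(Completion *comp) {
      comp->get();                 // library reference, taken before deferring
      lock_waiters.push_back([comp] {
        // ... perform the deferred I/O and notify the caller ...
        comp->put();               // dropped only after the request finished
      });
      // The client may now release its own reference safely.
    }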
Fixes: #11478
Backport: hammer
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
RBD format 2 is now the default image format, so tests involving the old
format should request it explicitly.
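One way a test can pin the old format (a sketch that overrides the config
default, not necessarily the mechanism the test suite uses):
    #include <rados/librados.h>

    // Force format 1 image creation by overriding rbd_default_format before
    // any image is created through this cluster handle.
    int force_old_format(rados_t cluster) {
      return rados_conf_set(cluster, "rbd_default_format", "1");
    }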
Fixes: #11477
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
An unnecessary error message is logged when metadata retrieval fails for
old-format images, which don't support metadata.
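Sketch of the intended behaviour (not the exact change): treat a metadata
retrieval failure on such images as expected instead of logging it:
    #include <rbd/librbd.hpp>
    #include <iostream>
    #include <map>
    #include <string>

    void print_metadata(librbd::Image &image) {
      std::map<std::string, ceph::bufferlist> pairs;
      int r = image.metadata_list("", 0, &pairs);
      if (r < 0) {
        // Old-format images don't support metadata; stay quiet rather than
        // emitting an error for an expected condition.
        return;
      }
      for (const auto &p : pairs)
        std::cout << p.first << std::endl;
    }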
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
The librbd API previously permitted the creation of snapshots while
the image context was associated with another snapshot. A recent code
cleanup broke that ability, so this re-introduces it. The code change
also allows minor cleanup with rebuild_object_map.
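The use case being restored, roughly (error handling omitted):
    #include <rbd/librbd.hpp>

    void snap_while_viewing_snap(librbd::Image &image) {
      image.snap_set("snap1");     // image context now reads from snap1
      image.snap_create("snap2");  // creating another snapshot is still allowed
      image.snap_set(NULL);        // back to the head revision
    }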
Fixes: #11475
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
During Ceph upgrade testing, older Ceph test suites assume that
get_features will return -ENOENT if provided a missing snapshot.
Support these negative tests until the older releases are no
longer supported.
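A hypothetical, self-contained sketch of the contract those suites rely
on (names are illustrative, not the cls_rbd client API):
    #include <cassert>
    #include <cerrno>
    #include <cstdint>
    #include <map>

    static std::map<uint64_t, uint64_t> snap_features;  // snap_id -> feature bits

    int get_features(uint64_t snap_id, uint64_t *features) {
      auto it = snap_features.find(snap_id);
      if (it == snap_features.end())
        return -ENOENT;            // preserved for the older negative tests
      *features = it->second;
      return 0;
    }

    int main() {
      uint64_t features = 0;
      assert(get_features(42, &features) == -ENOENT);  // missing snapshot
      return 0;
    }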
Fixes: #11380
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 66493b7e83)
Add this flag so that the bad object will be removed (it should be used
only after the user has verified that the object's content is correct).
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
In pipe.cc:1353 we stop this connection and let the reader and writer
threads stop. If the reader and writer quit immediately and queue_reap is
called to trigger the reap process before we have called
"connection_state->clear_pipe(this)" at pipe.cc:1379, we may hit an
assert failure there.
Fixes: #11381
Signed-off-by: Haomai Wang <haomaiwang@gmail.com>
Fixes: #11447
Backport: hammer
When creating a gc chain, use the appropriate oid; otherwise objects
will leak.
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
The ceph-radosgw service fails to start if the httpd package is not
installed. This is because the init.d file attempts to start the RGW
process with the "apache" UID. If a user is running civetweb, there is
no reason for the httpd or apache2 package to be present on the system.
Switch the init scripts to use "root" as is done on Ubuntu.
http://tracker.ceph.com/issues/11453 Refs: #11453
Reported-by: Vickey Singh <vickey.singh22693@gmail.com>
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
max_req_id was moved to RGWRados and changed to atomic64_t.
Reusing the same request id caused gc to assign the same idtag to all
objects, leaking rados objects: only the last deleted object was kept in
the gc queue, and the previous objects were never freed.
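The idea, reduced to a sketch (names are illustrative, not the RGWRados
members):
    #include <atomic>
    #include <cstdint>
    #include <string>

    static std::atomic<uint64_t> max_req_id{0};

    // Every request draws a distinct id from one process-wide atomic counter,
    // so gc entries no longer share the same idtag.
    std::string new_idtag(const std::string &prefix) {
      return prefix + "." + std::to_string(++max_req_id);
    }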
Fixes: #10295
Backport: Hammer, Firefly
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
We were unable to set a new non-zero max if the original max was 0.
Fix it. Also, add test cases for it.
Signed-off-by: Henry Chang <henry@bigtera.com>
Objects that start with an underscore need to have an object locator;
this is due to an old behavior that we need to retain. Some objects
might have been created without the locator. This tool creates a new
rados object with the appropriate locator.
Syntax:
$ ./radosgw-admin bucket check --check-head-obj-locator \
--bucket=<bucket> [--fix]
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
Fixes: #11451
Backport: hammer
This got broken in commit 7dd54fa3621c04c8ea5723fb1bc06b91d81a0c6c.
Resurrect the option to list an unlimited number of buckets using the S3
API.
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
The previous calculation was based upon the image's object size.
Since the cache stores smaller bufferheads, the object size is not
a good indicator of cache usage and was resulting in objects being
evicted from the cache too often. Instead, base the max number of
objects on the memory load required to store the extra metadata
for the objects.
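Roughly the shape of the new sizing (the constants are assumptions for
illustration, not the librbd values):
    #include <algorithm>
    #include <cstdint>

    uint64_t max_cache_objects(uint64_t cache_size_bytes) {
      const uint64_t per_object_metadata = 1024;  // assumed bookkeeping cost
      const uint64_t floor = 10;                  // keep at least a few objects
      // Derive the limit from metadata overhead, not from the full object size.
      return std::max(floor, cache_size_bytes / (10 * per_object_metadata));
    }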
Fixes: #7385
Backport: firefly, hammer
Signed-off-by: Jason Dillaman <dillaman@redhat.com>