When some rules have been deleted, the index into the crush->rules array
is not always equal to the pool's crush_ruleset.
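A minimal sketch of the distinction, using simplified, hypothetical structures rather than the real CRUSH types: the lookup matches on the rule's ruleset field instead of assuming that the array index equals the ruleset.

#include <cstdint>
#include <iostream>
#include <vector>

struct Rule {
  int32_t ruleset;  // the id a pool's crush_ruleset refers to
};

// return the array index of the rule carrying the wanted ruleset id, or -1
int find_rule_by_ruleset(const std::vector<Rule>& rules, int32_t ruleset) {
  for (size_t i = 0; i < rules.size(); ++i) {
    if (rules[i].ruleset == ruleset)
      return static_cast<int>(i);
  }
  return -1;
}

int main() {
  // the rule with ruleset 1 was deleted earlier, so the rule with
  // ruleset 2 now sits at index 1, not index 2
  std::vector<Rule> rules = {{0}, {2}};
  std::cout << find_rule_by_ruleset(rules, 2) << std::endl;  // prints 1
}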
Fixes: #12210
Reported-by: Ning Yao <zay11022@gmail.com>
Signed-off-by: Xinze Chi <xmdxcxz@gmail.com>
One less part of the MDS interface is public, and the
subsystems are no longer coupled to the names of these
callback fns.
Signed-off-by: John Spray <john.spray@redhat.com>
Filer is cheap and stateless, so there is no need to share one
instance across the whole process. Let subsystems
create their own -- one less thing in the effectively
global MDS:: namespace.
Signed-off-by: John Spray <john.spray@redhat.com>
Change the aio_bench and clean_up parameters from const char * to const std::string &.
In rest_bench.cc, aio_bench was called with run_name.c_str(), so the argument is always
an empty string rather than NULL, which means the condition statement
const std::string run_name_meta = (run_name == NULL ? BENCH_LASTRUN_METADATA : std::string(run_name));
is wrong: it never falls back to BENCH_LASTRUN_METADATA.
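A self-contained sketch of the failure and the fix; the value assigned to BENCH_LASTRUN_METADATA here is illustrative, not the real constant's definition.

#include <iostream>
#include <string>

// stands in for the real BENCH_LASTRUN_METADATA constant; value is illustrative
static const std::string BENCH_LASTRUN_METADATA = "benchmark_last_metadata";

// old signature: run_name.c_str() is never NULL, even for an empty string,
// so the fallback to BENCH_LASTRUN_METADATA can never be taken
std::string meta_name_old(const char *run_name) {
  return (run_name == NULL ? BENCH_LASTRUN_METADATA : std::string(run_name));
}

// new signature: testing for emptiness restores the fallback
std::string meta_name_new(const std::string &run_name) {
  return run_name.empty() ? BENCH_LASTRUN_METADATA : run_name;
}

int main() {
  std::string run_name;  // empty, as in rest_bench.cc when no run name is set
  std::cout << "old: \"" << meta_name_old(run_name.c_str()) << "\"\n";  // old: ""
  std::cout << "new: \"" << meta_name_new(run_name) << "\"\n";          // new: "benchmark_last_metadata"
}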
test fix:
before:
./rest-bench --seconds 1 -t 2 -b 100 write --api-host=radosgw.com --bucket=test_rm --access-key=FTL7TSJAGXX5KKDQHMJM --secret=123456879
Using s3cmd ls s3://test_rm, we can see a lot of objects in this bucket; they are not cleaned up.
After the changes, following the same procedure, the objects are cleaned up.
Signed-off-by: shawn chen <cxwshawn@gmail.com>
While errors shouldn't occur during image close, it is possible that
they could occur. In this case, the user should be informed.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Added a new librbd::Image::close method so the close result can be checked
when using the C++ librbd library. rbd_close is no longer hard-coded to
return 0.
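A hedged usage sketch of checking the new return value; the pool name, image name, and error handling here are illustrative.

#include <iostream>
#include <rados/librados.hpp>
#include <rbd/librbd.hpp>

int main() {
  librados::Rados cluster;
  if (cluster.init(nullptr) < 0 || cluster.conf_read_file(nullptr) < 0 ||
      cluster.connect() < 0) {
    std::cerr << "failed to connect to cluster" << std::endl;
    return 1;
  }

  librados::IoCtx ioctx;
  if (cluster.ioctx_create("rbd", ioctx) < 0) {
    std::cerr << "failed to open pool" << std::endl;
    return 1;
  }

  librbd::RBD rbd;
  librbd::Image image;
  if (rbd.open(ioctx, image, "test-image") < 0) {
    std::cerr << "failed to open image" << std::endl;
    return 1;
  }

  // with this change, close() reports any error instead of silently
  // succeeding via the destructor
  int r = image.close();
  if (r < 0) {
    std::cerr << "image close failed: " << r << std::endl;
    return 1;
  }
  return 0;
}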
Fixes: #12069
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
W/O this patch:
root@dev:/var/log/ceph# ceph health detail
HEALTH_WARN 1 pgs stuck unclean; 2 requests are blocked > 32 sec; 1 osds
have slow requests; recovery 5/115 objects degraded (4.348%); recovery
1/38 unfound (2.632%); too few PGs per OSD (15 < min 30)
pg 2.1 is stuck unclean for 899.708271, current state active, last
acting [2,3,0,1]
1 ops are blocked > 1048.58 sec
1 ops are blocked > 262.144 sec
1 ops are blocked > 1048.58 sec on osd.2
1 ops are blocked > 262.144 sec on osd.2
1 osds have slow requests
recovery 5/115 objects degraded (4.348%)
recovery 1/38 unfound (2.632%)
too few PGs per OSD (15 < min 30)
W/ this patch:
ceph health detail
HEALTH_WARN 1 pgs stuck unclean; 2 requests are blocked > 32 sec; 1 osds
have slow requests; recovery 5/115 objects degraded (4.348%); recovery
1/38 unfound (2.632%); too few PGs per OSD (15 < min 30)
pg 2.1 is stuck unclean for 427.103877, current state active, last
acting [2,3,0,1]
1 ops are blocked > 524.288 sec on osd.2
1 ops are blocked > 131.072 sec on osd.2
1 osds have slow requests
recovery 5/115 objects degraded (4.348%)
recovery 1/38 unfound (2.632%)
too few PGs per OSD (15 < min 30)
The latter output looks better.
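A conceptual sketch of the change in reporting, using plain containers with hypothetical names rather than the actual monitor/OSD code: the detail lines are built per OSD, without also emitting a duplicate aggregate line for the same blocked ops.

#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main() {
  // osd id -> ages (in seconds) of that OSD's blocked requests
  std::map<int, std::vector<double>> blocked = {{2, {524.288, 131.072}}};

  std::vector<std::string> detail;
  for (const auto& osd : blocked) {
    for (double age : osd.second) {
      std::ostringstream ss;
      ss << "1 ops are blocked > " << age << " sec on osd." << osd.first;
      detail.push_back(ss.str());
    }
    // no separate cluster-wide "1 ops are blocked > ... sec" line any more
  }
  detail.push_back(std::to_string(blocked.size()) + " osds have slow requests");

  for (const auto& line : detail)
    std::cout << line << "\n";
}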
Signed-off-by: Jianpeng Ma <jianpeng.ma@intel.com>
Only truncate when the new size is less than the old size.
If the new size is larger than or equal to the old size, there is no need to
truncate; it can be directly overwritten.
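A minimal sketch of the rule, with illustrative names rather than the real resize path:

#include <cstdint>
#include <iostream>

enum class ResizeOp { Truncate, OverwriteOnly };

// only a shrink needs an explicit truncate; growing (or an equal size)
// is covered by simply overwriting the object up to the new size
ResizeOp plan_resize(uint64_t old_size, uint64_t new_size) {
  if (new_size < old_size)
    return ResizeOp::Truncate;
  return ResizeOp::OverwriteOnly;
}

int main() {
  std::cout << (plan_resize(4096, 1024) == ResizeOp::Truncate) << "\n";       // 1: shrink, truncate
  std::cout << (plan_resize(1024, 4096) == ResizeOp::OverwriteOnly) << "\n";  // 1: grow, just overwrite
}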
Signed-off-by: Jianpeng Ma <jianpeng.ma@intel.com>