Crush temporary buffers are allocated as per the replica size configured
by the user. When there are more final osds (to be selected as per the
rule) than replicas, the buffers overlap and cause a crash. Now it
ensures that at most num-rep osds are selected even if the indep rule
allows more osds. The fix for firstn rules is already merged as part of
bug #9492. The required test files are added.
Fixes: #9492
Signed-off-by: Johnu George <johnugeo@cisco.com>
(cherry picked from commit 234b066ba0)
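A minimal sketch of the clamping described above, assuming invented names
(this is not the actual CRUSH code):

    // Scratch buffers are sized for num-rep results, so a rule that maps
    // more osds than that must be clamped before writing into them.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    static void select_osds(const std::vector<int>& candidates, // osds the rule emits
                            int num_rep,                         // replica count
                            std::vector<int>& out)               // result buffer
    {
      // Take at most num_rep results even if the rule allows more;
      // otherwise we would write past the end of the temporary buffer.
      int n = std::min<int>(num_rep, candidates.size());
      out.assign(candidates.begin(), candidates.begin() + n);
    }

    int main() {
      std::vector<int> mapped = {4, 9, 2, 7, 5}; // rule selected 5 osds
      std::vector<int> result;
      select_osds(mapped, 3, result);            // pool size (num-rep) is 3
      for (int osd : result)
        printf("osd.%d\n", osd);
    }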
Crush temporary buffers are allocated as per the replica size configured
by the user. When there are more final osds (to be selected as per the
rule) than replicas, the buffers overlap and cause a crash. Now it
ensures that at most num-rep osds are selected even if the rule allows
more osds.
Fixes: #9492
Signed-off-by: Johnu George <johnugeo@cisco.com>
(cherry picked from commit 6b4d1aa997)
... to avoid leaving log events that reference log
segments by offsets which no longer exist.
Signed-off-by: John Spray <john.spray@redhat.com>
(cherry picked from commit 386f2d7c82)
Reviewed-by: Greg Farnum <greg@inktank.com>
Use a stringstream that is only displayed on error when calling the
erasure code factory, instead of writing to cerr. The user expects the
output to be clean when there is no error.
http://tracker.ceph.com/issues/9429
Fixes: #9429
Signed-off-by: Loic Dachary <loic-201408@dachary.org>
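An illustrative sketch of this pattern (names invented, not the actual
factory code): diagnostics go into a stringstream and are printed only when
the call fails, so a successful run produces no output.

    #include <iostream>
    #include <sstream>
    #include <string>

    // Stand-in for the factory call: writes diagnostics to ss on failure.
    int make_instance(const std::string& name, std::ostream& ss) {
      if (name != "jerasure") {
        ss << "unknown erasure code plugin '" << name << "'";
        return -1;
      }
      return 0;
    }

    int main() {
      std::stringstream ss;
      int r = make_instance("jerasure", ss);
      if (r < 0)
        std::cerr << ss.str() << std::endl; // only shown on error
      return r < 0 ? 1 : 0;                 // clean output on success
    }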
If we have nolaggy waiters but are laggy, we want to sleep. Otherwise,
we will just spin and spam the log ...
Signed-off-by: Sage Weil <sage@redhat.com>
Since the erasure code plugin version check has been introduced, the
erasure code plugins must also be considered whenever a library/binary
that can load plugins needs to be recompiled. If the reason for
recompiling the library/binary is a new commit, the plugins will fail to
load unless they are also rebuilt.
The dependency is not based on source compilation; a shared library
dependency on liberasure-code.la is added instead. This library is
uniformly used whenever a plugin is to be loaded and therefore covers
all libraries/binaries that need it.
http://tracker.ceph.com/issues/9413
Fixes: #9413
Signed-off-by: Loic Dachary <loic-201408@dachary.org>
When the access key contains ':', e.g. `some_info:for_user', the
authorization header looks like:
"AWS some_info:for_user:request_signature"
so `auth_str.find(':')` results in auth_id = "some_info" and
auth_sign = "for_user:request_signature".
`auth_str.rfind(':')` solves this issue.
Signed-off-by: Roman Haritonov <reclosedev@gmail.com>
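A small standalone sketch of the parsing difference (simplified, not the
actual RGW code):

    #include <iostream>
    #include <string>

    int main() {
      // "AWS " prefix already stripped; the access key itself contains ':'.
      std::string auth_str = "some_info:for_user:request_signature";

      // find(':') splits at the first colon and truncates the access key.
      std::string bad_id = auth_str.substr(0, auth_str.find(':')); // "some_info"

      // rfind(':') splits at the last colon and keeps the full access key.
      size_t pos = auth_str.rfind(':');
      std::string auth_id = auth_str.substr(0, pos);    // "some_info:for_user"
      std::string auth_sign = auth_str.substr(pos + 1); // "request_signature"

      std::cout << bad_id << "\n" << auth_id << " " << auth_sign << "\n";
    }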
When replaying an EImportFinish/EFragment event, the replay thread may call
MDS::queue_waiters(). MDS::queue_waiters() requires its caller to hold the
mds_lock; otherwise assert(waiter_mutex == __null || waiter_mutex->is_locked())
in Cond::Signal() will be triggered.
Signed-off-by: Yan, Zheng <zyan@redhat.com>
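A generic illustration of the locking requirement (plain C++, not the MDS
code): the function that wakes waiters asserts that its caller already holds
the lock, so the replay path has to take the lock first.

    #include <cassert>
    #include <condition_variable>
    #include <mutex>

    struct Waiters {
      std::mutex lock;             // stands in for mds_lock
      std::condition_variable cv;
      bool lock_held = false;      // tracked manually for the assert

      void queue_waiters() {
        assert(lock_held);         // caller must hold 'lock'
        cv.notify_all();
      }
    };

    int main() {
      Waiters w;
      std::lock_guard<std::mutex> l(w.lock); // replay thread takes the lock...
      w.lock_held = true;
      w.queue_waiters();                     // ...before waking the waiters
      w.lock_held = false;
    }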
When upgrading from a build without the promotion-on-2nd-read feature,
min_read_recency_for_promote should be set to the default value of 1,
instead of 0.
Signed-off-by: Zhiqiang Wang <wonzhq@hotmail.com>
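A hedged sketch of that upgrade rule (the field name comes from the commit,
the surrounding code is invented):

    #include <cstdint>
    #include <iostream>

    struct pool_opts {
      uint32_t min_read_recency_for_promote = 0;
    };

    // When the pool was encoded by a build that predates the
    // promotion-on-2nd-read feature, fall back to the default of 1.
    void upgrade(pool_opts& p, bool encoded_by_old_build) {
      if (encoded_by_old_build)
        p.min_read_recency_for_promote = 1;
    }

    int main() {
      pool_opts p;
      upgrade(p, true);
      std::cout << p.min_read_recency_for_promote << "\n"; // prints 1
    }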
Currently in CrushWrapper, the member "struct crush_map *crush" is public,
so callers can break encapsulation and manipulate the crush structure directly.
This is bad practice and leads to inconsistencies when code mixes the
CrushWrapper API and the crush C API. A simple example:
1. Some code uses crush_add_rule() (the C API) to add a rule, which does not set the have_rmap flag to false in CrushWrapper.
2. Other code then uses CrushWrapper to look up the newly added rule by name and gets -ENOENT.
This patch moves CrushWrapper::crush to private, together with the three
reverse maps (type_rmap, name_rmap, rule_name_rmap), and also changes the
code accessing CrushWrapper::crush so that it compiles.
Signed-off-by: Xiaoxi Chen <xiaoxi.chen@intel.com>
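A simplified illustration of the encapsulation problem (a stand-in, not the
real CrushWrapper): once the raw map is private, every mutation goes through
the wrapper, which can invalidate the cached reverse maps, so name lookups
never go stale.

    #include <cerrno>
    #include <map>
    #include <string>

    struct crush_map { std::map<int, std::string> rules; }; // stand-in

    class CrushWrapper {
      crush_map crush;                            // now private
      std::map<std::string, int> rule_name_rmap;  // cached reverse map
      bool have_rmaps = false;

      void build_rmaps() {
        rule_name_rmap.clear();
        for (const auto& r : crush.rules)
          rule_name_rmap[r.second] = r.first;
        have_rmaps = true;
      }

    public:
      int add_rule(int id, const std::string& name) {
        crush.rules[id] = name;
        have_rmaps = false;        // invalidate the cache on every mutation
        return id;
      }

      int get_rule_id(const std::string& name) {
        if (!have_rmaps)
          build_rmaps();
        auto it = rule_name_rmap.find(name);
        return it == rule_name_rmap.end() ? -ENOENT : it->second;
      }
    };

    int main() {
      CrushWrapper cw;
      cw.add_rule(0, "replicated_ruleset");
      return cw.get_rule_id("replicated_ruleset") == 0 ? 0 : 1;
    }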
This is a bad assert. Specifically, handle_osd_op_reply may still be
holding the session ref while it is calling the completion for a previous
request. This is safe: it is only holding the session ref after it dropped
the global map rwlock because of the per-session completion locks. The
request in question was already marked completed by the time our thread
took the session lock.
Fixes: #9241
Signed-off-by: Sage Weil <sage@redhat.com>