This option was enabled in 87f33376d9 but
causes ObjectStore/StoreTest.Synthetic/1 (filestore) to fail. Revert that
bit for now until we fix fiemap properly.
See http://tracker.ceph.com/issues/21880
Signed-off-by: Sage Weil <sage@redhat.com>
rgw: fix opslog URI as per Amazon S3
Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
Reviewed-by: Casey Bodley <cbodley@redhat.com>
rgw: fix wrong results when listing objects with a marker on a versioning-enabled bucket
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: Matt Benjamin <mbenjami@redhat.com>
If the argument is an absolute path, it is fine to just return whatever
get_lv finds, since it is a "safe" call: it will return None if nothing
is found.
Signed-off-by: Alfredo Deza <adeza@redhat.com>
The create_lv signature changed to require a full size description, and
tags now need to be an actual dictionary (vs. keyword args).
Signed-off-by: Alfredo Deza <adeza@redhat.com>
The log gathering causes large performance degradation for clients
with high message throughput. This is hopefully a short-term
workaround until per-message logging can be replaced with an
efficient data-recording system for post-incident analysis
use cases.
Fixes: http://tracker.ceph.com/issues/21860
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
During async log compaction we rely on _flush_and_sync_log to update the
log_writer to jump_to. However, if racing threads are also trying to flush
the log and manage to flush our new log events for us, then our flush will
turn into a no-op, and we won't update jump_to correctly at all. This
results in a corrupted log size a bit later on.
Fix by ensuring that there are no in-progress flushes before we add our
log entries. Also, add asserts to _flush_and_sync_log to make sure we
never bail out early if jump_to is set (which would indicate this or
another similar bug is still present).
Fixes: http://tracker.ceph.com/issues/21878
Signed-off-by: Sage Weil <sage@redhat.com>
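A minimal sketch of the synchronization pattern described above; every name
here is hypothetical shorthand rather than the actual BlueFS code:

    // Compaction must not append its jump_to record while a racing flush
    // could swallow those entries and turn our own flush into a no-op.
    #include <cassert>
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>

    struct LogState {
      std::mutex lock;
      std::condition_variable cond;    // notified by the real flush path
      int flushes_in_progress = 0;
      uint64_t flushed_up_to = 0;      // how far the log has been flushed
      uint64_t pending_end = 0;        // end offset of queued log entries
    };

    // Called by the async-compaction path before it appends its own entries.
    void wait_for_quiescent_log(LogState& s) {
      std::unique_lock<std::mutex> l(s.lock);
      s.cond.wait(l, [&] {
        return s.flushes_in_progress == 0 && s.flushed_up_to == s.pending_end;
      });
    }

    // Inside the flush: if jump_to is set we must make progress, so bailing
    // out early would mean the race is still present.
    void flush_and_sync_log(LogState& s, uint64_t jump_to) {
      std::lock_guard<std::mutex> l(s.lock);
      if (s.flushed_up_to == s.pending_end) {
        assert(jump_to == 0);  // never a no-op when the caller needs jump_to
        return;
      }
      // ... write out [flushed_up_to, pending_end) here, then:
      s.flushed_up_to = s.pending_end;
      if (jump_to) {
        s.flushed_up_to = s.pending_end = jump_to;  // reset writer position
      }
    }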
This broke the C++ ABI by changing the list structure size. Also, it's
not necessary as we can infer the mempool by looking at the other list
contents. We don't (currently) have a need to map an empty list to a
particular mempool and have that state stick.
Fixes: http://tracker.ceph.com/issues/21573
Signed-off-by: Sage Weil <sage@redhat.com>
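A minimal sketch of the alternative, with hypothetical types standing in for
the real bufferlist/mempool machinery: the pool is inferred from the list
contents, so nothing extra has to be stored on the list itself and its size
(and ABI) stays unchanged:

    #include <list>

    enum mempool_id { mempool_unassigned, mempool_osd, mempool_bluefs };

    struct buffer_ptr {
      mempool_id pool = mempool_unassigned;
      // ... actual buffer bookkeeping lives elsewhere
    };

    // Infer the pool from existing contents; an empty list simply has no
    // pool to remember, which is acceptable for current callers.
    inline mempool_id list_mempool(const std::list<buffer_ptr>& bl) {
      return bl.empty() ? mempool_unassigned : bl.front().pool;
    }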
These requests impact the whole subtree; replaying them when the
MDS recovers may break the order of requests in a multi-MDS cluster.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
It is used by the "repair" feature to dedup the files to be searched for
MANIFEST-* files. The default implementation is the POSIX one, which
looks at the local fs, but we should be looking for the files in
bluefs. In this use case, wal and db do not share the same device,
so we can just compare the paths; in fact, it should always return
"false", as the files being compared are always "db" and "db.wal".
Fixes: http://tracker.ceph.com/issues/21842
Signed-off-by: Kefu Chai <kchai@redhat.com>
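A minimal sketch of the comparison logic, with the shape loosely modeled on
rocksdb's Env::AreFilesSame and the free-function name assumed rather than
taken from the actual BlueRocksEnv code:

    #include <string>

    // wal and db sit on different devices here, so comparing the path
    // strings is enough; the repair caller only ever passes "db" and
    // "db.wal", which makes the answer effectively always false.
    inline bool are_files_same(const std::string& first,
                               const std::string& second) {
      return first == second;
    }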