currently, only the plugin based on ISA-L is installed. archs other than
amd64 will not have this directory or the plugin(s) residing in it,
so dh_install will fail when trying to copy a nonexistent file/dir.
* debian/ceph-common.install: chmod +x, and only install the crypto
plugin(s) on amd64, so dh_install can filter the install list using
dh-exec (see the sketch below)
* debian/control: depend on dh-exec now. dh-exec v0.13 introduces support
for filtering based on architecture; see dh-exec's changelog for more
details. but trusty only offers dh-exec v0.12, so do not require
(>= 0.13) at this moment.
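for reference, a sketch of what a dh-exec-filtered install file looks
like (the shebang is why the chmod +x is needed; the path is
illustrative, not the exact one from this change):
```
#! /usr/bin/dh-exec
[amd64] usr/lib/ceph/crypto/*
```
with this, dh_install runs the file through dh-exec, and the [amd64]
prefix drops the line on all other architectures.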
Signed-off-by: Kefu Chai <kchai@redhat.com>
quoting from NSPR's header file:
```
* Perform a graceful shutdown of NSPR. PR_Cleanup() may be called by
* the primordial thread near the end of the main() function.
```
this helps to silence some warnings from valgrind. not calling it would
not hurt in practice, because the process is about to die, and the
memory chunks being freed are only allocated once in NSPR.
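a minimal sketch of where the call goes (not the actual ceph code, just
the NSPR usage pattern; the include path depends on how NSPR is
installed):
```
#include <nspr/prinit.h>   // PR_Init()/PR_Cleanup()

int main()
{
  PR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0);

  // ... use NSPR/NSS here ...

  // graceful shutdown near the end of main(): the memory NSPR allocated
  // is freed before exit, so valgrind does not report it as leaked.
  PR_Cleanup();
  return 0;
}
```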
Signed-off-by: Kefu Chai <kchai@redhat.com>
This addresses the additional comments and the review comments in the
PR: https://github.com/ceph/ceph/pull/14705
Signed-off-by: Jos Collin <jcollin@redhat.com>
i was incorrectly using [] as a default function argument, without
realizing that default argument values are evaluated once and shared
across invocations, so a mutable default like [] persists between calls
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Explicitly aggregate deferred writes into a batch. When we
submit, take the opportunity to coalesce contiguous writes.
Handle aio completion independently from the original txcs.
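A minimal sketch of the batching idea (the type and the coalescing logic
are illustrative stand-ins, not the actual BlueStore code):
```
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Illustrative deferred write batch: writes are keyed by offset, and
// contiguous entries are merged into larger I/Os when the batch is
// submitted. aio completion is then tracked per batch, not per txc.
struct deferred_batch_t {
  std::map<uint64_t, std::string> writes;  // offset -> data

  void add(uint64_t off, std::string data) {
    writes[off] = std::move(data);
  }

  std::vector<std::pair<uint64_t, std::string>> coalesce() const {
    std::vector<std::pair<uint64_t, std::string>> out;
    for (auto& [off, data] : writes) {
      if (!out.empty() &&
          out.back().first + out.back().second.size() == off) {
        out.back().second += data;   // extend the previous extent
      } else {
        out.emplace_back(off, data);
      }
    }
    return out;
  }
};
```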
Note that this paves the way for a few additional steps:
1- we could make deallocations cancel deferred writes.
2- we could drop the txc deferred states entirely and rely on
the explicit deferred write batch machinery instead... if we
build an alternative way to complete the SharedBlob writes
and ensure the lifecycle issues are dealt with. (I'm not sure
it would be worth it, but it might be.)
Signed-off-by: Sage Weil <sage@redhat.com>
The array we used needed an additional check to ensure that its size was
correct, and going past its end caused undefined behaviour on a few
platforms. Use a vector instead, and pass its underlying C array to
mg_start so that we don't run past the end of the array.
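A minimal sketch of the pattern (the wrapper function and the way the
option strings are assembled are illustrative; mg_start() expects a
NULL-terminated array of option name/value strings):
```
#include <string>
#include <vector>
#include "civetweb/civetweb.h"  // include path depends on the build

// conf holds alternating option name/value strings.
struct mg_context* start_server(const std::vector<std::string>& conf)
{
  // Build the option array in a vector so its size always matches the
  // number of options, instead of using a fixed-size C array.
  std::vector<const char*> options;
  options.reserve(conf.size() + 1);
  for (const auto& opt : conf) {
    options.push_back(opt.c_str());
  }
  options.push_back(nullptr);  // mg_start() expects a NULL terminator

  struct mg_callbacks callbacks = {};
  return mg_start(&callbacks, nullptr, options.data());
}
```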
Fixes: http://tracker.ceph.com/issues/19749
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
Signed-off-by: Jesse Williamson <jwilliamson@suse.de>
when the master zone changed, this config variable increased the window
of time during which the old master zone would continue to handle
requests to modify metadata. those changes would not be reflected by the
new metadata master zone, and would be lost to the cluster
it was an attempt to optimize for the unlikely case of multiple period
changes in a short period of time, but the logic in reload() handles this
case correctly as is
Signed-off-by: Casey Bodley <cbodley@redhat.com>
if a zone is promoted to master before it has a chance to sync from the
previous master zone, any metadata entries after its sync position will
be lost
print an error if 'period commit' is trying to promote a zone that is
more than one period behind the current master, and only allow the
commit to proceed if the --yes-i-really-mean-it flag is provided
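A rough sketch of the check (the function and variable names are
hypothetical, not the actual radosgw-admin code):
```
#include <cstdint>
#include <iostream>

// zone_realm_epoch: period epoch the zone being promoted has reached.
// master_realm_epoch: period epoch of the current master zone.
bool check_period_commit(uint64_t zone_realm_epoch,
                         uint64_t master_realm_epoch,
                         bool yes_i_really_mean_it)
{
  // Promoting a zone that is more than one period behind the current
  // master would drop any metadata entries it has not yet synced.
  if (zone_realm_epoch + 1 < master_realm_epoch && !yes_i_really_mean_it) {
    std::cerr << "ERROR: this zone is "
              << (master_realm_epoch - zone_realm_epoch)
              << " periods behind the current master zone; committing could\n"
                 "lose metadata changes. pass --yes-i-really-mean-it to proceed."
              << std::endl;
    return false;
  }
  return true;
}
```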
Signed-off-by: Casey Bodley <cbodley@redhat.com>
makes the same change to read_sync_status() in RGWMetaSyncStatusManager,
needed to support multiple concurrent readers for the REST interface
Signed-off-by: Casey Bodley <cbodley@redhat.com>
sync status markers can't be compared between periods, so we need to
record the current period's realm epoch with its markers. when the
rgw_meta_sync_info.realm_epoch is more recent than the marker's
realm_epoch, we must treat the marker as empty
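A minimal sketch of that comparison (simplified stand-ins for the actual
rgw_meta_sync_info and marker types):
```
#include <cstdint>
#include <string>

// Simplified stand-ins for the sync status structures.
struct sync_info_t   { uint64_t realm_epoch = 0; };
struct sync_marker_t { uint64_t realm_epoch = 0; std::string marker; };

// A marker recorded under an older period cannot be compared with the
// current period's positions, so treat it as if sync has not started.
std::string effective_marker(const sync_info_t& info, const sync_marker_t& m)
{
  if (info.realm_epoch > m.realm_epoch) {
    return std::string();  // marker is from a previous period: treat as empty
  }
  return m.marker;
}
```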
Signed-off-by: Casey Bodley <cbodley@redhat.com>
RGWInitDataSyncStatusCoroutine operates on a given rgw_data_sync_status
pointer, which saves us from having to read it back from rados
Signed-off-by: Casey Bodley <cbodley@redhat.com>