The `LibRadosWatchNotify.WatchNotify2` test was expecting
the reply data in raw form:
```cpp
std::map<std::pair<uint64_t,uint64_t>, bufferlist> reply_map;
std::set<std::pair<uint64_t,uint64_t> > missed_map;
auto reply_p = reply.cbegin();
decode(reply_map, reply_p);
decode(missed_map, reply_p);
```
while the serialization of `notify_reply_t` was wrapping
the payload in an extra preamble carrying versioning data.
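For illustration, here is a rough sketch of the encoder side, assuming the usual `ENCODE_START`/`ENCODE_FINISH` wrapping; the helper and field names are hypothetical, not the actual `notify_reply_t` code:
```cpp
// Rough sketch only -- hypothetical helper, not the real notify_reply_t code.
#include <cstdint>
#include <map>
#include <set>
#include <utility>
#include "include/encoding.h"

void encode_reply_sketch(
    const std::map<std::pair<uint64_t,uint64_t>, ceph::bufferlist>& reply_map,
    const std::set<std::pair<uint64_t,uint64_t>>& missed_set,
    ceph::bufferlist& bl)
{
  using ceph::encode;
  ENCODE_START(1, 1, bl);  // struct_v, compat and length are written first...
  encode(reply_map, bl);   // ...and only then the payload the test expects
  encode(missed_set, bl);
  ENCODE_FINISH(bl);
}
// A raw decode that starts reading reply_map at offset 0 misinterprets the
// preamble bytes and eventually walks past the end of the payload, which
// surfaces as the "End of buffer" exception in the log below.
```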
This was the root cause of the following problem:
```
2021-03-04T15:40:03.001 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: Running main() from gmock_main.cc
2021-03-04T15:40:03.001 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [==========] Running 11 tests from 2 test suites.
2021-03-04T15:40:03.002 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [----------] Global test environment set-up.
2021-03-04T15:40:03.002 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify
2021-03-04T15:40:03.002 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify
2021-03-04T15:40:03.002 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: watch_notify_test_cb
2021-03-04T15:40:03.003 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify (744 ms)
2021-03-04T15:40:03.003 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch2Delete
2021-03-04T15:40:03.003 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: waiting up to 300 for disconnect notification ...
2021-03-04T15:40:03.003 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94023196839536 err -107
2021-03-04T15:40:03.004 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch2Delete (3123 ms)
2021-03-04T15:40:03.004 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete
2021-03-04T15:40:03.004 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: waiting up to 300 for disconnect notification ...
2021-03-04T15:40:03.004 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94023196851488 err -107
2021-03-04T15:40:03.005 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete (5086 ms)
2021-03-04T15:40:03.005 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2
2021-03-04T15:40:03.005 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: watch_notify2_test_cb from 4394 notify_id 120259084288 cookie 94023196869248
2021-03-04T15:40:03.005 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: unknown file: Failure
2021-03-04T15:40:03.006 INFO:tasks.workunit.client.0.smithi058.stdout: api_watch_notify: C++ exception with description "End of buffer" thrown in the test body.
```
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
In 791952cc01 we switched to returning JSON
on both success and failure to describe which PGs are affected or are blocking
the ability to stop/restart OSDs. Do the same for the case where
some PG states are unknown (e.g., just after a mgr restart) so that
the cephadm upgrade process can unconditionally expect a JSON result.
Signed-off-by: Sage Weil <sage@newdream.net>
This is being done from Ansible now. It also breaks when
the conf file has unqualified-search-registries but no 'registry'
entries.
Signed-off-by: Sage Weil <sage@newdream.net>
crimson/osd: do not pass lvalue of the lambda to seastar::futurize_invoke
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
crimson/osd: capture error_code by value in PG::handle_failed_op
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
Reviewed-by: Xuehan Xu <xxhdx1985126@gmail.com>
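For context on the lambda-capture change above, here is a minimal stand-alone sketch of the lifetime issue that capturing by value avoids; it uses plain C++ with a hypothetical `deferred` continuation holder and `handle_failed_op_sketch` function, not the actual crimson `PG::handle_failed_op` code:
```cpp
// Generic illustration only -- not the crimson code itself.
#include <functional>
#include <iostream>
#include <system_error>

std::function<void()> deferred;  // stands in for a continuation scheduled to run later

void handle_failed_op_sketch(std::error_code e)
{
  // BAD: 'e' lives in this stack frame; if the continuation runs after the
  // frame is gone, the captured reference dangles:
  //   deferred = [&e] { std::cout << e.message() << '\n'; };

  // GOOD: capture the error code by value so the lambda owns its own copy.
  deferred = [e] { std::cout << e.message() << '\n'; };
}

int main()
{
  handle_failed_op_sketch(std::make_error_code(std::errc::io_error));
  deferred();  // runs after handle_failed_op_sketch has returned
}
```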
I've written up a brief description of using kmip
with ceph. Major features:
* ceph configuration.
* making keys with a "paste-in" python script.
* pointers to PyKMIP and IBM SKLM.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
Actually add kmip to the kms crypt suite.
This also makes some ssl certs, which are required for use of kmip.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
s3tests needs to know key names in order to run kms tests.
It seems desirable for s3tests to default to discovering
the names that were created by the pykmip task, and,
if more than one rgw is connected to more than one pykmip,
to use the names belonging to the appropriate pykmip instance.
This logic does the following:
* rgw task: save the pykmip role name.
* s3tests task: set kms_key (and kms_keyid2) in this order of priority:
  1. the s3tests client task property ['kms_key'] (or ['kms_key2']);
  2. the first (second) secret created in the matching pykmip instance;
  3. testkey-1 (testkey-2).
For case 2, an initial "token-" is stripped from the secret names.
The assumption here is that rgw is being run with a setting such as
    rgw crypt kmip kms key template: pykmip-$keyid
so "pykmip-" will be prefixed back onto the key name before use.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
The pykmip task should run after ceph and before rgw.
kmip needs ssl certs in order to function correctly.
Because the openssl_keys task has an indeterminate
order of execution, it is best to create the CA as
a separate task. The CA can be shared with rgw, but
real-life deployments of kmip are likely to have their
own CA.
In order to create kmip secrets, a client certificate
is necessary, so one must be supplied to the pykmip task.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
The logic to deploy pykmip in teuthology was not complete.
The necessary logic to add kmip keys was missing.
Existing logic for other key service providers could use REST-based
protocols directly from the teuthology host. For kmip, it is necessary
to use a special protocol, and it is more convenient to run this directly
on the pykmip server.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
The logic to deploy pykmip in teuthology was not complete.
While it deployed all the code and certs to run pykmip,
it didn't actually run it. This commit fixes that.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
python3 requires different imports and there is a different
way to get at the first element in a view (e.g., `next(iter(view))`
rather than indexing).
This is to match changes introduced in the rest of ceph in these
commits: 24e7acc261d7258ea7fd
Signed-off-by: Marcus Watts <mwatts@redhat.com>
This implements SSE-KMS for the radosgw using kmip.
It uses symmetric raw keys with a name attribute in kmip,
providing the same functionality as the "kv" key store
in HashiCorp Vault.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
As of a49d1dbb32, when the rbd_rwl_cache and
rbd_ssd_cache bconds are enabled and WITH_SYSTEM_PMDK is disabled (as it is by
default), the RPM build attempts to
git clone https://github.com/ceph/pmdk.git
but of course that won't work in the OBS, where the build workers have no
Internet connectivity.
Fortunately, the openSUSE/SLE versions targeted by Ceph master and pacific ship
the necessary PMDK libraries as RPM packages.
Fixes: a49d1dbb32
Fixes: https://tracker.ceph.com/issues/49550
Signed-off-by: Nathan Cutler <ncutler@suse.com>
* refs/pull/39780/head:
qa/vstart_runner: dont log "not Ceph bin" msg too often
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
* refs/pull/39681/head:
vstart_runner: define path to ceph binary and use it
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>