The main get_user() function doesn't query the cluster, but the rest of
them do. Rename the functions to match, and add comments to clarify.
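As a rough illustration of the naming pattern (the class and lookup names
below are hypothetical stand-ins, not the actual SAL signatures), the cheap
accessor keeps the short name while the cluster-querying lookups say so
explicitly:
```
#include <optional>
#include <string>
#include <unordered_map>

// Illustrative only: get_user() merely wraps a uid the caller already has,
// while the *_by_* lookups have to round-trip to the cluster, so their
// names make that cost explicit.
struct User { std::string uid; };

class UserStore {
  std::unordered_map<std::string, User> cluster_index;  // stand-in for RADOS
public:
  // No cluster access: just builds a handle from a known uid.
  User get_user(const std::string& uid) { return User{uid}; }

  // Cluster access (simulated by the map lookup), hence the longer name.
  std::optional<User> get_user_by_email(const std::string& email) {
    auto it = cluster_index.find(email);
    if (it == cluster_index.end()) return std::nullopt;
    return it->second;
  }
};

int main() {
  UserStore store;
  User u = store.get_user("alice");                 // cheap, local
  auto missing = store.get_user_by_email("a@b.c");  // would hit the cluster
  return (u.uid == "alice" && !missing) ? 0 : 1;
}
```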
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
This commit changes RGWStoreManager to return an RGWStore* rather than
an RGWRadosStore*. This is the thread that, once pulled, unravels the
rest of the Zipper work, removing hard-coded uses of the RGWRados*
classes.
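A minimal sketch of the shape of this change, using simplified hypothetical
class names rather than the real RGW hierarchy: the manager hands back a
pointer to the abstract store type, so callers can no longer depend on the
RADOS-backed implementation.
```
#include <memory>
#include <string>

// Hypothetical, simplified class names; the real RGWStore/RGWRadosStore
// hierarchy is much larger.
class Store {                      // abstract zipper-style interface
public:
  virtual ~Store() = default;
  virtual std::string backend_name() const = 0;
};

class RadosStore : public Store {  // concrete RADOS-backed implementation
public:
  std::string backend_name() const override { return "rados"; }
};

class StoreManager {
public:
  // Callers now receive the abstract Store*, so nothing downstream can
  // hard-code RadosStore-specific behaviour any more.
  static std::unique_ptr<Store> get_storage() {
    return std::make_unique<RadosStore>();
  }
};

int main() {
  std::unique_ptr<Store> store = StoreManager::get_storage();
  return store->backend_name() == "rados" ? 0 : 1;
}
```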
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
when setting "persistent=false", a non-persistent
topic should be created
Fixes: https://tracker.ceph.com/issues/49552
Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
The message is logged every time a binary not from the Ceph repo's build
directory is executed, which is too often.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
this change improves readability by emphasizing that the next steps
are performed only if a connection to the monitor is established.
Signed-off-by: Kefu Chai <kchai@redhat.com>
for a smaller memory footprint. as we don't have lots of mon_commands in
flight, we are not likely to benefit from an O(log(n)) lookup.
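A minimal sketch of the tradeoff, not the actual crimson MonClient code:
pending commands live in a flat vector and are found with a linear scan,
which is cheap for a handful of entries and avoids the per-node overhead
of a map.
```
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Illustrative only, not the actual crimson MonClient structures.
struct PendingCommand {
  uint64_t tid;
  std::string cmd;
};

class CommandTable {
  std::vector<PendingCommand> pending;  // flat storage, no per-node allocations
public:
  void add(uint64_t tid, std::string cmd) {
    pending.push_back({tid, std::move(cmd)});
  }
  // O(n) scan, but n is the number of in-flight commands, i.e. tiny.
  PendingCommand* find(uint64_t tid) {
    auto it = std::find_if(pending.begin(), pending.end(),
                           [tid](const PendingCommand& c) { return c.tid == tid; });
    return it == pending.end() ? nullptr : &*it;
  }
  void erase(uint64_t tid) {
    pending.erase(std::remove_if(pending.begin(), pending.end(),
                                 [tid](const PendingCommand& c) { return c.tid == tid; }),
                  pending.end());
  }
};

int main() {
  CommandTable t;
  t.add(1, "status");
  const bool found = t.find(1) != nullptr;
  t.erase(1);
  return (found && t.find(1) == nullptr) ? 0 : 1;
}
```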
Signed-off-by: Kefu Chai <kchai@redhat.com>
this behavior matches that of `MonClient::_resend_mon_commands()`. so
far the only user which sends a mon command in crimson is
`OSD::_add_me_to_crush()`, but there is still a (rare) chance that the
connected monitor cannot be reached when we send the command to it; in
that case, we should retry when the connection is re-established.
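In sketch form (simplified, synchronous, and with made-up names; the real
crimson client is future-based), the idea is to keep each command until it
is acknowledged and replay the outstanding ones whenever a session is
established:
```
#include <algorithm>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Made-up, synchronous stand-ins; the real crimson MonClient is future-based.
struct PendingCommand {
  uint64_t tid;
  std::string cmd;
};

class MonCommander {
  std::vector<PendingCommand> pending;
  std::function<void(const PendingCommand&)> send_to_mon;  // may be lost if the session drops
public:
  explicit MonCommander(std::function<void(const PendingCommand&)> sender)
    : send_to_mon(std::move(sender)) {}

  // Record the command first, then attempt to send it on the current session.
  void run_command(uint64_t tid, std::string cmd) {
    pending.push_back({tid, std::move(cmd)});
    send_to_mon(pending.back());
  }

  // Called whenever a session to a monitor is (re-)established: every command
  // still awaiting a reply is sent again, mirroring
  // MonClient::_resend_mon_commands().
  void on_session_established() {
    for (const auto& p : pending) {
      send_to_mon(p);
    }
  }

  // A reply retires the command so it will not be resent later.
  void on_reply(uint64_t tid) {
    pending.erase(std::remove_if(pending.begin(), pending.end(),
                                 [tid](const PendingCommand& p) { return p.tid == tid; }),
                  pending.end());
  }
};

int main() {
  int sends = 0;
  MonCommander mc([&sends](const PendingCommand&) { ++sends; });
  mc.run_command(1, "add me to crush");  // first attempt may be lost
  mc.on_session_established();           // reconnect: the command goes out again
  mc.on_reply(1);
  return sends == 2 ? 0 : 1;
}
```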
Signed-off-by: Kefu Chai <kchai@redhat.com>
MMonCommand uses a vector only for legacy reasons, so there is no need
to expose this via the interface.
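A tiny sketch of the idea with a placeholder type standing in for
MMonCommand: the interface accepts a single command string, and the
one-element vector remains an internal detail of building the message.
```
#include <string>
#include <vector>

// Placeholder type standing in for MMonCommand, which carries a vector of
// command strings purely for legacy wire-format reasons.
struct LegacyCommandMessage {
  std::vector<std::string> cmd;
};

// The public interface takes exactly one command string...
LegacyCommandMessage make_command(std::string cmd) {
  // ...and wrapping it into the one-element vector stays an internal
  // detail of building the message, invisible to callers.
  return LegacyCommandMessage{{std::move(cmd)}};
}

int main() {
  auto m = make_command("osd crush add-bucket host-a host");
  return m.cmd.size() == 1 ? 0 : 1;
}
```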
Signed-off-by: Kefu Chai <kchai@redhat.com>
both of them fall into the category of jobs which we should do after
the connection to the monitor is established.
Signed-off-by: Kefu Chai <kchai@redhat.com>
we always send all pending_messages and clear the list when establishing
a connection to the mon, so there is no need to check for pending
messages when calling `send_message()`.
Signed-off-by: Kefu Chai <kchai@redhat.com>
before this change, we guarded the `send_pendings()` call only in
`Client::send_message()`; after this change, all of the
`send_pendings()` calls are guarded with this check, which is more
consistent.
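The resulting shape is roughly the following sketch (simplified names,
not the actual crimson Client): every `send_pendings()` call sits behind
the same connection check, and establishing the session flushes and
clears the queue.
```
#include <memory>
#include <queue>
#include <string>

// Simplified names, not the actual crimson Client.
struct Connection {};  // stand-in for the messenger connection type

class Client {
  std::shared_ptr<Connection> conn;   // null until the mon session exists
  std::queue<std::string> pending_messages;

  void do_send(const std::string&) { /* hand the message to the messenger */ }

  // Only ever called behind the `if (conn)` guard below; it drains the
  // whole queue, so after it returns pending_messages is empty.
  void send_pendings() {
    while (!pending_messages.empty()) {
      do_send(pending_messages.front());
      pending_messages.pop();
    }
  }

public:
  void send_message(std::string m) {
    pending_messages.push(std::move(m));
    if (conn) {               // same guard as every other call site
      send_pendings();
    }
  }

  void on_session_established(std::shared_ptr<Connection> c) {
    conn = std::move(c);
    if (conn) {               // guard kept here too, for consistency
      send_pendings();        // flush everything queued while disconnected
    }
  }
};

int main() {
  Client client;
  client.send_message("mon_subscribe");  // queued: no session yet
  client.on_session_established(std::make_shared<Connection>());  // flushed here
}
```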
Signed-off-by: Kefu Chai <kchai@redhat.com>
Lack of this feature was the root cause of an issue in
teuthology testing in which a socket failure injection
happened exactly during `mon_subscribe`; after the OSD
reconnected, the message was not resent and the entire
boot process froze.
```
DEBUG 2021-02-25 11:42:53,757 [shard 0] ms - [osd.2(client) v2:172.21.15.204:6804/33459@57376 >> mon.0 v2:172.21.15.204:3300/0] --> #6 === mon_subscribe({osdmap=1}) v3
(15)
DEBUG 2021-02-25 11:42:53,757 [shard 0] ms - authenticated_encrypt_update plaintext.length()=80 buffer.length()=80
DEBUG 2021-02-25 11:42:53,757 [shard 0] ms - authenticated_encrypt_final buffer.length()=96 final_len=0
DEBUG 2021-02-25 11:42:53,757 [shard 0] ms - authenticated_encrypt_update plaintext.length()=48 buffer.length()=48
DEBUG 2021-02-25 11:42:53,757 [shard 0] ms - authenticated_encrypt_update plaintext.length()=16 buffer.length()=64
DEBUG 2021-02-25 11:42:53,757 [shard 0] ms - authenticated_encrypt_final buffer.length()=80 final_len=0
INFO 2021-02-25 11:42:53,758 [shard 0] ms - [osd.2(client) v2:172.21.15.204:6804/33459@57376 >> mon.0 v2:172.21.15.204:3300/0] execute_ready(): fault at READY on lossy
channel, going to CLOSING -- std::system_error (error crimson::net:4, read eof)
```
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
The luarocks conditional had gotten hard to read, and the openSUSE Leap 15.3
build needs lua53 as well.
Signed-off-by: Nathan Cutler <ncutler@suse.com>
* refs/pull/39680/head:
mds: allow `fs authorize` command to work
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/39710/head:
qa: run fs:verify on all distros
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Yuri Weinstein <yweins@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
This PR breaks the "Deploying a New Ceph Cluster"
section into several sub-sections, so that each sub-section
pertains to only one subject. I've also added some explanatory
text that gives the instructions more context than they had before.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
This PR rewrites the section "Bootstrap A New
Cluster" in the Cephadm Guide, in the Install
Chapter. I've broken this section up into the
topics that the content naturally divides into.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
This ensures that daemon messenger nonces don't collide when PIDs are
no longer unique for the IP address.
Signed-off-by: Sage Weil <sage@newdream.net>
If we are in a container, then we do not have a unique pid, and need to
use a random nonce. We normally detect this if our pid is 1, but that
doesn't work when we have an init process--we'll (probably?) have a small
pid (in my tests, the OSDs were getting pid 7).
To be safe, also check for an environment variable set by cephadm.
This avoids problems that arise when we don't have a unique address.
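A rough sketch of the resulting nonce choice; the logic and the
environment-variable name below are illustrative placeholders, since the
commit does not spell them out here:
```
#include <cstdint>
#include <cstdlib>
#include <random>
#include <unistd.h>

// Illustrative sketch, not the actual Ceph nonce code.
uint64_t pick_messenger_nonce() {
  const pid_t pid = getpid();
  // In a container, pids are small and reused across containers on the
  // same host, so a pid-derived nonce can collide with another daemon
  // sharing the host IP.
  const bool in_container =
    pid == 1 ||                                        // we are the container's init
    std::getenv("CEPHADM_CONTAINER_HINT") != nullptr;  // placeholder for the cephadm-set variable
  if (!in_container) {
    return static_cast<uint64_t>(pid);                 // unique per host, stable, cheap
  }
  std::random_device rd;                               // fall back to randomness
  std::mt19937_64 gen(rd());
  return gen();
}

int main() {
  return pick_messenger_nonce() != 0 ? 0 : 0;  // just exercise the function
}
```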
Fixes: https://tracker.ceph.com/issues/49534
Signed-off-by: Sage Weil <sage@newdream.net>