This is done by making metatable names fully qualified names
that contain the entire "path" for reaching them, not
just the name of the object they point to.
With the fix, the code either creates a new metatable,
as in this case:
local o1 = Request.Object
-- new metatable is created to represent the Object in Request.Object
local o2 = Request.CopyFrom.Object
-- new metatable (with different upvalues) is created to represent Request.CopyFrom.Object
print(o1.Name)
print(o2.Name)
or reuses an existing metatable, as in this case:
local o1 = Request.Object
-- new metatable is created to represent the Object in Request.Object
local o2 = Request.Object
-- reuse the same metatable
print(o1.Name)
print(o2.Name)
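The caching idea behind the fix can be sketched as follows (a toy Python model, not the actual C++/Lua binding code; all names here are illustrative):

```python
# Toy model of the fix: cache "metatables" by fully qualified path,
# not by the leaf field name, so Request.Object and
# Request.CopyFrom.Object get distinct entries, while repeated
# lookups of Request.Object share one.
_metatable_cache = {}

def get_metatable(path):
    """Return the cached metatable for this exact path, creating it once."""
    if path not in _metatable_cache:
        _metatable_cache[path] = {"path": path}  # stand-in for a real metatable
    return _metatable_cache[path]

o1 = get_metatable("Request.Object")
o2 = get_metatable("Request.CopyFrom.Object")
assert o1 is not o2      # different paths -> different metatables

o3 = get_metatable("Request.Object")
assert o1 is o3          # same path -> the metatable is reused
```

Keying by the leaf name alone ("Object") would have wrongly returned the same metatable for both paths, which is the bug being fixed.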
Fixes: https://tracker.ceph.com/issues/58412
Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
A change in zipper caused bucket->get_info().owner to return an empty
string, so the Lua value now exposes: bucket->get_owner()->get_id()
Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
The only purpose of test_concurrent_dispatch() is to verify that both
messages are received. Revise test_echo() for the same purpose, and
drop test_concurrent_dispatch().
Signed-off-by: Yingxin Cheng <yingxin.cheng@intel.com>
Because we aren't able to determine cross-core ordering:
* Move ms_handle_connect/accept() to be called in the new shard, so it
  will notify before ms_dispatch() on the same core;
* Introduce another ms_handle_shard_change() callback for when the
  current core is changed.
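The ordering guarantee being relied on can be sketched with a toy model (hypothetical class and names, not the seastar messenger code): events queued on the same per-core queue are delivered in order, while no order is defined across cores.

```python
# Toy model: per-shard FIFO event queues. Queuing ms_handle_connect on
# the NEW shard, before any dispatch is queued there, guarantees the
# connect notification is observed first on that shard.
from collections import defaultdict

class ShardedDispatcher:
    def __init__(self):
        self.queues = defaultdict(list)

    def submit(self, shard, event):
        self.queues[shard].append(event)

    def run_shard(self, shard):
        """Deliver all queued events for one shard, in FIFO order."""
        events, self.queues[shard] = self.queues[shard], []
        return events

d = ShardedDispatcher()
d.submit(1, "ms_handle_connect")
d.submit(1, "ms_dispatch")
assert d.run_shard(1) == ["ms_handle_connect", "ms_dispatch"]
```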
Signed-off-by: Yingxin Cheng <yingxin.cheng@intel.com>
A minor typo fix found while skimming through the cephadm docs:
"will no remove" -> "will not remove".
Signed-off-by: John Mulligan <jmulligan@redhat.com>
extra_container_args were only applied to the rbd_target_api container and not
to the tcmu-runner container.
Signed-off-by: Raimund Sacherer <rsachere@redhat.com>
0.0.2 includes a patch that allows the nvmeof
daemon to use non-admin keyrings, so we should
use it over 0.0.1
Signed-off-by: Adam King <adking@redhat.com>
"igw_id" was leftover from the nvmeof implementation
being taken heavily from the iscsi implementation. "igw"
means nothing in this context, so we can change the name.
Signed-off-by: Adam King <adking@redhat.com>
This is the IP the nvmeof daemon will bind
to, so it should be the IP of the host we're
deploying the nvmeof daemon on, not the IP
of the active mgr
Signed-off-by: Adam King <adking@redhat.com>
Rather than giving full admin privileges,
try to be a bit more strict by limiting it
to profile rbd mon caps and full OSD
privileges for rbd tagged pools. I also wanted
to include an OSD cap like
allow all pool="*" object_prefix "nvmeof.state"
but this caused a failure in the nvmeof daemon:
RADOS permission error (Failed to operate write op for oid nvmeof.None.state)
Signed-off-by: Adam King <adking@redhat.com>
Similar to what is done for iscsi, add a basic deployment
test to make sure we can deploy the daemon and
it comes up in a running state with no issues.
Signed-off-by: Adam King <adking@redhat.com>
The ok-to-stop function works for certain daemons
by checking whether at least a certain number
(typically 1) of daemons are actually running,
and saying it's not ok-to-stop if that won't
be true after the removals. This case breaks down
when all the daemons are in error state, making
it so cephadm will refuse to remove a set of
daemons that aren't even working because they're
not "ok to stop". Since ok-to-stop works in a
yes-or-no fashion, for a case like this, where we
want to be willing to remove a certain subset
(or potentially all currently deployed) daemons,
it's easier to keep this logic as part of applying
the service.
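The failure mode described above can be sketched as follows (a toy model with hypothetical names, not cephadm's actual code):

```python
# Toy model of the yes/no ok-to-stop check: it counts *running* daemons,
# so when every daemon is in error state it always answers "not ok",
# blocking removal of daemons that are not working anyway.
def ok_to_stop(daemons, to_remove, min_running=1):
    running_after = [d for d in daemons
                     if d["state"] == "running" and d["name"] not in to_remove]
    return len(running_after) >= min_running

healthy = [{"name": "a", "state": "running"}, {"name": "b", "state": "running"}]
assert ok_to_stop(healthy, {"b"})        # one running daemon remains -> ok

all_error = [{"name": "a", "state": "error"}, {"name": "b", "state": "error"}]
assert not ok_to_stop(all_error, {"b"})  # refuses, though nothing is running
```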
Signed-off-by: Adam King <adking@redhat.com>
Before, we were just using the client.admin keyring
as a temporary workaround while we figured out
how to get the keyring to work. We should swap
over to using the keyring we actually generated
for the nvmeof daemon.
Signed-off-by: Adam King <adking@redhat.com>
This is going to be used as the rados_id
set when connecting to the cluster using
the keyring we generate for the nvmeof daemon.
The Python librados library defaults the name
to "client.admin", so if we don't provide
a name or rados_id, we'll only be able to
use nvmeof with the "client.admin" keyring.
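The defaulting behavior being worked around can be sketched like this (a toy model of how the binding resolves the client entity name, not the actual librados code):

```python
# Toy model: librados-style resolution of the client entity name.
# With neither argument given, the Python binding effectively
# defaults to "client.admin" -- the behavior this commit avoids
# relying on.
def effective_client_name(name=None, rados_id=None):
    if name and rados_id:
        raise ValueError("name and rados_id are mutually exclusive")
    if name:
        return name                    # full name, e.g. "client.nvmeof"
    if rados_id:
        return "client." + rados_id    # the id gets the "client." prefix
    return "client.admin"              # the default

assert effective_client_name() == "client.admin"
assert effective_client_name(rados_id="nvmeof") == "client.nvmeof"
```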
Signed-off-by: Adam King <adking@redhat.com>
Edit the first part of doc/rados/operations/add-or-rm-mons.rst.
Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
When dumping an op, it may be desirable to alter how it is dumped depending on
which locks are held. As it happens, I plan to dump extra information if the
mds_lock is held!
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
The MDS does not generally bother locking a Mutation before changing
anything so this lock provides weak protection. In any case, try to
improve on that...
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
It is not safe to "cache" a member which may be changed by a racing thread.
This reworks the locking so we can do a light-weight check whether the
description is already generated without acquiring the heavier TrackedOp::lock.
If it's not available yet or it needs to be regenerated, then acquire the
proper locks to generate it.
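The pattern described is essentially a double-checked cache: a cheap check under a light lock first, with the heavier lock taken only on the slow path. A minimal sketch (in Python for brevity; TrackedOp itself is C++, and the names here are illustrative):

```python
import threading

class Op:
    """Toy double-checked cache for a lazily generated description."""
    def __init__(self):
        self._desc = None
        self._desc_lock = threading.Lock()  # light lock guarding the cached string
        self._lock = threading.Lock()       # stand-in for the heavier TrackedOp::lock

    def get_desc(self):
        with self._desc_lock:               # light-weight fast-path check
            if self._desc is not None:
                return self._desc
        with self._lock:                    # heavy lock only on the slow path
            with self._desc_lock:
                if self._desc is None:      # re-check after acquiring the locks
                    self._desc = self._generate()
                return self._desc

    def _generate(self):
        return "op description"

op = Op()
assert op.get_desc() == "op description"
assert op.get_desc() == "op description"    # second call hits the cache
```

The re-check under both locks is what keeps a racing thread from generating the description twice.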
Fixes: e45f5c2c33
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>