All the scripts under qa/workunits/cls except test_cls_cas.sh are
executable. For consistency, add the executable bit to
test_cls_cas.sh as well.
Also, these scripts are launched directly by src/script/gen-corpus.sh,
so it is more convenient if they can simply be called.
Signed-off-by: Kefu Chai <kchai@redhat.com>
If we happen to run this script on a host where /etc/ceph/ceph.conf is
available, the ceph CLI would use that file. So point the CLI at
$PWD/ceph.conf instead.
Signed-off-by: Kefu Chai <kchai@redhat.com>
Changes to the socket code now result in EINVAL being returned.
In the past ENOENT was returned, which is the FreeBSD error code
when a DNS lookup fails.
That change is probably because somewhere in the code the error code
is not passed verbatim from the system call, but is rewritten during
extra evaluation.
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
FreeBSD ceph-dencoder crashes in the exit() calls due to invalid
pointer references while the loaded libraries are being released.
Often this is signaled by libc reporting:
__cxa_thread_call_dtors: dtr 0x47efc0 from unloaded dso, skipping
The cause of this is a difference in behaviour between FreeBSD and Linux:
https://groups.google.com/g/bsdmailinglist/c/22ncTZAbDp4/m/Dii_pII5AwAJ
The FreeBSD implementation here looks racy. If one thread dlcloses an
object while another thread is exiting, we can end up calling a
function at an invalid memory address. It also looks as if it may
be possible to unload one library, load another at the same address,
and end up executing entirely the wrong code, which would have some
serious security implications.
The GNU/Linux equivalent of this function locks the DSO in memory
until all references to it have gone away. A call to dlclose() on
GNU/Linux will not actually unload the library until all threads
with destructors in that library have been unloaded. I believe
that this reuses the same reference counting mechanism that
allows the same library to be dlopened and dlclosed multiple times.
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
Make sure there are more than 1000 incomplete multipart uploads and also
make sure one of those uploads has at least 1000 parts. This
test is done indirectly through rgw-orphan-list, which invokes
`radosgw-admin radoslist`.
Also, clean up shell flags, so script output is less verbose.
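One way the setup could look, as a rough boto3 sketch (the endpoint, bucket
name, and part bodies are illustrative assumptions; the actual test is a shell
workunit driven through rgw-orphan-list):

```python
# Hypothetical setup sketch (endpoint, bucket name, and sizes are illustrative):
# create more than 1000 incomplete multipart uploads, and give one of them at
# least 1000 parts, so `radosgw-admin radoslist` has to page through both lists.
import boto3

s3 = boto3.client('s3', endpoint_url='http://localhost:8000')  # assumed RGW endpoint
bucket = 'orphan-list-test'                                     # assumed bucket name
s3.create_bucket(Bucket=bucket)

# More than 1000 multipart uploads that are never completed or aborted.
for i in range(1005):
    s3.create_multipart_upload(Bucket=bucket, Key=f'incomplete-{i}')

# One incomplete upload with at least 1000 parts; since the upload is never
# completed, tiny part bodies are sufficient here.
big = s3.create_multipart_upload(Bucket=bucket, Key='incomplete-big')
for part_number in range(1, 1001):
    s3.upload_part(Bucket=bucket, Key='incomplete-big', PartNumber=part_number,
                   UploadId=big['UploadId'], Body=b'x')
```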
Signed-off-by: J. Eric Ivancich <ivancich@redhat.com>
* refs/pull/40431/head:
qa/cephfs: remove create_keyring_file from cephfs_test_case.py
qa/cephfs: don't use sudo to write files in /tmp
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
* refs/pull/40389/head:
mds: reject lookup ino requests for mds dirs
test: add test for invalid lookup of mdsdir
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
The docs build now warns about these, for example:
WARNING: Unparseable C cross-reference: '[in]'
Invalid C declaration: Expected identifier in nested name.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
A RepoDigests entry returned by docker|podman image inspect can either include
the docker.io/ prefix or not. For reasons that aren't entirely clear,
this may vary between hosts in a cluster. However, ceph/ceph@sha256:abc...
is the same thing as docker.io/ceph/ceph@sha256:abc..., and should be
treated as such. Otherwise, upgrade can get into a loop where it pulls
the image on a new host, finds the other variant of the repodigests,
sees no overlap, updates target_digests, and restarts. (It will then
find the first variant again on the first host and loop.)
Avoid this by normalizing any docker.io digests by always including the
docker.io/ prefix.
Note that it is technically possible that this assumption is wrong: it
may be that the image that already exists on the local host is from a
different registry in registries.conf's unqualified-search-registries.
However, we don't know which, since this is a search list. In practice,
it should be exceedingly rare that an image that *we* are installing using
a fully-qualified image name will end up having an unqualified repodigest
in the local registry. Hopefully!
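A rough sketch of the normalization idea, with the function name and registry
heuristic being illustrative rather than cephadm's exact code:

```python
# Illustrative sketch of the normalization: 'ceph/ceph@sha256:...' and
# 'docker.io/ceph/ceph@sha256:...' should compare equal, so always add the
# docker.io/ prefix when the repository has no explicit registry host.
DEFAULT_REGISTRY = 'docker.io'  # assumed default, per the commit message

def normalize_image_digest(digest: str) -> str:
    repo = digest.split('@', 1)[0]          # repository part before '@sha256:...'
    first = repo.split('/', 1)[0]
    has_registry = '/' in repo and ('.' in first or ':' in first or first == 'localhost')
    return digest if has_registry else f'{DEFAULT_REGISTRY}/{digest}'

assert (normalize_image_digest('ceph/ceph@sha256:abc')
        == normalize_image_digest('docker.io/ceph/ceph@sha256:abc'))
```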
Fixes: https://tracker.ceph.com/issues/50114
Signed-off-by: Sage Weil <sage@newdream.net>
If we get an unqualified target image, assume it's docker.io. This
ensures that we're passing a fully-qualified target to docker|podman on
the various hosts and don't end up with something different based on the
per-host search path for unqualified image names.
Signed-off-by: Sage Weil <sage@newdream.net>
This is my second attempt to rewrite the
second half of the mclock docs. The first attempt
is enshrined in https://github.com/ceph/ceph/pull/40571,
in which I got cute with git and got burned.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
When an incomplete multipart upload has in excess of 1000 parts,
looping over those parts was not handled properly, causing an infinite
loop. The paging/marker is now handled correctly.
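A hedged illustration of the marker-based paging pattern involved; `list_parts`
and its return shape are hypothetical stand-ins for the real listing call:

```python
# Hypothetical illustration of marker-based paging over a multipart upload's
# parts; `list_parts` stands in for the real listing call and is assumed to
# return (parts, next_marker, is_truncated).
def list_all_parts(list_parts, max_parts=1000):
    parts = []
    marker = 0
    while True:
        batch, next_marker, truncated = list_parts(marker=marker, max_parts=max_parts)
        parts.extend(batch)
        if not truncated:
            return parts
        # Without advancing the marker, the same first `max_parts` entries are
        # requested forever -- the infinite loop this change fixes.
        marker = next_marker
```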
Signed-off-by: J. Eric Ivancich <ivancich@redhat.com>
* refs/pull/40638/head:
mds: do not show the default auth if it's unambiguous
mds: switch to rank number instead
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>