implements the DataProcessor interface by writing its buffers with Aio,
and tracks the set of successful writes so they can be deleted on
failure/cancelation.
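a rough standalone sketch of the idea (the types below are hypothetical,
not the actual rgw::putobj classes):

    // every buffer is submitted as an async write, each object that
    // completes successfully is remembered, and on failure/cancelation
    // the recorded objects are deleted again
    #include <set>
    #include <string>
    #include <system_error>

    struct WriteResult { std::string oid; std::error_code ec; };

    class TrackingWriter {
      std::set<std::string> written;   // objects with a successful write
     public:
      void on_completion(const WriteResult& r) {
        if (!r.ec) {
          written.insert(r.oid);       // remember for potential cleanup
        }
      }
      // on failure/cancelation: delete everything we managed to write
      template <typename DeleteFn>
      void cancel(DeleteFn&& delete_obj) {
        for (const auto& oid : written) {
          delete_obj(oid);
        }
        written.clear();
      }
    };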
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Ceph requires C++17 support from the compiler. This means gcc 7.x is the
minimum supported version.
This also allows the build to fail quickly on Debian 'stretch' with its
gcc 6.3 compiler.
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
MDS beacon upkeep always waits mds_beacon_interval seconds, even when laggy.
Check more frequently for when we stop being laggy, to reduce the likelihood
that the MDS is removed.
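An illustrative sketch of the shorter wait while laggy (not the actual
Beacon code; the divisor is a made-up tunable):

    #include <chrono>
    #include <thread>

    void upkeep_loop(std::chrono::seconds beacon_interval,
                     bool (*is_laggy)(), void (*send_beacon)()) {
      while (true) {
        send_beacon();
        // poll more often while laggy so we notice recovery quickly,
        // instead of always sleeping the full beacon interval
        const auto wait = is_laggy() ? beacon_interval / 4 : beacon_interval;
        std::this_thread::sleep_for(wait);
      }
    }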
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
function `set_stderr_level(-1, -1)` sets m_stderr_log and m_stderr_crash to -1
regardless of whether `err_to_stderr` is set to false or not, so logs will
always be written to stderr. fix it the same way handle_conf_change does.
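a simplified sketch of the guard (hypothetical helper; the level that
actually disables stderr output is left as a parameter rather than assumed):

    // honour err_to_stderr before applying the requested levels,
    // instead of overwriting them unconditionally
    struct StderrLevels { int stderr_log; int stderr_crash; };

    StderrLevels compute_stderr_levels(bool err_to_stderr,
                                       int requested_log, int requested_crash,
                                       int disabled_level) {
      if (!err_to_stderr) {
        // stderr output is disabled: keep both thresholds at the "off" value
        return {disabled_level, disabled_level};
      }
      return {requested_log, requested_crash};
    }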
Signed-off-by: Yan Jun <yan.jun8@zte.com.cn>
- block pgp_num increase if pg_num hasn't increased yet
- make no changes if there are inactive or unknown pgs
- make no changes if there are degraded pgs either. this might be a bit
conservative...
- calculate the magnitude of our adjustment based on the max_misplaced
target (see the sketch below). this assumes a uniform distribution of
objects across pgs, so it is not perfectly accurate, but hopefully close enough.
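A back-of-the-envelope sketch of that step-size calculation (illustrative
only, not the mgr code):

    // assuming objects are spread uniformly across PGs, moving pgp_num by
    // `step` relative to pg_num misplaces roughly step / pg_num of the
    // data, so the largest step that stays under max_misplaced is:
    #include <algorithm>
    #include <cstdint>

    uint32_t max_pgp_step(uint32_t pg_num, double max_misplaced) {
      // e.g. pg_num = 1024, max_misplaced = 0.05  ->  a step of ~51 pgs
      auto step = static_cast<uint32_t>(pg_num * max_misplaced);
      return std::max<uint32_t>(step, 1);  // always make some progress
    }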
Signed-off-by: Sage Weil <sage@redhat.com>
We can't merge until the PGs are stored together. (The mon would stop
us if we tried, but let's not waste time trying.)
Signed-off-by: Sage Weil <sage@redhat.com>
This is some of the same info we get in the json dump from
print_summary -> overall_recovery_summary -> recovery_summary.
Signed-off-by: Sage Weil <sage@redhat.com>
If we are waiting for a PG to merge we can't decrease further, but if we
were in the process of merging and an increase is requested, we can
abort the merge by increasing pg_num_actual whenever we want.
Signed-off-by: Sage Weil <sage@redhat.com>
If we are in premerge (pg_num_pending == pg_num - 1) and abort by
increasing pg_num, we bump the last_force_op_resend_prenautilus since it will
be an interval change for nautilus+.
Signed-off-by: Sage Weil <sage@redhat.com>
Previously we were automatically adjusting pgp_num_target on a
pg_num_target change *only* when decreasing pg_num. Instead, make
pgp_num (continue to) track pg_num if it currently matches. If it is ever
set differently than pg_num, leave it different (unless/until it
matches again).
This is still slightly weird, but I think in practice it is good enough.
In the rare case that the admin manually sets pgp_num to something
different than pg_num, they probably won't also be using automagic
pg_num adjustment that might make them match and start tracking again.
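A standalone sketch of that tracking rule (not the actual OSDMonitor code):

    #include <cstdint>

    // pgp_num_target follows pg_num_target only while the two are equal;
    // once they are set apart they stay independent until they happen to
    // match again
    void on_pg_num_target_change(uint32_t old_pg_num_target,
                                 uint32_t new_pg_num_target,
                                 uint32_t& pgp_num_target) {
      if (pgp_num_target == old_pg_num_target) {
        pgp_num_target = new_pg_num_target;  // keep them matching
      }
      // otherwise: leave pgp_num_target alone
    }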
Signed-off-by: Sage Weil <sage@redhat.com>
The textarea allows horizontal and vertical resize by default. Only the
vertical resize is appropriate for this form.
Fixes: http://tracker.ceph.com/issues/36452
Signed-off-by: Tatjana Dehler <tdehler@suse.com>
In the old ceph version, the buffer advance length was defined as int, but
in the async messenger the real length of the data buffer is defined as
unsigned. Occasionally some message coming back to the MDS from an OSD was
too large, which caused this length to overflow and made the MDS crash.
For compatibility reasons, add an assertion here that fires if the buffer
advance length would overflow.
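A sketch of that check with simplified types (not the real buffer::list
interface):

    #include <cassert>
    #include <climits>

    // the legacy advance() parameter is a signed int, so check that the
    // unsigned length coming from the messenger still fits before
    // narrowing it; an oversized length would wrap negative and corrupt
    // the iterator, so fail loudly instead
    void advance_checked(unsigned len, void (*advance)(int)) {
      assert(len <= static_cast<unsigned>(INT_MAX));
      advance(static_cast<int>(len));
    }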
Fixes: http://tracker.ceph.com/issues/36340
Signed-off-by: Zhi Zhang <zhangz.david@outlook.com>
* refs/pull/24292/head:
qa: add test for rctime on root inode
mds: set rctime on new system inode
mds: small refactor
Reviewed-by: Zheng Yan <zyan@redhat.com>
When building with ccache, distcc, or other compiler wrappers (such
as STLFilt):

    CC='ccache gcc' CXX='ccache g++' cmake /path/to/ceph
    make

the python modules fail to compile, since distutils tries to execute the
wrapper itself without specifying the actual compiler.
Although cmake has a special switch for compiling with ccache
(cmake -DWITH_CCACHE=ON), other tools (distcc) are not supported, and
specifying the compiler as

    CC=/whatever/compiler/is

used to work for decades, and it's a good idea to keep it working.
Signed-off-by: Alexey Sheplyakov <asheplyakov@mirantis.com>
Signed-off-by: Kefu Chai <kchai@redhat.com>
This reverts a27fd9d25c and
b863883ca7.
Quote from Sébastien Han:
> IIRC at some point, we were able to create a device class from the CLI.
> Now it seems that the device class gets created when at least one OSD
> of a particular class starts.
> In ceph-ansible, we create pools after the initial monitors are up and
> we want to assign a device crush class on some of them.
> That's not possible at the moment since there is no device class
> available yet.
> Also, someone might want to create their own device class.
> Something as crazy as running Filestore with a tmpfs osd store, where one
> might want to isolate them.
> I know it's a very limited use case, but still, it could be desired.
See also https://www.spinics.net/lists/ceph-devel/msg41152.html
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
implement the throttling algorithm in terms of rgw::putobj::Aio. this
differs from RGWPutObjProcessor_Aio in that it doesn't wait on the
first pending write to complete before making progress on later ones.
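roughly the shape of the window-based throttle (a standalone sketch with
hypothetical types, not rgw::putobj::Aio itself):

    #include <cstdint>

    // keep issuing writes until the bytes in flight exceed the window,
    // then drain *any* completions to free room, rather than waiting for
    // the oldest write specifically
    class WriteThrottle {
      uint64_t window;       // max bytes allowed in flight
      uint64_t pending = 0;  // bytes currently in flight
     public:
      explicit WriteThrottle(uint64_t window_bytes) : window(window_bytes) {}

      // true if the caller must drain completions before submitting
      // another write of `len` bytes
      bool need_drain(uint64_t len) const { return pending + len > window; }

      void submitted(uint64_t len) { pending += len; }
      void completed(uint64_t len) { pending -= len; }
    };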
Signed-off-by: Casey Bodley <cbodley@redhat.com>
the Aio operations return a ResultList of previous completions that can
be inspected for the error code and object name (the latter is needed to
track created objects that need to be removed on cancelation)
returning results in a list avoids the extra locking that may be
required to poll/wait for a single completion at a time
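a minimal sketch of that batching idea (hypothetical types, not the real
Aio API):

    #include <list>
    #include <mutex>
    #include <string>
    #include <system_error>
    #include <utility>

    struct AioResult { std::string obj; std::error_code result; };
    using ResultList = std::list<AioResult>;

    class Completions {
      std::mutex mutex;
      ResultList completed;
     public:
      void add(AioResult r) {
        std::lock_guard lock{mutex};
        completed.push_back(std::move(r));
      }
      // hand back everything that has completed so far with a single
      // lock/unlock, so callers can inspect error codes and object names
      // without locking once per completion
      ResultList poll() {
        std::lock_guard lock{mutex};
        return std::exchange(completed, {});
      }
    };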
Signed-off-by: Casey Bodley <cbodley@redhat.com>
ChunkProcessor turns the input stream into a series of discrete chunks
before forwarding to the wrapped DataProcessor
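a standalone sketch of the chunking idea (hypothetical types, not the real
ChunkProcessor):

    #include <cstddef>
    #include <functional>
    #include <string>

    // buffer input until a full chunk is available, forward whole chunks
    // downstream, and keep any partial tail for the next call or the
    // final flush
    class Chunker {
      size_t chunk_size;
      std::string pending;                              // partial chunk
      std::function<void(const std::string&)> forward;  // wrapped processor
     public:
      Chunker(size_t size, std::function<void(const std::string&)> next)
        : chunk_size(size), forward(std::move(next)) {}

      void process(const std::string& data) {
        pending += data;
        while (pending.size() >= chunk_size) {
          forward(pending.substr(0, chunk_size));
          pending.erase(0, chunk_size);
        }
      }
      void flush() {  // end of stream: emit the short tail, if any
        if (!pending.empty()) forward(pending);
        pending.clear();
      }
    };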
Signed-off-by: Casey Bodley <cbodley@redhat.com>
adds an abstract DataProcessor interface (analogous to
RGWPutObjDataProcessor) that allows processors to be composed into
pipelines, and a Pipe class to support the existing filters for
compression and encryption
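a simplified sketch of that composition (standalone types, not the actual
rgw::putobj declarations):

    #include <cstdint>
    #include <string>

    // an abstract stage that handles a piece of data at a logical offset
    struct Processor {
      virtual ~Processor() = default;
      virtual int process(std::string data, uint64_t offset) = 0;
    };

    // a pass-through stage; concrete filters (compression, encryption)
    // derive from this and transform the data before handing it on
    class PassThrough : public Processor {
     protected:
      Processor* next;
     public:
      explicit PassThrough(Processor* next) : next(next) {}
      int process(std::string data, uint64_t offset) override {
        return next->process(std::move(data), offset);
      }
    };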
Signed-off-by: Casey Bodley <cbodley@redhat.com>
when bucket reshard completes, rgw_link_bucket() passes the new bucket
instance id down to cls_user, but cls_user_set_buckets_info() does not
change the instance id when it's updating an existing bucket. so when
rgw_user_sync_all_stats() looks up each of the user's buckets, it uses
the original bucket instance id instead of the resharded one and
calculates user stats that may not match the current bucket stats
as a workaround, rgw_user_sync_all_stats() no longer relies on the
bucket instance id it gets from rgw_read_user_buckets(), and instead
calls get_bucket_info() to look up the current instance in the bucket
entrypoint
Signed-off-by: Casey Bodley <cbodley@redhat.com>
if cls_user_set_bucket_info() finds an existing bucket entry, it does
not update its bucket id
Fixes: https://tracker.ceph.com/issues/24505
Signed-off-by: Casey Bodley <cbodley@redhat.com>