These asserts hold only if we are shutting down the BlueStore instance.
But the caller can also be something like "ceph daemon out/osd.1.asok flush_store_cache",
which can trip these asserts as a result.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
- VLAs are supported in GCC and Clang, and are there to stay forever,
  if only to be compatible with all the software that is already
  out there.
- Theoretical debates about VLAs being hard to implement are
  long superseded by the actual implementations
- Before setting this flag it would be required to first start
  work on fixing all the fallout/warnings that will arise from
  setting -Wvla
- Allocating large variables/structures on the stack could be asking
  for trouble, but the chances that ceph tools are going to be running
  on small embedded devices are rather slim.
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
* do not let osd shut itself down, by enlarging osd_max_markdown_count and
  shortening osd_max_markdown_period
* do not shut down all osds in the last test. if all osds are shut down at
  the same time, none of them will get an updated osdmap after noup is
  unset. we should leave at least one of them up, so the gossip protocol
  can kick in and propagate the news to all osds.
Fixes: http://tracker.ceph.com/issues/20174
Signed-off-by: Kefu Chai <kchai@redhat.com>
With the new Beast frontend, RGW now has a small Boost dependency [1] which was
being addressed by statically (and unconditionally) linking *all* the Boost
libraries. This patch ensures that only the necessary Boost components are
linked.
We use the target_link_libraries(<target> <item>...) [2] syntax to ensure that the
library dependencies are transitive: i.e. "when this target is linked into
another target then the libraries linked to this target will appear on the link
line for the other target too."
[1] The boost/asio/spawn.hpp header used by rgw_asio_frontend.cc depends on
boost::coroutine/boost::context
[2] https://cmake.org/cmake/help/v3.3/command/target_link_libraries.html#libraries-for-both-a-target-and-its-dependents
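The change can be sketched roughly as follows (the target name `rgw_a` and the exact component list are illustrative, not necessarily what the Ceph build uses):

```cmake
# Find only the Boost components the Beast/asio frontend actually needs,
# instead of statically linking all of Boost unconditionally.
find_package(Boost REQUIRED COMPONENTS coroutine context)

# Listing the libraries via target_link_libraries() makes the dependency
# transitive: targets that link rgw_a also get these on their link line.
target_link_libraries(rgw_a
  ${Boost_COROUTINE_LIBRARY}
  ${Boost_CONTEXT_LIBRARY})
```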
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Signed-off-by: Kefu Chai <kchai@redhat.com>
This fixes librbd crashes currently observed on master when
debug is on: because `rbd_image_options_t` is typedef-ed to `void *`,
its output operator is picked up when attempting to print out the
address (`void *`) of any object.
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
the creating_pgs added from pgmap might contain pgs whose containing
pools have been deleted. this is fine for the PGMonitor, as it has the
updated pg mapping which is consistent with itself. but it does not work
for OSDMonitor's creating_pgs, whose pg mapping is calculated by
itself. so we need to filter pgmap's creating_pgs against the latest
osdmap.get_pools() when adding them to OSDMonitor's creating_pgs.
Fixes: http://tracker.ceph.com/issues/20067
Signed-off-by: Kefu Chai <kchai@redhat.com>
This is similar to what we do for OSDMonitor::create_initial().
Avoid setting these initial features just for the mon test that verifies
persistent features get set on a full quorum.
Signed-off-by: Sage Weil <sage@redhat.com>
We use this information only for dumps. Stop dumping per-OSD stats as they're
not needed. In order to maintain pool "fullness" information, calculate
the OSDMap-based rule availability ratios on the monitor and include those
values in the PGMapDigest. Also do it whenever we call dump_pool_stats_full()
on the manager.
Signed-off-by: Greg Farnum <gfarnum@redhat.com>
otherwise ceph_test_rados_api_stat: LibRadosStat.PoolStat will always
time out once the cluster is switched to luminous
Signed-off-by: Kefu Chai <kchai@redhat.com>
otherwise ceph_test_rados_api_stat: LibRadosStat.ClusterStat will always
time out once the cluster is switched to luminous
Signed-off-by: Kefu Chai <kchai@redhat.com>
we cannot apply pending_inc twice and expect the same result. in
other words, pg_map.apply_incremental(pending_inc) is not an idempotent
operation.
Signed-off-by: Kefu Chai <kchai@redhat.com>
Use a flat_map with pointers into a buffer with the actual data. For a
decoded mapping, we have just two allocations (one for flat_map and one
for the encoded buffer).
This can get slow if you make lots of incremental changes after the fact,
since flat_map is not efficient for modifications at large sizes. :/
Signed-off-by: Sage Weil <sage@redhat.com>
Also, count "not active" (inactive) pgs instead of active so that we
list "bad" things consistently, and so that 'inactive' is a separate
bucket of pgs from the 'unknown' ones.
Signed-off-by: Sage Weil <sage@redhat.com>
This is a goofy workaround that we're also doing in Mgr::init(). Someday
we should come up with a more elegant solution. In the meantime, this
works just fine!
Signed-off-by: Sage Weil <sage@redhat.com>
We want to drop updates for pgs for pools that don't exist. Keep an
updated set of those pools instead of relying on the previous PGMap
having them instantiated. (The previous map may drift due to bugs.)
Signed-off-by: Sage Weil <sage@redhat.com>
We were doing an incremental per osd stat report; this screws up the
delta stats updates when there are more than a handful of OSDs. Instead,
do it with the same period as the mgr->mon reports.
Signed-off-by: Sage Weil <sage@redhat.com>
If we have a huge pool it may take a while for the PGs to get out of the
queue and be created. If we use the epoch in which the pool was created,
the OSD may have to process a lot of old OSDMaps. If we use the current
epoch (the first epoch in which any OSD learned that this PG should
exist) we limit PastIntervals as much as possible.
It is still possible that we start trying to create a PG but the cluster
is unhealthy for a long time, resulting in a long PastIntervals that
needs to be generated by a primary OSD when it eventually comes up. So
this is only a partial fix.
Partially-fixes: http://tracker.ceph.com/issues/20050
Signed-off-by: Sage Weil <sage@redhat.com>