* refs/remotes/upstream/pull/17657/head:
mds: optimize MDCache::rejoin_scour_survivor_replicas()
mds: fix MDSCacheObject::clear_replica_map
mds: support limiting cache by memory
common: refactor of lru
mds: resolve unsigned coercion compiler warning
common: use safer uint64_t for list size
common: add bytes2str pretty print function
mds: check if waiting is allocated before use
mds: go back to compact_map for replicas
mds: use mempool for cache objects
mds: cleanup replica_map access
common: add alloc_ptr smart pointer
common: add warning on base class use of mempool
common: use atomic uint64_t for counter
Reviewed-by: Zheng Yan <zyan@redhat.com>
ceph df accounts for pool size, so there is no need to do it in the test.
Fixes: http://tracker.ceph.com/issues/21381
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
BlueStore enables CRC by default, so this is a duplicate and gains
no additional benefit.
Turn it off by default, which is good for performance.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This introduces two config parameters:
mds_cache_memory_limit: Sets the soft maximum of the cache to the given
byte count. (Like mds_cache_size, this doesn't actually limit the maximum
size of the cache. It just dictates the steady-state size.)
mds_cache_reservation: This replaces mds_health_cache_threshold everywhere
except the Beacon heartbeat sent to the mons. The idea is to reserve a
portion of memory (5% by default) for operations, which the MDS tries to
always maintain; the MDS begins recalling caps from clients once it starts
dipping into that reservation.
mds_cache_size still limits the cache by inode count but now defaults to 0
(i.e. unlimited). The new, preferred way to specify cache limits is by
memory size; the default is 1GB.
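For illustration, the new knobs in a ceph.conf [mds] section (the values
shown are just the documented defaults):

    [mds]
    # soft limit on cache memory; a steady-state target, not a hard cap
    mds cache memory limit = 1073741824   # 1GB
    # fraction of the limit reserved for operations
    mds cache reservation = .05           # 5%
    # inode-count limit; 0 means unlimited
    mds cache size = 0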
Fixes: http://tracker.ceph.com/issues/20594
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1464976
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/remotes/upstream/pull/17679/head:
qa: get asok path from ceph.conf
qa: use config_path property instead of literal
Reviewed-by: John Spray <john.spray@redhat.com>
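For context, a sketch of deriving the asok path from ceph.conf instead of
hard-coding it (daemon name assumed):

    # ask ceph-conf for the admin socket path of a given daemon
    ceph-conf --name mds.a --show-config-value admin_socket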
test_misc verifies that ceph fs new will not create a filesystem
on a pool that already contains objects. As part of the test, it
inserts a dummy object into a pool and then attempts to use it for
CephFS. This triggers POOL_APP_NOT_ENABLED. Setting the application
metadata for the pool (and having ceph fs new fail because of the
existing metadata) would then exercise a different failure case.
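A rough shell sketch of the case being tested (pool and object names
hypothetical):

    ceph osd pool create meta 8
    ceph osd pool create data 8
    rados -p data put dummy /etc/hosts   # the data pool now holds an object
    ceph fs new myfs meta data           # must fail: pool is not empty

Enabling application metadata on the pool first would make ceph fs new fail
for that other reason instead, masking the case under test.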
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
vstart.sh now defaults to bluestore, so specify filestore (see the sketch after this list)
Set environment for run-standalone.sh and cmake build
Create td/cot_dir as test directory
Crush output format change
Change dir into test directory
Give a little time after pool creation
Check for core files as ceph-helpers.sh does
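For example, filestore can be requested through vstart's config override; a
sketch, assuming the usual build-dir layout:

    cd build
    # -n: new cluster; -o appends extra config parameters to all sections
    MON=1 OSD=3 MDS=0 ../src/vstart.sh -n -d -o 'osd objectstore = filestore'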
Signed-off-by: David Zafman <dzafman@redhat.com>
The newly introduced 'device-class' can be used to separate
different types of devices into different pools, e.g., hdd-pool
for backup data and all-flash-pool for DB applications.
However, if any OSD of the cluster is currently running out
of space (exceeding the predefined 'full' threshold), Ceph
marks the whole cluster as full and prevents writes to all pools,
which turns out to be very wrong.
This patch instead enforces the 'full' control at pool granularity,
leveraging the existing pool quota logic to solve the above problem.
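The pool quota logic referred to is the existing machinery, e.g.:

    # cap one pool; when the quota is reached only this pool stops
    # accepting writes, the rest of the cluster stays writeable
    ceph osd pool set-quota hdd-pool max_bytes 107374182400
    ceph osd pool get-quota hdd-pool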
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Store how roles are mapped and use the mapped role for upgrades during a
later stage.
E.g., mon.a is mapped to mon.mira002 during install; store this mapping,
and during upgrade map it back to the appropriate name to find the hostname
with that role.
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
jewel needs neither filestore nor bluestore as an option, so provide none
when running with the jewel branch.
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
This is to test customer-like upgrade scenarios and to find
any issues that may be related to systemd, packaging, etc.
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
The rbd pool should exist for many rbd tests to work properly; create
the pool right after install succeeds.
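Something along these lines (PG count hypothetical; the application-enable
step applies on luminous and later):

    ceph osd pool create rbd 64
    ceph osd pool application enable rbd rbd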
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
The teuthology machines are periodically running out of space
due to the aggressive log settings.
Fixes: http://tracker.ceph.com/issues/21251
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
We assume below that rerrosd is up, but it may not be when we exit the
loop.
Fixes: http://tracker.ceph.com/issues/21206
Signed-off-by: Sage Weil <sage@redhat.com>
Add support for testing recovery of CephFS metadata into an alternate
RADOS pool, useful as a disaster recovery mechanism that avoids
modifying the metadata in-place.
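Roughly, the recovery flow being exercised; a sketch with pool names and
flags assumed from the disaster-recovery docs of this era, not a verbatim
procedure:

    # rebuild metadata into a fresh pool instead of repairing in place
    ceph osd pool create cephfs_recovery 8
    cephfs-data-scan scan_extents --alternate-pool cephfs_recovery \
        --filesystem cephfs cephfs_data
    cephfs-data-scan scan_inodes --alternate-pool cephfs_recovery \
        --filesystem cephfs --force-corrupt --force-init cephfs_data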
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Remove the alternate pool recovery test from test_data_scan. Newer
commits will place the test in its own file.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
ext4 seems to be a better choice for our purposes -- less test churn and a
rather small, reliable exclude list.
All excluded tests but generic/050 fail even with no krbd in the mix; most
have popped up on the linux-ext4 list at least once.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Different filesystems (and further, different configurations of the
same filesystem) need different exclude lists. Hard-coding the list in
a wrapper script is inflexible.
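For instance, if check's exclude-file switch is available (file name
hypothetical), the list can live alongside the test config instead:

    # one excluded test per line, e.g. generic/050
    ./check -g auto -E ext4.exclude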
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
xfstests is a pain to build on trusty, xenial and centos7 with a single
script. It is also very sensitive to dependencies, which again need to
be managed on all those distros -- different sets of supported commands
and switches, some versions have known bugs, etc.
Download a pre-built, statically linked tarball and use it instead.
The tarball was generated using xfstests-bld by Ted Ts'o, with a number
of tweaks by myself (mostly concerning the build environment).
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
AFAICT ./check doesn't query EXT4_MKFS_OPTIONS or BTRFS_MKFS_OPTIONS,
and we don't need anything special for xfs, so remove all of them to
avoid confusion.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
- snapdir conversion (at-end) stuff
- merge luminous-specific collections that avoided the above back
into their normal locations
Signed-off-by: Sage Weil <sage@redhat.com>