Rearrange the logic to make it easier to measure accumulation.
Instrument the boto request/response loop to count bytes in and out (see the sketch below).
Accumulate byte counts in a usage-like structure.
Compare the actual usage reported by ceph against the locally measured usage.
Report and assert if there are any discrepancies.
Remove the zone placement rule that was newly added at the end: tests should be rerunnable.
Nit: the logic to wait for "delete_obj" is not quite right.
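A minimal sketch of the byte-counting idea, assuming boto2's internal
_mexe request hook; the accumulator name and the Content-Length
arithmetic are illustrative, not the actual patch:

import boto.connection

# Hypothetical accumulator mirroring the shape of a radosgw usage entry.
usage_acc = {'bytes_sent': 0, 'bytes_received': 0}

_orig_mexe = boto.connection.AWSAuthConnection._mexe

def _counting_mexe(self, request, *args, **kwargs):
    # Outgoing: count the request body (assumed here to be a string).
    if request.body:
        usage_acc['bytes_sent'] += len(request.body)
    response = _orig_mexe(self, request, *args, **kwargs)
    # Incoming: trust Content-Length when the server provides it.
    length = response.getheader('content-length')
    if length is not None:
        usage_acc['bytes_received'] += int(length)
    return response

boto.connection.AWSAuthConnection._mexe = _counting_mexe

With every S3 request funnelled through the wrapper, the accumulated
totals can be compared against what `radosgw-admin usage show` reports
at the end of the run.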
Fixes: http://tracker.ceph.com/issues/19870
Signed-off-by: Marcus Watts <mwatts@redhat.com>
* refs/remotes/upstream/pull/16378/head:
doc: remove accidental additions to release notes
qa/cephfs: Fix race in test_volume_client
qa/cephfs: Test filtered df
PendingReleaseNotes: add note about df filtering
client: Support new, filtered MStatfs
objecter: Support new, filtered MStatfs
mon/PGMap stats: Support new, filtered MStatfs
messages: Add optional data pool to MStatfs
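As a rough illustration of what a filtered-df check can look like from
the qa side (the pool name, mount path, and exact `ceph df` JSON fields
are assumptions, not the actual test):

import json
import os
import subprocess

def pool_stats(pool_name):
    # `ceph df --format=json` lists per-pool usage under 'pools'.
    out = subprocess.check_output(['ceph', 'df', '--format=json'])
    for pool in json.loads(out)['pools']:
        if pool['name'] == pool_name:
            return pool['stats']
    raise RuntimeError('pool not found: %s' % pool_name)

# Point a directory's layout at one data pool, then statvfs it; with a
# filtered MStatfs the reported totals should track that pool alone.
subprocess.check_call(['setfattr', '-n', 'ceph.dir.layout.pool',
                       '-v', 'mydata', '/mnt/cephfs/dir'])
st = os.statvfs('/mnt/cephfs/dir')
print(st.f_blocks * st.f_frsize, pool_stats('mydata'))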
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
If the OSD doesn't see IO, it won't flush more pg/osd stats when the
luminous flag is not yet set (legacy pgmonitor mode).
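In a qa run the usual workaround is to generate a little client IO
before polling for fresh stats; a minimal sketch, with the pool name
assumed:

import subprocess

# Drive a short burst of writes so the OSDs have something to report,
# prompting a pg/osd stats flush to the mon even in legacy pgmonitor
# mode.
subprocess.check_call(['rados', '-p', 'rbd', 'bench', '5', 'write',
                       '--no-cleanup'])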
Signed-off-by: Sage Weil <sage@redhat.com>
* refs/remotes/upstream/pull/16714/head:
qa: test export_pin is correct in dumped subtree
mds: print export_pin for dumped subtree
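A check along these lines can parse the admin-socket subtree dump; the
mds name, mount path, and the placement of the export_pin field are
assumptions here:

import json
import subprocess

# Pin a directory to rank 1, then confirm the dumped subtree agrees.
subprocess.check_call(['setfattr', '-n', 'ceph.dir.pin', '-v', '1',
                       '/mnt/cephfs/pinned'])
out = subprocess.check_output(['ceph', 'daemon', 'mds.a', 'get', 'subtrees'])
for subtree in json.loads(out):
    if subtree['dir']['path'] == '/pinned':
        assert subtree['export_pin'] == 1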
Reviewed-by: Douglas Fuller <dfuller@redhat.com>
Reviewed-by: huanwen ren <ren.huanwen@zte.com.cn>
The lifecycle expiration tests are too reliant on timing and have been
failing consistently for a long time.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
This avoids warnings like
grep: Unmatched ( or \(
which occur because we pass the whitelisted string to `egrep -v "$1"` directly, so any regex metacharacters in it are interpreted by egrep.
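The same failure mode is easy to demonstrate in Python, where the
analogous fix is to escape the entry before using it as a pattern (an
illustration of the problem, not the shell fix itself):

import re

# A whitelist entry containing regex metacharacters, as in the warning
# above.
entry = r'Unmatched ( or \('

line = r'grep: Unmatched ( or \('
# re.compile(entry) would choke on the unbalanced '(' just as egrep
# complains; escaping first treats the entry as a literal string.
pattern = re.escape(entry)
print(bool(re.search(pattern, line)))   # True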
Signed-off-by: Kefu Chai <kchai@redhat.com>
I'm seeing sporadic single-thread deadlocks on fio's stat_mutex during
krbd thrash runs:
(gdb) info threads
Id Target Id Frame
* 1 Thread 0x7f89ee730740 (LWP 15604) 0x00007f89ed9f41bd in __lll_lock_wait () from /lib64/libpthread.so.0
(gdb) bt
#0 0x00007f89ed9f41bd in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f89ed9f17b2 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#2 0x00000000004429b9 in fio_mutex_down (mutex=0x7f89ee72d000) at mutex.c:170
#3 0x0000000000459704 in thread_main (data=<optimized out>) at backend.c:1639
#4 0x000000000045b013 in fork_main (offset=0, shmid=<optimized out>, sk_out=0x0) at backend.c:1778
#5 run_threads (sk_out=sk_out@entry=0x0) at backend.c:2195
#6 0x000000000045b47f in fio_backend (sk_out=sk_out@entry=0x0) at backend.c:2400
#7 0x000000000040cb0c in main (argc=2, argv=0x7fffad3e3888, envp=<optimized out>) at fio.c:63
(gdb) up 2
170 pthread_cond_wait(&mutex->cond, &mutex->lock);
(gdb) p mutex.lock.__data.__owner
$1 = 15604
Upgrading fio to 2.21 seems to make these go away.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Review current log messages for consistency, accuracy, and necessity as
part of the usability initiative. First in a series.
Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
Test a cluster with 2 OSDs: stop osd.0; if osd.1 reports the pg stats
during pg peering, the mon will record the pg state as 'peering'. Then
stop osd.1, and the pg state will get stuck in 'stale+peering', which
is unexpected.
Let's wait_for_active() after stopping osd.0 (see the sketch below).
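Sketched in teuthology terms (helper names assumed to match
CephManager; treat this as pseudocode for the ordering, not the exact
patch):

def stop_osds_in_order(manager):
    # manager: a teuthology CephManager for the cluster under test.
    manager.kill_osd(0)
    # Let the pgs finish peering and report active before the second
    # stop, so the mon never records a state that can later go
    # 'stale+peering'.
    manager.wait_for_active()
    manager.kill_osd(1)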
Signed-off-by: huangjun <huangjun@xsky.com>