* refs/pull/24809/head:
os/bluestore: omit redundant '/' in OSD path for ceph-bluestore-tool if
os/bluestore: improve error handling for migrate ops in
qa/standalone/osd-bluefs-volume-ops: remove redundant code.
Reviewed-by: Sage Weil <sage@redhat.com>
* refs/pull/24787/head:
Merge PR #24796 into nautilus
osd: fix heartbeat_reset unlock
Merge PR #24780 into nautilus
Merge PR #24761 into nautilus
Merge PR #24651 into nautilus
osd: fix race between op_wq and context_queue
test: Make sure kill_daemons failure will be easy to find
test: Add flush_pg_stats to make test more deterministic
* refs/pull/24651/head:
test: Make sure kill_daemons failure will be easy to find
test: Add flush_pg_stats to make test more deterministic
Reviewed-by: Neha Ojha <nojha@redhat.com>
This is related to http://tracker.ceph.com/issues/36453. It is far from
a complete solution, but seems like a positive move.
I tested this change by first disabling my browser cache and then using
the /docs endpoint to query /api/dashboard/health. Before compression:
Content-Length: 60748
Time: 615ms
After:
Content-Length: 7505
Time: 92ms
Then, I logged into the dashboard as normal and reloaded the page once I
was in. Some values for the reload operation before compression:
Total page load time: 58.48s
vendor.js Content-Length: 6486025
vendor.js time: 48.09s
After:
Total page load time: 14.55s
vendor.js Content-Length: 1143178
vendor.js time: 4.50s
Signed-off-by: Zack Cerza <zack@redhat.com>
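The gain comes from compressing large JSON and JavaScript responses before they leave the server. A minimal sketch of what enabling this could look like in a CherryPy application (the dashboard's REST server is CherryPy-based; the mount point, handler, and MIME list below are illustrative assumptions, not the actual patch):

    # Hypothetical sketch: turn on CherryPy's gzip tool for JSON/text responses.
    # The 'tools.gzip.*' keys are standard CherryPy options; everything else
    # here is illustrative only.
    import cherrypy

    class Health(object):
        @cherrypy.expose
        @cherrypy.tools.json_out()
        def index(self):
            # Large JSON payloads like /api/dashboard/health benefit the most.
            return {'status': 'HEALTH_OK', 'checks': ['...'] * 1000}

    conf = {
        '/': {
            'tools.gzip.on': True,
            'tools.gzip.mime_types': ['application/json', 'text/*'],
        }
    }

    if __name__ == '__main__':
        cherrypy.quickstart(Health(), '/api/dashboard/health', conf)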
This fixes "TypeError: admin_socket() got an unexpected keyword argument
'timeout'". The value is never used.
Signed-off-by: Zack Cerza <zack@redhat.com>
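For context, the failure mode is just Python rejecting a keyword argument the callee does not declare; a minimal reproduction (the signature below is illustrative, not the real teuthology admin_socket()):

    # Minimal reproduction of the error; the signature is illustrative only.
    def admin_socket(cluster, daemon_type, daemon_id, command):
        return (cluster, daemon_type, daemon_id, command)

    admin_socket('ceph', 'osd', '0', ['status'])          # fine

    try:
        # TypeError: admin_socket() got an unexpected keyword argument 'timeout'
        admin_socket('ceph', 'osd', '0', ['status'], timeout=300)
    except TypeError as err:
        print(err)

    # The fix is simply to stop passing the unused 'timeout' argument.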
If there is a workunit task associated with the same client, the two
tasks will attempt to clone the suite repo to the same directory.
Worse, if the tasks run in parallel, the two clones will clobber each
other.
Fixes: http://tracker.ceph.com/issues/36542
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 5d56014c61)
If there is a workunit task associated with the same client, the two
tasks will attempt to clone the suite repo to the same directory.
Worse, if the tasks run in parallel, the two clones will clobber each
other.
Fixes: http://tracker.ceph.com/issues/36542
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
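One way to avoid the collision is to give each task its own clone directory instead of a shared path; the sketch below only illustrates that idea (the helper name and layout are assumptions, not the actual teuthology change):

    # Hypothetical sketch: clone the suite repo into a per-task directory so
    # concurrent tasks on the same client cannot clobber each other's checkout.
    import os
    import subprocess
    import uuid

    def clone_suite_repo(repo_url, base_dir, task_id=None):
        """Clone repo_url into a directory unique to this task."""
        task_id = task_id or uuid.uuid4().hex[:8]
        dest = os.path.join(base_dir, 'suite-{}'.format(task_id))
        if not os.path.isdir(dest):
            subprocess.check_call(['git', 'clone', '--depth', '1', repo_url, dest])
        return dest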
For EC pools we have a lot of shards, and a 30% probability on each one
means we are very likely to repeatedly fail backfill reservations, long
enough that teuthology gives up waiting.
Signed-off-by: Sage Weil <sage@redhat.com>
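To see why 30% per shard is too aggressive for EC pools: if each shard's reservation is rejected independently with probability 0.3, the chance that at least one shard is rejected is 1 - 0.7^k, which climbs toward certainty as the shard count k grows (the shard counts below are illustrative):

    # Probability of at least one rejected backfill reservation when each of
    # k shards is rejected independently with probability p. Shard counts are
    # illustrative.
    p = 0.30
    for k in (3, 6, 11):
        print('{} shards: {:.1%} chance of at least one rejection'.format(k, 1 - (1 - p) ** k))
    # 3 shards: 65.7%, 6 shards: 88.2%, 11 shards: 98.0% -- with retries this
    # easily outlasts teuthology's wait.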
* refs/pull/24359/head:
qa/tests: update ansible version to 2.6 for master branch testing.
qa/tests: use lvm as default for ceph-ansible testing, this should also work with raw devices
Reviewed-by: Alfredo Deza <adeza@redhat.com>
* refs/pull/24292/head:
qa: add test for rctime on root inode
mds: set rctime on new system inode
mds: small refactor
Reviewed-by: Zheng Yan <zyan@redhat.com>
This reverts a27fd9d25cb2819e25cc48b790c40afac0250464 and
b863883ca783487401fde4f4480ed1d9b093363e.
Quote from Sébastien Han:
> IIRC at some point, we were able to create a device class from the CLI.
Now it seems that the device class gets created when at least one OSD
of a particular class starts.
In ceph-ansible, we create pools after the initial monitors are up and
we want to assign a device crush class on some of them.
That's not possible at the moment since there is no device class available yet.
Also, someone might want to create their own device class.
Something as crazy as running Filestore with a tmpfs OSD store, where one
might want to isolate those OSDs.
I know it's a very limited use case, but still, it could be desired.
See also https://www.spinics.net/lists/ceph-devel/msg41152.html
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This has been shown to corrupt otherwise healthy rocksdb databases. Rename to
make it clear that it is generally not safe to run and should only be used
as a last resort.
Signed-off-by: Sage Weil <sage@redhat.com>