In ShardedThreadPool::shardedthreadpool_worker, the call to _process does
not take shardedpool_lock, so there is a race between
return_waiting_threads and _process: if return_waiting_threads runs
first, _process misses the signal and waits forever.
This may cause ShardedThreadPool::drain to never complete.
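
A minimal sketch of the lost-wakeup pattern described above (illustrative
names only, not the actual ShardedThreadPool code): if the waiter tests
its predicate without holding the mutex, the notifier can fire between
the test and the wait, and the signal is lost.

    #include <condition_variable>
    #include <mutex>

    std::mutex shardedpool_lock;
    std::condition_variable shardedpool_cond;
    bool stop_threads = false;

    // Buggy waiter: checks the flag outside the lock, then blocks.
    void worker_buggy() {
        while (!stop_threads) {          // reads the flag unlocked
            // ... return_waiting_threads() may run right here, setting
            // the flag and signalling before we start waiting ...
            std::unique_lock<std::mutex> l(shardedpool_lock);
            shardedpool_cond.wait(l);    // signal already gone: waits forever
        }
    }

    // Fixed waiter: the predicate is re-checked under the lock by wait().
    void worker_fixed() {
        std::unique_lock<std::mutex> l(shardedpool_lock);
        shardedpool_cond.wait(l, [] { return stop_threads; });
    }

    // Notifier, shaped like return_waiting_threads().
    void return_waiting_threads() {
        std::lock_guard<std::mutex> l(shardedpool_lock);
        stop_threads = true;
        shardedpool_cond.notify_all();
    }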
Signed-off-by: Jianpeng Ma <jianpeng.ma@intel.com>
Why use WaitInterval? There is a comment: "optimistically sleep a moment;
maybe another work item will come along."
In practice we see no benefit from this optimization.
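
A hedged sketch of the difference (generic names, not the exact Ceph
code): the timed wait wakes up periodically just to re-check an empty
queue, whereas a plain wait sleeps until a producer actually signals.

    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <queue>

    std::mutex lock;
    std::condition_variable cond;
    std::queue<int> work_queue;

    // Before: wake up every 100ms even when no work arrives, on the
    // theory that another work item might come along. The extra wakeups
    // just burn CPU re-checking an empty queue.
    void wait_with_interval(std::unique_lock<std::mutex>& l) {
        while (work_queue.empty())
            cond.wait_for(l, std::chrono::milliseconds(100));
    }

    // After: sleep until a producer signals that work was enqueued.
    void wait_plain(std::unique_lock<std::mutex>& l) {
        while (work_queue.empty())
            cond.wait(l);
    }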
Signed-off-by: Jianpeng Ma <jianpeng.ma@intel.com>
set the paxos state to STATE_REFRESH to avoid calling store->flush()
while we are in the async completion thread, which causes a deadlock.
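
A minimal sketch of the deadlock shape (illustrative names, not the
actual Paxos/MonitorDBStore code): flush() queues a marker completion and
waits for the completion thread to run it, so if flush() is itself called
from a completion callback, the thread that must run the marker is the
one blocked waiting for it.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>

    struct Store {
        std::mutex m;
        std::condition_variable cv;
        std::queue<std::function<void()>> completions;

        // Runs on the async completion thread.
        void drain_one() {
            std::function<void()> fn;
            {
                std::unique_lock<std::mutex> l(m);
                cv.wait(l, [&] { return !completions.empty(); });
                fn = std::move(completions.front());
                completions.pop();
            }
            fn();  // if fn() calls flush(), it never returns: deadlock
        }

        void flush() {
            std::mutex fm;
            std::condition_variable fcv;
            bool done = false;
            {
                std::lock_guard<std::mutex> l(m);
                completions.push([&] {
                    std::lock_guard<std::mutex> fl(fm);
                    done = true;
                    fcv.notify_one();
                });
                cv.notify_one();
            }
            std::unique_lock<std::mutex> fl(fm);
            // Called from the completion thread, this can never be satisfied.
            fcv.wait(fl, [&] { return done; });
        }
    };

Marking the paxos state STATE_REFRESH inside the callback defers the
refresh (and its flush) so that it runs outside the completion thread.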
Signed-off-by: Kefu Chai <kchai@redhat.com>
d1ff03b667 Merge pull request #44 from adamemerson/wip-system-includes
4cc4b949ca build: Mark dependency includes as SYSTEM in dmclock
05096c1756 Merge pull request #43 from TaewoongKim/anticipation_conf
f356c45461 Add missing anticipation_timeout argument for PullPriorityQueue constructor
9896448ec5 Merge pull request #42 from tchaikov/wip-cmake
979899ef86 add travis CI on gnu/linux
8a3dabdbee cmake: the built archives are located in ${binary_dir}
ee15ced3e9 cmake: check for include in /usr/include also
git-subtree-dir: src/dmclock
git-subtree-split: d1ff03b667d9551478b2803ea533fc356ca441a9
It is not really our business to debug python, boost, or our other
dependencies. Mark them as system includes.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
* refs/pull/18624/head:
mds: trim 'N' log segments according to how many log segments are there
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
make_writeable() had some logic to pull an old request's snapc forward to
what was in the SnapSet. This has no effect: if we pull forward, the
main block of make_writeable() does not trigger, because no clone is
generated when snapc =~ snapset.
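
A hedged sketch of the relevant guard (simplified, not the exact
make_writeable() code): a clone is only created when the request's snap
context is strictly newer than what the SnapSet has already recorded, so
pulling an old snapc forward to match snapset.seq makes the guard false
and the pull-forward logic a no-op.

    #include <cstdint>
    #include <vector>

    struct SnapContext { uint64_t seq; std::vector<uint64_t> snaps; };
    struct SnapSet     { uint64_t seq; };

    bool needs_clone(const SnapContext& snapc, const SnapSet& snapset) {
        // snaps[0] is the newest snap in the request; clone only if it is
        // newer than anything the SnapSet has already seen
        return !snapc.snaps.empty() && snapc.snaps[0] > snapset.seq;
    }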
Signed-off-by: Sage Weil <sage@redhat.com>
the other copy is located in src/spdk/dpdk, which is always maintained
and used by spdk. So remove this src/dpdk folder; we only need to
maintain one copy.
Signed-off-by: chunmei <chunmei.liu@intel.com>
We have a race condition:
1. RGW client #1: requests an object be deleted.
2. RGW client #1: sends a prepare op to bucket index OSD #1.
3. OSD #1: prepares the op, adding pending ops to the bucket dir entry
4. RGW client #2: sends a bucket listing request to OSD #1
5. RGW client #2: sees that there are pending operations on bucket
dir entry, and calls check_disk_state
6. RGW client #2: check_disk_state sees that the object still exists, so it
sends CEPH_RGW_UPDATE to bucket index OSD (#1)
7. RGW client #1: sends a delete object to object OSD (#2)
8. OSD #2: deletes the object
9. RGW client #2: sends a complete op to bucket index OSD (#1)
10. OSD #1: completes the op
11. OSD #1: receives the CEPH_RGW_UPDATE and updates the bucket index
entry, thereby **RECREATING** it
Solution implemented:
At step #5 the object's dir entry exists. If, when we reach the beginning
of step #11, the object's dir entry no longer exists, we know that the
dir entry was just actively being modified, so we ignore the
CEPH_RGW_UPDATE operation, thereby NOT recreating it.
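
A hedged sketch of the guard (hypothetical names; the real check lives in
the cls_rgw bucket-index class): when the CEPH_RGW_UPDATE arrives, apply
it only if the dir entry still exists; a missing entry means a racing op
just removed it, and recreating it here would resurrect the object.

    #include <map>
    #include <string>

    struct DirEntryUpdate { /* fields elided */ };
    struct DirEntry { void apply(const DirEntryUpdate&) { /* ... */ } };
    struct BucketIndex { std::map<std::string, DirEntry> entries; };

    int handle_rgw_update(BucketIndex& index, const std::string& key,
                          const DirEntryUpdate& update) {
        auto it = index.entries.find(key);
        if (it == index.entries.end()) {
            // entry was removed while the update was in flight: ignore it
            // rather than recreating the deleted object's dir entry
            return 0;
        }
        it->second.apply(update);
        return 0;
    }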
Signed-off-by: J. Eric Ivancich <ivancich@redhat.com>
For auth caps that omit the gid, do not check for a gid match.
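
A minimal sketch of the matching rule (hypothetical names, not the exact
auth-cap code): a cap that never specified a gid should match any gid;
only caps that pin a gid require equality.

    #include <cstdint>
    #include <optional>

    struct AuthCap { std::optional<uint32_t> gid; /* other constraints elided */ };

    bool gid_matches(const AuthCap& cap, uint32_t request_gid) {
        if (!cap.gid)
            return true;               // cap omits the gid: match any
        return *cap.gid == request_gid;
    }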
Fixes: http://tracker.ceph.com/issues/22009
Signed-off-by: Douglas Fuller <dfuller@redhat.com>