Ignore the profile 'directory' field.
This ensures that we can always find plugins even when the cluster
is installed across a mix of distros.
Rename the option to have no osd_ (or mon_) prefix since anybody
may use the ec factory/plugin code.
We still hard-code .libs in the unit tests... sigh.
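For illustration, a minimal sketch of the lookup described above, assuming a
global erasure_code_dir-style option; the names are placeholders, not the
actual ErasureCodePluginRegistry code:

  #include <map>
  #include <string>

  // Resolve the plugin directory from the global option only; the per-profile
  // "directory" field is deliberately ignored so every daemon looks in the
  // same place, even on a cluster installed across a mix of distros.
  std::string plugin_directory(const std::map<std::string, std::string>& profile,
                               const std::string& erasure_code_dir) {
    (void)profile;  // the profile may still carry "directory", but it is unused
    return erasure_code_dir;
  }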
Signed-off-by: Sage Weil <sage@redhat.com>
There is one new plugin (shec). When upgrading a cluster, there
must be a safeguard against the following scenario:
* the mons are upgraded but not the OSDs
* a new pool is created using the shec plugin
* the OSDs fail to load the shec plugin because they have not been
  upgraded
A feature bit is added: PLUGINS_V3. The monitor will only agree to
create an erasure code profile for the shec plugin if all OSDs
support PLUGINS_V3. Once such an erasure code profile is stored in the
OSDMap, an OSD can only boot if it supports the PLUGINS_V3 feature,
which means it is able to load the shec plugin.
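Roughly, the two gates look like this (the feature bit value and the helper
names are placeholders for illustration, not the actual monitor or OSD code):

  #include <cstdint>

  const uint64_t PLUGINS_V3 = 1ULL << 0;  // placeholder bit for the sketch

  // Monitor side: refuse to create a shec erasure code profile unless every
  // OSD advertises PLUGINS_V3 (all_osd_features is the intersection of the
  // features reported by the OSDs).
  bool can_create_shec_profile(uint64_t all_osd_features) {
    return (all_osd_features & PLUGINS_V3) != 0;
  }

  // OSD boot side: once a profile requiring PLUGINS_V3 is in the OSDMap, an
  // OSD lacking the feature must not boot, since it cannot load shec.
  bool can_osd_boot(bool osdmap_requires_plugins_v3, uint64_t osd_features) {
    return !osdmap_requires_plugins_v3 || (osd_features & PLUGINS_V3);
  }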
The monitors will only activate the PLUGINS_V3 feature if all monitors
in the quorum support it. It protects against the following scenario:
* the leader is upgraded but the peons are not
* the leader creates a pool with plugin=shec because all OSDs have
  the PLUGINS_V3 feature
* the leader goes down and a non-upgraded peon becomes the leader
* an old OSD tries to join the cluster
* the new leader will let the OSD boot because it does not contain
  the logic that would exclude it
* the old OSD will fail when required to load the plugin shec
This is going to be needed each time new plugins are added, which is
impractical. More generic plugin upgrade support should be added
instead, as described in http://tracker.ceph.com/issues/7291.
See also 9687150cea for the PLUGINS_V2
implementation.
http://tracker.ceph.com/issues/10887
Fixes: #10887
Signed-off-by: Loic Dachary <ldachary@redhat.com>
The leveldb cache does not perform well under some conditions,
so we need a cache in our own stack.
* add an option "mon_osd_cache_size" to control the size of the
  MonitorDBStore cache.
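For illustration only, a minimal sketch of such a bounded read cache sitting
in front of the backing store; this is not the actual MonitorDBStore code:

  #include <cstddef>
  #include <list>
  #include <optional>
  #include <string>
  #include <unordered_map>
  #include <utility>

  // A bounded LRU read cache; max_ would come from mon_osd_cache_size.
  class SimpleLRU {
   public:
    explicit SimpleLRU(std::size_t max) : max_(max) {}

    void put(const std::string& k, const std::string& v) {
      auto it = index_.find(k);
      if (it != index_.end())
        order_.erase(it->second);            // replace an existing entry
      order_.push_front({k, v});
      index_[k] = order_.begin();
      if (order_.size() > max_) {            // evict the least recently used
        index_.erase(order_.back().first);
        order_.pop_back();
      }
    }

    std::optional<std::string> get(const std::string& k) {
      auto it = index_.find(k);
      if (it == index_.end())
        return std::nullopt;                 // miss: fall back to leveldb
      order_.splice(order_.begin(), order_, it->second);  // mark recently used
      return it->second->second;
    }

   private:
    std::size_t max_;
    std::list<std::pair<std::string, std::string>> order_;
    std::unordered_map<std::string,
        std::list<std::pair<std::string, std::string>>::iterator> index_;
  };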
Fixes: #12638
Signed-off-by: Kefu Chai <kchai@redhat.com>
In boost 1.49, BOOST_SCOPE_EXIT() does not accept capture_tuple;
only the `(capture) (capture) ...` sequence form is supported.
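For reference, both spellings (the sequence form is what boost 1.49 accepts;
the comma-separated form needs a newer boost with variadic macro support):

  #include <boost/scope_exit.hpp>
  #include <cstdio>

  void demo() {
    std::FILE* fp = std::fopen("/tmp/demo", "w");
    int ret = 0;

    // Accepted by boost 1.49: captures form a Boost.Preprocessor sequence.
    BOOST_SCOPE_EXIT((&fp)(&ret)) {
      if (fp)
        std::fclose(fp);
      (void)ret;
    } BOOST_SCOPE_EXIT_END

    // Newer boost also takes a comma list:
    //   BOOST_SCOPE_EXIT(&fp, &ret) { ... } BOOST_SCOPE_EXIT_END
  }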
Signed-off-by: Kefu Chai <kchai@redhat.com>
Only the primary PG is allowed to remove all the hit set objects,
and only when the PG is in the active or peered state.
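Roughly (the names are placeholders for the real PG state checks, not the
actual code path):

  struct PGStateSketch {
    bool primary;
    bool active;
    bool peered;
  };

  // The hit set objects may only be removed by the primary, and only while
  // the PG is active or peered.
  bool may_remove_hit_set_objects(const PGStateSketch& pg) {
    return pg.primary && (pg.active || pg.peered);
  }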
Signed-off-by: Zhiqiang Wang <zhiqiang.wang@intel.com>
A cache pool object is either dirty or not. It's unlikely the agent
will do both flush and evict at the same time for an object.
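In sketch form (illustrative only, not the agent code itself):

  enum class AgentAction { FLUSH, EVICT };

  // A dirty object is a flush candidate; a clean object is an evict candidate.
  // The agent never needs to do both for the same object in one pass.
  AgentAction choose_action(bool object_is_dirty) {
    return object_is_dirty ? AgentAction::FLUSH : AgentAction::EVICT;
  }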
Signed-off-by: Zhiqiang Wang <zhiqiang.wang@intel.com>
This is to avoid the extreme case where the agent continuously
flushes but never evicts, which may cause the cache pool to become full.
Signed-off-by: Zhiqiang Wang <zhiqiang.wang@intel.com>
This change introduces handling for the encoding-type request
parameter on the get bucket operation. An object key may contain
characters which are not supported in XML. Passing the value "url" for
the encoding-type parameter will cause the key to be urlencoded in the
response.
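A minimal sketch of that behaviour, with hypothetical helper names (the real
radosgw response formatting is more involved):

  #include <cctype>
  #include <cstdio>
  #include <string>

  // Percent-encode characters that cannot be carried safely in the XML body.
  std::string url_encode(const std::string& key) {
    std::string out;
    for (unsigned char c : key) {
      if (std::isalnum(c) || c == '-' || c == '_' || c == '.' ||
          c == '~' || c == '/') {
        out += static_cast<char>(c);
      } else {
        char buf[4];
        std::snprintf(buf, sizeof(buf), "%%%02X", static_cast<unsigned>(c));
        out += buf;
      }
    }
    return out;
  }

  // Only encode when the client asked for it with encoding-type=url.
  std::string format_key(const std::string& key,
                         const std::string& encoding_type) {
    return encoding_type == "url" ? url_encode(key) : key;
  }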
Fixes: #12735
Signed-off-by: Jeff Weber <jweber@cofront.net>
Keep the architecture-sensitive code in a separate header.
Avoid duplicating the unrolled memcpy in each buffer.cc
method.
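The shape of the change, as an illustrative sketch (not the actual header):

  #include <cstddef>
  #include <cstring>

  // One shared inline helper: small copies take the inline loop (which a real
  // header would unroll and tune per architecture), larger ones fall back to
  // memcpy(). buffer.cc methods call this instead of each repeating the loop.
  inline void small_buffer_copy(char* dst, const char* src, std::size_t len) {
    const std::size_t inline_threshold = 32;
    if (len > inline_threshold) {
      std::memcpy(dst, src, len);
      return;
    }
    for (std::size_t i = 0; i < len; ++i)
      dst[i] = src[i];
  }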
Signed-off-by: Sage Weil <sage@redhat.com>
We need to do this when we first read the version, before we
proceed with the mount. By the time we get to upgrade() it is too
late (the DBObjectMap may have already tried a conversion, the journal
may have replayed, etc.).
Signed-off-by: Sage Weil <sage@redhat.com>
Force *all* OSDs to upgrade to hammer before allowing post-hammer
OSDs to join. This prevents any pre-hammer OSDs from running at
the same time as a post-hammer OSD.
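Schematically (the feature names and the helper are placeholders, not the
actual OSDMonitor logic):

  #include <cstdint>

  const uint64_t FEATURE_HAMMER      = 1ULL << 0;  // placeholder bits
  const uint64_t FEATURE_POST_HAMMER = 1ULL << 1;  // the sentinel feature

  // A post-hammer OSD may only join once every OSD already up advertises the
  // hammer feature (up_osd_features is the intersection across up OSDs), so
  // pre-hammer and post-hammer OSDs never run side by side.
  bool allow_osd_boot(uint64_t booting_osd_features, uint64_t up_osd_features) {
    if ((booting_osd_features & FEATURE_POST_HAMMER) &&
        !(up_osd_features & FEATURE_HAMMER))
      return false;
    return true;
  }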
This commit, as well as the definition of the sentinel post-hammer
feature, should get backported to hammer stable series.
Backport: hammer
Signed-off-by: Sage Weil <sage@redhat.com>