In practice, the map will remain pinned for a while, but this
will make Coverity happy.
*** CID 1231685: Use after free (USE_AFTER_FREE)
/osd/OSD.cc: 6223 in OSD::handle_osd_map(MOSDMap *)()
6217
6218 if (o->test_flag(CEPH_OSDMAP_FULL))
6219 last_marked_full = e;
6220 pinned_maps.push_back(add_map(o));
6221
6222 bufferlist fbl;
>>> CID 1231685: Use after free (USE_AFTER_FREE)
>>> Calling "encode" dereferences freed pointer "o".
6223 o->encode(fbl);
6224
6225 hobject_t fulloid = get_osdmap_pobject_name(e);
6226 t.write(coll_t::META_COLL, fulloid, 0, fbl.length(), fbl);
6227 pin_map_bl(e, fbl);
6228 continue;
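One way to address the warning, assuming add_map() may delete 'o' when
the epoch is already cached and return the cached map instead (a sketch,
not necessarily the exact fix): encode via the returned reference rather
than the raw pointer.

  OSDMapRef pinned = add_map(o);  // may free 'o', returns a held ref
  pinned_maps.push_back(pinned);

  bufferlist fbl;
  pinned->encode(fbl);            // encode via the pinned reference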
Signed-off-by: Sage Weil <sage@redhat.com>
This causes a build failure in the latest Fedora builds:
ceph_test_librbd_fsx adds the -Wno-format cflag, but the default
AM_CFLAGS already contain -Werror=format-security. In previous releases
this combination was tolerated, but in the latest Fedora rawhide it no
longer is. ceph_test_librbd_fsx builds fine without -Wno-format on
x86_64, so there is likely no need for the flag anymore.
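A hypothetical sketch of the change, assuming the flag is set alongside
AM_CFLAGS in the test Makefile.am (exact file and variable may differ):

  -ceph_test_librbd_fsx_CFLAGS = -Wno-format ${AM_CFLAGS}
  +ceph_test_librbd_fsx_CFLAGS = ${AM_CFLAGS}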
Signed-off-by: Boris Ranto <branto@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Otherwise, an objecter callback might still be hanging
onto this reference until after the flush.
Fixes: #8894
Introduced: 589b639af7
Signed-off-by: Samuel Just <sam.just@inktank.com>
Remove the two old Wireshark plugins. They do not build and are
superseded by the dissector that ships inside Wireshark.
Signed-off-by: Kevin Cox <kevincox@kevincox.ca>
Often there will be a CRUSH rule present for erasure coding that uses the
new CRUSH steps or indep mode. If these rules are not referenced by any
pool, we do not need clients to support the mapping behavior. This is true
because the encoding has not changed; only the expected CRUSH output.
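A hedged sketch of the policy, in the spirit of OSDMap::get_features()
(names approximate; the per-rule is_v2_rule() check is added in the next
commit):

  uint64_t features = 0;
  for (map<int64_t,pg_pool_t>::const_iterator p = pools.begin();
       p != pools.end(); ++p) {
    int ruleid = crush->find_rule(p->second.crush_ruleset,
                                  p->second.get_type(), p->second.size);
    if (ruleid >= 0 && crush->is_v2_rule(ruleid))
      features |= CEPH_FEATURE_CRUSH_V2;  // only when a pool uses it
  }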
Fixes: #8963
Backport: firefly
Signed-off-by: Sage Weil <sage@redhat.com>
Add methods to check if a *specific* rule uses v2 or v3 features. Refactor
the existing checks to use these.
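A sketch of what the v2 check could look like, assuming the usual
crush_rule step layout (the v3 variant would scan for the newer step
ops in the same way):

  bool CrushWrapper::is_v2_rule(unsigned ruleid) const
  {
    if (ruleid >= crush->max_rules || !crush->rules[ruleid])
      return false;
    crush_rule *r = crush->rules[ruleid];
    for (unsigned j = 0; j < r->len; j++) {
      // indep choose and SET_*_TRIES steps need v2-aware clients
      if (r->steps[j].op == CRUSH_RULE_CHOOSE_INDEP ||
          r->steps[j].op == CRUSH_RULE_CHOOSELEAF_INDEP ||
          r->steps[j].op == CRUSH_RULE_SET_CHOOSE_TRIES ||
          r->steps[j].op == CRUSH_RULE_SET_CHOOSELEAF_TRIES)
        return true;
    }
    return false;
  }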
Signed-off-by: Sage Weil <sage@redhat.com>
We verify peons are contiguous and share new paxos states to catch peons
up at the end of the round. Do this each time we (potentially) get new
states via a collect message. This will allow peons to be pulled forward
and remain contiguous when they otherwise would not have been able to.
For example, if
mon.0 (leader) 20..30
mon.1 (peon) 15..25
mon.2 (peon) 28..40
If we got mon.1 first and then mon.2 second, we would store the new txns
and then boot mon.1 out at the end because 15..25 is not contiguous with
28..40. However, with this change, we share 26..30 to mon.1 when we get
the collect, and then 31..40 when we get mon.2's collect, pulling them
both into the final quorum.
It also breaks the 'catch-up' work into smaller pieces, which ought to
smooth out latency a bit.
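A standalone toy (not the actual Paxos code) that walks the numbers
above:

  #include <algorithm>
  #include <cstdio>

  struct Range { int first, last; };  // committed versions, inclusive

  // share (peer.last, leader.last] with a lagging peer that overlaps
  void catch_up(Range &peer, const Range &leader) {
    if (peer.last < leader.last && peer.last + 1 >= leader.first)
      peer.last = leader.last;
  }

  int main() {
    Range leader{20, 30}, mon1{15, 25}, mon2{28, 40};
    catch_up(mon1, leader);                          // mon.1: 15..30
    leader.last = std::max(leader.last, mon2.last);  // leader: ..40
    catch_up(mon1, leader);                          // mon.1: 15..40
    printf("mon.1 is now %d..%d\n", mon1.first, mon1.last);
  }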
Signed-off-by: Sage Weil <sage@redhat.com>
During the collect phase we verify that each peon has versions
overlapping or contiguous with ours (and can therefore be caught up
with some series of transactions). However, we *also* assimilate any
new states we
get from those peers, and that may move our own first_committed forward
in time. This means that an early responder might have originally been
contiguous, but a later one moved us forward, and when the round finished
they were not contiguous any more. This leads to a crash on the peon
when they get our first begin message.
For example:
- we have 10..20
- first peon has 5..15
- ok!
- second peon has 18..30
- we apply this state
- we are now 18..30
- we finish the round
- send commit to first peon (empty.. we aren't contiguous)
- send no commit to second peon (we match)
- we send a begin for state 31
- first peon crashes (its lc is still 15)
Prevent this by checking at the end of the round if we are still
contiguous. If not, bootstrap. This is similar to the check we do above,
but reversed, to make sure *we* aren't too far ahead of *them*.
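A sketch of the end-of-round guard in handle_last() (structure
approximate):

  for (map<int,version_t>::iterator p = peer_last_committed.begin();
       p != peer_last_committed.end(); ++p) {
    if (p->second + 1 < first_committed && first_committed > 1) {
      // this peon is no longer contiguous with us and cannot be
      // caught up by a series of transactions; start over
      mon->bootstrap();
      return;
    }
  }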
Fixes: #9053
Signed-off-by: Sage Weil <sage@redhat.com>
If the remap vector is not empty, use it to figure out the sequence of
data chunks.
http://tracker.ceph.com/issues/9025
Fixes: #9025
Signed-off-by: Loic Dachary <loic@dachary.org>
Each D letter is a data chunk. For instance:
_DDD_DDD
is going to parse into:
[ 1, 2, 3, 5, 6, 7 ]
The 0 and 4 positions are not used by chunks and do not show in the
mapping. Implement ErasureCode::parse to support a reasonable default
for the mapping parameter.
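A standalone toy of the string-to-index step described above (the real
ErasureCode::parse takes the plugin's parameter map; this only shows
the mapping parse):

  #include <iostream>
  #include <string>
  #include <vector>

  std::vector<int> parse_mapping(const std::string &mapping) {
    std::vector<int> chunk_index;
    for (size_t i = 0; i < mapping.size(); ++i)
      if (mapping[i] == 'D')         // 'D' marks a data chunk position
        chunk_index.push_back(i);
    return chunk_index;
  }

  int main() {
    for (int i : parse_mapping("_DDD_DDD"))
      std::cout << i << " ";         // prints: 1 2 3 5 6 7
    std::cout << std::endl;
  }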
Signed-off-by: Loic Dachary <loic@dachary.org>
Add support for erasure code plugins that do not sequentially map the
chunks encoded to the corresponding index. This is mostly transparent to
the caller, except when it comes to retrieving the data chunks when
reading. For this purpose there needs to be a remapping function so the
caller has a way to figure out which chunks actually contain the data
and reorder them.
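A standalone toy of that remapping step (not the plugin API itself):
given decoded chunks keyed by on-disk chunk index, walk the mapping to
recover the data in its original order.

  #include <iostream>
  #include <map>
  #include <string>
  #include <vector>

  int main() {
    // data chunk positions derived from "_DDD_DDD"
    std::vector<int> chunk_mapping = {1, 2, 3, 5, 6, 7};
    std::map<int, std::string> decoded = {
      {1, "AA"}, {2, "BB"}, {3, "CC"}, {5, "DD"}, {6, "EE"}, {7, "FF"}};
    std::string data;
    for (int chunk : chunk_mapping)  // chunks in data order
      data += decoded[chunk];
    std::cout << data << std::endl;  // AABBCCDDEEFF
  }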
Signed-off-by: Loic Dachary <loic@dachary.org>
While calling index->collection_version, there is no need to
hold the WLock at the index level; an RLock should be sufficient.
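A sketch of the change (lock and accessor names assumed):

  // reading the collection version does not mutate the index,
  // so a reader lock suffices
  RWLock::RLocker l(index->access_lock);  // previously a write lock
  uint32_t version = index->collection_version();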
Signed-off-by: Somnath Roy <somnath.roy@sandisk.com>
In lfn_open() there is no point in building the Index if the
cache lookup is successful and the caller is not asking for the Index.
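A hedged sketch of the short-circuit (structure and signature
approximate):

  // in lfn_open(), before building the Index:
  if (!replaying) {
    *outfd = fdcache.lookup(oid);
    if (*outfd && !index) {
      // cache hit and the caller did not ask for the Index
      return 0;
    }
  }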
Signed-off-by: Somnath Roy <somnath.roy@sandisk.com>
IndexManager now has an Index cache. An Index will only be created if it
is not found in the cache. Earlier, each op created an Index object, and
other ops requesting the same index had to wait until the previous op
was done. Also, after the lookup finished, this Index object was
destroyed.
Now, an Index cache has been implemented to persist these Indexes, since
creating and destroying them on every op was a major performance hit. An
RWLock has been introduced in the CollectionIndex class and is
responsible for synchronizing lookup and create.
Also, since these Index objects are persistent, there is no need to use
smart pointers. So, Index is now a wrapper class around a
CollectionIndex*. It is now the responsibility of the users of Index to
lock explicitly before using them. The Index object is sufficient for
locking; there is no need to hold an IndexPath for locking. The function
interfaces of lfn_open and lfn_find are changed accordingly.
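A hedged sketch of the cache path (member and helper names
approximate):

  int IndexManager::get_index(coll_t c, const string &baseDir,
                              Index *index)
  {
    Mutex::Locker l(lock);
    map<coll_t, CollectionIndex*>::iterator it = col_indices.find(c);
    if (it == col_indices.end()) {
      int r = build_index(c, baseDir, index);  // create on a miss
      if (r < 0)
        return r;
      col_indices[c] = index->index;           // cache the raw pointer
    } else {
      index->index = it->second;               // reuse the cached Index
    }
    return 0;
  }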
Signed-off-by: Somnath Roy <somnath.roy@sandisk.com>
With the changes to the shared_cache, we no longer need the fdcache_lock
to prevent us from inserting a second fd for the same hobject into the cache.
Signed-off-by: Greg Farnum <greg@inktank.com>
Merge conflict fixed.
Signed-off-by: Somnath Roy <somnath.roy@sandisk.com>
Conflicts:
src/os/FileStore.cc
This is just a basic sharding. A more sophisticated implementation would
rely on something other than luck for keeping the distribution equitable.
The minimum FDCache shard size is 1.
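A sketch of the shard selection (member names approximate): hash the
object so unrelated objects mostly land on different shards and locks.

  // inside FDCache lookup/add: pick the shard for this object
  SharedLRU<ghobject_t, FD> &shard =
    registry[hoid.hobj.hash % shards];         // shards >= 1
  return shard.lookup(hoid);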
Signed-off-by: Greg Farnum <greg@inktank.com>
Signed-off-by: Somnath Roy <somnath.roy@sandisk.com>
The LRU now handles attempts to insert multiple values for the same key
by telling you that you've done so and returning the existing value,
before the duplicate manages to muck up existing data.
The param 'existed' is not mandatory; its default value is NULL.
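A hedged sketch of the new behavior, simplified from SharedLRU:

  // returns the value now cached for 'key'; if another value was
  // already there, *existed is set and that value is returned
  // instead of inserting the new one
  VPtr add(const K &key, V *value, bool *existed = NULL) {
    Mutex::Locker l(lock);
    typename map<K, pair<WeakVPtr, V*> >::iterator p =
      weak_refs.find(key);
    if (p != weak_refs.end()) {
      if (existed)
        *existed = true;
      return p->second.first.lock();  // keep the existing value
    }
    if (existed)
      *existed = false;
    VPtr val(value, Cleanup(this, key));
    weak_refs[key] = make_pair(WeakVPtr(val), value);
    lru_add(key, val);
    return val;
  }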
Signed-off-by: Greg Farnum <greg@inktank.com>
Signed-off-by: Somnath Roy <somnath.roy@sandisk.com>
A new param to check whether the object requires restriping, by
checking whether a specific object stripe is bigger than the specified
size. By default it is set to 0, in which case the object will always
be restriped. Setting it to 4M + 1 will make sure that only the objects
that weren't striped before (using default settings) will be restriped.
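A sketch of how the threshold is meant to behave (helper name
hypothetical):

  // with min_stripe_size == 0 any non-empty stripe qualifies, so
  // everything is restriped; with 4M + 1, only stripes larger than
  // the old 4M default (i.e. never-striped objects) qualify
  bool need_restripe(uint64_t stripe_size, uint64_t min_stripe_size) {
    return stripe_size > min_stripe_size;
  }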
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
Description: Currently the setmaxosd command allows removal of OSDs by
providing a number less than the current max OSD number. This abrupt
removal of OSDs causes data loss, as well as kernel panics when kernel
RBDs are involved. The fix is to avoid removal of OSDs if any of the
OSDs in the range between the current max OSD number and the new max
OSD number is part of the cluster.
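A hedged sketch of the guard, in the spirit of OSDMonitor's setmaxosd
handling (exact code differs):

  // refuse to shrink past any OSD that still exists in the map
  for (int i = newmax; i < osdmap.get_max_osd(); i++) {
    if (osdmap.exists(i)) {
      err = -EBUSY;
      ss << "cannot shrink max_osd to " << newmax
         << " because osd." << i << " is still in use";
      goto reply;
    }
  }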
Fixes: #8865
Signed-off-by: Anand Bhat <anand.bhat@sandisk.com>
Fixes: #9089
copy_obj_data was not using the current object write infrastructure,
which meant that the resulting objects weren't striped.
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
Fixes: #9039
Backport: firefly
The new manifest does not provide a way to put the head and the tail in
separate pools. In any case, if an object is copied between buckets in
different pools, we may really just want the object to be copied, rather
than reference counted.
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>