This was an attempt to ensure that we didn't let removed_snaps slip by
when we had a discontiguous stream of OSDMaps. In octopus, this can still
happen, but it's mostly harmless--the OSDs will periodically scrub to
clean up any resulting stray clones. It's not worth the complexity.
Signed-off-by: Sage Weil <sage@redhat.com>
This is a naive one-shot implementation that does the full scan synchronously
in the command thread. It shouldn't block any IO except to the extent
that it will compete for IO reading the underlying snapmapper omap object.
When we discover mapped objects that are covered by ranges of snaps that
should be purged, we requeue the snapid for trim on the relevant PG(s).
For these 'repeat' trims we skip the final step(s) to mark the snapid as
purged, since that presumably already happened some time ago.
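Very roughly, the scan works like the sketch below. All names and types
here are illustrative stand-ins (not the real OSD code); the actual
implementation walks the snapmapper omap object and the purged-snap
intervals recorded alongside the osdmaps.

    #include <cstdint>
    #include <map>
    #include <set>
    #include <string>
    #include <tuple>
    #include <utility>

    // (pool, snap, object) entries still present in the snapmapper
    using Mapping = std::tuple<int64_t, uint64_t, std::string>;
    // per-pool purged intervals as [begin, end)
    using Purged = std::map<int64_t, std::set<std::pair<uint64_t, uint64_t>>>;

    // Return the (pool, snap) pairs that were purged but still have mapped
    // objects, i.e. the snapids we need to requeue for trim on their PG(s).
    std::set<std::pair<int64_t, uint64_t>>
    find_snaps_to_requeue(const std::set<Mapping>& mapped, const Purged& purged) {
      std::set<std::pair<int64_t, uint64_t>> out;
      for (const auto& [pool, snap, obj] : mapped) {
        auto p = purged.find(pool);
        if (p == purged.end())
          continue;
        for (const auto& [begin, end] : p->second) {
          if (snap >= begin && snap < end) {
            out.emplace(pool, snap);  // supposedly purged, but still mapped
            break;
          }
        }
      }
      return out;
    }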
Signed-off-by: Sage Weil <sage@redhat.com>
The test creates a snap, removes it, waits for it to (hopefully) purge,
and then uses that snapid in a snapc to generate a clone.
This isn't a complete test because (1) it doesn't wait for the purge to
happen (e.g., by watching the osdmaps go by), and (2) it doesn't trigger
an osd scrub_purged_snaps afterwards.
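A rough librados sketch of that flow (the pool and object names are made
up and error handling is omitted; the real test drives this through the
usual test harness):

    #include <rados/librados.hpp>
    #include <vector>

    int main() {
      librados::Rados cluster;
      cluster.init(nullptr);             // default client.admin
      cluster.conf_read_file(nullptr);   // pick up ceph.conf
      cluster.connect();

      librados::IoCtx ioctx;
      cluster.ioctx_create("test-pool", ioctx);  // hypothetical pool

      // write the object once so there is something to clone later
      librados::bufferlist bl;
      bl.append("old data");
      ioctx.write_full("obj", bl);

      // create and immediately remove a self-managed snap
      uint64_t snapid = 0;
      ioctx.selfmanaged_snap_create(&snapid);
      ioctx.selfmanaged_snap_remove(snapid);

      // (the complete test should wait here for the snap to be purged)

      // keep using the removed snapid in the write snap context; the next
      // overwrite then generates a clone for that snap
      std::vector<uint64_t> snaps = {snapid};
      ioctx.selfmanaged_snap_set_write_ctx(snapid, snaps);

      librados::bufferlist bl2;
      bl2.append("new data");
      ioctx.write_full("obj", bl2);
      return 0;
    }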
Signed-off-by: Sage Weil <sage@redhat.com>
This path only triggers after an upgrade or osd creation, when
purged_snaps_last < current_epoch. When that happens, we slurp down the
old purged snaps so that we have a full history recorded locally.
Signed-off-by: Sage Weil <sage@redhat.com>
When we get a new map, record the (new) purged_snaps.
Only do this if the OSD has purged_snaps that are in sync with the latest
OSDMap. That means that after an upgrade, if the OSD didn't sync the
old purged_snaps on startup, it won't sync anything until it *next* starts
up.
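In sketch form (purged_snaps_last is the real field; the struct and
function around it are made up for illustration):

    #include <cstdint>
    #include <map>
    #include <set>
    #include <utility>

    struct Superblock {
      uint32_t purged_snaps_last = 0;  // newest epoch whose purged snaps we recorded
    };

    // purged snaps carried by the new map, per pool, as [begin, end) intervals
    using PurgedByPool = std::map<int64_t, std::set<std::pair<uint64_t, uint64_t>>>;

    void record_purged_snaps(Superblock& sb, uint32_t new_epoch,
                             const PurgedByPool& new_purged) {
      // only record if we are already in sync with the previous epoch;
      // otherwise we stay behind until the next startup resync
      if (sb.purged_snaps_last != new_epoch - 1)
        return;
      (void)new_purged;  // persisting the intervals is omitted in this sketch
      sb.purged_snaps_last = new_epoch;
    }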
Signed-off-by: Sage Weil <sage@redhat.com>
When we publish our first require_osd_release >= octopus osdmap, record
all prior purged snaps in a key linked to the previous osdmap. We assume
this will encode and fit into a single key and transaction because the
even larger set of removed_snaps is already a member of pg_pool_t, which
is included in every osdmap.
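A toy illustration of that single-key record (the key name and the
encoding here are hypothetical, not the actual on-disk format): all
pre-octopus purged snaps, per pool, serialized into one value stored
under a key tied to the previous osdmap epoch.

    #include <cstdint>
    #include <map>
    #include <set>
    #include <sstream>
    #include <string>
    #include <utility>

    using PurgedByPool = std::map<int64_t, std::set<std::pair<uint64_t, uint64_t>>>;

    // Build one (key, value) pair covering every prior purged interval;
    // small enough to write in a single transaction.
    std::pair<std::string, std::string>
    make_prior_purged_record(uint32_t prev_epoch, const PurgedByPool& purged) {
      std::ostringstream val;
      for (const auto& [pool, intervals] : purged)
        for (const auto& [begin, end] : intervals)
          val << pool << ' ' << begin << ' ' << end << '\n';
      return {"purged_snaps_prior_" + std::to_string(prev_epoch), val.str()};
    }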
Signed-off-by: Sage Weil <sage@redhat.com>
Only do this if the mons are all running octopus (and thus there is also
a record for all the pre-octopus purged snaps).
Signed-off-by: Sage Weil <sage@redhat.com>
- look at purged, not removed snap keys
- fix the key check to look at the *key name* prefix, not the overall
prefix (the one implemented by the KeyValueDB interface).
Signed-off-by: Sage Weil <sage@redhat.com>
Convert snapmapper keys to the new form the first time we start up running
octopus.
This is an incompat feature--once you start as octopus you can't go back.
Signed-off-by: Sage Weil <sage@redhat.com>
We want to sort starting with (pool, snapid, ...) so that we align with
the structure of the purged_snaps. Simply flattening all snaps across
pools is less than ideal because the purge records are intervals (the
snap in the key is the last snap of the interval); flattening means we'd
have to look at many records (across pools) to conclude anything. Putting
these in the form we really want simplifies things going forward.
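For illustration only (the real SnapMapper key encoding differs): with
keys ordered as (pool, snap, object), every purged interval [begin, end)
for a pool maps to one contiguous key range, so a single bounded scan
answers whether any object is still mapped to those snaps.

    #include <cstdint>
    #include <map>
    #include <string>
    #include <tuple>

    using Key = std::tuple<int64_t /*pool*/, uint64_t /*snap*/, std::string /*object*/>;
    using Mappings = std::map<Key, bool>;  // stand-in for the omap

    // True if any object is still mapped to a snap in [begin, end) for the
    // given pool -- one lower_bound plus an in-order check, with no
    // cross-pool records to skip over.
    bool any_mapped(const Mappings& m, int64_t pool, uint64_t begin, uint64_t end) {
      auto it = m.lower_bound(Key{pool, begin, ""});
      return it != m.end() &&
             std::get<0>(it->first) == pool &&
             std::get<1>(it->first) < end;
    }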
Signed-off-by: Sage Weil <sage@redhat.com>
This ensures 'make vstart' will build libcephfs, which lets the mgr volumes
module start, so that vstart succeeds.
Signed-off-by: Sage Weil <sage@redhat.com>
This was there to test filestore's long file name handling, which (1)
works, and (2) we don't care that much about anymore. Meanwhile, the
long names make the OSD log files *really* painful to read.
Signed-off-by: Sage Weil <sage@redhat.com>
We will stop maintaining SnapSet::snaps shortly. Instead, generate this
snapc using the existing SnapSet::get_ssc_as_of() method, which will now
derive the snap list from the clone_snaps member.
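Conceptually (this is not the actual SnapSet code, just a sketch of
deriving the list from clone_snaps), the snap list is the union of the
per-clone snap vectors, sorted newest-first:

    #include <algorithm>
    #include <cstdint>
    #include <functional>
    #include <map>
    #include <vector>

    std::vector<uint64_t> snaps_from_clone_snaps(
        const std::map<uint64_t, std::vector<uint64_t>>& clone_snaps) {
      std::vector<uint64_t> out;
      for (const auto& [clone, snaps] : clone_snaps)
        out.insert(out.end(), snaps.begin(), snaps.end());
      std::sort(out.begin(), out.end(), std::greater<uint64_t>());
      out.erase(std::unique(out.begin(), out.end()), out.end());
      return out;
    }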
Signed-off-by: Sage Weil <sage@redhat.com>
- do not examine removed_snaps
- do not add new items to removed_snaps unless we need pre-octopus compat
- clear removed_snaps in first octopus epoch
Signed-off-by: Sage Weil <sage@redhat.com>
Instead of asking the OSDMap pg_pool_t whether a snap exists, do two
checks (sketched below):
1- Look at the clone_snaps more carefully. If the snap didn't exist when
the clone was last touched (created or partially-trimmed) then it still
doesn't exist now (snaps aren't resurrected).
2- Check in the OSDMap's removed snaps queue. This will catch anything
that is still being removed but hasn't been reflected by the clone_snaps
yet.
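In rough pseudo-C++ (the names and types are illustrative, not the actual
PrimaryLogPG code), the combined check looks like:

    #include <algorithm>
    #include <cstdint>
    #include <set>
    #include <utility>
    #include <vector>

    bool snap_still_exists(
        uint64_t snap,
        const std::vector<uint64_t>& clone_snaps,                      // snaps for this clone
        const std::set<std::pair<uint64_t, uint64_t>>& removed_queue)  // [begin, end) intervals
    {
      // 1- if the snap wasn't attached to the clone when it was last
      //    touched, it no longer exists (snaps are never resurrected)
      if (std::find(clone_snaps.begin(), clone_snaps.end(), snap) == clone_snaps.end())
        return false;
      // 2- if it sits in the OSDMap's removed-snaps queue it is being
      //    trimmed right now, so treat it as gone as well
      for (const auto& [begin, end] : removed_queue)
        if (snap >= begin && snap < end)
          return false;
      return true;
    }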
Signed-off-by: Sage Weil <sage@redhat.com>
No need to use the pg_pool_t member now--the osdmap has a queue
specifically for the snaps we are in the process of trimming.
Signed-off-by: Sage Weil <sage@redhat.com>