* refs/pull/16779/head:
mds: cleanup MDCache::open_snaprealms()
mds: make sure snaptable version > 0
mds: don't consider CEPH_INO_LOST_AND_FOUND as base inode
mds: replace MAX() with std::max()
tools/cephfs: make cephfs-data-scan create snaprealm for base inodes
qa/cephfs: don't run TestSnapshots.test_kill_mdstable on kclient
qa/cephfs: adjust check of 'cephfs-table-tool all show snap' output
mds: don't warn unconnected snaprealms in cluster log
mds: update CInode/CDentry's first according to global snapshot seq
qa/cephfs: add tests for snapclient cache
qa/cephfs: add tests for snaptable transaction
mds: add asok command that dumps cached snap infos
qa/cephfs: add tests for multimds snapshot
client: don't mark snap directory complete when its dirstat is empty
qa/workunits/snaps: add snaprealm split test
mds: make sure mds has uptodate mdsmap before checking 'allows_snaps'
client: fix incorrect snaprealm when adding caps
qa/workunits/snaps: add hardlink snapshot test
mds: add incompat feature and bump protocol for snapshot changes
mds: detach inode with single hardlink from global snaprealm
mds: record hardlink snaps in inode's snaprealm
mds: attach inode with multiple hardlinks to dummy global snaprealm
mds: cleanup rename code
mds: ensure xlocker has uptodate lock state
mds: simplify SnapRealm::build_snap_{set,trace}
mds: record global last_created/last_destroyed in snaptable
mds: pop projected snaprealm before inode's parent changes
mds: keep isnap lock in sync state
mds: handle mksnap vs resolve_snapname race
mds: cleanup snaprealm past parents open check
mds: rollback snaprealms when rolling back slave request
mds: send updated snaprealms along with slave requests
mds: explicit notification for snap update
mds: send snap related messages centrally during mds recovery
mds: synchronize snaptable caches when mds recovers
mds: introduce MDCache::maybe_finish_slave_resolve()
mds: notify all mds about prepared snaptable update
mds: record snaps in old snaprealm when moving inode into new snaprealm
mds: cache snaptable in snapclient
mds: recover snaptable client when mds enters resolve state
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
so that the qemu_dynamic_features.sh and qemu_rebuild_object_map.sh
workloads, which check whether qemu has finished every 60 seconds, have
enough time to detect this before the rbd image is removed.
Fixes: https://tracker.ceph.com/issues/23502
Signed-off-by: Mykola Golub <mgolub@suse.com>
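For illustration, a minimal sketch of the 60-second polling pattern described
above (the helper names and parameters are hypothetical, not taken from the
actual workunits); the point is that the rbd image has to outlive at least one
full polling interval after qemu exits, or the check can miss the completion:

    import time

    POLL_INTERVAL = 60  # seconds, matching the periodicity mentioned above

    def qemu_finished(proc):
        # Hypothetical check: has the qemu process exited yet?
        # proc is e.g. a subprocess.Popen handle for the qemu process.
        return proc.poll() is not None

    def wait_for_qemu(proc, timeout=3600):
        # The workload only notices completion on its next poll, so the rbd
        # image must stay around for at least POLL_INTERVAL seconds after
        # qemu exits, otherwise removal races with the final check.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if qemu_finished(proc):
                return True
            time.sleep(POLL_INTERVAL)
        return False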
The ssl task, located in a Python file called `ssl.py`, generates module
loading conflicts with the `ssl` module from the Python standard library
when running QA tests with vstart_runner.py.
Signed-off-by: Ricardo Dias <rdias@suse.com>
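A minimal, hypothetical reproduction of that kind of stdlib shadowing (the
directory layout is illustrative, not the actual qa tree): once a directory
containing a file named `ssl.py` sits ahead of the standard library on
`sys.path`, `import ssl` resolves to the task file instead of the system
module.

    import sys

    # Assume ./tasks/ssl.py exists (hypothetical layout standing in for the
    # directory that vstart_runner.py puts on the import path).
    sys.path.insert(0, "tasks")

    import ssl  # now loads tasks/ssl.py, not the standard library 'ssl'

    print(ssl.__file__)  # points into tasks/, not the Python installation
    # Anything that later needs the real module, e.g.
    # ssl.create_default_context() for an HTTPS request, breaks because the
    # shadowing task file does not define it.
    print(hasattr(ssl, "create_default_context"))  # False for the task file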
These tests don't need to be split across the job matrix the same way
that we split features like frontend, ssl, objectstore, etc. By
combining them, we can still test the whole matrix of features, but
with only 1/3 of the total jobs.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
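A small worked illustration of that job-count arithmetic (the dimension names
and counts are made up, not the real suite layout): splitting an extra
three-way test dimension multiplies the matrix, while folding those tests into
a single job keeps the same coverage at a third of the schedule.

    from itertools import product

    # Hypothetical teuthology-style matrix dimensions.
    frontends = ["civetweb", "beast"]
    objectstores = ["bluestore", "filestore"]
    extra_tests = ["test_a", "test_b", "test_c"]

    # Splitting the extra tests across the matrix: one job per combination.
    split = list(product(frontends, objectstores, extra_tests))
    print(len(split))      # 2 * 2 * 3 = 12 jobs

    # Combining them: each job runs all three tests back to back, so the
    # whole feature matrix is still exercised with 1/3 of the jobs.
    combined = [(f, o, extra_tests) for f, o in product(frontends, objectstores)]
    print(len(combined))   # 4 jobs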
mgr/dashboard_v2: Initial submission of a web-based management UI (replacement for the existing dashboard)
Reviewed-by: Nathan Cutler <ncutler@suse.com>
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
The group and data pool tests don't apply to v1 images. Also removed
the 'many' messenger failure test option since it is overkill.
Fixes: http://tracker.ceph.com/issues/22738
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
We've had multiple overflows in ceph_calc_file_object_mapping().
It wasn't being used by rbd, but it now is.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
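For context, a rough Python sketch of the kind of file-to-object mapping
arithmetic that function performs and of how a 32-bit intermediate can wrap;
the formulas follow the generic RADOS striping scheme and the names and
parameters are illustrative, so this is not the kernel code itself:

    # Simplified file-to-object striping, following the striping scheme
    # described in the Ceph docs -- not the kernel implementation.
    def map_file_offset(off, su=65536, sc=16, object_size=16 << 20,
                        bits32=False):
        # su = stripe unit, sc = stripe count; returns
        # (object number, offset within object) for a file byte offset.
        blockno = off // su              # which stripe unit of the file
        if bits32:
            blockno &= 0xFFFFFFFF        # simulate a 32-bit intermediate
        stripeno = blockno // sc         # which stripe
        stripepos = blockno % sc         # position of the object in its set
        su_per_object = object_size // su
        objsetno = stripeno // su_per_object
        objectno = objsetno * sc + stripepos
        block_in_object = stripeno % su_per_object
        return objectno, block_in_object * su + off % su

    # Past su * 2^32 bytes (256 TiB with these defaults) the truncated
    # intermediate wraps and the mapping lands in the wrong object:
    off = (1 << 50) + 12345
    print(map_file_offset(off))                # correct 64-bit arithmetic
    print(map_file_offset(off, bits32=True))   # wrong object after the wrap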