This was (almost) hard-coded to ceph.git and breaks when
you specify --ceph-repo. Remove it entirely. We'll see if
github.com is better at handling our load than it used to
be!
Signed-off-by: Sage Weil <sage@redhat.com>
All newly created files and directories under the mount dir inherit the
SELinux type of their parent directory, so we need to set it before
running mkfs.
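A minimal sketch of what this implies in the teuthology task code (the
SELinux context string and path below are assumptions for illustration,
not taken verbatim from the fix):

    # Illustrative only: label the OSD data dir before mkfs so that
    # everything created under it inherits the intended SELinux type.
    remote.run(args=[
        'sudo', 'chcon', 'system_u:object_r:ceph_var_lib_t:s0',
        '/var/lib/ceph/osd/ceph-0',
    ])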
Fixes: http://tracker.ceph.com/issues/16800
Signed-off-by: Kefu Chai <kchai@redhat.com>
The kernel client's cluster availability check is
more primitive than the fuse client's, so we need
to switch it off to avoid client mounts failing
while MDSs are still coming up.
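As an illustration of the failure mode only (not what this commit does),
the same symptom could be avoided by waiting for the MDS cluster before
attempting the mount:

    # Illustrative alternative: wait for active MDS daemons before a
    # kclient mount, instead of disabling the availability check.
    fs.wait_for_daemons()        # tasks.cephfs.filesystem.Filesystem helper
    mount.mount()
    mount.wait_until_mounted()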
Fixes: http://tracker.ceph.com/issues/18161
Signed-off-by: John Spray <john.spray@redhat.com>
The test still fails even after being enabled:
2016-12-07T18:00:44.337 INFO:teuthology.orchestra.run.mira105:Running: 'mpiexec -f /home/ubuntu/cephtest/mpi-hosts -wdir /home/ubuntu/cephtest/gmnt sudo /home/ubuntu/cephtest/fsx-mpi -o 1MB -N 50000 -p 10000 -l 1048576 /home/ubuntu/cephtest/gmnt/test'
2016-12-07T18:00:44.486 INFO:teuthology.orchestra.run.mira105.stderr:Warning: Permanently added '172.21.8.122' (ECDSA) to the list of known hosts.
2016-12-07T18:00:44.571 INFO:teuthology.orchestra.run.mira105.stdout:skipping zero size read
2016-12-07T18:00:44.591 INFO:teuthology.orchestra.run.mira105.stdout:truncating to largest ever: 0x7cccb
2016-12-07T18:00:44.606 INFO:teuthology.orchestra.run.mira083:Running: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
2016-12-07T18:00:44.611 INFO:teuthology.orchestra.run.mira100:Running: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
2016-12-07T18:00:44.614 INFO:teuthology.orchestra.run.mira105:Running: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
2016-12-07T18:00:44.887 INFO:teuthology.orchestra.run.mira105.stdout:skipping zero size read
2016-12-07T18:00:44.954 INFO:teuthology.orchestra.run.mira105.stdout:Size error: expected 0xa6f7c stat 0xd4000 seek 0xd5000
2016-12-07T18:00:44.954 INFO:teuthology.orchestra.run.mira105.stdout:LOG DUMP (2 total operations):
2016-12-07T18:00:44.954 INFO:teuthology.orchestra.run.mira105.stdout:1(1 mod 256): SKIPPED (no operation)
2016-12-07T18:00:44.954 INFO:teuthology.orchestra.run.mira105.stdout:2(2 mod 256): WRITE 0x1c748 thru 0xa6f7b (0x8a834 bytes) HOLE
2016-12-07T18:00:44.990 INFO:teuthology.orchestra.run.mira105.stdout:Correct content saved for comparison
2016-12-07T18:00:44.990 INFO:teuthology.orchestra.run.mira105.stdout:(maybe hexdump "/home/ubuntu/cephtest/gmnt/test" vs "/home/ubuntu/cephtest/gmnt/test.fsxgood")
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stdout:
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stdout:===================================================================================
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stdout:= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stdout:= EXIT CODE: 120
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stdout:= CLEANING UP REMAINING PROCESSES
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stdout:= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stdout:===================================================================================
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stderr:[proxy:0:0@mira105] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed
2016-12-07T18:00:45.000 INFO:teuthology.orchestra.run.mira105.stderr:[proxy:0:0@mira105] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
2016-12-07T18:00:45.001 INFO:teuthology.orchestra.run.mira105.stderr:[proxy:0:0@mira105] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
2016-12-07T18:00:45.002 INFO:teuthology.orchestra.run.mira105.stderr:[mpiexec@mira105] HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
2016-12-07T18:00:45.002 INFO:teuthology.orchestra.run.mira105.stderr:[mpiexec@mira105] HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
2016-12-07T18:00:45.002 INFO:teuthology.orchestra.run.mira105.stderr:[mpiexec@mira105] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for completion
2016-12-07T18:00:45.002 INFO:teuthology.orchestra.run.mira105.stderr:[mpiexec@mira105] main (./ui/mpich/mpiexec.c:331): process manager error waiting for completion
I am not sure what the cause is. I'm leaving the test disabled for now and merging this PR.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Make this easy: write a single yaml that does the hammer install,
some limited work, then upgrades to jewel. Copy it from the
parallel suite. Then, symlink all of the rest from the jewel-x
stress-split suite.
Signed-off-by: Sage Weil <sage@redhat.com>
Previously this relied on the client being able to unmount
while the MDS was offline, which is not necessarily
possible. Use kill instead.
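In terms of the CephFSMount helpers, this amounts to roughly the
following sketch (illustrative, not the exact test code):

    # Sketch: don't rely on a clean unmount while the MDS is offline;
    # forcibly kill the client and clean up its mount point instead.
    self.mount_a.kill()
    self.mount_a.kill_cleanup()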
Signed-off-by: John Spray <john.spray@redhat.com>
The libcephfs tests are negatively affected by other mounts. This commit
adds a kclient disable in addition to the ceph-fuse one.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This commit synchronizes the multimds suite with the fs suite. The
basic/verify sub-suites now do the same tests except with different
cluster layouts (i.e. multiple actives). This is mostly accomplished by
symlinking parts of each sub-suite to its counterpart in the fs suite.
This commit also does a few things of note to the prior multimds suite:
o Turn on directory fragmentation.
o Add several tests from fs/basic/tasks to multimds/basic.
o Remove libcephfs, as those tests are already covered by
  fs/basic/tasks (now shared with multimds/basic/tasks).
Prior implementation and discussion are in PR#1114: https://github.com/ceph/ceph-qa-suite/pull/1114
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Previously this assumed it was running with exactly two MDS
daemons. When there were more, it would fail to execute
"fs reset" because the extra daemons were active in
the map.
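Roughly, the reset path needs to take the extra actives down first; a
hedged sketch (the helper and variable names are illustrative):

    # Sketch: fail any extra active MDS daemons so "fs reset" can proceed.
    # extra_active_ids is assumed to come from inspecting the MDS map.
    self.fs.mon_manager.raw_cluster_cmd("fs", "set", self.fs.name,
                                        "cluster_down", "true")
    for mds_id in extra_active_ids:
        self.fs.mon_manager.raw_cluster_cmd("mds", "fail", mds_id)
    self.fs.mon_manager.raw_cluster_cmd("fs", "reset", self.fs.name,
                                        "--yes-i-really-mean-it")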
Signed-off-by: John Spray <john.spray@redhat.com>
This relies on quota-ish stuff that doesn't exist
in kclient. We can still run the outer part
of the test though.
Signed-off-by: John Spray <john.spray@redhat.com>
This should have been a call to umount_wait, not umount (i.e.
the blocking foreground version). This happened
to matter because umount_wait is more tolerant
of being called when the client is not already mounted.
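For reference, a minimal sketch of the intended call:

    # umount_wait() blocks until the unmount completes and tolerates
    # being called when the client is not mounted; plain umount() does not.
    self.mount_a.umount_wait()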
Signed-off-by: John Spray <john.spray@redhat.com>
This test was probably buggy and only happened to work
with ceph-fuse, because it expects the MDS to immediately
respond to updates to the client's auth caps, but that
doesn't happen.
Signed-off-by: John Spray <john.spray@redhat.com>
With the kernel client, this was proceeding to attempt
a split before the directory had persisted, because
there was no fsync after creating it.
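A minimal sketch of the kind of fix, assuming the test drives the client
via CephFSMount.run_shell (directory name and commands are illustrative):

    # Sketch: make sure the new directory reaches the MDS before
    # expecting it to be split into fragments.
    self.mount_a.run_shell(["mkdir", "splitdir"])
    self.mount_a.run_shell(["sync"])   # crude stand-in for fsyncing the dir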
Signed-off-by: John Spray <john.spray@redhat.com>
This tests a fuse-only feature; the ticket for adding
it to kclient is:
http://tracker.ceph.com/issues/17805
Signed-off-by: John Spray <john.spray@redhat.com>
Change the Mount interface to take it as an
argument to mount() instead of setting it
out of band in a config file as we used to
for the fuse client.
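From a caller's perspective the change looks roughly like this (the
parameter name is an assumption, not the actual signature):

    # Before: the mount path was set out of band in the client's config file.
    # After:  it is passed directly when mounting.
    mount.mount(mount_path="/subdir")
    mount.wait_until_mounted()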
Signed-off-by: John Spray <john.spray@redhat.com>
It was not correct to expect a client to block after
eviction unless it was also deauthorised. I guess
this was working with fuse because fuse does a worse
job of re-establishing a session than the kclient?
Signed-off-by: John Spray <john.spray@redhat.com>
Some tests want to use power cycling to reset stuck
mounts, and that needs to not kill OSDs as collateral
damage.
We need to revisit this to avoid unnecessarily using a whole
node for the client in those tests that don't require it
(i.e. those that don't use CephFSTestCase.REQUIRE_KCLIENT_REMOTE).
Signed-off-by: John Spray <john.spray@redhat.com>