This fixes "TypeError: admin_socket() got an unexpected keyword argument
'timeout'". The value is never used.
Signed-off-by: Zack Cerza <zack@redhat.com>
If there is a workunit task associated with the same client, the two
tasks will attempt to clone the suite repo to the same directory.
Worse, if the tasks run in parallel, the two clones will clobber each
other.
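A minimal sketch of the idea in Python, with hypothetical names: key the
clone directory on the task instead of only on the client, so two tasks on
the same client never share (or clobber) a checkout.

    import os
    import subprocess

    def clone_suite_repo(repo_url, branch, client, task_id, base="/tmp"):
        # Hypothetical helper: cloning into a per-task directory avoids the
        # cram and workunit tasks on the same client clobbering each
        # other's checkout of the suite repo.
        dest = os.path.join(base, "clone.{}.{}".format(client, task_id))
        if not os.path.exists(dest):
            subprocess.check_call(
                ["git", "clone", "--depth", "1", "-b", branch, repo_url, dest])
        return dest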
Fixes: http://tracker.ceph.com/issues/36542
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
* refs/pull/24292/head:
qa: add test for rctime on root inode
mds: set rctime on new system inode
mds: small refactor
Reviewed-by: Zheng Yan <zyan@redhat.com>
This makes it easier to re-run tests against a suite branch without
requiring a full ceph-ci build and repo.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Apparently 15m is not long enough for some workunits like fsstress.
Fixes: http://tracker.ceph.com/issues/36365
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
It is now commented out like it was before, but I've added a comment
describing what happened during this test with the QA system. The
problem was that even with an increase of only 1 PG, the QA cluster
went into a warning state and did not recover in time. The QA coverage
timeout is 2 minutes.
I could not reproduce this behavior with a local cluster, but I've
added a loop to wait until the pgp and pg numbers are equal and the
cluster is healthy again. Locally this takes about 5 seconds. The
internal loop has a timeout of 3 minutes.
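A rough sketch of such a wait loop, assuming plain `ceph` CLI
invocations rather than the exact QA helpers:

    import json
    import subprocess
    import time

    def wait_for_pg_pgp_equal_and_healthy(pool, timeout=180, interval=5):
        # Poll until pg_num == pgp_num for the pool and the cluster reports
        # HEALTH_OK, or give up after `timeout` seconds (3 minutes here).
        end = time.time() + timeout
        while time.time() < end:
            opts = json.loads(subprocess.check_output(
                ["ceph", "osd", "pool", "get", pool, "all", "--format=json"]))
            health = json.loads(subprocess.check_output(
                ["ceph", "health", "--format=json"]))
            if (opts.get("pg_num") == opts.get("pgp_num")
                    and health.get("status") == "HEALTH_OK"):
                return
            time.sleep(interval)
        raise RuntimeError("cluster did not settle within %ds" % timeout)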
Fixes: https://tracker.ceph.com/issues/36362
Signed-off-by: Stephan Müller <smueller@suse.com>
The dashboard backend can now unset all previously set compression
arguments when the compression mode is switched to 'unset'. In that
case Ceph itself only removes the 'compression_mode' option and leaves
the other compression options in place, so the backend adds the
remaining options to the update arguments in order to clear all of
them.
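A sketch of the idea with hypothetical names (not the actual dashboard
code); the option names are the usual pool compression options:

    def clear_compression_args(update_args):
        # When the user switches the mode to 'unset', explicitly add every
        # compression-related option to the update so all of them get
        # removed, not only 'compression_mode'.
        if update_args.get('compression_mode') == 'unset':
            for opt in ('compression_mode', 'compression_algorithm',
                        'compression_required_ratio',
                        'compression_min_blob_size',
                        'compression_max_blob_size'):
                update_args[opt] = None  # None marks the option for removal
        return update_args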
Fixes: https://tracker.ceph.com/issues/36355
Signed-off-by: Stephan Müller <smueller@suse.com>
Refactor the '_get_mon_allow_pool_delete_config' method to be a bit
more general. The method can now be used to get the value of any
config option known to the cluster.
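A sketch of such a generalized lookup, assuming the list-of-dicts shape
returned by `ceph config dump --format=json`:

    def get_config_option(config_dump, name, default=None):
        # Any option known to the cluster can be looked up by name,
        # not just 'mon_allow_pool_delete'.
        for opt in config_dump:
            if opt.get('name') == name:
                return opt.get('value', default)
        return default

    # e.g. get_config_option(dump, 'mon_allow_pool_delete')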
Signed-off-by: Tatjana Dehler <tdehler@suse.com>
Otherwise a bug preventing an asok operation from completing will cause the
entire job to fail.
Fixes: http://tracker.ceph.com/issues/36335
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
When enabling a module, attempt to determine whether it is an
always-on module; if it is, return without waiting for the active
manager daemon to restart, since it will not restart for an always-on
module.
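A minimal sketch of that check, with hypothetical helper names:

    def enable_module(name, always_on_modules, enable_fn, wait_for_restart_fn):
        enable_fn(name)              # e.g. `ceph mgr module enable <name>`
        if name in always_on_modules:
            # Enabling an always-on module does not restart the active mgr,
            # so there is nothing to wait for.
            return
        wait_for_restart_fn()        # otherwise wait for the restart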
Signed-off-by: Noah Watkins <nwatkins@redhat.com>
* refs/pull/21566/head:
test: add test for mds drop cache command
mds: command to trim mds cache and client caps
mds: implement journal flush as asynchronous context execution
mds: cleanup some asok commands
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
If there is a bug preventing rm from completing, the workunit will get stuck.
Fixes: http://tracker.ceph.com/issues/36184
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Otherwise QA sits forever waiting for the kclient to umount when there is a
problem.
Fixes: http://tracker.ceph.com/issues/36184
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Otherwise the command will hang if the mount is broken.
Fixes: http://tracker.ceph.com/issues/36184
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/23187/head:
test: make rank argument mandatory when running journal_tool
cephfs-journal-tool: make "--rank" argument mandatory
cephfs-journal-tool: pass local arg vector for Journal actions
cephfs-journal-tool: dump to per rank output file wherever necessary
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/23530/head:
qa/vstart_runner: fix daemons list
PendingReleaseNotes: note multifs support in libcephfs
test/cephfs: add pybind test for mount_root
pybind/cephfs: enable passing filesystem name to mount
libcephfs: add ceph_select_filesystem
common: add doc strings to client_mds_namespace
client: allow passing fs name to mount()
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Conflicts:
PendingReleaseNotes
This was wrongly dropped and moved to the finalizer.
Introduced-by: de824f74dd8ac909e47335ccd53d7a085e388e41
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Two instances of fsstress clobber each other. Just build it in the local sandbox.
Fixes: http://tracker.ceph.com/issues/24177
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Specifically fixes the recurring `test_osd.py` failure in the
`test_scrub` method, but this change should also prevent other issues
of the same kind: issues that occur because a test does not
immediately leave the cluster in a clean state and is not explicitly
programmed to wait for one.
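An illustration of the "wait for a clean cluster" idea, using an
assumed callable that returns the current PG states:

    import time

    def wait_until_clean(get_pg_states, timeout=300, interval=5):
        # Poll until every PG is active+clean before letting the next test
        # run, instead of starting it against a cluster that is still
        # settling.
        end = time.time() + timeout
        while time.time() < end:
            states = get_pg_states()
            if states and all(s == 'active+clean' for s in states):
                return
            time.sleep(interval)
        raise AssertionError('cluster not clean after %ds' % timeout)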
Fixes: http://tracker.ceph.com/issues/36107
Signed-off-by: Patrick Nawracay <pnawracay@suse.com>
Also, fix a bunch of quirky journal_tool invocations that pass the
"--rank" argument as part of the command rather than as a function
argument.
Fixes: https://tracker.ceph.com/issues/24780
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Updated the integration tests to check the data returned by the new
Python code.
Fixes: https://tracker.ceph.com/issues/24573
Signed-off-by: Alfonso Martínez <almartin@redhat.com>
This was missing a cluster name prefix that
was added at some point, and consequently
calls to iter_daemons_of_role were returning
no daemons.
This was causing e.g. TestVolumeClient.test_data_isolated
to fail when run in vstart_runner.
Signed-off-by: John Spray <john.spray@redhat.com>
This module is written by Rick Chen <rick.chen@prophetstor.com> and
provides both a built-in local predictor and a cloud mode that queries
a cloud service (provided by ProphetStor) to predict device failures.
Signed-off-by: Rick Chen <rick.chen@prophetstor.com>
Signed-off-by: Sage Weil <sage@redhat.com>
* refs/pull/20469/head:
osd/PG: remove warn on delete+merge race
osd: base project_pg_history on is_new_interval
osd: make project_pg_history handle concurrent osdmap publish
osd: handle pg delete vs merge race
osd/PG: do not purge strays in premerge state
doc/rados/operations/placement-groups: a few minor corrections
doc/man/8/ceph: drop enumeration of pg states
doc/dev/placement-groups: drop old 'splitting' reference
osd: wait for laggy pgs without osd_lock in handle_osd_map
osd: drain peering wq in start_boot, not _committed_maps
osd: kick split children
osd: no osd_lock for finish_splits
osd/osd_types: remove is_split assert
ceph-objectstore-tool: prevent import of pg that has since merged
qa/suites: test pg merging
qa/tasks/thrashosds: support merging pgs too
mon/OSDMonitor: mon_inject_pg_merge_bounce_probability
doc/rados/operations/placement-groups: update to describe pg_num reductions too
doc/rados/operations: remove reference to lpgs
osd: implement pg merge
osd/PG: implement merge_from
osdc/Objecter: resend ops on pg merge
osd: collect and record pg_num changes by pool
osd: make load_pgs remove message more accurate
osd/osd_types: pg_t: add is_merge_target()
osd/osd_types: pg_t::is_merge -> is_merge_source
osd/osd_types: adding or substracting invalid stats -> invalid stats
osd/PG: clear_ready_to_merge on_shutdown (or final merge source prep)
osd: debug pending_creates_from_osd cleanup, don't use cbegin
ceph-objectstore-tool: debug intervals update
mgr/ClusterState: discard pg updates for pgs >= pg_num
mon/OSDMonitor: fix long line
mon/OSDMonitor: move pool created check into caller
mon/OSDMonitor: adjust pgp_num_target down along with pg_num_target as needed
mon/OSDMonitor: add mon_osd_max_initial_pgs to cap initial pool pgs
osd/OSDMap: set pg[p]_num_target in build_simple*() methods
mon/PGMap: adjust SMALLER_PGP_NUM warning to use *_target values
mon/OSDMonitor: set CREATING flag for force-create-pg
mon/OSDMonitor: start sending new-style pg_create2 messages
mon/OSDMonitor: set last_force_resend_prenautilus for pg_num_pending changes
osd: ignore pg creates when pool FLAG_CREATING is not set
mgr: do not adjust pg_num until FLAG_CREATING removed from pool
mon/OSDMonitor: add FLAG_CREATING on upgrade if pools still creating
mon/OSDMonitor: prevent FLAG_CREATING from getting set pre-nautilus
mon/OSDMonitor: disallow pg_num changes while CREATING flag is set
mon/OSDMonitor: set POOL_CREATING flag until initial pool pgs are created
osd/osd_types: add pg_pool_t FLAG_POOL_CREATING
osd/osd_types: introduce last_force_resend_prenautilus
osd/PGLog: merge_from helper
osd: no cache agent or snap trimming during premerge
osd: notify mon when pending PGs are ready to merge
mgr: add simple controller to adjust pg[p]_num_actual
mon/OSDMonitor: MOSDPGReadyToMerge to complete a pg_num change
mon/OSDMonitor: allow pg_num to adjusted up or down via pg[p]_num_target
osd/osd_types: make pg merge an interval boundary
osd/osd_types: add pg_t::is_merge() method
osd/osd_types: add pg_num_pending to pg_pool_t
osd: allow multiple threads to block on wait_min_pg_epoch
osd: restructure advance_pg() call mechanism
mon/PGMap: prune merged pgs
mon/PGMap: track pgs by state for each pool
osd/SnapMapper: allow split_bits to decrease (merge)
os/bluestore: fix osr_drain before merge
os/bluestore: allow reuse of osr from existing collection
os/filestore: (re)implement merge
os/filestore: add _merge_collections post-check
os: implement merge_collection
os/ObjectStore: add merge_collection operation to Transaction
We currently import a portion of the PG if it has split. Merge is more
complicated, though, mainly because COT is operating in a mode where it
fast-forwards the PG to the latest OSDMap epoch, which means it has to
implement any transformations to the PG (split/merge) independently.
Avoid doing this for merge.
Signed-off-by: Sage Weil <sage@redhat.com>
Commit 0d8887652d ("qa/tasks/cram: use suite_repo repository for all
cram jobs") removed hardcoded git.ceph.com links, but as it turned out
git.ceph.com is still used for nightlies. There is no good way to
accommodate the different URL schemes, so let's get rid of URLs
altogether.
Fixes: https://tracker.ceph.com/issues/27211
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Also:
- Do not print **offset** until specified
- Count missing objects correctly (used to be primary's local missing)
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
The task uses netem to emulate wide-area network delay.
It provides three configurable options:
1. standard delay: a constant delay with +/- 5ms jitter (normal distribution) by default.
2. variable delay: a delay within a given min-max range in milliseconds.
3. packet drop: toggles packet drop and recovery at regular intervals.
Useful for simulating network delays between two clusters while testing
rgw multisite and rbd mirroring configurations.
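Roughly what the standard-delay option does under the hood with netem
(interface name and values are placeholders):

    import subprocess

    def add_constant_delay(interface, delay_ms=100, jitter_ms=5):
        # Constant delay with +/- jitter drawn from a normal distribution.
        subprocess.check_call([
            "tc", "qdisc", "add", "dev", interface, "root", "netem",
            "delay", "%dms" % delay_ms, "%dms" % jitter_ms,
            "distribution", "normal"])

    def clear_delay(interface):
        # Remove the netem qdisc again when the test finishes.
        subprocess.check_call(["tc", "qdisc", "del", "dev", interface, "root"])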
Signed-off-by: Shilpa Jagannath <smanjara@redhat.com>
mgr/dashboard: Add support for managing individual OSD settings in the backend
Reviewed-by: Sebastian Wagner <swagner@suse.com>
Reviewed-by: Stephan Müller <smueller@suse.com>
Reviewed-by: Tatjana Dehler <tdehler@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
Currently git.ceph.com is hardcoded for all cram jobs. Testing
modifications is a pain: one needs to push to either ceph/ceph.git or
ceph/ceph-ci.git (depending on where the ceph branch is, triggering
unnecessary builds in the latter case) and wait for the mirror to sync.
Runs scheduled against branches in developers' forks fail.
Move away from git.ceph.com to allow mixing branches and repositories,
similar to workunits.
Fixes: https://tracker.ceph.com/issues/27211
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
mgr/dashboard: Add REST API for role management
Reviewed-by: Ricardo Dias <rdias@suse.com>
Reviewed-by: Tatjana Dehler <tdehler@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
Add options to mark OSDs in/out/down/reweight/lost/remove/destroy/create
Fixes: http://tracker.ceph.com/issues/24270
Signed-off-by: Patrick Nawracay <pnawracay@suse.com>
* refs/pull/23439/head:
qa: whitelist cap revoke warning
doc: document cap revoke non-responders client eviction
test: validate client eviction for cap revoke non-responders
mds: add counter for tracking cap non-responding clients
mds: evict clients that do not respond to cap revoke by MDS
mds: pass timeout argument for fetching late clients
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Zheng Yan <zyan@redhat.com>
Enables changing (setting/unsetting) dashboard settings via the REST
API.
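A hedged usage sketch; the endpoint path and payload shape are
assumptions, not taken from the dashboard code:

    import requests

    base = "https://dashboard.example.com:8443"   # placeholder URL
    s = requests.Session()
    s.verify = False                              # self-signed test cert
    # authentication omitted for brevity

    # set a value
    s.put(base + "/api/settings/some_setting", json={"value": 42})
    # unset it again (restore the default)
    s.delete(base + "/api/settings/some_setting")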
Fixes: https://tracker.ceph.com/issues/24273
Signed-off-by: Patrick Nawracay <pnawracay@suse.com>
An ugly workaround for a Python dependency conflict that has broken
the rgw/tempest suite. It allows us to preserve the pinned versions of
keystone/tempest without having to maintain a fork of the keystone
repository.
Fixes: http://tracker.ceph.com/issues/23659
Signed-off-by: Casey Bodley <cbodley@redhat.com>
'policy show' returns a JSON-encoded representation of
RGWAccessControlPolicy, while key.get_xml_acl() returns
RGWAccessControlPolicy_S3 encoded as XML. So even with '&format=xml',
the strings won't match.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
result.json() raises a 'JSONDecodeError: Expecting value: line 1 column 1'
for requests that return no body, such as 'user rm', 'key rm',
'subuser rm', 'bucket unlink', etc.
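A small guard along these lines avoids the error (sketch only):

    def json_or_none(result):
        # Only decode the body when there is one; admin requests such as
        # 'user rm' return an empty body and would otherwise raise
        # JSONDecodeError.
        if not result.content:
            return None
        return result.json()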
Signed-off-by: Casey Bodley <cbodley@redhat.com>
* Assert `pg_placement_num` has the same value as `pg_num`.
* Only set `application_metadata` if it is not None.
* `osd pool set` only accepts strings (see the sketch below).
* Sync `pgp_num` with `pg_num`.
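A sketch of the stringification point, with a hypothetical command
runner:

    def osd_pool_set(ceph_cmd, pool, option, value):
        # `osd pool set` only accepts strings, so stringify the value first.
        ceph_cmd(['osd', 'pool', 'set', pool, option, str(value)])

    # keep pgp_num in sync with pg_num, e.g.:
    # osd_pool_set(run, 'rbd', 'pg_num', 64)
    # osd_pool_set(run, 'rbd', 'pgp_num', 64)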
Signed-off-by: Stephan Müller <smueller@suse.com>
Avoid the need for each module to expose a self-test
command: modules can simply implement the method
and have it called via the selftest module.
Besides requiring fewer lines of code, this means the
self-test commands do not clutter the interface for
end users, as they are invisible until the selftest
module is loaded.
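A sketch of what a module-side hook could look like (assumes the usual
MgrModule base class; the method body is illustrative only):

    from mgr_module import MgrModule

    class Module(MgrModule):
        def self_test(self):
            # Called by the selftest module; raising an exception marks the
            # test as failed. No user-visible command is registered.
            if not self.get('mon_map'):
                raise RuntimeError('mon_map unavailable')
            return 'ok'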
Signed-off-by: John Spray <john.spray@redhat.com>
This is being done by passing native CPython objects
back and forth. It's safe because sub-interpreters in CPython
share memory allocation infrastructure and share the GIL.
With a view to PEP 554, we limit inter-interpreter calls
to pickleable objects, so that this may be implemented
using byte-arrays in the future.
This infrastructure should enable:
- the dashboard to display the status of other modules, for
example the set of progress indicators from `progress`
- dashboard and restful to share an underlying long-running
job mechanism.
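A hedged illustration of the pickleability constraint; the `remote()`
call shown in the comment is the inter-module helper described above,
named here by assumption:

    import pickle

    # In a calling module one would do something like:
    #   indicators = self.remote('progress', 'get_progress')
    # Whatever crosses the module boundary must stay pickleable, so it
    # could later be carried as byte-arrays between interpreters (PEP 554):

    def check_pickleable(obj):
        pickle.dumps(obj)   # raises if the object cannot cross the boundary
        return obj

    check_pickleable({'progress': [{'id': 'rebuild', 'progress': 0.4}]})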
Signed-off-by: John Spray <john.spray@redhat.com>
This fixes errors caused by the remount done by some tests
(test_recovery_pool.py) where the fs name is not given.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
The MDS may not be on the same machine where the cluster command is run.
Fixes: http://tracker.ceph.com/issues/24858
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/21885/head:
qa: update cluster log health warning message
qa: add tests for client features
mds: evict clients that lack required features
mds: cleanup MDSRank::evict_client
mds: infer client version by client metadata and connection's features
mds: introduce "ceph fs set <fs_name> min_compat_client <release_name>"
mds: tell client why it's rejected
mds: introduce cephfs' own feature bits
mds: make Server::prepare_force_open_sessions() update client metadata
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>