The release note about bucket notifications being encapsulated in a JSON
array was added to PendingReleaseNotes by
19832a0dae but it never found its way into
the official v15.2.0 release notes.
Signed-off-by: Nathan Cutler <ncutler@suse.com>
The release note about OSDs self-classifying themselves as "nvme" in
certain rare cases was added to PendingReleaseNotes by
1d0fd5603e but it never found its way into
the official v15.2.0 release notes.
Signed-off-by: Nathan Cutler <ncutler@suse.com>
The release note about restoring the original behavior of the rados
tool's "-o" option was added to PendingReleaseNotes by
d2b806070d but it never found its way into
the official v15.2.0 release notes.
Signed-off-by: Nathan Cutler <ncutler@suse.com>
This note was added to PendingReleaseNotes by
0efa73f029 but never found its way into
the official v15.2.0 release notes.
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Adds option `mon_allow_pool_size_one` which will be disabled by default
to ensure pools are not configured without replicas.
If the user still wants to use a pool size of 1, they will have to set
`mon_allow_pool_size_one` to true and then pass the flag
`--yes-i-really-mean-it` to the CLI command:
Example:
`ceph osd pool set test size 1 --yes-i-really-mean-it`
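To flip the option first, something like the following should work (a
sketch; it assumes the option is applied to the monitors via `ceph config
set`):

    ceph config set mon mon_allow_pool_size_one true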
Fixes: https://tracker.ceph.com/issues/44025
Signed-off-by: Deepika Upadhyay <dupadhya@redhat.com>
Use APIs instead of apis to be consistent throughout.
Fixes: https://tracker.ceph.com/issues/44374
Signed-off-by: Deepika Upadhyay <dupadhya@redhat.com>
Before this, the "mds_join_fs" config only expressed a preference for the
standby the monitors would select. Now the monitors actively enforce this
by purposefully removing an MDS with lower "affinity". An MDS standby
has the highest affinity if its mds_join_fs is the file system in question,
or if it is a vanilla standby (no mds_join_fs).
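For example, expressing affinity for a particular file system might look
like this (daemon and file system names are hypothetical):

    ceph config set mds.b mds_join_fs cephfs2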
Fixes: https://tracker.ceph.com/issues/43392
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
1GB is too low as a default and usually results in cache-size health
warnings; the MDS will struggle to maintain such a small cache for most
workloads.
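Operators can still override the default explicitly; for instance (value
hypothetical):

    ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB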
Fixes: https://tracker.ceph.com/issues/43182
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Remove last bits of support for 'mds_cache_size'.
'mds_cache_memory_limit' is preferred.
Fixes: https://tracker.ceph.com/issues/41951
Signed-off-by: Ramana Raja <rraja@redhat.com>
osd/OSDMap: Show health warning if a pool is configured with size 1
Reviewed-by: Sage Weil <sweil@redhat.com>
Reviewed-by: David Zafman <dzafman@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
0b369e1aff masked the original behaviour of '-o', which was to indicate
'outfile' as documented in the man page. Changing object-size to a capital
'-O' restores the original behaviour.
Fixes: https://tracker.ceph.com/issues/42477
Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
Introduce a config option called 'mon_warn_on_pool_no_redundancy' that is
used to show a health warning if any pool in the Ceph cluster is
configured with a size of 1. The user can mute/unmute the warning using
'ceph health mute/unmute POOL_NO_REDUNDANCY'.
Add a standalone test to verify the warning on setting pool size=1. Set the
associated warning option to 'false' in ceph.conf.template under qa/tasks so
that existing tests do not break.
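For example, an operator who intentionally runs a size-1 pool can inspect
and then silence the alert:

    ceph health detail                     # reports POOL_NO_REDUNDANCY
    ceph health mute POOL_NO_REDUNDANCY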
Fixes: https://tracker.ceph.com/issues/41666
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
Currently, if a client requests the 1000 next entries from a bucket,
each bucket index shard will receive a request for the 1000 next
entries. When there are hundreds, thousands, or tens of thousands of
bucket index shards, this results in a huge amplification of the
request, even though only 1000 entries will be returned.
These changes reduce the number of entries requested from each bucket
index shard. They also allow re-requests in edge cases where all of one
shard's returned entries are consumed. Finally, they improve the
determination of whether the resulting list is truncated.
Signed-off-by: J. Eric Ivancich <ivancich@redhat.com>
For an entity a.b.c.d, search all dot-delineated prefix sections. This
enables you to establish a hierarchical set of options for clients, such
as radosgw daemons.
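A sketch of what this enables in ceph.conf (section and option choices are
illustrative): an entity named client.rgw.gateway1 now picks up options
from all of its prefix sections, with more specific sections taking
precedence:

    [client]
    debug_ms = 0
    [client.rgw]
    rgw_enable_usage_log = true
    [client.rgw.gateway1]
    rgw_frontends = beast port=8080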
Signed-off-by: Sage Weil <sage@redhat.com>
* refs/pull/30859/head:
auth: EACCES, not EPERM
mon: shunt old tell commands from cli interface to asok
mon: allow mgr to tell mon.foo smart
mon: include quorum features in quorum_status
qa/workunits/mon/caps.sh: fix test
ceph_test_rados_api_cmd: fix MonDescribe test
Merge branch 'vstart-fs-auth' of git://github.com/batrick/ceph into wip-cleanup-mon-asok
test/pybind/test_ceph_argparse: fix tests
vstart: add volume client keys to keyring
vstart: use fs authorize to create master client key
vstart: redirect some output to stderr
vstart: output command strings to stderr
qa/workunits/cephtool/test.sh: fix 'quorum enter' caller
qa: change mon_status calls to quorum_status or tell commands
mon: fix 'heap ...' command
mon: consolidate 'sync force' commands
mon: allow asok commands to return an error code
mon: move 'quorum enter|exit' and 'mon_status' to asok
mon: fix 'smart' asok command
mon: remove old 'config set' and 'injectargs'
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Replace the 'ceph [mon] sync force' commands and just use the asok
sync_force command instead. This is a low-level command that nobody should
reasonably be using except in an emergency, so do not bother trying to
maintain compatibility; it's a bit ridiculous that we had 3 variations of
this to begin with!
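The surviving asok form is roughly (monitor name hypothetical; exact flags
may differ):

    ceph daemon mon.a sync_force --yes-i-really-mean-it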
Signed-off-by: Sage Weil <sage@redhat.com>
* refs/pull/30724/head:
mgr/telemetry: bump content revision and add a release note
telemetry/server: add device report endpoint
mgr/telemetry: include device telemetry
mgr/devicehealth: factor _get_device_metrics out of show_device_metrics
mgr/devicehealth: pull out MAX_SAMPLES
Reviewed-by: Dan Mick <dmick@redhat.com>
* refs/pull/30217/head:
crimson: common/admin_socket kludge so that it builds
mon/MonClient: fix sending mon command to a specific rank
src/.gitignore: ignore .tox
mon/MonClient: interpret numeric mon target name as rank
mgr,mgr/MgrClient: use fsid to signal mon-mgr vs cli MCommands
qa/workunits/cephtool: fix error checks for 'ceph daemon' commands
common/ceph_context: make 'config unset' idempotent
qa/tasks/dump_stuck: mon.a, not mon.0
qa/suites/rados/singleton/all/admin-socket: fix test
common/config: EPERM setting config option after startup
qa/workunits/cephtool/test.sh: fix tell output error check
common/admin_socket: pass Formatter from generic infrastructure
common/admin_socket: pass ostream to call() for error output
os/bluestore: fix asok hook return value
rgw: fix asok return value
common/ceph_context: return error code from asok commands
test/pybind/test_rados: fix accidental mon tell test
mon: print entity_name along with caps to debug log
PendingReleaseNotes: notes about asok changes
mgr/MgrClient: empty target string for 'tell' means active mgr
common/admin_socket: report error code as part of output string
osd: change trigger_[deep_]scrub commands to a pg tell command
osd: remove old command workqueue, threadpool
osd: drop MMonCommand handling
osdc/Objecter: resend OSD tell commands on EAGAIN
osd: route tell commands to asok; migrate commands
osd: use unique_ptr<Formatter> for asok_command
common/ceph_context: add generic asok 'injectargs'
common/admin_socket: allow dup prefixes
common/admin_socket: refactor with sync and async execute_command variants
common/admin_socket: pass input bufferlist
osd: transition to call_async() for asok
common/admin_socket: support alternative call_async()
mon/MonClient: send tell commands out of band via MCommand
mon: accept tell commands via MCommand and send them to asok handler
common/admin_socket: return int from hook call()
mgr/DaemonServer: route MCommand (for octopus+) to asok commands
do not use 'ceph tell mgr'
pybind/ceph_argparse: disambiguate mgr tell and CLI commands
ceph: make 'ceph tell mgr.*' send to the active mgr
ceph: send 'ceph tell mgr.X' to the right mgr
librados: add rados_mgr_command_target
mgr/MgrClient: add start_command variant that takes a target
common/admin_socket: drop unregister_command(); use per-hook variant
common/admin_socket: drop explicit prefix arg to register_command
common/admin_socket: simplify command routing
common/admin_socket: add ability to process MCommand via asok queue
common/admin_socket: pass cmdvec to execute_command
common/admin_socket: use pipe for general wakeup
include/compat: add flags arg to pipe_cloexec
common/admin_socket: drop unused args
Reviewed-by: Neha Ojha <nojha@redhat.com>
With osd_calc_pg_upmaps_aggressively on, we might have to iterate
hundreds or thousands of times to find an optimization, but
upmap_max_optimizations remains a hard limit on the total number of
optimizations that can be returned.
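Assuming both names are regular config options (an assumption about their
scope), enabling the aggressive mode might look like:

    ceph config set global osd_calc_pg_upmaps_aggressively true

with upmap_max_optimizations still capping how many optimizations come back.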
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
* refs/pull/30525/head:
qa/tasks/ceph.conf.template: disable power-of-2 warning
qa/standalone/mon/health-mute: use power of 2 for pg_num
osd/OSDMap: remove remaining g_conf() usage
PendingReleaseNotes: add note for 14.2.5 so we can backport this
osd/OSDMap: health alert for non-power-of-two pg_num
Reviewed-by: Kai Wagner <kwagner@suse.com>
Reviewed-by: Nathan Cutler <ncutler@suse.com>
Reviewed-by: xie xingguo <xie.xingguo@zte.com.cn>
* refs/pull/29824/head:
qa: whitelist new FS_INLINE_DATA_DEPRECATED health warning
mds: add a HEALTH_WARN message when inline_data is enabled
mds: log a warning message when mds is started on an fs with inline_data
mon: deprecate CephFS inline_data support
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Douglas Fuller <dfuller@redhat.com>
The plan is to start deprecating this feature now so that we can remove
it in a future release. Change it to require the
--yes-i-really-really-mean-it flag, and to emit a custom
warning when that isn't specified.
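After this change, re-enabling inline data would look something like
(file system name hypothetical):

    ceph fs set cephfs inline_data true --yes-i-really-really-mean-it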
For now, we leave the testing in place since we do want to be notified
if something breaks before we're ready to rip it out completely.
Fixes: https://tracker.ceph.com/issues/41311
Signed-off-by: Jeff Layton <jlayton@redhat.com>
The previous bluestore_no_per_pool_stats_tolerance had a lot of possible
values, not all of which make sense to users. Replace with a single new
option, bluestore_fsck_error_on_no_per_pool_stats, which controls whether
the lack of per-pool stats is an error or a warning. On repair, we will
unconditionally convert to per-pool stats.
This brings us in sync with the newer
bluestore_fsck_error_on_no_per_pool_omap.
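Making missing per-pool stats a hard fsck error might then look like
(OSD data path hypothetical):

    ceph config set osd bluestore_fsck_error_on_no_per_pool_stats true
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0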
Note that one part of the ceph_test_objectstore test is dropped since it
is no longer possible to create a store with legacy stats.
Signed-off-by: Sage Weil <sage@redhat.com>
This is similar to how recovery reservations are split between
local and remote.
It was the case that scrubs_pending was used for reservations at the
replicas as well as at the primary while requesting reservations
from the replicas. There was no need for scrubs_pending to turn
into scrubs_active at the primary, as nothing treated that value
as special; scrubber.active is true when scrubbing is actually in
progress.
Now scrubber.local_reserved indicates that scrubs_local has been incremented,
and scrubber.remote_reserved indicates that scrubs_remote has been incremented.
Fixes: https://tracker.ceph.com/issues/41669
Signed-off-by: David Zafman <dzafman@redhat.com>
As per the Amazon S3 spec -
https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
* Bucket names must not contain uppercase letters or underscores.
* A name cannot end with a dash, or contain consecutive periods or dashes
adjacent to periods.
* Each label in the bucket name must start and end with a lowercase
letter or a number.
* A name cannot exceed 63 characters.
This change enforces these rules when rgw_relaxed_s3_bucket_names is set to
'false', which is the default.
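For example, with relaxed naming disabled (names illustrative):

    my-bucket.logs    # accepted
    My_Bucket         # rejected: uppercase letter and underscore
    bucket-.logs      # rejected: dash adjacent to a period
    a..b              # rejected: consecutive periods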
Fixes: https://tracker.ceph.com/issues/36293
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
This patch fixes the raw_bytes_used key, which was renamed to stored_raw.
It also adds the percent_used key and fixes the Zabbix template to be fully
compatible with Zabbix 3.0.
Fixes: https://tracker.ceph.com/issues/39644
Signed-off-by: Dmitriy Rabotjagov <noonedeadpunk@ya.ru>
Instead of a timeout and complicated decisions about whether the client is
releasing caps in an expeditious fashion, just use a DecayCounter that tracks
the number of caps we've recalled. This counter is decremented whenever the
client releases caps. If the counter passes a threshold, then we raise the
warning.
Similar reworking is done for the steady-state recall of client caps. Another
release DecayCounter is added so we can tell when the client is not releasing
any more caps.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This is to prevent unsustainable situations where a client has so many
outstanding caps that a linear traversal/operation on the session's caps takes
unacceptable amounts of time.
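Assuming the cap in question is the mds_max_caps_per_client option (an
assumption), raising the ceiling might look like (value hypothetical):

    ceph config set mds mds_max_caps_per_client 2097152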
Fixes: http://tracker.ceph.com/issues/38022
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>