Adjusting dependent params as well.
Resolving space amplification caused by small objects and/or EC
overwrites.
Relates to: https://tracker.ceph.com/issues/44213
Signed-off-by: Igor Fedotov <ifedotov@suse.com>
Fixes errors when calling `from_json` on these classes:
- InventoryHost: parsing labels
- ServiceDescription: `last_refresh` and `created` fields should be parsed
into datetime objects.
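For illustration, a minimal Python sketch of the datetime handling this
fixes; the field names come from this message, while the timestamp format
and class shape are assumptions rather than the actual orchestrator code:

    import datetime

    DATEFMT = '%Y-%m-%dT%H:%M:%S.%f'  # assumed timestamp format

    class ServiceDescription:
        def __init__(self, last_refresh=None, created=None):
            self.last_refresh = last_refresh
            self.created = created

        @classmethod
        def from_json(cls, data):
            # convert string timestamps back into datetime objects
            # instead of leaving them as plain strings
            def to_dt(v):
                return datetime.datetime.strptime(v, DATEFMT) if v else None
            return cls(last_refresh=to_dt(data.get('last_refresh')),
                       created=to_dt(data.get('created')))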
Signed-off-by: Kiefer Chang <kiefer.chang@suse.com>
Current behaviour is to only start a newly adopted ceph daemon if it was
already running before the adopt. Adding a --force-start option allows
the adopt command to start newly adopted daemons that weren't originally
running, saving the user from having to manually invoke `systemctl start
ceph-$FSID@$DAEMON.$ID`.
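Schematically, the resulting decision is as sketched below (names are
illustrative, not the actual cephadm code):

    def should_start(was_running: bool, force_start: bool) -> bool:
        # previously only daemons that were running before the adopt were
        # started again; --force-start also starts ones that were stopped
        return was_running or force_start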
Signed-off-by: Tim Serong <tserong@suse.com>
When adopting OSDs, if a ceph-volume simple service is already disabled
(or otherwise missing) the previous implementation would raise an error,
thus killing the adopt.
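Roughly, the disable step now tolerates failure instead of raising,
something like the sketch below (the unit name is assumed for
illustration, not taken from the actual code):

    import subprocess

    def disable_simple_service(osd_id: str, osd_fsid: str) -> None:
        unit = 'ceph-volume@simple-%s-%s' % (osd_id, osd_fsid)
        result = subprocess.run(['systemctl', 'disable', unit],
                                capture_output=True, text=True)
        if result.returncode != 0:
            # previously this raised and aborted the adopt; now we just
            # note it and carry on
            print('unable to disable %s, continuing anyway' % unit)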
Signed-off-by: Tim Serong <tserong@suse.com>
The current adopt behavior expects OSDs to be online, in order to read
/var/lib/ceph/osd/ceph-$ID/fsid. To handle the case where OSDs
are offline, this change first checks to see if that file is present,
and if not, falls back to calling `ceph-volume lvm list` to see if
there's a matching OSD there, and if that doesn't work, it checks
/etc/ceph/osd/*.json to see if there's a matching old-style simple
OSD present.
For LVM OSDs, the only thing we need is the OSD's fsid; the remainder
of the adopt procedure "just works", as the various other files
in /var/lib/ceph/$FSID/osd.$ID are created by magic anyway when the
OSD is activated, so it doesn't matter if they're not present at
adoption time.
For simple (ceph-disk created) OSDs, we actually need all the files under
/var/lib/ceph/osd/ceph-$ID/ to be moved to /var/lib/ceph/$FSID/osd.$ID
so if a simple OSD is found, it's mounted first so that the existing
move_files() a bit further down around line 3200 continues to work.
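The lookup order is roughly as sketched below (illustrative Python only;
the real helpers live in cephadm, and the json field names are my reading
of ceph-volume's output, not copied from the patch):

    import glob
    import json
    import os
    import subprocess

    def find_osd_fsid(osd_id: str) -> str:
        # 1. online OSD: read the fsid straight from the legacy data dir
        path = '/var/lib/ceph/osd/ceph-%s/fsid' % osd_id
        if os.path.exists(path):
            with open(path) as f:
                return f.read().strip()
        # 2. offline LVM OSD: ask ceph-volume for its metadata
        out = subprocess.check_output(
            ['ceph-volume', 'lvm', 'list', '--format', 'json'])
        for _id, devices in json.loads(out).items():
            if _id == str(osd_id) and devices:
                return devices[0]['tags']['ceph.osd_fsid']
        # 3. offline simple (ceph-disk) OSD: scan the old-style descriptors
        for fn in glob.glob('/etc/ceph/osd/*.json'):
            with open(fn) as f:
                meta = json.load(f)
            if str(meta.get('whoami')) == str(osd_id):
                return meta['fsid']
        raise RuntimeError('unable to find fsid for osd.%s' % osd_id)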
Fixes: https://tracker.ceph.com/issues/45095
Signed-off-by: Tim Serong <tserong@suse.com>
A write can get stuck waiting for a larger max_size in the following sequence of
events:
- client opens a file and writes to position 'A' (larger than the unit of
max size increment)
- client closes the file handle and updates wanted caps (not wanting
file write caps)
- client opens and truncates the file, writes to position 'A' again.
At the 1st event, the client sets the inode's requested_max_size to 'A'. At
the 2nd event, the mds removes the client's writable range, but the client
does not reset requested_max_size. At the 3rd event, the client does not
request a larger max size because requested_max_size is already larger
than 'A'.
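A condensed illustration of the bookkeeping (hypothetical names; the real
fix is in the CephFS client's cap handling):

    class Inode:
        def __init__(self):
            self.max_size = 0
            self.requested_max_size = 0

        def want_max_size(self, target):
            # only ask the mds if we haven't already requested at least
            # this much
            if target > self.requested_max_size:
                self.requested_max_size = target
                return True   # would send a max_size request to the mds
            return False

        def on_write_caps_dropped(self):
            # the fix: once the mds has taken away our writable range,
            # forget the old request, so the next write asks again instead
            # of stalling on a max_size that will never arrive
            self.requested_max_size = 0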
Fixes: https://tracker.ceph.com/issues/44801
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
* use `shell` lexer, otherwise the Python one is used, and the rendered
result does not look right
* be consistent when indenting -- either use tabs or spaces, otherwise
the indentation in the code block will be wrong.
* double quote the variables in text
Signed-off-by: Kefu Chai <kchai@redhat.com>
This patch adds SSL support to RGW when using cephadm.
If an SSL certificate is provided inside the json supplied with:
ceph orchestrator rgw create -i rgw.json
then the SSL cert and/or key will be pushed into the mon config-key database
using the key `rgw/cert/<rgw_realm>/<rgw_zone>.[crt|key]`.
This will then be referenced in the config:
rgw_frontends = beast port=80 ssl_port=443 ssl_certificate=config://rgw/cert/<rgw_realm>/<rgw_zone>.crt
And if an SSL key is also supplied, this becomes something like:
rgw_frontends = beast port=80 ssl_port=443 ssl_certificate=config://rgw/cert/<rgw_realm>/<rgw_zone>.crt ssl_key=config://rgw/cert/<rgw_realm>/<rgw_zone>.key
Of course you could also just upload the cert and key yourself to the
config-key location, and SSL will be enabled as well. But this patch
lets you supply them either via `-i` or as a manual upload step.
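In rough terms the module side does something like the sketch below,
written against the mgr check_mon_command interface; only the config-key
names follow this message, everything else is illustrative:

    def store_rgw_ssl(mgr, rgw_realm, rgw_zone, cert=None, key=None):
        frontends = 'beast port=80'
        if cert:
            ref = 'rgw/cert/%s/%s.crt' % (rgw_realm, rgw_zone)
            mgr.check_mon_command({'prefix': 'config-key set',
                                   'key': ref, 'val': cert})
            frontends += ' ssl_port=443 ssl_certificate=config://%s' % ref
        if key:
            ref = 'rgw/cert/%s/%s.key' % (rgw_realm, rgw_zone)
            mgr.check_mon_command({'prefix': 'config-key set',
                                   'key': ref, 'val': key})
            frontends += ' ssl_key=config://%s' % ref
        return frontends  # value for the rgw_frontends option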
Co-Authored-By: Michael Fritch <mfritch@suse.com>
Co-Authored-By: Sebastian Wagner <sebastian@spawnhost.de>
Signed-off-by: Matthew Oliver <moliver@suse.com>
This version was only compiled as part of ceph-object-corpus
generation, when ENCODE_DUMP_PATH is defined, so it was missed
when bufferlist::copy() was removed.
Fixes: https://tracker.ceph.com/issues/45023
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
We might race with the remote rbd-mirror daemon creating a
tx-only peer when adding a new peer. Therefore, delete the
tx-only peer and attempt to re-create it.
Fixes: https://tracker.ceph.com/issues/44938
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
It previously included the pointer to the string holding the generated
uuid (neither of which would mean much to an end user).
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
"ceph fs status" json format outputs to stderr instead of
stdout. This patch fixes the same.
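Schematically, a mgr command handler returns (retcode, stdout, stderr)
and the json body belongs in the stdout slot; the function below is a
made-up reduction of that idea, not the actual status module code:

    import json

    def handle_fs_status(fmt, report):
        if fmt == 'json':
            # before: return 0, '', json.dumps(report)  -- json on stderr
            return 0, json.dumps(report, indent=2), ''  # json on stdout
        return 0, str(report), ''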
Fixes: https://tracker.ceph.com/issues/44962
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Adds expand/collapse feature to every datatable with details.
Fixes: https://tracker.ceph.com/issues/40702
Signed-off-by: Sebastian Krah <skrah@suse.com>
After creating a filesystem using the 'fs new' command, the value
of the 'data' key of the data pool's and the 'metadata' key of the
metadata pool's application metadata 'cephfs' should be the
filesystem's name. This didn't happen when the data or metadata
pool's application metadata 'cephfs' was enabled before the pool
was used in the 'fs new' command.
Fix this during the handling of the 'fs new' command by setting the
value of the key of the pool's application metadata 'cephfs' to the
filesystem's name even when the application metadata 'cephfs' is
already enabled or set.
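The intended outcome, reduced to a small runnable sketch with invented
helper names (the actual change is in the mon's handling of 'fs new'):

    def tag_fs_pools(pools, fs_name, data_pool, metadata_pool):
        # (re)set the key even if the 'cephfs' application was already
        # enabled on the pool before 'fs new' ran
        pools[data_pool].setdefault('cephfs', {})['data'] = fs_name
        pools[metadata_pool].setdefault('cephfs', {})['metadata'] = fs_name

    pools = {'cephfs_data': {'cephfs': {}},   # application enabled up front
             'cephfs_meta': {}}
    tag_fs_pools(pools, 'myfs', 'cephfs_data', 'cephfs_meta')
    # both pools now carry the filesystem's name under the 'cephfs' key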
Fixes: https://tracker.ceph.com/issues/43761
Signed-off-by: Ramana Raja <rraja@redhat.com>
After making a RADOS pool a filesystem's data pool using the
'add_data_pool' command, the value of the 'data' key of the pool's
application metadata 'cephfs' should be the filesystem's name. This
didn't happen when the pool's application metadata 'cephfs' was
enabled before the pool was made the data pool. Fix this during the
handling of the 'add_data_pool' command by setting the value of
the 'data' key of the pool's application metadata 'cephfs' to the
filesystem's name even when the application metadata 'cephfs' is
already enabled or set.
Fixes: https://tracker.ceph.com/issues/43061
Signed-off-by: Ramana Raja <rraja@redhat.com>