Create and delete exports for NFS Ganesha via the mgr volumes module
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Sebastian Wagner <sebastian.wagner@suse.com>
RTD does not support installing system packages; the only ways to install
dependencies are setuptools and pip, while ditaa is a tool written in
Java. So we need a native Python tool that can render ditaa
images. plantweb can use a web service to render the ditaa
diagrams, so let's use it as a fallback if "ditaa" is not around.
Also, start a new line after the directive, otherwise the plantweb server
will return 500 on seeing the diagram.
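A minimal sketch of the fallback (extension module names assumed; this
runs inside conf.py, where the "extensions" list already exists):

    # conf.py (sketch): prefer the local Java ditaa tool; otherwise
    # fall back to plantweb, which renders ditaa via a web service.
    import shutil

    if shutil.which('ditaa'):
        extensions.append('sphinxcontrib.ditaa')
    else:
        extensions.append('plantweb.directive')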
Signed-off-by: Kefu Chai <kchai@redhat.com>
The following interface is added:
"ceph fs subvolume info <vol_name> <sub_name> [<group_name>]"
The output is in JSON format with the following fields (a sample follows the list):
1. atime: access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS"
2. mtime: modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS"
3. ctime: change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS"
4. uid: uid of subvolume path
5. gid: gid of subvolume path
6. mode: mode of subvolume path
7. mon_addrs: list of monitor addresses
8. bytes_pcent: quota used as a percentage if a quota is set, else displays "undefined"
9. bytes_quota: quota size in bytes if a quota is set, else displays "infinite"
10. bytes_used: current used size of the subvolume in bytes
11. created_at: time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS"
12. data_pool: data pool the subvolume belongs to
13. path: absolute path of a subvolume
14. type: subvolume type indicating whether it's a clone or a subvolume
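An illustrative invocation and output (all values below are made up, and
the path uses a placeholder for the internal uuid component):

    $ ceph fs subvolume info vol_a sub_0
    {
        "atime": "2020-03-09 10:05:32",
        "bytes_pcent": "undefined",
        "bytes_quota": "infinite",
        "bytes_used": 0,
        "created_at": "2020-03-09 10:05:32",
        "ctime": "2020-03-09 10:05:32",
        "data_pool": "cephfs_data",
        "gid": 0,
        "mode": 16877,
        "mon_addrs": ["192.168.1.7:6789"],
        "mtime": "2020-03-09 10:05:32",
        "path": "/volumes/_nogroup/sub_0/<uuid>",
        "type": "subvolume",
        "uid": 0
    }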
Fixes: https://tracker.ceph.com/issues/44277
Signed-off-by: Kotresh HR <khiremat@redhat.com>
* refs/pull/34060/head:
Merge PR #34027 into octopus
Merge PR #34045 into octopus
Merge pull request #34035 from dillaman/wip-rbd-permissions
mgr/progress: fix duration strings
Merge PR #34014 into octopus
Merge PR #34001 into octopus
Merge PR #34011 into octopus
qa/workunits/rbd: use context managers to control Rados lifespan
Merge pull request #34032 from dillaman/wip-rbd-octopus-docs
doc/releases/octopus: add additional RBD improvements
qa/workunits/cephadm/test_cephadm: mark services unmanaged for test
mgr/cephadm: do not reconfig unmanaged services
Merge PR #33981 into octopus
Merge pull request #34018 from ajarr/octopus-subvolume-clone-cancel
qa/workunits/cephadm/test_cephadm: output file for pub key
Merge PR #33866 into octopus
Merge PR #34005 into octopus
Merge PR #34013 into octopus
mgr/cephadm: pytest: Enable SpecStore
mgr/orchestrator: add test for default implementation for apply()
python-common: validate ServiceSpec.service_type
fixup mgr/cephadm: Fix ceph orch apply -i
mgr/dashboard: orchestrator service: Revert wait_api_result to a single completion
mgr/orchestrator: `orch daemon add` accepts a yaml
mgr/cephadm: apply_drivegroups() returns a single Completion
mgr/cephadm: remove `trivial_result()`
mgr/cephadm: Fix `ceph orch apply -i`
Merge pull request #33994 from dillaman/wip-librbd-poll-event-race
doc: document `clone cancel` command
test: add `clone cancel` tests
mgr/volumes: introduce "clone cancel" volume command
mgr/volumes: allow canceling a single asynchronous job for a volume
mgr/volumes: helper for looking up a clone entry index
mgr/volumes: periodically check if clone operations should be canceled
mgr/volumes: periodically check if copy operations should be canceled
mgr/volumes: introduce 'canceled' state in clone op state machine
qa/suites/rados/verify/validater/valgrind: tolerate SLOW_OPS
qa/suites/rados/verify/validater/valgrind: less bluestore logging
qa/suites/rados/verify/validater: increase heartbeat grace
Revert "qa/suites/rados/verify: debug_ms = 1, osd_heartbeat_grace = 60"
Revert "qa/suites/rados/verify/validator/valgrind: debug refs = 5"
ceph_test_watch_notify: try notify 10x if ALLOW_TIMEOUTS is set
ceph_test_rados_api_misc: ShutdownRace timeout if ALLOW_TIMEOUTS is set
qa/suites/rados/verify: set ALLOW_TIMEOUTS for workunits
doc/install: edits
doc/cephadm: more edits
doc/cephadm/install: edits
doc/cephadm/adoption: improvements
doc/cephadm/install: a few edits
doc/cephadm/install: do not install ceph-common on host (by default)
doc/cephadm: drop os recs link
doc/cephadm/upgrade: improvements
doc/cephadm/upgrade: document upgrade
doc/cephadm/install: revamp install docs
doc: reorganize cephadm docs
doc/cephadm/administration: update docs on customizing SSH config
doc/cephadm/administration: add a note about the 'removed' dir
mgr/balancer: tolerate pgs outside of target weight map
qa/workunits/cephadm/test_cephadm: --skip-monitoring-stack
Merge PR #33974 into octopus
Merge PR #33442 into octopus
Merge PR #33997 into octopus
Merge PR #34000 into octopus
use quay octopus tip until 15.2 tag is available
python-common: reduce output of ServiceSpec.to_json()
python-common,mgr/cephadm: move assert_valid_host to service_spec
mgr/cephadm: add HostAssignment.validate()
mgr/dashboard: adapt create_osds interface change
mon/MgrMonitor: make 'mgr fail' work with no arguments
cephadm: add allow_ptrace option to enable SYS_PTRACE
update default container images
mgr/cephadm: limit number of times check host is performed in the serve loop
Merge PR #33961 into octopus
Merge PR #33952 into octopus
Merge PR #33990 into octopus
Merge PR #33955 into octopus
Merge PR #33936 into octopus
mgr/orch: add --all-available-devices to 'orch apply osd'
qa/workunits/cephadm: --skip-mon-network when using 127.0.0.1
cephadm: add tests
qa/tasks/cephadm: pass -v to bootstrap
mgr/cephadm: only try to place mons on hosts matching public_network
mgr/cephadm: keep track of host networks, ips
cephadm: automatically infer mon public_network, if we can
cephadm: add list-networks command
cephadm: bootstrap: deploy monitoring stack by default
librbd: defer event socket completion until after callback issued
cephadm: add-repo: add --version
mgr/cephadm: respect 'unmanaged' flag in spec
mgr/orch: orch ls: show <no spec> or <unmanaged> as appropriate
mgr/orch: orch ls: rename SPEC -> PLACEMENT
mgr/orch: add 'unmanaged' property to ServiceSpec
cephadm: rename distro args in repo methods
mgr/orch: combine 'orch daemon add <type> ...' into one command
mgr/orch: combine 'orch apply <type> [<placement>]' into one command
Reviewed-by: Laura Paduano <lpaduano@suse.com>
* refs/pull/33491/head:
mount.ceph: add "fs=<fs_name>" mount options support
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
"client_fs" is one alias for "client_mds_namespace=" and it will be
cleaner and be more user-friendly to use. "client_mds_namespace="
will be kept and backwards compatibility used.
Update the documents at the same time.
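For illustration (mount point and filesystem name assumed), the new
alias can be passed to ceph-fuse like any other client config option:

    ceph-fuse /mnt/mycephfs --client_fs cephfs_a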
Fixes: https://tracker.ceph.com/issues/44212
Signed-off-by: Xiubo Li <xiubli@redhat.com>
"fs" is one alias for "mds_namespace=" and it will be cleaner and
be more user-friendly to use. The "fs" will be translated to
"mds_namespace" before sending it to kernel space.
And the "mds_namespace" will be deprecated to use any more.
Update the documents at the same time.
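For example (monitor address and filesystem name assumed):

    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,fs=cephfs_a

Here mount.ceph rewrites "fs=cephfs_a" to "mds_namespace=cephfs_a"
before handing the options to the kernel.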
Fixes: https://tracker.ceph.com/issues/44214
Signed-off-by: Xiubo Li <xiubli@redhat.com>
These have not aged gracefully, and in particular include instructions
for setting pool size 1 to let Hadoop control the replication, but I've
heard reports of users setting up multiple size-1 pools and then wondering
where their data went when an OSD died.
Signed-off-by: Greg Farnum <gfarnum@redhat.com>
According to the path restriction example, the filesystem name
should be cephfs_a, not cephfs. Converge on cephfs_a to avoid it
being confused with the pool tag, which is always cephfs.
This was introduced in 160c4bfeb8 ("mon/AuthMonitor: Use new osd
auth caps for ceph fs authorize").
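For illustration (client name and path assumed), with the corrected
example

    ceph fs authorize cephfs_a client.foo / rw

the generated OSD cap takes the form "allow rw tag cephfs data=cephfs_a":
the tag is always "cephfs", while "data=" names the filesystem.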
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Also, move the common part from "Mount using FUSE" doc and "Mount using
kernel" doc to "Mount CephFS" page to avoid repetitions.
Fixes: https://tracker.ceph.com/issues/43154
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Rename to mount-using-kernel.rst and mount-using-fuse.rst respectively
so that it's easier to find them in doc/cephfs directory.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
1GB is too low as a default and usually results in cache size warnings
at that size; the MDS will struggle to maintain such a small cache
for most workloads.
Fixes: https://tracker.ceph.com/issues/43182
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/32124/head:
doc/cephfs/disaster-recovery-experts: Add link for scrub and note for scrub_path
Reviewed-by: Rishabh Dave <ridave@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
The Nautilus release introduced the allow_standby_replay fs setting, which
obsoleted several MDS config entries: mds_standby_for_*,
mon_force_standby_active, and mds_standby_replay.
Remove the entries instead of just marking them as "Obsolete", per
batrick's suggestion.
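The replacement for the removed entries is the per-filesystem setting,
e.g. (filesystem name assumed):

    ceph fs set cephfs_a allow_standby_replay true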
Signed-off-by: Rodrigo Severo <rodrigo@fabricadeideias.com>
Remove last bits of support for 'mds_cache_size'.
'mds_cache_memory_limit' is preferred.
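A quick example of the preferred knob (value assumed; note it is a byte
count, not an inode count like the old 'mds_cache_size'):

    ceph config set mds mds_cache_memory_limit 4294967296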
Fixes: https://tracker.ceph.com/issues/41951
Signed-off-by: Ramana Raja <rraja@redhat.com>