# Ceph - a scalable distributed storage system

Please see http://ceph.com/ for current info.

## Contributing Code

Most of Ceph is dual-licensed under the LGPL version 2.1 or 3.0. Some miscellaneous code is under a BSD-style license or is in the public domain. The documentation is licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0). There are a handful of headers included here that are licensed under the GPL. Please see the file COPYING for a full inventory of licenses by file.

Code contributions must include a valid "Signed-off-by" acknowledging the license for the modified or contributed file. Please see the file SubmittingPatches.rst for details on what that means and on how to generate and submit patches.

We do not require assignment of copyright to contribute code; code is contributed under the terms of the applicable license.

## Checking out the source

You can clone from GitHub with

```shell
git clone git@github.com:ceph/ceph
```

or, if you are not a GitHub user,

```shell
git clone git://github.com/ceph/ceph
```

Ceph contains many git submodules that need to be checked out with

```shell
git submodule update --init --recursive
```
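
After updating, it can be worth checking that every submodule is initialized and sitting at the commit the superproject expects. This is standard git behavior, not Ceph-specific:

```shell
# Run from the top of the ceph checkout.
# A '-' prefix marks an uninitialized submodule; a '+' prefix marks a
# submodule checked out at a different commit than the one recorded.
git submodule status --recursive
```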

## Build Prerequisites

The list of Debian or RPM package dependencies can be installed with:

```shell
./install-deps.sh
```

## Building Ceph

Note that these instructions are meant for developers who are compiling the code for development and testing. To build binaries suitable for installation, we recommend you build deb or rpm packages, or refer to `ceph.spec.in` or `debian/rules` to see which configuration options are specified for production builds.

Prerequisite: CMake 3.5.1

Build instructions:

```shell
./do_cmake.sh
cd build
make
```

(Note: `do_cmake.sh` now defaults to creating a debug build of Ceph that can be up to 5x slower with some workloads. Please pass `-DCMAKE_BUILD_TYPE=RelWithDebInfo` to `do_cmake.sh` to create a non-debug release build.)

This assumes you make your build dir a subdirectory of the ceph.git checkout. If you put it elsewhere, just replace `..` in `do_cmake.sh` with the correct path to the checkout. Any additional CMake args can be specified by setting `ARGS` before invoking `do_cmake.sh`. See the CMake Options section below for more details. For example:

```shell
ARGS="-DCMAKE_C_COMPILER=gcc-7" ./do_cmake.sh
```

To build only certain targets, use:

```shell
make [target name]
```

To install:

```shell
make install
```
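
If you'd rather stage the installed files under a scratch directory than write to the system root (useful for inspection or packaging), the CMake-generated makefiles honor the conventional `DESTDIR` variable. The `/tmp/ceph-stage` path below is just an example:

```shell
# Stage the full install tree under /tmp/ceph-stage instead of /
make DESTDIR=/tmp/ceph-stage install
```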

## CMake Options

If you run the cmake command by hand, there are many options you can set with `-D`. For example, the option to build the RADOS Gateway defaults to `ON`. To build without the RADOS Gateway:

```shell
cmake -DWITH_RADOSGW=OFF [path to top level ceph directory]
```

The following example builds with debugging enabled and alternate locations for a couple of external dependencies:

```shell
cmake -DLEVELDB_PREFIX="/opt/hyperleveldb" -DOFED_PREFIX="/opt/ofed" \
      -DCMAKE_INSTALL_PREFIX=/opt/accelio -DCMAKE_C_FLAGS="-O0 -g3 -gdwarf-4" \
      ..
```

To view an exhaustive list of `-D` options, you can invoke cmake with:

```shell
cmake -LH
```
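
The full listing is long. Since each cached option is printed as `NAME:TYPE=VALUE` preceded by a one-line help comment, you can filter it with grep; for example, to see just the `WITH_*` feature toggles (run from an already-configured build directory):

```shell
# -B1 keeps the help comment that precedes each matching cache entry
cmake -LH . 2>/dev/null | grep -B1 '^WITH_'
```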

If you often pipe make output to less and would like to maintain the diagnostic colors for errors and warnings (and if your compiler supports it), you can invoke cmake with:

```shell
cmake -DDIAGNOSTICS_COLOR=always ..
```

Then you'll get the diagnostic colors when you execute:

```shell
make | less -R
```

Other available values for `DIAGNOSTICS_COLOR` are `auto` (the default) and `never`.

## Building a source tarball

To build a complete source tarball with everything needed to build from source and/or build a (deb or rpm) package, run

```shell
./make-dist
```

This will create a tarball like `ceph-$version.tar.bz2` from git. (Ensure that any changes in your working directory that you want to include are committed to git.)
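
Before using the tarball, it can be worth a quick sanity check that it unpacks into the expected `ceph-$version/` tree (the exact file name depends on the version of your checkout):

```shell
# List the first few entries of the tarball without extracting it
tar tjf ceph-*.tar.bz2 | head
```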

## Running a test cluster

To run a functional test cluster:

```shell
cd build
make vstart        # builds just enough to run vstart
../src/vstart.sh --debug --new -x --localhost --bluestore
./bin/ceph -s
```

Almost all of the usual commands are available in the `bin/` directory. For example:

```shell
./bin/rados -p rbd bench 30 write
./bin/rbd create foo --size 1000
```

To shut down the test cluster:

```shell
../src/stop.sh
```

To start or stop individual daemons, the sysvinit script can be used:

```shell
./bin/init-ceph restart osd.0
./bin/init-ceph stop
```

## Running unit tests

To build and run all tests (in parallel, using all processors), use ctest:

```shell
cd build
make
ctest -j$(nproc)
```

(Note: Many targets built from `src/test` are not run using ctest. Targets starting with "unittest" are run in `make check` and thus can be run with ctest. Targets starting with "ceph_test" cannot, and should be run by hand.)

When failures occur, look in build/Testing/Temporary for logs.
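
For example, the combined output of the most recent ctest invocation lands in `LastTest.log` (a standard ctest file name):

```shell
# Inspect the log of the most recent ctest run
less build/Testing/Temporary/LastTest.log
```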

To build and run all tests and their dependencies, without other unnecessary targets:

```shell
cd build
make check -j$(nproc)
```

To run an individual test manually, run ctest with `-R` (regex matching):

```shell
ctest -R [regex matching test name(s)]
```

(Note: ctest does not build the test it's running, nor the dependencies needed to run it.)

To run an individual test manually and see all of its output, run ctest with the `-V` (verbose) flag:

```shell
ctest -V -R [regex matching test name(s)]
```

To run tests manually with the jobs running in parallel, run ctest with the `-j` flag:

```shell
ctest -j [number of jobs]
```

There are many other flags you can give ctest for better control over manual test execution. To view these options, run:

```shell
man ctest
```
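
A few flags that tend to be useful day to day (all standard ctest options; the `unittest_` regex below is just an example):

```shell
ctest -N -R unittest_       # list matching tests without running them
ctest --output-on-failure   # print a failing test's output inline
ctest --rerun-failed        # re-run only the tests that failed last time
```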

## Building the Documentation

### Prerequisites

The list of package dependencies for building the documentation can be found in `doc_deps.deb.txt`:

```shell
sudo apt-get install `cat doc_deps.deb.txt`
```

### Building the Documentation

To build the documentation, ensure that you are in the top-level `/ceph` directory, and execute the build script. For example:

```shell
admin/build-doc
```