go-ceph - Go bindings for Ceph APIs


Installation

go get github.com/ceph/go-ceph

The native RADOS, RBD, and CephFS libraries and their development headers are expected to be installed.

On debian systems (apt):

libcephfs-dev librbd-dev librados-dev

On rpm based systems (dnf, yum, etc):

libcephfs-devel librbd-devel librados-devel
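
For reference, a minimal sketch of the imports a program using these bindings might need; each sub-package links against the corresponding native library via cgo:

import (
    "github.com/ceph/go-ceph/cephfs" // needs libcephfs
    "github.com/ceph/go-ceph/rados"  // needs librados
    "github.com/ceph/go-ceph/rbd"    // needs librbd
)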

Documentation

Detailed documentation is available at http://godoc.org/github.com/ceph/go-ceph.

Connecting to a cluster

Connect to a Ceph cluster using a configuration file located in the default search paths.

conn, _ := rados.NewConn()
conn.ReadDefaultConfigFile()
conn.Connect()

A connection can be shut down by calling the Shutdown method on the connection object (e.g. conn.Shutdown()). There are also other methods for configuring the connection. Specific configuration options can be set:

conn.SetConfigOption("log_file", "/dev/null")

and command line options can also be passed using the ParseCmdLineArgs method.

args := []string{ "--mon-host", "1.1.1.1" }
err := conn.ParseCmdLineArgs(args)

For other configuration options see the full documentation.
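
Putting these pieces together, a connection with full error handling and a deferred shutdown might look like the following sketch:

conn, err := rados.NewConn()
if err != nil {
    log.Fatal(err)
}
if err := conn.ReadDefaultConfigFile(); err != nil {
    log.Fatal(err)
}
if err := conn.Connect(); err != nil {
    log.Fatal(err)
}
// release the connection once it is no longer needed
defer conn.Shutdown()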

Object I/O

Objects in RADOS can be written to and read from through an interface very similar to standard file I/O:

// open a pool handle
ioctx, err := conn.OpenIOContext("mypool")

// write some data
bytesIn := []byte("input data")
err = ioctx.Write("obj", bytesIn, 0)

// read the data back out
bytesOut := make([]byte, len(bytesIn))
_, err = ioctx.Read("obj", bytesOut, 0)

if !bytes.Equal(bytesIn, bytesOut) {
    fmt.Println("Output is not input!")
}
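
When the object and the pool handle are no longer needed they can be cleaned up, for example by removing the object with Delete and releasing the I/O context with Destroy:

// remove the object when it is no longer needed
if err = ioctx.Delete("obj"); err != nil {
    fmt.Println("delete failed:", err)
}

// release the pool handle
ioctx.Destroy()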

Pool maintenance

The list of pools in a cluster can be retrieved using the ListPools method on the connection object. On a new cluster the following code snippet:

pools, _ := conn.ListPools()
fmt.Println(pools)

will produce the output [data metadata rbd], along with any other pools that might exist in your cluster. Pools can also be created and destroyed. The following creates a new, empty pool with default settings.

conn.MakePool("new_pool")

Deleting a pool is also easy. Call DeletePool(name string) on a connection object to delete a pool with the given name. The following will delete the pool named new_pool and remove all of the pool's data.

conn.DeletePool("new_pool")
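
As a sketch, creating a pool, verifying that it shows up in ListPools, and removing it again with error checking might look like this (the pool name is only an example):

if err := conn.MakePool("new_pool"); err != nil {
    log.Fatal(err)
}
pools, err := conn.ListPools()
if err != nil {
    log.Fatal(err)
}
fmt.Println(pools) // "new_pool" should now appear in the list
if err := conn.DeletePool("new_pool"); err != nil {
    log.Fatal(err)
}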

Development

A development container can be started with the source tree and a local Ceph build mounted in (adjust the mounted paths and CEPH_CONF for your own environment):

docker run --rm -it --net=host \
  --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \
  -v ${PWD}:/go/src/github.com/ceph/go-ceph:z \
  -v /home/nwatkins/src/ceph/build:/home/nwatkins/src/ceph/build:z \
  -e CEPH_CONF=/home/nwatkins/src/ceph/build/ceph.conf \
  ceph-golang

Run against a vstart.sh cluster without installing Ceph:

export CGO_CPPFLAGS="-I/ceph/src/include"
export CGO_LDFLAGS="-L/ceph/build/lib"
go build
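
With those flags exported, individual packages can also be tested directly against the vstart cluster. Assuming the Ceph checkout lives at /ceph as in the flags above, something like the following should work:

# point librados at the vstart cluster's configuration
export CEPH_CONF=/ceph/build/ceph.conf
# make the freshly built libraries available at run time
export LD_LIBRARY_PATH=/ceph/build/lib
go test -v ./rados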

Contributing

Contributions are welcome and greatly appreciated; every little bit helps. Make code changes via GitHub pull requests:

  • Fork the repo and create a topic branch for every feature/fix. Avoid making changes directly on master branch.
  • All incoming features should be accompanied with tests.
  • Make sure that you run go fmt before submitting a change set. Alternatively, the Makefile has a target for this, so you can call make fmt as well.
  • The integration tests can be run in a docker container; to do so, run:
make test-docker