
go-ceph - Go bindings for Ceph APIs

Installation

go get github.com/ceph/go-ceph

The native RADOS library and development headers are expected to be installed.

On Debian based systems (apt):

libcephfs-dev librbd-dev librados-dev

On RPM based systems (dnf, yum, etc.):

libcephfs-devel librbd-devel librados-devel
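
Once installed, the bindings are imported per sub-package (rados, rbd and cephfs). A minimal sketch that only prints the version of librados the bindings were built against:

package main

import (
    "fmt"

    "github.com/ceph/go-ceph/rados"
)

func main() {
    // report the librados version the rados bindings link against
    major, minor, patch := rados.Version()
    fmt.Printf("librados version: %d.%d.%d\n", major, minor, patch)
}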

go-ceph tries to support different Ceph versions. However, some functions might only be available in recent versions, and others can be deprecated. In order to work with non-current versions of Ceph, it is required to pass build tags on the go command line. A tag with the name of the Ceph release enables or disables certain features of the go-ceph packages and prevents warnings or compile problems. For example, to build against libcephfs/librados/librbd from Mimic, or to run go test against Luminous, use:

go build -tags mimic ....
go test -tags luminous ....

Documentation

Detailed documentation is available at https://pkg.go.dev/github.com/ceph/go-ceph.

Connecting to a cluster

Connect to a Ceph cluster using a configuration file located in the default search paths.

conn, _ := rados.NewConn()
conn.ReadDefaultConfigFile()
conn.Connect()

A connection can be shut down by calling the Shutdown method on the connection object (e.g. conn.Shutdown()). There are also other methods for configuring the connection. Specific configuration options can be set:

conn.SetConfigOption("log_file", "/dev/null")

Command line options can also be passed using the ParseCmdLineArgs method:

args := []string{ "--mon-host", "1.1.1.1" }
err := conn.ParseCmdLineArgs(args)

For other configuration options see the full documentation.
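
Putting these pieces together, a connection with basic error handling and a deferred shutdown might look like the following sketch (the option value shown is only an example):

conn, err := rados.NewConn()
if err != nil {
    panic(err)
}
if err := conn.ReadDefaultConfigFile(); err != nil {
    panic(err)
}
// optionally adjust configuration before connecting
if err := conn.SetConfigOption("log_file", "/dev/null"); err != nil {
    panic(err)
}
if err := conn.Connect(); err != nil {
    panic(err)
}
defer conn.Shutdown()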

Object I/O

Objects in RADOS can be written to and read from through an interface very similar to a standard file I/O interface:

// open a pool handle
ioctx, err := conn.OpenIOContext("mypool")

// write some data
bytesIn := []byte("input data")
err = ioctx.Write("obj", bytesIn, 0)

// read the data back out
bytesOut := make([]byte, len(bytesIn))
_, err = ioctx.Read("obj", bytesOut, 0)

if !bytes.Equal(bytesIn, bytesOut) {
    fmt.Println("Output is not input!")
}
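
When the object and the pool handle are no longer needed they can be cleaned up. A short sketch using the Delete and Destroy methods of the I/O context (error handling abbreviated):

// remove the object from the pool
err = ioctx.Delete("obj")

// release the pool handle once it is no longer needed
ioctx.Destroy()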

Pool maintenance

The list of pools in a cluster can be retrieved using the ListPools method on the connection object. On a new cluster the following code snippet:

pools, _ := conn.ListPools()
fmt.Println(pools)

will produce the output [data metadata rbd], along with any other pools that might exist in your cluster. Pools can also be created and destroyed. The following creates a new, empty pool with default settings.

conn.MakePool("new_pool")

Deleting a pool is also easy. Call DeletePool(name string) on a connection object to delete a pool with the given name. The following will delete the pool named new_pool and remove all of the pool's data.

conn.DeletePool("new_pool")
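
Because pool creation and deletion are administrative operations, it is worth checking their errors and confirming the result. A small sketch (the pool name is only an example):

if err := conn.MakePool("new_pool"); err != nil {
    fmt.Println("failed to create pool:", err)
}

pools, err := conn.ListPools()
if err == nil {
    fmt.Println("pools now:", pools)
}

if err := conn.DeletePool("new_pool"); err != nil {
    fmt.Println("failed to delete pool:", err)
}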

Development

docker run --rm -it --net=host \
  --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \
  -v ${PWD}:/go/src/github.com/ceph/go-ceph:z \
  -v /home/nwatkins/src/ceph/build:/home/nwatkins/src/ceph/build:z \
  -e CEPH_CONF=/home/nwatkins/src/ceph/build/ceph.conf \
  ceph-golang

Run against a vstart.sh cluster without installing Ceph:

export CGO_CPPFLAGS="-I/ceph/src/include"
export CGO_LDFLAGS="-L/ceph/build/lib"
go build

Contributing

Contributions are welcome and greatly appreciated; every little bit helps. Make code changes via GitHub pull requests:

  • Fork the repo and create a topic branch for every feature/fix. Avoid making changes directly on master branch.
  • All incoming features should be accompanied with tests.
  • Make sure that you run go fmt before submitting a change set. Alternatively, the Makefile has a target for this, so you can call make fmt as well.
  • The integration tests can be run in a docker container; to do so, run:
make test-docker

Interactive "Office Hours"

The maintenance team plans to be available regularly for questions, comments, pings, etc. for about an hour twice a week. The current schedule is:

  • 2:00pm EDT (currently 18:00 UTC) Mondays
  • 9:00am EDT (currently 13:00 UTC) Thursdays

We will use the #ceph-devel IRC channel.