go-ceph - Go bindings for Ceph APIs


Installation

go get github.com/ceph/go-ceph

The native Ceph libraries (librados, librbd, libcephfs) and their development headers are expected to be installed.

On Debian-based systems (apt):

libcephfs-dev librbd-dev librados-dev

On RPM-based systems (dnf, yum, etc.):

libcephfs-devel librbd-devel librados-devel
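As an illustration, the packages above can be installed with the distribution's package manager (root privileges via sudo are assumed here):

```shell
# Debian/Ubuntu: install the Ceph development headers and libraries
sudo apt-get install -y libcephfs-dev librbd-dev librados-dev

# Fedora/CentOS and other RPM-based systems
sudo dnf install -y libcephfs-devel librbd-devel librados-devel
```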

go-ceph tries to support different Ceph versions. However, some functions might only be available in recent versions, and others may be deprecated. In order to work with non-current versions of Ceph, build tags must be passed on the go command line. A tag with the name of a Ceph release will enable/disable certain features of the go-ceph packages, and prevent warnings or compile problems. For example, to build against libcephfs/librados/librbd from Mimic, or to run go test against Luminous, use:

go build -tags mimic ...
go test -tags luminous ...

Documentation

Detailed documentation is available at https://pkg.go.dev/github.com/ceph/go-ceph.

Connecting to a cluster

Connect to a Ceph cluster using a configuration file located in the default search paths.

conn, _ := rados.NewConn()
conn.ReadDefaultConfigFile()
conn.Connect()

A connection can be shut down by calling the Shutdown method on the connection object (e.g. conn.Shutdown()). There are also other methods for configuring the connection. Specific configuration options can be set:

conn.SetConfigOption("log_file", "/dev/null")

Command line options can also be applied using the ParseCmdLineArgs method.

args := []string{ "--mon-host", "1.1.1.1" }
err := conn.ParseCmdLineArgs(args)

For other configuration options see the full documentation.
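Putting the pieces above together, a connection sequence with explicit error handling might look like the following sketch. It assumes a reachable cluster and a ceph.conf in the default search paths; it is an illustration, not the only valid pattern:

```go
package main

import (
	"fmt"
	"log"

	"github.com/ceph/go-ceph/rados"
)

func main() {
	conn, err := rados.NewConn()
	if err != nil {
		log.Fatalf("failed to create connection object: %v", err)
	}
	// Read configuration from the default search paths (e.g. /etc/ceph/ceph.conf).
	if err := conn.ReadDefaultConfigFile(); err != nil {
		log.Fatalf("failed to read config: %v", err)
	}
	if err := conn.Connect(); err != nil {
		log.Fatalf("failed to connect to cluster: %v", err)
	}
	// Release the connection when done.
	defer conn.Shutdown()

	fmt.Println("connected to cluster")
}
```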

Object I/O

Objects in RADOS can be written to and read from through an interface very similar to a standard file I/O interface:

// open a pool handle
ioctx, err := conn.OpenIOContext("mypool")

// write some data
bytesIn := []byte("input data")
err = ioctx.Write("obj", bytesIn, 0)

// read the data back out
bytesOut := make([]byte, len(bytesIn))
_, err = ioctx.Read("obj", bytesOut, 0)

if !bytes.Equal(bytesIn, bytesOut) {
    fmt.Println("Output is not input!")
}

Pool maintenance

The list of pools in a cluster can be retrieved using the ListPools method on the connection object. On a new cluster the following code snippet:

pools, _ := conn.ListPools()
fmt.Println(pools)

will produce the output [data metadata rbd], along with any other pools that might exist in your cluster. Pools can also be created and destroyed. The following creates a new, empty pool with default settings.

conn.MakePool("new_pool")

Deleting a pool is also easy. Call DeletePool(name string) on a connection object to delete a pool with the given name. The following will delete the pool named new_pool and remove all of the pool's data.

conn.DeletePool("new_pool")
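The pool operations above can be combined into a single error-checked sequence. This is a sketch: the function name demoPoolLifecycle is hypothetical, and conn must already be connected (see "Connecting to a cluster"):

```go
import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
)

// demoPoolLifecycle creates a pool, shows it in the pool listing, then
// removes it again, along with all of its data.
func demoPoolLifecycle(conn *rados.Conn) error {
	if err := conn.MakePool("new_pool"); err != nil {
		return fmt.Errorf("create pool: %w", err)
	}
	pools, err := conn.ListPools()
	if err != nil {
		return fmt.Errorf("list pools: %w", err)
	}
	fmt.Println(pools) // the listing should now include "new_pool"
	return conn.DeletePool("new_pool")
}
```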

Development

docker run --rm -it --net=host \
  --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \
  -v ${PWD}:/go/src/github.com/ceph/go-ceph:z \
  -v /home/nwatkins/src/ceph/build:/home/nwatkins/src/ceph/build:z \
  -e CEPH_CONF=/home/nwatkins/src/ceph/build/ceph.conf \
  ceph-golang

Run against a vstart.sh cluster without installing Ceph:

export CGO_CPPFLAGS="-I/ceph/src/include"
export CGO_LDFLAGS="-L/ceph/build/lib"
go build

Contributing

Contributions are welcome and greatly appreciated; every little bit helps. Make code changes via GitHub pull requests:

  • Fork the repo and create a topic branch for every feature/fix. Avoid making changes directly on master branch.
  • All incoming features should be accompanied with tests.
  • Make sure that you run go fmt before submitting a change set. Alternatively, the Makefile has a rule for this, so you can run make fmt as well.
  • The integration tests can be run in a docker container. To do this, run:
make test-docker

Interactive "Office Hours"

The maintenance team plans to be available regularly for questions, comments, pings, etc for about an hour twice a week. The current schedule is:

  • 2:00pm EDT (currently 18:00 UTC) Mondays
  • 9:00am EDT (currently 13:00 UTC) Thursdays

We will use the #ceph-devel IRC channel.