Added a ceph-libs-compat package in accordance with Fedora packaging
guidelines [1], to handle the recent package split more gracefully.
In Fedora this is necessary because there are already other packages
depending on ceph-libs, that need to be adjusted to depend on the new
split packages instead. In the meantime, ceph-libs-compat prevents
breakage.
[1] http://fedoraproject.org/wiki/Upgrade_paths_%E2%80%94_renaming_or_splitting_packages
Signed-off-by: Erik Logtenberg <erik@logtenberg.eu>
Fixes: #9206
Backport: firefly
Certain web servers filter out underscores in header field names.
Convert them into dashes.
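A minimal sketch of the conversion (normalize_header_name is a
hypothetical helper, not the actual rgw code):

  #include <algorithm>
  #include <iostream>
  #include <string>

  // Replace every underscore in a header field name with a dash.
  static std::string normalize_header_name(std::string name) {
    std::replace(name.begin(), name.end(), '_', '-');
    return name;
  }

  int main() {
    // prints "X-Object-Meta-Foo"
    std::cout << normalize_header_name("X_Object_Meta_Foo") << std::endl;
    return 0;
  }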
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
Instead of dumping everything to the same log file, allow a user to
specify per-channel log files, levels and syslog facilities.
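A per-channel lookup could be organized as sketched below (hypothetical
structures, not the actual logging code):

  #include <map>
  #include <string>

  // Per-channel logging options, keyed by channel name ("cluster",
  // "audit", ...); unknown channels fall back to the defaults.
  struct ChannelInfo {
    std::string log_file;
    std::string syslog_facility;
    std::string log_level;
  };

  struct ChannelMap {
    ChannelInfo defaults;
    std::map<std::string, ChannelInfo> channels;

    const ChannelInfo &get(const std::string &channel) const {
      auto it = channels.find(channel);
      return it != channels.end() ? it->second : defaults;
    }
  };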
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Instead of allowing only one LogClient, we will now allow any daemon to
have any number of LogClients. They may either all log to the default
facility and level, or they may have their facility and level specified
upon creation (via a new constructor).
This patch also changes 'handle_log_ack' in such a way that the LogClient
will handle all acks bearing the LogClient's facility, and will
otherwise simply ignore them. The function will return true whenever the
message has been handled and false if that was not the case. The
responsibility of deciding whether the message should be passed on to
other LogClients, and when it is to be released, falls on the caller.
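A minimal sketch of the resulting contract, with simplified stand-ins
for the real message types:

  #include <memory>
  #include <string>
  #include <vector>

  // Simplified stand-in for MLogAck; the real message carries more state.
  struct MLogAck { std::string facility; };

  struct LogClient {
    std::string facility;

    // Handle only acks bearing our facility; return false so the caller
    // can offer the message to another LogClient.
    bool handle_log_ack(const MLogAck *m) {
      if (m->facility != facility)
        return false;
      // ... match the ack against our pending log entries ...
      return true;
    }
  };

  // The caller owns the message and decides when to release it.
  void dispatch_log_ack(std::vector<LogClient> &clients,
                        std::unique_ptr<MLogAck> m) {
    for (auto &client : clients)
      if (client.handle_log_ack(m.get()))
        break;    // handled; no other LogClient needs to see it
    // the message is released here, once every interested client had
    // its chance
  }

  int main() {
    std::vector<LogClient> clients = {{"cluster"}, {"audit"}};
    dispatch_log_ack(clients, std::make_unique<MLogAck>(MLogAck{"audit"}));
    return 0;
  }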
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
'get_str_map()' used to handle both JSON and plain text: it would first
try parsing the map as JSON and then fall back to plain text if that
failed. Although useful, this posed a big issue when we attempted to
parse values that we knew to be plain text but that had some meaning in
JSON -- e.g., 'false' or 'true'. In such cases the JSON parser would
spit out an error, stating it had been able to parse the JSON but did
not expect the type, which in fairness is acceptable.
In its stead we now have two functions: 'get_str_map()' will only handle
plain text, whereas 'get_json_str_map()' will keep the previous
behavior, attempting to parse a JSON string and falling back to
'get_str_map()' should it fail.
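A minimal sketch of the split (the JSON step is stubbed out here; the
real code uses Ceph's JSON parser):

  #include <cstddef>
  #include <map>
  #include <sstream>
  #include <string>

  using str_map_t = std::map<std::string, std::string>;

  // Plain text only: parse "key=value key2=value2 ..." pairs.
  int get_str_map(const std::string &str, str_map_t *out) {
    std::istringstream ss(str);
    std::string kv;
    while (ss >> kv) {
      size_t eq = kv.find('=');
      if (eq == std::string::npos)
        (*out)[kv] = "";    // bare key, empty value
      else
        (*out)[kv.substr(0, eq)] = kv.substr(eq + 1);
    }
    return 0;
  }

  // Stand-in for a real JSON parser: reject anything that does not even
  // look like a JSON object.
  static int parse_json_object(const std::string &str, str_map_t *out) {
    if (str.empty() || str.front() != '{')
      return -1;
    // ... a real implementation would fill *out from the parsed object ...
    return 0;
  }

  // Previous behavior: try JSON first, fall back to plain text.
  int get_json_str_map(const std::string &str, str_map_t *out) {
    if (parse_json_object(str, out) == 0)
      return 0;
    return get_str_map(str, out);
  }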
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Fixes: #9201
Backport: firefly
We sometimes need to call RGWPutObjProcessor::handle_data() again,
so that we send the pending data. However, we failed to clear the buffer
that was already sent, so it was resent. This triggers when using
non-default pool alignments.
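A minimal sketch of the failure mode and the fix, with std::string
standing in for Ceph's bufferlist:

  #include <cstddef>
  #include <iostream>
  #include <string>

  struct PutObjProcessor {
    std::string pending;    // data received but not yet written out

    void write_chunk(const std::string &chunk) {
      std::cout << "writing " << chunk.size() << " bytes\n";
    }

    // Called repeatedly; may be called again with empty input just to
    // push out pending data.
    void handle_data(const std::string &in, size_t align) {
      pending += in;
      while (pending.size() >= align) {
        write_chunk(pending.substr(0, align));
        pending.erase(0, align);    // the fix: drop what was already sent
      }
    }
  };

  int main() {
    PutObjProcessor p;
    p.handle_data(std::string(10, 'x'), 4);    // writes two 4-byte chunks
    p.handle_data("", 2);                      // flushes the remaining 2 bytes
    return 0;
  }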
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
Both methods obtain values for keys from a given map. The main
distinction is that one method will return a default value if the key is
not present and a default value was specified (i.e., not NULL), while
the other method will return the value of a fallback key if the key is
not present.
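A minimal sketch of the two lookups (names and signatures are
illustrative):

  #include <map>
  #include <string>

  using str_map_t = std::map<std::string, std::string>;

  // Return the value for 'key'; if the key is absent and 'def_val' was
  // specified (i.e., not NULL), return the default instead.
  std::string get_str_map_value(const str_map_t &m, const std::string &key,
                                const std::string *def_val) {
    auto it = m.find(key);
    if (it != m.end())
      return it->second;
    return def_val ? *def_val : std::string();
  }

  // Return the value for 'key'; if the key is absent, return the value
  // of 'fallback_key' instead.
  std::string get_str_map_key(const str_map_t &m, const std::string &key,
                              const std::string &fallback_key) {
    auto it = m.find(key);
    if (it == m.end())
      it = m.find(fallback_key);
    return it != m.end() ? it->second : std::string();
  }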
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Caller decides whether to release the message's reference. It may want
to use the same message on another LogClient, if we decide not to handle
it.
Currently we always handle the message and thus always return true. This
will no longer be the case later in the patch series.
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
We now introduce the concept of 'channel', analogous to syslog
facilities, for log entries. This will, shortly, allow a LogClient
to send messages to more than just the default syslog facility and log
file, allow multiple LogClients, and provide a way to associate a
LogEntry with its rightful owner.
We also add the same field to the MLogAck message so that a LogClient
waiting on a given ack is able to recognize the MLogAck as its own.
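A minimal sketch of the association (simplified stand-ins for LogEntry
and MLogAck):

  #include <cstdint>
  #include <string>

  // Both the log entry and its ack now carry the channel name, e.g.
  // "cluster" or "audit".
  struct LogEntry { std::string channel; std::string msg; };
  struct MLogAck  { std::string channel; uint64_t last_seq; };

  struct LogClient {
    std::string channel;

    // Only consume acks stamped with our own channel; acks for other
    // channels belong to some other LogClient.
    bool is_mine(const MLogAck &ack) const {
      return ack.channel == channel;
    }
  };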
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
This commit adds:
- a section for radosgw
- a fix for the replica count value (the default is 3 since firefly)
- more OSD options for performance and CRUSH
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
If we get more than one reservation rejection we should ignore them; when
we got the first we had already sent out cancellations. More importantly, we
should not crash.
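A minimal sketch of the guard (illustrative, not the actual reserver
code):

  struct Reservation {
    bool canceled = false;

    void send_cancellations() { /* tell the other replicas to back off */ }

    void handle_reject() {
      if (canceled)
        return;    // later rejections: cancellations already went out
      canceled = true;
      send_cancellations();
    }
  };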
Fixes: #8863
Backport: firefly, dumpling
Signed-off-by: Sage Weil <sage@redhat.com>
ceph_mon: check for existing mon store before opening db
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Pavan Rallabhandi <pavanrallabhandis@gmail.com>
Outputs a diff between the current config and what the daemon believes
to be its default config.
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
It's mildly annoying to try to figure out what has changed in a running
system's config options while having to rely on whatever is set in
ceph.conf and the admin's memory of what has been injected.
With this we can simply ask the daemon for the diff between what would be
its default and what is its current config.
The current form will, however, output extraneous information that was
not directly supplied by the user, such as 'host', 'fsid' and
'daemonize', as well as defaults we may rewrite ourselves (leveldb
tunables on the monitor, for instance). Nonetheless, it's way better
than the alternative and, considering it should be used solely for
debug purposes, I think we can get away with it.
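A minimal sketch of the diff itself, treating both configs as plain
maps:

  #include <iostream>
  #include <map>
  #include <string>

  using config_t = std::map<std::string, std::string>;

  // Report every option whose current value differs from the default.
  config_t config_diff(const config_t &defaults, const config_t &current) {
    config_t diff;
    for (const auto &kv : current) {
      auto it = defaults.find(kv.first);
      if (it == defaults.end() || it->second != kv.second)
        diff[kv.first] = kv.second;
    }
    return diff;
  }

  int main() {
    config_t defaults = {{"debug_ms", "0/5"}, {"mon_data", ""}};
    config_t current  = {{"debug_ms", "1/5"}, {"mon_data", ""}};
    for (const auto &kv : config_diff(defaults, current))
      std::cout << kv.first << " = " << kv.second << "\n";  // debug_ms = 1/5
    return 0;
  }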
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Suppose we start with the following in the cache pool:
30:[29,21,20,15,10,4]:[22(21), 15(15,10), 4(4)]+head
The object doesn't exist at 29 or 20.
First, we flush 4 leaving the backing pool with:
3:[]+head
Then, we begin to flush 15 with a delete with snapc 4:[4] leaving the
backing pool with:
4:[4]:[4(4)]
Then, we finish flushing 15 with snapc 9:[4], leaving the backing
pool with:
9:[4]:[4(4)]+head
Next, snaps 10 and 15 are removed causing clone 10 to be removed leaving
the cache with:
30:[29,21,20,4]:[22(21),4(4)]+head
We next begin to flush 22 by sending a delete with snapc 4:[4] since
prev_snapc is 4 <---------- here is the bug
The backing pool ignores this request since 4 < 9 (ORDERSNAP) leaving it
with:
9:[4]:[4(4)]
Then, we complete flushing 22 with snapc 19:[4] leaving the backing pool
with:
19:[4]:[4(4)]+head
Then, we begin to flush head by deleting with snapc 22:[21,20,4] leaving
the backing pool with:
22:[21,20,4]:[22(21,20), 4(4)]
Finally, we flush head leaving the backing pool with:
30:[29,21,20,4]:[22(21*,20*),4(4)]+head
When we go to flush clone 22, all we know is that 22 is dirty, has snaps
[21], and 4 is clean. As part of flushing 22, we need to do two things:
1) Ensure that the current head is cloned as cloneid 4 with snaps [4] by
sending a delete at snapc 4:[4].
2) Flush the data at snap sequence < 21 by sending a copyfrom with snapc
20:[20,4].
Unfortunately, it is possible that 1, 1&2, or 1 and part of the flush
process for some other now non-existent clone have already been
performed. Because of that, between 1) and 2), we need to send
a second delete ensuring that the object does not exist at 20.
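A minimal sketch of the resulting sequence for the example above (the
snap contexts shown are illustrative, not the actual flush code):

  #include <cstdio>
  #include <vector>

  // Illustrative snap context: (seq, [snaps]).
  struct snapc_t { unsigned seq; std::vector<unsigned> snaps; };

  static void send_delete(const snapc_t &s)   { std::printf("delete, seq=%u\n", s.seq); }
  static void send_copyfrom(const snapc_t &s) { std::printf("copyfrom, seq=%u\n", s.seq); }

  int main() {
    // 1) ensure the current head is cloned as clone 4 with snaps [4]
    send_delete({4, {4}});
    // the fix: a second delete between 1) and 2), guaranteeing the object
    // does not exist at 20 regardless of earlier, possibly partial, flushes
    send_delete({20, {4}});
    // 2) flush the data at snap sequence < 21
    send_copyfrom({20, {20, 4}});
    return 0;
  }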
Fixes: #9054
Backport: firefly
Signed-off-by: Samuel Just <sam.just@inktank.com>