Note that, currently, it is only important that this value be accurate
on the current OSD, since we only use it to discard ops sent before the
split. If we get the history from a different OSD in the cluster that
doesn't have an up-to-date value, it doesn't matter: that implies a
primary change, and therefore a client resend.
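For illustration, a rough sketch of that discard, using hypothetical
names (a sketch only, not the actual op-handling code):

    // ops sent before the pool's last forced-resend epoch are dropped;
    // the client will resend them against the post-split mapping
    if (op->get_map_epoch() < pool.info.last_force_op_resend) {
      return discard_op(op);  // hypothetical helper
    }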
Signed-off-by: Sage Weil <sage@redhat.com>
New clients will resend.
Old clients will see a change to last_force_op_resend (now named
last_force_op_resend_preluminous in the latest code) and resend.
We know this because we require that the monitors upgrade to luminous
before the OSDs, and the new mon code sets this field on split.
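A minimal sketch of that mon-side behavior, assuming a hypothetical
helper (the real code is in the pool split path):

    // on split, bump the legacy field so that pre-luminous clients
    // (which do not treat a split as a new interval) force a resend
    void on_pool_split(pg_pool_t& pi, epoch_t e) {
      pi.last_force_op_resend_preluminous = e;
    }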
Signed-off-by: Sage Weil <sage@redhat.com>
There are some useful messages at level 1. They're rare and won't affect
performance, but are helpful to see in the log.
Signed-off-by: Sage Weil <sage@redhat.com>
Removed the logic that skipped reclaim processing conditionally on
hiwat; this was probably meant to be related to a lowat value, which
does not exist.
Having exercised the hiwat reclaim behavior, we noticed that the path
which moves unreachable objects to the LRU could, and probably should,
remove them altogether when q.size exceeds hiwat. Now the maximum
unreachable float is the lane hiwat, for all lanes.
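A minimal sketch of the new disposal behavior, with hypothetical names
(a sketch only, not the actual cohort LRU code):

    // when the lane queue has grown past hiwat, delete an unreachable
    // object outright instead of parking it on the LRU
    void Lane::dispose_unreachable(Object* o) {
      if (q.size() > hiwat) {
        q.remove(o);       // assumption: called under the lane lock
        delete o;          // remove altogether
      } else {
        q.move_to_lru(o);  // previous behavior: float on the LRU
      }
    }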
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
This change includes 3 related changes:
1. add the required lock flags for FHCache updates--this fixes a crash
   bug under concurrent update/lookup
2. omit the refcnt inc/dec on root filehandles in 2 places--the
   root handle is currently not on the LRU list, so it's not
   valid to do so
3. based on observation of LRU behavior during creates/deletes,
   update the (cohort) LRU unref to move objects to the LRU when their
   refcount falls to SENTINEL_REFCNT--this cheaply primes the
   current reclaim() mechanism, so it very significantly improves
   space use (e.g., after deletes) in the absence of scans
   (which is common due to nfs-ganesha caching); see the sketch
   after this list
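A minimal sketch of change 3, with hypothetical names (a sketch only,
not the actual cohort LRU unref code):

    // when an unref drops the refcount to the sentinel, park the object
    // on the LRU so reclaim() can find it without a scan
    void LRU::unref(Object* o) {
      if (--o->refcnt == SENTINEL_REFCNT) {       // refcnt assumed atomic
        Lane& lane = lane_of(o);                  // hypothetical helper
        std::lock_guard<std::mutex> g(lane.mtx);  // per-lane lock
        lane.q.move_to_lru(o);                    // primes reclaim() cheaply
      }
    }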
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
The documentation for using rbd together with OpenStack Havana/Icehouse
states that the parameter libvirt_disk_cachemodes should be added to
the nova.conf file. However, this is the only parameter that has no
legacy name with a 'libvirt_' prefix. (See
https://github.com/openstack/nova/blob/icehouse-eol/nova/virt/libvirt/driver.py#L252
for the configuration option.)
Thus the configured disk_cachemodes were not applied, and the cache
mode defaulted to no caching.
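For reference, the option that actually takes effect looks like this in
nova.conf (value shown only as an example; on Icehouse the libvirt
options live in the [libvirt] section):

    [libvirt]
    disk_cachemodes = "network=writeback"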
Fixes: #17978
Signed-off-by: Michael Eischer <michael.eischer@fau.de>
This was super slow, and the Objecter was incapable of generating the
requests to use it.
To do this properly we should create a new listing op that returns the
set of clones and/or snaps for each object as part of a single listing
result, if/when the need arises.
Signed-off-by: Sage Weil <sage@redhat.com>
This made us wait if snapid != CEPH_NOSNAP and there were any missing
objects at all. The Objecter can't submit such ops, and we're dropping
support for listing at a specific snap anyway.
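Roughly, the removed check had this shape (hypothetical form, based on
the description above):

    // listing at a specific snap waited on *any* missing object
    if (snapid != CEPH_NOSNAP && !missing.empty()) {
      return wait_for_missing(op);  // hypothetical helper
    }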
Signed-off-by: Sage Weil <sage@redhat.com>
Pre-luminous clients do not understand that a split PG forms a new
interval. Make them resend ops to work around this.
Signed-off-by: Sage Weil <sage@redhat.com>
If the client has the new feature bit, use the new field; if it has
only the older feature bit, use the old field.
Note that there is no change to the Objecter: last_force_op_resend is
still the "current" field that it should pay attention to.
Signed-off-by: Sage Weil <sage@redhat.com>
Rename the current last_force_op_resend for legacy clients, and add a new
one that only applies to new clients that have the new
CEPH_FEATURE_OSD_NEW_INTERVAL_ON_SPLIT feature.
Signed-off-by: Sage Weil <sage@redhat.com>
This avoids linking against both libceph-common and libcommon at the
same time; otherwise both of them would be registered as lttng
providers.
Fixes: http://tracker.ceph.com/issues/18838
Signed-off-by: Kefu Chai <kchai@redhat.com>
cephd_rgw_base build currently fails with fastcgi enabled:
--
In file included from /home/david/ceph/src/rgw/rgw_request.h:13:0,
from /home/david/ceph/src/rgw/rgw_main.cc:53:
/home/david/ceph/src/rgw/rgw_fcgi.h:8:21: fatal error: fcgiapp.h: No such file or directory
 ^
--
This is despite the fact that fastcgi was detected and located at
configure time:
build/CMakeCache.txt:FCGI_INCLUDE_DIR:PATH=/usr/include/fastcgi
Fix this by ensuring that the cephd_rgw_base build target correctly uses
FCGI_INCLUDE_DIR.
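A sketch of the shape of the fix, assuming cephd_rgw_base is a regular
CMake target (the exact change may differ):

    target_include_directories(cephd_rgw_base PRIVATE ${FCGI_INCLUDE_DIR})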
Fixes: http://tracker.ceph.com/issues/18918
Signed-off-by: David Disseldorp <ddiss@suse.de>