Commit Graph

29466 Commits

Author SHA1 Message Date
Gary Lowell
dc9a7721d4 Merge branch 'next' of jenkins:ceph/ceph into next 2013-10-30 18:34:42 +00:00
Li Wang
6efd82cc63 ceph: Release resource before return in BackedObject::download()
Close file before return

Signed-off-by: Li Wang <liwang@ubuntukylin.com>
Reviewed-by: Sage Weil <sage@inktank.com>
2013-10-30 09:05:12 -07:00
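The fix in 6efd82cc63 addresses a common pattern: an early error return that skips the fclose(). A minimal sketch of the RAII form of that fix — the download() function and surrounding code here are illustrative assumptions, not Ceph's actual BackedObject API:

```cpp
#include <cassert>
#include <cstdio>
#include <memory>

// RAII wrapper: the FILE* is closed on every return path.
struct FileCloser {
  void operator()(std::FILE* f) const { if (f) std::fclose(f); }
};
using unique_file = std::unique_ptr<std::FILE, FileCloser>;

// Illustrative download(): before this kind of fix, an error path
// returned without closing the file; the RAII handle makes that
// impossible.
int download(const char* path) {
  unique_file f(std::fopen(path, "rb"));
  if (!f)
    return -1;                 // nothing opened, nothing to leak
  char buf[4096];
  while (std::fread(buf, 1, sizeof(buf), f.get()) == sizeof(buf)) {
    // process the chunk...
  }
  if (std::ferror(f.get()))
    return -1;                 // the path that used to leak the FILE*
  return 0;                    // FileCloser runs on both returns
}
```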
Li Wang
e22347df38 ceph: Fix memory leak in chain_listxattr
Free allocated memory before return

Signed-off-by: Li Wang <liwang@ubuntukylin.com>
Reviewed-by: Sage Weil <sage@inktank.com>
2013-10-30 09:02:58 -07:00
Li Wang
905243b245 Fix memory leak in Backtrace::print()
Free already allocated memory if short of memory

Signed-off-by: Li Wang <liwang@ubuntukylin.com>
Reviewed-by: Sage Weil <sage@inktank.com>
2013-10-30 09:02:43 -07:00
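The pattern behind 905243b245: when building an array of individually allocated strings, a mid-loop allocation failure must free the entries already allocated before returning. A sketch under illustrative names (dup_strings is not Ceph's Backtrace code):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Duplicate n strings; on failure, free the partial result and
// return nullptr instead of leaking it.
char** dup_strings(const char* const* src, int n) {
  char** out = static_cast<char**>(calloc(n, sizeof(char*)));
  if (!out)
    return nullptr;
  for (int i = 0; i < n; ++i) {
    out[i] = strdup(src[i]);
    if (!out[i]) {                  // short of memory:
      for (int j = 0; j < i; ++j)   //   free what we already allocated
        free(out[j]);
      free(out);
      return nullptr;
    }
  }
  return out;
}
```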
Gary Lowell
e11c9756af v0.72-rc1 2013-10-30 00:45:10 +00:00
Sage Weil
1e2e4297f8 Revert "ceph-crush-location: new crush location hook"
This reverts commit fc49065d85.

Merged to wrong branch; my bad!
2013-10-29 13:58:32 -07:00
Sage Weil
22ff717688 Revert "upstart, sysvinit: use ceph-crush-location hook"
This reverts commit 111a37efb1.
2013-10-29 13:58:32 -07:00
Loic Dachary
7dd387b0af Merge pull request #779 from ceph/wip-crush-hook
upstart,sysvinit: allow 'osd crush location hook' script to determine osd crush position

Reviewed-by: Loic Dachary <loic@dachary.org>
2013-10-29 12:24:05 -07:00
Sage Weil
111a37efb1 upstart, sysvinit: use ceph-crush-location hook
Instead of hard-coding a check in ceph.conf and some reasonable
defaults, defer this work to ceph-crush-location, and allow users to
specify their own hook with alternative logic.

This can be helpful in a number of cases, like:

 - rack (or other) information included in hostname and easily parsed
   out by a hook
 - multiple types of devices in each host, resulting in 'parallel'
   crush trees (e.g., one for hdd, one for ssd)

Signed-off-by: Sage Weil <sage@inktank.com>
2013-10-29 11:10:32 -07:00
Sage Weil
fc49065d85 ceph-crush-location: new crush location hook
This generalizes the bit of code that builds a key=value pair list to
update an entity's CRUSH location.

Signed-off-by: Sage Weil <sage@inktank.com>
2013-10-29 11:09:52 -07:00
Sage Weil
13d1b9c99b Merge pull request #786 from ceph/wip-6673
mon/PGMonitor: always send pg creations after mapping

Reviewed-by: Joao Eduardo Luis <joao.luis@inktank.com>
2013-10-29 10:16:52 -07:00
Sage Weil
df229e5eff mon/PGMonitor: always send pg creations after mapping
At some point in the dumpling cycle I separated the map stage from the
send stage.  We can send the creates any time we have a non-zero osdmap
epoch, and are in good shape as long as we do the map step after the
osdmap is loaded (hence the post_paxos_update).

Some background:

We originally introduced the map-but-don't send in a2fe0137, at which
point all was well because we only called it on ceph-mon startup.

Later, this turned into post_paxos_update in e635c478, at which point
it was now called by a running monitor, but we didn't add in the
send_pg_creates().  This is where this bug stems from.

This particular path is responsible for the stalled test referenced in
bug #6673.

Backport: dumpling
Signed-off-by: Sage Weil <sage@inktank.com>
2013-10-29 10:10:21 -07:00
Sage Weil
2181b4c946 mon/OSDMonitor: fix signedness warning on poolid
Signed-off-by: Sage Weil <sage@inktank.com>
2013-10-29 08:59:06 -07:00
Samuel Just
7a06a71e0f ReplicatedPG::recover_backfill: update last_backfill to max() when backfill is complete
Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-29 08:26:57 -07:00
athanatos
ad5655beb2 Merge pull request #780 from ceph/wip-6585
Wip 6585

Reviewed-by: Sage Weil <sage@inktank.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>
2013-10-28 21:11:27 -07:00
Sage Weil
4e48dd56a4 osd/ReplicatedPG: use MIN for backfill_pos
Signed-off-by: Sage Weil <sage@inktank.com>
2013-10-28 16:39:09 -07:00
Loic Dachary
e6d983beda Merge pull request #772 from ceph/wip-5612
init-ceph, upstart: make crush update on osd start time out

Reviewed-by: Loic Dachary <loic@dachary.org>
2013-10-28 16:13:34 -07:00
Samuel Just
4139e75d63 ReplicatedPG: recover_backfill: don't prematurely adjust last_backfill
We can't adjust last_backfill to object x until x has been fully
backfilled.  pending_backfill_updates contains all those backfills
started, but which have not yet been reflected in pinfo.last_backfill.
backfills_in_flight contains those backfills which have not yet
completed.  Thus, we can adjust last_backfill to the largest entry
in pending_backfill_updates not in backfills_in_flight.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 16:10:16 -07:00
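The advancement rule 4139e75d63 describes can be sketched with toy types (std::string stands in for hobject_t; this is not Ceph's actual code): last_backfill may advance only through the leading run of pending updates whose backfill is no longer in flight.

```cpp
#include <cassert>
#include <set>
#include <string>

// Walk the sorted pending updates in order; stop at the first entry
// still in flight, since everything past it is not yet durable on
// the backfill peer.
std::string advance_last_backfill(const std::string& last_backfill,
                                  const std::set<std::string>& pending,
                                  const std::set<std::string>& in_flight) {
  std::string result = last_backfill;
  for (const auto& obj : pending) {
    if (in_flight.count(obj))
      break;            // not yet complete: can't advance past this
    result = obj;       // fully backfilled: safe to advance
  }
  return result;
}
```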
Samuel Just
ecddd12b01 ReplicatedPG: add empty stat when we remove an object in recover_backfill
Subsequent updates to that object need to have their stats added
to the backfill info stats atomically with the last_backfill
update.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 16:10:09 -07:00
Samuel Just
9ec35d5ccf ReplicatedPG: replace backfill_pos with last_backfill_started
last_backfill_started reflects what pinfo.last_backfill will be
once all currently outstanding backfills complete.  backfill_pos
was tricky since we couldn't correctly initialize it without
doing the first backfill scan pair.

In recover_backfill, we rescan from last_backfill_started rather
than from backfill_pos.  This ensures that we capture all clones
created between last_backfill_started and what previously had been
backfill_pos without special handling in make_writeable.  The main
downside is that we will tend to "rescan" last_backfill_started.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 16:03:59 -07:00
Samuel Just
8774f03d39 PG::BackfillInfo: introduce trim_to
We'll use this to trim off last_backfill_started since it'll
often be included in rescans.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 16:03:58 -07:00
Samuel Just
46dfd91975 PG::BackfillInterval: use trim() in pop_front()
Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 16:03:58 -07:00
Samuel Just
0a9a2d7b9c ReplicatedPG::prepare_transaction: info.last_backfill is inclusive
Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 16:03:58 -07:00
Sage Weil
5939eaceb0 upstart: fail osd start if crush update fails
If the update for the CRUSH position fails for some reason, do not
start the OSD.

Signed-off-by: Sage Weil <sage@inktank.com>
2013-10-28 15:58:29 -07:00
Sage Weil
177e2ab1ca init-ceph: make crush update on osd start time out
If the monitor is not currently available, this crush update would block
forever, preventing the OSD and (potentially) the rest of the system
from starting up.  Instead, make it time out after 10 seconds and then
abort startup.  This prevents startup of an OSD if we failed to update
the CRUSH position for some reason.

In fact, do not start up the OSD if the CRUSH update fails for any
reason--not just a timeout!

Works-around: #5612
Signed-off-by: Sage Weil <sage@inktank.com>
2013-10-28 15:58:29 -07:00
Sage Weil
ac8dcdbeed Merge pull request #778 from ceph/wip-6621
radosgw-admin: accept negative values for quota params

Reviewed-by: Sage Weil <sage@inktank.com>
2013-10-28 14:28:25 -07:00
Yehuda Sadeh
d5d36d0baa radosgw-admin: accept negative values for quota params
and document that in the usage output.

Fixes: #6621

Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
2013-10-28 14:15:43 -07:00
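A sketch of the convention d5d36d0baa documents: a negative quota parameter is accepted and treated as "no limit". The helper below is an illustration of that convention, not radosgw-admin's actual parsing code.

```cpp
#include <cassert>
#include <cstdint>

// Returns true only if the quota value imposes a limit.
bool quota_limited(int64_t v) { return v >= 0; }

// A negative max never trips the quota check.
bool over_quota(int64_t used, int64_t max) {
  if (!quota_limited(max))
    return false;       // negative max: unlimited
  return used > max;
}
```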
athanatos
7cbfdbf38d Merge pull request #760 from ceph/wip-6585
Wip 6585

Reviewed-by: Sage Weil <sage@inktank.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
2013-10-28 13:50:34 -07:00
Samuel Just
8db03ed027 ReplicatedBackend: don't hold ObjectContexts in pull completion callback
We need to flush the sequencer to ensure that all Contexts which hold
ObjectContextRefs have been run or deleted.
C_ReplicatedBackend_OnPullComplete, however, gets queued in a second
work queue in order to avoid performing expensive push related reads
in the FileStore finisher.

Rather than keep the object contexts around, we instead put off
removing the object from the pulling map until the callback fires
and read the object context out of the pulling map.  This
way the ObjectContextRef will be cleaned up along with the rest
of the pulling map in on_change.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:35:17 -07:00
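The lifetime rule in 8db03ed027, sketched with illustrative types: the completion callback holds only the object key, while the ObjectContextRef stays in the pulling map until the callback fires, so clearing the map in on_change() releases every outstanding ref.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

struct ObjectContext { /* ... */ };
using ObjectContextRef = std::shared_ptr<ObjectContext>;

struct Pulling {
  std::map<std::string, ObjectContextRef> pulling;

  // Completion callback: look the obc up in the map, then erase the
  // entry; the callback itself never owns the ref.
  bool on_pull_complete(const std::string& obj) {
    auto it = pulling.find(obj);
    if (it == pulling.end())
      return false;       // already cleaned up by on_change()
    ObjectContextRef obc = it->second;
    pulling.erase(it);    // ref dropped here, with the map entry
    return true;
  }

  void on_change() { pulling.clear(); }  // releases all outstanding refs
};
```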
Samuel Just
5a416dab6e ReplicatedPG: put repops even in TrimObjects
Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:35:17 -07:00
Samuel Just
420182a1e8 ReplicatedPG: improved on_flushed error output
Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:35:17 -07:00
Samuel Just
ce33892271 PG: call on_flushed on FlushEvt
Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:35:10 -07:00
Samuel Just
6f975e35a1 PG,ReplicatedPG: remove the waiting_for_backfill_peer mechanism
See previous patch.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:34:17 -07:00
Samuel Just
3d0d69fed0 ReplicatedPG: have make_writeable adjust backfill_pos
If we are writing to backfill_pos and create a clone, we end
up failing to send the transaction creating the clone to the
backfill peer.  This is fine as long as we end up backfilling
the clone.  To that end, we simply add the clone to
backfill_info and adjust backfill_pos accordingly.  This is less
brittle than the waiting_for_backfill_pos mechanism since it
works even if we wait between that check and issuing the repop,
which can happen for copy_from.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:34:16 -07:00
Samuel Just
3de32bd368 ReplicatedBackend: fix failed push error output
Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:34:16 -07:00
Samuel Just
807dde4814 ReplicatedPG,osd_types: move rw tracking from its own map to ObjectContext
We also modify recovering to hold a reference to the recovering obc
in order to ensure that our backfill_read_lock doesn't outlive the
obc.

ReplicatedPG::op_applied no longer clears repop->obc since we need
it to live until the op is finally cleaned up.  This is fine since
repop->obc is now an ObjectContextRef and can clean itself up.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:32:56 -07:00
Samuel Just
2cadc231ae osd_types,OpRequest: move osd_req_id into OpRequest
This way I can have OpRequest included from osd_types.h.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:31:08 -07:00
Samuel Just
9b003b327e OpRequest: move method implementations into cc
I need to remove the osd_types.h include.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:31:08 -07:00
Samuel Just
c4442d70ed ReplicatedPG: reset new_obs and new_snapset in execute_ctx
This way, if execute_ctx is rerun on the same OpContext, we
won't erroneously reuse a stale snapset/object_info.

Signed-off-by: Samuel Just <sam.just@inktank.com>
2013-10-28 13:30:42 -07:00
huangjun
8a62bf1c04 fix the bug where setting pgp_num=-1 using "ceph osd pool set data|metadata|rbd -1"
sets the pgp_num to a huge number.

Signed-off-by: huangjun <hjwsm1989@gmail.com>
(cherry picked from commit bf198e673f)
2013-10-28 13:29:50 -07:00
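The bug 8a62bf1c04 fixes is a classic signedness trap: storing a parsed -1 into an unsigned pg count wraps it to a huge value. A minimal illustration with the validation that rejects it (names are illustrative, not the OSDMonitor code):

```cpp
#include <cassert>
#include <cstdint>

// Validate before narrowing: reject non-positive requests instead of
// letting -1 wrap around to 2^32 - 1 in the unsigned field.
bool set_pgp_num(int64_t requested, uint32_t* pgp_num) {
  if (requested <= 0)
    return false;
  *pgp_num = static_cast<uint32_t>(requested);
  return true;
}
```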
Greg Farnum
5eb836f23a ReplicatedPG: take and drop read locks when doing backfill
All our interfaces are in place, so now we can actually take and
drop the locks.
1) Take locks in ReplicatedPG::recover_backfill. This is the entry
into the backfill code path, and covers all objects which are
added to backfills_in_flight (via prep_backfill_object_push()). If we
can't get the lock right away, we stop the backfill movement there
until we can do so.
2) Drop the locks in ReplicatedPG::on_peer_recover(), called when the
push is completed.
2b) Further drop the locks on all backfills_in_flight objects in
_clear_recovery_state(), for when we cancel peering.

Signed-off-by: Greg Farnum <greg@inktank.com>
2013-10-27 10:40:32 -07:00
Greg Farnum
058c74ab23 PG: switch the start_recovery_ops interface to specify work to do as a param
We previously inferred whether there was useful work to be done
by looking at the number of ops started, but with the upcoming
introduction of the rw_manager read locking on backfill, we could
start no ops while still having work to do. Switch around the
interfaces to specify these as separate pieces of information.

Signed-off-by: Greg Farnum <greg@inktank.com>
2013-10-27 10:40:32 -07:00
Greg Farnum
87daef76cd ReplicatedPG: implement the RWTracker mechanisms for backfill read locking
We want backfill to take read locks on the objects it's pushing. Add
a get_backfill_read(hobject_t) function, a corresponding drop_backfill_read(),
and a backfill_waiting_on_read member in ObjState. Check that member when
getting a write lock, and in put_write(). Tell callers to requeue the recovery
if necessary, and clean up the backfill block when its read lock is dropped.

Signed-off-by: Greg Farnum <greg@inktank.com>
2013-10-27 10:40:32 -07:00
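A toy model of the locking 87daef76cd describes: backfill takes a read lock per object it pushes, a writer arriving meanwhile must wait, and dropping the read lock is the point where blocked writers can requeue. This only models the counts, not Ceph's actual RWTracker.

```cpp
#include <cassert>
#include <map>
#include <string>

struct BackfillLockTracker {
  std::map<std::string, int> readers;  // object -> backfill read holds

  void get_backfill_read(const std::string& obj) { ++readers[obj]; }

  // A write may proceed only when backfill holds no read lock on obj.
  bool get_write(const std::string& obj) const {
    auto it = readers.find(obj);
    return it == readers.end() || it->second == 0;
  }

  void drop_backfill_read(const std::string& obj) {
    if (--readers[obj] == 0)
      readers.erase(obj);  // lock dropped: blocked writers can requeue
  }
};
```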
Greg Farnum
96ed5b8c38 ReplicatedPG: separate RWTracker's waitlist from getting locks
This way we can try and get locks which aren't associated with
an OpRequest.

Signed-off-by: Greg Farnum <greg@inktank.com>
2013-10-27 10:40:32 -07:00
Greg Farnum
f0f67507dd common: add an hobject_t::is_min() function
Signed-off-by: Greg Farnum <greg@inktank.com>
2013-10-27 10:40:32 -07:00
Sage Weil
c2cd460950 Merge pull request #765 from ceph/wip-6635
mon: OSDMonitor: Make 'osd pool rename' idempotent

Reviewed-by: Sage Weil <sage@inktank.com>
Reviewed-by: Joao Eduardo Luis <joao.luis@inktank.com>
2013-10-25 17:53:30 -07:00
Sage Weil
8282e24dd6 mon/OSDMonitor: make racing dup pool rename behave
If we get dup pool rename requests that are racing, make sure the second
one comes back with 'success' if the rename entry already exists in the
pending_inc map.

Signed-off-by: Sage Weil <sage@inktank.com>
2013-10-25 17:45:06 -07:00
Joao Eduardo Luis
c14c98d3f0 mon: OSDMonitor: Make 'osd pool rename' idempotent
'ceph osd pool rename' takes two arguments: source pool and dest pool.
If by chance 'source pool' does not exist and 'destination pool' does,
then, in order to assure it's idempotent, we want to assume that if
'source pool' no longer exists is because it was already renamed.

However, while we will return success in such case, we want to make sure
to let the user know that we made such assumption.  Mostly to warn the
user of such a thing in case of a mistake on the user's part (say, the
user didn't notice that the source pool didn't exist, while the dest did),
but also to make sure that the user is not surprised by the command
returning success if the user expected an ENOENT or EEXIST.

Fixes: #6635

Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
2013-10-26 01:28:10 +01:00
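The idempotency rule in c14c98d3f0, sketched with a plain set of pool names (the real monitor works on osdmap state and also warns the user): if the source pool is gone but the destination exists, assume the rename already happened and report success.

```cpp
#include <cassert>
#include <set>
#include <string>

enum class RenameResult { Renamed, AlreadyDone, NoSuchPool };

RenameResult rename_pool(std::set<std::string>& pools,
                         const std::string& src, const std::string& dst) {
  if (pools.count(src)) {
    pools.erase(src);
    pools.insert(dst);
    return RenameResult::Renamed;
  }
  if (pools.count(dst))
    return RenameResult::AlreadyDone;  // success; warn upstream
  return RenameResult::NoSuchPool;     // genuine ENOENT
}
```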
Gregory Farnum
0f1fed6fe7 Merge pull request #769 from ceph/wip-copy-get
With this branch we make copy-get significantly easier to extend by applying our standard encode/decode stuff to it, instead of doing an inline encode-onto-the-payload. We also add some infrastructure for dealing with completion of RepGathers.

Reviewed-by: Sage Weil <sage@inktank.com>
2013-10-25 13:57:21 -07:00
Greg Farnum
aea985c142 Objecter: expose the copy-get()'ed object's category
In the OSD, store the category in the CopyOp using this.

Signed-off-by: Greg Farnum <greg@inktank.com>
2013-10-25 13:52:57 -07:00