From patchwork Thu Jan 3 22:43:13 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 1929781
Message-ID: <50E60981.2090801@inktank.com>
Date: Thu, 03 Jan 2013 16:43:13 -0600
From: Alex Elder
To: "ceph-devel@vger.kernel.org"
Subject: [PATCH REPOST 1/2] rbd: encapsulate handling for a single request
References: <50E6094F.9080101@inktank.com>
In-Reply-To: <50E6094F.9080101@inktank.com>

In rbd_rq_fn(), requests are fetched from the block layer and each
request is processed, looping through the request's list of bio's
until they've all been consumed.

Separate the handling for a single request into its own function to
make it a bit easier to see what's going on.

Signed-off-by: Alex Elder
---
 drivers/block/rbd.c | 119 +++++++++++++++++++++++++++------------------------
 1 file changed, 63 insertions(+), 56 deletions(-)
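(Everything between the "---" marker above and the "diff --git" line
below is dropped by git-am, so this orientation note does not affect
the patch.)  Condensed from the diff that follows, the request loop
after this change looks roughly like:

	while ((rq = blk_fetch_request(q))) {
		/* ... validate the request, grab a snap context ... */
		result = rbd_dev_do_request(rq, rbd_dev, snapc, ofs, size, bio);
		ceph_put_snap_context(snapc);
		spin_lock_irq(q->queue_lock);
		if (result < 0)
			__blk_end_request_all(rq, result);
	}

A negative result means nothing was submitted for the request (no
segments, or the rbd_req_coll could not be allocated), so the caller
ends the whole request itself; otherwise each segment reports its
completion through the collection.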
"write" : "read", - size, (unsigned long long) blk_rq_pos(rq) * SECTOR_SIZE); - - num_segs = rbd_get_num_segments(&rbd_dev->header, ofs, size); - if (num_segs <= 0) { - spin_lock_irq(q->queue_lock); - __blk_end_request_all(rq, num_segs); - ceph_put_snap_context(snapc); - continue; - } - coll = rbd_alloc_coll(num_segs); - if (!coll) { - spin_lock_irq(q->queue_lock); - __blk_end_request_all(rq, -ENOMEM); - ceph_put_snap_context(snapc); - continue; - } - - bio_offset = 0; - do { - u64 limit = rbd_segment_length(rbd_dev, ofs, size); - unsigned int chain_size; - struct bio *bio_chain; - - BUG_ON(limit > (u64) UINT_MAX); - chain_size = (unsigned int) limit; - dout("rq->bio->bi_vcnt=%hu\n", rq->bio->bi_vcnt); - - kref_get(&coll->kref); - - /* Pass a cloned bio chain via an osd request */ - - bio_chain = bio_chain_clone_range(&bio, - &bio_offset, chain_size, - GFP_ATOMIC); - if (bio_chain) - (void) rbd_do_op(rq, rbd_dev, snapc, - ofs, chain_size, - bio_chain, coll, cur_seg); - else - rbd_coll_end_req_index(rq, coll, cur_seg, - (s32) -ENOMEM, - chain_size); - size -= chain_size; - ofs += chain_size; - - cur_seg++; - } while (size > 0); - kref_put(&coll->kref, rbd_coll_release); - - spin_lock_irq(q->queue_lock); - + result = rbd_dev_do_request(rq, rbd_dev, snapc, ofs, size, bio); ceph_put_snap_context(snapc); + spin_lock_irq(q->queue_lock); + if (result < 0) + __blk_end_request_all(rq, result); } } diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index 8b79a5b..88b9b2e 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -1583,6 +1583,64 @@ static struct rbd_req_coll *rbd_alloc_coll(int num_reqs) return coll; } +static int rbd_dev_do_request(struct request *rq, + struct rbd_device *rbd_dev, + struct ceph_snap_context *snapc, + u64 ofs, unsigned int size, + struct bio *bio_chain) +{ + int num_segs; + struct rbd_req_coll *coll; + unsigned int bio_offset; + int cur_seg = 0; + + dout("%s 0x%x bytes at 0x%llx\n", + rq_data_dir(rq) == WRITE ? "write" : "read", + size, (unsigned long long) blk_rq_pos(rq) * SECTOR_SIZE); + + num_segs = rbd_get_num_segments(&rbd_dev->header, ofs, size); + if (num_segs <= 0) + return num_segs; + + coll = rbd_alloc_coll(num_segs); + if (!coll) + return -ENOMEM; + + bio_offset = 0; + do { + u64 limit = rbd_segment_length(rbd_dev, ofs, size); + unsigned int clone_size; + struct bio *bio_clone; + + BUG_ON(limit > (u64) UINT_MAX); + clone_size = (unsigned int) limit; + dout("bio_chain->bi_vcnt=%hu\n", bio_chain->bi_vcnt); + + kref_get(&coll->kref); + + /* Pass a cloned bio chain via an osd request */ + + bio_clone = bio_chain_clone_range(&bio_chain, + &bio_offset, clone_size, + GFP_ATOMIC); + if (bio_clone) + (void) rbd_do_op(rq, rbd_dev, snapc, + ofs, clone_size, + bio_clone, coll, cur_seg); + else + rbd_coll_end_req_index(rq, coll, cur_seg, + (s32) -ENOMEM, + clone_size); + size -= clone_size; + ofs += clone_size; + + cur_seg++; + } while (size > 0); + kref_put(&coll->kref, rbd_coll_release); + + return 0; +} + /* * block device queue callback