From patchwork Thu Jan 3 22:43:28 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 1929791
Message-ID: <50E60990.90109@inktank.com>
Date: Thu, 03 Jan 2013 16:43:28 -0600
From: Alex Elder
To: "ceph-devel@vger.kernel.org"
Subject: [PATCH REPOST 2/2] rbd: a little more cleanup of rbd_rq_fn()
References: <50E6094F.9080101@inktank.com>
In-Reply-To: <50E6094F.9080101@inktank.com>
X-Mailing-List: ceph-devel@vger.kernel.org

Now that a big hunk in the middle of rbd_rq_fn() has been moved into
its own routine, we can simplify it a little more.

Signed-off-by: Alex Elder
Reviewed-by: Josh Durgin
---
 drivers/block/rbd.c | 50 +++++++++++++++++++++++---------------------------
 1 file changed, 23 insertions(+), 27 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 88b9b2e..0a35c34 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -1647,53 +1647,49 @@ static int rbd_dev_do_request(struct request *rq,
 static void rbd_rq_fn(struct request_queue *q)
 {
 	struct rbd_device *rbd_dev = q->queuedata;
+	bool read_only = rbd_dev->mapping.read_only;
 	struct request *rq;
 
 	while ((rq = blk_fetch_request(q))) {
-		struct bio *bio;
-		bool do_write;
-		unsigned int size;
-		u64 ofs;
-		struct ceph_snap_context *snapc;
+		struct ceph_snap_context *snapc = NULL;
 		int result;
 
 		dout("fetched request\n");
 
-		/* filter out block requests we don't understand */
+		/* Filter out block requests we don't understand */
+
 		if ((rq->cmd_type != REQ_TYPE_FS)) {
 			__blk_end_request_all(rq, 0);
 			continue;
 		}
+		spin_unlock_irq(q->queue_lock);
 
-		/* deduce our operation (read, write) */
-		do_write = (rq_data_dir(rq) == WRITE);
-		if (do_write && rbd_dev->mapping.read_only) {
-			__blk_end_request_all(rq, -EROFS);
-			continue;
-		}
+		/* Stop writes to a read-only device */
 
-		spin_unlock_irq(q->queue_lock);
+		result = -EROFS;
+		if (read_only && rq_data_dir(rq) == WRITE)
+			goto out_end_request;
+
+		/* Grab a reference to the snapshot context */
 
 		down_read(&rbd_dev->header_rwsem);
+		if (rbd_dev->exists) {
+			snapc = ceph_get_snap_context(rbd_dev->header.snapc);
+			rbd_assert(snapc != NULL);
+		}
+		up_read(&rbd_dev->header_rwsem);
 
-		if (!rbd_dev->exists) {
+		if (!snapc) {
 			rbd_assert(rbd_dev->spec->snap_id != CEPH_NOSNAP);
-			up_read(&rbd_dev->header_rwsem);
 			dout("request for non-existent snapshot");
-			spin_lock_irq(q->queue_lock);
-			__blk_end_request_all(rq, -ENXIO);
-			continue;
+			result = -ENXIO;
+			goto out_end_request;
 		}
 
-		snapc = ceph_get_snap_context(rbd_dev->header.snapc);
-
-		up_read(&rbd_dev->header_rwsem);
-
-		size = blk_rq_bytes(rq);
-		ofs = blk_rq_pos(rq) * SECTOR_SIZE;
-		bio = rq->bio;
-
-		result = rbd_dev_do_request(rq, rbd_dev, snapc, ofs, size, bio);
+		result = rbd_dev_do_request(rq, rbd_dev, snapc,
+					blk_rq_pos(rq) * SECTOR_SIZE,
+					blk_rq_bytes(rq), rq->bio);
+out_end_request:
 		ceph_put_snap_context(snapc);
 		spin_lock_irq(q->queue_lock);
 		if (result < 0)
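For readers outside the kernel tree, the error-path shape this patch introduces — set `result`, jump to a single `out_end_request` label, and release the snapshot context reference there — can be sketched in isolation. Everything below is a hypothetical stand-in (the types, `get_snap_context()`/`put_snap_context()`, and the error values are illustrative, not the rbd code); the key property the shared label depends on is that dropping a NULL context is a no-op, so the write-to-read-only path can jump past the acquisition entirely:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for ceph_snap_context; only the control-flow
 * shape of the patch matters here. */
struct snap_context {
	int refcount;
};

static struct snap_context global_snapc = { 0 };

/* Take a reference, or return NULL when no context exists (the
 * analogue of the !rbd_dev->exists case in the patch). */
static struct snap_context *get_snap_context(int exists)
{
	if (!exists)
		return NULL;
	global_snapc.refcount++;
	return &global_snapc;
}

/* Dropping a NULL context is a no-op; this is what lets every error
 * path share the one exit label. */
static void put_snap_context(struct snap_context *snapc)
{
	if (snapc)
		snapc->refcount--;
}

/* Simplified handler following the patch's single-exit shape:
 * set result, goto the common label, release exactly once there. */
static int handle_request(int is_write, int read_only, int exists)
{
	struct snap_context *snapc = NULL;
	int result;

	/* Stop writes to a read-only device. */
	result = -30;			/* -EROFS */
	if (read_only && is_write)
		goto out_end_request;

	/* Grab a reference to the snapshot context. */
	snapc = get_snap_context(exists);
	if (!snapc) {
		result = -6;		/* -ENXIO */
		goto out_end_request;
	}

	result = 0;			/* the actual I/O would be issued here */

out_end_request:
	put_snap_context(snapc);	/* safe even when snapc is still NULL */
	return result;
}
```

Because the release happens unconditionally at the label, no path can leak the reference and no path can drop it twice — the same invariant the patch establishes by moving `ceph_put_snap_context()` below `out_end_request`.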