From patchwork Wed Jun 17 23:55:46 2015
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 6631611
Subject: [PATCH 11/15] libnvdimm: pmem, blk, and btt make_request cleanups
From: Dan Williams
To: axboe@kernel.dk, linux-nvdimm@lists.01.org
Date: Wed, 17 Jun 2015 19:55:46 -0400
Message-ID: <20150617235546.12943.2374.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To:
 <20150617235209.12943.24419.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20150617235209.12943.24419.stgit@dwillia2-desk3.amr.corp.intel.com>
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, linux-acpi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, hch@lst.de

Various cleanups:

1/ Kill the BUG_ONs, since we have already told the block layer that
   none of these drivers supports DISCARD.

2/ Fix up the use of 'rw'.  There is no need to cache it in the pmem
   driver, and for btt, using bio_data_dir() saves a check for READA.

3/ Kill the local 'sector' variables.  bio_for_each_segment() already
   advances the iterator's sector number by the bio_vec length.
Reviewed-by: Vishal Verma
Signed-off-by: Dan Williams
---
 drivers/nvdimm/blk.c  |   14 ++++----------
 drivers/nvdimm/btt.c  |   18 +++++-------------
 drivers/nvdimm/pmem.c |   20 ++++++--------------
 3 files changed, 15 insertions(+), 37 deletions(-)

diff --git a/drivers/nvdimm/blk.c b/drivers/nvdimm/blk.c
index 8a6345797a71..9d609ef95266 100644
--- a/drivers/nvdimm/blk.c
+++ b/drivers/nvdimm/blk.c
@@ -170,18 +170,12 @@ static void nd_blk_make_request(struct request_queue *q, struct bio *bio)
 	struct bvec_iter iter;
 	struct bio_vec bvec;
 	int err = 0, rw;
-	sector_t sector;
 
-	sector = bio->bi_iter.bi_sector;
-	if (bio_end_sector(bio) > get_capacity(disk)) {
+	if (unlikely(bio_end_sector(bio) > get_capacity(disk))) {
 		err = -EIO;
 		goto out;
 	}
 
-	BUG_ON(bio->bi_rw & REQ_DISCARD);
-
-	rw = bio_data_dir(bio);
-
 	/*
 	 * bio_integrity_enabled also checks if the bio already has an
 	 * integrity payload attached. If it does, we *don't* do a
@@ -196,20 +190,20 @@ static void nd_blk_make_request(struct request_queue *q, struct bio *bio)
 	bip = bio_integrity(bio);
 	blk_dev = disk->private_data;
 
+	rw = bio_data_dir(bio);
 	bio_for_each_segment(bvec, bio, iter) {
 		unsigned int len = bvec.bv_len;
 
 		BUG_ON(len > PAGE_SIZE);
 		err = nd_blk_do_bvec(blk_dev, bip, bvec.bv_page, len,
-				bvec.bv_offset, rw, sector);
+				bvec.bv_offset, rw, iter.bi_sector);
 		if (err) {
 			dev_info(&blk_dev->nsblk->dev,
 					"io error in %s sector %lld, len %d,\n",
 					(rw == READ) ? "READ" : "WRITE",
-					(unsigned long long) sector, len);
+					(unsigned long long) iter.bi_sector, len);
 			goto out;
 		}
-
-		sector += len >> SECTOR_SHIFT;
 	}
 
  out:
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 380e01cedd24..83b798dd2e68 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1177,23 +1177,16 @@ static void btt_make_request(struct request_queue *q, struct bio *bio)
 	struct bio_integrity_payload *bip = bio_integrity(bio);
 	struct block_device *bdev = bio->bi_bdev;
 	struct btt *btt = q->queuedata;
-	int rw;
-	struct bio_vec bvec;
-	sector_t sector;
 	struct bvec_iter iter;
-	int err = 0;
+	struct bio_vec bvec;
+	int err = 0, rw;
 
-	sector = bio->bi_iter.bi_sector;
 	if (bio_end_sector(bio) > get_capacity(bdev->bd_disk)) {
 		err = -EIO;
 		goto out;
 	}
 
-	BUG_ON(bio->bi_rw & REQ_DISCARD);
-
-	rw = bio_rw(bio);
-	if (rw == READA)
-		rw = READ;
+	rw = bio_data_dir(bio);
 
 	/*
 	 * bio_integrity_enabled also checks if the bio already has an
@@ -1216,15 +1209,14 @@ static void btt_make_request(struct request_queue *q, struct bio *bio)
 		BUG_ON(len % btt->sector_size);
 
 		err = btt_do_bvec(btt, bip, bvec.bv_page, len, bvec.bv_offset,
-				rw, sector);
+				rw, iter.bi_sector);
 		if (err) {
 			dev_info(&btt->nd_btt->dev,
 					"io error in %s sector %lld, len %d,\n",
 					(rw == READ) ? "READ" : "WRITE",
-					(unsigned long long) sector, len);
+					(unsigned long long) iter.bi_sector, len);
 			goto out;
 		}
-		sector += len >> SECTOR_SHIFT;
 	}
 
 out:
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index b825a2201aa8..0337b00f5409 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -58,28 +58,20 @@ static void pmem_do_bvec(struct pmem_device *pmem, struct page *page,
 
 static void pmem_make_request(struct request_queue *q, struct bio *bio)
 {
-	struct block_device *bdev = bio->bi_bdev;
-	struct pmem_device *pmem = bdev->bd_disk->private_data;
-	int rw;
+	int err = 0;
 	struct bio_vec bvec;
-	sector_t sector;
 	struct bvec_iter iter;
-	int err = 0;
+	struct block_device *bdev = bio->bi_bdev;
+	struct pmem_device *pmem = bdev->bd_disk->private_data;
 
-	if (bio_end_sector(bio) > get_capacity(bdev->bd_disk)) {
+	if (unlikely(bio_end_sector(bio) > get_capacity(bdev->bd_disk))) {
 		err = -EIO;
 		goto out;
 	}
 
-	BUG_ON(bio->bi_rw & REQ_DISCARD);
-
-	rw = bio_data_dir(bio);
-	sector = bio->bi_iter.bi_sector;
-	bio_for_each_segment(bvec, bio, iter) {
+	bio_for_each_segment(bvec, bio, iter)
 		pmem_do_bvec(pmem, bvec.bv_page, bvec.bv_len, bvec.bv_offset,
-				rw, sector);
-		sector += bvec.bv_len >> 9;
-	}
+				bio_data_dir(bio), iter.bi_sector);
 
  out:
 	bio_endio(bio, err);