From patchwork Wed Jun 17 23:55:30 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 6631851
Subject: [PATCH 08/15] libnvdimm, btt: add support for blk integrity
From: Dan Williams
To: axboe@kernel.dk, linux-nvdimm@lists.01.org
Cc: boaz@plexistor.com, toshi.kani@hp.com, Vishal Verma,
 linux-kernel@vger.kernel.org, hch@lst.de, linux-acpi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, mingo@kernel.org
Date: Wed, 17 Jun 2015 19:55:30 -0400
Message-ID: <20150617235530.12943.73663.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To:
 <20150617235209.12943.24419.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20150617235209.12943.24419.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-8-g92dd
MIME-Version: 1.0
Sender: linux-fsdevel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Vishal Verma

Support multiple block sizes (sector + metadata) using the blk integrity
framework. This registers a new integrity template that defines the
protection information tuple size based on the configured metadata size,
and simply acts as a passthrough for protection information generated by
another layer. The metadata is written to the storage as-is, and read back
with each sector.

Signed-off-by: Vishal Verma
Signed-off-by: Dan Williams
---
 drivers/nvdimm/btt.c      |  126 +++++++++++++++++++++++++++++++++++++++------
 drivers/nvdimm/btt.h      |    2 -
 drivers/nvdimm/btt_devs.c |    3 +
 drivers/nvdimm/core.c     |   37 +++++++++++++
 drivers/nvdimm/nd.h       |    1
 5 files changed, 151 insertions(+), 18 deletions(-)

diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 58becbd69ae1..c337b7abfb43 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -837,6 +837,11 @@ static int btt_meta_init(struct btt *btt)
 	return ret;
 }
 
+static u32 btt_meta_size(struct btt *btt)
+{
+	return btt->lbasize - btt->sector_size;
+}
+
 /*
  * This function calculates the arena in which the given LBA lies
  * by doing a linear walk. This is acceptable since we expect only
@@ -921,8 +926,63 @@ static void zero_fill_data(struct page *page, unsigned int off, u32 len)
 	kunmap_atomic(mem);
 }
 
-static int btt_read_pg(struct btt *btt, struct page *page, unsigned int off,
-		sector_t sector, unsigned int len)
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static int btt_rw_integrity(struct btt *btt, struct bio_integrity_payload *bip,
+			struct arena_info *arena, u32 postmap, int rw)
+{
+	unsigned int len = btt_meta_size(btt);
+	u64 meta_nsoff;
+	int ret = 0;
+
+	if (bip == NULL)
+		return 0;
+
+	meta_nsoff = to_namespace_offset(arena, postmap) + btt->sector_size;
+
+	while (len) {
+		unsigned int cur_len;
+		struct bio_vec bv;
+		void *mem;
+
+		bv = bvec_iter_bvec(bip->bip_vec, bip->bip_iter);
+		/*
+		 * The 'bv' obtained from bvec_iter_bvec has its .bv_len and
+		 * .bv_offset already adjusted for iter->bi_bvec_done, and we
+		 * can use those directly
+		 */
+
+		cur_len = min(len, bv.bv_len);
+		mem = kmap_atomic(bv.bv_page);
+		if (rw)
+			ret = arena_write_bytes(arena, meta_nsoff,
+					mem + bv.bv_offset, cur_len);
+		else
+			ret = arena_read_bytes(arena, meta_nsoff,
+					mem + bv.bv_offset, cur_len);
+
+		kunmap_atomic(mem);
+		if (ret)
+			return ret;
+
+		len -= cur_len;
+		meta_nsoff += cur_len;
+		bvec_iter_advance(bip->bip_vec, &bip->bip_iter, cur_len);
+	}
+
+	return ret;
+}
+
+#else /* CONFIG_BLK_DEV_INTEGRITY */
+static int btt_rw_integrity(struct btt *btt, struct bio_integrity_payload *bip,
+			struct arena_info *arena, u32 postmap, int rw)
+{
+	return 0;
+}
+#endif
+
+static int btt_read_pg(struct btt *btt, struct bio_integrity_payload *bip,
+			struct page *page, unsigned int off, sector_t sector,
+			unsigned int len)
 {
 	int ret = 0;
 	int t_flag, e_flag;
@@ -984,6 +1044,12 @@ static int btt_read_pg(struct btt *btt, struct page *page, unsigned int off,
 		if (ret)
 			goto out_rtt;
 
+		if (bip) {
+			ret = btt_rw_integrity(btt, bip, arena, postmap, READ);
+			if (ret)
+				goto out_rtt;
+		}
+
 		arena->rtt[lane] = RTT_INVALID;
 		nd_region_release_lane(btt->nd_region, lane);
@@ -1001,8 +1067,9 @@ static int btt_read_pg(struct btt *btt, struct page *page, unsigned int off,
 	return ret;
 }
 
-static int btt_write_pg(struct btt *btt, sector_t sector, struct page *page,
-			unsigned int off, unsigned int len)
+static int btt_write_pg(struct btt *btt, struct bio_integrity_payload *bip,
+			sector_t sector, struct page *page, unsigned int off,
+			unsigned int len)
 {
 	int ret = 0;
 	struct arena_info *arena = NULL;
@@ -1036,12 +1103,19 @@ static int btt_write_pg(struct btt *btt, sector_t sector, struct page *page,
 		if (new_postmap >= arena->internal_nlba) {
 			ret = -EIO;
 			goto out_lane;
-		} else
-			ret = btt_data_write(arena, new_postmap, page,
-						off, cur_len);
+		}
+
+		ret = btt_data_write(arena, new_postmap, page, off, cur_len);
 		if (ret)
 			goto out_lane;
 
+		if (bip) {
+			ret = btt_rw_integrity(btt, bip, arena, new_postmap,
+					WRITE);
+			if (ret)
+				goto out_lane;
+		}
+
 		lock_map(arena, premap);
 		ret = btt_map_read(arena, premap, &old_postmap, NULL, NULL);
 		if (ret)
@@ -1081,18 +1155,18 @@ static int btt_write_pg(struct btt *btt, sector_t sector, struct page *page,
 	return ret;
 }
 
-static int btt_do_bvec(struct btt *btt, struct page *page,
-			unsigned int len, unsigned int off, int rw,
-			sector_t sector)
+static int btt_do_bvec(struct btt *btt, struct bio_integrity_payload *bip,
+			struct page *page, unsigned int len, unsigned int off,
+			int rw, sector_t sector)
 {
 	int ret;
 
 	if (rw == READ) {
-		ret = btt_read_pg(btt, page, off, sector, len);
+		ret = btt_read_pg(btt, bip, page, off, sector, len);
 		flush_dcache_page(page);
 	} else {
 		flush_dcache_page(page);
-		ret = btt_write_pg(btt, sector, page, off, len);
+		ret = btt_write_pg(btt, bip, sector, page, off, len);
 	}
 
 	return ret;
@@ -1100,6 +1174,7 @@ static int btt_do_bvec(struct btt *btt, struct page *page,
 
 static void btt_make_request(struct request_queue *q, struct bio *bio)
 {
+	struct bio_integrity_payload *bip = bio_integrity(bio);
 	struct block_device *bdev = bio->bi_bdev;
 	struct btt *btt = q->queuedata;
 	int rw;
@@ -1120,6 +1195,17 @@ static void btt_make_request(struct request_queue *q, struct bio *bio)
 	if (rw == READA)
 		rw = READ;
 
+	/*
+	 * bio_integrity_enabled also checks if the bio already has an
+	 * integrity payload attached. If it does, we *don't* do a
+	 * bio_integrity_prep here - the payload has been generated by
+	 * another kernel subsystem, and we just pass it through.
+	 */
+	if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
+		err = -EIO;
+		goto out;
+	}
+
 	bio_for_each_segment(bvec, bio, iter) {
 		unsigned int len = bvec.bv_len;
@@ -1129,7 +1215,7 @@ static void btt_make_request(struct request_queue *q, struct bio *bio)
 		BUG_ON(len < btt->sector_size);
 		BUG_ON(len % btt->sector_size);
 
-		err = btt_do_bvec(btt, bvec.bv_page, len, bvec.bv_offset,
+		err = btt_do_bvec(btt, bip, bvec.bv_page, len, bvec.bv_offset,
 				rw, sector);
 		if (err) {
 			dev_info(&btt->nd_btt->dev,
@@ -1150,7 +1236,7 @@ static int btt_rw_page(struct block_device *bdev, sector_t sector,
 {
 	struct btt *btt = bdev->bd_disk->private_data;
 
-	btt_do_bvec(btt, page, PAGE_CACHE_SIZE, 0, rw, sector);
+	btt_do_bvec(btt, NULL, page, PAGE_CACHE_SIZE, 0, rw, sector);
 	page_endio(page, rw & WRITE, 0);
 	return 0;
 }
@@ -1204,10 +1290,17 @@ static int btt_blk_init(struct btt *btt)
 	blk_queue_logical_block_size(btt->btt_queue, btt->sector_size);
 	btt->btt_queue->queuedata = btt;
 
-	set_capacity(btt->btt_disk,
-			btt->nlba * btt->sector_size >> SECTOR_SHIFT);
+	set_capacity(btt->btt_disk, 0);
 	add_disk(btt->btt_disk);
 
+	if (btt_meta_size(btt)) {
+		ret = nd_integrity_init(btt->btt_disk, btt_meta_size(btt));
+		if (ret)
+			goto out_free_queue;
+	}
+
+	set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
+
 	return 0;
 
 out_free_queue:
@@ -1217,6 +1310,7 @@ out_free_queue:
 
 static void btt_blk_cleanup(struct btt *btt)
 {
+	blk_integrity_unregister(btt->btt_disk);
 	del_gendisk(btt->btt_disk);
 	put_disk(btt->btt_disk);
 	blk_cleanup_queue(btt->btt_queue);
diff --git a/drivers/nvdimm/btt.h b/drivers/nvdimm/btt.h
index 8c95a7792c3e..2caa0ef7e67a 100644
--- a/drivers/nvdimm/btt.h
+++ b/drivers/nvdimm/btt.h
@@ -31,7 +31,7 @@
 #define ARENA_MAX_SIZE (1ULL << 39)	/* 512 GB */
 #define RTT_VALID (1UL << 31)
 #define RTT_INVALID 0
-#define INT_LBASIZE_ALIGNMENT 256
+#define INT_LBASIZE_ALIGNMENT 64
 #define BTT_PG_SIZE 4096
 #define BTT_DEFAULT_NFREE ND_MAX_LANES
 #define LOG_SEQ_INIT 1
diff --git a/drivers/nvdimm/btt_devs.c b/drivers/nvdimm/btt_devs.c
index c03c854f892b..02e125b91e77 100644
--- a/drivers/nvdimm/btt_devs.c
+++ b/drivers/nvdimm/btt_devs.c
@@ -53,7 +53,8 @@ struct nd_btt *to_nd_btt(struct device *dev)
 }
 EXPORT_SYMBOL(to_nd_btt);
 
-static const unsigned long btt_lbasize_supported[] = { 512, 4096, 0 };
+static const unsigned long btt_lbasize_supported[] = { 512, 520, 528,
+	4096, 4104, 4160, 4224, 0 };
 
 static ssize_t sector_size_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c
index 0fa9b6225450..36f112995c0c 100644
--- a/drivers/nvdimm/core.c
+++ b/drivers/nvdimm/core.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -381,6 +382,42 @@ void nvdimm_bus_unregister(struct nvdimm_bus *nvdimm_bus)
 }
 EXPORT_SYMBOL_GPL(nvdimm_bus_unregister);
 
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+static int nd_pi_nop_generate_verify(struct blk_integrity_iter *iter)
+{
+	return 0;
+}
+
+int nd_integrity_init(struct gendisk *disk, unsigned long meta_size)
+{
+	struct blk_integrity integrity = {
+		.name = "ND-PI-NOP",
+		.generate_fn = nd_pi_nop_generate_verify,
+		.verify_fn = nd_pi_nop_generate_verify,
+		.tuple_size = meta_size,
+		.tag_size = meta_size,
+	};
+	int ret;
+
+	ret = blk_integrity_register(disk, &integrity);
+	if (ret)
+		return ret;
+
+	blk_queue_max_integrity_segments(disk->queue, 1);
+
+	return 0;
+}
+EXPORT_SYMBOL(nd_integrity_init);
+
+#else /* CONFIG_BLK_DEV_INTEGRITY */
+int nd_integrity_init(struct gendisk *disk, unsigned long meta_size)
+{
+	return 0;
+}
+EXPORT_SYMBOL(nd_integrity_init);
+
+#endif
+
 static __init int libnvdimm_init(void)
 {
 	int rc;
diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 2b7746e798fb..1cea3f191a83 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -128,6 +128,7 @@ enum nd_async_mode {
 	ND_ASYNC,
 };
 
+int nd_integrity_init(struct gendisk *disk, unsigned long meta_size);
 void wait_nvdimm_bus_probe_idle(struct device *dev);
 void nd_device_register(struct device *dev);
 void nd_device_unregister(struct device *dev, enum nd_async_mode mode);