From patchwork Tue May 12 04:30:28 2015
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 6385681
Subject: [PATCH v3 11/11] block: base support for pfn i/o
From: Dan Williams
To: linux-kernel@vger.kernel.org
Cc: axboe@kernel.dk, linux-arch@vger.kernel.org, riel@redhat.com,
 linux-nvdimm@lists.01.org, david@fromorbit.com, mingo@kernel.org,
 j.glisse@gmail.com, mgorman@suse.de, linux-fsdevel@vger.kernel.org,
 Tejun Heo, akpm@linux-foundation.org, hch@lst.de
Date: Tue, 12 May 2015 00:30:28 -0400
Message-ID: <20150512043028.11521.84763.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <20150512042629.11521.70356.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20150512042629.11521.70356.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-8-g92dd

Allow block device drivers to opt in to receiving bio(s) whose bio_vec(s)
point to memory that is not backed by struct page entries. When a driver
opts in it asserts that it will use the __pfn_t versions of the
dma_map/kmap/scatterlist APIs in its bio submission path.
Cc: Tejun Heo
Cc: Jens Axboe
Signed-off-by: Dan Williams
---
 block/bio.c               | 46 ++++++++++++++++++++++++++++++++++++++-------
 block/blk-core.c          |  9 +++++++++
 include/linux/blk_types.h |  1 +
 include/linux/blkdev.h    |  2 ++
 4 files changed, 51 insertions(+), 7 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 7100fd6d5898..58553dfd777e 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -567,6 +567,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
 	bio->bi_rw = bio_src->bi_rw;
 	bio->bi_iter = bio_src->bi_iter;
 	bio->bi_io_vec = bio_src->bi_io_vec;
+	bio->bi_flags |= bio_src->bi_flags & (1 << BIO_PFN);
 }
 EXPORT_SYMBOL(__bio_clone_fast);
 
@@ -658,6 +659,8 @@ struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask,
 		goto integrity_clone;
 	}
 
+	bio->bi_flags |= bio_src->bi_flags & (1 << BIO_PFN);
+
 	bio_for_each_segment(bv, bio_src, iter)
 		bio->bi_io_vec[bio->bi_vcnt++] = bv;
 
@@ -699,9 +702,9 @@ int bio_get_nr_vecs(struct block_device *bdev)
 }
 EXPORT_SYMBOL(bio_get_nr_vecs);
 
-static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
-			  *page, unsigned int len, unsigned int offset,
-			  unsigned int max_sectors)
+static int __bio_add_pfn(struct request_queue *q, struct bio *bio,
+			 __pfn_t pfn, unsigned int len, unsigned int offset,
+			 unsigned int max_sectors)
 {
 	int retried_segments = 0;
 	struct bio_vec *bvec;
@@ -723,7 +726,7 @@ static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
 	if (bio->bi_vcnt > 0) {
 		struct bio_vec *prev = &bio->bi_io_vec[bio->bi_vcnt - 1];
 
-		if (page == bvec_page(prev) &&
+		if (__pfn_t_to_pfn(pfn) == __pfn_t_to_pfn(prev->bv_pfn) &&
 		    offset == prev->bv_offset + prev->bv_len) {
 			unsigned int prev_bv_len = prev->bv_len;
 			prev->bv_len += len;
@@ -768,7 +771,7 @@ static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
 	 * cannot add the page
 	 */
 	bvec = &bio->bi_io_vec[bio->bi_vcnt];
-	bvec_set_page(bvec, page);
+	bvec->bv_pfn = pfn;
 	bvec->bv_len = len;
 	bvec->bv_offset = offset;
 	bio->bi_vcnt++;
@@ -845,7 +848,7 @@ static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
 int bio_add_pc_page(struct request_queue *q, struct bio *bio, struct page
 		    *page, unsigned int len, unsigned int offset)
 {
-	return __bio_add_page(q, bio, page, len, offset,
+	return __bio_add_pfn(q, bio, page_to_pfn_t(page), len, offset,
 			      queue_max_hw_sectors(q));
 }
 EXPORT_SYMBOL(bio_add_pc_page);
@@ -872,10 +875,39 @@ int bio_add_page(struct bio *bio, struct page *page, unsigned int len,
 	if ((max_sectors < (len >> 9)) && !bio->bi_iter.bi_size)
 		max_sectors = len >> 9;
 
-	return __bio_add_page(q, bio, page, len, offset, max_sectors);
+	return __bio_add_pfn(q, bio, page_to_pfn_t(page), len, offset,
+			     max_sectors);
 }
 EXPORT_SYMBOL(bio_add_page);
 
+/**
+ * bio_add_pfn - attempt to add pfn to bio
+ * @bio: destination bio
+ * @pfn: pfn to add
+ * @len: vec entry length
+ * @offset: vec entry offset
+ *
+ * Identical to bio_add_page() except this variant flags the bio as
+ * not having struct page backing. A given request_queue must assert
+ * that it is prepared to handle this constraint before bio(s)
+ * flagged in this manner can be passed.
+ */
+int bio_add_pfn(struct bio *bio, __pfn_t pfn, unsigned int len,
+		unsigned int offset)
+{
+	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+	unsigned int max_sectors;
+
+	if (!blk_queue_pfn(q))
+		return 0;
+	set_bit(BIO_PFN, &bio->bi_flags);
+	max_sectors = blk_max_size_offset(q, bio->bi_iter.bi_sector);
+	if ((max_sectors < (len >> 9)) && !bio->bi_iter.bi_size)
+		max_sectors = len >> 9;
+
+	return __bio_add_pfn(q, bio, pfn, len, offset, max_sectors);
+}
+
 struct submit_bio_ret {
 	struct completion event;
 	int error;
diff --git a/block/blk-core.c b/block/blk-core.c
index 94d2c6ccf801..1275e2c08c16 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1856,6 +1856,15 @@ generic_make_request_checks(struct bio *bio)
 		goto end_io;
 	}
 
+	if (bio_flagged(bio, BIO_PFN)) {
+		if (IS_ENABLED(CONFIG_DEV_PFN) && blk_queue_pfn(q))
+			/* pass */;
+		else {
+			err = -EOPNOTSUPP;
+			goto end_io;
+		}
+	}
+
 	/*
 	 * Various block parts want %current->io_context and lazy ioc
 	 * allocation ends up trading a lot of pain for a small amount of
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index f50464e167b4..ccde0b2d689d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -150,6 +150,7 @@ struct bio {
 #define BIO_NULL_MAPPED	8	/* contains invalid user pages */
 #define BIO_QUIET	9	/* Make BIO Quiet */
 #define BIO_SNAP_STABLE	10	/* bio data must be snapshotted during write */
+#define BIO_PFN		11	/* bio_vec references memory without struct page */
 
 /*
  * Flags starting here get preserved by bio_reset() - this includes
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 42bcaf2b9311..d3f9b8cc50f2 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -513,6 +513,7 @@ struct request_queue {
 #define QUEUE_FLAG_INIT_DONE   20	/* queue is initialized */
 #define QUEUE_FLAG_NO_SG_MERGE 21	/* don't attempt to merge SG segments*/
 #define QUEUE_FLAG_SG_GAPS     22	/* queue doesn't support SG gaps */
+#define QUEUE_FLAG_PFN         23	/* queue supports pfn-only bio_vec(s) */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
				 (1 << QUEUE_FLAG_STACKABLE) |		\
@@ -594,6 +595,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 #define blk_queue_noxmerges(q)	\
	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
 #define blk_queue_nonrot(q)	test_bit(QUEUE_FLAG_NONROT, &(q)->queue_flags)
+#define blk_queue_pfn(q)	test_bit(QUEUE_FLAG_PFN, &(q)->queue_flags)
 #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
 #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
 #define blk_queue_stackable(q)	\