From patchwork Fri Oct 20 05:24:03 2017
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 10018909
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org, Josef Bacik
Cc: Ross Zwisler, Alasdair Kergon, Dan Williams, Dave Chinner, Jan Kara,
 Mike Snitzer, Shaohua Li, dm-devel@redhat.com, linux-nvdimm@lists.01.org,
 linux-raid@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org, Christoph Hellwig
Subject: [PATCH 1/2] dm log writes: Add support for inline data buffers
Date: Thu, 19 Oct 2017 23:24:03 -0600
Message-Id: <20171020052404.13762-1-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.9.5
List-ID: <linux-fsdevel.vger.kernel.org>

Currently dm-log-writes supports writing filesystem data via BIOs, and
writing internal metadata from a flat buffer via write_metadata().

For DAX writes, though, we won't have a BIO, but will instead have an
iterator that we'll want to use to fill a flat data buffer.  So, create
write_inline_data(), which allows us to write filesystem data using a flat
buffer as a source, and wire it up in log_one_block().
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 drivers/md/dm-log-writes.c | 90 +++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 86 insertions(+), 4 deletions(-)

diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c
index 8b80a9c..c65f9d1 100644
--- a/drivers/md/dm-log-writes.c
+++ b/drivers/md/dm-log-writes.c
@@ -246,27 +246,109 @@ static int write_metadata(struct log_writes_c *lc, void *entry,
 	return -1;
 }
 
+static int write_inline_data(struct log_writes_c *lc, void *entry,
+			     size_t entrylen, void *data, size_t datalen,
+			     sector_t sector)
+{
+	int num_pages, bio_pages, pg_datalen, pg_sectorlen, i;
+	struct page *page;
+	struct bio *bio;
+	size_t ret;
+	void *ptr;
+
+	while (datalen) {
+		num_pages = ALIGN(datalen, PAGE_SIZE) >> PAGE_SHIFT;
+		bio_pages = min(num_pages, BIO_MAX_PAGES);
+
+		atomic_inc(&lc->io_blocks);
+
+		bio = bio_alloc(GFP_KERNEL, bio_pages);
+		if (!bio) {
+			DMERR("Couldn't alloc inline data bio");
+			goto error;
+		}
+
+		bio->bi_iter.bi_size = 0;
+		bio->bi_iter.bi_sector = sector;
+		bio_set_dev(bio, lc->logdev->bdev);
+		bio->bi_end_io = log_end_io;
+		bio->bi_private = lc;
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
+
+		for (i = 0; i < bio_pages; i++) {
+			pg_datalen = min(datalen, PAGE_SIZE);
+			pg_sectorlen = ALIGN(pg_datalen, lc->sectorsize);
+
+			page = alloc_page(GFP_KERNEL);
+			if (!page) {
+				DMERR("Couldn't alloc inline data page");
+				goto error_bio;
+			}
+
+			ptr = kmap_atomic(page);
+			memcpy(ptr, data, pg_datalen);
+			if (pg_sectorlen > pg_datalen)
+				memset(ptr + pg_datalen, 0,
+				       pg_sectorlen - pg_datalen);
+			kunmap_atomic(ptr);
+
+			ret = bio_add_page(bio, page, pg_sectorlen, 0);
+			if (ret != pg_sectorlen) {
+				DMERR("Couldn't add page of inline data");
+				__free_page(page);
+				goto error_bio;
+			}
+
+			datalen -= pg_datalen;
+			data += pg_datalen;
+		}
+		submit_bio(bio);
+
+		sector += bio_pages * PAGE_SECTORS;
+	}
+	return 0;
+error_bio:
+	bio_free_pages(bio);
+	bio_put(bio);
+error:
+	put_io_block(lc);
+	return -1;
+}
+
 static int log_one_block(struct log_writes_c *lc,
 			 struct pending_block *block, sector_t sector)
 {
 	struct bio *bio;
 	struct log_write_entry entry;
-	size_t ret;
+	size_t metadlen, ret;
 	int i;
 
 	entry.sector = cpu_to_le64(block->sector);
 	entry.nr_sectors = cpu_to_le64(block->nr_sectors);
 	entry.flags = cpu_to_le64(block->flags);
 	entry.data_len = cpu_to_le64(block->datalen);
-	if (write_metadata(lc, &entry, sizeof(entry), block->data,
-			   block->datalen, sector)) {
+
+	metadlen = (block->flags & LOG_MARK_FLAG) ? block->datalen : 0;
+	if (write_metadata(lc, &entry, sizeof(entry), block->data, metadlen,
+			   sector)) {
 		free_pending_block(lc, block);
 		return -1;
 	}
 
+	sector += dev_to_bio_sectors(lc, 1);
+
+	if (block->datalen && metadlen == 0) {
+		if (write_inline_data(lc, &entry, sizeof(entry), block->data,
+				      block->datalen, sector)) {
+			free_pending_block(lc, block);
+			return -1;
+		}
+		/* we don't support both inline data & bio data */
+		goto out;
+	}
+
 	if (!block->vec_cnt)
 		goto out;
-	sector += dev_to_bio_sectors(lc, 1);
 
 	atomic_inc(&lc->io_blocks);
 	bio = bio_alloc(GFP_KERNEL, min(block->vec_cnt, BIO_MAX_PAGES));
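
As a quick illustration of the chunking arithmetic above (not part of the
patch), here is a standalone userspace sketch that mirrors how
write_inline_data() splits a flat buffer: ALIGN(datalen, PAGE_SIZE) pages
total, at most BIO_MAX_PAGES pages per bio, with the tail of each page
zero-padded out to the log device's sector size and the log cursor advanced
by a full page's worth of sectors per page.  plan_inline_write() and the
PAGE_SIZE / BIO_MAX_PAGES / sector-size constants below are illustrative
assumptions for this example only, not kernel definitions.

/*
 * Userspace sketch of the per-page chunking done by write_inline_data().
 * Constants are assumed values, not taken from the kernel headers.
 */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE	4096UL	/* assumed page size */
#define BIO_MAX_PAGES	256	/* assumed per-bio page limit */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))
#define MIN(a, b)	((a) < (b) ? (a) : (b))

static void plan_inline_write(size_t datalen, size_t sectorsize)
{
	unsigned long sector = 0;	/* offset on the log device, in 512B sectors */
	int bio_nr = 0;

	while (datalen) {
		/* pages needed for what's left, capped per bio */
		size_t num_pages = ALIGN(datalen, PAGE_SIZE) / PAGE_SIZE;
		size_t bio_pages = MIN(num_pages, (size_t)BIO_MAX_PAGES);
		size_t i;

		printf("bio %d at sector %lu: %zu page(s)\n",
		       bio_nr++, sector, bio_pages);

		for (i = 0; i < bio_pages; i++) {
			/* last page may be partial; pad it to sector alignment */
			size_t pg_datalen = MIN(datalen, PAGE_SIZE);
			size_t pg_sectorlen = ALIGN(pg_datalen, sectorsize);

			printf("  page %zu: %zu data bytes, %zu bytes of zero pad\n",
			       i, pg_datalen, pg_sectorlen - pg_datalen);
			datalen -= pg_datalen;
		}
		/* each page consumes a full page's worth of log sectors */
		sector += bio_pages * (PAGE_SIZE / 512);
	}
}

int main(void)
{
	/* e.g. a 1 MiB + 100 byte inline write against a 512B-sector log device */
	plan_inline_write((1UL << 20) + 100, 512);
	return 0;
}

With these assumed constants the example write spans two bios: one full
256-page bio, then a single trailing page carrying 100 data bytes padded
with 412 zero bytes, which is the same shape the kernel code produces.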