From patchwork Wed Jun 14 23:40:39 2017
X-Patchwork-Submitter: Michael Halcrow
X-Patchwork-Id: 9787641
From: Michael Halcrow
To: Michael Halcrow,
Ts'o" , Eric Biggers , Jaegeuk Kim , linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, dm-devel@redhat.com, linux-ext4@vger.kernel.org, Tyler Hicks Subject: [RFC PATCH 3/4] ext4: Set the bio REQ_NOENCRYPT flag Date: Wed, 14 Jun 2017 16:40:39 -0700 Message-Id: <20170614234040.4326-4-mhalcrow@google.com> X-Mailer: git-send-email 2.13.1.518.g3df882009-goog In-Reply-To: <20170614234040.4326-1-mhalcrow@google.com> References: <20170614234040.4326-1-mhalcrow@google.com> Sender: linux-fscrypt-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP When lower layers such as dm-crypt observe the REQ_NOENCRYPT flag, it helps the I/O stack avoid redundant encryption, improving performance and power utilization. Note that lower layers must be consistent in their observation of this flag in order to avoid the possibility of data corruption. Signed-off-by: Michael Halcrow --- fs/crypto/bio.c | 2 +- fs/ext4/ext4.h | 3 +++ fs/ext4/inode.c | 13 ++++++++----- fs/ext4/page-io.c | 5 +++++ fs/ext4/readpage.c | 3 ++- 5 files changed, 19 insertions(+), 7 deletions(-) diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c index a409a84f1bca..9093a715d2be 100644 --- a/fs/crypto/bio.c +++ b/fs/crypto/bio.c @@ -118,7 +118,7 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk, bio->bi_bdev = inode->i_sb->s_bdev; bio->bi_iter.bi_sector = pblk << (inode->i_sb->s_blocksize_bits - 9); - bio_set_op_attrs(bio, REQ_OP_WRITE, 0); + bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_NOENCRYPT); ret = bio_add_page(bio, ciphertext_page, inode->i_sb->s_blocksize, 0); if (ret != inode->i_sb->s_blocksize) { diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index 8e8046104f4d..48c2bc9f8688 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -206,7 +206,10 @@ typedef struct ext4_io_end { ssize_t size; /* size of the extent */ } ext4_io_end_t; +#define EXT4_IO_ENCRYPTED 1 + struct ext4_io_submit { + unsigned int io_flags; struct writeback_control *io_wbc; struct bio *io_bio; ext4_io_end_t *io_end; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 1bd0bfa547f6..25a9b7265692 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -1154,10 +1154,11 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len, if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh) && (block_start < from || block_end > to)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - *wait_bh++ = bh; decrypt = ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode); + ll_rw_block(REQ_OP_READ, (decrypt ? REQ_NOENCRYPT : 0), + 1, &bh); + *wait_bh++ = bh; } } /* @@ -3863,6 +3864,7 @@ static int __ext4_block_zero_page_range(handle_t *handle, struct inode *inode = mapping->host; struct buffer_head *bh; struct page *page; + bool decrypt; int err = 0; page = find_or_create_page(mapping, from >> PAGE_SHIFT, @@ -3905,13 +3907,14 @@ static int __ext4_block_zero_page_range(handle_t *handle, if (!buffer_uptodate(bh)) { err = -EIO; - ll_rw_block(REQ_OP_READ, 0, 1, &bh); + decrypt = S_ISREG(inode->i_mode) && + ext4_encrypted_inode(inode); + ll_rw_block(REQ_OP_READ, (decrypt ? REQ_NOENCRYPT : 0), 1, &bh); wait_on_buffer(bh); /* Uhhuh. Read error. Complain and punt. */ if (!buffer_uptodate(bh)) goto unlock; - if (S_ISREG(inode->i_mode) && - ext4_encrypted_inode(inode)) { + if (decrypt) { /* We expect the key to be set. 
*/ BUG_ON(!fscrypt_has_encryption_key(inode)); BUG_ON(blocksize != PAGE_SIZE); diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c index 1a82138ba739..e25bf6cb216a 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c @@ -349,6 +349,8 @@ void ext4_io_submit(struct ext4_io_submit *io) if (bio) { int io_op_flags = io->io_wbc->sync_mode == WB_SYNC_ALL ? REQ_SYNC : 0; + if (io->io_flags & EXT4_IO_ENCRYPTED) + io_op_flags |= REQ_NOENCRYPT; bio_set_op_attrs(io->io_bio, REQ_OP_WRITE, io_op_flags); submit_bio(io->io_bio); } @@ -358,6 +360,7 @@ void ext4_io_submit(struct ext4_io_submit *io) void ext4_io_submit_init(struct ext4_io_submit *io, struct writeback_control *wbc) { + io->io_flags = 0; io->io_wbc = wbc; io->io_bio = NULL; io->io_end = NULL; @@ -499,6 +502,8 @@ int ext4_bio_write_page(struct ext4_io_submit *io, do { if (!buffer_async_write(bh)) continue; + if (data_page) + io->io_flags |= EXT4_IO_ENCRYPTED; ret = io_submit_add_bh(io, inode, data_page ? data_page : page, bh); if (ret) { diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c index a81b829d56de..008d14d74f33 100644 --- a/fs/ext4/readpage.c +++ b/fs/ext4/readpage.c @@ -258,7 +258,8 @@ int ext4_mpage_readpages(struct address_space *mapping, bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9); bio->bi_end_io = mpage_end_io; bio->bi_private = ctx; - bio_set_op_attrs(bio, REQ_OP_READ, 0); + bio_set_op_attrs(bio, REQ_OP_READ, + ctx ? REQ_NOENCRYPT : 0); } length = first_hole << blkbits;
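
For context, here is a minimal, illustrative sketch (not part of this
patch) of how a bio-based device-mapper target might honor the flag in
its ->map() hook, by remapping REQ_NOENCRYPT bios straight to the
backing device instead of running them through its cipher.  It assumes
the REQ_NOENCRYPT flag introduced by this series; the example_crypt_*
names and the config struct are hypothetical, not dm-crypt's actual
code.

/*
 * Illustrative only.  A target that encrypts data could skip bios
 * that the upper layer (e.g. fscrypt) has already encrypted.
 */
#include <linux/bio.h>
#include <linux/device-mapper.h>

struct example_crypt_config {
	struct dm_dev	*dev;	/* backing device */
	sector_t	start;	/* data offset on the backing device */
};

static int example_crypt_map(struct dm_target *ti, struct bio *bio)
{
	struct example_crypt_config *cfg = ti->private;

	if (bio->bi_opf & REQ_NOENCRYPT) {
		/* Already encrypted above us; pass through unmodified. */
		bio->bi_bdev = cfg->dev->bdev;
		bio->bi_iter.bi_sector = cfg->start +
			dm_target_offset(ti, bio->bi_iter.bi_sector);
		return DM_MAPIO_REMAPPED;
	}

	/* Normal path: queue the bio for encryption/decryption. */
	return DM_MAPIO_SUBMITTED;
}

As the commit message notes, such a check has to be applied
consistently for both reads and writes, or data written one way and
read back the other will appear corrupted.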