From patchwork Wed Nov 2 11:53:07 2022
X-Patchwork-Submitter: Sweet Tea Dorminy
X-Patchwork-Id: 13028048
From: Sweet Tea Dorminy
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chris Mason, Josef Bacik,
    David Sterba, linux-fscrypt@vger.kernel.org, linux-btrfs@vger.kernel.org,
    kernel-team@meta.com
Cc: Sweet Tea Dorminy
Subject: [PATCH v5 18/18] btrfs: allow encrypting compressed extents
Date: Wed, 2 Nov 2022 07:53:07 -0400
X-Mailing-List: linux-btrfs@vger.kernel.org

Conveniently, compressed extents are already padded to the sector size,
so they can be encrypted in place (which requires 16-byte alignment).
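For illustration only, not part of the change itself: a minimal sketch of
how one sector of an already sector-aligned compressed page could be
encrypted in place with the fscrypt helper this patch calls. The wrapper
name and its parameter list are hypothetical; the real call sites are in
btrfs_submit_compressed_write() and end_compressed_bio_read() below.

/*
 * Hypothetical helper (illustration only): encrypt one sector of a
 * compressed page in place.  Because compressed data is padded out to
 * sectorsize, the in-place alignment requirement is already met.
 */
static int encrypt_compressed_sector(struct inode *inode, struct page *page,
				     u64 start, u32 sectorsize,
				     u8 sectorsize_bits)
{
	/* Logical block number of this sector within the file. */
	u64 lblk_num = start >> sectorsize_bits;

	return fscrypt_encrypt_block_inplace(inode, page, sectorsize, 0,
					     lblk_num, GFP_NOFS);
}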
Signed-off-by: Sweet Tea Dorminy
---
 fs/btrfs/btrfs_inode.h |  3 ---
 fs/btrfs/compression.c | 23 +++++++++++++++++++++++
 2 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index f0935a95ec70..d7f2b9a3d42b 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -385,9 +385,6 @@ static inline bool btrfs_inode_in_log(struct btrfs_inode *inode, u64 generation)
  */
 static inline bool btrfs_inode_can_compress(const struct btrfs_inode *inode)
 {
-	if (IS_ENCRYPTED(&inode->vfs_inode))
-		return false;
-
 	if (inode->flags & BTRFS_INODE_NODATACOW ||
 	    inode->flags & BTRFS_INODE_NODATASUM)
 		return false;
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 52df6c06cc91..038721a66414 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -202,6 +202,16 @@ static void end_compressed_bio_read(struct btrfs_bio *bbio)
 				status = errno_to_blk_status(ret);
 			}
 		}
+		if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
+			int err;
+			u64 lblk_num = start >> fs_info->sectorsize_bits;
+			err = fscrypt_decrypt_block_inplace(inode, bv.bv_page,
+							    fs_info->sectorsize,
+							    bv.bv_offset,
+							    lblk_num);
+			if (err)
+				status = errno_to_blk_status(err);
+		}
 	}
 
 	if (status)
@@ -451,6 +461,19 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
 		real_size = min_t(u64, real_size, compressed_len - offset);
 		ASSERT(IS_ALIGNED(real_size, fs_info->sectorsize));
 
+		if (fscrypt_inode_uses_fs_layer_crypto(&inode->vfs_inode)) {
+			int err;
+			u64 lblk_num = start >> fs_info->sectorsize_bits;
+
+			err = fscrypt_encrypt_block_inplace(&inode->vfs_inode,
+							    page, real_size, 0,
+							    lblk_num, GFP_NOFS);
+			if (err) {
+				ret = errno_to_blk_status(err);
+				break;
+			}
+		}
+
 		if (use_append)
 			added = bio_add_zone_append_page(bio, page, real_size,
 							 offset_in_page(offset));