From patchwork Fri Jun 4 21:09:00 2021
Date: Fri, 4 Jun 2021 21:09:00 +0000
In-Reply-To: <20210604210908.2105870-1-satyat@google.com>
Message-Id: <20210604210908.2105870-2-satyat@google.com>
References: <20210604210908.2105870-1-satyat@google.com>
Subject: [PATCH v9 1/9] block: blk-crypto-fallback: handle data unit split across multiple bvecs
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe, "Darrick J.
Wong" Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Satya Tangirala , Eric Biggers Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org Till now, the blk-crypto-fallback required each crypto data unit to be contained within a single bvec. It also required the starting offset of each bvec to be aligned to the data unit size. This patch removes both restrictions, so that blk-crypto-fallback can handle crypto data units split across multiple bvecs. blk-crypto-fallback now only requires that the total size of the bio be aligned to the crypto data unit size. The buffer that is being read/written to no longer needs to be data unit size aligned. This is useful for making the alignment requirements for direct I/O on encrypted files similar to those for direct I/O on unencrypted files. Co-developed-by: Eric Biggers Signed-off-by: Eric Biggers Signed-off-by: Satya Tangirala --- block/blk-crypto-fallback.c | 203 +++++++++++++++++++++++++++--------- 1 file changed, 156 insertions(+), 47 deletions(-) diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c index 85c813ef670b..658ff7deadf1 100644 --- a/block/blk-crypto-fallback.c +++ b/block/blk-crypto-fallback.c @@ -256,6 +256,65 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], iv->dun[i] = cpu_to_le64(dun[i]); } +/* + * If the length of any bio segment isn't a multiple of data_unit_size + * (which can happen if data_unit_size > logical_block_size), then each + * encryption/decryption might need to be passed multiple scatterlist elements. + * If that will be the case, this function allocates and initializes src and dst + * scatterlists (or a combined src/dst scatterlist) with the needed length. + * + * If 1 element is guaranteed to be enough (which is usually the case, and is + * guaranteed when data_unit_size <= logical_block_size), then this function + * just initializes the on-stack scatterlist(s). + */ +static bool blk_crypto_alloc_sglists(struct bio *bio, + const struct bvec_iter *start_iter, + unsigned int data_unit_size, + struct scatterlist **src_p, + struct scatterlist **dst_p) +{ + struct bio_vec bv; + struct bvec_iter iter; + bool aligned = true; + unsigned int count = 0; + + __bio_for_each_segment(bv, bio, iter, *start_iter) { + count++; + aligned &= IS_ALIGNED(bv.bv_len, data_unit_size); + } + if (aligned) { + count = 1; + } else { + /* + * We can't need more elements than bio segments, and we can't + * need more than the number of sectors per data unit. This may + * overestimate the required length by a bit, but that's okay. + */ + count = min(count, data_unit_size >> SECTOR_SHIFT); + } + + if (count > 1) { + *src_p = kmalloc_array(count, sizeof(struct scatterlist), + GFP_NOIO); + if (!*src_p) + return false; + if (dst_p) { + *dst_p = kmalloc_array(count, + sizeof(struct scatterlist), + GFP_NOIO); + if (!*dst_p) { + kfree(*src_p); + *src_p = NULL; + return false; + } + } + } + sg_init_table(*src_p, count); + if (dst_p) + sg_init_table(*dst_p, count); + return true; +} + /* * The crypto API fallback's encryption routine. 
* Allocate a bounce bio for encryption, encrypt the input bio using crypto API, @@ -272,9 +331,12 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr) struct skcipher_request *ciph_req = NULL; DECLARE_CRYPTO_WAIT(wait); u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; - struct scatterlist src, dst; + struct scatterlist _src, *src = &_src; + struct scatterlist _dst, *dst = &_dst; union blk_crypto_iv iv; - unsigned int i, j; + unsigned int i; + unsigned int sg_idx = 0; + unsigned int du_filled = 0; bool ret = false; blk_status_t blk_st; @@ -286,11 +348,18 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr) bc = src_bio->bi_crypt_context; data_unit_size = bc->bc_key->crypto_cfg.data_unit_size; + /* Allocate scatterlists if needed */ + if (!blk_crypto_alloc_sglists(src_bio, &src_bio->bi_iter, + data_unit_size, &src, &dst)) { + src_bio->bi_status = BLK_STS_RESOURCE; + return false; + } + /* Allocate bounce bio for encryption */ enc_bio = blk_crypto_clone_bio(src_bio); if (!enc_bio) { src_bio->bi_status = BLK_STS_RESOURCE; - return false; + goto out_free_sglists; } /* @@ -310,45 +379,58 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr) } memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun)); - sg_init_table(&src, 1); - sg_init_table(&dst, 1); - skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size, + skcipher_request_set_crypt(ciph_req, src, dst, data_unit_size, iv.bytes); - /* Encrypt each page in the bounce bio */ + /* + * Encrypt each data unit in the bounce bio. + * + * Take care to handle the case where a data unit spans bio segments. + * This can happen when data_unit_size > logical_block_size. + */ for (i = 0; i < enc_bio->bi_vcnt; i++) { - struct bio_vec *enc_bvec = &enc_bio->bi_io_vec[i]; - struct page *plaintext_page = enc_bvec->bv_page; + struct bio_vec *bv = &enc_bio->bi_io_vec[i]; + struct page *plaintext_page = bv->bv_page; struct page *ciphertext_page = mempool_alloc(blk_crypto_bounce_page_pool, GFP_NOIO); + unsigned int offset_in_bv = 0; - enc_bvec->bv_page = ciphertext_page; + bv->bv_page = ciphertext_page; if (!ciphertext_page) { src_bio->bi_status = BLK_STS_RESOURCE; goto out_free_bounce_pages; } - sg_set_page(&src, plaintext_page, data_unit_size, - enc_bvec->bv_offset); - sg_set_page(&dst, ciphertext_page, data_unit_size, - enc_bvec->bv_offset); - - /* Encrypt each data unit in this page */ - for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) { - blk_crypto_dun_to_iv(curr_dun, &iv); - if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req), - &wait)) { - i++; - src_bio->bi_status = BLK_STS_IOERR; - goto out_free_bounce_pages; + while (offset_in_bv < bv->bv_len) { + unsigned int n = min(bv->bv_len - offset_in_bv, + data_unit_size - du_filled); + sg_set_page(&src[sg_idx], plaintext_page, n, + bv->bv_offset + offset_in_bv); + sg_set_page(&dst[sg_idx], ciphertext_page, n, + bv->bv_offset + offset_in_bv); + sg_idx++; + offset_in_bv += n; + du_filled += n; + if (du_filled == data_unit_size) { + blk_crypto_dun_to_iv(curr_dun, &iv); + if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req), + &wait)) { + src_bio->bi_status = BLK_STS_IOERR; + i++; + goto out_free_bounce_pages; + } + bio_crypt_dun_increment(curr_dun, 1); + sg_idx = 0; + du_filled = 0; } - bio_crypt_dun_increment(curr_dun, 1); - src.offset += data_unit_size; - dst.offset += data_unit_size; } } + if (WARN_ON_ONCE(du_filled != 0)) { + src_bio->bi_status = BLK_STS_IOERR; + goto out_free_bounce_pages; + } enc_bio->bi_private = src_bio; enc_bio->bi_end_io = 
blk_crypto_fallback_encrypt_endio; @@ -369,7 +451,11 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr) out_put_enc_bio: if (enc_bio) bio_put(enc_bio); - +out_free_sglists: + if (src != &_src) + kfree(src); + if (dst != &_dst) + kfree(dst); return ret; } @@ -388,13 +474,21 @@ static void blk_crypto_fallback_decrypt_bio(struct work_struct *work) DECLARE_CRYPTO_WAIT(wait); u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]; union blk_crypto_iv iv; - struct scatterlist sg; + struct scatterlist _sg, *sg = &_sg; struct bio_vec bv; struct bvec_iter iter; const int data_unit_size = bc->bc_key->crypto_cfg.data_unit_size; - unsigned int i; + unsigned int sg_idx = 0; + unsigned int du_filled = 0; blk_status_t blk_st; + /* Allocate scatterlist if needed */ + if (!blk_crypto_alloc_sglists(bio, &f_ctx->crypt_iter, data_unit_size, + &sg, NULL)) { + bio->bi_status = BLK_STS_RESOURCE; + goto out_no_sglists; + } + /* * Use the crypto API fallback keyslot manager to get a crypto_skcipher * for the algorithm and key specified for this bio. @@ -412,33 +506,48 @@ static void blk_crypto_fallback_decrypt_bio(struct work_struct *work) } memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun)); - sg_init_table(&sg, 1); - skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size, - iv.bytes); + skcipher_request_set_crypt(ciph_req, sg, sg, data_unit_size, iv.bytes); - /* Decrypt each segment in the bio */ + /* + * Decrypt each data unit in the bio. + * + * Take care to handle the case where a data unit spans bio segments. + * This can happen when data_unit_size > logical_block_size. + */ __bio_for_each_segment(bv, bio, iter, f_ctx->crypt_iter) { - struct page *page = bv.bv_page; - - sg_set_page(&sg, page, data_unit_size, bv.bv_offset); - - /* Decrypt each data unit in the segment */ - for (i = 0; i < bv.bv_len; i += data_unit_size) { - blk_crypto_dun_to_iv(curr_dun, &iv); - if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req), - &wait)) { - bio->bi_status = BLK_STS_IOERR; - goto out; + unsigned int offset_in_bv = 0; + + while (offset_in_bv < bv.bv_len) { + unsigned int n = min(bv.bv_len - offset_in_bv, + data_unit_size - du_filled); + sg_set_page(&sg[sg_idx++], bv.bv_page, n, + bv.bv_offset + offset_in_bv); + offset_in_bv += n; + du_filled += n; + if (du_filled == data_unit_size) { + blk_crypto_dun_to_iv(curr_dun, &iv); + if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req), + &wait)) { + bio->bi_status = BLK_STS_IOERR; + goto out; + } + bio_crypt_dun_increment(curr_dun, 1); + sg_idx = 0; + du_filled = 0; } - bio_crypt_dun_increment(curr_dun, 1); - sg.offset += data_unit_size; } } - + if (WARN_ON_ONCE(du_filled != 0)) { + bio->bi_status = BLK_STS_IOERR; + goto out; + } out: skcipher_request_free(ciph_req); blk_ksm_put_slot(slot); out_no_keyslot: + if (sg != &_sg) + kfree(sg); +out_no_sglists: mempool_free(f_ctx, bio_fallback_crypt_ctx_pool); bio_endio(bio); } From patchwork Fri Jun 4 21:09:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12300733 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org 
Date: Fri, 4 Jun 2021 21:09:01 +0000
In-Reply-To: <20210604210908.2105870-1-satyat@google.com>
Message-Id: <20210604210908.2105870-3-satyat@google.com>
References: <20210604210908.2105870-1-satyat@google.com>
Subject: [PATCH v9 2/9] block: blk-crypto: relax alignment requirements for bvecs in bios
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe, "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Satya Tangirala, Eric Biggers

blk-crypto only accepted bios whose bvecs' offsets and lengths were aligned
to the crypto data unit size, since blk-crypto-fallback required that to work
correctly.
Now that the blk-crypto-fallback has been updated to work without that assumption, we relax the alignment requirement - blk-crypto now only needs the total size of the bio to be aligned to the crypto data unit size. Co-developed-by: Eric Biggers Signed-off-by: Eric Biggers Signed-off-by: Satya Tangirala --- block/blk-crypto.c | 19 ++----------------- 1 file changed, 2 insertions(+), 17 deletions(-) diff --git a/block/blk-crypto.c b/block/blk-crypto.c index c5bdaafffa29..06f81e64151d 100644 --- a/block/blk-crypto.c +++ b/block/blk-crypto.c @@ -200,22 +200,6 @@ bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1, unsigned int bc1_bytes, return !bc1 || bio_crypt_dun_is_contiguous(bc1, bc1_bytes, bc2->bc_dun); } -/* Check that all I/O segments are data unit aligned. */ -static bool bio_crypt_check_alignment(struct bio *bio) -{ - const unsigned int data_unit_size = - bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size; - struct bvec_iter iter; - struct bio_vec bv; - - bio_for_each_segment(bv, bio, iter) { - if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size)) - return false; - } - - return true; -} - blk_status_t __blk_crypto_init_request(struct request *rq) { return blk_ksm_get_slot_for_key(rq->q->ksm, rq->crypt_ctx->bc_key, @@ -271,7 +255,8 @@ bool __blk_crypto_bio_prep(struct bio **bio_ptr) goto fail; } - if (!bio_crypt_check_alignment(bio)) { + if (!IS_ALIGNED(bio->bi_iter.bi_size, + bc_key->crypto_cfg.data_unit_size)) { bio->bi_status = BLK_STS_IOERR; goto fail; } From patchwork Fri Jun 4 21:09:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12300725 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C121AC4743E for ; Fri, 4 Jun 2021 21:09:18 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ACEAD613F4 for ; Fri, 4 Jun 2021 21:09:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230161AbhFDVLE (ORCPT ); Fri, 4 Jun 2021 17:11:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51154 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230208AbhFDVLD (ORCPT ); Fri, 4 Jun 2021 17:11:03 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0DF4EC061766 for ; Fri, 4 Jun 2021 14:09:17 -0700 (PDT) Received: by mail-pj1-x104a.google.com with SMTP id s5-20020a17090a7645b029016d923cccbeso936988pjl.0 for ; Fri, 04 Jun 2021 14:09:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=56OloOiE27JD9XtSi9HokxC2e9KDiOeAmTHFCwPKpyA=; b=vbqS3zJ19To/jom+ObfOanT8EILqfeS29IQIUOHVCS13utK55kbwH+b+vma82AiCBJ xJmQBnzp58GRQH/SF4e6+f/N45nMsWR3k1Me5k2EQOFH979IVe7QbUdKtS8q9A4uA7nK 
cOYRuyuZA0iSenAhsrYh0pKnkNdH/YcyBYbm0dmNAhFKShttIqf4FggtsjKcIY1/VN7i JvPgLgAsI3GUGhZ14x/DaOjeVUGHCCn9DSbfuZ2ZzKSNBxmAMxBuUSJacpNoHPPyza3+ OSlonI2XsmDcTAWVxMoffEB/jJUCvZ2KGqXueGuy2OK8JkJz0tfcPa4IzqhvW79Hwup/ 1OEQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=56OloOiE27JD9XtSi9HokxC2e9KDiOeAmTHFCwPKpyA=; b=Dwvgbx66qFQHfsaYyVjkMAjvXVq1TR8F50XmJsaRaP6XBTJW0GGs5zOR1cwYBAqk4T HuaXGbelG8jZJeizNZfo4OiHJnLeVdss/XgIh7TXKcUwD273if0DDqJCYaPoqK8cGrCB Tbv3hAXse53YzPWGmKxM8oAPpmHRvwn2D6V02iX3Ov8/CBcYJFyJ4YGX5Lbr9nC9sDwO OYMaT61quAbNFmn0YQomxS4KJv6VR+CogcuQKUqnd+ZGRlVWcs4oiLNCQ+yutS89vMpT CFneJZG3Xnr/YT536No0uNDUS09xxAj/3TdKphJgOsVt/evHgUWadCvuH7iVsn3cXOt0 Dg0Q== X-Gm-Message-State: AOAM532iHUIZdyV8bchGeLu6qRmaca/cDEzF2O7ZEkVJXgWdQvwbyreC ub5unKEe2sQXL1FnopIKt6uwiPojLsA= X-Google-Smtp-Source: ABdhPJzc2nffU5Z54IaqxPBWN/qr29mQecS8KOrrMRqa4DEUMTWf/BcnirC+grt57kCFTPNFLENEAXhle28= X-Received: from satyaprateek.c.googlers.com ([fda3:e722:ac3:10:24:72f4:c0a8:1092]) (user=satyat job=sendgmr) by 2002:a17:90a:db0f:: with SMTP id g15mr7943874pjv.156.1622840956608; Fri, 04 Jun 2021 14:09:16 -0700 (PDT) Date: Fri, 4 Jun 2021 21:09:02 +0000 In-Reply-To: <20210604210908.2105870-1-satyat@google.com> Message-Id: <20210604210908.2105870-4-satyat@google.com> Mime-Version: 1.0 References: <20210604210908.2105870-1-satyat@google.com> X-Mailer: git-send-email 2.32.0.rc1.229.g3e70b5a671-goog Subject: [PATCH v9 3/9] fscrypt: add functions for direct I/O support From: Satya Tangirala To: "Theodore Y . Ts'o" , Jaegeuk Kim , Eric Biggers , Chao Yu , Jens Axboe , "Darrick J . Wong" Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers , Satya Tangirala Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org From: Eric Biggers Introduce fscrypt_dio_supported() to check whether a direct I/O request is unsupported due to encryption constraints. Also introduce fscrypt_limit_io_blocks() to limit how many blocks can be added to a bio being prepared for direct I/O. This is needed for filesystems that use the iomap direct I/O implementation to avoid DUN wraparound in the middle of a bio (which is possible with the IV_INO_LBLK_32 IV generation method). Elsewhere fscrypt_mergeable_bio() is used for this, but iomap operates on logical ranges directly, so filesystems using iomap won't have a chance to call fscrypt_mergeable_bio() on every block added to a bio. So we need this function which limits a logical range in one go. Signed-off-by: Eric Biggers Co-developed-by: Satya Tangirala Signed-off-by: Satya Tangirala --- fs/crypto/crypto.c | 8 +++++ fs/crypto/inline_crypt.c | 75 ++++++++++++++++++++++++++++++++++++++++ include/linux/fscrypt.h | 18 ++++++++++ 3 files changed, 101 insertions(+) diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c index 4ef3f714046a..4fcca79f39ae 100644 --- a/fs/crypto/crypto.c +++ b/fs/crypto/crypto.c @@ -69,6 +69,14 @@ void fscrypt_free_bounce_page(struct page *bounce_page) } EXPORT_SYMBOL(fscrypt_free_bounce_page); +/* + * Generate the IV for the given logical block number within the given file. + * For filenames encryption, lblk_num == 0. + * + * Keep this in sync with fscrypt_limit_io_blocks(). 
fscrypt_limit_io_blocks() + * needs to know about any IV generation methods where the low bits of IV don't + * simply contain the lblk_num (e.g., IV_INO_LBLK_32). + */ void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num, const struct fscrypt_info *ci) { diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c index c57bebfa48fe..a7ce650fbd3b 100644 --- a/fs/crypto/inline_crypt.c +++ b/fs/crypto/inline_crypt.c @@ -17,6 +17,7 @@ #include #include #include +#include #include "fscrypt_private.h" @@ -363,3 +364,77 @@ bool fscrypt_mergeable_bio_bh(struct bio *bio, return fscrypt_mergeable_bio(bio, inode, next_lblk); } EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh); + +/** + * fscrypt_dio_supported() - check whether a direct I/O request is unsupported + * due to encryption constraints + * @iocb: the file and position the I/O is targeting + * @iter: the I/O data segment(s) + * + * Return: true if direct I/O is supported + */ +bool fscrypt_dio_supported(struct kiocb *iocb, struct iov_iter *iter) +{ + const struct inode *inode = file_inode(iocb->ki_filp); + const unsigned int blocksize = i_blocksize(inode); + + /* If the file is unencrypted, no veto from us. */ + if (!fscrypt_needs_contents_encryption(inode)) + return true; + + /* We only support direct I/O with inline crypto, not fs-layer crypto */ + if (!fscrypt_inode_uses_inline_crypto(inode)) + return false; + + /* + * Since the granularity of encryption is filesystem blocks, the I/O + * must be block aligned -- not just disk sector aligned. + */ + if (!IS_ALIGNED(iocb->ki_pos | iov_iter_count(iter), blocksize)) + return false; + + return true; +} +EXPORT_SYMBOL_GPL(fscrypt_dio_supported); + +/** + * fscrypt_limit_io_blocks() - limit I/O blocks to avoid discontiguous DUNs + * @inode: the file on which I/O is being done + * @lblk: the block at which the I/O is being started from + * @nr_blocks: the number of blocks we want to submit starting at @lblk + * + * Determine the limit to the number of blocks that can be submitted in the bio + * targeting @lblk without causing a data unit number (DUN) discontinuity. + * + * This is normally just @nr_blocks, as normally the DUNs just increment along + * with the logical blocks. (Or the file is not encrypted.) + * + * In rare cases, fscrypt can be using an IV generation method that allows the + * DUN to wrap around within logically continuous blocks, and that wraparound + * will occur. If this happens, a value less than @nr_blocks will be returned + * so that the wraparound doesn't occur in the middle of the bio. + * + * Return: the actual number of blocks that can be submitted + */ +u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk, u64 nr_blocks) +{ + const struct fscrypt_info *ci = inode->i_crypt_info; + u32 dun; + + if (!fscrypt_inode_uses_inline_crypto(inode)) + return nr_blocks; + + if (nr_blocks <= 1) + return nr_blocks; + + if (!(fscrypt_policy_flags(&ci->ci_policy) & + FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)) + return nr_blocks; + + /* With IV_INO_LBLK_32, the DUN can wrap around from U32_MAX to 0. 
*/ + + dun = ci->ci_hashed_ino + lblk; + + return min_t(u64, nr_blocks, (u64)U32_MAX + 1 - dun); +} +EXPORT_SYMBOL_GPL(fscrypt_limit_io_blocks); diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h index 2ea1387bb497..d8dde02aee82 100644 --- a/include/linux/fscrypt.h +++ b/include/linux/fscrypt.h @@ -609,6 +609,10 @@ bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode, bool fscrypt_mergeable_bio_bh(struct bio *bio, const struct buffer_head *next_bh); +bool fscrypt_dio_supported(struct kiocb *iocb, struct iov_iter *iter); + +u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk, u64 nr_blocks); + #else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ static inline bool __fscrypt_inode_uses_inline_crypto(const struct inode *inode) @@ -637,6 +641,20 @@ static inline bool fscrypt_mergeable_bio_bh(struct bio *bio, { return true; } + +static inline bool fscrypt_dio_supported(struct kiocb *iocb, + struct iov_iter *iter) +{ + const struct inode *inode = file_inode(iocb->ki_filp); + + return !fscrypt_needs_contents_encryption(inode); +} + +static inline u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk, + u64 nr_blocks) +{ + return nr_blocks; +} #endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ /** From patchwork Fri Jun 4 21:09:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12300735 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BFE8C4743D for ; Fri, 4 Jun 2021 21:10:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 69414613F4 for ; Fri, 4 Jun 2021 21:10:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231473AbhFDVMG (ORCPT ); Fri, 4 Jun 2021 17:12:06 -0400 Received: from mail-qv1-f73.google.com ([209.85.219.73]:45852 "EHLO mail-qv1-f73.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231169AbhFDVMF (ORCPT ); Fri, 4 Jun 2021 17:12:05 -0400 Received: by mail-qv1-f73.google.com with SMTP id n17-20020ad444b10000b02902157677ec50so7560136qvt.12 for ; Fri, 04 Jun 2021 14:10:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=9BcqtLCkYahip03kuRnuf7gXu/E/crKmMbXBSPc/Uvg=; b=QW5Kxzl6EXIE5ugNtoOGXViSlNInd8SrlO8Beh/QzLyDbfSyMdTdmYKbRpLpu68sod XDCQBa4ui+uU1IuoV4WMm3lNyBElOUbgc7+HQyR9AIl6hQ5XEqQ1V0rji9JD/Nspg6y4 RakQjBXdNV9KepJ9OqowwQvnDS8Ouo0vJGyEJe47wFSn7tbgQ4Zh3NuuG+9FVTwsZsFd ZiYbIprwR+8eyGGwjm98fHgm3bY3Q699yk5zFWUsRxSkon77390wrHMmB8vtjUsLC5af 9+OCpw8xRk1QegJsOwD64XDcoYUtUYbbiBMJC9RgHShDwlE7+QaapU57T6I+Yt8FYaqR h71A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=9BcqtLCkYahip03kuRnuf7gXu/E/crKmMbXBSPc/Uvg=; 
b=ojeZB9ACfyKUzgwVT/U7tj47AUTgolqQORglSxKMrCR3TofD/ypPcybX1EJXiYy7sL N5XUmR2NgC0jKXwKRtH4eZ8YgSvu1rQ5et90gVcy5zTGqi5M3pvsM1QA/R+N9T2dU2FD eWOI6A52ftqmqaNSNesTUnLb0EE09ulWRgBGFKVBEixAeZnAdogbfyoqwBTi06dBWUhT nJT6h6xiwbaU5HWoiU8kh3MFIqWBmEKd9VhJiPkr+DAfNgMf0Ux9OXwgn/jWEnfAlmsT D9RmloBe8bXvEakm1Ig9yfn9ogpt7CjpL1ENtLkXpouMCMGn8XXP/2bnm5UMYHeYYqmO WtFg== X-Gm-Message-State: AOAM532Ii2pAlhCAdM0iwlBcEoGfia+ZuIsIi0XTZnLyYHrr0OliBHE8 g+7VGGO055dKQzTr9Vk5mlw/BlSt2Fc= X-Google-Smtp-Source: ABdhPJxgk02c9+fXdTtbO2iZcSrGJNMdNgq2Ji6Xc1aRp0U/sfalFFR8mjElyDxbEU1XEUxA6IbhvBZxcCk= X-Received: from satyaprateek.c.googlers.com ([fda3:e722:ac3:10:24:72f4:c0a8:1092]) (user=satyat job=sendgmr) by 2002:a0c:e84b:: with SMTP id l11mr6830971qvo.52.1622840958230; Fri, 04 Jun 2021 14:09:18 -0700 (PDT) Date: Fri, 4 Jun 2021 21:09:03 +0000 In-Reply-To: <20210604210908.2105870-1-satyat@google.com> Message-Id: <20210604210908.2105870-5-satyat@google.com> Mime-Version: 1.0 References: <20210604210908.2105870-1-satyat@google.com> X-Mailer: git-send-email 2.32.0.rc1.229.g3e70b5a671-goog Subject: [PATCH v9 4/9] direct-io: add support for fscrypt using blk-crypto From: Satya Tangirala To: "Theodore Y . Ts'o" , Jaegeuk Kim , Eric Biggers , Chao Yu , Jens Axboe , "Darrick J . Wong" Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers , Satya Tangirala Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org From: Eric Biggers Set bio crypt contexts on bios by calling into fscrypt when required, and explicitly check for DUN continuity when adding pages to the bio. (While DUN continuity is usually implied by logical block contiguity, this is not the case when using certain fscrypt IV generation methods like IV_INO_LBLK_32). Signed-off-by: Eric Biggers Co-developed-by: Satya Tangirala Signed-off-by: Satya Tangirala Reviewed-by: Jaegeuk Kim --- fs/direct-io.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/fs/direct-io.c b/fs/direct-io.c index b2e86e739d7a..328ed7ac0094 100644 --- a/fs/direct-io.c +++ b/fs/direct-io.c @@ -24,6 +24,7 @@ #include #include #include +#include #include #include #include @@ -392,6 +393,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio, sector_t first_sector, int nr_vecs) { struct bio *bio; + struct inode *inode = dio->inode; /* * bio_alloc() is guaranteed to return a bio when allowed to sleep and @@ -399,6 +401,9 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio, */ bio = bio_alloc(GFP_KERNEL, nr_vecs); + fscrypt_set_bio_crypt_ctx(bio, inode, + sdio->cur_page_fs_offset >> inode->i_blkbits, + GFP_KERNEL); bio_set_dev(bio, bdev); bio->bi_iter.bi_sector = first_sector; bio_set_op_attrs(bio, dio->op, dio->op_flags); @@ -765,9 +770,17 @@ static inline int dio_send_cur_page(struct dio *dio, struct dio_submit *sdio, * current logical offset in the file does not equal what would * be the next logical offset in the bio, submit the bio we * have. + * + * When fscrypt inline encryption is used, data unit number + * (DUN) contiguity is also required. Normally that's implied + * by logical contiguity. However, certain IV generation + * methods (e.g. IV_INO_LBLK_32) don't guarantee it. So, we + * must explicitly check fscrypt_mergeable_bio() too. 
*/ if (sdio->final_block_in_bio != sdio->cur_page_block || - cur_offset != bio_next_offset) + cur_offset != bio_next_offset || + !fscrypt_mergeable_bio(sdio->bio, dio->inode, + cur_offset >> dio->inode->i_blkbits)) dio_bio_submit(dio, sdio); } From patchwork Fri Jun 4 21:09:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12300727 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 45AA0C4743C for ; Fri, 4 Jun 2021 21:09:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 309CC61403 for ; Fri, 4 Jun 2021 21:09:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230242AbhFDVLI (ORCPT ); Fri, 4 Jun 2021 17:11:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230209AbhFDVLH (ORCPT ); Fri, 4 Jun 2021 17:11:07 -0400 Received: from mail-qv1-xf49.google.com (mail-qv1-xf49.google.com [IPv6:2607:f8b0:4864:20::f49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B8DE0C061789 for ; Fri, 4 Jun 2021 14:09:20 -0700 (PDT) Received: by mail-qv1-xf49.google.com with SMTP id z93-20020a0ca5e60000b02901ec19d8ff47so7579882qvz.8 for ; Fri, 04 Jun 2021 14:09:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=XCMW74I69DeN8Z3kX5iYnnwjp70fgDZ7aIjMCwTUzeM=; b=Mw++crRiYfOUuxBBsYJaIjEa2wzFZ7uIymgYyHPMmfqTENGSHY2XJFEFHfB0uF7oKU q+30l4Moqzicat/BsS7qK98HMHFB/NLzIAJndkfQNTeAD2gynzn1R3KcC6ecx+103TBP GDOBRtGgHEnAavWAyXnjadC5RjL1xZKdJoiqVP9LX+Baz1NH8ZskxxPR83N7wSzYmGmy AzR1MNMkb9T8NXWXKltKXehqDno9ArTvpLAmBtZwsBDFWHY2DN8mTuZF85xYugPqWMzK yDR3nmN38QXxX3Feu5LZLFS1lswICuDdid9tBrfJEby3qDOgFQZQYjh0Whmxpid95T2Z py5w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=XCMW74I69DeN8Z3kX5iYnnwjp70fgDZ7aIjMCwTUzeM=; b=fvbpgeNYme2AaOGc5v9ia/TEhrv7OPtbyAcVRcBPJCiqmAmWC++pOSsTWOsIP7jlQY JwBFBWkWoC7hfX+wy6tVY0VwE8dz5JLUAU+r6eDKWWGLK0VaA6NbZVj4sXX68J6DgCzr 3eL6vTk3BLB4DAVK2OdKP67px2cceivwqSFh7cD3jRmw2PU8SfIS8UST+TMuOkAxRZm+ /BAiBIa/J5MY5bTQBkTV+P89FT7ahXOu8FhaVmMWIbrDOEOTDGkk+CAwM65oKXjAYU22 0jfb/OCzq9UuYs4OspXGVuHVm/AVhOJyK9PKw3rug78bgdgXtu/18HUqDmw0HL8cMAFb SIjg== X-Gm-Message-State: AOAM5302V+DTft7iRL7b4DTpdmyc5u61x0c6+Bj5iSZ49xB4HkCtD9mb X6EW1P5ikF+t9Td7Es3YZUPvtMhIgew= X-Google-Smtp-Source: ABdhPJxiSLZqcPqUF+AidxN8J2rs3GJOT86onesJOcAIexrEcLNAzcay/hdWehQRGDOJrX95PrTmEboMFxw= X-Received: from satyaprateek.c.googlers.com ([fda3:e722:ac3:10:24:72f4:c0a8:1092]) (user=satyat job=sendgmr) by 2002:a05:6214:2aa3:: with SMTP id js3mr6751098qvb.56.1622840959831; Fri, 04 Jun 2021 14:09:19 -0700 (PDT) Date: Fri, 4 Jun 2021 21:09:04 +0000 In-Reply-To: 
<20210604210908.2105870-1-satyat@google.com> Message-Id: <20210604210908.2105870-6-satyat@google.com> Mime-Version: 1.0 References: <20210604210908.2105870-1-satyat@google.com> X-Mailer: git-send-email 2.32.0.rc1.229.g3e70b5a671-goog Subject: [PATCH v9 5/9] block: Make bio_iov_iter_get_pages() respect bio_required_sector_alignment() From: Satya Tangirala To: "Theodore Y . Ts'o" , Jaegeuk Kim , Eric Biggers , Chao Yu , Jens Axboe , "Darrick J . Wong" Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Satya Tangirala Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org Previously, bio_iov_iter_get_pages() wasn't used with bios that could have an encryption context. However, direct I/O support using blk-crypto introduces this possibility, so this function must now respect bio_required_sector_alignment() (otherwise, xfstests like generic/465 with ext4 will fail). Signed-off-by: Satya Tangirala Reported-by: kernel test robot Reported-by: kernel test robot --- block/bio.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/block/bio.c b/block/bio.c index 32f75f31bb5c..99c510f706e2 100644 --- a/block/bio.c +++ b/block/bio.c @@ -1099,7 +1099,8 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter) * The function tries, but does not guarantee, to pin as many pages as * fit into the bio, or are requested in @iter, whatever is smaller. If * MM encounters an error pinning the requested pages, it stops. Error - * is returned only if 0 pages could be pinned. + * is returned only if 0 pages could be pinned. It also ensures that the number + * of sectors added to the bio is aligned to bio_required_sector_alignment(). * * It's intended for direct IO, so doesn't do PSI tracking, the caller is * responsible for setting BIO_WORKINGSET if necessary. @@ -1107,6 +1108,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter) int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) { int ret = 0; + unsigned int aligned_sectors; if (iov_iter_is_bvec(iter)) { if (bio_op(bio) == REQ_OP_ZONE_APPEND) @@ -1121,6 +1123,15 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) ret = __bio_iov_iter_get_pages(bio, iter); } while (!ret && iov_iter_count(iter) && !bio_full(bio, 0)); + /* + * Ensure that number of sectors in bio is aligned to + * bio_required_sector_align() + */ + aligned_sectors = round_down(bio_sectors(bio), + bio_required_sector_alignment(bio)); + iov_iter_revert(iter, (bio_sectors(bio) - aligned_sectors) << SECTOR_SHIFT); + bio_truncate(bio, aligned_sectors << SECTOR_SHIFT); + /* don't account direct I/O as memory stall */ bio_clear_flag(bio, BIO_WORKINGSET); return bio->bi_vcnt ? 
0 : ret; From patchwork Fri Jun 4 21:09:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12300729 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 57371C4743C for ; Fri, 4 Jun 2021 21:09:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4089361406 for ; Fri, 4 Jun 2021 21:09:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230302AbhFDVLK (ORCPT ); Fri, 4 Jun 2021 17:11:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51192 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230256AbhFDVLJ (ORCPT ); Fri, 4 Jun 2021 17:11:09 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3953BC061795 for ; Fri, 4 Jun 2021 14:09:22 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id m205-20020a25d4d60000b029052a8de1fe41so13323972ybf.23 for ; Fri, 04 Jun 2021 14:09:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=RT0RrB/jxKMI6YW+PFYA3ymIrlH2UsJVKGU6y0kBnOw=; b=IlZOZ7kDYPui8ADSyBSgg53Xoy729/anXoL9HE6GnxaUrhX/OSd1UbLB0u1F+7d9Td JVxj6n9wuW+E0OZ7vDRXrTfbCD6XuwNVY9v22kKfJVNYb9WY404LeKW4yQ2tpPmzigpm V+j0KbaKVIPgWl/XtyfWFGvYkgLJlkemDaCQc25sAFQeYkHSg34+XxpgajWSGIfj4/be V52CdBnBn4uGgdqBlgjFpwgA4zIXigOLYYLcck1Ox7vAMAb0YTWh94/lpF9I2Q7F7RI/ gPD6HaFeiyqVyFUYSb1DYREYy6u9CegrBeWpsXNHM9mK3f2L+mshA5baEf6Ew6vzjwfA VpKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=RT0RrB/jxKMI6YW+PFYA3ymIrlH2UsJVKGU6y0kBnOw=; b=Ntd7E3Q+XtsdsDooOyG4muibuYNPm+gL2ozBMdBZJzoncBDuHBN0stB2HOAiO1gkhK 0KmS0ZW89vBj6qjq6uOKl+MsXXUELRg7+1CcafRjVksXeK0Lw+zDa1Z6S/DFtLfNHwPC XH3amw5wqYVVrpOuyOnGjSBBYh68MYMJsHrQM+Uk3AvgvgtikoD8X9JKhwMOpvYcqK4U +Trm4YG8nX7zBZUY0pTPdyvHLV2Vz3ezwJkoTaVGRGoAR53+UZWXLFD9Xfmpg3XSamyD oz3c4rLawaGxv5+aXnZDNPhjL4t6Y1RIcW7EbCxIRpClwssDtzoFywCYW8NsxPMTaNV4 qx4g== X-Gm-Message-State: AOAM5309rNFipOHP4CsU0vBW04TTP24chqt1KMXrLv9MmezIb/gtRc++ 6ayLV6IvJmOI4oQNbz12QLHmqGxfA5Y= X-Google-Smtp-Source: ABdhPJzJE4zslyGfGvcDlz07boMQ5nRjs8yyzCWvA2MMz8AlycCO4TqXJ8BnIMpWw/eg+X7rMUBks6idOm4= X-Received: from satyaprateek.c.googlers.com ([fda3:e722:ac3:10:24:72f4:c0a8:1092]) (user=satyat job=sendgmr) by 2002:a25:8185:: with SMTP id p5mr7728654ybk.54.1622840961400; Fri, 04 Jun 2021 14:09:21 -0700 (PDT) Date: Fri, 4 Jun 2021 21:09:05 +0000 In-Reply-To: <20210604210908.2105870-1-satyat@google.com> Message-Id: <20210604210908.2105870-7-satyat@google.com> Mime-Version: 1.0 References: <20210604210908.2105870-1-satyat@google.com> X-Mailer: git-send-email 2.32.0.rc1.229.g3e70b5a671-goog Subject: [PATCH v9 6/9] iomap: 
support direct I/O with fscrypt using blk-crypto From: Satya Tangirala To: "Theodore Y . Ts'o" , Jaegeuk Kim , Eric Biggers , Chao Yu , Jens Axboe , "Darrick J . Wong" Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers , Satya Tangirala Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org From: Eric Biggers Set bio crypt contexts on bios by calling into fscrypt when required. No DUN contiguity checks are done - callers are expected to set up the iomap correctly to ensure that each bio submitted by iomap will not have blocks with incontiguous DUNs by calling fscrypt_limit_io_blocks() appropriately. Signed-off-by: Eric Biggers Co-developed-by: Satya Tangirala Signed-off-by: Satya Tangirala Acked-by: Darrick J. Wong --- fs/iomap/direct-io.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c index 9398b8c31323..1c825deb36a9 100644 --- a/fs/iomap/direct-io.c +++ b/fs/iomap/direct-io.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -185,11 +186,14 @@ static void iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos, unsigned len) { + struct inode *inode = file_inode(dio->iocb->ki_filp); struct page *page = ZERO_PAGE(0); int flags = REQ_SYNC | REQ_IDLE; struct bio *bio; bio = bio_alloc(GFP_KERNEL, 1); + fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits, + GFP_KERNEL); bio_set_dev(bio, iomap->bdev); bio->bi_iter.bi_sector = iomap_sector(iomap, pos); bio->bi_private = dio; @@ -306,6 +310,8 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, } bio = bio_alloc(GFP_KERNEL, nr_pages); + fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits, + GFP_KERNEL); bio_set_dev(bio, iomap->bdev); bio->bi_iter.bi_sector = iomap_sector(iomap, pos); bio->bi_write_hint = dio->iocb->ki_hint; From patchwork Fri Jun 4 21:09:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12300737 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C7F5C4743F for ; Fri, 4 Jun 2021 21:10:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6F9E9613F4 for ; Fri, 4 Jun 2021 21:10:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231532AbhFDVML (ORCPT ); Fri, 4 Jun 2021 17:12:11 -0400 Received: from mail-pg1-f201.google.com ([209.85.215.201]:57101 "EHLO mail-pg1-f201.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231521AbhFDVMK (ORCPT ); Fri, 4 Jun 2021 17:12:10 -0400 Received: by mail-pg1-f201.google.com with SMTP id 28-20020a63135c0000b029021b78388f01so6666782pgt.23 for ; Fri, 04 Jun 2021 14:10:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; 
Date: Fri, 4 Jun 2021 21:09:06 +0000
In-Reply-To: <20210604210908.2105870-1-satyat@google.com>
Message-Id: <20210604210908.2105870-8-satyat@google.com>
References: <20210604210908.2105870-1-satyat@google.com>
Subject: [PATCH v9 7/9] ext4: support direct I/O with fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe, "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers, Satya Tangirala

From: Eric Biggers

Wire up ext4 with fscrypt direct I/O support. Direct I/O with fscrypt is only
supported through blk-crypto (i.e. CONFIG_BLK_INLINE_ENCRYPTION must have been
enabled, the 'inlinecrypt' mount option must have been specified, and either
hardware inline encryption support must be present or
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK must have been enabled). Further, direct
I/O on encrypted files is only supported when the *length* of the I/O is
aligned to the filesystem block size (which is *not* necessarily the same as
the block device's block size).

fscrypt_limit_io_blocks() is called before setting up the iomap to ensure that
the blocks of each bio that iomap will submit will have contiguous DUNs. Note
that fscrypt_limit_io_blocks() is normally a no-op, as normally the DUNs simply
increment along with the logical blocks. But it's needed to handle an edge case
in one of the fscrypt IV generation methods.
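To make the IV_INO_LBLK_32 edge case concrete, here is a small standalone
sketch of the wraparound arithmetic that fscrypt_limit_io_blocks() (added in
patch 3 of this series) performs, ignoring the policy and inline-crypto
checks. It is a userspace toy model, not the kernel code; the helper name and
the sample values are invented for illustration.

#include <stdint.h>
#include <stdio.h>

/*
 * Toy model: with IV_INO_LBLK_32, the DUN of logical block lblk is the low
 * 32 bits of (hashed_ino + lblk), so it wraps from UINT32_MAX back to 0.
 * A single bio must not cross that wrap point, so at most
 * UINT32_MAX + 1 - dun blocks may be submitted starting at lblk.
 */
static uint64_t toy_limit_io_blocks(uint32_t hashed_ino, uint64_t lblk,
				    uint64_t nr_blocks)
{
	uint32_t dun = hashed_ino + (uint32_t)lblk;	/* wraps mod 2^32 */
	uint64_t limit = (uint64_t)UINT32_MAX + 1 - dun;

	if (nr_blocks <= 1)
		return nr_blocks;
	return nr_blocks < limit ? nr_blocks : limit;
}

int main(void)
{
	/* DUN of the first block is UINT32_MAX - 1, so only 2 blocks fit. */
	printf("%llu\n", (unsigned long long)
	       toy_limit_io_blocks(UINT32_MAX - 10, 9, 8));	/* prints 2 */
	return 0;
}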
Signed-off-by: Eric Biggers Co-developed-by: Satya Tangirala Signed-off-by: Satya Tangirala Reviewed-by: Jaegeuk Kim Acked-by: Theodore Ts'o --- fs/ext4/file.c | 10 ++++++---- fs/ext4/inode.c | 7 +++++++ 2 files changed, 13 insertions(+), 4 deletions(-) diff --git a/fs/ext4/file.c b/fs/ext4/file.c index 816dedcbd541..a2898a496c4e 100644 --- a/fs/ext4/file.c +++ b/fs/ext4/file.c @@ -36,9 +36,11 @@ #include "acl.h" #include "truncate.h" -static bool ext4_dio_supported(struct inode *inode) +static bool ext4_dio_supported(struct kiocb *iocb, struct iov_iter *iter) { - if (IS_ENABLED(CONFIG_FS_ENCRYPTION) && IS_ENCRYPTED(inode)) + struct inode *inode = file_inode(iocb->ki_filp); + + if (!fscrypt_dio_supported(iocb, iter)) return false; if (fsverity_active(inode)) return false; @@ -61,7 +63,7 @@ static ssize_t ext4_dio_read_iter(struct kiocb *iocb, struct iov_iter *to) inode_lock_shared(inode); } - if (!ext4_dio_supported(inode)) { + if (!ext4_dio_supported(iocb, to)) { inode_unlock_shared(inode); /* * Fallback to buffered I/O if the operation being performed on @@ -511,7 +513,7 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from) } /* Fallback to buffered I/O if the inode does not support direct I/O. */ - if (!ext4_dio_supported(inode)) { + if (!ext4_dio_supported(iocb, from)) { if (ilock_shared) inode_unlock_shared(inode); else diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index fe6045a46599..fe8006efb5ef 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3481,6 +3481,13 @@ static int ext4_iomap_begin(struct inode *inode, loff_t offset, loff_t length, if (ret < 0) return ret; out: + /* + * When inline encryption is enabled, sometimes I/O to an encrypted file + * has to be broken up to guarantee DUN contiguity. Handle this by + * limiting the length of the mapping returned. 
+ */ + map.m_len = fscrypt_limit_io_blocks(inode, map.m_lblk, map.m_len); + ext4_set_iomap(inode, iomap, &map, offset, length); return 0; From patchwork Fri Jun 4 21:09:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 12300731 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 556EDC4743D for ; Fri, 4 Jun 2021 21:09:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 427B7613F4 for ; Fri, 4 Jun 2021 21:09:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230424AbhFDVLP (ORCPT ); Fri, 4 Jun 2021 17:11:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230389AbhFDVLM (ORCPT ); Fri, 4 Jun 2021 17:11:12 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ED4C6C061795 for ; Fri, 4 Jun 2021 14:09:25 -0700 (PDT) Received: by mail-pj1-x104a.google.com with SMTP id mw15-20020a17090b4d0fb0290157199aadbaso8374959pjb.7 for ; Fri, 04 Jun 2021 14:09:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=P+DRV+1kr28zVJ/U3wDOfdSPd3HkANrRfuFzxqlC4Pc=; b=BD3ZmfBd88tZgrK5VWyAdL43R8ifUsR8x6sFdE82nRuDKynYAWDrzE4ViAPmykHBE4 iyytiZIYdAESIcvzY6xdWybqTUGCNisGZiXWYfKieKbymZmV/yt7zvjY9NFl3juibb/s ikEDpwDO3GTB7QBr/5PZz/yTynDJQ+EVyDbqXYxqVspW4jHjr6GfHmb6aePCiI+9M6Vd nAWhQ+RRLTv2OXSWhOZSnRNx4+nvq3NfPSSnycc4sLlhRuBgnNb++2d32IlTONdzLlpM 5aRbbSRXEHIXxwZ6wYXd6/i1swYlGsa4c2M1WAwUuXtMLF2EffxUpGOmrT0pPALk2QKG mb2w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=P+DRV+1kr28zVJ/U3wDOfdSPd3HkANrRfuFzxqlC4Pc=; b=gfGu6mX+Xk9ONwy+q1x2U0zKZYdlJ/kohAH6EeYW0ouQ3rs/71MHY8FwOzZDMdsEfy DmHctIKOn+nhKZLpmXZZ+FxSNwFuJoGr0DGCprfq0Hsskh78/XMqsrBZgtJ6IPT6jfSI lqkaNxfCpKK8iG3CvHOYIvxz1MJ/O9rwRhn8sdHcag+6CnHqdwVA5lkVyk52572u43J8 A0DNvTEUXT35R/SRRfbuEpDkl1nSl/NGuDbBmFUAQ0CkkXzDfaAxgLtejjSpvUoCH0k2 Od2XTUxzY9IkzyuydafVLZOJRLJydr3uZGi1nXIBJ44AQfMeInz1fg5xkM4sU3tnl/h0 fTxw== X-Gm-Message-State: AOAM533Ugn7nDRxfsT/OTADeFJZwCQXImI8LtQ32mu0NsNtZJpkVkSCw uVzk4z5iUalPEtYKY4Ip12eNPHqiCDQ= X-Google-Smtp-Source: ABdhPJxmL8KM/5Kv0n8tQo795U2DFcdkGRAYUhSqHM6sj42QWQ+C3ACMCn3W2CeCnI48cqnYWj3LpGGCTK0= X-Received: from satyaprateek.c.googlers.com ([fda3:e722:ac3:10:24:72f4:c0a8:1092]) (user=satyat job=sendgmr) by 2002:a63:1e55:: with SMTP id p21mr6828807pgm.412.1622840965137; Fri, 04 Jun 2021 14:09:25 -0700 (PDT) Date: Fri, 4 Jun 2021 21:09:07 +0000 In-Reply-To: <20210604210908.2105870-1-satyat@google.com> Message-Id: <20210604210908.2105870-9-satyat@google.com> Mime-Version: 1.0 
References: <20210604210908.2105870-1-satyat@google.com>
Subject: [PATCH v9 8/9] f2fs: support direct I/O with fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe, "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers, Satya Tangirala

From: Eric Biggers

Wire up f2fs with fscrypt direct I/O support. Direct I/O with fscrypt is only
supported through blk-crypto (i.e. CONFIG_BLK_INLINE_ENCRYPTION must have been
enabled, the 'inlinecrypt' mount option must have been specified, and either
hardware inline encryption support must be present or
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK must have been enabled). Further, direct
I/O on encrypted files is only supported when the *length* of the I/O is
aligned to the filesystem block size (which is *not* necessarily the same as
the block device's block size).

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
Acked-by: Jaegeuk Kim
---
 fs/f2fs/f2fs.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index c83d90125ebd..a416ea3a1a04 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -4181,7 +4181,11 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	int rw = iov_iter_rw(iter);
 
-	if (f2fs_post_read_required(inode))
+	if (!fscrypt_dio_supported(iocb, iter))
+		return true;
+	if (fsverity_active(inode))
+		return true;
+	if (f2fs_compressed_file(inode))
 		return true;
 	if (f2fs_is_multi_device(sbi))
 		return true;

From patchwork Fri Jun 4 21:09:08 2021
Date: Fri, 4 Jun 2021 21:09:08 +0000
In-Reply-To: <20210604210908.2105870-1-satyat@google.com>
Message-Id: <20210604210908.2105870-10-satyat@google.com>
Mime-Version: 1.0
From patchwork Fri Jun 4 21:09:08 2021
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 12300741
Date: Fri, 4 Jun 2021 21:09:08 +0000
In-Reply-To: <20210604210908.2105870-1-satyat@google.com>
Message-Id: <20210604210908.2105870-10-satyat@google.com>
References: <20210604210908.2105870-1-satyat@google.com>
Subject: [PATCH v9 9/9] fscrypt: update documentation for direct I/O support
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe, "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Satya Tangirala, Eric Biggers
X-Mailing-List: linux-fscrypt@vger.kernel.org

Update fscrypt documentation to reflect the addition of direct I/O
support and document the necessary conditions for direct I/O on
encrypted files.

Signed-off-by: Satya Tangirala
Reviewed-by: Eric Biggers
Reviewed-by: Jaegeuk Kim
---
 Documentation/filesystems/fscrypt.rst | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
index 44b67ebd6e40..c0c1747fa2fb 100644
--- a/Documentation/filesystems/fscrypt.rst
+++ b/Documentation/filesystems/fscrypt.rst
@@ -1047,8 +1047,10 @@ astute users may notice some differences in behavior:
   may be used to overwrite the source files but isn't guaranteed to be
   effective on all filesystems and storage devices.
 
-- Direct I/O is not supported on encrypted files. Attempts to use
-  direct I/O on such files will fall back to buffered I/O.
+- Direct I/O is supported on encrypted files only under some
+  circumstances (see `Direct I/O support`_ for details). When these
+  circumstances are not met, attempts to use direct I/O on encrypted
+  files will fall back to buffered I/O.
 
 - The fallocate operations FALLOC_FL_COLLAPSE_RANGE and
   FALLOC_FL_INSERT_RANGE are not supported on encrypted files and will
@@ -1121,6 +1123,21 @@ It is not currently possible to backup and restore encrypted files
 without the encryption key. This would require special APIs which
 have not yet been implemented.
 
+Direct I/O support
+==================
+
+Direct I/O on encrypted files is supported through blk-crypto. In
+particular, this means the kernel must have CONFIG_BLK_INLINE_ENCRYPTION
+enabled, the filesystem must have had the 'inlinecrypt' mount option
+specified, and either hardware inline encryption must be present, or
+CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK must have been enabled. Further,
+the starting position in the file and the length of any I/O must be aligned
+to the filesystem block size (*not* necessarily the same as the block
+device's block size). If any of these conditions isn't met, attempts to do
+direct I/O on an encrypted file will fall back to buffered I/O. However,
+there aren't any additional requirements on user buffer alignment (apart
+from those already present when using direct I/O on unencrypted files).
+
 Encryption policy enforcement
 =============================
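[Editorial note: as a userspace illustration of the alignment rule documented above, the minimal sketch below performs an O_DIRECT read with the file offset and I/O length both aligned to the filesystem block size. It is not part of the patch; the file path is a placeholder, and st_blksize is assumed to report the filesystem block size, as it does on ext4 and f2fs.]

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* Placeholder path: a file on a filesystem mounted with -o inlinecrypt. */
	const char *path = argc > 1 ? argv[1] : "/mnt/encrypted/file";
	struct stat st;
	void *buf;
	ssize_t ret;

	int fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0 || fstat(fd, &st) != 0) {
		perror("open/fstat");
		return 1;
	}

	/*
	 * st_blksize is the preferred I/O block size; on ext4 and f2fs it
	 * matches the filesystem block size that governs the rule above.
	 * Aligning the buffer to it also satisfies the usual O_DIRECT
	 * buffer-alignment requirements.
	 */
	if (posix_memalign(&buf, st.st_blksize, st.st_blksize)) {
		fprintf(stderr, "posix_memalign failed\n");
		close(fd);
		return 1;
	}

	/*
	 * Offset 0 and length st_blksize are both multiples of the
	 * filesystem block size, so this I/O can stay on the direct path
	 * (given the blk-crypto conditions described above); a misaligned
	 * offset or length would transparently fall back to buffered I/O.
	 */
	ret = pread(fd, buf, st.st_blksize, 0);
	printf("pread returned %zd\n", ret);

	free(buf);
	close(fd);
	return 0;
}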