From patchwork Tue Nov 17 14:07:01 2020
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11912441
Date: Tue, 17 Nov 2020 14:07:01 +0000
In-Reply-To: <20201117140708.1068688-1-satyat@google.com>
Message-Id: <20201117140708.1068688-2-satyat@google.com>
References: <20201117140708.1068688-1-satyat@google.com>
Subject: [PATCH v7 1/8] block: ensure bios are not split in middle of crypto data unit
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Satya Tangirala

Introduce blk_crypto_bio_sectors_alignment(), which returns the required
alignment for the number of sectors in a bio. Any bio split must ensure
that the number of sectors in the resulting bios is aligned to that
returned value.

Also update __blk_queue_split(), __blk_queue_bounce() and
blk_crypto_split_bio_if_needed() to respect
blk_crypto_bio_sectors_alignment() when splitting bios.

Signed-off-by: Satya Tangirala
---
 block/bio.c                 |  1 +
 block/blk-crypto-fallback.c | 10 ++--
 block/blk-crypto-internal.h | 18 +++++++
 block/blk-merge.c           | 96 ++++++++++++++++++++++++++++++++-----
 block/blk-mq.c              |  3 ++
 block/bounce.c              |  4 ++
 6 files changed, 117 insertions(+), 15 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index fa01bef35bb1..259cef126df3 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1472,6 +1472,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
 	BUG_ON(sectors <= 0);
 	BUG_ON(sectors >= bio_sectors(bio));
+	WARN_ON(!IS_ALIGNED(sectors, blk_crypto_bio_sectors_alignment(bio)));
 
 	/* Zone append commands cannot be split */
 	if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND))
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index c162b754efbd..db2d2c67b308 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -209,20 +209,22 @@ static bool blk_crypto_alloc_cipher_req(struct blk_ksm_keyslot *slot,
 static bool blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
 {
 	struct bio *bio = *bio_ptr;
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
 	unsigned int i = 0;
-	unsigned int num_sectors = 0;
+	unsigned int len = 0;
 	struct bio_vec bv;
 	struct bvec_iter iter;
 
 	bio_for_each_segment(bv, bio, iter) {
-		num_sectors += bv.bv_len >> SECTOR_SHIFT;
+		len += bv.bv_len;
 		if (++i == BIO_MAX_PAGES)
 			break;
 	}
-	if (num_sectors < bio_sectors(bio)) {
+	if (len < bio->bi_iter.bi_size) {
 		struct bio *split_bio;
 
-		split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL);
+		len = round_down(len, bc->bc_key->crypto_cfg.data_unit_size);
+		split_bio = bio_split(bio, len >> SECTOR_SHIFT, GFP_NOIO, NULL);
 		if (!split_bio) {
 			bio->bi_status = BLK_STS_RESOURCE;
 			return false;
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index 0d36aae538d7..304e90ed99f5 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -60,6 +60,19 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
 	return rq->crypt_ctx;
 }
 
+/*
+ * Returns the alignment requirement for the number of sectors in this bio,
+ * based on its bi_crypt_context. Any bios split from this bio must follow
+ * this alignment requirement as well.
+ */
+static inline unsigned int blk_crypto_bio_sectors_alignment(struct bio *bio)
+{
+	if (!bio_has_crypt_ctx(bio))
+		return 1;
+	return bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size >>
+	       SECTOR_SHIFT;
+}
+
 #else /* CONFIG_BLK_INLINE_ENCRYPTION */
 
 static inline bool bio_crypt_rq_ctx_compatible(struct request *rq,
@@ -93,6 +106,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
 	return false;
 }
 
+static inline unsigned int blk_crypto_bio_sectors_alignment(struct bio *bio)
+{
+	return 1;
+}
+
 #endif /* CONFIG_BLK_INLINE_ENCRYPTION */
 
 void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index bcf5e4580603..f34dda7132f9 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -149,13 +149,15 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 	unsigned pbs = queue_physical_block_size(q) >> SECTOR_SHIFT;
 	unsigned lbs = queue_logical_block_size(q) >> SECTOR_SHIFT;
 	unsigned start_offset = bio->bi_iter.bi_sector & (pbs - 1);
+	unsigned int bio_sectors_alignment =
+				blk_crypto_bio_sectors_alignment(bio);
 
 	max_sectors += start_offset;
 	max_sectors &= ~(pbs - 1);
-	if (max_sectors > start_offset)
-		return max_sectors - start_offset;
+	if (max_sectors - start_offset >= bio_sectors_alignment)
+		return round_down(max_sectors - start_offset, bio_sectors_alignment);
 
-	return sectors & ~(lbs - 1);
+	return round_down(sectors & ~(lbs - 1), bio_sectors_alignment);
 }
 
 static inline unsigned get_max_segment_size(const struct request_queue *q,
@@ -174,6 +176,41 @@ static inline unsigned get_max_segment_size(const struct request_queue *q,
 			(unsigned long)queue_max_segment_size(q));
 }
 
+/**
+ * update_aligned_sectors_and_segs() - Ensure that *@aligned_sectors is
+ *				       aligned to @bio_sectors_alignment, and
+ *				       that *@aligned_segs is the value of
+ *				       nsegs when sectors first
+ *				       reached/exceeded that value of
+ *				       *@aligned_sectors.
+ *
+ * @nsegs: [in] The current number of segs
+ * @sectors: [in] The current number of sectors
+ * @aligned_segs: [in,out] The number of segments that make up @aligned_sectors
+ * @aligned_sectors: [in,out] The largest number of sectors <= @sectors that is
+ *		     aligned to @bio_sectors_alignment
+ * @bio_sectors_alignment: [in] The alignment requirement for the number of
+ *			   sectors
+ *
+ * Updates *@aligned_sectors to the largest number <= @sectors that is also a
+ * multiple of @bio_sectors_alignment. This is done by updating
+ * *@aligned_sectors whenever @sectors is at least @bio_sectors_alignment more
+ * than *@aligned_sectors, since that means we can increment *@aligned_sectors
+ * while still keeping it aligned to @bio_sectors_alignment and also keeping
+ * it <= @sectors. *@aligned_segs is updated to the value of nsegs when
+ * @sectors first reaches/exceeds any value that causes *@aligned_sectors to
+ * be updated.
+ */
+static inline void update_aligned_sectors_and_segs(const unsigned int nsegs,
+						   const unsigned int sectors,
+						   unsigned int *aligned_segs,
+						   unsigned int *aligned_sectors,
+						   const unsigned int bio_sectors_alignment)
+{
+	if (sectors - *aligned_sectors < bio_sectors_alignment)
+		return;
+	*aligned_sectors = round_down(sectors, bio_sectors_alignment);
+	*aligned_segs = nsegs;
+}
+
 /**
  * bvec_split_segs - verify whether or not a bvec should be split in the middle
  * @q:        [in] request queue associated with the bio associated with @bv
@@ -195,9 +232,12 @@ static inline unsigned get_max_segment_size(const struct request_queue *q,
  * the block driver.
  */
 static bool bvec_split_segs(const struct request_queue *q,
-			    const struct bio_vec *bv, unsigned *nsegs,
-			    unsigned *sectors, unsigned max_segs,
-			    unsigned max_sectors)
+			    const struct bio_vec *bv, unsigned int *nsegs,
+			    unsigned int *sectors, unsigned int *aligned_segs,
+			    unsigned int *aligned_sectors,
+			    unsigned int bio_sectors_alignment,
+			    unsigned int max_segs,
+			    unsigned int max_sectors)
 {
 	unsigned max_len = (min(max_sectors, UINT_MAX >> 9) - *sectors) << 9;
 	unsigned len = min(bv->bv_len, max_len);
@@ -211,6 +251,11 @@ static bool bvec_split_segs(const struct request_queue *q,
 		(*nsegs)++;
 		total_len += seg_size;
+		update_aligned_sectors_and_segs(*nsegs,
+						*sectors + (total_len >> 9),
+						aligned_segs,
+						aligned_sectors,
+						bio_sectors_alignment);
 
 		len -= seg_size;
 
 		if ((bv->bv_offset + total_len) & queue_virt_boundary(q))
@@ -235,6 +280,8 @@ static bool bvec_split_segs(const struct request_queue *q,
  * following is guaranteed for the cloned bio:
  * - That it has at most get_max_io_size(@q, @bio) sectors.
  * - That it has at most queue_max_segments(@q) segments.
+ * - That the number of sectors in the returned bio is aligned to
+ *   blk_crypto_bio_sectors_alignment(@bio)
  *
  * Except for discard requests the cloned bio will point at the bi_io_vec of
  * the original bio. It is the responsibility of the caller to ensure that the
@@ -252,6 +299,9 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	unsigned nsegs = 0, sectors = 0;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 	const unsigned max_segs = queue_max_segments(q);
+	const unsigned int bio_sectors_alignment =
+				blk_crypto_bio_sectors_alignment(bio);
+	unsigned int aligned_segs = 0, aligned_sectors = 0;
 
 	bio_for_each_bvec(bv, bio, iter) {
 		/*
@@ -266,8 +316,14 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		    bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
 			nsegs++;
 			sectors += bv.bv_len >> 9;
-		} else if (bvec_split_segs(q, &bv, &nsegs, &sectors, max_segs,
-			   max_sectors)) {
+			update_aligned_sectors_and_segs(nsegs, sectors,
+							&aligned_segs,
+							&aligned_sectors,
+							bio_sectors_alignment);
+		} else if (bvec_split_segs(q, &bv, &nsegs, &sectors,
+					   &aligned_segs, &aligned_sectors,
+					   bio_sectors_alignment, max_segs,
+					   max_sectors)) {
 			goto split;
 		}
 
@@ -275,11 +331,24 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		bvprvp = &bvprv;
 	}
 
+	/*
+	 * The input bio's number of sectors is assumed to be aligned to
+	 * bio_sectors_alignment. If that's the case, then this function should
+	 * ensure that aligned_segs == nsegs and aligned_sectors == sectors if
+	 * the bio is not going to be split.
+	 */
+	WARN_ON(aligned_segs != nsegs || aligned_sectors != sectors);
 	*segs = nsegs;
 	return NULL;
 split:
-	*segs = nsegs;
-	return bio_split(bio, sectors, GFP_NOIO, bs);
+	*segs = aligned_segs;
+	if (WARN_ON(aligned_sectors == 0))
+		goto err;
+	return bio_split(bio, aligned_sectors, GFP_NOIO, bs);
+err:
+	bio->bi_status = BLK_STS_IOERR;
+	bio_endio(bio);
+	return bio;
 }
 
 /**
@@ -366,6 +435,9 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 {
 	unsigned int nr_phys_segs = 0;
 	unsigned int nr_sectors = 0;
+	unsigned int nr_aligned_phys_segs = 0;
+	unsigned int nr_aligned_sectors = 0;
+	unsigned int bio_sectors_alignment;
 	struct req_iterator iter;
 	struct bio_vec bv;
 
@@ -381,9 +453,11 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 		return 1;
 	}
 
+	bio_sectors_alignment = blk_crypto_bio_sectors_alignment(rq->bio);
 	rq_for_each_bvec(bv, rq, iter)
 		bvec_split_segs(rq->q, &bv, &nr_phys_segs, &nr_sectors,
-				UINT_MAX, UINT_MAX);
+				&nr_aligned_phys_segs, &nr_aligned_sectors,
+				bio_sectors_alignment, UINT_MAX, UINT_MAX);
 
 	return nr_phys_segs;
 }
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 55bcee5dc032..de5c97ab8e5a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2161,6 +2161,9 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
 	blk_queue_bounce(q, &bio);
 	__blk_queue_split(&bio, &nr_segs);
 
+	if (bio->bi_status != BLK_STS_OK)
+		goto queue_exit;
+
 	if (!bio_integrity_prep(bio))
 		goto queue_exit;
diff --git a/block/bounce.c b/block/bounce.c
index 162a6eee8999..b15224799008 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -295,6 +295,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 	bool bounce = false;
 	int sectors = 0;
 	bool passthrough = bio_is_passthrough(*bio_orig);
+	unsigned int bio_sectors_alignment;
 
 	bio_for_each_segment(from, *bio_orig, iter) {
 		if (i++ < BIO_MAX_PAGES)
@@ -305,6 +306,9 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 	if (!bounce)
 		return;
 
+	bio_sectors_alignment = blk_crypto_bio_sectors_alignment(bio);
+	sectors = round_down(sectors, bio_sectors_alignment);
+
 	if (!passthrough && sectors < bio_sectors(*bio_orig)) {
 		bio = bio_split(*bio_orig, sectors, GFP_NOIO,
 				&bounce_bio_split);
 		bio_chain(bio, *bio_orig);
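The rule this patch enforces is pure arithmetic: a bio with an inline-crypto
context may only be split at a multiple of data_unit_size >> SECTOR_SHIFT
sectors. A minimal userspace sketch of that rounding (round_down() here is a
stand-in for the kernel macro, which requires a power-of-2 alignment; the
sizes are made-up examples, not values from the patch):

#include <assert.h>
#include <stdio.h>

#define SECTOR_SHIFT	9
/* Stand-in for the kernel's round_down(); power-of-2 alignment assumed. */
#define round_down(x, a)	((x) & ~((a) - 1))

int main(void)
{
	unsigned int data_unit_size = 4096;	/* hypothetical, in bytes */
	unsigned int align = data_unit_size >> SECTOR_SHIFT;	/* 8 sectors */
	unsigned int max_split = 13;	/* largest split the queue allows */

	/* Splitting at 13 sectors would cut the second 4K data unit in
	 * half; rounding down to 8 keeps whole data units on each side. */
	unsigned int split = round_down(max_split, align);

	assert(split == 8);
	printf("split at %u sectors instead of %u\n", split, max_split);
	return 0;
}

The same round_down() shows up above in blk_crypto_split_bio_if_needed(),
get_max_io_size() and __blk_queue_bounce().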
From patchwork Tue Nov 17 14:07:02 2020
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11912445
Date: Tue, 17 Nov 2020 14:07:02 +0000
In-Reply-To: <20201117140708.1068688-1-satyat@google.com>
Message-Id: <20201117140708.1068688-3-satyat@google.com>
References: <20201117140708.1068688-1-satyat@google.com>
Subject: [PATCH v7 2/8] blk-crypto: don't require user buffer alignment
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Satya Tangirala,
 Eric Biggers

Previously, blk-crypto-fallback required the offset and length of each bvec
in a bio to be aligned to the crypto data unit size. This patch enables
blk-crypto-fallback to work even if that's not the case - the requirement
now is only that the total length of the data in the bio is aligned to the
crypto data unit size.

Now that blk-crypto-fallback can handle bvecs not aligned to crypto data
units, and we've ensured that bios are not split in the middle of a crypto
data unit, we can relax the alignment check done by blk-crypto.
Co-developed-by: Eric Biggers
Signed-off-by: Eric Biggers
Signed-off-by: Satya Tangirala
---
 block/blk-crypto-fallback.c | 202 +++++++++++++++++++++++++++---------
 block/blk-crypto.c          |  19 +---
 2 files changed, 157 insertions(+), 64 deletions(-)

diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index db2d2c67b308..619f0746ce02 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -251,6 +251,65 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
 		iv->dun[i] = cpu_to_le64(dun[i]);
 }
 
+/*
+ * If the length of any bio segment isn't a multiple of data_unit_size
+ * (which can happen if data_unit_size > logical_block_size), then each
+ * encryption/decryption might need to be passed multiple scatterlist
+ * elements. If that will be the case, this function allocates and
+ * initializes src and dst scatterlists (or a combined src/dst scatterlist)
+ * with the needed length.
+ *
+ * If 1 element is guaranteed to be enough (which is usually the case, and
+ * is guaranteed when data_unit_size <= logical_block_size), then this
+ * function just initializes the on-stack scatterlist(s).
+ */
+static bool blk_crypto_alloc_sglists(struct bio *bio,
+				     const struct bvec_iter *start_iter,
+				     unsigned int data_unit_size,
+				     struct scatterlist **src_p,
+				     struct scatterlist **dst_p)
+{
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	bool aligned = true;
+	unsigned int count = 0;
+
+	__bio_for_each_segment(bv, bio, iter, *start_iter) {
+		count++;
+		aligned &= IS_ALIGNED(bv.bv_len, data_unit_size);
+	}
+	if (aligned) {
+		count = 1;
+	} else {
+		/*
+		 * We can't need more elements than bio segments, and we can't
+		 * need more than the number of sectors per data unit. This
+		 * may overestimate the required length by a bit, but that's
+		 * okay.
+		 */
+		count = min(count, data_unit_size >> SECTOR_SHIFT);
+	}
+
+	if (count > 1) {
+		*src_p = kmalloc_array(count, sizeof(struct scatterlist),
+				       GFP_NOIO);
+		if (!*src_p)
+			return false;
+		if (dst_p) {
+			*dst_p = kmalloc_array(count,
+					       sizeof(struct scatterlist),
+					       GFP_NOIO);
+			if (!*dst_p) {
+				kfree(*src_p);
+				*src_p = NULL;
+				return false;
+			}
+		}
+	}
+	sg_init_table(*src_p, count);
+	if (dst_p)
+		sg_init_table(*dst_p, count);
+	return true;
+}
+
 /*
  * The crypto API fallback's encryption routine.
  * Allocate a bounce bio for encryption, encrypt the input bio using crypto API,
@@ -267,9 +326,12 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 	struct skcipher_request *ciph_req = NULL;
 	DECLARE_CRYPTO_WAIT(wait);
 	u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
-	struct scatterlist src, dst;
+	struct scatterlist _src, *src = &_src;
+	struct scatterlist _dst, *dst = &_dst;
 	union blk_crypto_iv iv;
-	unsigned int i, j;
+	unsigned int i;
+	unsigned int sg_idx = 0;
+	unsigned int du_filled = 0;
 	bool ret = false;
 	blk_status_t blk_st;
 
@@ -281,11 +343,18 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 	bc = src_bio->bi_crypt_context;
 	data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
 
+	/* Allocate scatterlists if needed */
+	if (!blk_crypto_alloc_sglists(src_bio, &src_bio->bi_iter,
+				      data_unit_size, &src, &dst)) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		return false;
+	}
+
 	/* Allocate bounce bio for encryption */
 	enc_bio = blk_crypto_clone_bio(src_bio);
 	if (!enc_bio) {
 		src_bio->bi_status = BLK_STS_RESOURCE;
-		return false;
+		goto out_free_sglists;
 	}
 
 	/*
@@ -305,45 +374,57 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 	}
 	memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
 
-	sg_init_table(&src, 1);
-	sg_init_table(&dst, 1);
-	skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size,
+	skcipher_request_set_crypt(ciph_req, src, dst, data_unit_size,
 				   iv.bytes);
 
-	/* Encrypt each page in the bounce bio */
+	/*
+	 * Encrypt each data unit in the bounce bio.
+	 *
+	 * Take care to handle the case where a data unit spans bio segments.
+	 * This can happen when data_unit_size > logical_block_size.
+	 */
 	for (i = 0; i < enc_bio->bi_vcnt; i++) {
-		struct bio_vec *enc_bvec = &enc_bio->bi_io_vec[i];
-		struct page *plaintext_page = enc_bvec->bv_page;
+		struct bio_vec *bv = &enc_bio->bi_io_vec[i];
+		struct page *plaintext_page = bv->bv_page;
 		struct page *ciphertext_page =
 			mempool_alloc(blk_crypto_bounce_page_pool, GFP_NOIO);
+		unsigned int offset_in_bv = 0;
 
-		enc_bvec->bv_page = ciphertext_page;
+		bv->bv_page = ciphertext_page;
 		if (!ciphertext_page) {
 			src_bio->bi_status = BLK_STS_RESOURCE;
 			goto out_free_bounce_pages;
 		}
 
-		sg_set_page(&src, plaintext_page, data_unit_size,
-			    enc_bvec->bv_offset);
-		sg_set_page(&dst, ciphertext_page, data_unit_size,
-			    enc_bvec->bv_offset);
-
-		/* Encrypt each data unit in this page */
-		for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) {
-			blk_crypto_dun_to_iv(curr_dun, &iv);
-			if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
-					    &wait)) {
-				i++;
-				src_bio->bi_status = BLK_STS_IOERR;
-				goto out_free_bounce_pages;
+		while (offset_in_bv < bv->bv_len) {
+			unsigned int n = min(bv->bv_len - offset_in_bv,
+					     data_unit_size - du_filled);
+			sg_set_page(&src[sg_idx], plaintext_page, n,
+				    bv->bv_offset + offset_in_bv);
+			sg_set_page(&dst[sg_idx], ciphertext_page, n,
+				    bv->bv_offset + offset_in_bv);
+			sg_idx++;
+			offset_in_bv += n;
+			du_filled += n;
+			if (du_filled == data_unit_size) {
+				blk_crypto_dun_to_iv(curr_dun, &iv);
+				if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
						    &wait)) {
+					src_bio->bi_status = BLK_STS_IOERR;
+					goto out_free_bounce_pages;
+				}
+				bio_crypt_dun_increment(curr_dun, 1);
+				sg_idx = 0;
+				du_filled = 0;
 			}
-			bio_crypt_dun_increment(curr_dun, 1);
-			src.offset += data_unit_size;
-			dst.offset += data_unit_size;
 		}
 	}
+	if (WARN_ON_ONCE(du_filled != 0)) {
+		src_bio->bi_status = BLK_STS_IOERR;
+		goto out_free_bounce_pages;
+	}
 
 	enc_bio->bi_private = src_bio;
 	enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
@@ -364,7 +445,11 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
 out_put_enc_bio:
 	if (enc_bio)
 		bio_put(enc_bio);
-
+out_free_sglists:
+	if (src != &_src)
+		kfree(src);
+	if (dst != &_dst)
+		kfree(dst);
 	return ret;
 }
 
@@ -383,13 +468,21 @@ static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
 	DECLARE_CRYPTO_WAIT(wait);
 	u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
 	union blk_crypto_iv iv;
-	struct scatterlist sg;
+	struct scatterlist _sg, *sg = &_sg;
 	struct bio_vec bv;
 	struct bvec_iter iter;
 	const int data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
-	unsigned int i;
+	unsigned int sg_idx = 0;
+	unsigned int du_filled = 0;
 	blk_status_t blk_st;
 
+	/* Allocate scatterlist if needed */
+	if (!blk_crypto_alloc_sglists(bio, &f_ctx->crypt_iter, data_unit_size,
+				      &sg, NULL)) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		goto out_no_sglists;
+	}
+
 	/*
	 * Use the crypto API fallback keyslot manager to get a crypto_skcipher
	 * for the algorithm and key specified for this bio.
@@ -407,33 +500,48 @@ static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
 	}
 
 	memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
-	sg_init_table(&sg, 1);
-	skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size,
-				   iv.bytes);
+	skcipher_request_set_crypt(ciph_req, sg, sg, data_unit_size, iv.bytes);
 
-	/* Decrypt each segment in the bio */
+	/*
+	 * Decrypt each data unit in the bio.
+	 *
+	 * Take care to handle the case where a data unit spans bio segments.
+	 * This can happen when data_unit_size > logical_block_size.
+	 */
 	__bio_for_each_segment(bv, bio, iter, f_ctx->crypt_iter) {
-		struct page *page = bv.bv_page;
-
-		sg_set_page(&sg, page, data_unit_size, bv.bv_offset);
-
-		/* Decrypt each data unit in the segment */
-		for (i = 0; i < bv.bv_len; i += data_unit_size) {
-			blk_crypto_dun_to_iv(curr_dun, &iv);
-			if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
-					    &wait)) {
-				bio->bi_status = BLK_STS_IOERR;
-				goto out;
+		unsigned int offset_in_bv = 0;
+
+		while (offset_in_bv < bv.bv_len) {
+			unsigned int n = min(bv.bv_len - offset_in_bv,
+					     data_unit_size - du_filled);
+			sg_set_page(&sg[sg_idx++], bv.bv_page, n,
+				    bv.bv_offset + offset_in_bv);
+			offset_in_bv += n;
+			du_filled += n;
+			if (du_filled == data_unit_size) {
+				blk_crypto_dun_to_iv(curr_dun, &iv);
+				if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
						    &wait)) {
+					bio->bi_status = BLK_STS_IOERR;
+					goto out;
+				}
+				bio_crypt_dun_increment(curr_dun, 1);
+				sg_idx = 0;
+				du_filled = 0;
 			}
-			bio_crypt_dun_increment(curr_dun, 1);
-			sg.offset += data_unit_size;
 		}
 	}
-
+	if (WARN_ON_ONCE(du_filled != 0)) {
+		bio->bi_status = BLK_STS_IOERR;
+		goto out;
+	}
 out:
 	skcipher_request_free(ciph_req);
 	blk_ksm_put_slot(slot);
 out_no_keyslot:
+	if (sg != &_sg)
+		kfree(sg);
+out_no_sglists:
 	mempool_free(f_ctx, bio_fallback_crypt_ctx_pool);
 	bio_endio(bio);
 }
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 5da43f0973b4..fcee0038f7e0 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -200,22 +200,6 @@ bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1, unsigned int bc1_bytes,
 	return !bc1 || bio_crypt_dun_is_contiguous(bc1, bc1_bytes, bc2->bc_dun);
 }
 
-/* Check that all I/O segments are data unit aligned. */
-static bool bio_crypt_check_alignment(struct bio *bio)
-{
-	const unsigned int data_unit_size =
-		bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size;
-	struct bvec_iter iter;
-	struct bio_vec bv;
-
-	bio_for_each_segment(bv, bio, iter) {
-		if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
-			return false;
-	}
-
-	return true;
-}
-
 blk_status_t __blk_crypto_init_request(struct request *rq)
 {
 	return blk_ksm_get_slot_for_key(rq->q->ksm, rq->crypt_ctx->bc_key,
@@ -271,7 +255,8 @@ bool __blk_crypto_bio_prep(struct bio **bio_ptr)
 		goto fail;
 	}
 
-	if (!bio_crypt_check_alignment(bio)) {
+	if (!IS_ALIGNED(bio->bi_iter.bi_size,
+			bc_key->crypto_cfg.data_unit_size)) {
 		bio->bi_status = BLK_STS_IOERR;
 		goto fail;
 	}
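The subtle part of the fallback change is sizing the scatterlists when one
data unit can span several bio segments. A standalone sketch of the estimate
made by blk_crypto_alloc_sglists() (seg_lens stands in for a bio's bvec
lengths; all sizes are made up):

#include <stdbool.h>
#include <stdio.h>

#define SECTOR_SHIFT	9
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/*
 * One scatterlist element suffices if every segment is data-unit aligned;
 * otherwise a data unit spans at most data_unit_size / 512 segments,
 * because segment lengths are multiples of the 512-byte sector size.
 */
static unsigned int sg_elems_needed(const unsigned int *seg_lens,
				    unsigned int nr_segs,
				    unsigned int data_unit_size)
{
	bool aligned = true;
	unsigned int i;

	for (i = 0; i < nr_segs; i++)
		aligned &= IS_ALIGNED(seg_lens[i], data_unit_size);
	if (aligned)
		return 1;
	/* May overestimate a little, as the patch's comment notes. */
	return min_u(nr_segs, data_unit_size >> SECTOR_SHIFT);
}

int main(void)
{
	unsigned int segs[] = { 512, 3584, 4096 };	/* bytes per segment */

	/* The first 4K data unit spans the 512- and 3584-byte segments, so
	 * one element is not enough; the bound min(3, 8) gives 3. */
	printf("%u\n", sg_elems_needed(segs, 3, 4096));
	return 0;
}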
From patchwork Tue Nov 17 14:07:03 2020
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11912443
Date: Tue, 17 Nov 2020 14:07:03 +0000
In-Reply-To: <20201117140708.1068688-1-satyat@google.com>
Message-Id: <20201117140708.1068688-4-satyat@google.com>
References: <20201117140708.1068688-1-satyat@google.com>
Subject: [PATCH v7 3/8] fscrypt: add functions for direct I/O support
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers,
 Satya Tangirala

From: Eric Biggers

Introduce fscrypt_dio_supported() to check whether a direct I/O request is
unsupported due to encryption constraints.

Also introduce fscrypt_limit_io_blocks() to limit how many blocks can be
added to a bio being prepared for direct I/O. This is needed for
filesystems that use the iomap direct I/O implementation to avoid DUN
wraparound in the middle of a bio (which is possible with the
IV_INO_LBLK_32 IV generation method). Elsewhere fscrypt_mergeable_bio() is
used for this, but iomap operates on logical ranges directly, so
filesystems using iomap won't have a chance to call fscrypt_mergeable_bio()
on every block added to a bio. So we need this function, which limits a
logical range in one go.

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
---
 fs/crypto/crypto.c       |  8 +++++
 fs/crypto/inline_crypt.c | 74 ++++++++++++++++++++++++++++++++++++++++
 include/linux/fscrypt.h  | 18 ++++++++++
 3 files changed, 100 insertions(+)

diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index 4ef3f714046a..4fcca79f39ae 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -69,6 +69,14 @@ void fscrypt_free_bounce_page(struct page *bounce_page)
 }
 EXPORT_SYMBOL(fscrypt_free_bounce_page);
 
+/*
+ * Generate the IV for the given logical block number within the given file.
+ * For filenames encryption, lblk_num == 0.
+ *
+ * Keep this in sync with fscrypt_limit_io_blocks(). fscrypt_limit_io_blocks()
+ * needs to know about any IV generation methods where the low bits of IV don't
+ * simply contain the lblk_num (e.g., IV_INO_LBLK_32).
+ */
 void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
 			 const struct fscrypt_info *ci)
 {
diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
index c57bebfa48fe..956f5bfab7a0 100644
--- a/fs/crypto/inline_crypt.c
+++ b/fs/crypto/inline_crypt.c
@@ -17,6 +17,7 @@
 #include <linux/buffer_head.h>
 #include <linux/sched/mm.h>
 #include <linux/slab.h>
+#include <linux/uio.h>
 
 #include "fscrypt_private.h"
 
@@ -363,3 +364,76 @@ bool fscrypt_mergeable_bio_bh(struct bio *bio,
 	return fscrypt_mergeable_bio(bio, inode, next_lblk);
 }
 EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh);
+
+/**
+ * fscrypt_dio_supported() - check whether a direct I/O request is unsupported
+ *			     due to encryption constraints
+ * @iocb: the file and position the I/O is targeting
+ * @iter: the I/O data segment(s)
+ *
+ * Return: true if direct I/O is supported
+ */
+bool fscrypt_dio_supported(struct kiocb *iocb, struct iov_iter *iter)
+{
+	const struct inode *inode = file_inode(iocb->ki_filp);
+	const unsigned int blocksize = i_blocksize(inode);
+
+	/* If the file is unencrypted, no veto from us. */
+	if (!fscrypt_needs_contents_encryption(inode))
+		return true;
+
+	/* We only support direct I/O with inline crypto, not fs-layer crypto */
+	if (!fscrypt_inode_uses_inline_crypto(inode))
+		return false;
+
+	/*
+	 * Since the granularity of encryption is filesystem blocks, the I/O
+	 * must be block aligned -- not just disk sector aligned.
+	 */
+	if (!IS_ALIGNED(iocb->ki_pos | iov_iter_count(iter), blocksize))
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(fscrypt_dio_supported);
+
+/**
+ * fscrypt_limit_io_blocks() - limit I/O blocks to avoid discontiguous DUNs
+ * @inode: the file on which I/O is being done
+ * @lblk: the block at which the I/O is being started from
+ * @nr_blocks: the number of blocks we want to submit starting at @lblk
+ *
+ * Determine the limit to the number of blocks that can be submitted in the bio
+ * targeting @lblk without causing a data unit number (DUN) discontinuity.
+ *
+ * This is normally just @nr_blocks, as normally the DUNs just increment along
+ * with the logical blocks. (Or the file is not encrypted.)
+ *
+ * In rare cases, fscrypt can be using an IV generation method that allows the
+ * DUN to wrap around within logically contiguous blocks, and that wraparound
+ * will occur. If this happens, a value less than @nr_blocks will be returned
+ * so that the wraparound doesn't occur in the middle of the bio.
+ *
+ * Return: the actual number of blocks that can be submitted
+ */
+u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk, u64 nr_blocks)
+{
+	const struct fscrypt_info *ci = inode->i_crypt_info;
+	u32 dun;
+
+	if (!fscrypt_inode_uses_inline_crypto(inode))
+		return nr_blocks;
+
+	if (nr_blocks <= 1)
+		return nr_blocks;
+
+	if (!(fscrypt_policy_flags(&ci->ci_policy) &
+	      FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32))
+		return nr_blocks;
+
+	/* With IV_INO_LBLK_32, the DUN can wrap around from U32_MAX to 0. */
+
+	dun = ci->ci_hashed_ino + lblk;
+
+	return min_t(u64, nr_blocks, (u64)U32_MAX + 1 - dun);
+}
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index a8f7a43f031b..39cce302660b 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -567,6 +567,10 @@ bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
 bool fscrypt_mergeable_bio_bh(struct bio *bio,
 			      const struct buffer_head *next_bh);
 
+bool fscrypt_dio_supported(struct kiocb *iocb, struct iov_iter *iter);
+
+u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk, u64 nr_blocks);
+
 #else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
 
 static inline bool __fscrypt_inode_uses_inline_crypto(const struct inode *inode)
@@ -595,6 +599,20 @@ static inline bool fscrypt_mergeable_bio_bh(struct bio *bio,
 {
 	return true;
 }
+
+static inline bool fscrypt_dio_supported(struct kiocb *iocb,
+					 struct iov_iter *iter)
+{
+	const struct inode *inode = file_inode(iocb->ki_filp);
+
+	return !fscrypt_needs_contents_encryption(inode);
+}
+
+static inline u64 fscrypt_limit_io_blocks(const struct inode *inode, u64 lblk,
+					  u64 nr_blocks)
+{
+	return nr_blocks;
+}
 #endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
 
 /**
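One detail of fscrypt_dio_supported() worth calling out: IS_ALIGNED(pos |
count, blocksize) checks two alignments with a single test, since any
low-order bit set in either operand survives the OR. A standalone sketch
with made-up values:

#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's IS_ALIGNED() (power-of-2 @a). */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	uint64_t pos = 8192;		/* hypothetical iocb->ki_pos */
	uint64_t count = 12288;		/* hypothetical iov_iter_count() */
	unsigned int blocksize = 4096;	/* fs block size, not sector size */

	/* Both values are 4K-aligned, so the combined test passes. */
	assert(IS_ALIGNED(pos | count, blocksize));

	/* A 512-byte-aligned but not block-aligned length must fail. */
	assert(!IS_ALIGNED(pos | (count + 512), blocksize));
	return 0;
}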
From patchwork Tue Nov 17 14:07:04 2020
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11912377
Date: Tue, 17 Nov 2020 14:07:04 +0000
In-Reply-To: <20201117140708.1068688-1-satyat@google.com>
Message-Id: <20201117140708.1068688-5-satyat@google.com>
References: <20201117140708.1068688-1-satyat@google.com>
Subject: [PATCH v7 4/8] direct-io: add support for fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers,
 Satya Tangirala

From: Eric Biggers

Set bio crypt contexts on bios by calling into fscrypt when required, and
explicitly check for DUN continuity when adding pages to the bio. (While
DUN continuity is usually implied by logical block contiguity, this is not
the case when using certain fscrypt IV generation methods like
IV_INO_LBLK_32.)

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
Reviewed-by: Jaegeuk Kim
---
 fs/direct-io.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index d53fa92a1ab6..f6672c4030e3 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -24,6 +24,7 @@
 #include <linux/module.h>
 #include <linux/types.h>
 #include <linux/fs.h>
+#include <linux/fscrypt.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/highmem.h>
@@ -392,6 +393,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
 	      sector_t first_sector, int nr_vecs)
 {
 	struct bio *bio;
+	struct inode *inode = dio->inode;
 
 	/*
	 * bio_alloc() is guaranteed to return a bio when allowed to sleep and
@@ -399,6 +401,9 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
	 */
 	bio = bio_alloc(GFP_KERNEL, nr_vecs);
 
+	fscrypt_set_bio_crypt_ctx(bio, inode,
+				  sdio->cur_page_fs_offset >> inode->i_blkbits,
+				  GFP_KERNEL);
 	bio_set_dev(bio, bdev);
 	bio->bi_iter.bi_sector = first_sector;
 	bio_set_op_attrs(bio, dio->op, dio->op_flags);
@@ -763,9 +768,17 @@ static inline int dio_send_cur_page(struct dio *dio, struct dio_submit *sdio,
		 * current logical offset in the file does not equal what would
		 * be the next logical offset in the bio, submit the bio we
		 * have.
+		 *
+		 * When fscrypt inline encryption is used, data unit number
+		 * (DUN) contiguity is also required. Normally that's implied
+		 * by logical contiguity. However, certain IV generation
+		 * methods (e.g. IV_INO_LBLK_32) don't guarantee it. So, we
+		 * must explicitly check fscrypt_mergeable_bio() too.
		 */
 		if (sdio->final_block_in_bio != sdio->cur_page_block ||
-		    cur_offset != bio_next_offset)
+		    cur_offset != bio_next_offset ||
+		    !fscrypt_mergeable_bio(sdio->bio, dio->inode,
+					   cur_offset >> dio->inode->i_blkbits))
 			dio_bio_submit(dio, sdio);
 	}
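Condensing the three-way submit-or-append decision this patch creates in
dio_send_cur_page() into a standalone sketch (the struct below is an
illustrative stand-in for the kernel's dio/sdio state, not a real kernel
type, and the values are made up):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dio_sketch {
	uint64_t final_block_in_bio;	/* last disk block in the open bio */
	uint64_t cur_page_block;	/* disk block of the incoming page */
	uint64_t bio_next_offset;	/* file offset the open bio ends at */
	uint64_t cur_offset;		/* file offset of the incoming page */
	bool	 dun_contiguous;	/* what fscrypt_mergeable_bio() reports */
};

static bool must_submit_first(const struct dio_sketch *s)
{
	/* Physical, logical, and now DUN contiguity must all hold. */
	return s->final_block_in_bio != s->cur_page_block ||
	       s->cur_offset != s->bio_next_offset ||
	       !s->dun_contiguous;
}

int main(void)
{
	/* Physically and logically contiguous, but the DUN would jump
	 * (e.g. an IV_INO_LBLK_32 wraparound): the bio must be submitted. */
	struct dio_sketch s = { 100, 100, 8192, 8192, false };

	printf("%s\n", must_submit_first(&s) ? "submit" : "append");
	return 0;
}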
From patchwork Tue Nov 17 14:07:05 2020
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11912373
Date: Tue, 17 Nov 2020 14:07:05 +0000
In-Reply-To: <20201117140708.1068688-1-satyat@google.com>
Message-Id: <20201117140708.1068688-6-satyat@google.com>
References: <20201117140708.1068688-1-satyat@google.com>
Subject: [PATCH v7 5/8] iomap: support direct I/O with fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe,
 "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers,
 Satya Tangirala

From: Eric Biggers

Set bio crypt contexts on bios by calling into fscrypt when required. No
DUN contiguity checks are done here - callers are expected to set up the
iomap correctly, by calling fscrypt_limit_io_blocks() appropriately, to
ensure that each bio submitted by iomap will not have blocks with
discontiguous DUNs.

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
---
 fs/iomap/direct-io.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 933f234d5bec..b4240cc3c9f9 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -6,6 +6,7 @@
 #include <linux/module.h>
 #include <linux/compiler.h>
 #include <linux/fs.h>
+#include <linux/fscrypt.h>
 #include <linux/iomap.h>
 #include <linux/backing-dev.h>
 #include <linux/uio.h>
@@ -185,11 +186,14 @@ static void
 iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
 		unsigned len)
 {
+	struct inode *inode = file_inode(dio->iocb->ki_filp);
 	struct page *page = ZERO_PAGE(0);
 	int flags = REQ_SYNC | REQ_IDLE;
 	struct bio *bio;
 
 	bio = bio_alloc(GFP_KERNEL, 1);
+	fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
+				  GFP_KERNEL);
 	bio_set_dev(bio, iomap->bdev);
 	bio->bi_iter.bi_sector = iomap_sector(iomap, pos);
 	bio->bi_private = dio;
@@ -272,6 +276,8 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 		}
 
 		bio = bio_alloc(GFP_KERNEL, nr_pages);
+		fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
+					  GFP_KERNEL);
 		bio_set_dev(bio, iomap->bdev);
 		bio->bi_iter.bi_sector = iomap_sector(iomap, pos);
 		bio->bi_write_hint = dio->iocb->ki_hint;
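Both bio_alloc() sites above derive the crypt context's starting block the
same way. A tiny standalone sketch of that derivation, and of why the I/O
position must be fs-block aligned for it to be meaningful (the block size
and offset are made-up values, not from the patch):

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int i_blkbits = 12;	/* log2 of a hypothetical 4K fs block */
	uint64_t pos = 20480;		/* byte position the bio starts at */

	/* Encrypted direct I/O is fs-block aligned, so the shift is exact. */
	assert((pos & ((1ULL << i_blkbits) - 1)) == 0);

	/* This is the lblk passed to fscrypt_set_bio_crypt_ctx() above. */
	printf("lblk = %" PRIu64 "\n", pos >> i_blkbits);	/* prints 5 */
	return 0;
}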
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729706AbgKQOH3 (ORCPT ); Tue, 17 Nov 2020 09:07:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38302 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729686AbgKQOH1 (ORCPT ); Tue, 17 Nov 2020 09:07:27 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D3BA9C08C5F2 for ; Tue, 17 Nov 2020 06:07:24 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id n10so11547256plk.14 for ; Tue, 17 Nov 2020 06:07:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=R5CBswLlpIQJapBKDpTXO8RLcdpazdMLHFE+h8xz3Es=; b=A1VqI+LMloxuA0/HprdxN9YHfC55pEdsHf9BWLtT4qqvVhKEvzYgxNkj+PEH42zjXI OeoVgtNcR3CaOyVZPb0/+EZGu6RDYd31W6zu2oEy2+C+HnSD+jn7TyQ7D+HZa8IjozkG 4hAqocgyLQCoHqVkyG8GyLQLIaRU2/fq6uT/ZkAsK9xFCvgEfNnS4Hongmf7BNJPyj8x UbUJ9UUgodrohuPK1/l/tCv4x6Xms0f/fdtBrte0iHuY0WAnhEc6rzUTBxXXLvNrFyIl IaO2TCGhJ4uq4s+QPe2kWWVsyJGVNENdgx1rzkljewjoPQlOzUEgn/9ZvFhUmL7E+Gll 5Oig== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=R5CBswLlpIQJapBKDpTXO8RLcdpazdMLHFE+h8xz3Es=; b=dtrnOcJAbsZH2Bt+eU6ux6d1sdfGSNnFx/KaQexlSJZ1/pyINZlVEgal2CacAvcl54 1DWR3TWIuQWd8KF14cI1xNsl9korC0Cf/0dNSVebmPiKHv5HkPjMnRUbLArDq/ftvS6b +Z0nzQPmx50DD9mylMhwciVPIluo+xJOu/vyPbjAEBzUrJhmb7tpI3O4Gt5SOVxkS9Rd 2iQ9TJb6f76S5nORR+7YJ+19aj4PzerHGzgjTqgJN9pbt3fcvmQ8N2mh3M/2xspgR11K aJKjE/tmtp1/bfFKZE0gW4LviQWeXl6Ii3v3IFOvUYwdnS4ldErVTN0n4QNYsIHKa8Wb 1AfQ== X-Gm-Message-State: AOAM531IdnkKEbtUeGZr+jmFKC9QeePDK1SxnQhxxcgwhynX7xxK54N2 efHbjHNmHhuyfPurM7rNEQjCfMqRFQU= X-Google-Smtp-Source: ABdhPJyWmh3Iv+b4cPEzHM54qrtBxKCx9Mi+c3ZCqdPzBuuo/b+Pkvkr7t7Q1Y5hP7E1a6CpiFyWdFgF074= Sender: "satyat via sendgmr" X-Received: from satyaprateek.c.googlers.com ([fda3:e722:ac3:10:24:72f4:c0a8:1092]) (user=satyat job=sendgmr) by 2002:a62:17c8:0:b029:18b:5a97:a8d1 with SMTP id 191-20020a6217c80000b029018b5a97a8d1mr18265685pfx.15.1605622044271; Tue, 17 Nov 2020 06:07:24 -0800 (PST) Date: Tue, 17 Nov 2020 14:07:06 +0000 In-Reply-To: <20201117140708.1068688-1-satyat@google.com> Message-Id: <20201117140708.1068688-7-satyat@google.com> Mime-Version: 1.0 References: <20201117140708.1068688-1-satyat@google.com> X-Mailer: git-send-email 2.29.2.299.gdc1121823c-goog Subject: [PATCH v7 6/8] ext4: support direct I/O with fscrypt using blk-crypto From: Satya Tangirala To: "Theodore Y . Ts'o" , Jaegeuk Kim , Eric Biggers , Chao Yu , Jens Axboe , "Darrick J . Wong" Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers , Satya Tangirala Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org From: Eric Biggers Wire up ext4 with fscrypt direct I/O support. Direct I/O with fscrypt is only supported through blk-crypto (i.e. CONFIG_BLK_INLINE_ENCRYPTION must have been enabled, the 'inlinecrypt' mount option must have been specified, and either hardware inline encryption support must be present or CONFIG_BLK_INLINE_ENCYRPTION_FALLBACK must have been enabled). 
Further, direct I/O on encrypted files is only supported when the
*length* of the I/O is aligned to the filesystem block size (which is
*not* necessarily the same as the block device's block size).

fscrypt_limit_io_blocks() is called before setting up the iomap to
ensure that the blocks of each bio that iomap will submit will have
contiguous DUNs. Note that fscrypt_limit_io_blocks() is normally a
no-op, as normally the DUNs simply increment along with the logical
blocks. But it's needed to handle an edge case in one of the fscrypt
IV generation methods.

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
Reviewed-by: Jaegeuk Kim
Acked-by: Theodore Ts'o
---
 fs/ext4/file.c  | 10 ++++++----
 fs/ext4/inode.c |  7 +++++++
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 3ed8c048fb12..be77b7732c8e 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -36,9 +36,11 @@
 #include "acl.h"
 #include "truncate.h"
 
-static bool ext4_dio_supported(struct inode *inode)
+static bool ext4_dio_supported(struct kiocb *iocb, struct iov_iter *iter)
 {
-	if (IS_ENABLED(CONFIG_FS_ENCRYPTION) && IS_ENCRYPTED(inode))
+	struct inode *inode = file_inode(iocb->ki_filp);
+
+	if (!fscrypt_dio_supported(iocb, iter))
 		return false;
 	if (fsverity_active(inode))
 		return false;
@@ -61,7 +63,7 @@ static ssize_t ext4_dio_read_iter(struct kiocb *iocb, struct iov_iter *to)
 		inode_lock_shared(inode);
 	}
 
-	if (!ext4_dio_supported(inode)) {
+	if (!ext4_dio_supported(iocb, to)) {
 		inode_unlock_shared(inode);
 		/*
 		 * Fallback to buffered I/O if the operation being performed on
@@ -495,7 +497,7 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	}
 
 	/* Fallback to buffered I/O if the inode does not support direct I/O. */
-	if (!ext4_dio_supported(inode)) {
+	if (!ext4_dio_supported(iocb, from)) {
 		if (ilock_shared)
 			inode_unlock_shared(inode);
 		else
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 0d8385aea898..0ef3d805bb8c 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3473,6 +3473,13 @@ static int ext4_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
 	if (ret < 0)
 		return ret;
 out:
+	/*
+	 * When inline encryption is enabled, sometimes I/O to an encrypted file
+	 * has to be broken up to guarantee DUN contiguity. Handle this by
+	 * limiting the length of the mapping returned.
+	 */
+	map.m_len = fscrypt_limit_io_blocks(inode, map.m_lblk, map.m_len);
+
 	ext4_set_iomap(inode, iomap, &map, offset, length);
 	return 0;
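
For intuition about the DUN wraparound edge case mentioned above:
fscrypt's IV_INO_LBLK_32 IV generation method derives a 32-bit DUN from
a per-inode hash plus the logical block number, so the DUNs of
consecutive blocks stop being contiguous exactly where the 32-bit value
wraps. The following is a minimal illustrative sketch of the clamping
idea only; it is a hypothetical helper, not the kernel's actual
fscrypt_limit_io_blocks() implementation:

	/*
	 * Illustrative sketch, not kernel code. Clamp a run of blocks so
	 * that it ends before a 32-bit DUN of the form (hash + lblk)
	 * would wrap around to 0.
	 */
	#include <stdint.h>

	static uint64_t limit_blocks_for_wraparound(uint32_t hash, uint64_t lblk,
						    uint64_t nr_blocks)
	{
		/* First DUN of the run; 32-bit arithmetic wraps naturally. */
		uint32_t dun = hash + (uint32_t)lblk;
		/* Number of blocks until the 32-bit DUN would wrap to 0. */
		uint64_t until_wrap = 0x100000000ULL - dun;

		return nr_blocks < until_wrap ? nr_blocks : until_wrap;
	}

With the other IV generation methods the DUN simply increments along
with the logical block number, so no clamping is ever needed, matching
the "normally a no-op" note in the commit message above.
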
From patchwork Tue Nov 17 14:07:07 2020
Date: Tue, 17 Nov 2020 14:07:07 +0000
In-Reply-To: <20201117140708.1068688-1-satyat@google.com>
Message-Id: <20201117140708.1068688-8-satyat@google.com>
References: <20201117140708.1068688-1-satyat@google.com>
Subject: [PATCH v7 7/8] f2fs: support direct I/O with fscrypt using blk-crypto
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe, "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Eric Biggers, Satya Tangirala

From: Eric Biggers

Wire up f2fs with fscrypt direct I/O support. Direct I/O with fscrypt
is only supported through blk-crypto (i.e. CONFIG_BLK_INLINE_ENCRYPTION
must have been enabled, the 'inlinecrypt' mount option must have been
specified, and either hardware inline encryption support must be
present or CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK must have been
enabled).

Further, direct I/O on encrypted files is only supported when the
*length* of the I/O is aligned to the filesystem block size (which is
*not* necessarily the same as the block device's block size).

Signed-off-by: Eric Biggers
Co-developed-by: Satya Tangirala
Signed-off-by: Satya Tangirala
Acked-by: Jaegeuk Kim
---
 fs/f2fs/f2fs.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index cb700d797296..d518e668618e 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -4120,7 +4120,11 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	int rw = iov_iter_rw(iter);
 
-	if (f2fs_post_read_required(inode))
+	if (!fscrypt_dio_supported(iocb, iter))
+		return true;
+	if (fsverity_active(inode))
+		return true;
+	if (f2fs_compressed_file(inode))
 		return true;
 	if (f2fs_is_multi_device(sbi))
 		return true;
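
Note that ext4 (previous patch) and f2fs both delegate the decision to
the same fscrypt_dio_supported(iocb, iter) helper. As a conceptual
sketch only (the real implementation lives in fs/crypto/ and may check
more than this; everything here other than the function's signature is
an assumption), the conditions described in the commit messages could
be restated as:

	/*
	 * Conceptual sketch, not the actual fscrypt implementation:
	 * direct I/O on an encrypted file needs an inline-crypto-capable
	 * key and filesystem-block-aligned position and length.
	 */
	static bool dio_supported_sketch(struct kiocb *iocb, struct iov_iter *iter)
	{
		struct inode *inode = file_inode(iocb->ki_filp);
		unsigned int blocksize = i_blocksize(inode);

		/* Unencrypted files: no extra restrictions from fscrypt. */
		if (!IS_ENCRYPTED(inode))
			return true;

		/* The key must be usable with blk-crypto (inline encryption). */
		if (!fscrypt_inode_uses_inline_crypto(inode))
			return false;

		/* Position and length must be filesystem-block aligned. */
		if (!IS_ALIGNED(iocb->ki_pos | iov_iter_count(iter), blocksize))
			return false;

		return true;
	}

When any condition fails, the filesystems above fall back to buffered
I/O rather than returning an error.
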
From patchwork Tue Nov 17 14:07:08 2020
Date: Tue, 17 Nov 2020 14:07:08 +0000
In-Reply-To: <20201117140708.1068688-1-satyat@google.com>
Message-Id: <20201117140708.1068688-9-satyat@google.com>
References: <20201117140708.1068688-1-satyat@google.com>
Subject: [PATCH v7 8/8] fscrypt: update documentation for direct I/O support
From: Satya Tangirala
To: "Theodore Y. Ts'o", Jaegeuk Kim, Eric Biggers, Chao Yu, Jens Axboe, "Darrick J. Wong"
Cc: linux-kernel@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-block@vger.kernel.org, linux-ext4@vger.kernel.org, Satya Tangirala, Eric Biggers

Update fscrypt documentation to reflect the addition of direct I/O
support and document the necessary conditions for direct I/O on
encrypted files.

Signed-off-by: Satya Tangirala
Reviewed-by: Eric Biggers
Reviewed-by: Jaegeuk Kim
---
 Documentation/filesystems/fscrypt.rst | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
index 44b67ebd6e40..757b8aa2af9b 100644
--- a/Documentation/filesystems/fscrypt.rst
+++ b/Documentation/filesystems/fscrypt.rst
@@ -1047,8 +1047,10 @@ astute users may notice some differences in behavior:
   may be used to overwrite the source files but isn't guaranteed to be
   effective on all filesystems and storage devices.
 
-- Direct I/O is not supported on encrypted files.  Attempts to use
-  direct I/O on such files will fall back to buffered I/O.
+- Direct I/O is supported on encrypted files only under some
+  circumstances (see `Direct I/O support`_ for details).  When these
+  circumstances are not met, attempts to use direct I/O on encrypted
+  files will fall back to buffered I/O.
 
 - The fallocate operations FALLOC_FL_COLLAPSE_RANGE and
   FALLOC_FL_INSERT_RANGE are not supported on encrypted files and will
@@ -1121,6 +1123,21 @@ It is not currently possible to backup and restore encrypted files
 without the encryption key.  This would require special APIs which
 have not yet been implemented.
 
+Direct I/O support
+==================
+
+Direct I/O on encrypted files is supported through blk-crypto.  In
+particular, this means the kernel must have CONFIG_BLK_INLINE_ENCRYPTION
+enabled, the filesystem must have had the 'inlinecrypt' mount option
+specified, and either hardware inline encryption must be present, or
+CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK must have been enabled.  Further,
+the length of any I/O must be aligned to the filesystem block size
+(*not* necessarily the same as the block device's block size).  If any of
+these conditions isn't met, attempts to do direct I/O on an encrypted file
+will fall back to buffered I/O.  However, there aren't any additional
+requirements on user buffer alignment (apart from those already present
+when using direct I/O on unencrypted files).
+
 Encryption policy enforcement
 =============================
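
To make the documented conditions concrete, here is a minimal userspace
sketch. The mount point, file path, and the 4096-byte filesystem block
size are assumptions for illustration; a real program would query the
block size (e.g. via statfs()) instead of hard-coding it:

	#define _GNU_SOURCE	/* for O_DIRECT */
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		/* Assumed filesystem block size for this sketch. */
		const size_t fs_block = 4096;
		void *buf;
		int fd;
		ssize_t n;

		/* O_DIRECT also imposes the usual buffer alignment rules. */
		if (posix_memalign(&buf, fs_block, fs_block))
			return 1;

		/* Hypothetical file on a filesystem mounted with '-o inlinecrypt'. */
		fd = open("/mnt/inlinecrypt/encrypted_file", O_RDONLY | O_DIRECT);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* Length is one filesystem block, so the I/O can stay direct. */
		n = read(fd, buf, fs_block);
		if (n < 0)
			perror("read");

		close(fd);
		free(buf);
		return 0;
	}

If the length were not filesystem-block aligned, the read would still
succeed, but the filesystem would silently fall back to buffered I/O as
described above.
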