From patchwork Thu Oct 22 21:22:23 2020
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 1/6] block: Add blk_completion
Date: Thu, 22 Oct 2020 22:22:23 +0100
Message-Id: <20201022212228.15703-2-willy@infradead.org>
In-Reply-To: <20201022212228.15703-1-willy@infradead.org>
References: <20201022212228.15703-1-willy@infradead.org>

This new data structure allows a task to wait for N things to complete.
Usually the submitting task will handle cleanup, but if it is killed,
the last completer will take care of it.
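
For orientation, a minimal sketch of the intended calling pattern on the
submission side (illustrative only; example_read_blocks() is a made-up
caller, not part of this patch):

/* Hypothetical caller: submit n bios, then wait for all of them. */
static int example_read_blocks(struct bio **bios, int n)
{
	struct blk_completion *cmpl = kmalloc(sizeof(*cmpl), GFP_NOIO);
	int i;

	if (!cmpl)
		return -ENOMEM;
	blk_completion_init(cmpl, n);
	for (i = 0; i < n; i++) {
		bios[i]->bi_private = cmpl;	/* consumed by bi_end_io */
		submit_bio(bios[i]);
	}
	/*
	 * On normal completion this frees cmpl and returns the first
	 * error status (if any) as an errno; if the task is fatally
	 * signalled first, ownership of cmpl passes to the last completer.
	 */
	return blk_completion_wait_killable(cmpl);
}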
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 block/blk-core.c    | 61 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/bio.h | 11 ++++++++
 2 files changed, 72 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index 10c08ac50697..2892246f2176 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1900,6 +1900,67 @@ void blk_io_schedule(void)
 }
 EXPORT_SYMBOL_GPL(blk_io_schedule);
 
+void blk_completion_init(struct blk_completion *cmpl, int n)
+{
+	spin_lock_init(&cmpl->cmpl_lock);
+	cmpl->cmpl_count = n;
+	cmpl->cmpl_task = current;
+	cmpl->cmpl_status = BLK_STS_OK;
+}
+
+int blk_completion_sub(struct blk_completion *cmpl, blk_status_t status, int n)
+{
+	int ret = 0;
+
+	spin_lock_bh(&cmpl->cmpl_lock);
+	if (cmpl->cmpl_status == BLK_STS_OK && status != BLK_STS_OK)
+		cmpl->cmpl_status = status;
+	cmpl->cmpl_count -= n;
+	BUG_ON(cmpl->cmpl_count < 0);
+	if (cmpl->cmpl_count == 0) {
+		if (cmpl->cmpl_task)
+			wake_up_process(cmpl->cmpl_task);
+		else
+			ret = -EINTR;
+	}
+	spin_unlock_bh(&cmpl->cmpl_lock);
+	if (ret < 0)
+		kfree(cmpl);
+	return ret;
+}
+
+int blk_completion_wait_killable(struct blk_completion *cmpl)
+{
+	int err = 0;
+
+	for (;;) {
+		set_current_state(TASK_KILLABLE);
+		spin_lock_bh(&cmpl->cmpl_lock);
+		if (cmpl->cmpl_count == 0)
+			break;
+		spin_unlock_bh(&cmpl->cmpl_lock);
+		blk_io_schedule();
+		if (fatal_signal_pending(current)) {
+			spin_lock_bh(&cmpl->cmpl_lock);
+			cmpl->cmpl_task = NULL;
+			if (cmpl->cmpl_count != 0) {
+				spin_unlock_bh(&cmpl->cmpl_lock);
+				cmpl = NULL;
+			}
+			err = -ERESTARTSYS;
+			break;
+		}
+	}
+	set_current_state(TASK_RUNNING);
+	if (cmpl) {
+		spin_unlock_bh(&cmpl->cmpl_lock);
+		err = blk_status_to_errno(cmpl->cmpl_status);
+		kfree(cmpl);
+	}
+
+	return err;
+}
+
 int __init blk_dev_init(void)
 {
 	BUILD_BUG_ON(REQ_OP_LAST >= (1 << REQ_OP_BITS));
diff --git a/include/linux/bio.h b/include/linux/bio.h
index f254bc79bb3a..0bde05f5548c 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -814,4 +814,15 @@ static inline void bio_set_polled(struct bio *bio, struct kiocb *kiocb)
 	bio->bi_opf |= REQ_NOWAIT;
 }
 
+struct blk_completion {
+	struct task_struct *cmpl_task;
+	spinlock_t cmpl_lock;
+	int cmpl_count;
+	blk_status_t cmpl_status;
+};
+
+void blk_completion_init(struct blk_completion *, int n);
+int blk_completion_sub(struct blk_completion *, blk_status_t status, int n);
+int blk_completion_wait_killable(struct blk_completion *);
+
 #endif /* __LINUX_BIO_H */
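
To round out the picture, a sketch of the completion side (again
illustrative; example_end_io() is a hypothetical handler, not part of
this patch):

/* Hypothetical bi_end_io handler feeding a blk_completion. */
static void example_end_io(struct bio *bio)
{
	struct blk_completion *cmpl = bio->bi_private;
	int ret = blk_completion_sub(cmpl, bio->bi_status, 1);

	/*
	 * ret < 0 means the waiter was killed and blk_completion_sub()
	 * already freed cmpl: cleanup the waiter would have done (e.g.
	 * unlocking the page) now falls to this last completer.
	 */
	if (ret < 0)
		pr_debug("waiter gone; completer owns cleanup\n");
	bio_put(bio);
}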
From patchwork Thu Oct 22 21:22:24 2020
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 2/6] fs: Return error from block_read_full_page
Date: Thu, 22 Oct 2020 22:22:24 +0100
Message-Id: <20201022212228.15703-3-willy@infradead.org>
In-Reply-To: <20201022212228.15703-1-willy@infradead.org>
References: <20201022212228.15703-1-willy@infradead.org>

If the filesystem returns an error from get_block, report it instead of
ineffectually setting PageError.  Don't bother starting any I/Os in this
case since they won't bring the page Uptodate.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/buffer.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 1d5337517dcd..1b0ba1d59966 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2262,7 +2262,7 @@ int block_read_full_page(struct page *page, get_block_t *get_block)
 	sector_t iblock, lblock;
 	struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
 	unsigned int blocksize, bbits;
-	int nr, i;
+	int nr, i, err = 0;
 	int fully_mapped = 1;
 
 	head = create_page_buffers(page, inode, 0);
@@ -2280,19 +2280,16 @@ int block_read_full_page(struct page *page, get_block_t *get_block)
 			continue;
 
 		if (!buffer_mapped(bh)) {
-			int err = 0;
-
 			fully_mapped = 0;
 			if (iblock < lblock) {
 				WARN_ON(bh->b_size != blocksize);
 				err = get_block(inode, iblock, bh, 0);
 				if (err)
-					SetPageError(page);
+					break;
 			}
 			if (!buffer_mapped(bh)) {
 				zero_user(page, i * blocksize, blocksize);
-				if (!err)
-					set_buffer_uptodate(bh);
+				set_buffer_uptodate(bh);
 				continue;
 			}
 			/*
@@ -2305,18 +2302,17 @@ int block_read_full_page(struct page *page, get_block_t *get_block)
 		arr[nr++] = bh;
 	} while (i++, iblock++, (bh = bh->b_this_page) != head);
 
+	if (err) {
+		unlock_page(page);
+		return err;
+	}
 	if (fully_mapped)
 		SetPageMappedToDisk(page);
 
 	if (!nr) {
-		/*
-		 * All buffers are uptodate - we can set the page uptodate
-		 * as well. But not if get_block() returned an error.
-		 */
-		if (!PageError(page))
-			SetPageUptodate(page);
-		unlock_page(page);
-		return 0;
+		/* All buffers are uptodate - we can set the page uptodate */
+		SetPageUptodate(page);
+		return AOP_UPDATED_PAGE;
 	}
 
 	/* Stage two: lock the buffers */

From patchwork Thu Oct 22 21:22:25 2020
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 3/6] fs: Convert block_read_full_page to be synchronous
Date: Thu, 22 Oct 2020 22:22:25 +0100
Message-Id: <20201022212228.15703-4-willy@infradead.org>
In-Reply-To: <20201022212228.15703-1-willy@infradead.org>
References: <20201022212228.15703-1-willy@infradead.org>

Use the new blk_completion infrastructure to wait for multiple I/Os.
Also coalesce adjacent buffer heads into a single BIO instead of
submitting one BIO per buffer head.  This doesn't work for fscrypt yet,
so keep the old code around for now.
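
The coalescing test applied below can be restated as a small predicate
(sketch only; bh_follows_bio() is a name invented for illustration):

/*
 * A buffer_head may join the bio under construction only if it begins
 * at the sector where that bio currently ends; otherwise the bio is
 * submitted and a new one is started.  (Restates the merge condition
 * in readpage_submit_bhs() below.)
 */
static bool bh_follows_bio(struct bio *bio, struct buffer_head *bh)
{
	sector_t sector = bh->b_blocknr * (bh->b_size >> 9);

	return bio && bio_end_sector(bio) == sector;
}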
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/buffer.c | 90 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/fs/buffer.c b/fs/buffer.c
index 1b0ba1d59966..ccb90081117c 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2249,6 +2249,87 @@ int block_is_partially_uptodate(struct page *page, unsigned long from,
 }
 EXPORT_SYMBOL(block_is_partially_uptodate);
 
+static void readpage_end_bio(struct bio *bio)
+{
+	struct bio_vec *bvec;
+	struct page *page;
+	struct buffer_head *bh;
+	int i, nr = 0;
+
+	bio_for_each_bvec_all(bvec, bio, i) {
+		size_t offset = 0;
+		size_t max = bvec->bv_offset + bvec->bv_len;
+
+		page = bvec->bv_page;
+		bh = page_buffers(page);
+
+		for (offset = 0; offset < max; offset += bh->b_size,
+				bh = bh->b_this_page) {
+			if (offset < bvec->bv_offset)
+				continue;
+			BUG_ON(bh_offset(bh) != offset);
+			nr++;
+			if (unlikely(bio_flagged(bio, BIO_QUIET)))
+				set_bit(BH_Quiet, &bh->b_state);
+			if (bio->bi_status == BLK_STS_OK)
+				set_buffer_uptodate(bh);
+			else
+				buffer_io_error(bh, ", async page read");
+			unlock_buffer(bh);
+		}
+	}
+
+	if (blk_completion_sub(bio->bi_private, bio->bi_status, nr) < 0)
+		unlock_page(page);
+	bio_put(bio);
+}
+
+static int readpage_submit_bhs(struct page *page, struct blk_completion *cmpl,
+		unsigned int nr, struct buffer_head **bhs)
+{
+	struct bio *bio = NULL;
+	unsigned int i;
+	int err;
+
+	blk_completion_init(cmpl, nr);
+
+	for (i = 0; i < nr; i++) {
+		struct buffer_head *bh = bhs[i];
+		sector_t sector = bh->b_blocknr * (bh->b_size >> 9);
+		bool same_page;
+
+		if (buffer_uptodate(bh)) {
+			end_buffer_async_read(bh, 1);
+			blk_completion_sub(cmpl, BLK_STS_OK, 1);
+			continue;
+		}
+		if (bio) {
+			if (bio_end_sector(bio) == sector &&
+			    __bio_try_merge_page(bio, bh->b_page, bh->b_size,
+						bh_offset(bh), &same_page))
+				continue;
+			submit_bio(bio);
+		}
+		bio = bio_alloc(GFP_NOIO, 1);
+		bio_set_dev(bio, bh->b_bdev);
+		bio->bi_iter.bi_sector = sector;
+		bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
+		bio->bi_end_io = readpage_end_bio;
+		bio->bi_private = cmpl;
+		/* Take care of bh's that straddle the end of the device */
+		guard_bio_eod(bio);
+	}
+
+	if (bio)
+		submit_bio(bio);
+
+	err = blk_completion_wait_killable(cmpl);
+	if (!err)
+		return AOP_UPDATED_PAGE;
+	unlock_page(page);
+	return err;
+}
+
 /*
  * Generic "read page" function for block devices that have the normal
  * get_block functionality. This is most of the block device filesystems.
@@ -2258,6 +2339,7 @@ EXPORT_SYMBOL(block_is_partially_uptodate);
  */
 int block_read_full_page(struct page *page, get_block_t *get_block)
 {
+	struct blk_completion *cmpl = kmalloc(sizeof(*cmpl), GFP_NOIO);
 	struct inode *inode = page->mapping->host;
 	sector_t iblock, lblock;
 	struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
@@ -2265,6 +2347,9 @@ int block_read_full_page(struct page *page, get_block_t *get_block)
 	int nr, i, err = 0;
 	int fully_mapped = 1;
 
+	if (!cmpl)
+		return -ENOMEM;
+
 	head = create_page_buffers(page, inode, 0);
 	blocksize = head->b_size;
 	bbits = block_size_bits(blocksize);
@@ -2303,6 +2388,7 @@ int block_read_full_page(struct page *page, get_block_t *get_block)
 	} while (i++, iblock++, (bh = bh->b_this_page) != head);
 
 	if (err) {
+		kfree(cmpl);
 		unlock_page(page);
 		return err;
 	}
@@ -2322,6 +2408,10 @@ int block_read_full_page(struct page *page, get_block_t *get_block)
 		mark_buffer_async_read(bh);
 	}
 
+	if (!fscrypt_inode_uses_fs_layer_crypto(inode))
+		return readpage_submit_bhs(page, cmpl, nr, arr);
+	kfree(cmpl);
+
 	/*
 	 * Stage 3: start the IO.  Check for uptodateness
 	 * inside the buffer lock in case another process reading

From patchwork Thu Oct 22 21:22:26 2020
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 4/6] fs: Hoist fscrypt decryption to bio completion handler
Date: Thu, 22 Oct 2020 22:22:26 +0100
Message-Id: <20201022212228.15703-5-willy@infradead.org>
In-Reply-To: <20201022212228.15703-1-willy@infradead.org>
References: <20201022212228.15703-1-willy@infradead.org>

This is prep work for doing decryption at the BIO level instead of the
BH level.  It still works on one BH at a time for now.
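
The underlying pattern is deferring work that cannot run in bio
completion context to a workqueue (hedged sketch; struct defer_ctx and
example_defer() are illustrative names, not part of this patch):

struct defer_ctx {
	struct work_struct work;
	struct bio *bio;
};

/* Returns false if the caller must fall back to the error path. */
static bool example_defer(struct bio *bio, work_func_t fn)
{
	/* GFP_ATOMIC: bi_end_io may run in interrupt context. */
	struct defer_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);

	if (!ctx)
		return false;
	INIT_WORK(&ctx->work, fn);
	ctx->bio = bio;
	fscrypt_enqueue_decrypt_work(&ctx->work);
	return true;
}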
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/buffer.c | 45 +++++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 24 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index ccb90081117c..627ae1d853c0 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -241,6 +241,10 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
 	return ret;
 }
 
+/*
+ * I/O completion handler for block_read_full_page() - pages
+ * which come unlocked at the end of I/O.
+ */
 static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
 {
 	unsigned long flags;
@@ -313,28 +317,6 @@ static void decrypt_bh(struct work_struct *work)
 	kfree(ctx);
 }
 
-/*
- * I/O completion handler for block_read_full_page() - pages
- * which come unlocked at the end of I/O.
- */
-static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate)
-{
-	/* Decrypt if needed */
-	if (uptodate &&
-	    fscrypt_inode_uses_fs_layer_crypto(bh->b_page->mapping->host)) {
-		struct decrypt_bh_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
-
-		if (ctx) {
-			INIT_WORK(&ctx->work, decrypt_bh);
-			ctx->bh = bh;
-			fscrypt_enqueue_decrypt_work(&ctx->work);
-			return;
-		}
-		uptodate = 0;
-	}
-	end_buffer_async_read(bh, uptodate);
-}
-
 /*
  * Completion handler for block_write_full_page() - pages which are unlocked
  * during I/O, and which have PageWriteback cleared upon I/O completion.
@@ -404,7 +386,7 @@ EXPORT_SYMBOL(end_buffer_async_write);
  */
 static void mark_buffer_async_read(struct buffer_head *bh)
 {
-	bh->b_end_io = end_buffer_async_read_io;
+	bh->b_end_io = end_buffer_async_read;
 	set_buffer_async_read(bh);
 }
 
@@ -3103,11 +3085,26 @@ EXPORT_SYMBOL(generic_block_bmap);
 static void end_bio_bh_io_sync(struct bio *bio)
 {
 	struct buffer_head *bh = bio->bi_private;
+	int uptodate = !bio->bi_status;
 
 	if (unlikely(bio_flagged(bio, BIO_QUIET)))
 		set_bit(BH_Quiet, &bh->b_state);
 
-	bh->b_end_io(bh, !bio->bi_status);
+	/* Decrypt if needed */
+	if ((bio_data_dir(bio) == READ) && uptodate &&
+	    fscrypt_inode_uses_fs_layer_crypto(bh->b_page->mapping->host)) {
+		struct decrypt_bh_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
+
+		if (ctx) {
+			INIT_WORK(&ctx->work, decrypt_bh);
+			ctx->bh = bh;
+			fscrypt_enqueue_decrypt_work(&ctx->work);
+			bio_put(bio);
+			return;
+		}
+		uptodate = 0;
+	}
+	bh->b_end_io(bh, uptodate);
 	bio_put(bio);
 }
From patchwork Thu Oct 22 21:22:27 2020
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 5/6] fs: Turn decrypt_bh into decrypt_bio
Date: Thu, 22 Oct 2020 22:22:27 +0100
Message-Id: <20201022212228.15703-6-willy@infradead.org>
In-Reply-To: <20201022212228.15703-1-willy@infradead.org>
References: <20201022212228.15703-1-willy@infradead.org>

Pass a bio to decrypt_bio instead of a buffer_head to decrypt_bh.
Another step towards doing decryption per-BIO instead of per-BH.
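
After this change the work item reaches its data through two hops
(sketch; example_bh_from_work() is an illustrative name only):

/* work_struct -> decrypt_bio_ctx -> bio -> buffer_head */
static struct buffer_head *example_bh_from_work(struct work_struct *work)
{
	struct decrypt_bio_ctx *ctx =
		container_of(work, struct decrypt_bio_ctx, work);

	return ctx->bio->bi_private;
}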
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/buffer.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 627ae1d853c0..f859e0929b7e 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -299,22 +299,24 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
 	return;
 }
 
-struct decrypt_bh_ctx {
+struct decrypt_bio_ctx {
 	struct work_struct work;
-	struct buffer_head *bh;
+	struct bio *bio;
 };
 
-static void decrypt_bh(struct work_struct *work)
+static void decrypt_bio(struct work_struct *work)
 {
-	struct decrypt_bh_ctx *ctx =
-		container_of(work, struct decrypt_bh_ctx, work);
-	struct buffer_head *bh = ctx->bh;
+	struct decrypt_bio_ctx *ctx =
+		container_of(work, struct decrypt_bio_ctx, work);
+	struct bio *bio = ctx->bio;
+	struct buffer_head *bh = bio->bi_private;
 	int err;
 
 	err = fscrypt_decrypt_pagecache_blocks(bh->b_page, bh->b_size,
 					       bh_offset(bh));
 	end_buffer_async_read(bh, err == 0);
 	kfree(ctx);
+	bio_put(bio);
 }
 
 /*
@@ -3093,13 +3095,12 @@ static void end_bio_bh_io_sync(struct bio *bio)
 	/* Decrypt if needed */
 	if ((bio_data_dir(bio) == READ) && uptodate &&
 	    fscrypt_inode_uses_fs_layer_crypto(bh->b_page->mapping->host)) {
-		struct decrypt_bh_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
+		struct decrypt_bio_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
 
 		if (ctx) {
-			INIT_WORK(&ctx->work, decrypt_bh);
-			ctx->bh = bh;
+			INIT_WORK(&ctx->work, decrypt_bio);
+			ctx->bio = bio;
 			fscrypt_enqueue_decrypt_work(&ctx->work);
-			bio_put(bio);
 			return;
 		}
 		uptodate = 0;
From patchwork Thu Oct 22 21:22:28 2020
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 6/6] fs: Convert block_read_full_page to be synchronous with
 fscrypt enabled
Date: Thu, 22 Oct 2020 22:22:28 +0100
Message-Id: <20201022212228.15703-7-willy@infradead.org>
In-Reply-To: <20201022212228.15703-1-willy@infradead.org>
References: <20201022212228.15703-1-willy@infradead.org>

Use the new decrypt_end_bio() instead of readpage_end_bio() if fscrypt
needs to be used.  Remove the old end_buffer_async_read() now that all
BHs go through readpage_end_bio().
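
With the whole series applied, a caller sees the synchronous convention
sketched below (illustrative; it assumes, per the earlier patches in
this series, that AOP_UPDATED_PAGE means success with the page still
locked):

/* Hypothetical caller of the now-synchronous readpage path. */
static int example_read_page(struct page *page, get_block_t *get_block)
{
	int err = block_read_full_page(page, get_block);

	if (err == AOP_UPDATED_PAGE) {
		/* Page is uptodate and still locked on this return. */
		unlock_page(page);
		return 0;
	}
	return err;	/* the page was already unlocked on error */
}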
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/buffer.c | 198 ++++++++++++++++------------------------------
 1 file changed, 59 insertions(+), 139 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index f859e0929b7e..62c74f0102d4 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -241,84 +241,6 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
 	return ret;
 }
 
-/*
- * I/O completion handler for block_read_full_page() - pages
- * which come unlocked at the end of I/O.
- */
-static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
-{
-	unsigned long flags;
-	struct buffer_head *first;
-	struct buffer_head *tmp;
-	struct page *page;
-	int page_uptodate = 1;
-
-	BUG_ON(!buffer_async_read(bh));
-
-	page = bh->b_page;
-	if (uptodate) {
-		set_buffer_uptodate(bh);
-	} else {
-		clear_buffer_uptodate(bh);
-		buffer_io_error(bh, ", async page read");
-		SetPageError(page);
-	}
-
-	/*
-	 * Be _very_ careful from here on. Bad things can happen if
-	 * two buffer heads end IO at almost the same time and both
-	 * decide that the page is now completely done.
-	 */
-	first = page_buffers(page);
-	spin_lock_irqsave(&first->b_uptodate_lock, flags);
-	clear_buffer_async_read(bh);
-	unlock_buffer(bh);
-	tmp = bh;
-	do {
-		if (!buffer_uptodate(tmp))
-			page_uptodate = 0;
-		if (buffer_async_read(tmp)) {
-			BUG_ON(!buffer_locked(tmp));
-			goto still_busy;
-		}
-		tmp = tmp->b_this_page;
-	} while (tmp != bh);
-	spin_unlock_irqrestore(&first->b_uptodate_lock, flags);
-
-	/*
-	 * If none of the buffers had errors and they are all
-	 * uptodate then we can set the page uptodate.
-	 */
-	if (page_uptodate && !PageError(page))
-		SetPageUptodate(page);
-	unlock_page(page);
-	return;
-
-still_busy:
-	spin_unlock_irqrestore(&first->b_uptodate_lock, flags);
-	return;
-}
-
-struct decrypt_bio_ctx {
-	struct work_struct work;
-	struct bio *bio;
-};
-
-static void decrypt_bio(struct work_struct *work)
-{
-	struct decrypt_bio_ctx *ctx =
-		container_of(work, struct decrypt_bio_ctx, work);
-	struct bio *bio = ctx->bio;
-	struct buffer_head *bh = bio->bi_private;
-	int err;
-
-	err = fscrypt_decrypt_pagecache_blocks(bh->b_page, bh->b_size,
-					       bh_offset(bh));
-	end_buffer_async_read(bh, err == 0);
-	kfree(ctx);
-	bio_put(bio);
-}
-
 /*
  * Completion handler for block_write_full_page() - pages which are unlocked
  * during I/O, and which have PageWriteback cleared upon I/O completion.
@@ -365,33 +287,6 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate)
 }
 EXPORT_SYMBOL(end_buffer_async_write);
 
-/*
- * If a page's buffers are under async readin (end_buffer_async_read
- * completion) then there is a possibility that another thread of
- * control could lock one of the buffers after it has completed
- * but while some of the other buffers have not completed. This
- * locked buffer would confuse end_buffer_async_read() into not unlocking
- * the page. So the absence of BH_Async_Read tells end_buffer_async_read()
- * that this buffer is not under async I/O.
- *
- * The page comes unlocked when it has no locked buffer_async buffers
- * left.
- *
- * PageLocked prevents anyone starting new async I/O reads any of
- * the buffers.
- *
- * PageWriteback is used to prevent simultaneous writeout of the same
- * page.
- *
- * PageLocked prevents anyone from starting writeback of a page which is
- * under read I/O (PageWriteback is only ever set against a locked page).
- */
-static void mark_buffer_async_read(struct buffer_head *bh)
-{
-	bh->b_end_io = end_buffer_async_read;
-	set_buffer_async_read(bh);
-}
-
 static void mark_buffer_async_write_endio(struct buffer_head *bh,
 					  bh_end_io_t *handler)
 {
@@ -2268,8 +2163,54 @@ static void readpage_end_bio(struct bio *bio)
 	bio_put(bio);
 }
 
+struct decrypt_bio_ctx {
+	struct work_struct work;
+	struct bio *bio;
+};
+
+static void decrypt_bio(struct work_struct *work)
+{
+	struct decrypt_bio_ctx *ctx =
+		container_of(work, struct decrypt_bio_ctx, work);
+	struct bio *bio = ctx->bio;
+	struct bio_vec *bvec;
+	int i, err = 0;
+
+	kfree(ctx);
+	bio_for_each_bvec_all(bvec, bio, i) {
+		err = fscrypt_decrypt_pagecache_blocks(bvec->bv_page,
+				bvec->bv_len, bvec->bv_offset);
+		if (err)
+			break;
+	}
+
+	/* XXX: Should report a better error here */
+	if (err)
+		bio->bi_status = BLK_STS_IOERR;
+	readpage_end_bio(bio);
+}
+
+static void decrypt_end_bio(struct bio *bio)
+{
+	struct decrypt_bio_ctx *ctx = NULL;
+
+	if (bio->bi_status == BLK_STS_OK) {
+		ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
+		if (!ctx)
+			bio->bi_status = BLK_STS_RESOURCE;
+	}
+
+	if (ctx) {
+		INIT_WORK(&ctx->work, decrypt_bio);
+		ctx->bio = bio;
+		fscrypt_enqueue_decrypt_work(&ctx->work);
+	} else {
+		readpage_end_bio(bio);
+	}
+}
+
 static int readpage_submit_bhs(struct page *page, struct blk_completion *cmpl,
-		unsigned int nr, struct buffer_head **bhs)
+		unsigned int nr, struct buffer_head **bhs, bio_end_io_t end_bio)
 {
 	struct bio *bio = NULL;
 	unsigned int i;
 	int err;
@@ -2283,7 +2224,8 @@ static int readpage_submit_bhs(struct page *page, struct blk_completion *cmpl,
 		bool same_page;
 
 		if (buffer_uptodate(bh)) {
-			end_buffer_async_read(bh, 1);
+			clear_buffer_async_read(bh);
+			unlock_buffer(bh);
 			blk_completion_sub(cmpl, BLK_STS_OK, 1);
 			continue;
 		}
@@ -2298,7 +2240,7 @@ static int readpage_submit_bhs(struct page *page, struct blk_completion *cmpl,
 		bio_set_dev(bio, bh->b_bdev);
 		bio->bi_iter.bi_sector = sector;
 		bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
-		bio->bi_end_io = readpage_end_bio;
+		bio->bi_end_io = end_bio;
 		bio->bi_private = cmpl;
 		/* Take care of bh's that straddle the end of the device */
 		guard_bio_eod(bio);
@@ -2314,6 +2256,13 @@ static int readpage_submit_bhs(struct page *page, struct blk_completion *cmpl,
 	return err;
 }
 
+static bio_end_io_t *fscrypt_end_io(struct inode *inode)
+{
+	if (fscrypt_inode_uses_fs_layer_crypto(inode))
+		return decrypt_end_bio;
+	return readpage_end_bio;
+}
+
 /*
  * Generic "read page" function for block devices that have the normal
  * get_block functionality. This is most of the block device filesystems.
@@ -2389,26 +2338,10 @@ int block_read_full_page(struct page *page, get_block_t *get_block)
 	for (i = 0; i < nr; i++) {
 		bh = arr[i];
 		lock_buffer(bh);
-		mark_buffer_async_read(bh);
+		set_buffer_async_read(bh);
 	}
 
-	if (!fscrypt_inode_uses_fs_layer_crypto(inode))
-		return readpage_submit_bhs(page, cmpl, nr, arr);
-	kfree(cmpl);
-
-	/*
-	 * Stage 3: start the IO.  Check for uptodateness
-	 * inside the buffer lock in case another process reading
-	 * the underlying blockdev brought it uptodate (the sct fix).
-	 */
-	for (i = 0; i < nr; i++) {
-		bh = arr[i];
-		if (buffer_uptodate(bh))
-			end_buffer_async_read(bh, 1);
-		else
-			submit_bh(REQ_OP_READ, 0, bh);
-	}
-	return 0;
+	return readpage_submit_bhs(page, cmpl, nr, arr, fscrypt_end_io(inode));
 }
 EXPORT_SYMBOL(block_read_full_page);
 
@@ -3092,19 +3025,6 @@ static void end_bio_bh_io_sync(struct bio *bio)
 	if (unlikely(bio_flagged(bio, BIO_QUIET)))
 		set_bit(BH_Quiet, &bh->b_state);
 
-	/* Decrypt if needed */
-	if ((bio_data_dir(bio) == READ) && uptodate &&
-	    fscrypt_inode_uses_fs_layer_crypto(bh->b_page->mapping->host)) {
-		struct decrypt_bio_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
-
-		if (ctx) {
-			INIT_WORK(&ctx->work, decrypt_bio);
-			ctx->bio = bio;
-			fscrypt_enqueue_decrypt_work(&ctx->work);
-			return;
-		}
-		uptodate = 0;
-	}
 	bh->b_end_io(bh, uptodate);
 	bio_put(bio);
 }