From patchwork Fri May 20 18:36:35 2022
X-Patchwork-Submitter: Stefan Roesch
X-Patchwork-Id: 12857255
From: Stefan Roesch <shr@fb.com>
Subject: [RFC PATCH v4 06/17] iomap: Add async buffered write support
Date: Fri, 20 May 2022 11:36:35 -0700
Message-ID: <20220520183646.2002023-7-shr@fb.com>
In-Reply-To: <20220520183646.2002023-1-shr@fb.com>
References: <20220520183646.2002023-1-shr@fb.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

This adds async buffered write support to iomap.

If the kiocb has IOCB_NOWAIT set, the IOMAP_NOWAIT flag is passed down
through the iomap iterator. The buffered write path then uses GFP_NOWAIT
allocations and FGP_NOWAIT folio lookups, and returns -EAGAIN instead of
blocking when the folio would have to be read in synchronously or when
the per-block state cannot be allocated without sleeping.
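For illustration only (not part of this patch): a minimal liburing sketch
of the workload this series targets. The file path, queue depth and buffer
size are arbitrary placeholders. io_uring attempts the buffered write
inline with IOCB_NOWAIT set; with IOMAP_NOWAIT plumbed through as below,
the common case completes without blocking, and an -EAGAIN from the write
path simply makes io_uring retry from worker context instead of sleeping
in the submitting task.

/*
 * Hypothetical example program; assumes liburing is installed and that
 * /tmp/testfile lives on a filesystem using the iomap buffered write path.
 * Build: gcc example.c -luring
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[4096];
	int fd, ret;

	memset(buf, 'a', sizeof(buf));

	/* Plain buffered fd: no O_DIRECT, so the write goes through the
	 * buffered write path of the filesystem. */
	fd = open("/tmp/testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return 1;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/* Queue one buffered write at offset 0 and submit it. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, sizeof(buf), 0);
	io_uring_submit(&ring);

	/* Reap the completion; cqe->res is the byte count or a negative errno. */
	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		printf("buffered write completed: %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}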
Signed-off-by: Stefan Roesch <shr@fb.com>
---
 fs/iomap/buffered-io.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 27e67bfc64f5..187f4ddd7ba7 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -555,15 +555,21 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	loff_t block_size = i_blocksize(iter->inode);
 	loff_t block_start = round_down(pos, block_size);
 	loff_t block_end = round_up(pos + len, block_size);
+	unsigned int nr_blocks = i_blocks_per_folio(iter->inode, folio);
 	size_t from = offset_in_folio(folio, pos), to = from + len;
 	size_t poff, plen;
 	gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
 
+	if (iter->flags & IOMAP_NOWAIT)
+		gfp = GFP_NOWAIT;
+
 	if (folio_test_uptodate(folio))
 		return 0;
 	folio_clear_error(folio);
 
 	iop = iomap_page_create(iter->inode, folio, gfp);
+	if ((iter->flags & IOMAP_NOWAIT) && !iop && nr_blocks > 1)
+		return -EAGAIN;
 
 	do {
 		iomap_adjust_read_range(iter->inode, folio, &block_start,
@@ -581,7 +587,12 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 				return -EIO;
 			folio_zero_segments(folio, poff, from, to, poff + plen);
 		} else {
-			int status = iomap_read_folio_sync(block_start, folio,
+			int status;
+
+			if (iter->flags & IOMAP_NOWAIT)
+				return -EAGAIN;
+
+			status = iomap_read_folio_sync(block_start, folio,
 					poff, plen, srcmap);
 			if (status)
 				return status;
@@ -610,6 +621,9 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS;
 	int status = 0;
 
+	if (iter->flags & IOMAP_NOWAIT)
+		fgp |= FGP_NOWAIT;
+
 	BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length);
 	if (srcmap != &iter->iomap)
 		BUG_ON(pos + len > srcmap->offset + srcmap->length);
@@ -767,6 +781,10 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
 		 * Otherwise there's a nasty deadlock on copying from the
 		 * same page as we're writing to, without it being marked
 		 * up-to-date.
+		 *
+		 * For async buffered writes the assumption is that the user
+		 * page has already been faulted in. This can be optimized by
+		 * faulting the user page in the prepare phase of io-uring.
 		 */
 		if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
 			status = -EFAULT;
@@ -822,6 +840,9 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
 	};
 	int ret;
 
+	if (iocb->ki_flags & IOCB_NOWAIT)
+		iter.flags |= IOMAP_NOWAIT;
+
 	while ((ret = iomap_iter(&iter, ops)) > 0)
 		iter.processed = iomap_write_iter(&iter, i);
 	if (iter.pos == iocb->ki_pos)