From patchwork Mon May 16 16:47:06 2022
X-Patchwork-Submitter: Stefan Roesch
X-Patchwork-Id: 12851163
From: Stefan Roesch <shr@fb.com>
Subject: [RFC PATCH v2 04/16] iomap: add async buffered write support
Date: Mon, 16 May 2022 09:47:06 -0700
Message-ID: <20220516164718.2419891-5-shr@fb.com>
In-Reply-To: <20220516164718.2419891-1-shr@fb.com>
References: <20220516164718.2419891-1-shr@fb.com>
X-Mailer: git-send-email 2.30.2
X-Mailing-List: io-uring@vger.kernel.org

This adds async buffered write support to iomap. The changes focus on
what is needed to support XFS with iomap; other filesystems might
require additional changes.
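For illustration only (not part of this patch): the intended interaction is
that a filesystem's buffered write handler keeps IOCB_NOWAIT set on the
kiocb, and iomap_file_buffered_write() now translates that to IOMAP_NOWAIT
and fails with -EAGAIN instead of blocking, so an async caller such as
io_uring can retry from a context that may sleep. A minimal sketch of such
a caller, with a hypothetical function name and simplified locking, could
look like this:

/*
 * Sketch, not part of this patch: a hypothetical filesystem write
 * handler consuming the nowait support added here. With IOCB_NOWAIT
 * set, iomap_file_buffered_write() uses IOMAP_NOWAIT internally and
 * returns -EAGAIN rather than blocking on page cache allocation or
 * read-in of partially written blocks.
 */
static ssize_t example_buffered_write(struct kiocb *iocb,
		struct iov_iter *from, const struct iomap_ops *ops)
{
	struct inode *inode = iocb->ki_filp->f_mapping->host;
	ssize_t ret;

	if (iocb->ki_flags & IOCB_NOWAIT) {
		/* Do not block on the inode lock in the nowait case either. */
		if (!inode_trylock(inode))
			return -EAGAIN;
	} else {
		inode_lock(inode);
	}

	ret = iomap_file_buffered_write(iocb, from, ops);

	inode_unlock(inode);
	return ret;
}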
Signed-off-by: Stefan Roesch <shr@fb.com>
---
 fs/iomap/buffered-io.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 1ffdc7078e7d..ceb3091f94c2 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -580,13 +580,20 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	size_t from = offset_in_folio(folio, pos), to = from + len;
 	size_t poff, plen;
 	gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
+	bool no_wait = (iter->flags & IOMAP_NOWAIT);
+
+	if (no_wait)
+		gfp = GFP_NOIO;
 
 	if (folio_test_uptodate(folio))
 		return 0;
 	folio_clear_error(folio);
 
-	if (!iop && nr_blocks > 1)
+	if (!iop && nr_blocks > 1) {
 		iop = iomap_page_create_gfp(iter->inode, folio, nr_blocks, gfp);
+		if (no_wait && !iop)
+			return -EAGAIN;
+	}
 
 	do {
 		iomap_adjust_read_range(iter->inode, folio, &block_start,
@@ -603,6 +610,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 			if (WARN_ON_ONCE(iter->flags & IOMAP_UNSHARE))
 				return -EIO;
 			folio_zero_segments(folio, poff, from, to, poff + plen);
+		} else if (no_wait) {
+			return -EAGAIN;
 		} else {
 			int status = iomap_read_folio_sync(block_start, folio,
 					poff, plen, srcmap);
@@ -633,6 +642,9 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS;
 	int status = 0;
 
+	if (iter->flags & IOMAP_NOWAIT)
+		fgp |= FGP_NOWAIT;
+
 	BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length);
 	if (srcmap != &iter->iomap)
 		BUG_ON(pos + len > srcmap->offset + srcmap->length);
@@ -790,6 +802,10 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
 		 * Otherwise there's a nasty deadlock on copying from the
 		 * same page as we're writing to, without it being marked
 		 * up-to-date.
+		 *
+		 * For async buffered writes the assumption is that the user
+		 * page has already been faulted in. This can be optimized by
+		 * faulting the user page in the prepare phase of io-uring.
 		 */
 		if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
 			status = -EFAULT;
@@ -845,6 +861,9 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
 	};
 	int ret;
 
+	if (iocb->ki_flags & IOCB_NOWAIT)
+		iter.flags |= IOMAP_NOWAIT;
+
 	while ((ret = iomap_iter(&iter, ops)) > 0)
 		iter.processed = iomap_write_iter(&iter, i);
 	if (iter.pos == iocb->ki_pos)
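
As a usage note (not part of the patch): the consumer this series targets is
io_uring buffered writes, which set IOCB_NOWAIT on the first attempt. A
minimal userspace example using liburing (assuming liburing is installed;
names and sizes are illustrative) would be roughly:

/* Illustrative only: submit one buffered write through io_uring. With
 * this series, writes that do not need to block can complete inline via
 * the IOCB_NOWAIT/IOMAP_NOWAIT path instead of being punted to io-wq.
 */
#include <fcntl.h>
#include <liburing.h>
#include <string.h>
#include <unistd.h>

int example_write(const char *path)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[4096];
	int fd, ret;

	memset(buf, 'a', sizeof(buf));
	fd = open(path, O_WRONLY | O_CREAT, 0644);
	if (fd < 0)
		return -1;

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0) {
		close(fd);
		return ret;
	}

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, sizeof(buf), 0);
	io_uring_submit(&ring);

	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		ret = cqe->res;	/* bytes written, or -errno */
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_queue_exit(&ring);
	close(fd);
	return ret;
}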