From patchwork Thu Nov 17 05:58:04 2022
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 13046179
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Subject: [PATCH 3/9] xfs: use byte ranges for write cleanup ranges
Date: Thu, 17 Nov 2022 16:58:04 +1100
Message-Id: <20221117055810.498014-4-david@fromorbit.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20221117055810.498014-1-david@fromorbit.com>
References: <20221117055810.498014-1-david@fromorbit.com>

From: Dave Chinner

xfs_buffered_write_iomap_end() currently converts the byte ranges passed
to it to filesystem blocks to pass them to the bmap code to punch out
delalloc blocks, but then has to convert filesystem blocks back to byte
ranges for the page cache truncate.

We're about to make the page cache truncate go away and replace it with
a page cache walk, so having to convert everything to and from
filesystem blocks is messy and error-prone. It is much easier to pass
around byte ranges and convert to page indexes and/or filesystem blocks
only where those units are needed.

In preparation for the page cache walk being added, add a helper that
converts byte ranges to filesystem blocks and calls
xfs_bmap_punch_delalloc_range(), and convert
xfs_buffered_write_iomap_end() to calculate limits in byte ranges.

Signed-off-by: Dave Chinner
Reviewed-by: Darrick J. Wong
---
A stand-alone sketch of the byte-range rounding used in this patch is
appended after the diff for illustration.

 fs/xfs/xfs_iomap.c | 40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index a2e45ea1b0cb..7bb55dbc19d3 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1120,6 +1120,20 @@ xfs_buffered_write_iomap_begin(
 	return error;
 }
 
+static int
+xfs_buffered_write_delalloc_punch(
+	struct inode		*inode,
+	loff_t			start_byte,
+	loff_t			end_byte)
+{
+	struct xfs_mount	*mp = XFS_M(inode->i_sb);
+	xfs_fileoff_t		start_fsb = XFS_B_TO_FSBT(mp, start_byte);
+	xfs_fileoff_t		end_fsb = XFS_B_TO_FSB(mp, end_byte);
+
+	return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb,
+				end_fsb - start_fsb);
+}
+
 static int
 xfs_buffered_write_iomap_end(
 	struct inode		*inode,
@@ -1129,10 +1143,9 @@ xfs_buffered_write_iomap_end(
 	unsigned		flags,
 	struct iomap		*iomap)
 {
-	struct xfs_inode	*ip = XFS_I(inode);
-	struct xfs_mount	*mp = ip->i_mount;
-	xfs_fileoff_t		start_fsb;
-	xfs_fileoff_t		end_fsb;
+	struct xfs_mount	*mp = XFS_M(inode->i_sb);
+	loff_t			start_byte;
+	loff_t			end_byte;
 	int			error = 0;
 
 	if (iomap->type != IOMAP_DELALLOC)
@@ -1157,13 +1170,13 @@ xfs_buffered_write_iomap_end(
 	 * the range.
 	 */
 	if (unlikely(!written))
-		start_fsb = XFS_B_TO_FSBT(mp, offset);
+		start_byte = round_down(offset, mp->m_sb.sb_blocksize);
 	else
-		start_fsb = XFS_B_TO_FSB(mp, offset + written);
-	end_fsb = XFS_B_TO_FSB(mp, offset + length);
+		start_byte = round_up(offset + written, mp->m_sb.sb_blocksize);
+	end_byte = round_up(offset + length, mp->m_sb.sb_blocksize);
 
 	/* Nothing to do if we've written the entire delalloc extent */
-	if (start_fsb >= end_fsb)
+	if (start_byte >= end_byte)
 		return 0;
 
 	/*
@@ -1173,15 +1186,12 @@ xfs_buffered_write_iomap_end(
 	 * leave dirty pages with no space reservation in the cache.
 	 */
 	filemap_invalidate_lock(inode->i_mapping);
-	truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
-			XFS_FSB_TO_B(mp, end_fsb) - 1);
-
-	error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
-			end_fsb - start_fsb);
+	truncate_pagecache_range(inode, start_byte, end_byte - 1);
+	error = xfs_buffered_write_delalloc_punch(inode, start_byte, end_byte);
 	filemap_invalidate_unlock(inode->i_mapping);
 	if (error && !xfs_is_shutdown(mp)) {
-		xfs_alert(mp, "%s: unable to clean up ino %lld",
-			__func__, ip->i_ino);
+		xfs_alert(mp, "%s: unable to clean up ino 0x%llx",
+			__func__, XFS_I(inode)->i_ino);
 		return error;
 	}
 	return 0;
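
For illustration only: a minimal user-space sketch of the byte-range
rounding the hunks above switch to. round_down()/round_up() are
re-implemented here for a power-of-two alignment to mirror the kernel
macros, and the 4096-byte block size and the offsets in main() are
made-up example values, not taken from the patch.

/*
 * If nothing was written, cleanup starts at the beginning of the block
 * containing 'offset' (round down, like XFS_B_TO_FSBT).  For a short
 * write, the blocks the write dirtied are kept and cleanup starts at
 * the block boundary at or after 'offset + written' (round up, like
 * XFS_B_TO_FSB).  The end of the cleanup range always rounds up.
 */
#include <stdio.h>
#include <stdint.h>

#define EXAMPLE_BLOCKSIZE	4096ULL		/* made-up sb_blocksize */

static uint64_t round_down_pow2(uint64_t x, uint64_t align)
{
	return x & ~(align - 1);
}

static uint64_t round_up_pow2(uint64_t x, uint64_t align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	/* Example: a 10000 byte delalloc mapping at byte offset 5000. */
	uint64_t offset = 5000, length = 10000, written = 100;

	uint64_t start_unwritten = round_down_pow2(offset, EXAMPLE_BLOCKSIZE);
	uint64_t start_short = round_up_pow2(offset + written, EXAMPLE_BLOCKSIZE);
	uint64_t end_byte = round_up_pow2(offset + length, EXAMPLE_BLOCKSIZE);

	printf("written == 0  : punch bytes [%llu, %llu)\n",
	       (unsigned long long)start_unwritten,
	       (unsigned long long)end_byte);
	printf("written == 100: punch bytes [%llu, %llu)\n",
	       (unsigned long long)start_short,
	       (unsigned long long)end_byte);
	return 0;
}

Because both start_byte and end_byte are block-aligned, the
XFS_B_TO_FSBT()/XFS_B_TO_FSB() conversions in the new
xfs_buffered_write_delalloc_punch() helper map them back to exactly the
same block range the old filesystem-block based code punched out.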