From patchwork Wed Jan 15 09:31:25 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13940123
From: "Kirill A. Shutemov"
To: Andrew Morton, "Matthew Wilcox (Oracle)", Jens Axboe
Cc: "Jason A. Donenfeld", "Kirill A. Shutemov", Andi Shyti, Chengming Zhou,
 Christian Brauner, Christophe Leroy, Dan Carpenter, David Airlie,
 David Hildenbrand, Hao Ge, Jani Nikula, Johannes Weiner, Joonas Lahtinen,
 Josef Bacik, Masami Hiramatsu, Mathieu Desnoyers, Miklos Szeredi, Nhat Pham,
 Oscar Salvador, Ran Xiaokai, Rodrigo Vivi, Simona Vetter, Steven Rostedt,
 Tvrtko Ursulin, Vlastimil Babka, Yosry Ahmed, Yu Zhao,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 01/11] mm/migrate: Transfer PG_dropbehind to the new folio Date: Wed, 15 Jan 2025 11:31:25 +0200 Message-ID: <20250115093135.3288234-2-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Do not lose the flag on page migration. Ideally, these folios should be freed instead of migration. But it requires to find right spot do this and proper testing. Transfer the flag for now. Signed-off-by: Kirill A. Shutemov --- mm/migrate.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/mm/migrate.c b/mm/migrate.c index caadbe393aa2..690efa064bee 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -682,6 +682,10 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio) if (folio_test_dirty(folio)) folio_set_dirty(newfolio); + /* TODO: free the folio on migration? */ + if (folio_test_dropbehind(folio)) + folio_set_dropbehind(newfolio); + if (folio_test_young(folio)) folio_set_young(newfolio); if (folio_test_idle(folio)) From patchwork Wed Jan 15 09:31:26 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13940122 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9B0BA248177; Wed, 15 Jan 2025 09:31:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.12 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933514; cv=none; b=jXzxun2eOzDAX7fBSbsWmVAKFCEoxdGsOD4PzvW155vL3OdhVy3Z+T2kmGxrFH2IveU5bjV7zKWc1DhV1NO+dbesZ161daJzqfxCRW3Ass9BRCMrVBAg+tRHA+H3w6xQegvXT+q++8f6YAZq+Q/vTaWcFYwTI9B8/pb9K80SEiw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933514; c=relaxed/simple; bh=nRzbgXuQyXrlvnkDG9Mh1SQu4X/gBOf94amjJ8YtZCI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=IFbrR3LJP2lhnYA9r4W8ZNThlSKiEtEeEQn/LBxa36snb7p1hyJVBDurNpBHl5/EA80xjhwI5G8LFe39RlsZnSI/Lyy37lXo+480nnV6N+o/IpbSE5XJc0c55C1fpbFzCaAAZS1Fv3TPbx7Te3udsdq+4ABl8SlN4Yz+xV4OZqM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=NEISfs1e; arc=none smtp.client-ip=192.198.163.12 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NEISfs1e" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933512; x=1768469512; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=nRzbgXuQyXrlvnkDG9Mh1SQu4X/gBOf94amjJ8YtZCI=; b=NEISfs1e/kmVZBxnrEjgyFwltVUU8iDGifsbg8UykE2J1le/Whe+fSso spAE0/5PqmrNYug61rKnMaLkq7Z0of83BWpgKyilQcI0aAUZFBntxCApc Z1fwZMlQBg5h+7RD07AbhlB1JoFedsoqTC8JIZiSR/+mXWkTmr3JY1GE8 zx6AwhvQygcZa+KhrkxGy+ZBWqkLrRVQIJuAZbtMSgxQkgB2LWn8G1veL 5sWP1UYtdPX2DGxEDuO8ljSAB2XQUUEodmmzvs3akOp+W/7KHSHnhlfcX fXH3H4EzLn943NuNUpYkHJqFq0FJsOT7yV4MQvWakoxenwW6GV6SYSav8 A==; X-CSE-ConnectionGUID: DO9TP4DdS2SZxA2n/RmrSw== X-CSE-MsgGUID: wJusA8VKRoCPB6j689HwCQ== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="41195003" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="41195003" Received: from fmviesa005.fm.intel.com ([10.60.135.145]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:31:51 -0800 X-CSE-ConnectionGUID: AIiWRNDGRCO+nTKGleEiUw== X-CSE-MsgGUID: ICetDwloSsuj4G/eQ0fcsA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="109700828" Received: from black.fi.intel.com ([10.237.72.28]) by fmviesa005.fm.intel.com with ESMTP; 15 Jan 2025 01:31:43 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 4A033478; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 02/11] drm/i915/gem: Convert __shmem_writeback() to folios Date: Wed, 15 Jan 2025 11:31:26 +0200 Message-ID: <20250115093135.3288234-3-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Use folios instead of pages. This is preparation for removing PG_reclaim. Signed-off-by: Kirill A. Shutemov Acked-by: David Hildenbrand --- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index fe69f2c8527d..9016832b20fc 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -320,25 +320,25 @@ void __shmem_writeback(size_t size, struct address_space *mapping) /* Begin writeback on each dirty page */ for (i = 0; i < size >> PAGE_SHIFT; i++) { - struct page *page; + struct folio *folio; - page = find_lock_page(mapping, i); - if (!page) + folio = filemap_lock_folio(mapping, i); + if (!folio) continue; - if (!page_mapped(page) && clear_page_dirty_for_io(page)) { + if (!folio_mapped(folio) && folio_clear_dirty_for_io(folio)) { int ret; - SetPageReclaim(page); - ret = mapping->a_ops->writepage(page, &wbc); + folio_set_reclaim(folio); + ret = mapping->a_ops->writepage(&folio->page, &wbc); if (!PageWriteback(page)) - ClearPageReclaim(page); + folio_clear_reclaim(folio); if (!ret) goto put; } - unlock_page(page); + folio_unlock(folio); put: - put_page(page); + folio_put(folio); } } From patchwork Wed Jan 15 09:31:27 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13940126 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A4E511E7C2D; Wed, 15 Jan 2025 09:31:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.12 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933516; cv=none; b=LXupqfHCPbh2qRcOPshU1vmHOEXxHApcpFk5iJSB/AwDf1zB7BhqdjgLmMml3tZNbVc3u0zvW0bbkPIGnXR8QCYssWeeQ1Pfry/Oi3RBjv4mEJEqu2YcPFw1DJOXMikL4wAnpctiuL4i4n7eB4J3N0G4mH6zhlalCPC1iGsIOzw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933516; c=relaxed/simple; bh=cZw61RFJNpfhkjnCxIsm2dDafA4jFfM7q4hFg1D+Ye4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PBrky6utBMLV74Y7u8jjPTCquGNGotNfmC4FNJEDYyGDzRzX17vnGNz65Xecn5UQG51ueE4n9bMrgs25OR027b+8T4xmDvj231tuzUq1Izt1/WeW7HhiaMSJ3uCC7hHUbjA5Jak4oOH9h+XWgUQ33q2jepxEsRxp+6qyNCm1kPM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Hgt++E6h; arc=none smtp.client-ip=192.198.163.12 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Hgt++E6h" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933514; x=1768469514; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=cZw61RFJNpfhkjnCxIsm2dDafA4jFfM7q4hFg1D+Ye4=; b=Hgt++E6h9/kH3wlfM01FCrydVG6Z+35pI8RBmfzZYq9tK/a9ygtre/3f lA1/Xh73y+/yK+ahaNaVPnMNTNaJ5jSXIW+4JsmtdhBB9nno+aPBOoAXE z3rq3iv19NVSpFbEgZPLy85Nvx8yAtNWeaKj2IeLW1TeFE+uRE6zt5ZRm RkEqPShrKxX7lFJliqf2PSKARcaVMATNlwckdKEhNGkx4ky9rPYN14NZj XnaR7s0zIacb6PSkXfcsHd2L4v8ZlemzuzNCVD/FE45EPtfmAl+YitjwM vfLsc5c4ZBi8eutqEEVjJpzgzI5c2AtE74bSodLok/zOuYZi60DB7wS4G g==; X-CSE-ConnectionGUID: BQNw/B7ZQm2OZKTf5hUxsw== X-CSE-MsgGUID: 2eXkuoQGSzStVgIwrPYQgA== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="41195073" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="41195073" Received: from fmviesa005.fm.intel.com ([10.60.135.145]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:31:51 -0800 X-CSE-ConnectionGUID: BkISwSDCTmik0haa8kFKBg== X-CSE-MsgGUID: xtPqCedzReW23mPvf81/rQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="109700833" Received: from black.fi.intel.com ([10.237.72.28]) by fmviesa005.fm.intel.com with ESMTP; 15 Jan 2025 01:31:43 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 57ACF49D; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 03/11] drm/i915/gem: Use PG_dropbehind instead of PG_reclaim Date: Wed, 15 Jan 2025 11:31:27 +0200 Message-ID: <20250115093135.3288234-4-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed. Instead of using folio_set_reclaim(), use folio_set_dropbehind() in __shmem_writeback() It is safe to leave PG_dropbehind on the folio if, for some reason (bug?), the folio is not in a writeback state after ->writepage(). In these cases, the kernel had to clear PG_reclaim as it shared a page flag bit with PG_readahead. Signed-off-by: Kirill A. Shutemov Acked-by: David Hildenbrand --- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 9016832b20fc..c1724847c001 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -329,10 +329,8 @@ void __shmem_writeback(size_t size, struct address_space *mapping) if (!folio_mapped(folio) && folio_clear_dirty_for_io(folio)) { int ret; - folio_set_reclaim(folio); + folio_set_dropbehind(folio); ret = mapping->a_ops->writepage(&folio->page, &wbc); - if (!PageWriteback(page)) - folio_clear_reclaim(folio); if (!ret) goto put; } From patchwork Wed Jan 15 09:31:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13940125 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.18]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 800E51E7C04; Wed, 15 Jan 2025 09:31:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.18 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933516; cv=none; b=OCTaXkzG6jFOxOLsB1Dx3a+taplduttbG0gD5CSTfV2jcokHVS26qlH5JfNSt/Z4DpwYXW/xJuGL3Lc1wDIyZP3Hv5tTR/BkLzarbPIkmu7VklLdbaj2IT39I7jdWi8yJSIsRy+7VqE4NIoK80DGNPrTj1sjxeOtYzYAkTqk1tI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933516; c=relaxed/simple; bh=a239UknTIxW2CKH5kemCn7/leZfYKcHiwL8zSp/0EiM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=kxENzneJcHbm5yTMZtMUdh8/E7rws4KYTXw9FL07j8CRPLOUun4wR0waGeDtRJQSOD5N4bHQTK5q8XPlAyuKNW9p8n0acrS/2HqUGTzsAeWPpToib1Gke4CnkDf4oNGOEB8sjyjGl3Nlz5Mz2cWXf3Bs5PYmeUgmt/nhbI7m4+g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Pnt62ecs; arc=none smtp.client-ip=198.175.65.18 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Pnt62ecs" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933514; x=1768469514; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a239UknTIxW2CKH5kemCn7/leZfYKcHiwL8zSp/0EiM=; b=Pnt62ecstSYFVx4flN20SZSx3mU7J+YpZ5mgVf4Mp3mcL4x1uyQhSIkR qrY61uWGvqF3xoy31iDcyFMMU4OxuucwIDGlgguB/qjcO9zTC7cfnp1u2 TlsxIs4UQC0gPoPDEqI9MG0v4aqeeXcRJIFbh2Kst9cdsJQWcWsIGl4q0 wl6RXNKpO3+MvhPHforwLFC5jrzQ7S//Akr+hf7qMycQPYEm7T6rh93jj 4X4u1aJq1RYAHBfsTqPbbsLlFewrFhPnrJ0cadvAPHL/mroLWkmdw6xE/ iDnJqef7k25ojcAhVPj4L5ec5AnMGM9bqu3Q6NAqKRnneGhSbA9hE9u1q w==; X-CSE-ConnectionGUID: 0kQxYKsORKKuVwgcbRAItg== X-CSE-MsgGUID: dk33gebdTU6jzgQWgjS3eA== X-IronPort-AV: E=McAfee;i="6700,10204,11314"; a="37371856" X-IronPort-AV: E=Sophos;i="6.12,310,1728975600"; d="scan'208";a="37371856" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa110.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:31:52 -0800 X-CSE-ConnectionGUID: ltRxBoJPQditDzSqg/CR1A== X-CSE-MsgGUID: ewfu1EWYS/iNJTs2vK9cyA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="110066758" Received: from black.fi.intel.com ([10.237.72.28]) by orviesa004.jf.intel.com with ESMTP; 15 Jan 2025 01:31:43 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 652314AB; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 04/11] mm/zswap: Use PG_dropbehind instead of PG_reclaim Date: Wed, 15 Jan 2025 11:31:28 +0200 Message-ID: <20250115093135.3288234-5-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed. Instead of using folio_set_reclaim(), use folio_set_dropbehind() in zswap_writeback_entry(). Signed-off-by: Kirill A. Shutemov Acked-by: David Hildenbrand Acked-by: Yosry Ahmed --- mm/zswap.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/zswap.c b/mm/zswap.c index 167ae641379f..c20bad0b0978 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -1096,8 +1096,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry, /* folio is up to date */ folio_mark_uptodate(folio); - /* move it to the tail of the inactive list after end_writeback */ - folio_set_reclaim(folio); + /* free the folio after writeback */ + folio_set_dropbehind(folio); /* start writeback */ __swap_writepage(folio, &wbc); From patchwork Wed Jan 15 09:31:29 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13940127 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 558E8248198; Wed, 15 Jan 2025 09:32:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.12 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933522; cv=none; b=X8jIO/90jG1+9J7O5gbTucVgeLkdpR2LhLBn6avYSqvhVPOFo7SFB/qaGwjXUdxg/cINRfKrW/W1fgKzninmZyFHdK425WO1arONKMSXmqCZtf8TOL1Rd7ri0KRJxsCSNDHx/GhtyqK/UtTj7hE+itw8lXSij/PZ07C8FkvkUWM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933522; c=relaxed/simple; bh=36JNq/EXbfsIL2m0ifCEwxvDf1m9iaObLYkanVED3nU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=D8dMb+BanDFOtZ8sXyWYQ1swG/1O5uXbnOK/eQthRwSEFOWLo+JKQB9I84fu77ixpMrixxEPm+gTYxgdiEEDQNvOvkf72swTLIEh5oPHPE9UHbcNXMaIdl+FIvkNkV6kvR+iH8Ho2GA73boY7DMRD7xOqLfPHd5h5WFPTFIFPP4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=R/fe7ymH; arc=none smtp.client-ip=192.198.163.12 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="R/fe7ymH" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933520; x=1768469520; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=36JNq/EXbfsIL2m0ifCEwxvDf1m9iaObLYkanVED3nU=; b=R/fe7ymHFdAIGQaNerlUf8MjIVqXRsbO4k4V3g/gSu5W/0aMuqzjGLG4 t4TAy+reGVMyesfeRwd4f6Jir+YSGo6pOrCSRzXDWcGKMg23UeaYQokci URAtkNE3zY0QKWJxZfqlT8LuTGi8G00tKdwAws7N+6nE91tk6yVX96jjY 7Pa0TuTlEMs2kPabBDaASknPdA3dr3SwH161f9T48J3JIW4cFV66QV6KH VrtZTiMWVkicq46uZPl7rwW0sGe7M2XVIcEcADP4pZCBogitaVE5qOsDV 2CCagUpzG6KIL+zV3L5v/HZCb32j/gtr+Z+1Rezh5nFeXklWIHfJrL3OW Q==; X-CSE-ConnectionGUID: oMaqi8enSZKEPiGxb74HMg== X-CSE-MsgGUID: DMW9vJcvQM2EXaFPLYklLQ== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="41195103" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="41195103" Received: from fmviesa005.fm.intel.com ([10.60.135.145]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:31:59 -0800 X-CSE-ConnectionGUID: 7uc6MvmlRc2uU/scJlj9/A== X-CSE-MsgGUID: LSjikbJtRki8UuqJ5Zqd1Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="109700874" Received: from black.fi.intel.com ([10.237.72.28]) by fmviesa005.fm.intel.com with ESMTP; 15 Jan 2025 01:31:52 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 721904BE; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 05/11] mm/truncate: Use folio_set_dropbehind() instead of deactivate_file_folio() Date: Wed, 15 Jan 2025 11:31:29 +0200 Message-ID: <20250115093135.3288234-6-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed. The new flag allows to replace whole deactivate_file_folio() machinery with simple folio_set_dropbehind(). Signed-off-by: Kirill A. Shutemov --- mm/internal.h | 1 - mm/swap.c | 90 --------------------------------------------------- mm/truncate.c | 2 +- 3 files changed, 1 insertion(+), 92 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 109ef30fee11..93e6dac2077a 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -379,7 +379,6 @@ static inline vm_fault_t vmf_anon_prepare(struct vm_fault *vmf) vm_fault_t do_swap_page(struct vm_fault *vmf); void folio_rotate_reclaimable(struct folio *folio); bool __folio_end_writeback(struct folio *folio); -void deactivate_file_folio(struct folio *folio); void folio_activate(struct folio *folio); void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas, diff --git a/mm/swap.c b/mm/swap.c index fc8281ef4241..7a0dffd5973a 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -54,7 +54,6 @@ struct cpu_fbatches { */ local_lock_t lock; struct folio_batch lru_add; - struct folio_batch lru_deactivate_file; struct folio_batch lru_deactivate; struct folio_batch lru_lazyfree; #ifdef CONFIG_SMP @@ -524,68 +523,6 @@ void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma) folio_add_lru(folio); } -/* - * If the folio cannot be invalidated, it is moved to the - * inactive list to speed up its reclaim. It is moved to the - * head of the list, rather than the tail, to give the flusher - * threads some time to write it out, as this is much more - * effective than the single-page writeout from reclaim. - * - * If the folio isn't mapped and dirty/writeback, the folio - * could be reclaimed asap using the reclaim flag. - * - * 1. active, mapped folio -> none - * 2. active, dirty/writeback folio -> inactive, head, reclaim - * 3. inactive, mapped folio -> none - * 4. inactive, dirty/writeback folio -> inactive, head, reclaim - * 5. inactive, clean -> inactive, tail - * 6. Others -> none - * - * In 4, it moves to the head of the inactive list so the folio is - * written out by flusher threads as this is much more efficient - * than the single-page writeout from reclaim. 
- */
-static void lru_deactivate_file(struct lruvec *lruvec, struct folio *folio)
-{
-	bool active = folio_test_active(folio) || lru_gen_enabled();
-	long nr_pages = folio_nr_pages(folio);
-
-	if (folio_test_unevictable(folio))
-		return;
-
-	/* Some processes are using the folio */
-	if (folio_mapped(folio))
-		return;
-
-	lruvec_del_folio(lruvec, folio);
-	folio_clear_active(folio);
-	folio_clear_referenced(folio);
-
-	if (folio_test_writeback(folio) || folio_test_dirty(folio)) {
-		/*
-		 * Setting the reclaim flag could race with
-		 * folio_end_writeback() and confuse readahead. But the
-		 * race window is _really_ small and it's not a critical
-		 * problem.
-		 */
-		lruvec_add_folio(lruvec, folio);
-		folio_set_reclaim(folio);
-	} else {
-		/*
-		 * The folio's writeback ended while it was in the batch.
-		 * We move that folio to the tail of the inactive list.
-		 */
-		lruvec_add_folio_tail(lruvec, folio);
-		__count_vm_events(PGROTATED, nr_pages);
-	}
-
-	if (active) {
-		__count_vm_events(PGDEACTIVATE, nr_pages);
-		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
-				     nr_pages);
-	}
-}
-
 static void lru_deactivate(struct lruvec *lruvec, struct folio *folio)
 {
 	long nr_pages = folio_nr_pages(folio);
@@ -652,10 +589,6 @@ void lru_add_drain_cpu(int cpu)
 		local_unlock_irqrestore(&cpu_fbatches.lock_irq, flags);
 	}
 
-	fbatch = &fbatches->lru_deactivate_file;
-	if (folio_batch_count(fbatch))
-		folio_batch_move_lru(fbatch, lru_deactivate_file);
-
 	fbatch = &fbatches->lru_deactivate;
 	if (folio_batch_count(fbatch))
 		folio_batch_move_lru(fbatch, lru_deactivate);
@@ -667,28 +600,6 @@ void lru_add_drain_cpu(int cpu)
 	folio_activate_drain(cpu);
 }
 
-/**
- * deactivate_file_folio() - Deactivate a file folio.
- * @folio: Folio to deactivate.
- *
- * This function hints to the VM that @folio is a good reclaim candidate,
- * for example if its invalidation fails due to the folio being dirty
- * or under writeback.
- *
- * Context: Caller holds a reference on the folio.
- */
-void deactivate_file_folio(struct folio *folio)
-{
-	/* Deactivating an unevictable folio will not accelerate reclaim */
-	if (folio_test_unevictable(folio))
-		return;
-
-	if (lru_gen_enabled() && lru_gen_clear_refs(folio))
-		return;
-
-	folio_batch_add_and_move(folio, lru_deactivate_file, true);
-}
-
 /*
  * folio_deactivate - deactivate a folio
  * @folio: folio to deactivate
@@ -772,7 +683,6 @@ static bool cpu_needs_drain(unsigned int cpu)
 	/* Check these in order of likelihood that they're not zero */
 	return folio_batch_count(&fbatches->lru_add) ||
 	       folio_batch_count(&fbatches->lru_move_tail) ||
-	       folio_batch_count(&fbatches->lru_deactivate_file) ||
 	       folio_batch_count(&fbatches->lru_deactivate) ||
 	       folio_batch_count(&fbatches->lru_lazyfree) ||
 	       folio_batch_count(&fbatches->lru_activate) ||
diff --git a/mm/truncate.c b/mm/truncate.c
index e2e115adfbc5..864aaadc1e91 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -486,7 +486,7 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
 		 * of interest and try to speed up its reclaim.
 		 */
 		if (!ret) {
-			deactivate_file_folio(folio);
+			folio_set_dropbehind(folio);
 			/* Likely in the lru cache of a remote CPU */
 			if (nr_failed)
 				(*nr_failed)++;
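On the consumer side, the whole per-CPU batch dance collapses into a
single flag set. A condensed, assumed view of the mapping_try_invalidate()
loop after this patch (locking and batching trimmed):

	/* Assumed, condensed view of the invalidation path after this patch. */
	static void try_invalidate_one(struct address_space *mapping,
				       struct folio *folio)
	{
		if (!mapping_evict_folio(mapping, folio)) {
			/*
			 * Dirty or under writeback: instead of routing the
			 * folio through the old lru_deactivate_file per-CPU
			 * batch, mark it so writeback completion frees it
			 * directly.
			 */
			folio_set_dropbehind(folio);
		}
	}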
Shutemov" X-Patchwork-Id: 13940130 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6A5DF1E7C27; Wed, 15 Jan 2025 09:32:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.12 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933523; cv=none; b=O1FHlPhusdFrTLJcsu7blUngzcs4uckr87tRTUHjpedkv+Kxm4jkGbWFRiniIw/F97XGD0YkhfVncwhm2cNdStsW/h2BtE9UrPsgH0k8kJDefxR4A4yhqvvPm0IRUU1nhMLamIXsHPTilaRwsRvnGl1zme9T3m+Z+1eurMffcow= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933523; c=relaxed/simple; bh=3jpMLKGipPEs+ffoK+nnTQL9X98v+QvM54TECel1oaE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=co6qwiL8Vc2FqCfZ5ijzSLSJUIR6J8otgznJoDQZ6ljeoBalcSn0wn7AMWEbC7ZRTcX2rtsTE0lRfpZIpc3Rk/+edHkkFceFMwgZnjxC7PlyUm4ZLmEJaynq0zjn9gWo9IGtq8a8XeddHh7+2/SHHGetxP37Ht3hWAdhI79ngBQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=VG9yvkoL; arc=none smtp.client-ip=192.198.163.12 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="VG9yvkoL" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933522; x=1768469522; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3jpMLKGipPEs+ffoK+nnTQL9X98v+QvM54TECel1oaE=; b=VG9yvkoLurJ6VioDqyzMDlWp72wbazM94F5dixkrjUCXz9OAhBNzinNL cpLjhIIjjYDs4TwqytljxHdQUsHxJl8eU4kQnin/zSXsElPK8AJIsHkLI US8OpZuOhAb5CVJKttiJcCccrZWF8ZgEG68mRn6zLO4KX4JYUmunuWZh5 E2sVU8eigNO++NLw4nAHyhayCT2nJ3CQK1ho1drvMJDmhR9UOsyELVL9O NDVBuDrEBrJQPUj+0HVSs0nQJ07CAj/I3iXfHO6blnElbfjzONebiu3ar L3yoxkhHWbi8sFMyEJ4/G9dqk3031Y4lEx6kE5xodecUdzrXXLXMnNoCn Q==; X-CSE-ConnectionGUID: hYdSIptNTYuxTT9rFBU/+Q== X-CSE-MsgGUID: p41wjBrYQmKIdY8/9Itecg== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="41195168" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="41195168" Received: from fmviesa005.fm.intel.com ([10.60.135.145]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:32:00 -0800 X-CSE-ConnectionGUID: aK+Z+lJOSQiu5q7r0uSE2w== X-CSE-MsgGUID: 8MOy57zKRiuhPkJ/vx+WqQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="109700880" Received: from black.fi.intel.com ([10.237.72.28]) by fmviesa005.fm.intel.com with ESMTP; 15 Jan 2025 01:31:52 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 7E9F34CA; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 06/11] mm/vmscan: Use PG_dropbehind instead of PG_reclaim Date: Wed, 15 Jan 2025 11:31:30 +0200 Message-ID: <20250115093135.3288234-7-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed. Instead of using folio_set_reclaim(), use folio_set_dropbehind() in pageout(). It is safe to leave PG_dropbehind on the folio if, for some reason (bug?), the folio is not in a writeback state after ->writepage(). In these cases, the kernel had to clear PG_reclaim as it shared a page flag bit with PG_readahead. Signed-off-by: Kirill A. Shutemov Acked-by: David Hildenbrand --- mm/vmscan.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index a099876fa029..d15f80333d6b 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -692,19 +692,16 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping, if (shmem_mapping(mapping) && folio_test_large(folio)) wbc.list = folio_list; - folio_set_reclaim(folio); + folio_set_dropbehind(folio); + res = mapping->a_ops->writepage(&folio->page, &wbc); if (res < 0) handle_write_error(mapping, folio, res); if (res == AOP_WRITEPAGE_ACTIVATE) { - folio_clear_reclaim(folio); + folio_clear_dropbehind(folio); return PAGE_ACTIVATE; } - if (!folio_test_writeback(folio)) { - /* synchronous write or broken a_ops? */ - folio_clear_reclaim(folio); - } trace_mm_vmscan_write_folio(folio); node_stat_add_folio(folio, NR_VMSCAN_WRITE); return PAGE_SUCCESS; From patchwork Wed Jan 15 09:31:31 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13940129 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.18]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 153E01AB533; Wed, 15 Jan 2025 09:32:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.18 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933523; cv=none; b=V4ke93ikc8K9ahXqtKL50+V0yTuJw42C5zllpt5LYvjvkhWGbalkc81GwlmUJpscuz97musZewnRnoWn3yWf+nDK6pfhguvqM9D8b3ZB5+K7jyj8wRMxrN2JHMMO7oRXMx+OJIIophvTU37bHjqaOTJuH0MRG154jGgIImeGWek= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933523; c=relaxed/simple; bh=fPl/3uszqP/MJsjmU538XrAMkh1t/gARp9Ryfe4CQVc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ba6s+I9VoKJ6wSMHa8fvRA3Bh473Ja9zPIohFUmRjAEVDeKDclaVrFvM4hYkx/Bmo41EEpl2BhBBHjzphc6tcp06Td0Vfimrb2wpipDdaggEmi0DdyCmD4VFSJLvcnOkPyiVVRrhdvqKy3L2S917YmUL0jgEUDM7kYBL9Z4HpV0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=KecdHtCS; arc=none smtp.client-ip=192.198.163.18 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="KecdHtCS" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933522; x=1768469522; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=fPl/3uszqP/MJsjmU538XrAMkh1t/gARp9Ryfe4CQVc=; b=KecdHtCSeOjTuxYhOX8dFlVsRhYTSvUMad9UW1AaB1CaHVj/PNlXZXxl ghe1fcJZm7FmNiisGzfs7i+MrW++fPMVUcfepDqF5easGaCyb8MxbDBWb 6MGfM4V//zHRqjpPkO7lO708Z6ZSxS+udrBjlCG0zjOBDI1PyRwlZBEXU uwk+Z7hgtw+6+EWaSK6VvOg0bUmES9UG9H2XWfRvxkm4ubp0pYvA9GAbp kgfYZgm5abOUpZQyLrP3DYM5/p4Crr2duZq6P5PCdzS9W+KSJEBCSlDLA b8XmI5NSo5i26TWAK8ueAsndYLHXngxsmwtdxALX4JfdZtz6z3Tt8SeD9 g==; X-CSE-ConnectionGUID: l8Ba/gnZSOOowgVAhZIb6g== X-CSE-MsgGUID: N7CDWstgRi2RQVY1/vrYNQ== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="36540242" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="36540242" Received: from orviesa006.jf.intel.com ([10.64.159.146]) by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:32:01 -0800 X-CSE-ConnectionGUID: +suySKEDRgueK6JuCHJvFg== X-CSE-MsgGUID: gIRIFIkaRY2UOYWbdzUmkw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="105153452" Received: from black.fi.intel.com ([10.237.72.28]) by orviesa006.jf.intel.com with ESMTP; 15 Jan 2025 01:31:52 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 8F3E95D8; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 07/11] mm/vmscan: Use PG_dropbehind instead of PG_reclaim in shrink_folio_list() Date: Wed, 15 Jan 2025 11:31:31 +0200 Message-ID: <20250115093135.3288234-8-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed. Instead of using folio_set_reclaim(), use folio_set_dropbehind() in shrink_folio_list(). It is safe to leave PG_dropbehind on the folio if, for some reason (bug?), the folio is not in a writeback state after ->writepage(). In these cases, the kernel had to clear PG_reclaim as it shared a page flag bit with PG_readahead. Also use PG_dropbehind instead PG_reclaim to detect I/O congestion. Signed-off-by: Kirill A. Shutemov Acked-by: David Hildenbrand --- mm/vmscan.c | 30 ++++++++---------------------- 1 file changed, 8 insertions(+), 22 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index d15f80333d6b..bb5ec22f97b5 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1140,7 +1140,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * for immediate reclaim are making it to the end of * the LRU a second time. */ - if (writeback && folio_test_reclaim(folio)) + if (writeback && folio_test_dropbehind(folio)) stat->nr_congested += nr_pages; /* @@ -1149,7 +1149,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * * 1) If reclaim is encountering an excessive number * of folios under writeback and this folio has both - * the writeback and reclaim flags set, then it + * the writeback and dropbehind flags set, then it * indicates that folios are being queued for I/O but * are being recycled through the LRU before the I/O * can complete. Waiting on the folio itself risks an @@ -1174,7 +1174,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * would probably show more reasons. * * 3) Legacy memcg encounters a folio that already has the - * reclaim flag set. memcg does not have any dirty folio + * dropbehind flag set. memcg does not have any dirty folio * throttling so we could easily OOM just because too many * folios are in writeback and there is nothing else to * reclaim. Wait for the writeback to complete. 
@@ -1193,31 +1193,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 
 			/* Case 1 above */
 			if (current_is_kswapd() &&
-			    folio_test_reclaim(folio) &&
+			    folio_test_dropbehind(folio) &&
 			    test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
 				stat->nr_immediate += nr_pages;
 				goto activate_locked;
 
 			/* Case 2 above */
 			} else if (writeback_throttling_sane(sc) ||
-			    !folio_test_reclaim(folio) ||
+			    !folio_test_dropbehind(folio) ||
 			    !may_enter_fs(folio, sc->gfp_mask) ||
 			    (mapping && mapping_writeback_indeterminate(mapping))) {
-				/*
-				 * This is slightly racy -
-				 * folio_end_writeback() might have
-				 * just cleared the reclaim flag, then
-				 * setting the reclaim flag here ends up
-				 * interpreted as the readahead flag - but
-				 * that does not matter enough to care.
-				 * What we do want is for this folio to
-				 * have the reclaim flag set next time
-				 * memcg reclaim reaches the tests above,
-				 * so it will then wait for writeback to
-				 * avoid OOM; and it's also appropriate
-				 * in global reclaim.
-				 */
-				folio_set_reclaim(folio);
+				folio_set_dropbehind(folio);
 				stat->nr_writeback += nr_pages;
 				goto activate_locked;
@@ -1372,7 +1358,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 */
 		if (folio_is_file_lru(folio) &&
 		    (!current_is_kswapd() ||
-		     !folio_test_reclaim(folio) ||
+		     !folio_test_dropbehind(folio) ||
 		     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
 			/*
 			 * Immediately reclaim when written back.
@@ -1382,7 +1368,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			 */
 			node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
 					nr_pages);
-			folio_set_reclaim(folio);
+			folio_set_dropbehind(folio);
 
 			goto activate_locked;
 		}
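The three cases in the long comment above can be hard to keep apart. A
self-contained toy model of the decision ladder, keyed on the dropbehind
flag exactly as the patched shrink_folio_list() is (all names and inputs
here are illustrative, not kernel API):

	#include <stdbool.h>
	#include <stdio.h>

	enum action { THROTTLE_KSWAPD, ACTIVATE_AND_MARK, WAIT_FOR_WRITEBACK };

	/* Toy model of the case 1/2/3 ladder for a folio under writeback. */
	static enum action classify(bool is_kswapd, bool node_flooded,
				    bool dropbehind, bool sane_throttling,
				    bool may_enter_fs)
	{
		/* Case 1: kswapd sees recycled, already-marked folios. */
		if (is_kswapd && dropbehind && node_flooded)
			return THROTTLE_KSWAPD;
		/* Case 2: global reclaim / cgroup v2, or not marked yet. */
		if (sane_throttling || !dropbehind || !may_enter_fs)
			return ACTIVATE_AND_MARK;
		/* Case 3: legacy memcg with no dirty throttling - wait. */
		return WAIT_FOR_WRITEBACK;
	}

	int main(void)
	{
		printf("kswapd, flooded, marked  -> %d\n",
		       classify(true, true, true, false, true));
		printf("cgroup v1, marked, fs ok -> %d\n",
		       classify(false, false, true, false, true));
		return 0;
	}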
Shutemov" X-Patchwork-Id: 13940131 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.18]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D9C662416B2; Wed, 15 Jan 2025 09:32:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.18 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933525; cv=none; b=adOtVoVqsbzy2NBs22aZ8YsmbQygMMME8ggTLIIzXDw4TRubq5HliQH32M53fp2NZ6UyYx8knq38RhltZ3Mp55HKmlhgh2Fg05Bd/mGnLSH33GF5dcV8YMLrLzrNFioUGGnr7syfyxgGtGyJ5RabEEh3c90vzP/G5L8aPjXZ4kw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933525; c=relaxed/simple; bh=9vbzxe10nrH/pHYnBj/bfEL4jLmf+9HKx83uuAIPxtQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=RjxUQm2ATiLMrO2lcERNiQIWCL7sahITe3Sg9qfy0cH5PLh6Pjwgg0YNbEXhSZgdd3J5mITLJk+8ztrMUK9e/pGoBIp9uN4gFQajt7t3IfkGsY8RaEBaO3gl2Bu3yYL3Fr+wNStW7QwOUol6IzP8NJ1Qt7P2Ljod+9FTfcjp/iY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=XQMTQ7qi; arc=none smtp.client-ip=192.198.163.18 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="XQMTQ7qi" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933524; x=1768469524; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9vbzxe10nrH/pHYnBj/bfEL4jLmf+9HKx83uuAIPxtQ=; b=XQMTQ7qim483Kz088DgHyPZCh/panAubb61iOXDXMWRFU/YpekxRRNm6 HZyQnfTh65wFNIEfojyM/Wadgv9OLLS7Xv8Yici25v9CIDUcOu44NQD20 Fi96LOMPU6Wp15YeX3XN8ywp4jOWCoWUcs4lHV5kIGzdUqoBqi/XYstzA XAyLJyGU5Ftom9xQ0rm1LFJYfbfN5VkItrDIIX3TwNwa02EVLlk/hVXh3 vAj8BdAs9pL7HHPXYfv4LbPK9psjQtFHIlCj279i71eVmNi332PfOrY5R kWME2RVAr1A3Y8lFZUBueQsmhcPQh9+DGoHoBZCPTdE2/I4iL4DXG2H6R Q==; X-CSE-ConnectionGUID: MSX0bq1ZS9ias2hMrj4VpQ== X-CSE-MsgGUID: c9utA+d9To22zQ+IGiny1w== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="36540299" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="36540299" Received: from orviesa006.jf.intel.com ([10.64.159.146]) by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:32:01 -0800 X-CSE-ConnectionGUID: p3NKY2bRSTOG1/Tde53fKw== X-CSE-MsgGUID: uqQL2m/wTq227tGiusyjyg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="105153456" Received: from black.fi.intel.com ([10.237.72.28]) by orviesa006.jf.intel.com with ESMTP; 15 Jan 2025 01:31:52 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id A0EE2712; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 08/11] mm/mglru: Check PG_dropbehind instead of PG_reclaim in lru_gen_folio_seq() Date: Wed, 15 Jan 2025 11:31:32 +0200 Message-ID: <20250115093135.3288234-9-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Kernel sets PG_dropcache instead of PG_reclaim everywhere. Check PG_dropcache in lru_gen_folio_seq(). No need to check for dirty and writeback as there's no conflict with PG_readahead anymore. Signed-off-by: Kirill A. Shutemov Acked-by: David Hildenbrand --- include/linux/mm_inline.h | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index f9157a0c42a5..f353d3c610ac 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -241,8 +241,7 @@ static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct foli else if (reclaiming) gen = MAX_NR_GENS; else if ((!folio_is_file_lru(folio) && !folio_test_swapcache(folio)) || - (folio_test_reclaim(folio) && - (folio_test_dirty(folio) || folio_test_writeback(folio)))) + folio_test_dropbehind(folio)) gen = MIN_NR_GENS; else gen = MAX_NR_GENS - folio_test_workingset(folio); From patchwork Wed Jan 15 09:31:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13940132 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.18]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 914DD2416AA; Wed, 15 Jan 2025 09:32:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.18 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933526; cv=none; b=CyW4tMsXL8PFhsTD/FBzSmf7RKLvxqYnFnOZAQOFy/VZfX4ieedxyHthVRuzrxYaXvWDkwhSrZkW70LSCduQd2pwO2MEYzeCptvfdp7YEK1fIoi4X5LPmtv6V/mXwDR7l/aHiBYNkpeU7/XUX22bQowVDDlf7Q8hDm7LzT2R38w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933526; c=relaxed/simple; bh=qlcvUWkNXPs+I4sRu7nXXq1shN3OcaQUE+kllyJJ/aA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=i2FvbYNNcYZNGIzt43zZ0DX4Yvf+x4u8Q2MxrdgwqmmEuPYjSQ7aPDufxRgGYv/u5ALcfDvRbaFY2KFacOxghUkhoO+8M3Ig4ar025HzKDQTpRaQRmJZ0ca5Omp50ALD1gyYbPFF4axo4amfpEgBI5wge3MF2nm3+MGlAZeWp5g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=PTzZhyzF; arc=none smtp.client-ip=192.198.163.18 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="PTzZhyzF" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933524; x=1768469524; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=qlcvUWkNXPs+I4sRu7nXXq1shN3OcaQUE+kllyJJ/aA=; b=PTzZhyzFsOkPLNzsbE7WBAf+tcZiFAcwUX/2GFMxAnfCfBFWctp7wuvb vAX417hBg+fRfs/yVufH79800rctUU13n+Fo88NdqG/0OuNro4hVwLNoO t5Z/LRSKmEZXdMqyQv9zSz6jzWE906DbyexcMKuxaSCLXbsOu9dFgL4H3 QKzfbiQBSkPtDsdUK8AV1TgAhSrSFk8YbLWg9/84s6eCSFmLg3mSMrkrW QPPl39GbZRiLUD5qp+h5CsVCTwFJXhV4yBnXe3AliQMLtLGIGZEw3P0TO iBkxmUFFhtWS6eAQw+9hyJNApDL04PT/28/WQn8k4PYmnCJHq/J/xIwM1 w==; X-CSE-ConnectionGUID: Tb7WpZM5QGGwpICv3w9ilg== X-CSE-MsgGUID: WhCXhcZbRm+oa0J2R7EjAw== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="36540288" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="36540288" Received: from orviesa006.jf.intel.com ([10.64.159.146]) by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:32:01 -0800 X-CSE-ConnectionGUID: GlVCCYhSTl+nFq5iyy7XmQ== X-CSE-MsgGUID: ZyKi2pmcR/eEfP2pfD8jdA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="105153454" Received: from black.fi.intel.com ([10.237.72.28]) by orviesa006.jf.intel.com with ESMTP; 15 Jan 2025 01:31:52 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id B2EDD748; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 09/11] mm: Remove PG_reclaim Date: Wed, 15 Jan 2025 11:31:33 +0200 Message-ID: <20250115093135.3288234-10-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Nobody sets the flag anymore. Remove the PG_reclaim, making PG_readhead exclusive user of the page flag bit. Signed-off-by: Kirill A. Shutemov --- fs/fuse/dev.c | 2 +- fs/proc/page.c | 2 +- include/linux/mm_inline.h | 15 ------- include/linux/page-flags.h | 15 +++---- include/trace/events/mmflags.h | 2 +- include/uapi/linux/kernel-page-flags.h | 2 +- mm/filemap.c | 12 ----- mm/migrate.c | 10 +---- mm/page-writeback.c | 16 +------ mm/page_io.c | 15 +++---- mm/swap.c | 61 ++------------------------ mm/vmscan.c | 7 --- tools/mm/page-types.c | 8 +--- 13 files changed, 22 insertions(+), 145 deletions(-) diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c index 27ccae63495d..20005e2e1d28 100644 --- a/fs/fuse/dev.c +++ b/fs/fuse/dev.c @@ -827,7 +827,7 @@ static int fuse_check_folio(struct folio *folio) 1 << PG_lru | 1 << PG_active | 1 << PG_workingset | - 1 << PG_reclaim | + 1 << PG_readahead | 1 << PG_waiters | LRU_GEN_MASK | LRU_REFS_MASK))) { dump_page(&folio->page, "fuse: trying to steal weird page"); diff --git a/fs/proc/page.c b/fs/proc/page.c index a55f5acefa97..59860ba2393c 100644 --- a/fs/proc/page.c +++ b/fs/proc/page.c @@ -189,7 +189,7 @@ u64 stable_page_flags(const struct page *page) u |= kpf_copy_bit(k, KPF_LRU, PG_lru); u |= kpf_copy_bit(k, KPF_REFERENCED, PG_referenced); u |= kpf_copy_bit(k, KPF_ACTIVE, PG_active); - u |= kpf_copy_bit(k, KPF_RECLAIM, PG_reclaim); + u |= kpf_copy_bit(k, KPF_READAHEAD, PG_readahead); #define SWAPCACHE ((1 << PG_swapbacked) | (1 << PG_swapcache)) if ((k & SWAPCACHE) == SWAPCACHE) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index f353d3c610ac..e5049a975579 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -270,7 +270,6 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags); lru_gen_update_size(lruvec, folio, -1, gen); - /* for folio_rotate_reclaimable() */ if (reclaiming) list_add_tail(&folio->lru, &lrugen->folios[gen][type][zone]); else @@ -349,20 +348,6 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio) list_add(&folio->lru, &lruvec->lists[lru]); } -static __always_inline -void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio) -{ - enum lru_list lru = folio_lru_list(folio); - - if (lru_gen_add_folio(lruvec, folio, true)) - return; - - update_lru_size(lruvec, lru, folio_zonenum(folio), - folio_nr_pages(folio)); - /* This is not expected to be used on 
LRU_UNEVICTABLE */ - list_add_tail(&folio->lru, &lruvec->lists[lru]); -} - static __always_inline void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio) { diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 2414e7921eea..8f59fd8b86c9 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -63,8 +63,8 @@ * might lose their PG_swapbacked flag when they simply can be dropped (e.g. as * a result of MADV_FREE). * - * PG_referenced, PG_reclaim are used for page reclaim for anonymous and - * file-backed pagecache (see mm/vmscan.c). + * PG_referenced is used for page reclaim for anonymous and file-backed + * pagecache (see mm/vmscan.c). * * PG_arch_1 is an architecture specific page state bit. The generic code * guarantees that this bit is cleared for a page when it first is entered into @@ -107,7 +107,7 @@ enum pageflags { PG_reserved, PG_private, /* If pagecache, has fs-private data */ PG_private_2, /* If pagecache, has fs aux data */ - PG_reclaim, /* To be reclaimed asap */ + PG_readahead, PG_swapbacked, /* Page is backed by RAM/swap */ PG_unevictable, /* Page is "unevictable" */ PG_dropbehind, /* drop pages on IO completion */ @@ -129,8 +129,6 @@ enum pageflags { #endif __NR_PAGEFLAGS, - PG_readahead = PG_reclaim, - /* Anonymous memory (and shmem) */ PG_swapcache = PG_owner_priv_1, /* Swap page: swp_entry_t in private */ /* Some filesystems */ @@ -168,7 +166,7 @@ enum pageflags { PG_xen_remapped = PG_owner_priv_1, /* non-lru isolated movable page */ - PG_isolated = PG_reclaim, + PG_isolated = PG_readahead, /* Only valid for buddy pages. Used to track pages that are reported */ PG_reported = PG_uptodate, @@ -187,7 +185,7 @@ enum pageflags { /* At least one page in this folio has the hwpoison flag set */ PG_has_hwpoisoned = PG_active, PG_large_rmappable = PG_workingset, /* anon or file-backed */ - PG_partially_mapped = PG_reclaim, /* was identified to be partially mapped */ + PG_partially_mapped = PG_readahead, /* was identified to be partially mapped */ }; #define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1) @@ -594,9 +592,6 @@ TESTPAGEFLAG(Writeback, writeback, PF_NO_TAIL) TESTSCFLAG(Writeback, writeback, PF_NO_TAIL) FOLIO_FLAG(mappedtodisk, FOLIO_HEAD_PAGE) -/* PG_readahead is only used for reads; PG_reclaim is only for writes */ -PAGEFLAG(Reclaim, reclaim, PF_NO_TAIL) - TESTCLEARFLAG(Reclaim, reclaim, PF_NO_TAIL) FOLIO_FLAG(readahead, FOLIO_HEAD_PAGE) FOLIO_TEST_CLEAR_FLAG(readahead, FOLIO_HEAD_PAGE) diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 3bc8656c8359..15d92784a745 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -114,7 +114,7 @@ DEF_PAGEFLAG_NAME(private_2), \ DEF_PAGEFLAG_NAME(writeback), \ DEF_PAGEFLAG_NAME(head), \ - DEF_PAGEFLAG_NAME(reclaim), \ + DEF_PAGEFLAG_NAME(readahead), \ DEF_PAGEFLAG_NAME(swapbacked), \ DEF_PAGEFLAG_NAME(unevictable), \ DEF_PAGEFLAG_NAME(dropbehind) \ diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h index ff8032227876..e5a9a113e079 100644 --- a/include/uapi/linux/kernel-page-flags.h +++ b/include/uapi/linux/kernel-page-flags.h @@ -15,7 +15,7 @@ #define KPF_ACTIVE 6 #define KPF_SLAB 7 #define KPF_WRITEBACK 8 -#define KPF_RECLAIM 9 +#define KPF_READAHEAD 9 #define KPF_BUDDY 10 /* 11-20: new additions in 2.6.31 */ diff --git a/mm/filemap.c b/mm/filemap.c index 5ca26f5e7238..8951c37c8a38 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1624,18 +1624,6 @@ void folio_end_writeback(struct folio 
*folio) VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio); - /* - * folio_test_clear_reclaim() could be used here but it is an - * atomic operation and overkill in this particular case. Failing - * to shuffle a folio marked for immediate reclaim is too mild - * a gain to justify taking an atomic operation penalty at the - * end of every folio writeback. - */ - if (folio_test_reclaim(folio)) { - folio_clear_reclaim(folio); - folio_rotate_reclaimable(folio); - } - /* * Writeback does not hold a folio reference of its own, relying * on truncation to wait for the clearing of PG_writeback. diff --git a/mm/migrate.c b/mm/migrate.c index 690efa064bee..2bf9f08c4f84 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -690,6 +690,8 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio) folio_set_young(newfolio); if (folio_test_idle(folio)) folio_set_idle(newfolio); + if (folio_test_readahead(folio)) + folio_set_readahead(newfolio); folio_migrate_refs(newfolio, folio); /* @@ -732,14 +734,6 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio) if (folio_test_writeback(newfolio)) folio_end_writeback(newfolio); - /* - * PG_readahead shares the same bit with PG_reclaim. The above - * end_page_writeback() may clear PG_readahead mistakenly, so set the - * bit after that. - */ - if (folio_test_readahead(folio)) - folio_set_readahead(newfolio); - folio_copy_owner(newfolio, folio); pgalloc_tag_swap(newfolio, folio); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 4f5970723cf2..f2b94a2cbfcf 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2888,22 +2888,8 @@ bool folio_mark_dirty(struct folio *folio) { struct address_space *mapping = folio_mapping(folio); - if (likely(mapping)) { - /* - * readahead/folio_deactivate could remain - * PG_readahead/PG_reclaim due to race with folio_end_writeback - * About readahead, if the folio is written, the flags would be - * reset. So no problem. - * About folio_deactivate, if the folio is redirtied, - * the flag will be reset. So no problem. but if the - * folio is used by readahead it will confuse readahead - * and make it restart the size rampup process. But it's - * a trivial problem. - */ - if (folio_test_reclaim(folio)) - folio_clear_reclaim(folio); + if (likely(mapping)) return mapping->a_ops->dirty_folio(mapping, folio); - } return noop_dirty_folio(mapping, folio); } diff --git a/mm/page_io.c b/mm/page_io.c index 9b983de351f9..0cb71f318fb1 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -37,14 +37,11 @@ static void __end_swap_bio_write(struct bio *bio) * Re-dirty the page in order to avoid it being reclaimed. * Also print a dire warning that things will go BAD (tm) * very quickly. - * - * Also clear PG_reclaim to avoid folio_rotate_reclaimable() */ folio_mark_dirty(folio); pr_alert_ratelimited("Write-error on swap-device (%u:%u:%llu)\n", MAJOR(bio_dev(bio)), MINOR(bio_dev(bio)), (unsigned long long)bio->bi_iter.bi_sector); - folio_clear_reclaim(folio); } folio_end_writeback(folio); } @@ -350,19 +347,17 @@ static void sio_write_complete(struct kiocb *iocb, long ret) if (ret != sio->len) { /* - * In the case of swap-over-nfs, this can be a - * temporary failure if the system has limited - * memory for allocating transmit buffers. - * Mark the page dirty and avoid - * folio_rotate_reclaimable but rate-limit the - * messages. + * In the case of swap-over-nfs, this can be a temporary failure + * if the system has limited memory for allocating transmit + * buffers. 
+ * + * Mark the page dirty but rate-limit the messages. */ pr_err_ratelimited("Write error %ld on dio swapfile (%llu)\n", ret, swap_dev_pos(page_swap_entry(page))); for (p = 0; p < sio->pages; p++) { page = sio->bvec[p].bv_page; set_page_dirty(page); - ClearPageReclaim(page); } } diff --git a/mm/swap.c b/mm/swap.c index 7a0dffd5973a..96892a0d2491 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -59,14 +59,10 @@ struct cpu_fbatches { #ifdef CONFIG_SMP struct folio_batch lru_activate; #endif - /* Protecting the following batches which require disabling interrupts */ - local_lock_t lock_irq; - struct folio_batch lru_move_tail; }; static DEFINE_PER_CPU(struct cpu_fbatches, cpu_fbatches) = { .lock = INIT_LOCAL_LOCK(lock), - .lock_irq = INIT_LOCAL_LOCK(lock_irq), }; static void __page_cache_release(struct folio *folio, struct lruvec **lruvecp, @@ -175,29 +171,20 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn) } static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch, - struct folio *folio, move_fn_t move_fn, - bool on_lru, bool disable_irq) + struct folio *folio, move_fn_t move_fn, bool on_lru) { - unsigned long flags; - if (on_lru && !folio_test_clear_lru(folio)) return; folio_get(folio); - if (disable_irq) - local_lock_irqsave(&cpu_fbatches.lock_irq, flags); - else - local_lock(&cpu_fbatches.lock); + local_lock(&cpu_fbatches.lock); if (!folio_batch_add(this_cpu_ptr(fbatch), folio) || folio_test_large(folio) || lru_cache_disabled()) folio_batch_move_lru(this_cpu_ptr(fbatch), move_fn); - if (disable_irq) - local_unlock_irqrestore(&cpu_fbatches.lock_irq, flags); - else - local_unlock(&cpu_fbatches.lock); + local_unlock(&cpu_fbatches.lock); } #define folio_batch_add_and_move(folio, op, on_lru) \ @@ -205,37 +192,9 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch, &cpu_fbatches.op, \ folio, \ op, \ - on_lru, \ - offsetof(struct cpu_fbatches, op) >= offsetof(struct cpu_fbatches, lock_irq) \ + on_lru \ ) -static void lru_move_tail(struct lruvec *lruvec, struct folio *folio) -{ - if (folio_test_unevictable(folio)) - return; - - lruvec_del_folio(lruvec, folio); - folio_clear_active(folio); - lruvec_add_folio_tail(lruvec, folio); - __count_vm_events(PGROTATED, folio_nr_pages(folio)); -} - -/* - * Writeback is about to end against a folio which has been marked for - * immediate reclaim. If it still appears to be reclaimable, move it - * to the tail of the inactive list. - * - * folio_rotate_reclaimable() must disable IRQs, to prevent nasty races. - */ -void folio_rotate_reclaimable(struct folio *folio) -{ - if (folio_test_locked(folio) || folio_test_dirty(folio) || - folio_test_unevictable(folio)) - return; - - folio_batch_add_and_move(folio, lru_move_tail, true); -} - void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_io, unsigned int nr_rotated) { @@ -578,17 +537,6 @@ void lru_add_drain_cpu(int cpu) if (folio_batch_count(fbatch)) folio_batch_move_lru(fbatch, lru_add); - fbatch = &fbatches->lru_move_tail; - /* Disabling interrupts below acts as a compiler barrier. 
*/ - if (data_race(folio_batch_count(fbatch))) { - unsigned long flags; - - /* No harm done if a racing interrupt already did this */ - local_lock_irqsave(&cpu_fbatches.lock_irq, flags); - folio_batch_move_lru(fbatch, lru_move_tail); - local_unlock_irqrestore(&cpu_fbatches.lock_irq, flags); - } - fbatch = &fbatches->lru_deactivate; if (folio_batch_count(fbatch)) folio_batch_move_lru(fbatch, lru_deactivate); @@ -682,7 +630,6 @@ static bool cpu_needs_drain(unsigned int cpu) /* Check these in order of likelihood that they're not zero */ return folio_batch_count(&fbatches->lru_add) || - folio_batch_count(&fbatches->lru_move_tail) || folio_batch_count(&fbatches->lru_deactivate) || folio_batch_count(&fbatches->lru_lazyfree) || folio_batch_count(&fbatches->lru_activate) || diff --git a/mm/vmscan.c b/mm/vmscan.c index bb5ec22f97b5..e61e88e63511 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3216,9 +3216,6 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_FLAGS); new_flags |= (new_gen + 1UL) << LRU_GEN_PGOFF; - /* for folio_end_writeback() */ - if (reclaiming) - new_flags |= BIT(PG_reclaim); } while (!try_cmpxchg(&folio->flags, &old_flags, new_flags)); lru_gen_update_size(lruvec, folio, old_gen, new_gen); @@ -4460,9 +4457,6 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca if (!folio_test_referenced(folio)) set_mask_bits(&folio->flags, LRU_REFS_MASK, 0); - /* for shrink_folio_list() */ - folio_clear_reclaim(folio); - success = lru_gen_del_folio(lruvec, folio, true); VM_WARN_ON_ONCE_FOLIO(!success, folio); @@ -4659,7 +4653,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap continue; } - /* retry folios that may have missed folio_rotate_reclaimable() */ if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) && !folio_test_dirty(folio) && !folio_test_writeback(folio)) { list_move(&folio->lru, &clean); diff --git a/tools/mm/page-types.c b/tools/mm/page-types.c index bcac7ebfb51f..c06647501370 100644 --- a/tools/mm/page-types.c +++ b/tools/mm/page-types.c @@ -85,7 +85,6 @@ * not part of kernel API */ #define KPF_ANON_EXCLUSIVE 47 -#define KPF_READAHEAD 48 #define KPF_SLUB_FROZEN 50 #define KPF_SLUB_DEBUG 51 #define KPF_FILE 61 @@ -108,7 +107,7 @@ static const char * const page_flag_names[] = { [KPF_ACTIVE] = "A:active", [KPF_SLAB] = "S:slab", [KPF_WRITEBACK] = "W:writeback", - [KPF_RECLAIM] = "I:reclaim", + [KPF_READAHEAD] = "I:readahead", [KPF_BUDDY] = "B:buddy", [KPF_MMAP] = "M:mmap", @@ -139,7 +138,6 @@ static const char * const page_flag_names[] = { [KPF_ARCH_2] = "H:arch_2", [KPF_ANON_EXCLUSIVE] = "d:anon_exclusive", - [KPF_READAHEAD] = "I:readahead", [KPF_SLUB_FROZEN] = "A:slub_frozen", [KPF_SLUB_DEBUG] = "E:slub_debug", @@ -484,10 +482,6 @@ static uint64_t expand_overloaded_flags(uint64_t flags, uint64_t pme) flags ^= BIT(ERROR) | BIT(SLUB_DEBUG); } - /* PG_reclaim is overloaded as PG_readahead in the read path */ - if ((flags & (BIT(RECLAIM) | BIT(WRITEBACK))) == BIT(RECLAIM)) - flags ^= BIT(RECLAIM) | BIT(READAHEAD); - if (pme & PM_SOFT_DIRTY) flags |= BIT(SOFTDIRTY); if (pme & PM_FILE) From patchwork Wed Jan 15 09:31:34 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13940128 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8D5BB35966; Wed, 15 Jan 2025 09:32:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.12 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933523; cv=none; b=WKRvtKoDr3Qtpe1K14hez2ZyM2UhHmqgCIQO7Egxm49Iu0M56qNR342Q/zIGO/UOqWhDZI0Xu/PUl6B13s7J8XlAcMCsYZn56EQR3THSocnjB7vaXlr8QQMEpObC6Dem9fl3CGQDfebsI69Yal7IXL+PbEDrmyIM5/6Wxfrh1E8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933523; c=relaxed/simple; bh=wdId850dbutGu/Dl/qOd5q3uP9vz/Pw94pmHcv++11Y=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=MQydWu7XBs1itjk0iotakoiFY8MPWtaqHKsR8wQYQcIe7+ceXVI9LcxRvkbqfVYpbWRQD1yuIIN6pEu0bausE8ckqSPf8qmIhHmcgZX1brq8dMR+FbcN6zIOTo//VUQuaYH9UwSWJp+phbLy478P6Z9NztMLOW4zvb4syKhCnc0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=VTl6YzUT; arc=none smtp.client-ip=192.198.163.12 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="VTl6YzUT" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933521; x=1768469521; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wdId850dbutGu/Dl/qOd5q3uP9vz/Pw94pmHcv++11Y=; b=VTl6YzUTpSwYAZsk29hiQq8viCVSJ5RLraME9+ejxv2VL2V6Er6c/RQy 7iXQOkKPZ6FQGzKkTdbeXEpIoYNMNDH9jzqWFQNvOuFivM+vsHPBGf19K wRMaH8+RFXh/U6bUjikvhEpPJ5PzdlqbpmXp7kpxegPf68aAuLInDjxim vddafyMsWkcZF9IC54gnudV/+8dov8/yHryt9+ImHZHrTgFyZbDmaHuq4 BqZyqrZMXQLa1k9fmvUbACbkfhUYamTrnsdi2c0SU1MsTocgPL5ztNMtJ PkthPF+LYFGvSCMiGftpxF75KjTW5ExGscmd4lxVqiVnEkBTAdOqTvMr6 Q==; X-CSE-ConnectionGUID: wDUSZ2n7TGOLUdflb9pSlg== X-CSE-MsgGUID: ZnKb7J2nSmqSgqDGb9NgRw== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="41195127" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="41195127" Received: from fmviesa005.fm.intel.com ([10.60.135.145]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:32:00 -0800 X-CSE-ConnectionGUID: pRIEDaOSTYmO9MYn2OMq0g== X-CSE-MsgGUID: 5prHjalDTIyibN9ckH5OgQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="109700879" Received: from black.fi.intel.com ([10.237.72.28]) by fmviesa005.fm.intel.com with ESMTP; 15 Jan 2025 01:31:52 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id BFF03765; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 10/11] mm/vmscan: Do not demote PG_dropbehind folios Date: Wed, 15 Jan 2025 11:31:34 +0200 Message-ID: <20250115093135.3288234-11-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 PG_dropbehind flag indicates that the folio need to be freed immediately. No point in demoting it. Signed-off-by: Kirill A. Shutemov --- mm/vmscan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index e61e88e63511..0b8a6e0f384c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1235,7 +1235,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * Before reclaiming the folio, try to relocate * its contents to another node. */ - if (do_demote_pass && + if (do_demote_pass && !folio_test_dropbehind(folio) && (thp_migration_supported() || !folio_test_large(folio))) { list_add(&folio->lru, &demote_folios); folio_unlock(folio); From patchwork Wed Jan 15 09:31:35 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. 
Shutemov" X-Patchwork-Id: 13940133 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.18]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CA15224169E; Wed, 15 Jan 2025 09:32:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.18 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933528; cv=none; b=Ph7/opBSROIWYZP1suyKUS2K/W9EVCbdbTccrWEpDLZXUU0rjFqsxCww+kxZN4GnfAcpLTManfUuOEkIe4mQQ4Py42tTgzYYmtQCmIK5dgphAeFJPgplpBQYaOuossIR82Sx95F6dgUWdCPVYJ8lML9ihwL6bBQfz7yUtWUzAOc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736933528; c=relaxed/simple; bh=zy80qLdpxNB+wsWLFwR+1Pwxfqf8k1Y4He9PQtq4Feo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=O4/ayUJbA8nJ2ODLeFU6JMN5SShMk5aFaQbNxWETsMX5+KwHRmKmOiI50ykgP4qwFviLAKDO2D8iKot3cHtsk/cboR12z/8htqdfXxJgElarjojqJqbQH0hq+431I5eh7r8/DTLnUDuxrFx9yaaOTLy8iySm1DWBVM0hjyAQa4Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Jv3yrw1u; arc=none smtp.client-ip=192.198.163.18 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Jv3yrw1u" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736933526; x=1768469526; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=zy80qLdpxNB+wsWLFwR+1Pwxfqf8k1Y4He9PQtq4Feo=; b=Jv3yrw1uDe0BsHS5D7URTHCs2NJUVr+WflkYKgI1L7Y4cCOtVZIBnsw3 apUH4Zt04WdLpyPg4lRxWdcVohDv61tJIAPOIaEIX5Qk1vMGv0/zqY6gR hapq6SywWsj6Cwiu1lwORWFQ87CrcCDJGxLyFip9CyD0tSoUl1QFub9B9 FxjYqj3J+Fpf3ZQsf7/bX79F8YRIPfQVSCq7jgAu+CTUSPl2oHzVFf59a Vhf+CL7SXCCnuNCDzwl0T9EOERLIciJUDa4yv3APNEQ5CeijQ85Pv40Dn lwcOIBB62+PlPId9HzaNLOBvRA+LWc077YIVes8tvXqQ/cMOGpf836cg1 w==; X-CSE-ConnectionGUID: SvsN6nJ7TTCwZzSn28CjVw== X-CSE-MsgGUID: ldpIgkRbRiupUY9vKxW/5w== X-IronPort-AV: E=McAfee;i="6700,10204,11315"; a="36540319" X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="36540319" Received: from orviesa006.jf.intel.com ([10.64.159.146]) by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jan 2025 01:32:02 -0800 X-CSE-ConnectionGUID: uKZs4AquScymUYYHoD/EPw== X-CSE-MsgGUID: 9mVe3DwCT3mvJbk9e9v9kw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,316,1728975600"; d="scan'208";a="105153461" Received: from black.fi.intel.com ([10.237.72.28]) by orviesa006.jf.intel.com with ESMTP; 15 Jan 2025 01:31:53 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id D1F2D771; Wed, 15 Jan 2025 11:31:42 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCHv2 11/11] mm: Rename PG_dropbehind to PG_reclaim Date: Wed, 15 Jan 2025 11:31:35 +0200 Message-ID: <20250115093135.3288234-12-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> References: <20250115093135.3288234-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Now as PG_reclaim is gone, its name can be reclaimed for better use :) Rename PG_dropbehind to PG_reclaim and rename all helpers around it. Signed-off-by: Kirill A. Shutemov --- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 2 +- include/linux/mm_inline.h | 2 +- include/linux/page-flags.h | 8 +++--- include/linux/pagemap.h | 2 +- include/trace/events/mmflags.h | 2 +- mm/filemap.c | 34 +++++++++++------------ mm/migrate.c | 4 +-- mm/readahead.c | 4 +-- mm/swap.c | 2 +- mm/truncate.c | 2 +- mm/vmscan.c | 22 +++++++-------- mm/zswap.c | 2 +- 12 files changed, 43 insertions(+), 43 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index c1724847c001..e543e6bfb093 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -329,7 +329,7 @@ void __shmem_writeback(size_t size, struct address_space *mapping) if (!folio_mapped(folio) && folio_clear_dirty_for_io(folio)) { int ret; - folio_set_dropbehind(folio); + folio_set_reclaim(folio); ret = mapping->a_ops->writepage(&folio->page, &wbc); if (!ret) goto put; diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index e5049a975579..9077ba15bc36 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -241,7 +241,7 @@ static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct foli else if (reclaiming) gen = MAX_NR_GENS; else if ((!folio_is_file_lru(folio) && !folio_test_swapcache(folio)) || - folio_test_dropbehind(folio)) + folio_test_reclaim(folio)) gen = MIN_NR_GENS; else gen = MAX_NR_GENS - folio_test_workingset(folio); diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 8f59fd8b86c9..f5a058761188 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -110,7 +110,7 @@ enum pageflags { PG_readahead, PG_swapbacked, /* Page is backed by RAM/swap */ PG_unevictable, /* Page is "unevictable" */ - PG_dropbehind, /* drop pages on IO completion */ + PG_reclaim, /* drop pages on IO completion */ #ifdef CONFIG_MMU PG_mlocked, /* Page is vma mlocked */ #endif @@ -595,9 +595,9 @@ FOLIO_FLAG(mappedtodisk, FOLIO_HEAD_PAGE) FOLIO_FLAG(readahead, FOLIO_HEAD_PAGE) FOLIO_TEST_CLEAR_FLAG(readahead, FOLIO_HEAD_PAGE) -FOLIO_FLAG(dropbehind, FOLIO_HEAD_PAGE) - FOLIO_TEST_CLEAR_FLAG(dropbehind, FOLIO_HEAD_PAGE) - __FOLIO_SET_FLAG(dropbehind, FOLIO_HEAD_PAGE) +FOLIO_FLAG(reclaim, FOLIO_HEAD_PAGE) + FOLIO_TEST_CLEAR_FLAG(reclaim, 
FOLIO_HEAD_PAGE) + __FOLIO_SET_FLAG(reclaim, FOLIO_HEAD_PAGE) #ifdef CONFIG_HIGHMEM /* diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index d0be5f36082a..72488f1c50bb 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -1371,7 +1371,7 @@ struct readahead_control { pgoff_t _index; unsigned int _nr_pages; unsigned int _batch_count; - bool dropbehind; + bool reclaim; bool _workingset; unsigned long _pflags; }; diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 15d92784a745..c635d97c4065 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -117,7 +117,7 @@ DEF_PAGEFLAG_NAME(readahead), \ DEF_PAGEFLAG_NAME(swapbacked), \ DEF_PAGEFLAG_NAME(unevictable), \ - DEF_PAGEFLAG_NAME(dropbehind) \ + DEF_PAGEFLAG_NAME(reclaim) \ IF_HAVE_PG_MLOCK(mlocked) \ IF_HAVE_PG_HWPOISON(hwpoison) \ IF_HAVE_PG_IDLE(idle) \ diff --git a/mm/filemap.c b/mm/filemap.c index 8951c37c8a38..92cec1dd9a6b 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1590,11 +1590,11 @@ int folio_wait_private_2_killable(struct folio *folio) EXPORT_SYMBOL(folio_wait_private_2_killable); /* - * If folio was marked as dropbehind, then pages should be dropped when writeback + * If folio was marked as reclaim, then pages should be dropped when writeback * completes. Do that now. If we fail, it's likely because of a big folio - - * just reset dropbehind for that case and latter completions should invalidate. + * just reset reclaim for that case and latter completions should invalidate. */ -static void folio_end_dropbehind_write(struct folio *folio) +static void folio_end_reclaim_write(struct folio *folio) { /* * Hitting !in_task() should not happen off RWF_DONTCACHE writeback, @@ -1620,7 +1620,7 @@ static void folio_end_dropbehind_write(struct folio *folio) */ void folio_end_writeback(struct folio *folio) { - bool folio_dropbehind = false; + bool folio_reclaim = false; VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio); @@ -1632,13 +1632,13 @@ void folio_end_writeback(struct folio *folio) */ folio_get(folio); if (!folio_test_dirty(folio)) - folio_dropbehind = folio_test_clear_dropbehind(folio); + folio_reclaim = folio_test_clear_reclaim(folio); if (__folio_end_writeback(folio)) folio_wake_bit(folio, PG_writeback); acct_reclaim_writeback(folio); - if (folio_dropbehind) - folio_end_dropbehind_write(folio); + if (folio_reclaim) + folio_end_reclaim_write(folio); folio_put(folio); } EXPORT_SYMBOL(folio_end_writeback); @@ -1962,7 +1962,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index, if (fgp_flags & FGP_ACCESSED) __folio_set_referenced(folio); if (fgp_flags & FGP_DONTCACHE) - __folio_set_dropbehind(folio); + __folio_set_reclaim(folio); err = filemap_add_folio(mapping, folio, index, gfp); if (!err) @@ -1986,8 +1986,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index, if (!folio) return ERR_PTR(-ENOENT); /* not an uncached lookup, clear uncached if set */ - if (folio_test_dropbehind(folio) && !(fgp_flags & FGP_DONTCACHE)) - folio_clear_dropbehind(folio); + if (folio_test_reclaim(folio) && !(fgp_flags & FGP_DONTCACHE)) + folio_clear_reclaim(folio); return folio; } EXPORT_SYMBOL(__filemap_get_folio); @@ -2485,7 +2485,7 @@ static int filemap_create_folio(struct kiocb *iocb, struct folio_batch *fbatch) if (!folio) return -ENOMEM; if (iocb->ki_flags & IOCB_DONTCACHE) - __folio_set_dropbehind(folio); + __folio_set_reclaim(folio); /* * Protect against truncate / hole punch. 
Grabbing invalidate_lock @@ -2532,7 +2532,7 @@ static int filemap_readahead(struct kiocb *iocb, struct file *file, if (iocb->ki_flags & IOCB_NOIO) return -EAGAIN; if (iocb->ki_flags & IOCB_DONTCACHE) - ractl.dropbehind = 1; + ractl.reclaim = 1; page_cache_async_ra(&ractl, folio, last_index - folio->index); return 0; } @@ -2563,7 +2563,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count, if (iocb->ki_flags & IOCB_NOWAIT) flags = memalloc_noio_save(); if (iocb->ki_flags & IOCB_DONTCACHE) - ractl.dropbehind = 1; + ractl.reclaim = 1; page_cache_sync_ra(&ractl, last_index - index); if (iocb->ki_flags & IOCB_NOWAIT) memalloc_noio_restore(flags); @@ -2611,15 +2611,15 @@ static inline bool pos_same_folio(loff_t pos1, loff_t pos2, struct folio *folio) return (pos1 >> shift == pos2 >> shift); } -static void filemap_end_dropbehind_read(struct address_space *mapping, +static void filemap_end_reclaim_read(struct address_space *mapping, struct folio *folio) { - if (!folio_test_dropbehind(folio)) + if (!folio_test_reclaim(folio)) return; if (folio_test_writeback(folio) || folio_test_dirty(folio)) return; if (folio_trylock(folio)) { - if (folio_test_clear_dropbehind(folio)) + if (folio_test_clear_reclaim(folio)) folio_unmap_invalidate(mapping, folio, 0); folio_unlock(folio); } @@ -2741,7 +2741,7 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter, for (i = 0; i < folio_batch_count(&fbatch); i++) { struct folio *folio = fbatch.folios[i]; - filemap_end_dropbehind_read(mapping, folio); + filemap_end_reclaim_read(mapping, folio); folio_put(folio); } folio_batch_init(&fbatch); diff --git a/mm/migrate.c b/mm/migrate.c index 2bf9f08c4f84..72702e0607af 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -683,8 +683,8 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio) folio_set_dirty(newfolio); /* TODO: free the folio on migration? */ - if (folio_test_dropbehind(folio)) - folio_set_dropbehind(newfolio); + if (folio_test_reclaim(folio)) + folio_set_reclaim(newfolio); if (folio_test_young(folio)) folio_set_young(newfolio); diff --git a/mm/readahead.c b/mm/readahead.c index 6a4e96b69702..73ec47a67708 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -184,8 +184,8 @@ static struct folio *ractl_alloc_folio(struct readahead_control *ractl, struct folio *folio; folio = filemap_alloc_folio(gfp_mask, order); - if (folio && ractl->dropbehind) - __folio_set_dropbehind(folio); + if (folio && ractl->reclaim) + __folio_set_reclaim(folio); return folio; } diff --git a/mm/swap.c b/mm/swap.c index 96892a0d2491..6250e21e1a73 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -406,7 +406,7 @@ static bool lru_gen_clear_refs(struct folio *folio) */ void folio_mark_accessed(struct folio *folio) { - if (folio_test_dropbehind(folio)) + if (folio_test_reclaim(folio)) return; if (lru_gen_enabled()) { lru_gen_inc_refs(folio); diff --git a/mm/truncate.c b/mm/truncate.c index 864aaadc1e91..37f94bc9fbd4 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -486,7 +486,7 @@ unsigned long mapping_try_invalidate(struct address_space *mapping, * of interest and try to speed up its reclaim. 
*/ if (!ret) { - folio_set_dropbehind(folio); + folio_set_reclaim(folio); /* Likely in the lru cache of a remote CPU */ if (nr_failed) (*nr_failed)++; diff --git a/mm/vmscan.c b/mm/vmscan.c index 0b8a6e0f384c..11d503e9d079 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -692,13 +692,13 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping, if (shmem_mapping(mapping) && folio_test_large(folio)) wbc.list = folio_list; - folio_set_dropbehind(folio); + folio_set_reclaim(folio); res = mapping->a_ops->writepage(&folio->page, &wbc); if (res < 0) handle_write_error(mapping, folio, res); if (res == AOP_WRITEPAGE_ACTIVATE) { - folio_clear_dropbehind(folio); + folio_clear_reclaim(folio); return PAGE_ACTIVATE; } @@ -1140,7 +1140,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * for immediate reclaim are making it to the end of * the LRU a second time. */ - if (writeback && folio_test_dropbehind(folio)) + if (writeback && folio_test_reclaim(folio)) stat->nr_congested += nr_pages; /* @@ -1149,7 +1149,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * * 1) If reclaim is encountering an excessive number * of folios under writeback and this folio has both - * the writeback and dropbehind flags set, then it + * the writeback and reclaim flags set, then it * indicates that folios are being queued for I/O but * are being recycled through the LRU before the I/O * can complete. Waiting on the folio itself risks an @@ -1174,7 +1174,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * would probably show more reasons. * * 3) Legacy memcg encounters a folio that already has the - * dropbehind flag set. memcg does not have any dirty folio + * reclaim flag set. memcg does not have any dirty folio * throttling so we could easily OOM just because too many * folios are in writeback and there is nothing else to * reclaim. Wait for the writeback to complete. @@ -1193,17 +1193,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, /* Case 1 above */ if (current_is_kswapd() && - folio_test_dropbehind(folio) && + folio_test_reclaim(folio) && test_bit(PGDAT_WRITEBACK, &pgdat->flags)) { stat->nr_immediate += nr_pages; goto activate_locked; /* Case 2 above */ } else if (writeback_throttling_sane(sc) || - !folio_test_dropbehind(folio) || + !folio_test_reclaim(folio) || !may_enter_fs(folio, sc->gfp_mask) || (mapping && mapping_writeback_indeterminate(mapping))) { - folio_set_dropbehind(folio); + folio_set_reclaim(folio); stat->nr_writeback += nr_pages; goto activate_locked; @@ -1235,7 +1235,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, * Before reclaiming the folio, try to relocate * its contents to another node. */ - if (do_demote_pass && !folio_test_dropbehind(folio) && + if (do_demote_pass && !folio_test_reclaim(folio) && (thp_migration_supported() || !folio_test_large(folio))) { list_add(&folio->lru, &demote_folios); folio_unlock(folio); @@ -1358,7 +1358,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, */ if (folio_is_file_lru(folio) && (!current_is_kswapd() || - !folio_test_dropbehind(folio) || + !folio_test_reclaim(folio) || !test_bit(PGDAT_DIRTY, &pgdat->flags))) { /* * Immediately reclaim when written back. 
@@ -1368,7 +1368,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list, */ node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE, nr_pages); - folio_set_dropbehind(folio); + folio_set_reclaim(folio); goto activate_locked; } diff --git a/mm/zswap.c b/mm/zswap.c index c20bad0b0978..2d02336ea839 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -1097,7 +1097,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry, folio_mark_uptodate(folio); /* free the folio after writeback */ - folio_set_dropbehind(folio); + folio_set_reclaim(folio); /* start writeback */ __swap_writepage(folio, &wbc);