From patchwork Mon Jan 13 09:34:46 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13937070
From: "Kirill A. Shutemov"
To: Andrew Morton, "Matthew Wilcox (Oracle)", Jens Axboe
Cc: "Jason A. Donenfeld", "Kirill A. Shutemov", Andi Shyti, Chengming Zhou,
 Christian Brauner, Christophe Leroy, Dan Carpenter, David Airlie,
 David Hildenbrand, Hao Ge, Jani Nikula, Johannes Weiner, Joonas Lahtinen,
 Josef Bacik, Masami Hiramatsu, Mathieu Desnoyers, Miklos Szeredi,
 Nhat Pham, Oscar Salvador, Ran Xiaokai, Rodrigo Vivi, Simona Vetter,
 Steven Rostedt, Tvrtko Ursulin, Vlastimil Babka, Yosry Ahmed, Yu Zhao,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH 1/8] drm/i915/gem: Convert __shmem_writeback() to folios
Date: Mon, 13 Jan 2025 11:34:46 +0200
Message-ID: <20250113093453.1932083-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>
References: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>

Use folios instead of pages. This is preparation for removing PG_reclaim.

Signed-off-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index fe69f2c8527d..9016832b20fc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -320,25 +320,25 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 
 	/* Begin writeback on each dirty page */
 	for (i = 0; i < size >> PAGE_SHIFT; i++) {
-		struct page *page;
+		struct folio *folio;
 
-		page = find_lock_page(mapping, i);
-		if (!page)
+		folio = filemap_lock_folio(mapping, i);
+		if (!folio)
 			continue;
 
-		if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
+		if (!folio_mapped(folio) && folio_clear_dirty_for_io(folio)) {
 			int ret;
 
-			SetPageReclaim(page);
-			ret = mapping->a_ops->writepage(page, &wbc);
-			if (!PageWriteback(page))
-				ClearPageReclaim(page);
+			folio_set_reclaim(folio);
+			ret = mapping->a_ops->writepage(&folio->page, &wbc);
+			if (!folio_test_writeback(folio))
+				folio_clear_reclaim(folio);
 			if (!ret)
 				goto put;
 		}
-		unlock_page(page);
+		folio_unlock(folio);
 put:
-		put_page(page);
+		folio_put(folio);
 	}
 }
From patchwork Mon Jan 13 09:34:47 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13937074
From: "Kirill A. Shutemov"
To: Andrew Morton, "Matthew Wilcox (Oracle)", Jens Axboe
Subject: [PATCH 2/8] drm/i915/gem: Use PG_dropbehind instead of PG_reclaim
Date: Mon, 13 Jan 2025 11:34:47 +0200
Message-ID: <20250113093453.1932083-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>

The recently introduced PG_dropbehind allows for freeing folios
immediately after writeback. Unlike PG_reclaim, it does not need vmscan
to be involved to get the folio freed.

Instead of using folio_set_reclaim(), use folio_set_dropbehind() in
__shmem_writeback().

It is safe to leave PG_dropbehind on the folio if, for some reason
(bug?), the folio is not in a writeback state after ->writepage(). In
these cases, the kernel had to clear PG_reclaim as it shared a page flag
bit with PG_readahead.

Signed-off-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 9016832b20fc..c1724847c001 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -329,10 +329,8 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 		if (!folio_mapped(folio) && folio_clear_dirty_for_io(folio)) {
 			int ret;
 
-			folio_set_reclaim(folio);
+			folio_set_dropbehind(folio);
 			ret = mapping->a_ops->writepage(&folio->page, &wbc);
-			if (!folio_test_writeback(folio))
-				folio_clear_reclaim(folio);
 			if (!ret)
 				goto put;
 		}
From patchwork Mon Jan 13 09:34:48 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13937071
From: "Kirill A. Shutemov"
To: Andrew Morton, "Matthew Wilcox (Oracle)", Jens Axboe
Subject: [PATCH 3/8] mm/zswap: Use PG_dropbehind instead of PG_reclaim
Date: Mon, 13 Jan 2025 11:34:48 +0200
Message-ID: <20250113093453.1932083-4-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>

The recently introduced PG_dropbehind allows for freeing folios
immediately after writeback. Unlike PG_reclaim, it does not need vmscan
to be involved to get the folio freed.

Instead of using folio_set_reclaim(), use folio_set_dropbehind() in
zswap_writeback_entry().

Signed-off-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
Acked-by: Yosry Ahmed
---
 mm/zswap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 167ae641379f..c20bad0b0978 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1096,8 +1096,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	/* folio is up to date */
 	folio_mark_uptodate(folio);
 
-	/* move it to the tail of the inactive list after end_writeback */
-	folio_set_reclaim(folio);
+	/* free the folio after writeback */
+	folio_set_dropbehind(folio);
 
 	/* start writeback */
 	__swap_writepage(folio, &wbc);
From patchwork Mon Jan 13 09:34:49 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13937072
From: "Kirill A. Shutemov"
To: Andrew Morton, "Matthew Wilcox (Oracle)", Jens Axboe
Subject: [PATCH 4/8] mm/swap: Use PG_dropbehind instead of PG_reclaim
Date: Mon, 13 Jan 2025 11:34:49 +0200
Message-ID: <20250113093453.1932083-5-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>

The recently introduced PG_dropbehind allows for freeing folios
immediately after writeback. Unlike PG_reclaim, it does not need vmscan
to be involved to get the folio freed.

Instead of using folio_set_reclaim(), use folio_set_dropbehind() in
lru_deactivate_file().

Signed-off-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
---
 mm/swap.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index fc8281ef4241..4eb33b4804a8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -562,14 +562,8 @@ static void lru_deactivate_file(struct lruvec *lruvec, struct folio *folio)
 	folio_clear_referenced(folio);
 
 	if (folio_test_writeback(folio) || folio_test_dirty(folio)) {
-		/*
-		 * Setting the reclaim flag could race with
-		 * folio_end_writeback() and confuse readahead. But the
-		 * race window is _really_ small and it's not a critical
-		 * problem.
-		 */
 		lruvec_add_folio(lruvec, folio);
-		folio_set_reclaim(folio);
+		folio_set_dropbehind(folio);
 	} else {
 		/*
 		 * The folio's writeback ended while it was in the batch.
From patchwork Mon Jan 13 09:34:50 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13937075
From: "Kirill A. Shutemov"
To: Andrew Morton, "Matthew Wilcox (Oracle)", Jens Axboe
Subject: [PATCH 5/8] mm/vmscan: Use PG_dropbehind instead of PG_reclaim
Date: Mon, 13 Jan 2025 11:34:50 +0200
Message-ID: <20250113093453.1932083-6-kirill.shutemov@linux.intel.com>
In-Reply-To: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>

The recently introduced PG_dropbehind allows for freeing folios
immediately after writeback. Unlike PG_reclaim, it does not need vmscan
to be involved to get the folio freed.

Instead of using folio_set_reclaim(), use folio_set_dropbehind() in
pageout().

It is safe to leave PG_dropbehind on the folio if, for some reason
(bug?), the folio is not in a writeback state after ->writepage(). In
these cases, the kernel had to clear PG_reclaim as it shared a page flag
bit with PG_readahead.

Signed-off-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
---
 mm/vmscan.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a099876fa029..d15f80333d6b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -692,19 +692,16 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping,
 		if (shmem_mapping(mapping) && folio_test_large(folio))
 			wbc.list = folio_list;
 
-		folio_set_reclaim(folio);
+		folio_set_dropbehind(folio);
+
 		res = mapping->a_ops->writepage(&folio->page, &wbc);
 		if (res < 0)
 			handle_write_error(mapping, folio, res);
 		if (res == AOP_WRITEPAGE_ACTIVATE) {
-			folio_clear_reclaim(folio);
+			folio_clear_dropbehind(folio);
 			return PAGE_ACTIVATE;
 		}
 
-		if (!folio_test_writeback(folio)) {
-			/* synchronous write or broken a_ops? */
-			folio_clear_reclaim(folio);
-		}
 		trace_mm_vmscan_write_folio(folio);
 		node_stat_add_folio(folio, NR_VMSCAN_WRITE);
 		return PAGE_SUCCESS;
Shutemov" X-Patchwork-Id: 13937076 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 58D2C1FDA88; Mon, 13 Jan 2025 09:35:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.12 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736760926; cv=none; b=Bp2HRynq/Sg4x6UjZG3BTZnFeLSlzjX5voCgaeVMsEaA+t8j9BH0WR6I81k+XaJ+o6RhfEN37rgdaicyF4pX30cfnk3vgtR1B9hh/e/wFmRjPXSn3VfTb2nAzhJkP5jIFFxJBYASi0sb5ziGjGxk91yYgEFQg5CU0YvjGJpt8UA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736760926; c=relaxed/simple; bh=wVRioOkkZOwv25N2ZRVjElURmnKSRsrD38rHoZaYYT0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=mzCRE0og9aGLa9v9uyYN0DaacPz9eUklsWLpAO9YqJV3DA6obcWgXjzv4G/KlzVhvBjQgWUN4NU+xPsy8HyAtfAlIW0VdT0IGazhroInJIHhobyy/FdCZBbS+d3sYdDQMMYuDi1NW94YRVZTaKAwYetiV7cWTC6VgldlErsgr5E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.helo=mgamail.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=OR1mEGLA; arc=none smtp.client-ip=192.198.163.12 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.helo=mgamail.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="OR1mEGLA" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736760924; x=1768296924; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wVRioOkkZOwv25N2ZRVjElURmnKSRsrD38rHoZaYYT0=; 
b=OR1mEGLAI3ZNVZ2DohjZgxDZ8Pf24H5qr9yJSXhblP6/UQW89SVA/rCG KOQHMZkugyTJG7CHro8luaC3xx3p9a+93x9z3I/5c4q8aOeR3xW4U7/4w TiyV6KBf4hjLqSbaVTwLVMrFp6wRfC5zoigrV09rmSump+rwXmw7wcEJa h5mA356HFfQrkLPiAcfVyso2v4yitlB3vfQ+hIvc2hp59OjelvQiubLF3 +YepwJlEiAiLXvkM/GAZF4Ka098RkPI96nTcl25R19vfuKW0+NeOzg48Y +CsXjLAFsN14cjE9xqJpWXIK6ANdUpBRB9WvqEHF3u/5N9UIigLYYFBr6 A==; X-CSE-ConnectionGUID: zvv7o9dWS2GjvUlDzuktkA== X-CSE-MsgGUID: gV+qUbMuTz6VzrcXIFXB4w== X-IronPort-AV: E=McAfee;i="6700,10204,11313"; a="40949142" X-IronPort-AV: E=Sophos;i="6.12,310,1728975600"; d="scan'208";a="40949142" Received: from orviesa010.jf.intel.com ([10.64.159.150]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Jan 2025 01:35:22 -0800 X-CSE-ConnectionGUID: D8E71QBnRGOAapjWvsrSMg== X-CSE-MsgGUID: XTdDCN2ZS361GzcokH0UHQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="104303084" Received: from black.fi.intel.com ([10.237.72.28]) by orviesa010.jf.intel.com with ESMTP; 13 Jan 2025 01:35:14 -0800 Received: by black.fi.intel.com (Postfix, from userid 1000) id 52D544BE; Mon, 13 Jan 2025 11:35:04 +0200 (EET) From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. 
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH 6/8] mm/vmscan: Use PG_dropbehind instead of PG_reclaim in shrink_folio_list() Date: Mon, 13 Jan 2025 11:34:51 +0200 Message-ID: <20250113093453.1932083-7-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com> References: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The recently introduced PG_dropbehind allows for freeing folios immediately after writeback. Unlike PG_reclaim, it does not need vmscan to be involved to get the folio freed. Instead of using folio_set_reclaim(), use folio_set_dropbehind() in shrink_folio_list(). It is safe to leave PG_dropbehind on the folio if, for some reason (bug?), the folio is not in a writeback state after ->writepage(). In these cases, the kernel had to clear PG_reclaim as it shared a page flag bit with PG_readahead. Also use PG_dropbehind instead PG_reclaim to detect I/O congestion. Signed-off-by: Kirill A. 
Shutemov 
Acked-by: David Hildenbrand 
---
 mm/vmscan.c | 30 ++++++++----------------------
 1 file changed, 8 insertions(+), 22 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d15f80333d6b..bb5ec22f97b5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1140,7 +1140,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			 * for immediate reclaim are making it to the end of
 			 * the LRU a second time.
 			 */
-			if (writeback && folio_test_reclaim(folio))
+			if (writeback && folio_test_dropbehind(folio))
 				stat->nr_congested += nr_pages;

 			/*
@@ -1149,7 +1149,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			 *
 			 * 1) If reclaim is encountering an excessive number
 			 *    of folios under writeback and this folio has both
-			 *    the writeback and reclaim flags set, then it
+			 *    the writeback and dropbehind flags set, then it
 			 *    indicates that folios are being queued for I/O but
 			 *    are being recycled through the LRU before the I/O
 			 *    can complete. Waiting on the folio itself risks an
@@ -1174,7 +1174,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			 *    would probably show more reasons.
 			 *
 			 * 3) Legacy memcg encounters a folio that already has the
-			 *    reclaim flag set. memcg does not have any dirty folio
+			 *    dropbehind flag set. memcg does not have any dirty folio
 			 *    throttling so we could easily OOM just because too many
 			 *    folios are in writeback and there is nothing else to
 			 *    reclaim. Wait for the writeback to complete.
@@ -1193,31 +1193,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,

 			/* Case 1 above */
 			if (current_is_kswapd() &&
-			    folio_test_reclaim(folio) &&
+			    folio_test_dropbehind(folio) &&
 			    test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
 				stat->nr_immediate += nr_pages;
 				goto activate_locked;

 			/* Case 2 above */
 			} else if (writeback_throttling_sane(sc) ||
-			    !folio_test_reclaim(folio) ||
+			    !folio_test_dropbehind(folio) ||
 			    !may_enter_fs(folio, sc->gfp_mask) ||
 			    (mapping && mapping_writeback_indeterminate(mapping))) {
-				/*
-				 * This is slightly racy -
-				 * folio_end_writeback() might have
-				 * just cleared the reclaim flag, then
-				 * setting the reclaim flag here ends up
-				 * interpreted as the readahead flag - but
-				 * that does not matter enough to care.
-				 * What we do want is for this folio to
-				 * have the reclaim flag set next time
-				 * memcg reclaim reaches the tests above,
-				 * so it will then wait for writeback to
-				 * avoid OOM; and it's also appropriate
-				 * in global reclaim.
-				 */
-				folio_set_reclaim(folio);
+				folio_set_dropbehind(folio);
 				stat->nr_writeback += nr_pages;
 				goto activate_locked;

@@ -1372,7 +1358,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			 */
 			if (folio_is_file_lru(folio) &&
 			    (!current_is_kswapd() ||
-			     !folio_test_reclaim(folio) ||
+			     !folio_test_dropbehind(folio) ||
 			     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
 				/*
 				 * Immediately reclaim when written back.
@@ -1382,7 +1368,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 				 */
 				node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
 						nr_pages);
-				folio_set_reclaim(folio);
+				folio_set_dropbehind(folio);
 				goto activate_locked;
 			}

From patchwork Mon Jan 13 09:34:52 2025
X-Patchwork-Submitter: "Kirill A.
Shutemov"
X-Patchwork-Id: 13937077
From: "Kirill A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A.
Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH 7/8] mm/mglru: Check PG_dropbehind instead of PG_reclaim in lru_gen_folio_seq()
Date: Mon, 13 Jan 2025 11:34:52 +0200
Message-ID: <20250113093453.1932083-8-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>
References: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org
MIME-Version: 1.0

The kernel now sets PG_dropbehind instead of PG_reclaim everywhere, so check PG_dropbehind in lru_gen_folio_seq().

There is no need to check for dirty and writeback anymore, as there is no longer a conflict with PG_readahead.

Signed-off-by: Kirill A.
Shutemov 
Acked-by: David Hildenbrand 
---
 include/linux/mm_inline.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f9157a0c42a5..f353d3c610ac 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -241,8 +241,7 @@ static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct folio *folio,
 	else if (reclaiming)
 		gen = MAX_NR_GENS;
 	else if ((!folio_is_file_lru(folio) && !folio_test_swapcache(folio)) ||
-		 (folio_test_reclaim(folio) &&
-		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
+		 folio_test_dropbehind(folio))
 		gen = MIN_NR_GENS;
 	else
 		gen = MAX_NR_GENS - folio_test_workingset(folio);

From patchwork Mon Jan 13 09:34:53 2025
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13937078
From: "Kirill
A. Shutemov" To: Andrew Morton , "Matthew Wilcox (Oracle)" , Jens Axboe Cc: "Jason A. Donenfeld" , "Kirill A. Shutemov" , Andi Shyti , Chengming Zhou , Christian Brauner , Christophe Leroy , Dan Carpenter , David Airlie , David Hildenbrand , Hao Ge , Jani Nikula , Johannes Weiner , Joonas Lahtinen , Josef Bacik , Masami Hiramatsu , Mathieu Desnoyers , Miklos Szeredi , Nhat Pham , Oscar Salvador , Ran Xiaokai , Rodrigo Vivi , Simona Vetter , Steven Rostedt , Tvrtko Ursulin , Vlastimil Babka , Yosry Ahmed , Yu Zhao , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH 8/8] mm: Remove PG_reclaim
Date: Mon, 13 Jan 2025 11:34:53 +0200
Message-ID: <20250113093453.1932083-9-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>
References: <20250113093453.1932083-1-kirill.shutemov@linux.intel.com>
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org
MIME-Version: 1.0

Nobody sets the flag anymore. Remove PG_reclaim, making PG_readahead the exclusive user of the page flag bit.

Signed-off-by: Kirill A.
Shutemov 
Acked-by: David Hildenbrand 
---
 fs/fuse/dev.c                          |  2 +-
 fs/proc/page.c                         |  2 +-
 include/linux/mm_inline.h              |  1 -
 include/linux/page-flags.h             | 15 +++++----------
 include/trace/events/mmflags.h         |  2 +-
 include/uapi/linux/kernel-page-flags.h |  2 +-
 mm/filemap.c                           | 12 ------------
 mm/migrate.c                           | 10 ++--------
 mm/page-writeback.c                    | 16 +---------------
 mm/page_io.c                           | 15 +++++----------
 mm/swap.c                              | 16 ----------------
 mm/vmscan.c                            |  7 -------
 tools/mm/page-types.c                  |  8 +-------
 13 files changed, 18 insertions(+), 90 deletions(-)

diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index 27ccae63495d..20005e2e1d28 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -827,7 +827,7 @@ static int fuse_check_folio(struct folio *folio)
 	    1 << PG_lru |
 	    1 << PG_active |
 	    1 << PG_workingset |
-	    1 << PG_reclaim |
+	    1 << PG_readahead |
 	    1 << PG_waiters |
 	    LRU_GEN_MASK | LRU_REFS_MASK))) {
 		dump_page(&folio->page, "fuse: trying to steal weird page");
diff --git a/fs/proc/page.c b/fs/proc/page.c
index a55f5acefa97..59860ba2393c 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -189,7 +189,7 @@ u64 stable_page_flags(const struct page *page)
 	u |= kpf_copy_bit(k, KPF_LRU, PG_lru);
 	u |= kpf_copy_bit(k, KPF_REFERENCED, PG_referenced);
 	u |= kpf_copy_bit(k, KPF_ACTIVE, PG_active);
-	u |= kpf_copy_bit(k, KPF_RECLAIM, PG_reclaim);
+	u |= kpf_copy_bit(k, KPF_READAHEAD, PG_readahead);

 #define SWAPCACHE ((1 << PG_swapbacked) | (1 << PG_swapcache))
 	if ((k & SWAPCACHE) == SWAPCACHE)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f353d3c610ac..269acf1f77b4 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -270,7 +270,6 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
 	set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
 	lru_gen_update_size(lruvec, folio, -1, gen);
-	/* for folio_rotate_reclaimable() */
 	if (reclaiming)
 		list_add_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 	else
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 2414e7921eea..8f59fd8b86c9 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -63,8 +63,8 @@
 * might lose their PG_swapbacked flag when they simply can be dropped (e.g. as
 * a result of MADV_FREE).
 *
- * PG_referenced, PG_reclaim are used for page reclaim for anonymous and
- * file-backed pagecache (see mm/vmscan.c).
+ * PG_referenced is used for page reclaim for anonymous and file-backed
+ * pagecache (see mm/vmscan.c).
 *
 * PG_arch_1 is an architecture specific page state bit. The generic code
 * guarantees that this bit is cleared for a page when it first is entered into
@@ -107,7 +107,7 @@ enum pageflags {
 	PG_reserved,
 	PG_private,		/* If pagecache, has fs-private data */
 	PG_private_2,		/* If pagecache, has fs aux data */
-	PG_reclaim,		/* To be reclaimed asap */
+	PG_readahead,
 	PG_swapbacked,		/* Page is backed by RAM/swap */
 	PG_unevictable,		/* Page is "unevictable" */
 	PG_dropbehind,		/* drop pages on IO completion */
@@ -129,8 +129,6 @@ enum pageflags {
 #endif
 	__NR_PAGEFLAGS,

-	PG_readahead = PG_reclaim,
-
 	/* Anonymous memory (and shmem) */
 	PG_swapcache = PG_owner_priv_1,	/* Swap page: swp_entry_t in private */
 	/* Some filesystems */
@@ -168,7 +166,7 @@ enum pageflags {
 	PG_xen_remapped = PG_owner_priv_1,

 	/* non-lru isolated movable page */
-	PG_isolated = PG_reclaim,
+	PG_isolated = PG_readahead,

 	/* Only valid for buddy pages. Used to track pages that are reported */
 	PG_reported = PG_uptodate,
@@ -187,7 +185,7 @@ enum pageflags {
 	/* At least one page in this folio has the hwpoison flag set */
 	PG_has_hwpoisoned = PG_active,
 	PG_large_rmappable = PG_workingset, /* anon or file-backed */
-	PG_partially_mapped = PG_reclaim, /* was identified to be partially mapped */
+	PG_partially_mapped = PG_readahead, /* was identified to be partially mapped */
 };

 #define PAGEFLAGS_MASK		((1UL << NR_PAGEFLAGS) - 1)
@@ -594,9 +592,6 @@ TESTPAGEFLAG(Writeback, writeback, PF_NO_TAIL)
 	TESTSCFLAG(Writeback, writeback, PF_NO_TAIL)
 FOLIO_FLAG(mappedtodisk, FOLIO_HEAD_PAGE)

-/* PG_readahead is only used for reads; PG_reclaim is only for writes */
-PAGEFLAG(Reclaim, reclaim, PF_NO_TAIL)
-	TESTCLEARFLAG(Reclaim, reclaim, PF_NO_TAIL)
 FOLIO_FLAG(readahead, FOLIO_HEAD_PAGE)
 	FOLIO_TEST_CLEAR_FLAG(readahead, FOLIO_HEAD_PAGE)
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 3bc8656c8359..15d92784a745 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -114,7 +114,7 @@
 	DEF_PAGEFLAG_NAME(private_2),		\
 	DEF_PAGEFLAG_NAME(writeback),		\
 	DEF_PAGEFLAG_NAME(head),		\
-	DEF_PAGEFLAG_NAME(reclaim),		\
+	DEF_PAGEFLAG_NAME(readahead),		\
 	DEF_PAGEFLAG_NAME(swapbacked),		\
 	DEF_PAGEFLAG_NAME(unevictable),		\
 	DEF_PAGEFLAG_NAME(dropbehind)		\
diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h
index ff8032227876..e5a9a113e079 100644
--- a/include/uapi/linux/kernel-page-flags.h
+++ b/include/uapi/linux/kernel-page-flags.h
@@ -15,7 +15,7 @@
 #define KPF_ACTIVE		6
 #define KPF_SLAB		7
 #define KPF_WRITEBACK		8
-#define KPF_RECLAIM		9
+#define KPF_READAHEAD		9
 #define KPF_BUDDY		10

 /* 11-20: new additions in 2.6.31 */
diff --git a/mm/filemap.c b/mm/filemap.c
index 5ca26f5e7238..8951c37c8a38 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1624,18 +1624,6 @@ void folio_end_writeback(struct folio *folio)
 	VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio);

-	/*
-	 * folio_test_clear_reclaim() could be used here but it is an
-	 * atomic operation and overkill in this particular case. Failing
-	 * to shuffle a folio marked for immediate reclaim is too mild
-	 * a gain to justify taking an atomic operation penalty at the
-	 * end of every folio writeback.
-	 */
-	if (folio_test_reclaim(folio)) {
-		folio_clear_reclaim(folio);
-		folio_rotate_reclaimable(folio);
-	}
-
 	/*
 	 * Writeback does not hold a folio reference of its own, relying
 	 * on truncation to wait for the clearing of PG_writeback.
diff --git a/mm/migrate.c b/mm/migrate.c
index caadbe393aa2..beba72da5e33 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -686,6 +686,8 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 		folio_set_young(newfolio);
 	if (folio_test_idle(folio))
 		folio_set_idle(newfolio);
+	if (folio_test_readahead(folio))
+		folio_set_readahead(newfolio);

 	folio_migrate_refs(newfolio, folio);
 	/*
@@ -728,14 +730,6 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	if (folio_test_writeback(newfolio))
 		folio_end_writeback(newfolio);

-	/*
-	 * PG_readahead shares the same bit with PG_reclaim. The above
-	 * end_page_writeback() may clear PG_readahead mistakenly, so set the
-	 * bit after that.
-	 */
-	if (folio_test_readahead(folio))
-		folio_set_readahead(newfolio);
-
 	folio_copy_owner(newfolio, folio);
 	pgalloc_tag_swap(newfolio, folio);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 4f5970723cf2..f2b94a2cbfcf 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2888,22 +2888,8 @@ bool folio_mark_dirty(struct folio *folio)
 {
 	struct address_space *mapping = folio_mapping(folio);

-	if (likely(mapping)) {
-		/*
-		 * readahead/folio_deactivate could remain
-		 * PG_readahead/PG_reclaim due to race with folio_end_writeback
-		 * About readahead, if the folio is written, the flags would be
-		 * reset. So no problem.
-		 * About folio_deactivate, if the folio is redirtied,
-		 * the flag will be reset. So no problem. but if the
-		 * folio is used by readahead it will confuse readahead
-		 * and make it restart the size rampup process. But it's
-		 * a trivial problem.
-		 */
-		if (folio_test_reclaim(folio))
-			folio_clear_reclaim(folio);
+	if (likely(mapping))
 		return mapping->a_ops->dirty_folio(mapping, folio);
-	}

 	return noop_dirty_folio(mapping, folio);
 }
diff --git a/mm/page_io.c b/mm/page_io.c
index 9b983de351f9..0cb71f318fb1 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -37,14 +37,11 @@ static void __end_swap_bio_write(struct bio *bio)
 		 * Re-dirty the page in order to avoid it being reclaimed.
 		 * Also print a dire warning that things will go BAD (tm)
 		 * very quickly.
-		 *
-		 * Also clear PG_reclaim to avoid folio_rotate_reclaimable()
 		 */
 		folio_mark_dirty(folio);
 		pr_alert_ratelimited("Write-error on swap-device (%u:%u:%llu)\n",
 				     MAJOR(bio_dev(bio)), MINOR(bio_dev(bio)),
 				     (unsigned long long)bio->bi_iter.bi_sector);
-		folio_clear_reclaim(folio);
 	}
 	folio_end_writeback(folio);
 }
@@ -350,19 +347,17 @@ static void sio_write_complete(struct kiocb *iocb, long ret)

 	if (ret != sio->len) {
 		/*
-		 * In the case of swap-over-nfs, this can be a
-		 * temporary failure if the system has limited
-		 * memory for allocating transmit buffers.
-		 * Mark the page dirty and avoid
-		 * folio_rotate_reclaimable but rate-limit the
-		 * messages.
+		 * In the case of swap-over-nfs, this can be a temporary failure
+		 * if the system has limited memory for allocating transmit
+		 * buffers.
+		 *
+		 * Mark the page dirty but rate-limit the messages.
 		 */
 		pr_err_ratelimited("Write error %ld on dio swapfile (%llu)\n",
 				   ret, swap_dev_pos(page_swap_entry(page)));
 		for (p = 0; p < sio->pages; p++) {
 			page = sio->bvec[p].bv_page;
 			set_page_dirty(page);
-			ClearPageReclaim(page);
 		}
 	}
diff --git a/mm/swap.c b/mm/swap.c
index 4eb33b4804a8..5b94f13821e3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -221,22 +221,6 @@ static void lru_move_tail(struct lruvec *lruvec, struct folio *folio)
 	__count_vm_events(PGROTATED, folio_nr_pages(folio));
 }

-/*
- * Writeback is about to end against a folio which has been marked for
- * immediate reclaim. If it still appears to be reclaimable, move it
- * to the tail of the inactive list.
- *
- * folio_rotate_reclaimable() must disable IRQs, to prevent nasty races.
- */
-void folio_rotate_reclaimable(struct folio *folio)
-{
-	if (folio_test_locked(folio) || folio_test_dirty(folio) ||
-	    folio_test_unevictable(folio))
-		return;
-
-	folio_batch_add_and_move(folio, lru_move_tail, true);
-}
-
 void lru_note_cost(struct lruvec *lruvec, bool file,
 		   unsigned int nr_io, unsigned int nr_rotated)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bb5ec22f97b5..e61e88e63511 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3216,9 +3216,6 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 		new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_FLAGS);
 		new_flags |= (new_gen + 1UL) << LRU_GEN_PGOFF;
-		/* for folio_end_writeback() */
-		if (reclaiming)
-			new_flags |= BIT(PG_reclaim);
 	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));

 	lru_gen_update_size(lruvec, folio, old_gen, new_gen);
@@ -4460,9 +4457,6 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct scan_control *sc)
 	if (!folio_test_referenced(folio))
 		set_mask_bits(&folio->flags, LRU_REFS_MASK, 0);

-	/* for shrink_folio_list() */
-	folio_clear_reclaim(folio);
-
 	success = lru_gen_del_folio(lruvec, folio, true);
 	VM_WARN_ON_ONCE_FOLIO(!success, folio);

@@ -4659,7 +4653,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
 			continue;
 		}

-		/* retry folios that may have missed folio_rotate_reclaimable() */
 		if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) &&
 		    !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
 			list_move(&folio->lru, &clean);
diff --git a/tools/mm/page-types.c b/tools/mm/page-types.c
index bcac7ebfb51f..c06647501370 100644
--- a/tools/mm/page-types.c
+++ b/tools/mm/page-types.c
@@ -85,7 +85,6 @@
 * not part of kernel API
 */
 #define KPF_ANON_EXCLUSIVE	47
-#define KPF_READAHEAD		48
 #define KPF_SLUB_FROZEN		50
 #define KPF_SLUB_DEBUG		51
 #define KPF_FILE		61
@@ -108,7 +107,7 @@ static const char * const page_flag_names[] = {
 	[KPF_ACTIVE]		= "A:active",
 	[KPF_SLAB]		= "S:slab",
 	[KPF_WRITEBACK]		= "W:writeback",
-	[KPF_RECLAIM]		= "I:reclaim",
+	[KPF_READAHEAD]		= "I:readahead",
 	[KPF_BUDDY]		= "B:buddy",
 	[KPF_MMAP]		= "M:mmap",
@@ -139,7 +138,6 @@ static const char * const page_flag_names[] = {
 	[KPF_ARCH_2]		= "H:arch_2",

 	[KPF_ANON_EXCLUSIVE]	= "d:anon_exclusive",
-	[KPF_READAHEAD]		= "I:readahead",
 	[KPF_SLUB_FROZEN]	= "A:slub_frozen",
 	[KPF_SLUB_DEBUG]	= "E:slub_debug",
@@ -484,10 +482,6 @@ static uint64_t expand_overloaded_flags(uint64_t flags, uint64_t pme)
 		flags ^= BIT(ERROR) | BIT(SLUB_DEBUG);
 	}

-	/* PG_reclaim is overloaded as PG_readahead in the read path */
-	if ((flags & (BIT(RECLAIM) | BIT(WRITEBACK))) == BIT(RECLAIM))
-		flags ^= BIT(RECLAIM) | BIT(READAHEAD);
-
 	if (pme & PM_SOFT_DIRTY)
 		flags |= BIT(SOFTDIRTY);
 	if (pme & PM_FILE)