From patchwork Tue Feb 28 12:23:06 2023
X-Patchwork-Submitter: Yin Fengwei
X-Patchwork-Id: 13154855
From: Yin Fengwei <fengwei.yin@intel.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org, willy@infradead.org,
 sidhartha.kumar@oracle.com, mike.kravetz@oracle.com, jane.chu@oracle.com,
 naoya.horiguchi@nec.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v2 3/5] rmap: cleanup exit path of try_to_unmap_one_page()
Date: Tue, 28 Feb 2023 20:23:06 +0800
Message-Id: <20230228122308.2972219-4-fengwei.yin@intel.com>
In-Reply-To: <20230228122308.2972219-1-fengwei.yin@intel.com>
References: <20230228122308.2972219-1-fengwei.yin@intel.com>
MIME-Version: 1.0
Clean up the exit path of try_to_unmap_one_page() by removing duplicated
code. Move page_vma_mapped_walk_done() back to try_to_unmap_one(). Rename
subpage to page, as a folio has no concept of a subpage.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/rmap.c | 74 ++++++++++++++++++++++---------------------------------
 1 file changed, 30 insertions(+), 44 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 987ab402392f..d243e557c6e4 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1530,7 +1530,7 @@ static bool try_to_unmap_one_hugetlb(struct folio *folio,
 	 *
 	 * See Documentation/mm/mmu_notifier.rst
 	 */
-	page_remove_rmap(&folio->page, vma, folio_test_hugetlb(folio));
+	page_remove_rmap(&folio->page, vma, true);
 	/* No VM_LOCKED set in vma->vm_flags for hugetlb. So not
 	 * necessary to call mlock_drain_local().
 	 */
@@ -1545,15 +1545,13 @@ static bool try_to_unmap_one_page(struct folio *folio,
 		struct page_vma_mapped_walk pvmw, unsigned long address,
 		enum ttu_flags flags)
 {
-	bool anon_exclusive, ret = true;
-	struct page *subpage;
+	bool anon_exclusive;
+	struct page *page;
 	struct mm_struct *mm = vma->vm_mm;
 	pte_t pteval;
 
-	subpage = folio_page(folio,
-			pte_pfn(*pvmw.pte) - folio_pfn(folio));
-	anon_exclusive = folio_test_anon(folio) &&
-			PageAnonExclusive(subpage);
+	page = folio_page(folio, pte_pfn(*pvmw.pte) - folio_pfn(folio));
+	anon_exclusive = folio_test_anon(folio) && PageAnonExclusive(page);
 
 	flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
 	/* Nuke the page table entry. */
@@ -1581,15 +1579,14 @@ static bool try_to_unmap_one_page(struct folio *folio,
 	pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
 
 	/* Set the dirty flag on the folio now the pte is gone. */
-	if (pte_dirty(pteval))
+	if (pte_dirty(pteval) && !folio_test_dirty(folio))
 		folio_mark_dirty(folio);
 
 	/* Update high watermark before we lower rss */
 	update_hiwater_rss(mm);
 
-	if (PageHWPoison(subpage) && !(flags & TTU_HWPOISON)) {
-		pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
-		dec_mm_counter(mm, mm_counter(&folio->page));
+	if (PageHWPoison(page) && !(flags & TTU_HWPOISON)) {
+		pteval = swp_entry_to_pte(make_hwpoison_entry(page));
 		set_pte_at(mm, address, pvmw.pte, pteval);
 	} else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
 		/*
@@ -1602,12 +1599,11 @@ static bool try_to_unmap_one_page(struct folio *folio,
 		 * migration) will not expect userfaults on already
 		 * copied pages.
 		 */
-		dec_mm_counter(mm, mm_counter(&folio->page));
 		/* We have to invalidate as we cleared the pte */
 		mmu_notifier_invalidate_range(mm, address,
 					      address + PAGE_SIZE);
 	} else if (folio_test_anon(folio)) {
-		swp_entry_t entry = { .val = page_private(subpage) };
+		swp_entry_t entry = { .val = page_private(page) };
 		pte_t swp_pte;
 		/*
 		 * Store the swap location in the pte.
@@ -1616,12 +1612,10 @@ static bool try_to_unmap_one_page(struct folio *folio,
 		if (unlikely(folio_test_swapbacked(folio) !=
 					folio_test_swapcache(folio))) {
 			WARN_ON_ONCE(1);
-			ret = false;
 			/* We have to invalidate as we cleared the pte */
 			mmu_notifier_invalidate_range(mm, address,
 						address + PAGE_SIZE);
-			page_vma_mapped_walk_done(&pvmw);
-			goto discard;
+			goto exit;
 		}
 
 		/* MADV_FREE page check */
@@ -1653,7 +1647,6 @@ static bool try_to_unmap_one_page(struct folio *folio,
 				/* Invalidate as we cleared the pte */
 				mmu_notifier_invalidate_range(mm, address,
 							address + PAGE_SIZE);
-				dec_mm_counter(mm, MM_ANONPAGES);
 				goto discard;
 			}
 
@@ -1661,43 +1654,30 @@ static bool try_to_unmap_one_page(struct folio *folio,
 			 * If the folio was redirtied, it cannot be
 			 * discarded. Remap the page to page table.
 			 */
-			set_pte_at(mm, address, pvmw.pte, pteval);
 			folio_set_swapbacked(folio);
-			ret = false;
-			page_vma_mapped_walk_done(&pvmw);
-			goto discard;
+			goto exit_restore_pte;
 		}
 
-		if (swap_duplicate(entry) < 0) {
-			set_pte_at(mm, address, pvmw.pte, pteval);
-			ret = false;
-			page_vma_mapped_walk_done(&pvmw);
-			goto discard;
-		}
+		if (swap_duplicate(entry) < 0)
+			goto exit_restore_pte;
+
 		if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 			swap_free(entry);
-			set_pte_at(mm, address, pvmw.pte, pteval);
-			ret = false;
-			page_vma_mapped_walk_done(&pvmw);
-			goto discard;
+			goto exit_restore_pte;
 		}
 
 		/* See page_try_share_anon_rmap(): clear PTE first. */
-		if (anon_exclusive &&
-		    page_try_share_anon_rmap(subpage)) {
+		if (anon_exclusive && page_try_share_anon_rmap(page)) {
 			swap_free(entry);
-			set_pte_at(mm, address, pvmw.pte, pteval);
-			ret = false;
-			page_vma_mapped_walk_done(&pvmw);
-			goto discard;
+			goto exit_restore_pte;
 		}
+
 		if (list_empty(&mm->mmlist)) {
 			spin_lock(&mmlist_lock);
 			if (list_empty(&mm->mmlist))
 				list_add(&mm->mmlist, &init_mm.mmlist);
 			spin_unlock(&mmlist_lock);
 		}
-		dec_mm_counter(mm, MM_ANONPAGES);
 		inc_mm_counter(mm, MM_SWAPENTS);
 		swp_pte = swp_entry_to_pte(entry);
 		if (anon_exclusive)
@@ -1708,8 +1688,7 @@ static bool try_to_unmap_one_page(struct folio *folio,
 			swp_pte = pte_swp_mkuffd_wp(swp_pte);
 		set_pte_at(mm, address, pvmw.pte, swp_pte);
 		/* Invalidate as we cleared the pte */
-		mmu_notifier_invalidate_range(mm, address,
-				address + PAGE_SIZE);
+		mmu_notifier_invalidate_range(mm, address, address + PAGE_SIZE);
 	} else {
 		/*
 		 * This is a locked file-backed folio,
@@ -1722,11 +1701,16 @@ static bool try_to_unmap_one_page(struct folio *folio,
 		 *
 		 * See Documentation/mm/mmu_notifier.rst
 		 */
-		dec_mm_counter(mm, mm_counter_file(&folio->page));
 	}
 
 discard:
-	return ret;
+	dec_mm_counter(vma->vm_mm, mm_counter(&folio->page));
+	return true;
+
+exit_restore_pte:
+	set_pte_at(mm, address, pvmw.pte, pteval);
+exit:
+	return false;
 }
 
 /*
@@ -1804,8 +1788,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					pte_pfn(*pvmw.pte) - folio_pfn(folio));
 		ret = try_to_unmap_one_page(folio, vma,
 						range, pvmw, address, flags);
-		if (!ret)
+		if (!ret) {
+			page_vma_mapped_walk_done(&pvmw);
 			break;
+		}
 
 		/*
 		 * No need to call mmu_notifier_invalidate_range() it has be
@@ -1814,7 +1800,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		 *
 		 * See Documentation/mm/mmu_notifier.rst
 		 */
-		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
+		page_remove_rmap(subpage, vma, false);
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);