From patchwork Thu Feb 15 12:17:55 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13558303
From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand, Mark Rutland, Catalin Marinas, Will Deacon,
    Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
    Andrew Morton, Muchun Song
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 3/4] mm/memory: Use ptep_get_lockless_norecency() for
 orig_pte
Date: Thu, 15 Feb 2024 12:17:55 +0000
Message-Id: <20240215121756.2734131-4-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240215121756.2734131-1-ryan.roberts@arm.com>
References: <20240215121756.2734131-1-ryan.roberts@arm.com>

Let's convert handle_pte_fault()'s use of ptep_get_lockless() to
ptep_get_lockless_norecency() to save orig_pte.

There are a number of places that follow this model:

  orig_pte = ptep_get_lockless(ptep)
  ...
  if (!pte_same(orig_pte, ptep_get(ptep)))
          // RACE!
  ...

So we need to be careful to convert all of those to use
pte_same_norecency() so that the access and dirty bits are excluded
from the comparison. (A sketch of the intended norecency comparison
semantics follows the diff.)

Additionally there are a couple of places that genuinely rely on the
access and dirty bits of orig_pte, but with some careful refactoring,
we can use ptep_get() once we are holding the lock to achieve
equivalent logic.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/memory.c | 55 +++++++++++++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 21 deletions(-)

-- 
2.25.1

diff --git a/mm/memory.c b/mm/memory.c
index 8e65fa1884f1..075245314ec3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3014,7 +3014,7 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
 	if (sizeof(pte_t) > sizeof(unsigned long)) {
 		spin_lock(vmf->ptl);
-		same = pte_same(ptep_get(vmf->pte), vmf->orig_pte);
+		same = pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte);
 		spin_unlock(vmf->ptl);
 	}
 #endif
@@ -3062,11 +3062,14 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	 * take a double page fault, so mark it accessed here.
 	 */
 	vmf->pte = NULL;
-	if (!arch_has_hw_pte_young() && !pte_young(vmf->orig_pte)) {
+	if (!arch_has_hw_pte_young()) {
 		pte_t entry;
 
 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
-		if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		if (likely(vmf->pte))
+			entry = ptep_get(vmf->pte);
+		if (unlikely(!vmf->pte ||
+			     !pte_same_norecency(entry, vmf->orig_pte))) {
 			/*
 			 * Other thread has already handled the fault
 			 * and update local tlb only
@@ -3077,9 +3080,11 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 			goto pte_unlock;
 		}
 
-		entry = pte_mkyoung(vmf->orig_pte);
-		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
-			update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
+		if (!pte_young(entry)) {
+			entry = pte_mkyoung(entry);
+			if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
+				update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
+		}
 	}
 
 	/*
@@ -3094,7 +3099,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 
 		/* Re-validate under PTL if the page is still mapped */
 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
-		if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		if (unlikely(!vmf->pte ||
+			     !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte))) {
 			/* The PTE changed under us, update local tlb */
 			if (vmf->pte)
 				update_mmu_tlb(vma, addr, vmf->pte);
@@ -3369,14 +3375,17 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	 * Re-check the pte - we dropped the lock
 	 */
 	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
-	if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (likely(vmf->pte))
+		entry = ptep_get(vmf->pte);
+	if (likely(vmf->pte && pte_same_norecency(entry, vmf->orig_pte))) {
 		if (old_folio) {
 			if (!folio_test_anon(old_folio)) {
 				dec_mm_counter(mm, mm_counter_file(old_folio));
 				inc_mm_counter(mm, MM_ANONPAGES);
 			}
 		} else {
-			ksm_might_unmap_zero_page(mm, vmf->orig_pte);
+			/* Needs dirty bit so can't use vmf->orig_pte. */
+			ksm_might_unmap_zero_page(mm, entry);
 			inc_mm_counter(mm, MM_ANONPAGES);
 		}
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
@@ -3494,7 +3503,7 @@ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio
 	 * We might have raced with another page fault while we released the
 	 * pte_offset_map_lock.
 	 */
-	if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte)) {
+	if (!pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)) {
 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return VM_FAULT_NOPAGE;
@@ -3883,7 +3892,8 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 				       vmf->address, &vmf->ptl);
-	if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+	if (likely(vmf->pte &&
+		   pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)))
 		restore_exclusive_pte(vma, vmf->page, vmf->address, vmf->pte);
 
 	if (vmf->pte)
@@ -3928,7 +3938,7 @@ static vm_fault_t pte_marker_clear(struct vm_fault *vmf)
 	 * quickly from a PTE_MARKER_UFFD_WP into PTE_MARKER_POISONED.
 	 * So is_pte_marker() check is not enough to safely drop the pte.
 	 */
-	if (pte_same(vmf->orig_pte, ptep_get(vmf->pte)))
+	if (pte_same_norecency(vmf->orig_pte, ptep_get(vmf->pte)))
 		pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return 0;
@@ -4028,8 +4038,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
 			if (unlikely(!vmf->pte ||
-				     !pte_same(ptep_get(vmf->pte),
-							vmf->orig_pte)))
+				     !pte_same_norecency(ptep_get(vmf->pte),
+							 vmf->orig_pte)))
 				goto unlock;
 
 			/*
@@ -4117,7 +4127,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
 			if (likely(vmf->pte &&
-				   pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+				   pte_same_norecency(ptep_get(vmf->pte),
+						      vmf->orig_pte)))
 				ret = VM_FAULT_OOM;
 			goto unlock;
 		}
@@ -4187,7 +4198,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 */
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
 			&vmf->ptl);
-	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+	if (unlikely(!vmf->pte ||
+		     !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)))
 		goto out_nomap;
 
 	if (unlikely(!folio_test_uptodate(folio))) {
@@ -4747,7 +4759,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 static bool vmf_pte_changed(struct vm_fault *vmf)
 {
 	if (vmf->flags & FAULT_FLAG_ORIG_PTE_VALID)
-		return !pte_same(ptep_get(vmf->pte), vmf->orig_pte);
+		return !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte);
 
 	return !pte_none(ptep_get(vmf->pte));
 }
@@ -5125,7 +5137,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * the pfn may be screwed if the read is non atomic.
 	 */
 	spin_lock(vmf->ptl);
-	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
 	}
@@ -5197,7 +5209,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 				       vmf->address, &vmf->ptl);
 	if (unlikely(!vmf->pte))
 		goto out;
-	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte),
+					 vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
 	}
@@ -5343,7 +5356,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 						 vmf->address, &vmf->ptl);
 		if (unlikely(!vmf->pte))
 			return 0;
-		vmf->orig_pte = ptep_get_lockless(vmf->pte);
+		vmf->orig_pte = ptep_get_lockless_norecency(vmf->pte);
 		vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;
 
 		if (pte_none(vmf->orig_pte)) {
@@ -5363,7 +5376,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 	spin_lock(vmf->ptl);
 	entry = vmf->orig_pte;
-	if (unlikely(!pte_same(ptep_get(vmf->pte), entry))) {
+	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), entry))) {
 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
 		goto unlock;
 	}
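
For reference, here is a minimal sketch of the comparison semantics this
patch relies on. It is not the series' actual definition of
pte_same_norecency() (that helper, like ptep_get_lockless_norecency(), is
introduced earlier in the series); it simply assumes the helper masks off
the access and dirty bits with the standard pte_mkold()/pte_mkclean()
helpers before doing a regular pte_same() comparison:

	static inline int pte_same_norecency(pte_t pte_a, pte_t pte_b)
	{
		/*
		 * Sketch only: compare two PTEs while ignoring the
		 * hardware-maintained access (young) and dirty bits, so
		 * that a racing hardware update of those bits is not
		 * mistaken for "the PTE changed under us".
		 */
		pte_a = pte_mkold(pte_mkclean(pte_a));
		pte_b = pte_mkold(pte_mkclean(pte_b));
		return pte_same(pte_a, pte_b);
	}

Under this model, a check such as the one in vmf_pte_changed() still
catches any real change to the mapping (pfn, permissions, presence)
while tolerating access/dirty updates made since orig_pte was read with
ptep_get_lockless_norecency().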