From patchwork Mon Jul 18 12:02:12 2022
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12921690
From: Nadav Amit
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Mike Rapoport, Axel Rasmussen, Nadav Amit, Andrea Arcangeli, Andrew Cooper, Andy Lutomirski, Dave Hansen, David Hildenbrand, Peter Xu, Peter Zijlstra, Thomas Gleixner, Will Deacon, Yu Zhao, Nick Piggin
Subject: [RFC PATCH 14/14] mm: conditional check of pfn in pte_flush_type
Date: Mon, 18 Jul 2022 05:02:12 -0700
Message-Id: <20220718120212.3180-15-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220718120212.3180-1-namit@vmware.com>
References: <20220718120212.3180-1-namit@vmware.com>
From: Nadav Amit

Checking whether the PFNs in two PTEs are the same takes a surprisingly
large number of instructions. Yet in most cases the caller of
pte_flush_type() already knows whether the PFN was changed. For instance,
mprotect() does not change the PFN; it only modifies the protection flags.

Add an argument to pte_flush_type() to indicate whether the PFN should be
checked. Keep checking it on mm-debug builds to catch callers that wrongly
assume the PFN is unchanged.

Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Peter Xu
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/tlbflush.h | 14 ++++++++++----
 include/asm-generic/tlb.h       |  6 ++++--
 mm/huge_memory.c                |  2 +-
 mm/mprotect.c                   |  2 +-
 mm/rmap.c                       |  2 +-
 5 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 58c95e36b098..50349861fdc9 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -340,14 +340,17 @@ static inline enum pte_flush_type pte_flags_flush_type(unsigned long oldflags,
  * whether a strict or relaxed TLB flush is need. It should only be used on
  * userspace PTEs.
 */
-static inline enum pte_flush_type pte_flush_type(pte_t oldpte, pte_t newpte)
+static inline enum pte_flush_type pte_flush_type(pte_t oldpte, pte_t newpte,
+						 bool check_pfn)
 {
 	/* !PRESENT -> * ; no need for flush */
 	if (!(pte_flags(oldpte) & _PAGE_PRESENT))
 		return PTE_FLUSH_NONE;
 
 	/* PFN changed ; needs flush */
-	if (pte_pfn(oldpte) != pte_pfn(newpte))
+	if (!check_pfn)
+		VM_BUG_ON(pte_pfn(oldpte) != pte_pfn(newpte));
+	else if (pte_pfn(oldpte) != pte_pfn(newpte))
 		return PTE_FLUSH_STRICT;
 
 	/*
@@ -363,14 +366,17 @@ static inline enum pte_flush_type pte_flush_type(pte_t oldpte, pte_t newpte)
  * huge_pmd_flush_type() checks whether permissions were demoted and require a
  * flush. It should only be used for userspace huge PMDs.
  */
-static inline enum pte_flush_type huge_pmd_flush_type(pmd_t oldpmd, pmd_t newpmd)
+static inline enum pte_flush_type huge_pmd_flush_type(pmd_t oldpmd, pmd_t newpmd,
+						      bool check_pfn)
 {
 	/* !PRESENT -> * ; no need for flush */
 	if (!(pmd_flags(oldpmd) & _PAGE_PRESENT))
 		return PTE_FLUSH_NONE;
 
 	/* PFN changed ; needs flush */
-	if (pmd_pfn(oldpmd) != pmd_pfn(newpmd))
+	if (!check_pfn)
+		VM_BUG_ON(pmd_pfn(oldpmd) != pmd_pfn(newpmd));
+	else if (pmd_pfn(oldpmd) != pmd_pfn(newpmd))
 		return PTE_FLUSH_STRICT;
 
 	/*
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 07b3eb8caf63..aee9da6cc5d5 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -677,14 +677,16 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
 #endif
 
 #ifndef pte_flush_type
-static inline struct pte_flush_type pte_flush_type(pte_t oldpte, pte_t newpte)
+static inline struct pte_flush_type pte_flush_type(pte_t oldpte, pte_t newpte,
+						   bool check_pfn)
 {
 	return PTE_FLUSH_STRICT;
 }
 #endif
 
 #ifndef huge_pmd_flush_type
-static inline bool huge_pmd_flush_type(pmd_t oldpmd, pmd_t newpmd)
+static inline bool huge_pmd_flush_type(pmd_t oldpmd, pmd_t newpmd,
+				       bool check_pfn)
 {
 	return PTE_FLUSH_STRICT;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b32b7da0f6f7..92a7b3ca317f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1818,7 +1818,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 	flush_type = PTE_FLUSH_STRICT;
 	if (!tlb->strict)
-		flush_type = huge_pmd_flush_type(oldpmd, entry);
+		flush_type = huge_pmd_flush_type(oldpmd, entry, false);
 	if (flush_type != PTE_FLUSH_NONE)
 		tlb_flush_pmd_range(tlb, addr, HPAGE_PMD_SIZE,
 				    flush_type == PTE_FLUSH_STRICT);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index cf775f6c8c08..78081d7f4edf 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -204,7 +204,7 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 
 		flush_type = PTE_FLUSH_STRICT;
 		if (!tlb->strict)
-			flush_type = pte_flush_type(oldpte, ptent);
+			flush_type = pte_flush_type(oldpte, ptent, false);
 		if (flush_type != PTE_FLUSH_NONE)
 			tlb_flush_pte_range(tlb, addr, PAGE_SIZE,
 					    flush_type == PTE_FLUSH_STRICT);
diff --git a/mm/rmap.c b/mm/rmap.c
index 62f4b2a4f067..63261619b607 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -974,7 +974,7 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 		entry = pte_wrprotect(oldpte);
 		entry = pte_mkclean(entry);
 
-		if (pte_flush_type(oldpte, entry) != PTE_FLUSH_NONE ||
+		if (pte_flush_type(oldpte, entry, false) != PTE_FLUSH_NONE ||
 		    mm_tlb_flush_pending(vma->vm_mm))
 			flush_tlb_page(vma, address);