From patchwork Sat Sep 25 20:54:22 2021
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12517937
From: Nadav Amit
To: Andrew Morton
Cc: LKML, Linux-MM, Peter Xu, Nadav Amit, Andrea Arcangeli, Andrew Cooper,
 Andy Lutomirski, Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon,
 Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [PATCH 1/2] mm/mprotect: use mmu_gather
Date: Sat, 25 Sep 2021 13:54:22 -0700
Message-Id: <20210925205423.168858-2-namit@vmware.com>
In-Reply-To: <20210925205423.168858-1-namit@vmware.com>
References: <20210925205423.168858-1-namit@vmware.com>

From: Nadav Amit

change_pXX_range() currently does not use mmu_gather, but instead
implements its own deferred TLB flushes scheme. This both complicates the
code, as developers need to be aware of different invalidation schemes,
and prevents opportunities to avoid TLB flushes or perform them at finer
granularity.

Use mmu_gather in change_pXX_range(). As the pages are not released, only
record the flushed range using tlb_flush_pXX_range().
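As a rough, userspace-only sketch of the pattern the patch adopts (not the
kernel code itself): the gather records the union of the ranges whose PTEs
were actually modified, and a single ranged flush is issued only when the
gather is finished. The toy_* names below are made up for this sketch; the
real interface is struct mmu_gather with tlb_gather_mmu(),
tlb_flush_pte_range() and tlb_finish_mmu(), as used in the diff below.

#include <stdio.h>

/* Toy stand-in for struct mmu_gather: tracks one covering range. */
struct toy_gather {
	unsigned long start;	/* lowest recorded address */
	unsigned long end;	/* highest recorded end address */
};

static void toy_gather_init(struct toy_gather *tlb)
{
	tlb->start = ~0UL;
	tlb->end = 0;
}

/* Analogue of tlb_flush_pte_range(): only record, do not flush yet. */
static void toy_record_range(struct toy_gather *tlb,
			     unsigned long addr, unsigned long size)
{
	if (addr < tlb->start)
		tlb->start = addr;
	if (addr + size > tlb->end)
		tlb->end = addr + size;
}

/* Analogue of tlb_finish_mmu(): one flush, and only if anything changed. */
static void toy_finish(struct toy_gather *tlb)
{
	if (tlb->end > tlb->start)
		printf("flush [%#lx, %#lx)\n", tlb->start, tlb->end);
	else
		printf("no flush needed\n");
}

int main(void)
{
	struct toy_gather tlb;

	toy_gather_init(&tlb);
	/* Only two PTEs of a larger region were actually changed. */
	toy_record_range(&tlb, 0x1000, 0x1000);
	toy_record_range(&tlb, 0x5000, 0x1000);
	toy_finish(&tlb);	/* prints: flush [0x1000, 0x6000) */
	return 0;
}

This is also why recording alone suffices here: mprotect() does not free
pages, so none of mmu_gather's page-batching machinery is needed, only its
range tracking.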
Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
Signed-off-by: Nadav Amit
---
 mm/mprotect.c | 50 ++++++++++++++++++++++++++++----------------------
 1 file changed, 28 insertions(+), 22 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 883e2cc85cad..075ff94aa51c 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -32,12 +32,13 @@
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
+#include <asm/tlb.h>
 
 #include "internal.h"
 
-static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, unsigned long end, pgprot_t newprot,
-		unsigned long cp_flags)
+static unsigned long change_pte_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
@@ -138,6 +139,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				ptent = pte_mkwrite(ptent);
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
+			tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
@@ -219,9 +221,9 @@ static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
 	return 0;
 }
 
-static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
-		pud_t *pud, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pud_t *pud, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -261,6 +263,10 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			if (next - addr != HPAGE_PMD_SIZE) {
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
 			} else {
+				/*
+				 * change_huge_pmd() does not defer TLB flushes,
+				 * so no need to propagate the tlb argument.
+				 */
 				int nr_ptes = change_huge_pmd(vma, pmd, addr,
 							      newprot, cp_flags);
 
@@ -276,8 +282,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			}
 			/* fall through, the trans huge pmd just split */
 		}
-		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
-					      cp_flags);
+		this_pages = change_pte_range(tlb, vma, pmd, addr, next,
+					      newprot, cp_flags);
 		pages += this_pages;
 next:
 		cond_resched();
@@ -291,9 +297,9 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 	return pages;
 }
 
-static inline unsigned long change_pud_range(struct vm_area_struct *vma,
-		p4d_t *p4d, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pud_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, p4d_t *p4d, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -304,16 +310,16 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 		next = pud_addr_end(addr, end);
 		if (pud_none_or_clear_bad(pud))
 			continue;
-		pages += change_pmd_range(vma, pud, addr, next, newprot,
+		pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
 					  cp_flags);
 	} while (pud++, addr = next, addr != end);
 
 	return pages;
 }
 
-static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
-		pgd_t *pgd, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_p4d_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pgd_t *pgd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -324,7 +330,7 @@ static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
 		next = p4d_addr_end(addr, end);
 		if (p4d_none_or_clear_bad(p4d))
 			continue;
-		pages += change_pud_range(vma, p4d, addr, next, newprot,
+		pages += change_pud_range(tlb, vma, p4d, addr, next, newprot,
 					  cp_flags);
 	} while (p4d++, addr = next, addr != end);
 
@@ -338,25 +344,25 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	unsigned long next;
-	unsigned long start = addr;
 	unsigned long pages = 0;
+	struct mmu_gather tlb;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset(mm, addr);
 	flush_cache_range(vma, addr, end);
 	inc_tlb_flush_pending(mm);
+	tlb_gather_mmu(&tlb, mm);
+	tlb_start_vma(&tlb, vma);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
-		pages += change_p4d_range(vma, pgd, addr, next, newprot,
+		pages += change_p4d_range(&tlb, vma, pgd, addr, next, newprot,
 					  cp_flags);
 	} while (pgd++, addr = next, addr != end);
 
-	/* Only flush the TLB if we actually modified any entries: */
-	if (pages)
-		flush_tlb_range(vma, start, end);
-	dec_tlb_flush_pending(mm);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb);
 
 	return pages;
 }
From patchwork Sat Sep 25 20:54:23 2021
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12517939
From: Nadav Amit
To: Andrew Morton
Cc: LKML, Linux-MM, Peter Xu, Nadav Amit, Andrea Arcangeli, Andrew Cooper,
 Andy Lutomirski, Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon,
 Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [PATCH 2/2] mm/mprotect: do not flush on permission promotion
Date: Sat, 25 Sep 2021 13:54:23 -0700
Message-Id: <20210925205423.168858-3-namit@vmware.com>
In-Reply-To: <20210925205423.168858-1-namit@vmware.com>
References: <20210925205423.168858-1-namit@vmware.com>

From: Nadav Amit

Currently, using mprotect() or uffd to unprotect a memory region causes a
TLB flush. At least on x86, as protection is promoted, no TLB flush is
needed.

Add an arch-specific pte_may_need_flush() which tells whether a TLB flush
is needed based on the old PTE and the new one. Implement an x86
pte_may_need_flush().

For x86, besides the simple logic that PTE protection promotion or changes
of software bits do not require a flush, also add logic that considers the
dirty-bit. Changes to the access-bit do not trigger a TLB flush, although
architecturally they should, as Linux considers the access-bit as a hint.

Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/tlbflush.h | 40 +++++++++++++++++++++++++++++++++
 include/asm-generic/tlb.h       |  4 ++++
 mm/mprotect.c                   |  3 ++-
 3 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index b587a9ee9cb2..e74d13d174d1 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -259,6 +259,46 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
+/*
+ * pte_may_need_flush() checks whether a TLB flush is necessary according to x86
+ * architectural definition. Mere promotion of permissions does not require a
+ * TLB flush. Changes of software bits does not require a TLB flush. Demotion
+ * of the access-bit also does not trigger a TLB flush (although it is required
+ * architecturally) - see the comment in ptep_clear_flush_young().
+ *
+ * Further optimizations may be possible, such as avoiding a flush when clearing
+ * the write-bit on non-dirty entries. As those do not explicitly follow the
+ * specifications, they are not implemented (at least for now).
+ */
+static inline bool pte_may_need_flush(pte_t oldpte, pte_t newpte)
+{
+	const pteval_t ignore_mask = _PAGE_SOFTW1 | _PAGE_SOFTW2 |
+				     _PAGE_SOFTW3 | _PAGE_ACCESSED;
+	const pteval_t enable_mask = _PAGE_RW | _PAGE_DIRTY | _PAGE_GLOBAL;
+	pteval_t oldval = pte_val(oldpte);
+	pteval_t newval = pte_val(newpte);
+	pteval_t diff = oldval ^ newval;
+	pteval_t disable_mask = 0;
+
+	if (IS_ENABLED(CONFIG_X86_64) || IS_ENABLED(CONFIG_X86_PAE))
+		disable_mask = _PAGE_NX;
+
+	/* new is non-present: need only if old is present */
+	if (pte_none(newpte))
+		return !pte_none(oldpte);
+
+	/*
+	 * Any change of PFN and any flag other than those that we consider
+	 * requires a flush (e.g., PAT, protection keys). To save flushes we do
+	 * not consider the access bit as it is considered by the kernel as
+	 * best-effort.
+	 */
+	return diff & ((oldval & enable_mask) |
+		       (newval & disable_mask) |
+		       ~(enable_mask | disable_mask | ignore_mask));
+}
+#define pte_may_need_flush pte_may_need_flush
+
 #endif /* !MODULE */
 
 #endif /* _ASM_X86_TLBFLUSH_H */
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2c68a545ffa7..5ca49f44bc38 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -654,6 +654,10 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
 	} while (0)
 #endif
 
+#ifndef pte_may_need_flush
+static inline bool pte_may_need_flush(pte_t oldpte, pte_t newpte) { return true; }
+#endif
+
 #endif /* CONFIG_MMU */
 
 #endif /* _ASM_GENERIC__TLB_H */
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 075ff94aa51c..ae79df59a7a8 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -139,7 +139,8 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 				ptent = pte_mkwrite(ptent);
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
-			tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
+			if (pte_may_need_flush(oldpte, ptent))
+				tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
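To make the effect of the check concrete, here is a small, self-contained
userspace restatement of the rule implemented above. The PTE bit values are
mirrored from the x86 definitions in pgtable_types.h, pte_none() is
simplified to an all-bits-clear test, and CONFIG_X86_64 is assumed (so the
NX bit exists); this is an illustration of the logic, not the kernel code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pteval_t;

/* x86 PTE bits, mirrored for the example. */
#define _PAGE_PRESENT  (1ULL << 0)
#define _PAGE_RW       (1ULL << 1)
#define _PAGE_ACCESSED (1ULL << 5)
#define _PAGE_DIRTY    (1ULL << 6)
#define _PAGE_GLOBAL   (1ULL << 8)
#define _PAGE_SOFTW1   (1ULL << 9)
#define _PAGE_SOFTW2   (1ULL << 10)
#define _PAGE_SOFTW3   (1ULL << 11)
#define _PAGE_NX       (1ULL << 63)

static bool pte_may_need_flush(pteval_t oldval, pteval_t newval)
{
	const pteval_t ignore_mask = _PAGE_SOFTW1 | _PAGE_SOFTW2 |
				     _PAGE_SOFTW3 | _PAGE_ACCESSED;
	const pteval_t enable_mask = _PAGE_RW | _PAGE_DIRTY | _PAGE_GLOBAL;
	const pteval_t disable_mask = _PAGE_NX;
	pteval_t diff = oldval ^ newval;

	if (newval == 0)		/* new is non-present */
		return oldval != 0;	/* flush only if old was present */

	return diff & ((oldval & enable_mask) |
		       (newval & disable_mask) |
		       ~(enable_mask | disable_mask | ignore_mask));
}

int main(void)
{
	pteval_t ro = _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_NX;
	pteval_t rw = ro | _PAGE_RW;

	/* Promotion (adding write) needs no flush; demotion does. */
	printf("ro -> rw: %s\n", pte_may_need_flush(ro, rw) ? "flush" : "no flush");
	printf("rw -> ro: %s\n", pte_may_need_flush(rw, ro) ? "flush" : "no flush");
	/* Clearing the access bit is ignored (treated as a hint). */
	printf("clear A : %s\n",
	       pte_may_need_flush(ro, ro & ~_PAGE_ACCESSED) ? "flush" : "no flush");
	return 0;
}

The asymmetry between enable_mask and disable_mask reflects the usual
rationale stated in the commit message: a stale TLB entry that is more
restrictive than the new PTE can at worst cause a spurious fault that the
kernel resolves, so permission promotion may skip the flush, while any
demotion (clearing RW, setting NX) must flush to take effect.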