From patchwork Sat Sep 25 20:54:23 2021
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12517939
From: Nadav Amit
To: Andrew Morton
Cc: LKML, Linux-MM, Peter Xu, Nadav Amit, Andrea Arcangeli,
 Andrew Cooper, Andy Lutomirski, Dave Hansen, Peter Zijlstra,
 Thomas Gleixner, Will Deacon, Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [PATCH 2/2] mm/mprotect: do not flush on permission promotion
Date: Sat, 25 Sep 2021 13:54:23 -0700
Message-Id: <20210925205423.168858-3-namit@vmware.com>
In-Reply-To: <20210925205423.168858-1-namit@vmware.com>
References: <20210925205423.168858-1-namit@vmware.com>

From: Nadav Amit

Currently, unprotecting a memory region with either mprotect() or uffd
causes a TLB flush. At least on x86, no flush is needed when protection
is promoted.

Add an arch-specific pte_may_need_flush() which tells whether a TLB
flush is needed based on the old PTE and the new one, and implement it
for x86. For x86, besides the simple logic that PTE protection
promotion or changes of software bits do not require a flush, also add
logic that considers the dirty-bit. Changes to the access-bit do not
trigger a TLB flush, although architecturally they should, as Linux
considers the access-bit as a hint.

Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/tlbflush.h | 40 +++++++++++++++++++++++++++++++++
 include/asm-generic/tlb.h       |  4 ++++
 mm/mprotect.c                   |  3 ++-
 3 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index b587a9ee9cb2..e74d13d174d1 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -259,6 +259,46 @@ static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
+/*
+ * pte_may_need_flush() checks whether a TLB flush is necessary according to x86
+ * architectural definition. Mere promotion of permissions does not require a
+ * TLB flush. Changes of software bits do not require a TLB flush. Demotion
+ * of the access-bit also does not trigger a TLB flush (although it is required
+ * architecturally) - see the comment in ptep_clear_flush_young().
+ *
+ * Further optimizations may be possible, such as avoiding a flush when clearing
+ * the write-bit on non-dirty entries. As those do not explicitly follow the
+ * specifications, they are not implemented (at least for now).
+ */
+static inline bool pte_may_need_flush(pte_t oldpte, pte_t newpte)
+{
+	const pteval_t ignore_mask = _PAGE_SOFTW1 | _PAGE_SOFTW2 |
+				     _PAGE_SOFTW3 | _PAGE_ACCESSED;
+	const pteval_t enable_mask = _PAGE_RW | _PAGE_DIRTY | _PAGE_GLOBAL;
+	pteval_t oldval = pte_val(oldpte);
+	pteval_t newval = pte_val(newpte);
+	pteval_t diff = oldval ^ newval;
+	pteval_t disable_mask = 0;
+
+	if (IS_ENABLED(CONFIG_X86_64) || IS_ENABLED(CONFIG_X86_PAE))
+		disable_mask = _PAGE_NX;
+
+	/* new is non-present: need only if old is present */
+	if (pte_none(newpte))
+		return !pte_none(oldpte);
+
+	/*
+	 * Any change of PFN and any flag other than those that we consider
+	 * requires a flush (e.g., PAT, protection keys). To save flushes we do
+	 * not consider the access bit as it is considered by the kernel as
+	 * best-effort.
+	 */
+	return diff & ((oldval & enable_mask) |
+		       (newval & disable_mask) |
+		       ~(enable_mask | disable_mask | ignore_mask));
+}
+#define pte_may_need_flush pte_may_need_flush
+
 #endif /* !MODULE */
 
 #endif /* _ASM_X86_TLBFLUSH_H */
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2c68a545ffa7..5ca49f44bc38 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -654,6 +654,10 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
 	} while (0)
 #endif
 
+#ifndef pte_may_need_flush
+static inline bool pte_may_need_flush(pte_t oldpte, pte_t newpte) { return true; }
+#endif
+
 #endif /* CONFIG_MMU */
 
 #endif /* _ASM_GENERIC__TLB_H */
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 075ff94aa51c..ae79df59a7a8 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -139,7 +139,8 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 				ptent = pte_mkwrite(ptent);
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
-			tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
+			if (pte_may_need_flush(oldpte, ptent))
+				tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
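
For readers who want to experiment with the decision logic outside the
kernel tree, below is a minimal, self-contained userspace sketch (not
part of the patch, illustration only). The _PAGE_* constants are
locally defined stand-ins mirroring the x86 bit layout, pte_none() is
simplified to a zero check, and disable_mask is hard-coded for the
64-bit/PAE case instead of being gated on CONFIG_X86_64/CONFIG_X86_PAE.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pteval_t;

/* Local stand-ins for the x86 PTE flag bits used by the patch. */
#define _PAGE_PRESENT	(1ULL << 0)
#define _PAGE_RW	(1ULL << 1)
#define _PAGE_ACCESSED	(1ULL << 5)
#define _PAGE_DIRTY	(1ULL << 6)
#define _PAGE_GLOBAL	(1ULL << 8)
#define _PAGE_SOFTW1	(1ULL << 9)
#define _PAGE_SOFTW2	(1ULL << 10)
#define _PAGE_SOFTW3	(1ULL << 11)
#define _PAGE_NX	(1ULL << 63)

static bool pte_may_need_flush(pteval_t oldval, pteval_t newval)
{
	/* Bits whose changes never need a flush (access-bit is a hint). */
	const pteval_t ignore_mask = _PAGE_SOFTW1 | _PAGE_SOFTW2 |
				     _PAGE_SOFTW3 | _PAGE_ACCESSED;
	/* Bits that need a flush only when cleared (1 -> 0). */
	const pteval_t enable_mask = _PAGE_RW | _PAGE_DIRTY | _PAGE_GLOBAL;
	/* Bits that need a flush only when set (0 -> 1); assumes 64-bit. */
	const pteval_t disable_mask = _PAGE_NX;
	pteval_t diff = oldval ^ newval;

	/* New PTE non-present: flush only if the old one was present. */
	if (newval == 0)
		return oldval != 0;

	/*
	 * Flush on any change of the PFN or of a flag outside the masks,
	 * on clearing an enable_mask bit, or on setting a disable_mask bit.
	 */
	return diff & ((oldval & enable_mask) |
		       (newval & disable_mask) |
		       ~(enable_mask | disable_mask | ignore_mask));
}

int main(void)
{
	pteval_t base = _PAGE_PRESENT | _PAGE_ACCESSED;

	/* Promotion: granting write access needs no flush -> prints 0. */
	printf("RW 0->1: %d\n", pte_may_need_flush(base, base | _PAGE_RW));
	/* Demotion: revoking write access needs a flush -> prints 1. */
	printf("RW 1->0: %d\n", pte_may_need_flush(base | _PAGE_RW, base));
	/* Setting NX revokes execute permission -> prints 1. */
	printf("NX 0->1: %d\n", pte_may_need_flush(base, base | _PAGE_NX));
	/* Clearing the access-bit is only a hint -> prints 0. */
	printf("A  1->0: %d\n", pte_may_need_flush(base, base & ~_PAGE_ACCESSED));
	return 0;
}

The first and last cases are exactly the ones the patch stops flushing:
write-permission promotion and access-bit demotion leave no stale TLB
entry that could grant more rights than the new PTE allows.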