From patchwork Sat Sep 25 20:54:22 2021
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12517937
From: Nadav Amit
To: Andrew Morton
Cc: LKML, Linux-MM, Peter Xu, Nadav Amit, Andrea Arcangeli, Andrew Cooper,
 Andy Lutomirski, Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon,
 Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [PATCH 1/2] mm/mprotect: use mmu_gather
Date: Sat, 25 Sep 2021 13:54:22 -0700
Message-Id: <20210925205423.168858-2-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210925205423.168858-1-namit@vmware.com>
References: <20210925205423.168858-1-namit@vmware.com>

From: Nadav Amit

change_pXX_range() currently does not use mmu_gather, but instead
implements its own deferred TLB flush scheme. This both complicates the
code, as developers need to be aware of different invalidation schemes,
and prevents opportunities to avoid TLB flushes or to perform them at a
finer granularity.

Use mmu_gather in change_pXX_range(). As the pages are not released,
only record the flushed range using tlb_flush_pXX_range().
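For readers who have not worked with mmu_gather, the structure the patch
converts to is: while walking the page tables, only record the range of
PTEs whose protection actually changed, and issue at most one range flush
when the gather is finished. The short user-space sketch below only models
that record-then-flush idea; it is not kernel code, and the names in it
(toy_gather, toy_flush_pte_range, toy_finish) are invented for illustration
rather than taken from the mmu_gather API.

/*
 * Toy user-space model of the record-then-flush pattern that mmu_gather
 * provides.  All names here (toy_gather, toy_flush_pte_range, toy_finish)
 * are invented for illustration; this is not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_gather {
	unsigned long start;	/* lowest address recorded so far */
	unsigned long end;	/* highest address recorded so far */
	bool need_flush;	/* did any PTE actually change? */
};

static void toy_gather_init(struct toy_gather *tlb)
{
	tlb->start = ~0UL;
	tlb->end = 0;
	tlb->need_flush = false;
}

/* Analogous in spirit to tlb_flush_pte_range(): only records the range. */
static void toy_flush_pte_range(struct toy_gather *tlb,
				unsigned long addr, unsigned long size)
{
	if (addr < tlb->start)
		tlb->start = addr;
	if (addr + size > tlb->end)
		tlb->end = addr + size;
	tlb->need_flush = true;
}

/* Analogous in spirit to tlb_finish_mmu(): one flush, and only if needed. */
static void toy_finish(struct toy_gather *tlb)
{
	if (tlb->need_flush)
		printf("flush TLB range [%#lx, %#lx)\n", tlb->start, tlb->end);
	else
		printf("nothing changed, no flush\n");
}

int main(void)
{
	struct toy_gather tlb;

	toy_gather_init(&tlb);
	/* Pretend only two 4KB PTEs were actually write-protected. */
	toy_flush_pte_range(&tlb, 0x1000, 0x1000);
	toy_flush_pte_range(&tlb, 0x5000, 0x1000);
	toy_finish(&tlb);
	return 0;
}

The point of the conversion is that what to flush, and whether to flush at
all, is expressed through one common mechanism instead of the hand-rolled
"if (pages) flush_tlb_range(...)" special case, which is what the commit
message means by avoiding flushes or performing them at a finer granularity.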
Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
Signed-off-by: Nadav Amit
---
 mm/mprotect.c | 50 ++++++++++++++++++++++++++++----------------------
 1 file changed, 28 insertions(+), 22 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 883e2cc85cad..075ff94aa51c 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -32,12 +32,13 @@
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
+#include <asm/tlb.h>
 
 #include "internal.h"
 
-static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, unsigned long end, pgprot_t newprot,
-		unsigned long cp_flags)
+static unsigned long change_pte_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
@@ -138,6 +139,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				ptent = pte_mkwrite(ptent);
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
+			tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
@@ -219,9 +221,9 @@ static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
 	return 0;
 }
 
-static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
-		pud_t *pud, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pud_t *pud, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -261,6 +263,10 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		if (next - addr != HPAGE_PMD_SIZE) {
 			__split_huge_pmd(vma, pmd, addr, false, NULL);
 		} else {
+			/*
+			 * change_huge_pmd() does not defer TLB flushes,
+			 * so no need to propagate the tlb argument.
+			 */
 			int nr_ptes = change_huge_pmd(vma, pmd, addr,
 						      newprot, cp_flags);
 
@@ -276,8 +282,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			}
 			/* fall through, the trans huge pmd just split */
 		}
-		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
-					      cp_flags);
+		this_pages = change_pte_range(tlb, vma, pmd, addr, next,
+					      newprot, cp_flags);
 		pages += this_pages;
 next:
 		cond_resched();
@@ -291,9 +297,9 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 	return pages;
 }
 
-static inline unsigned long change_pud_range(struct vm_area_struct *vma,
-		p4d_t *p4d, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pud_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, p4d_t *p4d, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -304,16 +310,16 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 		next = pud_addr_end(addr, end);
 		if (pud_none_or_clear_bad(pud))
 			continue;
-		pages += change_pmd_range(vma, pud, addr, next, newprot,
+		pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
 					  cp_flags);
 	} while (pud++, addr = next, addr != end);
 
 	return pages;
 }
 
-static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
-		pgd_t *pgd, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_p4d_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pgd_t *pgd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -324,7 +330,7 @@ static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
 		next = p4d_addr_end(addr, end);
 		if (p4d_none_or_clear_bad(p4d))
 			continue;
-		pages += change_pud_range(vma, p4d, addr, next, newprot,
+		pages += change_pud_range(tlb, vma, p4d, addr, next, newprot,
 					  cp_flags);
 	} while (p4d++, addr = next, addr != end);
 
@@ -338,25 +344,25 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	unsigned long next;
-	unsigned long start = addr;
 	unsigned long pages = 0;
+	struct mmu_gather tlb;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset(mm, addr);
 	flush_cache_range(vma, addr, end);
 	inc_tlb_flush_pending(mm);
+	tlb_gather_mmu(&tlb, mm);
+	tlb_start_vma(&tlb, vma);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
-		pages += change_p4d_range(vma, pgd, addr, next, newprot,
+		pages += change_p4d_range(&tlb, vma, pgd, addr, next, newprot,
 					  cp_flags);
 	} while (pgd++, addr = next, addr != end);
 
-	/* Only flush the TLB if we actually modified any entries: */
-	if (pages)
-		flush_tlb_range(vma, start, end);
-	dec_tlb_flush_pending(mm);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb);
 
 	return pages;
 }