From patchwork Fri Nov 20 14:35:52 2020
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 11920407
From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Yu Zhao,
    Minchan Kim, Peter Zijlstra, Linus Torvalds, Anshuman Khandual,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    stable@vger.kernel.org
Subject: [PATCH 1/6] arm64: pgtable: Fix pte_accessible()
Date: Fri, 20 Nov 2020 14:35:52 +0000
Message-Id: <20201120143557.6715-2-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>
References: <20201120143557.6715-1-will@kernel.org>

pte_accessible() is used by ptep_clear_flush() to figure out whether TLB
invalidation is necessary when unmapping pages for reclaim. Although our
implementation is correct according to the architecture, returning true
only for valid, young ptes in the absence of racing page-table
modifications, this is in fact flawed due to lazy invalidation of old
ptes in ptep_clear_flush_young() where we elide the expensive DSB
instruction for completing the TLB invalidation.

Rather than penalise the aging path, adjust pte_accessible() to return
true for any valid pte, even if the access flag is cleared.

Cc: <stable@vger.kernel.org>
Fixes: 76c714be0e5e ("arm64: pgtable: implement pte_accessible()")
Reported-by: Yu Zhao
Signed-off-by: Will Deacon
Reviewed-by: Minchan Kim
Acked-by: Yu Zhao
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/pgtable.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 4ff12a7adcfd..1bdf51f01e73 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -115,8 +115,6 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
 #define pte_valid_not_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
-#define pte_valid_young(pte) \
-	((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))
 #define pte_valid_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
 
@@ -126,7 +124,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
  * remapped as PROT_NONE but are yet to be flushed from the TLB.
  */
 #define pte_accessible(mm, pte)	\
-	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid_young(pte))
+	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))
 
 /*
  * p??_access_permitted() is true for valid user mappings (subject to the
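[For context, the consumer of pte_accessible() is the generic
ptep_clear_flush() helper, which looks roughly like this -- a condensed
sketch of mm/pgtable-generic.c from this era, not the verbatim source:

    /*
     * Sketch: pte_accessible() gates the flush. With the old
     * pte_valid_young(), a valid-but-old pte returned false here and no
     * invalidation was issued, even though a stale TLB entry could still
     * exist because ptep_clear_flush_young() elides the completing DSB.
     */
    pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
                           pte_t *ptep)
    {
            struct mm_struct *mm = vma->vm_mm;
            pte_t pte = ptep_get_and_clear(mm, address, ptep);

            if (pte_accessible(mm, pte))    /* now true for any valid pte */
                    flush_tlb_page(vma, address);

            return pte;
    }
]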
From patchwork Fri Nov 20 14:35:53 2020
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 11920409
From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Yu Zhao,
    Minchan Kim, Peter Zijlstra, Linus Torvalds, Anshuman Khandual,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    stable@vger.kernel.org
Subject: [PATCH 2/6] arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()
Date: Fri, 20 Nov 2020 14:35:53 +0000
Message-Id: <20201120143557.6715-3-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>
References: <20201120143557.6715-1-will@kernel.org>

With hardware dirty bit management, calling pte_wrprotect() on a
writable, dirty PTE will lose the dirty state and return a read-only,
clean entry.

Move the logic from ptep_set_wrprotect() into pte_wrprotect() to ensure
that the dirty bit is preserved for writable entries, as this is
required for soft-dirty bit management if we enable it in the future.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/pgtable.h | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 1bdf51f01e73..a155551863c9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -162,13 +162,6 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
 	return pmd;
 }
 
-static inline pte_t pte_wrprotect(pte_t pte)
-{
-	pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
-	pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
-	return pte;
-}
-
 static inline pte_t pte_mkwrite(pte_t pte)
 {
 	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
@@ -194,6 +187,20 @@ static inline pte_t pte_mkdirty(pte_t pte)
 	return pte;
 }
 
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	/*
+	 * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY
+	 * clear), set the PTE_DIRTY bit.
+	 */
+	if (pte_hw_dirty(pte))
+		pte = pte_mkdirty(pte);
+
+	pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
+	pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
+	return pte;
+}
+
 static inline pte_t pte_mkold(pte_t pte)
 {
 	return clear_pte_bit(pte, __pgprot(PTE_AF));
@@ -843,12 +850,6 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address,
 	pte = READ_ONCE(*ptep);
 	do {
 		old_pte = pte;
-		/*
-		 * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY
-		 * clear), set the PTE_DIRTY bit.
-		 */
-		if (pte_hw_dirty(pte))
-			pte = pte_mkdirty(pte);
 		pte = pte_wrprotect(pte);
 		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
 					       pte_val(old_pte), pte_val(pte));
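[The failure mode is easy to reproduce in a stand-alone model. The
sketch below uses illustrative bit positions, loosely modelled on the
arm64 PTE_RDONLY/PTE_WRITE/PTE_DIRTY layout but not authoritative, to
show how the old pte_wrprotect() dropped the hardware-dirty state:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative bit positions only -- not the real arm64 encodings. */
    #define PTE_RDONLY (1ULL << 7)
    #define PTE_WRITE  (1ULL << 51)  /* doubles as the hardware DBM bit */
    #define PTE_DIRTY  (1ULL << 55)  /* software dirty bit */

    /* Hardware-dirty: writable (DBM set) and not marked read-only. */
    static int pte_hw_dirty(uint64_t pte)
    {
            return (pte & PTE_WRITE) && !(pte & PTE_RDONLY);
    }

    /* Old behaviour: write-protecting silently loses the dirty state. */
    static uint64_t pte_wrprotect_old(uint64_t pte)
    {
            return (pte & ~PTE_WRITE) | PTE_RDONLY;
    }

    /* New behaviour: hardware-dirty state is folded into PTE_DIRTY first. */
    static uint64_t pte_wrprotect_new(uint64_t pte)
    {
            if (pte_hw_dirty(pte))
                    pte |= PTE_DIRTY;
            return (pte & ~PTE_WRITE) | PTE_RDONLY;
    }

    int main(void)
    {
            uint64_t pte = PTE_WRITE;  /* writable and hardware-dirty */

            printf("old: dirty? %d\n", !!(pte_wrprotect_old(pte) & PTE_DIRTY)); /* 0 */
            printf("new: dirty? %d\n", !!(pte_wrprotect_new(pte) & PTE_DIRTY)); /* 1 */
            return 0;
    }
]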
From patchwork Fri Nov 20 14:35:54 2020
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 11920411
From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Yu Zhao,
    Minchan Kim, Peter Zijlstra, Linus Torvalds, Anshuman Khandual,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 3/6] tlb: mmu_gather: Remove unused start/end arguments from tlb_finish_mmu()
Date: Fri, 20 Nov 2020 14:35:54 +0000
Message-Id: <20201120143557.6715-4-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>
References: <20201120143557.6715-1-will@kernel.org>

tlb_finish_mmu() takes two confusing and unused 'start'/'end' address
arguments. Remove them.

Signed-off-by: Will Deacon
---
 arch/ia64/include/asm/tlb.h |  2 +-
 arch/x86/kernel/ldt.c       |  2 +-
 fs/exec.c                   |  2 +-
 fs/proc/task_mmu.c          |  2 +-
 include/linux/mm_types.h    |  3 +--
 mm/hugetlb.c                |  2 +-
 mm/madvise.c                |  6 +++---
 mm/memory.c                 |  4 ++--
 mm/mmap.c                   |  4 ++--
 mm/mmu_gather.c             |  5 +----
 mm/oom_kill.c               |  4 ++--
 11 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/arch/ia64/include/asm/tlb.h b/arch/ia64/include/asm/tlb.h
index 8d9da6f08a62..7059eb2e867a 100644
--- a/arch/ia64/include/asm/tlb.h
+++ b/arch/ia64/include/asm/tlb.h
@@ -36,7 +36,7 @@
  *		tlb_end_vma(tlb, vma);
  *	}
  * }
- * tlb_finish_mmu(tlb, start, end);	// finish unmap for address space MM
+ * tlb_finish_mmu(tlb);			// finish unmap for address space MM
  */
 #include <linux/mm.h>
 #include <linux/pagemap.h>
diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index b8aee71840ae..0d4e1253c9c9 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -400,7 +400,7 @@ static void free_ldt_pgtables(struct mm_struct *mm)
 
 	tlb_gather_mmu(&tlb, mm, start, end);
 	free_pgd_range(&tlb, start, end, start, end);
-	tlb_finish_mmu(&tlb, start, end);
+	tlb_finish_mmu(&tlb);
 #endif
 }
diff --git a/fs/exec.c b/fs/exec.c
index 547a2390baf5..aa846c6ec2f0 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -724,7 +724,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 		free_pgd_range(&tlb, old_start, old_end, new_end,
 			vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
 	}
-	tlb_finish_mmu(&tlb, old_start, old_end);
+	tlb_finish_mmu(&tlb);
 
 	/*
 	 * Shrink the vma to just the new range.  Always succeeds.
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 217aa2705d5d..cd03ab9087b0 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1260,7 +1260,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 							&cp);
 		if (type == CLEAR_REFS_SOFT_DIRTY)
 			mmu_notifier_invalidate_range_end(&range);
-		tlb_finish_mmu(&tlb, 0, -1);
+		tlb_finish_mmu(&tlb);
 		mmap_read_unlock(mm);
 out_mm:
 		mmput(mm);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5a9238f6caad..7b90058a62be 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -585,8 +585,7 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 struct mmu_gather;
 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 				unsigned long start, unsigned long end);
-extern void tlb_finish_mmu(struct mmu_gather *tlb,
-				unsigned long start, unsigned long end);
+extern void tlb_finish_mmu(struct mmu_gather *tlb);
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 37f15c3c24dc..4c0235122464 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3985,7 +3985,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 
 	tlb_gather_mmu(&tlb, mm, tlb_start, tlb_end);
 	__unmap_hugepage_range(&tlb, vma, start, end, ref_page);
-	tlb_finish_mmu(&tlb, tlb_start, tlb_end);
+	tlb_finish_mmu(&tlb);
 }
 
 /*
diff --git a/mm/madvise.c b/mm/madvise.c
index 416a56b8e757..29cd3d4172f5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -508,7 +508,7 @@ static long madvise_cold(struct vm_area_struct *vma,
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm, start_addr, end_addr);
 	madvise_cold_page_range(&tlb, vma, start_addr, end_addr);
-	tlb_finish_mmu(&tlb, start_addr, end_addr);
+	tlb_finish_mmu(&tlb);
 
 	return 0;
 }
@@ -560,7 +560,7 @@ static long madvise_pageout(struct vm_area_struct *vma,
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm, start_addr, end_addr);
 	madvise_pageout_page_range(&tlb, vma, start_addr, end_addr);
-	tlb_finish_mmu(&tlb, start_addr, end_addr);
+	tlb_finish_mmu(&tlb);
 
 	return 0;
 }
@@ -732,7 +732,7 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 			&madvise_free_walk_ops, &tlb);
 	tlb_end_vma(&tlb, vma);
 	mmu_notifier_invalidate_range_end(&range);
-	tlb_finish_mmu(&tlb, range.start, range.end);
+	tlb_finish_mmu(&tlb);
 
 	return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index c48f8df6e502..04a88c15e076 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1529,7 +1529,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 	for ( ; vma && vma->vm_start < range.end; vma = vma->vm_next)
 		unmap_single_vma(&tlb, vma, start, range.end, NULL);
 	mmu_notifier_invalidate_range_end(&range);
-	tlb_finish_mmu(&tlb, start, range.end);
+	tlb_finish_mmu(&tlb);
 }
 
 /**
@@ -1555,7 +1555,7 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	mmu_notifier_invalidate_range_start(&range);
 	unmap_single_vma(&tlb, vma, address, range.end, details);
 	mmu_notifier_invalidate_range_end(&range);
-	tlb_finish_mmu(&tlb, address, range.end);
+	tlb_finish_mmu(&tlb);
 }
 
 /**
diff --git a/mm/mmap.c b/mm/mmap.c
index d91ecb00d38c..6d94b2ee9c45 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2678,7 +2678,7 @@ static void unmap_region(struct mm_struct *mm,
 	unmap_vmas(&tlb, vma, start, end);
 	free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
 				 next ? next->vm_start : USER_PGTABLES_CEILING);
-	tlb_finish_mmu(&tlb, start, end);
+	tlb_finish_mmu(&tlb);
 }
 
 /*
@@ -3221,7 +3221,7 @@ void exit_mmap(struct mm_struct *mm)
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
 	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
-	tlb_finish_mmu(&tlb, 0, -1);
+	tlb_finish_mmu(&tlb);
 
 	/*
 	 * Walk the list again, actually closing and freeing it,
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 03c33c93a582..b0be5a7aa08f 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -290,14 +290,11 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 /**
  * tlb_finish_mmu - finish an mmu_gather structure
  * @tlb: the mmu_gather structure to finish
- * @start: start of the region that will be removed from the page-table
- * @end: end of the region that will be removed from the page-table
  *
  * Called at the end of the shootdown operation to free up any resources that
  * were required.
  */
-void tlb_finish_mmu(struct mmu_gather *tlb,
-		unsigned long start, unsigned long end)
+void tlb_finish_mmu(struct mmu_gather *tlb)
 {
 	/*
 	 * If there are parallel threads are doing PTE changes on same range
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 8b84661a6410..c7936196a4ae 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -546,13 +546,13 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 						vma->vm_end);
 			tlb_gather_mmu(&tlb, mm, range.start, range.end);
 			if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
-				tlb_finish_mmu(&tlb, range.start, range.end);
+				tlb_finish_mmu(&tlb);
 				ret = false;
 				continue;
 			}
 			unmap_page_range(&tlb, vma, range.start, range.end, NULL);
 			mmu_notifier_invalidate_range_end(&range);
-			tlb_finish_mmu(&tlb, range.start, range.end);
+			tlb_finish_mmu(&tlb);
 		}
 	}
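[The caller-side change is mechanical; a typical call site
(illustrative only, not any specific file) goes from stating the range
twice to stating it once, at gather time:

    struct mmu_gather tlb;

    tlb_gather_mmu(&tlb, mm, start, end);  /* range stated once, here */
    unmap_vmas(&tlb, vma, start, end);
    tlb_finish_mmu(&tlb);                  /* was: tlb_finish_mmu(&tlb, start, end) */
]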
From patchwork Fri Nov 20 14:35:55 2020
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 11920413
From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Yu Zhao,
    Minchan Kim, Peter Zijlstra, Linus Torvalds, Anshuman Khandual,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 4/6] mm: proc: Invalidate TLB after clearing soft-dirty page state
Date: Fri, 20 Nov 2020 14:35:55 +0000
Message-Id: <20201120143557.6715-5-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>
References: <20201120143557.6715-1-will@kernel.org>

Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double
flush"), TLB invalidation is elided in tlb_finish_mmu() if no entries
were batched via the tlb_remove_*() functions. Consequently, the
page-table modifications performed by clear_refs_write() in response to
a write to /proc/<pid>/clear_refs do not perform TLB invalidation.
Although this is fine when simply aging the ptes, in the case of
clearing the "soft-dirty" state we can end up with entries where
pte_write() is false, yet a writable mapping remains in the TLB.

Fix this by calling tlb_remove_tlb_entry() for each entry being
write-protected when clearing soft-dirty.

Signed-off-by: Will Deacon
---
 fs/proc/task_mmu.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index cd03ab9087b0..3308292ee5c5 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1032,11 +1032,12 @@ enum clear_refs_types {
 
 struct clear_refs_private {
 	enum clear_refs_types type;
+	struct mmu_gather *tlb;
 };
 
 #ifdef CONFIG_MEM_SOFT_DIRTY
 static inline void clear_soft_dirty(struct vm_area_struct *vma,
-		unsigned long addr, pte_t *pte)
+		unsigned long addr, pte_t *pte, struct mmu_gather *tlb)
 {
 	/*
 	 * The soft-dirty tracker uses #PF-s to catch writes
@@ -1053,6 +1054,7 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 		ptent = pte_wrprotect(old_pte);
 		ptent = pte_clear_soft_dirty(ptent);
 		ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
+		tlb_remove_tlb_entry(tlb, pte, addr);
 	} else if (is_swap_pte(ptent)) {
 		ptent = pte_swp_clear_soft_dirty(ptent);
 		set_pte_at(vma->vm_mm, addr, pte, ptent);
@@ -1060,14 +1062,14 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 }
 #else
 static inline void clear_soft_dirty(struct vm_area_struct *vma,
-		unsigned long addr, pte_t *pte)
+		unsigned long addr, pte_t *pte, struct mmu_gather *tlb)
 {
 }
 #endif
 
 #if defined(CONFIG_MEM_SOFT_DIRTY) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
 static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
-		unsigned long addr, pmd_t *pmdp)
+		unsigned long addr, pmd_t *pmdp, struct mmu_gather *tlb)
 {
 	pmd_t old, pmd = *pmdp;
 
@@ -1081,6 +1083,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
 
 		pmd = pmd_wrprotect(pmd);
 		pmd = pmd_clear_soft_dirty(pmd);
+		tlb_remove_pmd_tlb_entry(tlb, pmdp, addr);
 
 		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
 	} else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
@@ -1090,7 +1093,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
 }
 #else
 static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
-		unsigned long addr, pmd_t *pmdp)
+		unsigned long addr, pmd_t *pmdp, struct mmu_gather *tlb)
 {
 }
 #endif
@@ -1107,7 +1110,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
 		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
-			clear_soft_dirty_pmd(vma, addr, pmd);
+			clear_soft_dirty_pmd(vma, addr, pmd, cp->tlb);
 			goto out;
 		}
 
@@ -1133,7 +1136,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 		ptent = *pte;
 
 		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
-			clear_soft_dirty(vma, addr, pte);
+			clear_soft_dirty(vma, addr, pte, cp->tlb);
 			continue;
 		}
 
@@ -1212,7 +1215,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 	if (mm) {
 		struct mmu_notifier_range range;
 		struct clear_refs_private cp = {
-			.type = type,
+			.type = type,
+			.tlb = &tlb,
 		};
 
 		if (type == CLEAR_REFS_MM_HIWATER_RSS) {
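[The property relied upon here is that tlb_remove_tlb_entry() widens
the gather's tracked range, so the final flush in tlb_finish_mmu() is
no longer elided. Roughly, per the asm-generic/tlb.h of this era
(simplified; the real macro also interacts with page-size hints and
per-architecture overrides):

    /*
     * Sketch: record that a pte-level entry at 'address' was changed.
     * Growing the range and setting cleared_ptes is what defeats the
     * "nothing batched, skip the flush" optimisation in tlb_finish_mmu().
     */
    #define tlb_remove_tlb_entry(tlb, ptep, address)                \
            do {                                                    \
                    __tlb_adjust_range(tlb, address, PAGE_SIZE);    \
                    tlb->cleared_ptes = 1;                          \
                    __tlb_remove_tlb_entry(tlb, ptep, address);     \
            } while (0)
]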
From patchwork Fri Nov 20 14:35:56 2020
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 11920415
From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Yu Zhao,
    Minchan Kim, Peter Zijlstra, Linus Torvalds, Anshuman Khandual,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 5/6] tlb: mmu_gather: Introduce tlb_gather_mmu_fullmm()
Date: Fri, 20 Nov 2020 14:35:56 +0000
Message-Id: <20201120143557.6715-6-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>
References: <20201120143557.6715-1-will@kernel.org>

Passing the range '0, -1' to tlb_gather_mmu() sets the 'fullmm' flag,
which indicates that the mm_struct being operated on is going away. In
this case, some architectures (such as arm64) can elide TLB
invalidation by ensuring that the TLB tag (ASID) associated with this
mm is not immediately reclaimed.

Although this behaviour is documented in asm-generic/tlb.h, it's subtle
and easily missed. Consequently, the /proc walker that manipulates the
young and soft-dirty bits passes this range even though the mm is not
going away.

Introduce tlb_gather_mmu_fullmm() to make it clearer that this is for
the entire mm, and WARN() if tlb_gather_mmu() is called with an 'end'
address greater than TASK_SIZE.

Signed-off-by: Will Deacon
Reported-by: kernel test robot
---
 fs/proc/task_mmu.c        |  2 +-
 include/asm-generic/tlb.h |  6 ++++--
 include/linux/mm_types.h  |  1 +
 mm/mmap.c                 |  2 +-
 mm/mmu_gather.c           | 16 ++++++++++++++--
 5 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3308292ee5c5..a76d339b5754 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			count = -EINTR;
 			goto out_mm;
 		}
-		tlb_gather_mmu(&tlb, mm, 0, -1);
+		tlb_gather_mmu_fullmm(&tlb, mm);
 		if (type == CLEAR_REFS_SOFT_DIRTY) {
 			for (vma = mm->mmap; vma; vma = vma->vm_next) {
 				if (!(vma->vm_flags & VM_SOFTDIRTY))
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 6661ee1cff47..2c68a545ffa7 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -46,7 +46,9 @@
  *
  * The mmu_gather API consists of:
  *
- *  - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
+ *  - tlb_gather_mmu() / tlb_gather_mmu_fullmm() / tlb_finish_mmu()
+ *
+ *    start and finish a mmu_gather
  *
  *    Finish in particular will issue a (final) TLB invalidate and free
  *    all (remaining) queued pages.
@@ -91,7 +93,7 @@
  *
  *  - mmu_gather::fullmm
  *
- *    A flag set by tlb_gather_mmu() to indicate we're going to free
+ *    A flag set by tlb_gather_mmu_fullmm() to indicate we're going to free
  *    the entire mm; this allows a number of optimizations.
 *
 *    - We can ignore tlb_{start,end}_vma(); because we don't
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7b90058a62be..42231729affe 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -585,6 +585,7 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 struct mmu_gather;
 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 				unsigned long start, unsigned long end);
+extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
 extern void tlb_finish_mmu(struct mmu_gather *tlb);
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
diff --git a/mm/mmap.c b/mm/mmap.c
index 6d94b2ee9c45..4b2809fbbd4a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3216,7 +3216,7 @@ void exit_mmap(struct mm_struct *mm)
 
 	lru_add_drain();
 	flush_cache_mm(mm);
-	tlb_gather_mmu(&tlb, mm, 0, -1);
+	tlb_gather_mmu_fullmm(&tlb, mm);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index b0be5a7aa08f..87b48444e7e5 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -261,8 +261,8 @@ void tlb_flush_mmu(struct mmu_gather *tlb)
  * respectively when @mm is without users and we're going to destroy
  * the full address space (exit/execve).
  */
-void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-		    unsigned long start, unsigned long end)
+static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
+			     unsigned long start, unsigned long end)
 {
 	tlb->mm = mm;
 
@@ -287,6 +287,18 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	inc_tlb_flush_pending(tlb->mm);
 }
 
+void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
+		    unsigned long start, unsigned long end)
+{
+	WARN_ON(end > TASK_SIZE);
+	__tlb_gather_mmu(tlb, mm, start, end);
+}
+
+void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm)
+{
+	__tlb_gather_mmu(tlb, mm, 0, -1);
+}
+
 /**
  * tlb_finish_mmu - finish an mmu_gather structure
  * @tlb: the mmu_gather structure to finish
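[The optimisation that this naming change guards is on the
architecture's flush side. A condensed sketch, loosely modelled on
arm64's tlb_flush() hook but not verbatim (flush_range() here is a
stand-in name for the real range-invalidation code, not an actual
kernel function):

    static inline void tlb_flush(struct mmu_gather *tlb)
    {
            /*
             * Whole address space going away: the ASID allocator will not
             * hand the old ASID out again without a full invalidation, so
             * per-page TLB maintenance can be skipped entirely...
             */
            if (tlb->fullmm) {
                    /* ...though freed page tables still need walk-cache care. */
                    if (tlb->freed_tables)
                            flush_tlb_mm(tlb->mm);
                    return;
            }

            flush_range(tlb->mm, tlb->start, tlb->end);  /* stand-in */
    }
]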
From patchwork Fri Nov 20 14:35:57 2020
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 11920417
From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Yu Zhao,
    Minchan Kim, Peter Zijlstra, Linus Torvalds, Anshuman Khandual,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit toggling
Date: Fri, 20 Nov 2020 14:35:57 +0000
Message-Id: <20201120143557.6715-7-will@kernel.org>
In-Reply-To: <20201120143557.6715-1-will@kernel.org>
References: <20201120143557.6715-1-will@kernel.org>

clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
updating the page-tables for the current mm. However, since the mm is
not being freed, this can result in stale TLB entries on architectures
which elide 'fullmm' invalidation.

Ensure that TLB invalidation is performed after updating soft-dirty
entries via clear_refs_write() by using the non-fullmm API to MMU
gather.

Signed-off-by: Will Deacon
---
 fs/proc/task_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index a76d339b5754..316af047f1aa 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			count = -EINTR;
 			goto out_mm;
 		}
-		tlb_gather_mmu_fullmm(&tlb, mm);
+		tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);
 		if (type == CLEAR_REFS_SOFT_DIRTY) {
 			for (vma = mm->mmap; vma; vma = vma->vm_next) {
 				if (!(vma->vm_flags & VM_SOFTDIRTY))
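[Taken together with patches 4 and 5, the soft-dirty path in
clear_refs_write() ends up with roughly the following shape -- a
condensed sketch of the resulting flow, not the verbatim function:

    struct mmu_gather tlb;
    struct clear_refs_private cp = { .type = type, .tlb = &tlb };

    /* Non-fullmm gather: the final flush is real, not elided. */
    tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);
    mmu_notifier_invalidate_range_start(&range);
    /* Each write-protected entry is fed to tlb_remove_tlb_entry() via cp.tlb. */
    walk_page_range(mm, 0, mm->highest_vm_end, &clear_refs_walk_ops, &cp);
    mmu_notifier_invalidate_range_end(&range);
    tlb_finish_mmu(&tlb);   /* invalidates the gathered range */
]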