From patchwork Fri Feb 4 17:56:57 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12735411
Date: Fri, 04 Feb 2022 09:56:57 -0800
From: Andrew Morton
To: ziy@nvidia.com, will@kernel.org, weixugc@google.com,
 songmuchun@bytedance.com, rppt@kernel.org, rientjes@google.com,
 pjt@google.com, mingo@redhat.com, jirislaby@kernel.org, hughd@google.com,
 hpa@zytor.com, gthelen@google.com, dave.hansen@linux.intel.com,
 anshuman.khandual@arm.com, aneesh.kumar@linux.ibm.com,
 pasha.tatashin@soleen.com, akpm@linux-foundation.org, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org
In-Reply-To: <20220203204836.88dcebe504f440686cc63a60@linux-foundation.org>
Subject: [patch 05/10] mm/page_table_check: check entries at pmd levels
Message-Id: <20220204175658.161C2C004E1@smtp.kernel.org>
From: Pasha Tatashin
Subject: mm/page_table_check: check entries at pmd levels

syzbot detected a case where the page table counters were not properly
updated:

syzkaller login:
------------[ cut here ]------------
kernel BUG at mm/page_table_check.c:162!
invalid opcode: 0000 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 3099 Comm: pasha Not tainted 5.16.0+ #48
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIO4
RIP: 0010:__page_table_check_zero+0x159/0x1a0
Code: 7d 3a b2 ff 45 39 f5 74 2a e8 43 38 b2 ff 4d 85 e4 01
RSP: 0018:ffff888010667418 EFLAGS: 00010293
RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000
RDX: ffff88800cea8680 RSI: ffffffff81becaf9 RDI: 0000000003
RBP: ffff888010667450 R08: 0000000000000001 R09: 0000000000
R10: ffffffff81becaab R11: 0000000000000001 R12: ffff888008
R13: 0000000000000001 R14: 0000000000000200 R15: dffffc0000
FS:  0000000000000000(0000) GS:ffff888035e00000(0000) knlG0
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffd875cad00 CR3: 00000000094ce000 CR4: 0000000000
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000
Call Trace:
 free_pcp_prepare+0x3be/0xaa0
 free_unref_page+0x1c/0x650
 ? trace_hardirqs_on+0x6a/0x1d0
 free_compound_page+0xec/0x130
 free_transhuge_page+0x1be/0x260
 __put_compound_page+0x90/0xd0
 release_pages+0x54c/0x1060
 ? filemap_remove_folio+0x161/0x210
 ? lock_downgrade+0x720/0x720
 ? __put_page+0x150/0x150
 ? filemap_free_folio+0x164/0x350
 __pagevec_release+0x7c/0x110
 shmem_undo_range+0x85e/0x1250
 ...

The repro involved a huge page that was split because an uprobe event
temporarily replaced one of its subpages.  Later the huge page was
collapsed again, but the counters were off, as the PTE level was not
properly updated when the PMD was cleared.

Make sure that when a PMD is cleared, and prior to freeing the PTE level,
the PTE entries are also checked and the counters updated.

Link: https://lkml.kernel.org/r/20220131203249.2832273-5-pasha.tatashin@soleen.com
Fixes: df4e817b7108 ("mm: page table check")
Signed-off-by: Pasha Tatashin
Acked-by: David Rientjes
Cc: Aneesh Kumar K.V
Cc: Anshuman Khandual
Cc: Dave Hansen
Cc: Greg Thelen
Cc: H. Peter Anvin
Cc: Hugh Dickins
Cc: Ingo Molnar
Cc: Jiri Slaby
Cc: Mike Rapoport
Cc: Muchun Song
Cc: Paul Turner
Cc: Wei Xu
Cc: Will Deacon
Cc: Zi Yan
Signed-off-by: Andrew Morton
---
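As an aside, the invariant the checker enforces can be illustrated with a
minimal user-space model (a sketch only: map_count, pte_set(), pte_clear()
and pte_clear_range() are names invented here for illustration, not kernel
API).  Setting a PTE increments a per-page counter, clearing decrements
it, and a page reaching the free path with a non-zero counter is what
trips __page_table_check_zero() in the splat above.  Skipping the
clear-range walk during collapse leaves every counter high:

#include <assert.h>
#include <stdio.h>

#define PTRS_PER_PTE 512	/* entries per PTE table on x86-64 */

/* One counter per subpage of the huge page. */
static int map_count[PTRS_PER_PTE];

static void pte_set(int i)   { map_count[i]++; }
static void pte_clear(int i) { map_count[i]--; }

/* Models page_table_check_pte_clear_range(): account for every entry. */
static void pte_clear_range(void)
{
	int i;

	for (i = 0; i < PTRS_PER_PTE; i++)
		pte_clear(i);
}

int main(void)
{
	int i;

	/* Huge page split: each subpage gets mapped by its own PTE. */
	for (i = 0; i < PTRS_PER_PTE; i++)
		pte_set(i);

	/*
	 * Huge page collapsed again: the PMD is cleared and the PTE
	 * table freed.  Without this call the counters stay at 1 and
	 * the checks below fail, just as __page_table_check_zero()
	 * BUGs in the splat above.
	 */
	pte_clear_range();

	for (i = 0; i < PTRS_PER_PTE; i++)
		assert(map_count[i] == 0);

	printf("counters balanced\n");
	return 0;
}

Building and running this prints "counters balanced"; removing the
pte_clear_range() call makes the asserts fail, mirroring the kernel BUG.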
 include/linux/page_table_check.h |   19 +++++++++++++++++++
 mm/khugepaged.c                  |    3 +++
 mm/page_table_check.c            |   20 ++++++++++++++++++++
 3 files changed, 42 insertions(+)

--- a/include/linux/page_table_check.h~mm-page_table_check-check-entries-at-pmd-levels
+++ a/include/linux/page_table_check.h
@@ -26,6 +26,9 @@ void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd);
 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
 				pud_t *pudp, pud_t pud);
+void __page_table_check_pte_clear_range(struct mm_struct *mm,
+					unsigned long addr,
+					pmd_t pmd);
 
 static inline void page_table_check_alloc(struct page *page, unsigned int order)
 {
@@ -100,6 +103,16 @@ static inline void page_table_check_pud_set(struct mm_struct *mm,
 	__page_table_check_pud_set(mm, addr, pudp, pud);
 }
 
+static inline void page_table_check_pte_clear_range(struct mm_struct *mm,
+						    unsigned long addr,
+						    pmd_t pmd)
+{
+	if (static_branch_likely(&page_table_check_disabled))
+		return;
+
+	__page_table_check_pte_clear_range(mm, addr, pmd);
+}
+
 #else
 
 static inline void page_table_check_alloc(struct page *page, unsigned int order)
@@ -143,5 +156,11 @@ static inline void page_table_check_pud_set(struct mm_struct *mm,
 {
 }
 
+static inline void page_table_check_pte_clear_range(struct mm_struct *mm,
+						    unsigned long addr,
+						    pmd_t pmd)
+{
+}
+
 #endif /* CONFIG_PAGE_TABLE_CHECK */
 #endif /* __LINUX_PAGE_TABLE_CHECK_H */
--- a/mm/khugepaged.c~mm-page_table_check-check-entries-at-pmd-levels
+++ a/mm/khugepaged.c
@@ -16,6 +16,7 @@
 #include <linux/hashtable.h>
 #include <linux/userfaultfd_k.h>
 #include <linux/page_idle.h>
+#include <linux/page_table_check.h>
 #include <linux/swapops.h>
 #include <linux/shmem_fs.h>
@@ -1422,10 +1423,12 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 {
 	spinlock_t *ptl;
 	pmd_t pmd;
 
+	mmap_assert_write_locked(mm);
 	ptl = pmd_lock(vma->vm_mm, pmdp);
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
 	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
+	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
 }
--- a/mm/page_table_check.c~mm-page_table_check-check-entries-at-pmd-levels
+++ a/mm/page_table_check.c
@@ -247,3 +247,23 @@ void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
 	}
 }
 EXPORT_SYMBOL(__page_table_check_pud_set);
+
+void __page_table_check_pte_clear_range(struct mm_struct *mm,
+					unsigned long addr,
+					pmd_t pmd)
+{
+	if (&init_mm == mm)
+		return;
+
+	if (!pmd_bad(pmd) && !pmd_leaf(pmd)) {
+		pte_t *ptep = pte_offset_map(&pmd, addr);
+		unsigned long i;
+
+		pte_unmap(ptep);
+		for (i = 0; i < PTRS_PER_PTE; i++) {
+			__page_table_check_pte_clear(mm, addr, *ptep);
+			addr += PAGE_SIZE;
+			ptep++;
+		}
+	}
+}
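Note that the new check only fires when the checker is active.  Assuming
the defaults documented by the Fixes: commit (see
Documentation/vm/page_table_check.rst), that means a kernel built and
booted with:

	CONFIG_PAGE_TABLE_CHECK=y	(build time)
	page_table_check=on		(boot parameter; not needed when
					 CONFIG_PAGE_TABLE_CHECK_ENFORCED
					 is set)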