From patchwork Mon Aug 12 18:12:25 2024
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 13760913
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Kirill A. Shutemov, Nicholas Piggin, David Hildenbrand, Matthew Wilcox,
    Andrew Morton, James Houghton, Huang Ying,
V" , peterx@redhat.com, Vlastimil Babka , Rick P Edgecombe , Hugh Dickins , Borislav Petkov , Christophe Leroy , Michael Ellerman , Rik van Riel , Dan Williams , Mel Gorman , x86@kernel.org, Ingo Molnar , linuxppc-dev@lists.ozlabs.org, Dave Hansen , Dave Jiang , Oscar Salvador , Thomas Gleixner Subject: [PATCH v5 7/7] mm/mprotect: fix dax pud handlings Date: Mon, 12 Aug 2024 14:12:25 -0400 Message-ID: <20240812181225.1360970-8-peterx@redhat.com> X-Mailer: git-send-email 2.45.0 In-Reply-To: <20240812181225.1360970-1-peterx@redhat.com> References: <20240812181225.1360970-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspam-User: X-Rspamd-Queue-Id: DB25020036 X-Rspamd-Server: rspam01 X-Stat-Signature: 71knigi4wuzc3n1eax5qgqjn5xz8y1aa X-HE-Tag: 1723486368-478517 X-HE-Meta: U2FsdGVkX18nLdu/AJTUKxtk3jh2Mm6DkefDIw3iFtQTZdaGFFXbKa2S2Zf4Kw6QmRcjBubR6krC8grjBmI32xOnOweYdIV3BrgkTSC4z61OAq28dV4Pedbfuy1SQ7u2Tjvc61UfqATCkWuXz0ABbNYHvFqXG0BrgVvrxQd6lQ+Ze2A2DeNZp9ERx8AEplh0TSyUKU16rbXKQm0kglJzF7c7VSvqsOUaIQFrJnd27tcHBcCO7WYh8iYSc/IWy+o0JsRzY6J7dytOsi8G95AXXVVwuQ5z0auNPaq8caqxHYoTfrZZ/AQ6lV6YWpbG5N8q9qZDm0G5JOsqkxgg7m0V/jMl1PoM6IiU3X4CEFW2utPjjSAS+tEkzzUWc/3Ddj55q7TGbMXoBGdQjZSBDD7seBByYviYQWSm8jN7P671c0fykjJoucoMdeS24duycIPb6AXp9YezV4Qy15Y9Kx9m+9mdNxU4WPrPIvHjkMk4VSWwa30gkhuXp0/PCIbj7L4V8aWKJLaDAnCzNAmZAlpy9ZnH8vA/oANRkNffwtuz6/Fnobkt3CE5JrLtpRbSkxTLJmzUu25ELYwTWUm9rDTl5okFZqFyXgLt7w7R1yJLyHTw6rNdQ7Sbvq+ymSIipXnb/eKK4rUvTmgmQ48j1BpkgQkYhicEP20WXD43BOBH+f2FgM912piDNpHrYsu3s+9EImCCGCG7PxruDhlMtgKycIyLJebnU3zCtGwS1kW2OtQXxDnh/vVGq43Lkj4mijT64Fouie7THXjiMkHpfWd24K82urhLBM+1upJ71HpKFf8q2D6zZioohcGeYtXaQNwjapej5ce4D1dKcHnzrZvUa0/wG9jYjXd7Ypm4kDNGt/VDzDcFUhutceY3XY4l09zF2InNmE4jOoKYZDwj9fLstax1LjgXyUInqr7dTtSLNJTo8DwDqrsLbzyy/pZNkFNgOiWQTp6QXrIDR4OEsXU mb1POJAy p+cRVEPlfpKuDw8r0BPhQhwowyg6Ni+TBO0FE1t11EuP9iQVOVNCEjeQ5XFfb2WH+guPU1RZ9cy6IA+fxx91XWEt9BDTip10C8uNPq+/4KFyLQ/QYOnOnYG0wJcLJgjo9o17FtYvYFz5jcnrLVG4cAPcUYTJUGslsNdcsIqfrkrQxEgllRcZBW9Lxa0KC9/6DCQGKGQi7ofafOTN/aCUKLr1VuvWfoKP/t1MKvhM++o9qIq4/ecdeX5x3uhjDzVo6Gkp2UXBEwwWSw7hCSioGY8qaRQESuQUoUpb0vCFqpFS5xpu/Sb9fMfWrZJFUvKy5kV8vojd0yP4fG6EhbaTw0qs9JNjB9T263F/Dvyo+SwKxtPZvQVavX6J3EhjTz9xydpkZadeOMwZjEm4Od8KRmFhiekPeHuAhB87FAzmKkbCVqZ3xI0rPncZMXXY5YdtE035KwXWRhigX0c7AhyJvpIdS2EZQm0f2kaypykw5w13ELQKZRB+1ifLklGNKTjPXuh3qLJQbD5hvMYA97sFGd47WaU8E8/KPxZIlSGcqKbARX+Dh35KYp4DtC7knJZcJnx58eNc5W/BRuVc= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: This is only relevant to the two archs that support PUD dax, aka, x86_64 and ppc64. PUD THPs do not yet exist elsewhere, and hugetlb PUDs do not count in this case. DAX have had PUD mappings for years, but change protection path never worked. When the path is triggered in any form (a simple test program would be: call mprotect() on a 1G dev_dax mapping), the kernel will report "bad pud". This patch should fix that. The new change_huge_pud() tries to keep everything simple. For example, it doesn't optimize write bit as that will need even more PUD helpers. It's not too bad anyway to have one more write fault in the worst case once for 1G range; may be a bigger thing for each PAGE_SIZE, though. Neither does it support userfault-wp bits, as there isn't such PUD mappings that is supported; file mappings always need a split there. 
The same applies to TLB shootdown: the pmd path (which was x86 only) has the
trick of using the _ad() version of pmdp_invalidate*(), which can avoid one
redundant TLB flush, but let's also leave that for later.  Again, the larger
the mapping, the smaller such an effect.

There's some difference in handling "retry" for change_huge_pud() (where it
can return 0): it isn't like change_huge_pmd(), as the pmd version is safe
with all conditions handled in change_pte_range() later, thanks to Hugh's
new pte_offset_map_lock().  In short, change_pte_range() is simply smarter.
Because of that, change_pud_range() needs a proper retry if it races with
something else and the huge PUD changes from under us.

The last thing to mention: the PUD path currently ignores the huge pte NUMA
counter (NUMA_HUGE_PTE_UPDATES), not only because DAX is not applicable to
NUMA, but also because it's ambiguous how a pud should be accounted there.
In an earlier version of this patchset I proposed removing the counter, as
the accounting doesn't even look right as of now [1], but a further
discussion [2] suggested leaving that for later, since it doesn't block this
series if we choose to ignore the counter.  That's what this patch does, by
ignoring it.

While at it, touch up the comment in pgtable_split_needed() to make it
generic to both pmd and pud file THPs.

[1] https://lore.kernel.org/all/20240715192142.3241557-3-peterx@redhat.com/
[2] https://lore.kernel.org/r/added2d0-b8be-4108-82ca-1367a388d0b1@redhat.com

Cc: Dan Williams
Cc: Matthew Wilcox
Cc: Dave Jiang
Cc: Hugh Dickins
Cc: Kirill A. Shutemov
Cc: Vlastimil Babka
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: Michael Ellerman
Cc: Aneesh Kumar K.V
Cc: Oscar Salvador
Cc: x86@kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/huge_mm.h | 24 +++++++++++++++++++
 mm/huge_memory.c        | 52 +++++++++++++++++++++++++++++++++++++++++
 mm/mprotect.c           | 39 ++++++++++++++++++++++++-------
 3 files changed, 107 insertions(+), 8 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index ce44caa40eed..6370026689e0 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -342,6 +342,17 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
 void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 		unsigned long address);
 
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		    pud_t *pudp, unsigned long addr, pgprot_t newprot,
+		    unsigned long cp_flags);
+#else
+static inline int
+change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		pud_t *pudp, unsigned long addr, pgprot_t newprot,
+		unsigned long cp_flags) { return 0; }
+#endif
+
 #define split_huge_pud(__vma, __pud, __address)				\
 	do {								\
 		pud_t *____pud = (__pud);				\
@@ -585,6 +596,19 @@ static inline int next_order(unsigned long *orders, int prev)
 {
 	return 0;
 }
+
+static inline void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+				    unsigned long address)
+{
+}
+
+static inline int change_huge_pud(struct mmu_gather *tlb,
+				  struct vm_area_struct *vma, pud_t *pudp,
+				  unsigned long addr, pgprot_t newprot,
+				  unsigned long cp_flags)
+{
+	return 0;
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline int split_folio_to_list_to_order(struct folio *folio,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 81c5da0708ed..0aafd26d7a53 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2114,6 +2114,53 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	return ret;
 }
 
+/*
+ * Returns:
+ *
+ * - 0: if pud leaf changed from under us
+ * - 1: if pud can be skipped
+ * - HPAGE_PUD_NR: if pud was successfully processed
+ */
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		    pud_t *pudp, unsigned long addr, pgprot_t newprot,
+		    unsigned long cp_flags)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	pud_t oldpud, entry;
+	spinlock_t *ptl;
+
+	tlb_change_page_size(tlb, HPAGE_PUD_SIZE);
+
+	/* NUMA balancing doesn't apply to dax */
+	if (cp_flags & MM_CP_PROT_NUMA)
+		return 1;
+
+	/*
+	 * Huge entries on userfault-wp only works with anonymous, while we
+	 * don't have anonymous PUDs yet.
+	 */
+	if (WARN_ON_ONCE(cp_flags & MM_CP_UFFD_WP_ALL))
+		return 1;
+
+	ptl = __pud_trans_huge_lock(pudp, vma);
+	if (!ptl)
+		return 0;
+
+	/*
+	 * Can't clear PUD or it can race with concurrent zapping.  See
+	 * change_huge_pmd().
+	 */
+	oldpud = pudp_invalidate(vma, addr, pudp);
+	entry = pud_modify(oldpud, newprot);
+	set_pud_at(mm, addr, pudp, entry);
+	tlb_flush_pud_range(tlb, addr, HPAGE_PUD_SIZE);
+
+	spin_unlock(ptl);
+	return HPAGE_PUD_NR;
+}
+#endif
+
 #ifdef CONFIG_USERFAULTFD
 /*
  * The PT lock for src_pmd and dst_vma/src_vma (for reading) are locked by
@@ -2344,6 +2391,11 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
 }
+#else
+void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+		unsigned long address)
+{
+}
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
 static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
diff --git a/mm/mprotect.c b/mm/mprotect.c
index d423080e6509..446f8e5f10d9 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -302,8 +302,9 @@ pgtable_split_needed(struct vm_area_struct *vma, unsigned long cp_flags)
 {
 	/*
 	 * pte markers only resides in pte level, if we need pte markers,
-	 * we need to split.  We cannot wr-protect shmem thp because file
-	 * thp is handled differently when split by erasing the pmd so far.
+	 * we need to split.  For example, we cannot wr-protect a file thp
+	 * (e.g. 2M shmem) because file thp is handled differently when
+	 * split by erasing the pmd so far.
 	 */
 	return (cp_flags & MM_CP_UFFD_WP) && !vma_is_anonymous(vma);
 }
@@ -430,31 +431,53 @@ static inline long change_pud_range(struct mmu_gather *tlb,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	struct mmu_notifier_range range;
-	pud_t *pud;
+	pud_t *pudp, pud;
 	unsigned long next;
 	long pages = 0, ret;
 
 	range.start = 0;
 
-	pud = pud_offset(p4d, addr);
+	pudp = pud_offset(p4d, addr);
 	do {
+again:
 		next = pud_addr_end(addr, end);
-		ret = change_prepare(vma, pud, pmd, addr, cp_flags);
+		ret = change_prepare(vma, pudp, pmd, addr, cp_flags);
 		if (ret) {
 			pages = ret;
 			break;
 		}
-		if (pud_none_or_clear_bad(pud))
+
+		pud = READ_ONCE(*pudp);
+		if (pud_none(pud))
 			continue;
+
 		if (!range.start) {
 			mmu_notifier_range_init(&range,
 						MMU_NOTIFY_PROTECTION_VMA, 0,
 						vma->vm_mm, addr, end);
 			mmu_notifier_invalidate_range_start(&range);
 		}
-		pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
+
+		if (pud_leaf(pud)) {
+			if ((next - addr != PUD_SIZE) ||
+			    pgtable_split_needed(vma, cp_flags)) {
+				__split_huge_pud(vma, pudp, addr);
+				goto again;
+			} else {
+				ret = change_huge_pud(tlb, vma, pudp,
+						      addr, newprot, cp_flags);
+				if (ret == 0)
+					goto again;
+				/* huge pud was handled */
+				if (ret == HPAGE_PUD_NR)
+					pages += HPAGE_PUD_NR;
+				continue;
+			}
+		}
+
+		pages += change_pmd_range(tlb, vma, pudp, addr, next, newprot,
 					  cp_flags);
-	} while (pud++, addr = next, addr != end);
+	} while (pudp++, addr = next, addr != end);
 
 	if (range.start)
 		mmu_notifier_invalidate_range_end(&range);