From patchwork Thu Dec  5 02:02:26 2024
X-Patchwork-Submitter: Guillaume Morin <guillaume@morinfr.org>
X-Patchwork-Id: 13894667
Date: Thu, 5 Dec 2024 03:02:26 +0100
From: Guillaume Morin <guillaume@morinfr.org>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Muchun Song, Andrew Morton, Peter Xu,
	David Hildenbrand, Eric Hagberg
Subject: [PATCH v3] mm/hugetlb: support FOLL_FORCE|FOLL_WRITE

Eric reported that PTRACE_POKETEXT fails when applications use hugetlb
for mapping text using huge pages. Before commit 1d8d14641fd9
("mm/hugetlb: support write-faults in shared mappings"), PTRACE_POKETEXT
worked by accident, but it was buggy and silently ended up mapping pages
writable into the page tables even though VM_WRITE was not set.

In general, FOLL_FORCE|FOLL_WRITE currently does not work with hugetlb.
Let's implement FOLL_FORCE|FOLL_WRITE properly for hugetlb, so that what
used to work by accident in the past now works properly, allowing
applications using hugetlb for text etc. to get properly debugged.

This change might also be required to implement uprobes support for
hugetlb [1].

[1] https://lore.kernel.org/lkml/ZiK50qob9yl5e0Xz@bender.morinfr.org/
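For illustration (not part of this patch), a minimal reproducer sketch
of the reported failure. It assumes a writable hugetlbfs mount at the
hypothetical path /mnt/huge with 2 MiB huge pages available, and omits
error handling. Without this change the PTRACE_POKETEXT call below
fails on the hugetlb mapping; with it, the poke breaks COW on the
private mapping and succeeds:

/* hypothetical reproducer sketch -- not part of the patch */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define LEN (2UL * 1024 * 1024)	/* one 2 MiB huge page */

int main(void)
{
	/* hypothetical hugetlbfs path; any hugetlbfs file works */
	int fd = open("/mnt/huge/text", O_CREAT | O_RDWR, 0600);
	char *text;
	pid_t child;
	long ret;

	ftruncate(fd, LEN);
	/*
	 * Private read/exec mapping, as for a text segment:
	 * VM_MAYWRITE is set, VM_WRITE is not.
	 */
	text = mmap(NULL, LEN, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);

	child = fork();
	if (child == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);
		_exit(0);
	}
	waitpid(child, NULL, 0);

	/* used to fail on hugetlb mappings; works with this patch */
	ret = ptrace(PTRACE_POKETEXT, child, text, (void *)0xcc);
	printf("PTRACE_POKETEXT: %ld\n", ret);

	kill(child, SIGKILL);
	return ret ? 1 : 0;
}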
Cc: Muchun Song
Cc: Andrew Morton
Cc: Peter Xu
Cc: David Hildenbrand
Cc: Eric Hagberg
Signed-off-by: Guillaume Morin <guillaume@morinfr.org>
---
Changes in v2:
  - Improved commit message
Changes in v3:
  - Fix potential uninitialized memory access in follow_huge_pud
  - Define pud_soft_dirty() when soft-dirty is not enabled

 include/linux/pgtable.h |  5 +++
 mm/gup.c                | 99 +++++++++++++++++++++--------------------
 mm/hugetlb.c            | 20 +++++----
 3 files changed, 66 insertions(+), 58 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index adef9d6e9b1b..9335d7c82d20 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1422,6 +1422,11 @@ static inline int pmd_soft_dirty(pmd_t pmd)
 	return 0;
 }
 
+static inline int pud_soft_dirty(pud_t pud)
+{
+	return 0;
+}
+
 static inline pte_t pte_mksoft_dirty(pte_t pte)
 {
 	return pte;
diff --git a/mm/gup.c b/mm/gup.c
index 746070a1d8bf..cc3eae458013 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -587,6 +587,33 @@ static struct folio *try_grab_folio_fast(struct page *page, int refs,
 }
 #endif	/* CONFIG_HAVE_GUP_FAST */
 
+/* Common code for can_follow_write_* */
+static inline bool can_follow_write_common(struct page *page,
+		struct vm_area_struct *vma, unsigned int flags)
+{
+	/* Maybe FOLL_FORCE is set to override it? */
+	if (!(flags & FOLL_FORCE))
+		return false;
+
+	/* But FOLL_FORCE has no effect on shared mappings */
+	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+		return false;
+
+	/* ... or read-only private ones */
+	if (!(vma->vm_flags & VM_MAYWRITE))
+		return false;
+
+	/* ... or already writable ones that just need to take a write fault */
+	if (vma->vm_flags & VM_WRITE)
+		return false;
+
+	/*
+	 * See can_change_pte_writable(): we broke COW and could map the page
+	 * writable if we have an exclusive anonymous page ...
+	 */
+	return page && PageAnon(page) && PageAnonExclusive(page);
+}
+
 static struct page *no_page_table(struct vm_area_struct *vma,
 		unsigned int flags, unsigned long address)
 {
@@ -613,6 +640,22 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 }
 
 #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+/* FOLL_FORCE can write to even unwritable PUDs in COW mappings. */
+static inline bool can_follow_write_pud(pud_t pud, struct page *page,
+					struct vm_area_struct *vma,
+					unsigned int flags)
+{
+	/* If the pud is writable, we can write to the page. */
+	if (pud_write(pud))
+		return true;
+
+	if (!can_follow_write_common(page, vma, flags))
+		return false;
+
+	/* ... and a write-fault isn't required for other reasons. */
+	return !vma_soft_dirty_enabled(vma) || pud_soft_dirty(pud);
+}
+
 static struct page *follow_huge_pud(struct vm_area_struct *vma,
 				    unsigned long addr, pud_t *pudp,
 				    int flags, struct follow_page_context *ctx)
@@ -625,13 +668,16 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 
 	assert_spin_locked(pud_lockptr(mm, pudp));
 
-	if ((flags & FOLL_WRITE) && !pud_write(pud))
+	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+	page = pfn_to_page(pfn);
+
+	if ((flags & FOLL_WRITE) &&
+	    !can_follow_write_pud(pud, page, vma, flags))
 		return NULL;
 
 	if (!pud_present(pud))
 		return NULL;
 
-	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-
 	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
 	    pud_devmap(pud)) {
@@ -653,8 +699,6 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 		return ERR_PTR(-EFAULT);
 	}
 
-	page = pfn_to_page(pfn);
-
 	if (!pud_devmap(pud) && !pud_write(pud) &&
 	    gup_must_unshare(vma, flags, page))
 		return ERR_PTR(-EMLINK);
@@ -677,27 +721,7 @@ static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
 	if (pmd_write(pmd))
 		return true;
 
-	/* Maybe FOLL_FORCE is set to override it? */
-	if (!(flags & FOLL_FORCE))
-		return false;
-
-	/* But FOLL_FORCE has no effect on shared mappings */
-	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-		return false;
-
-	/* ... or read-only private ones */
-	if (!(vma->vm_flags & VM_MAYWRITE))
-		return false;
-
-	/* ... or already writable ones that just need to take a write fault */
-	if (vma->vm_flags & VM_WRITE)
-		return false;
-
-	/*
-	 * See can_change_pte_writable(): we broke COW and could map the page
-	 * writable if we have an exclusive anonymous page ...
-	 */
-	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+	if (!can_follow_write_common(page, vma, flags))
 		return false;
 
 	/* ... and a write-fault isn't required for other reasons. */
@@ -798,27 +822,7 @@ static inline bool can_follow_write_pte(pte_t pte, struct page *page,
 	if (pte_write(pte))
 		return true;
 
-	/* Maybe FOLL_FORCE is set to override it? */
-	if (!(flags & FOLL_FORCE))
-		return false;
-
-	/* But FOLL_FORCE has no effect on shared mappings */
-	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-		return false;
-
-	/* ... or read-only private ones */
-	if (!(vma->vm_flags & VM_MAYWRITE))
-		return false;
-
-	/* ... or already writable ones that just need to take a write fault */
-	if (vma->vm_flags & VM_WRITE)
-		return false;
-
-	/*
-	 * See can_change_pte_writable(): we broke COW and could map the page
-	 * writable if we have an exclusive anonymous page ...
-	 */
-	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+	if (!can_follow_write_common(page, vma, flags))
 		return false;
 
 	/* ... and a write-fault isn't required for other reasons. */
@@ -1285,9 +1289,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if (!(vm_flags & VM_WRITE) || (vm_flags & VM_SHADOW_STACK)) {
 		if (!(gup_flags & FOLL_FORCE))
 			return -EFAULT;
-		/* hugetlb does not support FOLL_FORCE|FOLL_WRITE. */
-		if (is_vm_hugetlb_page(vma))
-			return -EFAULT;
 		/*
 		 * We used to let the write,force case do COW in a
 		 * VM_MAYWRITE VM_SHARED !VM_WRITE vma, so ptrace could
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ea2ed8e301ef..52517b7ce308 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5169,6 +5169,13 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
 	update_mmu_cache(vma, address, ptep);
 }
 
+static void set_huge_ptep_maybe_writable(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	if (vma->vm_flags & VM_WRITE)
+		set_huge_ptep_writable(vma, address, ptep);
+}
+
 bool is_hugetlb_entry_migration(pte_t pte)
 {
 	swp_entry_t swp;
@@ -5802,13 +5809,6 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 	if (!unshare && huge_pte_uffd_wp(pte))
 		return 0;
 
-	/*
-	 * hugetlb does not support FOLL_FORCE-style write faults that keep the
-	 * PTE mapped R/O such as maybe_mkwrite() would do.
-	 */
-	if (WARN_ON_ONCE(!unshare && !(vma->vm_flags & VM_WRITE)))
-		return VM_FAULT_SIGSEGV;
-
 	/* Let's take out MAP_SHARED mappings first. */
 	if (vma->vm_flags & VM_MAYSHARE) {
 		set_huge_ptep_writable(vma, vmf->address, vmf->pte);
@@ -5837,7 +5837,8 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 			SetPageAnonExclusive(&old_folio->page);
 	}
 	if (likely(!unshare))
-		set_huge_ptep_writable(vma, vmf->address, vmf->pte);
+		set_huge_ptep_maybe_writable(vma, vmf->address,
+					     vmf->pte);
 
 	delayacct_wpcopy_end();
 	return 0;
@@ -5943,7 +5944,8 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 	spin_lock(vmf->ptl);
 	vmf->pte = hugetlb_walk(vma, vmf->address, huge_page_size(h));
 	if (likely(vmf->pte && pte_same(huge_ptep_get(mm, vmf->address, vmf->pte), pte))) {
-		pte_t newpte = make_huge_pte(vma, &new_folio->page, !unshare);
+		const bool writable = !unshare && (vma->vm_flags & VM_WRITE);
+		pte_t newpte = make_huge_pte(vma, &new_folio->page, writable);
 
 		/* Break COW or unshare */
 		huge_ptep_clear_flush(vma, vmf->address, vmf->pte);
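A closing illustration rather than part of the patch: the new
FOLL_FORCE|FOLL_WRITE path can also be exercised without ptrace, since
writes through /proc/<pid>/mem use FOLL_FORCE as well. A minimal
sketch, assuming 2 MiB huge pages are available for MAP_HUGETLB and
omitting error handling; the pwrite() breaks COW while leaving the
hugetlb PTE mapped read-only:

/* hypothetical test sketch -- not part of the patch */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN (2UL * 1024 * 1024)	/* one 2 MiB huge page */

int main(void)
{
	/* read-only private hugetlb mapping: VM_MAYWRITE set, VM_WRITE not */
	char *p = mmap(NULL, LEN, PROT_READ,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	int fd = open("/proc/self/mem", O_RDWR);
	ssize_t n;

	/* writes via /proc/<pid>/mem use FOLL_FORCE|FOLL_WRITE */
	n = pwrite(fd, "x", 1, (off_t)(uintptr_t)p);
	printf("pwrite: %zd, *p: '%c'\n", n, *p);
	return n == 1 ? 0 : 1;
}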