From patchwork Tue Dec 19 07:55:36 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 13497979
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Matthew Wilcox, Christophe Leroy, Lorenzo Stoakes, David Hildenbrand,
 Vlastimil Babka, Mike Kravetz, Mike Rapoport, Christoph Hellwig,
 John Hubbard, Andrew Jones, linux-arm-kernel@lists.infradead.org,
 Michael Ellerman, "Kirill A. Shutemov", linuxppc-dev@lists.ozlabs.org,
 Rik van Riel, linux-riscv@lists.infradead.org, Yang Shi, James Houghton,
 "Aneesh Kumar K. V", Andrew Morton, Jason Gunthorpe, Andrea Arcangeli,
 peterx@redhat.com, Axel Rasmussen
Subject: [PATCH 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()
Date: Tue, 19 Dec 2023 15:55:36 +0800
Message-ID: <20231219075538.414708-12-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>
MIME-Version: 1.0

From: Peter Xu <peterx@redhat.com>

Replace pmd_trans_huge() with pmd_thp_or_huge() so that pmd_huge() mappings
are also covered whenever hugetlb is enabled.  FOLL_TOUCH and
FOLL_SPLIT_PMD still apply only to THP; hugetlb huge pmds are not handled
for them yet.

Since follow_trans_huge_pmd() can now process hugetlb pages, rename it to
follow_huge_pmd() to match what it does.
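For reference, pmd_thp_or_huge() can be read as the OR of the two checks.
A minimal sketch, modeled after the existing arm64 macro; the real
definition is arch-specific and the inline form below is illustrative
only, not part of this patch:

	static inline bool pmd_thp_or_huge(pmd_t pmd)
	{
		/* Covers both THP-mapped pmds and hugetlb huge pmds */
		return pmd_trans_huge(pmd) || pmd_huge(pmd);
	}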
Move it into gup.c so that it does not depend on CONFIG_THP.

While at it, move the ctx->page_mask setup into follow_huge_pmd() and only
set it when the page is valid.  Setting it even when GUP failed
(page==NULL) was not a bug, because follow_page_mask() callers always
ignore page_mask in that case, but only setting it on success makes the
code cleaner.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c         | 107 ++++++++++++++++++++++++++++++++++++++++++++---
 mm/huge_memory.c |  86 +------------------------------------
 mm/internal.h    |   5 +--
 3 files changed, 105 insertions(+), 93 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 5b14f91d2f6b..080dff79b650 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -580,6 +580,93 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 	return page;
 }
 
+
+/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
+static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
+					struct vm_area_struct *vma,
+					unsigned int flags)
+{
+	/* If the pmd is writable, we can write to the page. */
+	if (pmd_write(pmd))
+		return true;
+
+	/* Maybe FOLL_FORCE is set to override it? */
+	if (!(flags & FOLL_FORCE))
+		return false;
+
+	/* But FOLL_FORCE has no effect on shared mappings */
+	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+		return false;
+
+	/* ... or read-only private ones */
+	if (!(vma->vm_flags & VM_MAYWRITE))
+		return false;
+
+	/* ... or already writable ones that just need to take a write fault */
+	if (vma->vm_flags & VM_WRITE)
+		return false;
+
+	/*
+	 * See can_change_pte_writable(): we broke COW and could map the page
+	 * writable if we have an exclusive anonymous page ...
+	 */
+	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+		return false;
+
+	/* ... and a write-fault isn't required for other reasons. */
+	if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+		return false;
+	return !userfaultfd_huge_pmd_wp(vma, pmd);
+}
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+				    unsigned long addr, pmd_t *pmd,
+				    unsigned int flags,
+				    struct follow_page_context *ctx)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t pmdval = *pmd;
+	struct page *page;
+	int ret;
+
+	assert_spin_locked(pmd_lockptr(mm, pmd));
+
+	page = pmd_page(pmdval);
+	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
+
+	if ((flags & FOLL_WRITE) &&
+	    !can_follow_write_pmd(pmdval, page, vma, flags))
+		return NULL;
+
+	/* Avoid dumping huge zero page */
+	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval))
+		return ERR_PTR(-EFAULT);
+
+	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
+		return NULL;
+
+	if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
+		return ERR_PTR(-EMLINK);
+
+	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
+			!PageAnonExclusive(page), page);
+
+	ret = try_grab_page(page, flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (pmd_trans_huge(pmdval) && (flags & FOLL_TOUCH))
+		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+
+	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+	ctx->page_mask = HPAGE_PMD_NR - 1;
+	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
+
+	return page;
+}
+
 #else	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 static struct page *follow_huge_pud(struct vm_area_struct *vma,
 				    unsigned long addr, pud_t *pudp,
@@ -587,6 +674,14 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 {
 	return NULL;
 }
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+				    unsigned long addr, pmd_t *pmd,
+				    unsigned int flags,
+				    struct follow_page_context *ctx)
+{
+	return NULL;
+}
 #endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
@@ -784,31 +879,31 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 			return page;
 		return no_page_table(vma, flags, address);
 	}
-	if (likely(!pmd_trans_huge(pmdval)))
+	if (likely(!pmd_thp_or_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
 		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
-	if (unlikely(!pmd_present(*pmd))) {
+	pmdval = *pmd;
+	if (unlikely(!pmd_present(pmdval))) {
 		spin_unlock(ptl);
 		return no_page_table(vma, flags, address);
 	}
-	if (unlikely(!pmd_trans_huge(*pmd))) {
+	if (unlikely(!pmd_thp_or_huge(pmdval))) {
 		spin_unlock(ptl);
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 	}
-	if (flags & FOLL_SPLIT_PMD) {
+	if (pmd_trans_huge(pmdval) && (flags & FOLL_SPLIT_PMD)) {
 		spin_unlock(ptl);
 		split_huge_pmd(vma, pmd, address);
 		/* If pmd was left empty, stuff a page table in there quickly */
 		return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) :
 			follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 	}
-	page = follow_trans_huge_pmd(vma, address, pmd, flags);
+	page = follow_huge_pmd(vma, address, pmd, flags, ctx);
 	spin_unlock(ptl);
-	ctx->page_mask = HPAGE_PMD_NR - 1;
 	return page;
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index def1dbe0d7e8..930c59d7ceab 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1216,8 +1216,8 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-		      pmd_t *pmd, bool write)
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write)
 {
 	pmd_t _pmd;
 
@@ -1570,88 +1570,6 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
 	return pmd_dirty(pmd);
 }
 
-/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
-static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
-					struct vm_area_struct *vma,
-					unsigned int flags)
-{
-	/* If the pmd is writable, we can write to the page. */
-	if (pmd_write(pmd))
-		return true;
-
-	/* Maybe FOLL_FORCE is set to override it? */
-	if (!(flags & FOLL_FORCE))
-		return false;
-
-	/* But FOLL_FORCE has no effect on shared mappings */
-	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-		return false;
-
-	/* ... or read-only private ones */
-	if (!(vma->vm_flags & VM_MAYWRITE))
-		return false;
-
-	/* ... or already writable ones that just need to take a write fault */
-	if (vma->vm_flags & VM_WRITE)
-		return false;
-
-	/*
-	 * See can_change_pte_writable(): we broke COW and could map the page
-	 * writable if we have an exclusive anonymous page ...
-	 */
-	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
-		return false;
-
-	/* ... and a write-fault isn't required for other reasons. */
-	if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
-		return false;
-	return !userfaultfd_huge_pmd_wp(vma, pmd);
-}
-
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-				   unsigned long addr,
-				   pmd_t *pmd,
-				   unsigned int flags)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pmd_lockptr(mm, pmd));
-
-	page = pmd_page(*pmd);
-	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
-
-	if ((flags & FOLL_WRITE) &&
-	    !can_follow_write_pmd(*pmd, page, vma, flags))
-		return NULL;
-
-	/* Avoid dumping huge zero page */
-	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
-		return ERR_PTR(-EFAULT);
-
-	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
-		return NULL;
-
-	if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page))
-		return ERR_PTR(-EMLINK);
-
-	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
-		       !PageAnonExclusive(page), page);
-
-	ret = try_grab_page(page, flags);
-	if (ret)
-		return ERR_PTR(ret);
-
-	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
-	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
-	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
-
-	return page;
-}
-
 /* NUMA hinting page fault entry point for trans huge pmds */
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
diff --git a/mm/internal.h b/mm/internal.h
index 2fca14553d0f..c0e953a1eb62 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1009,9 +1009,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
  */
 void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 	       pud_t *pud, bool write);
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-				   unsigned long addr, pmd_t *pmd,
-				   unsigned int flags);
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write);
 
 /*
  * mm/mmap.c
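
For context on the ctx->page_mask change above: follow_page_mask() callers
only consume page_mask when a page was actually returned.  A simplified
sketch of that consumer, paraphrased from the __get_user_pages() loop (not
part of this patch):

	/*
	 * With ctx.page_mask == HPAGE_PMD_NR - 1, one successful lookup
	 * accounts for every remaining small page under the huge pmd.
	 */
	page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
	if (page_increm > nr_pages)
		page_increm = nr_pages;
	start += page_increm * PAGE_SIZE;
	nr_pages -= page_increm;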