From patchwork Fri Feb 21 14:31:00 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 13985606
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-arch@vger.kernel.org
Subject: [PATCH 3/4] mm: Add folio_mk_pmd()
Date: Fri, 21 Feb 2025 14:31:00 +0000
Message-ID: <20250221143104.3334444-4-willy@infradead.org>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250221143104.3334444-1-willy@infradead.org>
References: <20250221143104.3334444-1-willy@infradead.org>
MIME-Version: 1.0

Removes five conversions from folio to page.  Also removes both callers
of mk_pmd() that aren't part of mk_huge_pmd(), getting us a step closer
to removing the confusion between mk_pmd(), mk_huge_pmd() and
pmd_mkhuge().
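For illustration, the conversion pattern this enables (drawn from the
fs/dax.c hunk below): callers that previously built the entry with
mk_pmd() and then marked it huge collapse into one call, with no
folio-to-page conversion:

	/* Before: convert folio to page, then mark the entry huge. */
	pmd_entry = mk_pmd(&zero_folio->page, vmf->vma->vm_page_prot);
	pmd_entry = pmd_mkhuge(pmd_entry);

	/* After: one helper, taking the folio directly. */
	pmd_entry = folio_mk_pmd(zero_folio, vmf->vma->vm_page_prot);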
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/dax.c           |  3 +--
 include/linux/mm.h | 17 +++++++++++++++++
 mm/huge_memory.c   | 11 +++++------
 mm/khugepaged.c    |  2 +-
 mm/memory.c        |  2 +-
 5 files changed, 25 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 21b47402b3dc..22efc6c44539 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1237,8 +1237,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
 		mm_inc_nr_ptes(vma->vm_mm);
 	}
-	pmd_entry = mk_pmd(&zero_folio->page, vmf->vma->vm_page_prot);
-	pmd_entry = pmd_mkhuge(pmd_entry);
+	pmd_entry = folio_mk_pmd(zero_folio, vmf->vma->vm_page_prot);
 	set_pmd_at(vmf->vma->vm_mm, pmd_addr, vmf->pmd, pmd_entry);
 	spin_unlock(ptl);
 	trace_dax_pmd_load_hole(inode, vmf, zero_folio, *entry);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b1e311bae6b7..5c883c619fa4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1936,7 +1936,24 @@ static inline pte_t folio_mk_pte(struct folio *folio, pgprot_t pgprot)
 {
 	return pfn_pte(folio_pfn(folio), pgprot);
 }
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+/**
+ * folio_mk_pmd - Create a PMD for this folio
+ * @folio: The folio to create a PMD for
+ * @pgprot: The page protection bits to use
+ *
+ * Create a page table entry for the first page of this folio.
+ * This is suitable for passing to set_pmd_at().
+ *
+ * Return: A page table entry suitable for mapping this folio.
+ */
+static inline pmd_t folio_mk_pmd(struct folio *folio, pgprot_t pgprot)
+{
+	return pmd_mkhuge(pfn_pmd(folio_pfn(folio), pgprot));
+}
 #endif
+#endif /* CONFIG_MMU */

 /**
  * folio_maybe_dma_pinned - Report if a folio may be pinned for DMA.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3d3ebdc002d5..95ed5dd9622b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1203,7 +1203,7 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
 {
 	pmd_t entry;

-	entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
+	entry = folio_mk_pmd(folio, vma->vm_page_prot);
 	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
 	folio_add_lru_vma(folio, vma);
@@ -1311,8 +1311,7 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
 	pmd_t entry;
 	if (!pmd_none(*pmd))
 		return;
-	entry = mk_pmd(&zero_folio->page, vma->vm_page_prot);
-	entry = pmd_mkhuge(entry);
+	entry = folio_mk_pmd(zero_folio, vma->vm_page_prot);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	set_pmd_at(mm, haddr, pmd, entry);
 	mm_inc_nr_ptes(mm);
@@ -2570,12 +2569,12 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
 		folio_move_anon_rmap(src_folio, dst_vma);
 		src_folio->index = linear_page_index(dst_vma, dst_addr);

-		_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
+		_dst_pmd = folio_mk_pmd(src_folio, dst_vma->vm_page_prot);
 		/* Follow mremap() behavior and treat the entry dirty after the move */
 		_dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
 	} else {
 		src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
-		_dst_pmd = mk_huge_pmd(src_page, dst_vma->vm_page_prot);
+		_dst_pmd = folio_mk_pmd(src_folio, dst_vma->vm_page_prot);
 	}
 	set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd);

@@ -4306,7 +4305,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	folio_get(folio);
-	pmde = mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot));
+	pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5f0be134141e..4f85597a7f64 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1239,7 +1239,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	__folio_mark_uptodate(folio);
 	pgtable = pmd_pgtable(_pmd);

-	_pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot);
+	_pmd = folio_mk_pmd(folio, vma->vm_page_prot);
 	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);

 	spin_lock(pmd_ptl);
diff --git a/mm/memory.c b/mm/memory.c
index ea5a58db76dd..6d1a1185c34c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5078,7 +5078,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)

 	flush_icache_pages(vma, page, HPAGE_PMD_NR);

-	entry = mk_huge_pmd(page, vma->vm_page_prot);
+	entry = folio_mk_pmd(folio, vma->vm_page_prot);
 	if (write)
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
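
A minimal usage sketch, not part of the patch: a caller that starts
from a struct page, as do_set_pmd() above does, is assumed to derive
the folio first via the usual page_folio() conversion; haddr and vmf
here are hypothetical locals:

	/* Sketch: map a folio's first page at a PMD.  Locking, rmap
	 * and accounting are omitted for brevity. */
	struct folio *folio = page_folio(page);
	pmd_t entry = folio_mk_pmd(folio, vma->vm_page_prot);
	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);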