From patchwork Tue Jul 30 07:27:13 2024
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 13746848
From: alexs@kernel.org
To: Will Deacon, "Aneesh Kumar K . V", Nick Piggin, Peter Zijlstra,
	Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
	Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson, Stafford Horne,
	Michael Ellerman, Naveen N Rao, Paul Walmsley, Albert Ou,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H . Peter Anvin", Andy Lutomirski, Bibo Mao, Baolin Wang,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, Heiko Carstens, Vasily Gorbik,
	Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
	"Aneesh Kumar K . V", Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
	linux-s390@vger.kernel.org
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
	Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
	Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
	Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
	Matthew Wilcox, Alex Shi, "Naveen N . Rao", Andrew Morton
Subject: [RFC PATCH 12/18] mm/thp: pass ptdesc to set_huge_zero_folio function
Date: Tue, 30 Jul 2024 15:27:13 +0800
Message-ID: <20240730072719.3715016-2-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240730072719.3715016-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>
 <20240730072719.3715016-1-alexs@kernel.org>

From: Alex Shi

The aim is still to replace struct page with struct ptdesc: have
set_huge_zero_folio() take the ptdesc directly and convert it to a page
only at the pgtable_trans_huge_deposit() call site.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/huge_memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index dc323453fa02..1c121ec85447 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1055,7 +1055,7 @@ gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma)
 }
 
 /* Caller must hold page table lock. */
-static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
+static void set_huge_zero_folio(struct ptdesc *ptdesc, struct mm_struct *mm,
 		struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
 		struct folio *zero_folio)
 {
@@ -1064,7 +1064,7 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
 		return;
 	entry = mk_pmd(&zero_folio->page, vma->vm_page_prot);
 	entry = pmd_mkhuge(entry);
-	pgtable_trans_huge_deposit(mm, pmd, pgtable);
+	pgtable_trans_huge_deposit(mm, pmd, ptdesc_page(ptdesc));
 	set_pmd_at(mm, haddr, pmd, entry);
 	mm_inc_nr_ptes(mm);
 }
@@ -1113,7 +1113,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		ret = handle_userfault(vmf, VM_UFFD_MISSING);
 		VM_BUG_ON(ret & VM_FAULT_FALLBACK);
 	} else {
-		set_huge_zero_folio(ptdesc_page(ptdesc), vma->vm_mm, vma,
+		set_huge_zero_folio(ptdesc, vma->vm_mm, vma,
				haddr, vmf->pmd, zero_folio);
		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
		spin_unlock(vmf->ptl);