From patchwork Tue May 21 11:48:22 2024
From: Björn Töpel
To: Alexandre Ghiti, Albert Ou, David Hildenbrand, Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org, Oscar Salvador
Subject: [PATCH v3 1/9] riscv: mm: Properly forward vmemmap_populate() altmap parameter
Date: Tue, 21 May 2024 13:48:22 +0200
Message-Id: <20240521114830.841660-2-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>
Cc: Lorenzo Stoakes, Chethan Seshadri, linux-kernel@vger.kernel.org, Andrew Bresticker, Björn Töpel, Santosh Mamila, linux-mm@kvack.org, Sivakumar Munnangi, virtualization@lists.linux-foundation.org

Make sure that the altmap parameter is properly passed on to
vmemmap_populate_hugepages().

Signed-off-by: Björn Töpel
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 2574f6a3b0e7..b66f846e7634 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1434,7 +1434,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
	 * memory hotplug, we are not able to update all the page tables with
	 * the new PMDs.
	 */
-	return vmemmap_populate_hugepages(start, end, node, NULL);
+	return vmemmap_populate_hugepages(start, end, node, altmap);
 }
 #endif
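For context, a minimal sketch (editorial illustration, not part of the patch) of why
the altmap has to reach the allocator: vmemmap_populate_hugepages() forwards it to
vmemmap_alloc_block_buf(), which picks the backing storage for the new struct pages.
The wrapper name below is hypothetical; only vmemmap_alloc_block_buf() is a real
kernel API.

/* Sketch: where the memmap for a hot-plugged range comes from. */
static void * __meminit memmap_backing_alloc(int node, struct vmem_altmap *altmap)
{
	/*
	 * altmap != NULL: carve the struct pages out of the hot-added
	 * device itself (e.g. persistent memory) instead of system RAM.
	 * Passing NULL - the bug fixed above - ignores that reservation.
	 */
	return vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
}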
From patchwork Tue May 21 11:48:23 2024
From: Björn Töpel
Subject: [PATCH v3 2/9] riscv: mm: Pre-allocate vmemmap/direct map PGD entries
Date: Tue, 21 May 2024 13:48:23 +0200
Message-Id: <20240521114830.841660-3-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>

The RISC-V port copies the PGD table from init_mm/swapper_pg_dir to all
userland page tables, which means that if the PGD-level table is changed,
the other page tables have to be updated as well. Instead of having PGD
changes ripple out to all tables, the synchronization can be avoided
entirely by pre-allocating the PGD entries/pages at boot. This is
currently done for the bpf/modules and vmalloc PGD regions. Extend this
scheme to the PGD regions touched by memory hotplugging: prepare the
RISC-V port for memory hotplug by pre-allocating the vmemmap/direct map
entries at the PGD level. This wastes roughly 128 4K pages (512 KiB) when
memory hotplugging is enabled in the kernel configuration.

Reviewed-by: Alexandre Ghiti
Signed-off-by: Björn Töpel
---
 arch/riscv/include/asm/kasan.h | 4 ++--
 arch/riscv/mm/init.c           | 7 +++++++
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kasan.h b/arch/riscv/include/asm/kasan.h
index 0b85e363e778..e6a0071bdb56 100644
--- a/arch/riscv/include/asm/kasan.h
+++ b/arch/riscv/include/asm/kasan.h
@@ -6,8 +6,6 @@

 #ifndef __ASSEMBLY__

-#ifdef CONFIG_KASAN
-
 /*
  * The following comment was copied from arm64:
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
@@ -34,6 +32,8 @@
  */
 #define KASAN_SHADOW_START	((KASAN_SHADOW_END - KASAN_SHADOW_SIZE) & PGDIR_MASK)
 #define KASAN_SHADOW_END	MODULES_LOWEST_VADDR
+
+#ifdef CONFIG_KASAN
 #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)

 void kasan_init(void);
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index b66f846e7634..c98010ede810 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -27,6 +27,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1488,10 +1489,16 @@ static void __init preallocate_pgd_pages_range(unsigned long start, unsigned long end,
	panic("Failed to pre-allocate %s pages for %s area\n", lvl, area);
 }

+#define PAGE_END KASAN_SHADOW_START
+
 void __init pgtable_cache_init(void)
 {
	preallocate_pgd_pages_range(VMALLOC_START, VMALLOC_END, "vmalloc");
	if (IS_ENABLED(CONFIG_MODULES))
		preallocate_pgd_pages_range(MODULES_VADDR, MODULES_END, "bpf/modules");
+	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
+		preallocate_pgd_pages_range(VMEMMAP_START, VMEMMAP_END, "vmemmap");
+		preallocate_pgd_pages_range(PAGE_OFFSET, PAGE_END, "direct map");
+	}
 }
 #endif
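A simplified sketch of the pre-allocation idea (editorial illustration; the real code
is preallocate_pgd_pages_range() in arch/riscv/mm/init.c, and the function name below
is hypothetical): touch every PGD slot covering the range once at boot, so a later
hot-add never installs a new top-level entry that would have to be copied into every
process's page table.

static void __init prealloc_pgd_range_sketch(unsigned long start, unsigned long end)
{
	unsigned long addr;

	for (addr = start; addr < end; addr = pgd_addr_end(addr, end)) {
		pgd_t *pgd = pgd_offset_k(addr);

		/* Allocates and hooks up the next-level table if it is
		 * missing; the page is intentionally never freed. */
		if (!p4d_alloc(&init_mm, pgd, addr))
			panic("Failed to pre-allocate page table page\n");
	}
}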
From patchwork Tue May 21 11:48:24 2024
From: Björn Töpel
Subject: [PATCH v3 3/9] riscv: mm: Change attribute from __init to __meminit for page functions
Date: Tue, 21 May 2024 13:48:24 +0200
Message-Id: <20240521114830.841660-4-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>

Prepare for memory hotplugging support by changing the page table
functions that are used by the upcoming architecture-specific callbacks
from __init to __meminit. Changing the attribute from __init to
__meminit prevents the functions from being discarded after init: the
__meminit attribute keeps them in the kernel text after boot, but only
if memory hotplugging is enabled in the build.

Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
Signed-off-by: Björn Töpel
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/include/asm/mmu.h     |  4 +--
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/riscv/mm/init.c             | 56 ++++++++++++++------------------
 3 files changed, 28 insertions(+), 34 deletions(-)

diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index 947fd60f9051..c9e03e9da3dc 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -31,8 +31,8 @@ typedef struct {
 #define cntx2asid(cntx)		((cntx) & SATP_ASID_MASK)
 #define cntx2version(cntx)	((cntx) & ~SATP_ASID_MASK)

-void __init create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa,
-			       phys_addr_t sz, pgprot_t prot);
+void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+				  pgprot_t prot);
 #endif /* __ASSEMBLY__ */

 #endif /* _ASM_RISCV_MMU_H */
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 58fd7b70b903..7933f493db71 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -162,7 +162,7 @@ struct pt_alloc_ops {
 #endif
 };

-extern struct pt_alloc_ops pt_ops __initdata;
+extern struct pt_alloc_ops pt_ops __meminitdata;

 #ifdef CONFIG_MMU
 /* Number of PGD entries that a user-mode program can use */
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index c98010ede810..c969427eab88 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -295,7 +295,7 @@ static void __init setup_bootmem(void)
 }

 #ifdef CONFIG_MMU
-struct pt_alloc_ops pt_ops __initdata;
+struct pt_alloc_ops pt_ops __meminitdata;

 pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
 pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
@@ -357,7 +357,7 @@ static inline pte_t *__init get_pte_virt_fixmap(phys_addr_t pa)
	return (pte_t *)set_fixmap_offset(FIX_PTE, pa);
 }

-static inline pte_t *__init get_pte_virt_late(phys_addr_t pa)
+static inline pte_t *__meminit get_pte_virt_late(phys_addr_t pa)
 {
	return (pte_t *) __va(pa);
 }
@@ -376,7 +376,7 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)
	return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 }

-static phys_addr_t __init alloc_pte_late(uintptr_t va)
+static phys_addr_t __meminit alloc_pte_late(uintptr_t va)
 {
	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);

@@ -384,9 +384,8 @@ static phys_addr_t __init alloc_pte_late(uintptr_t va)
	return __pa((pte_t *)ptdesc_address(ptdesc));
 }

-static void __init create_pte_mapping(pte_t *ptep,
-				      uintptr_t va, phys_addr_t pa,
-				      phys_addr_t sz, pgprot_t prot)
+static void __meminit create_pte_mapping(pte_t *ptep, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+					 pgprot_t prot)
 {
	uintptr_t pte_idx = pte_index(va);

@@ -440,7 +439,7 @@ static pmd_t *__init get_pmd_virt_fixmap(phys_addr_t pa)
	return (pmd_t *)set_fixmap_offset(FIX_PMD, pa);
 }

-static pmd_t *__init get_pmd_virt_late(phys_addr_t pa)
+static pmd_t *__meminit get_pmd_virt_late(phys_addr_t pa)
 {
	return (pmd_t *) __va(pa);
 }
@@ -457,7 +456,7 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
	return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 }

-static phys_addr_t __init alloc_pmd_late(uintptr_t va)
+static phys_addr_t __meminit alloc_pmd_late(uintptr_t va)
 {
	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);

@@ -465,9 +464,9 @@ static phys_addr_t __init alloc_pmd_late(uintptr_t va)
	return __pa((pmd_t *)ptdesc_address(ptdesc));
 }

-static void __init create_pmd_mapping(pmd_t *pmdp,
-				      uintptr_t va, phys_addr_t pa,
-				      phys_addr_t sz, pgprot_t prot)
+static void __meminit create_pmd_mapping(pmd_t *pmdp,
+					 uintptr_t va, phys_addr_t pa,
+					 phys_addr_t sz, pgprot_t prot)
 {
	pte_t *ptep;
	phys_addr_t pte_phys;
@@ -503,7 +502,7 @@ static pud_t *__init get_pud_virt_fixmap(phys_addr_t pa)
	return (pud_t *)set_fixmap_offset(FIX_PUD, pa);
 }

-static pud_t *__init get_pud_virt_late(phys_addr_t pa)
+static pud_t *__meminit get_pud_virt_late(phys_addr_t pa)
 {
	return (pud_t *)__va(pa);
 }
@@ -521,7 +520,7 @@ static phys_addr_t __init alloc_pud_fixmap(uintptr_t va)
	return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 }

-static phys_addr_t alloc_pud_late(uintptr_t va)
+static phys_addr_t __meminit alloc_pud_late(uintptr_t va)
 {
	unsigned long vaddr;

@@ -541,7 +540,7 @@ static p4d_t *__init get_p4d_virt_fixmap(phys_addr_t pa)
	return (p4d_t *)set_fixmap_offset(FIX_P4D, pa);
 }

-static p4d_t *__init get_p4d_virt_late(phys_addr_t pa)
+static p4d_t *__meminit get_p4d_virt_late(phys_addr_t pa)
 {
	return (p4d_t *)__va(pa);
 }
@@ -559,7 +558,7 @@ static phys_addr_t __init alloc_p4d_fixmap(uintptr_t va)
	return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 }

-static phys_addr_t alloc_p4d_late(uintptr_t va)
+static phys_addr_t __meminit alloc_p4d_late(uintptr_t va)
 {
	unsigned long vaddr;

@@ -568,9 +567,8 @@ static phys_addr_t alloc_p4d_late(uintptr_t va)
	return __pa(vaddr);
 }

-static void __init create_pud_mapping(pud_t *pudp,
-				      uintptr_t va, phys_addr_t pa,
-				      phys_addr_t sz, pgprot_t prot)
+static void __meminit create_pud_mapping(pud_t *pudp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+					 pgprot_t prot)
 {
	pmd_t *nextp;
	phys_addr_t next_phys;
@@ -595,9 +593,8 @@ static void __init create_pud_mapping(pud_t *pudp,
	create_pmd_mapping(nextp, va, pa, sz, prot);
 }

-static void __init create_p4d_mapping(p4d_t *p4dp,
-				      uintptr_t va, phys_addr_t pa,
-				      phys_addr_t sz, pgprot_t prot)
+static void __meminit create_p4d_mapping(p4d_t *p4dp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+					 pgprot_t prot)
 {
	pud_t *nextp;
	phys_addr_t next_phys;
@@ -653,9 +650,8 @@ static void __init create_p4d_mapping(p4d_t *p4dp,
 #define create_pmd_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0)
 #endif /* __PAGETABLE_PMD_FOLDED */

-void __init create_pgd_mapping(pgd_t *pgdp,
-	       uintptr_t va, phys_addr_t pa,
-	       phys_addr_t sz, pgprot_t prot)
+void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
+				  pgprot_t prot)
 {
	pgd_next_t *nextp;
	phys_addr_t next_phys;
@@ -680,8 +676,7 @@ void __init create_pgd_mapping(pgd_t *pgdp,
	create_pgd_next_mapping(nextp, va, pa, sz, prot);
 }

-static uintptr_t __init best_map_size(phys_addr_t pa, uintptr_t va,
-				      phys_addr_t size)
+static uintptr_t __meminit best_map_size(phys_addr_t pa, uintptr_t va, phys_addr_t size)
 {
	if (pgtable_l5_enabled && !(pa & (P4D_SIZE - 1)) && !(va & (P4D_SIZE - 1)) && size >= P4D_SIZE)
@@ -714,7 +709,7 @@ asmlinkage void __init __copy_data(void)
 #endif

 #ifdef CONFIG_STRICT_KERNEL_RWX
-static __init pgprot_t pgprot_from_va(uintptr_t va)
+static __meminit pgprot_t pgprot_from_va(uintptr_t va)
 {
	if (is_va_kernel_text(va))
		return PAGE_KERNEL_READ_EXEC;
@@ -739,7 +734,7 @@ void mark_rodata_ro(void)
			    set_memory_ro);
 }
 #else
-static __init pgprot_t pgprot_from_va(uintptr_t va)
+static __meminit pgprot_t pgprot_from_va(uintptr_t va)
 {
	if (IS_ENABLED(CONFIG_64BIT) && !is_kernel_mapping(va))
		return PAGE_KERNEL;
@@ -1231,9 +1226,8 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
	pt_ops_set_fixmap();
 }

-static void __init create_linear_mapping_range(phys_addr_t start,
-					       phys_addr_t end,
-					       uintptr_t fixed_map_size)
+static void __meminit create_linear_mapping_range(phys_addr_t start, phys_addr_t end,
+						  uintptr_t fixed_map_size)
 {
	phys_addr_t pa;
	uintptr_t va, map_size;
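As a reminder of what the attribute change means in practice (editorial note; see
include/linux/init.h for the real definitions): a function marked __init lives in
.init.text and is freed after boot, while __meminit text is only discarded when
CONFIG_MEMORY_HOTPLUG is disabled.

/* Kept in the kernel image on hotplug-enabled builds, so it may be
 * called again from arch_add_memory() long after boot. */
static void __meminit example_pgtable_helper(void)
{
	/* ... page table setup that must remain callable at hot-add time ... */
}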
From patchwork Tue May 21 11:48:25 2024
From: Björn Töpel
Subject: [PATCH v3 4/9] riscv: mm: Refactor create_linear_mapping_range() for memory hot add
Date: Tue, 21 May 2024 13:48:25 +0200
Message-Id: <20240521114830.841660-5-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>

Add a parameter to the direct map setup function, so it can be used in
arch_add_memory() later.

Reviewed-by: Alexandre Ghiti
Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
Signed-off-by: Björn Töpel
---
 arch/riscv/mm/init.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index c969427eab88..6f72b0b2b854 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1227,7 +1227,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
 }

 static void __meminit create_linear_mapping_range(phys_addr_t start, phys_addr_t end,
-						  uintptr_t fixed_map_size)
+						  uintptr_t fixed_map_size, const pgprot_t *pgprot)
 {
	phys_addr_t pa;
	uintptr_t va, map_size;
@@ -1238,7 +1238,7 @@ static void __meminit create_linear_mapping_range(phys_addr_t start, phys_addr_t
		best_map_size(pa, va, end - pa);

		create_pgd_mapping(swapper_pg_dir, va, pa, map_size,
-				   pgprot_from_va(va));
+				   pgprot ? *pgprot : pgprot_from_va(va));
	}
 }

@@ -1282,22 +1282,19 @@ static void __init create_linear_mapping_page_table(void)
		if (end >= __pa(PAGE_OFFSET) + memory_limit)
			end = __pa(PAGE_OFFSET) + memory_limit;

-		create_linear_mapping_range(start, end, 0);
+		create_linear_mapping_range(start, end, 0, NULL);
	}

 #ifdef CONFIG_STRICT_KERNEL_RWX
-	create_linear_mapping_range(ktext_start, ktext_start + ktext_size, 0);
-	create_linear_mapping_range(krodata_start,
-				    krodata_start + krodata_size, 0);
+	create_linear_mapping_range(ktext_start, ktext_start + ktext_size, 0, NULL);
+	create_linear_mapping_range(krodata_start, krodata_start + krodata_size, 0, NULL);

	memblock_clear_nomap(ktext_start, ktext_size);
	memblock_clear_nomap(krodata_start, krodata_size);
 #endif

 #ifdef CONFIG_KFENCE
-	create_linear_mapping_range(kfence_pool,
-				    kfence_pool + KFENCE_POOL_SIZE,
-				    PAGE_SIZE);
+	create_linear_mapping_range(kfence_pool, kfence_pool + KFENCE_POOL_SIZE, PAGE_SIZE, NULL);

	memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
 #endif
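The new parameter keeps the boot-time behaviour (derive protections from the virtual
address) while letting a later caller force a fixed pgprot. Two illustrative call
sites (the second one only appears in the next patch of the series):

/* Boot: NULL means "derive the pgprot per VA", exactly as before. */
create_linear_mapping_range(start, end, 0, NULL);

/* Memory hot add: honour the protections requested by the MHP core. */
create_linear_mapping_range(start, start + size, 0, &params->pgprot);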
From patchwork Tue May 21 11:48:26 2024
From: Björn Töpel
Subject: [PATCH v3 5/9] riscv: mm: Add memory hotplugging support
Date: Tue, 21 May 2024 13:48:26 +0200
Message-Id: <20240521114830.841660-6-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>

For an architecture to support memory hotplugging, a couple of callbacks
need to be implemented:

arch_add_memory()
This callback is responsible for adding the physical memory into the
direct map, and for calling into the generic memory hotplug code via
__add_pages(), which adds the corresponding struct page entries and
updates the vmemmap mapping.

arch_remove_memory()
This is the inverse of the callback above.

vmemmap_free()
This function tears down the vmemmap mappings (if
CONFIG_SPARSEMEM_VMEMMAP is enabled), and also deallocates the backing
vmemmap pages. Note that for persistent memory, an alternative allocator
for the backing pages can be used: the vmem_altmap. This means that when
the backing pages are cleared, extra care is needed so that the correct
deallocation method is used.

arch_get_mappable_range()
This function returns the PA range that the direct map can map. Used by
the MHP internals for sanity checks.

The page table unmap/teardown functions are heavily based on code from
the x86 tree. The same remove_pgd_mapping() function is used in both
vmemmap_free() and arch_remove_memory(), but in the latter function the
backing pages are not removed.

Signed-off-by: Björn Töpel
---
 arch/riscv/mm/init.c | 261 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 261 insertions(+)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 6f72b0b2b854..6693b742bf2f 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1493,3 +1493,264 @@ void __init pgtable_cache_init(void)
	}
 }
 #endif
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+static void __meminit free_pagetable(struct page *page, int order)
+{
+	unsigned int nr_pages = 1 << order;
+
+	/*
+	 * vmemmap/direct page tables can be reserved, if added at
+	 * boot.
+	 */
+	if (PageReserved(page)) {
+		__ClearPageReserved(page);
+		while (nr_pages--)
+			free_reserved_page(page++);
+		return;
+	}
+
+	free_pages((unsigned long)page_address(page), order);
+}
+
+static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd)
+{
+	pte_t *pte;
+	int i;
+
+	for (i = 0; i < PTRS_PER_PTE; i++) {
+		pte = pte_start + i;
+		if (!pte_none(*pte))
+			return;
+	}
+
+	free_pagetable(pmd_page(*pmd), 0);
+	pmd_clear(pmd);
+}
+
+static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud)
+{
+	pmd_t *pmd;
+	int i;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		pmd = pmd_start + i;
+		if (!pmd_none(*pmd))
+			return;
+	}
+
+	free_pagetable(pud_page(*pud), 0);
+	pud_clear(pud);
+}
+
+static void __meminit free_pud_table(pud_t *pud_start, p4d_t *p4d)
+{
+	pud_t *pud;
+	int i;
+
+	for (i = 0; i < PTRS_PER_PUD; i++) {
+		pud = pud_start + i;
+		if (!pud_none(*pud))
+			return;
+	}
+
+	free_pagetable(p4d_page(*p4d), 0);
+	p4d_clear(p4d);
+}
+
+static void __meminit free_vmemmap_storage(struct page *page, size_t size,
+					   struct vmem_altmap *altmap)
+{
+	if (altmap)
+		vmem_altmap_free(altmap, size >> PAGE_SHIFT);
+	else
+		free_pagetable(page, get_order(size));
+}
+
+static void __meminit remove_pte_mapping(pte_t *pte_base, unsigned long addr, unsigned long end,
+					 bool is_vmemmap, struct vmem_altmap *altmap)
+{
+	unsigned long next;
+	pte_t *ptep, pte;
+
+	for (; addr < end; addr = next) {
+		next = (addr + PAGE_SIZE) & PAGE_MASK;
+		if (next > end)
+			next = end;
+
+		ptep = pte_base + pte_index(addr);
+		pte = READ_ONCE(*ptep);
+
+		if (!pte_present(*ptep))
+			continue;
+
+		pte_clear(&init_mm, addr, ptep);
+		if (is_vmemmap)
+			free_vmemmap_storage(pte_page(pte), PAGE_SIZE, altmap);
+	}
+}
+
+static void __meminit remove_pmd_mapping(pmd_t *pmd_base, unsigned long addr, unsigned long end,
+					 bool is_vmemmap, struct vmem_altmap *altmap)
+{
+	unsigned long next;
+	pte_t *pte_base;
+	pmd_t *pmdp, pmd;
+
+	for (; addr < end; addr = next) {
+		next = pmd_addr_end(addr, end);
+		pmdp = pmd_base + pmd_index(addr);
+		pmd = READ_ONCE(*pmdp);
+
+		if (!pmd_present(pmd))
+			continue;
+
+		if (pmd_leaf(pmd)) {
+			pmd_clear(pmdp);
+			if (is_vmemmap)
+				free_vmemmap_storage(pmd_page(pmd), PMD_SIZE, altmap);
+			continue;
+		}
+
+		pte_base = (pte_t *)pmd_page_vaddr(*pmdp);
+		remove_pte_mapping(pte_base, addr, next, is_vmemmap, altmap);
+		free_pte_table(pte_base, pmdp);
+	}
+}
+
+static void __meminit remove_pud_mapping(pud_t *pud_base, unsigned long addr, unsigned long end,
+					 bool is_vmemmap, struct vmem_altmap *altmap)
+{
+	unsigned long next;
+	pud_t *pudp, pud;
+	pmd_t *pmd_base;
+
+	for (; addr < end; addr = next) {
+		next = pud_addr_end(addr, end);
+		pudp = pud_base + pud_index(addr);
+		pud = READ_ONCE(*pudp);
+
+		if (!pud_present(pud))
+			continue;
+
+		if (pud_leaf(pud)) {
+			if (pgtable_l4_enabled) {
+				pud_clear(pudp);
+				if (is_vmemmap)
+					free_vmemmap_storage(pud_page(pud), PUD_SIZE, altmap);
+			}
+			continue;
+		}
+
+		pmd_base = pmd_offset(pudp, 0);
+		remove_pmd_mapping(pmd_base, addr, next, is_vmemmap, altmap);
+
+		if (pgtable_l4_enabled)
+			free_pmd_table(pmd_base, pudp);
+	}
+}
+
+static void __meminit remove_p4d_mapping(p4d_t *p4d_base, unsigned long addr, unsigned long end,
+					 bool is_vmemmap, struct vmem_altmap *altmap)
+{
+	unsigned long next;
+	p4d_t *p4dp, p4d;
+	pud_t *pud_base;
+
+	for (; addr < end; addr = next) {
+		next = p4d_addr_end(addr, end);
+		p4dp = p4d_base + p4d_index(addr);
+		p4d = READ_ONCE(*p4dp);
+
+		if (!p4d_present(p4d))
+			continue;
+
+		if (p4d_leaf(p4d)) {
+			if (pgtable_l5_enabled) {
+				p4d_clear(p4dp);
+				if (is_vmemmap)
+					free_vmemmap_storage(p4d_page(p4d), P4D_SIZE, altmap);
+			}
+			continue;
+		}
+
+		pud_base = pud_offset(p4dp, 0);
+		remove_pud_mapping(pud_base, addr, next, is_vmemmap, altmap);
+
+		if (pgtable_l5_enabled)
+			free_pud_table(pud_base, p4dp);
+	}
+}
+
+static void __meminit remove_pgd_mapping(unsigned long va, unsigned long end, bool is_vmemmap,
+					 struct vmem_altmap *altmap)
+{
+	unsigned long addr, next;
+	p4d_t *p4d_base;
+	pgd_t *pgd;
+
+	for (addr = va; addr < end; addr = next) {
+		next = pgd_addr_end(addr, end);
+		pgd = pgd_offset_k(addr);
+
+		if (!pgd_present(*pgd))
+			continue;
+
+		if (pgd_leaf(*pgd))
+			continue;
+
+		p4d_base = p4d_offset(pgd, 0);
+		remove_p4d_mapping(p4d_base, addr, next, is_vmemmap, altmap);
+	}
+
+	flush_tlb_all();
+}
+
+static void __meminit remove_linear_mapping(phys_addr_t start, u64 size)
+{
+	unsigned long va = (unsigned long)__va(start);
+	unsigned long end = (unsigned long)__va(start + size);
+
+	remove_pgd_mapping(va, end, false, NULL);
+}
+
+struct range arch_get_mappable_range(void)
+{
+	struct range mhp_range;
+
+	mhp_range.start = __pa(PAGE_OFFSET);
+	mhp_range.end = __pa(PAGE_END - 1);
+	return mhp_range;
+}
+
+int __ref arch_add_memory(int nid, u64 start, u64 size, struct mhp_params *params)
+{
+	int ret = 0;
+
+	create_linear_mapping_range(start, start + size, 0, &params->pgprot);
+	ret = __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT, params);
+	if (ret) {
+		remove_linear_mapping(start, size);
+		goto out;
+	}
+
+	max_pfn = PFN_UP(start + size);
+	max_low_pfn = max_pfn;
+
+out:
+	flush_tlb_all();
+	return ret;
+}
+
+void __ref arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+{
+	__remove_pages(start >> PAGE_SHIFT, size >> PAGE_SHIFT, altmap);
+	remove_linear_mapping(start, size);
+	flush_tlb_all();
+}
+
+void __ref vmemmap_free(unsigned long start, unsigned long end, struct vmem_altmap *altmap)
+{
+	remove_pgd_mapping(start, end, true, altmap);
+}
+#endif /* CONFIG_MEMORY_HOTPLUG */
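To show how these callbacks are reached, a hedged sketch of an in-kernel consumer
(the function name is hypothetical; add_memory(), MHP_NONE and
offline_and_remove_memory() are the real generic MHP interfaces used by drivers such
as virtio-mem and ACPI memory devices):

#include <linux/memory_hotplug.h>

/* Sketch: hot-add a block, then tear it down again. add_memory() ends up in
 * arch_add_memory() (direct map + struct pages), and the removal path ends up
 * in arch_remove_memory()/vmemmap_free(). */
static int example_hot_add_remove(int nid, u64 start, u64 size)
{
	int ret;

	/* Range must lie inside arch_get_mappable_range() and be
	 * memory-block aligned. */
	ret = add_memory(nid, start, size, MHP_NONE);
	if (ret)
		return ret;

	/* ... blocks are onlined (e.g. via sysfs or udev) and used ... */

	return offline_and_remove_memory(start, size);
}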
From patchwork Tue May 21 11:48:27 2024
From: Björn Töpel
Subject: [PATCH v3 6/9] riscv: mm: Take memory hotplug read-lock during kernel page table dump
Date: Tue, 21 May 2024 13:48:27 +0200
Message-Id: <20240521114830.841660-7-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>

During memory hot remove, the ptdump functionality can end up touching
stale data. Avoid any potential crashes (or worse) by holding the memory
hotplug read-lock while traversing the page table.

This change is analogous to arm64's commit bf2b59f60ee1 ("arm64/mm: Hold
memory hotplug lock while walking for kernel page table dump").

Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
Signed-off-by: Björn Töpel
---
 arch/riscv/mm/ptdump.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
index 1289cc6d3700..9d5f657a251b 100644
--- a/arch/riscv/mm/ptdump.c
+++ b/arch/riscv/mm/ptdump.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -370,7 +371,9 @@ bool ptdump_check_wx(void)

 static int ptdump_show(struct seq_file *m, void *v)
 {
+	get_online_mems();
	ptdump_walk(m, m->private);
+	put_online_mems();

	return 0;
 }
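The pattern above in isolation (editorial sketch): any walker of the kernel page
tables that can race with hot-remove brackets the traversal with the memory hotplug
read-side lock.

static void walk_kernel_tables_safely(void)
{
	get_online_mems();	/* blocks concurrent memory hot-remove */
	/* ... walk swapper_pg_dir and dump the entries ... */
	put_online_mems();
}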
From patchwork Tue May 21 11:48:28 2024
From: Björn Töpel
Subject: [PATCH v3 7/9] riscv: Enable memory hotplugging for RISC-V
Date: Tue, 21 May 2024 13:48:28 +0200
Message-Id: <20240521114830.841660-8-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>

Enable ARCH_ENABLE_MEMORY_HOTPLUG and ARCH_ENABLE_MEMORY_HOTREMOVE for
RISC-V.

Signed-off-by: Björn Töpel
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/Kconfig | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index fe5281398543..2724dc2af29f 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -16,6 +16,8 @@ config RISCV
	select ACPI_REDUCED_HARDWARE_ONLY if ACPI
	select ARCH_DMA_DEFAULT_COHERENT
	select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
+	select ARCH_ENABLE_MEMORY_HOTPLUG if SPARSEMEM_VMEMMAP && 64BIT && MMU
+	select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
	select ARCH_HAS_BINFMT_FLAT
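An illustrative configuration that exercises the new selects (not part of the patch;
the option names are the generic memory hotplug ones):

# 64-bit RISC-V build with sparse vmemmap
CONFIG_SPARSEMEM_VMEMMAP=y
# Enabled via ARCH_ENABLE_MEMORY_HOTPLUG/HOTREMOVE above
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y
# Optional: let a hypervisor drive hot(un)plug (enabled in patch 8/9)
CONFIG_VIRTIO_MEM=m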
From patchwork Tue May 21 11:48:29 2024
From: Björn Töpel
Subject: [PATCH v3 8/9] virtio-mem: Enable virtio-mem for RISC-V
Date: Tue, 21 May 2024 13:48:29 +0200
Message-Id: <20240521114830.841660-9-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>

Now that RISC-V has memory hotplugging support, virtio-mem can be used
on the platform.

Acked-by: David Hildenbrand
Signed-off-by: Björn Töpel
---
 drivers/virtio/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index c17193544268..4e5cebf1b82a 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -122,7 +122,7 @@ config VIRTIO_BALLOON

 config VIRTIO_MEM
	tristate "Virtio mem driver"
-	depends on X86_64 || ARM64
+	depends on X86_64 || ARM64 || RISCV
	depends on VIRTIO
	depends on MEMORY_HOTPLUG
	depends on MEMORY_HOTREMOVE
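A rough usage sketch, from memory and not verified against a particular QEMU version
(the virtio-mem device and property names may need adjusting): the hypervisor exposes
a resizable memory device, and the guest kernel hot-adds or removes blocks as the
requested size changes.

qemu-system-riscv64 -machine virt -m 4G,maxmem=8G ... \
    -object memory-backend-ram,id=vmem0,size=4G \
    -device virtio-mem-pci,id=vm0,memdev=vmem0,requested-size=0

# Guest side: newly added blocks show up under /sys/devices/system/memory/
# and are onlined either by udev or manually, e.g.:
#   echo online > /sys/devices/system/memory/memoryNN/state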
From patchwork Tue May 21 11:48:30 2024
From: Björn Töpel
Subject: [PATCH v3 9/9] riscv: mm: Add support for ZONE_DEVICE
Date: Tue, 21 May 2024 13:48:30 +0200
Message-Id: <20240521114830.841660-10-bjorn@kernel.org>
In-Reply-To: <20240521114830.841660-1-bjorn@kernel.org>

ZONE_DEVICE pages need DEVMAP PTE support to function
(ARCH_HAS_PTE_DEVMAP). Claim another RSW (reserved for software) bit in
the PTE for the DEVMAP mark, add the corresponding helpers, and enable
ARCH_HAS_PTE_DEVMAP for riscv64.

Signed-off-by: Björn Töpel
Reviewed-by: Alexandre Ghiti
---
 arch/riscv/Kconfig                    |  1 +
 arch/riscv/include/asm/pgtable-64.h   | 20 ++++++++++++++++++++
 arch/riscv/include/asm/pgtable-bits.h |  1 +
 arch/riscv/include/asm/pgtable.h      | 17 +++++++++++++++++
 4 files changed, 39 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 2724dc2af29f..0b74698c63c7 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -36,6 +36,7 @@ config RISCV
	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
	select ARCH_HAS_PMEM_API
	select ARCH_HAS_PREPARE_SYNC_CORE_CMD
+	select ARCH_HAS_PTE_DEVMAP if 64BIT && MMU
	select ARCH_HAS_PTE_SPECIAL
	select ARCH_HAS_SET_DIRECT_MAP if MMU
	select ARCH_HAS_SET_MEMORY if MMU
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 221a5c1ee287..c67a9bbfd010 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -400,4 +400,24 @@ static inline struct page *pgd_page(pgd_t pgd)
 #define p4d_offset p4d_offset
 p4d_t *p4d_offset(pgd_t *pgd, unsigned long address);

+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline int pte_devmap(pte_t pte);
+static inline pte_t pmd_pte(pmd_t pmd);
+
+static inline int pmd_devmap(pmd_t pmd)
+{
+	return pte_devmap(pmd_pte(pmd));
+}
+
+static inline int pud_devmap(pud_t pud)
+{
+	return 0;
+}
+
+static inline int pgd_devmap(pgd_t pgd)
+{
+	return 0;
+}
+#endif
+
 #endif /* _ASM_RISCV_PGTABLE_64_H */
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
index 179bd4afece4..a8f5205cea54 100644
--- a/arch/riscv/include/asm/pgtable-bits.h
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -19,6 +19,7 @@
 #define _PAGE_SOFT	(3 << 8)	/* Reserved for software */

 #define _PAGE_SPECIAL	(1 << 8)	/* RSW: 0x1 */
+#define _PAGE_DEVMAP	(1 << 9)	/* RSW, devmap */
 #define _PAGE_TABLE	_PAGE_PRESENT

 /*
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 7933f493db71..02fadc276064 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -387,6 +387,13 @@ static inline int pte_special(pte_t pte)
	return pte_val(pte) & _PAGE_SPECIAL;
 }

+#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
+static inline int pte_devmap(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_DEVMAP;
+}
+#endif
+
 /* static inline pte_t pte_rdprotect(pte_t pte) */

 static inline pte_t pte_wrprotect(pte_t pte)
@@ -428,6 +435,11 @@ static inline pte_t pte_mkspecial(pte_t pte)
	return __pte(pte_val(pte) | _PAGE_SPECIAL);
 }

+static inline pte_t pte_mkdevmap(pte_t pte)
+{
+	return __pte(pte_val(pte) | _PAGE_DEVMAP);
+}
+
 static inline pte_t pte_mkhuge(pte_t pte)
 {
	return pte;
@@ -711,6 +723,11 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd)
	return pte_pmd(pte_mkdirty(pmd_pte(pmd)));
 }

+static inline pmd_t pmd_mkdevmap(pmd_t pmd)
+{
+	return pte_pmd(pte_mkdevmap(pmd_pte(pmd)));
+}
+
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd)
 {