From patchwork Tue Sep 12 14:15:58 2023
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13381763
Date: Tue, 12 Sep 2023 14:15:58 +0000
In-Reply-To: <20230912141549.278777-63-ardb@google.com>
References: <20230912141549.278777-63-ardb@google.com>
Message-ID: <20230912141549.278777-71-ardb@google.com>
Subject: [PATCH v4 08/61] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland,
 Ryan Roberts, Anshuman Khandual, Kees Cook, Joey Gouly

From: Ard Biesheuvel

The placement and size of the vmemmap region in the kernel virtual address
space is currently derived from the base-2 order of the size of a struct page.
This makes for nicely aligned constants with lots of leading 0xf and trailing
0x0 digits, but given that the actual struct pages are indexed as an ordinary
array, the resulting region is severely overdimensioned when the size of a
struct page is just over a power of 2.

This doesn't matter today, but once we enable 52-bit virtual addressing for
4k page configurations, the vmemmap region may take up almost half of the
upper VA region with the current struct page upper bound at 64 bytes. And
once we enable KMSAN or other features that push the size of a struct page
over 64 bytes, we will run out of VMALLOC space entirely.

So instead, let's derive the region size from the actual size of a struct
page, and place the entire region 1 GB from the top of the VA space, where it
still doesn't share any lower level translation table entries with the fixmap.

Signed-off-by: Ard Biesheuvel
---
A worked example of the resulting region sizes follows the diff below.

 arch/arm64/include/asm/memory.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 2745bed8ae5b..b49575a92afc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -30,8 +30,8 @@
  * keep a constant PAGE_OFFSET and "fallback" to using the higher end
  * of the VMEMMAP where 52-bit support is not available in hardware.
  */
-#define VMEMMAP_SHIFT	(PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)
-#define VMEMMAP_SIZE	((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) >> VMEMMAP_SHIFT)
+#define VMEMMAP_RANGE	(_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET)
+#define VMEMMAP_SIZE	((VMEMMAP_RANGE >> PAGE_SHIFT) * sizeof(struct page))
 
 /*
  * PAGE_OFFSET - the virtual address of the start of the linear map, at the
@@ -47,8 +47,8 @@
 #define MODULES_END	(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR	(_PAGE_END(VA_BITS_MIN))
 #define MODULES_VSIZE	(SZ_2G)
-#define VMEMMAP_START	(-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
-#define VMEMMAP_END	(VMEMMAP_START + VMEMMAP_SIZE)
+#define VMEMMAP_START	(VMEMMAP_END - VMEMMAP_SIZE)
+#define VMEMMAP_END	(-UL(SZ_1G))
 #define PCI_IO_START	(VMEMMAP_END + SZ_8M)
 #define PCI_IO_END	(PCI_IO_START + PCI_IO_SIZE)
 #define FIXADDR_TOP	(-UL(SZ_8M))
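
To make the over-dimensioning concrete, here is a minimal standalone C sketch
(not part of the patch) that contrasts the old order-based sizing with the
exact sizing used above. It assumes 4k pages (PAGE_SHIFT == 12), a 48-bit
kernel VA configuration in which the linear map spans half of the kernel VA
space, and a hypothetical 72-byte struct page; the macro names below are
local stand-ins for the kernel ones, not the kernel definitions themselves.

/*
 * Illustrative userspace sketch only: assumed example values, local
 * stand-ins for the kernel macros.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT		12		/* assumed: 4k pages */
#define VA_SPAN			(1ULL << 47)	/* assumed linear map span for 48-bit VA */
#define STRUCT_PAGE_SIZE	72ULL		/* hypothetical: just over a power of 2 */
#define STRUCT_PAGE_MAX_SHIFT	7		/* order_base_2(72) == 7, i.e. 128 bytes */

int main(void)
{
	/* old scheme: reserve 2^STRUCT_PAGE_MAX_SHIFT bytes per page in the span */
	uint64_t order_based = VA_SPAN >> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT);
	/* new scheme: reserve exactly sizeof(struct page) bytes per page */
	uint64_t exact = (VA_SPAN >> PAGE_SHIFT) * STRUCT_PAGE_SIZE;

	printf("order-based vmemmap size: %llu GiB\n",
	       (unsigned long long)(order_based >> 30));
	printf("exact vmemmap size:       %llu GiB\n",
	       (unsigned long long)(exact >> 30));
	return 0;
}

With these assumed values the order-based formula reserves 4096 GiB of VA
space while the exact calculation needs only 2304 GiB, an overshoot of
128/72 (roughly 1.8x) that disappears once the region is dimensioned from
sizeof(struct page) directly.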