From patchwork Wed Dec 13 08:40:30 2023
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13490550
Date: Wed, 13 Dec 2023 09:40:30 +0100
In-Reply-To: <20231213084024.2367360-9-ardb@google.com>
Message-ID: <20231213084024.2367360-14-ardb@google.com>
Subject: [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland

From: Ard Biesheuvel

The placement and size of the vmemmap region in the kernel virtual
address space is currently derived from the base2 order of the size of a
struct page. This makes for nicely aligned constants with lots of
leading 0xf and trailing 0x0 digits, but given that the actual struct
pages are indexed as an ordinary array, the resulting region is severely
overdimensioned when the size of a struct page is just over a power of 2.
This doesn't matter today, but once we enable 52-bit virtual addressing
for 4k pages configurations, the vmemmap region may take up almost half
of the upper VA region with the current struct page upper bound at 64
bytes. And once we enable KMSAN or other features that push the size of
a struct page over 64 bytes, we will run out of VMALLOC space entirely.

So instead, let's derive the region size from the actual size of a
struct page, and place the entire region 1 GB from the top of the VA
space, where it still doesn't share any lower level translation table
entries with the fixmap.

Acked-by: Mark Rutland
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/memory.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 2745bed8ae5b..b49575a92afc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -30,8 +30,8 @@
  * keep a constant PAGE_OFFSET and "fallback" to using the higher end
  * of the VMEMMAP where 52-bit support is not available in hardware.
  */
-#define VMEMMAP_SHIFT	(PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)
-#define VMEMMAP_SIZE	((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) >> VMEMMAP_SHIFT)
+#define VMEMMAP_RANGE	(_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET)
+#define VMEMMAP_SIZE	((VMEMMAP_RANGE >> PAGE_SHIFT) * sizeof(struct page))
 
 /*
  * PAGE_OFFSET - the virtual address of the start of the linear map, at the
@@ -47,8 +47,8 @@
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(_PAGE_END(VA_BITS_MIN))
 #define MODULES_VSIZE		(SZ_2G)
-#define VMEMMAP_START		(-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
-#define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
+#define VMEMMAP_START		(VMEMMAP_END - VMEMMAP_SIZE)
+#define VMEMMAP_END		(-UL(SZ_1G))
 #define PCI_IO_START		(VMEMMAP_END + SZ_8M)
 #define PCI_IO_END		(PCI_IO_START + PCI_IO_SIZE)
 #define FIXADDR_TOP		(-UL(SZ_8M))