From patchwork Thu Dec 5 10:37:21 2024
From: Xu Lu
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, ardb@kernel.org, anup@brainfault.org, atishp@atishpatra.org
Cc: xieyongji@bytedance.com, lihangjing@bytedance.com, punit.agrawal@bytedance.com, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Xu Lu
Subject: [RFC PATCH v2 13/21] riscv: mm: Adjust PGDIR/P4D/PUD/PMD_SHIFT
Date: Thu, 5 Dec 2024 18:37:21 +0800
Message-Id: <20241205103729.14798-14-luxu.kernel@bytedance.com>
In-Reply-To: <20241205103729.14798-1-luxu.kernel@bytedance.com>
References: <20241205103729.14798-1-luxu.kernel@bytedance.com>

This commit adjusts the SHIFT of the pte index bits at each page table level.
For example, in SV39, the traditional va is split as:

 ----------------------------------------------
 | pgd index | pmd index | pte index | offset |
 ----------------------------------------------
 | 38     30 | 29     21 | 20     12 | 11   0 |
 ----------------------------------------------

When we choose 64K as the basic software page size, the va is split as:

 ----------------------------------------------
 | pgd index | pmd index | pte index | offset |
 ----------------------------------------------
 | 38     34 | 33     25 | 24     16 | 15   0 |
 ----------------------------------------------

Signed-off-by: Xu Lu
---
 arch/riscv/include/asm/pgtable-32.h |  2 +-
 arch/riscv/include/asm/pgtable-64.h | 16 ++++++++--------
 arch/riscv/include/asm/pgtable.h    | 19 +++++++++++++++++++
 3 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
index 2959ab72f926..e0c5c62f88d9 100644
--- a/arch/riscv/include/asm/pgtable-32.h
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -11,7 +11,7 @@
 #include

 /* Size of region mapped by a page global directory */
-#define PGDIR_SHIFT     22
+#define PGDIR_SHIFT     (10 + PAGE_SHIFT)
 #define PGDIR_SIZE      (_AC(1, UL) << PGDIR_SHIFT)
 #define PGDIR_MASK      (~(PGDIR_SIZE - 1))

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 2649cc90b14e..26c13484e721 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -13,9 +13,9 @@
 extern bool pgtable_l4_enabled;
 extern bool pgtable_l5_enabled;

-#define PGDIR_SHIFT_L3  30
-#define PGDIR_SHIFT_L4  39
-#define PGDIR_SHIFT_L5  48
+#define PGDIR_SHIFT_L3  (9 + 9 + PAGE_SHIFT)
+#define PGDIR_SHIFT_L4  (9 + PGDIR_SHIFT_L3)
+#define PGDIR_SHIFT_L5  (9 + PGDIR_SHIFT_L4)
 #define PGDIR_SHIFT     (pgtable_l5_enabled ? PGDIR_SHIFT_L5 : \
 		(pgtable_l4_enabled ? PGDIR_SHIFT_L4 : PGDIR_SHIFT_L3))

 /* Size of region mapped by a page global directory */
@@ -23,20 +23,20 @@ extern bool pgtable_l5_enabled;
 #define PGDIR_MASK      (~(PGDIR_SIZE - 1))

 /* p4d is folded into pgd in case of 4-level page table */
-#define P4D_SHIFT_L3    30
-#define P4D_SHIFT_L4    39
-#define P4D_SHIFT_L5    39
+#define P4D_SHIFT_L3    (9 + 9 + PAGE_SHIFT)
+#define P4D_SHIFT_L4    (9 + P4D_SHIFT_L3)
+#define P4D_SHIFT_L5    (9 + P4D_SHIFT_L3)
 #define P4D_SHIFT       (pgtable_l5_enabled ? P4D_SHIFT_L5 : \
 		(pgtable_l4_enabled ? P4D_SHIFT_L4 : P4D_SHIFT_L3))
 #define P4D_SIZE        (_AC(1, UL) << P4D_SHIFT)
 #define P4D_MASK        (~(P4D_SIZE - 1))

 /* pud is folded into pgd in case of 3-level page table */
-#define PUD_SHIFT       30
+#define PUD_SHIFT       (9 + 9 + PAGE_SHIFT)
 #define PUD_SIZE        (_AC(1, UL) << PUD_SHIFT)
 #define PUD_MASK        (~(PUD_SIZE - 1))

-#define PMD_SHIFT       21
+#define PMD_SHIFT       (9 + PAGE_SHIFT)
 /* Size of region mapped by a page middle directory */
 #define PMD_SIZE        (_AC(1, UL) << PMD_SHIFT)
 #define PMD_MASK        (~(PMD_SIZE - 1))

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 9fa16c0c20aa..0fd9bd4e0d13 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -30,12 +30,27 @@
 /* Number of entries in the page table */
 #define PTRS_PER_PTE (PAGE_SIZE / sizeof(pte_t))

+#ifdef CONFIG_RISCV_USE_SW_PAGE
+
+/*
+ * PGDIR_SHIFT grows as PAGE_SIZE grows. To avoid the va exceeding its limit,
+ * pgd index bits should be cut. Thus we use HW_PAGE_SIZE instead.
+ */
+#define __PTRS_PER_PGD	(HW_PAGE_SIZE / sizeof(pgd_t))
+#define pgd_index(a)	(((a) >> PGDIR_SHIFT) & (__PTRS_PER_PGD - 1))
+
+#define KERN_VIRT_SIZE	((__PTRS_PER_PGD / 2 * PGDIR_SIZE) / 2)
+
+#else
+
 /*
  * Half of the kernel address space (1/4 of the entries of the page global
  * directory) is for the direct mapping.
  */
 #define KERN_VIRT_SIZE      ((PTRS_PER_PGD / 2 * PGDIR_SIZE) / 2)

+#endif /* CONFIG_RISCV_USE_SW_PAGE */
+
 #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
 #define VMALLOC_END      PAGE_OFFSET
 #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
@@ -1304,7 +1319,11 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
  * Similarly for SV57, bits 63-57 must be equal to bit 56.
  */
 #ifdef CONFIG_64BIT
+#ifdef CONFIG_RISCV_USE_SW_PAGE
+#define TASK_SIZE_64	(PGDIR_SIZE * __PTRS_PER_PGD / 2)
+#else
 #define TASK_SIZE_64	(PGDIR_SIZE * PTRS_PER_PGD / 2)
+#endif
 #define TASK_SIZE_MAX	LONG_MAX

 #ifdef CONFIG_COMPAT