From patchwork Fri Apr 28 09:50:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hou Wenlong X-Patchwork-Id: 13226167 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6108C77B61 for ; Fri, 28 Apr 2023 09:53:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345490AbjD1Jxq (ORCPT ); Fri, 28 Apr 2023 05:53:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60688 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345782AbjD1JxY (ORCPT ); Fri, 28 Apr 2023 05:53:24 -0400 Received: from out0-219.mail.aliyun.com (out0-219.mail.aliyun.com [140.205.0.219]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CC24959F9; Fri, 28 Apr 2023 02:53:02 -0700 (PDT) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R181e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047198;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=17;SR=0;TI=SMTPD_---.STCEPV9_1682675548; Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com fp:SMTPD_---.STCEPV9_1682675548) by smtp.aliyun-inc.com; Fri, 28 Apr 2023 17:52:29 +0800 From: "Hou Wenlong" To: linux-kernel@vger.kernel.org Cc: "Thomas Garnier" , "Lai Jiangshan" , "Kees Cook" , "Hou Wenlong" , "Juergen Gross" , "Boris Ostrovsky" , "Darren Hart" , "Andy Shevchenko" , "Thomas Gleixner" , "Ingo Molnar" , "Borislav Petkov" , "Dave Hansen" , , "H. Peter Anvin" , , Subject: [PATCH RFC 15/43] x86/PVH: Use fixed_percpu_data to set up GS base Date: Fri, 28 Apr 2023 17:50:55 +0800 Message-Id: <4fdb800ce6f1a2315918cb02eec3efbec1032cb8.1682673543.git.houwenlong.hwl@antgroup.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: platform-driver-x86@vger.kernel.org startup_64() and startup_xen() both use fixed_percpu_data to set up GS base. So for consitency, use it too in PVH entry. Signed-off-by: Hou Wenlong Cc: Thomas Garnier Cc: Lai Jiangshan Cc: Kees Cook --- arch/x86/platform/pvh/head.S | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S index c4365a05ab83..b093996b7e19 100644 --- a/arch/x86/platform/pvh/head.S +++ b/arch/x86/platform/pvh/head.S @@ -96,7 +96,7 @@ SYM_CODE_START_LOCAL(pvh_start_xen) 1: /* Set base address in stack canary descriptor. 
*/ mov $MSR_GS_BASE,%ecx - mov $_pa(canary), %eax + mov $_pa(INIT_PER_CPU_VAR(fixed_percpu_data)), %eax xor %edx, %edx wrmsr @@ -156,8 +156,6 @@ SYM_DATA_START_LOCAL(gdt_start) SYM_DATA_END_LABEL(gdt_start, SYM_L_LOCAL, gdt_end) .balign 16 -SYM_DATA_LOCAL(canary, .fill 48, 1, 0) - SYM_DATA_START_LOCAL(early_stack) .fill BOOT_STACK_SIZE, 1, 0 SYM_DATA_END_LABEL(early_stack, SYM_L_LOCAL, early_stack_end) From patchwork Fri Apr 28 09:50:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hou Wenlong X-Patchwork-Id: 13226168 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D6317C77B61 for ; Fri, 28 Apr 2023 09:54:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345476AbjD1JyG (ORCPT ); Fri, 28 Apr 2023 05:54:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60150 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345828AbjD1Jx4 (ORCPT ); Fri, 28 Apr 2023 05:53:56 -0400 Received: from out0-216.mail.aliyun.com (out0-216.mail.aliyun.com [140.205.0.216]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C7B584696; Fri, 28 Apr 2023 02:53:26 -0700 (PDT) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R161e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047194;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=35;SR=0;TI=SMTPD_---.STFQGKD_1682675559; Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com fp:SMTPD_---.STFQGKD_1682675559) by smtp.aliyun-inc.com; Fri, 28 Apr 2023 17:52:41 +0800 From: "Hou Wenlong" To: linux-kernel@vger.kernel.org Cc: "Thomas Garnier" , "Lai Jiangshan" , "Kees Cook" , "Hou Wenlong" , "Brian Gerst" , "Thomas Gleixner" , "Ingo Molnar" , "Borislav Petkov" , "Dave Hansen" , , "H. Peter Anvin" , "Andy Lutomirski" , "Juergen Gross" , "Boris Ostrovsky" , "Darren Hart" , "Andy Shevchenko" , "Nathan Chancellor" , "Nick Desaulniers" , "Tom Rix" , "Peter Zijlstra" , " =?utf-8?q?Mike_Rapoport_=28IBM?= =?utf-8?q?=29?= " , "Ashok Raj" , "Rick Edgecombe" , "Catalin Marinas" , "Guo Ren" , "Greg Kroah-Hartman" , "Jason A. Donenfeld" , "Pawan Gupta" , "Kim Phillips" , "David Woodhouse" , "Josh Poimboeuf" , , , Subject: [PATCH RFC 16/43] x86-64: Use per-cpu stack canary if supported by compiler Date: Fri, 28 Apr 2023 17:50:56 +0800 Message-Id: <7cee0c83225ffd8cf8fd0065bea9348f6db3b12a.1682673543.git.houwenlong.hwl@antgroup.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: platform-driver-x86@vger.kernel.org From: Brian Gerst From: Brian Gerst If the compiler supports it, use a standard per-cpu variable for the stack protector instead of the old fixed location. Keep the fixed location code for compatibility with older compilers. 
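To illustrate the difference between the two schemes, here is a stand-alone C sketch (the *_sketch names are only for this example and are not kernel symbols; the 40-byte offset matches the BUILD_BUG_ON kept in stackprotector.h below):

#include <stddef.h>

/*
 * Fixed layout: GCC hardcodes the canary load as %gs:40, so the first
 * per-cpu object must keep stack_canary exactly 40 bytes in.
 */
struct fixed_percpu_data_sketch {
	char		gs_base[40];	/* %gs:0, also used as the GS base value */
	unsigned long	stack_canary;	/* %gs:40, read by -fstack-protector */
};
_Static_assert(offsetof(struct fixed_percpu_data_sketch, stack_canary) == 40,
	       "canary must sit at %gs:40 for the fixed layout");

/*
 * Per-cpu variable layout: with
 *	-mstack-protector-guard-reg=gs
 *	-mstack-protector-guard-symbol=__stack_chk_guard
 * the compiler emits %gs:__stack_chk_guard instead, so the canary can be an
 * ordinary per-cpu variable placed anywhere in the per-cpu area.
 */
unsigned long __stack_chk_guard_sketch;
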
[Hou Wenlong: Disable it on Clang, adapt new code change and adapt missing GS set up path in pvh_start_xen()] Signed-off-by: Brian Gerst Co-developed-by: Hou Wenlong Signed-off-by: Hou Wenlong Cc: Thomas Garnier Cc: Lai Jiangshan Cc: Kees Cook --- arch/x86/Kconfig | 12 ++++++++++++ arch/x86/Makefile | 21 ++++++++++++++------- arch/x86/entry/entry_64.S | 6 +++++- arch/x86/include/asm/processor.h | 17 ++++++++++++----- arch/x86/include/asm/stackprotector.h | 16 +++++++--------- arch/x86/kernel/asm-offsets_64.c | 2 +- arch/x86/kernel/cpu/common.c | 15 +++++++-------- arch/x86/kernel/head_64.S | 16 ++++++++++------ arch/x86/kernel/vmlinux.lds.S | 4 +++- arch/x86/platform/pvh/head.S | 8 ++++++++ arch/x86/xen/xen-head.S | 14 +++++++++----- 11 files changed, 88 insertions(+), 43 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 68e5da464b96..55cce8cdf9bd 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -410,6 +410,18 @@ config CC_HAS_SANE_STACKPROTECTOR the compiler produces broken code or if it does not let us control the segment on 32-bit kernels. +config CC_HAS_CUSTOMIZED_STACKPROTECTOR + bool + # Although clang supports -mstack-protector-guard-reg option, it + # would generate GOT reference for __stack_chk_guard even with + # -fno-PIE flag. + default y if (!CC_IS_CLANG && $(cc-option,-mstack-protector-guard-reg=gs)) + +config STACKPROTECTOR_FIXED + bool + depends on X86_64 && STACKPROTECTOR + default !CC_HAS_CUSTOMIZED_STACKPROTECTOR + menu "Processor type and features" config SMP diff --git a/arch/x86/Makefile b/arch/x86/Makefile index b39975977c03..57e4dbbf501d 100644 --- a/arch/x86/Makefile +++ b/arch/x86/Makefile @@ -111,13 +111,7 @@ ifeq ($(CONFIG_X86_32),y) # temporary until string.h is fixed KBUILD_CFLAGS += -ffreestanding - ifeq ($(CONFIG_STACKPROTECTOR),y) - ifeq ($(CONFIG_SMP),y) - KBUILD_CFLAGS += -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard - else - KBUILD_CFLAGS += -mstack-protector-guard=global - endif - endif + percpu_seg := fs else BITS := 64 UTS_MACHINE := x86_64 @@ -167,6 +161,19 @@ else KBUILD_CFLAGS += -mcmodel=kernel KBUILD_RUSTFLAGS += -Cno-redzone=y KBUILD_RUSTFLAGS += -Ccode-model=kernel + + percpu_seg := gs +endif + +ifeq ($(CONFIG_STACKPROTECTOR),y) + ifneq ($(CONFIG_STACKPROTECTOR_FIXED),y) + ifeq ($(CONFIG_SMP),y) + KBUILD_CFLAGS += -mstack-protector-guard-reg=$(percpu_seg) \ + -mstack-protector-guard-symbol=__stack_chk_guard + else + KBUILD_CFLAGS += -mstack-protector-guard=global + endif + endif endif # diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 6f2297ebb15f..df79b7aa65bb 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -229,6 +229,10 @@ SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL) int3 SYM_CODE_END(entry_SYSCALL_64) +#ifdef CONFIG_STACKPROTECTOR_FIXED +#define __stack_chk_guard fixed_percpu_data + FIXED_stack_canary +#endif + /* * %rdi: prev task * %rsi: next task @@ -252,7 +256,7 @@ SYM_FUNC_START(__switch_to_asm) #ifdef CONFIG_STACKPROTECTOR movq TASK_stack_canary(%rsi), %rbx - movq %rbx, PER_CPU_VAR(fixed_percpu_data) + FIXED_stack_canary + movq %rbx, PER_CPU_VAR(__stack_chk_guard) #endif /* diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 2a5ec5750ba7..3890f609569d 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -379,6 +379,8 @@ struct irq_stack { } __aligned(IRQ_STACK_SIZE); #ifdef CONFIG_X86_64 + +#ifdef CONFIG_STACKPROTECTOR_FIXED struct fixed_percpu_data { /* * 
GCC hardcodes the stack canary as %gs:40. Since the @@ -394,21 +396,26 @@ struct fixed_percpu_data { DECLARE_PER_CPU_FIRST(struct fixed_percpu_data, fixed_percpu_data) __visible; DECLARE_INIT_PER_CPU(fixed_percpu_data); +#endif /* CONFIG_STACKPROTECTOR_FIXED */ static inline unsigned long cpu_kernelmode_gs_base(int cpu) { +#ifdef CONFIG_STACKPROTECTOR_FIXED return (unsigned long)per_cpu(fixed_percpu_data.gs_base, cpu); +#else +#ifdef CONFIG_SMP + return per_cpu_offset(cpu); +#else + return 0; +#endif +#endif } extern asmlinkage void ignore_sysret(void); /* Save actual FS/GS selectors and bases to current->thread */ void current_save_fsgs(void); -#else /* X86_64 */ -#ifdef CONFIG_STACKPROTECTOR -DECLARE_PER_CPU(unsigned long, __stack_chk_guard); -#endif -#endif /* !X86_64 */ +#endif /* X86_64 */ struct perf_event; diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h index 00473a650f51..24aa0e2ad0dd 100644 --- a/arch/x86/include/asm/stackprotector.h +++ b/arch/x86/include/asm/stackprotector.h @@ -36,6 +36,12 @@ #include +#ifdef CONFIG_STACKPROTECTOR_FIXED +#define __stack_chk_guard fixed_percpu_data.stack_canary +#else +DECLARE_PER_CPU(unsigned long, __stack_chk_guard); +#endif + /* * Initialize the stackprotector canary value. * @@ -51,25 +57,17 @@ static __always_inline void boot_init_stack_canary(void) { unsigned long canary = get_random_canary(); -#ifdef CONFIG_X86_64 +#ifdef CONFIG_STACKPROTECTOR_FIXED BUILD_BUG_ON(offsetof(struct fixed_percpu_data, stack_canary) != 40); #endif current->stack_canary = canary; -#ifdef CONFIG_X86_64 - this_cpu_write(fixed_percpu_data.stack_canary, canary); -#else this_cpu_write(__stack_chk_guard, canary); -#endif } static inline void cpu_init_stack_canary(int cpu, struct task_struct *idle) { -#ifdef CONFIG_X86_64 - per_cpu(fixed_percpu_data.stack_canary, cpu) = idle->stack_canary; -#else per_cpu(__stack_chk_guard, cpu) = idle->stack_canary; -#endif } #else /* STACKPROTECTOR */ diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c index bb65371ea9df..f39baf90126c 100644 --- a/arch/x86/kernel/asm-offsets_64.c +++ b/arch/x86/kernel/asm-offsets_64.c @@ -56,7 +56,7 @@ int main(void) BLANK(); -#ifdef CONFIG_STACKPROTECTOR +#ifdef CONFIG_STACKPROTECTOR_FIXED OFFSET(FIXED_stack_canary, fixed_percpu_data, stack_canary); BLANK(); #endif diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 3ea06b0b4570..972b1babf731 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -2051,10 +2051,6 @@ DEFINE_PER_CPU_ALIGNED(struct pcpu_hot, pcpu_hot) = { EXPORT_PER_CPU_SYMBOL(pcpu_hot); #ifdef CONFIG_X86_64 -DEFINE_PER_CPU_FIRST(struct fixed_percpu_data, - fixed_percpu_data) __aligned(PAGE_SIZE) __visible; -EXPORT_PER_CPU_SYMBOL_GPL(fixed_percpu_data); - static void wrmsrl_cstar(unsigned long val) { /* @@ -2102,15 +2098,18 @@ void syscall_init(void) X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_RF| X86_EFLAGS_AC|X86_EFLAGS_ID); } - -#else /* CONFIG_X86_64 */ +#endif /* CONFIG_X86_64 */ #ifdef CONFIG_STACKPROTECTOR +#ifdef CONFIG_STACKPROTECTOR_FIXED +DEFINE_PER_CPU_FIRST(struct fixed_percpu_data, + fixed_percpu_data) __aligned(PAGE_SIZE) __visible; +EXPORT_PER_CPU_SYMBOL_GPL(fixed_percpu_data); +#else DEFINE_PER_CPU(unsigned long, __stack_chk_guard); EXPORT_PER_CPU_SYMBOL(__stack_chk_guard); #endif - -#endif /* CONFIG_X86_64 */ +#endif /* * Clear all 6 debug registers: diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index 21f0556d3ac0..61f1873d0ff7 
100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -68,7 +68,13 @@ SYM_CODE_START_NOALIGN(startup_64) /* Setup GSBASE to allow stack canary access for C code */ movl $MSR_GS_BASE, %ecx +#if defined(CONFIG_STACKPROTECTOR_FIXED) leaq INIT_PER_CPU_VAR(fixed_percpu_data)(%rip), %rdx +#elif defined(CONFIG_SMP) + movabs $__per_cpu_load, %rdx +#else + xorl %edx, %edx +#endif movl %edx, %eax shrq $32, %rdx wrmsr @@ -283,16 +289,14 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL) movl %eax,%fs movl %eax,%gs - /* Set up %gs. - * - * The base of %gs always points to fixed_percpu_data. If the - * stack protector canary is enabled, it is located at %gs:40. + /* + * Set up GS base. * Note that, on SMP, the boot cpu uses init data section until * the per cpu areas are set up. */ movl $MSR_GS_BASE,%ecx -#ifndef CONFIG_SMP - leaq INIT_PER_CPU_VAR(fixed_percpu_data)(%rip), %rdx +#if !defined(CONFIG_SMP) && defined(CONFIG_STACKPROTECTOR_FIXED) + leaq __per_cpu_load(%rip), %rdx #endif movl %edx, %eax shrq $32, %rdx diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 25f155205770..f02dcde9f8a8 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -500,12 +500,14 @@ SECTIONS */ #define INIT_PER_CPU(x) init_per_cpu__##x = ABSOLUTE(x) + __per_cpu_load INIT_PER_CPU(gdt_page); -INIT_PER_CPU(fixed_percpu_data); INIT_PER_CPU(irq_stack_backing_store); +#ifdef CONFIG_STACKPROTECTOR_FIXED +INIT_PER_CPU(fixed_percpu_data); #ifdef CONFIG_SMP . = ASSERT((fixed_percpu_data == 0), "fixed_percpu_data is not at start of per-cpu area"); #endif +#endif #endif /* CONFIG_X86_64 */ diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S index b093996b7e19..5842fe0e4f96 100644 --- a/arch/x86/platform/pvh/head.S +++ b/arch/x86/platform/pvh/head.S @@ -96,8 +96,16 @@ SYM_CODE_START_LOCAL(pvh_start_xen) 1: /* Set base address in stack canary descriptor. */ mov $MSR_GS_BASE,%ecx +#if defined(CONFIG_STACKPROTECTOR_FIXED) mov $_pa(INIT_PER_CPU_VAR(fixed_percpu_data)), %eax xor %edx, %edx +#elif defined(CONFIG_SMP) + mov $__per_cpu_load, %rax + cdq +#else + xor %eax, %eax + xor %edx, %edx +#endif wrmsr call xen_prepare_pvh diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S index 643d02900fbb..09eaf59e8066 100644 --- a/arch/x86/xen/xen-head.S +++ b/arch/x86/xen/xen-head.S @@ -51,15 +51,19 @@ SYM_CODE_START(startup_xen) leaq (__end_init_task - PTREGS_SIZE)(%rip), %rsp - /* Set up %gs. - * - * The base of %gs always points to fixed_percpu_data. If the - * stack protector canary is enabled, it is located at %gs:40. + /* + * Set up GS base. * Note that, on SMP, the boot cpu uses init data section until * the per cpu areas are set up. 
*/ movl $MSR_GS_BASE,%ecx - movq $INIT_PER_CPU_VAR(fixed_percpu_data),%rax +#if defined(CONFIG_STACKPROTECTOR_FIXED) + leaq INIT_PER_CPU_VAR(fixed_percpu_data)(%rip), %rdx +#elif defined(CONFIG_SMP) + movabs $__per_cpu_load, %rdx +#else + xorl %eax, %eax +#endif cdq wrmsr From patchwork Fri Apr 28 09:51:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hou Wenlong X-Patchwork-Id: 13226183 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08F12C77B61 for ; Fri, 28 Apr 2023 09:56:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345912AbjD1J4c (ORCPT ); Fri, 28 Apr 2023 05:56:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36560 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345981AbjD1Jz4 (ORCPT ); Fri, 28 Apr 2023 05:55:56 -0400 Received: from out187-16.us.a.mail.aliyun.com (out187-16.us.a.mail.aliyun.com [47.90.187.16]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DE2866181; Fri, 28 Apr 2023 02:55:15 -0700 (PDT) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R211e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047212;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=17;SR=0;TI=SMTPD_---.STFoGYl_1682675602; Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com fp:SMTPD_---.STFoGYl_1682675602) by smtp.aliyun-inc.com; Fri, 28 Apr 2023 17:53:23 +0800 From: "Hou Wenlong" To: linux-kernel@vger.kernel.org Cc: "Thomas Garnier" , "Lai Jiangshan" , "Kees Cook" , "Hou Wenlong" , "Juergen Gross" , "Boris Ostrovsky" , "Darren Hart" , "Andy Shevchenko" , "Thomas Gleixner" , "Ingo Molnar" , "Borislav Petkov" , "Dave Hansen" , , "H. Peter Anvin" , , Subject: [PATCH RFC 29/43] x86/PVH: Adapt PVH booting for PIE support Date: Fri, 28 Apr 2023 17:51:09 +0800 Message-Id: X-Mailer: git-send-email 2.31.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: platform-driver-x86@vger.kernel.org If PIE is enabled, all symbol references would be RIP-relative. However, PVH booting runs in low address space, which could cause wrong x86_init callbacks assignment. Since init_top_pgt has building high kernel address mapping, let PVH booting runs in high address space to make all things right. PVH booting assumes that no relocation happened. Since the kernel compile address is still in top 2G, so it is allowed to use R_X86_64_32S for symbol references in pvh_start_xen(). Signed-off-by: Hou Wenlong Cc: Thomas Garnier Cc: Lai Jiangshan Cc: Kees Cook --- arch/x86/platform/pvh/head.S | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S index 5842fe0e4f96..09518d4de042 100644 --- a/arch/x86/platform/pvh/head.S +++ b/arch/x86/platform/pvh/head.S @@ -94,6 +94,13 @@ SYM_CODE_START_LOCAL(pvh_start_xen) /* 64-bit entry point. */ .code64 1: +#ifdef CONFIG_X86_PIE + movabs $2f, %rax + ANNOTATE_RETPOLINE_SAFE + jmp *%rax +2: + ANNOTATE_NOENDBR // above +#endif /* Set base address in stack canary descriptor. 
*/ mov $MSR_GS_BASE,%ecx #if defined(CONFIG_STACKPROTECTOR_FIXED) @@ -149,9 +156,15 @@ SYM_CODE_END(pvh_start_xen) .section ".init.data","aw" .balign 8 SYM_DATA_START_LOCAL(gdt) + /* + * Use an ASM_PTR (quad on x64) for _pa(gdt_start) because PIE requires + * a pointer size storage value before applying the relocation. On + * 32-bit _ASM_PTR will be a long which is aligned the space needed for + * relocation. + */ .word gdt_end - gdt_start - .long _pa(gdt_start) - .word 0 + _ASM_PTR _pa(gdt_start) + .balign 8 SYM_DATA_END(gdt) SYM_DATA_START_LOCAL(gdt_start) .quad 0x0000000000000000 /* NULL descriptor */ From patchwork Fri Apr 28 09:51:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hou Wenlong X-Patchwork-Id: 13226184 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66FA7C77B7C for ; Fri, 28 Apr 2023 09:56:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345926AbjD1J4k (ORCPT ); Fri, 28 Apr 2023 05:56:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36980 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345925AbjD1J4G (ORCPT ); Fri, 28 Apr 2023 05:56:06 -0400 Received: from out187-16.us.a.mail.aliyun.com (out187-16.us.a.mail.aliyun.com [47.90.187.16]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4317A5584; Fri, 28 Apr 2023 02:55:18 -0700 (PDT) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R171e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047213;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=44;SR=0;TI=SMTPD_---.STCEPzg_1682675659; Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com fp:SMTPD_---.STCEPzg_1682675659) by smtp.aliyun-inc.com; Fri, 28 Apr 2023 17:54:20 +0800 From: "Hou Wenlong" To: linux-kernel@vger.kernel.org Cc: "Thomas Garnier" , "Lai Jiangshan" , "Kees Cook" , "Hou Wenlong" , "Alexander Potapenko" , "Marco Elver" , "Dmitry Vyukov" , "Thomas Gleixner" , "Ingo Molnar" , "Borislav Petkov" , "Dave Hansen" , , "H. Peter Anvin" , "Andy Lutomirski" , "Peter Zijlstra" , "Andrey Ryabinin" , "Andrey Konovalov" , "Vincenzo Frascino" , "Ard Biesheuvel" , "Darren Hart" , "Andy Shevchenko" , "Andrew Morton" , " =?utf-8?q?Mike_Rapoport_=28I?= =?utf-8?q?BM=29?= " , "Guo Ren" , "Stafford Horne" , "David Hildenbrand" , "Juergen Gross" , "Anshuman Khandual" , "Josh Poimboeuf" , "Pasha Tatashin" , "David Woodhouse" , "Brian Gerst" , "XueBing Chen" , "Yuntao Wang" , "Jonathan McDowell" , "Jason A. Donenfeld" , "Dan Williams" , "Jane Chu" , "Davidlohr Bueso" , "Sean Christopherson" , , , Subject: [PATCH RFC 42/43] x86/pie: Allow kernel image to be relocated in top 512G Date: Fri, 28 Apr 2023 17:51:22 +0800 Message-Id: X-Mailer: git-send-email 2.31.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: platform-driver-x86@vger.kernel.org For PIE kernel image, it could be relocated at any address. To be simplified, treat the 2G area which including kernel image, modules area and fixmap area as a whole, allow it to be relocated in top 512G. After that, the relocated kernel address may be below than __START_KERNEL_map, so use a global variable to store the base of relocated kernel image. And pa/va transformation of kernel image address is adapted. 
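As a stand-alone sketch of the adapted va->pa transformation (it mirrors __phys_addr_nodebug() with __START_KERNEL_map replaced by the runtime kernel_map_base; the base, phys_base and offsets below are only example assumptions):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_OFFSET_SKETCH	0xffff888000000000ULL	/* 4-level direct map base */

static uint64_t kernel_map_base = 0xffffffff80000000ULL;	/* may move with PIE */
static uint64_t phys_base       = 0x1000000ULL;		/* example load address */

static uint64_t phys_addr_sketch(uint64_t x)
{
	uint64_t y = x - kernel_map_base;

	/*
	 * Carry-flag trick: if x was below kernel_map_base the subtraction
	 * wraps, so y > x and x must be a direct-map (PAGE_OFFSET based)
	 * address rather than a kernel image address.
	 */
	return y + ((x > y) ? phys_base : (kernel_map_base - PAGE_OFFSET_SKETCH));
}

int main(void)
{
	/* kernel image address -> phys_base relative */
	printf("%#" PRIx64 "\n", phys_addr_sketch(kernel_map_base + 0x200000));
	/* direct map address -> PAGE_OFFSET relative */
	printf("%#" PRIx64 "\n", phys_addr_sketch(PAGE_OFFSET_SKETCH + 0x200000));
	return 0;
}
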
Suggested-by: Lai Jiangshan Signed-off-by: Hou Wenlong Cc: Thomas Garnier Cc: Kees Cook --- arch/x86/include/asm/kmsan.h | 6 ++--- arch/x86/include/asm/page_64.h | 8 +++---- arch/x86/include/asm/page_64_types.h | 8 +++++++ arch/x86/include/asm/pgtable_64_types.h | 10 ++++---- arch/x86/kernel/head64.c | 32 ++++++++++++++++++------- arch/x86/kernel/head_64.S | 12 ++++++++++ arch/x86/kernel/setup.c | 6 +++++ arch/x86/mm/dump_pagetables.c | 9 ++++--- arch/x86/mm/init_64.c | 8 +++---- arch/x86/mm/kasan_init_64.c | 4 ++-- arch/x86/mm/pat/set_memory.c | 2 +- arch/x86/mm/physaddr.c | 14 +++++------ arch/x86/platform/efi/efi_thunk_64.S | 4 ++++ 13 files changed, 87 insertions(+), 36 deletions(-) diff --git a/arch/x86/include/asm/kmsan.h b/arch/x86/include/asm/kmsan.h index 8fa6ac0e2d76..a635d825342d 100644 --- a/arch/x86/include/asm/kmsan.h +++ b/arch/x86/include/asm/kmsan.h @@ -63,16 +63,16 @@ static inline bool kmsan_phys_addr_valid(unsigned long addr) static inline bool kmsan_virt_addr_valid(void *addr) { unsigned long x = (unsigned long)addr; - unsigned long y = x - __START_KERNEL_map; + unsigned long y = x - KERNEL_MAP_BASE; - /* use the carry flag to determine if x was < __START_KERNEL_map */ + /* use the carry flag to determine if x was < KERNEL_MAP_BASE */ if (unlikely(x > y)) { x = y + phys_base; if (y >= KERNEL_IMAGE_SIZE) return false; } else { - x = y + (__START_KERNEL_map - PAGE_OFFSET); + x = y + (KERNEL_MAP_BASE - PAGE_OFFSET); /* carry flag will be set if starting x was >= PAGE_OFFSET */ if ((x > y) || !kmsan_phys_addr_valid(x)) diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h index cc6b8e087192..b8692e6cc939 100644 --- a/arch/x86/include/asm/page_64.h +++ b/arch/x86/include/asm/page_64.h @@ -20,10 +20,10 @@ extern unsigned long vmemmap_base; static __always_inline unsigned long __phys_addr_nodebug(unsigned long x) { - unsigned long y = x - __START_KERNEL_map; + unsigned long y = x - KERNEL_MAP_BASE; - /* use the carry flag to determine if x was < __START_KERNEL_map */ - x = y + ((x > y) ? phys_base : (__START_KERNEL_map - PAGE_OFFSET)); + /* use the carry flag to determine if x was < KERNEL_MAP_BASE */ + x = y + ((x > y) ? phys_base : (KERNEL_MAP_BASE - PAGE_OFFSET)); return x; } @@ -34,7 +34,7 @@ extern unsigned long __phys_addr_symbol(unsigned long); #else #define __phys_addr(x) __phys_addr_nodebug(x) #define __phys_addr_symbol(x) \ - ((unsigned long)(x) - __START_KERNEL_map + phys_base) + ((unsigned long)(x) - KERNEL_MAP_BASE + phys_base) #endif #define __phys_reloc_hide(x) (x) diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h index e9e2c3ba5923..933d37845064 100644 --- a/arch/x86/include/asm/page_64_types.h +++ b/arch/x86/include/asm/page_64_types.h @@ -4,6 +4,8 @@ #ifndef __ASSEMBLY__ #include + +extern unsigned long kernel_map_base; #endif #ifdef CONFIG_KASAN @@ -49,6 +51,12 @@ #define __START_KERNEL_map _AC(0xffffffff80000000, UL) +#ifdef CONFIG_X86_PIE +#define KERNEL_MAP_BASE kernel_map_base +#else +#define KERNEL_MAP_BASE __START_KERNEL_map +#endif /* CONFIG_X86_PIE */ + /* See Documentation/x86/x86_64/mm.rst for a description of the memory map. 
*/ #define __PHYSICAL_MASK_SHIFT 52 diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index 38bf837e3554..3d6951128a07 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -187,14 +187,16 @@ extern unsigned int ptrs_per_p4d; #define KMSAN_MODULES_ORIGIN_START (KMSAN_MODULES_SHADOW_START + MODULES_LEN) #endif /* CONFIG_KMSAN */ -#define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE) +#define RAW_MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE) +#define MODULES_VADDR (KERNEL_MAP_BASE + KERNEL_IMAGE_SIZE) /* The module sections ends with the start of the fixmap */ #ifndef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP -# define MODULES_END _AC(0xffffffffff000000, UL) +# define RAW_MODULES_END _AC(0xffffffffff000000, UL) #else -# define MODULES_END _AC(0xfffffffffe000000, UL) +# define RAW_MODULES_END _AC(0xfffffffffe000000, UL) #endif -#define MODULES_LEN (MODULES_END - MODULES_VADDR) +#define MODULES_LEN (RAW_MODULES_END - RAW_MODULES_VADDR) +#define MODULES_END (MODULES_VADDR + MODULES_LEN) #define ESPFIX_PGD_ENTRY _AC(-2, UL) #define ESPFIX_BASE_ADDR (ESPFIX_PGD_ENTRY << P4D_SHIFT) diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index c5cd61aab8ae..234ac796863a 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -66,6 +66,11 @@ unsigned long vmemmap_base __ro_after_init = __VMEMMAP_BASE_L4; EXPORT_SYMBOL(vmemmap_base); #endif +#ifdef CONFIG_X86_PIE +unsigned long kernel_map_base __ro_after_init = __START_KERNEL_map; +EXPORT_SYMBOL(kernel_map_base); +#endif + /* * GDT used on the boot CPU before switching to virtual addresses. */ @@ -193,6 +198,7 @@ unsigned long __head __startup_64(unsigned long physaddr, { unsigned long load_delta, *p; unsigned long pgtable_flags; + unsigned long kernel_map_base_offset = 0; pgdval_t *pgd; p4dval_t *p4d; pudval_t *pud; @@ -252,6 +258,13 @@ unsigned long __head __startup_64(unsigned long physaddr, pud[511] += load_delta; } +#ifdef CONFIG_X86_PIE + kernel_map_base_offset = text_base & PUD_MASK; + *fixup_long(&kernel_map_base, physaddr) = kernel_map_base_offset; + kernel_map_base_offset -= __START_KERNEL_map; + *fixup_long(&__FIXADDR_TOP, physaddr) += kernel_map_base_offset; +#endif + pmd = fixup_pointer(level2_fixmap_pgt, physaddr); for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--) pmd[i] += load_delta; @@ -328,7 +341,7 @@ unsigned long __head __startup_64(unsigned long physaddr, /* fixup pages that are part of the kernel image */ for (; i <= pmd_index(end_base); i++) if (pmd[i] & _PAGE_PRESENT) - pmd[i] += load_delta; + pmd[i] += load_delta + kernel_map_base_offset; /* invalidate pages after the kernel image */ for (; i < PTRS_PER_PMD; i++) @@ -338,7 +351,8 @@ unsigned long __head __startup_64(unsigned long physaddr, * Fixup phys_base - remove the memory encryption mask to obtain * the true physical address. 
*/ - *fixup_long(&phys_base, physaddr) += load_delta - sme_get_me_mask(); + *fixup_long(&phys_base, physaddr) += load_delta + kernel_map_base_offset - + sme_get_me_mask(); return sme_postprocess_startup(bp, pmd); } @@ -376,7 +390,7 @@ bool __init __early_make_pgtable(unsigned long address, pmdval_t pmd) if (!pgtable_l5_enabled()) p4d_p = pgd_p; else if (pgd) - p4d_p = (p4dval_t *)((pgd & PTE_PFN_MASK) + __START_KERNEL_map - phys_base); + p4d_p = (p4dval_t *)((pgd & PTE_PFN_MASK) + KERNEL_MAP_BASE - phys_base); else { if (next_early_pgt >= EARLY_DYNAMIC_PAGE_TABLES) { reset_early_page_tables(); @@ -385,13 +399,13 @@ bool __init __early_make_pgtable(unsigned long address, pmdval_t pmd) p4d_p = (p4dval_t *)early_dynamic_pgts[next_early_pgt++]; memset(p4d_p, 0, sizeof(*p4d_p) * PTRS_PER_P4D); - *pgd_p = (pgdval_t)p4d_p - __START_KERNEL_map + phys_base + _KERNPG_TABLE; + *pgd_p = (pgdval_t)p4d_p - KERNEL_MAP_BASE + phys_base + _KERNPG_TABLE; } p4d_p += p4d_index(address); p4d = *p4d_p; if (p4d) - pud_p = (pudval_t *)((p4d & PTE_PFN_MASK) + __START_KERNEL_map - phys_base); + pud_p = (pudval_t *)((p4d & PTE_PFN_MASK) + KERNEL_MAP_BASE - phys_base); else { if (next_early_pgt >= EARLY_DYNAMIC_PAGE_TABLES) { reset_early_page_tables(); @@ -400,13 +414,13 @@ bool __init __early_make_pgtable(unsigned long address, pmdval_t pmd) pud_p = (pudval_t *)early_dynamic_pgts[next_early_pgt++]; memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD); - *p4d_p = (p4dval_t)pud_p - __START_KERNEL_map + phys_base + _KERNPG_TABLE; + *p4d_p = (p4dval_t)pud_p - KERNEL_MAP_BASE + phys_base + _KERNPG_TABLE; } pud_p += pud_index(address); pud = *pud_p; if (pud) - pmd_p = (pmdval_t *)((pud & PTE_PFN_MASK) + __START_KERNEL_map - phys_base); + pmd_p = (pmdval_t *)((pud & PTE_PFN_MASK) + KERNEL_MAP_BASE - phys_base); else { if (next_early_pgt >= EARLY_DYNAMIC_PAGE_TABLES) { reset_early_page_tables(); @@ -415,7 +429,7 @@ bool __init __early_make_pgtable(unsigned long address, pmdval_t pmd) pmd_p = (pmdval_t *)early_dynamic_pgts[next_early_pgt++]; memset(pmd_p, 0, sizeof(*pmd_p) * PTRS_PER_PMD); - *pud_p = (pudval_t)pmd_p - __START_KERNEL_map + phys_base + _KERNPG_TABLE; + *pud_p = (pudval_t)pmd_p - KERNEL_MAP_BASE + phys_base + _KERNPG_TABLE; } pmd_p[pmd_index(address)] = pmd; @@ -497,6 +511,7 @@ static void __init copy_bootdata(char *real_mode_data) asmlinkage __visible void __init __noreturn x86_64_start_kernel(char * real_mode_data) { +#ifndef CONFIG_X86_PIE /* * Build-time sanity checks on the kernel image and module * area mappings. 
(these are purely build-time and produce no code) @@ -509,6 +524,7 @@ asmlinkage __visible void __init __noreturn x86_64_start_kernel(char * real_mode BUILD_BUG_ON(!(MODULES_VADDR > __START_KERNEL)); MAYBE_BUILD_BUG_ON(!(((MODULES_END - 1) & PGDIR_MASK) == (__START_KERNEL & PGDIR_MASK))); +#endif cr4_init_shadow(); diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index 19cb2852238b..feb14304d1ed 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -130,7 +130,13 @@ SYM_CODE_START_NOALIGN(startup_64) popq %rsi /* Form the CR3 value being sure to include the CR3 modifier */ +#ifdef CONFIG_X86_PIE + movq kernel_map_base(%rip), %rdi + movabs $early_top_pgt, %rcx + subq %rdi, %rcx +#else movabs $(early_top_pgt - __START_KERNEL_map), %rcx +#endif addq %rcx, %rax jmp 1f SYM_CODE_END(startup_64) @@ -179,7 +185,13 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL) #endif /* Form the CR3 value being sure to include the CR3 modifier */ +#ifdef CONFIG_X86_PIE + movq kernel_map_base(%rip), %rdi + movabs $init_top_pgt, %rcx + subq %rdi, %rcx +#else movabs $(init_top_pgt - __START_KERNEL_map), %rcx +#endif addq %rcx, %rax 1: diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index 16babff771bd..e68ca78b829c 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -808,11 +808,17 @@ static int dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p) { if (kaslr_enabled()) { +#ifdef CONFIG_X86_PIE + pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n", + kaslr_offset(), + __START_KERNEL); +#else pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n", kaslr_offset(), __START_KERNEL, __START_KERNEL_map, MODULES_VADDR-1); +#endif } else { pr_emerg("Kernel Offset: disabled\n"); } diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index 81aa1c0b39cc..d5c6f61242aa 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -102,9 +102,9 @@ static struct addr_marker address_markers[] = { #ifdef CONFIG_EFI [EFI_END_NR] = { EFI_VA_END, "EFI Runtime Services" }, #endif - [HIGH_KERNEL_NR] = { __START_KERNEL_map, "High Kernel Mapping" }, - [MODULES_VADDR_NR] = { MODULES_VADDR, "Modules" }, - [MODULES_END_NR] = { MODULES_END, "End Modules" }, + [HIGH_KERNEL_NR] = { 0UL, "High Kernel Mapping" }, + [MODULES_VADDR_NR] = { 0UL, "Modules" }, + [MODULES_END_NR] = { 0UL, "End Modules" }, [FIXADDR_START_NR] = { 0UL, "Fixmap Area" }, [END_OF_SPACE_NR] = { -1, NULL } }; @@ -475,6 +475,9 @@ static int __init pt_dump_init(void) address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START; address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END; #endif + address_markers[HIGH_KERNEL_NR].start_address = KERNEL_MAP_BASE; + address_markers[MODULES_VADDR_NR].start_address = MODULES_VADDR; + address_markers[MODULES_END_NR].start_address = MODULES_END; address_markers[FIXADDR_START_NR].start_address = FIXADDR_START; #endif #ifdef CONFIG_X86_32 diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index b7fd05a1ba1d..54bcd46c229d 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -413,7 +413,7 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size) /* * The head.S code sets up the kernel high mapping: * - * from __START_KERNEL_map to __START_KERNEL_map + size (== _end-_text) + * from KERNEL_MAP_BASE to KERNEL_MAP_BASE + size (== _end-_text) * * phys_base holds the negative offset to the kernel, which is added * to the 
compile time generated pmds. This results in invalid pmds up @@ -425,8 +425,8 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size) */ void __init cleanup_highmap(void) { - unsigned long vaddr = __START_KERNEL_map; - unsigned long vaddr_end = __START_KERNEL_map + KERNEL_IMAGE_SIZE; + unsigned long vaddr = KERNEL_MAP_BASE; + unsigned long vaddr_end = KERNEL_MAP_BASE + KERNEL_IMAGE_SIZE; unsigned long end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1; pmd_t *pmd = level2_kernel_pgt; @@ -436,7 +436,7 @@ void __init cleanup_highmap(void) * arch/x86/xen/mmu.c:xen_setup_kernel_pagetable(). */ if (max_pfn_mapped) - vaddr_end = __START_KERNEL_map + (max_pfn_mapped << PAGE_SHIFT); + vaddr_end = KERNEL_MAP_BASE + (max_pfn_mapped << PAGE_SHIFT); for (; vaddr + PMD_SIZE - 1 < vaddr_end; pmd++, vaddr += PMD_SIZE) { if (pmd_none(*pmd)) diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c index 0302491d799d..0edc8fdfb419 100644 --- a/arch/x86/mm/kasan_init_64.c +++ b/arch/x86/mm/kasan_init_64.c @@ -197,7 +197,7 @@ static inline p4d_t *early_p4d_offset(pgd_t *pgd, unsigned long addr) return (p4d_t *)pgd; p4d = pgd_val(*pgd) & PTE_PFN_MASK; - p4d += __START_KERNEL_map - phys_base; + p4d += KERNEL_MAP_BASE - phys_base; return (p4d_t *)p4d + p4d_index(addr); } @@ -420,7 +420,7 @@ void __init kasan_init(void) shadow_cea_per_cpu_begin, 0); kasan_populate_early_shadow((void *)shadow_cea_end, - kasan_mem_to_shadow((void *)__START_KERNEL_map)); + kasan_mem_to_shadow((void *)KERNEL_MAP_BASE)); kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext), (unsigned long)kasan_mem_to_shadow(_end), diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index c434aea9939c..2fb89be3a750 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -1709,7 +1709,7 @@ static int cpa_process_alias(struct cpa_data *cpa) if (!within(vaddr, (unsigned long)_text, _brk_end) && __cpa_pfn_in_highmap(cpa->pfn)) { unsigned long temp_cpa_vaddr = (cpa->pfn << PAGE_SHIFT) + - __START_KERNEL_map - phys_base; + KERNEL_MAP_BASE - phys_base; alias_cpa = *cpa; alias_cpa.vaddr = &temp_cpa_vaddr; alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY); diff --git a/arch/x86/mm/physaddr.c b/arch/x86/mm/physaddr.c index fc3f3d3e2ef2..9cb6d898329c 100644 --- a/arch/x86/mm/physaddr.c +++ b/arch/x86/mm/physaddr.c @@ -14,15 +14,15 @@ #ifdef CONFIG_DEBUG_VIRTUAL unsigned long __phys_addr(unsigned long x) { - unsigned long y = x - __START_KERNEL_map; + unsigned long y = x - KERNEL_MAP_BASE; - /* use the carry flag to determine if x was < __START_KERNEL_map */ + /* use the carry flag to determine if x was < KERNEL_MAP_BASE */ if (unlikely(x > y)) { x = y + phys_base; VIRTUAL_BUG_ON(y >= KERNEL_IMAGE_SIZE); } else { - x = y + (__START_KERNEL_map - PAGE_OFFSET); + x = y + (KERNEL_MAP_BASE - PAGE_OFFSET); /* carry flag will be set if starting x was >= PAGE_OFFSET */ VIRTUAL_BUG_ON((x > y) || !phys_addr_valid(x)); @@ -34,7 +34,7 @@ EXPORT_SYMBOL(__phys_addr); unsigned long __phys_addr_symbol(unsigned long x) { - unsigned long y = x - __START_KERNEL_map; + unsigned long y = x - KERNEL_MAP_BASE; /* only check upper bounds since lower bounds will trigger carry */ VIRTUAL_BUG_ON(y >= KERNEL_IMAGE_SIZE); @@ -46,16 +46,16 @@ EXPORT_SYMBOL(__phys_addr_symbol); bool __virt_addr_valid(unsigned long x) { - unsigned long y = x - __START_KERNEL_map; + unsigned long y = x - KERNEL_MAP_BASE; - /* use the carry flag to determine if x was < __START_KERNEL_map */ + /* use the carry flag to 
determine if x was < KERNEL_MAP_BASE */ if (unlikely(x > y)) { x = y + phys_base; if (y >= KERNEL_IMAGE_SIZE) return false; } else { - x = y + (__START_KERNEL_map - PAGE_OFFSET); + x = y + (KERNEL_MAP_BASE - PAGE_OFFSET); /* carry flag will be set if starting x was >= PAGE_OFFSET */ if ((x > y) || !phys_addr_valid(x)) diff --git a/arch/x86/platform/efi/efi_thunk_64.S b/arch/x86/platform/efi/efi_thunk_64.S index c4b1144f99f6..0997363821e7 100644 --- a/arch/x86/platform/efi/efi_thunk_64.S +++ b/arch/x86/platform/efi/efi_thunk_64.S @@ -52,7 +52,11 @@ STACK_FRAME_NON_STANDARD __efi64_thunk /* * Calculate the physical address of the kernel text. */ +#ifdef CONFIG_X86_PIE + movq kernel_map_base(%rip), %rax +#else movq $__START_KERNEL_map, %rax +#endif subq phys_base(%rip), %rax leaq 1f(%rip), %rbp
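
The MODULES_VADDR/MODULES_END rework in pgtable_64_types.h above can be sanity-checked with a small user-space sketch (identifiers are renamed slightly for a free-standing build; KERNEL_IMAGE_SIZE and the shifted base are example assumptions, the RAW_* constants mirror the quoted header change):

#include <assert.h>
#include <stdint.h>

#define START_KERNEL_MAP	0xffffffff80000000ULL
#define KERNEL_IMAGE_SIZE	(1024ULL << 20)		/* 1G, the KASLR value */
#define RAW_MODULES_VADDR	(START_KERNEL_MAP + KERNEL_IMAGE_SIZE)
#define RAW_MODULES_END		0xffffffffff000000ULL
#define MODULES_LEN		(RAW_MODULES_END - RAW_MODULES_VADDR)

int main(void)
{
	/* Pretend PIE relocated the kernel mapping 4G lower (example only). */
	uint64_t kernel_map_base = START_KERNEL_MAP - (4ULL << 30);
	uint64_t modules_vaddr   = kernel_map_base + KERNEL_IMAGE_SIZE;
	uint64_t modules_end     = modules_vaddr + MODULES_LEN;

	/*
	 * The module window keeps its build-time length and only slides
	 * together with the movable kernel mapping base.
	 */
	assert(modules_end - modules_vaddr == MODULES_LEN);
	assert(modules_end - kernel_map_base ==
	       RAW_MODULES_END - START_KERNEL_MAP);
	return 0;
}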