Message ID: 20240925150059.3955569-55-ardb+git@google.com (mailing list archive)
State: New
Series: x86: Rely on toolchain for relocatable code
On 9/25/24 08:01, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
>
> As an intermediate step towards enabling PIE linking for the 64-bit x86
> kernel, enable PIE codegen for all objects that are linked into the
> kernel proper.
>
> This substantially reduces the number of relocations that need to be
> processed when booting a relocatable KASLR kernel.
>

This really seems like going completely backwards to me.

You are imposing a more restrictive code model on the kernel, optimizing
for boot time in a way that will exert a permanent cost on the running
kernel.

There is a *huge* difference between the kernel and user space here:

KERNEL MEMORY IS PERMANENTLY ALLOCATED, AND IS NEVER SHARED.

Dirtying user pages requires them to be unshared and dirty, which is
undesirable. Kernel pages are *always* unshared and dirty.

> It also brings us much closer to the ordinary PIE relocation model used
> for most of user space, which is therefore much better supported and
> less likely to create problems as we increase the range of compilers and
> linkers that need to be supported.

We have been resisting *for ages* making the kernel worse to accommodate
broken compilers. We don't "need" to support more compilers -- we need
the compilers to support us. We have working compilers; any new compiler
that wants to play should be expected to work correctly.

	-hpa
Hi Peter,

Thanks for taking a look.

On Tue, 1 Oct 2024 at 23:13, H. Peter Anvin <hpa@zytor.com> wrote:
>
> On 9/25/24 08:01, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > As an intermediate step towards enabling PIE linking for the 64-bit x86
> > kernel, enable PIE codegen for all objects that are linked into the
> > kernel proper.
> >
> > This substantially reduces the number of relocations that need to be
> > processed when booting a relocatable KASLR kernel.
> >
>
> This really seems like going completely backwards to me.
>
> You are imposing a more restrictive code model on the kernel, optimizing
> for boot time in a way that will exert a permanent cost on the running
> kernel.
>

Fair point about the boot time. This is not the only concern, though,
and arguably the least important one. As I responded to Andi before, it
is also about using a code model and relocation model that matches the
reality of how the code is executed:
- the early C code runs from the 1:1 mapping, and needs special hacks to
  accommodate this
- KASLR runs the kernel from a different virtual address than the one we
  told the linker about

> There is a *huge* difference between the kernel and user space here:
>
> KERNEL MEMORY IS PERMANENTLY ALLOCATED, AND IS NEVER SHARED.
>

No need to shout.

> Dirtying user pages requires them to be unshared and dirty, which is
> undesirable. Kernel pages are *always* unshared and dirty.
>

I guess you are referring to the use of a GOT? That is a valid concern,
but it does not apply here. With hidden visibility and compiler command
line options like -mdirect-extern-access, all emitted symbol references
are direct.
Disallowing text relocations could be trivially enabled with this series
if desired, and actually helps avoid the tricky bugs we keep fixing in
the early startup code that executes from the 1:1 mapping (the C code in
.head.text).

So it mostly comes down to minor differences in addressing modes, e.g.,

  movq $sym, %reg

actually uses more bytes than

  leaq sym(%rip), %reg

whereas

  movq sym, %reg

and

  movq sym(%rip), %reg

are the same length. OTOH, indexing a statically allocated global array
like

  movl array(,%reg1,4), %reg2

will be converted into

  leaq array(%rip), %reg2
  movl (%reg2,%reg1,4), %reg2

and is therefore less efficient in terms of code footprint.

But in general, the x86_64 ISA and psABI are quite flexible in this
regard, and extrapolating from past experiences with PIC code on i386 is
not really justified here.

As Andi also pointed out, what ultimately matters is the performance, as
well as code size where it impacts performance, through the I-cache
footprint. I'll do some testing before reposting, and maybe not bother
if the impact is negative.

> > It also brings us much closer to the ordinary PIE relocation model used
> > for most of user space, which is therefore much better supported and
> > less likely to create problems as we increase the range of compilers and
> > linkers that need to be supported.
>
> We have been resisting *for ages* making the kernel worse to accommodate
> broken compilers. We don't "need" to support more compilers -- we need
> the compilers to support us. We have working compilers; any new compiler
> that wants to play should be expected to work correctly.
>

We are in a much better place now than we were before in that regard,
which is actually how this effort came about: instead of lying to the
compiler, and maintaining our own pile of scripts and relocation tools,
we can just do what other arches are doing in Linux, and let the
toolchain do it for us.
On Wed, 2 Oct 2024 at 08:31, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> I guess you are referring to the use of a GOT? That is a valid
> concern, but it does not apply here. With hidden visibility and
> compiler command line options like -mdirect-extern-access, all emitted
> symbol references are direct.

I absolutely hate GOT entries. We definitely shouldn't ever do anything
that causes them on x86-64.

I'd much rather just do boot-time relocation, and I don't think the
"we run code at a different location than we told the linker" is an
argument against it.

Please, let's make sure we never have any of the global offset table horror.

Yes, yes, you can't avoid them on other architectures.

That said, doing changes like changing "mov $sym" to "lea sym(%rip)" I
feel are a complete no-brainer and should be done regardless of any
other code generation issues.

Let's not do relocation for no good reason.

	Linus
On Wed, 2 Oct 2024 at 22:02, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Wed, 2 Oct 2024 at 08:31, Ard Biesheuvel <ardb@kernel.org> wrote:
> >
> > I guess you are referring to the use of a GOT? That is a valid
> > concern, but it does not apply here. With hidden visibility and
> > compiler command line options like -mdirect-extern-access, all emitted
> > symbol references are direct.
>
> I absolutely hate GOT entries. We definitely shouldn't ever do
> anything that causes them on x86-64.
>
> I'd much rather just do boot-time relocation, and I don't think the
> "we run code at a different location than we told the linker" is an
> argument against it.
>
> Please, let's make sure we never have any of the global offset table horror.
>
> Yes, yes, you can't avoid them on other architectures.
>

GCC/binutils never needs them. GCC 13 and older will emit some horrible
indirect fentry hook calls via the GOT, but those are NOPed out by
objtool anyway, so that is easily fixable.

Clang/LLD is slightly trickier, because Clang emits relaxable GOTPCREL
relocations, but LLD doesn't update the relocations emitted via
--emit-relocs, so the relocs tool gets confused. This is one of the
reasons I had for proposing to simply switch to PIE linking, and let the
linker deal with all of that. Note that Clang may emit these even when
not generating PIC/PIE code at all.

So this is the reason I wanted to add support for GOTPCREL relocations
in the relocs tool - it is really quite trivial to do, and makes our
jobs slightly easier when dealing with these compiler quirks.

The alternative would be to teach objtool how to relax
'movq foo@GOTPCREL(%rip)' into 'leaq foo(%rip)' - these are GOTPCREL
relaxations described in the x86_64 psABI for ELF, which is why
compilers are assuming more and more that emitting these is fine even
without -fpic, given that the linker is supposed to elide them if
possible.
Note that there are 1 or 2 cases in the asm code where it is actually
quite useful to refer to the address of foo as 'foo@GOTPCREL(%rip)' in
instructions that take memory operands, but those individual cases are
easily converted to something else if even having a GOT with just 2
entries is a dealbreaker for you.

> That said, doing changes like changing "mov $sym" to "lea sym(%rip)" I
> feel are a complete no-brainer and should be done regardless of any
> other code generation issues.
>

Yes, this is the primary reason I ended up looking into this in the
first place. Earlier this year, we ended up having to introduce
RIP_REL_REF() to emit those RIP-relative references explicitly, in order
to prevent the C code that is called via the early 1:1 mapping from
exploding. The amount of C code called in that manner has been growing
steadily over time with the introduction of 5-level paging and SEV-SNP
and TDX support, which need to play all kinds of tricks before the
normal kernel mappings are created.

Compiling with -fpie and linking with --pie -z text produces an
executable that is guaranteed to have only RIP-relative references in
the .text segment, removing the need for RIP_REL_REF entirely (it
already does nothing when __pic__ is #define'd).
On 10/3/24 04:13, Ard Biesheuvel wrote:
>
>> That said, doing changes like changing "mov $sym" to "lea sym(%rip)" I
>> feel are a complete no-brainer and should be done regardless of any
>> other code generation issues.
>
> Yes, this is the primary reason I ended up looking into this in the
> first place. Earlier this year, we ended up having to introduce
> RIP_REL_REF() to emit those RIP-relative references explicitly, in
> order to prevent the C code that is called via the early 1:1 mapping
> from exploding. The amount of C code called in that manner has been
> growing steadily over time with the introduction of 5-level paging and
> SEV-SNP and TDX support, which need to play all kinds of tricks before
> the normal kernel mappings are created.
>

movq $sym to leaq sym(%rip), which you said ought to be smaller (and in
reality appears to be the same size, 7 bytes), seems like a no-brainer
and can be treated as a code quality issue -- in other words, file bug
reports against gcc and clang.

> Compiling with -fpie and linking with --pie -z text produces an
> executable that is guaranteed to have only RIP-relative references in
> the .text segment, removing the need for RIP_REL_REF entirely (it
> already does nothing when __pic__ is #define'd).

But -fpie has a considerable cost; specifically when we have indexed
references, as in that case the base pointer needs to be manifest in a
register, *and* it takes up a register slot in the EA, which may end up
converting one instruction into three.

Now, the "kernel" memory model is defined in the ABI document, but there
is nothing that prevents us from making updates to it if we need to;
e.g. the statement that movq $sym can be used is undesirable, of course.

	-hpa
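[Editor's note: the indexed-reference cost described above is easy to reproduce. A sketch assuming an x86-64 host with gcc and objdump; names are made up for the demo.]

```shell
# Under -mcmodel=kernel the array base folds directly into one indexed
# mov; under -fpie the base must first be materialized with a lea,
# consuming an extra instruction and an extra register.
cat > /tmp/idx_demo.c <<'EOF'
int array[256];
int get(unsigned int i) { return array[i]; }
EOF
gcc -mcmodel=kernel -O2 -c -o /tmp/idx_kernel.o /tmp/idx_demo.c
objdump -dr /tmp/idx_kernel.o   # a single indexed mov from array(,reg,4)
gcc -fpie -fvisibility=hidden -O2 -c -o /tmp/idx_pie.o /tmp/idx_demo.c
objdump -dr /tmp/idx_pie.o      # lea array(%rip),reg plus an indexed mov
```

The PIE version needs the RIP-relative lea because there is no addressing mode that combines a RIP-relative base with a scaled index.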
On Fri, Oct 4, 2024 at 11:06 PM H. Peter Anvin <hpa@zytor.com> wrote:
>
> On 10/3/24 04:13, Ard Biesheuvel wrote:
> >
> >> That said, doing changes like changing "mov $sym" to "lea sym(%rip)" I
> >> feel are a complete no-brainer and should be done regardless of any
> >> other code generation issues.
> >
> > Yes, this is the primary reason I ended up looking into this in the
> > first place. Earlier this year, we ended up having to introduce
> > RIP_REL_REF() to emit those RIP-relative references explicitly, in
> > order to prevent the C code that is called via the early 1:1 mapping
> > from exploding. The amount of C code called in that manner has been
> > growing steadily over time with the introduction of 5-level paging and
> > SEV-SNP and TDX support, which need to play all kinds of tricks before
> > the normal kernel mappings are created.
> >
>
> movq $sym to leaq sym(%rip) which you said ought to be smaller (and in
> reality appears to be the same size, 7 bytes) seems like a no-brainer
> and can be treated as a code quality issue -- in other words, file bug
> reports against gcc and clang.

It is the kernel assembly source that should be converted to
rip-relative form; gcc (and probably clang) have nothing to do with it.

Uros.
On 10/5/24 01:31, Uros Bizjak wrote:
>>
>> movq $sym to leaq sym(%rip) which you said ought to be smaller (and in
>> reality appears to be the same size, 7 bytes) seems like a no-brainer
>> and can be treated as a code quality issue -- in other words, file bug
>> reports against gcc and clang.
>
> It is the kernel assembly source that should be converted to
> rip-relative form; gcc (and probably clang) have nothing to do with it.
>

Sadly, that is not correct; neither gcc nor clang uses lea:

	-hpa

gcc version 14.2.1 20240912 (Red Hat 14.2.1-3) (GCC)

hpa@tazenda:/tmp$ cat foo.c
int foobar;

int *where_is_foobar(void)
{
	return &foobar;
}

hpa@tazenda:/tmp$ gcc -mcmodel=kernel -O2 -c -o foo.o foo.c
hpa@tazenda:/tmp$ objdump -dr foo.o

foo.o:     file format elf64-x86-64

Disassembly of section .text:

0000000000000000 <where_is_foobar>:
   0:	48 c7 c0 00 00 00 00 	mov    $0x0,%rax
			3: R_X86_64_32S	foobar
   7:	c3                   	ret

clang version 18.1.8 (Fedora 18.1.8-1.fc40)

hpa@tazenda:/tmp$ clang -mcmodel=kernel -O2 -c -o foo.o foo.c
hpa@tazenda:/tmp$ objdump -dr foo.o

foo.o:     file format elf64-x86-64

Disassembly of section .text:

0000000000000000 <where_is_foobar>:
   0:	48 c7 c0 00 00 00 00 	mov    $0x0,%rax
			3: R_X86_64_32S	foobar
   7:	c3                   	ret
On Sat, 5 Oct 2024 at 16:37, H. Peter Anvin <hpa@zytor.com> wrote:
>
> Sadly, that is not correct; neither gcc nor clang uses lea:

Looking around, this may be intentional. At least according to Agner,
several cores do better at "mov immediate" compared to "lea".

Eg a RIP-relative LEA on Zen 2 gets a throughput of two per cycle, but a
"MOV r,i" gets four. That got fixed in Zen 3 and later, but apparently
Intel had similar issues (Ivy Bridge: 1 LEA per cycle, vs 3 "mov i,r".
Haswell is 1:4).

Of course, Agner's tables are good, but not necessarily always the whole
story. There are other instruction tables on the internet (eg uops.info)
with possibly more info.

And in reality, I would expect it to be a complete non-issue with any
OoO engine and real code, because you are very seldom ALU limited,
particularly when there aren't any data dependencies.

But a RIP-relative LEA does seem to put a *bit* more pressure on the
core resources, so the compilers may be right to pick a "mov".

	Linus
On Sun, Oct 6, 2024 at 1:37 AM H. Peter Anvin <hpa@zytor.com> wrote:
>
> On 10/5/24 01:31, Uros Bizjak wrote:
> >>
> >> movq $sym to leaq sym(%rip) which you said ought to be smaller (and in
> >> reality appears to be the same size, 7 bytes) seems like a no-brainer
> >> and can be treated as a code quality issue -- in other words, file bug
> >> reports against gcc and clang.
> >
> > It is the kernel assembly source that should be converted to
> > rip-relative form; gcc (and probably clang) have nothing to do with it.
> >
>
> Sadly, that is not correct; neither gcc nor clang uses lea:
>
> gcc version 14.2.1 20240912 (Red Hat 14.2.1-3) (GCC)
>
> hpa@tazenda:/tmp$ cat foo.c
> int foobar;
>
> int *where_is_foobar(void)
> {
>	return &foobar;
> }
>
> hpa@tazenda:/tmp$ gcc -mcmodel=kernel -O2 -c -o foo.o foo.c

Indeed, but my reply was in the context of -fpie, which guarantees
RIP-relative access. IOW, the compiler will always generate sym(%rip)
with -fpie, but (obviously) can't change assembly code in the kernel
when PIE is requested.

Otherwise, MOV $immediate, %reg is faster when PIE is not required,
which is the case with -mcmodel=kernel. IIRC, LEA with %rip had some
performance issues, which may not be the case anymore with newer
processors.

Due to the non-negligible impact of PIE, perhaps some kind of
CONFIG_PIE config definition should be introduced, so the assembly code
would be able to choose the optimal asm sequence when PIE or non-PIE is
requested?

Uros.
On Sun, Oct 6, 2024 at 2:00 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Sat, 5 Oct 2024 at 16:37, H. Peter Anvin <hpa@zytor.com> wrote:
> >
> > Sadly, that is not correct; neither gcc nor clang uses lea:
>
> Looking around, this may be intentional. At least according to Agner,
> several cores do better at "mov immediate" compared to "lea".
>
> Eg a RIP-relative LEA on Zen 2 gets a throughput of two per cycle, but
> a "MOV r,i" gets four. That got fixed in Zen 3 and later, but
> apparently Intel had similar issues (Ivy Bridge: 1 LEA per cycle, vs 3
> "mov i,r". Haswell is 1:4).

Yes, this is the case. I just missed your reply when replying to Peter's
mail with a not so precise answer.

Uros.
...
> Due to the non-negligible impact of PIE, perhaps some kind of
> CONFIG_PIE config definition should be introduced, so the assembly
> code would be able to choose the optimal asm sequence when PIE or
> non-PIE is requested?

I wouldn't have thought that performance mattered in the asm code that
runs during startup?

While x86-64 code (ignoring data references) is pretty much always
position independent, the same isn't true of all architectures. Some (at
least Nios-II) only have absolute call instructions. So you can't really
move to pic code globally.

You'd also want 'bad' pic code that contained some fixups that needed
code patching. (Which you really don't want for a shared library.)
Otherwise you get an extra instruction for non-trivial data accesses.

Thinking.... Doesn't the code generated for -fpic assume that the
dynamic loader has processed the relocations before it is run? But the
kernel startup code is running before they can have been done? So even
if that C code were 'pic' it could still contain things that are invalid
(probably arrays of pointers?). So you lose one set of bugs and gain
another.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
On Sun, Oct 6, 2024 at 8:01 PM David Laight <David.Laight@aculab.com> wrote:
>
> ...
> > Due to the non-negligible impact of PIE, perhaps some kind of
> > CONFIG_PIE config definition should be introduced, so the assembly
> > code would be able to choose the optimal asm sequence when PIE or
> > non-PIE is requested?
>
> I wouldn't have thought that performance mattered in the asm code
> that runs during startup?

No, not the code that runs only once, where performance impact can be
tolerated. This one:

https://lore.kernel.org/lkml/20240925150059.3955569-44-ardb+git@google.com/

Uros.
On October 6, 2024 12:17:40 PM PDT, Uros Bizjak <ubizjak@gmail.com> wrote:
> On Sun, Oct 6, 2024 at 8:01 PM David Laight <David.Laight@aculab.com> wrote:
> >
> > ...
> > > Due to the non-negligible impact of PIE, perhaps some kind of
> > > CONFIG_PIE config definition should be introduced, so the assembly
> > > code would be able to choose the optimal asm sequence when PIE or
> > > non-PIE is requested?
> >
> > I wouldn't have thought that performance mattered in the asm code
> > that runs during startup?
>
> No, not the code that runs only once, where performance impact can be tolerated.
>
> This one:
>
> https://lore.kernel.org/lkml/20240925150059.3955569-44-ardb+git@google.com/
>
> Uros.

Yeah, running the kernel proper as PIE seems like a lose all around.

The decompressor, ELF stub, etc, are of course a different matter
entirely (and at least the latter can't rely on the small or kernel
memory models anyway.)
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index b78b7623a4a9..83d20f402535 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -193,13 +193,22 @@ else
         KBUILD_RUSTFLAGS += -Cno-redzone=y
         KBUILD_RUSTFLAGS += -Ccode-model=kernel
 
+        PIE_CFLAGS-y := -fpie -mcmodel=small \
+                        -include $(srctree)/include/linux/hidden.h
+
+        PIE_CFLAGS-$(CONFIG_CC_IS_GCC) += $(call cc-option,-mdirect-extern-access)
+        PIE_CFLAGS-$(CONFIG_CC_IS_CLANG) += -fdirect-access-external-data
+
         ifeq ($(CONFIG_STACKPROTECTOR),y)
                 KBUILD_CFLAGS += -mstack-protector-guard-symbol=fixed_percpu_data
+
+                # the 'small' C model defaults to %fs
+                PIE_CFLAGS-$(CONFIG_SMP) += -mstack-protector-guard-reg=gs
         endif
 
         # Don't emit relaxable GOTPCREL relocations
         KBUILD_AFLAGS_KERNEL += -Wa,-mrelax-relocations=no
-        KBUILD_CFLAGS_KERNEL += -Wa,-mrelax-relocations=no
+        KBUILD_CFLAGS_KERNEL += -Wa,-mrelax-relocations=no $(PIE_CFLAGS-y)
 endif
 
#
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 9cc0ff6e9067..4d3ba35cb619 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -57,6 +57,7 @@ KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
 KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
 KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
 KBUILD_CFLAGS += $(CONFIG_CC_IMPLICIT_FALLTHROUGH)
+KBUILD_CFLAGS_KERNEL :=
 
 $(obj)/bzImage: asflags-y := $(SVGA_MODE)
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index f2051644de94..c362d36b5b69 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -73,7 +73,7 @@ LDFLAGS_vmlinux += -T
 hostprogs	:= mkpiggy
 HOST_EXTRACFLAGS += -I$(srctree)/tools/include
 
-sed-voffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(_text\|__start_rodata\|__bss_start\|_end\)$$/\#define VO_\2 _AC(0x\1,UL)/p'
+sed-voffset := -e 's/^\([0-9a-fA-F]*\) [ABbCDdGRSTtVW] \(_text\|__start_rodata\|__bss_start\|_end\)$$/\#define VO_\2 _AC(0x\1,UL)/p'
 
 quiet_cmd_voffset = VOFFSET $@
       cmd_voffset = $(NM) $< | sed -n $(sed-voffset) > $@
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index c9216ac4fb1e..7af9fecf9abb 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -141,6 +141,7 @@ endif
 endif
 
 $(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
+$(obj)/vdso32.so.dbg: KBUILD_CFLAGS_KERNEL :=
 
 $(obj)/vdso32.so.dbg: $(obj)/vdso32/vdso32.lds $(vobjs32) FORCE
 	$(call if_changed,vdso_and_check)
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index a0fb39abc5c8..70bf0a26da91 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -67,3 +67,4 @@ KBUILD_CFLAGS := $(REALMODE_CFLAGS) -D_SETUP -D_WAKEUP \
 		   -I$(srctree)/arch/x86/boot
 KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
 KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
+KBUILD_CFLAGS_KERNEL :=
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 2b079f73820f..3a084ac77109 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -349,6 +349,7 @@
 	*(DATA_MAIN)					\
 	*(.data..decrypted)				\
 	*(.ref.data)					\
+	*(.data.rel*)					\
 	*(.data..shared_aligned) /* percpu related */	\
 	*(.data.unlikely)				\
 	__start_once = .;				\