| Message ID | 6b965917ffb2185c541f04ff18a624282ca6e211.1691620546.git.sanastasio@raptorengineering.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | xen/ppc: Add early Radix MMU support |
On 10.08.2023 00:48, Shawn Anastasio wrote:
> --- a/xen/arch/ppc/ppc64/head.S
> +++ b/xen/arch/ppc/ppc64/head.S
> @@ -17,6 +17,33 @@ ENTRY(start)
>      addis   %r2, %r12, .TOC.-1b@ha
>      addi    %r2, %r2, .TOC.-1b@l
>
> +    /*
> +     * Copy Xen to physical address zero and jump to XEN_VIRT_START
> +     * (0xc000000000000000). This works because the hardware will ignore the top
> +     * four address bits when the MMU is off.
> +     */
> +    LOAD_REG_ADDR(%r1, _start)
> +    LOAD_IMM64(%r12, XEN_VIRT_START)
> +
> +    /* If we're at the correct address, skip copy */
> +    cmpld   %r1, %r12
> +    beq     .L_correct_address
> +
> +    /* Copy bytes until _end */
> +    LOAD_REG_ADDR(%r11, _end)
> +    addi    %r1, %r1, -8
> +    li      %r13, -8
> +.L_copy_xen:
> +    ldu     %r10, 8(%r1)
> +    stdu    %r10, 8(%r13)
> +    cmpld   %r1, %r11
> +    blt     .L_copy_xen
> +
> +    /* Jump to XEN_VIRT_START */
> +    mtctr   %r12
> +    bctr
> +.L_correct_address:

Somewhat related to my earlier remark towards using %sp instead of
%r1: Are you intentionally fiddling with the stack pointer here,
corrupting any earlier stack that the boot loader might have set?
This ...

>      /* set up the initial stack */
>      LOAD_REG_ADDR(%r1, cpu0_boot_stack)
>      li      %r11, 0

... is where you actually switch stacks. Using the stack pointer
here is likely okay, albeit a bit unusual, the more that you have
ample registers available for use.

Jan
On 8/14/23 8:09 AM, Jan Beulich wrote:
> On 10.08.2023 00:48, Shawn Anastasio wrote:
>> --- a/xen/arch/ppc/ppc64/head.S
>> +++ b/xen/arch/ppc/ppc64/head.S
>> @@ -17,6 +17,33 @@ ENTRY(start)
>>      addis   %r2, %r12, .TOC.-1b@ha
>>      addi    %r2, %r2, .TOC.-1b@l
>>
>> +    /*
>> +     * Copy Xen to physical address zero and jump to XEN_VIRT_START
>> +     * (0xc000000000000000). This works because the hardware will ignore the top
>> +     * four address bits when the MMU is off.
>> +     */
>> +    LOAD_REG_ADDR(%r1, _start)
>> +    LOAD_IMM64(%r12, XEN_VIRT_START)
>> +
>> +    /* If we're at the correct address, skip copy */
>> +    cmpld   %r1, %r12
>> +    beq     .L_correct_address
>> +
>> +    /* Copy bytes until _end */
>> +    LOAD_REG_ADDR(%r11, _end)
>> +    addi    %r1, %r1, -8
>> +    li      %r13, -8
>> +.L_copy_xen:
>> +    ldu     %r10, 8(%r1)
>> +    stdu    %r10, 8(%r13)
>> +    cmpld   %r1, %r11
>> +    blt     .L_copy_xen
>> +
>> +    /* Jump to XEN_VIRT_START */
>> +    mtctr   %r12
>> +    bctr
>> +.L_correct_address:
>
> Somewhat related to my earlier remark towards using %sp instead of
> %r1: Are you intentionally fiddling with the stack pointer here,
> corrupting any earlier stack that the boot loader might have set?
> This ...
>
>>      /* set up the initial stack */
>>      LOAD_REG_ADDR(%r1, cpu0_boot_stack)
>>      li      %r11, 0
>
> ... is where you actually switch stacks. Using the stack pointer
> here is likely okay, albeit a bit unusual, the more that you have
> ample registers available for use.

This was intentional -- I just chose it as a free register to use,
since as you point out it precedes the stack set-up code. I agree it
might be a bit confusing though, and we do have an ample number of
registers to play with, so I'll change it to something else for
clarity's sake.

> Jan

Thanks,
Shawn
```diff
diff --git a/xen/arch/ppc/include/asm/config.h b/xen/arch/ppc/include/asm/config.h
index d060f0dca7..30438d22d2 100644
--- a/xen/arch/ppc/include/asm/config.h
+++ b/xen/arch/ppc/include/asm/config.h
@@ -39,7 +39,7 @@ name:
 
 #endif
 
-#define XEN_VIRT_START _AT(UL, 0xc000000000000000)
+#define XEN_VIRT_START _AC(0xc000000000000000, UL)
 
 #define SMP_CACHE_BYTES (1 << 6)
 
diff --git a/xen/arch/ppc/ppc64/head.S b/xen/arch/ppc/ppc64/head.S
index 8f1e5d3ad2..149af2c472 100644
--- a/xen/arch/ppc/ppc64/head.S
+++ b/xen/arch/ppc/ppc64/head.S
@@ -17,6 +17,33 @@ ENTRY(start)
     addis   %r2, %r12, .TOC.-1b@ha
     addi    %r2, %r2, .TOC.-1b@l
 
+    /*
+     * Copy Xen to physical address zero and jump to XEN_VIRT_START
+     * (0xc000000000000000). This works because the hardware will ignore the top
+     * four address bits when the MMU is off.
+     */
+    LOAD_REG_ADDR(%r1, _start)
+    LOAD_IMM64(%r12, XEN_VIRT_START)
+
+    /* If we're at the correct address, skip copy */
+    cmpld   %r1, %r12
+    beq     .L_correct_address
+
+    /* Copy bytes until _end */
+    LOAD_REG_ADDR(%r11, _end)
+    addi    %r1, %r1, -8
+    li      %r13, -8
+.L_copy_xen:
+    ldu     %r10, 8(%r1)
+    stdu    %r10, 8(%r13)
+    cmpld   %r1, %r11
+    blt     .L_copy_xen
+
+    /* Jump to XEN_VIRT_START */
+    mtctr   %r12
+    bctr
+.L_correct_address:
+
     /* set up the initial stack */
     LOAD_REG_ADDR(%r1, cpu0_boot_stack)
     li      %r11, 0
```
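For readers less familiar with the PowerPC `ldu`/`stdu` "update" idiom, the behaviour of the `.L_copy_xen` loop can be sketched as a standalone C model. This is an illustrative sketch, not Xen code; the function and parameter names are hypothetical. The assembly biases both pointers one doubleword low (`addi %r1, %r1, -8` and `li %r13, -8`) because `ldu`/`stdu` pre-increment the base register before each access; the post-increment C below is equivalent.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical C model of the .L_copy_xen loop. Copies 64-bit
 * doublewords from [start, end] (inclusive of the doubleword at
 * `end`) to `dest`. In Xen's case, `start`/`end` are the _start/_end
 * linker symbols and `dest` is physical address zero.
 */
static void copy_doublewords(const uint64_t *start, const uint64_t *end,
                             uint64_t *dest)
{
    const uint64_t *src = start;
    uint64_t *dst = dest;

    do {
        *dst++ = *src++;   /* ldu %r10, 8(%r1) / stdu %r10, 8(%r13) */
    } while (src <= end);  /* cmpld %r1, %r11; blt .L_copy_xen */
}
```

Note that, like the assembly, this model's loop condition makes the doubleword at `end` itself the last one copied before execution continues at `.L_correct_address`.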
Introduce a small assembly loop in `start` to copy the kernel to
physical address 0 before continuing. This ensures that the physical
address lines up with XEN_VIRT_START (0xc000000000000000) and allows
us to identity map the kernel when the MMU is set up in the next
patch.

We are also able to start execution at XEN_VIRT_START after the copy,
since the hardware will ignore the top 4 address bits when operating
in Real Mode (MMU off).

Signed-off-by: Shawn Anastasio <sanastasio@raptorengineering.com>
---
v3: no changes.

v2:
  - Fix definition of XEN_VIRT_START macro which incorrectly used _AT
    instead of _AC.
  - Use _start instead of start as symbol referring to beginning of
    Xen binary

 xen/arch/ppc/include/asm/config.h |  2 +-
 xen/arch/ppc/ppc64/head.S         | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 1 deletion(-)

--
2.30.2
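As background for the v2 `_AT` → `_AC` fix: in `const.h`-style headers (Xen's, like Linux's), `_AC(X, Y)` pastes the suffix `Y` onto the constant `X` in C code but drops it under `__ASSEMBLY__`, whereas `_AT(T, X)` applies a type cast. `_AT(UL, ...)` was wrong because `UL` is a literal suffix, not a type. A simplified sketch of the two helpers (assumed shapes, trimmed from the real header):

```c
#include <assert.h>

/* Simplified sketch of const.h-style helpers (assumed shapes). */
#ifdef __ASSEMBLY__
#define _AC(X, Y)   X           /* the assembler can't parse a UL suffix */
#define _AT(T, X)   X
#else
#define __AC(X, Y)  (X##Y)      /* paste suffix: yields 0x...UL */
#define _AC(X, Y)   __AC(X, Y)
#define _AT(T, X)   ((T)(X))    /* apply a type cast */
#endif

/* Correct usage, as in the patch: */
#define XEN_VIRT_START _AC(0xc000000000000000, UL)
```

The extra `__AC` indirection ensures any macro arguments are expanded before the `##` token paste.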