Message ID | 1343775898-28345-5-git-send-email-cyril@ti.com (mailing list archive)
---|---
State | New, archived
On Tue, 31 Jul 2012, Cyril Chemparathy wrote:

> This patch adds support for 64-bit physical addresses in virt_to_phys
> patching. This does not do real 64-bit add/sub, but instead patches in the
> upper 32-bits of the phys_offset directly into the output of virt_to_phys.

You should explain _why_ you do not do a real add/sub. I did deduce it
but that might not be obvious to everyone. Also this subtlety should be
commented in the code as well.

> In addition to adding 64-bit support, this patch also adds a set_phys_offset()
> helper that is needed on architectures that need to modify PHYS_OFFSET during
> initialization.
>
> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> ---
>  arch/arm/include/asm/memory.h |   22 +++++++++++++++-------
>  arch/arm/kernel/head.S        |    6 ++++++
>  arch/arm/kernel/setup.c       |   14 ++++++++++++++
>  3 files changed, 35 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> index 4a0108f..110495c 100644
> --- a/arch/arm/include/asm/memory.h
> +++ b/arch/arm/include/asm/memory.h
> @@ -153,23 +153,31 @@
>  #ifdef CONFIG_ARM_PATCH_PHYS_VIRT
>
>  extern unsigned long __pv_phys_offset;
> -#define PHYS_OFFSET __pv_phys_offset
> -
> +extern unsigned long __pv_phys_offset_high;

As mentioned previously, this is just too ugly. Please make
__pv_phys_offset into a phys_addr_t instead and mask the low/high parts
as needed in __virt_to_phys().

>  extern unsigned long __pv_offset;
>
> +extern void set_phys_offset(phys_addr_t po);
> +
> +#define PHYS_OFFSET __virt_to_phys(PAGE_OFFSET)
> +
>  static inline phys_addr_t __virt_to_phys(unsigned long x)
>  {
> -	unsigned long t;
> -	early_patch_imm8(x, t, "add", __pv_offset);
> -	return t;
> +	unsigned long tlo, thi = 0;
> +
> +	early_patch_imm8(x, tlo, "add", __pv_offset);
> +	if (sizeof(phys_addr_t) > 4)
> +		early_patch_imm8(0, thi, "add", __pv_phys_offset_high);

Given the high part is always the same, isn't there a better way than an
add with 0 that could be done here? The add will force a load of 0 in a
register needlessly just to add a constant value to it. Your new
patching framework ought to be able to patch a mov (or a mvn)
instruction directly.


Nicolas
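[Editorial note: for readers following along, a directly patched mov in the spirit of this suggestion might look roughly like the sketch below. The section name and fixup-table layout are assumptions for illustration only; they are not the actual macros from the earlier patches in this series.]

/*
 * Illustrative sketch only -- not the series' real API.  Emit a
 * "mov %0, #0" placeholder and record its location plus the symbol that
 * supplies the value, so early boot code can rewrite the mov's immediate
 * with the constant high word instead of loading 0 and adding to it.
 */
#define early_patch_mov_imm8(to, sym)					\
	__asm__("1:	mov	%0, #0\n"				\
		"	.pushsection .init.patch.table, \"a\"\n"	\
		"	.long	1b\n"					\
		"	.long	" #sym "\n"				\
		"	.popsection\n"					\
		: "=r" (to))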
Hi Nicolas,

On 8/4/2012 2:49 AM, Nicolas Pitre wrote:
> On Tue, 31 Jul 2012, Cyril Chemparathy wrote:
>
>> This patch adds support for 64-bit physical addresses in virt_to_phys
>> patching. This does not do real 64-bit add/sub, but instead patches in the
>> upper 32-bits of the phys_offset directly into the output of virt_to_phys.
>
> You should explain _why_ you do not do a real add/sub. I did deduce it
> but that might not be obvious to everyone. Also this subtlety should be
> commented in the code as well.
>

We could not do an ADDS + ADC here because the carry is not guaranteed to
be retained and passed into the ADC. This is because the compiler is free
to insert all kinds of stuff between the two non-volatile asm blocks.

Is there another subtlety here that I have missed out on entirely?

>> In addition to adding 64-bit support, this patch also adds a set_phys_offset()
>> helper that is needed on architectures that need to modify PHYS_OFFSET during
>> initialization.
>>
>> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
>> ---
>>  arch/arm/include/asm/memory.h |   22 +++++++++++++++-------
>>  arch/arm/kernel/head.S        |    6 ++++++
>>  arch/arm/kernel/setup.c       |   14 ++++++++++++++
>>  3 files changed, 35 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
>> index 4a0108f..110495c 100644
>> --- a/arch/arm/include/asm/memory.h
>> +++ b/arch/arm/include/asm/memory.h
>> @@ -153,23 +153,31 @@
>>  #ifdef CONFIG_ARM_PATCH_PHYS_VIRT
>>
>>  extern unsigned long __pv_phys_offset;
>> -#define PHYS_OFFSET __pv_phys_offset
>> -
>> +extern unsigned long __pv_phys_offset_high;
>
> As mentioned previously, this is just too ugly. Please make
> __pv_phys_offset into a phys_addr_t instead and mask the low/high parts
> as needed in __virt_to_phys().
>

Maybe u64 instead of phys_addr_t to keep the sizing non-variable?

>>  extern unsigned long __pv_offset;
>>
>> +extern void set_phys_offset(phys_addr_t po);
>> +
>> +#define PHYS_OFFSET __virt_to_phys(PAGE_OFFSET)
>> +
>>  static inline phys_addr_t __virt_to_phys(unsigned long x)
>>  {
>> -	unsigned long t;
>> -	early_patch_imm8(x, t, "add", __pv_offset);
>> -	return t;
>> +	unsigned long tlo, thi = 0;
>> +
>> +	early_patch_imm8(x, tlo, "add", __pv_offset);
>> +	if (sizeof(phys_addr_t) > 4)
>> +		early_patch_imm8(0, thi, "add", __pv_phys_offset_high);
>
> Given the high part is always the same, isn't there a better way than an
> add with 0 that could be done here? The add will force a load of 0 in a
> register needlessly just to add a constant value to it. Your new
> patching framework ought to be able to patch a mov (or a mvn)
> instruction directly.
>

True. I'll try and figure out a better way of doing this.

>
> Nicolas
>

Once again, thanks for the excellent feedback.
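[Editorial note: to make the carry subtlety concrete, the sketch below keeps the ADDS/ADC pair inside a single asm statement, where the flags are guaranteed to survive because the compiler cannot schedule anything between the two instructions. It only illustrates why two separate early_patch_imm8() blocks cannot rely on the carry; it is not code from the series, and it loads the offsets from memory instead of patching immediates, which is exactly the cost the patching framework is meant to avoid.]

#include <linux/types.h>

extern unsigned long __pv_offset;		/* from the patch under review */
extern unsigned long __pv_phys_offset_high;	/* from the patch under review */

/* Sketch only: a real 64-bit add, safe because both halves share one asm block. */
static inline u64 virt_to_phys64_sketch(unsigned long x)
{
	unsigned long lo, hi;

	__asm__("adds	%0, %2, %3\n\t"		/* low word + offset, sets C */
		"adc	%1, %4, #0"		/* high word absorbs the carry */
		: "=&r" (lo), "=r" (hi)
		: "r" (x), "r" (__pv_offset), "r" (__pv_phys_offset_high)
		: "cc");

	return (u64)hi << 32 | lo;
}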
On Sun, 5 Aug 2012, Cyril Chemparathy wrote:

> Hi Nicolas,
>
> On 8/4/2012 2:49 AM, Nicolas Pitre wrote:
> > On Tue, 31 Jul 2012, Cyril Chemparathy wrote:
> >
> > > This patch adds support for 64-bit physical addresses in virt_to_phys
> > > patching. This does not do real 64-bit add/sub, but instead patches in
> > > the upper 32-bits of the phys_offset directly into the output of
> > > virt_to_phys.
> >
> > You should explain _why_ you do not do a real add/sub. I did deduce it
> > but that might not be obvious to everyone. Also this subtlety should be
> > commented in the code as well.
>
> We could not do an ADDS + ADC here because the carry is not guaranteed to be
> retained and passed into the ADC. This is because the compiler is free to
> insert all kinds of stuff between the two non-volatile asm blocks.
>
> Is there another subtlety here that I have missed out on entirely?

The high bits for the valid physical memory address range for which
virt_to_phys and phys_to_virt can be used are always the same. Therefore
no addition at all is needed, fake or real. Only providing those bits in
the top word of the value returned by virt_to_phys is needed.

> > > In addition to adding 64-bit support, this patch also adds a
> > > set_phys_offset() helper that is needed on architectures that need to
> > > modify PHYS_OFFSET during initialization.
> > >
> > > Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> > > ---
> > >  arch/arm/include/asm/memory.h |   22 +++++++++++++++-------
> > >  arch/arm/kernel/head.S        |    6 ++++++
> > >  arch/arm/kernel/setup.c       |   14 ++++++++++++++
> > >  3 files changed, 35 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> > > index 4a0108f..110495c 100644
> > > --- a/arch/arm/include/asm/memory.h
> > > +++ b/arch/arm/include/asm/memory.h
> > > @@ -153,23 +153,31 @@
> > >  #ifdef CONFIG_ARM_PATCH_PHYS_VIRT
> > >
> > >  extern unsigned long __pv_phys_offset;
> > > -#define PHYS_OFFSET __pv_phys_offset
> > > -
> > > +extern unsigned long __pv_phys_offset_high;
> >
> > As mentioned previously, this is just too ugly. Please make
> > __pv_phys_offset into a phys_addr_t instead and mask the low/high parts
> > as needed in __virt_to_phys().
>
> Maybe u64 instead of phys_addr_t to keep the sizing non-variable?

No. When not using LPAE, we don't have to pay the price of a u64 value.
That's why the phys_addr_t type is conditionally defined. You already do
extra processing in virt_to_phys when sizeof(phys_addr_t) > 4 which is
perfect for dealing with this issue.


Nicolas
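[Editorial note: for illustration only, the direction suggested here might look roughly like the sketch below. early_patch_mov_imm8() is the hypothetical patched-mov helper sketched after the first reply above, with the added assumption that the boot-time fixup installs the high word of __pv_phys_offset; none of this is the code that was eventually merged.]

extern phys_addr_t __pv_phys_offset;	/* single variable, phys_addr_t as suggested */
extern unsigned long __pv_offset;

static inline phys_addr_t __virt_to_phys(unsigned long x)
{
	unsigned long tlo, thi = 0;

	/* low 32 bits: patched add, exactly as in the original patch */
	early_patch_imm8(x, tlo, "add", __pv_offset);

	/*
	 * The usable physical range has a constant high word, so no 64-bit
	 * addition is needed; the high bits only have to appear in the top
	 * word of the result.  Assumed semantics: the fixup code rewrites
	 * the mov immediate with the high word of __pv_phys_offset.
	 */
	if (sizeof(phys_addr_t) > 4)
		early_patch_mov_imm8(thi, __pv_phys_offset);

	return (u64)thi << 32 | tlo;
}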
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 4a0108f..110495c 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -153,23 +153,31 @@
 #ifdef CONFIG_ARM_PATCH_PHYS_VIRT
 
 extern unsigned long __pv_phys_offset;
-#define PHYS_OFFSET __pv_phys_offset
-
+extern unsigned long __pv_phys_offset_high;
 extern unsigned long __pv_offset;
 
+extern void set_phys_offset(phys_addr_t po);
+
+#define PHYS_OFFSET __virt_to_phys(PAGE_OFFSET)
+
 static inline phys_addr_t __virt_to_phys(unsigned long x)
 {
-	unsigned long t;
-	early_patch_imm8(x, t, "add", __pv_offset);
-	return t;
+	unsigned long tlo, thi = 0;
+
+	early_patch_imm8(x, tlo, "add", __pv_offset);
+	if (sizeof(phys_addr_t) > 4)
+		early_patch_imm8(0, thi, "add", __pv_phys_offset_high);
+
+	return (u64)tlo | (u64)thi << 32;
 }
 
 static inline unsigned long __phys_to_virt(phys_addr_t x)
 {
-	unsigned long t;
-	early_patch_imm8(x, t, "sub", __pv_offset);
+	unsigned long t, xlo = x;
+	early_patch_imm8(xlo, t, "sub", __pv_offset);
 	return t;
 }
+
 #else
 
 #define __virt_to_phys(x)	\
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index d165896..fa820b3 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -532,6 +532,12 @@ __pv_phys_offset:
 	.long	0
 	.size	__pv_phys_offset, . - __pv_phys_offset
 
+	.globl	__pv_phys_offset_high
+	.type	__pv_phys_offset_high, %object
+__pv_phys_offset_high:
+	.long	0
+	.size	__pv_phys_offset_high, . - __pv_phys_offset_high
+
 	.globl	__pv_offset
 	.type	__pv_offset, %object
 __pv_offset:
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 15a7699..bba3fdc 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -67,6 +67,20 @@
 #define MEM_SIZE	(16*1024*1024)
 #endif
 
+#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
+/*
+ * set_phys_offset() sets PHYS_OFFSET and pv_offset.
+ * Note: this is unsafe to use beyond setup_arch().
+ */
+void __init set_phys_offset(phys_addr_t po)
+{
+	__pv_phys_offset	= po;
+	__pv_phys_offset_high	= (u64)po >> 32;
+	__pv_offset		= po - PAGE_OFFSET;
+}
+
+#endif
+
 #if defined(CONFIG_FPE_NWFPE) || defined(CONFIG_FPE_FASTFPE)
 char fpe_type[8];
This patch adds support for 64-bit physical addresses in virt_to_phys
patching. This does not do real 64-bit add/sub, but instead patches in the
upper 32-bits of the phys_offset directly into the output of virt_to_phys.

In addition to adding 64-bit support, this patch also adds a set_phys_offset()
helper that is needed on architectures that need to modify PHYS_OFFSET during
initialization.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
---
 arch/arm/include/asm/memory.h |   22 +++++++++++++++-------
 arch/arm/kernel/head.S        |    6 ++++++
 arch/arm/kernel/setup.c       |   14 ++++++++++++++
 3 files changed, 35 insertions(+), 7 deletions(-)
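[Editorial note: as a purely hypothetical usage sketch, a platform whose RAM is aliased above the 4GB boundary could re-point PHYS_OFFSET early in setup_arch() via the new helper. The platform name, hook, and address below are made up for illustration; only set_phys_offset() comes from this patch.]

/*
 * Hypothetical example: a SoC whose DDR also appears at 0x8_0000_0000 in
 * its 36-bit physical map switches PHYS_OFFSET to the high alias during
 * early setup.  Must run before the offset is used for any translation,
 * i.e. within setup_arch(), as noted in the patch.
 */
static void __init example_soc_switch_to_high_ram(void)
{
	set_phys_offset(0x800000000ULL);	/* new 64-bit PHYS_OFFSET */
}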