| Message ID | 1344063622-11021-1-git-send-email-r.sricharan@ti.com (mailing list archive) |
|---|---|
| State | New, archived |
On Sat, Aug 04, 2012 at 12:30:22PM +0530, R Sricharan wrote:
> When either the start address, the end address or the physical address
> to be mapped is unaligned, alloc_init_section creates page granularity
> mappings. alloc_init_section calls alloc_init_pte, which populates one
> pmd entry and sets up the ptes. But if the size is greater than what
> can be mapped by one pmd entry, then the rest remains unmapped.
>
> The issue becomes visible when LPAE is enabled, where we have the
> 3 levels with separate pgds and pmds. When a static mapping for 3MB is
> requested, only 2MB is mapped and the remaining 1MB is left unmapped.
> Fix this by looping so that the entire unaligned address range gets
> mapped.

This doesn't look like a nice fix. The implication above is that it's
only alloc_init_pte() which is affected - so why not add a loop there?

Remember that pte's cover two sections, not one, so you need to use a
different increment from the section one.
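For illustration only, here is a minimal sketch of the alternative Russell is suggesting: leave the section-mapping path in alloc_init_section() untouched and let alloc_init_pte() itself walk across PMD entries. This is not the actual v2 patch. It reuses early_pte_alloc(), set_pte_ext() and pfn_pte() from arch/arm/mm/mmu.c of that era and assumes pmd_addr_end() from the generic pgtable headers; the pmd-pointer increment at the bottom is exactly the detail Russell points out.

```c
/*
 * Hypothetical rework of alloc_init_pte() that loops over PMD entries
 * itself: a sketch of the suggestion above, not the real v2 patch.
 */
static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
				  unsigned long end, unsigned long pfn,
				  const struct mem_type *type)
{
	unsigned long next;
	pte_t *pte;

	do {
		/* One Linux PTE table covers one PMD_SIZE (2MB) of VA. */
		next = pmd_addr_end(addr, end);
		pte = early_pte_alloc(pmd, addr, type->prot_l1);

		do {
			set_pte_ext(pte, pfn_pte(pfn, __pgprot(type->prot_pte)), 0);
			pfn++;
		} while (pte++, addr += PAGE_SIZE, addr != next);

#ifndef CONFIG_ARM_LPAE
		pmd += 2;	/* a PTE table spans two 1MB sections */
#else
		pmd += 1;	/* a PTE table spans one 2MB pmd entry */
#endif
	} while (addr != end);
}
```

Whether the step should be expressed via pmd_addr_end() or via an explicit SECTION_SIZE-based increment is exactly what the rest of the thread is about.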
Hi Russell,

On Sat, Aug 4, 2012 at 2:14 PM, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
> On Sat, Aug 04, 2012 at 12:30:22PM +0530, R Sricharan wrote:
>> When either the start address, the end address or the physical address
>> to be mapped is unaligned, alloc_init_section creates page granularity
>> mappings. alloc_init_section calls alloc_init_pte, which populates one
>> pmd entry and sets up the ptes. But if the size is greater than what
>> can be mapped by one pmd entry, then the rest remains unmapped.
>>
>> The issue becomes visible when LPAE is enabled, where we have the
>> 3 levels with separate pgds and pmds. When a static mapping for 3MB is
>> requested, only 2MB is mapped and the remaining 1MB is left unmapped.
>> Fix this by looping so that the entire unaligned address range gets
>> mapped.
>
> This doesn't look like a nice fix. The implication above is that it's
> only alloc_init_pte() which is affected - so why not add a loop there?

Ok, I will move the loop into alloc_init_pte instead.

> Remember that pte's cover two sections, not one, so you need to use a
> different increment from the section one.

Yes, and that varies when LPAE is enabled. I will take care of this as
well in the v2 post.

Thanks,
Sricharan
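For readers following along, a short illustrative summary of why that increment differs between the two configurations (values taken from the ARM pgtable headers, not from the thread itself):

```c
/*
 * Why the PTE-path increment differs (illustrative summary):
 *
 *  classic ARM, 2-level:  SECTION_SIZE = 1MB, but one Linux PTE table is
 *                         attached to a 2MB pgd/pmd slot, i.e. it covers
 *                         TWO hardware sections, so the pmd pointer must
 *                         advance by two entries per PTE table.
 *
 *  LPAE, 3-level:         SECTION_SIZE = PMD_SIZE = 2MB and there is one
 *                         PTE table per pmd entry, so the pmd pointer
 *                         advances by one entry per PTE table.
 */
```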
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index cf4528d..c8c405f 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -597,34 +597,42 @@ static void __init alloc_init_section(pud_t *pud, unsigned long addr,
                                       const struct mem_type *type)
 {
         pmd_t *pmd = pmd_offset(pud, addr);
-
-        /*
-         * Try a section mapping - end, addr and phys must all be aligned
-         * to a section boundary. Note that PMDs refer to the individual
-         * L1 entries, whereas PGDs refer to a group of L1 entries making
-         * up one logical pointer to an L2 table.
-         */
-        if (type->prot_sect && ((addr | end | phys) & ~SECTION_MASK) == 0) {
-                pmd_t *p = pmd;
+        unsigned long next;
 
 #ifndef CONFIG_ARM_LPAE
-                if (addr & SECTION_SIZE)
-                        pmd++;
+        if ((addr & SECTION_SIZE) &&
+                (type->prot_sect && ((addr | next | phys) & ~SECTION_MASK) == 0))
+                pmd++;
 #endif
-
-                do {
-                        *pmd = __pmd(phys | type->prot_sect);
-                        phys += SECTION_SIZE;
-                } while (pmd++, addr += SECTION_SIZE, addr != end);
-
-                flush_pmd_entry(p);
-        } else {
+        do {
+                if ((end - addr) & SECTION_MASK)
+                        next = (addr + SECTION_SIZE) & SECTION_MASK;
+                else
+                        next = end;
                 /*
-                 * No need to loop; pte's aren't interested in the
-                 * individual L1 entries.
+                 * Try a section mapping - end, addr and phys must all be
+                 * aligned to a section boundary. Note that PMDs refer to
+                 * the individual L1 entries, whereas PGDs refer to a group
+                 * of L1 entries making up one logical pointer to an L2 table.
                  */
-                alloc_init_pte(pmd, addr, end, __phys_to_pfn(phys), type);
-        }
+                if (type->prot_sect &&
+                        ((addr | next | phys) & ~SECTION_MASK) == 0) {
+                        *pmd = __pmd(phys | type->prot_sect);
+                        flush_pmd_entry(pmd);
+                } else {
+                        /*
+                         * when addresses are not aligned,
+                         * we may be required to map address range greater
+                         * than a section size. So loop in here to map the
+                         * complete range.
+                         */
+                        alloc_init_pte(pmd, addr, next,
+                                        __phys_to_pfn(phys), type);
+                }
+
+                phys += next - addr;
+
+        } while (pmd++, addr = next, addr != end);
 }
 
 static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
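To see what the patched loop does with the 3MB case from the changelog, here is a small stand-alone sketch of the same `next` computation, using LPAE-style 2MB sections and made-up addresses; the actual mapping calls are replaced by printf()s and the pmd handling is left out.

```c
#include <stdio.h>

/* LPAE-style section size (2MB); the addresses below are hypothetical. */
#define SECTION_SHIFT	21
#define SECTION_SIZE	(1UL << SECTION_SHIFT)
#define SECTION_MASK	(~(SECTION_SIZE - 1))

int main(void)
{
	unsigned long addr = 0xc0000000UL;		/* made-up VA, 2MB aligned */
	unsigned long end  = addr + 3 * (1UL << 20);	/* a 3MB request */
	unsigned long phys = 0x80000000UL;		/* made-up PA, 2MB aligned */
	unsigned long next;

	do {
		/* same 'next' computation as the patched alloc_init_section() */
		if ((end - addr) & SECTION_MASK)
			next = (addr + SECTION_SIZE) & SECTION_MASK;
		else
			next = end;

		if (((addr | next | phys) & ~SECTION_MASK) == 0)
			printf("section map %#010lx..%#010lx -> %#010lx\n",
			       addr, next, phys);
		else
			printf("pte map     %#010lx..%#010lx -> %#010lx\n",
			       addr, next, phys);

		phys += next - addr;
	} while (addr = next, addr != end);

	return 0;
}
```

With these inputs it prints one 2MB section mapping followed by a 1MB pte-mapped tail, which is the split the changelog describes for the 3MB static mapping.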