Message ID | 1557377177-20695-3-git-send-email-anshuman.khandual@arm.com (mailing list archive)
State      | New, archived
Series     | mm/ioremap: Check virtual address alignment
Hi Anshuman,

On Thu, May 09, 2019 at 10:16:17AM +0530, Anshuman Khandual wrote:
> Pgtable page address can be fetched with [pmd|pte]_offset_[kernel] if input
> address is PMD_SIZE or PTE_SIZE aligned. Input address is now guaranteed to
> be aligned, hence fetched pgtable page address is always correct. But using
> 0UL as offset base address has been a standard practice across platforms.
> It also makes more sense as it isolates pgtable page address computation
> from input virtual address alignment. This does not change functionality.
>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>

What's the plan with this small series? I didn't find a v5 (unless I
deleted it by mistake). I can queue this patch through the arm64 tree or
they can both go in via the mm tree.
On Thu, May 09, 2019 at 10:16:17AM +0530, Anshuman Khandual wrote:
> Pgtable page address can be fetched with [pmd|pte]_offset_[kernel] if input
> address is PMD_SIZE or PTE_SIZE aligned. Input address is now guaranteed to
> be aligned, hence fetched pgtable page address is always correct. But using
> 0UL as offset base address has been a standard practice across platforms.
> It also makes more sense as it isolates pgtable page address computation
> from input virtual address alignment. This does not change functionality.
>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> ---
>  arch/arm64/mm/mmu.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index e97f018ff740..71bcb783aace 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1005,7 +1005,7 @@ int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
>  		return 1;
>  	}
>
> -	table = pte_offset_kernel(pmdp, addr);
> +	table = pte_offset_kernel(pmdp, 0UL);
>  	pmd_clear(pmdp);
>  	__flush_tlb_kernel_pgtable(addr);
>  	pte_free_kernel(NULL, table);
> @@ -1026,8 +1026,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
>  		return 1;
>  	}
>
> -	table = pmd_offset(pudp, addr);
> -	pmdp = table;
> +	table = pmd_offset(pudp, 0UL);
> +	pmdp = pmd_offset(pudp, addr);
>  	next = addr;
>  	end = addr + PUD_SIZE;
>  	do {

I have the same comment as last time:

https://lore.kernel.org/linux-arm-kernel/20190430161759.GI29799@arrakis.emea.arm.com/

I don't see why pmdp needs to be different from table. We get the
pointer to a pmd page and we want to iterate over it to free the pte
entries it contains. You can add a VM_WARN on addr alignment as in the
previous version of the patch but pmdp is just an iterator over table.
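For readers following the review comment, here is a minimal sketch of the
shape Catalin describes for pud_free_pmd_page(), with pmdp used as a plain
iterator over table. The VM_WARN_ON() alignment check is an assumption
carried over from the previous version of the patch, and the pud sanity
checks at the top of the real function are elided:

	int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
	{
		pmd_t *table;
		pmd_t *pmdp;
		unsigned long next, end;

		/* ... pud sanity checks from mmu.c elided ... */

		/* addr is guaranteed PUD_SIZE aligned by the previous patch */
		VM_WARN_ON(!IS_ALIGNED(addr, PUD_SIZE));

		table = pmd_offset(pudp, addr);	/* base of the pmd page */
		pmdp = table;			/* iterator over that same page */
		next = addr;
		end = addr + PUD_SIZE;
		do {
			pmd_free_pte_page(pmdp, next);
		} while (pmdp++, next += PMD_SIZE, next != end);

		pud_clear(pudp);
		__flush_tlb_kernel_pgtable(addr);
		pmd_free(NULL, table);
		return 1;
	}

Since addr is PUD_SIZE aligned, pmd_offset(pudp, addr) already points at
the base of the pmd page, so no second offset computation is required.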
On 06/04/2019 07:54 PM, Catalin Marinas wrote:
> On Thu, May 09, 2019 at 10:16:17AM +0530, Anshuman Khandual wrote:
>> Pgtable page address can be fetched with [pmd|pte]_offset_[kernel] if input
>> address is PMD_SIZE or PTE_SIZE aligned. Input address is now guaranteed to
>> be aligned, hence fetched pgtable page address is always correct. But using
>> 0UL as offset base address has been a standard practice across platforms.
>> It also makes more sense as it isolates pgtable page address computation
>> from input virtual address alignment. This does not change functionality.
>>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: James Morse <james.morse@arm.com>
>> Cc: Robin Murphy <robin.murphy@arm.com>
>> ---
>>  arch/arm64/mm/mmu.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index e97f018ff740..71bcb783aace 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1005,7 +1005,7 @@ int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
>>  		return 1;
>>  	}
>>
>> -	table = pte_offset_kernel(pmdp, addr);
>> +	table = pte_offset_kernel(pmdp, 0UL);
>>  	pmd_clear(pmdp);
>>  	__flush_tlb_kernel_pgtable(addr);
>>  	pte_free_kernel(NULL, table);
>> @@ -1026,8 +1026,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
>>  		return 1;
>>  	}
>>
>> -	table = pmd_offset(pudp, addr);
>> -	pmdp = table;
>> +	table = pmd_offset(pudp, 0UL);
>> +	pmdp = pmd_offset(pudp, addr);
>>  	next = addr;
>>  	end = addr + PUD_SIZE;
>>  	do {
>
> I have the same comment as last time:
>
> https://lore.kernel.org/linux-arm-kernel/20190430161759.GI29799@arrakis.emea.arm.com/
>
> I don't see why pmdp needs to be different from table. We get the
> pointer to a pmd page and we want to iterate over it to free the pte
> entries it contains. You can add a VM_WARN on addr alignment as in the
> previous version of the patch but pmdp is just an iterator over table.

Fair enough. I believe a VM_WARN() check on address alignment is not really
needed, because the addresses are now guaranteed to be aligned by the
previous patch. I guess we should probably drop this patch and consider
only the previous one?
On 06/03/2019 09:06 PM, Catalin Marinas wrote:
> Hi Anshuman,
>
> On Thu, May 09, 2019 at 10:16:17AM +0530, Anshuman Khandual wrote:
>> Pgtable page address can be fetched with [pmd|pte]_offset_[kernel] if input
>> address is PMD_SIZE or PTE_SIZE aligned. Input address is now guaranteed to
>> be aligned, hence fetched pgtable page address is always correct. But using
>> 0UL as offset base address has been a standard practice across platforms.
>> It also makes more sense as it isolates pgtable page address computation
>> from input virtual address alignment. This does not change functionality.
>>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: James Morse <james.morse@arm.com>
>> Cc: Robin Murphy <robin.murphy@arm.com>
>
> What's the plan with this small series? I didn't find a v5 (unless I
> deleted it by mistake). I can queue this patch through the arm64 tree or
> they can both go in via the mm tree.

As mentioned earlier [1], I believe the second patch is not needed anymore.
Hence only V4 of the first patch
(https://patchwork.kernel.org/patch/10944191/), which has a Reviewed-by tag,
should be considered.

[1] https://patchwork.kernel.org/patch/10936625/
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e97f018ff740..71bcb783aace 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1005,7 +1005,7 @@ int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
 		return 1;
 	}

-	table = pte_offset_kernel(pmdp, addr);
+	table = pte_offset_kernel(pmdp, 0UL);
 	pmd_clear(pmdp);
 	__flush_tlb_kernel_pgtable(addr);
 	pte_free_kernel(NULL, table);
@@ -1026,8 +1026,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
 		return 1;
 	}

-	table = pmd_offset(pudp, addr);
-	pmdp = table;
+	table = pmd_offset(pudp, 0UL);
+	pmdp = pmd_offset(pudp, addr);
 	next = addr;
 	end = addr + PUD_SIZE;
 	do {
Pgtable page address can be fetched with [pmd|pte]_offset_[kernel] if input
address is PMD_SIZE or PTE_SIZE aligned. Input address is now guaranteed to
be aligned, hence fetched pgtable page address is always correct. But using
0UL as offset base address has been a standard practice across platforms.
It also makes more sense as it isolates pgtable page address computation
from input virtual address alignment. This does not change functionality.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
---
 arch/arm64/mm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
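For context on why a 0UL offset base yields the pgtable page base address:
the offset helpers only use the input address to compute an index within the
page, so index 0 always lands on the first entry. The macro bodies below are
an illustrative approximation of the generic definitions, not arm64's exact
implementation:

	/* The input address contributes only an index within the pte page. */
	#define pte_index(addr)	(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
	#define pte_offset_kernel(pmdp, addr) \
		((pte_t *)pmd_page_vaddr(*(pmdp)) + pte_index(addr))

	/*
	 * pte_index(0UL) == 0, so pte_offset_kernel(pmdp, 0UL) returns the
	 * base of the pte page regardless of the caller's address. With a
	 * PMD_SIZE-aligned addr the low bits are zero anyway, so the result
	 * is identical, which is why the patch changes no functionality.
	 */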