Message ID | 1525074094-25839-4-git-send-email-cpandya@codeaurora.org (mailing list archive) |
---|---|
State | New, archived |
Hi Chintan,

[as a side note: I'm confused on the status of this patch series, as part
of it was reposted separately by Toshi. Please can you work together?]

On Mon, Apr 30, 2018 at 01:11:33PM +0530, Chintan Pandya wrote:
> Implement pud_free_pmd_page() and pmd_free_pte_page().
>
> Implementation requires,
> 1) Clearing off the current pud/pmd entry
> 2) Invalidate TLB which could have previously
>    valid but not stale entry
> 3) Freeing of the un-used next level page tables
>
> Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
> ---
>  arch/arm64/mm/mmu.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index da98828..0f651db 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -45,6 +45,7 @@
>  #include <asm/memblock.h>
>  #include <asm/mmu_context.h>
>  #include <asm/ptdump.h>
> +#include <asm/tlbflush.h>
>
>  #define NO_BLOCK_MAPPINGS	BIT(0)
>  #define NO_CONT_MAPPINGS	BIT(1)
> @@ -973,12 +974,32 @@ int pmd_clear_huge(pmd_t *pmdp)
>  	return 1;
>  }
>
> -int pud_free_pmd_page(pud_t *pud, unsigned long addr)
> +int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
>  {
> -	return pud_none(*pud);
> +	pmd_t *table;
> +
> +	if (pmd_present(READ_ONCE(*pmdp))) {

Might also be worth checking pmd_table here, just in case. (same for pud)

> +		table = __va(pmd_val(*pmdp));

Can you avoid dereferencing *pmdp twice, and instead READ_ONCE into a local
variable, please? (same for pud)

> +		pmd_clear(pmdp);
> +		__flush_tlb_kernel_pgtable(addr);
> +		free_page((unsigned long) table);

Shouldn't this be pte_free_kernel, to pair with pte_alloc_kernel which
was used to allocate the page in the first place? (similarly for pud)

> +	}
> +	return 1;
>  }
>
> -int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
> +int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
>  {
> -	return pmd_none(*pmd);
> +	pmd_t *table;
> +	int i;
> +
> +	if (pud_present(READ_ONCE(*pudp))) {
> +		table = __va(pud_val(*pudp));
> +		for (i = 0; i < PTRS_PER_PMD; i++)
> +			pmd_free_pte_page(&table[i], addr + (i * PMD_SIZE));

I think it would be cleaner to write this as a do { ... } while, for
consistency with the ioremap and vmalloc code.

Will
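For illustration, here is a rough sketch of how the pmd-level helper might look with the above suggestions folded in; it is not the posted v10 patch. The entry is read exactly once into a local via READ_ONCE(), a pmd_table() check skips anything that is not a table entry, and pte_free_kernel() pairs with the pte_alloc_kernel() that allocated the page in the first place.

int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
{
	pmd_t pmd = READ_ONCE(*pmdp);	/* read the entry exactly once */
	pte_t *table;

	/* nothing to do unless the entry points to a next-level table */
	if (!pmd_present(pmd) || !pmd_table(pmd))
		return 1;

	/* mask off the descriptor attribute bits to get the table address */
	table = __va(pmd_val(pmd) & PHYS_MASK & PAGE_MASK);
	pmd_clear(pmdp);
	__flush_tlb_kernel_pgtable(addr);
	pte_free_kernel(NULL, table);	/* pairs with pte_alloc_kernel() */
	return 1;
}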
On Wed, 2018-05-23 at 15:01 +0100, Will Deacon wrote:
> Hi Chintan,
>
> [as a side note: I'm confused on the status of this patch series, as part
> of it was reposted separately by Toshi. Please can you work together?]

I do not know the status of my patch series, either... That being said,
I made my x86 patches based off from Chintan's 1/4 patch (which changes
both x86 and arm) so that my series won't conflict with his.

Chintan,
If you need to update your series before mine's accepted, please make
sure to use the updated 1/4 below. I've updated the descriptions per
review comment.
https://patchwork.kernel.org/patch/10407065/

Thanks,
-Toshi
On 5/23/2018 8:04 PM, Kani, Toshi wrote:
> On Wed, 2018-05-23 at 15:01 +0100, Will Deacon wrote:
>> Hi Chintan,
>>
>> [as a side note: I'm confused on the status of this patch series, as part
>> of it was reposted separately by Toshi. Please can you work together?]
>
> I do not know the status of my patch series, either... That being said,
> I made my x86 patches based off from Chintan's 1/4 patch (which changes
> both x86 and arm) so that my series won't conflict with his.
>
> Chintan,

Hi Toshi,

> If you need to update your series before mine's accepted, please make
> sure to use the updated 1/4 below. I've updated the descriptions per
> review comment.
> https://patchwork.kernel.org/patch/10407065/

For the sake of completeness, I will re-push my previous 1/4 but will
take your version of the change log. I've seen it, and your change log
describes the change better.

>
> Thanks,
> -Toshi
>

Chintan
On 5/23/2018 7:31 PM, Will Deacon wrote:
> Hi Chintan,

Hi Will,

>
> [as a side note: I'm confused on the status of this patch series, as part
> of it was reposted separately by Toshi. Please can you work together?]

I will share all 4 patches once again as v10 and take the latest version
of 1/4 as updated by Toshi.

>
> On Mon, Apr 30, 2018 at 01:11:33PM +0530, Chintan Pandya wrote:
>> Implement pud_free_pmd_page() and pmd_free_pte_page().
>>
>> Implementation requires,
>> 1) Clearing off the current pud/pmd entry
>> 2) Invalidate TLB which could have previously
>>    valid but not stale entry
>> 3) Freeing of the un-used next level page tables
>>
>> Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
>> ---
>>  arch/arm64/mm/mmu.c | 29 +++++++++++++++++++++++++----
>>  1 file changed, 25 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index da98828..0f651db 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -45,6 +45,7 @@
>>  #include <asm/memblock.h>
>>  #include <asm/mmu_context.h>
>>  #include <asm/ptdump.h>
>> +#include <asm/tlbflush.h>
>>
>>  #define NO_BLOCK_MAPPINGS	BIT(0)
>>  #define NO_CONT_MAPPINGS	BIT(1)
>> @@ -973,12 +974,32 @@ int pmd_clear_huge(pmd_t *pmdp)
>>  	return 1;
>>  }
>>
>> -int pud_free_pmd_page(pud_t *pud, unsigned long addr)
>> +int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
>>  {
>> -	return pud_none(*pud);
>> +	pmd_t *table;
>> +
>> +	if (pmd_present(READ_ONCE(*pmdp))) {
>
> Might also be worth checking pmd_table here, just in case. (same for pud)

I had that check in v2, as below:

	if (pud_val(*pud) && !pud_huge(*pud))

But I removed it in v3, as unmap should change the entry to NONE if it is
not a table. I still don't see the need for it.

>
>> +		table = __va(pmd_val(*pmdp));
>
> Can you avoid dereferencing *pmdp twice, and instead READ_ONCE into a local
> variable, please? (same for pud)

Okay.

>
>> +		pmd_clear(pmdp);
>> +		__flush_tlb_kernel_pgtable(addr);
>> +		free_page((unsigned long) table);
>
> Shouldn't this be pte_free_kernel, to pair with pte_alloc_kernel which
> was used to allocate the page in the first place? (similarly for pud)

Okay.

>
>> +	}
>> +	return 1;
>>  }
>>
>> -int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
>> +int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
>>  {
>> -	return pmd_none(*pmd);
>> +	pmd_t *table;
>> +	int i;
>> +
>> +	if (pud_present(READ_ONCE(*pudp))) {
>> +		table = __va(pud_val(*pudp));
>> +		for (i = 0; i < PTRS_PER_PMD; i++)
>> +			pmd_free_pte_page(&table[i], addr + (i * PMD_SIZE));
>
> I think it would be cleaner to write this as a do { ... } while, for
> consistency with the ioremap and vmalloc code.

Okay. I'll raise v10 fixing the above things. Thanks for the review.

>
> Will

Chintan
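For context on the table-type check being debated above, a hypothetical helper (not taken from any posted version of the series) contrasting the v2-style test with the pud_table() style suggested in review could look like:

/* true if the entry points to a next-level table that could be freed */
static bool pud_is_freeable_table(pud_t pud)
{
	/*
	 * v2 used "pud_val(*pud) && !pud_huge(*pud)"; the review suggests
	 * testing the descriptor type directly instead, as below.
	 */
	return pud_present(pud) && pud_table(pud);
}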
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index da98828..0f651db 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -45,6 +45,7 @@
 #include <asm/memblock.h>
 #include <asm/mmu_context.h>
 #include <asm/ptdump.h>
+#include <asm/tlbflush.h>
 
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
@@ -973,12 +974,32 @@ int pmd_clear_huge(pmd_t *pmdp)
 	return 1;
 }
 
-int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
 {
-	return pud_none(*pud);
+	pmd_t *table;
+
+	if (pmd_present(READ_ONCE(*pmdp))) {
+		table = __va(pmd_val(*pmdp));
+		pmd_clear(pmdp);
+		__flush_tlb_kernel_pgtable(addr);
+		free_page((unsigned long) table);
+	}
+	return 1;
 }
 
-int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
 {
-	return pmd_none(*pmd);
+	pmd_t *table;
+	int i;
+
+	if (pud_present(READ_ONCE(*pudp))) {
+		table = __va(pud_val(*pudp));
+		for (i = 0; i < PTRS_PER_PMD; i++)
+			pmd_free_pte_page(&table[i], addr + (i * PMD_SIZE));
+
+		pud_clear(pudp);
+		__flush_tlb_kernel_pgtable(addr);
+		free_page((unsigned long) table);
+	}
+	return 1;
 }
Implement pud_free_pmd_page() and pmd_free_pte_page().

Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidate TLB which could have previously
   valid but not stale entry
3) Freeing of the un-used next level page tables

Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
---
 arch/arm64/mm/mmu.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)
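To round out the picture, here is a sketch of how the pud-level function might be restructured with the do { } while walk suggested in review; again this is illustrative only, not the posted v10 patch. pmd_free() is used to pair with pmd_alloc(), in the same spirit as the pte_free_kernel() comment above.

int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
{
	pud_t pud = READ_ONCE(*pudp);	/* read the entry exactly once */
	pmd_t *table, *pmdp;
	unsigned long next, end;

	/* nothing to do unless the entry points to a next-level table */
	if (!pud_present(pud) || !pud_table(pud))
		return 1;

	/* mask off the descriptor attribute bits to get the table address */
	table = __va(pud_val(pud) & PHYS_MASK & PAGE_MASK);
	pmdp = table;
	next = addr;
	end = addr + PUD_SIZE;

	/* free each PTE table below, walking in the ioremap/vmalloc style */
	do {
		pmd_free_pte_page(pmdp, next);
	} while (pmdp++, next += PMD_SIZE, next != end);

	pud_clear(pudp);
	__flush_tlb_kernel_pgtable(addr);
	pmd_free(NULL, table);	/* pairs with pmd_alloc() */
	return 1;
}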