Message ID: 89f03091af87f5ab27bd6cafb032236d5bd81d65.1547153058.git.khalid.aziz@oracle.com (mailing list archive)
State: New, archived
Series: Add support for eXclusive Page Frame Ownership
On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>
> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
> provide a hook for updating a single kernel page table entry (which is
> required by the generic XPFO code).
>
> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
> ---
> [... full patch quoted; see the complete diff at the end of this page ...]
>
> +/*
> + * Lookup the page table entry for a virtual address and return a pointer to
> + * the entry. Based on x86 tree.
> + */
> +static pte_t *lookup_address(unsigned long addr)

The x86 version also takes a level argument. Would it make sense to
include that here?
On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
> [... full patch quoted; see the complete diff at the end of this page ...]
>
> +/* Update a single kernel page table entry */
> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
> +{
> +	pte_t *pte = lookup_address((unsigned long)kaddr);
> +
> +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));

Though on the other hand.. what if the page is mapped by a PMD? Do you
really want to do this?

What if 'pte' is NULL?

> +}
> +
> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
> +{
> +	unsigned long kaddr = (unsigned long)page_address(page);
> +	unsigned long size = PAGE_SIZE;
> +
> +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);

Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
Or worse - lookup_address() could return NULL.

> +}
On 1/23/19 7:20 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>> [... full patch quoted; see the complete diff at the end of this page ...]
>>
>> +static pte_t *lookup_address(unsigned long addr)
>
> The x86 version also takes a level argument. Would it make sense to
> include that here?

Possibly. ARM64 does not define page levels (as in an enum of page
levels) at this time, but that can be added easily. Adding level to
lookup_address() for arm64 would make it uniform with x86, but is there
any other rationale besides that? Do you see a future use for this
information? The only other architecture I could find that defines
lookup_address() is sh, and it uses it for trapped I/O only.

Thanks,
Khalid
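For illustration, a level-reporting variant on arm64 could look roughly
like the sketch below, modeled on x86's lookup_address(). This is a
hypothetical sketch only: the ARM64_LEVEL_* enum and the function itself
are invented for this example and are not part of the posted patch;
pud_sect()/pmd_sect() are existing arm64 helpers for detecting section
mappings.

/*
 * Hypothetical sketch: a lookup_address() variant that also reports
 * the level of the mapping, mirroring x86's PG_LEVEL_* out-parameter.
 * The ARM64_LEVEL_* names are invented for this example.
 */
enum arm64_pg_level {
	ARM64_LEVEL_PTE,
	ARM64_LEVEL_PMD,
	ARM64_LEVEL_PUD,
};

static pte_t *lookup_address_level(unsigned long addr,
				   enum arm64_pg_level *level)
{
	pgd_t *pgd = pgd_offset_k(addr);
	pud_t *pud;
	pmd_t *pmd;

	if (pgd_none(*pgd))
		return NULL;

	pud = pud_offset(pgd, addr);
	if (pud_none(*pud))
		return NULL;
	if (pud_sect(*pud)) {			/* 1G section mapping */
		*level = ARM64_LEVEL_PUD;
		return (pte_t *)pud;
	}

	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return NULL;
	if (pmd_sect(*pmd)) {			/* 2M section mapping */
		*level = ARM64_LEVEL_PMD;
		return (pte_t *)pmd;
	}

	*level = ARM64_LEVEL_PTE;
	return pte_offset_kernel(pmd, addr);
}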
On 1/23/19 7:24 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>> [... full patch quoted; see the complete diff at the end of this page ...]
>>
>> +/* Update a single kernel page table entry */
>> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
>> +{
>> +	pte_t *pte = lookup_address((unsigned long)kaddr);
>> +
>> +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
>
> Though on the other hand.. what if the page is mapped by a PMD? Do you
> really want to do this?
>
> What if 'pte' is NULL?
>
>> +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
>
> Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
> Or worse - lookup_address() could return NULL.

Hi Konrad,

This makes sense. The x86 version of set_kpte() checks pte for NULL and
also checks whether the page is mapped by a PMD. Now what you said about
adding level to lookup_address() for arm64 makes more sense.

Can someone with knowledge of the arm64 MMU make recommendations here?

Thanks,
Khalid
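The x86-style checks Khalid refers to look roughly like the sketch
below. This is illustrative only, not the actual x86 XPFO patch code;
lookup_address() with a level out-parameter and the PG_LEVEL_* constants
are real x86 interfaces, but the exact handling here is assumed.

/*
 * Sketch of x86-style checks in set_kpte(): bail out on a failed
 * lookup and warn on anything that is not a 4K mapping. Not the
 * actual x86 XPFO code.
 */
inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
{
	unsigned int level;
	pte_t *pte = lookup_address((unsigned long)kaddr, &level);

	if (unlikely(!pte)) {
		WARN(1, "xpfo: invalid address %p\n", kaddr);
		return;
	}

	switch (level) {
	case PG_LEVEL_4K:
		set_pte(pte, pfn_pte(page_to_pfn(page), prot));
		break;
	default:
		/* A 2M/1G mapping would first have to be split to 4K. */
		WARN(1, "xpfo: unsupported page level %d at %p\n",
		     level, kaddr);
		break;
	}
}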
On 2/12/19 7:52 AM, Khalid Aziz wrote:
> On 1/23/19 7:24 AM, Konrad Rzeszutek Wilk wrote:
>> On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
>>> [... full patch quoted; see the complete diff at the end of this page ...]
>>
>> Though on the other hand.. what if the page is mapped by a PMD? Do you
>> really want to do this?
>>
>> What if 'pte' is NULL?
>>
>> Ditto here. You are assuming it is a PTE, but it may be a PMD or such.
>> Or worse - lookup_address() could return NULL.
>
> This makes sense. The x86 version of set_kpte() checks pte for NULL and
> also checks whether the page is mapped by a PMD. Now what you said
> about adding level to lookup_address() for arm64 makes more sense.
>
> Can someone with knowledge of the arm64 MMU make recommendations here?

arm64 can't split larger pages, so XPFO requires that everything be
mapped as pages (see [RFC PATCH v7 08/16] arm64/mm: disable
section/contiguous mappings if XPFO is enabled). Any situation where we
would get something other than a pte would be a bug.

Thanks,
Laura
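The 08/16 patch Laura cites is not reproduced on this page. Its
mechanics are assumed here, inferred only from the patch title: force
page-granular mappings in map_mem(), much as debug_pagealloc already
does. NO_BLOCK_MAPPINGS and NO_CONT_MAPPINGS are the existing arm64
flags for this; the actual hunk may differ.

/*
 * Assumed mechanics of the referenced 08/16 patch (inferred from its
 * title, not its actual text): in arch/arm64/mm/mmu.c:map_mem(),
 * request page-granular mappings when XPFO is enabled so that no
 * section or contiguous entries exist in the direct map.
 */
if (debug_pagealloc_enabled() || IS_ENABLED(CONFIG_XPFO))
	flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;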
On 2/12/19 1:01 PM, Laura Abbott wrote:
> On 2/12/19 7:52 AM, Khalid Aziz wrote:
>> On 1/23/19 7:24 AM, Konrad Rzeszutek Wilk wrote:
>>> [... full patch quoted; see the complete diff at the end of this page ...]
>>
>> This makes sense. The x86 version of set_kpte() checks pte for NULL
>> and also checks whether the page is mapped by a PMD. Now what you said
>> about adding level to lookup_address() for arm64 makes more sense.
>>
>> Can someone with knowledge of the arm64 MMU make recommendations here?
>
> arm64 can't split larger pages, so XPFO requires that everything be
> mapped as pages (see [RFC PATCH v7 08/16] arm64/mm: disable
> section/contiguous mappings if XPFO is enabled). Any situation where we
> would get something other than a pte would be a bug.

Thanks, Laura! That helps a lot. I would still think checking for a NULL
pte in set_kpte() makes sense, since lookup_address() can return NULL.
Something like:

--- arch/arm64/mm/xpfo.c	2019-01-30 13:36:39.857185612 -0700
+++ arch/arm64/mm/xpfo.c.new	2019-02-12 13:26:47.471633031 -0700
@@ -46,6 +46,11 @@
 {
 	pte_t *pte = lookup_address((unsigned long)kaddr);
 
+	if (unlikely(!pte)) {
+		WARN(1, "xpfo: invalid address %p\n", kaddr);
+		return;
+	}
+
 	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
 }
 

--
Khalid
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ea2ab0330e3a..f0a9c0007d23 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -171,6 +171,7 @@ config ARM64
 	select SWIOTLB
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
+	select ARCH_SUPPORTS_XPFO
 	help
 	  ARM 64-bit (AArch64) Linux support.
 
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 849c1df3d214..cca3808d9776 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
 
 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n
+
+obj-$(CONFIG_XPFO)		+= xpfo.o
diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
new file mode 100644
index 000000000000..678e2be848eb
--- /dev/null
+++ b/arch/arm64/mm/xpfo.c
@@ -0,0 +1,58 @@
+/*
+ * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
+ * Copyright (C) 2016 Brown University. All rights reserved.
+ *
+ * Authors:
+ *   Juerg Haefliger <juerg.haefliger@hpe.com>
+ *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#include <asm/tlbflush.h>
+
+/*
+ * Lookup the page table entry for a virtual address and return a pointer to
+ * the entry. Based on x86 tree.
+ */
+static pte_t *lookup_address(unsigned long addr)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return NULL;
+
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return NULL;
+
+	return pte_offset_kernel(pmd, addr);
+}
+
+/* Update a single kernel page table entry */
+inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
+{
+	pte_t *pte = lookup_address((unsigned long)kaddr);
+
+	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
+}
+
+inline void xpfo_flush_kernel_tlb(struct page *page, int order)
+{
+	unsigned long kaddr = (unsigned long)page_address(page);
+	unsigned long size = PAGE_SIZE;
+
+	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
+}
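For context on how these two hooks fit together: the generic XPFO code
is expected to drive set_kpte() once per 4K page and then issue a single
ranged TLB flush. A rough usage sketch follows; the caller shown here is
assumed from the hook signatures alone, not taken from the generic
mm/xpfo.c patch in this series.

/*
 * Illustrative caller only (assumed from the hook signatures, not the
 * actual generic XPFO code): unmap an order-N allocation from the
 * kernel's direct map.
 */
static void example_xpfo_unmap_pages(struct page *page, int order)
{
	int i;

	/*
	 * Clear the direct-map entry for each 4K page individually,
	 * since set_kpte() updates exactly one PTE.
	 */
	for (i = 0; i < (1 << order); i++)
		set_kpte(page_address(page + i), page + i, __pgprot(0));

	/*
	 * One ranged flush covers the whole allocation: for order = 2
	 * with 4K pages, (1 << 2) * PAGE_SIZE = 16K starting at the
	 * first page's kernel address.
	 */
	xpfo_flush_kernel_tlb(page, order);
}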