
[v6,10/11] mm: add a user_virt_to_phys symbol

Message ID 20170907173609.22696-11-tycho@docker.com (mailing list archive)
State New, archived

Commit Message

Tycho Andersen Sept. 7, 2017, 5:36 p.m. UTC
We need something like this for testing XPFO. Since it's architecture
specific, putting it in the test code is slightly awkward, so let's make it
an arch-specific symbol and export it for use in LKDTM.

v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case

CC: linux-arm-kernel@lists.infradead.org
CC: x86@kernel.org
Signed-off-by: Tycho Andersen <tycho@docker.com>
Tested-by: Marco Benatto <marco.antonio.780@gmail.com>
---
 arch/arm64/mm/xpfo.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/xpfo.h |  5 +++++
 3 files changed, 113 insertions(+)

Comments

Christoph Hellwig Sept. 8, 2017, 7:55 a.m. UTC | #1
On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> We need something like this for testing XPFO. Since it's architecture
> specific, putting it in the test code is slightly awkward, so let's make it
> an arch-specific symbol and export it for use in LKDTM.

We really should not add an export for this.

I think you'll want to just open code it in your test module.
Kees Cook Sept. 8, 2017, 3:44 p.m. UTC | #2
On Fri, Sep 8, 2017 at 12:55 AM, Christoph Hellwig <hch@infradead.org> wrote:
> On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
>> We need something like this for testing XPFO. Since it's architecture
>> specific, putting it in the test code is slightly awkward, so let's make it
>> an arch-specific symbol and export it for use in LKDTM.
>
> We really should not add an export for this.
>
> I think you'll want to just open code it in your test module.

Isn't that going to be fragile? Why not an export?

-Kees
Christoph Hellwig Sept. 11, 2017, 7:36 a.m. UTC | #3
On Fri, Sep 08, 2017 at 08:44:22AM -0700, Kees Cook wrote:
> On Fri, Sep 8, 2017 at 12:55 AM, Christoph Hellwig <hch@infradead.org> wrote:
> > On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> >> We need something like this for testing XPFO. Since it's architecture
> >> specific, putting it in the test code is slightly awkward, so let's make it
> >> an arch-specific symbol and export it for use in LKDTM.
> >
> > We really should not add an export for this.
> >
> > I think you'll want to just open code it in your test module.
> 
> Isn't that going to be fragile? Why not an export?

It is a little fragile, but it is functionality not needed at all by
the kernel, so we should not add it to the kernel image and/or export
it.
Mark Rutland Sept. 14, 2017, 6:34 p.m. UTC | #4
On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> We need something like this for testing XPFO. Since it's architecture
> specific, putting it in the test code is slightly awkward, so let's make it
> an arch-specific symbol and export it for use in LKDTM.
> 
> v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case
> 
> CC: linux-arm-kernel@lists.infradead.org
> CC: x86@kernel.org
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> Tested-by: Marco Benatto <marco.antonio.780@gmail.com>
> ---
>  arch/arm64/mm/xpfo.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
>  arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/xpfo.h |  5 +++++
>  3 files changed, 113 insertions(+)
> 
> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> index 342a9ccb93c1..94a667d94e15 100644
> --- a/arch/arm64/mm/xpfo.c
> +++ b/arch/arm64/mm/xpfo.c
> @@ -74,3 +74,54 @@ void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
>  
>  	xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
>  }
> +
> +/* Convert a user space virtual address to a physical address.
> + * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
> + * arch/x86/mm/pageattr.c
> + */

When can this be called? What prevents concurrent modification of the user page
tables?

i.e. must mmap_sem be held?

> +phys_addr_t user_virt_to_phys(unsigned long addr)

Does this really need to be architecture specific?

Core mm code manages to walk user page tables just fine...

> +{
> +	phys_addr_t phys_addr;
> +	unsigned long offset;
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +	pte_t *pte;
> +
> +	pgd = pgd_offset(current->mm, addr);
> +	if (pgd_none(*pgd))
> +		return 0;

Can we please separate the address and return value? e.g. pass the PA by
reference and return an error code.

AFAIK, zero is a valid PA, and even if the tables exist, they might point there
in the presence of an error.

> +
> +	p4d = p4d_offset(pgd, addr);
> +	if (p4d_none(*p4d))
> +		return 0;
> +
> +	pud = pud_offset(p4d, addr);
> +	if (pud_none(*pud))
> +		return 0;
> +
> +	if (pud_sect(*pud) || !pud_present(*pud)) {
> +		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;

Was there some problem with:

	phys_addr = pmd_page_paddr(*pud);

... and similar for the other levels?

... I'd rather introduce new helpers than more open-coded calculations.

Thanks,
Mark.
Tycho Andersen Sept. 18, 2017, 8:56 p.m. UTC | #5
Hi Mark,

On Thu, Sep 14, 2017 at 07:34:02PM +0100, Mark Rutland wrote:
> On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> > We need something like this for testing XPFO. Since it's architecture
> > specific, putting it in the test code is slightly awkward, so let's make it
> > an arch-specific symbol and export it for use in LKDTM.
> > 
> > v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case
> > 
> > CC: linux-arm-kernel@lists.infradead.org
> > CC: x86@kernel.org
> > Signed-off-by: Tycho Andersen <tycho@docker.com>
> > Tested-by: Marco Benatto <marco.antonio.780@gmail.com>
> > ---
> >  arch/arm64/mm/xpfo.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
> >  arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  include/linux/xpfo.h |  5 +++++
> >  3 files changed, 113 insertions(+)
> > 
> > diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> > index 342a9ccb93c1..94a667d94e15 100644
> > --- a/arch/arm64/mm/xpfo.c
> > +++ b/arch/arm64/mm/xpfo.c
> > @@ -74,3 +74,54 @@ void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
> >  
> >  	xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
> >  }
> > +
> > +/* Convert a user space virtual address to a physical address.
> > + * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
> > + * arch/x86/mm/pageattr.c
> > + */
> 
> When can this be called? What prevents concurrent modification of the user page
> tables?
> 
> i.e. must mmap_sem be held?

Yes, it should be. Since we're moving this back into the lkdtm test
code it's less important, because nothing should be modifying the
tables while the thread is doing the lookup, but I'll add it in the
next version.
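
i.e. something like this around the call in the test (untested sketch;
user_addr is just whatever user address the test is probing):

	phys_addr_t phys;

	/* keep the page tables stable for the duration of the walk */
	down_read(&current->mm->mmap_sem);
	phys = user_virt_to_phys(user_addr);
	up_read(&current->mm->mmap_sem);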

> > +phys_addr_t user_virt_to_phys(unsigned long addr)
> 
> Does this really need to be architecture specific?
> 
> Core mm code manages to walk user page tables just fine...

I think so: since we don't support section mappings right now,
p*d_sect will always be false.

> > +{
> > +	phys_addr_t phys_addr;
> > +	unsigned long offset;
> > +	pgd_t *pgd;
> > +	p4d_t *p4d;
> > +	pud_t *pud;
> > +	pmd_t *pmd;
> > +	pte_t *pte;
> > +
> > +	pgd = pgd_offset(current->mm, addr);
> > +	if (pgd_none(*pgd))
> > +		return 0;
> 
> Can we please separate the address and return value? e.g. pass the PA by
> reference and return an error code.
> 
> AFAIK, zero is a valid PA, and even if the tables exist, they might point there
> in the presence of an error.

Sure, I'll rearrange this.
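
i.e. roughly this shape (untested sketch; the section-mapping branches
are elided here and would stay as in the patch):

	int user_virt_to_phys(unsigned long addr, phys_addr_t *phys)
	{
		pgd_t *pgd = pgd_offset(current->mm, addr);
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;
		pte_t *pte;

		if (pgd_none(*pgd))
			return -EINVAL;

		p4d = p4d_offset(pgd, addr);
		if (p4d_none(*p4d))
			return -EINVAL;

		pud = pud_offset(p4d, addr);
		if (pud_none(*pud))
			return -EINVAL;

		pmd = pmd_offset(pud, addr);
		if (pmd_none(*pmd))
			return -EINVAL;

		pte = pte_offset_kernel(pmd, addr);
		if (!pte_present(*pte))
			return -EINVAL;

		/* a PA of zero is no longer overloaded as "not mapped" */
		*phys = ((phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT) |
			(addr & ~PAGE_MASK);
		return 0;
	}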

> > +
> > +	p4d = p4d_offset(pgd, addr);
> > +	if (p4d_none(*p4d))
> > +		return 0;
> > +
> > +	pud = pud_offset(p4d, addr);
> > +	if (pud_none(*pud))
> > +		return 0;
> > +
> > +	if (pud_sect(*pud) || !pud_present(*pud)) {
> > +		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
> 
> Was there some problem with:
> 
> 	phys_addr = pmd_page_paddr(*pud);
> 
> ... and similar for the other levels?
> 
> ... I'd rather introduce new helpers than more open-coded calculations.

I wasn't aware of these; we could define a similar set of functions
for x86 and then make it not arch-specific.

I wonder if we could also use follow_page_pte(), since we know that
the page is always present (given that it's been allocated).
Unfortunately follow_page_pte() is not exported.
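
If we go that route, something GUP-based along these lines might be
enough for the test (untested sketch; lkdtm_user_virt_to_phys is just
a made-up name):

	static int lkdtm_user_virt_to_phys(unsigned long addr, phys_addr_t *phys)
	{
		struct page *page;

		/* pin the page so the translation stays valid while we use it */
		if (get_user_pages_fast(addr, 1, 0, &page) != 1)
			return -EFAULT;

		*phys = page_to_phys(page) | (addr & ~PAGE_MASK);
		put_page(page);

		return 0;
	}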

Tycho

Patch

diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
index 342a9ccb93c1..94a667d94e15 100644
--- a/arch/arm64/mm/xpfo.c
+++ b/arch/arm64/mm/xpfo.c
@@ -74,3 +74,54 @@  void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
 
 	xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
 }
+
+/* Convert a user space virtual address to a physical address.
+ * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
+ * arch/x86/mm/pageattr.c
+ */
+phys_addr_t user_virt_to_phys(unsigned long addr)
+{
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(current->mm, addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return 0;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_sect(*pud) || !pud_present(*pud)) {
+		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
+		offset = addr & ~PUD_MASK;
+		goto out;
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_sect(*pmd) || !pmd_present(*pmd)) {
+		phys_addr = (unsigned long)pmd_pfn(*pmd) << PAGE_SHIFT;
+		offset = addr & ~PMD_MASK;
+		goto out;
+	}
+
+	pte =  pte_offset_kernel(pmd, addr);
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
+	offset = addr & ~PAGE_MASK;
+
+out:
+	return (phys_addr_t)(phys_addr | offset);
+}
+EXPORT_SYMBOL(user_virt_to_phys);
diff --git a/arch/x86/mm/xpfo.c b/arch/x86/mm/xpfo.c
index 6794d6724ab5..d24cf2c600e8 100644
--- a/arch/x86/mm/xpfo.c
+++ b/arch/x86/mm/xpfo.c
@@ -112,3 +112,60 @@  inline void xpfo_flush_kernel_tlb(struct page *page, int order)
 
 	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
 }
+
+/* Convert a user space virtual address to a physical address.
+ * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
+ * arch/x86/mm/pageattr.c
+ */
+phys_addr_t user_virt_to_phys(unsigned long addr)
+{
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(current->mm, addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return 0;
+
+	if (p4d_large(*p4d) || !p4d_present(*p4d)) {
+		phys_addr = (unsigned long)p4d_pfn(*p4d) << PAGE_SHIFT;
+		offset = addr & ~P4D_MASK;
+		goto out;
+	}
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_large(*pud) || !pud_present(*pud)) {
+		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
+		offset = addr & ~PUD_MASK;
+		goto out;
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_large(*pmd) || !pmd_present(*pmd)) {
+		phys_addr = (unsigned long)pmd_pfn(*pmd) << PAGE_SHIFT;
+		offset = addr & ~PMD_MASK;
+		goto out;
+	}
+
+	pte =  pte_offset_kernel(pmd, addr);
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
+	offset = addr & ~PAGE_MASK;
+
+out:
+	return (phys_addr_t)(phys_addr | offset);
+}
+EXPORT_SYMBOL(user_virt_to_phys);
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 1693af1a0293..be72da5fba26 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -19,6 +19,7 @@ 
 #ifdef CONFIG_XPFO
 
 #include <linux/dma-mapping.h>
+#include <linux/types.h>
 
 extern struct page_ext_operations page_xpfo_ops;
 
@@ -45,6 +46,8 @@  void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
 
 bool xpfo_enabled(void);
 
+phys_addr_t user_virt_to_phys(unsigned long addr);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -69,6 +72,8 @@  static inline void xpfo_temp_unmap(const void *addr, size_t size,
 
 static inline bool xpfo_enabled(void) { return false; }
 
+static inline phys_addr_t user_virt_to_phys(unsigned long addr) { return 0; }
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */