diff mbox series

mm/memory.c: make remap_pfn_range() reject unaligned addr

Message ID 20200617223414.165923-1-zhangalex@google.com (mailing list archive)
State New, archived
Series mm/memory.c: make remap_pfn_range() reject unaligned addr

Commit Message

Kaiyu Zhang June 17, 2020, 10:34 p.m. UTC
From: Alex Zhang <zhangalex@google.com>

This function implicitly assumes that the addr passed in is page aligned.
A non-page-aligned addr could ultimately trigger a kernel bug in
remap_pte_range(), as the loop's exit condition might never be
satisfied.  This patch documents the requirement and adds an explicit
check for it.

Signed-off-by: Alex Zhang <zhangalex@google.com>

---
 mm/memory.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Andrew Morton June 17, 2020, 10:47 p.m. UTC | #1
On Wed, 17 Jun 2020 15:34:14 -0700 Kaiyu Zhang <zhangalex@google.com> wrote:

> From: Alex Zhang <zhangalex@google.com>
> 
> This function implicitly assumes that the addr passed in is page aligned.
> A non-page-aligned addr could ultimately trigger a kernel bug in
> remap_pte_range(), as the loop's exit condition might never be
> satisfied.  This patch documents the requirement and adds an explicit
> check for it.
> 
> ...
>
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
>  /**
>   * remap_pfn_range - remap kernel memory to userspace
>   * @vma: user vma to map to
> - * @addr: target user address to start at
> + * @addr: target page aligned user address to start at
>   * @pfn: page frame number of kernel physical memory address
>   * @size: size of mapping area
>   * @prot: page protection flags for this mapping
> @@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
>  	unsigned long remap_pfn = pfn;
>  	int err;
>  
> +	if (!PAGE_ALIGN(addr))
> +		return -EINVAL;
> +

That won't work.  PAGE_ALIGNED() will.

Also, as this is an error in the calling code it would be better to do

	if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
		return -EINVAL;

so that the offending code can be fixed up.

Is there any code in the kernel tree which actually has this error?

Patch

diff --git a/mm/memory.c b/mm/memory.c
index dc7f3543b1fd..9cb0a75f1555 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2081,7 +2081,7 @@  static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 /**
  * remap_pfn_range - remap kernel memory to userspace
  * @vma: user vma to map to
- * @addr: target user address to start at
+ * @addr: target page aligned user address to start at
  * @pfn: page frame number of kernel physical memory address
  * @size: size of mapping area
  * @prot: page protection flags for this mapping
@@ -2100,6 +2100,9 @@  int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 	unsigned long remap_pfn = pfn;
 	int err;
 
+	if (!PAGE_ALIGN(addr))
+		return -EINVAL;
+
 	/*
 	 * Physically remapped pages are special. Tell the
 	 * rest of the world about it: