
arm64: mm: Pass original fault address to handle_mm_fault()

Message ID 20210614122701.100515-1-gshan@redhat.com (mailing list archive)
State New, archived
Series: arm64: mm: Pass original fault address to handle_mm_fault()

Commit Message

Gavin Shan June 14, 2021, 12:27 p.m. UTC
Currently, the lower bits of the fault address are cleared before it is
passed to handle_mm_fault(). This is unnecessary, since the generic code
has done the same thing since commit 1a29d85eb0f19 ("mm: use vmf->address
instead of of vmf->virtual_address").

Pass the original fault address to handle_mm_fault() instead, in case
the generic code needs to know the exact fault address.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/mm/fault.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Catalin Marinas June 14, 2021, 3:42 p.m. UTC | #1
On Mon, Jun 14, 2021 at 08:27:01PM +0800, Gavin Shan wrote:
> Currently, the lower bits of the fault address are cleared before it is
> passed to handle_mm_fault(). This is unnecessary, since the generic code
> has done the same thing since commit 1a29d85eb0f19 ("mm: use vmf->address
> instead of of vmf->virtual_address").
> 
> Pass the original fault address to handle_mm_fault() instead, in case
> the generic code needs to know the exact fault address.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/mm/fault.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 871c82ab0a30..e2883237216d 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -504,7 +504,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
>  	 */
>  	if (!(vma->vm_flags & vm_flags))
>  		return VM_FAULT_BADACCESS;
> -	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
> +	return handle_mm_fault(vma, addr, mm_flags, regs);

This seems to match most of the other architectures (arch/arm also masks
out the bottom bits). So:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Anshuman Khandual June 15, 2021, 11:11 a.m. UTC | #2
On 6/14/21 5:57 PM, Gavin Shan wrote:
> Currently, the lower bits of the fault address are cleared before it is
> passed to handle_mm_fault(). This is unnecessary, since the generic code
> has done the same thing since commit 1a29d85eb0f19 ("mm: use vmf->address
> instead of of vmf->virtual_address").
> 
> Pass the original fault address to handle_mm_fault() instead, in case
> the generic code needs to know the exact fault address.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/mm/fault.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 871c82ab0a30..e2883237216d 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -504,7 +504,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
>  	 */
>  	if (!(vma->vm_flags & vm_flags))
>  		return VM_FAULT_BADACCESS;
> -	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
> +	return handle_mm_fault(vma, addr, mm_flags, regs);
>  }
>  
>  static bool is_el0_instruction_abort(unsigned int esr)
> 

FWIW

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Will Deacon June 15, 2021, 12:06 p.m. UTC | #3
On Mon, 14 Jun 2021 20:27:01 +0800, Gavin Shan wrote:
> Currently, the lower bits of the fault address are cleared before it is
> passed to handle_mm_fault(). This is unnecessary, since the generic code
> has done the same thing since commit 1a29d85eb0f19 ("mm: use vmf->address
> instead of of vmf->virtual_address").
> 
> Pass the original fault address to handle_mm_fault() instead, in case
> the generic code needs to know the exact fault address.

Applied to arm64 (for-next/mm), thanks!

[1/1] arm64: mm: Pass original fault address to handle_mm_fault()
      https://git.kernel.org/arm64/c/84c5e23edecd

Cheers,

Patch

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 871c82ab0a30..e2883237216d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -504,7 +504,7 @@  static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
+	return handle_mm_fault(vma, addr, mm_flags, regs);
 }
 
 static bool is_el0_instruction_abort(unsigned int esr)