
[1/6] mm: use vma_pages() to replace (vm_end - vm_start) >> PAGE_SHIFT

Message ID 1366030138-71292-1-git-send-email-huawei.libin@huawei.com (mailing list archive)
State New, archived
Delegated to: Bjorn Helgaas

Commit Message

Li Bin April 15, 2013, 12:48 p.m. UTC
The (vma->vm_end - vma->vm_start) >> PAGE_SHIFT operation is already
implemented as an inline function, vma_pages(), in linux/mm.h, so use it.

Signed-off-by: Libin <huawei.libin@huawei.com>
---
 mm/memory.c | 2 +-
 mm/mmap.c   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
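For reference, the helper this patch switches to is defined in include/linux/mm.h
roughly as follows (a sketch of the mainline definition at the time, not part of
this patch):

    static inline unsigned long vma_pages(struct vm_area_struct *vma)
    {
        return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
    }

Using the helper keeps the page-count computation in one place instead of
open-coding the shift at each call site.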

Comments

Michel Lespinasse April 18, 2013, 7:58 a.m. UTC | #1
On Mon, Apr 15, 2013 at 5:48 AM, Libin <huawei.libin@huawei.com> wrote:
> The (vma->vm_end - vma->vm_start) >> PAGE_SHIFT operation is already
> implemented as an inline function, vma_pages(), in linux/mm.h, so use it.
>
> Signed-off-by: Libin <huawei.libin@huawei.com>

Looks good to me.

Reviewed-by: Michel Lespinasse <walken@google.com>

Patch

diff --git a/mm/memory.c b/mm/memory.c
index 13cbc42..8b8ae1c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2866,7 +2866,7 @@  static inline void unmap_mapping_range_tree(struct rb_root *root,
 			details->first_index, details->last_index) {
 
 		vba = vma->vm_pgoff;
-		vea = vba + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) - 1;
+		vea = vba + vma_pages(vma) - 1;
 		/* Assume for now that PAGE_CACHE_SHIFT == PAGE_SHIFT */
 		zba = details->first_index;
 		if (zba < vba)
diff --git a/mm/mmap.c b/mm/mmap.c
index 0db0de1..118bfcb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -919,7 +919,7 @@  can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
 	if (is_mergeable_vma(vma, file, vm_flags) &&
 	    is_mergeable_anon_vma(anon_vma, vma->anon_vma, vma)) {
 		pgoff_t vm_pglen;
-		vm_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+		vm_pglen = vma_pages(vma);
 		if (vma->vm_pgoff + vm_pglen == vm_pgoff)
 			return 1;
 	}