
[4/5] mm/vmalloc: Add code comment for find_vmap_area_exceed_addr()

Message ID 20220606083909.363350-5-bhe@redhat.com
State New
Series Cleanup patches of vmalloc

Commit Message

Baoquan He June 6, 2022, 8:39 a.m. UTC
Its behaviour is like find_vma(), which finds an area above the specified
address. Add a comment to make it easier to understand.

Also fix a grammar mistake and a typo in two code comments.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/vmalloc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
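
The comment being added documents a successor-style lookup: return the first
vmap area whose va_end lies above the given address, much like find_vma() does
for VMAs. Below is a minimal user-space C sketch of that search pattern over a
plain BST of non-overlapping ranges; the struct and names are illustrative,
not the kernel's rb-tree code.

#include <stddef.h>

struct area {
	unsigned long start, end;	/* range is [start, end), non-overlapping */
	struct area *left, *right;	/* BST ordered by start address */
};

/* Return the first area satisfying addr < end, or NULL if none. */
static struct area *area_exceed_addr(struct area *root, unsigned long addr)
{
	struct area *found = NULL;

	while (root) {
		if (addr < root->end) {
			/*
			 * Candidate; an earlier match may still exist in the
			 * left subtree, unless this area already covers addr.
			 */
			found = root;
			if (root->start <= addr)
				break;
			root = root->left;
		} else {
			/*
			 * Non-overlapping areas ordered by start: everything
			 * to the left ends at or below addr as well.
			 */
			root = root->right;
		}
	}

	return found;
}

The kernel function descends vmap_area_root in essentially the same way,
remembering the best candidate while walking left whenever the current node
still ends above addr.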

Comments

Uladzislau Rezki June 6, 2022, 8:50 p.m. UTC | #1
On Mon, Jun 06, 2022 at 04:39:08PM +0800, Baoquan He wrote:
> Its behaviour is like find_vma(), which finds an area above the specified
> address. Add a comment to make it easier to understand.
> 
> Also fix a grammar mistake and a typo in two code comments.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/vmalloc.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 11dfc897de40..860ed9986775 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -790,6 +790,7 @@ unsigned long vmalloc_nr_pages(void)
>  	return atomic_long_read(&nr_vmalloc_pages);
>  }
>  
> +/* Look up the first VA which satisfies  addr < va_end,  NULL if none. */
>  static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
>  {
>  	struct vmap_area *va = NULL;
> @@ -929,7 +930,7 @@ link_va(struct vmap_area *va, struct rb_root *root,
>  		 * Some explanation here. Just perform simple insertion
>  		 * to the tree. We do not set va->subtree_max_size to
>  		 * its current size before calling rb_insert_augmented().
> -		 * It is because of we populate the tree from the bottom
> +		 * It is because we populate the tree from the bottom
>  		 * to parent levels when the node _is_ in the tree.
>  		 *
>  		 * Therefore we set subtree_max_size to zero after insertion,
> @@ -1659,7 +1660,7 @@ static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);
>  
>  /*
>   * Serialize vmap purging.  There is no actual critical section protected
> - * by this look, but we want to avoid concurrent calls for performance
> + * by this lock, but we want to avoid concurrent calls for performance
>   * reasons and to make the pcpu_get_vm_areas more deterministic.
>   */
>  static DEFINE_MUTEX(vmap_purge_lock);
> -- 
> 2.34.1
> 
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

--
Uladzislau Rezki
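
For context on the second hunk: the link_va() comment it corrects explains
augmented rb-tree maintenance, where subtree_max_size is deliberately left at
zero during insertion and only recomputed bottom-up once the node is actually
linked into the tree. A rough user-space sketch of that bottom-up propagation
follows; the names are hypothetical, and the kernel uses rb_insert_augmented()
with its callback machinery rather than an explicit loop like this.

struct node {
	unsigned long size;		/* size of this node's free range */
	unsigned long subtree_max_size;	/* max size anywhere in this subtree */
	struct node *left, *right, *parent;
};

static unsigned long subtree_max(struct node *n)
{
	return n ? n->subtree_max_size : 0;
}

/*
 * After linking a new node with subtree_max_size == 0, walk up to the
 * root and refresh the augmented metadata at every level on the path.
 */
static void propagate_up(struct node *n)
{
	for (; n; n = n->parent) {
		unsigned long max = n->size;

		if (subtree_max(n->left) > max)
			max = subtree_max(n->left);
		if (subtree_max(n->right) > max)
			max = subtree_max(n->right);

		n->subtree_max_size = max;
	}
}

Zeroing subtree_max_size before linking means this propagation pass cannot be
fooled by a stale value on the freshly inserted node, which is exactly the
point the comment is making.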

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 11dfc897de40..860ed9986775 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -790,6 +790,7 @@ unsigned long vmalloc_nr_pages(void)
 	return atomic_long_read(&nr_vmalloc_pages);
 }
 
+/* Look up the first VA which satisfies  addr < va_end,  NULL if none. */
 static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
 {
 	struct vmap_area *va = NULL;
@@ -929,7 +930,7 @@ link_va(struct vmap_area *va, struct rb_root *root,
 		 * Some explanation here. Just perform simple insertion
 		 * to the tree. We do not set va->subtree_max_size to
 		 * its current size before calling rb_insert_augmented().
-		 * It is because of we populate the tree from the bottom
+		 * It is because we populate the tree from the bottom
 		 * to parent levels when the node _is_ in the tree.
 		 *
 		 * Therefore we set subtree_max_size to zero after insertion,
@@ -1659,7 +1660,7 @@ static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);
 
 /*
  * Serialize vmap purging.  There is no actual critical section protected
- * by this look, but we want to avoid concurrent calls for performance
+ * by this lock, but we want to avoid concurrent calls for performance
  * reasons and to make the pcpu_get_vm_areas more deterministic.
  */
 static DEFINE_MUTEX(vmap_purge_lock);
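
The last hunk's comment describes an unusual lock: vmap_purge_lock guards no
data at all and exists only to keep purge runs from overlapping. A hedged
pthreads sketch of that serialize-only pattern is below; it is illustrative,
not the kernel code, which takes vmap_purge_lock via mutex_trylock() or
mutex_lock() depending on the call site.

#include <pthread.h>

static pthread_mutex_t purge_lock = PTHREAD_MUTEX_INITIALIZER;

static void do_purge(void)
{
	/* ... reclaim lazily freed areas ... */
}

/*
 * No critical section here: the mutex only prevents concurrent,
 * redundant purge runs. A caller who finds it busy can simply move
 * on, since someone else is already doing the work.
 */
static void try_purge(void)
{
	if (pthread_mutex_trylock(&purge_lock) == 0) {
		do_purge();
		pthread_mutex_unlock(&purge_lock);
	}
}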