
mm: fix some comments formatting

Message ID HK0PR02MB3634AB83890BA58E709FC78186200@HK0PR02MB3634.apcprd02.prod.outlook.com (mailing list archive)
State New, archived
Series mm: fix some comments formatting

Commit Message

Chen Tao Sept. 15, 2020, 4:39 p.m. UTC
Correct the function name "get_partials" to "get_partial".
Update the old struct name "list3" to "kmem_cache_node".

Signed-off-by: Chen Tao <chentao3@hotmail.com>
---
 mm/slab.c | 2 +-
 mm/slub.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

Comments

Mike Rapoport Sept. 21, 2020, 12:20 p.m. UTC | #1
On Tue, Sep 15, 2020 at 09:39:56AM -0700, Chen Tao wrote:
> Correct the function name "get_partials" to "get_partial".
> Update the old struct name "list3" to "kmem_cache_node".
> 
> Signed-off-by: Chen Tao <chentao3@hotmail.com>

Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>

> ---
>  mm/slab.c | 2 +-
>  mm/slub.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slab.c b/mm/slab.c
> index 3160dff6fd76..0a13cce5d016 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1062,7 +1062,7 @@ int slab_prepare_cpu(unsigned int cpu)
>   * Even if all the cpus of a node are down, we don't free the
>   * kmem_cache_node of any cache. This to avoid a race between cpu_down, and
>   * a kmalloc allocation from another cpu for memory from the node of
> - * the cpu going down.  The list3 structure is usually allocated from
> + * the cpu going down.  The kmem_cache_node structure is usually allocated from
>   * kmem_cache_create() and gets destroyed at kmem_cache_destroy().
>   */
>  int slab_dead_cpu(unsigned int cpu)
> diff --git a/mm/slub.c b/mm/slub.c
> index d4177aecedf6..1faa9ada3f51 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1960,7 +1960,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
>  	/*
>  	 * Racy check. If we mistakenly see no partial slabs then we
>  	 * just allocate an empty slab. If we mistakenly try to get a
> -	 * partial slab and there is none available then get_partials()
> +	 * partial slab and there is none available then get_partial()
>  	 * will return NULL.
>  	 */
>  	if (!n || !n->nr_partial)
> -- 
> 2.17.1
> 
>

Patch

diff --git a/mm/slab.c b/mm/slab.c
index 3160dff6fd76..0a13cce5d016 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1062,7 +1062,7 @@ int slab_prepare_cpu(unsigned int cpu)
  * Even if all the cpus of a node are down, we don't free the
  * kmem_cache_node of any cache. This to avoid a race between cpu_down, and
  * a kmalloc allocation from another cpu for memory from the node of
- * the cpu going down.  The list3 structure is usually allocated from
+ * the cpu going down.  The kmem_cache_node structure is usually allocated from
  * kmem_cache_create() and gets destroyed at kmem_cache_destroy().
  */
 int slab_dead_cpu(unsigned int cpu)
diff --git a/mm/slub.c b/mm/slub.c
index d4177aecedf6..1faa9ada3f51 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1960,7 +1960,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 	/*
 	 * Racy check. If we mistakenly see no partial slabs then we
 	 * just allocate an empty slab. If we mistakenly try to get a
-	 * partial slab and there is none available then get_partials()
+	 * partial slab and there is none available then get_partial()
 	 * will return NULL.
 	 */
 	if (!n || !n->nr_partial)