[2/2] mm: slub: Delete useless parameter of alloc_slab_page()

Message ID: 20220309145052.219138-3-sxwjean@me.com (mailing list archive)
State: New
Series: Cleanups for slab

Commit Message

Xiongwei Song March 9, 2022, 2:50 p.m. UTC
From: Xiongwei Song <sxwjean@gmail.com>

The parameter @s is useless for alloc_slab_page(), so let's delete it.

Signed-off-by: Xiongwei Song <sxwjean@gmail.com>
---
 mm/slub.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

Matthew Wilcox March 9, 2022, 3:28 p.m. UTC | #1
On Wed, Mar 09, 2022 at 10:50:52PM +0800, sxwjean@me.com wrote:
> From: Xiongwei Song <sxwjean@gmail.com>
> 
> The parameter @s is useless for alloc_slab_page(), so let's delete it.

Perhaps we could add a little more information here.

It was added in 2014 by 5dfb41750992 ("sl[au]b: charge slabs to kmemcg
explicitly").  The need for it was removed in 2020 by 1f3147b49d75
("mm: slub: call account_slab_page() after slab page initialization").

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
David Rientjes March 9, 2022, 5:15 p.m. UTC | #2
On Wed, 9 Mar 2022, Matthew Wilcox wrote:

> On Wed, Mar 09, 2022 at 10:50:52PM +0800, sxwjean@me.com wrote:
> > From: Xiongwei Song <sxwjean@gmail.com>
> > 
> > The parameter @s is useless for alloc_slab_page(), so let's delete it.
> 
> Perhaps we could add a little more information here.
> 
> It was added in 2014 by 5dfb41750992 ("sl[au]b: charge slabs to kmemcg
> explicitly").  The need for it was removed in 2020 by 1f3147b49d75
> ("mm: slub: call account_slab_page() after slab page initialization").
> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> 

Acked-by: David Rientjes <rientjes@google.com>
Xiongwei Song March 10, 2022, 1:17 a.m. UTC | #3
On Wed, Mar 9, 2022 at 11:29 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Mar 09, 2022 at 10:50:52PM +0800, sxwjean@me.com wrote:
> > From: Xiongwei Song <sxwjean@gmail.com>
> >
> > The parameter @s is useless for alloc_slab_page(), so let's delete it.
>
> Perhaps we could add a little more information here.
>
> It was added in 2014 by 5dfb41750992 ("sl[au]b: charge slabs to kmemcg
> explicitly").  The need for it was removed in 2020 by 1f3147b49d75
> ("mm: slub: call account_slab_page() after slab page initialization").

Ok. Will update.

>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Thank you.

Regards,
Xiongwei
Xiongwei Song March 10, 2022, 1:18 a.m. UTC | #4
On Thu, Mar 10, 2022 at 1:15 AM David Rientjes <rientjes@google.com> wrote:
>
> On Wed, 9 Mar 2022, Matthew Wilcox wrote:
>
> > On Wed, Mar 09, 2022 at 10:50:52PM +0800, sxwjean@me.com wrote:
> > > From: Xiongwei Song <sxwjean@gmail.com>
> > >
> > > The parameter @s is useless for alloc_slab_page(), so let's delete it.
> >
> > Perhaps we could add a little more information here.
> >
> > It was added in 2014 by 5dfb41750992 ("sl[au]b: charge slabs to kmemcg
> > explicitly").  The need for it was removed in 2020 by 1f3147b49d75
> > ("mm: slub: call account_slab_page() after slab page initialization").
> >
> > Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> >
>
> Acked-by: David Rientjes <rientjes@google.com>

Thank you!
Roman Gushchin March 10, 2022, 1:48 a.m. UTC | #5
On Wed, Mar 09, 2022 at 10:50:52PM +0800, sxwjean@me.com wrote:
> From: Xiongwei Song <sxwjean@gmail.com>
> 
> The parameter @s is useless for alloc_slab_page(), so let's delete it.
> 
> Signed-off-by: Xiongwei Song <sxwjean@gmail.com>

Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>

Thanks!

Patch

diff --git a/mm/slub.c b/mm/slub.c
index 261474092e43..5d273ee04c43 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1788,8 +1788,8 @@  static void *setup_object(struct kmem_cache *s, struct slab *slab,
 /*
  * Slab allocation and freeing
  */
-static inline struct slab *alloc_slab_page(struct kmem_cache *s,
-		gfp_t flags, int node, struct kmem_cache_order_objects oo)
+static inline struct slab *alloc_slab_page(gfp_t flags, int node,
+		struct kmem_cache_order_objects oo)
 {
 	struct folio *folio;
 	struct slab *slab;
@@ -1941,7 +1941,7 @@  static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->min))
 		alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~(__GFP_RECLAIM|__GFP_NOFAIL);
 
-	slab = alloc_slab_page(s, alloc_gfp, node, oo);
+	slab = alloc_slab_page(alloc_gfp, node, oo);
 	if (unlikely(!slab)) {
 		oo = s->min;
 		alloc_gfp = flags;
@@ -1949,7 +1949,7 @@  static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 		 * Allocation may have failed due to fragmentation.
 		 * Try a lower order alloc if possible
 		 */
-		slab = alloc_slab_page(s, alloc_gfp, node, oo);
+		slab = alloc_slab_page(alloc_gfp, node, oo);
 		if (unlikely(!slab))
 			goto out;
 		stat(s, ORDER_FALLBACK);