
[RFC,net-next] Introducing lockless cache built on top of slab allocator

Message ID 20210920174713.4998-1-l4stpr0gr4m@gmail.com (mailing list archive)
State RFC
Delegated to: Netdev Maintainers
Series [RFC,net-next] Introducing lockless cache built on top of slab allocator

Checks

Context Check Description
netdev/cover_letter success Link
netdev/fixes_present success Link
netdev/patch_count success Link
netdev/tree_selection success Clearly marked for net-next
netdev/subject_prefix success Link
netdev/cc_maintainers success CCed 10 of 10 maintainers
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success Link
netdev/module_param success Was 0 now: 0
netdev/build_32bit fail Errors and warnings before: 5 this patch: 9
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success Link
netdev/checkpatch warning WARNING: line length of 82 exceeds 80 columns WARNING: line length of 89 exceeds 80 columns
netdev/build_allmodconfig_warn fail Errors and warnings before: 5 this patch: 9
netdev/header_inline success Link

Commit Message

Kangmin Park Sept. 20, 2021, 5:47 p.m. UTC
This is just an introduction and a proof of concept.
The patch is based on other RFC patches, so the code is not
correct yet; it is only a simple proof of concept.

The block layer recently implemented a per-CPU, lockless cache on
top of the slab allocator. It can be used for IO polling.

Link: https://lwn.net/Articles/868070/
Link: https://www.spinics.net/lists/linux-block/msg71964.html

It yielded an IOPS increase of roughly 10% in the block layer.
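
The core idea is a small per-CPU free list sitting in front of the slab
allocator: callers that already run with preemption or BH disabled can
allocate and free objects without any locking or atomics, and only fall
back to the slab allocator when the cache is empty or full. A rough
sketch of the concept (the names and the cache size below are
illustrative, not the actual block-layer or slab code):

#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/slab.h>

struct pcpu_obj_cache {
	void		*objs[32];	/* illustrative cache size */
	unsigned int	nr;
};

static DEFINE_PER_CPU(struct pcpu_obj_cache, obj_cache);

/* Callers are assumed to have preemption/BH disabled, so the per-CPU
 * state can be touched without locks or atomics.
 */
static void *obj_cache_alloc(struct kmem_cache *s, gfp_t gfp)
{
	struct pcpu_obj_cache *c = this_cpu_ptr(&obj_cache);

	if (c->nr)
		return c->objs[--c->nr];

	/* Cache empty: fall back to the regular slab allocator. */
	return kmem_cache_alloc(s, gfp);
}

static void obj_cache_free(struct kmem_cache *s, void *obj)
{
	struct pcpu_obj_cache *c = this_cpu_ptr(&obj_cache);

	if (c->nr < ARRAY_SIZE(c->objs)) {
		c->objs[c->nr++] = obj;
		return;
	}

	/* Cache full: give the object back to the slab allocator. */
	kmem_cache_free(s, obj);
}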

There are also ongoing attempts to implement such a per-CPU,
lockless cache in the slab allocator itself.

Link: https://lore.kernel.org/linux-mm/20210920154816.31832-1-42.hyeyoo@gmail.com/T/#u

If this cache is implemented successfully, how about using it to
allocate skbs in napi_skb_cache_get() instead of
kmem_cache_alloc_bulk()?

I would appreciate your comments and opinions.

Signed-off-by: Kangmin Park <l4stpr0gr4m@gmail.com>
---
 net/core/skbuff.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

Comments

Kangmin Park Sept. 21, 2021, 1:48 a.m. UTC | #1
On Tue, Sep 21, 2021 at 5:50 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Tue, 21 Sep 2021 02:47:13 +0900 Kangmin Park wrote:
> > This is just an introduction and a proof of concept.
> > The patch is based on other RFC patches, so the code is not
> > correct yet; it is only a simple proof of concept.
> >
> > The block layer recently implemented a per-CPU, lockless cache on
> > top of the slab allocator. It can be used for IO polling.
> >
> > Link: https://lwn.net/Articles/868070/
> > Link: https://www.spinics.net/lists/linux-block/msg71964.html
> >
> > It yielded an IOPS increase of roughly 10% in the block layer.
> >
> > There are also ongoing attempts to implement such a per-CPU,
> > lockless cache in the slab allocator itself.
> >
> > Link: https://lore.kernel.org/linux-mm/20210920154816.31832-1-42.hyeyoo@gmail.com/T/#u
> >
> > If this cache is implemented successfully, how about using it to
> > allocate skbs in napi_skb_cache_get() instead of
> > kmem_cache_alloc_bulk()?
> >
> > I would appreciate your comments and opinions.
>
> Please take a look at skb cache in struct napi_alloc_cache.
> That should be your target here.
>

Oh, thanks for the advice.
I'll send a v2 patch that replaces/improves napi_alloc_cache
once the lockless cache implementation has made some progress.
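
For reference, the cache you point at looks roughly like this today in
net/core/skbuff.c (quoting from memory, so the details may differ
slightly):

#define NAPI_SKB_CACHE_SIZE	64
#define NAPI_SKB_CACHE_BULK	16
#define NAPI_SKB_CACHE_HALF	(NAPI_SKB_CACHE_SIZE / 2)

struct napi_alloc_cache {
	struct page_frag_cache page;
	unsigned int skb_count;
	void *skb_cache[NAPI_SKB_CACHE_SIZE];
};

static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);

The per-CPU skb_cache[] array already gives a lockless fast path for
heads freed and reallocated on the same CPU; what the lockless slab
cache would mainly change is how that array is refilled when it runs
empty, which is the kmem_cache_alloc_bulk() path this patch touches.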

Best Regards,
Kangmin Park

> > Signed-off-by: Kangmin Park <l4stpr0gr4m@gmail.com>
> > ---
> >  net/core/skbuff.c | 14 +++++++++-----
> >  1 file changed, 9 insertions(+), 5 deletions(-)
> >
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 7c2ab27fcbf9..f9a9deca423d 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -170,11 +170,15 @@ static struct sk_buff *napi_skb_cache_get(void)
> >       struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
> >       struct sk_buff *skb;
> >
> > -     if (unlikely(!nc->skb_count))
> > -             nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
> > -                                                   GFP_ATOMIC,
> > -                                                   NAPI_SKB_CACHE_BULK,
> > -                                                   nc->skb_cache);
> > +     if (unlikely(!nc->skb_count)) {
> > +             /* kmem_cache_alloc_cached should be changed to return the size of
> > +              * the allocated cache
> > +              */
> > +             nc->skb_cache = kmem_cache_alloc_cached(skbuff_head_cache,
> > +                                                     GFP_ATOMIC | SLB_LOCKLESS_CACHE);
> > +             nc->skb_count = this_cpu_ptr(skbuff_head_cache)->size;
> > +     }
> > +
> >       if (unlikely(!nc->skb_count))
> >               return NULL;
> >

Patch

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7c2ab27fcbf9..f9a9deca423d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -170,11 +170,15 @@  static struct sk_buff *napi_skb_cache_get(void)
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 	struct sk_buff *skb;
 
-	if (unlikely(!nc->skb_count))
-		nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
-						      GFP_ATOMIC,
-						      NAPI_SKB_CACHE_BULK,
-						      nc->skb_cache);
+	if (unlikely(!nc->skb_count)) {
+		/* kmem_cache_alloc_cached should be changed to return the size of
+		 * the allocated cache
+		 */
+		nc->skb_cache = kmem_cache_alloc_cached(skbuff_head_cache,
+							GFP_ATOMIC | SLB_LOCKLESS_CACHE);
+		nc->skb_count = this_cpu_ptr(skbuff_head_cache)->size;
+	}
+
 	if (unlikely(!nc->skb_count))
 		return NULL;