
[v8,5/5] skb_array: resize support

Message ID 20160613235450-mutt-send-email-mst@redhat.com (mailing list archive)
State New, archived

Commit Message

Michael S. Tsirkin June 13, 2016, 8:54 p.m. UTC
Update skb_array after ptr_ring API changes.
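
The ptr_ring entry points used below now take a destructor callback,
invoked for each pointer still queued when the ring is cleaned up or
shrunk. In sketch form (signatures inferred from the calls in this
patch, not quoted from ptr_ring.h):

  void ptr_ring_cleanup(struct ptr_ring *r, void (*destroy)(void *));
  int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
                      void (*destroy)(void *));

skb_array passes __skb_array_destroy_skb(), which kfree_skb()s each
entry, so queued skbs are not leaked.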

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/skb_array.h | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

Comments

Jesper Dangaard Brouer June 14, 2016, 12:21 p.m. UTC | #1
On Mon, 13 Jun 2016 23:54:50 +0300
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> Update skb_array after ptr_ring API changes.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>

Also did resize unit test:
 https://github.com/netoptimizer/prototype-kernel/commit/af0b4d7e7261e9
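
In sketch form, the kind of check such a test performs (illustrative
only, not the code behind the link above):

#include <linux/skb_array.h>
#include <linux/skbuff.h>

/* Queue a few skbs, grow the ring, and verify the queued entries
 * survive the resize in order.
 */
static int test_resize_keeps_entries(void)
{
	struct skb_array q;
	struct sk_buff *skbs[4];
	int i, err;

	err = skb_array_init(&q, 4, GFP_KERNEL);
	if (err)
		return err;

	for (i = 0; i < 4; i++) {
		skbs[i] = alloc_skb(64, GFP_KERNEL);
		if (!skbs[i])
			goto fail;
		if (skb_array_produce(&q, skbs[i])) {
			kfree_skb(skbs[i]);
			goto fail;
		}
	}

	err = skb_array_resize(&q, 8, GFP_KERNEL);
	if (err)
		goto fail;

	for (i = 0; i < 4; i++) {
		struct sk_buff *skb = skb_array_consume(&q);
		bool ok = (skb == skbs[i]);

		kfree_skb(skb);	/* safe on NULL */
		if (!ok)
			goto fail;
	}

	skb_array_cleanup(&q);
	return 0;
fail:
	skb_array_cleanup(&q);	/* frees anything still queued */
	return -EINVAL;
}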

The parallel benchmark:
 https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/skb_array_parallel01.c

The benchmark has been adjusted to use the non-BH variant of the lock.
Results below are with a single producer CPU and a single consumer CPU
running concurrently, and with the queue always partly full (the
optimal case for minimizing cache contention):

On CPU i7-4790K @ 4.00GHz:
 - Enqueue 32 cycles(tsc) 8.162 ns 
 - Dequeue 33 cycles(tsc) 8.417 ns

Notice this is an extremely good concurrency result, as it is very
close to the optimal-case benchmark of 26 cycles, measured by running
enqueue+dequeue on the same CPU in a tight loop[2].
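
For reference, that optimal-case number comes from a tight
enqueue+dequeue loop on one CPU. A minimal sketch of such a
measurement (illustrative, not the actual prototype-kernel benchmark;
the get_cycles() timing, names, and structure are assumptions):

#include <linux/skb_array.h>
#include <linux/timex.h>	/* cycles_t, get_cycles() */
#include <linux/printk.h>

/* Average cost of one enqueue+dequeue pair on a single CPU; the
 * queue never holds more than one entry, so there is no contention.
 */
static void bench_enq_deq(struct skb_array *a, struct sk_buff *skb,
			  unsigned long loops)
{
	cycles_t start = get_cycles();
	unsigned long i;

	for (i = 0; i < loops; i++) {
		skb_array_produce(a, skb);	/* returns 0 on success */
		skb_array_consume(a);		/* returns the queued skb */
	}
	pr_info("enq+deq: %lu cycles/iter\n",
		(unsigned long)((get_cycles() - start) / loops));
}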

Patch

diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index c4c0902..678bfbf 100644
--- a/include/linux/skb_array.h
+++ b/include/linux/skb_array.h
@@ -63,9 +63,9 @@  static inline int skb_array_produce_any(struct skb_array *a, struct sk_buff *skb
 	return ptr_ring_produce_any(&a->ring, skb);
 }
 
-/* Might be slightly faster than skb_array_empty below, but callers invoking
- * this in a loop must take care to use a compiler barrier, for example
- * cpu_relax().
+/* Might be slightly faster than skb_array_empty below, but only safe if the
+ * array is never resized. Also, callers invoking this in a loop must take care
+ * to use a compiler barrier, for example cpu_relax().
  */
 static inline bool __skb_array_empty(struct skb_array *a)
 {
@@ -77,6 +77,21 @@  static inline bool skb_array_empty(struct skb_array *a)
 	return ptr_ring_empty(&a->ring);
 }
 
+static inline bool skb_array_empty_bh(struct skb_array *a)
+{
+	return ptr_ring_empty_bh(&a->ring);
+}
+
+static inline bool skb_array_empty_irq(struct skb_array *a)
+{
+	return ptr_ring_empty_irq(&a->ring);
+}
+
+static inline bool skb_array_empty_any(struct skb_array *a)
+{
+	return ptr_ring_empty_any(&a->ring);
+}
+
 static inline struct sk_buff *skb_array_consume(struct skb_array *a)
 {
 	return ptr_ring_consume(&a->ring);
@@ -136,9 +151,19 @@  static inline int skb_array_init(struct skb_array *a, int size, gfp_t gfp)
 	return ptr_ring_init(&a->ring, size, gfp);
 }
 
+static void __skb_array_destroy_skb(void *ptr)
+{
+	kfree_skb(ptr);
+}
+
+static inline int skb_array_resize(struct skb_array *a, int size, gfp_t gfp)
+{
+	return ptr_ring_resize(&a->ring, size, gfp, __skb_array_destroy_skb);
+}
+
 static inline void skb_array_cleanup(struct skb_array *a)
 {
-	ptr_ring_cleanup(&a->ring);
+	ptr_ring_cleanup(&a->ring, __skb_array_destroy_skb);
 }
 
 #endif /* _LINUX_SKB_ARRAY_H  */
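
A hedged usage sketch of the resulting API (illustrative module code,
names made up; error handling abbreviated):

#include <linux/skb_array.h>

static int skb_queue_grow_example(void)
{
	struct skb_array q;
	int err;

	err = skb_array_init(&q, 128, GFP_KERNEL);
	if (err)
		return err;

	/* ... skb_array_produce()/skb_array_consume() as usual ... */

	/* Grow the ring; when shrinking instead, skbs that no longer
	 * fit are freed via __skb_array_destroy_skb().
	 */
	err = skb_array_resize(&q, 256, GFP_KERNEL);

	/* Cleanup frees any skbs still queued, via the same callback. */
	skb_array_cleanup(&q);
	return err;
}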