From patchwork Wed Jun 22 18:18:19 2016
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 9193445
Date: Wed, 22 Jun 2016 21:18:19 +0300
From: "Michael S. Tsirkin"
To: Jason Wang
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux-foundation.org, davem@davemloft.net,
    eric.dumazet@gmail.com, brouer@redhat.com
Subject: Re: [PATCH net-next V2] tun: introduce tx skb ring
Message-ID: <20160622211620-mutt-send-email-mst@redhat.com>
References: <1465979897-4445-1-git-send-email-jasowang@redhat.com>
 <20160617033218-mutt-send-email-mst@redhat.com>
In-Reply-To: <20160617033218-mutt-send-email-mst@redhat.com>

On Fri, Jun 17, 2016 at 03:41:20AM +0300, Michael S. Tsirkin wrote:
> Would it help to have ptr_ring_resize that gets an array of
> rings and resizes them both to same length?

OK, here it is. Untested so far, and no skb wrapper. Pls let me know
whether this is what you had in mind.

-->

ptr_ring: support resizing multiple queues

Signed-off-by: Michael S. Tsirkin

---

diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
index a29b023..e576801 100644
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -354,20 +354,14 @@ static inline int ptr_ring_init(struct ptr_ring *r, int size, int pad, gfp_t gfp
 	return 0;
 }
 
-static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
-				  void (*destroy)(void *))
+static inline void **__ptr_ring_swap_queue(struct ptr_ring *r, void **queue,
+					   int size, gfp_t gfp,
+					   void (*destroy)(void *))
 {
-	unsigned long flags;
 	int producer = 0;
-	void **queue = __ptr_ring_init_queue_alloc(size, gfp);
 	void **old;
 	void *ptr;
 
-	if (!queue)
-		return -ENOMEM;
-
-	spin_lock_irqsave(&(r)->producer_lock, flags);
-
 	while ((ptr = ptr_ring_consume(r)))
 		if (producer < size)
 			queue[producer++] = ptr;
@@ -380,6 +374,23 @@ static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
 	old = r->queue;
 	r->queue = queue;
 
+	return old;
+}
+
+static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
+				  void (*destroy)(void *))
+{
+	unsigned long flags;
+	void **queue = __ptr_ring_init_queue_alloc(size, gfp);
+	void **old;
+
+	if (!queue)
+		return -ENOMEM;
+
+	spin_lock_irqsave(&(r)->producer_lock, flags);
+
+	old = __ptr_ring_swap_queue(r, queue, size, gfp, destroy);
+
 	spin_unlock_irqrestore(&(r)->producer_lock, flags);
 
 	kfree(old);
@@ -387,6 +398,49 @@ static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
 	return 0;
 }
 
+static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, int nrings,
+					   int size,
+					   gfp_t gfp, void (*destroy)(void *))
+{
+	unsigned long flags;
+	void ***queues;
+	int i;
+
+	queues = kmalloc(nrings * sizeof *queues, gfp);
+	if (!queues)
+		goto noqueues;
+
+	for (i = 0; i < nrings; ++i) {
+		queues[i] = __ptr_ring_init_queue_alloc(size, gfp);
+		if (!queues[i])
+			goto nomem;
+	}
+
+	spin_lock_irqsave(&(rings[i])->producer_lock, flags);
+
+	for (i = 0; i < nrings; ++i)
+		queues[i] = __ptr_ring_swap_queue(rings[i], queues[i],
+						  size, gfp, destroy);
+
+	spin_unlock_irqrestore(&(rings[i])->producer_lock, flags);
+
+	for (i = 0; i < nrings; ++i)
+		kfree(queues[i]);
+
+	kfree(queues);
+
+	return 0;
+
+nomem:
+	while (--i >= 0)
+		kfree(queues[i]);
+
+	kfree(queues);
+
+noqueues:
+	return -ENOMEM;
+}
+
 static inline void ptr_ring_cleanup(struct ptr_ring *r, void (*destroy)(void *))
 {
 	void *ptr;
diff --git a/tools/virtio/ringtest/ptr_ring.c b/tools/virtio/ringtest/ptr_ring.c
index 26dc1d2..deb36af 100644
--- a/tools/virtio/ringtest/ptr_ring.c
+++ b/tools/virtio/ringtest/ptr_ring.c
@@ -17,6 +17,11 @@ typedef pthread_spinlock_t spinlock_t;
 
 typedef int gfp_t;
+static void *kmalloc(unsigned size, gfp_t gfp)
+{
+	return memalign(64, size);
+}
+
 static void *kzalloc(unsigned size, gfp_t gfp)
 {
 	void *p = memalign(64, size);
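
For illustration only, and not part of the patch above: a rough sketch of how a
driver that keeps one ptr_ring per tx queue might call the new helper. The
example_* names are invented for this sketch, and the GFP_KERNEL/kfree()
choices are assumptions about the caller, not anything defined by this series.

/*
 * Hypothetical caller sketch: resize every per-queue ring to the same
 * new size in a single call.
 */
#include <linux/ptr_ring.h>	/* with this patch applied */
#include <linux/slab.h>

static void example_destroy(void *ptr)
{
	kfree(ptr);		/* assumes queued entries were kmalloc()ed */
}

static int example_resize_all(struct ptr_ring **rings, int nrings,
			      int new_size)
{
	/*
	 * All replacement queue arrays are allocated before any ring is
	 * swapped, so on -ENOMEM every ring is left untouched; entries
	 * that no longer fit are handed to the destroy callback.
	 */
	return ptr_ring_resize_multiple(rings, nrings, new_size,
					GFP_KERNEL, example_destroy);
}

Compared with calling ptr_ring_resize() once per ring, the multi-ring variant
does all of its allocations up front, so a mid-way allocation failure cannot
leave some queues resized and others still at the old size.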