From patchwork Wed Apr 6 03:43:23 2022
X-Patchwork-Submitter: Xuan Zhuo
X-Patchwork-Id: 12803086
From: Xuan Zhuo
To: virtualization@lists.linux-foundation.org
Cc: Jeff Dike, Richard Weinberger, Anton Ivanov, "Michael S. Tsirkin",
    Jason Wang, "David S. Miller", Jakub Kicinski, Hans de Goede,
    Mark Gross, Vadim Pasternak, Bjorn Andersson, Mathieu Poirier,
    Cornelia Huck, Halil Pasic, Heiko Carstens, Vasily Gorbik,
    Christian Borntraeger, Alexander Gordeev, Sven Schnelle,
    Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
    John Fastabend, Johannes Berg, Xuan Zhuo, Vincent Whitchurch,
    linux-um@lists.infradead.org, netdev@vger.kernel.org,
    platform-driver-x86@vger.kernel.org, linux-remoteproc@vger.kernel.org,
    linux-s390@vger.kernel.org, kvm@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v9 09/32] virtio_ring: split: extract the logic of vq init
Date: Wed, 6 Apr 2022 11:43:23 +0800
Message-Id: <20220406034346.74409-10-xuanzhuo@linux.alibaba.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20220406034346.74409-1-xuanzhuo@linux.alibaba.com>
References: <20220406034346.74409-1-xuanzhuo@linux.alibaba.com>
X-Git-Hash: 881cb3483d12
X-Mailing-List: kvm@vger.kernel.org

Separate out the logic that initializes the vq so that subsequent
patches can call it on its own. The key property of this helper is that
it does not depend on information passed in by the upper layer, so it
can be called repeatedly.
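
To make the intent concrete, here is a small, illustrative sketch (not
part of this patch) of how a later re-initialization path could reuse
the helper: everything vring_virtqueue_init_split() writes lives in
struct vring_virtqueue or is derived from the device's features, so no
callback, name or index from the transport layer is needed. Only
vring_virtqueue_init_split(), vring_virtqueue_attach_split() and
vring_alloc_state_extra_split() below come from this series; the
virtqueue_reinit_split() wrapper and its flow are assumptions made for
illustration only.

static int virtqueue_reinit_split(struct vring_virtqueue *vq)
{
	struct virtio_device *vdev = vq->vq.vdev;
	struct vring_desc_state_split *state;
	struct vring_desc_extra *extra;
	int err;

	/* Fresh per-descriptor bookkeeping for the unchanged ring size. */
	err = vring_alloc_state_extra_split(vq->split.vring.num, &state, &extra);
	if (err)
		return err;

	/*
	 * Re-attach the (same) ring and the new state arrays; freeing the
	 * old arrays is omitted here for brevity.
	 */
	vring_virtqueue_attach_split(vq, vq->split.vring, state, extra);

	/*
	 * All inputs of vring_virtqueue_init_split() are already in *vq and
	 * *vdev, so it can simply be run again without any upper-layer
	 * parameters.
	 */
	vring_virtqueue_init_split(vq, vdev, vq->we_own_ring);

	return 0;
}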
Signed-off-by: Xuan Zhuo
---
 drivers/virtio/virtio_ring.c | 68 ++++++++++++++++++++----------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 083f2992ba0d..874f878087a3 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -916,6 +916,43 @@ static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq)
 	return NULL;
 }
 
+static void vring_virtqueue_init_split(struct vring_virtqueue *vq,
+				       struct virtio_device *vdev,
+				       bool own_ring)
+{
+	vq->packed_ring = false;
+	vq->vq.num_free = vq->split.vring.num;
+	vq->we_own_ring = own_ring;
+	vq->broken = false;
+	vq->last_used_idx = 0;
+	vq->event_triggered = false;
+	vq->num_added = 0;
+	vq->use_dma_api = vring_use_dma_api(vdev);
+#ifdef DEBUG
+	vq->in_use = false;
+	vq->last_add_time_valid = false;
+#endif
+
+	vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);
+
+	if (virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM))
+		vq->weak_barriers = false;
+
+	vq->split.avail_flags_shadow = 0;
+	vq->split.avail_idx_shadow = 0;
+
+	/* No callback? Tell other side not to bother us. */
+	if (!vq->vq.callback) {
+		vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
+		if (!vq->event)
+			vq->split.vring.avail->flags = cpu_to_virtio16(vdev,
+					vq->split.avail_flags_shadow);
+	}
+
+	/* Put everything in free lists. */
+	vq->free_head = 0;
+}
+
 static void vring_virtqueue_attach_split(struct vring_virtqueue *vq,
 					 struct vring vring,
 					 struct vring_desc_state_split *desc_state,
@@ -2249,42 +2286,15 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 	if (!vq)
 		return NULL;
 
-	vq->packed_ring = false;
 	vq->vq.callback = callback;
 	vq->vq.vdev = vdev;
 	vq->vq.name = name;
-	vq->vq.num_free = vring.num;
 	vq->vq.index = index;
-	vq->we_own_ring = false;
 	vq->notify = notify;
 	vq->weak_barriers = weak_barriers;
-	vq->broken = false;
-	vq->last_used_idx = 0;
-	vq->event_triggered = false;
-	vq->num_added = 0;
-	vq->use_dma_api = vring_use_dma_api(vdev);
-#ifdef DEBUG
-	vq->in_use = false;
-	vq->last_add_time_valid = false;
-#endif
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 		!context;
-	vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);
-
-	if (virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM))
-		vq->weak_barriers = false;
-
-	vq->split.avail_flags_shadow = 0;
-	vq->split.avail_idx_shadow = 0;
-
-	/* No callback? Tell other side not to bother us. */
-	if (!callback) {
-		vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
-		if (!vq->event)
-			vq->split.vring.avail->flags = cpu_to_virtio16(vdev,
-					vq->split.avail_flags_shadow);
-	}
 
 	err = vring_alloc_state_extra_split(vring.num, &state, &extra);
 	if (err) {
@@ -2293,9 +2303,7 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 	}
 
 	vring_virtqueue_attach_split(vq, vring, state, extra);
-
-	/* Put everything in free lists. */
-	vq->free_head = 0;
+	vring_virtqueue_init_split(vq, vdev, false);
 
 	spin_lock(&vdev->vqs_list_lock);
 	list_add_tail(&vq->vq.list, &vdev->vqs);