From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Gautam Dawar, "Michael S. Tsirkin", Zhu Lingshan, Jason Wang, Si-Wei Liu,
    Paolo Bonzini, Eli Cohen, Parav Pandit, Laurent Vivier, Stefano Garzarella,
    Stefan Hajnoczi, "Gonglei (Arei)", Cindy Lu, Liuxiangdong, Cornelia Huck,
    kvm@vger.kernel.org, Harpreet Singh Anand
Subject: [PATCH v5 1/6] vdpa: Use v->shadow_vqs_enabled in vhost_vdpa_svqs_start & stop
Date: Tue, 11 Oct 2022 12:41:49 +0200
Message-Id: <20221011104154.1209338-2-eperezma@redhat.com>

These functions used to rely on v->shadow_vqs != NULL to know whether SVQ must
be started or not. That check is no longer valid, as QEMU is going to allocate
SVQ unconditionally (but will only start it conditionally).
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7468e44b87..7f0ff4df5b 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1029,7 +1029,7 @@ static bool vhost_vdpa_svqs_start(struct vhost_dev *dev)
     Error *err = NULL;
     unsigned i;
 
-    if (!v->shadow_vqs) {
+    if (!v->shadow_vqs_enabled) {
         return true;
     }
 
@@ -1082,7 +1082,7 @@ static void vhost_vdpa_svqs_stop(struct vhost_dev *dev)
 {
     struct vhost_vdpa *v = dev->opaque;
 
-    if (!v->shadow_vqs) {
+    if (!v->shadow_vqs_enabled) {
         return;
     }
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v5 2/6] vdpa: Allocate SVQ unconditionally
Date: Tue, 11 Oct 2022 12:41:50 +0200
Message-Id: <20221011104154.1209338-3-eperezma@redhat.com>

SVQ may or may not run on a device depending on runtime conditions (for
example, whether the device can move CVQ to its own group). Allocate the
resources unconditionally, and decide later whether to use them.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 7f0ff4df5b..d966966131 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -410,6 +410,21 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
     int r;
     bool ok;
 
+    shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
+    for (unsigned n = 0; n < hdev->nvqs; ++n) {
+        g_autoptr(VhostShadowVirtqueue) svq;
+
+        svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
+                            v->shadow_vq_ops_opaque);
+        if (unlikely(!svq)) {
+            error_setg(errp, "Cannot create svq %u", n);
+            return -1;
+        }
+        g_ptr_array_add(shadow_vqs, g_steal_pointer(&svq));
+    }
+
+    v->shadow_vqs = g_steal_pointer(&shadow_vqs);
+
     if (!v->shadow_vqs_enabled) {
         return 0;
     }
@@ -426,20 +441,6 @@ static int vhost_vdpa_init_svq(struct vhost_dev *hdev, struct vhost_vdpa *v,
         return -1;
     }
 
-    shadow_vqs = g_ptr_array_new_full(hdev->nvqs, vhost_svq_free);
-    for (unsigned n = 0; n < hdev->nvqs; ++n) {
-        g_autoptr(VhostShadowVirtqueue) svq;
-
-        svq = vhost_svq_new(v->iova_tree, v->shadow_vq_ops,
-                            v->shadow_vq_ops_opaque);
-        if (unlikely(!svq)) {
-            error_setg(errp, "Cannot create svq %u", n);
-            return -1;
-        }
-        g_ptr_array_add(shadow_vqs, g_steal_pointer(&svq));
-    }
-
-    v->shadow_vqs = g_steal_pointer(&shadow_vqs);
     return 0;
 }
 
@@ -580,10 +581,6 @@ static void vhost_vdpa_svq_cleanup(struct vhost_dev *dev)
     struct vhost_vdpa *v = dev->opaque;
     size_t idx;
 
-    if (!v->shadow_vqs) {
-        return;
-    }
-
     for (idx = 0; idx < v->shadow_vqs->len; ++idx) {
         vhost_svq_stop(g_ptr_array_index(v->shadow_vqs, idx));
     }
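Note on the hunk above: the moved allocation loop relies on GLib's
auto-cleanup idiom. A minimal standalone sketch of that pattern (Ring,
ring_free() and alloc_rings() are made-up names for illustration, not QEMU
code) shows why the early-error returns do not leak:

    #include <glib.h>

    typedef struct Ring { int id; } Ring;

    static void ring_free(void *p) { g_free(p); }
    G_DEFINE_AUTOPTR_CLEANUP_FUNC(Ring, ring_free)

    static GPtrArray *alloc_rings(unsigned n)
    {
        /* Freed automatically if this function returns early. */
        g_autoptr(GPtrArray) rings = g_ptr_array_new_full(n, ring_free);

        for (unsigned i = 0; i < n; ++i) {
            g_autoptr(Ring) r = g_try_new0(Ring, 1);

            if (r == NULL) {
                /* rings, and every element already added, is freed here */
                return NULL;
            }
            r->id = i;
            /* Ownership moves into the array; the autoptr becomes NULL. */
            g_ptr_array_add(rings, g_steal_pointer(&r));
        }
        /* Hand the fully built array to the caller. */
        return g_steal_pointer(&rings);
    }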
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v5 3/6] vdpa: Add asid parameter to vhost_vdpa_dma_map/unmap
Date: Tue, 11 Oct 2022 12:41:51 +0200
Message-Id: <20221011104154.1209338-4-eperezma@redhat.com>

This lets the caller choose which ASID each mapping is destined for. There is
no need to update the batch functions, as they are only called from memory
listener updates at the moment. Memory listener updates always target ASID 0,
as it is the passthrough ASID. All vhost devices' ASIDs are 0 at this moment.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v5:
* Solve conflict, now vhost_vdpa_svq_unmap_ring returns void
* Change comment on zero initialization.

v4: Add comment specifying behavior if device does not support _F_ASID

v3: Deleted unneeded space
---
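As an illustration of the new calling convention only (a sketch; map_example(),
the buffers and the IOVA/length values are made up, while the
vhost_vdpa_dma_map()/unmap() signatures are the ones introduced below), a
caller mapping one region into the passthrough ASID and one into its own ASID
would look like:

    /* Sketch only: not part of the patch. */
    static int map_example(struct vhost_vdpa *v, void *guest_mem, void *svq_ring,
                           hwaddr iova_gpa, hwaddr iova_svq, hwaddr len)
    {
        int r;

        /* Memory-listener style mapping: always ASID 0, the passthrough ASID. */
        r = vhost_vdpa_dma_map(v, 0, iova_gpa, len, guest_mem, false);
        if (r != 0) {
            return r;
        }

        /* SVQ ring mapping: whatever ASID this vhost_vdpa was assigned. */
        r = vhost_vdpa_dma_map(v, v->address_space_id, iova_svq, len, svq_ring,
                               false);
        if (r != 0) {
            /* Undo the first mapping in the same ASID it was created in. */
            vhost_vdpa_dma_unmap(v, 0, iova_gpa, len);
        }
        return r;
    }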
 include/hw/virtio/vhost-vdpa.h |  8 +++++---
 hw/virtio/vhost-vdpa.c         | 29 +++++++++++++++++++----------
 net/vhost-vdpa.c               |  6 +++---
 hw/virtio/trace-events         |  4 ++--
 4 files changed, 29 insertions(+), 18 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 1111d85643..6560bb9d78 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -29,6 +29,7 @@ typedef struct vhost_vdpa {
     int index;
     uint32_t msg_type;
     bool iotlb_batch_begin_sent;
+    uint32_t address_space_id;
     MemoryListener listener;
     struct vhost_vdpa_iova_range iova_range;
     uint64_t acked_features;
@@ -42,8 +43,9 @@ typedef struct vhost_vdpa {
     VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
 } VhostVDPA;
 
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
-                       void *vaddr, bool readonly);
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size);
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                       hwaddr size, void *vaddr, bool readonly);
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                         hwaddr size);
 
 #endif
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index d966966131..ad663feacc 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -72,22 +72,24 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
     return false;
 }
 
-int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
-                       void *vaddr, bool readonly)
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                       hwaddr size, void *vaddr, bool readonly)
 {
     struct vhost_msg_v2 msg = {};
     int fd = v->device_fd;
     int ret = 0;
 
     msg.type = v->msg_type;
+    msg.asid = asid; /* 0 if vdpa device does not support asid */
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
     msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
     msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
     msg.iotlb.type = VHOST_IOTLB_UPDATE;
 
-    trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.iotlb.iova, msg.iotlb.size,
-                             msg.iotlb.uaddr, msg.iotlb.perm, msg.iotlb.type);
+    trace_vhost_vdpa_dma_map(v, fd, msg.type, msg.asid, msg.iotlb.iova,
+                             msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
+                             msg.iotlb.type);
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
@@ -98,18 +100,24 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
     return ret;
 }
 
-int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size)
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
+                         hwaddr size)
 {
     struct vhost_msg_v2 msg = {};
     int fd = v->device_fd;
     int ret = 0;
 
     msg.type = v->msg_type;
+    /*
+     * The caller must set asid = 0 if the device does not support asid.
+     * This is not an ABI break since it is set to 0 by the initializer anyway.
+     */
+    msg.asid = asid;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
     msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
 
-    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.iotlb.iova,
+    trace_vhost_vdpa_dma_unmap(v, fd, msg.type, msg.asid, msg.iotlb.iova,
                                msg.iotlb.size, msg.iotlb.type);
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
@@ -229,7 +237,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     }
 
     vhost_vdpa_iotlb_batch_begin_once(v);
-    ret = vhost_vdpa_dma_map(v, iova, int128_get64(llsize),
+    ret = vhost_vdpa_dma_map(v, 0, iova, int128_get64(llsize),
                              vaddr, section->readonly);
     if (ret) {
         error_report("vhost vdpa map fail!");
@@ -303,7 +311,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
         vhost_iova_tree_remove(v->iova_tree, *result);
     }
     vhost_vdpa_iotlb_batch_begin_once(v);
-    ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
+    ret = vhost_vdpa_dma_unmap(v, 0, iova, int128_get64(llsize));
     if (ret) {
         error_report("vhost_vdpa dma unmap error!");
     }
@@ -896,7 +904,7 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
     }
 
     size = ROUND_UP(result->size, qemu_real_host_page_size());
-    r = vhost_vdpa_dma_unmap(v, result->iova, size);
+    r = vhost_vdpa_dma_unmap(v, v->address_space_id, result->iova, size);
     if (unlikely(r < 0)) {
         error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
         return;
@@ -936,7 +944,8 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
         return false;
     }
 
-    r = vhost_vdpa_dma_map(v, needle->iova, needle->size + 1,
+    r = vhost_vdpa_dma_map(v, v->address_space_id, needle->iova,
+                           needle->size + 1,
                            (void *)(uintptr_t)needle->translated_addr,
                            needle->perm == IOMMU_RO);
     if (unlikely(r != 0)) {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index d84bb768cf..025fbbc41b 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -242,7 +242,7 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
         return;
     }
 
-    r = vhost_vdpa_dma_unmap(v, map->iova, map->size + 1);
+    r = vhost_vdpa_dma_unmap(v, v->address_space_id, map->iova, map->size + 1);
     if (unlikely(r != 0)) {
         error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
     }
@@ -282,8 +282,8 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
         return r;
     }
 
-    r = vhost_vdpa_dma_map(v, map.iova, vhost_vdpa_net_cvq_cmd_page_len(), buf,
-                           !write);
+    r = vhost_vdpa_dma_map(v, v->address_space_id, map.iova,
+                           vhost_vdpa_net_cvq_cmd_page_len(), buf, !write);
     if (unlikely(r < 0)) {
         goto dma_map_err;
     }
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 20af2e7ebd..36e5ae75f6 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -26,8 +26,8 @@ vhost_user_write(uint32_t req, uint32_t flags) "req:%d flags:0x%"PRIx32""
 vhost_user_create_notifier(int idx, void *n) "idx:%d n:%p"
 
 # vhost-vdpa.c
-vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
-vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
+vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
+vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
 vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
 vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type) "vdpa:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
 vhost_vdpa_listener_region_add(void *vdpa, uint64_t iova, uint64_t llend, void *vaddr, bool readonly) "vdpa: %p iova 0x%"PRIx64" llend 0x%"PRIx64" vaddr: %p read-only: %d"
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v5 4/6] vdpa: Store x-svq parameter in VhostVDPAState
Date: Tue, 11 Oct 2022 12:41:52 +0200
Message-Id: <20221011104154.1209338-5-eperezma@redhat.com>

CVQ can be shadowed in two ways:
- The device has the x-svq=on parameter (current way).
- The device can isolate CVQ in its own vq group.

QEMU needs to check for the second condition dynamically, because the CVQ
index is not known at initialization time. Since this is dynamic, CVQ
isolation could vary with different conditions, making it possible to go from
a "not isolated group" to an "isolated" one.

Save the cmdline parameter in an extra field so we never disable CVQ SVQ when
the device was started with it on the cmdline.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 025fbbc41b..e8c78e4813 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -38,6 +38,8 @@ typedef struct VhostVDPAState {
     void *cvq_cmd_out_buffer;
     virtio_net_ctrl_ack *status;
 
+    /* The device always have SVQ enabled */
+    bool always_svq;
     bool started;
 } VhostVDPAState;
 
@@ -600,6 +602,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
+    s->always_svq = svq;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
     s->vhost_vdpa.iova_tree = iova_tree;
     if (!is_datapath) {
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v5 5/6] vdpa: Add listener_shadow_vq to vhost_vdpa
Date: Tue, 11 Oct 2022 12:41:53 +0200
Message-Id: <20221011104154.1209338-6-eperezma@redhat.com>

The memory listener that tells the device how to convert GPA to QEMU's VA is
registered against CVQ's vhost_vdpa. This series tries to map the memory
listener translations to ASID 0, while it maps the CVQ ones to ASID 1. Let's
tell the listener whether it needs to register them on the IOVA tree or not.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v5: Solve conflict about vhost_iova_tree_remove accepting mem_region by value.
---
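For clarity, a sketch of how the two flags are expected to combine once the
next patch can isolate CVQ (the struct and the value table are illustrative
only, derived from patches 5 and 6, not QEMU code):

    #include <stdbool.h>

    /* Illustrative only. */
    struct svq_mode_example {
        bool shadow_vqs_enabled;  /* this vhost_vdpa's vrings go through SVQ */
        bool listener_shadow_vq;  /* listener maps iova-tree addrs, not GPA  */
    };

    static const struct svq_mode_example svq_modes[] = {
        /* x-svq=on: everything shadowed */        { true,  true  },
        /* CVQ isolated in its own ASID (6/6) */   { true,  false },
        /* no shadowing at all */                  { false, false },
    };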
 include/hw/virtio/vhost-vdpa.h | 2 ++
 hw/virtio/vhost-vdpa.c         | 6 +++---
 net/vhost-vdpa.c               | 1 +
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 6560bb9d78..0c3ed2d69b 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -34,6 +34,8 @@ typedef struct vhost_vdpa {
     struct vhost_vdpa_iova_range iova_range;
     uint64_t acked_features;
     bool shadow_vqs_enabled;
+    /* The listener must send iova tree addresses, not GPA */
+    bool listener_shadow_vq;
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
     GPtrArray *shadow_vqs;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index ad663feacc..29d009c02b 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -220,7 +220,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                          vaddr, section->readonly);
 
     llsize = int128_sub(llend, int128_make64(iova));
-    if (v->shadow_vqs_enabled) {
+    if (v->listener_shadow_vq) {
         int r;
 
         mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
@@ -247,7 +247,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     return;
 
 fail_map:
-    if (v->shadow_vqs_enabled) {
+    if (v->listener_shadow_vq) {
         vhost_iova_tree_remove(v->iova_tree, mem_region);
     }
 
@@ -292,7 +292,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
 
     llsize = int128_sub(llend, int128_make64(iova));
 
-    if (v->shadow_vqs_enabled) {
+    if (v->listener_shadow_vq) {
         const DMAMap *result;
         const void *vaddr = memory_region_get_ram_ptr(section->mr) +
             section->offset_within_region +
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index e8c78e4813..f7831aeb8d 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -604,6 +604,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.index = queue_pair_index;
     s->always_svq = svq;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
+    s->vhost_vdpa.listener_shadow_vq = svq;
     s->vhost_vdpa.iova_tree = iova_tree;
     if (!is_datapath) {
         s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v5 6/6] vdpa: Always start CVQ in SVQ mode
Date: Tue, 11 Oct 2022 12:41:54 +0200
Message-Id: <20221011104154.1209338-7-eperezma@redhat.com>

Isolate the control virtqueue in its own group, allowing QEMU to intercept
control commands while letting the dataplane run fully passthrough to the
guest.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v5:
* Fix not adding CVQ buffers when x-svq=on is specified.
* Move vring state into vhost_vdpa_get_vring_group instead of using a
  parameter.
* Rename VHOST_VDPA_NET_CVQ_PASSTHROUGH to VHOST_VDPA_NET_DATA_ASID

v4:
* Squash vhost_vdpa_cvq_group_is_independent.
* Rebased on last CVQ start series, which allocated CVQ cmd bufs at load.
* Do not check for cvq index on vhost_vdpa_net_prepare; we only have one
  callback registered in that NetClientInfo.

v3:
* Make asid related queries print a warning instead of returning an error
  and stopping the start of QEMU.
---
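Condensed sketch of the probing sequence this patch performs, expressed as
plain vhost-vdpa ioctls (error handling trimmed; can_isolate_cvq() and
last_vq are illustrative names, while the ioctls and the feature bit are the
ones used in the diff below):

    #include <stdint.h>
    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    static bool can_isolate_cvq(int fd, unsigned last_vq)
    {
        uint64_t features;
        unsigned num_as;
        struct vhost_vring_state state;

        /* 1. The backend must support ASIDs and expose more than one. */
        if (ioctl(fd, VHOST_GET_BACKEND_FEATURES, &features) < 0 ||
            !(features & (1ULL << VHOST_BACKEND_F_IOTLB_ASID)) ||
            ioctl(fd, VHOST_VDPA_GET_AS_NUM, &num_as) < 0 || num_as < 2) {
            return false;
        }

        /* 2. CVQ (the last vq) must sit in a vring group of its own. */
        state.index = last_vq;
        if (ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &state) < 0) {
            return false;
        }
        unsigned cvq_group = state.num;
        for (unsigned i = 0; i < last_vq; i++) {
            state.index = i;
            if (ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &state) < 0 ||
                state.num == cvq_group) {
                return false;
            }
        }

        /* 3. Move the CVQ group to its own ASID (1 in this series). */
        state.index = cvq_group;
        state.num = 1;
        return ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &state) == 0;
    }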
 hw/virtio/vhost-vdpa.c |   3 +-
 net/vhost-vdpa.c       | 118 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 115 insertions(+), 6 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 29d009c02b..fd4de06eab 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -682,7 +682,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
 {
     uint64_t features;
     uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
-        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH;
+        0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
+        0x1ULL << VHOST_BACKEND_F_IOTLB_ASID;
     int r;
 
     if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index f7831aeb8d..6f6ef59ea3 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -38,6 +38,9 @@ typedef struct VhostVDPAState {
     void *cvq_cmd_out_buffer;
     virtio_net_ctrl_ack *status;
 
+    /* Number of address spaces supported by the device */
+    unsigned address_space_num;
+
     /* The device always have SVQ enabled */
     bool always_svq;
     bool started;
@@ -102,6 +105,9 @@ static const uint64_t vdpa_svq_device_features =
     BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
     BIT_ULL(VIRTIO_NET_F_STANDBY);
 
+#define VHOST_VDPA_NET_DATA_ASID 0
+#define VHOST_VDPA_NET_CVQ_ASID 1
+
 VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
@@ -226,6 +232,34 @@ static NetClientInfo net_vhost_vdpa_info = {
     .check_peer_type = vhost_vdpa_check_peer_type,
 };
 
+static uint32_t vhost_vdpa_get_vring_group(int device_fd, unsigned vq_index)
+{
+    struct vhost_vring_state state = {
+        .index = vq_index,
+    };
+    int r = ioctl(device_fd, VHOST_VDPA_GET_VRING_GROUP, &state);
+
+    return r < 0 ? 0 : state.num;
+}
+
+static int vhost_vdpa_set_address_space_id(struct vhost_vdpa *v,
+                                           unsigned vq_group,
+                                           unsigned asid_num)
+{
+    struct vhost_vring_state asid = {
+        .index = vq_group,
+        .num = asid_num,
+    };
+    int ret;
+
+    ret = ioctl(v->device_fd, VHOST_VDPA_SET_GROUP_ASID, &asid);
+    if (unlikely(ret < 0)) {
+        warn_report("Can't set vq group %u asid %u, errno=%d (%s)",
+                    asid.index, asid.num, errno, g_strerror(errno));
+    }
+    return ret;
+}
+
 static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
 {
     VhostIOVATree *tree = v->iova_tree;
@@ -300,11 +334,50 @@ dma_map_err:
 static int vhost_vdpa_net_cvq_start(NetClientState *nc)
 {
     VhostVDPAState *s;
-    int r;
+    struct vhost_vdpa *v;
+    uint32_t cvq_group;
+    int cvq_index, r;
 
     assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
 
     s = DO_UPCAST(VhostVDPAState, nc, nc);
+    v = &s->vhost_vdpa;
+
+    v->listener_shadow_vq = s->always_svq;
+    v->shadow_vqs_enabled = s->always_svq;
+    s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_DATA_ASID;
+
+    if (s->always_svq) {
+        goto out;
+    }
+
+    if (s->address_space_num < 2) {
+        return 0;
+    }
+
+    /**
+     * Check if all the virtqueues of the virtio device are in a different vq
+     * than the last vq. VQ group of last group passed in cvq_group.
+     */
+    cvq_index = v->dev->vq_index_end - 1;
+    cvq_group = vhost_vdpa_get_vring_group(v->device_fd, cvq_index);
+    for (int i = 0; i < cvq_index; ++i) {
+        uint32_t group = vhost_vdpa_get_vring_group(v->device_fd, i);
+
+        if (unlikely(group == cvq_group)) {
+            warn_report("CVQ %u group is the same as VQ %u one (%u)", cvq_group,
+                        i, group);
+            return 0;
+        }
+    }
+
+    r = vhost_vdpa_set_address_space_id(v, cvq_group, VHOST_VDPA_NET_CVQ_ASID);
+    if (r == 0) {
+        v->shadow_vqs_enabled = true;
+        s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
+    }
+
+out:
     if (!s->vhost_vdpa.shadow_vqs_enabled) {
         return 0;
     }
@@ -576,12 +649,38 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
     .avail_handler = vhost_vdpa_net_handle_ctrl_avail,
 };
 
+static uint32_t vhost_vdpa_get_as_num(int vdpa_device_fd)
+{
+    uint64_t features;
+    unsigned num_as;
+    int r;
+
+    r = ioctl(vdpa_device_fd, VHOST_GET_BACKEND_FEATURES, &features);
+    if (unlikely(r < 0)) {
+        warn_report("Cannot get backend features");
+        return 1;
+    }
+
+    if (!(features & BIT_ULL(VHOST_BACKEND_F_IOTLB_ASID))) {
+        return 1;
+    }
+
+    r = ioctl(vdpa_device_fd, VHOST_VDPA_GET_AS_NUM, &num_as);
+    if (unlikely(r < 0)) {
+        warn_report("Cannot retrieve number of supported ASs");
+        return 1;
+    }
+
+    return num_as;
+}
+
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                            const char *device,
                                            const char *name,
                                            int vdpa_device_fd,
                                            int queue_pair_index,
                                            int nvqs,
+                                           unsigned nas,
                                            bool is_datapath,
                                            bool svq,
                                            VhostIOVATree *iova_tree)
@@ -600,6 +699,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
     s = DO_UPCAST(VhostVDPAState, nc, nc);
 
+    s->address_space_num = nas;
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
     s->always_svq = svq;
@@ -686,6 +786,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     g_autoptr(VhostIOVATree) iova_tree = NULL;
     NetClientState *nc;
     int queue_pairs, r, i = 0, has_cvq = 0;
+    unsigned num_as = 1;
+    bool svq_cvq;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -711,7 +813,13 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return queue_pairs;
     }
 
-    if (opts->x_svq) {
+    svq_cvq = opts->x_svq;
+    if (has_cvq && !opts->x_svq) {
+        num_as = vhost_vdpa_get_as_num(vdpa_device_fd);
+        svq_cvq = num_as > 1;
+    }
+
+    if (opts->x_svq || svq_cvq) {
         struct vhost_vdpa_iova_range iova_range;
 
         uint64_t invalid_dev_features =
@@ -734,15 +842,15 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
 
     for (i = 0; i < queue_pairs; i++) {
         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
-                                     vdpa_device_fd, i, 2, true, opts->x_svq,
-                                     iova_tree);
+                                     vdpa_device_fd, i, 2, num_as, true,
+                                     opts->x_svq, iova_tree);
         if (!ncs[i])
             goto err;
     }
 
     if (has_cvq) {
         nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
-                                 vdpa_device_fd, i, 1, false,
+                                 vdpa_device_fd, i, 1, num_as, false,
                                  opts->x_svq, iova_tree);
         if (!nc)
             goto err;
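For reference, shadowing can still be forced from the command line exactly as
before this series; an illustrative invocation (the device path and ids are
examples):

    qemu-system-x86_64 ... \
        -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0,x-svq=on \
        -device virtio-net-pci,netdev=vdpa0

Without x-svq=on, the series probes the device when CVQ starts and shadows
only CVQ when the device can place it in its own vring group and ASID.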