From patchwork Wed Feb 5 14:58:07 2025
X-Patchwork-Submitter: Jonah Palmer
X-Patchwork-Id: 13961269
From: Jonah Palmer <jonah.palmer@oracle.com>
To: qemu-devel@nongnu.org
Cc: eperezma@redhat.com, mst@redhat.com, leiyang@redhat.com, peterx@redhat.com, dtatulea@nvidia.com, jasowang@redhat.com, si-wei.liu@oracle.com, boris.ostrovsky@oracle.com, jonah.palmer@oracle.com
Subject: [PATCH 1/4] vhost-iova-tree: Implement an IOVA-only tree
Date: Wed, 5 Feb 2025 09:58:07 -0500
Message-ID: <20250205145813.394915-2-jonah.palmer@oracle.com>
In-Reply-To: <20250205145813.394915-1-jonah.palmer@oracle.com>

Create and support an IOVA-only tree, in preparation for the next patch,
which splits the full IOVA->HVA tree into a partial SVQ IOVA->HVA tree and
a GPA->IOVA tree for host-only and guest-backed memory, respectively.

The IOVA allocator still allocates an IOVA range, but now records that
range in the IOVA-only tree in addition to the full IOVA->HVA tree.

The motivation for the IOVA-only tree is to have a single tree that tracks
all allocated IOVA ranges across the partial SVQ IOVA->HVA and GPA->IOVA
trees.
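The idea above — one structure that owns IOVA allocation while translations live in separate trees — can be sketched as a toy model. This is not QEMU code: the array-based first-fit allocator and names below are simplified stand-ins for `iova_tree_alloc_map()` over the IOVA-only tree, but they show why a single allocation record prevents the two translation trees from ever receiving overlapping IOVA ranges.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Toy model: one "IOVA-only" list records every allocated IOVA range,
 * regardless of which translation tree (IOVA->HVA or GPA->IOVA) the
 * mapping itself lands in. Allocation consults only this list. */
typedef struct { uint64_t iova, size; } Range;  /* size is length - 1, as in DMAMap */

static Range allocated[16];
static size_t n_allocated;

static int overlaps(uint64_t a, uint64_t a_last, uint64_t b, uint64_t b_last)
{
    return a <= b_last && b <= a_last;
}

/* First-fit allocation over [iova_first, iova_last]; returns 0 on success. */
static int iova_alloc(uint64_t iova_first, uint64_t iova_last,
                      uint64_t size, uint64_t *out)
{
    uint64_t candidate = iova_first;
retry:
    for (size_t i = 0; i < n_allocated; i++) {
        if (overlaps(candidate, candidate + size,
                     allocated[i].iova, allocated[i].iova + allocated[i].size)) {
            /* Bump past the conflicting range and rescan from the start. */
            candidate = allocated[i].iova + allocated[i].size + 1;
            goto retry;
        }
    }
    if (candidate + size > iova_last) {
        return -1;  /* analogue of IOVA_ERR_NOMEM */
    }
    allocated[n_allocated++] = (Range){ candidate, size };
    *out = candidate;
    return 0;
}
```

Two successive allocations of the same size land in disjoint ranges, no matter which translation tree each mapping is later inserted into.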
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
---
 hw/virtio/vhost-iova-tree.c | 26 ++++++++++++++++++++------
 hw/virtio/vhost-iova-tree.h |  3 ++-
 hw/virtio/vhost-vdpa.c      | 29 +++++++++++++++++++++--------
 net/vhost-vdpa.c            | 10 ++++++++--
 4 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/hw/virtio/vhost-iova-tree.c b/hw/virtio/vhost-iova-tree.c
index 3d03395a77..216885aa3c 100644
--- a/hw/virtio/vhost-iova-tree.c
+++ b/hw/virtio/vhost-iova-tree.c
@@ -28,6 +28,9 @@ struct VhostIOVATree {
 
     /* IOVA address to qemu memory maps. */
     IOVATree *iova_taddr_map;
+
+    /* Allocated IOVA addresses */
+    IOVATree *iova_map;
 };
 
 /**
@@ -44,6 +47,7 @@ VhostIOVATree *vhost_iova_tree_new(hwaddr iova_first, hwaddr iova_last)
     tree->iova_last = iova_last;
 
     tree->iova_taddr_map = iova_tree_new();
+    tree->iova_map = iova_tree_new();
     return tree;
 }
 
@@ -53,6 +57,7 @@ VhostIOVATree *vhost_iova_tree_new(hwaddr iova_first, hwaddr iova_last)
 void vhost_iova_tree_delete(VhostIOVATree *iova_tree)
 {
     iova_tree_destroy(iova_tree->iova_taddr_map);
+    iova_tree_destroy(iova_tree->iova_map);
     g_free(iova_tree);
 }
 
@@ -75,6 +80,7 @@ const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *tree,
  *
  * @tree: The iova tree
  * @map: The iova map
+ * @taddr: The translated address (HVA)
  *
  * Returns:
  * - IOVA_OK if the map fits in the container
@@ -83,19 +89,26 @@ const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *tree,
  *
  * It returns assignated iova in map->iova if return value is VHOST_DMA_MAP_OK.
  */
-int vhost_iova_tree_map_alloc(VhostIOVATree *tree, DMAMap *map)
+int vhost_iova_tree_map_alloc(VhostIOVATree *tree, DMAMap *map, hwaddr taddr)
 {
+    int ret;
+
     /* Some vhost devices do not like addr 0. Skip first page */
     hwaddr iova_first = tree->iova_first ?: qemu_real_host_page_size();
 
-    if (map->translated_addr + map->size < map->translated_addr ||
-        map->perm == IOMMU_NONE) {
+    if (taddr + map->size < taddr || map->perm == IOMMU_NONE) {
         return IOVA_ERR_INVALID;
     }
 
-    /* Allocate a node in IOVA address */
-    return iova_tree_alloc_map(tree->iova_taddr_map, map, iova_first,
-                               tree->iova_last);
+    /* Allocate a node in the IOVA-only tree */
+    ret = iova_tree_alloc_map(tree->iova_map, map, iova_first, tree->iova_last);
+    if (unlikely(ret != IOVA_OK)) {
+        return ret;
+    }
+
+    /* Insert a node in the IOVA->HVA tree */
+    map->translated_addr = taddr;
+    return iova_tree_insert(tree->iova_taddr_map, map);
 }
 
 /**
@@ -107,4 +120,5 @@ int vhost_iova_tree_map_alloc(VhostIOVATree *tree, DMAMap *map)
 void vhost_iova_tree_remove(VhostIOVATree *iova_tree, DMAMap map)
 {
     iova_tree_remove(iova_tree->iova_taddr_map, map);
+    iova_tree_remove(iova_tree->iova_map, map);
 }
diff --git a/hw/virtio/vhost-iova-tree.h b/hw/virtio/vhost-iova-tree.h
index 4adfd79ff0..525ce72b1d 100644
--- a/hw/virtio/vhost-iova-tree.h
+++ b/hw/virtio/vhost-iova-tree.h
@@ -21,7 +21,8 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(VhostIOVATree, vhost_iova_tree_delete);
 
 const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *iova_tree,
                                         const DMAMap *map);
-int vhost_iova_tree_map_alloc(VhostIOVATree *iova_tree, DMAMap *map);
+int vhost_iova_tree_map_alloc(VhostIOVATree *iova_tree, DMAMap *map,
+                              hwaddr taddr);
 void vhost_iova_tree_remove(VhostIOVATree *iova_tree, DMAMap map);
 
 #endif
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 3cdaa12ed5..703dcfc929 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -360,14 +360,20 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     llsize = int128_sub(llend, int128_make64(iova));
     if (s->shadow_data) {
         int r;
+        hwaddr hw_vaddr = (hwaddr)(uintptr_t)vaddr;
 
-        mem_region.translated_addr = (hwaddr)(uintptr_t)vaddr,
         mem_region.size = int128_get64(llsize) - 1,
         mem_region.perm = IOMMU_ACCESS_FLAG(true, section->readonly),
 
-        r = vhost_iova_tree_map_alloc(s->iova_tree, &mem_region);
+        r = vhost_iova_tree_map_alloc(s->iova_tree, &mem_region, hw_vaddr);
         if (unlikely(r != IOVA_OK)) {
             error_report("Can't allocate a mapping (%d)", r);
+
+            if (mem_region.translated_addr == hw_vaddr) {
+                error_report("Insertion to IOVA->HVA tree failed");
+                /* Remove the mapping from the IOVA-only tree */
+                goto fail_map;
+            }
             goto fail;
         }
 
@@ -1142,16 +1148,23 @@ static void vhost_vdpa_svq_unmap_rings(struct vhost_dev *dev,
  *
  * @v: Vhost-vdpa device
  * @needle: The area to search iova
+ * @taddr: The translated address (HVA)
  * @errorp: Error pointer
  */
 static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
-                                    Error **errp)
+                                    hwaddr taddr, Error **errp)
 {
     int r;
 
-    r = vhost_iova_tree_map_alloc(v->shared->iova_tree, needle);
+    r = vhost_iova_tree_map_alloc(v->shared->iova_tree, needle, taddr);
     if (unlikely(r != IOVA_OK)) {
         error_setg(errp, "Cannot allocate iova (%d)", r);
+
+        if (needle->translated_addr == taddr) {
+            error_append_hint(errp, "Insertion to IOVA->HVA tree failed");
+            /* Remove the mapping from the IOVA-only tree */
+            vhost_iova_tree_remove(v->shared->iova_tree, *needle);
+        }
         return false;
     }
 
@@ -1192,11 +1205,11 @@ static bool vhost_vdpa_svq_map_rings(struct vhost_dev *dev,
     vhost_svq_get_vring_addr(svq, &svq_addr);
 
     driver_region = (DMAMap) {
-        .translated_addr = svq_addr.desc_user_addr,
         .size = driver_size - 1,
         .perm = IOMMU_RO,
     };
-    ok = vhost_vdpa_svq_map_ring(v, &driver_region, errp);
+    ok = vhost_vdpa_svq_map_ring(v, &driver_region, svq_addr.desc_user_addr,
+                                 errp);
     if (unlikely(!ok)) {
         error_prepend(errp, "Cannot create vq driver region: ");
         return false;
@@ -1206,11 +1219,11 @@ static bool vhost_vdpa_svq_map_rings(struct vhost_dev *dev,
     addr->avail_user_addr = driver_region.iova + avail_offset;
 
     device_region = (DMAMap) {
-        .translated_addr = svq_addr.used_user_addr,
         .size = device_size - 1,
         .perm = IOMMU_RW,
     };
-    ok = vhost_vdpa_svq_map_ring(v, &device_region, errp);
+    ok = vhost_vdpa_svq_map_ring(v, &device_region, svq_addr.used_user_addr,
+                                 errp);
     if (unlikely(!ok)) {
         error_prepend(errp, "Cannot create vq device region: ");
         vhost_vdpa_svq_unmap_ring(v, driver_region.translated_addr);
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 231b45246c..5a3a57203d 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -510,14 +510,20 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
                                   bool write)
 {
     DMAMap map = {};
+    hwaddr taddr = (hwaddr)(uintptr_t)buf;
     int r;
 
-    map.translated_addr = (hwaddr)(uintptr_t)buf;
     map.size = size - 1;
     map.perm = write ? IOMMU_RW : IOMMU_RO,
-    r = vhost_iova_tree_map_alloc(v->shared->iova_tree, &map);
+    r = vhost_iova_tree_map_alloc(v->shared->iova_tree, &map, taddr);
     if (unlikely(r != IOVA_OK)) {
         error_report("Cannot map injected element");
+
+        if (map.translated_addr == taddr) {
+            error_report("Insertion to IOVA->HVA tree failed");
+            /* Remove the mapping from the IOVA-only tree */
+            goto dma_map_err;
+        }
         return r;
     }

From patchwork Wed Feb 5 14:58:08 2025
X-Patchwork-Submitter: Jonah Palmer
X-Patchwork-Id: 13961270
From: Jonah Palmer <jonah.palmer@oracle.com>
To: qemu-devel@nongnu.org
Cc: eperezma@redhat.com, mst@redhat.com, leiyang@redhat.com, peterx@redhat.com, dtatulea@nvidia.com, jasowang@redhat.com, si-wei.liu@oracle.com, boris.ostrovsky@oracle.com, jonah.palmer@oracle.com
Subject: [PATCH 2/4] vhost-iova-tree: Implement GPA->IOVA & partial IOVA->HVA trees
Date: Wed, 5 Feb 2025 09:58:08 -0500
Message-ID: <20250205145813.394915-3-jonah.palmer@oracle.com>
In-Reply-To: <20250205145813.394915-1-jonah.palmer@oracle.com>

Create and support a GPA->IOVA tree and a partial IOVA->HVA tree by
splitting guest-backed memory maps and host-only memory maps out of the
full IOVA->HVA tree. That is, guest-backed memory maps are now stored in
the GPA->IOVA tree, while host-only memory maps stay in the IOVA->HVA
tree.

The qemu_ram_block_from_host() function in vhost_svq_translate_addr()
determines whether the HVA (from the iovec) is backed by guest memory by
attempting to infer a RAMBlock from it. If a valid RAMBlock is returned,
we derive the GPA from it and search the GPA->IOVA tree for the
corresponding IOVA. If no RAMBlock is returned, the HVA is used to search
the IOVA->HVA tree instead.

This method of determining whether an HVA is backed by guest memory is
far from optimal, especially for buffers backed by host-only memory. The
next patch improves on it by using the in_addr and out_addr members of a
VirtQueueElement to decide whether an incoming address is backed by guest
or host-only memory.
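The RAMBlock-based inference described above can be modeled standalone. This is a toy sketch, not QEMU code: `ToyRAMBlock` and its fields are simplified stand-ins for QEMU's RAMBlock, but the arithmetic mirrors the patch's `gpa = rb->offset + offset` derivation, and the return value mirrors the choice between the GPA->IOVA and IOVA->HVA lookups.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Toy stand-in for a guest RAM block mapped into QEMU's address space. */
typedef struct {
    uintptr_t host_base;  /* HVA where the block is mapped in QEMU */
    uint64_t  gpa_base;   /* guest-physical base ("rb->offset" analogue) */
    uint64_t  len;
} ToyRAMBlock;

/* Returns 1 and fills *gpa if hva falls inside a guest RAM block
 * (search the GPA->IOVA tree), else 0 (fall back to the IOVA->HVA tree). */
static int toy_hva_to_gpa(const ToyRAMBlock *blocks, size_t n,
                          uintptr_t hva, uint64_t *gpa)
{
    for (size_t i = 0; i < n; i++) {
        if (hva >= blocks[i].host_base &&
            hva < blocks[i].host_base + blocks[i].len) {
            /* GPA = block's guest-physical base + offset within the block. */
            *gpa = blocks[i].gpa_base + (hva - blocks[i].host_base);
            return 1;
        }
    }
    return 0;
}
```

Note that this linear scan is only illustrative; QEMU's `qemu_ram_block_from_host()` resolves the block internally and returns the offset within it.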
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
---
 hw/virtio/vhost-iova-tree.c        | 67 ++++++++++++++++++++++++++++++
 hw/virtio/vhost-iova-tree.h        |  5 +++
 hw/virtio/vhost-shadow-virtqueue.c | 28 ++++++++++---
 hw/virtio/vhost-vdpa.c             | 19 ++++-----
 include/qemu/iova-tree.h           | 22 ++++++++++
 util/iova-tree.c                   | 46 ++++++++++++++++++++
 6 files changed, 171 insertions(+), 16 deletions(-)

diff --git a/hw/virtio/vhost-iova-tree.c b/hw/virtio/vhost-iova-tree.c
index 216885aa3c..9d2d6a7af2 100644
--- a/hw/virtio/vhost-iova-tree.c
+++ b/hw/virtio/vhost-iova-tree.c
@@ -31,6 +31,9 @@ struct VhostIOVATree {
 
     /* Allocated IOVA addresses */
     IOVATree *iova_map;
+
+    /* GPA->IOVA address memory maps */
+    IOVATree *gpa_iova_map;
 };
 
 /**
@@ -48,6 +51,7 @@ VhostIOVATree *vhost_iova_tree_new(hwaddr iova_first, hwaddr iova_last)
 
     tree->iova_taddr_map = iova_tree_new();
     tree->iova_map = iova_tree_new();
+    tree->gpa_iova_map = gpa_tree_new();
     return tree;
 }
 
@@ -58,6 +62,7 @@ void vhost_iova_tree_delete(VhostIOVATree *iova_tree)
 {
     iova_tree_destroy(iova_tree->iova_taddr_map);
     iova_tree_destroy(iova_tree->iova_map);
+    iova_tree_destroy(iova_tree->gpa_iova_map);
     g_free(iova_tree);
 }
 
@@ -122,3 +127,65 @@ void vhost_iova_tree_remove(VhostIOVATree *iova_tree, DMAMap map)
     iova_tree_remove(iova_tree->iova_taddr_map, map);
     iova_tree_remove(iova_tree->iova_map, map);
 }
+
+/**
+ * Find the IOVA address stored from a guest memory address (GPA)
+ *
+ * @tree: The VhostIOVATree
+ * @map: The map with the guest memory address
+ *
+ * Returns the stored GPA->IOVA mapping, or NULL if not found.
+ */
+const DMAMap *vhost_iova_tree_find_gpa(const VhostIOVATree *tree,
+                                       const DMAMap *map)
+{
+    return iova_tree_find_iova(tree->gpa_iova_map, map);
+}
+
+/**
+ * Allocate a new IOVA range and add the mapping to the GPA->IOVA tree
+ *
+ * @tree: The VhostIOVATree
+ * @map: The IOVA mapping
+ * @taddr: The translated address (GPA)
+ *
+ * Returns:
+ * - IOVA_OK if the map fits both containers
+ * - IOVA_ERR_INVALID if the map does not make sense (like size overflow)
+ * - IOVA_ERR_NOMEM if the IOVA-only tree cannot allocate more space
+ *
+ * It returns an assigned IOVA in map->iova if the return value is IOVA_OK.
+ */
+int vhost_iova_tree_map_alloc_gpa(VhostIOVATree *tree, DMAMap *map, hwaddr taddr)
+{
+    int ret;
+
+    /* Some vhost devices don't like addr 0. Skip first page */
+    hwaddr iova_first = tree->iova_first ?: qemu_real_host_page_size();
+
+    if (taddr + map->size < taddr || map->perm == IOMMU_NONE) {
+        return IOVA_ERR_INVALID;
+    }
+
+    /* Allocate a node in the IOVA-only tree */
+    ret = iova_tree_alloc_map(tree->iova_map, map, iova_first, tree->iova_last);
+    if (unlikely(ret != IOVA_OK)) {
+        return ret;
+    }
+
+    /* Insert a node in the GPA->IOVA tree */
+    map->translated_addr = taddr;
+    return gpa_tree_insert(tree->gpa_iova_map, map);
+}
+
+/**
+ * Remove existing mappings from the IOVA-only and GPA->IOVA trees
+ *
+ * @tree: The VhostIOVATree
+ * @map: The map to remove
+ */
+void vhost_iova_tree_remove_gpa(VhostIOVATree *iova_tree, DMAMap map)
+{
+    iova_tree_remove(iova_tree->gpa_iova_map, map);
+    iova_tree_remove(iova_tree->iova_map, map);
+}
diff --git a/hw/virtio/vhost-iova-tree.h b/hw/virtio/vhost-iova-tree.h
index 525ce72b1d..0c4ba5abd5 100644
--- a/hw/virtio/vhost-iova-tree.h
+++ b/hw/virtio/vhost-iova-tree.h
@@ -24,5 +24,10 @@ const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *iova_tree,
 int vhost_iova_tree_map_alloc(VhostIOVATree *iova_tree, DMAMap *map,
                               hwaddr taddr);
 void vhost_iova_tree_remove(VhostIOVATree *iova_tree, DMAMap map);
+const DMAMap *vhost_iova_tree_find_gpa(const VhostIOVATree *iova_tree,
+                                       const DMAMap *map);
+int vhost_iova_tree_map_alloc_gpa(VhostIOVATree *iova_tree, DMAMap *map,
+                                  hwaddr taddr);
+void vhost_iova_tree_remove_gpa(VhostIOVATree *iova_tree, DMAMap map);
 
 #endif
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 37aca8b431..a53492fd36 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -16,6 +16,7 @@
 #include "qemu/log.h"
 #include "qemu/memalign.h"
 #include "linux-headers/linux/vhost.h"
+#include "exec/ramblock.h"
 
 /**
  * Validate the transport device features that both guests can use with the SVQ
@@ -88,14 +89,31 @@ static bool vhost_svq_translate_addr(const VhostShadowVirtqueue *svq,
     }
 
     for (size_t i = 0; i < num; ++i) {
-        DMAMap needle = {
-            .translated_addr = (hwaddr)(uintptr_t)iovec[i].iov_base,
-            .size = iovec[i].iov_len,
-        };
         Int128 needle_last, map_last;
         size_t off;
+        RAMBlock *rb;
+        hwaddr gpa;
+        ram_addr_t offset;
+        const DMAMap *map;
+        DMAMap needle;
+
+        rb = qemu_ram_block_from_host(iovec[i].iov_base, false, &offset);
+        if (rb) {
+            gpa = rb->offset + offset;
+
+            needle = (DMAMap) {
+                .translated_addr = gpa,
+                .size = iovec[i].iov_len,
+            };
+            map = vhost_iova_tree_find_gpa(svq->iova_tree, &needle);
+        } else {
+            needle = (DMAMap) {
+                .translated_addr = (hwaddr)(uintptr_t)iovec[i].iov_base,
+                .size = iovec[i].iov_len,
+            };
+            map = vhost_iova_tree_find_iova(svq->iova_tree, &needle);
+        }
 
-        const DMAMap *map = vhost_iova_tree_find_iova(svq->iova_tree, &needle);
         /*
          * Map cannot be NULL since iova map contains all guest space and
          * qemu already has a physical address mapped
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 703dcfc929..7efbde3d4c 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -360,17 +360,17 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     llsize = int128_sub(llend, int128_make64(iova));
     if (s->shadow_data) {
         int r;
-        hwaddr hw_vaddr = (hwaddr)(uintptr_t)vaddr;
+        hwaddr gpa = section->offset_within_address_space;
 
         mem_region.size = int128_get64(llsize) - 1,
         mem_region.perm = IOMMU_ACCESS_FLAG(true, section->readonly),
 
-        r = vhost_iova_tree_map_alloc(s->iova_tree, &mem_region, hw_vaddr);
+        r = vhost_iova_tree_map_alloc_gpa(s->iova_tree, &mem_region, gpa);
         if (unlikely(r != IOVA_OK)) {
             error_report("Can't allocate a mapping (%d)", r);
 
-            if (mem_region.translated_addr == hw_vaddr) {
-                error_report("Insertion to IOVA->HVA tree failed");
+            if (mem_region.translated_addr == gpa) {
+                error_report("Insertion to GPA->IOVA tree failed");
                 /* Remove the mapping from the IOVA-only tree */
                 goto fail_map;
             }
@@ -392,7 +392,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
 
 fail_map:
     if (s->shadow_data) {
-        vhost_iova_tree_remove(s->iova_tree, mem_region);
+        vhost_iova_tree_remove_gpa(s->iova_tree, mem_region);
     }
 
 fail:
@@ -446,21 +446,18 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
 
     if (s->shadow_data) {
         const DMAMap *result;
-        const void *vaddr = memory_region_get_ram_ptr(section->mr) +
-            section->offset_within_region +
-            (iova - section->offset_within_address_space);
         DMAMap mem_region = {
-            .translated_addr = (hwaddr)(uintptr_t)vaddr,
+            .translated_addr = section->offset_within_address_space,
             .size = int128_get64(llsize) - 1,
         };
 
-        result = vhost_iova_tree_find_iova(s->iova_tree, &mem_region);
+        result = vhost_iova_tree_find_gpa(s->iova_tree, &mem_region);
         if (!result) {
             /* The memory listener map wasn't mapped */
             return;
         }
         iova = result->iova;
-        vhost_iova_tree_remove(s->iova_tree, *result);
+        vhost_iova_tree_remove_gpa(s->iova_tree, *result);
     }
     vhost_vdpa_iotlb_batch_begin_once(s);
     /*
diff --git a/include/qemu/iova-tree.h b/include/qemu/iova-tree.h
index 44a45931d5..16d354a814 100644
--- a/include/qemu/iova-tree.h
+++ b/include/qemu/iova-tree.h
@@ -40,6 +40,28 @@ typedef struct DMAMap {
 } QEMU_PACKED DMAMap;
 typedef gboolean (*iova_tree_iterator)(DMAMap *map);
 
+/**
+ * gpa_tree_new:
+ *
+ * Create a new GPA->IOVA tree.
+ *
+ * Returns: the tree point on success, or NULL otherwise.
+ */
+IOVATree *gpa_tree_new(void);
+
+/**
+ * gpa_tree_insert:
+ *
+ * @tree: The GPA->IOVA tree we're inserting the mapping to
+ * @map: The GPA->IOVA mapping to insert
+ *
+ * Inserts a GPA range to the GPA->IOVA tree. If there are overlapped
+ * ranges, IOVA_ERR_OVERLAP will be returned.
+ *
+ * Return: 0 if successful, < 0 otherwise.
+ */
+int gpa_tree_insert(IOVATree *tree, const DMAMap *map);
+
 /**
  * iova_tree_new:
  *
diff --git a/util/iova-tree.c b/util/iova-tree.c
index 06295e2755..5b0c95ff15 100644
--- a/util/iova-tree.c
+++ b/util/iova-tree.c
@@ -257,3 +257,49 @@ void iova_tree_destroy(IOVATree *tree)
     g_tree_destroy(tree->tree);
     g_free(tree);
 }
+
+static int gpa_tree_compare(gconstpointer a, gconstpointer b, gpointer data)
+{
+    const DMAMap *m1 = a, *m2 = b;
+
+    if (m1->translated_addr > m2->translated_addr + m2->size) {
+        return 1;
+    }
+
+    if (m1->translated_addr + m1->size < m2->translated_addr) {
+        return -1;
+    }
+
+    /* Overlapped */
+    return 0;
+}
+
+IOVATree *gpa_tree_new(void)
+{
+    IOVATree *gpa_tree = g_new0(IOVATree, 1);
+
+    gpa_tree->tree = g_tree_new_full(gpa_tree_compare, NULL, g_free, NULL);
+
+    return gpa_tree;
+}
+
+int gpa_tree_insert(IOVATree *tree, const DMAMap *map)
+{
+    DMAMap *new;
+
+    if (map->translated_addr + map->size < map->translated_addr ||
+        map->perm == IOMMU_NONE) {
+        return IOVA_ERR_INVALID;
+    }
+
+    /* We don't allow inserting ranges that overlap with existing ones */
+    if (iova_tree_find(tree, map)) {
+        return IOVA_ERR_OVERLAP;
+    }
+
+    new = g_new0(DMAMap, 1);
+    memcpy(new, map, sizeof(*new));
+    iova_tree_insert_internal(tree->tree, new);
+
+    return IOVA_OK;
+}

From patchwork Wed Feb 5 14:58:09 2025
X-Patchwork-Submitter: Jonah Palmer
X-Patchwork-Id: 13961273
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AE1B1C02198 for ; Wed, 5 Feb 2025 14:59:31 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfgrG-0007c5-El; Wed, 05 Feb 2025 09:58:30 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfgrE-0007Zn-BU for qemu-devel@nongnu.org; Wed, 05 Feb 2025 09:58:28 -0500 Received: from mx0b-00069f02.pphosted.com ([205.220.177.32]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfgrC-0008UX-HI for qemu-devel@nongnu.org; Wed, 05 Feb 2025 09:58:28 -0500 Received: from pps.filterd (m0246632.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 515DRada010506; Wed, 5 Feb 2025 14:58:25 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=cc :content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=corp-2023-11-20; bh=HPQY3 t7lqz3YfjREFEdRfiqO06SmhyTkaYjOQn9gbQI=; b=HSXvI/tGmqM3NMTqdGM5C 4+BlPkrp26IeteU4f7BmV4fa+blq7sKh5e3KFIKgKLRxxAioD67APBIYRQoIlx+q 8e5i8uiXVHvVnfpSkG8CxyeTGrFM0gk6l6N6lFBLptSVGT392ht00JhpickSw4LA Q1jYPYKQs408cUSBgCccnko1+NWazyBT60DCyFzSBp84eQItLSRA52Xj6Dy1VBEH rI81nvv15PREh09KKT3/EjaXvr2oM+Yyqt+nTnSIK/fAvC1OieNJlk6GMcnjHJtv 2e+Qh2QYMFuzgggN9FrkJgbEJSQZgx9uCqdxTe6os8h7J9g01JiqW82txUWVcV7I Q== Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta01.appoci.oracle.com [138.1.114.2]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 44hfy876f2-1 (version=TLSv1.2 
From: Jonah Palmer
To: qemu-devel@nongnu.org
Cc: eperezma@redhat.com, mst@redhat.com, leiyang@redhat.com, peterx@redhat.com, dtatulea@nvidia.com, jasowang@redhat.com, si-wei.liu@oracle.com, boris.ostrovsky@oracle.com, jonah.palmer@oracle.com
Subject: [PATCH 3/4] svq: Support translations via GPAs in vhost_svq_translate_addr
Date: Wed, 5 Feb 2025 09:58:09 -0500
Message-ID: <20250205145813.394915-4-jonah.palmer@oracle.com>
In-Reply-To: <20250205145813.394915-1-jonah.palmer@oracle.com>
References: <20250205145813.394915-1-jonah.palmer@oracle.com>

Propagates the GPAs (in_addr/out_addr) of a VirtQueueElement to vhost_svq_translate_addr() to translate GPAs to IOVAs via the GPA->IOVA tree when descriptors are backed by guest memory. GPAs are unique in the guest's address space, ensuring unambiguous IOVA translations. This avoids the issue where different GPAs map to the same HVA, which could cause the HVA->IOVA translation to return an IOVA associated with the wrong intended GPA.

For descriptors backed by host-only memory, the existing partial SVQ IOVA->HVA tree is used.
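The tree-selection logic described above can be sketched outside QEMU as a small self-contained model. This is illustrative only, not QEMU code: `ToyMap`, `toy_find()`, and `toy_translate()` are invented stand-ins for DMAMap, the interval-tree lookup, and vhost_svq_translate_addr(), and the "trees" are reduced to linear arrays.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for QEMU's DMAMap: one translated-address -> IOVA range. */
typedef struct {
    uint64_t translated_addr; /* GPA or HVA, depending on which tree */
    uint64_t iova;
    uint64_t size;
} ToyMap;

/* Linear scan standing in for the interval-tree range lookup. */
const ToyMap *toy_find(const ToyMap *tree, size_t n,
                       uint64_t taddr, uint64_t size)
{
    for (size_t i = 0; i < n; i++) {
        if (taddr >= tree[i].translated_addr &&
            taddr + size <= tree[i].translated_addr + tree[i].size) {
            return &tree[i];
        }
    }
    return NULL;
}

/*
 * Mirrors the patch's decision: when the caller supplies a GPA, the
 * GPA-keyed tree is searched, which stays unambiguous even if several
 * GPAs alias the same HVA; with no GPA (host-only memory), fall back
 * to the HVA-keyed tree.
 */
const ToyMap *toy_translate(const ToyMap *gpa_tree, size_t gpa_n,
                            const ToyMap *hva_tree, size_t hva_n,
                            uint64_t hva, uint64_t size,
                            const uint64_t *gpa /* NULL if host-only */)
{
    if (gpa) {
        return toy_find(gpa_tree, gpa_n, *gpa, size);
    }
    return toy_find(hva_tree, hva_n, hva, size);
}
```

With two GPA ranges backed by the same host memory, a GPA-keyed lookup still picks the right mapping, which is exactly the ambiguity the patch removes.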
Signed-off-by: Jonah Palmer
---
 hw/virtio/vhost-shadow-virtqueue.c | 45 ++++++++++++++++--------------
 hw/virtio/vhost-shadow-virtqueue.h |  5 ++--
 net/vhost-vdpa.c                   |  2 +-
 3 files changed, 28 insertions(+), 24 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index a53492fd36..30ba565f03 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -16,7 +16,6 @@
 #include "qemu/log.h"
 #include "qemu/memalign.h"
 #include "linux-headers/linux/vhost.h"
-#include "exec/ramblock.h"
 
 /**
  * Validate the transport device features that both guests can use with the SVQ
@@ -79,10 +78,11 @@ uint16_t vhost_svq_available_slots(const VhostShadowVirtqueue *svq)
  * @vaddr: Translated IOVA addresses
  * @iovec: Source qemu's VA addresses
  * @num: Length of iovec and minimum length of vaddr
+ * @gpas: Descriptors' GPAs, if backed by guest memory
  */
 static bool vhost_svq_translate_addr(const VhostShadowVirtqueue *svq,
                                      hwaddr *addrs, const struct iovec *iovec,
-                                     size_t num)
+                                     size_t num, const hwaddr *gpas)
 {
     if (num == 0) {
         return true;
@@ -91,22 +91,19 @@ static bool vhost_svq_translate_addr(const VhostShadowVirtqueue *svq,
     for (size_t i = 0; i < num; ++i) {
         Int128 needle_last, map_last;
         size_t off;
-        RAMBlock *rb;
-        hwaddr gpa;
-        ram_addr_t offset;
         const DMAMap *map;
         DMAMap needle;
 
-        rb = qemu_ram_block_from_host(iovec[i].iov_base, false, &offset);
-        if (rb) {
-            gpa = rb->offset + offset;
-
+        /* Check if the descriptor is backed by guest memory */
+        if (gpas) {
+            /* Search the GPA->IOVA tree */
             needle = (DMAMap) {
-                .translated_addr = gpa,
+                .translated_addr = gpas[i],
                 .size = iovec[i].iov_len,
             };
             map = vhost_iova_tree_find_gpa(svq->iova_tree, &needle);
         } else {
+            /* Search the IOVA->HVA tree */
             needle = (DMAMap) {
                 .translated_addr = (hwaddr)(uintptr_t)iovec[i].iov_base,
                 .size = iovec[i].iov_len,
@@ -148,6 +145,7 @@ static bool vhost_svq_translate_addr(const VhostShadowVirtqueue *svq,
  * @sg: Cache for hwaddr
  * @iovec: The iovec from the guest
  * @num: iovec length
+ * @addr: Descriptors' GPAs, if backed by guest memory
  * @more_descs: True if more descriptors come in the chain
  * @write: True if they are writeable descriptors
  *
@@ -155,7 +153,8 @@ static bool vhost_svq_translate_addr(const VhostShadowVirtqueue *svq,
  */
 static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
                                         const struct iovec *iovec, size_t num,
-                                        bool more_descs, bool write)
+                                        const hwaddr *addr, bool more_descs,
+                                        bool write)
 {
     uint16_t i = svq->free_head, last = svq->free_head;
     unsigned n;
@@ -167,7 +166,7 @@ static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
         return true;
     }
 
-    ok = vhost_svq_translate_addr(svq, sg, iovec, num);
+    ok = vhost_svq_translate_addr(svq, sg, iovec, num, addr);
     if (unlikely(!ok)) {
         return false;
     }
@@ -192,8 +191,9 @@ static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
 
 static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
                                 const struct iovec *out_sg, size_t out_num,
+                                const hwaddr *out_addr,
                                 const struct iovec *in_sg, size_t in_num,
-                                unsigned *head)
+                                const hwaddr *in_addr, unsigned *head)
 {
     unsigned avail_idx;
     vring_avail_t *avail = svq->vring.avail;
@@ -209,13 +209,14 @@ static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
         return false;
     }
 
-    ok = vhost_svq_vring_write_descs(svq, sgs, out_sg, out_num, in_num > 0,
-                                     false);
+    ok = vhost_svq_vring_write_descs(svq, sgs, out_sg, out_num, out_addr,
+                                     in_num > 0, false);
     if (unlikely(!ok)) {
         return false;
     }
 
-    ok = vhost_svq_vring_write_descs(svq, sgs, in_sg, in_num, false, true);
+    ok = vhost_svq_vring_write_descs(svq, sgs, in_sg, in_num, in_addr, false,
+                                     true);
     if (unlikely(!ok)) {
         return false;
     }
@@ -265,8 +266,9 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
  * Return -EINVAL if element is invalid, -ENOSPC if dev queue is full
  */
 int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
-                  size_t out_num, const struct iovec *in_sg, size_t in_num,
-                  VirtQueueElement *elem)
+                  size_t out_num, const hwaddr *out_addr,
+                  const struct iovec *in_sg, size_t in_num,
+                  const hwaddr *in_addr, VirtQueueElement *elem)
 {
     unsigned qemu_head;
     unsigned ndescs = in_num + out_num;
@@ -276,7 +278,8 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
         return -ENOSPC;
     }
 
-    ok = vhost_svq_add_split(svq, out_sg, out_num, in_sg, in_num, &qemu_head);
+    ok = vhost_svq_add_split(svq, out_sg, out_num, out_addr, in_sg, in_num,
+                             in_addr, &qemu_head);
     if (unlikely(!ok)) {
         return -EINVAL;
     }
@@ -292,8 +295,8 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
 static int vhost_svq_add_element(VhostShadowVirtqueue *svq,
                                  VirtQueueElement *elem)
 {
-    return vhost_svq_add(svq, elem->out_sg, elem->out_num, elem->in_sg,
-                         elem->in_num, elem);
+    return vhost_svq_add(svq, elem->out_sg, elem->out_num, elem->out_addr,
+                         elem->in_sg, elem->in_num, elem->in_addr, elem);
 }
 
 /**
diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
index 19c842a15b..9c273739d6 100644
--- a/hw/virtio/vhost-shadow-virtqueue.h
+++ b/hw/virtio/vhost-shadow-virtqueue.h
@@ -118,8 +118,9 @@ uint16_t vhost_svq_available_slots(const VhostShadowVirtqueue *svq);
 void vhost_svq_push_elem(VhostShadowVirtqueue *svq,
                          const VirtQueueElement *elem, uint32_t len);
 int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
-                  size_t out_num, const struct iovec *in_sg, size_t in_num,
-                  VirtQueueElement *elem);
+                  size_t out_num, const hwaddr *out_addr,
+                  const struct iovec *in_sg, size_t in_num,
+                  const hwaddr *in_addr, VirtQueueElement *elem);
 size_t vhost_svq_poll(VhostShadowVirtqueue *svq, size_t num);
 
 void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd);
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 5a3a57203d..bd01866878 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -649,7 +649,7 @@ static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s,
     VhostShadowVirtqueue *svq = g_ptr_array_index(s->vhost_vdpa.shadow_vqs, 0);
     int r;
 
-    r = vhost_svq_add(svq, out_sg, out_num, in_sg, in_num, NULL);
+    r = vhost_svq_add(svq, out_sg, out_num, NULL, in_sg, in_num, NULL, NULL);
     if (unlikely(r != 0)) {
         if (unlikely(r == -ENOSPC)) {
             qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",

From patchwork Wed Feb 5 14:58:10 2025
X-Patchwork-Submitter: Jonah Palmer
X-Patchwork-Id: 13961272
From: Jonah Palmer
To: qemu-devel@nongnu.org
Cc: eperezma@redhat.com, mst@redhat.com, leiyang@redhat.com, peterx@redhat.com, dtatulea@nvidia.com, jasowang@redhat.com, si-wei.liu@oracle.com, boris.ostrovsky@oracle.com, jonah.palmer@oracle.com
Subject: [PATCH 4/4] vhost-iova-tree: Update documentation
Date: Wed, 5 Feb 2025 09:58:10 -0500
Message-ID: <20250205145813.394915-5-jonah.palmer@oracle.com>
In-Reply-To: <20250205145813.394915-1-jonah.palmer@oracle.com>
References: <20250205145813.394915-1-jonah.palmer@oracle.com>

Signed-off-by: Jonah Palmer
---
 hw/virtio/vhost-iova-tree.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/hw/virtio/vhost-iova-tree.c b/hw/virtio/vhost-iova-tree.c
index 9d2d6a7af2..fa4147b773 100644
--- a/hw/virtio/vhost-iova-tree.c
+++ b/hw/virtio/vhost-iova-tree.c
@@ -37,9 +37,9 @@ struct VhostIOVATree {
 };
 
 /**
- * Create a new IOVA tree
+ * Create a new VhostIOVATree
  *
- * Returns the new IOVA tree
+ * Returns the new VhostIOVATree.
 */
 VhostIOVATree *vhost_iova_tree_new(hwaddr iova_first, hwaddr iova_last)
 {
@@ -56,7 +56,7 @@ VhostIOVATree *vhost_iova_tree_new(hwaddr iova_first, hwaddr iova_last)
 }
 
 /**
- * Delete an iova tree
+ * Delete a VhostIOVATree
 */
 void vhost_iova_tree_delete(VhostIOVATree *iova_tree)
 {
@@ -69,10 +69,10 @@ void vhost_iova_tree_delete(VhostIOVATree *iova_tree)
 
 /**
  * Find the IOVA address stored from a memory address
  *
- * @tree: The iova tree
+ * @tree: The VhostIOVATree
  * @map: The map with the memory address
  *
- * Return the stored mapping, or NULL if not found.
+ * Returns the stored IOVA->HVA mapping, or NULL if not found.
 */
 const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *tree,
                                         const DMAMap *map)
@@ -81,10 +81,10 @@ const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *tree,
 }
 
 /**
- * Allocate a new mapping
+ * Allocate a new IOVA range and add the mapping to the IOVA->HVA tree
  *
- * @tree: The iova tree
- * @map: The iova map
+ * @tree: The VhostIOVATree
+ * @map: The IOVA mapping
  * @taddr: The translated address (HVA)
  *
  * Returns:
@@ -92,7 +92,7 @@ const DMAMap *vhost_iova_tree_find_iova(const VhostIOVATree *tree,
  * - IOVA_ERR_INVALID if the map does not make sense (like size overflow)
  * - IOVA_ERR_NOMEM if tree cannot allocate more space.
  *
- * It returns assignated iova in map->iova if return value is VHOST_DMA_MAP_OK.
+ * It returns an assigned IOVA in map->iova if the return value is IOVA_OK.
 */
 int vhost_iova_tree_map_alloc(VhostIOVATree *tree, DMAMap *map, hwaddr taddr)
 {
@@ -117,9 +117,9 @@ int vhost_iova_tree_map_alloc(VhostIOVATree *tree, DMAMap *map, hwaddr taddr)
 }
 
 /**
- * Remove existing mappings from iova tree
+ * Remove existing mappings from the IOVA-only and IOVA->HVA trees
  *
- * @iova_tree: The vhost iova tree
+ * @iova_tree: The VhostIOVATree
  * @map: The map to remove
 */
 void vhost_iova_tree_remove(VhostIOVATree *iova_tree, DMAMap map)
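
The contract the updated comment documents for vhost_iova_tree_map_alloc() — an IOVA range is allocated, the assigned IOVA is returned in map->iova on IOVA_OK, and IOVA_ERR_INVALID / IOVA_ERR_NOMEM signal a nonsensical or unsatisfiable request — can be illustrated with a toy allocator. This is a simplified bump-allocator sketch, not QEMU's actual implementation (which reuses freed ranges via an interval tree); `ToyAllocator` and `toy_map_alloc()` are invented names.

```c
#include <stdint.h>

enum { IOVA_OK = 0, IOVA_ERR_INVALID = -1, IOVA_ERR_NOMEM = -2 };

typedef struct {
    uint64_t iova; /* filled in by the allocator on IOVA_OK */
    uint64_t size;
} ToyDMAMap;

typedef struct {
    uint64_t next;      /* next free IOVA (toy bump allocation) */
    uint64_t iova_last; /* end of the usable IOVA range */
} ToyAllocator;

/* Assign the next free IOVA range of map->size bytes, if any remains. */
int toy_map_alloc(ToyAllocator *a, ToyDMAMap *map)
{
    /* "Does not make sense": zero-sized map or size overflow. */
    if (map->size == 0 || a->next + map->size < a->next) {
        return IOVA_ERR_INVALID;
    }
    /* No room left in [next, iova_last). */
    if (a->next + map->size > a->iova_last) {
        return IOVA_ERR_NOMEM;
    }
    map->iova = a->next; /* assigned IOVA handed back in map->iova */
    a->next += map->size;
    return IOVA_OK;
}
```

Successive allocations receive adjacent ranges until the IOVA space is exhausted, at which point IOVA_ERR_NOMEM is returned, mirroring the documented return values.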