From patchwork Thu May 21 05:00:29 2020
X-Patchwork-Submitter: Raphael Norwitz
X-Patchwork-Id: 11562251
From: Raphael Norwitz
Subject: [PATCH v4 02/10] Add vhost-user helper to get MemoryRegion data
Date: Thu, 21 May 2020 05:00:29 +0000 (UTC)
Message-Id: <1588533678-23450-3-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
X-ACL-Warn: Detected OS = Linux 3.11 and newer [fuzzy] X-Spam_score_int: -17 X-Spam_score: -1.8 X-Spam_bar: - X-Spam_report: (-1.8 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_MED=0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_EF=-0.1, HEADER_FROM_DIFFERENT_DOMAINS=0.249, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=-0.001, SPF_PASS=-0.001, UNPARSEABLE_RELAY=0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" When setting the memory tables, qemu uses a memory region's userspace address to look up the region's MemoryRegion struct. Among other things, the MemoryRegion contains the region's offset and associated file descriptor, all of which need to be sent to the backend. With VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS, this logic will be needed in multiple places, so before feature support is added it should be moved to a helper function. This helper is also used to simplify the vhost_user_can_merge() function. Signed-off-by: Raphael Norwitz Reviewed-by: Marc-André Lureau --- hw/virtio/vhost-user.c | 25 +++++++++++++++---------- 1 file changed, 15 insertions(+), 10 deletions(-) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index 2e0552d..442b0d6 100644 --- a/hw/virtio/vhost-user.c +++ b/hw/virtio/vhost-user.c @@ -407,6 +407,18 @@ static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base, return 0; } +static MemoryRegion *vhost_user_get_mr_data(uint64_t addr, ram_addr_t *offset, + int *fd) +{ + MemoryRegion *mr; + + assert((uintptr_t)addr == addr); + mr = memory_region_from_host((void *)(uintptr_t)addr, offset); + *fd = memory_region_get_fd(mr); + + return mr; +} + static void vhost_user_fill_msg_region(VhostUserMemoryRegion *dst, struct vhost_memory_region *src) { @@ -433,10 +445,7 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u, for (i = 0; i < dev->mem->nregions; ++i) { reg = dev->mem->regions + i; - assert((uintptr_t)reg->userspace_addr == reg->userspace_addr); - mr = memory_region_from_host((void *)(uintptr_t)reg->userspace_addr, - &offset); - fd = memory_region_get_fd(mr); + mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd); if (fd > 0) { if (track_ramblocks) { assert(*fd_num < VHOST_MEMORY_MAX_NREGIONS); @@ -1551,13 +1560,9 @@ static bool vhost_user_can_merge(struct vhost_dev *dev, { ram_addr_t offset; int mfd, rfd; - MemoryRegion *mr; - - mr = memory_region_from_host((void *)(uintptr_t)start1, &offset); - mfd = memory_region_get_fd(mr); - mr = memory_region_from_host((void *)(uintptr_t)start2, &offset); - rfd = memory_region_get_fd(mr); + (void)vhost_user_get_mr_data(start1, &offset, &mfd); + (void)vhost_user_get_mr_data(start2, &offset, &rfd); return mfd == rfd; } From patchwork Thu May 21 05:00:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Raphael Norwitz X-Patchwork-Id: 11562255 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 42EEE138A for ; Thu, 21 May 2020 05:01:43 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher 
From: Raphael Norwitz
Subject: [PATCH v4 03/10] Add VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS
Date: Thu, 21 May 2020 05:00:32 +0000 (UTC)
Message-Id: <1588533678-23450-4-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Turschmid , raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" This change introduces a new feature to the vhost-user protocol allowing a backend device to specify the maximum number of ram slots it supports. At this point, the value returned by the backend will be capped at the maximum number of ram slots which can be supported by vhost-user, which is currently set to 8 because of underlying protocol limitations. The returned value will be stored inside the VhostUserState struct so that on device reconnect we can verify that the ram slot limitation has not decreased since the last time the device connected. Signed-off-by: Raphael Norwitz Signed-off-by: Peter Turschmid Reviewed-by: Marc-André Lureau --- docs/interop/vhost-user.rst | 16 ++++++++++++++ hw/virtio/vhost-user.c | 49 ++++++++++++++++++++++++++++++++++++++++-- include/hw/virtio/vhost-user.h | 1 + 3 files changed, 64 insertions(+), 2 deletions(-) diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst index 3b1b660..b3cf5c3 100644 --- a/docs/interop/vhost-user.rst +++ b/docs/interop/vhost-user.rst @@ -815,6 +815,7 @@ Protocol features #define VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD 12 #define VHOST_USER_PROTOCOL_F_RESET_DEVICE 13 #define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14 + #define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS 15 Master message types -------------------- @@ -1263,6 +1264,21 @@ Master message types The state.num field is currently reserved and must be set to 0. +``VHOST_USER_GET_MAX_MEM_SLOTS`` + :id: 36 + :equivalent ioctl: N/A + :slave payload: u64 + + When the ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol + feature has been successfully negotiated, this message is submitted + by master to the slave. The slave should return the message with a + u64 payload containing the maximum number of memory slots for + QEMU to expose to the guest. At this point, the value returned + by the backend will be capped at the maximum number of ram slots + which can be supported by vhost-user. Currently that limit is set + at VHOST_USER_MAX_RAM_SLOTS = 8 because of underlying protocol + limitations. + Slave message types ------------------- diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index 442b0d6..0af593f 100644 --- a/hw/virtio/vhost-user.c +++ b/hw/virtio/vhost-user.c @@ -59,6 +59,8 @@ enum VhostUserProtocolFeature { VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11, VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12, VHOST_USER_PROTOCOL_F_RESET_DEVICE = 13, + /* Feature 14 reserved for VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS. */ + VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15, VHOST_USER_PROTOCOL_F_MAX }; @@ -100,6 +102,8 @@ typedef enum VhostUserRequest { VHOST_USER_SET_INFLIGHT_FD = 32, VHOST_USER_GPU_SET_SOCKET = 33, VHOST_USER_RESET_DEVICE = 34, + /* Message number 35 reserved for VHOST_USER_VRING_KICK. 
*/ + VHOST_USER_GET_MAX_MEM_SLOTS = 36, VHOST_USER_MAX } VhostUserRequest; @@ -895,6 +899,23 @@ static int vhost_user_set_owner(struct vhost_dev *dev) return 0; } +static int vhost_user_get_max_memslots(struct vhost_dev *dev, + uint64_t *max_memslots) +{ + uint64_t backend_max_memslots; + int err; + + err = vhost_user_get_u64(dev, VHOST_USER_GET_MAX_MEM_SLOTS, + &backend_max_memslots); + if (err < 0) { + return err; + } + + *max_memslots = backend_max_memslots; + + return 0; +} + static int vhost_user_reset_device(struct vhost_dev *dev) { VhostUserMsg msg = { @@ -1392,7 +1413,7 @@ static int vhost_user_postcopy_notifier(NotifierWithReturn *notifier, static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque) { - uint64_t features, protocol_features; + uint64_t features, protocol_features, ram_slots; struct vhost_user *u; int err; @@ -1454,6 +1475,27 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque) "slave-req protocol features."); return -1; } + + /* get max memory regions if backend supports configurable RAM slots */ + if (!virtio_has_feature(dev->protocol_features, + VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS)) { + u->user->memory_slots = VHOST_MEMORY_MAX_NREGIONS; + } else { + err = vhost_user_get_max_memslots(dev, &ram_slots); + if (err < 0) { + return err; + } + + if (ram_slots < u->user->memory_slots) { + error_report("The backend specified a max ram slots limit " + "of %lu, when the prior validated limit was %d. " + "This limit should never decrease.", ram_slots, + u->user->memory_slots); + return -1; + } + + u->user->memory_slots = MIN(ram_slots, VHOST_MEMORY_MAX_NREGIONS); + } } if (dev->migration_blocker == NULL && @@ -1519,7 +1561,9 @@ static int vhost_user_get_vq_index(struct vhost_dev *dev, int idx) static int vhost_user_memslots_limit(struct vhost_dev *dev) { - return VHOST_MEMORY_MAX_NREGIONS; + struct vhost_user *u = dev->opaque; + + return u->user->memory_slots; } static bool vhost_user_requires_shm_log(struct vhost_dev *dev) @@ -1904,6 +1948,7 @@ bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp) return false; } user->chr = chr; + user->memory_slots = 0; return true; } diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h index 811e325..a9abca3 100644 --- a/include/hw/virtio/vhost-user.h +++ b/include/hw/virtio/vhost-user.h @@ -20,6 +20,7 @@ typedef struct VhostUserHostNotifier { typedef struct VhostUserState { CharBackend *chr; VhostUserHostNotifier notifier[VIRTIO_QUEUE_MAX]; + int memory_slots; } VhostUserState; bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp); From patchwork Thu May 21 05:00:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Raphael Norwitz X-Patchwork-Id: 11562257 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 053B0138A for ; Thu, 21 May 2020 05:01:53 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C05F52070A for ; Thu, 21 May 2020 05:01:52 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=sendgrid.net header.i=@sendgrid.net header.b="DNv4Xfm1" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C05F52070A 
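As a concrete illustration of the VHOST_USER_GET_MAX_MEM_SLOTS exchange added in
patch 03/10 above, the standalone C sketch below shows how a backend might
assemble the u64 reply advertising its slot limit. This is illustrative only and
not code from QEMU or libvhost-user: the message structs, the
BACKEND_MAX_MEM_SLOTS value and the reply_max_mem_slots()/main() functions are
assumptions chosen for the example, and the header layout simply mirrors the
request/flags/size framing described in docs/interop/vhost-user.rst.

/*
 * Hypothetical sketch: a vhost-user backend answering
 * VHOST_USER_GET_MAX_MEM_SLOTS with a u64 payload.  The structs mirror the
 * vhost-user wire framing (request, flags, size, payload) but are
 * illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BACKEND_MAX_MEM_SLOTS 509   /* assumption: what this backend can map */

struct vhost_user_msg_hdr {
    uint32_t request;
    uint32_t flags;
    uint32_t size;                  /* size of the payload that follows */
};

struct vhost_user_u64_msg {
    struct vhost_user_msg_hdr hdr;
    uint64_t payload;
};

/* Fill in a reply advertising how many memory slots this backend supports. */
static void reply_max_mem_slots(const struct vhost_user_msg_hdr *req,
                                struct vhost_user_u64_msg *reply)
{
    memset(reply, 0, sizeof(*reply));
    reply->hdr.request = req->request;  /* echo VHOST_USER_GET_MAX_MEM_SLOTS */
    reply->hdr.flags   = req->flags;    /* a real backend sets version/reply bits */
    reply->hdr.size    = sizeof(reply->payload);
    reply->payload     = BACKEND_MAX_MEM_SLOTS;
}

int main(void)
{
    struct vhost_user_msg_hdr req = { .request = 36 /* GET_MAX_MEM_SLOTS */ };
    struct vhost_user_u64_msg reply;

    reply_max_mem_slots(&req, &reply);
    printf("advertising %llu memory slots\n",
           (unsigned long long)reply.payload);
    return 0;
}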
From: Raphael Norwitz
Subject: [PATCH v4 04/10] Transmit vhost-user memory regions individually
Date: Thu, 21 May 2020 05:00:35 +0000 (UTC)
Message-Id: <1588533678-23450-5-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
Cc: Peter Turschmid, raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com,
    Swapnil Ingle, Raphael Norwitz

With this change, when the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS protocol
feature has been negotiated, Qemu no longer sends the backend all the memory regions in a single message. Rather, when the memory tables are set or updated, a series of VHOST_USER_ADD_MEM_REG and VHOST_USER_REM_MEM_REG messages are sent to transmit the regions to map and/or unmap instead of sending send all the regions in one fixed size VHOST_USER_SET_MEM_TABLE message. The vhost_user struct maintains a shadow state of the VM’s memory regions. When the memory tables are modified, the vhost_user_set_mem_table() function compares the new device memory state to the shadow state and only sends regions which need to be unmapped or mapped in. The regions which must be unmapped are sent first, followed by the new regions to be mapped in. After all the messages have been sent, the shadow state is set to the current virtual device state. Existing backends which do not support VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS are unaffected. Signed-off-by: Raphael Norwitz Signed-off-by: Swapnil Ingle Signed-off-by: Peter Turschmid Suggested-by: Mike Cui Acked-by: Marc-André Lureau --- docs/interop/vhost-user.rst | 33 ++- hw/virtio/vhost-user.c | 510 +++++++++++++++++++++++++++++++++++++------- 2 files changed, 469 insertions(+), 74 deletions(-) diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst index b3cf5c3..037eefa 100644 --- a/docs/interop/vhost-user.rst +++ b/docs/interop/vhost-user.rst @@ -1276,8 +1276,37 @@ Master message types QEMU to expose to the guest. At this point, the value returned by the backend will be capped at the maximum number of ram slots which can be supported by vhost-user. Currently that limit is set - at VHOST_USER_MAX_RAM_SLOTS = 8 because of underlying protocol - limitations. + at VHOST_USER_MAX_RAM_SLOTS = 8. + +``VHOST_USER_ADD_MEM_REG`` + :id: 37 + :equivalent ioctl: N/A + :slave payload: memory region + + When the ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol + feature has been successfully negotiated, this message is submitted + by the master to the slave. The message payload contains a memory + region descriptor struct, describing a region of guest memory which + the slave device must map in. When the + ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol feature has + been successfully negotiated, along with the + ``VHOST_USER_REM_MEM_REG`` message, this message is used to set and + update the memory tables of the slave device. + +``VHOST_USER_REM_MEM_REG`` + :id: 38 + :equivalent ioctl: N/A + :slave payload: memory region + + When the ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol + feature has been successfully negotiated, this message is submitted + by the master to the slave. The message payload contains a memory + region descriptor struct, describing a region of guest memory which + the slave device must unmap. When the + ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol feature has + been successfully negotiated, along with the + ``VHOST_USER_ADD_MEM_REG`` message, this message is used to set and + update the memory tables of the slave device. Slave message types ------------------- diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index 0af593f..9358406 100644 --- a/hw/virtio/vhost-user.c +++ b/hw/virtio/vhost-user.c @@ -104,6 +104,8 @@ typedef enum VhostUserRequest { VHOST_USER_RESET_DEVICE = 34, /* Message number 35 reserved for VHOST_USER_VRING_KICK. 
*/ VHOST_USER_GET_MAX_MEM_SLOTS = 36, + VHOST_USER_ADD_MEM_REG = 37, + VHOST_USER_REM_MEM_REG = 38, VHOST_USER_MAX } VhostUserRequest; @@ -128,6 +130,11 @@ typedef struct VhostUserMemory { VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS]; } VhostUserMemory; +typedef struct VhostUserMemRegMsg { + uint32_t padding; + VhostUserMemoryRegion region; +} VhostUserMemRegMsg; + typedef struct VhostUserLog { uint64_t mmap_size; uint64_t mmap_offset; @@ -186,6 +193,7 @@ typedef union { struct vhost_vring_state state; struct vhost_vring_addr addr; VhostUserMemory memory; + VhostUserMemRegMsg mem_reg; VhostUserLog log; struct vhost_iotlb_msg iotlb; VhostUserConfig config; @@ -226,6 +234,16 @@ struct vhost_user { /* True once we've entered postcopy_listen */ bool postcopy_listen; + + /* Our current regions */ + int num_shadow_regions; + struct vhost_memory_region shadow_regions[VHOST_MEMORY_MAX_NREGIONS]; +}; + +struct scrub_regions { + struct vhost_memory_region *region; + int reg_idx; + int fd_idx; }; static bool ioeventfd_enabled(void) @@ -489,8 +507,332 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u, return 1; } +static inline bool reg_equal(struct vhost_memory_region *shadow_reg, + struct vhost_memory_region *vdev_reg) +{ + return shadow_reg->guest_phys_addr == vdev_reg->guest_phys_addr && + shadow_reg->userspace_addr == vdev_reg->userspace_addr && + shadow_reg->memory_size == vdev_reg->memory_size; +} + +static void scrub_shadow_regions(struct vhost_dev *dev, + struct scrub_regions *add_reg, + int *nr_add_reg, + struct scrub_regions *rem_reg, + int *nr_rem_reg, uint64_t *shadow_pcb, + bool track_ramblocks) +{ + struct vhost_user *u = dev->opaque; + bool found[VHOST_MEMORY_MAX_NREGIONS] = {}; + struct vhost_memory_region *reg, *shadow_reg; + int i, j, fd, add_idx = 0, rm_idx = 0, fd_num = 0; + ram_addr_t offset; + MemoryRegion *mr; + bool matching; + + /* + * Find memory regions present in our shadow state which are not in + * the device's current memory state. + * + * Mark regions in both the shadow and device state as "found". + */ + for (i = 0; i < u->num_shadow_regions; i++) { + shadow_reg = &u->shadow_regions[i]; + matching = false; + + for (j = 0; j < dev->mem->nregions; j++) { + reg = &dev->mem->regions[j]; + + mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd); + + if (reg_equal(shadow_reg, reg)) { + matching = true; + found[j] = true; + if (track_ramblocks) { + /* + * Reset postcopy client bases, region_rb, and + * region_rb_offset in case regions are removed. + */ + if (fd > 0) { + u->region_rb_offset[j] = offset; + u->region_rb[j] = mr->ram_block; + shadow_pcb[j] = u->postcopy_client_bases[i]; + } else { + u->region_rb_offset[j] = 0; + u->region_rb[j] = NULL; + } + } + break; + } + } + + /* + * If the region was not found in the current device memory state + * create an entry for it in the removed list. + */ + if (!matching) { + rem_reg[rm_idx].region = shadow_reg; + rem_reg[rm_idx++].reg_idx = i; + } + } + + /* + * For regions not marked "found", create entries in the added list. + * + * Note their indexes in the device memory state and the indexes of their + * file descriptors. + */ + for (i = 0; i < dev->mem->nregions; i++) { + reg = &dev->mem->regions[i]; + mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd); + if (fd > 0) { + ++fd_num; + } + + /* + * If the region was in both the shadow and device state we don't + * need to send a VHOST_USER_ADD_MEM_REG message for it. 
+ */ + if (found[i]) { + continue; + } + + add_reg[add_idx].region = reg; + add_reg[add_idx].reg_idx = i; + add_reg[add_idx++].fd_idx = fd_num; + } + *nr_rem_reg = rm_idx; + *nr_add_reg = add_idx; + + return; +} + +static int send_remove_regions(struct vhost_dev *dev, + struct scrub_regions *remove_reg, + int nr_rem_reg, VhostUserMsg *msg, + bool reply_supported) +{ + struct vhost_user *u = dev->opaque; + struct vhost_memory_region *shadow_reg; + int i, fd, shadow_reg_idx, ret; + ram_addr_t offset; + VhostUserMemoryRegion region_buffer; + + /* + * The regions in remove_reg appear in the same order they do in the + * shadow table. Therefore we can minimize memory copies by iterating + * through remove_reg backwards. + */ + for (i = nr_rem_reg - 1; i >= 0; i--) { + shadow_reg = remove_reg[i].region; + shadow_reg_idx = remove_reg[i].reg_idx; + + vhost_user_get_mr_data(shadow_reg->userspace_addr, &offset, &fd); + + if (fd > 0) { + msg->hdr.request = VHOST_USER_REM_MEM_REG; + vhost_user_fill_msg_region(®ion_buffer, shadow_reg); + msg->payload.mem_reg.region = region_buffer; + + if (vhost_user_write(dev, msg, &fd, 1) < 0) { + return -1; + } + + if (reply_supported) { + ret = process_message_reply(dev, msg); + if (ret) { + return ret; + } + } + } + + /* + * At this point we know the backend has unmapped the region. It is now + * safe to remove it from the shadow table. + */ + memmove(&u->shadow_regions[shadow_reg_idx], + &u->shadow_regions[shadow_reg_idx + 1], + sizeof(struct vhost_memory_region) * + (u->num_shadow_regions - shadow_reg_idx)); + u->num_shadow_regions--; + } + + return 0; +} + +static int send_add_regions(struct vhost_dev *dev, + struct scrub_regions *add_reg, int nr_add_reg, + VhostUserMsg *msg, uint64_t *shadow_pcb, + bool reply_supported, bool track_ramblocks) +{ + struct vhost_user *u = dev->opaque; + int i, fd, ret, reg_idx, reg_fd_idx; + struct vhost_memory_region *reg; + MemoryRegion *mr; + ram_addr_t offset; + VhostUserMsg msg_reply; + VhostUserMemoryRegion region_buffer; + + for (i = 0; i < nr_add_reg; i++) { + reg = add_reg[i].region; + reg_idx = add_reg[i].reg_idx; + reg_fd_idx = add_reg[i].fd_idx; + + mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd); + + if (fd > 0) { + if (track_ramblocks) { + trace_vhost_user_set_mem_table_withfd(reg_fd_idx, mr->name, + reg->memory_size, + reg->guest_phys_addr, + reg->userspace_addr, + offset); + u->region_rb_offset[reg_idx] = offset; + u->region_rb[reg_idx] = mr->ram_block; + } + msg->hdr.request = VHOST_USER_ADD_MEM_REG; + vhost_user_fill_msg_region(®ion_buffer, reg); + msg->payload.mem_reg.region = region_buffer; + msg->payload.mem_reg.region.mmap_offset = offset; + + if (vhost_user_write(dev, msg, &fd, 1) < 0) { + return -1; + } + + if (track_ramblocks) { + uint64_t reply_gpa; + + if (vhost_user_read(dev, &msg_reply) < 0) { + return -1; + } + + reply_gpa = msg_reply.payload.mem_reg.region.guest_phys_addr; + + if (msg_reply.hdr.request != VHOST_USER_ADD_MEM_REG) { + error_report("%s: Received unexpected msg type." + "Expected %d received %d", __func__, + VHOST_USER_ADD_MEM_REG, + msg_reply.hdr.request); + return -1; + } + + /* + * We're using the same structure, just reusing one of the + * fields, so it should be the same size. + */ + if (msg_reply.hdr.size != msg->hdr.size) { + error_report("%s: Unexpected size for postcopy reply " + "%d vs %d", __func__, msg_reply.hdr.size, + msg->hdr.size); + return -1; + } + + /* Get the postcopy client base from the backend's reply. 
*/ + if (reply_gpa == dev->mem->regions[reg_idx].guest_phys_addr) { + shadow_pcb[reg_idx] = + msg_reply.payload.mem_reg.region.userspace_addr; + trace_vhost_user_set_mem_table_postcopy( + msg_reply.payload.mem_reg.region.userspace_addr, + msg->payload.mem_reg.region.userspace_addr, + reg_fd_idx, reg_idx); + } else { + error_report("%s: invalid postcopy reply for region. " + "Got guest physical address %lX, expected " + "%lX", __func__, reply_gpa, + dev->mem->regions[reg_idx].guest_phys_addr); + return -1; + } + } else if (reply_supported) { + ret = process_message_reply(dev, msg); + if (ret) { + return ret; + } + } + } else if (track_ramblocks) { + u->region_rb_offset[reg_idx] = 0; + u->region_rb[reg_idx] = NULL; + } + + /* + * At this point, we know the backend has mapped in the new + * region, if the region has a valid file descriptor. + * + * The region should now be added to the shadow table. + */ + u->shadow_regions[u->num_shadow_regions].guest_phys_addr = + reg->guest_phys_addr; + u->shadow_regions[u->num_shadow_regions].userspace_addr = + reg->userspace_addr; + u->shadow_regions[u->num_shadow_regions].memory_size = + reg->memory_size; + u->num_shadow_regions++; + } + + return 0; +} + +static int vhost_user_add_remove_regions(struct vhost_dev *dev, + VhostUserMsg *msg, + bool reply_supported, + bool track_ramblocks) +{ + struct vhost_user *u = dev->opaque; + struct scrub_regions add_reg[VHOST_MEMORY_MAX_NREGIONS]; + struct scrub_regions rem_reg[VHOST_MEMORY_MAX_NREGIONS]; + uint64_t shadow_pcb[VHOST_MEMORY_MAX_NREGIONS] = {}; + int nr_add_reg, nr_rem_reg; + + msg->hdr.size = sizeof(msg->payload.mem_reg.padding) + + sizeof(VhostUserMemoryRegion); + + /* Find the regions which need to be removed or added. */ + scrub_shadow_regions(dev, add_reg, &nr_add_reg, rem_reg, &nr_rem_reg, + shadow_pcb, track_ramblocks); + + if (nr_rem_reg && send_remove_regions(dev, rem_reg, nr_rem_reg, msg, + reply_supported) < 0) + { + goto err; + } + + if (nr_add_reg && send_add_regions(dev, add_reg, nr_add_reg, msg, + shadow_pcb, reply_supported, track_ramblocks) < 0) + { + goto err; + } + + if (track_ramblocks) { + memcpy(u->postcopy_client_bases, shadow_pcb, + sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS); + /* + * Now we've registered this with the postcopy code, we ack to the + * client, because now we're in the position to be able to deal with + * any faults it generates. + */ + /* TODO: Use this for failure cases as well with a bad value. 
*/ + msg->hdr.size = sizeof(msg->payload.u64); + msg->payload.u64 = 0; /* OK */ + + if (vhost_user_write(dev, msg, NULL, 0) < 0) { + return -1; + } + } + + return 0; + +err: + if (track_ramblocks) { + memcpy(u->postcopy_client_bases, shadow_pcb, + sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS); + } + + return -1; +} + static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev, - struct vhost_memory *mem) + struct vhost_memory *mem, + bool reply_supported, + bool config_mem_slots) { struct vhost_user *u = dev->opaque; int fds[VHOST_MEMORY_MAX_NREGIONS]; @@ -513,71 +855,84 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev, u->region_rb_len = dev->mem->nregions; } - if (vhost_user_fill_set_mem_table_msg(u, dev, &msg, fds, &fd_num, + if (config_mem_slots) { + if (vhost_user_add_remove_regions(dev, &msg, reply_supported, true) < 0) { - return -1; - } + return -1; + } + } else { + if (vhost_user_fill_set_mem_table_msg(u, dev, &msg, fds, &fd_num, + true) < 0) { + return -1; + } - if (vhost_user_write(dev, &msg, fds, fd_num) < 0) { - return -1; - } + if (vhost_user_write(dev, &msg, fds, fd_num) < 0) { + return -1; + } - if (vhost_user_read(dev, &msg_reply) < 0) { - return -1; - } + if (vhost_user_read(dev, &msg_reply) < 0) { + return -1; + } - if (msg_reply.hdr.request != VHOST_USER_SET_MEM_TABLE) { - error_report("%s: Received unexpected msg type." - "Expected %d received %d", __func__, - VHOST_USER_SET_MEM_TABLE, msg_reply.hdr.request); - return -1; - } - /* We're using the same structure, just reusing one of the - * fields, so it should be the same size. - */ - if (msg_reply.hdr.size != msg.hdr.size) { - error_report("%s: Unexpected size for postcopy reply " - "%d vs %d", __func__, msg_reply.hdr.size, msg.hdr.size); - return -1; - } - - memset(u->postcopy_client_bases, 0, - sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS); - - /* They're in the same order as the regions that were sent - * but some of the regions were skipped (above) if they - * didn't have fd's - */ - for (msg_i = 0, region_i = 0; - region_i < dev->mem->nregions; - region_i++) { - if (msg_i < fd_num && - msg_reply.payload.memory.regions[msg_i].guest_phys_addr == - dev->mem->regions[region_i].guest_phys_addr) { - u->postcopy_client_bases[region_i] = - msg_reply.payload.memory.regions[msg_i].userspace_addr; - trace_vhost_user_set_mem_table_postcopy( - msg_reply.payload.memory.regions[msg_i].userspace_addr, - msg.payload.memory.regions[msg_i].userspace_addr, - msg_i, region_i); - msg_i++; + if (msg_reply.hdr.request != VHOST_USER_SET_MEM_TABLE) { + error_report("%s: Received unexpected msg type." + "Expected %d received %d", __func__, + VHOST_USER_SET_MEM_TABLE, msg_reply.hdr.request); + return -1; + } + + /* + * We're using the same structure, just reusing one of the + * fields, so it should be the same size. 
+ */ + if (msg_reply.hdr.size != msg.hdr.size) { + error_report("%s: Unexpected size for postcopy reply " + "%d vs %d", __func__, msg_reply.hdr.size, + msg.hdr.size); + return -1; + } + + memset(u->postcopy_client_bases, 0, + sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS); + + /* + * They're in the same order as the regions that were sent + * but some of the regions were skipped (above) if they + * didn't have fd's + */ + for (msg_i = 0, region_i = 0; + region_i < dev->mem->nregions; + region_i++) { + if (msg_i < fd_num && + msg_reply.payload.memory.regions[msg_i].guest_phys_addr == + dev->mem->regions[region_i].guest_phys_addr) { + u->postcopy_client_bases[region_i] = + msg_reply.payload.memory.regions[msg_i].userspace_addr; + trace_vhost_user_set_mem_table_postcopy( + msg_reply.payload.memory.regions[msg_i].userspace_addr, + msg.payload.memory.regions[msg_i].userspace_addr, + msg_i, region_i); + msg_i++; + } + } + if (msg_i != fd_num) { + error_report("%s: postcopy reply not fully consumed " + "%d vs %zd", + __func__, msg_i, fd_num); + return -1; + } + + /* + * Now we've registered this with the postcopy code, we ack to the + * client, because now we're in the position to be able to deal + * with any faults it generates. + */ + /* TODO: Use this for failure cases as well with a bad value. */ + msg.hdr.size = sizeof(msg.payload.u64); + msg.payload.u64 = 0; /* OK */ + if (vhost_user_write(dev, &msg, NULL, 0) < 0) { + return -1; } - } - if (msg_i != fd_num) { - error_report("%s: postcopy reply not fully consumed " - "%d vs %zd", - __func__, msg_i, fd_num); - return -1; - } - /* Now we've registered this with the postcopy code, we ack to the client, - * because now we're in the position to be able to deal with any faults - * it generates. - */ - /* TODO: Use this for failure cases as well with a bad value */ - msg.hdr.size = sizeof(msg.payload.u64); - msg.payload.u64 = 0; /* OK */ - if (vhost_user_write(dev, &msg, NULL, 0) < 0) { - return -1; } return 0; @@ -592,12 +947,17 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev, bool do_postcopy = u->postcopy_listen && u->postcopy_fd.handler; bool reply_supported = virtio_has_feature(dev->protocol_features, VHOST_USER_PROTOCOL_F_REPLY_ACK); + bool config_mem_slots = + virtio_has_feature(dev->protocol_features, + VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS); if (do_postcopy) { - /* Postcopy has enough differences that it's best done in it's own + /* + * Postcopy has enough differences that it's best done in it's own * version */ - return vhost_user_set_mem_table_postcopy(dev, mem); + return vhost_user_set_mem_table_postcopy(dev, mem, reply_supported, + config_mem_slots); } VhostUserMsg msg = { @@ -608,17 +968,23 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev, msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK; } - if (vhost_user_fill_set_mem_table_msg(u, dev, &msg, fds, &fd_num, + if (config_mem_slots) { + if (vhost_user_add_remove_regions(dev, &msg, reply_supported, false) < 0) { - return -1; - } - - if (vhost_user_write(dev, &msg, fds, fd_num) < 0) { - return -1; - } + return -1; + } + } else { + if (vhost_user_fill_set_mem_table_msg(u, dev, &msg, fds, &fd_num, + false) < 0) { + return -1; + } + if (vhost_user_write(dev, &msg, fds, fd_num) < 0) { + return -1; + } - if (reply_supported) { - return process_message_reply(dev, &msg); + if (reply_supported) { + return process_message_reply(dev, &msg); + } } return 0; From patchwork Thu May 21 05:00:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
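To make the shadow-table reconciliation described in patch 04/10 above easier to
follow, here is a simplified standalone model of the same idea: compare the
previously sent (shadow) region list against the new device state, and report
which regions would need a VHOST_USER_REM_MEM_REG and which a
VHOST_USER_ADD_MEM_REG. The mem_region struct, the fixed-size found[] array and
the printf() reporting are assumptions for illustration; in the patch itself
this work is done by scrub_shadow_regions(), send_remove_regions() and
send_add_regions().

/*
 * Simplified model of the reconciliation used when
 * VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS is negotiated: regions present
 * only in the shadow state are removed, regions present only in the new
 * device state are added, matching regions are left alone.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct mem_region {
    uint64_t guest_phys_addr;
    uint64_t userspace_addr;
    uint64_t memory_size;
};

static bool reg_equal(const struct mem_region *a, const struct mem_region *b)
{
    return a->guest_phys_addr == b->guest_phys_addr &&
           a->userspace_addr == b->userspace_addr &&
           a->memory_size == b->memory_size;
}

static void reconcile(const struct mem_region *shadow, int n_shadow,
                      const struct mem_region *current, int n_current)
{
    bool found[16] = { false };     /* sketch assumes n_current <= 16 */
    int i, j;

    /* Anything in the shadow state but not in the new state gets removed. */
    for (i = 0; i < n_shadow; i++) {
        bool matching = false;

        for (j = 0; j < n_current; j++) {
            if (reg_equal(&shadow[i], &current[j])) {
                matching = true;
                found[j] = true;
                break;
            }
        }
        if (!matching) {
            printf("would send VHOST_USER_REM_MEM_REG for GPA 0x%" PRIx64 "\n",
                   shadow[i].guest_phys_addr);
        }
    }

    /* Anything in the new state that was not matched gets added. */
    for (j = 0; j < n_current; j++) {
        if (!found[j]) {
            printf("would send VHOST_USER_ADD_MEM_REG for GPA 0x%" PRIx64 "\n",
                   current[j].guest_phys_addr);
        }
    }
}

int main(void)
{
    const struct mem_region shadow[] = {
        { 0x000000000, 0x7f0000000000, 0x40000000 },
        { 0x100000000, 0x7f4000000000, 0x40000000 },
    };
    const struct mem_region current[] = {
        { 0x000000000, 0x7f0000000000, 0x40000000 },
        { 0x180000000, 0x7f8000000000, 0x40000000 },  /* hot-added region */
    };

    reconcile(shadow, 2, current, 2);
    return 0;
}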
X-Patchwork-Submitter: Raphael Norwitz
X-Patchwork-Id: 11562265
From: Raphael Norwitz
Subject: [PATCH v4 05/10] Lift max memory slots limit imposed by vhost-user
Date: Thu, 21 May 2020 05:00:40 +0000 (UTC)
Message-Id: <1588533678-23450-6-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
X-Spam_bar: - X-Spam_report: (-1.8 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_MED=0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_EF=-0.1, HEADER_FROM_DIFFERENT_DOMAINS=0.249, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=-0.001, SPF_PASS=-0.001, UNPARSEABLE_RELAY=0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Turschmid , raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" Historically, sending all memory regions to vhost-user backends in a single message imposed a limitation on the number of times memory could be hot-added to a VM with a vhost-user device. Now that backends which support the VHOST_USER_PROTOCOL_F_CONFIGURE_SLOTS send memory regions individually, we no longer need to impose this limitation on devices which support this feature. With this change, VMs with a vhost-user device which supports the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS can support a configurable number of memory slots, up to the maximum allowed by the target platform. Existing backends which do not support VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS are unaffected. Signed-off-by: Raphael Norwitz Signed-off-by: Peter Turschmid Suggested-by: Mike Cui Reviewed-by: Marc-André Lureau --- docs/interop/vhost-user.rst | 7 +++--- hw/virtio/vhost-user.c | 56 ++++++++++++++++++++++++++++++--------------- 2 files changed, 40 insertions(+), 23 deletions(-) diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst index 037eefa..688b7c6 100644 --- a/docs/interop/vhost-user.rst +++ b/docs/interop/vhost-user.rst @@ -1273,10 +1273,9 @@ Master message types feature has been successfully negotiated, this message is submitted by master to the slave. The slave should return the message with a u64 payload containing the maximum number of memory slots for - QEMU to expose to the guest. At this point, the value returned - by the backend will be capped at the maximum number of ram slots - which can be supported by vhost-user. Currently that limit is set - at VHOST_USER_MAX_RAM_SLOTS = 8. + QEMU to expose to the guest. The value returned by the backend + will be capped at the maximum number of ram slots which can be + supported by the target platform. ``VHOST_USER_ADD_MEM_REG`` :id: 37 diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index 9358406..48b8081 100644 --- a/hw/virtio/vhost-user.c +++ b/hw/virtio/vhost-user.c @@ -35,11 +35,29 @@ #include #endif -#define VHOST_MEMORY_MAX_NREGIONS 8 +#define VHOST_MEMORY_BASELINE_NREGIONS 8 #define VHOST_USER_F_PROTOCOL_FEATURES 30 #define VHOST_USER_SLAVE_MAX_FDS 8 /* + * Set maximum number of RAM slots supported to + * the maximum number supported by the target + * hardware plaform. 
+ */ +#if defined(TARGET_X86) || defined(TARGET_X86_64) || \ + defined(TARGET_ARM) || defined(TARGET_ARM_64) +#include "hw/acpi/acpi.h" +#define VHOST_USER_MAX_RAM_SLOTS ACPI_MAX_RAM_SLOTS + +#elif defined(TARGET_PPC) || defined(TARGET_PPC_64) +#include "hw/ppc/spapr.h" +#define VHOST_USER_MAX_RAM_SLOTS SPAPR_MAX_RAM_SLOTS + +#else +#define VHOST_USER_MAX_RAM_SLOTS 512 +#endif + +/* * Maximum size of virtio device config space */ #define VHOST_USER_MAX_CONFIG_SIZE 256 @@ -127,7 +145,7 @@ typedef struct VhostUserMemoryRegion { typedef struct VhostUserMemory { uint32_t nregions; uint32_t padding; - VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS]; + VhostUserMemoryRegion regions[VHOST_MEMORY_BASELINE_NREGIONS]; } VhostUserMemory; typedef struct VhostUserMemRegMsg { @@ -222,7 +240,7 @@ struct vhost_user { int slave_fd; NotifierWithReturn postcopy_notifier; struct PostCopyFD postcopy_fd; - uint64_t postcopy_client_bases[VHOST_MEMORY_MAX_NREGIONS]; + uint64_t postcopy_client_bases[VHOST_USER_MAX_RAM_SLOTS]; /* Length of the region_rb and region_rb_offset arrays */ size_t region_rb_len; /* RAMBlock associated with a given region */ @@ -237,7 +255,7 @@ struct vhost_user { /* Our current regions */ int num_shadow_regions; - struct vhost_memory_region shadow_regions[VHOST_MEMORY_MAX_NREGIONS]; + struct vhost_memory_region shadow_regions[VHOST_USER_MAX_RAM_SLOTS]; }; struct scrub_regions { @@ -392,7 +410,7 @@ int vhost_user_gpu_set_socket(struct vhost_dev *dev, int fd) static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base, struct vhost_log *log) { - int fds[VHOST_MEMORY_MAX_NREGIONS]; + int fds[VHOST_USER_MAX_RAM_SLOTS]; size_t fd_num = 0; bool shmfd = virtio_has_feature(dev->protocol_features, VHOST_USER_PROTOCOL_F_LOG_SHMFD); @@ -470,7 +488,7 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u, mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd); if (fd > 0) { if (track_ramblocks) { - assert(*fd_num < VHOST_MEMORY_MAX_NREGIONS); + assert(*fd_num < VHOST_MEMORY_BASELINE_NREGIONS); trace_vhost_user_set_mem_table_withfd(*fd_num, mr->name, reg->memory_size, reg->guest_phys_addr, @@ -478,7 +496,7 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u, offset); u->region_rb_offset[i] = offset; u->region_rb[i] = mr->ram_block; - } else if (*fd_num == VHOST_MEMORY_MAX_NREGIONS) { + } else if (*fd_num == VHOST_MEMORY_BASELINE_NREGIONS) { error_report("Failed preparing vhost-user memory table msg"); return -1; } @@ -523,7 +541,7 @@ static void scrub_shadow_regions(struct vhost_dev *dev, bool track_ramblocks) { struct vhost_user *u = dev->opaque; - bool found[VHOST_MEMORY_MAX_NREGIONS] = {}; + bool found[VHOST_USER_MAX_RAM_SLOTS] = {}; struct vhost_memory_region *reg, *shadow_reg; int i, j, fd, add_idx = 0, rm_idx = 0, fd_num = 0; ram_addr_t offset; @@ -777,9 +795,9 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev, bool track_ramblocks) { struct vhost_user *u = dev->opaque; - struct scrub_regions add_reg[VHOST_MEMORY_MAX_NREGIONS]; - struct scrub_regions rem_reg[VHOST_MEMORY_MAX_NREGIONS]; - uint64_t shadow_pcb[VHOST_MEMORY_MAX_NREGIONS] = {}; + struct scrub_regions add_reg[VHOST_USER_MAX_RAM_SLOTS]; + struct scrub_regions rem_reg[VHOST_USER_MAX_RAM_SLOTS]; + uint64_t shadow_pcb[VHOST_USER_MAX_RAM_SLOTS] = {}; int nr_add_reg, nr_rem_reg; msg->hdr.size = sizeof(msg->payload.mem_reg.padding) + @@ -803,7 +821,7 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev, if (track_ramblocks) { 
memcpy(u->postcopy_client_bases, shadow_pcb, - sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS); + sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS); /* * Now we've registered this with the postcopy code, we ack to the * client, because now we're in the position to be able to deal with @@ -823,7 +841,7 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev, err: if (track_ramblocks) { memcpy(u->postcopy_client_bases, shadow_pcb, - sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS); + sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS); } return -1; @@ -835,7 +853,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev, bool config_mem_slots) { struct vhost_user *u = dev->opaque; - int fds[VHOST_MEMORY_MAX_NREGIONS]; + int fds[VHOST_MEMORY_BASELINE_NREGIONS]; size_t fd_num = 0; VhostUserMsg msg_reply; int region_i, msg_i; @@ -893,7 +911,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev, } memset(u->postcopy_client_bases, 0, - sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS); + sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS); /* * They're in the same order as the regions that were sent @@ -942,7 +960,7 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev, struct vhost_memory *mem) { struct vhost_user *u = dev->opaque; - int fds[VHOST_MEMORY_MAX_NREGIONS]; + int fds[VHOST_MEMORY_BASELINE_NREGIONS]; size_t fd_num = 0; bool do_postcopy = u->postcopy_listen && u->postcopy_fd.handler; bool reply_supported = virtio_has_feature(dev->protocol_features, @@ -1149,7 +1167,7 @@ static int vhost_set_vring_file(struct vhost_dev *dev, VhostUserRequest request, struct vhost_vring_file *file) { - int fds[VHOST_MEMORY_MAX_NREGIONS]; + int fds[VHOST_USER_MAX_RAM_SLOTS]; size_t fd_num = 0; VhostUserMsg msg = { .hdr.request = request, @@ -1845,7 +1863,7 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque) /* get max memory regions if backend supports configurable RAM slots */ if (!virtio_has_feature(dev->protocol_features, VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS)) { - u->user->memory_slots = VHOST_MEMORY_MAX_NREGIONS; + u->user->memory_slots = VHOST_MEMORY_BASELINE_NREGIONS; } else { err = vhost_user_get_max_memslots(dev, &ram_slots); if (err < 0) { @@ -1860,7 +1878,7 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque) return -1; } - u->user->memory_slots = MIN(ram_slots, VHOST_MEMORY_MAX_NREGIONS); + u->user->memory_slots = MIN(ram_slots, VHOST_USER_MAX_RAM_SLOTS); } } From patchwork Thu May 21 05:00:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Raphael Norwitz X-Patchwork-Id: 11562263 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5324E138A for ; Thu, 21 May 2020 05:04:00 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 2A42E2070A for ; Thu, 21 May 2020 05:04:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=sendgrid.net header.i=@sendgrid.net header.b="fM/pXi3k" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2A42E2070A Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=nutanix.com Authentication-Results: mail.kernel.org; spf=pass 
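Patch 05/10 above matters because, with regions transmitted one at a time, a
backend no longer has to size its bookkeeping around a fixed table of 8 entries.
The sketch below is a hypothetical illustration of that point: the backend
advertises a slot limit and grows its region table on demand as regions are
added. The struct and function names (backend_mem_table,
advertise_max_mem_slots(), table_add_region()) and the limit of 509 are
assumptions for the example, not libvhost-user APIs.

/*
 * Hypothetical backend-side bookkeeping once memory regions arrive one at a
 * time: advertise a maximum slot count, then grow the region table on demand
 * instead of using a fixed 8-entry array.
 */
#include <stdint.h>
#include <stdlib.h>

#define BACKEND_MAX_MEM_SLOTS 509          /* assumption: backend's own limit */

struct backend_region {
    uint64_t guest_phys_addr;
    uint64_t memory_size;
    uint64_t mmap_offset;
    int fd;
};

struct backend_mem_table {
    struct backend_region *regions;
    size_t nregions;
    size_t capacity;
};

/* Value to return for VHOST_USER_GET_MAX_MEM_SLOTS. */
static uint64_t advertise_max_mem_slots(void)
{
    return BACKEND_MAX_MEM_SLOTS;
}

/* Called for each VHOST_USER_ADD_MEM_REG; returns 0 on success. */
static int table_add_region(struct backend_mem_table *t,
                            const struct backend_region *reg)
{
    if (t->nregions == advertise_max_mem_slots()) {
        return -1;                         /* more slots than we advertised */
    }
    if (t->nregions == t->capacity) {
        size_t new_cap = t->capacity ? t->capacity * 2 : 8;
        struct backend_region *tmp = realloc(t->regions,
                                             new_cap * sizeof(*tmp));
        if (!tmp) {
            return -1;
        }
        t->regions = tmp;
        t->capacity = new_cap;
    }
    t->regions[t->nregions++] = *reg;
    return 0;
}

int main(void)
{
    struct backend_mem_table table = { 0 };
    struct backend_region reg = { .guest_phys_addr = 0x100000000,
                                  .memory_size = 0x40000000, .fd = -1 };

    return table_add_region(&table, &reg) == 0 ? 0 : 1;
}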
From: Raphael Norwitz
Subject: [PATCH v4 06/10] Refactor out libvhost-user fault generation logic
Date: Thu, 21 May 2020 05:00:47 +0000 (UTC)
Message-Id: <1588533678-23450-7-git-send-email-raphael.norwitz@nutanix.com>
In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz

In libvhost-user, the incoming postcopy migration path for setting the
backend's memory tables has become convoluted. In particular, the logic which
starts generating faults once the final ACK has been received from qemu can be
moved into a separate function.
This simplifies the code substantially. This logic will also be needed by the postcopy path once the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS feature is supported. Signed-off-by: Raphael Norwitz Reviewed-by: Marc-André Lureau --- contrib/libvhost-user/libvhost-user.c | 147 ++++++++++++++++++---------------- 1 file changed, 79 insertions(+), 68 deletions(-) diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c index 3bca996..cccfa22 100644 --- a/contrib/libvhost-user/libvhost-user.c +++ b/contrib/libvhost-user/libvhost-user.c @@ -584,6 +584,84 @@ map_ring(VuDev *dev, VuVirtq *vq) } static bool +generate_faults(VuDev *dev) { + int i; + for (i = 0; i < dev->nregions; i++) { + VuDevRegion *dev_region = &dev->regions[i]; + int ret; +#ifdef UFFDIO_REGISTER + /* + * We should already have an open ufd. Mark each memory + * range as ufd. + * Discard any mapping we have here; note I can't use MADV_REMOVE + * or fallocate to make the hole since I don't want to lose + * data that's already arrived in the shared process. + * TODO: How to do hugepage + */ + ret = madvise((void *)(uintptr_t)dev_region->mmap_addr, + dev_region->size + dev_region->mmap_offset, + MADV_DONTNEED); + if (ret) { + fprintf(stderr, + "%s: Failed to madvise(DONTNEED) region %d: %s\n", + __func__, i, strerror(errno)); + } + /* + * Turn off transparent hugepages so we dont get lose wakeups + * in neighbouring pages. + * TODO: Turn this backon later. + */ + ret = madvise((void *)(uintptr_t)dev_region->mmap_addr, + dev_region->size + dev_region->mmap_offset, + MADV_NOHUGEPAGE); + if (ret) { + /* + * Note: This can happen legally on kernels that are configured + * without madvise'able hugepages + */ + fprintf(stderr, + "%s: Failed to madvise(NOHUGEPAGE) region %d: %s\n", + __func__, i, strerror(errno)); + } + struct uffdio_register reg_struct; + reg_struct.range.start = (uintptr_t)dev_region->mmap_addr; + reg_struct.range.len = dev_region->size + dev_region->mmap_offset; + reg_struct.mode = UFFDIO_REGISTER_MODE_MISSING; + + if (ioctl(dev->postcopy_ufd, UFFDIO_REGISTER, ®_struct)) { + vu_panic(dev, "%s: Failed to userfault region %d " + "@%p + size:%zx offset: %zx: (ufd=%d)%s\n", + __func__, i, + dev_region->mmap_addr, + dev_region->size, dev_region->mmap_offset, + dev->postcopy_ufd, strerror(errno)); + return false; + } + if (!(reg_struct.ioctls & ((__u64)1 << _UFFDIO_COPY))) { + vu_panic(dev, "%s Region (%d) doesn't support COPY", + __func__, i); + return false; + } + DPRINT("%s: region %d: Registered userfault for %" + PRIx64 " + %" PRIx64 "\n", __func__, i, + (uint64_t)reg_struct.range.start, + (uint64_t)reg_struct.range.len); + /* Now it's registered we can let the client at it */ + if (mprotect((void *)(uintptr_t)dev_region->mmap_addr, + dev_region->size + dev_region->mmap_offset, + PROT_READ | PROT_WRITE)) { + vu_panic(dev, "failed to mprotect region %d for postcopy (%s)", + i, strerror(errno)); + return false; + } + /* TODO: Stash 'zero' support flags somewhere */ +#endif + } + + return true; +} + +static bool vu_set_mem_table_exec_postcopy(VuDev *dev, VhostUserMsg *vmsg) { int i; @@ -655,74 +733,7 @@ vu_set_mem_table_exec_postcopy(VuDev *dev, VhostUserMsg *vmsg) } /* OK, now we can go and register the memory and generate faults */ - for (i = 0; i < dev->nregions; i++) { - VuDevRegion *dev_region = &dev->regions[i]; - int ret; -#ifdef UFFDIO_REGISTER - /* We should already have an open ufd. Mark each memory - * range as ufd. 
- * Discard any mapping we have here; note I can't use MADV_REMOVE - * or fallocate to make the hole since I don't want to lose - * data that's already arrived in the shared process. - * TODO: How to do hugepage - */ - ret = madvise((void *)(uintptr_t)dev_region->mmap_addr, - dev_region->size + dev_region->mmap_offset, - MADV_DONTNEED); - if (ret) { - fprintf(stderr, - "%s: Failed to madvise(DONTNEED) region %d: %s\n", - __func__, i, strerror(errno)); - } - /* Turn off transparent hugepages so we dont get lose wakeups - * in neighbouring pages. - * TODO: Turn this backon later. - */ - ret = madvise((void *)(uintptr_t)dev_region->mmap_addr, - dev_region->size + dev_region->mmap_offset, - MADV_NOHUGEPAGE); - if (ret) { - /* Note: This can happen legally on kernels that are configured - * without madvise'able hugepages - */ - fprintf(stderr, - "%s: Failed to madvise(NOHUGEPAGE) region %d: %s\n", - __func__, i, strerror(errno)); - } - struct uffdio_register reg_struct; - reg_struct.range.start = (uintptr_t)dev_region->mmap_addr; - reg_struct.range.len = dev_region->size + dev_region->mmap_offset; - reg_struct.mode = UFFDIO_REGISTER_MODE_MISSING; - - if (ioctl(dev->postcopy_ufd, UFFDIO_REGISTER, ®_struct)) { - vu_panic(dev, "%s: Failed to userfault region %d " - "@%p + size:%zx offset: %zx: (ufd=%d)%s\n", - __func__, i, - dev_region->mmap_addr, - dev_region->size, dev_region->mmap_offset, - dev->postcopy_ufd, strerror(errno)); - return false; - } - if (!(reg_struct.ioctls & ((__u64)1 << _UFFDIO_COPY))) { - vu_panic(dev, "%s Region (%d) doesn't support COPY", - __func__, i); - return false; - } - DPRINT("%s: region %d: Registered userfault for %" - PRIx64 " + %" PRIx64 "\n", __func__, i, - (uint64_t)reg_struct.range.start, - (uint64_t)reg_struct.range.len); - /* Now it's registered we can let the client at it */ - if (mprotect((void *)(uintptr_t)dev_region->mmap_addr, - dev_region->size + dev_region->mmap_offset, - PROT_READ | PROT_WRITE)) { - vu_panic(dev, "failed to mprotect region %d for postcopy (%s)", - i, strerror(errno)); - return false; - } - /* TODO: Stash 'zero' support flags somewhere */ -#endif - } + (void)generate_faults(dev); return false; } From patchwork Thu May 21 05:00:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Raphael Norwitz X-Patchwork-Id: 11562269 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 214D5912 for ; Thu, 21 May 2020 05:06:47 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id CB32B20748 for ; Thu, 21 May 2020 05:06:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=sendgrid.net header.i=@sendgrid.net header.b="Q+6IlbCK" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CB32B20748 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=nutanix.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:52738 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jbdPp-0004dZ-VR for patchwork-qemu-devel@patchwork.kernel.org; Thu, 21 May 2020 01:06:45 -0400 
From: Raphael Norwitz Subject: [PATCH v4 07/10] Support ram slot configuration in libvhost-user Date: Thu, 21 May 2020 05:00:50 +0000 (UTC) Message-Id: <1588533678-23450-8-git-send-email-raphael.norwitz@nutanix.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com> References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com> To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz Sender: "Qemu-devel" The VHOST_USER_GET_MAX_MEM_SLOTS message allows a vhost-user backend to specify the maximum number of ram slots it is willing to support. This change adds support for libvhost-user to process this message. For now the backend will reply with 8 as the maximum number of regions supported. libvhost-user does not yet support the vhost-user protocol feature VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS, so qemu should never send the VHOST_USER_GET_MAX_MEM_SLOTS message. Therefore this new functionality is not currently used.
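To illustrate the shape of that reply: the backend reuses the request header, sets the version and reply flags, and carries the slot count in a single u64 payload. The following is a minimal, self-contained sketch of that idea; the struct and EXAMPLE_* constants are simplified stand-ins for illustration, not the actual libvhost-user or QEMU definitions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins; the real values live in the vhost-user headers. */
#define EXAMPLE_VHOST_USER_VERSION     0x1
#define EXAMPLE_VHOST_USER_REPLY_MASK  (0x1 << 2)
#define EXAMPLE_GET_MAX_MEM_SLOTS      36
#define EXAMPLE_MAX_RAM_SLOTS          8    /* value this patch reports */

struct example_msg {
    uint32_t request;
    uint32_t flags;
    uint32_t size;   /* number of payload bytes following the header */
    uint64_t u64;    /* payload: the advertised slot count */
};

/* Build the reply along the same lines as the new handler: version plus
 * reply flag, with the slot count as a single u64 payload. */
static void example_fill_max_slots_reply(struct example_msg *msg)
{
    msg->flags = EXAMPLE_VHOST_USER_VERSION | EXAMPLE_VHOST_USER_REPLY_MASK;
    msg->size = sizeof(msg->u64);
    msg->u64 = EXAMPLE_MAX_RAM_SLOTS;
}

int main(void)
{
    struct example_msg msg = { .request = EXAMPLE_GET_MAX_MEM_SLOTS };
    example_fill_max_slots_reply(&msg);
    printf("u64: 0x%016" PRIx64 "\n", msg.u64);
    return 0;
}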
Signed-off-by: Raphael Norwitz Reviewed-by: Marc-André Lureau --- contrib/libvhost-user/libvhost-user.c | 19 +++++++++++++++++++ contrib/libvhost-user/libvhost-user.h | 1 + 2 files changed, 20 insertions(+) diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c index cccfa22..9f039b7 100644 --- a/contrib/libvhost-user/libvhost-user.c +++ b/contrib/libvhost-user/libvhost-user.c @@ -137,6 +137,7 @@ vu_request_to_string(unsigned int req) REQ(VHOST_USER_SET_INFLIGHT_FD), REQ(VHOST_USER_GPU_SET_SOCKET), REQ(VHOST_USER_VRING_KICK), + REQ(VHOST_USER_GET_MAX_MEM_SLOTS), REQ(VHOST_USER_MAX), }; #undef REQ @@ -1565,6 +1566,22 @@ vu_handle_vring_kick(VuDev *dev, VhostUserMsg *vmsg) return false; } +static bool vu_handle_get_max_memslots(VuDev *dev, VhostUserMsg *vmsg) +{ + vmsg->flags = VHOST_USER_REPLY_MASK | VHOST_USER_VERSION; + vmsg->size = sizeof(vmsg->payload.u64); + vmsg->payload.u64 = VHOST_MEMORY_MAX_NREGIONS; + vmsg->fd_num = 0; + + if (!vu_message_write(dev, dev->sock, vmsg)) { + vu_panic(dev, "Failed to send max ram slots: %s\n", strerror(errno)); + } + + DPRINT("u64: 0x%016"PRIx64"\n", (uint64_t) VHOST_MEMORY_MAX_NREGIONS); + + return false; +} + static bool vu_process_message(VuDev *dev, VhostUserMsg *vmsg) { @@ -1649,6 +1666,8 @@ vu_process_message(VuDev *dev, VhostUserMsg *vmsg) return vu_set_inflight_fd(dev, vmsg); case VHOST_USER_VRING_KICK: return vu_handle_vring_kick(dev, vmsg); + case VHOST_USER_GET_MAX_MEM_SLOTS: + return vu_handle_get_max_memslots(dev, vmsg); default: vmsg_close_fds(vmsg); vu_panic(dev, "Unhandled request: %d", vmsg->request); diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h index f30394f..88ef40d 100644 --- a/contrib/libvhost-user/libvhost-user.h +++ b/contrib/libvhost-user/libvhost-user.h @@ -97,6 +97,7 @@ typedef enum VhostUserRequest { VHOST_USER_SET_INFLIGHT_FD = 32, VHOST_USER_GPU_SET_SOCKET = 33, VHOST_USER_VRING_KICK = 35, + VHOST_USER_GET_MAX_MEM_SLOTS = 36, VHOST_USER_MAX } VhostUserRequest; From patchwork Thu May 21 05:00:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raphael Norwitz X-Patchwork-Id: 11562273 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 49DFD1391 for ; Thu, 21 May 2020 05:07:54 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 20BE720748 for ; Thu, 21 May 2020 05:07:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=sendgrid.net header.i=@sendgrid.net header.b="VOKESkOo" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 20BE720748 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=nutanix.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:56276 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jbdQv-00066S-CU for patchwork-qemu-devel@patchwork.kernel.org; Thu, 21 May 2020 01:07:53 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:60046) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) 
(envelope-from ) id 1jbdKC-0003Uh-Un for qemu-devel@nongnu.org; Thu, 21 May 2020 01:00:56 -0400 Received: from o1.dev.nutanix.com ([198.21.4.205]:51874) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jbdKB-0001Hu-N3 for qemu-devel@nongnu.org; Thu, 21 May 2020 01:00:56 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sendgrid.net; h=from:subject:in-reply-to:references:to:cc:content-type: content-transfer-encoding; s=smtpapi; bh=2FTh0G5Bni5O1HipBUNVtDSutJB43/93GVvOtHvkaoY=; b=VOKESkOoCaVhiV8z8J0Lk7p0xHZGPBskbZpshq4/zosNmk0IdHScyH+uZ5Qi8HRTU6UE cZgpGO0KGA7NmTkZvS8wrJoDiS9MsubKHVOBQz0PkhwiA8oNzkzH0uQ+w72tmy1jHBgf1u buOXfJdS4b9xAua4EWC4m0zX8uvtHgBMU= Received: by filterdrecv-p3iad2-8ddf98858-xxtk7 with SMTP id filterdrecv-p3iad2-8ddf98858-xxtk7-19-5EC60B04-2 2020-05-21 05:00:52.08102741 +0000 UTC m=+4852401.953129604 Received: from localhost.localdomain.com (unknown) by ismtpd0026p1las1.sendgrid.net (SG) with ESMTP id MD98p3FRTe-PP1loND7zOg Thu, 21 May 2020 05:00:51.865 +0000 (UTC) From: Raphael Norwitz Subject: [PATCH v4 08/10] Support adding individual regions in libvhost-user Date: Thu, 21 May 2020 05:00:52 +0000 (UTC) Message-Id: <1588533678-23450-9-git-send-email-raphael.norwitz@nutanix.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com> References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com> X-SG-EID: YCLURHX+pjNDm1i7d69iKyMnQi/dvWah9veFa8nllaoUC0ScIWrCgiaWGu43VgxFdB4istXUBpN9H93OJgc8zf8uMeSsKbghQKEVM6R2KgxARKEbTESzuukyHUwbR4wkJ6FoW++TXFTeFe2sz+f4lUcx3eEzTaRr7Ppoo352Iag4d3fdRarURxyEiN6Vyz1bK/cOjunotCx0btOiTbBfLiRlLMPyBgIFVFVIhHXHTqfRTvc+tMXfuh8xOCqPbbQ5xMs89IlE091tJv6X9lNnqw== To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com Received-SPF: pass client-ip=198.21.4.205; envelope-from=bounces+16159052-3d09-qemu-devel=nongnu.org@sendgrid.net; helo=o1.dev.nutanix.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/05/21 01:00:23 X-ACL-Warn: Detected OS = Linux 3.11 and newer [fuzzy] X-Spam_score_int: -17 X-Spam_score: -1.8 X-Spam_bar: - X-Spam_report: (-1.8 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_MED=0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_EF=-0.1, HEADER_FROM_DIFFERENT_DOMAINS=0.249, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=-0.001, SPF_PASS=-0.001, UNPARSEABLE_RELAY=0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" When the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS is enabled, qemu will transmit memory regions to a backend individually using the new message VHOST_USER_ADD_MEM_REG. With this change vhost-user backends built with libvhost-user can now map in new memory regions when VHOST_USER_ADD_MEM_REG messages are received. Qemu only sends VHOST_USER_ADD_MEM_REG messages when the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS feature is negotiated, and since it is not yet supported in libvhost-user, this new functionality is not yet used. 
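A note on how such a region is consumed once mapped: because the file descriptor is mapped from offset 0, the region's data begins mmap_offset bytes into the mapping, so a guest physical address is translated by adding its offset within the region to mmap_addr + mmap_offset. The sketch below illustrates that arithmetic with a hypothetical example_region type; the field names mirror, but are not, the libvhost-user ones.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical region descriptor, for illustration only. */
struct example_region {
    uint64_t gpa;          /* guest physical address the region starts at */
    uint64_t size;         /* region size in bytes */
    uint64_t mmap_addr;    /* address returned by mmap() for the whole file */
    uint64_t mmap_offset;  /* where the region starts within that mapping */
};

/* Translate a guest physical address into a pointer in the backend's
 * address space; returns NULL if the address is not covered by the region. */
static void *example_gpa_to_va(const struct example_region *r, uint64_t gpa)
{
    if (gpa < r->gpa || gpa - r->gpa >= r->size) {
        return NULL;
    }
    return (void *)(uintptr_t)(r->mmap_addr + r->mmap_offset + (gpa - r->gpa));
}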
Signed-off-by: Raphael Norwitz --- contrib/libvhost-user/libvhost-user.c | 103 ++++++++++++++++++++++++++++++++++ contrib/libvhost-user/libvhost-user.h | 7 +++ 2 files changed, 110 insertions(+) diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c index 9f039b7..d8ee7a2 100644 --- a/contrib/libvhost-user/libvhost-user.c +++ b/contrib/libvhost-user/libvhost-user.c @@ -138,6 +138,7 @@ vu_request_to_string(unsigned int req) REQ(VHOST_USER_GPU_SET_SOCKET), REQ(VHOST_USER_VRING_KICK), REQ(VHOST_USER_GET_MAX_MEM_SLOTS), + REQ(VHOST_USER_ADD_MEM_REG), REQ(VHOST_USER_MAX), }; #undef REQ @@ -663,6 +664,106 @@ generate_faults(VuDev *dev) { } static bool +vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) { + int i; + bool track_ramblocks = dev->postcopy_listening; + VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m; + VuDevRegion *dev_region = &dev->regions[dev->nregions]; + void *mmap_addr; + + /* + * If we are in postcopy mode and we receive a u64 payload with a 0 value + * we know all the postcopy client bases have been recieved, and we + * should start generating faults. + */ + if (track_ramblocks && + vmsg->size == sizeof(vmsg->payload.u64) && + vmsg->payload.u64 == 0) { + (void)generate_faults(dev); + return false; + } + + DPRINT("Adding region: %d\n", dev->nregions); + DPRINT(" guest_phys_addr: 0x%016"PRIx64"\n", + msg_region->guest_phys_addr); + DPRINT(" memory_size: 0x%016"PRIx64"\n", + msg_region->memory_size); + DPRINT(" userspace_addr 0x%016"PRIx64"\n", + msg_region->userspace_addr); + DPRINT(" mmap_offset 0x%016"PRIx64"\n", + msg_region->mmap_offset); + + dev_region->gpa = msg_region->guest_phys_addr; + dev_region->size = msg_region->memory_size; + dev_region->qva = msg_region->userspace_addr; + dev_region->mmap_offset = msg_region->mmap_offset; + + /* + * We don't use offset argument of mmap() since the + * mapped address has to be page aligned, and we use huge + * pages. + */ + if (track_ramblocks) { + /* + * In postcopy we're using PROT_NONE here to catch anyone + * accessing it before we userfault. + */ + mmap_addr = mmap(0, dev_region->size + dev_region->mmap_offset, + PROT_NONE, MAP_SHARED, + vmsg->fds[0], 0); + } else { + mmap_addr = mmap(0, dev_region->size + dev_region->mmap_offset, + PROT_READ | PROT_WRITE, MAP_SHARED, vmsg->fds[0], + 0); + } + + if (mmap_addr == MAP_FAILED) { + vu_panic(dev, "region mmap error: %s", strerror(errno)); + } else { + dev_region->mmap_addr = (uint64_t)(uintptr_t)mmap_addr; + DPRINT(" mmap_addr: 0x%016"PRIx64"\n", + dev_region->mmap_addr); + } + + close(vmsg->fds[0]); + + if (track_ramblocks) { + /* + * Return the address to QEMU so that it can translate the ufd + * fault addresses back. + */ + msg_region->userspace_addr = (uintptr_t)(mmap_addr + + dev_region->mmap_offset); + + /* Send the message back to qemu with the addresses filled in. 
*/ + vmsg->fd_num = 0; + if (!vu_send_reply(dev, dev->sock, vmsg)) { + vu_panic(dev, "failed to respond to add-mem-region for postcopy"); + return false; + } + + DPRINT("Successfully added new region in postcopy\n"); + dev->nregions++; + return false; + + } else { + for (i = 0; i < dev->max_queues; i++) { + if (dev->vq[i].vring.desc) { + if (map_ring(dev, &dev->vq[i])) { + vu_panic(dev, "remapping queue %d for new memory region", + i); + } + } + } + + DPRINT("Successfully added new region\n"); + dev->nregions++; + vmsg_set_reply_u64(vmsg, 0); + return true; + } +} + +static bool vu_set_mem_table_exec_postcopy(VuDev *dev, VhostUserMsg *vmsg) { int i; @@ -1668,6 +1769,8 @@ vu_process_message(VuDev *dev, VhostUserMsg *vmsg) return vu_handle_vring_kick(dev, vmsg); case VHOST_USER_GET_MAX_MEM_SLOTS: return vu_handle_get_max_memslots(dev, vmsg); + case VHOST_USER_ADD_MEM_REG: + return vu_add_mem_reg(dev, vmsg); default: vmsg_close_fds(vmsg); vu_panic(dev, "Unhandled request: %d", vmsg->request); diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h index 88ef40d..60ef7fd 100644 --- a/contrib/libvhost-user/libvhost-user.h +++ b/contrib/libvhost-user/libvhost-user.h @@ -98,6 +98,7 @@ typedef enum VhostUserRequest { VHOST_USER_GPU_SET_SOCKET = 33, VHOST_USER_VRING_KICK = 35, VHOST_USER_GET_MAX_MEM_SLOTS = 36, + VHOST_USER_ADD_MEM_REG = 37, VHOST_USER_MAX } VhostUserRequest; @@ -124,6 +125,11 @@ typedef struct VhostUserMemory { VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS]; } VhostUserMemory; +typedef struct VhostUserMemRegMsg { + uint32_t padding; + VhostUserMemoryRegion region; +} VhostUserMemRegMsg; + typedef struct VhostUserLog { uint64_t mmap_size; uint64_t mmap_offset; @@ -176,6 +182,7 @@ typedef struct VhostUserMsg { struct vhost_vring_state state; struct vhost_vring_addr addr; VhostUserMemory memory; + VhostUserMemRegMsg memreg; VhostUserLog log; VhostUserConfig config; VhostUserVringArea area; From patchwork Thu May 21 05:00:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raphael Norwitz X-Patchwork-Id: 11562267 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EAAE214C0 for ; Thu, 21 May 2020 05:05:29 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id BFD0320721 for ; Thu, 21 May 2020 05:05:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=sendgrid.net header.i=@sendgrid.net header.b="AWFMhsOj" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BFD0320721 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=nutanix.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:49042 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jbdOa-00034Y-Tg for patchwork-qemu-devel@patchwork.kernel.org; Thu, 21 May 2020 01:05:28 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:60056) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jbdKG-0003ak-0V for qemu-devel@nongnu.org; Thu, 21 May 
2020 01:01:00 -0400 From: Raphael Norwitz Subject: [PATCH v4 09/10] Support individual region unmap in libvhost-user Date: Thu, 21 May 2020 05:00:56 +0000 (UTC) Message-Id: <1588533678-23450-10-git-send-email-raphael.norwitz@nutanix.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com> References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com> To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz Sender: "Qemu-devel" When the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS protocol feature is enabled, on memory hot-unplug qemu will transmit the memory regions to remove individually using the new VHOST_USER_REM_MEM_REG message. With this change, vhost-user backends built with libvhost-user can now unmap individual memory regions when receiving the VHOST_USER_REM_MEM_REG message. Qemu only sends VHOST_USER_REM_MEM_REG messages when the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS feature is negotiated, and since support for that feature has not yet been added in libvhost-user, this new functionality is not yet used.
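To make the matching rule concrete: a stored region corresponds to the remove request when its guest physical address, userspace address and size all agree. The sketch below shows that comparison plus a simple array compaction; it is an illustration with hypothetical types under those assumptions, not the code in this patch, which additionally munmaps the removed region.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical minimal region descriptor, for illustration only. */
struct example_region {
    uint64_t gpa;
    uint64_t qva;
    uint64_t size;
};

/* A region matches the remove request when guest physical address,
 * userspace address and size all agree. */
static bool example_region_matches(const struct example_region *a,
                                   const struct example_region *b)
{
    return a->gpa == b->gpa && a->qva == b->qva && a->size == b->size;
}

/* Drop the first matching region by shifting the array tail down one slot;
 * returns true if a region was removed. */
static bool example_remove_region(struct example_region *regs, size_t *nregions,
                                  const struct example_region *victim)
{
    for (size_t i = 0; i < *nregions; i++) {
        if (example_region_matches(&regs[i], victim)) {
            memmove(&regs[i], &regs[i + 1],
                    (*nregions - i - 1) * sizeof(regs[0]));
            (*nregions)--;
            return true;
        }
    }
    return false;
}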
Signed-off-by: Raphael Norwitz --- contrib/libvhost-user/libvhost-user.c | 63 +++++++++++++++++++++++++++++++++++ contrib/libvhost-user/libvhost-user.h | 1 + 2 files changed, 64 insertions(+) diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c index d8ee7a2..386449b 100644 --- a/contrib/libvhost-user/libvhost-user.c +++ b/contrib/libvhost-user/libvhost-user.c @@ -139,6 +139,7 @@ vu_request_to_string(unsigned int req) REQ(VHOST_USER_VRING_KICK), REQ(VHOST_USER_GET_MAX_MEM_SLOTS), REQ(VHOST_USER_ADD_MEM_REG), + REQ(VHOST_USER_REM_MEM_REG), REQ(VHOST_USER_MAX), }; #undef REQ @@ -763,6 +764,66 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) { } } +static inline bool reg_equal(VuDevRegion *vudev_reg, + VhostUserMemoryRegion *msg_reg) +{ + if (vudev_reg->gpa == msg_reg->guest_phys_addr && + vudev_reg->qva == msg_reg->userspace_addr && + vudev_reg->size == msg_reg->memory_size) { + return true; + } + + return false; +} + +static bool +vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) { + int i, j; + bool found = false; + VuDevRegion shadow_regions[VHOST_MEMORY_MAX_NREGIONS] = {}; + VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m; + + DPRINT("Removing region:\n"); + DPRINT(" guest_phys_addr: 0x%016"PRIx64"\n", + msg_region->guest_phys_addr); + DPRINT(" memory_size: 0x%016"PRIx64"\n", + msg_region->memory_size); + DPRINT(" userspace_addr 0x%016"PRIx64"\n", + msg_region->userspace_addr); + DPRINT(" mmap_offset 0x%016"PRIx64"\n", + msg_region->mmap_offset); + + for (i = 0, j = 0; i < dev->nregions; i++) { + if (!reg_equal(&dev->regions[i], msg_region)) { + shadow_regions[j].gpa = dev->regions[i].gpa; + shadow_regions[j].size = dev->regions[i].size; + shadow_regions[j].qva = dev->regions[i].qva; + shadow_regions[j].mmap_offset = dev->regions[i].mmap_offset; + j++; + } else { + found = true; + VuDevRegion *r = &dev->regions[i]; + void *m = (void *) (uintptr_t) r->mmap_addr; + + if (m) { + munmap(m, r->size + r->mmap_offset); + } + } + } + + if (found) { + memcpy(dev->regions, shadow_regions, + sizeof(VuDevRegion) * VHOST_MEMORY_MAX_NREGIONS); + DPRINT("Successfully removed a region\n"); + dev->nregions--; + vmsg_set_reply_u64(vmsg, 0); + } else { + vu_panic(dev, "Specified region not found\n"); + } + + return true; +} + static bool vu_set_mem_table_exec_postcopy(VuDev *dev, VhostUserMsg *vmsg) { @@ -1771,6 +1832,8 @@ vu_process_message(VuDev *dev, VhostUserMsg *vmsg) return vu_handle_get_max_memslots(dev, vmsg); case VHOST_USER_ADD_MEM_REG: return vu_add_mem_reg(dev, vmsg); + case VHOST_USER_REM_MEM_REG: + return vu_rem_mem_reg(dev, vmsg); default: vmsg_close_fds(vmsg); vu_panic(dev, "Unhandled request: %d", vmsg->request); diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h index 60ef7fd..f843971 100644 --- a/contrib/libvhost-user/libvhost-user.h +++ b/contrib/libvhost-user/libvhost-user.h @@ -99,6 +99,7 @@ typedef enum VhostUserRequest { VHOST_USER_VRING_KICK = 35, VHOST_USER_GET_MAX_MEM_SLOTS = 36, VHOST_USER_ADD_MEM_REG = 37, + VHOST_USER_REM_MEM_REG = 38, VHOST_USER_MAX } VhostUserRequest; From patchwork Thu May 21 05:00:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raphael Norwitz X-Patchwork-Id: 11562271 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 19D621391 for ; Thu, 21 May 2020 05:06:56 +0000 (UTC) 
Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E521520748 for ; Thu, 21 May 2020 05:06:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=sendgrid.net header.i=@sendgrid.net header.b="RGHZT8x8" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E521520748 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=nutanix.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:53304 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jbdPz-0004rz-3g for patchwork-qemu-devel@patchwork.kernel.org; Thu, 21 May 2020 01:06:55 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:60062) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jbdKJ-0003gR-4p for qemu-devel@nongnu.org; Thu, 21 May 2020 01:01:03 -0400 Received: from o1.dev.nutanix.com ([198.21.4.205]:58463) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jbdKH-0001Ia-R9 for qemu-devel@nongnu.org; Thu, 21 May 2020 01:01:02 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sendgrid.net; h=from:subject:in-reply-to:references:to:cc:content-type: content-transfer-encoding; s=smtpapi; bh=lgg31uEtMDopUvmpXDUsYGONCAL7eG0Q3eMW9ba+v58=; b=RGHZT8x8fmbzhKb6JYJ+LkmxwFxCSt1tPjuipoMzJvvVXTTEPnZQrnhSFYs6B09P+Je4 UubpuXDV9kcxHOlaB+YqBSe4692O4FNnOEJufHFnrsKNyU1TWbcF1jNFaqgDQardIHwnv1 /+usYlqElBpmjr7ZbG6zoStXPJtYCmszk= Received: by filterdrecv-p3iad2-8ddf98858-lwgxm with SMTP id filterdrecv-p3iad2-8ddf98858-lwgxm-19-5EC60B0B-E7 2020-05-21 05:00:59.818794673 +0000 UTC m=+4852409.056490205 Received: from localhost.localdomain.com (unknown) by ismtpd0026p1las1.sendgrid.net (SG) with ESMTP id VG9aOgbfRxipDFJ3nZk6Lg Thu, 21 May 2020 05:00:59.599 +0000 (UTC) From: Raphael Norwitz Subject: [PATCH v4 10/10] Lift max ram slots limit in libvhost-user Date: Thu, 21 May 2020 05:00:59 +0000 (UTC) Message-Id: <1588533678-23450-11-git-send-email-raphael.norwitz@nutanix.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com> References: <1588533678-23450-1-git-send-email-raphael.norwitz@nutanix.com> X-SG-EID: YCLURHX+pjNDm1i7d69iKyMnQi/dvWah9veFa8nllaoUC0ScIWrCgiaWGu43VgxFdB4istXUBpN9H93OJgc8zSp3+RPqU4g3+UQTYOB1nSwmrKHZM0l7k4GYm8xmWEN5tH/MfTcSIBiaSz680lFOusuXuTzpDAylu2jwXl8dIhJwhIlU3WbhH+4DEnC7f1AkUEp0Gh15xnGvaarfjE2FM7qJNtkO99npY21tUMUblQGGWnMG1A6MuBIrK5cScI/mZ9Uxidkgs6RS2wHLK7ccWg== To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com Received-SPF: pass client-ip=198.21.4.205; envelope-from=bounces+16159052-3d09-qemu-devel=nongnu.org@sendgrid.net; helo=o1.dev.nutanix.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/05/21 01:00:23 X-ACL-Warn: Detected OS = Linux 3.11 and newer [fuzzy] X-Spam_score_int: -17 X-Spam_score: -1.8 X-Spam_bar: - X-Spam_report: (-1.8 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_MED=0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_EF=-0.1, HEADER_FROM_DIFFERENT_DOMAINS=0.249, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=-0.001, SPF_PASS=-0.001, UNPARSEABLE_RELAY=0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: 
no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" Historically, VMs with vhost-user devices could hot-add memory a maximum of 8 times. Now that the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS protocol feature has been added, VMs with vhost-user backends which support this new feature can support a configurable number of ram slots up to the maximum supported by the target platform. This change adds VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support for backends built with libvhost-user, and increases the number of supported ram slots from 8 to 32. Memory hot-add, hot-remove and postcopy migration were tested with the vhost-user-bridge sample. Signed-off-by: Raphael Norwitz --- contrib/libvhost-user/libvhost-user.c | 17 +++++++++-------- contrib/libvhost-user/libvhost-user.h | 15 +++++++++++---- 2 files changed, 20 insertions(+), 12 deletions(-) diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c index 386449b..b1e6072 100644 --- a/contrib/libvhost-user/libvhost-user.c +++ b/contrib/libvhost-user/libvhost-user.c @@ -269,7 +269,7 @@ have_userfault(void) static bool vu_message_read(VuDev *dev, int conn_fd, VhostUserMsg *vmsg) { - char control[CMSG_SPACE(VHOST_MEMORY_MAX_NREGIONS * sizeof(int))] = { }; + char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = {}; struct iovec iov = { .iov_base = (char *)vmsg, .iov_len = VHOST_USER_HDR_SIZE, @@ -340,7 +340,7 @@ vu_message_write(VuDev *dev, int conn_fd, VhostUserMsg *vmsg) { int rc; uint8_t *p = (uint8_t *)vmsg; - char control[CMSG_SPACE(VHOST_MEMORY_MAX_NREGIONS * sizeof(int))] = { }; + char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = {}; struct iovec iov = { .iov_base = (char *)vmsg, .iov_len = VHOST_USER_HDR_SIZE, @@ -353,7 +353,7 @@ vu_message_write(VuDev *dev, int conn_fd, VhostUserMsg *vmsg) struct cmsghdr *cmsg; memset(control, 0, sizeof(control)); - assert(vmsg->fd_num <= VHOST_MEMORY_MAX_NREGIONS); + assert(vmsg->fd_num <= VHOST_MEMORY_BASELINE_NREGIONS); if (vmsg->fd_num > 0) { size_t fdsize = vmsg->fd_num * sizeof(int); msg.msg_controllen = CMSG_SPACE(fdsize); @@ -780,7 +780,7 @@ static bool vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) { int i, j; bool found = false; - VuDevRegion shadow_regions[VHOST_MEMORY_MAX_NREGIONS] = {}; + VuDevRegion shadow_regions[VHOST_USER_MAX_RAM_SLOTS] = {}; VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m; DPRINT("Removing region:\n"); @@ -813,7 +813,7 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) { if (found) { memcpy(dev->regions, shadow_regions, - sizeof(VuDevRegion) * VHOST_MEMORY_MAX_NREGIONS); + sizeof(VuDevRegion) * VHOST_USER_MAX_RAM_SLOTS); DPRINT("Successfully removed a region\n"); dev->nregions--; vmsg_set_reply_u64(vmsg, 0); @@ -1394,7 +1394,8 @@ vu_get_protocol_features_exec(VuDev *dev, VhostUserMsg *vmsg) 1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ | 1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER | 1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD | - 1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK; + 1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK | + 1ULL << VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS; if (have_userfault()) { features |= 1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT; @@ -1732,14 +1733,14 @@ static bool 
vu_handle_get_max_memslots(VuDev *dev, VhostUserMsg *vmsg) { vmsg->flags = VHOST_USER_REPLY_MASK | VHOST_USER_VERSION; vmsg->size = sizeof(vmsg->payload.u64); - vmsg->payload.u64 = VHOST_MEMORY_MAX_NREGIONS; + vmsg->payload.u64 = VHOST_USER_MAX_RAM_SLOTS; vmsg->fd_num = 0; if (!vu_message_write(dev, dev->sock, vmsg)) { vu_panic(dev, "Failed to send max ram slots: %s\n", strerror(errno)); } - DPRINT("u64: 0x%016"PRIx64"\n", (uint64_t) VHOST_MEMORY_MAX_NREGIONS); + DPRINT("u64: 0x%016"PRIx64"\n", (uint64_t) VHOST_USER_MAX_RAM_SLOTS); return false; } diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h index f843971..844c37c 100644 --- a/contrib/libvhost-user/libvhost-user.h +++ b/contrib/libvhost-user/libvhost-user.h @@ -28,7 +28,13 @@ #define VIRTQUEUE_MAX_SIZE 1024 -#define VHOST_MEMORY_MAX_NREGIONS 8 +#define VHOST_MEMORY_BASELINE_NREGIONS 8 + +/* + * Set a reasonable maximum number of ram slots, which will be supported by + * any architecture. + */ +#define VHOST_USER_MAX_RAM_SLOTS 32 typedef enum VhostSetConfigType { VHOST_SET_CONFIG_TYPE_MASTER = 0, @@ -55,6 +61,7 @@ enum VhostUserProtocolFeature { VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11, VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12, VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS = 14, + VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15, VHOST_USER_PROTOCOL_F_MAX }; @@ -123,7 +130,7 @@ typedef struct VhostUserMemoryRegion { typedef struct VhostUserMemory { uint32_t nregions; uint32_t padding; - VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS]; + VhostUserMemoryRegion regions[VHOST_MEMORY_BASELINE_NREGIONS]; } VhostUserMemory; typedef struct VhostUserMemRegMsg { @@ -190,7 +197,7 @@ typedef struct VhostUserMsg { VhostUserInflight inflight; } payload; - int fds[VHOST_MEMORY_MAX_NREGIONS]; + int fds[VHOST_MEMORY_BASELINE_NREGIONS]; int fd_num; uint8_t *data; } VU_PACKED VhostUserMsg; @@ -368,7 +375,7 @@ typedef struct VuDevInflightInfo { struct VuDev { int sock; uint32_t nregions; - VuDevRegion regions[VHOST_MEMORY_MAX_NREGIONS]; + VuDevRegion regions[VHOST_USER_MAX_RAM_SLOTS]; VuVirtq *vq; VuDevInflightInfo inflight_info; int log_call_fd;