From patchwork Thu Jan 5 20:17:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Maciej S. Szmigiero" X-Patchwork-Id: 13090434 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E3BD1C54EBC for ; Thu, 5 Jan 2023 20:19:35 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pDWh7-0006jl-IT; Thu, 05 Jan 2023 15:18:34 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pDWgN-0006aV-7E for qemu-devel@nongnu.org; Thu, 05 Jan 2023 15:17:58 -0500 Received: from vps-vb.mhejs.net ([37.28.154.113]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pDWgI-0002oE-S0 for qemu-devel@nongnu.org; Thu, 05 Jan 2023 15:17:45 -0500 Received: from MUA by vps-vb.mhejs.net with esmtps (TLS1.2) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1pDWfv-0001O9-AM; Thu, 05 Jan 2023 21:17:19 +0100 From: "Maciej S. Szmigiero" To: Paolo Bonzini , Richard Henderson , Eduardo Habkost Cc: "Michael S . Tsirkin" , Marcel Apfelbaum , =?utf-8?q?Alex_Benn=C3=A9e?= , Thomas Huth , =?utf-8?q?Marc-Andr=C3=A9_Lureau?= , =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= , =?utf-8?q?Phil?= =?utf-8?q?ippe_Mathieu-Daud=C3=A9?= , Eric Blake , Markus Armbruster , David Hildenbrand , qemu-devel@nongnu.org Subject: [PATCH v2 1/3] hapvdimm: add a virtual DIMM device for memory hot-add protocols Date: Thu, 5 Jan 2023 21:17:06 +0100 Message-Id: <7f01485e4e20e7d3d30b8b5c687cb087002133bb.1672878904.git.maciej.szmigiero@oracle.com> X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=37.28.154.113; envelope-from=mail@maciej.szmigiero.name; helo=vps-vb.mhejs.net X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org From: "Maciej S. Szmigiero" This device works like a virtual DIMM stick: it allows inserting extra RAM into the guest at run time and later removing it without having to duplicate all of the address space management logic of TYPE_MEMORY_DEVICE in each memory hot-add protocol driver. This device is not meant to be instantiated or removed by the QEMU user directly: rather, the protocol driver is supposed to add and remove it as required. In fact, its very existence is supposed to be an implementation detail, transparent to the QEMU user. To prevent the user from accidentally creating an instance of this device by hand, the protocol driver is supposed to place the qdev_device_add*() call (that it uses to add this device) between hapvdimm_allow_adding() and hapvdimm_disallow_adding() calls in order to temporarily authorize the operation.
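For illustration, a protocol driver could wrap that call roughly as in the sketch below (not part of this patch: it assumes the driver has already built a QDict describing the hapvdimm instance and uses qdev_device_add_from_qdict() as a stand-in for whichever qdev_device_add*() variant the driver actually calls; the usual "monitor/qdev.h", "hw/mem/hapvdimm.h", "qapi/error.h" and "qapi/qmp/qdict.h" headers are assumed):

static DeviceState *protocol_driver_add_hapvdimm(QDict *opts, Error **errp)
{
    DeviceState *dev;

    /* temporarily authorize hapvdimm creation for this one call */
    hapvdimm_allow_adding();
    dev = qdev_device_add_from_qdict(opts, false, errp);
    hapvdimm_disallow_adding();

    return dev;
}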
Signed-off-by: Maciej S. Szmigiero --- hw/i386/Kconfig | 2 + hw/i386/pc.c | 4 +- hw/mem/Kconfig | 4 + hw/mem/hapvdimm.c | 221 ++++++++++++++++++++++++++++++++++++++ hw/mem/meson.build | 1 + include/hw/mem/hapvdimm.h | 27 +++++ 6 files changed, 258 insertions(+), 1 deletion(-) create mode 100644 hw/mem/hapvdimm.c create mode 100644 include/hw/mem/hapvdimm.h diff --git a/hw/i386/Kconfig b/hw/i386/Kconfig index d22ac4a4b9..c248cd3734 100644 --- a/hw/i386/Kconfig +++ b/hw/i386/Kconfig @@ -69,6 +69,7 @@ config I440FX imply E1000_PCI imply VMPORT imply VMMOUSE + imply HAPVDIMM select PC_PCI select PC_ACPI select ACPI_SMBUS @@ -96,6 +97,7 @@ config Q35 imply E1000E_PCI_EXPRESS imply VMPORT imply VMMOUSE + imply HAPVDIMM select PC_PCI select PC_ACPI select PCI_EXPRESS_Q35 diff --git a/hw/i386/pc.c b/hw/i386/pc.c index d489ecc0d1..62b532f9e9 100644 --- a/hw/i386/pc.c +++ b/hw/i386/pc.c @@ -73,6 +73,7 @@ #include "hw/acpi/acpi.h" #include "hw/acpi/cpu_hotplug.h" #include "acpi-build.h" +#include "hw/mem/hapvdimm.h" #include "hw/mem/pc-dimm.h" #include "hw/mem/nvdimm.h" #include "hw/cxl/cxl.h" @@ -1610,7 +1611,8 @@ static HotplugHandler *pc_get_hotplug_handler(MachineState *machine, object_dynamic_cast(OBJECT(dev), TYPE_VIRTIO_PMEM_PCI) || object_dynamic_cast(OBJECT(dev), TYPE_VIRTIO_MEM_PCI) || object_dynamic_cast(OBJECT(dev), TYPE_VIRTIO_IOMMU_PCI) || - object_dynamic_cast(OBJECT(dev), TYPE_X86_IOMMU_DEVICE)) { + object_dynamic_cast(OBJECT(dev), TYPE_X86_IOMMU_DEVICE) || + object_dynamic_cast(OBJECT(dev), TYPE_HAPVDIMM)) { return HOTPLUG_HANDLER(machine); } diff --git a/hw/mem/Kconfig b/hw/mem/Kconfig index 73c5ae8ad9..d8c1feafed 100644 --- a/hw/mem/Kconfig +++ b/hw/mem/Kconfig @@ -16,3 +16,7 @@ config CXL_MEM_DEVICE bool default y if CXL select MEM_DEVICE + +config HAPVDIMM + bool + select MEM_DEVICE diff --git a/hw/mem/hapvdimm.c b/hw/mem/hapvdimm.c new file mode 100644 index 0000000000..9ae82edb2c --- /dev/null +++ b/hw/mem/hapvdimm.c @@ -0,0 +1,221 @@ +/* + * A memory hot-add protocol vDIMM device + * + * Copyright (C) 2020-2023 Oracle and/or its affiliates. + * + * Heavily based on pc-dimm.c: + * Copyright ProfitBricks GmbH 2012 + * Copyright (C) 2014 Red Hat Inc + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ */ + +#include "qemu/osdep.h" + +#include "exec/memory.h" +#include "hw/boards.h" +#include "hw/mem/hapvdimm.h" +#include "hw/mem/memory-device.h" +#include "hw/qdev-core.h" +#include "hw/qdev-properties.h" +#include "migration/vmstate.h" +#include "qapi/error.h" +#include "qapi/visitor.h" +#include "qemu/module.h" +#include "sysemu/hostmem.h" +#include "trace.h" + +typedef struct HAPVDIMMDevice { + /* private */ + DeviceState parent_obj; + + /* public */ + bool ever_realized; + uint64_t addr; + uint64_t align; + uint32_t node; + HostMemoryBackend *hostmem; +} HAPVDIMMDevice; + +typedef struct HAPVDIMMDeviceClass { + /* private */ + DeviceClass parent_class; +} HAPVDIMMDeviceClass; + +static bool hapvdimm_adding_allowed; +static Property hapvdimm_properties[] = { + DEFINE_PROP_UINT64(HAPVDIMM_ADDR_PROP, HAPVDIMMDevice, addr, 0), + DEFINE_PROP_UINT64(HAPVDIMM_ALIGN_PROP, HAPVDIMMDevice, align, 0), + DEFINE_PROP_LINK(HAPVDIMM_MEMDEV_PROP, HAPVDIMMDevice, hostmem, + TYPE_MEMORY_BACKEND, HostMemoryBackend *), + DEFINE_PROP_END_OF_LIST(), +}; + +void hapvdimm_allow_adding(void) +{ + hapvdimm_adding_allowed = true; +} + +void hapvdimm_disallow_adding(void) +{ + hapvdimm_adding_allowed = false; +} + +static void hapvdimm_get_size(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + ERRP_GUARD(); + uint64_t value; + + value = memory_device_get_region_size(MEMORY_DEVICE(obj), errp); + if (*errp) { + return; + } + + visit_type_uint64(v, name, &value, errp); +} + +static void hapvdimm_init(Object *obj) +{ + object_property_add(obj, HAPVDIMM_SIZE_PROP, "uint64", hapvdimm_get_size, + NULL, NULL, NULL); +} + +static void hapvdimm_realize(DeviceState *dev, Error **errp) +{ + ERRP_GUARD(); + HAPVDIMMDevice *hapvdimm = HAPVDIMM(dev); + MachineState *ms = MACHINE(qdev_get_machine()); + + if (!hapvdimm->ever_realized) { + if (!hapvdimm_adding_allowed) { + error_setg(errp, "direct adding not allowed"); + return; + } + + hapvdimm->ever_realized = true; + } + + memory_device_pre_plug(MEMORY_DEVICE(hapvdimm), ms, + hapvdimm->align ? 
&hapvdimm->align : NULL, + errp); + if (*errp) { + return; + } + + if (!hapvdimm->hostmem) { + error_setg(errp, "'" HAPVDIMM_MEMDEV_PROP "' property is not set"); + return; + } else if (host_memory_backend_is_mapped(hapvdimm->hostmem)) { + const char *path; + + path = object_get_canonical_path_component(OBJECT(hapvdimm->hostmem)); + error_setg(errp, "can't use already busy memdev: %s", path); + return; + } + + host_memory_backend_set_mapped(hapvdimm->hostmem, true); + + memory_device_plug(MEMORY_DEVICE(hapvdimm), ms); + vmstate_register_ram(host_memory_backend_get_memory(hapvdimm->hostmem), + dev); +} + +static void hapvdimm_unrealize(DeviceState *dev) +{ + HAPVDIMMDevice *hapvdimm = HAPVDIMM(dev); + MachineState *ms = MACHINE(qdev_get_machine()); + + memory_device_unplug(MEMORY_DEVICE(hapvdimm), ms); + vmstate_unregister_ram(host_memory_backend_get_memory(hapvdimm->hostmem), + dev); + + host_memory_backend_set_mapped(hapvdimm->hostmem, false); +} + +static uint64_t hapvdimm_md_get_addr(const MemoryDeviceState *md) +{ + return object_property_get_uint(OBJECT(md), HAPVDIMM_ADDR_PROP, + &error_abort); +} + +static void hapvdimm_md_set_addr(MemoryDeviceState *md, uint64_t addr, + Error **errp) +{ + object_property_set_uint(OBJECT(md), HAPVDIMM_ADDR_PROP, addr, errp); +} + +static MemoryRegion *hapvdimm_md_get_memory_region(MemoryDeviceState *md, + Error **errp) +{ + HAPVDIMMDevice *hapvdimm = HAPVDIMM(md); + + if (!hapvdimm->hostmem) { + error_setg(errp, "'" HAPVDIMM_MEMDEV_PROP "' property must be set"); + return NULL; + } + + return host_memory_backend_get_memory(hapvdimm->hostmem); +} + +static void hapvdimm_md_fill_device_info(const MemoryDeviceState *md, + MemoryDeviceInfo *info) +{ + PCDIMMDeviceInfo *di = g_new0(PCDIMMDeviceInfo, 1); + const DeviceClass *dc = DEVICE_GET_CLASS(md); + const HAPVDIMMDevice *hapvdimm = HAPVDIMM(md); + const DeviceState *dev = DEVICE(md); + + if (dev->id) { + di->id = g_strdup(dev->id); + } + di->hotplugged = dev->hotplugged; + di->hotpluggable = dc->hotpluggable; + di->addr = hapvdimm->addr; + di->slot = -1; + di->node = 0; /* FIXME: report proper node */ + di->size = object_property_get_uint(OBJECT(hapvdimm), HAPVDIMM_SIZE_PROP, + NULL); + di->memdev = object_get_canonical_path(OBJECT(hapvdimm->hostmem)); + + info->u.dimm.data = di; + info->type = MEMORY_DEVICE_INFO_KIND_DIMM; +} + +static void hapvdimm_class_init(ObjectClass *oc, void *data) +{ + DeviceClass *dc = DEVICE_CLASS(oc); + MemoryDeviceClass *mdc = MEMORY_DEVICE_CLASS(oc); + + dc->realize = hapvdimm_realize; + dc->unrealize = hapvdimm_unrealize; + device_class_set_props(dc, hapvdimm_properties); + dc->desc = "vDIMM for a hot add protocol"; + + mdc->get_addr = hapvdimm_md_get_addr; + mdc->set_addr = hapvdimm_md_set_addr; + mdc->get_plugged_size = memory_device_get_region_size; + mdc->get_memory_region = hapvdimm_md_get_memory_region; + mdc->fill_device_info = hapvdimm_md_fill_device_info; +} + +static const TypeInfo hapvdimm_info = { + .name = TYPE_HAPVDIMM, + .parent = TYPE_DEVICE, + .instance_size = sizeof(HAPVDIMMDevice), + .instance_init = hapvdimm_init, + .class_init = hapvdimm_class_init, + .class_size = sizeof(HAPVDIMMDeviceClass), + .interfaces = (InterfaceInfo[]) { + { TYPE_MEMORY_DEVICE }, + { } + }, +}; + +static void hapvdimm_register_types(void) +{ + type_register_static(&hapvdimm_info); +} + +type_init(hapvdimm_register_types) diff --git a/hw/mem/meson.build b/hw/mem/meson.build index 609b2b36fc..5f7a0181d3 100644 --- a/hw/mem/meson.build +++ b/hw/mem/meson.build @@ -4,6 +4,7 @@ 
mem_ss.add(when: 'CONFIG_DIMM', if_true: files('pc-dimm.c')) mem_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_mc.c')) mem_ss.add(when: 'CONFIG_NVDIMM', if_true: files('nvdimm.c')) mem_ss.add(when: 'CONFIG_CXL_MEM_DEVICE', if_true: files('cxl_type3.c')) +mem_ss.add(when: 'CONFIG_HAPVDIMM', if_true: files('hapvdimm.c')) softmmu_ss.add_all(when: 'CONFIG_MEM_DEVICE', if_true: mem_ss) diff --git a/include/hw/mem/hapvdimm.h b/include/hw/mem/hapvdimm.h new file mode 100644 index 0000000000..bb9a135a52 --- /dev/null +++ b/include/hw/mem/hapvdimm.h @@ -0,0 +1,27 @@ +/* + * A memory hot-add protocol vDIMM device + * + * Copyright (C) 2020-2023 Oracle and/or its affiliates. + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + * + */ + +#ifndef QEMU_HAPVDIMM_H +#define QEMU_HAPVDIMM_H + +#include "qom/object.h" + +#define TYPE_HAPVDIMM "mem-hapvdimm" +OBJECT_DECLARE_SIMPLE_TYPE(HAPVDIMMDevice, HAPVDIMM) + +#define HAPVDIMM_ADDR_PROP "addr" +#define HAPVDIMM_ALIGN_PROP "align" +#define HAPVDIMM_SIZE_PROP "size" +#define HAPVDIMM_MEMDEV_PROP "memdev" + +void hapvdimm_allow_adding(void); +void hapvdimm_disallow_adding(void); + +#endif From patchwork Thu Jan 5 20:17:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Maciej S. Szmigiero" X-Patchwork-Id: 13090435 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 43695C5479D for ; Thu, 5 Jan 2023 20:19:36 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pDWgw-0006hg-MK; Thu, 05 Jan 2023 15:18:23 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pDWgN-0006aU-6Y for qemu-devel@nongnu.org; Thu, 05 Jan 2023 15:17:54 -0500 Received: from vps-vb.mhejs.net ([37.28.154.113]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pDWgH-0002rI-6L for qemu-devel@nongnu.org; Thu, 05 Jan 2023 15:17:45 -0500 Received: from MUA by vps-vb.mhejs.net with esmtps (TLS1.2) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1pDWg0-0001OM-O5; Thu, 05 Jan 2023 21:17:24 +0100 From: "Maciej S. Szmigiero" To: Paolo Bonzini , Richard Henderson , Eduardo Habkost Cc: "Michael S . 
Tsirkin" , Marcel Apfelbaum , =?utf-8?q?Alex_Benn=C3=A9e?= , Thomas Huth , =?utf-8?q?Marc-Andr=C3=A9_Lureau?= , =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= , =?utf-8?q?Phil?= =?utf-8?q?ippe_Mathieu-Daud=C3=A9?= , Eric Blake , Markus Armbruster , David Hildenbrand , qemu-devel@nongnu.org Subject: [PATCH v2 2/3] Add Hyper-V Dynamic Memory Protocol definitions Date: Thu, 5 Jan 2023 21:17:07 +0100 Message-Id: <2638a041ccafd9d99fcd6576c19aec12b02d1a58.1672878904.git.maciej.szmigiero@oracle.com> X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=37.28.154.113; envelope-from=mail@maciej.szmigiero.name; helo=vps-vb.mhejs.net X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org From: "Maciej S. Szmigiero" This commit adds Hyper-V Dynamic Memory Protocol definitions, taken from hv_balloon Linux kernel driver, adapted to the QEMU coding style and definitions. Signed-off-by: Maciej S. Szmigiero --- include/hw/hyperv/dynmem-proto.h | 423 +++++++++++++++++++++++++++++++ 1 file changed, 423 insertions(+) create mode 100644 include/hw/hyperv/dynmem-proto.h diff --git a/include/hw/hyperv/dynmem-proto.h b/include/hw/hyperv/dynmem-proto.h new file mode 100644 index 0000000000..d0f9090ac4 --- /dev/null +++ b/include/hw/hyperv/dynmem-proto.h @@ -0,0 +1,423 @@ +#ifndef HW_HYPERV_DYNMEM_PROTO_H +#define HW_HYPERV_DYNMEM_PROTO_H + +/* + * Hyper-V Dynamic Memory Protocol definitions + * + * Copyright (C) 2020-2023 Oracle and/or its affiliates. + * + * Based on drivers/hv/hv_balloon.c from Linux kernel: + * Copyright (c) 2012, Microsoft Corporation. + * + * Author: K. Y. Srinivasan + * + * This work is licensed under the terms of the GNU GPL, version 2. + * See the COPYING file in the top-level directory. + */ + +/* + * Protocol versions. The low word is the minor version, the high word the major + * version. 
+ * + * History: + * Initial version 1.0 + * Changed to 0.1 on 2009/03/25 + * Changes to 0.2 on 2009/05/14 + * Changes to 0.3 on 2009/12/03 + * Changed to 1.0 on 2011/04/05 + * Changed to 2.0 on 2019/12/10 + */ + +#define DYNMEM_MAKE_VERSION(Major, Minor) ((uint32_t)(((Major) << 16) | (Minor))) +#define DYNMEM_MAJOR_VERSION(Version) ((uint32_t)(Version) >> 16) +#define DYNMEM_MINOR_VERSION(Version) ((uint32_t)(Version) & 0xff) + +enum { + DYNMEM_PROTOCOL_VERSION_1 = DYNMEM_MAKE_VERSION(0, 3), + DYNMEM_PROTOCOL_VERSION_2 = DYNMEM_MAKE_VERSION(1, 0), + DYNMEM_PROTOCOL_VERSION_3 = DYNMEM_MAKE_VERSION(2, 0), + + DYNMEM_PROTOCOL_VERSION_WIN7 = DYNMEM_PROTOCOL_VERSION_1, + DYNMEM_PROTOCOL_VERSION_WIN8 = DYNMEM_PROTOCOL_VERSION_2, + DYNMEM_PROTOCOL_VERSION_WIN10 = DYNMEM_PROTOCOL_VERSION_3, + + DYNMEM_PROTOCOL_VERSION_CURRENT = DYNMEM_PROTOCOL_VERSION_WIN10 +}; + + + +/* + * Message Types + */ + +enum dm_message_type { + /* + * Version 0.3 + */ + DM_ERROR = 0, + DM_VERSION_REQUEST = 1, + DM_VERSION_RESPONSE = 2, + DM_CAPABILITIES_REPORT = 3, + DM_CAPABILITIES_RESPONSE = 4, + DM_STATUS_REPORT = 5, + DM_BALLOON_REQUEST = 6, + DM_BALLOON_RESPONSE = 7, + DM_UNBALLOON_REQUEST = 8, + DM_UNBALLOON_RESPONSE = 9, + DM_MEM_HOT_ADD_REQUEST = 10, + DM_MEM_HOT_ADD_RESPONSE = 11, + DM_VERSION_03_MAX = 11, + /* + * Version 1.0. + */ + DM_INFO_MESSAGE = 12, + DM_VERSION_1_MAX = 12, + + /* + * Version 2.0 + */ + DM_MEM_HOT_REMOVE_REQUEST = 13, + DM_MEM_HOT_REMOVE_RESPONSE = 14 +}; + + +/* + * Structures defining the dynamic memory management + * protocol. + */ + +union dm_version { + struct { + uint16_t minor_version; + uint16_t major_version; + }; + uint32_t version; +} QEMU_PACKED; + + +union dm_caps { + struct { + uint64_t balloon:1; + uint64_t hot_add:1; + /* + * To support guests that may have alignment + * limitations on hot-add, the guest can specify + * its alignment requirements; a value of n + * represents an alignment of 2^n in mega bytes. + */ + uint64_t hot_add_alignment:4; + uint64_t hot_remove:1; + uint64_t reservedz:57; + } cap_bits; + uint64_t caps; +} QEMU_PACKED; + +union dm_mem_page_range { + struct { + /* + * The PFN number of the first page in the range. + * 40 bits is the architectural limit of a PFN + * number for AMD64. + */ + uint64_t start_page:40; + /* + * The number of pages in the range. + */ + uint64_t page_cnt:24; + } finfo; + uint64_t page_range; +} QEMU_PACKED; + + + +/* + * The header for all dynamic memory messages: + * + * type: Type of the message. + * size: Size of the message in bytes; including the header. + * trans_id: The guest is responsible for manufacturing this ID. + */ + +struct dm_header { + uint16_t type; + uint16_t size; + uint32_t trans_id; +} QEMU_PACKED; + +/* + * A generic message format for dynamic memory. + * Specific message formats are defined later in the file. + */ + +struct dm_message { + struct dm_header hdr; + uint8_t data[]; /* enclosed message */ +} QEMU_PACKED; + + +/* + * Specific message types supporting the dynamic memory protocol. + */ + +/* + * Version negotiation message. Sent from the guest to the host. + * The guest is free to try different versions until the host + * accepts the version. + * + * dm_version: The protocol version requested. + * is_last_attempt: If TRUE, this is the last version guest will request. + * reservedz: Reserved field, set to zero. 
+ */ + +struct dm_version_request { + struct dm_header hdr; + union dm_version version; + uint32_t is_last_attempt:1; + uint32_t reservedz:31; +} QEMU_PACKED; + +/* + * Version response message; Host to Guest and indicates + * if the host has accepted the version sent by the guest. + * + * is_accepted: If TRUE, host has accepted the version and the guest + * should proceed to the next stage of the protocol. FALSE indicates that + * guest should re-try with a different version. + * + * reservedz: Reserved field, set to zero. + */ + +struct dm_version_response { + struct dm_header hdr; + uint64_t is_accepted:1; + uint64_t reservedz:63; +} QEMU_PACKED; + +/* + * Message reporting capabilities. This is sent from the guest to the + * host. + */ + +struct dm_capabilities { + struct dm_header hdr; + union dm_caps caps; + uint64_t min_page_cnt; + uint64_t max_page_number; +} QEMU_PACKED; + +/* + * Response to the capabilities message. This is sent from the host to the + * guest. This message notifies if the host has accepted the guest's + * capabilities. If the host has not accepted, the guest must shutdown + * the service. + * + * is_accepted: Indicates if the host has accepted guest's capabilities. + * reservedz: Must be 0. + */ + +struct dm_capabilities_resp_msg { + struct dm_header hdr; + uint64_t is_accepted:1; + uint64_t hot_remove:1; + uint64_t suppress_pressure_reports:1; + uint64_t reservedz:61; +} QEMU_PACKED; + +/* + * This message is used to report memory pressure from the guest. + * This message is not part of any transaction and there is no + * response to this message. + * + * num_avail: Available memory in pages. + * num_committed: Committed memory in pages. + * page_file_size: The accumulated size of all page files + * in the system in pages. + * zero_free: The nunber of zero and free pages. + * page_file_writes: The writes to the page file in pages. + * io_diff: An indicator of file cache efficiency or page file activity, + * calculated as File Cache Page Fault Count - Page Read Count. + * This value is in pages. + * + * Some of these metrics are Windows specific and fortunately + * the algorithm on the host side that computes the guest memory + * pressure only uses num_committed value. + */ + +struct dm_status { + struct dm_header hdr; + uint64_t num_avail; + uint64_t num_committed; + uint64_t page_file_size; + uint64_t zero_free; + uint32_t page_file_writes; + uint32_t io_diff; +} QEMU_PACKED; + + +/* + * Message to ask the guest to allocate memory - balloon up message. + * This message is sent from the host to the guest. The guest may not be + * able to allocate as much memory as requested. + * + * num_pages: number of pages to allocate. + */ + +struct dm_balloon { + struct dm_header hdr; + uint32_t num_pages; + uint32_t reservedz; +} QEMU_PACKED; + + +/* + * Balloon response message; this message is sent from the guest + * to the host in response to the balloon message. + * + * reservedz: Reserved; must be set to zero. + * more_pages: If FALSE, this is the last message of the transaction. + * if TRUE there will atleast one more message from the guest. + * + * range_count: The number of ranges in the range array. + * + * range_array: An array of page ranges returned to the host. 
+ * + */ + +struct dm_balloon_response { + struct dm_header hdr; + uint32_t reservedz; + uint32_t more_pages:1; + uint32_t range_count:31; + union dm_mem_page_range range_array[]; +} QEMU_PACKED; + +/* + * Un-balloon message; this message is sent from the host + * to the guest to give guest more memory. + * + * more_pages: If FALSE, this is the last message of the transaction. + * if TRUE there will atleast one more message from the guest. + * + * reservedz: Reserved; must be set to zero. + * + * range_count: The number of ranges in the range array. + * + * range_array: An array of page ranges returned to the host. + * + */ + +struct dm_unballoon_request { + struct dm_header hdr; + uint32_t more_pages:1; + uint32_t reservedz:31; + uint32_t range_count; + union dm_mem_page_range range_array[]; +} QEMU_PACKED; + +/* + * Un-balloon response message; this message is sent from the guest + * to the host in response to an unballoon request. + * + */ + +struct dm_unballoon_response { + struct dm_header hdr; +} QEMU_PACKED; + + +/* + * Hot add request message. Message sent from the host to the guest. + * + * mem_range: Memory range to hot add. + * + */ + +struct dm_hot_add { + struct dm_header hdr; + union dm_mem_page_range range; +} QEMU_PACKED; + +/* + * Hot add response message. + * This message is sent by the guest to report the status of a hot add request. + * If page_count is less than the requested page count, then the host should + * assume all further hot add requests will fail, since this indicates that + * the guest has hit an upper physical memory barrier. + * + * Hot adds may also fail due to low resources; in this case, the guest must + * not complete this message until the hot add can succeed, and the host must + * not send a new hot add request until the response is sent. + * If VSC fails to hot add memory DYNMEM_NUMBER_OF_UNSUCCESSFUL_HOTADD_ATTEMPTS + * times it fails the request. + * + * + * page_count: number of pages that were successfully hot added. + * + * result: result of the operation 1: success, 0: failure. + * + */ + +struct dm_hot_add_response { + struct dm_header hdr; + uint32_t page_count; + uint32_t result; +} QEMU_PACKED; + +struct dm_hot_remove { + struct dm_header hdr; + uint32_t virtual_node; + uint32_t page_count; + uint32_t qos_flags; + uint32_t reservedZ; +} QEMU_PACKED; + +struct dm_hot_remove_response { + struct dm_header hdr; + uint32_t result; + uint32_t range_count; + uint64_t more_pages:1; + uint64_t reservedz:63; + union dm_mem_page_range range_array[]; +} QEMU_PACKED; + +#define DM_REMOVE_QOS_LARGE (1 << 0) +#define DM_REMOVE_QOS_LOCAL (1 << 1) +#define DM_REMOVE_QOS_MASK (0x3) + +/* + * Types of information sent from host to the guest. + */ + +enum dm_info_type { + INFO_TYPE_MAX_PAGE_CNT = 0, + MAX_INFO_TYPE +}; + + +/* + * Header for the information message. + */ + +struct dm_info_header { + enum dm_info_type type; + uint32_t data_size; + uint8_t data[]; +} QEMU_PACKED; + +/* + * This message is sent from the host to the guest to pass + * some relevant information (win8 addition). + * + * reserved: no used. + * info_size: size of the information blob. + * info: information blob. + */ + +struct dm_info_msg { + struct dm_header hdr; + uint32_t reserved; + uint32_t info_size; + uint8_t info[]; +}; + +#endif From patchwork Thu Jan 5 20:17:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Maciej S. 
Szmigiero" X-Patchwork-Id: 13090440 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C96AAC4708E for ; Thu, 5 Jan 2023 20:21:22 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pDWhE-0006kr-Ls; Thu, 05 Jan 2023 15:18:40 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pDWgR-0006b2-IS for qemu-devel@nongnu.org; Thu, 05 Jan 2023 15:18:05 -0500 Received: from vps-vb.mhejs.net ([37.28.154.113]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pDWgL-0002uc-5I for qemu-devel@nongnu.org; Thu, 05 Jan 2023 15:17:50 -0500 Received: from MUA by vps-vb.mhejs.net with esmtps (TLS1.2) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1pDWg6-0001OX-50; Thu, 05 Jan 2023 21:17:30 +0100 From: "Maciej S. Szmigiero" To: Paolo Bonzini , Richard Henderson , Eduardo Habkost Cc: "Michael S . Tsirkin" , Marcel Apfelbaum , =?utf-8?q?Alex_Benn=C3=A9e?= , Thomas Huth , =?utf-8?q?Marc-Andr=C3=A9_Lureau?= , =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= , =?utf-8?q?Phil?= =?utf-8?q?ippe_Mathieu-Daud=C3=A9?= , Eric Blake , Markus Armbruster , David Hildenbrand , qemu-devel@nongnu.org Subject: [PATCH v2 3/3] Add a Hyper-V Dynamic Memory Protocol driver (hv-balloon) Date: Thu, 5 Jan 2023 21:17:08 +0100 Message-Id: X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=37.28.154.113; envelope-from=mail@maciej.szmigiero.name; helo=vps-vb.mhejs.net X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org From: "Maciej S. Szmigiero" This driver is like virtio-balloon on steroids: it allows both changing the guest memory allocation via ballooning and inserting extra RAM into it by adding required memory backends and providing them to the driver. One of advantages of these over ACPI-based PC DIMM hotplug is that such memory can be hotplugged in much smaller granularity because the ACPI DIMM slot limit does not apply. Hot-adding additional memory is done by creating a new memory backend (for example by executing HMP command "object_add memory-backend-ram,id=mem1,size=4G"), then executing a new "hv-balloon-add-memory" QMP command, providing the id of that memory backend as the "id" parameter. In contrast with ACPI DIMM hotplug where one can only request to unplug a whole DIMM stick this driver allows removing memory from guest in single page (4k) units via ballooning. After a VM reboot each previously hot-added memory backend gets released. 
A "HV_BALLOON_MEMORY_BACKEND_UNUSED" QMP event is emitted in this case so the software controlling QEMU knows that it either needs to delete that memory backend (if no longer needed) or re-insert it. In the future, the guest boot memory size might be changed on reboot instead, taking into account the effective size that VM had before that reboot (much like Hyper-V does). For performance reasons, the guest-released memory is tracked in a few range trees, as a series of (start, count) ranges. Each time a new page range is inserted into such tree its neighbors are checked as candidates for possible merging with it. Besides performance reasons, the Dynamic Memory protocol itself uses page ranges as the data structure in its messages, so relevant pages need to be merged into such ranges anyway. One has to be careful when tracking the guest-released pages, since the guest can maliciously report returning pages outside its current address space, which later clash with the address range of newly added memory. Similarly, the guest can report freeing the same page twice. The above design results in much better ballooning performance than when using virtio-balloon with the same guest: 230 GB / minute with this driver versus 70 GB / minute with virtio-balloon. During a ballooning operation most of time is spent waiting for the guest to come up with newly freed page ranges, processing the received ranges on the host side (in QEMU and KVM) is nearly instantaneous. The unballoon operation is also pretty much instantaneous: thanks to the merging of the ballooned out page ranges 200 GB of memory can be returned to the guest in about 1 second. With virtio-balloon this operation takes about 2.5 minutes. These tests were done against a Windows Server 2019 guest running on a Xeon E5-2699, after dirtying the whole memory inside guest before each balloon operation. Using a range tree instead of a bitmap to track the removed memory also means that the solution scales well with the guest size: even a 1 TB range takes just few bytes of memory. Since the required GTree operations aren't present in every Glib version a check for them was added to "configure" script, together with new "--enable-hv-balloon" and "--disable-hv-balloon" arguments. If these GTree operations are missing in the system's Glib version this driver will be skipped during QEMU build. An optional "status-report=on" device parameter requests memory status events from the guest (typically sent every second), which allow the host to learn both the guest memory available and the guest memory in use counts. They are emitted externally as "HV_BALLOON_STATUS_REPORT" QMP events. The driver is named hv-balloon since the Linux kernel client driver for the Dynamic Memory Protocol is named as such and to follow the naming pattern established by the virtio-balloon driver. The whole protocol runs over Hyper-V VMBus. The driver was tested against Windows Server 2012 R2, Windows Server 2016 and Windows Server 2016 guests and obeys the guest alignment requirements reported to the host via DM_CAPABILITIES_REPORT message. Signed-off-by: Maciej S. 
Szmigiero --- Kconfig.host | 3 + configure | 36 + hw/hyperv/Kconfig | 5 + hw/hyperv/hv-balloon.c | 2185 ++++++++++++++++++++++++++++++++++++++++ hw/hyperv/meson.build | 1 + hw/hyperv/trace-events | 16 + meson.build | 4 +- qapi/machine.json | 68 ++ 8 files changed, 2317 insertions(+), 1 deletion(-) create mode 100644 hw/hyperv/hv-balloon.c diff --git a/Kconfig.host b/Kconfig.host index d763d89269..2ee71578f3 100644 --- a/Kconfig.host +++ b/Kconfig.host @@ -46,3 +46,6 @@ config FUZZ config VFIO_USER_SERVER_ALLOWED bool imply VFIO_USER_SERVER + +config HV_BALLOON_POSSIBLE + bool diff --git a/configure b/configure index 789a4f6cc9..d2b4f6fde7 100755 --- a/configure +++ b/configure @@ -286,6 +286,7 @@ bsd_user="" pie="" coroutine="" plugins="$default_feature" +hv_balloon="$default_feature" meson="" ninja="" bindir="bin" @@ -897,6 +898,10 @@ for opt do ;; --enable-jemalloc) meson_option_parse --enable-malloc=jemalloc jemalloc ;; + --enable-hv-balloon) hv_balloon=yes + ;; + --disable-hv-balloon) hv_balloon=no + ;; # everything else has the same name in configure and meson --*) meson_option_parse "$opt" "$optarg" ;; @@ -1050,6 +1055,7 @@ cat << EOF debug-info debugging information safe-stack SafeStack Stack Smash Protection. Depends on clang/llvm >= 3.7 and requires coroutine backend ucontext. + hv-balloon hv-balloon driver where supported (requires Glib 2.68+ GTree API) NOTE: The object files are built at the place where configure is launched EOF @@ -1777,6 +1783,32 @@ EOF fi fi +########################################## +# check for hv-balloon + +if test "$hv_balloon" != "no"; then + cat > $TMPC << EOF +#include +#include +int main(void) { + GTree *tree; + + tree = g_tree_new((GCompareFunc)strcmp); + (void)g_tree_node_first(tree); + g_tree_destroy(tree); + return 0; +} +EOF + if compile_prog "$glib_cflags" "$glib_libs" ; then + hv_balloon=yes + else + if test "$hv_balloon" = "yes" ; then + feature_not_found "hv-balloon" "Update Glib" + fi + hv_balloon="no" + fi +fi + ########################################## # functions to probe cross compilers @@ -2389,6 +2421,10 @@ if test "$have_tsan" = "yes" && test "$have_tsan_iface_fiber" = "yes" ; then echo "CONFIG_TSAN=y" >> $config_host_mak fi +if test "$hv_balloon" = "yes" ; then + echo "CONFIG_HV_BALLOON_POSSIBLE=y" >> $config_host_mak +fi + if test "$plugins" = "yes" ; then echo "CONFIG_PLUGIN=y" >> $config_host_mak fi diff --git a/hw/hyperv/Kconfig b/hw/hyperv/Kconfig index fcf65903bd..8f8be1bcce 100644 --- a/hw/hyperv/Kconfig +++ b/hw/hyperv/Kconfig @@ -16,3 +16,8 @@ config SYNDBG bool default y depends on VMBUS + +config HV_BALLOON + bool + default y + depends on HV_BALLOON_POSSIBLE && VMBUS && HAPVDIMM diff --git a/hw/hyperv/hv-balloon.c b/hw/hyperv/hv-balloon.c new file mode 100644 index 0000000000..b11f005189 --- /dev/null +++ b/hw/hyperv/hv-balloon.c @@ -0,0 +1,2185 @@ +/* + * QEMU Hyper-V Dynamic Memory Protocol driver + * + * Copyright (C) 2020-2023 Oracle and/or its affiliates. + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ */ + +#include "qemu/osdep.h" + +#include "exec/address-spaces.h" +#include "exec/cpu-common.h" +#include "exec/memory.h" +#include "exec/ramblock.h" +#include "hw/boards.h" +#include "hw/hyperv/dynmem-proto.h" +#include "hw/hyperv/vmbus.h" +#include "hw/mem/hapvdimm.h" +#include "hw/mem/pc-dimm.h" +#include "hw/qdev-core.h" +#include "hw/qdev-properties.h" +#include "monitor/qdev.h" +#include "qapi/error.h" +#include "qapi/qapi-commands-machine.h" +#include "qapi/qapi-events-machine.h" +#include "qapi/qmp/qdict.h" +#include "qemu/error-report.h" +#include "qemu/module.h" +#include "qemu/units.h" +#include "qemu/timer.h" +#include "sysemu/balloon.h" +#include "sysemu/reset.h" +#include "trace.h" + +/* + * temporarily avoid warnings about enhanced GTree API usage requiring a + * too recent Glib version until GLIB_VERSION_MAX_ALLOWED finally reaches + * the Glib version with this API + */ +#pragma GCC diagnostic ignored "-Wdeprecated-declarations" + +#define TYPE_HV_BALLOON "hv-balloon" +#define HV_BALLOON_GUID "525074DC-8985-46e2-8057-A307DC18A502" +#define HV_BALLOON_PFN_SHIFT 12 +#define HV_BALLOON_PAGE_SIZE (1 << HV_BALLOON_PFN_SHIFT) + +/* + * Some Windows versions (at least Server 2019) will crash with various + * error codes when receiving DM protocol requests (at least + * DM_MEM_HOT_ADD_REQUEST) immediately after boot. + * + * It looks like Hyper-V from Server 2016 uses a 50-second after-boot + * delay, probably to workaround this issue, so we'll use this value, too. + */ +#define HV_BALLOON_POST_INIT_WAIT (50 * 1000) + +#define HV_BALLOON_HA_CHUNK_SIZE (2 * GiB) +#define HV_BALLOON_HA_CHUNK_PAGES (HV_BALLOON_HA_CHUNK_SIZE / HV_BALLOON_PAGE_SIZE) + +#define HV_BALLOON_HR_CHUNK_PAGES 585728 +/* + * ^ that's the maximum number of pages + * that Windows returns in one hot remove response + * + * If the number requested is too high Windows will no longer honor + * these requests + */ + +typedef enum State { + /* not a real state */ + S_NO_CHANGE = 0, + + S_WAIT_RESET, + S_CLOSED, + S_VERSION, + S_CAPS, + S_POST_INIT_WAIT, + S_IDLE, + S_HOT_ADD_RB_WAIT, + S_HOT_ADD_POSTING, + S_HOT_ADD_REPLY_WAIT, + S_HOT_ADD_SKIP_CURRENT, + S_HOT_ADD_PROCESSED_CLEAR_PENDING, + S_HOT_ADD_PROCESSED_NEXT, + S_HOT_REMOVE, + S_BALLOON_POSTING, + S_BALLOON_RB_WAIT, + S_BALLOON_REPLY_WAIT, + S_UNBALLOON_POSTING, + S_UNBALLOON_RB_WAIT, + S_UNBALLOON_REPLY_WAIT, +} State; + +typedef struct StateDesc { + State state; + const char *desc; +} StateDesc; + +typedef struct PageRange { + uint64_t start; + uint64_t count; +} PageRange; + +/* type safety */ +typedef struct PageRangeTree { + GTree *t; +} PageRangeTree; + +typedef struct HAPVDIMMRange { + HAPVDIMMDevice *hapvdimm; + + PageRange range; + uint64_t used; + + /* + * Pages not currently usable due to guest alignment reqs or + * not hot added in the first place + */ + uint64_t unused_head, unused_tail; + + /* Memory removed from the guest backed by this HAPVDIMM */ + PageRangeTree removed_guest, removed_both; +} HAPVDIMMRange; + +/* type safety */ +typedef struct HAPVDIMMRangeTree { + GTree *t; +} HAPVDIMMRangeTree; + +typedef struct HvBalloon { + VMBusDevice parent; + State state; + bool status_reports; + + union dm_version version; + union dm_caps caps; + + QEMUTimer post_init_timer; + guint del_todo_process_timer; + + unsigned int trans_id; + + /* Guest target size */ + uint64_t target; + bool target_changed; + uint64_t target_diff; + + /* + * All HAPVDIMMs under control of this driver + * (but excluding the ones in hapvdimms_del_todo) + */ + 
HAPVDIMMRangeTree hapvdimms; + + /* Non-HAPVDIMM removed memory */ + PageRangeTree removed_guest, removed_both; + + /* Grand totals of removed memory (both HAPVDIMM and non-HAPVDIMM) */ + uint64_t removed_guest_ctr, removed_both_ctr; + + /* HAPVDIMMs waiting to be added during current connection */ + GSList *ha_todo; + uint64_t ha_current_count; + + /* HAPVDIMMs waiting to be deleted, not in any of the above structures */ + GSList *hapvdimms_del_todo; +} HvBalloon; + +#define HV_BALLOON(obj) OBJECT_CHECK(HvBalloon, (obj), TYPE_HV_BALLOON) + +#define HV_BALLOON_SET_STATE(hvb, news) \ + do { \ + assert(news != S_NO_CHANGE); \ + hv_balloon_state_set(hvb, news, # news); \ + } while (0) + +#define HV_BALLOON_STATE_DESC_SET(stdesc, news) \ + _hv_balloon_state_desc_set(stdesc, news, # news) + +#define HV_BALLOON_STATE_DESC_INIT \ + { \ + .state = S_NO_CHANGE, \ + } + +#define SUM_OVERFLOW_U64(in1, in2) ((in1) > UINT64_MAX - (in2)) +#define SUM_SATURATE_U64(in1, in2) \ + ({ \ + uint64_t _in1 = (in1), _in2 = (in2); \ + uint64_t _result; \ + \ + if (!SUM_OVERFLOW_U64(_in1, _in2)) { \ + _result = _in1 + _in2; \ + } else { \ + _result = UINT64_MAX; \ + } \ + \ + _result; \ + }) + +typedef struct HvBalloonReq { + VMBusChanReq vmreq; +} HvBalloonReq; + +/* PageRange */ +static void page_range_intersect(const PageRange *range, + uint64_t start, uint64_t count, + PageRange *out) +{ + uint64_t end1 = range->start + range->count; + uint64_t end2 = start + count; + uint64_t end = MIN(end1, end2); + + out->start = MAX(range->start, start); + out->count = out->start < end ? end - out->start : 0; +} + +static uint64_t page_range_intersection_size(const PageRange *range, + uint64_t start, uint64_t count) +{ + PageRange trange; + + page_range_intersect(range, start, count, &trange); + return trange.count; +} + +/* return just the part of range before (start) */ +static void page_range_part_before(const PageRange *range, + uint64_t start, PageRange *out) +{ + uint64_t endr = range->start + range->count; + uint64_t end = MIN(endr, start); + + out->start = range->start; + if (end > out->start) { + out->count = end - out->start; + } else { + out->count = 0; + } +} + +/* return just the part of range after (start, count) */ +static void page_range_part_after(const PageRange *range, + uint64_t start, uint64_t count, + PageRange *out) +{ + uint64_t end = range->start + range->count; + uint64_t ends = start + count; + + out->start = MAX(range->start, ends); + if (end > out->start) { + out->count = end - out->start; + } else { + out->count = 0; + } +} + +static bool page_range_joinable_left(const PageRange *range, + uint64_t start, uint64_t count) +{ + return start + count == range->start; +} + +static bool page_range_joinable_right(const PageRange *range, + uint64_t start, uint64_t count) +{ + return range->start + range->count == start; +} + +static bool page_range_joinable(const PageRange *range, + uint64_t start, uint64_t count) +{ + return page_range_joinable_left(range, start, count) || + page_range_joinable_right(range, start, count); +} + +/* PageRangeTree */ +static gint page_range_tree_key_compare(gconstpointer leftp, + gconstpointer rightp, + gpointer user_data) +{ + const uint64_t *left = leftp, *right = rightp; + + if (*left < *right) { + return -1; + } else if (*left > *right) { + return 1; + } else { /* *left == *right */ + return 0; + } +} + +static GTreeNode *page_range_tree_insert_new(PageRangeTree tree, + uint64_t start, uint64_t count) +{ + uint64_t *key = g_malloc(sizeof(*key)); + PageRange *range = 
g_malloc(sizeof(*range)); + + assert(count > 0); + + *key = range->start = start; + range->count = count; + + return g_tree_insert_node(tree.t, key, range); +} + +static void page_range_tree_insert(PageRangeTree tree, + uint64_t start, uint64_t count, + uint64_t *dupcount) +{ + GTreeNode *node; + bool joinable; + uint64_t intersection; + PageRange *range; + + assert(!SUM_OVERFLOW_U64(start, count)); + if (count == 0) { + return; + } + + node = g_tree_upper_bound(tree.t, &start); + if (node) { + node = g_tree_node_previous(node); + } else { + node = g_tree_node_last(tree.t); + } + + if (node) { + range = g_tree_node_value(node); + assert(range); + intersection = page_range_intersection_size(range, start, count); + joinable = page_range_joinable_right(range, start, count); + } + + if (!node || + (!intersection && !joinable)) { + /* + * !node case: the tree is empty or the very first node in the tree + * already has a higher key (the start of its range). + * the other case: there is a gap in the tree between the new range + * and the previous one. + * anyway, let's just insert the new range into the tree. + */ + node = page_range_tree_insert_new(tree, start, count); + assert(node); + range = g_tree_node_value(node); + assert(range); + } else { + /* + * the previous range in the tree either partially covers the new + * range or ends just at its beginning - extend it + */ + if (dupcount) { + *dupcount += intersection; + } + + count += start - range->start; + range->count = MAX(range->count, count); + } + + /* check next nodes for possible merging */ + for (node = g_tree_node_next(node); node; ) { + PageRange *rangecur; + + rangecur = g_tree_node_value(node); + assert(rangecur); + + intersection = page_range_intersection_size(rangecur, + range->start, range->count); + joinable = page_range_joinable_left(rangecur, + range->start, range->count); + if (!intersection && !joinable) { + /* the current node is disjoint */ + break; + } + + if (dupcount) { + *dupcount += intersection; + } + + count = rangecur->count + (rangecur->start - range->start); + range->count = MAX(range->count, count); + + /* the current node was merged in, remove it */ + start = rangecur->start; + node = g_tree_node_next(node); + /* no hinted removal in GTree... */ + g_tree_remove(tree.t, &start); + } +} + +static bool page_range_tree_pop(PageRangeTree tree, PageRange *out, + uint64_t maxcount) +{ + GTreeNode *node; + PageRange *range; + + node = g_tree_node_last(tree.t); + if (!node) { + return false; + } + + range = g_tree_node_value(node); + assert(range); + + out->start = range->start; + + /* can't modify range->start as it is the node key */ + if (range->count > maxcount) { + out->start += range->count - maxcount; + out->count = maxcount; + range->count -= maxcount; + } else { + out->count = range->count; + /* no hinted removal in GTree... */ + g_tree_remove(tree.t, &out->start); + } + + return true; +} + +static bool page_range_tree_intree_any(PageRangeTree tree, + uint64_t start, uint64_t count) +{ + GTreeNode *node; + + if (count == 0) { + return false; + } + + /* find the first node that can possibly intersect our range */ + node = g_tree_upper_bound(tree.t, &start); + if (node) { + /* + * a NULL node below means that the very first node in the tree + * already has a higher key (the start of its range). 
+ */ + node = g_tree_node_previous(node); + } else { + /* a NULL node below means that the tree is empty */ + node = g_tree_node_last(tree.t); + } + /* node range start <= range start */ + + if (!node) { + /* node range start > range start */ + node = g_tree_node_first(tree.t); + } + + for ( ; node; node = g_tree_node_next(node)) { + PageRange *range = g_tree_node_value(node); + + assert(range); + /* + * if this node starts beyond or at the end of our range so does + * every next one + */ + if (range->start >= start + count) { + break; + } + + if (page_range_intersection_size(range, start, count) > 0) { + return true; + } + } + + return false; +} + +static PageRangeTree page_range_tree_new(void) +{ + PageRangeTree tree; + + tree.t = g_tree_new_full(page_range_tree_key_compare, NULL, + g_free, g_free); + return tree; +} + +static void page_range_tree_destroy(PageRangeTree *tree) +{ + /* g_tree_destroy() is not NULL-safe */ + if (!tree->t) { + return; + } + + g_tree_destroy(tree->t); + tree->t = NULL; +} + +/* HAPVDIMMDevice */ +static uint64_t hapvdimm_get_addr(HAPVDIMMDevice *hapvdimm) +{ + return object_property_get_uint(OBJECT(hapvdimm), HAPVDIMM_ADDR_PROP, + &error_abort) / HV_BALLOON_PAGE_SIZE; +} + +static uint64_t hapvdimm_get_size(HAPVDIMMDevice *hapvdimm) +{ + return object_property_get_uint(OBJECT(hapvdimm), HAPVDIMM_SIZE_PROP, + &error_abort) / HV_BALLOON_PAGE_SIZE; +} + +static void hapvdimm_get_range(HAPVDIMMDevice *hapvdimm, PageRange *out) +{ + out->start = hapvdimm_get_addr(hapvdimm); + assert(out->start > 0); + + out->count = hapvdimm_get_size(hapvdimm); + assert(out->count > 0); +} + +static HostMemoryBackend *hapvdimm_get_memdev(HAPVDIMMDevice *hapvdimm) +{ + Object *memdev_obj; + + memdev_obj = object_property_get_link(OBJECT(hapvdimm), + HAPVDIMM_MEMDEV_PROP, + &error_abort); + return MEMORY_BACKEND(memdev_obj); +} + +/* HAPVDIMMRange */ +static HAPVDIMMRange *hapvdimm_range_new(HAPVDIMMDevice *hapvdimm) +{ + HAPVDIMMRange *hpr = g_malloc(sizeof(*hpr)); + + hpr->hapvdimm = HAPVDIMM(object_ref(hapvdimm)); + hapvdimm_get_range(hapvdimm, &hpr->range); + + hpr->removed_guest = page_range_tree_new(); + hpr->removed_both = page_range_tree_new(); + + /* mark the whole range as unused */ + hpr->used = 0; + hpr->unused_head = hpr->range.count; + hpr->unused_tail = 0; + + return hpr; +} + +static void hapvdimm_range_free(HAPVDIMMRange *hpr) +{ + g_autoptr(HAPVDIMMDevice) hapvdimm = g_steal_pointer(&hpr->hapvdimm); + + page_range_tree_destroy(&hpr->removed_guest); + page_range_tree_destroy(&hpr->removed_both); + + g_free(hpr); +} + +/* the hapvdimm range reduced by unused head and tail */ +static void hapvdimm_range_get_effective_range(HAPVDIMMRange *hpr, + PageRange *out) +{ + out->start = hpr->range.start + hpr->unused_head; + out->count = hpr->range.count - hpr->unused_head - hpr->unused_tail; +} + +/* HAPVDIMMRangeTree */ +static gint hapvdimm_tree_key_compare(gconstpointer leftp, gconstpointer rightp, + gpointer user_data) +{ + /* + * hapvdimm tree is also keyed on page range start, so we can simply reuse + * the comparison function from the page range tree + */ + return page_range_tree_key_compare(leftp, rightp, user_data); +} + +static HAPVDIMMRange *hapvdimm_tree_insert_new(HvBalloon *balloon, + HAPVDIMMDevice *hapvdimm) +{ + HAPVDIMMRange *hpr; + uint64_t *key; + + hpr = hapvdimm_range_new(hapvdimm); + + key = g_malloc(sizeof(*key)); + *key = hpr->range.start; + + g_tree_insert(balloon->hapvdimms.t, key, hpr); + + return hpr; +} + +/* The HAPVDIMM must not be on the 
ha_todo list since it's going to get unref'ed. */ +static void hapvdimm_tree_remove(HvBalloon *balloon, HAPVDIMMDevice *hapvdimm) +{ + uint64_t addr; + + addr = hapvdimm_get_addr(hapvdimm); + assert(addr > 0); + + g_tree_remove(balloon->hapvdimms.t, &addr); +} + +/* total RAM includes memory currently removed from the guest */ +static gboolean hapvdimm_tree_total_ram_node(gpointer key, + gpointer value, + gpointer data) +{ + HAPVDIMMRange *hpr = value; + uint64_t *size = data; + PageRange rangeeff; + + hapvdimm_range_get_effective_range(hpr, &rangeeff); + *size += rangeeff.count; + + return false; +} + +static uint64_t hapvdimm_tree_total_ram(HvBalloon *balloon) +{ + uint64_t size = 0; + + g_tree_foreach(balloon->hapvdimms.t, hapvdimm_tree_total_ram_node, &size); + return size; +} + +static void hapvdimm_tree_value_free(gpointer data) +{ + HAPVDIMMRange *hpr = data; + + hapvdimm_range_free(hpr); +} + +static HAPVDIMMRangeTree hapvdimm_tree_new(void) +{ + HAPVDIMMRangeTree tree; + + tree.t = g_tree_new_full(hapvdimm_tree_key_compare, NULL, g_free, + hapvdimm_tree_value_free); + return tree; +} + +static void hapvdimm_tree_destroy(HAPVDIMMRangeTree *tree) +{ + /* g_tree_destroy() is not NULL-safe */ + if (!tree->t) { + return; + } + + g_tree_destroy(tree->t); + tree->t = NULL; +} + +static gboolean ha_todo_add_all_node(gpointer key, + gpointer value, + gpointer data) +{ + HAPVDIMMRange *hpr = value; + HvBalloon *balloon = data; + + /* assume the hpr is fresh */ + assert(hpr->used == 0); + assert(hpr->unused_head == hpr->range.count); + assert(hpr->unused_tail == 0); + + balloon->ha_todo = g_slist_append(balloon->ha_todo, hpr); + + return false; +} + +static void ha_todo_add_all(HvBalloon *balloon) +{ + assert(balloon->ha_todo == NULL); + g_tree_foreach(balloon->hapvdimms.t, ha_todo_add_all_node, balloon); +} + +static void ha_todo_clear(HvBalloon *balloon) +{ + g_slist_free(g_steal_pointer(&balloon->ha_todo)); +} + +/* TODO: unify the code below with virtio-balloon and cache the value */ +static int build_dimm_list(Object *obj, void *opaque) +{ + GSList **list = opaque; + + if (object_dynamic_cast(obj, TYPE_PC_DIMM)) { + DeviceState *dev = DEVICE(obj); + if (dev->realized) { /* only realized DIMMs matter */ + *list = g_slist_prepend(*list, dev); + } + } + + object_child_foreach(obj, build_dimm_list, opaque); + return 0; +} + +static ram_addr_t get_current_ram_size(void) +{ + GSList *list = NULL, *item; + ram_addr_t size = current_machine->ram_size; + + build_dimm_list(qdev_get_machine(), &list); + for (item = list; item; item = g_slist_next(item)) { + Object *obj = OBJECT(item->data); + if (!strcmp(object_get_typename(obj), TYPE_PC_DIMM)) + size += object_property_get_int(obj, PC_DIMM_SIZE_PROP, + &error_abort); + } + g_slist_free(list); + + return size; +} + +/* total RAM includes memory currently removed from the guest */ +static uint64_t hv_balloon_total_ram(HvBalloon *balloon) +{ + ram_addr_t ram_size = get_current_ram_size(); + uint64_t ram_size_pages = ram_size >> HV_BALLOON_PFN_SHIFT; + uint64_t hapvdimm_size_pages = hapvdimm_tree_total_ram(balloon); + + assert(ram_size_pages > 0); + + return SUM_SATURATE_U64(ram_size_pages, hapvdimm_size_pages); +} + +/* + * calculating the total RAM size is a slow operation, + * avoid it as much as possible + */ +static uint64_t hv_balloon_total_removed_rs(HvBalloon *balloon, + uint64_t ram_size_pages) +{ + uint64_t total_removed; + + total_removed = SUM_SATURATE_U64(balloon->removed_guest_ctr, + balloon->removed_both_ctr); + + /* possible if guest 
returns pages outside actual RAM */ + if (total_removed > ram_size_pages) { + total_removed = ram_size_pages; + } + + return total_removed; +} + +static bool hv_balloon_state_is_init(HvBalloon *balloon) +{ + return balloon->state == S_WAIT_RESET || + balloon->state == S_CLOSED || + balloon->state == S_VERSION || + balloon->state == S_CAPS; +} + +/* Returns whether the state has actually changed */ +static bool hv_balloon_state_set(HvBalloon *balloon, + State newst, const char *newststr) +{ + if (newst == S_NO_CHANGE || balloon->state == newst) { + return false; + } + + balloon->state = newst; + trace_hv_balloon_state_change(newststr); + return true; +} + +static void _hv_balloon_state_desc_set(StateDesc *stdesc, + State newst, const char *newststr) +{ + /* state setting is only permitted on a freshly init desc */ + assert(stdesc->state == S_NO_CHANGE); + + assert(newst != S_NO_CHANGE); + + stdesc->state = newst; + stdesc->desc = newststr; +} + +static void del_todo_process(HvBalloon *balloon) +{ + while (balloon->hapvdimms_del_todo) { + HAPVDIMMDevice *hapvdimm = balloon->hapvdimms_del_todo->data; + HostMemoryBackend *backend; + const char *backend_id; + + backend = hapvdimm_get_memdev(hapvdimm); + backend_id = object_get_canonical_path_component(OBJECT(backend)); + + object_unparent(OBJECT(hapvdimm)); + object_unref(OBJECT(hapvdimm)); + qapi_event_send_hv_balloon_memory_backend_unused(backend_id); + + balloon->hapvdimms_del_todo = + g_slist_remove(balloon->hapvdimms_del_todo, hapvdimm); + } + + if (balloon->del_todo_process_timer) { + g_source_remove(balloon->del_todo_process_timer); + balloon->del_todo_process_timer = 0; + } +} + +static gboolean del_todo_process_timer(gpointer user_data) +{ + HvBalloon *balloon = user_data; + + balloon->del_todo_process_timer = 0; + + del_todo_process(balloon); + + return G_SOURCE_REMOVE; +} + +static void del_todo_append(HvBalloon *balloon, + HAPVDIMMDevice *hapvdimm) +{ + balloon->hapvdimms_del_todo = g_slist_append(balloon->hapvdimms_del_todo, + object_ref(hapvdimm)); +} + +static void del_todo_add(HvBalloon *balloon, + HAPVDIMMDevice *hapvdimm) +{ + hapvdimm_tree_remove(balloon, hapvdimm); + del_todo_append(balloon, hapvdimm); +} + +static gboolean del_todo_add_all_node(gpointer key, + gpointer value, + gpointer data) +{ + HAPVDIMMRange *hpr = value; + HvBalloon *balloon = data; + + del_todo_append(balloon, hpr->hapvdimm); + + return false; +} + +static void del_todo_add_all(HvBalloon *balloon) +{ + g_tree_foreach(balloon->hapvdimms.t, del_todo_add_all_node, balloon); + hapvdimm_tree_destroy(&balloon->hapvdimms); + + balloon->hapvdimms = hapvdimm_tree_new(); +} + +static void del_todo_add_all_from_ha_todo(HvBalloon *balloon) +{ + while (balloon->ha_todo) { + HAPVDIMMRange *hpr = balloon->ha_todo->data; + + del_todo_add(balloon, hpr->hapvdimm); + balloon->ha_todo = g_slist_remove(balloon->ha_todo, hpr); + } +} + +static VMBusChannel *hv_balloon_get_channel_maybe(HvBalloon *balloon) +{ + return vmbus_device_channel(&balloon->parent, 0); +} + +static VMBusChannel *hv_balloon_get_channel(HvBalloon *balloon) +{ + VMBusChannel *chan; + + chan = hv_balloon_get_channel_maybe(balloon); + assert(chan != NULL); + return chan; +} + +static ssize_t hv_balloon_send_packet(VMBusChannel *chan, + struct dm_message *msg) +{ + int ret; + + ret = vmbus_channel_reserve(chan, 0, msg->hdr.size); + if (ret < 0) { + return ret; + } + + return vmbus_channel_send(chan, VMBUS_PACKET_DATA_INBAND, + NULL, 0, msg, msg->hdr.size, false, + msg->hdr.trans_id); +} + +static bool 
hv_balloon_unballoon_get_source(HvBalloon *balloon, + PageRangeTree *dtree, + uint64_t **dctr, + HAPVDIMMRange **hpr) +{ + /* Try the boot memory first */ + if (g_tree_nnodes(balloon->removed_guest.t) > 0) { + *dtree = balloon->removed_guest; + *dctr = &balloon->removed_guest_ctr; + *hpr = NULL; + } else if (g_tree_nnodes(balloon->removed_both.t) > 0) { + *dtree = balloon->removed_both; + *dctr = &balloon->removed_both_ctr; + *hpr = NULL; + } else { + GTreeNode *node; + + for (node = g_tree_node_first(balloon->hapvdimms.t); node; + node = g_tree_node_next(node)) { + HAPVDIMMRange *hprnode = g_tree_node_value(node); + + assert(hprnode); + if (g_tree_nnodes(hprnode->removed_guest.t) > 0) { + *dtree = hprnode->removed_guest; + *dctr = &balloon->removed_guest_ctr; + *hpr = hprnode; + break; + } else if (g_tree_nnodes(hprnode->removed_both.t) > 0) { + *dtree = hprnode->removed_both; + *dctr = &balloon->removed_both_ctr; + *hpr = hprnode; + break; + } + } + + if (!node) { + return false; + } + } + + return true; +} + +static void hv_balloon_balloon_unballoon_start(HvBalloon *balloon, + uint64_t ram_size_pages, + StateDesc *stdesc) +{ + uint64_t total_removed = hv_balloon_total_removed_rs(balloon, + ram_size_pages); + + assert(balloon->state == S_IDLE); + assert(ram_size_pages > 0); + + /* + * we need to cache the value when starting the (un)balloon procedure + * in case somebody changes the balloon target when the procedure is + * in progress + */ + if (balloon->target < ram_size_pages - total_removed) { + balloon->target_diff = ram_size_pages - total_removed - balloon->target; + HV_BALLOON_STATE_DESC_SET(stdesc, S_BALLOON_RB_WAIT); + } else { + balloon->target_diff = balloon->target - + (ram_size_pages - total_removed); + + /* + * careful here, the user might have set the balloon target + * above the RAM size, so above the total removed count + */ + balloon->target_diff = MIN(balloon->target_diff, total_removed); + HV_BALLOON_STATE_DESC_SET(stdesc, S_UNBALLOON_RB_WAIT); + } + + balloon->target_changed = false; +} + +static void hv_balloon_unballoon_rb_wait(HvBalloon *balloon, StateDesc *stdesc) +{ + VMBusChannel *chan = hv_balloon_get_channel(balloon); + struct dm_unballoon_request *ur; + size_t ur_size = sizeof(*ur) + sizeof(ur->range_array[0]); + + assert(balloon->state == S_UNBALLOON_RB_WAIT); + + if (vmbus_channel_reserve(chan, 0, ur_size) < 0) { + return; + } + + HV_BALLOON_STATE_DESC_SET(stdesc, S_UNBALLOON_POSTING); +} + +static void hv_balloon_unballoon_posting(HvBalloon *balloon, StateDesc *stdesc) +{ + VMBusChannel *chan = hv_balloon_get_channel(balloon); + PageRangeTree dtree; + uint64_t *dctr; + HAPVDIMMRange *hpr; + struct dm_unballoon_request *ur; + size_t ur_size = sizeof(*ur) + sizeof(ur->range_array[0]); + PageRange range; + bool bret; + ssize_t ret; + + assert(balloon->state == S_UNBALLOON_POSTING); + assert(balloon->target_diff > 0); + + if (!hv_balloon_unballoon_get_source(balloon, &dtree, &dctr, &hpr)) { + error_report("trying to unballoon but nothing ballooned"); + /* + * there is little we can do as we might have already + * sent the guest a partial request we can't cancel + */ + return; + } + + assert(dtree.t); + assert(dctr); + + ur = alloca(ur_size); + memset(ur, 0, ur_size); + ur->hdr.type = DM_UNBALLOON_REQUEST; + ur->hdr.size = ur_size; + ur->hdr.trans_id = balloon->trans_id; + + bret = page_range_tree_pop(dtree, &range, MIN(balloon->target_diff, + HV_BALLOON_HA_CHUNK_PAGES)); + assert(bret); + /* TODO: madvise? 
*/ + + *dctr -= range.count; + balloon->target_diff -= range.count; + if (hpr) { + hpr->used += range.count; + } + + ur->range_count = 1; + ur->range_array[0].finfo.start_page = range.start; + ur->range_array[0].finfo.page_cnt = range.count; + ur->more_pages = balloon->target_diff > 0; + + trace_hv_balloon_outgoing_unballoon(ur->hdr.trans_id, + range.count, range.start, + balloon->target_diff); + + if (ur->more_pages) { + HV_BALLOON_STATE_DESC_SET(stdesc, S_UNBALLOON_RB_WAIT); + } else { + HV_BALLOON_STATE_DESC_SET(stdesc, S_UNBALLOON_REPLY_WAIT); + } + + ret = vmbus_channel_send(chan, VMBUS_PACKET_DATA_INBAND, + NULL, 0, ur, ur_size, false, + ur->hdr.trans_id); + if (ret <= 0) { + error_report("error %zd when posting unballoon msg, expect problems", + ret); + } +} + +static void hv_balloon_hot_add_start(HvBalloon *balloon, StateDesc *stdesc) +{ + HAPVDIMMRange *hpr; + PageRange range; + + assert(balloon->state == S_IDLE); + assert(balloon->ha_todo); + + hpr = balloon->ha_todo->data; + + range.start = QEMU_ALIGN_UP(hpr->range.start, + (1 << balloon->caps.cap_bits.hot_add_alignment) + * (MiB / HV_BALLOON_PAGE_SIZE)); + hpr->unused_head = range.start - hpr->range.start; + if (hpr->unused_head >= hpr->range.count) { + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_SKIP_CURRENT); + return; + } + + range.count = hpr->range.count - hpr->unused_head; + range.count = QEMU_ALIGN_DOWN(range.count, + (1 << balloon->caps.cap_bits.hot_add_alignment) + * (MiB / HV_BALLOON_PAGE_SIZE)); + if (range.count == 0) { + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_SKIP_CURRENT); + return; + } + hpr->unused_tail = hpr->range.count - hpr->unused_head - range.count; + hpr->used = 0; + + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_RB_WAIT); +} + +static void hv_balloon_hot_add_rb_wait(HvBalloon *balloon, StateDesc *stdesc) +{ + VMBusChannel *chan = hv_balloon_get_channel(balloon); + struct dm_hot_add *ha; + size_t ha_size = sizeof(*ha) + sizeof(ha->range); + + assert(balloon->state == S_HOT_ADD_RB_WAIT); + + if (vmbus_channel_reserve(chan, 0, ha_size) < 0) { + return; + } + + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_POSTING); +} + +static void hv_balloon_hot_add_posting(HvBalloon *balloon, StateDesc *stdesc) +{ + VMBusChannel *chan = hv_balloon_get_channel(balloon); + HAPVDIMMRange *hpr; + struct dm_hot_add *ha; + size_t ha_size = sizeof(*ha) + sizeof(ha->range); + union dm_mem_page_range *ha_region; + PageRange range; + uint64_t chunk_max_size; + ssize_t ret; + + assert(balloon->state == S_HOT_ADD_POSTING); + assert(balloon->ha_todo); + + hpr = balloon->ha_todo->data; + + range.start = hpr->range.start + hpr->unused_head + hpr->used; + range.count = hpr->range.count; + range.count -= hpr->unused_head; + range.count -= hpr->used; + range.count -= hpr->unused_tail; + + chunk_max_size = MAX((1 << balloon->caps.cap_bits.hot_add_alignment) * + (MiB / HV_BALLOON_PAGE_SIZE), + HV_BALLOON_HA_CHUNK_PAGES); + range.count = MIN(range.count, chunk_max_size); + balloon->ha_current_count = range.count; + + ha = alloca(ha_size); + ha_region = &(&ha->range)[1]; + memset(ha, 0, ha_size); + ha->hdr.type = DM_MEM_HOT_ADD_REQUEST; + ha->hdr.size = ha_size; + ha->hdr.trans_id = balloon->trans_id; + + ha->range.finfo.start_page = range.start; + ha->range.finfo.page_cnt = range.count; + ha_region->finfo.start_page = range.start; + ha_region->finfo.page_cnt = ha->range.finfo.page_cnt; + + trace_hv_balloon_outgoing_hot_add(ha->hdr.trans_id, + range.count, range.start); + + ret = vmbus_channel_send(chan, VMBUS_PACKET_DATA_INBAND, + NULL, 0, 
ha, ha_size, false, + ha->hdr.trans_id); + if (ret <= 0) { + error_report("error %zd when posting hot add msg, expect problems", + ret); + } + + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_REPLY_WAIT); +} + +static void hv_balloon_hot_add_finish(HvBalloon *balloon, StateDesc *stdesc) +{ + HAPVDIMMRange *hpr; + + assert(balloon->state == S_HOT_ADD_SKIP_CURRENT || + balloon->state == S_HOT_ADD_PROCESSED_CLEAR_PENDING || + balloon->state == S_HOT_ADD_PROCESSED_NEXT); + assert(balloon->ha_todo); + + hpr = balloon->ha_todo->data; + + balloon->ha_todo = g_slist_remove(balloon->ha_todo, hpr); + if (balloon->state == S_HOT_ADD_SKIP_CURRENT) { + del_todo_add(balloon, hpr->hapvdimm); + } else if (balloon->state == S_HOT_ADD_PROCESSED_CLEAR_PENDING) { + del_todo_add_all_from_ha_todo(balloon); + } + + /* let other things happen, too, between hot adds to be done */ + HV_BALLOON_STATE_DESC_SET(stdesc, S_IDLE); +} + +static void hv_balloon_balloon_rb_wait(HvBalloon *balloon, StateDesc *stdesc) +{ + VMBusChannel *chan = hv_balloon_get_channel(balloon); + size_t bl_size = sizeof(struct dm_balloon); + + assert(balloon->state == S_BALLOON_RB_WAIT); + + if (vmbus_channel_reserve(chan, 0, bl_size) < 0) { + return; + } + + HV_BALLOON_STATE_DESC_SET(stdesc, S_BALLOON_POSTING); +} + +static void hv_balloon_balloon_posting(HvBalloon *balloon, StateDesc *stdesc) +{ + VMBusChannel *chan = hv_balloon_get_channel(balloon); + struct dm_balloon bl; + size_t bl_size = sizeof(bl); + ssize_t ret; + + assert(balloon->state == S_BALLOON_POSTING); + assert(balloon->target_diff > 0); + + memset(&bl, 0, sizeof(bl)); + bl.hdr.type = DM_BALLOON_REQUEST; + bl.hdr.size = bl_size; + bl.hdr.trans_id = balloon->trans_id; + bl.num_pages = MIN(balloon->target_diff, HV_BALLOON_HR_CHUNK_PAGES); + + trace_hv_balloon_outgoing_balloon(bl.hdr.trans_id, bl.num_pages, + balloon->target_diff); + + ret = vmbus_channel_send(chan, VMBUS_PACKET_DATA_INBAND, + NULL, 0, &bl, bl_size, false, + bl.hdr.trans_id); + if (ret <= 0) { + error_report("error %zd when posting balloon msg, expect problems", + ret); + } + + HV_BALLOON_STATE_DESC_SET(stdesc, S_BALLOON_REPLY_WAIT); +} + +static void hv_balloon_idle_state(HvBalloon *balloon, + StateDesc *stdesc) +{ + bool can_balloon = balloon->caps.cap_bits.balloon; + bool want_unballoon = false; + bool want_hot_add = balloon->ha_todo != NULL; + bool want_balloon = false; + uint64_t ram_size_pages; + + assert(balloon->state == S_IDLE); + + if (can_balloon && balloon->target_changed) { + uint64_t total_removed; + + ram_size_pages = hv_balloon_total_ram(balloon); + total_removed = hv_balloon_total_removed_rs(balloon, + ram_size_pages); + + want_unballoon = total_removed > 0 && + balloon->target > ram_size_pages - total_removed; + want_balloon = balloon->target < ram_size_pages - total_removed; + } + + /* + * the order here is important, first we unballoon, then hot add, + * then balloon (or hot remove) + */ + if (want_unballoon) { + hv_balloon_balloon_unballoon_start(balloon, ram_size_pages, stdesc); + } else if (want_hot_add) { + hv_balloon_hot_add_start(balloon, stdesc); + } else if (want_balloon) { + hv_balloon_balloon_unballoon_start(balloon, ram_size_pages, stdesc); + } +} + +static const struct { + void (*handler)(HvBalloon *balloon, StateDesc *stdesc); +} state_handlers[] = { + [S_IDLE].handler = hv_balloon_idle_state, + [S_UNBALLOON_RB_WAIT].handler = hv_balloon_unballoon_rb_wait, + [S_UNBALLOON_POSTING].handler = hv_balloon_unballoon_posting, + [S_HOT_ADD_RB_WAIT].handler = hv_balloon_hot_add_rb_wait, + 
[S_HOT_ADD_POSTING].handler = hv_balloon_hot_add_posting, + [S_HOT_ADD_SKIP_CURRENT].handler = hv_balloon_hot_add_finish, + [S_HOT_ADD_PROCESSED_CLEAR_PENDING].handler = hv_balloon_hot_add_finish, + [S_HOT_ADD_PROCESSED_NEXT].handler = hv_balloon_hot_add_finish, + [S_BALLOON_RB_WAIT].handler = hv_balloon_balloon_rb_wait, + [S_BALLOON_POSTING].handler = hv_balloon_balloon_posting, +}; + +static void hv_balloon_handle_state(HvBalloon *balloon, StateDesc *stdesc) +{ + if (!state_handlers[balloon->state].handler) { + return; + } + + state_handlers[balloon->state].handler(balloon, stdesc); +} + +static void hv_balloon_remove_response_insert_range(PageRangeTree tree, + const PageRange *range, + uint64_t *ctr1, + uint64_t *ctr2, + uint64_t *ctr3) +{ + uint64_t dupcount, effcount; + + if (range->count == 0) { + return; + } + + dupcount = 0; + page_range_tree_insert(tree, range->start, range->count, &dupcount); + + assert(dupcount <= range->count); + effcount = range->count - dupcount; + + *ctr1 += effcount; + *ctr2 += effcount; + if (ctr3) { + *ctr3 += effcount; + } +} + +static void hv_balloon_remove_response_handle_range(HvBalloon *balloon, + PageRange *range, + bool both, + uint64_t *removedctr) +{ + GTreeNode *node; + PageRangeTree globaltree = both ? balloon->removed_both : + balloon->removed_guest; + uint64_t *globalctr = both ? &balloon->removed_both_ctr : + &balloon->removed_guest_ctr; + + if (range->count == 0) { + return; + } + + trace_hv_balloon_remove_response(range->count, range->start, both); + + /* find the first node that can possibly intersect our range */ + node = g_tree_upper_bound(balloon->hapvdimms.t, &range->start); + if (node) { + /* + * a NULL node below means that the very first node in the tree + * already has a higher key (the start of its range). + */ + node = g_tree_node_previous(node); + } else { + /* a NULL node below means that the tree is empty */ + node = g_tree_node_last(balloon->hapvdimms.t); + } + /* node range start <= range start */ + + if (!node) { + /* node range start > range start */ + node = g_tree_node_first(balloon->hapvdimms.t); + } + + for ( ; node && range->count > 0; node = g_tree_node_next(node)) { + HAPVDIMMRange *hpr = g_tree_node_value(node); + PageRangeTree hprtree; + PageRange rangeeff, rangehole, rangecommon; + uint64_t hprremoved = 0; + + assert(hpr); + hprtree = both ? 
hpr->removed_both : hpr->removed_guest; + hapvdimm_range_get_effective_range(hpr, &rangeeff); + + /* + * if this node starts beyond or at the end of the range so does + * every next one + */ + if (rangeeff.start >= range->start + range->count) { + break; + } + + /* process the hole before the current hpr, if it exists */ + page_range_part_before(range, rangeeff.start, &rangehole); + hv_balloon_remove_response_insert_range(globaltree, &rangehole, + globalctr, removedctr, NULL); + if (rangehole.count > 0) { + trace_hv_balloon_remove_response_hole(rangehole.count, + rangehole.start, + range->count, range->start, + rangeeff.start, both); + } + + /* + * process the hpr part, can be empty for the very first node processed + * or due to difference between the nominal and effective hpr start + */ + page_range_intersect(range, rangeeff.start, rangeeff.count, + &rangecommon); + hv_balloon_remove_response_insert_range(hprtree, &rangecommon, + globalctr, removedctr, + &hprremoved); + hpr->used -= hprremoved; + if (rangecommon.count > 0) { + trace_hv_balloon_remove_response_common(rangecommon.count, + rangecommon.start, + range->count, range->start, + rangeeff.count, + rangeeff.start, hprremoved, + both); + } + + /* calculate what's left after the current hpr */ + rangecommon = *range; + page_range_part_after(&rangecommon, rangeeff.start, rangeeff.count, + range); + } + + /* process the remainder of the range that lies outside of the hpr tree */ + if (range->count > 0) { + hv_balloon_remove_response_insert_range(globaltree, range, + globalctr, removedctr, NULL); + trace_hv_balloon_remove_response_remainder(range->count, range->start, + both); + range->count = 0; + } +} + +static void hv_balloon_remove_response_handle_pages(HvBalloon *balloon, + PageRange *range, + uint64_t start, + uint64_t count, + bool both, + uint64_t *removedctr) +{ + assert(count > 0); + + /* + * if there is an existing range that the new range can't be joined to + * dump it into tree(s) + */ + if (range->count > 0 && !page_range_joinable(range, start, count)) { + hv_balloon_remove_response_handle_range(balloon, range, both, + removedctr); + } + + if (range->count == 0) { + range->start = start; + range->count = count; + } else if (page_range_joinable_left(range, start, count)) { + range->start = start; + range->count += count; + } else { /* page_range_joinable_right() */ + range->count += count; + } +} + +static gboolean hv_balloon_handle_remove_host_addr_node(gpointer key, + gpointer value, + gpointer data) +{ + PageRange *range = value; + uint64_t pageoff; + + for (pageoff = 0; pageoff < range->count; ) { + void *addr = (void *)((range->start + pageoff) * HV_BALLOON_PAGE_SIZE); + RAMBlock *rb; + ram_addr_t rb_offset; + size_t rb_page_size; + size_t discard_size; + + rb = qemu_ram_block_from_host(addr, false, &rb_offset); + rb_page_size = qemu_ram_pagesize(rb); + + if (rb_page_size != HV_BALLOON_PAGE_SIZE) { + /* TODO: these should end in "removed_guest" */ + warn_report("guest reported removed page backed by unsupported page size %zu", + rb_page_size); + pageoff++; + continue; + } + + discard_size = MIN(range->count - pageoff, + (rb->max_length - rb_offset) / + HV_BALLOON_PAGE_SIZE); + discard_size = MAX(discard_size, 1); + + if (ram_block_discard_range(rb, rb_offset, discard_size * + HV_BALLOON_PAGE_SIZE) != 0) { + warn_report("guest reported removed page failed discard"); + } + + pageoff += discard_size; + } + + return false; +} + +static void hv_balloon_handle_remove_host_addr_tree(PageRangeTree tree) +{ + 
g_tree_foreach(tree.t, hv_balloon_handle_remove_host_addr_node, NULL); +} + +static int hv_balloon_handle_remove_section(PageRangeTree tree, + const MemoryRegionSection *section, + uint64_t count) +{ + void *addr = memory_region_get_ram_ptr(section->mr) + + section->offset_within_region; + uint64_t addr_page; + + assert(count > 0); + + if ((uintptr_t)addr % HV_BALLOON_PAGE_SIZE) { + warn_report("guest reported removed pages at an unaligned host addr %p", + addr); + return -EINVAL; + } + + addr_page = (uintptr_t)addr / HV_BALLOON_PAGE_SIZE; + page_range_tree_insert(tree, addr_page, count, NULL); + + return 0; +} + +static void hv_balloon_handle_remove_ranges(HvBalloon *balloon, + union dm_mem_page_range ranges[], + uint32_t count) +{ + uint64_t removedcnt; + PageRangeTree removed_host_addr; + PageRange range_guest, range_both; + + removed_host_addr = page_range_tree_new(); + range_guest.count = range_both.count = removedcnt = 0; + for (unsigned int ctr = 0; ctr < count; ctr++) { + union dm_mem_page_range *mr = &ranges[ctr]; + hwaddr pa; + MemoryRegionSection section; + + for (unsigned int offset = 0; offset < mr->finfo.page_cnt; ) { + int ret; + uint64_t pageno = mr->finfo.start_page + offset; + uint64_t pagecnt = 1; + + pa = (hwaddr)pageno << HV_BALLOON_PFN_SHIFT; + section = memory_region_find(get_system_memory(), pa, + (mr->finfo.page_cnt - offset) * + HV_BALLOON_PAGE_SIZE); + if (!section.mr) { + warn_report("guest reported removed page %"PRIu64" not found in RAM", + pageno); + ret = -EINVAL; + goto finish_page; + } + + pagecnt = section.size / HV_BALLOON_PAGE_SIZE; + if (pagecnt <= 0) { + warn_report("guest reported removed page %"PRIu64" in a section smaller than page size", + pageno); + pagecnt = 1; /* skip the whole page */ + ret = -EINVAL; + goto finish_page; + } + + if (!memory_region_is_ram(section.mr) || + memory_region_is_rom(section.mr) || + memory_region_is_romd(section.mr)) { + warn_report("guest reported removed page %"PRIu64" in a section that is not an ordinary RAM", + pageno); + ret = -EINVAL; + goto finish_page; + } + + ret = hv_balloon_handle_remove_section(removed_host_addr, &section, + pagecnt); + + finish_page: + if (ret == 0) { + hv_balloon_remove_response_handle_pages(balloon, + &range_both, + pageno, pagecnt, + true, &removedcnt); + } else { + hv_balloon_remove_response_handle_pages(balloon, + &range_guest, + pageno, pagecnt, + false, &removedcnt); + } + + if (section.mr) { + memory_region_unref(section.mr); + } + + offset += pagecnt; + } + } + + hv_balloon_remove_response_handle_range(balloon, &range_both, true, + &removedcnt); + hv_balloon_remove_response_handle_range(balloon, &range_guest, false, + &removedcnt); + + hv_balloon_handle_remove_host_addr_tree(removed_host_addr); + page_range_tree_destroy(&removed_host_addr); + + if (removedcnt > balloon->target_diff) { + warn_report("guest reported more pages removed than currently pending (%"PRIu64" vs %"PRIu64")", + removedcnt, balloon->target_diff); + balloon->target_diff = 0; + } else { + balloon->target_diff -= removedcnt; + } +} + +static bool hv_balloon_handle_msg_size(HvBalloonReq *req, size_t minsize, + const char *msgname) +{ + VMBusChanReq *vmreq = &req->vmreq; + uint32_t msglen = vmreq->msglen; + + if (msglen >= minsize) { + return true; + } + + warn_report("%s message too short (%u vs %zu), ignoring", msgname, + (unsigned int)msglen, minsize); + return false; +} + +static void hv_balloon_handle_version_request(HvBalloon *balloon, + HvBalloonReq *req, + StateDesc *stdesc) +{ + VMBusChanReq *vmreq = 
&req->vmreq; + struct dm_version_request *msgVr = vmreq->msg; + struct dm_version_response respVr; + + if (balloon->state != S_VERSION) { + warn_report("unexpected DM_VERSION_REQUEST in %d state", + balloon->state); + return; + } + + if (!hv_balloon_handle_msg_size(req, sizeof(*msgVr), + "DM_VERSION_REQUEST")) { + return; + } + + trace_hv_balloon_incoming_version(msgVr->version.major_version, + msgVr->version.minor_version); + + memset(&respVr, 0, sizeof(respVr)); + respVr.hdr.type = DM_VERSION_RESPONSE; + respVr.hdr.size = sizeof(respVr); + respVr.hdr.trans_id = msgVr->hdr.trans_id; + respVr.is_accepted = msgVr->version.version >= DYNMEM_PROTOCOL_VERSION_1 && + msgVr->version.version <= DYNMEM_PROTOCOL_VERSION_3; + + hv_balloon_send_packet(vmreq->chan, (struct dm_message *)&respVr); + + if (respVr.is_accepted) { + HV_BALLOON_STATE_DESC_SET(stdesc, S_CAPS); + } +} + +static void hv_balloon_handle_caps_report(HvBalloon *balloon, + HvBalloonReq *req, + StateDesc *stdesc) +{ + VMBusChanReq *vmreq = &req->vmreq; + struct dm_capabilities *msgCap = vmreq->msg; + struct dm_capabilities_resp_msg respCap; + + if (balloon->state != S_CAPS) { + warn_report("unexpected DM_CAPABILITIES_REPORT in %d state", + balloon->state); + return; + } + + if (!hv_balloon_handle_msg_size(req, sizeof(*msgCap), + "DM_CAPABILITIES_REPORT")) { + return; + } + + trace_hv_balloon_incoming_caps(msgCap->caps.caps); + balloon->caps = msgCap->caps; + + memset(&respCap, 0, sizeof(respCap)); + respCap.hdr.type = DM_CAPABILITIES_RESPONSE; + respCap.hdr.size = sizeof(respCap); + respCap.hdr.trans_id = msgCap->hdr.trans_id; + respCap.is_accepted = 1; + respCap.hot_remove = 1; + respCap.suppress_pressure_reports = !balloon->status_reports; + hv_balloon_send_packet(vmreq->chan, (struct dm_message *)&respCap); + + if (balloon->caps.cap_bits.hot_add) { + ha_todo_add_all(balloon); + } + + timer_mod(&balloon->post_init_timer, + qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + + HV_BALLOON_POST_INIT_WAIT); + + HV_BALLOON_STATE_DESC_SET(stdesc, S_POST_INIT_WAIT); +} + +static void hv_balloon_handle_status_report(HvBalloon *balloon, + HvBalloonReq *req) +{ + VMBusChanReq *vmreq = &req->vmreq; + struct dm_status *msgStatus = vmreq->msg; + + if (!hv_balloon_handle_msg_size(req, sizeof(*msgStatus), + "DM_STATUS_REPORT")) { + return; + } + + if (!balloon->status_reports) { + return; + } + + qapi_event_send_hv_balloon_status_report((uint64_t)msgStatus->num_committed * + HV_BALLOON_PAGE_SIZE, + (uint64_t)msgStatus->num_avail * + HV_BALLOON_PAGE_SIZE); +} + +static void hv_balloon_handle_unballoon_response(HvBalloon *balloon, + HvBalloonReq *req, + StateDesc *stdesc) +{ + VMBusChanReq *vmreq = &req->vmreq; + struct dm_unballoon_response *msgUrR = vmreq->msg; + + if (balloon->state != S_UNBALLOON_REPLY_WAIT) { + warn_report("unexpected DM_UNBALLOON_RESPONSE in %d state", + balloon->state); + return; + } + + if (!hv_balloon_handle_msg_size(req, sizeof(*msgUrR), + "DM_UNBALLOON_RESPONSE")) + return; + + trace_hv_balloon_incoming_unballoon(msgUrR->hdr.trans_id); + + balloon->trans_id++; + HV_BALLOON_STATE_DESC_SET(stdesc, S_IDLE); +} + +static void hv_balloon_handle_hot_add_response(HvBalloon *balloon, + HvBalloonReq *req, + StateDesc *stdesc) +{ + VMBusChanReq *vmreq = &req->vmreq; + struct dm_hot_add_response *msgHaR = vmreq->msg; + HAPVDIMMRange *hpr; + + if (balloon->state != S_HOT_ADD_REPLY_WAIT) { + warn_report("unexpected DM_HOT_ADD_RESPONSE in %d state", + balloon->state); + return; + } + + if (!hv_balloon_handle_msg_size(req, sizeof(*msgHaR), + 
"DM_HOT_ADD_RESPONSE")) + return; + + trace_hv_balloon_incoming_hot_add(msgHaR->hdr.trans_id, msgHaR->result, + msgHaR->page_count); + + balloon->trans_id++; + + assert(balloon->ha_todo); + hpr = balloon->ha_todo->data; + + if (msgHaR->result) { + if (msgHaR->page_count > balloon->ha_current_count) { + warn_report("DM_HOT_ADD_RESPONSE page count higher than requested (%"PRIu32" vs %"PRIu64")", + msgHaR->page_count, balloon->ha_current_count); + msgHaR->page_count = balloon->ha_current_count; + } + + hpr->used += msgHaR->page_count; + } + + if (!msgHaR->result || msgHaR->page_count < balloon->ha_current_count) { + if (hpr->used == 0) { + /* + * apparently the guest didn't like the current range at all, + * let's try the next one + */ + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_SKIP_CURRENT); + return; + } + + /* + * the current planned range was only partially hot-added, take note + * how much of it remains and don't attempt any further hot adds + */ + hpr->unused_tail = hpr->range.count - hpr->unused_head - hpr->used; + + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_PROCESSED_CLEAR_PENDING); + return; + } + + /* any pages remaining in this hpr? */ + if (hpr->range.count - hpr->unused_head - hpr->used - + hpr->unused_tail > 0) { + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_RB_WAIT); + } else { + HV_BALLOON_STATE_DESC_SET(stdesc, S_HOT_ADD_PROCESSED_NEXT); + } +} + +static void hv_balloon_handle_balloon_response(HvBalloon *balloon, + HvBalloonReq *req, + StateDesc *stdesc) +{ + VMBusChanReq *vmreq = &req->vmreq; + struct dm_balloon_response *msgBR = vmreq->msg; + + if (balloon->state != S_BALLOON_REPLY_WAIT) { + warn_report("unexpected DM_BALLOON_RESPONSE in %d state", + balloon->state); + return; + } + + if (!hv_balloon_handle_msg_size(req, sizeof(*msgBR), + "DM_BALLOON_RESPONSE")) + return; + + trace_hv_balloon_incoming_balloon(msgBR->hdr.trans_id, msgBR->range_count, + msgBR->more_pages); + + if (vmreq->msglen < sizeof(*msgBR) + + (uint64_t)sizeof(msgBR->range_array[0]) * msgBR->range_count) { + warn_report("DM_BALLOON_RESPONSE too short for the range count"); + return; + } + + if (msgBR->range_count == 0) { + /* The guest is already at its minimum size */ + msgBR->more_pages = 0; + balloon->target_diff = 0; + } else { + hv_balloon_handle_remove_ranges(balloon, + msgBR->range_array, + msgBR->range_count); + } + + if (!msgBR->more_pages) { + balloon->trans_id++; + + if (balloon->target_diff > 0) { + HV_BALLOON_STATE_DESC_SET(stdesc, S_BALLOON_RB_WAIT); + } else { + HV_BALLOON_STATE_DESC_SET(stdesc, S_IDLE); + } + } +} + +static void hv_balloon_handle_packet(HvBalloon *balloon, HvBalloonReq *req, + StateDesc *stdesc) +{ + VMBusChanReq *vmreq = &req->vmreq; + struct dm_message *msg = vmreq->msg; + + if (vmreq->msglen < sizeof(msg->hdr)) { + return; + } + + switch (msg->hdr.type) { + case DM_VERSION_REQUEST: + hv_balloon_handle_version_request(balloon, req, stdesc); + break; + + case DM_CAPABILITIES_REPORT: + hv_balloon_handle_caps_report(balloon, req, stdesc); + break; + + case DM_STATUS_REPORT: + hv_balloon_handle_status_report(balloon, req); + break; + + case DM_MEM_HOT_ADD_RESPONSE: + hv_balloon_handle_hot_add_response(balloon, req, stdesc); + break; + + case DM_UNBALLOON_RESPONSE: + hv_balloon_handle_unballoon_response(balloon, req, stdesc); + break; + + case DM_BALLOON_RESPONSE: + hv_balloon_handle_balloon_response(balloon, req, stdesc); + break; + + default: + warn_report("unknown DM message %u", msg->hdr.type); + break; + } +} + +static bool hv_balloon_recv_channel(HvBalloon 
*balloon, StateDesc *stdesc) +{ + VMBusChannel *chan; + HvBalloonReq *req; + + if (balloon->state == S_WAIT_RESET || + balloon->state == S_CLOSED) { + return false; + } + + chan = hv_balloon_get_channel(balloon); + if (vmbus_channel_recv_start(chan)) { + return false; + } + + while ((req = vmbus_channel_recv_peek(chan, sizeof(*req)))) { + hv_balloon_handle_packet(balloon, req, stdesc); + vmbus_free_req(req); + vmbus_channel_recv_pop(chan); + + if (stdesc->state != S_NO_CHANGE) { + break; + } + } + + return vmbus_channel_recv_done(chan) > 0; +} + +static bool hv_balloon_event_loop_state(HvBalloon *balloon) +{ + StateDesc state_new = HV_BALLOON_STATE_DESC_INIT; + + hv_balloon_handle_state(balloon, &state_new); + return hv_balloon_state_set(balloon, state_new.state, state_new.desc); +} + +static bool hv_balloon_event_loop_recv(HvBalloon *balloon) +{ + StateDesc state_new = HV_BALLOON_STATE_DESC_INIT; + bool any_recv, state_changed; + + any_recv = hv_balloon_recv_channel(balloon, &state_new); + state_changed = hv_balloon_state_set(balloon, + state_new.state, state_new.desc); + + return state_changed || any_recv; +} + +static void hv_balloon_event_loop(HvBalloon *balloon) +{ + bool state_repeat, recv_repeat; + + do { + state_repeat = hv_balloon_event_loop_state(balloon); + recv_repeat = hv_balloon_event_loop_recv(balloon); + } while (state_repeat || recv_repeat); +} + +void qmp_hv_balloon_add_memory(const char *id, Error **errp) +{ + HvBalloon *balloon; + uint64_t align; + g_autofree gchar *align_str = NULL; + g_autoptr(QDict) qdict = NULL; + g_autoptr(DeviceState) dev = NULL; + HAPVDIMMDevice *hapvdimm; + PageRange range; + HAPVDIMMRange *hpr; + + balloon = HV_BALLOON(object_resolve_path_type("", TYPE_HV_BALLOON, NULL)); + if (!balloon) { + error_setg(errp, "no %s device present", TYPE_HV_BALLOON); + return; + } + + if (hv_balloon_state_is_init(balloon)) { + error_setg(errp, "no guest attached to the DM protocol yet"); + return; + } + + if (!balloon->caps.cap_bits.hot_add) { + error_setg(errp, + "the current DM protocol guest has no support for memory hot add"); + return; + } + + /* add device */ + qdict = qdict_new(); + qdict_put_str(qdict, "driver", TYPE_HAPVDIMM); + qdict_put_str(qdict, HAPVDIMM_MEMDEV_PROP, id); + + align = (1 << balloon->caps.cap_bits.hot_add_alignment) * MiB; + align_str = g_strdup_printf("%" PRIu64, align); + qdict_put_str(qdict, HAPVDIMM_ALIGN_PROP, align_str); + + hapvdimm_allow_adding(); + dev = qdev_device_add_from_qdict(qdict, false, errp); + hapvdimm_disallow_adding(); + if (!dev) { + return; + } + + hapvdimm = HAPVDIMM(dev); + + hapvdimm_get_range(hapvdimm, &range); + if (page_range_tree_intree_any(balloon->removed_guest, + range.start, range.count) || + page_range_tree_intree_any(balloon->removed_both, + range.start, range.count)) { + error_setg(errp, + "some of the device new pages were already returned by the guest. 
this should not happen, please reboot the guest and try again"); + return; + } + + trace_hv_balloon_hapvdimm_range_add(range.count, range.start); + + hpr = hapvdimm_tree_insert_new(balloon, hapvdimm); + + balloon->ha_todo = g_slist_append(balloon->ha_todo, hpr); + + hv_balloon_event_loop(balloon); +} + +static void hv_balloon_notify_cb(VMBusChannel *chan) +{ + HvBalloon *balloon = HV_BALLOON(vmbus_channel_device(chan)); + + hv_balloon_event_loop(balloon); +} + +static void hv_balloon_stat(void *opaque, BalloonInfo *info) +{ + HvBalloon *balloon = opaque; + info->actual = (hv_balloon_total_ram(balloon) - balloon->removed_both_ctr) + << HV_BALLOON_PFN_SHIFT; +} + +static void hv_balloon_to_target(void *opaque, ram_addr_t target) +{ + HvBalloon *balloon = opaque; + uint64_t target_pages = target >> HV_BALLOON_PFN_SHIFT; + + if (!target_pages) { + return; + } + + /* + * always set target_changed, even with unchanged target, as the user + * might be asking us to try again reaching it + */ + balloon->target = target_pages; + balloon->target_changed = true; + + hv_balloon_event_loop(balloon); +} + +static int hv_balloon_open_channel(VMBusChannel *chan) +{ + HvBalloon *balloon = HV_BALLOON(vmbus_channel_device(chan)); + + if (balloon->state != S_CLOSED) { + warn_report("guest trying to open a DM channel in invalid %d state", + balloon->state); + return -EINVAL; + } + + HV_BALLOON_SET_STATE(balloon, S_VERSION); + hv_balloon_event_loop(balloon); + + return 0; +} + +static void hv_balloon_close_channel(VMBusChannel *chan) +{ + HvBalloon *balloon = HV_BALLOON(vmbus_channel_device(chan)); + + timer_del(&balloon->post_init_timer); + + HV_BALLOON_SET_STATE(balloon, S_WAIT_RESET); + hv_balloon_event_loop(balloon); +} + +static void hv_balloon_post_init_timer(void *opaque) +{ + HvBalloon *balloon = opaque; + + if (balloon->state != S_POST_INIT_WAIT) { + return; + } + + HV_BALLOON_SET_STATE(balloon, S_IDLE); + hv_balloon_event_loop(balloon); +} + +static void hv_balloon_system_reset(void *opaque) +{ + HvBalloon *balloon = HV_BALLOON(opaque); + + if (!balloon->hapvdimms_del_todo) { + return; + } + + if (balloon->del_todo_process_timer) { + return; + } + + balloon->del_todo_process_timer = g_idle_add(del_todo_process_timer, + balloon); +} + +static void hv_balloon_dev_realize(VMBusDevice *vdev, Error **errp) +{ + ERRP_GUARD(); + HvBalloon *balloon = HV_BALLOON(vdev); + int ret; + + /* used by hv_balloon_stat() */ + balloon->hapvdimms = hapvdimm_tree_new(); + balloon->state = S_WAIT_RESET; + + ret = qemu_add_balloon_handler(hv_balloon_to_target, hv_balloon_stat, + balloon); + if (ret < 0) { + /* This also protects against having multiple hv-balloon instances */ + error_setg(errp, "Only one balloon device is supported"); + goto ret_tree; + } + + timer_init_ms(&balloon->post_init_timer, QEMU_CLOCK_VIRTUAL, + hv_balloon_post_init_timer, balloon); + + qemu_register_reset(hv_balloon_system_reset, balloon); + + return; + +ret_tree: + hapvdimm_tree_destroy(&balloon->hapvdimms); +} + +static void hv_balloon_reset_destroy_common(HvBalloon *balloon) +{ + ha_todo_clear(balloon); + del_todo_add_all(balloon); +} + +static void hv_balloon_dev_reset(VMBusDevice *vdev) +{ + HvBalloon *balloon = HV_BALLOON(vdev); + + page_range_tree_destroy(&balloon->removed_guest); + page_range_tree_destroy(&balloon->removed_both); + balloon->removed_guest = page_range_tree_new(); + balloon->removed_both = page_range_tree_new(); + + hv_balloon_reset_destroy_common(balloon); + + balloon->trans_id = 0; + balloon->removed_guest_ctr = 0; + 
balloon->removed_both_ctr = 0; + + HV_BALLOON_SET_STATE(balloon, S_CLOSED); + hv_balloon_event_loop(balloon); +} + +static void hv_balloon_dev_unrealize(VMBusDevice *vdev) +{ + HvBalloon *balloon = HV_BALLOON(vdev); + + qemu_unregister_reset(hv_balloon_system_reset, balloon); + + hv_balloon_reset_destroy_common(balloon); + + del_todo_process(balloon); + assert(!balloon->del_todo_process_timer); + + qemu_remove_balloon_handler(balloon); + + page_range_tree_destroy(&balloon->removed_guest); + page_range_tree_destroy(&balloon->removed_both); + hapvdimm_tree_destroy(&balloon->hapvdimms); +} + +static Property hv_balloon_properties[] = { + DEFINE_PROP_BOOL("status-report", HvBalloon, + status_reports, false), + DEFINE_PROP_END_OF_LIST(), +}; + +static void hv_balloon_class_init(ObjectClass *klass, void *data) +{ + DeviceClass *dc = DEVICE_CLASS(klass); + VMBusDeviceClass *vdc = VMBUS_DEVICE_CLASS(klass); + + device_class_set_props(dc, hv_balloon_properties); + qemu_uuid_parse(HV_BALLOON_GUID, &vdc->classid); + set_bit(DEVICE_CATEGORY_MISC, dc->categories); + vdc->vmdev_realize = hv_balloon_dev_realize; + vdc->vmdev_unrealize = hv_balloon_dev_unrealize; + vdc->vmdev_reset = hv_balloon_dev_reset; + vdc->open_channel = hv_balloon_open_channel; + vdc->close_channel = hv_balloon_close_channel; + vdc->chan_notify_cb = hv_balloon_notify_cb; +} + +static const TypeInfo hv_balloon_type_info = { + .name = TYPE_HV_BALLOON, + .parent = TYPE_VMBUS_DEVICE, + .instance_size = sizeof(HvBalloon), + .class_init = hv_balloon_class_init, +}; + +static void hv_balloon_register_types(void) +{ + type_register_static(&hv_balloon_type_info); +} + +type_init(hv_balloon_register_types) diff --git a/hw/hyperv/meson.build b/hw/hyperv/meson.build index b43f119ea5..212e0ce51e 100644 --- a/hw/hyperv/meson.build +++ b/hw/hyperv/meson.build @@ -2,3 +2,4 @@ specific_ss.add(when: 'CONFIG_HYPERV', if_true: files('hyperv.c')) specific_ss.add(when: 'CONFIG_HYPERV_TESTDEV', if_true: files('hyperv_testdev.c')) specific_ss.add(when: 'CONFIG_VMBUS', if_true: files('vmbus.c')) specific_ss.add(when: 'CONFIG_SYNDBG', if_true: files('syndbg.c')) +specific_ss.add(when: 'CONFIG_HV_BALLOON', if_true: files('hv-balloon.c')) diff --git a/hw/hyperv/trace-events b/hw/hyperv/trace-events index b4c35ca8e3..3b98ac3689 100644 --- a/hw/hyperv/trace-events +++ b/hw/hyperv/trace-events @@ -16,3 +16,19 @@ vmbus_gpadl_torndown(uint32_t gpadl_id) "gpadl #%d" vmbus_open_channel(uint32_t chan_id, uint32_t gpadl_id, uint32_t target_vp) "channel #%d gpadl #%d target vp %d" vmbus_channel_open(uint32_t chan_id, uint32_t status) "channel #%d status %d" vmbus_close_channel(uint32_t chan_id) "channel #%d" + +# hv-balloon +hv_balloon_state_change(const char *tostr) "-> %s" +hv_balloon_incoming_version(uint16_t major, uint16_t minor) "incoming proto version %u.%u" +hv_balloon_incoming_caps(uint32_t caps) "incoming caps 0x%x" +hv_balloon_outgoing_unballoon(uint32_t trans_id, uint64_t count, uint64_t start, uint64_t rempages) "posting unballoon %"PRIu32" for %"PRIu64" @ 0x%"PRIx64", remaining %"PRIu64 +hv_balloon_incoming_unballoon(uint32_t trans_id) "incoming unballoon response %"PRIu32 +hv_balloon_outgoing_hot_add(uint32_t trans_id, uint64_t count, uint64_t start) "posting hot add %"PRIu32" for %"PRIu64" @ 0x%"PRIx64 +hv_balloon_incoming_hot_add(uint32_t trans_id, uint32_t result, uint32_t count) "incoming hot add response %"PRIu32", result %"PRIu32", count %"PRIu32 +hv_balloon_outgoing_balloon(uint32_t trans_id, uint64_t count, uint64_t rempages) "posting balloon 
%"PRIu32" for %"PRIu64", remaining %"PRIu64 +hv_balloon_incoming_balloon(uint32_t trans_id, uint32_t range_count, uint32_t more_pages) "incoming balloon response %"PRIu32", ranges %"PRIu32", more %"PRIu32 +hv_balloon_hapvdimm_range_add(uint64_t count, uint64_t start) "adding hapvdimm range %"PRIu64" @ 0x%"PRIx64 +hv_balloon_remove_response(uint64_t count, uint64_t start, unsigned int both) "processing remove response range %"PRIu64" @ 0x%"PRIx64", both %u" +hv_balloon_remove_response_hole(uint64_t counthole, uint64_t starthole, uint64_t countrange, uint64_t startrange, uint64_t starthpr, unsigned int both) "response range hole %"PRIu64" @ 0x%"PRIx64" from range %"PRIu64" @ 0x%"PRIx64", before hpr start 0x%"PRIx64", both %u" +hv_balloon_remove_response_common(uint64_t countcommon, uint64_t startcommon, uint64_t countrange, uint64_t startrange, uint64_t counthpr, uint64_t starthpr, uint64_t removed, unsigned int both) "response common range %"PRIu64" @ 0x%"PRIx64" from range %"PRIu64" @ 0x%"PRIx64" with hpr %"PRIu64" @ 0x%"PRIx64", removed %"PRIu64", both %u" +hv_balloon_remove_response_remainder(uint64_t count, uint64_t start, unsigned int both) "remove response remaining range %"PRIu64" @ 0x%"PRIx64", both %u" diff --git a/meson.build b/meson.build index 4c6f8a674a..ede2500f8a 100644 --- a/meson.build +++ b/meson.build @@ -2504,7 +2504,8 @@ host_kconfig = \ ('CONFIG_LINUX' in config_host ? ['CONFIG_LINUX=y'] : []) + \ (have_pvrdma ? ['CONFIG_PVRDMA=y'] : []) + \ (multiprocess_allowed ? ['CONFIG_MULTIPROCESS_ALLOWED=y'] : []) + \ - (vfio_user_server_allowed ? ['CONFIG_VFIO_USER_SERVER_ALLOWED=y'] : []) + (vfio_user_server_allowed ? ['CONFIG_VFIO_USER_SERVER_ALLOWED=y'] : []) + \ + ('CONFIG_HV_BALLOON_POSSIBLE' in config_host ? ['CONFIG_HV_BALLOON_POSSIBLE=y'] : []) ignored = [ 'TARGET_XML_FILES', 'TARGET_ABI_DIR', 'TARGET_ARCH' ] @@ -3975,6 +3976,7 @@ summary_info += {'libudev': libudev} # Dummy dependency, keep .found() summary_info += {'FUSE lseek': fuse_lseek.found()} summary_info += {'selinux': selinux} +summary_info += {'hv-balloon support': config_host.has_key('CONFIG_HV_BALLOON_POSSIBLE')} summary(summary_info, bool_yn: true, section: 'Dependencies') if not supported_cpus.contains(cpu) diff --git a/qapi/machine.json b/qapi/machine.json index b9228a5e46..04ff95337a 100644 --- a/qapi/machine.json +++ b/qapi/machine.json @@ -1104,6 +1104,74 @@ { 'event': 'BALLOON_CHANGE', 'data': { 'actual': 'int' } } +## +# @hv-balloon-add-memory: +# +# Hot-add memory backend via Hyper-V Dynamic Memory Protocol. +# +# @id: the name of the memory backend object to hot-add +# +# Returns: Nothing on success +# Error if there's no guest connected with hot-add capability, +# @id is not a valid memory backend or it's already in use. +# +# Since: TBD +# +# Example: +# +# -> { "execute": "hv-balloon-add-memory", "arguments": { "id": "mb1" } } +# <- { "return": {} } +# +## +{ 'command': 'hv-balloon-add-memory', 'data': {'id': 'str'} } + +## +# @HV_BALLOON_STATUS_REPORT: +# +# Emitted when the hv-balloon driver receives a "STATUS" message from +# the guest. +# +# @commited: the amount of memory in use inside the guest plus the amount +# of the memory unusable inside the guest (ballooned out, +# offline, etc.) 
+# +# @available: the amount of the memory inside the guest available for new +# allocations ("free") +# +# Since: TBD +# +# Example: +# +# <- { "event": "HV_BALLOON_STATUS_REPORT", +# "data": { "commited": 816640000, "available": 3333054464 }, +# "timestamp": { "seconds": 1600295492, "microseconds": 661044 } } +# +## +{ 'event': 'HV_BALLOON_STATUS_REPORT', + 'data': { 'commited': 'size', 'available': 'size' } } + +## +# @HV_BALLOON_MEMORY_BACKEND_UNUSED: +# +# Emitted when the hv-balloon driver marks a memory backend object +# unused so it can now be removed, if required. +# +# This can happen because the VM was restarted. +# +# @id: the memory backend object id +# +# Since: TBD +# +# Example: +# +# <- { "event": "HV_BALLOON_MEMORY_BACKEND_UNUSED", +# "data": { "id": "mb1" }, +# "timestamp": { "seconds": 1600295492, "microseconds": 661044 } } +# +## +{ 'event': 'HV_BALLOON_MEMORY_BACKEND_UNUSED', + 'data': { 'id': 'str' } } + ## # @MemoryInfo: #