From patchwork Mon Feb 17 08:18:21 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chenyi Qiang
X-Patchwork-Id: 13977268
From: Chenyi Qiang
To: David Hildenbrand, Alexey Kardashevskiy, Peter Xu, Paolo Bonzini,
    Philippe Mathieu-Daudé, Michael Roth
Cc: Chenyi Qiang, qemu-devel@nongnu.org, kvm@vger.kernel.org,
    Williams Dan J, Peng Chao P, Gao Chao, Xu Yilun, Li Xiaoyao
Subject: [PATCH v2 2/6] memory: Change memory_region_set_ram_discard_manager() to return the result
Date: Mon, 17 Feb 2025 16:18:21 +0800
Message-ID: <20250217081833.21568-3-chenyi.qiang@intel.com>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20250217081833.21568-1-chenyi.qiang@intel.com>
References: <20250217081833.21568-1-chenyi.qiang@intel.com>

Modify memory_region_set_ram_discard_manager() to return an error (-1)
if a RamDiscardManager is already set in the MemoryRegion. The caller
must handle this failure, e.g. virtio-mem has to undo its previous
actions and fail realize(). Opportunistically move the call earlier in
virtio_mem_device_realize() to avoid complex error handling.

This change is useful when introducing a new RamDiscardManager instance
besides virtio-mem: after ram_block_coordinated_discard_require(true)
unlocks all RamDiscardManager instances, only one instance is allowed
to be set for a MemoryRegion at present.

Suggested-by: David Hildenbrand
Signed-off-by: Chenyi Qiang
---
Changes in v2:
    - newly added.
---
 hw/virtio/virtio-mem.c | 30 +++++++++++++++++-------------
 include/exec/memory.h  |  6 +++---
 system/memory.c        | 11 ++++++++---
 3 files changed, 28 insertions(+), 19 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 21f16e4912..ef818a2cdf 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -1074,6 +1074,18 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
                         vmem->block_size;
     vmem->bitmap = bitmap_new(vmem->bitmap_size);
 
+    /*
+     * Set ourselves as RamDiscardManager before the plug handler maps the
+     * memory region and exposes it via an address space.
+     */
+    if (memory_region_set_ram_discard_manager(&vmem->memdev->mr,
+                                              RAM_DISCARD_MANAGER(vmem))) {
+        error_setg(errp, "Failed to set RamDiscardManager");
+        g_free(vmem->bitmap);
+        ram_block_coordinated_discard_require(false);
+        return;
+    }
+
     virtio_init(vdev, VIRTIO_ID_MEM, sizeof(struct virtio_mem_config));
     vmem->vq = virtio_add_queue(vdev, 128, virtio_mem_handle_request);
 
@@ -1124,13 +1136,6 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
     vmem->system_reset = VIRTIO_MEM_SYSTEM_RESET(obj);
     vmem->system_reset->vmem = vmem;
     qemu_register_resettable(obj);
-
-    /*
-     * Set ourselves as RamDiscardManager before the plug handler maps the
-     * memory region and exposes it via an address space.
-     */
-    memory_region_set_ram_discard_manager(&vmem->memdev->mr,
-                                          RAM_DISCARD_MANAGER(vmem));
 }
 
 static void virtio_mem_device_unrealize(DeviceState *dev)
@@ -1138,12 +1143,6 @@ static void virtio_mem_device_unrealize(DeviceState *dev)
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VirtIOMEM *vmem = VIRTIO_MEM(dev);
 
-    /*
-     * The unplug handler unmapped the memory region, it cannot be
-     * found via an address space anymore. Unset ourselves.
-     */
-    memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL);
-
     qemu_unregister_resettable(OBJECT(vmem->system_reset));
     object_unref(OBJECT(vmem->system_reset));
 
@@ -1155,6 +1154,11 @@ static void virtio_mem_device_unrealize(DeviceState *dev)
     host_memory_backend_set_mapped(vmem->memdev, false);
     virtio_del_queue(vdev, 0);
     virtio_cleanup(vdev);
+    /*
+     * The unplug handler unmapped the memory region, it cannot be
+     * found via an address space anymore. Unset ourselves.
+     */
+    memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL);
     g_free(vmem->bitmap);
     ram_block_coordinated_discard_require(false);
 }
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 3bebc43d59..390477b588 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -2487,13 +2487,13 @@ static inline bool memory_region_has_ram_discard_manager(MemoryRegion *mr)
  *
  * This function must not be called for a mapped #MemoryRegion, a #MemoryRegion
  * that does not cover RAM, or a #MemoryRegion that already has a
- * #RamDiscardManager assigned.
+ * #RamDiscardManager assigned. Return 0 if the rdm is set successfully.
  *
  * @mr: the #MemoryRegion
  * @rdm: #RamDiscardManager to set
  */
-void memory_region_set_ram_discard_manager(MemoryRegion *mr,
-                                           RamDiscardManager *rdm);
+int memory_region_set_ram_discard_manager(MemoryRegion *mr,
+                                          RamDiscardManager *rdm);
 
 /**
  * memory_region_find: translate an address/size relative to a
diff --git a/system/memory.c b/system/memory.c
index b17b5538ff..297a3dbcd4 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -2115,12 +2115,17 @@ RamDiscardManager *memory_region_get_ram_discard_manager(MemoryRegion *mr)
     return mr->rdm;
 }
 
-void memory_region_set_ram_discard_manager(MemoryRegion *mr,
-                                           RamDiscardManager *rdm)
+int memory_region_set_ram_discard_manager(MemoryRegion *mr,
+                                          RamDiscardManager *rdm)
 {
     g_assert(memory_region_is_ram(mr));
-    g_assert(!rdm || !mr->rdm);
+    if (mr->rdm && rdm != NULL) {
+        return -1;
+    }
+
+    /* !rdm || !mr->rdm */
     mr->rdm = rdm;
+    return 0;
 }
 
 uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *rdm,
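
P.S.: For reviewers reading the interface change in isolation, here is a
minimal caller-side sketch (illustrative only, not part of this patch) of
how a new RamDiscardManager user is expected to consume the changed return
value. The helper name install_ram_discard_manager() is made up; the QEMU
headers and APIs it uses are the real ones touched by this series.

/*
 * Illustrative sketch only: handle the new return value of
 * memory_region_set_ram_discard_manager().  install_ram_discard_manager()
 * is a hypothetical helper, not part of this patch.
 */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "exec/memory.h"

static bool install_ram_discard_manager(MemoryRegion *mr,
                                        RamDiscardManager *rdm,
                                        Error **errp)
{
    /*
     * Must happen before the region is mapped into an address space;
     * fails if another RamDiscardManager (e.g. virtio-mem) is already
     * set for this memory region.
     */
    if (memory_region_set_ram_discard_manager(mr, rdm)) {
        error_setg(errp, "memory region already has a RamDiscardManager");
        return false;
    }
    return true;
}

virtio-mem follows the same pattern in virtio_mem_device_realize() above,
additionally freeing its bitmap and dropping the coordinated-discard
requirement on failure.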