From patchwork Thu Feb 29 06:36:29 2024
X-Patchwork-Submitter: Xiaoyao Li
X-Patchwork-Id: 13576522
From: Xiaoyao Li
To: Paolo Bonzini, David Hildenbrand, Igor Mammedov, Eduardo Habkost,
    Marcel Apfelbaum, Philippe Mathieu-Daudé, Yanan Wang, "Michael S.
Tsirkin" , Richard Henderson , Ani Sinha , Peter Xu , Cornelia Huck , =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= , Eric Blake , Markus Armbruster , Marcelo Tosatti Cc: kvm@vger.kernel.org, qemu-devel@nongnu.org, Michael Roth , Claudio Fontana , Gerd Hoffmann , Isaku Yamahata , Chenyi Qiang , xiaoyao.li@intel.com Subject: [PATCH v5 08/65] kvm: handle KVM_EXIT_MEMORY_FAULT Date: Thu, 29 Feb 2024 01:36:29 -0500 Message-Id: <20240229063726.610065-9-xiaoyao.li@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240229063726.610065-1-xiaoyao.li@intel.com> References: <20240229063726.610065-1-xiaoyao.li@intel.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Chao Peng When geeting KVM_EXIT_MEMORY_FAULT exit, it indicates userspace needs to do the memory conversion on the RAMBlock to turn the memory into desired attribute, i.e., private/shared. Currently only KVM_MEMORY_EXIT_FLAG_PRIVATE in flags is valid when KVM_EXIT_MEMORY_FAULT happens. Note, KVM_EXIT_MEMORY_FAULT makes sense only when the RAMBlock has guest_memfd memory backend. Note, KVM_EXIT_MEMORY_FAULT returns with -EFAULT, so special handling is added. When page is converted from shared to private, the original shared memory can be discarded via ram_block_discard_range(). Note, shared memory can be discarded only when it's not back'ed by hugetlb because hugetlb is supposed to be pre-allocated and no need for discarding. Signed-off-by: Chao Peng Co-developed-by: Xiaoyao Li Signed-off-by: Xiaoyao Li Reviewed-by: Isaku Yamahata --- Changes in v4: - open-coded ram_block_discard logic; - change warn_report() to error_report(); (Daniel) --- accel/kvm/kvm-all.c | 94 ++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 84 insertions(+), 10 deletions(-) diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c index 70d482a2c936..87e4275932a7 100644 --- a/accel/kvm/kvm-all.c +++ b/accel/kvm/kvm-all.c @@ -2903,6 +2903,68 @@ static void kvm_eat_signals(CPUState *cpu) } while (sigismember(&chkset, SIG_IPI)); } +static int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private) +{ + MemoryRegionSection section; + ram_addr_t offset; + MemoryRegion *mr; + RAMBlock *rb; + void *addr; + int ret = -1; + + if (!QEMU_PTR_IS_ALIGNED(start, qemu_host_page_size) || + !QEMU_PTR_IS_ALIGNED(size, qemu_host_page_size)) { + return -1; + } + + if (!size) { + return -1; + } + + section = memory_region_find(get_system_memory(), start, size); + mr = section.mr; + if (!mr) { + return -1; + } + + if (memory_region_has_guest_memfd(mr)) { + if (to_private) { + ret = kvm_set_memory_attributes_private(start, size); + } else { + ret = kvm_set_memory_attributes_shared(start, size); + } + + if (ret) { + memory_region_unref(section.mr); + return ret; + } + + addr = memory_region_get_ram_ptr(mr) + section.offset_within_region; + rb = qemu_ram_block_from_host(addr, false, &offset); + + if (to_private) { + if (rb->page_size != qemu_host_page_size) { + /* + * shared memory is back'ed by hugetlb, which is supposed to be + * pre-allocated and doesn't need to be discarded + */ + return 0; + } else { + ret = ram_block_discard_range(rb, offset, size); + } + } else { + ret = ram_block_discard_guest_memfd_range(rb, offset, size); + } + } else { + error_report("Convert non guest_memfd backed memory region " + "(0x%"HWADDR_PRIx" ,+ 0x%"HWADDR_PRIx") to %s", + start, size, to_private ? 
"private" : "shared"); + } + + memory_region_unref(section.mr); + return ret; +} + int kvm_cpu_exec(CPUState *cpu) { struct kvm_run *run = cpu->kvm_run; @@ -2970,18 +3032,20 @@ int kvm_cpu_exec(CPUState *cpu) ret = EXCP_INTERRUPT; break; } - fprintf(stderr, "error: kvm run failed %s\n", - strerror(-run_ret)); + if (!(run_ret == -EFAULT && run->exit_reason == KVM_EXIT_MEMORY_FAULT)) { + fprintf(stderr, "error: kvm run failed %s\n", + strerror(-run_ret)); #ifdef TARGET_PPC - if (run_ret == -EBUSY) { - fprintf(stderr, - "This is probably because your SMT is enabled.\n" - "VCPU can only run on primary threads with all " - "secondary threads offline.\n"); - } + if (run_ret == -EBUSY) { + fprintf(stderr, + "This is probably because your SMT is enabled.\n" + "VCPU can only run on primary threads with all " + "secondary threads offline.\n"); + } #endif - ret = -1; - break; + ret = -1; + break; + } } trace_kvm_run_exit(cpu->cpu_index, run->exit_reason); @@ -3064,6 +3128,16 @@ int kvm_cpu_exec(CPUState *cpu) break; } break; + case KVM_EXIT_MEMORY_FAULT: + if (run->memory_fault.flags & ~KVM_MEMORY_EXIT_FLAG_PRIVATE) { + error_report("KVM_EXIT_MEMORY_FAULT: Unknown flag 0x%" PRIx64, + (uint64_t)run->memory_fault.flags); + ret = -1; + break; + } + ret = kvm_convert_memory(run->memory_fault.gpa, run->memory_fault.size, + run->memory_fault.flags & KVM_MEMORY_EXIT_FLAG_PRIVATE); + break; default: ret = kvm_arch_handle_exit(cpu, run); break;