From patchwork Fri Mar 1 17:28:43 2024
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org,
    Sean Christopherson, Paolo Bonzini, Michael Roth, David Matlack, Federico Parola
Subject: [RFC PATCH 1/8] KVM: Document KVM_MAP_MEMORY ioctl
Date: Fri, 1 Mar 2024 09:28:43 -0800

From: Isaku Yamahata

Adds documentation of
the KVM_MAP_MEMORY ioctl.  It pre-populates guest memory and, depending on the
underlying technology, can also initialize the memory contents with encryption
and measurement.

Suggested-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
 Documentation/virt/kvm/api.rst | 36 ++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 0b5a33ee71ee..33d2b63f7dbf 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6352,6 +6352,42 @@ a single guest_memfd file, but the bound ranges must not overlap).
 
 See KVM_SET_USER_MEMORY_REGION2 for additional details.
 
+4.143 KVM_MAP_MEMORY
+------------------------
+
+:Capability: KVM_CAP_MAP_MEMORY
+:Architectures: none
+:Type: vcpu ioctl
+:Parameters: struct kvm_memory_mapping (in/out)
+:Returns: 0 on success, <0 on error
+
+KVM_MAP_MEMORY populates guest memory without running the vcpu.
+
+::
+
+  struct kvm_memory_mapping {
+	__u64 base_gfn;
+	__u64 nr_pages;
+	__u64 flags;
+	__u64 source;
+  };
+
+  /* For kvm_memory_mapping::flags */
+  #define KVM_MEMORY_MAPPING_FLAG_WRITE   _BITULL(0)
+  #define KVM_MEMORY_MAPPING_FLAG_EXEC    _BITULL(1)
+  #define KVM_MEMORY_MAPPING_FLAG_USER    _BITULL(2)
+  #define KVM_MEMORY_MAPPING_FLAG_PRIVATE _BITULL(3)
+
+KVM_MAP_MEMORY populates guest memory in the underlying mapping.  If source is
+non-zero and the underlying technology supports it, the guest memory content is
+populated from source.  The flags field accepts KVM_MEMORY_MAPPING_FLAG_WRITE,
+KVM_MEMORY_MAPPING_FLAG_EXEC, and KVM_MEMORY_MAPPING_FLAG_USER, which correspond
+to the write, fetch, and user bits of the KVM page fault error code used to
+populate the guest memory.  On return, the structure is updated.  If nr_pages
+is large, the ioctl may return -EAGAIN with base_gfn, nr_pages (and source if
+non-zero) updated to point to the remaining range.
+
 5. The kvm_run structure
 ========================
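
To make the ABI above concrete, a minimal userspace sketch of driving the ioctl
could look as follows.  This is illustrative only and not part of the patch;
the vcpu file descriptor, the chosen GFN range, and the EINTR handling are
assumptions, while the -EAGAIN retry follows the semantics documented above.

  #include <errno.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Pre-populate 512 guest pages starting at GFN 0x100000 (example values). */
  static int prepopulate_range(int vcpu_fd)
  {
	struct kvm_memory_mapping mapping = {
		.base_gfn = 0x100000,
		.nr_pages = 512,
		.flags = KVM_MEMORY_MAPPING_FLAG_WRITE,
		.source = 0,		/* no source contents */
	};
	int ret;

	do {
		/*
		 * On -EAGAIN, base_gfn/nr_pages already point to the
		 * remaining range, so retrying continues the work.
		 */
		ret = ioctl(vcpu_fd, KVM_MAP_MEMORY, &mapping);
	} while (ret < 0 && (errno == EAGAIN || errno == EINTR));

	return ret < 0 ? -errno : 0;
  }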

From patchwork Fri Mar 1 17:28:44 2024
From: isaku.yamahata@intel.com
Subject: [RFC PATCH 2/8] KVM: Add KVM_MAP_MEMORY vcpu ioctl to pre-populate guest memory
Date: Fri, 1 Mar 2024 09:28:44 -0800
Message-Id: <012b59708114ba121735769de94756fa5af3204d.1709288671.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

Add a new ioctl, KVM_MAP_MEMORY, in the KVM common code.  It iterates over the
memory range and calls an arch-specific function.  Add the stub functions as
weak symbols.

[1] https://lore.kernel.org/kvm/Zbrj5WKVgMsUFDtb@google.com/

Suggested-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
 include/linux/kvm_host.h |  4 +++
 include/uapi/linux/kvm.h | 15 ++++++++
 virt/kvm/kvm_main.c      | 74 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 93 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9807ea98b568..afbed288d625 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2445,4 +2445,8 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 }
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
+int kvm_arch_vcpu_pre_map_memory(struct kvm_vcpu *vcpu);
+int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
+			     struct kvm_memory_mapping *mapping);
+
 #endif
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2190adbe3002..f5d6b481244f 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -917,6 +917,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_MEMORY_ATTRIBUTES 233
 #define KVM_CAP_GUEST_MEMFD 234
 #define KVM_CAP_VM_TYPES 235
+#define KVM_CAP_MAP_MEMORY 236
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1548,4 +1549,18 @@ struct kvm_create_guest_memfd {
 	__u64 reserved[6];
 };
 
+#define KVM_MAP_MEMORY	_IOWR(KVMIO, 0xd5, struct kvm_memory_mapping)
+
+#define KVM_MEMORY_MAPPING_FLAG_WRITE	_BITULL(0)
+#define KVM_MEMORY_MAPPING_FLAG_EXEC	_BITULL(1)
+#define KVM_MEMORY_MAPPING_FLAG_USER	_BITULL(2)
+#define KVM_MEMORY_MAPPING_FLAG_PRIVATE	_BITULL(3)
+
+struct kvm_memory_mapping {
+	__u64 base_gfn;
+	__u64 nr_pages;
+	__u64 flags;
+	__u64 source;
+};
+
 #endif /* __LINUX_KVM_H */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d1fd9cb5d037..d77c9b79d76b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4419,6 +4419,69 @@ static int kvm_vcpu_ioctl_get_stats_fd(struct kvm_vcpu *vcpu)
 	return fd;
 }
 
+__weak int kvm_arch_vcpu_pre_map_memory(struct kvm_vcpu *vcpu)
+{
+	return -EOPNOTSUPP;
+}
+
+__weak int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
+				    struct kvm_memory_mapping *mapping)
+{
+	return -EOPNOTSUPP;
+}
+
+static int kvm_vcpu_map_memory(struct kvm_vcpu *vcpu,
+			       struct kvm_memory_mapping *mapping)
+{
+	bool added = false;
+	int idx, r = 0;
+
+	if (mapping->flags & ~(KVM_MEMORY_MAPPING_FLAG_WRITE |
+			       KVM_MEMORY_MAPPING_FLAG_EXEC |
+			       KVM_MEMORY_MAPPING_FLAG_USER |
+			       KVM_MEMORY_MAPPING_FLAG_PRIVATE))
+		return -EINVAL;
+	if ((mapping->flags & KVM_MEMORY_MAPPING_FLAG_PRIVATE) &&
+	    !kvm_arch_has_private_mem(vcpu->kvm))
+		return -EINVAL;
+
+	/* Sanity check */
+	if (!IS_ALIGNED(mapping->source, PAGE_SIZE) ||
+	    !mapping->nr_pages ||
+	    mapping->base_gfn + mapping->nr_pages <= mapping->base_gfn)
+		return -EINVAL;
+
+	vcpu_load(vcpu);
+	idx = srcu_read_lock(&vcpu->kvm->srcu);
+	r = kvm_arch_vcpu_pre_map_memory(vcpu);
+	if (r)
+		return r;
+
+	while (mapping->nr_pages) {
+		if (signal_pending(current)) {
+			r = -ERESTARTSYS;
+			break;
+		}
+
+		if (need_resched())
+			cond_resched();
+
+		r = kvm_arch_vcpu_map_memory(vcpu, mapping);
+		if (r)
+			break;
+
+		added = true;
+	}
+
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+	vcpu_put(vcpu);
+
+	if (added && mapping->nr_pages > 0)
+		r = -EAGAIN;
+
+	return r;
+}
+
 static long kvm_vcpu_ioctl(struct file *filp,
 			   unsigned int ioctl, unsigned long arg)
 {
@@ -4620,6 +4683,17 @@ static long kvm_vcpu_ioctl(struct file *filp,
 		r = kvm_vcpu_ioctl_get_stats_fd(vcpu);
 		break;
 	}
+	case KVM_MAP_MEMORY: {
+		struct kvm_memory_mapping mapping;
+
+		r = -EFAULT;
+		if (copy_from_user(&mapping, argp, sizeof(mapping)))
+			break;
+		r = kvm_vcpu_map_memory(vcpu, &mapping);
+		if (copy_to_user(argp, &mapping, sizeof(mapping)))
+			r = -EFAULT;
+		break;
+	}
 	default:
 		r = kvm_arch_vcpu_ioctl(filp, ioctl, arg);
 	}
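
To make the contract of the weak hooks concrete, below is a schematic of what
an arch implementation of kvm_arch_vcpu_map_memory() is expected to do; the
helper map_one_chunk() is a placeholder, not code from this series (x86's real
implementation appears later in the series).  The key point is that the callee
must make forward progress by advancing base_gfn/nr_pages (and source, if
used), since the common loop keeps calling it until nr_pages reaches zero.

  /* Schematic arch implementation; map_one_chunk() is hypothetical. */
  int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
			       struct kvm_memory_mapping *mapping)
  {
	u64 mapped;
	int r;

	/* Map some leading portion of [base_gfn, base_gfn + nr_pages). */
	r = map_one_chunk(vcpu, mapping, &mapped);
	if (r)
		return r;

	/*
	 * Report progress back so the common loop (and userspace, on
	 * -EAGAIN) can see how much of the range remains.
	 */
	mapping->base_gfn += mapped;
	mapping->nr_pages -= mapped;
	if (mapping->source)
		mapping->source += mapped * PAGE_SIZE;
	return 0;
  }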

From patchwork Fri Mar 1 17:28:45 2024
From: isaku.yamahata@intel.com
Subject: [RFC PATCH 3/8] KVM: x86/mmu: Introduce initializer macro for struct kvm_page_fault
Date: Fri, 1 Mar 2024 09:28:45 -0800

From: Isaku Yamahata

Another function will initialize struct kvm_page_fault.  Add an initializer
macro to unify the initialization of the big struct.

No functional change intended.

Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu_internal.h | 44 +++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 0669a8a668ca..72ef09fc9322 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -279,27 +279,35 @@ enum {
 	RET_PF_SPURIOUS,
 };
 
+#define KVM_PAGE_FAULT_INIT(_vcpu, _cr2_or_gpa, _err, _prefetch, _max_level) {	\
+	.addr = (_cr2_or_gpa),							\
+	.error_code = (_err),							\
+	.exec = (_err) & PFERR_FETCH_MASK,					\
+	.write = (_err) & PFERR_WRITE_MASK,					\
+	.present = (_err) & PFERR_PRESENT_MASK,					\
+	.rsvd = (_err) & PFERR_RSVD_MASK,					\
+	.user = (_err) & PFERR_USER_MASK,					\
+	.prefetch = (_prefetch),						\
+	.is_tdp =								\
+	likely((_vcpu)->arch.mmu->page_fault == kvm_tdp_page_fault),		\
+	.nx_huge_page_workaround_enabled =					\
+	is_nx_huge_page_enabled((_vcpu)->kvm),					\
+										\
+	.max_level = (_max_level),						\
+	.req_level = PG_LEVEL_4K,						\
+	.goal_level = PG_LEVEL_4K,						\
+	.is_private =								\
+	kvm_mem_is_private((_vcpu)->kvm, (_cr2_or_gpa) >> PAGE_SHIFT),		\
+										\
+	.pfn = KVM_PFN_ERR_FAULT,						\
+	.hva = KVM_HVA_ERR_BAD, }
+
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 					u32 err, bool prefetch, int *emulation_type)
 {
-	struct kvm_page_fault fault = {
-		.addr = cr2_or_gpa,
-		.error_code = err,
-		.exec = err & PFERR_FETCH_MASK,
-		.write = err & PFERR_WRITE_MASK,
-		.present = err & PFERR_PRESENT_MASK,
-		.rsvd = err & PFERR_RSVD_MASK,
-		.user = err & PFERR_USER_MASK,
-		.prefetch = prefetch,
-		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
-		.nx_huge_page_workaround_enabled =
-			is_nx_huge_page_enabled(vcpu->kvm),
-
-		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
-		.req_level = PG_LEVEL_4K,
-		.goal_level = PG_LEVEL_4K,
-		.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
-	};
+	struct kvm_page_fault fault = KVM_PAGE_FAULT_INIT(vcpu, cr2_or_gpa, err,
+							  prefetch,
+							  KVM_MAX_HUGEPAGE_LEVEL);
 	int r;
 
 	if (vcpu->arch.mmu->root_role.direct) {

From patchwork Fri Mar 1 17:28:46 2024
From: isaku.yamahata@intel.com
Subject: [RFC PATCH 4/8] KVM: x86/mmu: Factor out kvm_mmu_do_page_fault()
Date: Fri, 1 Mar 2024 09:28:46 -0800
Message-Id: <291c6458504aee05af8d6323a6eafbbd155590df.1709288671.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

For an ioctl to pre-populate guest memory, factor kvm_mmu_do_page_fault() into
three parts: initialization of struct kvm_page_fault, the call into the fault
handler, and the surrounding error-check and stats-update logic.  This makes it
possible to implement a wrapper that calls only the fault handler.

Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu_internal.h | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 72ef09fc9322..aac52f0fdf54 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -302,6 +302,24 @@ enum {
 	.pfn = KVM_PFN_ERR_FAULT,						\
 	.hva = KVM_HVA_ERR_BAD, }
 
+static inline int __kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu,
+					  struct kvm_page_fault *fault)
+{
+	int r;
+
+	if (vcpu->arch.mmu->root_role.direct) {
+		fault->gfn = fault->addr >> PAGE_SHIFT;
+		fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
+	}
+
+	if (IS_ENABLED(CONFIG_RETPOLINE) && fault->is_tdp)
+		r = kvm_tdp_page_fault(vcpu, fault);
+	else
+		r = vcpu->arch.mmu->page_fault(vcpu, fault);
+
+	return r;
+}
+
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 					u32 err, bool prefetch, int *emulation_type)
 {
@@ -310,11 +328,6 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 							  KVM_MAX_HUGEPAGE_LEVEL);
 	int r;
 
-	if (vcpu->arch.mmu->root_role.direct) {
-		fault.gfn = fault.addr >> PAGE_SHIFT;
-		fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn);
-	}
-
 	/*
 	 * Async #PF "faults", a.k.a. prefetch faults, are not faults from the
 	 * guest perspective and have already been counted at the time of the
@@ -323,10 +336,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	if (!prefetch)
 		vcpu->stat.pf_taken++;
 
-	if (IS_ENABLED(CONFIG_RETPOLINE) && fault.is_tdp)
-		r = kvm_tdp_page_fault(vcpu, &fault);
-	else
-		r = vcpu->arch.mmu->page_fault(vcpu, &fault);
+	r = __kvm_mmu_do_page_fault(vcpu, &fault);
 
 	if (fault.write_fault_to_shadow_pgtable && emulation_type)
 		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;

From patchwork Fri Mar 1 17:28:47 2024
From: isaku.yamahata@intel.com
Subject: [RFC PATCH 5/8] KVM: x86/mmu: Introduce kvm_mmu_map_page() for prepopulating guest memory
Date: Fri, 1 Mar 2024 09:28:47 -0800
Message-Id: <7b7dd4d56249028aa0b84d439ffdf1b79e67322a.1709288671.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

Introduce a helper function to call the KVM fault handler.  This allows a new
ioctl to invoke the fault handler to populate guest memory without exposing
RET_PF_* enums or other KVM MMU internal definitions to its caller.

Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu.h     |  3 +++
 arch/x86/kvm/mmu/mmu.c | 30 ++++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 60f21bb4c27b..48870c5e08ec 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -183,6 +183,9 @@ static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
 	__kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
 }
 
+int kvm_mmu_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
+		     u8 max_level, u8 *goal_level);
+
 /*
  * Check if a given access (described through the I/D, W/R and U/S bits of a
  * page fault error code pfec) causes a permission fault with the given PTE
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e4cc7f764980..7d5e80d17977 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4659,6 +4659,36 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return direct_page_fault(vcpu, fault);
 }
 
+int kvm_mmu_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
+		     u8 max_level, u8 *goal_level)
+{
+	struct kvm_page_fault fault = KVM_PAGE_FAULT_INIT(vcpu, gpa, error_code,
+							  false, max_level);
+	int r;
+
+	r = __kvm_mmu_do_page_fault(vcpu, &fault);
+
+	if (is_error_noslot_pfn(fault.pfn) || vcpu->kvm->vm_bugged)
+		return -EFAULT;
+
+	switch (r) {
+	case RET_PF_RETRY:
+		return -EAGAIN;
+
+	case RET_PF_FIXED:
+	case RET_PF_SPURIOUS:
+		*goal_level = fault.goal_level;
+		return 0;
+
+	case RET_PF_CONTINUE:
+	case RET_PF_EMULATE:
+	case RET_PF_INVALID:
+	default:
+		return -EIO;
+	}
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_map_page);
+
 static void nonpaging_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = nonpaging_page_fault;

From patchwork Fri Mar 1 17:28:48 2024
From: isaku.yamahata@intel.com
Subject: [RFC PATCH 6/8] KVM: x86: Implement kvm_arch_{, pre_}vcpu_map_memory()
Date: Fri, 1 Mar 2024 09:28:48 -0800
Message-Id: <66a957f4ec4a8591d2ff2550686e361ec648b308.1709288671.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

Wire the KVM_MAP_MEMORY ioctl to kvm_mmu_map_page() to populate guest memory.

Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/x86.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3b8cb69b04fa..6025c0e12d89 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4660,6 +4660,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
 	case KVM_CAP_IRQFD_RESAMPLE:
 	case KVM_CAP_MEMORY_FAULT_INFO:
+	case KVM_CAP_MAP_MEMORY:
 		r = 1;
 		break;
 	case KVM_CAP_EXIT_HYPERCALL:
@@ -5805,6 +5806,54 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 	}
 }
 
+int kvm_arch_vcpu_pre_map_memory(struct kvm_vcpu *vcpu)
+{
+	return kvm_mmu_reload(vcpu);
+}
+
+int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
+			     struct kvm_memory_mapping *mapping)
+{
+	u8 max_level, goal_level = PG_LEVEL_4K;
+	u32 error_code;
+	int r;
+
+	error_code = 0;
+	if (mapping->flags & KVM_MEMORY_MAPPING_FLAG_WRITE)
+		error_code |= PFERR_WRITE_MASK;
+	if (mapping->flags & KVM_MEMORY_MAPPING_FLAG_EXEC)
+		error_code |= PFERR_FETCH_MASK;
+	if (mapping->flags & KVM_MEMORY_MAPPING_FLAG_USER)
+		error_code |= PFERR_USER_MASK;
+	if (mapping->flags & KVM_MEMORY_MAPPING_FLAG_PRIVATE) {
+#ifdef PFERR_PRIVATE_ACCESS
+		error_code |= PFERR_PRIVATE_ACCESS;
+#else
+		return -EOPNOTSUPP;
+#endif
+	}
+
+	if (IS_ALIGNED(mapping->base_gfn, KVM_PAGES_PER_HPAGE(PG_LEVEL_1G)) &&
+	    mapping->nr_pages >= KVM_PAGES_PER_HPAGE(PG_LEVEL_1G))
+		max_level = PG_LEVEL_1G;
+	else if (IS_ALIGNED(mapping->base_gfn, KVM_PAGES_PER_HPAGE(PG_LEVEL_2M)) &&
+		 mapping->nr_pages >= KVM_PAGES_PER_HPAGE(PG_LEVEL_2M))
+		max_level = PG_LEVEL_2M;
+	else
+		max_level = PG_LEVEL_4K;
+
+	r = kvm_mmu_map_page(vcpu, gfn_to_gpa(mapping->base_gfn), error_code,
+			     max_level, &goal_level);
+	if (r)
+		return r;
+
+	if (mapping->source)
+		mapping->source += KVM_HPAGE_SIZE(goal_level);
+	mapping->base_gfn += KVM_PAGES_PER_HPAGE(goal_level);
+	mapping->nr_pages -= KVM_PAGES_PER_HPAGE(goal_level);
+	return r;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
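
As a worked example of the mapping-level selection above (assuming the MMU is
actually able to install the huge mappings): a request with base_gfn = 0x40000
(1GiB-aligned) and nr_pages = 0x40200 first allows max_level = PG_LEVEL_1G; if
goal_level comes back as 1G, one call consumes KVM_PAGES_PER_HPAGE(PG_LEVEL_1G)
= 0x40000 pages, leaving base_gfn = 0x80000 and nr_pages = 0x200.  The next
iteration only qualifies for PG_LEVEL_2M and consumes the remaining 0x200
pages.  If the fault handler can only establish 4KiB mappings (goal_level =
PG_LEVEL_4K), progress is one page per call and the common loop simply iterates
more often.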

From patchwork Fri Mar 1 17:28:49 2024
From: isaku.yamahata@intel.com
Subject: [RFC PATCH 7/8] KVM: x86: Add hooks in kvm_arch_vcpu_map_memory()
Date: Fri, 1 Mar 2024 09:28:49 -0800

From: Isaku Yamahata

In the case of TDX, the memory contents need to be provided to be encrypted
when populating guest memory before running the guest.  Add hooks in
kvm_arch_vcpu_map_memory() for KVM_MAP_MEMORY, invoked before and after
calling kvm_mmu_map_page().  TDX KVM will use the hooks.

Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm-x86-ops.h |  2 ++
 arch/x86/include/asm/kvm_host.h    |  6 ++++++
 arch/x86/kvm/x86.c                 | 34 ++++++++++++++++++++++++++++++
 3 files changed, 42 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 3942b74c1b75..fc4e11d40733 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -137,6 +137,8 @@ KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
 KVM_X86_OP_OPTIONAL(get_untagged_addr)
+KVM_X86_OP_OPTIONAL(pre_mmu_map_page);
+KVM_X86_OP_OPTIONAL(post_mmu_map_page);
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9e7b1a00e265..301fedd6b156 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1805,6 +1805,12 @@ struct kvm_x86_ops {
 	unsigned long (*vcpu_get_apicv_inhibit_reasons)(struct kvm_vcpu *vcpu);
 
 	gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
+
+	int (*pre_mmu_map_page)(struct kvm_vcpu *vcpu,
+				struct kvm_memory_mapping *mapping,
+				u32 *error_code, u8 *max_level);
+	void (*post_mmu_map_page)(struct kvm_vcpu *vcpu,
+				  struct kvm_memory_mapping *mapping);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6025c0e12d89..ba8bf35f1c9a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5811,6 +5811,36 @@ int kvm_arch_vcpu_pre_map_memory(struct kvm_vcpu *vcpu)
 	return kvm_mmu_reload(vcpu);
 }
 
+static int kvm_pre_mmu_map_page(struct kvm_vcpu *vcpu,
+				struct kvm_memory_mapping *mapping,
+				u32 error_code, u8 *max_level)
+{
+	int r = 0;
+
+	if (vcpu->kvm->arch.vm_type == KVM_X86_DEFAULT_VM ||
+	    vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM) {
+		if (mapping->source)
+			r = -EINVAL;
+	} else if (kvm_x86_ops.pre_mmu_map_page)
+		r = static_call(kvm_x86_pre_mmu_map_page)(vcpu, mapping,
+							  &error_code,
+							  max_level);
+	else
+		r = -EOPNOTSUPP;
+
+	return r;
+}
+
+static void kvm_post_mmu_map_page(struct kvm_vcpu *vcpu, struct kvm_memory_mapping *mapping)
+{
+	if (vcpu->kvm->arch.vm_type == KVM_X86_DEFAULT_VM ||
+	    vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM)
+		return;
+
+	if (kvm_x86_ops.post_mmu_map_page)
+		static_call(kvm_x86_post_mmu_map_page)(vcpu, mapping);
+}
+
 int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
 			     struct kvm_memory_mapping *mapping)
 {
@@ -5842,8 +5872,12 @@ int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
 	else
 		max_level = PG_LEVEL_4K;
 
+	r = kvm_pre_mmu_map_page(vcpu, mapping, error_code, &max_level);
+	if (r)
+		return r;
 	r = kvm_mmu_map_page(vcpu, gfn_to_gpa(mapping->base_gfn), error_code,
 			     max_level, &goal_level);
+	kvm_post_mmu_map_page(vcpu, mapping);
 	if (r)
 		return r;
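
As an illustration of why the pre-hook receives pointers to both error_code
and max_level, a hypothetical vendor implementation (not part of this series;
the function name is a placeholder) might require a source buffer, force write
access for the copied-and-measured contents, and restrict the mapping level:

  /* Hypothetical vendor pre-map hook; illustrative only. */
  static int example_pre_mmu_map_page(struct kvm_vcpu *vcpu,
				      struct kvm_memory_mapping *mapping,
				      u32 *error_code, u8 *max_level)
  {
	/* Assume this technology only populates from a source buffer. */
	if (!mapping->source)
		return -EINVAL;

	/* Initial contents are written (and measured) on behalf of the guest. */
	*error_code |= PFERR_WRITE_MASK;

	/* Assume the copy-and-encrypt primitive works at 4K granularity. */
	*max_level = PG_LEVEL_4K;
	return 0;
  }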

From patchwork Fri Mar 1 17:28:50 2024
From: isaku.yamahata@intel.com
Subject: [RFC PATCH 8/8] KVM: selftests: x86: Add test for KVM_MAP_MEMORY
Date: Fri, 1 Mar 2024 09:28:50 -0800

From: Isaku Yamahata

Add a test case to exercise KVM_MAP_MEMORY and run the guest to access the
pre-populated area.  It tests the KVM_MAP_MEMORY ioctl for KVM_X86_DEFAULT_VM
and KVM_X86_SW_PROTECTED_VM.  The other VM types are just placeholders for the
future.

Signed-off-by: Isaku Yamahata
---
 tools/include/uapi/linux/kvm.h                |  14 ++
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/map_memory_test.c    | 136 ++++++++++++++++++
 3 files changed, 151 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/map_memory_test.c

diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index c3308536482b..ea8d3cf840ab 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -2227,4 +2227,18 @@ struct kvm_create_guest_memfd {
 	__u64 reserved[6];
 };
 
+#define KVM_MAP_MEMORY	_IOWR(KVMIO, 0xd5, struct kvm_memory_mapping)
+
+#define KVM_MEMORY_MAPPING_FLAG_WRITE	_BITULL(0)
+#define KVM_MEMORY_MAPPING_FLAG_EXEC	_BITULL(1)
+#define KVM_MEMORY_MAPPING_FLAG_USER	_BITULL(2)
+#define KVM_MEMORY_MAPPING_FLAG_PRIVATE	_BITULL(3)
+
+struct kvm_memory_mapping {
+	__u64 base_gfn;
+	__u64 nr_pages;
+	__u64 flags;
+	__u64 source;
+};
+
 #endif /* __LINUX_KVM_H */
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index da20e6bb43ed..baef461ed38a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -142,6 +142,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
 TEST_GEN_PROGS_x86_64 += system_counter_offset_test
+TEST_GEN_PROGS_x86_64 += x86_64/map_memory_test
 
 # Compiled outputs used by test targets
 TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/x86_64/map_memory_test.c b/tools/testing/selftests/kvm/x86_64/map_memory_test.c
new file mode 100644
index 000000000000..9480c6c89226
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/map_memory_test.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2024, Intel, Inc
+ *
+ * Author:
+ *   Isaku Yamahata
+ */
+#include <linux/sizes.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <processor.h>
+
+/* Arbitrarily chosen value. Pick 3G */
+#define TEST_GVA	0xc0000000
+#define TEST_GPA	TEST_GVA
+#define TEST_SIZE	(SZ_2M + PAGE_SIZE)
+#define TEST_NPAGES	(TEST_SIZE / PAGE_SIZE)
+#define TEST_SLOT	10
+
+static void guest_code(uint64_t base_gpa)
+{
+	volatile uint64_t val __used;
+	int i;
+
+	for (i = 0; i < TEST_NPAGES; i++) {
+		uint64_t *src = (uint64_t *)(base_gpa + i * PAGE_SIZE);
+
+		val = *src;
+	}
+
+	GUEST_DONE();
+}
+
+static void map_memory(struct kvm_vcpu *vcpu, u64 base_gfn, u64 nr_pages,
+		       u64 source, bool should_success)
+{
+	struct kvm_memory_mapping mapping = {
+		.base_gfn = base_gfn,
+		.nr_pages = nr_pages,
+		.flags = KVM_MEMORY_MAPPING_FLAG_WRITE,
+		.source = source,
+	};
+	int ret;
+
+	do {
+		ret = __vcpu_ioctl(vcpu, KVM_MAP_MEMORY, &mapping);
+	} while (ret && errno == EAGAIN);
+
+	if (should_success) {
+		__TEST_ASSERT_VM_VCPU_IOCTL(!ret, "KVM_MAP_MEMORY", ret, vcpu->vm);
+	} else {
+		__TEST_ASSERT_VM_VCPU_IOCTL(ret && errno == EFAULT,
+					    "KVM_MAP_MEMORY", ret, vcpu->vm);
+	}
+}
+
+static void __test_map_memory(unsigned long vm_type, bool private, bool use_source)
+{
+	const struct vm_shape shape = {
+		.mode = VM_MODE_DEFAULT,
+		.type = vm_type,
+	};
+	struct kvm_vcpu *vcpu;
+	struct kvm_run *run;
+	struct kvm_vm *vm;
+	struct ucall uc;
+	u64 source;
+
+	vm = vm_create_shape_with_one_vcpu(shape, &vcpu, guest_code);
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+				    TEST_GPA, TEST_SLOT, TEST_NPAGES,
+				    private ? KVM_MEM_GUEST_MEMFD : 0);
+	virt_map(vm, TEST_GVA, TEST_GPA, TEST_NPAGES);
+
+	if (private)
+		vm_mem_set_private(vm, TEST_GPA, TEST_SIZE);
+
+	source = use_source ? TEST_GVA : 0;
+	map_memory(vcpu, TEST_GPA / PAGE_SIZE, SZ_2M / PAGE_SIZE, source, true);
+	source = use_source ? TEST_GVA + SZ_2M : 0;
+	map_memory(vcpu, (TEST_GPA + SZ_2M) / PAGE_SIZE, 1, source, true);
+
+	source = use_source ? TEST_GVA + TEST_SIZE : 0;
+	map_memory(vcpu, (TEST_GPA + TEST_SIZE) / PAGE_SIZE, 1, source, false);
+
+	vcpu_args_set(vcpu, 1, TEST_GVA);
+	vcpu_run(vcpu);
+
+	run = vcpu->run;
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+		    "Wanted KVM_EXIT_IO, got exit reason: %u (%s)",
+		    run->exit_reason, exit_reason_str(run->exit_reason));
+
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT(uc);
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
+		break;
+	}
+
+	kvm_vm_free(vm);
+}
+
+static void test_map_memory(unsigned long vm_type, bool use_source)
+{
+	if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type))) {
+		pr_info("Skipping tests for vm_type 0x%lx\n", vm_type);
+		return;
+	}
+
+	__test_map_memory(vm_type, false, use_source);
+	__test_map_memory(vm_type, true, use_source);
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_check_cap(KVM_CAP_MAP_MEMORY));
+
+	__test_map_memory(KVM_X86_DEFAULT_VM, false, false);
+	test_map_memory(KVM_X86_SW_PROTECTED_VM, false);
+#ifdef KVM_X86_SEV_VM
+	test_map_memory(KVM_X86_SEV_VM, false);
+#endif
+#ifdef KVM_X86_SEV_ES_VM
+	test_map_memory(KVM_X86_SEV_ES_VM, false);
+#endif
+#ifdef KVM_X86_TDX_VM
+	test_map_memory(KVM_X86_TDX_VM, true);
+#endif
+	return 0;
+}
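
Assuming the usual kselftest build flow (the exact paths are an assumption of
an in-tree x86-64 build), the new test can be built and run with something
like:

  $ make -C tools/testing/selftests TARGETS=kvm
  $ sudo ./tools/testing/selftests/kvm/x86_64/map_memory_test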