From patchwork Tue Apr 16 06:20:52 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13631350
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné,
 George Dunlap, Stefano Stabellini, Sergiy Kibrik
Subject: [XEN PATCH v1 01/15] x86: introduce AMD-V and Intel VT-x Kconfig options
Date: Tue, 16 Apr 2024 09:20:52 +0300
Message-Id: <20240416062052.3467935-1-Sergiy_Kibrik@epam.com>

From: Xenia Ragiadakou

Introduce two new Kconfig options, SVM and VMX, to allow code specific to
each virtualization technology to be separated and, when not required,
stripped from the build.

CONFIG_SVM will be used to enable virtual machine extensions on platforms
that implement AMD Virtualization Technology (AMD-V).
CONFIG_VMX will be used to enable virtual machine extensions on platforms
that implement Intel Virtualization Technology (Intel VT-x).

Both options depend on HVM support. Since, at this point, disabling either
of them would cause Xen to not compile, the options are enabled by default
when HVM is enabled and are not selectable by the user.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/Kconfig         | 6 ++++++
 xen/arch/x86/hvm/Makefile    | 4 ++--
 xen/arch/x86/mm/Makefile     | 3 ++-
 xen/arch/x86/mm/hap/Makefile | 2 +-
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index d6f3128588..6f06d3baa5 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -120,6 +120,12 @@ config HVM

 	  If unsure, say Y.

+config SVM
+	def_bool y if HVM
+
+config VMX
+	def_bool y if HVM
+
 config XEN_SHSTK
 	bool "Supervisor Shadow Stacks"
 	depends on HAS_AS_CET_SS
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 3464191544..8434badc64 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -1,5 +1,5 @@
-obj-y += svm/
-obj-y += vmx/
+obj-$(CONFIG_SVM) += svm/
+obj-$(CONFIG_VMX) += vmx/
 obj-y += viridian/

 obj-y += asid.o
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 0803ac9297..92168290a8 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-$(CONFIG_HVM) += nested.o
 obj-$(CONFIG_HVM) += p2m.o
 obj-y += p2m-basic.o
-obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
+obj-$(CONFIG_HVM) += p2m-pod.o p2m-pt.o
+obj-$(CONFIG_VMX) += p2m-ept.o
 obj-y += paging.o
 obj-y += physmap.o
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 8ef54b1faa..98c8a87819 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,4 +3,4 @@ obj-y += guest_walk_2.o
 obj-y += guest_walk_3.o
 obj-y += guest_walk_4.o
 obj-y += nested_hap.o
-obj-y += nested_ept.o
+obj-$(CONFIG_VMX) += nested_ept.o
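[Aside, for illustration only -- not part of the series. With "def_bool y
if HVM" the new symbols silently track HVM, and the kbuild list mechanics
do the rest. A minimal sketch of the mechanism, using a hypothetical FOO
symbol:

# Kconfig: no prompt string, so the user cannot toggle it; the value
# simply follows HVM.
config FOO
	def_bool y if HVM

# Makefile: with CONFIG_FOO=y this expands to "obj-y += foo.o", so foo.o
# is built and linked; with CONFIG_FOO unset it expands to "obj- += foo.o",
# a list the build system never consumes.
obj-$(CONFIG_FOO) += foo.o

Making the options user-selectable later only requires replacing def_bool
with a prompted bool plus "depends on HVM".]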
From patchwork Tue Apr 16 06:22:58 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13631351
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné,
 Stefano Stabellini, Sergiy Kibrik
Subject: [XEN PATCH v1 02/15] x86/hvm: guard AMD-V and Intel VT-x hvm_function_table initializers
Date: Tue, 16 Apr 2024 09:22:58 +0300
Message-Id: <20240416062258.3468774-1-Sergiy_Kibrik@epam.com>

From: Xenia Ragiadakou

Since start_svm() is AMD-V specific and start_vmx() is Intel VT-x specific,
either one can be excluded from the build completely with CONFIG_SVM and
CONFIG_VMX, respectively.

No functional change intended.
Signed-off-by: Xenia Ragiadakou
Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/hvm/hvm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0ce45b177c..3edbe03caf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -156,9 +156,9 @@ static int __init cf_check hvm_enable(void)
 {
     const struct hvm_function_table *fns = NULL;

-    if ( cpu_has_vmx )
+    if ( IS_ENABLED(CONFIG_VMX) && cpu_has_vmx )
         fns = start_vmx();
-    else if ( cpu_has_svm )
+    else if ( IS_ENABLED(CONFIG_SVM) && cpu_has_svm )
         fns = start_svm();

     if ( fns == NULL )
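[Aside, for illustration only -- not part of the series. The guard works
through the usual IS_ENABLED()/dead-code-elimination idiom rather than
#ifdef: when the option is off, the condition is a compile-time 0, the
branch is discarded, and with it the only reference to start_vmx() or
start_svm(), so no stub definitions are needed. A self-contained sketch of
the idiom; the macro is the kernel-style implementation reproduced from
memory, and CONFIG_DEMO is a stand-in, not a real Xen option:

#include <stdio.h>

/*
 * Kernel-style IS_ENABLED(): expands to 1 when CONFIG_x is #defined to 1
 * and to 0 when it is not defined at all, without preprocessor errors.
 */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

#define CONFIG_DEMO 1   /* comment out to emulate the disabled build */

static const char *start_demo(void) { return "demo backend"; }

int main(void)
{
    const char *fns = NULL;

    /*
     * With CONFIG_DEMO undefined this is "if ( 0 )": the compiler drops
     * the call, and the linker can drop start_demo() altogether.
     */
    if ( IS_ENABLED(CONFIG_DEMO) )
        fns = start_demo();

    puts(fns ? fns : "no backend compiled in");
    return 0;
}

Unlike #ifdef, both branches are still parsed and type-checked, so the
disabled configuration cannot silently bit-rot.]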
From patchwork Tue Apr 16 06:25:03 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13631352
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Tamas K Lengyel, Alexandru Isaila, Petre Pircalabu,
 Jan Beulich, Andrew Cooper, Roger Pau Monné, Xenia Ragiadakou,
 Stefano Stabellini
Subject: [XEN PATCH v1 03/15] x86/monitor: guard altp2m usage
Date: Tue, 16 Apr 2024 09:25:03 +0300
Message-Id: <20240416062503.3468942-1-Sergiy_Kibrik@epam.com>

Use the altp2m index only when it is supported by the platform, i.e. VMX.
The purpose of this is to make it possible to disable VMX support and
exclude its code from the build completely.

Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/hvm/monitor.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
index 4f500beaf5..192a721403 100644
--- a/xen/arch/x86/hvm/monitor.c
+++ b/xen/arch/x86/hvm/monitor.c
@@ -262,6 +262,8 @@ bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
     struct vcpu *curr = current;
     vm_event_request_t req = {};
     paddr_t gpa = (gfn_to_gaddr(gfn) | (gla & ~PAGE_MASK));
+    unsigned int altp2m_idx = hvm_altp2m_supported() ?
+                              altp2m_vcpu_idx(curr) : 0;
     int rc;

     ASSERT(curr->arch.vm_event->send_event);
@@ -270,7 +272,7 @@ bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
      * p2m_get_mem_access() can fail from a invalid MFN and return -ESRCH
      * in which case access must be restricted.
      */
-    rc = p2m_get_mem_access(curr->domain, gfn, &access, altp2m_vcpu_idx(curr));
+    rc = p2m_get_mem_access(curr->domain, gfn, &access, altp2m_idx);

     if ( rc == -ESRCH )
         access = XENMEM_access_n;
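[Aside, for illustration only -- not part of the series. The fallback
index 0 is safe because view 0 is treated as the host p2m, which always
exists; only when the platform (i.e. VMX) offers altp2m can a vCPU be on a
non-zero view. A compressed sketch of the guard's shape, with hypothetical
stand-ins rather than Xen code:

#include <stdbool.h>
#include <stdio.h>

#define HOST_VIEW 0U

/* In a !VMX build the real predicate becomes a compile-time 'false'. */
static bool altp2m_supported(void) { return false; }

/* Placeholder for the per-vCPU view lookup that only exists with VMX. */
static unsigned int current_view(void) { return 2; }

int main(void)
{
    /*
     * Same shape as the hunk above: consult the per-vCPU view only when
     * altp2m can exist, otherwise use the always-present host view, so
     * the lookup path can be compiled out together with VMX.
     */
    unsigned int idx = altp2m_supported() ? current_view() : HOST_VIEW;

    printf("querying p2m view %u\n", idx);
    return 0;
}
]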
From patchwork Tue Apr 16 06:27:09 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13631353
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, George Dunlap,
 Roger Pau Monné, Xenia Ragiadakou, Stefano Stabellini
Subject: [XEN PATCH v1 04/15] x86/p2m: guard altp2m init/teardown
Date: Tue, 16 Apr 2024 09:27:09 +0300
Message-Id: <20240416062709.3469044-1-Sergiy_Kibrik@epam.com>

Initialize and bring down altp2m only when it is supported by the platform,
i.e. VMX. The purpose of this is to make it possible to disable VMX support
and exclude its code from the build completely.

Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/mm/p2m-basic.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 8599bd15c6..90106997d7 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -126,13 +126,15 @@ int p2m_init(struct domain *d)
         return rc;
     }

-    rc = p2m_init_altp2m(d);
-    if ( rc )
+    if ( hvm_altp2m_supported() )
     {
-        p2m_teardown_hostp2m(d);
-        p2m_teardown_nestedp2m(d);
+        rc = p2m_init_altp2m(d);
+        if ( rc )
+        {
+            p2m_teardown_hostp2m(d);
+            p2m_teardown_nestedp2m(d);
+        }
     }
-
     return rc;
 }

@@ -195,11 +197,12 @@ void p2m_final_teardown(struct domain *d)
 {
     if ( is_hvm_domain(d) )
     {
+        if ( hvm_altp2m_supported() )
+            p2m_teardown_altp2m(d);
         /*
-         * We must tear down both of them unconditionally because
-         * we initialise them unconditionally.
+         * We must tear down nestedp2m unconditionally because
+         * we initialise it unconditionally.
          */
-        p2m_teardown_altp2m(d);
         p2m_teardown_nestedp2m(d);
     }
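[Aside, for illustration only -- not part of the series. The restructured
p2m_init() keeps the usual init-with-rollback shape: the optional stage
runs only when supported, and a failure unwinds exactly the stages that
already ran. A generic sketch of that shape, with hypothetical
init_*/teardown_* names:

#include <errno.h>
#include <stdbool.h>

static bool optional_supported(void) { return true; }

static int  init_host(void)       { return 0; }
static void teardown_host(void)   { }
static int  init_nested(void)     { return 0; }
static void teardown_nested(void) { }
static int  init_optional(void)   { return -ENOMEM; } /* pretend failure */

/*
 * Mirrors the control flow of the hunk above: mandatory stages first,
 * then the guarded optional stage; on its failure, unwind the earlier
 * stages so the caller never sees a half-initialised object.
 */
int setup(void)
{
    int rc = init_host();

    if ( rc )
        return rc;

    rc = init_nested();
    if ( rc )
    {
        teardown_host();
        return rc;
    }

    if ( optional_supported() )
    {
        rc = init_optional();
        if ( rc )
        {
            teardown_host();
            teardown_nested();
        }
    }

    return rc;
}

int main(void) { return setup() ? 1 : 0; }
]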
From patchwork Tue Apr 16 06:29:15 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13631354
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, George Dunlap,
 Roger Pau Monné, Xenia Ragiadakou, Stefano Stabellini
Subject: [XEN PATCH v1 05/15] x86/p2m: move altp2m-related code to separate file
Date: Tue, 16 Apr 2024 09:29:15 +0300
Message-Id: <20240416062915.3469145-1-Sergiy_Kibrik@epam.com>

Move altp2m code from the generic p2m.c file to altp2m.c, so that
VMX-specific code is kept separate and can possibly be disabled in the
build.

No functional change intended.

Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/mm/altp2m.c | 631 ++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c    | 636 +--------------------------------------
 xen/arch/x86/mm/p2m.h    |   3 +
 3 files changed, 637 insertions(+), 633 deletions(-)

diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index a04297b646..6fe62200ba 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -9,6 +9,8 @@
 #include
 #include "mm-locks.h"
 #include "p2m.h"
+#include
+#include

 void altp2m_vcpu_initialise(struct vcpu *v)
@@ -151,6 +153,635 @@ void p2m_teardown_altp2m(struct domain *d)
     }
 }

+int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
+                               p2m_type_t *t, p2m_access_t *a,
+                               bool prepopulate)
+{
+    *mfn = ap2m->get_entry(ap2m, gfn, t, a, 0, NULL, NULL);
+
+    /* Check host p2m if no valid entry in alternate */
+    if ( !mfn_valid(*mfn) && !p2m_is_hostp2m(ap2m) )
+    {
+        struct p2m_domain *hp2m = p2m_get_hostp2m(ap2m->domain);
+        unsigned int page_order;
+        int rc;
+
+        *mfn = p2m_get_gfn_type_access(hp2m, gfn, t, a, P2M_ALLOC | P2M_UNSHARE,
+                                       &page_order, 0);
+
+        rc = -ESRCH;
+        if ( !mfn_valid(*mfn) || *t != p2m_ram_rw )
+            return rc;
+
+        /* If this is a superpage, copy that first */
+        if ( prepopulate && page_order != PAGE_ORDER_4K )
+        {
+            unsigned long mask = ~((1UL << page_order) - 1);
+            gfn_t gfn_aligned = _gfn(gfn_x(gfn) & mask);
+            mfn_t mfn_aligned = _mfn(mfn_x(*mfn) & mask);
+
+            rc = ap2m->set_entry(ap2m, gfn_aligned, mfn_aligned, page_order, *t, *a, 1);
+            if ( rc )
+                return rc;
+        }
+    }
+
+    return 0;
+}
+
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+    if ( altp2m_active(v->domain) )
+        p2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
+bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+{
+    struct domain *d = v->domain;
+    bool rc = false;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    altp2m_list_lock(d);
+
+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    {
+        if ( p2m_set_altp2m(v, idx) )
+            altp2m_vcpu_update_p2m(v);
+        rc = 1;
+    }
+
+    altp2m_list_unlock(d);
+    return rc;
+}
+
+/*
+ * Read info about the gfn in an altp2m, locking the gfn.
+ *
+ * If the entry is valid, pass the results back to the caller.
+ *
+ * If the entry was invalid, and the host's entry is also invalid,
+ * return to the caller without any changes.
+ *
+ * If the entry is invalid, and the host entry was valid, propagate
+ * the host's entry to the altp2m (retaining page order), and indicate
+ * that the caller should re-try the faulting instruction.
+ */
+bool p2m_altp2m_get_or_propagate(struct p2m_domain *ap2m, unsigned long gfn_l,
+                                 mfn_t *mfn, p2m_type_t *p2mt,
+                                 p2m_access_t *p2ma, unsigned int *page_order)
+{
+    p2m_type_t ap2mt;
+    p2m_access_t ap2ma;
+    unsigned int cur_order;
+    unsigned long mask;
+    gfn_t gfn;
+    mfn_t amfn;
+    int rc;
+
+    /*
+     * NB we must get the full lock on the altp2m here, in addition to
+     * the lock on the individual gfn, since we may change a range of
+     * gfns below.
+     */
+    p2m_lock(ap2m);
+
+    amfn = get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, &cur_order);
+
+    if ( cur_order > *page_order )
+        cur_order = *page_order;
+
+    if ( !mfn_eq(amfn, INVALID_MFN) )
+    {
+        p2m_unlock(ap2m);
+        *mfn = amfn;
+        *p2mt = ap2mt;
+        *p2ma = ap2ma;
+        *page_order = cur_order;
+        return false;
+    }
+
+    /* Host entry is also invalid; don't bother setting the altp2m entry. */
+    if ( mfn_eq(*mfn, INVALID_MFN) )
+    {
+        p2m_unlock(ap2m);
+        *page_order = cur_order;
+        return false;
+    }
+
+    /*
+     * If this is a superpage mapping, round down both frame numbers
+     * to the start of the superpage. NB that we repurpose `amfn`
+     * here.
+     */
+    mask = ~((1UL << cur_order) - 1);
+    amfn = _mfn(mfn_x(*mfn) & mask);
+    gfn = _gfn(gfn_l & mask);
+
+    /* Override the altp2m entry with its default access. */
+    *p2ma = ap2m->default_access;
+
+    rc = p2m_set_entry(ap2m, gfn, amfn, cur_order, *p2mt, *p2ma);
+    p2m_unlock(ap2m);
+
+    if ( rc )
+    {
+        gprintk(XENLOG_ERR,
+                "failed to set entry for %"PRI_gfn" -> %"PRI_mfn" altp2m %u, rc %d\n",
+                gfn_l, mfn_x(amfn), vcpu_altp2m(current).p2midx, rc);
+        domain_crash(ap2m->domain);
+    }
+
+    return true;
+}
+
+enum altp2m_reset_type {
+    ALTP2M_RESET,
+    ALTP2M_DEACTIVATE
+};
+
+static void p2m_reset_altp2m(struct domain *d, unsigned int idx,
+                             enum altp2m_reset_type reset_type)
+{
+    struct p2m_domain *p2m;
+
+    ASSERT(idx < MAX_ALTP2M);
+    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+
+    p2m_lock(p2m);
+
+    p2m_flush_table_locked(p2m);
+
+    if ( reset_type == ALTP2M_DEACTIVATE )
+        p2m_free_logdirty(p2m);
+
+    /* Uninit and reinit ept to force TLB shootdown */
+    ept_p2m_uninit(p2m);
+    ept_p2m_init(p2m);
+
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
+    p2m->max_remapped_gfn = 0;
+
+    p2m_unlock(p2m);
+}
+
+void p2m_flush_altp2m(struct domain *d)
+{
+    unsigned int i;
+
+    altp2m_list_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        p2m_reset_altp2m(d, i, ALTP2M_DEACTIVATE);
+        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
+        d->arch.altp2m_visible_eptp[i] = mfn_x(INVALID_MFN);
+    }
+
+    altp2m_list_unlock(d);
+}
+
+static int p2m_activate_altp2m(struct domain *d, unsigned int idx,
+                               p2m_access_t hvmmem_default_access)
+{
+    struct p2m_domain *hostp2m, *p2m;
+    int rc;
+
+    ASSERT(idx < MAX_ALTP2M);
+
+    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+    hostp2m = p2m_get_hostp2m(d);
+
+    p2m_lock(p2m);
+
+    rc = p2m_init_logdirty(p2m);
+
+    if ( rc )
+        goto out;
+
+    /* The following is really just a rangeset copy. */
+    rc = rangeset_merge(p2m->logdirty_ranges, hostp2m->logdirty_ranges);
+
+    if ( rc )
+    {
+        p2m_free_logdirty(p2m);
+        goto out;
+    }
+
+    p2m->default_access = hvmmem_default_access;
+    p2m->domain = hostp2m->domain;
+    p2m->global_logdirty = hostp2m->global_logdirty;
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
+    p2m->max_mapped_pfn = p2m->max_remapped_gfn = 0;
+
+    p2m_init_altp2m_ept(d, idx);
+
+ out:
+    p2m_unlock(p2m);
+
+    return rc;
+}
+
+int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    int rc = -EINVAL;
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
+
+    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
+        return rc;
+
+    altp2m_list_lock(d);
+
+    if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
+         mfn_x(INVALID_MFN) )
+        rc = p2m_activate_altp2m(d, idx, hostp2m->default_access);
+
+    altp2m_list_unlock(d);
+    return rc;
+}
+
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx,
+                         xenmem_access_t hvmmem_default_access)
+{
+    int rc = -EINVAL;
+    unsigned int i;
+    p2m_access_t a;
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
+
+    if ( hvmmem_default_access > XENMEM_access_default ||
+         !xenmem_access_to_p2m_access(hostp2m, hvmmem_default_access, &a) )
+        return rc;
+
+    altp2m_list_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            continue;
+
+        rc = p2m_activate_altp2m(d, i, a);
+
+        if ( !rc )
+            *idx = i;
+
+        break;
+    }
+
+    altp2m_list_unlock(d);
+    return rc;
+}
+
+int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct p2m_domain *p2m;
+    int rc = -EBUSY;
+
+    if ( !idx || idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
+        return rc;
+
+    rc = domain_pause_except_self(d);
+    if ( rc )
+        return rc;
+
+    rc = -EBUSY;
+    altp2m_list_lock(d);
+
+    if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] !=
+         mfn_x(INVALID_MFN) )
+    {
+        p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+
+        if ( !_atomic_read(p2m->active_vcpus) )
+        {
+            p2m_reset_altp2m(d, idx, ALTP2M_DEACTIVATE);
+            d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] =
+                mfn_x(INVALID_MFN);
+            d->arch.altp2m_visible_eptp[array_index_nospec(idx, MAX_EPTP)] =
+                mfn_x(INVALID_MFN);
+            rc = 0;
+        }
+    }
+
+    altp2m_list_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
+int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct vcpu *v;
+    int rc = -EINVAL;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    rc = domain_pause_except_self(d);
+    if ( rc )
+        return rc;
+
+    rc = -EINVAL;
+    altp2m_list_lock(d);
+
+    if ( d->arch.altp2m_visible_eptp[idx] != mfn_x(INVALID_MFN) )
+    {
+        for_each_vcpu( d, v )
+            if ( p2m_set_altp2m(v, idx) )
+                altp2m_vcpu_update_p2m(v);
+
+        rc = 0;
+    }
+
+    altp2m_list_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
+int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
+                          gfn_t old_gfn, gfn_t new_gfn)
+{
+    struct p2m_domain *hp2m, *ap2m;
+    p2m_access_t a;
+    p2m_type_t t;
+    mfn_t mfn;
+    int rc = -EINVAL;
+
+    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+         d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
+         mfn_x(INVALID_MFN) )
+        return rc;
+
+    hp2m = p2m_get_hostp2m(d);
+    ap2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+
+    p2m_lock(hp2m);
+    p2m_lock(ap2m);
+
+    if ( gfn_eq(new_gfn, INVALID_GFN) )
+    {
+        mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL);
+        rc = mfn_valid(mfn)
+             ? p2m_remove_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K)
+             : 0;
+        goto out;
+    }
+
+    rc = altp2m_get_effective_entry(ap2m, old_gfn, &mfn, &t, &a,
+                                    AP2MGET_prepopulate);
+    if ( rc )
+        goto out;
+
+    rc = altp2m_get_effective_entry(ap2m, new_gfn, &mfn, &t, &a,
+                                    AP2MGET_query);
+    if ( rc )
+        goto out;
+
+    if ( !ap2m->set_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K, t, a,
+                          (current->domain != d)) )
+    {
+        rc = 0;
+
+        if ( gfn_x(new_gfn) < ap2m->min_remapped_gfn )
+            ap2m->min_remapped_gfn = gfn_x(new_gfn);
+        if ( gfn_x(new_gfn) > ap2m->max_remapped_gfn )
+            ap2m->max_remapped_gfn = gfn_x(new_gfn);
+    }
+
+ out:
+    p2m_unlock(ap2m);
+    p2m_unlock(hp2m);
+    return rc;
+}
+
+int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
+                                mfn_t mfn, unsigned int page_order,
+                                p2m_type_t p2mt, p2m_access_t p2ma)
+{
+    struct p2m_domain *p2m;
+    unsigned int i;
+    unsigned int reset_count = 0;
+    unsigned int last_reset_idx = ~0;
+    int ret = 0;
+
+    if ( !altp2m_active(d) )
+        return 0;
+
+    altp2m_list_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        p2m_type_t t;
+        p2m_access_t a;
+
+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+
+        /* Check for a dropped page that may impact this altp2m */
+        if ( mfn_eq(mfn, INVALID_MFN) &&
+             gfn_x(gfn) + (1UL << page_order) > p2m->min_remapped_gfn &&
+             gfn_x(gfn) <= p2m->max_remapped_gfn )
+        {
+            if ( !reset_count++ )
+            {
+                p2m_reset_altp2m(d, i, ALTP2M_RESET);
+                last_reset_idx = i;
+            }
+            else
+            {
+                /* At least 2 altp2m's impacted, so reset everything */
+                for ( i = 0; i < MAX_ALTP2M; i++ )
+                {
+                    if ( i == last_reset_idx ||
+                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+                        continue;
+
+                    p2m_reset_altp2m(d, i, ALTP2M_RESET);
+                }
+
+                ret = 0;
+                break;
+            }
+        }
+        else if ( !mfn_eq(get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0,
+                                              NULL), INVALID_MFN) )
+        {
+            int rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
+
+            /* Best effort: Don't bail on error. */
+            if ( !ret )
+                ret = rc;
+
+            p2m_put_gfn(p2m, gfn);
+        }
+        else
+            p2m_put_gfn(p2m, gfn);
+    }
+
+    altp2m_list_unlock(d);
+
+    return ret;
+}
+
+/*
+ * Set/clear the #VE suppress bit for a page. Only available on VMX.
+ */
+int p2m_set_suppress_ve(struct domain *d, gfn_t gfn, bool suppress_ve,
+                        unsigned int altp2m_idx)
+{
+    int rc;
+    struct xen_hvm_altp2m_suppress_ve_multi sve = {
+        altp2m_idx, suppress_ve, 0, 0, gfn_x(gfn), gfn_x(gfn), 0
+    };
+
+    if ( !(rc = p2m_set_suppress_ve_multi(d, &sve)) )
+        rc = sve.first_error;
+
+    return rc;
+}
+
+/*
+ * Set/clear the #VE suppress bit for multiple pages. Only available on VMX.
+ */
+int p2m_set_suppress_ve_multi(struct domain *d,
+                              struct xen_hvm_altp2m_suppress_ve_multi *sve)
+{
+    struct p2m_domain *host_p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *ap2m = NULL;
+    struct p2m_domain *p2m = host_p2m;
+    uint64_t start = sve->first_gfn;
+    int rc = 0;
+
+    if ( sve->view > 0 )
+    {
+        if ( sve->view >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+             d->arch.altp2m_eptp[array_index_nospec(sve->view, MAX_EPTP)] ==
+             mfn_x(INVALID_MFN) )
+            return -EINVAL;
+
+        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, sve->view);
+    }
+
+    p2m_lock(host_p2m);
+
+    if ( ap2m )
+        p2m_lock(ap2m);
+
+    while ( sve->last_gfn >= start )
+    {
+        p2m_access_t a;
+        p2m_type_t t;
+        mfn_t mfn;
+        int err = 0;
+
+        if ( (err = altp2m_get_effective_entry(p2m, _gfn(start), &mfn, &t, &a,
+                                               AP2MGET_query)) &&
+             !sve->first_error )
+        {
+            sve->first_error_gfn = start; /* Save the gfn of the first error */
+            sve->first_error = err; /* Save the first error code */
+        }
+
+        if ( !err && (err = p2m->set_entry(p2m, _gfn(start), mfn,
+                                           PAGE_ORDER_4K, t, a,
+                                           sve->suppress_ve)) &&
+             !sve->first_error )
+        {
+            sve->first_error_gfn = start; /* Save the gfn of the first error */
+            sve->first_error = err; /* Save the first error code */
+        }
+
+        /* Check for continuation if it's not the last iteration. */
+        if ( sve->last_gfn >= ++start && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    sve->first_gfn = start;
+
+    if ( ap2m )
+        p2m_unlock(ap2m);
+
+    p2m_unlock(host_p2m);
+
+    return rc;
+}
+
+int p2m_get_suppress_ve(struct domain *d, gfn_t gfn, bool *suppress_ve,
+                        unsigned int altp2m_idx)
+{
+    struct p2m_domain *host_p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *ap2m = NULL;
+    struct p2m_domain *p2m;
+    mfn_t mfn;
+    p2m_access_t a;
+    p2m_type_t t;
+    int rc = 0;
+
+    if ( altp2m_idx > 0 )
+    {
+        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+             d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
+             mfn_x(INVALID_MFN) )
+            return -EINVAL;
+
+        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+    }
+    else
+        p2m = host_p2m;
+
+    gfn_lock(host_p2m, gfn, 0);
+
+    if ( ap2m )
+        p2m_lock(ap2m);
+
+    mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, NULL, suppress_ve);
+    if ( !mfn_valid(mfn) )
+        rc = -ESRCH;
+
+    if ( ap2m )
+        p2m_unlock(ap2m);
+
+    gfn_unlock(host_p2m, gfn, 0);
+
+    return rc;
+}
+
+int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int altp2m_idx,
+                                   uint8_t visible)
+{
+    int rc = 0;
+
+    altp2m_list_lock(d);
+
+    /*
+     * Eptp index is correlated with altp2m index and should not exceed
+     * min(MAX_ALTP2M, MAX_EPTP).
+     */
+    if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+         d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
+         mfn_x(INVALID_MFN) )
+        rc = -EINVAL;
+    else if ( visible )
+        d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] =
+            d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)];
+    else
+        d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] =
+            mfn_x(INVALID_MFN);
+
+    altp2m_list_unlock(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ce742c12e0..1f219e8e45 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -500,9 +500,8 @@ int p2m_alloc_table(struct p2m_domain *p2m)
     return 0;
 }

-static int __must_check
-p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
-                 unsigned int page_order)
+int __must_check p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
+                                  unsigned int page_order)
 {
     unsigned long i;
     p2m_type_t t;
@@ -1329,8 +1328,7 @@ p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
     return p2m;
 }

-static void
-p2m_flush_table_locked(struct p2m_domain *p2m)
+void p2m_flush_table_locked(struct p2m_domain *p2m)
 {
     struct page_info *top, *pg;
     struct domain *d = p2m->domain;
@@ -1729,481 +1727,6 @@ int unmap_mmio_regions(struct domain *d,
     return i == nr ? 0 : i ?: ret;
 }

-int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
-                               p2m_type_t *t, p2m_access_t *a,
-                               bool prepopulate)
-{
-    *mfn = ap2m->get_entry(ap2m, gfn, t, a, 0, NULL, NULL);
-
-    /* Check host p2m if no valid entry in alternate */
-    if ( !mfn_valid(*mfn) && !p2m_is_hostp2m(ap2m) )
-    {
-        struct p2m_domain *hp2m = p2m_get_hostp2m(ap2m->domain);
-        unsigned int page_order;
-        int rc;
-
-        *mfn = p2m_get_gfn_type_access(hp2m, gfn, t, a, P2M_ALLOC | P2M_UNSHARE,
-                                       &page_order, 0);
-
-        rc = -ESRCH;
-        if ( !mfn_valid(*mfn) || *t != p2m_ram_rw )
-            return rc;
-
-        /* If this is a superpage, copy that first */
-        if ( prepopulate && page_order != PAGE_ORDER_4K )
-        {
-            unsigned long mask = ~((1UL << page_order) - 1);
-            gfn_t gfn_aligned = _gfn(gfn_x(gfn) & mask);
-            mfn_t mfn_aligned = _mfn(mfn_x(*mfn) & mask);
-
-            rc = ap2m->set_entry(ap2m, gfn_aligned, mfn_aligned, page_order, *t, *a, 1);
-            if ( rc )
-                return rc;
-        }
-    }
-
-    return 0;
-}
-
-void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
-{
-    if ( altp2m_active(v->domain) )
-        p2m_switch_vcpu_altp2m_by_id(v, idx);
-}
-
-bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
-{
-    struct domain *d = v->domain;
-    bool rc = false;
-
-    if ( idx >= MAX_ALTP2M )
-        return rc;
-
-    altp2m_list_lock(d);
-
-    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
-    {
-        if ( p2m_set_altp2m(v, idx) )
-            altp2m_vcpu_update_p2m(v);
-        rc = 1;
-    }
-
-    altp2m_list_unlock(d);
-    return rc;
-}
-
-/*
- * Read info about the gfn in an altp2m, locking the gfn.
- *
- * If the entry is valid, pass the results back to the caller.
- *
- * If the entry was invalid, and the host's entry is also invalid,
- * return to the caller without any changes.
- *
- * If the entry is invalid, and the host entry was valid, propagate
- * the host's entry to the altp2m (retaining page order), and indicate
- * that the caller should re-try the faulting instruction.
- */
-bool p2m_altp2m_get_or_propagate(struct p2m_domain *ap2m, unsigned long gfn_l,
-                                 mfn_t *mfn, p2m_type_t *p2mt,
-                                 p2m_access_t *p2ma, unsigned int *page_order)
-{
-    p2m_type_t ap2mt;
-    p2m_access_t ap2ma;
-    unsigned int cur_order;
-    unsigned long mask;
-    gfn_t gfn;
-    mfn_t amfn;
-    int rc;
-
-    /*
-     * NB we must get the full lock on the altp2m here, in addition to
-     * the lock on the individual gfn, since we may change a range of
-     * gfns below.
-     */
-    p2m_lock(ap2m);
-
-    amfn = get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, &cur_order);
-
-    if ( cur_order > *page_order )
-        cur_order = *page_order;
-
-    if ( !mfn_eq(amfn, INVALID_MFN) )
-    {
-        p2m_unlock(ap2m);
-        *mfn = amfn;
-        *p2mt = ap2mt;
-        *p2ma = ap2ma;
-        *page_order = cur_order;
-        return false;
-    }
-
-    /* Host entry is also invalid; don't bother setting the altp2m entry. */
-    if ( mfn_eq(*mfn, INVALID_MFN) )
-    {
-        p2m_unlock(ap2m);
-        *page_order = cur_order;
-        return false;
-    }
-
-    /*
-     * If this is a superpage mapping, round down both frame numbers
-     * to the start of the superpage. NB that we repurpose `amfn`
-     * here.
-     */
-    mask = ~((1UL << cur_order) - 1);
-    amfn = _mfn(mfn_x(*mfn) & mask);
-    gfn = _gfn(gfn_l & mask);
-
-    /* Override the altp2m entry with its default access. */
-    *p2ma = ap2m->default_access;
-
-    rc = p2m_set_entry(ap2m, gfn, amfn, cur_order, *p2mt, *p2ma);
-    p2m_unlock(ap2m);
-
-    if ( rc )
-    {
-        gprintk(XENLOG_ERR,
-                "failed to set entry for %"PRI_gfn" -> %"PRI_mfn" altp2m %u, rc %d\n",
-                gfn_l, mfn_x(amfn), vcpu_altp2m(current).p2midx, rc);
-        domain_crash(ap2m->domain);
-    }
-
-    return true;
-}
-
-enum altp2m_reset_type {
-    ALTP2M_RESET,
-    ALTP2M_DEACTIVATE
-};
-
-static void p2m_reset_altp2m(struct domain *d, unsigned int idx,
-                             enum altp2m_reset_type reset_type)
-{
-    struct p2m_domain *p2m;
-
-    ASSERT(idx < MAX_ALTP2M);
-    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
-
-    p2m_lock(p2m);
-
-    p2m_flush_table_locked(p2m);
-
-    if ( reset_type == ALTP2M_DEACTIVATE )
-        p2m_free_logdirty(p2m);
-
-    /* Uninit and reinit ept to force TLB shootdown */
-    ept_p2m_uninit(p2m);
-    ept_p2m_init(p2m);
-
-    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
-    p2m->max_remapped_gfn = 0;
-
-    p2m_unlock(p2m);
-}
-
-void p2m_flush_altp2m(struct domain *d)
-{
-    unsigned int i;
-
-    altp2m_list_lock(d);
-
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        p2m_reset_altp2m(d, i, ALTP2M_DEACTIVATE);
-        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
-        d->arch.altp2m_visible_eptp[i] = mfn_x(INVALID_MFN);
-    }
-
-    altp2m_list_unlock(d);
-}
-
-static int p2m_activate_altp2m(struct domain *d, unsigned int idx,
-                               p2m_access_t hvmmem_default_access)
-{
-    struct p2m_domain *hostp2m, *p2m;
-    int rc;
-
-    ASSERT(idx < MAX_ALTP2M);
-
-    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
-    hostp2m = p2m_get_hostp2m(d);
-
-    p2m_lock(p2m);
-
-    rc = p2m_init_logdirty(p2m);
-
-    if ( rc )
-        goto out;
-
-    /* The following is really just a rangeset copy. */
-    rc = rangeset_merge(p2m->logdirty_ranges, hostp2m->logdirty_ranges);
-
-    if ( rc )
-    {
-        p2m_free_logdirty(p2m);
-        goto out;
-    }
-
-    p2m->default_access = hvmmem_default_access;
-    p2m->domain = hostp2m->domain;
-    p2m->global_logdirty = hostp2m->global_logdirty;
-    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
-    p2m->max_mapped_pfn = p2m->max_remapped_gfn = 0;
-
-    p2m_init_altp2m_ept(d, idx);
-
- out:
-    p2m_unlock(p2m);
-
-    return rc;
-}
-
-int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
-{
-    int rc = -EINVAL;
-    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
-
-    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
-        return rc;
-
-    altp2m_list_lock(d);
-
-    if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
-         mfn_x(INVALID_MFN) )
-        rc = p2m_activate_altp2m(d, idx, hostp2m->default_access);
-
-    altp2m_list_unlock(d);
-    return rc;
-}
-
-int p2m_init_next_altp2m(struct domain *d, uint16_t *idx,
-                         xenmem_access_t hvmmem_default_access)
-{
-    int rc = -EINVAL;
-    unsigned int i;
-    p2m_access_t a;
-    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
-
-    if ( hvmmem_default_access > XENMEM_access_default ||
-         !xenmem_access_to_p2m_access(hostp2m, hvmmem_default_access, &a) )
-        return rc;
-
-    altp2m_list_lock(d);
-
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
-            continue;
-
-        rc = p2m_activate_altp2m(d, i, a);
-
-        if ( !rc )
-            *idx = i;
-
-        break;
-    }
-
-    altp2m_list_unlock(d);
-    return rc;
-}
-
-int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
-{
-    struct p2m_domain *p2m;
-    int rc = -EBUSY;
-
-    if ( !idx || idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
-        return rc;
-
-    rc = domain_pause_except_self(d);
-    if ( rc )
-        return rc;
-
-    rc = -EBUSY;
-    altp2m_list_lock(d);
-
-    if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] !=
-         mfn_x(INVALID_MFN) )
-    {
-        p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
-
-        if ( !_atomic_read(p2m->active_vcpus) )
-        {
-            p2m_reset_altp2m(d, idx, ALTP2M_DEACTIVATE);
-            d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] =
-                mfn_x(INVALID_MFN);
-            d->arch.altp2m_visible_eptp[array_index_nospec(idx, MAX_EPTP)] =
-                mfn_x(INVALID_MFN);
-            rc = 0;
-        }
-    }
-
-    altp2m_list_unlock(d);
-
-    domain_unpause_except_self(d);
-
-    return rc;
-}
-
-int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
-{
-    struct vcpu *v;
-    int rc = -EINVAL;
-
-    if ( idx >= MAX_ALTP2M )
-        return rc;
-
-    rc = domain_pause_except_self(d);
-    if ( rc )
-        return rc;
-
-    rc = -EINVAL;
-    altp2m_list_lock(d);
-
-    if ( d->arch.altp2m_visible_eptp[idx] != mfn_x(INVALID_MFN) )
-    {
-        for_each_vcpu( d, v )
-            if ( p2m_set_altp2m(v, idx) )
-                altp2m_vcpu_update_p2m(v);
-
-        rc = 0;
-    }
-
-    altp2m_list_unlock(d);
-
-    domain_unpause_except_self(d);
-
-    return rc;
-}
-
-int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
-                          gfn_t old_gfn, gfn_t new_gfn)
-{
-    struct p2m_domain *hp2m, *ap2m;
-    p2m_access_t a;
-    p2m_type_t t;
-    mfn_t mfn;
-    int rc = -EINVAL;
-
-    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-         d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
-         mfn_x(INVALID_MFN) )
-        return rc;
-
-    hp2m = p2m_get_hostp2m(d);
-    ap2m = array_access_nospec(d->arch.altp2m_p2m, idx);
-
-    p2m_lock(hp2m);
-    p2m_lock(ap2m);
-
-    if ( gfn_eq(new_gfn, INVALID_GFN) )
-    {
-        mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL);
-        rc = mfn_valid(mfn)
-             ? p2m_remove_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K)
-             : 0;
-        goto out;
-    }
-
-    rc = altp2m_get_effective_entry(ap2m, old_gfn, &mfn, &t, &a,
-                                    AP2MGET_prepopulate);
-    if ( rc )
-        goto out;
-
-    rc = altp2m_get_effective_entry(ap2m, new_gfn, &mfn, &t, &a,
-                                    AP2MGET_query);
-    if ( rc )
-        goto out;
-
-    if ( !ap2m->set_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K, t, a,
-                          (current->domain != d)) )
-    {
-        rc = 0;
-
-        if ( gfn_x(new_gfn) < ap2m->min_remapped_gfn )
-            ap2m->min_remapped_gfn = gfn_x(new_gfn);
-        if ( gfn_x(new_gfn) > ap2m->max_remapped_gfn )
-            ap2m->max_remapped_gfn = gfn_x(new_gfn);
-    }
-
- out:
-    p2m_unlock(ap2m);
-    p2m_unlock(hp2m);
-    return rc;
-}
-
-int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
-                                mfn_t mfn, unsigned int page_order,
-                                p2m_type_t p2mt, p2m_access_t p2ma)
-{
-    struct p2m_domain *p2m;
-    unsigned int i;
-    unsigned int reset_count = 0;
-    unsigned int last_reset_idx = ~0;
-    int ret = 0;
-
-    if ( !altp2m_active(d) )
-        return 0;
-
-    altp2m_list_lock(d);
-
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        p2m_type_t t;
-        p2m_access_t a;
-
-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
-            continue;
-
-        p2m = d->arch.altp2m_p2m[i];
-
-        /* Check for a dropped page that may impact this altp2m */
-        if ( mfn_eq(mfn, INVALID_MFN) &&
-             gfn_x(gfn) + (1UL << page_order) > p2m->min_remapped_gfn &&
-             gfn_x(gfn) <= p2m->max_remapped_gfn )
-        {
-            if ( !reset_count++ )
-            {
-                p2m_reset_altp2m(d, i, ALTP2M_RESET);
-                last_reset_idx = i;
-            }
-            else
-            {
-                /* At least 2 altp2m's impacted, so reset everything */
-                for ( i = 0; i < MAX_ALTP2M; i++ )
-                {
-                    if ( i == last_reset_idx ||
-                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
-                        continue;
-
-                    p2m_reset_altp2m(d, i, ALTP2M_RESET);
-                }
-
-                ret = 0;
-                break;
-            }
-        }
-        else if ( !mfn_eq(get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0,
-                                              NULL), INVALID_MFN) )
-        {
-            int rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
-
-            /* Best effort: Don't bail on error. */
-            if ( !ret )
-                ret = rc;
-
-            p2m_put_gfn(p2m, gfn);
-        }
-        else
-            p2m_put_gfn(p2m, gfn);
-    }
-
-    altp2m_list_unlock(d);
-
-    return ret;
-}
-
 /*** Audit ***/
 #if P2M_AUDIT
@@ -2540,159 +2063,6 @@ int xenmem_add_to_physmap_one(
     return rc;
 }

-/*
- * Set/clear the #VE suppress bit for a page. Only available on VMX.
- */
-int p2m_set_suppress_ve(struct domain *d, gfn_t gfn, bool suppress_ve,
-                        unsigned int altp2m_idx)
-{
-    int rc;
-    struct xen_hvm_altp2m_suppress_ve_multi sve = {
-        altp2m_idx, suppress_ve, 0, 0, gfn_x(gfn), gfn_x(gfn), 0
-    };
-
-    if ( !(rc = p2m_set_suppress_ve_multi(d, &sve)) )
-        rc = sve.first_error;
-
-    return rc;
-}
-
-/*
- * Set/clear the #VE suppress bit for multiple pages. Only available on VMX.
- */
-int p2m_set_suppress_ve_multi(struct domain *d,
-                              struct xen_hvm_altp2m_suppress_ve_multi *sve)
-{
-    struct p2m_domain *host_p2m = p2m_get_hostp2m(d);
-    struct p2m_domain *ap2m = NULL;
-    struct p2m_domain *p2m = host_p2m;
-    uint64_t start = sve->first_gfn;
-    int rc = 0;
-
-    if ( sve->view > 0 )
-    {
-        if ( sve->view >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-             d->arch.altp2m_eptp[array_index_nospec(sve->view, MAX_EPTP)] ==
-             mfn_x(INVALID_MFN) )
-            return -EINVAL;
-
-        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, sve->view);
-    }
-
-    p2m_lock(host_p2m);
-
-    if ( ap2m )
-        p2m_lock(ap2m);
-
-    while ( sve->last_gfn >= start )
-    {
-        p2m_access_t a;
-        p2m_type_t t;
-        mfn_t mfn;
-        int err = 0;
-
-        if ( (err = altp2m_get_effective_entry(p2m, _gfn(start), &mfn, &t, &a,
-                                               AP2MGET_query)) &&
-             !sve->first_error )
-        {
-            sve->first_error_gfn = start; /* Save the gfn of the first error */
-            sve->first_error = err; /* Save the first error code */
-        }
-
-        if ( !err && (err = p2m->set_entry(p2m, _gfn(start), mfn,
-                                           PAGE_ORDER_4K, t, a,
-                                           sve->suppress_ve)) &&
-             !sve->first_error )
-        {
-            sve->first_error_gfn = start; /* Save the gfn of the first error */
-            sve->first_error = err; /* Save the first error code */
-        }
-
-        /* Check for continuation if it's not the last iteration. */
-        if ( sve->last_gfn >= ++start && hypercall_preempt_check() )
-        {
-            rc = -ERESTART;
-            break;
-        }
-    }
-
-    sve->first_gfn = start;
-
-    if ( ap2m )
-        p2m_unlock(ap2m);
-
-    p2m_unlock(host_p2m);
-
-    return rc;
-}
-
-int p2m_get_suppress_ve(struct domain *d, gfn_t gfn, bool *suppress_ve,
-                        unsigned int altp2m_idx)
-{
-    struct p2m_domain *host_p2m = p2m_get_hostp2m(d);
-    struct p2m_domain *ap2m = NULL;
-    struct p2m_domain *p2m;
-    mfn_t mfn;
-    p2m_access_t a;
-    p2m_type_t t;
-    int rc = 0;
-
-    if ( altp2m_idx > 0 )
-    {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-             d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
-             mfn_x(INVALID_MFN) )
-            return -EINVAL;
-
-        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
-    }
-    else
-        p2m = host_p2m;
-
-    gfn_lock(host_p2m, gfn, 0);
-
-    if ( ap2m )
-        p2m_lock(ap2m);
-
-    mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, NULL, suppress_ve);
-    if ( !mfn_valid(mfn) )
-        rc = -ESRCH;
-
-    if ( ap2m )
-        p2m_unlock(ap2m);
-
-    gfn_unlock(host_p2m, gfn, 0);
-
-    return rc;
-}
-
-int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int altp2m_idx,
-                                   uint8_t visible)
-{
-    int rc = 0;
-
-    altp2m_list_lock(d);
-
-    /*
-     * Eptp index is correlated with altp2m index and should not exceed
-     * min(MAX_ALTP2M, MAX_EPTP).
-     */
-    if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-         d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
-         mfn_x(INVALID_MFN) )
-        rc = -EINVAL;
-    else if ( visible )
-        d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] =
-            d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)];
-    else
-        d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] =
-            mfn_x(INVALID_MFN);
-
-    altp2m_list_unlock(d);
-
-    return rc;
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/p2m.h b/xen/arch/x86/mm/p2m.h
index 04308cfb6d..635f5a7f45 100644
--- a/xen/arch/x86/mm/p2m.h
+++ b/xen/arch/x86/mm/p2m.h
@@ -22,6 +22,9 @@ static inline void p2m_free_logdirty(struct p2m_domain *p2m) {}
 int p2m_init_altp2m(struct domain *d);
 void p2m_teardown_altp2m(struct domain *d);
+void p2m_flush_table_locked(struct p2m_domain *p2m);
+int __must_check p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
+                                  unsigned int page_order);
 void p2m_nestedp2m_init(struct p2m_domain *p2m);
 int p2m_init_nestedp2m(struct domain *d);
 void p2m_teardown_nestedp2m(struct domain *d);
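[Aside, for illustration only -- not part of the series. One pattern worth
calling out in the moved code is the double bounds handling on altp2m
indices: an architectural range check followed by array_index_nospec() /
array_access_nospec(), so that a mispredicted bounds check cannot be turned
into a speculative out-of-bounds read. A behavioural sketch; note the
stand-in below uses a ternary only to show the semantics, whereas the real
helper is deliberately branch-free mask arithmetic:

#include <stdio.h>
#include <stddef.h>

#define MAX_EPTP 512

static unsigned long eptp[MAX_EPTP];

/*
 * Behavioural stand-in for array_index_nospec(): returns idx when it is
 * in range and 0 otherwise. The real implementation computes this with
 * branch-free mask arithmetic precisely so no predictable branch exists.
 */
static inline size_t array_index_nospec(size_t idx, size_t size)
{
    return idx < size ? idx : 0;
}

static unsigned long get_eptp(size_t idx)
{
    if ( idx >= MAX_EPTP )      /* architectural bounds check */
        return 0;

    /* Clamp again so even a mispredicted check stays in bounds. */
    return eptp[array_index_nospec(idx, MAX_EPTP)];
}

int main(void)
{
    printf("%lu %lu\n", get_eptp(3), get_eptp(600));
    return 0;
}
]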
discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: f5a1d601-fbba-11ee-b909-491648fe20b8 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc :subject:date:message-id:mime-version:content-transfer-encoding; s=sasl; bh=rS/UZ+/PSV3QRa1QCAslyBy27wJqULa/1N2ijCmPfxw=; b=CH+2 PeUFlqfloks1QIAFx91lvy9NFEbmUvuJzKnCyGd+lfdKkZqV9jXOiaVir/0hh2SG aF2alDFT4fGolJyZwEx+aKZeC14/fGl4m/7VIZlWubFCLL+D6SLcMxqm1xaaI3FY 6FIW4dZrfBxxv5jTxRFrZlCouyGlGT5b8HPMRNI= From: Sergiy Kibrik To: xen-devel@lists.xenproject.org Cc: Sergiy Kibrik , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , George Dunlap , Xenia Ragiadakou , Stefano Stabellini Subject: [XEN PATCH v1 06/15] x86/p2m: guard altp2m code with CONFIG_VMX option Date: Tue, 16 Apr 2024 09:31:21 +0300 Message-Id: <20240416063121.3469245-1-Sergiy_Kibrik@epam.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-Pobox-Relay-ID: F2B5BBB8-FBBA-11EE-A681-F515D2CDFF5E-90055647!pb-smtp20.pobox.com Instead of using generic CONFIG_HVM option switch to a bit more specific CONFIG_VMX option for altp2m support, as it depends on VMX. Also guard altp2m routines, so that it can be disabled completely in the build. Signed-off-by: Sergiy Kibrik --- xen/arch/x86/include/asm/altp2m.h | 5 ++++- xen/arch/x86/include/asm/hvm/hvm.h | 7 +++++++ xen/arch/x86/include/asm/p2m.h | 18 +++++++++++++++++- xen/arch/x86/mm/Makefile | 2 +- 4 files changed, 29 insertions(+), 3 deletions(-) diff --git a/xen/arch/x86/include/asm/altp2m.h b/xen/arch/x86/include/asm/altp2m.h index e5e59cbd68..03613dc246 100644 --- a/xen/arch/x86/include/asm/altp2m.h +++ b/xen/arch/x86/include/asm/altp2m.h @@ -7,7 +7,7 @@ #ifndef __ASM_X86_ALTP2M_H #define __ASM_X86_ALTP2M_H -#ifdef CONFIG_HVM +#ifdef CONFIG_VMX #include #include /* for struct vcpu, struct domain */ @@ -38,7 +38,10 @@ static inline bool altp2m_active(const struct domain *d) } /* Only declaration is needed. DCE will optimise it out when linking. */ +void altp2m_vcpu_initialise(struct vcpu *v); +void altp2m_vcpu_destroy(struct vcpu *v); uint16_t altp2m_vcpu_idx(const struct vcpu *v); +int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn); void altp2m_vcpu_disable_ve(struct vcpu *v); #endif diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h index 87a6935d97..870ebf3d3a 100644 --- a/xen/arch/x86/include/asm/hvm/hvm.h +++ b/xen/arch/x86/include/asm/hvm/hvm.h @@ -648,11 +648,18 @@ static inline bool hvm_hap_supported(void) return hvm_funcs.caps.hap; } +#ifdef CONFIX_VMX /* returns true if hardware supports alternate p2m's */ static inline bool hvm_altp2m_supported(void) { return hvm_funcs.caps.altp2m; } +#else +static inline bool hvm_altp2m_supported(void) +{ + return false; +} +#endif /* updates the current hardware p2m */ static inline void altp2m_vcpu_update_p2m(struct vcpu *v) diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h index 111badf89a..0b2da1fd05 100644 --- a/xen/arch/x86/include/asm/p2m.h +++ b/xen/arch/x86/include/asm/p2m.h @@ -581,9 +581,9 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn) return _gfn(mfn_x(mfn)); } -#ifdef CONFIG_HVM #define AP2MGET_prepopulate true #define AP2MGET_query false +#ifdef CONFIG_VMX /* * Looks up altp2m entry. 
diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h
index 111badf89a..0b2da1fd05 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -581,9 +581,9 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
     return _gfn(mfn_x(mfn));
 }
 
-#ifdef CONFIG_HVM
 #define AP2MGET_prepopulate true
 #define AP2MGET_query false
+#ifdef CONFIG_VMX
 
 /*
  * Looks up altp2m entry. If the entry is not found it looks up the entry in
@@ -593,6 +593,15 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
 int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
                                p2m_type_t *t, p2m_access_t *a,
                                bool prepopulate);
+#else
+static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m,
+                                             gfn_t gfn, mfn_t *mfn,
+                                             p2m_type_t *t, p2m_access_t *a,
+                                             bool prepopulate)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
 #endif
 
 /* Init the datastructures for later use by the p2m code */
@@ -909,8 +918,15 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx)
 /* Switch alternate p2m for a single vcpu */
 bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
 
+#ifdef CONFIG_VMX
 /* Check to see if vcpu should be switched to a different p2m. */
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
+#else
+static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+    /* Not supported w/o VMX */
+}
+#endif
 
 /* Flush all the alternate p2m's for a domain */
 void p2m_flush_altp2m(struct domain *d);
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 92168290a8..3af992a6e9 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -1,7 +1,7 @@
 obj-y += shadow/
 obj-$(CONFIG_HVM) += hap/
-obj-$(CONFIG_HVM) += altp2m.o
+obj-$(CONFIG_VMX) += altp2m.o
 obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_SHADOW_PAGING) += guest_walk_4.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
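
The altp2m_get_effective_entry() stub in the hunk above combines
ASSERT_UNREACHABLE() with a defensive -EOPNOTSUPP: in the !VMX build all
callers should themselves be compiled out, so reaching the stub indicates a
bug, but release builds still get a sane error. A standalone model of the
pattern follows; ASSERT_UNREACHABLE() here is approximated with assert(), not
Xen's actual macro, and the function name is hypothetical.

#include <assert.h>
#include <errno.h>
#include <stdio.h>

/* Rough model of ASSERT_UNREACHABLE(): trap in debug builds, no-op otherwise. */
#ifdef NDEBUG
#define ASSERT_UNREACHABLE() ((void)0)
#else
#define ASSERT_UNREACHABLE() assert(!"unreachable")
#endif

/* Stub in the spirit of the altp2m_get_effective_entry() replacement above. */
static inline int feature_get_entry(unsigned long gfn, unsigned long *mfn)
{
    (void)gfn;
    (void)mfn;
    ASSERT_UNREACHABLE();     /* callers should be compiled out when !VMX */
    return -EOPNOTSUPP;       /* defensive error if a release build gets here */
}

int main(void)
{
    unsigned long mfn;

    /* Traps in a debug build; prints a negative errno with -DNDEBUG. */
    printf("%d\n", feature_get_entry(0, &mfn));
    return 0;
}
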

From patchwork Tue Apr 16 06:33:28 2024
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, George Dunlap,
    Roger Pau Monné, Stefano Stabellini, Sergiy Kibrik
Subject: [XEN PATCH v1 07/15] x86/p2m: guard vmx specific ept functions with CONFIG_VMX
Date: Tue, 16 Apr 2024 09:33:28 +0300
Message-Id: <20240416063328.3469386-1-Sergiy_Kibrik@epam.com>

From: Xenia Ragiadakou

The functions ept_p2m_init() and ept_p2m_uninit() are VT-x specific. Add
build-time checks so that these functions are not executed when !VMX.
No functional change intended.

Signed-off-by: Xenia Ragiadakou
Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/mm/p2m-basic.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 90106997d7..6810941c30 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -38,7 +38,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m_pod_init(p2m);
     p2m_nestedp2m_init(p2m);
 
-    if ( hap_enabled(d) && cpu_has_vmx )
+    if ( IS_ENABLED(CONFIG_VMX) && hap_enabled(d) && cpu_has_vmx )
         ret = ept_p2m_init(p2m);
     else
         p2m_pt_init(p2m);
@@ -70,7 +70,7 @@ struct p2m_domain *p2m_init_one(struct domain *d)
 void p2m_free_one(struct p2m_domain *p2m)
 {
     p2m_free_logdirty(p2m);
-    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
+    if ( IS_ENABLED(CONFIG_VMX) && hap_enabled(p2m->domain) && cpu_has_vmx )
         ept_p2m_uninit(p2m);
     free_cpumask_var(p2m->dirty_cpumask);
     xfree(p2m);
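
The hunks above prefer IS_ENABLED(CONFIG_VMX) over #ifdef: the guarded code is
always parsed and type-checked, and constant folding removes the ept_p2m_*()
calls in !VMX builds (their declarations remain visible). Below is a minimal
model of how such a macro can work; Xen's and Linux's real versions differ in
detail, so treat this as an illustration of the preprocessor trick only.

#include <stdio.h>

/* #define CONFIG_VMX 1    -- what the Kconfig-generated header would contain */

/*
 * Expands to 1 if the option macro is defined to 1, and to 0 if it is
 * undefined. The placeholder turns a defined option into an extra macro
 * argument, so the "second argument" selector sees 1; otherwise it sees 0.
 */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define IS_ENABLED(option) ___is_defined(option)

int main(void)
{
    if ( IS_ENABLED(CONFIG_VMX) )   /* resolves to 0 or 1 while preprocessing */
        printf("EPT paths selected\n");
    else
        printf("EPT paths compiled out\n");

    return 0;
}
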

From patchwork Tue Apr 16 06:35:34 2024
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
    Xenia Ragiadakou, Stefano Stabellini
Subject: [XEN PATCH v1 08/15] x86/vpmu: separate amd/intel vPMU code
Date: Tue, 16 Apr 2024 09:35:34 +0300
Message-Id: <20240416063534.3469482-1-Sergiy_Kibrik@epam.com>

Build the AMD vPMU when CONFIG_SVM is on, and the Intel vPMU when CONFIG_VMX
is on, allowing for a platform-specific build. Also separate the
arch_vpmu_ops initializers using these options and static inline stubs.
No functional change intended.
Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/cpu/Makefile       |  4 +++-
 xen/arch/x86/include/asm/vpmu.h | 19 +++++++++++++++++++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu/Makefile b/xen/arch/x86/cpu/Makefile
index 35561fe51d..d3d7b8fb2e 100644
--- a/xen/arch/x86/cpu/Makefile
+++ b/xen/arch/x86/cpu/Makefile
@@ -10,4 +10,6 @@ obj-y += intel.o
 obj-y += intel_cacheinfo.o
 obj-y += mwait-idle.o
 obj-y += shanghai.o
-obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
+obj-y += vpmu.o
+obj-$(CONFIG_SVM) += vpmu_amd.o
+obj-$(CONFIG_VMX) += vpmu_intel.o
diff --git a/xen/arch/x86/include/asm/vpmu.h b/xen/arch/x86/include/asm/vpmu.h
index dae9b43dac..da86f2e420 100644
--- a/xen/arch/x86/include/asm/vpmu.h
+++ b/xen/arch/x86/include/asm/vpmu.h
@@ -11,6 +11,7 @@
 #define __ASM_X86_HVM_VPMU_H_
 
 #include <public/pmu.h>
+#include <xen/err.h>
 
 #define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
 #define vpmu_vcpu(vpmu)   container_of((vpmu), struct vcpu, arch.vpmu)
@@ -42,9 +43,27 @@ struct arch_vpmu_ops {
 #endif
 };
 
+#ifdef CONFIG_VMX
 const struct arch_vpmu_ops *core2_vpmu_init(void);
+#else
+static inline const struct arch_vpmu_ops *core2_vpmu_init(void)
+{
+    return ERR_PTR(-ENODEV);
+}
+#endif
+#ifdef CONFIG_SVM
 const struct arch_vpmu_ops *amd_vpmu_init(void);
 const struct arch_vpmu_ops *hygon_vpmu_init(void);
+#else
+static inline const struct arch_vpmu_ops *amd_vpmu_init(void)
+{
+    return ERR_PTR(-ENODEV);
+}
+static inline const struct arch_vpmu_ops *hygon_vpmu_init(void)
+{
+    return ERR_PTR(-ENODEV);
+}
+#endif
 
 struct vpmu_struct {
     u32 flags;
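
Returning ERR_PTR(-ENODEV) from the stubs keeps one error path for the caller:
a compiled-out vPMU and a failed hardware init look the same. A minimal
standalone sketch of the convention follows, with simplified ERR_PTR()/IS_ERR()
helpers in the usual kernel style (small negative errnos encoded in the top of
the pointer range); the caller and struct contents are hypothetical.

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO 4095

/* Simplified models of the kernel-style error-pointer helpers. */
static inline void *ERR_PTR(long error)
{
    return (void *)error;
}

static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct arch_vpmu_ops {
    const char *name;
};

/* Hypothetical stand-in for a !CONFIG_VMX init stub like the one above. */
static inline const struct arch_vpmu_ops *vendor_vpmu_init(void)
{
    return ERR_PTR(-ENODEV);
}

int main(void)
{
    const struct arch_vpmu_ops *ops = vendor_vpmu_init();

    /* One check covers both the stub and a genuine init failure. */
    if ( IS_ERR(ops) )
        printf("vPMU disabled\n");
    else
        printf("vPMU ops: %s\n", ops->name);

    return 0;
}
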

From patchwork Tue Apr 16 06:37:40 2024
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné,
    Stefano Stabellini, Sergiy Kibrik
Subject: [XEN PATCH v1 09/15] x86/traps: guard vmx specific functions with CONFIG_VMX
Date: Tue, 16 Apr 2024 09:37:40 +0300
Message-Id: <20240416063740.3469592-1-Sergiy_Kibrik@epam.com>

From: Xenia Ragiadakou

The functions vmx_vmcs_enter() and vmx_vmcs_exit() are VT-x specific. Guard
their calls with CONFIG_VMX. No functional change intended.

Signed-off-by: Xenia Ragiadakou
Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/traps.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index d554c9d41e..218eb5b322 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -676,7 +676,6 @@ void vcpu_show_execution_state(struct vcpu *v)
 
     vcpu_pause(v); /* acceptably dangerous */
 
-#ifdef CONFIG_HVM
     /*
      * For VMX special care is needed: Reading some of the register state will
      * require VMCS accesses. Engaging foreign VMCSes involves acquiring of a
@@ -684,12 +683,11 @@ void vcpu_show_execution_state(struct vcpu *v)
      * region. Despite this being a layering violation, engage the VMCS right
      * here. This then also avoids doing so several times in close succession.
      */
-    if ( cpu_has_vmx && is_hvm_vcpu(v) )
+    if ( IS_ENABLED(CONFIG_VMX) && cpu_has_vmx && is_hvm_vcpu(v) )
     {
         ASSERT(!in_irq());
         vmx_vmcs_enter(v);
     }
-#endif
 
     /* Prevent interleaving of output. */
     flags = console_lock_recursive_irqsave();
@@ -714,10 +712,8 @@ void vcpu_show_execution_state(struct vcpu *v)
 
     console_unlock_recursive_irqrestore(flags);
 
-#ifdef CONFIG_HVM
-    if ( cpu_has_vmx && is_hvm_vcpu(v) )
+    if ( IS_ENABLED(CONFIG_VMX) && cpu_has_vmx && is_hvm_vcpu(v) )
         vmx_vmcs_exit(v);
-#endif
 
     vcpu_unpause(v);
 }
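
A detail worth noting in the hunks above: vmx_vmcs_enter() and vmx_vmcs_exit()
must stay balanced, so the entry and exit guards have to agree. In the patched
function that holds because the two conditions are textually identical and the
vcpu is paused in between. The standalone sketch below shows a defensive
variant of the same shape that evaluates the guard once; everything here is an
illustrative stand-in, not Xen's API.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the combined VMX/HVM guard and VMCS helpers. */
static bool vmx_vcpu_active(void) { return false; }
static void vmcs_enter(void) { puts("VMCS engaged");  }
static void vmcs_exit(void)  { puts("VMCS released"); }

static void show_execution_state(void)
{
    /* Evaluate the guard once so the enter/exit pair cannot get unbalanced. */
    bool engage = vmx_vcpu_active();

    if ( engage )
        vmcs_enter();

    puts("... dump registers ...");

    if ( engage )
        vmcs_exit();
}

int main(void)
{
    show_execution_state();
    return 0;
}
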