From patchwork Wed May 15 08:59:52 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13664902
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou, Andrew Cooper, Roger Pau Monné, George Dunlap, Jan Beulich, Stefano Stabellini, Sergiy Kibrik
Subject: [XEN PATCH v2 01/15] x86: introduce AMD-V and Intel VT-x Kconfig options
Date: Wed, 15 May 2024 11:59:52 +0300
Message-Id: <3f2168a337a192336e9a7fb797185c39978db11b.1715761386.git.Sergiy_Kibrik@epam.com>

From: Xenia Ragiadakou

Introduce two new Kconfig options, SVM and VMX, to allow code specific
to each virtualization technology to be separated and, when not
required, stripped out of the build.

CONFIG_SVM will be used to enable virtual machine extensions on
platforms that implement AMD Virtualization Technology (AMD-V).
CONFIG_VMX will be used to enable virtual machine extensions on
platforms that implement Intel Virtualization Technology (Intel VT-x).

Both features depend on HVM support. Since, at this point, disabling
either of them would cause Xen to fail to compile, the options are
enabled by default when HVM is enabled and are not selectable by the
user.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
Signed-off-by: Sergiy Kibrik
CC: Jan Beulich
Acked-by: Jan Beulich
---
changes in v2:
 - simplify kconfig expression to def_bool HVM
 - keep file list in Makefile in alphabetical order
changes in v1:
 - change kconfig option name AMD_SVM/INTEL_VMX -> SVM/VMX
---
 xen/arch/x86/Kconfig         | 6 ++++++
 xen/arch/x86/hvm/Makefile    | 4 ++--
 xen/arch/x86/mm/Makefile     | 3 ++-
 xen/arch/x86/mm/hap/Makefile | 2 +-
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 7e03e4bc55..8c9f8431f0 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -122,6 +122,12 @@ config HVM

 	  If unsure, say Y.

+config SVM
+	def_bool HVM
+
+config VMX
+	def_bool HVM
+
 config XEN_SHSTK
 	bool "Supervisor Shadow Stacks"
 	depends on HAS_AS_CET_SS

diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 3464191544..8434badc64 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -1,5 +1,5 @@
-obj-y += svm/
-obj-y += vmx/
+obj-$(CONFIG_SVM) += svm/
+obj-$(CONFIG_VMX) += vmx/
 obj-y += viridian/
 obj-y += asid.o

diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 0803ac9297..0128ca7ab6 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-$(CONFIG_HVM) += nested.o
 obj-$(CONFIG_HVM) += p2m.o
 obj-y += p2m-basic.o
-obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
+obj-$(CONFIG_VMX) += p2m-ept.o
+obj-$(CONFIG_HVM) += p2m-pod.o p2m-pt.o
 obj-y += paging.o
 obj-y += physmap.o

diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 8ef54b1faa..98c8a87819 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,4 +3,4 @@ obj-y += guest_walk_2.o
 obj-y += guest_walk_3.o
 obj-y += guest_walk_4.o
 obj-y += nested_hap.o
-obj-y += nested_ept.o
+obj-$(CONFIG_VMX) += nested_ept.o

From patchwork Wed May 15 09:01:55 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13664906
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Alexandru Isaila, Petre Pircalabu, Andrew Cooper, Roger Pau Monné, Jan Beulich, Stefano Stabellini, Xenia Ragiadakou, Tamas K Lengyel
Subject: [XEN PATCH v2 02/15] x86/monitor: guard altp2m usage
Date: Wed, 15 May 2024 12:01:55 +0300
Message-Id: <01767c3f98a88999d4b8ed3ae742ad66a0921ba3.1715761386.git.Sergiy_Kibrik@epam.com>

Explicitly check whether altp2m is enabled for the domain before getting
the altp2m index. If the explicit call to altp2m_active() always returns
false, dead code elimination (DCE) will remove the call to
altp2m_vcpu_idx(). The purpose of this is to later be able to disable
altp2m support and exclude its code from the build completely, when it
is not supported by the target platform (as of now it is supported on
VT-x only).

Signed-off-by: Sergiy Kibrik
CC: Tamas K Lengyel
CC: Jan Beulich
Reviewed-by: Stefano Stabellini
---
changes in v2:
 - patch description changed, removed VMX mentioning
 - guard by altp2m_active() instead of hvm_altp2m_supported()
---
 xen/arch/x86/hvm/monitor.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
index 2a8ff07ec9..74621000b2 100644
--- a/xen/arch/x86/hvm/monitor.c
+++ b/xen/arch/x86/hvm/monitor.c
@@ -262,6 +262,8 @@ bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
     struct vcpu *curr = current;
     vm_event_request_t req = {};
     paddr_t gpa = (gfn_to_gaddr(gfn) | (gla & ~PAGE_MASK));
+    unsigned int altp2m_idx = altp2m_active(curr->domain) ?
+                              altp2m_vcpu_idx(curr) : 0;
     int rc;

     ASSERT(curr->arch.vm_event->send_event);
@@ -270,7 +272,7 @@ bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
      * p2m_get_mem_access() can fail from a invalid MFN and return -ESRCH
      * in which case access must be restricted.
      */
-    rc = p2m_get_mem_access(curr->domain, gfn, &access, altp2m_vcpu_idx(curr));
+    rc = p2m_get_mem_access(curr->domain, gfn, &access, altp2m_idx);

     if ( rc == -ESRCH )
         access = XENMEM_access_n;

From patchwork Wed May 15 09:03:59 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13664907
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, George Dunlap, Roger Pau Monné, Stefano Stabellini, Xenia Ragiadakou, Tamas K Lengyel
Subject: [XEN PATCH v2 03/15] x86/p2m: guard altp2m routines
Date: Wed, 15 May 2024 12:03:59 +0300

Initialize and bring down
altp2m only when it is supported by the platform, e.g. VMX. Also guard
p2m_altp2m_propagate_change(). The purpose of this is the possibility
to disable altp2m support and exclude its code from the build
completely, when it is not supported by the target platform.

Signed-off-by: Sergiy Kibrik
CC: Tamas K Lengyel
Reviewed-by: Stefano Stabellini
---
 xen/arch/x86/mm/p2m-basic.c | 19 +++++++++++--------
 xen/arch/x86/mm/p2m-ept.c   |  2 +-
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 8599bd15c6..90106997d7 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -126,13 +126,15 @@ int p2m_init(struct domain *d)
         return rc;
     }

-    rc = p2m_init_altp2m(d);
-    if ( rc )
+    if ( hvm_altp2m_supported() )
     {
-        p2m_teardown_hostp2m(d);
-        p2m_teardown_nestedp2m(d);
+        rc = p2m_init_altp2m(d);
+        if ( rc )
+        {
+            p2m_teardown_hostp2m(d);
+            p2m_teardown_nestedp2m(d);
+        }
     }
-
     return rc;
 }

@@ -195,11 +197,12 @@ void p2m_final_teardown(struct domain *d)
 {
     if ( is_hvm_domain(d) )
     {
+        if ( hvm_altp2m_supported() )
+            p2m_teardown_altp2m(d);
         /*
-         * We must tear down both of them unconditionally because
-         * we initialise them unconditionally.
+         * We must tear down nestedp2m unconditionally because
+         * we initialise it unconditionally.
         */
-        p2m_teardown_altp2m(d);
         p2m_teardown_nestedp2m(d);
     }

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index f83610cb8c..d264df5b14 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -986,7 +986,7 @@ out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);

-    if ( entry_written && p2m_is_hostp2m(p2m) )
+    if ( entry_written && p2m_is_hostp2m(p2m) && hvm_altp2m_supported() )
     {
         ret = p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
         if ( !rc )

From patchwork Wed May 15 09:06:02 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13664909
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Andrew Cooper, George Dunlap, Roger Pau Monné, Jan Beulich, Stefano Stabellini, Xenia Ragiadakou, Tamas K Lengyel
Subject: [XEN PATCH v2 04/15] x86/p2m: move altp2m-related code to separate file
Date: Wed, 15 May 2024 12:06:02 +0300
Move the altp2m code from the generic p2m.c file to altp2m.c, so that
it is kept separate and can possibly be disabled in the build. We may
want to disable it when building for a specific platform only, one that
does not support alternate p2m.

No functional change intended.

Signed-off-by: Sergiy Kibrik
CC: Tamas K Lengyel
CC: Jan Beulich
Reviewed-by: Stefano Stabellini
Acked-by: Jan Beulich
---
changes in v2:
 - no double blank lines
 - no unrelated re-formatting
 - header #include-s ordering
 - changed patch description
---
 xen/arch/x86/mm/altp2m.c | 630 ++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c    | 632 +--------------------------------------
 xen/arch/x86/mm/p2m.h    |   3 +
 3 files changed, 635 insertions(+), 630 deletions(-)

diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index a04297b646..6fe1e9ed6b 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -7,6 +7,8 @@
 #include
 #include
 #include
+#include
+#include

 #include "mm-locks.h"
 #include "p2m.h"

@@ -151,6 +153,634 @@ void p2m_teardown_altp2m(struct domain *d)
     }
 }

+int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
+                               p2m_type_t *t, p2m_access_t *a,
+                               bool prepopulate)
+{
+    *mfn = ap2m->get_entry(ap2m, gfn, t, a, 0, NULL, NULL);
+
+    /* Check host p2m if no valid entry in alternate */
+    if ( !mfn_valid(*mfn) && !p2m_is_hostp2m(ap2m) )
+    {
+        struct p2m_domain *hp2m = p2m_get_hostp2m(ap2m->domain);
+        unsigned int page_order;
+        int rc;
+
+        *mfn = p2m_get_gfn_type_access(hp2m, gfn, t, a, P2M_ALLOC | P2M_UNSHARE,
+                                       &page_order, 0);
+
+        rc = -ESRCH;
+        if ( !mfn_valid(*mfn) || *t != p2m_ram_rw )
+            return rc;
+
+        /* If this is a superpage, copy that first */
+        if ( prepopulate && page_order != PAGE_ORDER_4K )
+        {
+            unsigned long mask = ~((1UL << page_order) - 1);
+            gfn_t gfn_aligned = _gfn(gfn_x(gfn) & mask);
+            mfn_t mfn_aligned = _mfn(mfn_x(*mfn) & mask);
+
+            rc = ap2m->set_entry(ap2m, gfn_aligned,
+                                 mfn_aligned, page_order, *t, *a, 1);
+            if ( rc )
+                return rc;
+        }
+    }
+
+    return 0;
+}
+
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+    if ( altp2m_active(v->domain) )
+        p2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
+bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+{
+    struct domain *d = v->domain;
+    bool rc = false;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    altp2m_list_lock(d);
+
+    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    {
+        if ( p2m_set_altp2m(v, idx) )
+            altp2m_vcpu_update_p2m(v);
+        rc = 1;
+    }
+
+    altp2m_list_unlock(d);
+    return rc;
+}
+
+/*
+ * Read info about the gfn in an altp2m, locking the gfn.
+ *
+ * If the entry is valid, pass the results back to the caller.
+ *
+ * If the entry was invalid, and the host's entry is also invalid,
+ * return to the caller without any changes.
+ *
+ * If the entry is invalid, and the host entry was valid, propagate
+ * the host's entry to the altp2m (retaining page order), and indicate
+ * that the caller should re-try the faulting instruction.
+ */
+bool p2m_altp2m_get_or_propagate(struct p2m_domain *ap2m, unsigned long gfn_l,
+                                 mfn_t *mfn, p2m_type_t *p2mt,
+                                 p2m_access_t *p2ma, unsigned int *page_order)
+{
+    p2m_type_t ap2mt;
+    p2m_access_t ap2ma;
+    unsigned int cur_order;
+    unsigned long mask;
+    gfn_t gfn;
+    mfn_t amfn;
+    int rc;
+
+    /*
+     * NB we must get the full lock on the altp2m here, in addition to
+     * the lock on the individual gfn, since we may change a range of
+     * gfns below.
+     */
+    p2m_lock(ap2m);
+
+    amfn = get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, &cur_order);
+
+    if ( cur_order > *page_order )
+        cur_order = *page_order;
+
+    if ( !mfn_eq(amfn, INVALID_MFN) )
+    {
+        p2m_unlock(ap2m);
+        *mfn = amfn;
+        *p2mt = ap2mt;
+        *p2ma = ap2ma;
+        *page_order = cur_order;
+        return false;
+    }
+
+    /* Host entry is also invalid; don't bother setting the altp2m entry.
+     */
+    if ( mfn_eq(*mfn, INVALID_MFN) )
+    {
+        p2m_unlock(ap2m);
+        *page_order = cur_order;
+        return false;
+    }
+
+    /*
+     * If this is a superpage mapping, round down both frame numbers
+     * to the start of the superpage. NB that we repurpose `amfn`
+     * here.
+     */
+    mask = ~((1UL << cur_order) - 1);
+    amfn = _mfn(mfn_x(*mfn) & mask);
+    gfn = _gfn(gfn_l & mask);
+
+    /* Override the altp2m entry with its default access. */
+    *p2ma = ap2m->default_access;
+
+    rc = p2m_set_entry(ap2m, gfn, amfn, cur_order, *p2mt, *p2ma);
+    p2m_unlock(ap2m);
+
+    if ( rc )
+    {
+        gprintk(XENLOG_ERR,
+                "failed to set entry for %"PRI_gfn" -> %"PRI_mfn" altp2m %u, rc %d\n",
+                gfn_l, mfn_x(amfn), vcpu_altp2m(current).p2midx, rc);
+        domain_crash(ap2m->domain);
+    }
+
+    return true;
+}
+
+enum altp2m_reset_type {
+    ALTP2M_RESET,
+    ALTP2M_DEACTIVATE
+};
+
+static void p2m_reset_altp2m(struct domain *d, unsigned int idx,
+                             enum altp2m_reset_type reset_type)
+{
+    struct p2m_domain *p2m;
+
+    ASSERT(idx < MAX_ALTP2M);
+    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+
+    p2m_lock(p2m);
+
+    p2m_flush_table_locked(p2m);
+
+    if ( reset_type == ALTP2M_DEACTIVATE )
+        p2m_free_logdirty(p2m);
+
+    /* Uninit and reinit ept to force TLB shootdown */
+    ept_p2m_uninit(p2m);
+    ept_p2m_init(p2m);
+
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
+    p2m->max_remapped_gfn = 0;
+
+    p2m_unlock(p2m);
+}
+
+void p2m_flush_altp2m(struct domain *d)
+{
+    unsigned int i;
+
+    altp2m_list_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        p2m_reset_altp2m(d, i, ALTP2M_DEACTIVATE);
+        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
+        d->arch.altp2m_visible_eptp[i] = mfn_x(INVALID_MFN);
+    }
+
+    altp2m_list_unlock(d);
+}
+
+static int p2m_activate_altp2m(struct domain *d, unsigned int idx,
+                               p2m_access_t hvmmem_default_access)
+{
+    struct p2m_domain *hostp2m, *p2m;
+    int rc;
+
+    ASSERT(idx < MAX_ALTP2M);
+
+    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+    hostp2m = p2m_get_hostp2m(d);
+
+    p2m_lock(p2m);
+
+    rc =
+         p2m_init_logdirty(p2m);
+
+    if ( rc )
+        goto out;
+
+    /* The following is really just a rangeset copy. */
+    rc = rangeset_merge(p2m->logdirty_ranges, hostp2m->logdirty_ranges);
+
+    if ( rc )
+    {
+        p2m_free_logdirty(p2m);
+        goto out;
+    }
+
+    p2m->default_access = hvmmem_default_access;
+    p2m->domain = hostp2m->domain;
+    p2m->global_logdirty = hostp2m->global_logdirty;
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
+    p2m->max_mapped_pfn = p2m->max_remapped_gfn = 0;
+
+    p2m_init_altp2m_ept(d, idx);
+
+ out:
+    p2m_unlock(p2m);
+
+    return rc;
+}
+
+int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    int rc = -EINVAL;
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
+
+    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
+        return rc;
+
+    altp2m_list_lock(d);
+
+    if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
+         mfn_x(INVALID_MFN) )
+        rc = p2m_activate_altp2m(d, idx, hostp2m->default_access);
+
+    altp2m_list_unlock(d);
+    return rc;
+}
+
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx,
+                         xenmem_access_t hvmmem_default_access)
+{
+    int rc = -EINVAL;
+    unsigned int i;
+    p2m_access_t a;
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
+
+    if ( hvmmem_default_access > XENMEM_access_default ||
+         !xenmem_access_to_p2m_access(hostp2m, hvmmem_default_access, &a) )
+        return rc;
+
+    altp2m_list_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            continue;
+
+        rc = p2m_activate_altp2m(d, i, a);
+
+        if ( !rc )
+            *idx = i;
+
+        break;
+    }
+
+    altp2m_list_unlock(d);
+    return rc;
+}
+
+int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct p2m_domain *p2m;
+    int rc = -EBUSY;
+
+    if ( !idx || idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
+        return rc;
+
+    rc = domain_pause_except_self(d);
+    if ( rc )
+        return rc;
+
+    rc = -EBUSY;
+    altp2m_list_lock(d);
+
+    if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] !=
+         mfn_x(INVALID_MFN) )
+    {
+        p2m =
+             array_access_nospec(d->arch.altp2m_p2m, idx);
+
+        if ( !_atomic_read(p2m->active_vcpus) )
+        {
+            p2m_reset_altp2m(d, idx, ALTP2M_DEACTIVATE);
+            d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] =
+                mfn_x(INVALID_MFN);
+            d->arch.altp2m_visible_eptp[array_index_nospec(idx, MAX_EPTP)] =
+                mfn_x(INVALID_MFN);
+            rc = 0;
+        }
+    }
+
+    altp2m_list_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
+int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct vcpu *v;
+    int rc = -EINVAL;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    rc = domain_pause_except_self(d);
+    if ( rc )
+        return rc;
+
+    rc = -EINVAL;
+    altp2m_list_lock(d);
+
+    if ( d->arch.altp2m_visible_eptp[idx] != mfn_x(INVALID_MFN) )
+    {
+        for_each_vcpu( d, v )
+            if ( p2m_set_altp2m(v, idx) )
+                altp2m_vcpu_update_p2m(v);
+
+        rc = 0;
+    }
+
+    altp2m_list_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
+int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
+                          gfn_t old_gfn, gfn_t new_gfn)
+{
+    struct p2m_domain *hp2m, *ap2m;
+    p2m_access_t a;
+    p2m_type_t t;
+    mfn_t mfn;
+    int rc = -EINVAL;
+
+    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+         d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
+         mfn_x(INVALID_MFN) )
+        return rc;
+
+    hp2m = p2m_get_hostp2m(d);
+    ap2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+
+    p2m_lock(hp2m);
+    p2m_lock(ap2m);
+
+    if ( gfn_eq(new_gfn, INVALID_GFN) )
+    {
+        mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL);
+        rc = mfn_valid(mfn)
+             ?
+               p2m_remove_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K)
+             : 0;
+        goto out;
+    }
+
+    rc = altp2m_get_effective_entry(ap2m, old_gfn, &mfn, &t, &a,
+                                    AP2MGET_prepopulate);
+    if ( rc )
+        goto out;
+
+    rc = altp2m_get_effective_entry(ap2m, new_gfn, &mfn, &t, &a,
+                                    AP2MGET_query);
+    if ( rc )
+        goto out;
+
+    if ( !ap2m->set_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K, t, a,
+                          (current->domain != d)) )
+    {
+        rc = 0;
+
+        if ( gfn_x(new_gfn) < ap2m->min_remapped_gfn )
+            ap2m->min_remapped_gfn = gfn_x(new_gfn);
+        if ( gfn_x(new_gfn) > ap2m->max_remapped_gfn )
+            ap2m->max_remapped_gfn = gfn_x(new_gfn);
+    }
+
+ out:
+    p2m_unlock(ap2m);
+    p2m_unlock(hp2m);
+    return rc;
+}
+
+int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
+                                mfn_t mfn, unsigned int page_order,
+                                p2m_type_t p2mt, p2m_access_t p2ma)
+{
+    struct p2m_domain *p2m;
+    unsigned int i;
+    unsigned int reset_count = 0;
+    unsigned int last_reset_idx = ~0;
+    int ret = 0;
+
+    if ( !altp2m_active(d) )
+        return 0;
+
+    altp2m_list_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        p2m_type_t t;
+        p2m_access_t a;
+
+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+
+        /* Check for a dropped page that may impact this altp2m */
+        if ( mfn_eq(mfn, INVALID_MFN) &&
+             gfn_x(gfn) + (1UL << page_order) > p2m->min_remapped_gfn &&
+             gfn_x(gfn) <= p2m->max_remapped_gfn )
+        {
+            if ( !reset_count++ )
+            {
+                p2m_reset_altp2m(d, i, ALTP2M_RESET);
+                last_reset_idx = i;
+            }
+            else
+            {
+                /* At least 2 altp2m's impacted, so reset everything */
+                for ( i = 0; i < MAX_ALTP2M; i++ )
+                {
+                    if ( i == last_reset_idx ||
+                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+                        continue;
+
+                    p2m_reset_altp2m(d, i, ALTP2M_RESET);
+                }
+
+                ret = 0;
+                break;
+            }
+        }
+        else if ( !mfn_eq(get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0,
+                                              NULL), INVALID_MFN) )
+        {
+            int rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
+
+            /* Best effort: Don't bail on error.
+             */
+            if ( !ret )
+                ret = rc;
+
+            p2m_put_gfn(p2m, gfn);
+        }
+        else
+            p2m_put_gfn(p2m, gfn);
+    }
+
+    altp2m_list_unlock(d);
+
+    return ret;
+}
+
+/*
+ * Set/clear the #VE suppress bit for a page. Only available on VMX.
+ */
+int p2m_set_suppress_ve(struct domain *d, gfn_t gfn, bool suppress_ve,
+                        unsigned int altp2m_idx)
+{
+    int rc;
+    struct xen_hvm_altp2m_suppress_ve_multi sve = {
+        altp2m_idx, suppress_ve, 0, 0, gfn_x(gfn), gfn_x(gfn), 0
+    };
+
+    if ( !(rc = p2m_set_suppress_ve_multi(d, &sve)) )
+        rc = sve.first_error;
+
+    return rc;
+}
+
+/*
+ * Set/clear the #VE suppress bit for multiple pages. Only available on VMX.
+ */
+int p2m_set_suppress_ve_multi(struct domain *d,
+                              struct xen_hvm_altp2m_suppress_ve_multi *sve)
+{
+    struct p2m_domain *host_p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *ap2m = NULL;
+    struct p2m_domain *p2m = host_p2m;
+    uint64_t start = sve->first_gfn;
+    int rc = 0;
+
+    if ( sve->view > 0 )
+    {
+        if ( sve->view >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+             d->arch.altp2m_eptp[array_index_nospec(sve->view, MAX_EPTP)] ==
+             mfn_x(INVALID_MFN) )
+            return -EINVAL;
+
+        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, sve->view);
+    }
+
+    p2m_lock(host_p2m);
+
+    if ( ap2m )
+        p2m_lock(ap2m);
+
+    while ( sve->last_gfn >= start )
+    {
+        p2m_access_t a;
+        p2m_type_t t;
+        mfn_t mfn;
+        int err = 0;
+
+        if ( (err = altp2m_get_effective_entry(p2m, _gfn(start), &mfn, &t, &a,
+                                               AP2MGET_query)) &&
+             !sve->first_error )
+        {
+            sve->first_error_gfn = start; /* Save the gfn of the first error */
+            sve->first_error = err; /* Save the first error code */
+        }
+
+        if ( !err && (err = p2m->set_entry(p2m, _gfn(start), mfn,
+                                           PAGE_ORDER_4K, t, a,
+                                           sve->suppress_ve)) &&
+             !sve->first_error )
+        {
+            sve->first_error_gfn = start; /* Save the gfn of the first error */
+            sve->first_error = err; /* Save the first error code */
+        }
+
+        /* Check for continuation if it's not the last iteration.
*/ + if ( sve->last_gfn >= ++start && hypercall_preempt_check() ) + { + rc = -ERESTART; + break; + } + } + + sve->first_gfn = start; + + if ( ap2m ) + p2m_unlock(ap2m); + + p2m_unlock(host_p2m); + + return rc; +} + +int p2m_get_suppress_ve(struct domain *d, gfn_t gfn, bool *suppress_ve, + unsigned int altp2m_idx) +{ + struct p2m_domain *host_p2m = p2m_get_hostp2m(d); + struct p2m_domain *ap2m = NULL; + struct p2m_domain *p2m; + mfn_t mfn; + p2m_access_t a; + p2m_type_t t; + int rc = 0; + + if ( altp2m_idx > 0 ) + { + if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) || + d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] == + mfn_x(INVALID_MFN) ) + return -EINVAL; + + p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx); + } + else + p2m = host_p2m; + + gfn_lock(host_p2m, gfn, 0); + + if ( ap2m ) + p2m_lock(ap2m); + + mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, NULL, suppress_ve); + if ( !mfn_valid(mfn) ) + rc = -ESRCH; + + if ( ap2m ) + p2m_unlock(ap2m); + + gfn_unlock(host_p2m, gfn, 0); + + return rc; +} + +int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int altp2m_idx, + uint8_t visible) +{ + int rc = 0; + + altp2m_list_lock(d); + + /* + * Eptp index is correlated with altp2m index and should not exceed + * min(MAX_ALTP2M, MAX_EPTP). 
+ */ + if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) || + d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] == + mfn_x(INVALID_MFN) ) + rc = -EINVAL; + else if ( visible ) + d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] = + d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)]; + else + d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] = + mfn_x(INVALID_MFN); + + altp2m_list_unlock(d); + + return rc; +} + /* * Local variables: * mode: C diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index ce742c12e0..7c422a2d7e 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -500,7 +500,7 @@ int p2m_alloc_table(struct p2m_domain *p2m) return 0; } -static int __must_check +int __must_check p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn, unsigned int page_order) { @@ -1329,7 +1329,7 @@ p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m) return p2m; } -static void +void p2m_flush_table_locked(struct p2m_domain *p2m) { struct page_info *top, *pg; @@ -1729,481 +1729,6 @@ int unmap_mmio_regions(struct domain *d, return i == nr ? 
0 : i ?: ret; } -int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn, - p2m_type_t *t, p2m_access_t *a, - bool prepopulate) -{ - *mfn = ap2m->get_entry(ap2m, gfn, t, a, 0, NULL, NULL); - - /* Check host p2m if no valid entry in alternate */ - if ( !mfn_valid(*mfn) && !p2m_is_hostp2m(ap2m) ) - { - struct p2m_domain *hp2m = p2m_get_hostp2m(ap2m->domain); - unsigned int page_order; - int rc; - - *mfn = p2m_get_gfn_type_access(hp2m, gfn, t, a, P2M_ALLOC | P2M_UNSHARE, - &page_order, 0); - - rc = -ESRCH; - if ( !mfn_valid(*mfn) || *t != p2m_ram_rw ) - return rc; - - /* If this is a superpage, copy that first */ - if ( prepopulate && page_order != PAGE_ORDER_4K ) - { - unsigned long mask = ~((1UL << page_order) - 1); - gfn_t gfn_aligned = _gfn(gfn_x(gfn) & mask); - mfn_t mfn_aligned = _mfn(mfn_x(*mfn) & mask); - - rc = ap2m->set_entry(ap2m, gfn_aligned, mfn_aligned, page_order, *t, *a, 1); - if ( rc ) - return rc; - } - } - - return 0; -} - -void p2m_altp2m_check(struct vcpu *v, uint16_t idx) -{ - if ( altp2m_active(v->domain) ) - p2m_switch_vcpu_altp2m_by_id(v, idx); -} - -bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx) -{ - struct domain *d = v->domain; - bool rc = false; - - if ( idx >= MAX_ALTP2M ) - return rc; - - altp2m_list_lock(d); - - if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) ) - { - if ( p2m_set_altp2m(v, idx) ) - altp2m_vcpu_update_p2m(v); - rc = 1; - } - - altp2m_list_unlock(d); - return rc; -} - -/* - * Read info about the gfn in an altp2m, locking the gfn. - * - * If the entry is valid, pass the results back to the caller. - * - * If the entry was invalid, and the host's entry is also invalid, - * return to the caller without any changes. - * - * If the entry is invalid, and the host entry was valid, propagate - * the host's entry to the altp2m (retaining page order), and indicate - * that the caller should re-try the faulting instruction. 
- */ -bool p2m_altp2m_get_or_propagate(struct p2m_domain *ap2m, unsigned long gfn_l, - mfn_t *mfn, p2m_type_t *p2mt, - p2m_access_t *p2ma, unsigned int *page_order) -{ - p2m_type_t ap2mt; - p2m_access_t ap2ma; - unsigned int cur_order; - unsigned long mask; - gfn_t gfn; - mfn_t amfn; - int rc; - - /* - * NB we must get the full lock on the altp2m here, in addition to - * the lock on the individual gfn, since we may change a range of - * gfns below. - */ - p2m_lock(ap2m); - - amfn = get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, &cur_order); - - if ( cur_order > *page_order ) - cur_order = *page_order; - - if ( !mfn_eq(amfn, INVALID_MFN) ) - { - p2m_unlock(ap2m); - *mfn = amfn; - *p2mt = ap2mt; - *p2ma = ap2ma; - *page_order = cur_order; - return false; - } - - /* Host entry is also invalid; don't bother setting the altp2m entry. */ - if ( mfn_eq(*mfn, INVALID_MFN) ) - { - p2m_unlock(ap2m); - *page_order = cur_order; - return false; - } - - /* - * If this is a superpage mapping, round down both frame numbers - * to the start of the superpage. NB that we repupose `amfn` - * here. - */ - mask = ~((1UL << cur_order) - 1); - amfn = _mfn(mfn_x(*mfn) & mask); - gfn = _gfn(gfn_l & mask); - - /* Override the altp2m entry with its default access. 
*/ - *p2ma = ap2m->default_access; - - rc = p2m_set_entry(ap2m, gfn, amfn, cur_order, *p2mt, *p2ma); - p2m_unlock(ap2m); - - if ( rc ) - { - gprintk(XENLOG_ERR, - "failed to set entry for %"PRI_gfn" -> %"PRI_mfn" altp2m %u, rc %d\n", - gfn_l, mfn_x(amfn), vcpu_altp2m(current).p2midx, rc); - domain_crash(ap2m->domain); - } - - return true; -} - -enum altp2m_reset_type { - ALTP2M_RESET, - ALTP2M_DEACTIVATE -}; - -static void p2m_reset_altp2m(struct domain *d, unsigned int idx, - enum altp2m_reset_type reset_type) -{ - struct p2m_domain *p2m; - - ASSERT(idx < MAX_ALTP2M); - p2m = array_access_nospec(d->arch.altp2m_p2m, idx); - - p2m_lock(p2m); - - p2m_flush_table_locked(p2m); - - if ( reset_type == ALTP2M_DEACTIVATE ) - p2m_free_logdirty(p2m); - - /* Uninit and reinit ept to force TLB shootdown */ - ept_p2m_uninit(p2m); - ept_p2m_init(p2m); - - p2m->min_remapped_gfn = gfn_x(INVALID_GFN); - p2m->max_remapped_gfn = 0; - - p2m_unlock(p2m); -} - -void p2m_flush_altp2m(struct domain *d) -{ - unsigned int i; - - altp2m_list_lock(d); - - for ( i = 0; i < MAX_ALTP2M; i++ ) - { - p2m_reset_altp2m(d, i, ALTP2M_DEACTIVATE); - d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN); - d->arch.altp2m_visible_eptp[i] = mfn_x(INVALID_MFN); - } - - altp2m_list_unlock(d); -} - -static int p2m_activate_altp2m(struct domain *d, unsigned int idx, - p2m_access_t hvmmem_default_access) -{ - struct p2m_domain *hostp2m, *p2m; - int rc; - - ASSERT(idx < MAX_ALTP2M); - - p2m = array_access_nospec(d->arch.altp2m_p2m, idx); - hostp2m = p2m_get_hostp2m(d); - - p2m_lock(p2m); - - rc = p2m_init_logdirty(p2m); - - if ( rc ) - goto out; - - /* The following is really just a rangeset copy. 
*/ - rc = rangeset_merge(p2m->logdirty_ranges, hostp2m->logdirty_ranges); - - if ( rc ) - { - p2m_free_logdirty(p2m); - goto out; - } - - p2m->default_access = hvmmem_default_access; - p2m->domain = hostp2m->domain; - p2m->global_logdirty = hostp2m->global_logdirty; - p2m->min_remapped_gfn = gfn_x(INVALID_GFN); - p2m->max_mapped_pfn = p2m->max_remapped_gfn = 0; - - p2m_init_altp2m_ept(d, idx); - - out: - p2m_unlock(p2m); - - return rc; -} - -int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx) -{ - int rc = -EINVAL; - struct p2m_domain *hostp2m = p2m_get_hostp2m(d); - - if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ) - return rc; - - altp2m_list_lock(d); - - if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] == - mfn_x(INVALID_MFN) ) - rc = p2m_activate_altp2m(d, idx, hostp2m->default_access); - - altp2m_list_unlock(d); - return rc; -} - -int p2m_init_next_altp2m(struct domain *d, uint16_t *idx, - xenmem_access_t hvmmem_default_access) -{ - int rc = -EINVAL; - unsigned int i; - p2m_access_t a; - struct p2m_domain *hostp2m = p2m_get_hostp2m(d); - - if ( hvmmem_default_access > XENMEM_access_default || - !xenmem_access_to_p2m_access(hostp2m, hvmmem_default_access, &a) ) - return rc; - - altp2m_list_lock(d); - - for ( i = 0; i < MAX_ALTP2M; i++ ) - { - if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) ) - continue; - - rc = p2m_activate_altp2m(d, i, a); - - if ( !rc ) - *idx = i; - - break; - } - - altp2m_list_unlock(d); - return rc; -} - -int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx) -{ - struct p2m_domain *p2m; - int rc = -EBUSY; - - if ( !idx || idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ) - return rc; - - rc = domain_pause_except_self(d); - if ( rc ) - return rc; - - rc = -EBUSY; - altp2m_list_lock(d); - - if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] != - mfn_x(INVALID_MFN) ) - { - p2m = array_access_nospec(d->arch.altp2m_p2m, idx); - - if ( !_atomic_read(p2m->active_vcpus) ) - { - 
p2m_reset_altp2m(d, idx, ALTP2M_DEACTIVATE); - d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] = - mfn_x(INVALID_MFN); - d->arch.altp2m_visible_eptp[array_index_nospec(idx, MAX_EPTP)] = - mfn_x(INVALID_MFN); - rc = 0; - } - } - - altp2m_list_unlock(d); - - domain_unpause_except_self(d); - - return rc; -} - -int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx) -{ - struct vcpu *v; - int rc = -EINVAL; - - if ( idx >= MAX_ALTP2M ) - return rc; - - rc = domain_pause_except_self(d); - if ( rc ) - return rc; - - rc = -EINVAL; - altp2m_list_lock(d); - - if ( d->arch.altp2m_visible_eptp[idx] != mfn_x(INVALID_MFN) ) - { - for_each_vcpu( d, v ) - if ( p2m_set_altp2m(v, idx) ) - altp2m_vcpu_update_p2m(v); - - rc = 0; - } - - altp2m_list_unlock(d); - - domain_unpause_except_self(d); - - return rc; -} - -int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx, - gfn_t old_gfn, gfn_t new_gfn) -{ - struct p2m_domain *hp2m, *ap2m; - p2m_access_t a; - p2m_type_t t; - mfn_t mfn; - int rc = -EINVAL; - - if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) || - d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] == - mfn_x(INVALID_MFN) ) - return rc; - - hp2m = p2m_get_hostp2m(d); - ap2m = array_access_nospec(d->arch.altp2m_p2m, idx); - - p2m_lock(hp2m); - p2m_lock(ap2m); - - if ( gfn_eq(new_gfn, INVALID_GFN) ) - { - mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL); - rc = mfn_valid(mfn) - ? 
p2m_remove_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K) - : 0; - goto out; - } - - rc = altp2m_get_effective_entry(ap2m, old_gfn, &mfn, &t, &a, - AP2MGET_prepopulate); - if ( rc ) - goto out; - - rc = altp2m_get_effective_entry(ap2m, new_gfn, &mfn, &t, &a, - AP2MGET_query); - if ( rc ) - goto out; - - if ( !ap2m->set_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K, t, a, - (current->domain != d)) ) - { - rc = 0; - - if ( gfn_x(new_gfn) < ap2m->min_remapped_gfn ) - ap2m->min_remapped_gfn = gfn_x(new_gfn); - if ( gfn_x(new_gfn) > ap2m->max_remapped_gfn ) - ap2m->max_remapped_gfn = gfn_x(new_gfn); - } - - out: - p2m_unlock(ap2m); - p2m_unlock(hp2m); - return rc; -} - -int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn, - mfn_t mfn, unsigned int page_order, - p2m_type_t p2mt, p2m_access_t p2ma) -{ - struct p2m_domain *p2m; - unsigned int i; - unsigned int reset_count = 0; - unsigned int last_reset_idx = ~0; - int ret = 0; - - if ( !altp2m_active(d) ) - return 0; - - altp2m_list_lock(d); - - for ( i = 0; i < MAX_ALTP2M; i++ ) - { - p2m_type_t t; - p2m_access_t a; - - if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) ) - continue; - - p2m = d->arch.altp2m_p2m[i]; - - /* Check for a dropped page that may impact this altp2m */ - if ( mfn_eq(mfn, INVALID_MFN) && - gfn_x(gfn) + (1UL << page_order) > p2m->min_remapped_gfn && - gfn_x(gfn) <= p2m->max_remapped_gfn ) - { - if ( !reset_count++ ) - { - p2m_reset_altp2m(d, i, ALTP2M_RESET); - last_reset_idx = i; - } - else - { - /* At least 2 altp2m's impacted, so reset everything */ - for ( i = 0; i < MAX_ALTP2M; i++ ) - { - if ( i == last_reset_idx || - d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) ) - continue; - - p2m_reset_altp2m(d, i, ALTP2M_RESET); - } - - ret = 0; - break; - } - } - else if ( !mfn_eq(get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, - NULL), INVALID_MFN) ) - { - int rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma); - - /* Best effort: Don't bail on error. 
*/ - if ( !ret ) - ret = rc; - - p2m_put_gfn(p2m, gfn); - } - else - p2m_put_gfn(p2m, gfn); - } - - altp2m_list_unlock(d); - - return ret; -} - /*** Audit ***/ #if P2M_AUDIT @@ -2540,159 +2065,6 @@ int xenmem_add_to_physmap_one( return rc; } -/* - * Set/clear the #VE suppress bit for a page. Only available on VMX. - */ -int p2m_set_suppress_ve(struct domain *d, gfn_t gfn, bool suppress_ve, - unsigned int altp2m_idx) -{ - int rc; - struct xen_hvm_altp2m_suppress_ve_multi sve = { - altp2m_idx, suppress_ve, 0, 0, gfn_x(gfn), gfn_x(gfn), 0 - }; - - if ( !(rc = p2m_set_suppress_ve_multi(d, &sve)) ) - rc = sve.first_error; - - return rc; -} - -/* - * Set/clear the #VE suppress bit for multiple pages. Only available on VMX. - */ -int p2m_set_suppress_ve_multi(struct domain *d, - struct xen_hvm_altp2m_suppress_ve_multi *sve) -{ - struct p2m_domain *host_p2m = p2m_get_hostp2m(d); - struct p2m_domain *ap2m = NULL; - struct p2m_domain *p2m = host_p2m; - uint64_t start = sve->first_gfn; - int rc = 0; - - if ( sve->view > 0 ) - { - if ( sve->view >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) || - d->arch.altp2m_eptp[array_index_nospec(sve->view, MAX_EPTP)] == - mfn_x(INVALID_MFN) ) - return -EINVAL; - - p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, sve->view); - } - - p2m_lock(host_p2m); - - if ( ap2m ) - p2m_lock(ap2m); - - while ( sve->last_gfn >= start ) - { - p2m_access_t a; - p2m_type_t t; - mfn_t mfn; - int err = 0; - - if ( (err = altp2m_get_effective_entry(p2m, _gfn(start), &mfn, &t, &a, - AP2MGET_query)) && - !sve->first_error ) - { - sve->first_error_gfn = start; /* Save the gfn of the first error */ - sve->first_error = err; /* Save the first error code */ - } - - if ( !err && (err = p2m->set_entry(p2m, _gfn(start), mfn, - PAGE_ORDER_4K, t, a, - sve->suppress_ve)) && - !sve->first_error ) - { - sve->first_error_gfn = start; /* Save the gfn of the first error */ - sve->first_error = err; /* Save the first error code */ - } - - /* Check for continuation if 
it's not the last iteration. */ - if ( sve->last_gfn >= ++start && hypercall_preempt_check() ) - { - rc = -ERESTART; - break; - } - } - - sve->first_gfn = start; - - if ( ap2m ) - p2m_unlock(ap2m); - - p2m_unlock(host_p2m); - - return rc; -} - -int p2m_get_suppress_ve(struct domain *d, gfn_t gfn, bool *suppress_ve, - unsigned int altp2m_idx) -{ - struct p2m_domain *host_p2m = p2m_get_hostp2m(d); - struct p2m_domain *ap2m = NULL; - struct p2m_domain *p2m; - mfn_t mfn; - p2m_access_t a; - p2m_type_t t; - int rc = 0; - - if ( altp2m_idx > 0 ) - { - if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) || - d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] == - mfn_x(INVALID_MFN) ) - return -EINVAL; - - p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx); - } - else - p2m = host_p2m; - - gfn_lock(host_p2m, gfn, 0); - - if ( ap2m ) - p2m_lock(ap2m); - - mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, NULL, suppress_ve); - if ( !mfn_valid(mfn) ) - rc = -ESRCH; - - if ( ap2m ) - p2m_unlock(ap2m); - - gfn_unlock(host_p2m, gfn, 0); - - return rc; -} - -int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int altp2m_idx, - uint8_t visible) -{ - int rc = 0; - - altp2m_list_lock(d); - - /* - * Eptp index is correlated with altp2m index and should not exceed - * min(MAX_ALTP2M, MAX_EPTP). 
- */ - if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) || - d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] == - mfn_x(INVALID_MFN) ) - rc = -EINVAL; - else if ( visible ) - d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] = - d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)]; - else - d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] = - mfn_x(INVALID_MFN); - - altp2m_list_unlock(d); - - return rc; -} - /* * Local variables: * mode: C diff --git a/xen/arch/x86/mm/p2m.h b/xen/arch/x86/mm/p2m.h index 04308cfb6d..635f5a7f45 100644 --- a/xen/arch/x86/mm/p2m.h +++ b/xen/arch/x86/mm/p2m.h @@ -22,6 +22,9 @@ static inline void p2m_free_logdirty(struct p2m_domain *p2m) {} int p2m_init_altp2m(struct domain *d); void p2m_teardown_altp2m(struct domain *d); +void p2m_flush_table_locked(struct p2m_domain *p2m); +int __must_check p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn, + unsigned int page_order); void p2m_nestedp2m_init(struct p2m_domain *p2m); int p2m_init_nestedp2m(struct domain *d); void p2m_teardown_nestedp2m(struct domain *d);

From patchwork Wed May 15 09:08:09 2024 X-Patchwork-Submitter: Sergiy Kibrik X-Patchwork-Id: 13664910 From: Sergiy Kibrik To: xen-devel@lists.xenproject.org Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné, Stefano Stabellini, Xenia Ragiadakou, Tamas K Lengyel Subject: [XEN PATCH v2 05/15] x86: introduce CONFIG_ALTP2M Kconfig option Date: Wed, 15 May 2024 12:08:09 +0300 Message-Id: <14a8c523b24c87959941e905bd60933a91144bc7.1715761386.git.Sergiy_Kibrik@epam.com>

Add a new option to make altp2m code inclusion optional. Currently altp2m support is provided for VT-x only, so the option depends on VMX. No functional change intended.

Signed-off-by: Sergiy Kibrik CC: Tamas K Lengyel --- xen/arch/x86/Kconfig | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index 8c9f8431f0..2872b031a7 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -358,6 +358,11 @@ config REQUIRE_NX was unavailable. However, if enabled, Xen will no longer boot on any CPU which is lacking NX support.
+config ALTP2M + bool "Alternate P2M support" + def_bool y + depends on VMX && EXPERT + endmenu source "common/Kconfig"

From patchwork Wed May 15 09:10:16 2024 X-Patchwork-Submitter: Sergiy Kibrik X-Patchwork-Id: 13664911 From: Sergiy Kibrik To: xen-devel@lists.xenproject.org Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné, George Dunlap, Stefano Stabellini, Xenia Ragiadakou, Tamas K Lengyel Subject: [XEN PATCH v2 06/15] x86/p2m: guard altp2m code with CONFIG_ALTP2M option Date: Wed, 15 May 2024 12:10:16 +0300 Message-Id: <7a6980b1c67dedb306985f73afb23db359771e8f.1715761386.git.Sergiy_Kibrik@epam.com>

Switch from the generic CONFIG_HVM option to the more specific CONFIG_ALTP2M option for altp2m support. Also guard altp2m routines so that they can be disabled completely in the build when the target platform does not actually support altp2m (AMD-V and Arm, as of now).

Signed-off-by: Sergiy Kibrik CC: Tamas K Lengyel --- changes in v2: - use separate CONFIG_ALTP2M option instead of CONFIG_VMX --- xen/arch/x86/include/asm/altp2m.h | 5 ++++- xen/arch/x86/include/asm/hvm/hvm.h | 2 +- xen/arch/x86/include/asm/p2m.h | 17 ++++++++++++++++- xen/arch/x86/mm/Makefile | 2 +- 4 files changed, 22 insertions(+), 4 deletions(-) diff --git a/xen/arch/x86/include/asm/altp2m.h b/xen/arch/x86/include/asm/altp2m.h index e5e59cbd68..092b13e231 100644 --- a/xen/arch/x86/include/asm/altp2m.h +++ b/xen/arch/x86/include/asm/altp2m.h @@ -7,7 +7,7 @@ #ifndef __ASM_X86_ALTP2M_H #define __ASM_X86_ALTP2M_H -#ifdef CONFIG_HVM +#ifdef CONFIG_ALTP2M #include #include /* for struct vcpu, struct domain */ @@ -38,7 +38,10 @@ static inline bool altp2m_active(const struct domain *d) } /* Only declaration is needed. DCE will optimise it out when linking.
*/ +void altp2m_vcpu_initialise(struct vcpu *v); +void altp2m_vcpu_destroy(struct vcpu *v); uint16_t altp2m_vcpu_idx(const struct vcpu *v); +int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn); void altp2m_vcpu_disable_ve(struct vcpu *v); #endif diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h index 0c9e6f1564..4f03dd7af8 100644 --- a/xen/arch/x86/include/asm/hvm/hvm.h +++ b/xen/arch/x86/include/asm/hvm/hvm.h @@ -670,7 +670,7 @@ static inline bool hvm_hap_supported(void) /* returns true if hardware supports alternate p2m's */ static inline bool hvm_altp2m_supported(void) { - return hvm_funcs.caps.altp2m; + return IS_ENABLED(CONFIG_ALTP2M) && hvm_funcs.caps.altp2m; } /* Returns true if we have the minimum hardware requirements for nested virt */ diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h index 111badf89a..855e69d24a 100644 --- a/xen/arch/x86/include/asm/p2m.h +++ b/xen/arch/x86/include/asm/p2m.h @@ -581,9 +581,9 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn) return _gfn(mfn_x(mfn)); } -#ifdef CONFIG_HVM #define AP2MGET_prepopulate true #define AP2MGET_query false +#ifdef CONFIG_ALTP2M /* * Looks up altp2m entry. 
If the entry is not found it looks up the entry in @@ -593,6 +593,15 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn) int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn, p2m_type_t *t, p2m_access_t *a, bool prepopulate); +#else +static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m, + gfn_t gfn, mfn_t *mfn, + p2m_type_t *t, p2m_access_t *a, + bool prepopulate) +{ + ASSERT_UNREACHABLE(); + return -EOPNOTSUPP; +} #endif /* Init the datastructures for later use by the p2m code */ @@ -909,8 +918,14 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx) /* Switch alternate p2m for a single vcpu */ bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx); +#ifdef CONFIG_ALTP2M /* Check to see if vcpu should be switched to a different p2m. */ void p2m_altp2m_check(struct vcpu *v, uint16_t idx); +#else +static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) +{ +} +#endif /* Flush all the alternate p2m's for a domain */ void p2m_flush_altp2m(struct domain *d); diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile index 0128ca7ab6..d7d57b8190 100644 --- a/xen/arch/x86/mm/Makefile +++ b/xen/arch/x86/mm/Makefile @@ -1,7 +1,7 @@ obj-y += shadow/ obj-$(CONFIG_HVM) += hap/ -obj-$(CONFIG_HVM) += altp2m.o +obj-$(CONFIG_ALTP2M) += altp2m.o obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o obj-$(CONFIG_SHADOW_PAGING) += guest_walk_4.o obj-$(CONFIG_MEM_ACCESS) += mem_access.o

From patchwork Wed May 15 09:12:19 2024 X-Patchwork-Submitter: Sergiy Kibrik X-Patchwork-Id: 13664912 From: Sergiy Kibrik To: xen-devel@lists.xenproject.org Cc: Sergiy Kibrik, Andrew Cooper, Roger Pau Monné, Jan Beulich, Stefano Stabellini, Xenia Ragiadakou Subject: [XEN PATCH v2 07/15] x86: guard cpu_has_{svm/vmx} macros with CONFIG_{SVM/VMX} Date: Wed, 15 May 2024 12:12:19 +0300 Message-Id: <09f1336974c8fd2f788fe8e1d3ca5fee91da5a81.1715761386.git.Sergiy_Kibrik@epam.com>

Now that we have SVM/VMX config options for enabling/disabling these features completely in the build, it is feasible to add build-time checks to the cpu_has_{svm,vmx} macros. These are used extensively throughout HVM code, so we won't have to add extra #ifdef-s to check whether SVM/VMX has been enabled, while DCE cleans up calls to VMX/SVM functions when their code is not being built.
Signed-off-by: Sergiy Kibrik
CC: Jan Beulich
Reviewed-by: Stefano Stabellini
---
 xen/arch/x86/include/asm/cpufeature.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 9bc553681f..17f5aed000 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -81,7 +81,8 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_sse3            boot_cpu_has(X86_FEATURE_SSE3)
 #define cpu_has_pclmulqdq       boot_cpu_has(X86_FEATURE_PCLMULQDQ)
 #define cpu_has_monitor         boot_cpu_has(X86_FEATURE_MONITOR)
-#define cpu_has_vmx             boot_cpu_has(X86_FEATURE_VMX)
+#define cpu_has_vmx             ( IS_ENABLED(CONFIG_VMX) && \
+                                  boot_cpu_has(X86_FEATURE_VMX))
 #define cpu_has_eist            boot_cpu_has(X86_FEATURE_EIST)
 #define cpu_has_ssse3           boot_cpu_has(X86_FEATURE_SSSE3)
 #define cpu_has_fma             boot_cpu_has(X86_FEATURE_FMA)
@@ -109,7 +110,8 @@ static inline bool boot_cpu_has(unsigned int feat)
 /* CPUID level 0x80000001.ecx */
 #define cpu_has_cmp_legacy      boot_cpu_has(X86_FEATURE_CMP_LEGACY)
-#define cpu_has_svm             boot_cpu_has(X86_FEATURE_SVM)
+#define cpu_has_svm             ( IS_ENABLED(CONFIG_SVM) && \
+                                  boot_cpu_has(X86_FEATURE_SVM))
 #define cpu_has_sse4a           boot_cpu_has(X86_FEATURE_SSE4A)
 #define cpu_has_xop             boot_cpu_has(X86_FEATURE_XOP)
 #define cpu_has_skinit          boot_cpu_has(X86_FEATURE_SKINIT)
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH v2 08/15] x86/vpmu: guard vmx/svm calls with cpu_has_{vmx,svm}
Date: Wed, 15 May 2024 12:14:22 +0300

If VMX/SVM is disabled in the build, we may still want to have vPMU drivers for PV guests. Some calls to vmx/svm-related routines then need to be guarded.
Signed-off-by: Sergiy Kibrik
---
 xen/arch/x86/cpu/vpmu_amd.c   |  8 ++++----
 xen/arch/x86/cpu/vpmu_intel.c | 20 ++++++++++----------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index db2fa420e1..40b0c8932f 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -290,7 +290,7 @@ static int cf_check amd_vpmu_save(struct vcpu *v, bool to_guest)
     context_save(v);

     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
-         is_msr_bitmap_on(vpmu) )
+         is_msr_bitmap_on(vpmu) && cpu_has_svm )
         amd_vpmu_unset_msr_bitmap(v);

     if ( to_guest )
@@ -363,7 +363,7 @@ static int cf_check amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             return 0;
         vpmu_set(vpmu, VPMU_RUNNING);

-        if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
+        if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) && cpu_has_svm )
             amd_vpmu_set_msr_bitmap(v);
     }
@@ -372,7 +372,7 @@ static int cf_check amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
          (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
+        if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) && cpu_has_svm )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownership(PMU_OWNER_HVM);
     }
@@ -415,7 +415,7 @@ static void cf_check amd_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);

-    if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
+    if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) && cpu_has_svm )
         amd_vpmu_unset_msr_bitmap(v);

     xfree(vpmu->context);
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index cd414165df..10c34a5691 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -269,7 +269,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     if ( !is_hvm_vcpu(v) )
         rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
     /* Save MSR to private context to make it fork-friendly */
-    else if ( mem_sharing_enabled(v->domain) )
+    else if ( mem_sharing_enabled(v->domain) && cpu_has_vmx )
         vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
                            &core2_vpmu_cxt->global_ctrl);
 }
@@ -288,7 +288,7 @@ static int cf_check core2_vpmu_save(struct vcpu *v, bool to_guest)
     /* Unset PMU MSR bitmap to trap lazy load. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
-         cpu_has_vmx_msr_bitmap )
+         cpu_has_vmx && cpu_has_vmx_msr_bitmap )
         core2_vpmu_unset_msr_bitmap(v);

     if ( to_guest )
@@ -333,7 +333,7 @@ static inline void __core2_vpmu_load(struct vcpu *v)
             wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
     }
     /* Restore MSR from context when used with a fork */
-    else if ( mem_sharing_is_fork(v->domain) )
+    else if ( mem_sharing_is_fork(v->domain) && cpu_has_vmx )
         vmx_write_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
                             core2_vpmu_cxt->global_ctrl);
 }
@@ -442,7 +442,7 @@ static int cf_check core2_vpmu_alloc_resource(struct vcpu *v)
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;

-    if ( is_hvm_vcpu(v) )
+    if ( is_hvm_vcpu(v) && cpu_has_vmx )
     {
         if ( vmx_add_host_load_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, 0) )
             goto out_err;
@@ -513,7 +513,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);

-        if ( is_hvm_vcpu(current) && cpu_has_vmx_msr_bitmap )
+        if ( is_hvm_vcpu(current) && cpu_has_vmx && cpu_has_vmx_msr_bitmap )
             core2_vpmu_set_msr_bitmap(current);
     }
     return 1;
@@ -584,7 +584,7 @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( msr_content & fixed_ctrl_mask )
             return -EINVAL;

-        if ( is_hvm_vcpu(v) )
+        if ( is_hvm_vcpu(v) && cpu_has_vmx )
             vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
                                &core2_vpmu_cxt->global_ctrl);
         else
@@ -653,7 +653,7 @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( blocked )
             return -EINVAL;

-        if ( is_hvm_vcpu(v) )
+        if ( is_hvm_vcpu(v) && cpu_has_vmx )
             vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
                                &core2_vpmu_cxt->global_ctrl);
         else
@@ -672,7 +672,7 @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             wrmsrl(msr, msr_content);
         else
         {
-            if ( is_hvm_vcpu(v) )
+            if ( is_hvm_vcpu(v) && cpu_has_vmx )
                 vmx_write_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
             else
                 wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -706,7 +706,7 @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
         *msr_content = core2_vpmu_cxt->global_status;
         break;
     case MSR_CORE_PERF_GLOBAL_CTRL:
-        if ( is_hvm_vcpu(v) )
+        if ( is_hvm_vcpu(v) && cpu_has_vmx )
             vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
         else
             rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
@@ -808,7 +808,7 @@ static void cf_check core2_vpmu_destroy(struct vcpu *v)
     vpmu->context = NULL;
     xfree(vpmu->priv_context);
     vpmu->priv_context = NULL;
-    if ( is_hvm_vcpu(v) && cpu_has_vmx_msr_bitmap )
+    if ( is_hvm_vcpu(v) && cpu_has_vmx && cpu_has_vmx_msr_bitmap )
         core2_vpmu_unset_msr_bitmap(v);
     release_pmu_ownership(PMU_OWNER_HVM);
     vpmu_clear(vpmu);
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH v2 09/15] x86/traps: clean up superfluous #ifdef-s
Date: Wed, 15 May 2024 12:16:26 +0300
Message-Id: <7f0b98062ce67ad8176670efbe3c3ebdb43d2b1c.1715761386.git.Sergiy_Kibrik@epam.com>

Remove preprocessor checks for the CONFIG_HVM option: the expressions covered by these checks are already guarded by cpu_has_vmx, which itself depends on CONFIG_HVM (via CONFIG_VMX).

No functional change intended.

Signed-off-by: Sergiy Kibrik
Reviewed-by: Stefano Stabellini
---
 xen/arch/x86/traps.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index d554c9d41e..7b8ee45edf 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -676,7 +676,6 @@ void vcpu_show_execution_state(struct vcpu *v)
     vcpu_pause(v); /* acceptably dangerous */

-#ifdef CONFIG_HVM
     /*
      * For VMX special care is needed: Reading some of the register state will
      * require VMCS accesses. Engaging foreign VMCSes involves acquiring of a
@@ -689,7 +688,6 @@ void vcpu_show_execution_state(struct vcpu *v)
         ASSERT(!in_irq());
         vmx_vmcs_enter(v);
     }
-#endif

     /* Prevent interleaving of output. */
     flags = console_lock_recursive_irqsave();
@@ -714,10 +712,8 @@ void vcpu_show_execution_state(struct vcpu *v)
     console_unlock_recursive_irqrestore(flags);
 }

-#ifdef CONFIG_HVM
     if ( cpu_has_vmx && is_hvm_vcpu(v) )
         vmx_vmcs_exit(v);
-#endif

     vcpu_unpause(v);
 }
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH v2 10/15] x86/domain: clean up superfluous #ifdef-s
Date: Wed, 15 May 2024 12:18:29 +0300
Message-Id: <67d6604e8f66468c02f0c2e60315fc9251b69beb.1715761386.git.Sergiy_Kibrik@epam.com>

Remove preprocessor checks for the CONFIG_HVM option: the expressions covered by these checks are already guarded by cpu_has_svm, which itself depends on CONFIG_HVM (via CONFIG_SVM).

No functional change intended.

Signed-off-by: Sergiy Kibrik
Reviewed-by: Stefano Stabellini
---
 xen/arch/x86/domain.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 20e83cf38b..5c7fb7fc73 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1708,11 +1708,9 @@ static void load_segments(struct vcpu *n)
         if ( !(n->arch.flags & TF_kernel_mode) )
             SWAP(gsb, gss);

-#ifdef CONFIG_HVM
         if ( cpu_has_svm && (uregs->fs | uregs->gs) <= 3 )
             fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
                                        n->arch.pv.fs_base, gsb, gss);
-#endif
     }

     if ( !fs_gs_done )
@@ -2025,7 +2023,7 @@ static void __context_switch(void)
     write_ptbase(n);

-#if defined(CONFIG_PV) && defined(CONFIG_HVM)
+#if defined(CONFIG_PV)
     /* Prefetch the VMCB if we expect to use it later in the context switch */
     if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
         svm_load_segs_prefetch();
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH v2 11/15] x86/oprofile: guard svm specific symbols with CONFIG_SVM
Date: Wed, 15 May 2024 12:20:36 +0300
Message-Id: <8174a35669a8dffa10141c7fea64b7c1f6dfbe4e.1715761386.git.Sergiy_Kibrik@epam.com>

From: Xenia Ragiadakou

The symbol svm_stgi_label is AMD-V specific, so guard its usage in common code with CONFIG_SVM. Since CONFIG_SVM depends on CONFIG_HVM, it can be used on its own. Also, use #ifdef instead of #if.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
Signed-off-by: Sergiy Kibrik
Acked-by: Jan Beulich
---
 xen/arch/x86/oprofile/op_model_athlon.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/oprofile/op_model_athlon.c b/xen/arch/x86/oprofile/op_model_athlon.c
index 69fd3fcc86..a9c7b87d67 100644
--- a/xen/arch/x86/oprofile/op_model_athlon.c
+++ b/xen/arch/x86/oprofile/op_model_athlon.c
@@ -320,7 +320,7 @@ static int cf_check athlon_check_ctrs(
 	struct vcpu *v = current;
 	unsigned int const nr_ctrs = model->num_counters;

-#if CONFIG_HVM
+#ifdef CONFIG_SVM
 	struct cpu_user_regs *guest_regs = guest_cpu_user_regs();

 	if (!guest_mode(regs) &&
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH v2 12/15] x86/vmx: guard access to cpu_has_vmx_* in common code
Date: Wed, 15 May 2024 12:22:39 +0300

There are several places in common code, outside of arch/x86/hvm/vmx, where cpu_has_vmx_* macros are accessed without first checking whether VMX is present. We may want to guard these macros, as they read global variables defined inside VMX-specific files -- so VMX can be made optional later on.

Signed-off-by: Sergiy Kibrik
CC: Andrew Cooper
CC: Jan Beulich
---
Here I've tried a different approach from the previous patches [1,2] -- instead of modifying the whole set of cpu_has_{svm/vmx}_* macros, we can:

1) not touch the SVM part at all because, as Andrew pointed out, those macros are used only inside arch/x86/hvm/svm;
2) track the several places in common code where cpu_has_vmx_* features are checked, and guard them with the cpu_has_vmx condition;
3) for the two cpu_has_vmx_* macros that are used in common code in a trickier way, integrate the cpu_has_vmx condition into the macros themselves instead of making complex conditionals even more complicated.

This patch aims to replace [1,2] from the v1 series by doing the steps above.

1. https://lore.kernel.org/xen-devel/20240416064402.3469959-1-Sergiy_Kibrik@epam.com/
2. https://lore.kernel.org/xen-devel/20240416064606.3470052-1-Sergiy_Kibrik@epam.com/
---
changes in v2:
 - do not touch SVM code and macros
 - drop vmx_ctrl_has_feature()
 - guard cpu_has_vmx_* macros in common code instead
changes in v1:
 - introduced helper routine vmx_ctrl_has_feature() and used it for all cpu_has_vmx_* macros
---
 xen/arch/x86/hvm/hvm.c                  | 2 +-
 xen/arch/x86/hvm/viridian/viridian.c    | 4 ++--
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 4 ++--
 xen/arch/x86/traps.c                    | 5 +++--
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9594e0a5c5..ab75de9779 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5180,7 +5180,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
     {
     case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON:
     case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF:
-        if ( !cpu_has_monitor_trap_flag )
+        if ( !cpu_has_vmx || !cpu_has_monitor_trap_flag )
             return -EOPNOTSUPP;
         break;
     default:
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 0496c52ed5..657c6a3ea7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -196,7 +196,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
             res->a |= CPUID4A_HCALL_REMOTE_TLB_FLUSH;
-        if ( !cpu_has_vmx_apic_reg_virt )
+        if ( !cpu_has_vmx || !cpu_has_vmx_apic_reg_virt )
             res->a |= CPUID4A_MSR_BASED_APIC;
         if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
             res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
@@ -236,7 +236,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
     case 6:
         /* Detected and in use hardware features. */
-        if ( cpu_has_vmx_virtualize_apic_accesses )
+        if ( cpu_has_vmx && cpu_has_vmx_virtualize_apic_accesses )
             res->a |= CPUID6A_APIC_OVERLAY;
         if ( cpu_has_vmx_msr_bitmap || (read_efer() & EFER_SVME) )
             res->a |= CPUID6A_MSR_BITMAPS;
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index 58140af691..aa05f9cf6e 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -306,7 +306,7 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_vnmi \
     (vmx_pin_based_exec_control & PIN_BASED_VIRTUAL_NMIS)
 #define cpu_has_vmx_msr_bitmap \
-    (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)
+    (cpu_has_vmx && vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)
 #define cpu_has_vmx_secondary_exec_control \
     (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
 #define cpu_has_vmx_tertiary_exec_control \
@@ -347,7 +347,7 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_vmfunc \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VM_FUNCTIONS)
 #define cpu_has_vmx_virt_exceptions \
-    (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)
+    (cpu_has_vmx && vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)
 #define cpu_has_vmx_pml \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_PML)
 #define cpu_has_vmx_mpx \
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 7b8ee45edf..3595bb379a 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1130,7 +1130,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
         if ( !is_hvm_domain(d) || subleaf != 0 )
             break;

-        if ( cpu_has_vmx_apic_reg_virt )
+        if ( cpu_has_vmx && cpu_has_vmx_apic_reg_virt )
             res->a |= XEN_HVM_CPUID_APIC_ACCESS_VIRT;

         /*
@@ -1139,7 +1139,8 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
          * and wrmsr in the guest will run without VMEXITs (see
          * vmx_vlapic_msr_changed()).
          */
-        if ( cpu_has_vmx_virtualize_x2apic_mode &&
+        if ( cpu_has_vmx &&
+             cpu_has_vmx_virtualize_x2apic_mode &&
              cpu_has_vmx_apic_reg_virt &&
              cpu_has_vmx_virtual_intr_delivery )
             res->a |= XEN_HVM_CPUID_X2APIC_VIRT;
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné,
    George Dunlap, Julien Grall, Stefano Stabellini, Sergiy Kibrik
Subject: [XEN PATCH v2 13/15] x86/ioreq: guard VIO_realmode_completion with CONFIG_VMX
Date: Wed, 15 May 2024 12:24:42 +0300
Message-Id: <9e64fa33b298f789d8340cf1046a9fbf683dd2b7.1715761386.git.Sergiy_Kibrik@epam.com>

From: Xenia Ragiadakou

VIO_realmode_completion is specific to vmx realmode, so guard the
completion handling code with CONFIG_VMX.
Also, guard VIO_realmode_completion itself by CONFIG_VMX, instead of
generic CONFIG_X86.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
Signed-off-by: Sergiy Kibrik
Reviewed-by: Stefano Stabellini
---
changes in v1:
 - put VIO_realmode_completion enum under #ifdef CONFIG_VMX
---
 xen/arch/x86/hvm/emulate.c | 2 ++
 xen/arch/x86/hvm/ioreq.c   | 2 ++
 xen/include/xen/sched.h    | 2 +-
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index ab1bc51683..d60b1f6f4d 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2667,7 +2667,9 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
         break;

     case VIO_mmio_completion:
+#ifdef CONFIG_VMX
     case VIO_realmode_completion:
+#endif
         BUILD_BUG_ON(sizeof(hvio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf));
         hvio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes;
         memcpy(hvio->mmio_insn, hvmemul_ctxt->insn_buf, hvio->mmio_insn_bytes);
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 4eb7a70182..b37bbd660b 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -33,6 +33,7 @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion)
 {
     switch ( completion )
     {
+#ifdef CONFIG_VMX
     case VIO_realmode_completion:
     {
         struct hvm_emulate_ctxt ctxt;
@@ -43,6 +44,7 @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion)
         break;
     }
+#endif

     default:
         ASSERT_UNREACHABLE();
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 132b841995..50a58fe428 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -152,7 +152,7 @@ enum vio_completion {
     VIO_no_completion,
     VIO_mmio_completion,
     VIO_pio_completion,
-#ifdef CONFIG_X86
+#ifdef CONFIG_VMX
     VIO_realmode_completion,
 #endif
 };

From patchwork Wed May 15 09:26:45 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13664932
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik, Jan Beulich, Andrew Cooper, Roger Pau Monné,
    Stefano Stabellini, Xenia Ragiadakou
Subject: [XEN PATCH v2 14/15] iommu/vt-d: guard vmx_pi_hooks_* calls with cpu_has_vmx
Date: Wed, 15 May 2024 12:26:45 +0300
Message-Id: <73072e5b2ec40ad28d4bcfb9bb0870f3838bb726.1715761386.git.Sergiy_Kibrik@epam.com>

VMX posted-interrupt support can now be excluded from the x86 build along
with the rest of the VMX code, yet we may still want to use the VT-d IOMMU
driver in non-HVM setups. So guard the vmx_pi_hooks_{assign/deassign}
calls with cpu_has_vmx checks for such a case.

No functional change intended here.
Signed-off-by: Sergiy Kibrik
---
 xen/drivers/passthrough/vtd/iommu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index e13be244c1..ad78282250 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2772,7 +2772,7 @@ static int cf_check reassign_device_ownership(
     if ( !QUARANTINE_SKIP(target, pdev->arch.vtd.pgd_maddr) )
     {
-        if ( !has_arch_pdevs(target) )
+        if ( cpu_has_vmx && !has_arch_pdevs(target) )
             vmx_pi_hooks_assign(target);
 #ifdef CONFIG_PV
@@ -2806,7 +2806,7 @@ static int cf_check reassign_device_ownership(
     }
     if ( ret )
     {
-        if ( !has_arch_pdevs(target) )
+        if ( cpu_has_vmx && !has_arch_pdevs(target) )
             vmx_pi_hooks_deassign(target);
         return ret;
     }
@@ -2824,7 +2824,7 @@ static int cf_check reassign_device_ownership(
         write_unlock(&target->pci_lock);
     }
-    if ( !has_arch_pdevs(source) )
+    if ( cpu_has_vmx && !has_arch_pdevs(source) )
         vmx_pi_hooks_deassign(source);
     /*

From patchwork Wed May 15 09:28:48 2024
X-Patchwork-Submitter: Sergiy Kibrik
X-Patchwork-Id: 13664933
From: Sergiy Kibrik
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné,
    Stefano Stabellini, Sergiy Kibrik
Subject: [XEN PATCH v2 15/15] x86/hvm: make AMD-V and Intel VT-x support configurable
Date: Wed, 15 May 2024 12:28:48 +0300
Message-Id: <3ad7c0279da67e564713140fb5b247349cf4dccc.1715761386.git.Sergiy_Kibrik@epam.com>

From: Xenia Ragiadakou

Provide the user with configuration control over the CPU virtualization
support in Xen by making the SVM and VMX options user selectable.

To preserve the current default behavior, both options depend on HVM and
default to the value of HVM.

To prevent users from unknowingly disabling virtualization support, make
the controls user selectable only if EXPERT is enabled.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
Signed-off-by: Sergiy Kibrik
Reviewed-by: Stefano Stabellini
Acked-by: Jan Beulich
---
changes in v2:
 - remove dependency of build options IOMMU/AMD_IOMMU on VMX/SVM options
---
 xen/arch/x86/Kconfig | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 2872b031a7..62621c7271 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -123,10 +123,24 @@ config HVM
 	  If unsure, say Y.

 config SVM
-	def_bool HVM
+	bool "AMD-V" if EXPERT
+	depends on HVM
+	default HVM
+	help
+	  Enables virtual machine extensions on platforms that implement the
+	  AMD Virtualization Technology (AMD-V).
+	  If your system includes a processor with AMD-V support, say Y.
+	  If in doubt, say Y.
 config VMX
-	def_bool HVM
+	bool "Intel VT-x" if EXPERT
+	depends on HVM
+	default HVM
+	help
+	  Enables virtual machine extensions on platforms that implement the
+	  Intel Virtualization Technology (Intel VT-x).
+	  If your system includes a processor with Intel VT-x support, say Y.
+	  If in doubt, say Y.

 config XEN_SHSTK
 	bool "Supervisor Shadow Stacks"