From patchwork Tue Feb 8 12:21:42 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 1/7] pmu_lbr: a few fixes
Date: Tue, 8 Feb 2022 14:21:42 +0200
Message-Id: <20220208122148.912913-2-mlevitsk@redhat.com>

* Don't run this test on AMD, since AMD's LBR is not the same as Intel's
  LBR and needs a different test.
* Don't run this test on 32 bit, as it is not built for 32 bit anyway.

Signed-off-by: Maxim Levitsky
---
 x86/pmu_lbr.c     | 6 ++++++
 x86/unittests.cfg | 1 +
 2 files changed, 7 insertions(+)

diff --git a/x86/pmu_lbr.c b/x86/pmu_lbr.c
index 5ff805a..688634d 100644
--- a/x86/pmu_lbr.c
+++ b/x86/pmu_lbr.c
@@ -68,6 +68,12 @@ int main(int ac, char **av)
 	int max, i;
 
 	setup_vm();
+
+	if (!is_intel()) {
+		report_skip("PMU_LBR test is for Intel CPUs only");
+		return 0;
+	}
+
 	perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
 	eax.full = id.a;
 
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 9a70ba3..89ff949 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -179,6 +179,7 @@ check = /proc/sys/kernel/nmi_watchdog=0
 accel = kvm
 
 [pmu_lbr]
+arch = x86_64
 file = pmu_lbr.flat
 extra_params = -cpu host,migratable=no
 check = /sys/module/kvm/parameters/ignore_msrs=N
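The `is_intel()` gate added above keys off the CPUID vendor string. As a rough illustration (this helper is not the kvm-unit-tests implementation; the register values and comparison constants come from the standard CPUID leaf-0 vendor encoding), the check boils down to comparing EBX/EDX/ECX returned by CPUID leaf 0 against the packed bytes of "GenuineIntel":

```c
#include <stdbool.h>
#include <stdint.h>

/* CPUID leaf 0 returns the vendor string packed little-endian into
 * EBX ("Genu"), EDX ("ineI") and ECX ("ntel") on Intel parts. This
 * helper only models the comparison; in real code the three register
 * values come from executing the CPUID instruction. */
static bool vendor_is_intel(uint32_t ebx, uint32_t edx, uint32_t ecx)
{
	return ebx == 0x756e6547 &&  /* "Genu" */
	       edx == 0x49656e69 &&  /* "ineI" */
	       ecx == 0x6c65746e;    /* "ntel" */
}
```

On an AMD part the same registers spell "AuthenticAMD", so the helper returns false and a test like this one would skip.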
From patchwork Tue Feb 8 12:21:43 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 2/7] svm: Fix reg_corruption test, to avoid timer interrupt firing in later tests.
Date: Tue, 8 Feb 2022 14:21:43 +0200
Message-Id: <20220208122148.912913-3-mlevitsk@redhat.com>

The test was setting up a periodic APIC timer but never disabling it,
so the timer could keep firing during later tests.

Fixes: da338a3 ("SVM: add test for nested guest RIP corruption")
Signed-off-by: Maxim Levitsky
---
 x86/svm_tests.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 0707786..7a97847 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1847,7 +1847,7 @@ static bool reg_corruption_finished(struct svm_test *test)
 		report_pass("No RIP corruption detected after %d timer interrupts",
 			    isr_cnt);
 		set_test_stage(test, 1);
-		return true;
+		goto cleanup;
 	}
 
 	if (vmcb->control.exit_code == SVM_EXIT_INTR) {
@@ -1861,11 +1861,16 @@ static bool reg_corruption_finished(struct svm_test *test)
 		if (guest_rip == insb_instruction_label && io_port_var != 0xAA) {
 			report_fail("RIP corruption detected after %d timer interrupts",
 				    isr_cnt);
-			return true;
+			goto cleanup;
 		}
 	}
 	return false;
+cleanup:
+	apic_write(APIC_LVTT, APIC_LVT_TIMER_MASK);
+	apic_write(APIC_TMICT, 0);
+	return true;
+
 }
 
 static bool reg_corruption_check(struct svm_test *test)
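The cleanup path above quiesces the local APIC timer in two steps: set the mask bit in the timer LVT entry and zero the initial count. A small model of that state change (illustrative only; the struct, constant name, and helpers here are my own, and real code pokes the hardware registers via `apic_write()` — the mask bit being bit 16 of an LVT entry is per the Intel SDM):

```c
#include <stdbool.h>
#include <stdint.h>

#define APIC_LVT_TIMER_MASKED (1u << 16)  /* LVT mask bit (Intel SDM) */

/* Toy model of the APIC timer registers the cleanup touches. */
struct apic_timer_state {
	uint32_t lvtt;   /* timer LVT entry */
	uint32_t tmict;  /* timer initial count */
};

/* What the cleanup does: mask the timer interrupt and stop the
 * countdown, so no stray timer interrupt fires in a later test. */
static struct apic_timer_state timer_cleanup(struct apic_timer_state s)
{
	s.lvtt |= APIC_LVT_TIMER_MASKED;
	s.tmict = 0;
	return s;
}

static bool timer_quiesced(struct apic_timer_state s)
{
	return (s.lvtt & APIC_LVT_TIMER_MASKED) && s.tmict == 0;
}
```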
From patchwork Tue Feb 8 12:21:44 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 3/7] svm: NMI is an "exception" and not an interrupt in x86 land
Date: Tue, 8 Feb 2022 14:21:44 +0200
Message-Id: <20220208122148.912913-4-mlevitsk@redhat.com>

Registering the NMI handler as an interrupt handler can interfere
with later tests, which do treat it as an exception.

Fixes: d4db486 ("svm: Add test cases around NMI injection")
Signed-off-by: Maxim Levitsky
---
 x86/svm_tests.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 7a97847..7586ef7 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1384,17 +1384,16 @@ static bool interrupt_check(struct svm_test *test)
 
 static volatile bool nmi_fired;
 
-static void nmi_handler(isr_regs_t *regs)
+static void nmi_handler(struct ex_regs *regs)
 {
 	nmi_fired = true;
-	apic_write(APIC_EOI, 0);
 }
 
 static void nmi_prepare(struct svm_test *test)
 {
 	default_prepare(test);
 	nmi_fired = false;
-	handle_irq(NMI_VECTOR, nmi_handler);
+	handle_exception(NMI_VECTOR, nmi_handler);
 	set_test_stage(test, 0);
 }
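The rationale can be stated as a one-line rule: x86 reserves vectors 0–31 for exceptions, while external interrupts are delivered on vectors 32–255. NMI is vector 2, so it belongs with the exception handlers, which is why the patch switches from `handle_irq()` to `handle_exception()` and drops the APIC EOI write. A minimal sketch of that vector classification (the helper is my own, not kvm-unit-tests code; the vector split is architectural):

```c
#include <stdbool.h>

#define NMI_VECTOR 2  /* architectural NMI vector */

/* Vectors 0-31 are reserved for exceptions; APIC-delivered external
 * interrupts use 32-255 and are the only ones that need an EOI. */
static bool vector_is_exception(int vector)
{
	return vector >= 0 && vector <= 31;
}
```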
From patchwork Tue Feb 8 12:21:45 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 4/7] svm: intercept shutdown in all svm tests by default
Date: Tue, 8 Feb 2022 14:21:45 +0200
Message-Id: <20220208122148.912913-5-mlevitsk@redhat.com>

If L1 doesn't intercept a shutdown, then L1 itself is shut down,
which leaves it no chance to report the error that happened.

Signed-off-by: Maxim Levitsky
---
 x86/svm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/x86/svm.c b/x86/svm.c
index 3f94b2a..62da2af 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -174,7 +174,9 @@ void vmcb_ident(struct vmcb *vmcb)
 	save->cr2 = read_cr2();
 	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
 	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
-	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) | (1ULL << INTERCEPT_VMMCALL);
+	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
+			  (1ULL << INTERCEPT_VMMCALL) |
+			  (1ULL << INTERCEPT_SHUTDOWN);
 	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
 	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
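The intercept field is a 64-bit bitmap in the VMCB control area, so "intercept shutdown by default" is just one more bit OR'd into the default mask. The sketch below shows the composition; note the bit positions are an assumption taken from Linux's `arch/x86/include/asm/svm.h` enum (SHUTDOWN=31, VMRUN=32, VMMCALL=33), not from this patch, which only names the constants:

```c
#include <stdint.h>

/* Intercept bit positions in the VMCB's 64-bit intercept field.
 * Assumed values, following Linux's svm.h enum. */
enum {
	INTERCEPT_SHUTDOWN = 31,
	INTERCEPT_VMRUN    = 32,
	INTERCEPT_VMMCALL  = 33,
};

/* Default intercept mask after this patch: VMRUN and VMMCALL as
 * before, plus SHUTDOWN so a triple-faulting L2 exits back to the
 * test (acting as L1) instead of silently shutting it down. */
static uint64_t default_intercepts(void)
{
	return (1ULL << INTERCEPT_VMRUN) |
	       (1ULL << INTERCEPT_VMMCALL) |
	       (1ULL << INTERCEPT_SHUTDOWN);
}
```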
From patchwork Tue Feb 8 12:21:46 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 5/7] svm: add SVM_BARE_VMRUN
Date: Tue, 8 Feb 2022 14:21:46 +0200
Message-Id: <20220208122148.912913-6-mlevitsk@redhat.com>

This will be useful in nested LBR tests to ensure that no extra
branches are made in the guest entry.

Signed-off-by: Maxim Levitsky
---
 x86/svm.c | 32 --------------------------------
 x86/svm.h | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+), 32 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 62da2af..6f4e023 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -196,41 +196,9 @@ struct regs get_regs(void)
 
 // rax handled specially below
 
-#define SAVE_GPR_C \
-	"xchg %%rbx, regs+0x8\n\t" \
-	"xchg %%rcx, regs+0x10\n\t" \
-	"xchg %%rdx, regs+0x18\n\t" \
-	"xchg %%rbp, regs+0x28\n\t" \
-	"xchg %%rsi, regs+0x30\n\t" \
-	"xchg %%rdi, regs+0x38\n\t" \
-	"xchg %%r8, regs+0x40\n\t" \
-	"xchg %%r9, regs+0x48\n\t" \
-	"xchg %%r10, regs+0x50\n\t" \
-	"xchg %%r11, regs+0x58\n\t" \
-	"xchg %%r12, regs+0x60\n\t" \
-	"xchg %%r13, regs+0x68\n\t" \
-	"xchg %%r14, regs+0x70\n\t" \
-	"xchg %%r15, regs+0x78\n\t"
-
-#define LOAD_GPR_C SAVE_GPR_C
 
 struct svm_test *v2_test;
 
-#define ASM_PRE_VMRUN_CMD \
-	"vmload %%rax\n\t" \
-	"mov regs+0x80, %%r15\n\t" \
-	"mov %%r15, 0x170(%%rax)\n\t" \
-	"mov regs, %%r15\n\t" \
-	"mov %%r15, 0x1f8(%%rax)\n\t" \
-	LOAD_GPR_C \
-
-#define ASM_POST_VMRUN_CMD \
-	SAVE_GPR_C \
-	"mov 0x170(%%rax), %%r15\n\t" \
-	"mov %%r15, regs+0x80\n\t" \
-	"mov 0x1f8(%%rax), %%r15\n\t" \
-	"mov %%r15, regs\n\t" \
-	"vmsave %%rax\n\t" \
 
 u64 guest_stack[10000];
 
diff --git a/x86/svm.h b/x86/svm.h
index f74b13a..6d072f4 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -416,10 +416,57 @@ void vmcb_ident(struct vmcb *vmcb);
 struct regs get_regs(void);
 void vmmcall(void);
 int __svm_vmrun(u64 rip);
+void __svm_bare_vmrun(void);
 int svm_vmrun(void);
 void test_set_guest(test_guest_func func);
 
 extern struct vmcb *vmcb;
 extern struct svm_test svm_tests[];
+
+#define SAVE_GPR_C \
+	"xchg %%rbx, regs+0x8\n\t" \
+	"xchg %%rcx, regs+0x10\n\t" \
+	"xchg %%rdx, regs+0x18\n\t" \
+	"xchg %%rbp, regs+0x28\n\t" \
+	"xchg %%rsi, regs+0x30\n\t" \
+	"xchg %%rdi, regs+0x38\n\t" \
+	"xchg %%r8, regs+0x40\n\t" \
+	"xchg %%r9, regs+0x48\n\t" \
+	"xchg %%r10, regs+0x50\n\t" \
+	"xchg %%r11, regs+0x58\n\t" \
+	"xchg %%r12, regs+0x60\n\t" \
+	"xchg %%r13, regs+0x68\n\t" \
+	"xchg %%r14, regs+0x70\n\t" \
+	"xchg %%r15, regs+0x78\n\t"
+
+#define LOAD_GPR_C SAVE_GPR_C
+
+#define ASM_PRE_VMRUN_CMD \
+	"vmload %%rax\n\t" \
+	"mov regs+0x80, %%r15\n\t" \
+	"mov %%r15, 0x170(%%rax)\n\t" \
+	"mov regs, %%r15\n\t" \
+	"mov %%r15, 0x1f8(%%rax)\n\t" \
+	LOAD_GPR_C \
+
+#define ASM_POST_VMRUN_CMD \
+	SAVE_GPR_C \
+	"mov 0x170(%%rax), %%r15\n\t" \
+	"mov %%r15, regs+0x80\n\t" \
+	"mov 0x1f8(%%rax), %%r15\n\t" \
+	"mov %%r15, regs\n\t" \
+	"vmsave %%rax\n\t" \
+
+
+
+#define SVM_BARE_VMRUN \
+	asm volatile ( \
+		ASM_PRE_VMRUN_CMD \
+		"vmrun %%rax\n\t" \
+		ASM_POST_VMRUN_CMD \
+		: \
+		: "a" (virt_to_phys(vmcb)) \
+		: "memory", "r15") \
+
 #endif
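The `SAVE_GPR_C` offsets encode the layout of the register-save area the asm shuffles through: rax at offset 0 (exchanged with the VMCB's RAX slot at 0x1f8), rbx at +0x8 up through r15 at +0x78, and +0x80 (moved via the VMCB's 0x170 RFLAGS slot). A replica struct with `offsetof` makes that correspondence checkable; note the field list is inferred from the offsets the macros use, and the names of the +0x20 and +0x80 fields (`cr2`, `rflags`) are assumptions, not taken from this patch:

```c
#include <stddef.h>
#include <stdint.h>

/* Replica of the save layout implied by SAVE_GPR_C: rbx at +0x8,
 * r15 at +0x78, and a +0x80 slot the pre/post-VMRUN asm routes
 * through the VMCB (assumed to be rflags). The +0x20 gap the macros
 * skip is assumed to be cr2. */
struct regs_layout {
	uint64_t rax, rbx, rcx, rdx, cr2, rbp, rsi, rdi;
	uint64_t r8, r9, r10, r11, r12, r13, r14, r15;
	uint64_t rflags;
};
```

If the struct and the macro offsets ever drift apart, the guest would resume with scrambled registers, which is exactly what the asserts below guard against in this model.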
From patchwork Tue Feb 8 12:21:47 2022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 6/7] svm: add tests for LBR virtualization
Date: Tue, 8 Feb 2022 14:21:47 +0200
Message-Id: <20220208122148.912913-7-mlevitsk@redhat.com>

Signed-off-by: Maxim Levitsky
---
 lib/x86/processor.h |   1 +
 x86/svm.c           |   5 +
 x86/svm.h           |   5 +-
 x86/svm_tests.c     | 239 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |   2 +-
 5 files changed, 250 insertions(+), 2 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index fe5add5..9147a47 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -187,6 +187,7 @@ static inline bool is_intel(void)
 #define X86_FEATURE_RDPRU	(CPUID(0x80000008, 0, EBX, 4))
 #define X86_FEATURE_AMD_IBPB	(CPUID(0x80000008, 0, EBX, 12))
 #define X86_FEATURE_NPT		(CPUID(0x8000000A, 0, EDX, 0))
+#define X86_FEATURE_LBRV	(CPUID(0x8000000A, 0, EDX, 1))
 #define X86_FEATURE_NRIPS	(CPUID(0x8000000A, 0, EDX, 3))
 #define X86_FEATURE_VGIF	(CPUID(0x8000000A, 0, EDX, 16))
 
diff --git a/x86/svm.c b/x86/svm.c
index 6f4e023..bb58d7c 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -70,6 +70,11 @@ bool vgif_supported(void)
 	return this_cpu_has(X86_FEATURE_VGIF);
 }
 
+bool lbrv_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_LBRV);
+}
+
 void default_prepare(struct svm_test *test)
 {
 	vmcb_ident(vmcb);
 
diff --git a/x86/svm.h b/x86/svm.h
index 6d072f4..58b9410 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -98,7 +98,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 	u32 event_inj;
 	u32 event_inj_err;
 	u64 nested_cr3;
-	u64 lbr_ctl;
+	u64 virt_ext;
 	u32 clean;
 	u32 reserved_5;
 	u64 next_rip;
@@ -360,6 +360,8 @@ struct __attribute__ ((__packed__)) vmcb {
 
 #define MSR_BITMAP_SIZE 8192
 
+#define LBR_CTL_ENABLE_MASK BIT_ULL(0)
+
 struct svm_test {
 	const char *name;
 	bool (*supported)(void);
@@ -405,6 +407,7 @@ u64 *npt_get_pml4e(void);
 bool smp_supported(void);
 bool default_supported(void);
 bool vgif_supported(void);
+bool lbrv_supported(void);
 void default_prepare(struct svm_test *test);
 void default_prepare_gif_clear(struct svm_test *test);
 bool default_finished(struct svm_test *test);
 
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 7586ef7..b2ba283 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3078,6 +3078,240 @@ static void svm_nm_test(void)
 	       "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
 }
 
+
+static bool check_lbr(u64 *from_expected, u64 *to_expected)
+{
+	u64 from = rdmsr(MSR_IA32_LASTBRANCHFROMIP);
+	u64 to = rdmsr(MSR_IA32_LASTBRANCHTOIP);
+
+	if ((u64)from_expected != from) {
+		report(false, "MSR_IA32_LASTBRANCHFROMIP, expected=0x%lx, actual=0x%lx",
+		       (u64)from_expected, from);
+		return false;
+	}
+
+	if ((u64)to_expected != to) {
+		report(false, "MSR_IA32_LASTBRANCHTOIP, expected=0x%lx, actual=0x%lx",
+		       (u64)to_expected, to);
+		return false;
+	}
+
+	return true;
+}
+
+static bool check_dbgctl(u64 dbgctl, u64 dbgctl_expected)
+{
+	if (dbgctl != dbgctl_expected) {
+		report(false, "Unexpected MSR_IA32_DEBUGCTLMSR value 0x%lx", dbgctl);
+		return false;
+	}
+	return true;
+}
+
+
+#define DO_BRANCH(branch_name) \
+	asm volatile ( \
+		# branch_name "_from:" \
+		"jmp " # branch_name "_to\n" \
+		"nop\n" \
+		"nop\n" \
+		# branch_name "_to:" \
+		"nop\n" \
+	)
+
+
+extern u64 guest_branch0_from, guest_branch0_to;
+extern u64 guest_branch2_from, guest_branch2_to;
+
+extern u64 host_branch0_from, host_branch0_to;
+extern u64 host_branch2_from, host_branch2_to;
+extern u64 host_branch3_from, host_branch3_to;
+extern u64 host_branch4_from, host_branch4_to;
+
+u64 dbgctl;
+
+static void svm_lbrv_test_guest1(void)
+{
+	/*
+	 * This guest expects the LBR to be already enabled when it starts,
+	 * it does a branch, and then disables the LBR and then checks.
+	 */
+
+	DO_BRANCH(guest_branch0);
+
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (dbgctl != DEBUGCTLMSR_LBR)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_DEBUGCTLMSR) != 0)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&guest_branch0_from)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&guest_branch0_to)
+		asm volatile("ud2\n");
+
+	asm volatile ("vmmcall\n");
+}
+
+static void svm_lbrv_test_guest2(void)
+{
+	/*
+	 * This guest expects the LBR to be disabled when it starts,
+	 * enables it, does a branch, disables it and then checks.
+	 */
+
+	DO_BRANCH(guest_branch1);
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+
+	if (dbgctl != 0)
+		asm volatile("ud2\n");
+
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&host_branch2_from)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&host_branch2_to)
+		asm volatile("ud2\n");
+
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	DO_BRANCH(guest_branch2);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (dbgctl != DEBUGCTLMSR_LBR)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&guest_branch2_from)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&guest_branch2_to)
+		asm volatile("ud2\n");
+
+	asm volatile ("vmmcall\n");
+}
+
+static void svm_lbrv_test0(void)
+{
+	report(true, "Basic LBR test");
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch0);
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	check_dbgctl(dbgctl, DEBUGCTLMSR_LBR);
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	check_dbgctl(dbgctl, 0);
+
+	check_lbr(&host_branch0_from, &host_branch0_to);
+}
+
+static void svm_lbrv_test1(void)
+{
+	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host (1)");
+
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vmcb->control.virt_ext = 0;
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch1);
+	SVM_BARE_VMRUN;
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
+		       vmcb->control.exit_code);
+		return;
+	}
+
+	check_dbgctl(dbgctl, 0);
+	check_lbr(&guest_branch0_from, &guest_branch0_to);
+}
+
+static void svm_lbrv_test2(void)
+{
+	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host (2)");
+
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vmcb->control.virt_ext = 0;
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch2);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+	SVM_BARE_VMRUN;
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
+		       vmcb->control.exit_code);
+		return;
+	}
+
+	check_dbgctl(dbgctl, 0);
+	check_lbr(&guest_branch2_from, &guest_branch2_to);
+}
+
+static void svm_lbrv_nested_test1(void)
+{
+	if (!lbrv_supported()) {
+		report_skip("LBRV not supported in the guest");
+		return;
+	}
+
+	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+	vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch3);
+	SVM_BARE_VMRUN;
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
+		       vmcb->control.exit_code);
+		return;
+	}
+
+	if (vmcb->save.dbgctl != 0) {
+		report(false, "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx",
+		       vmcb->save.dbgctl);
+		return;
+	}
+
+	check_dbgctl(dbgctl, DEBUGCTLMSR_LBR);
+	check_lbr(&host_branch3_from, &host_branch3_to);
+}
+
+static void svm_lbrv_nested_test2(void)
+{
+	if (!lbrv_supported()) {
+		report_skip("LBRV not supported in the guest");
+		return;
+	}
+
+	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+
+	vmcb->save.dbgctl = 0;
+	vmcb->save.br_from = (u64)&host_branch2_from;
+	vmcb->save.br_to = (u64)&host_branch2_to;
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch4);
+	SVM_BARE_VMRUN;
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
+		       vmcb->control.exit_code);
+		return;
+	}
+
+	check_dbgctl(dbgctl, DEBUGCTLMSR_LBR);
+	check_lbr(&host_branch4_from, &host_branch4_to);
+}
+
 struct svm_test svm_tests[] = {
 	{ "null", default_supported, default_prepare,
 	  default_prepare_gif_clear, null_test,
@@ -3200,5 +3434,10 @@ struct svm_test svm_tests[] = {
 	TEST(svm_nm_test),
 	TEST(svm_int3_test),
 	TEST(svm_into_test),
+	TEST(svm_lbrv_test0),
+	TEST(svm_lbrv_test1),
+	TEST(svm_lbrv_test2),
+	TEST(svm_lbrv_nested_test1),
+	TEST(svm_lbrv_nested_test2),
 	{ NULL, NULL, NULL, NULL, NULL, NULL, NULL }
 };
 
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 89ff949..fa4ff69 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -238,7 +238,7 @@ arch = x86_64
 [svm]
 file = svm.flat
 smp = 2
-extra_params = -cpu max,+svm -m 4g
+extra_params = -cpu max,+svm -m 4g -append "-svm_lbrv_test*"
 arch = x86_64
 
 [taskswitch]
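The core distinction these tests probe: without LBRV (`virt_ext` = 0) there is a single physical DEBUGCTL/LBR state shared by host and guest, so whatever the guest leaves behind is what the host reads after VMRUN; with `LBR_CTL_ENABLE_MASK` set, the guest runs on the VMCB's copy and the host value survives. A toy model of just that observable (my own simplification, not the hardware behavior in full detail; the LBR bit being bit 0 of IA32_DEBUGCTL is architectural):

```c
#include <stdbool.h>
#include <stdint.h>

#define DEBUGCTLMSR_LBR (1ULL << 0)  /* IA32_DEBUGCTL LBR enable bit */

/* Simplified view: with LBRV the host's DEBUGCTL is preserved across
 * VMRUN; without it, the value the guest left in the shared MSR is
 * what the host observes after the VM exit. */
struct lbr_model {
	uint64_t host_dbgctl;   /* host value before VMRUN */
	uint64_t guest_dbgctl;  /* value the guest leaves behind */
	bool lbrv;              /* LBR_CTL_ENABLE_MASK set? */
};

static uint64_t host_dbgctl_after_vmrun(struct lbr_model m)
{
	return m.lbrv ? m.host_dbgctl : m.guest_dbgctl;
}
```

This is exactly the pattern `svm_lbrv_test1` checks (host enables LBR, guest disables it, host reads back 0) versus `svm_lbrv_nested_test1` (host value comes back intact).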
s=mimecast20190719; t=1644322924; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=nbmEOQdTLOjowm+EP4V3yhFwqv57RIYTM/rSF0fgSNA=; b=agFq3jiPEzQXx64jw7N7lf6GIz+dHR4TONsgmhtbtPZf6iKN3MgZngoTIc6v3M+LBvSRMf Eo0OrWIDVsoQ/On4vDVx8UfSs9rLVVCoACDjXaJiwuYgo5hI2tFecTdE3wq4sATL0GvO5e uHpFXeEbfXkbt74m2GzhQIrPJ174IiY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-447-x6yvXJl3OdOO0MUAf2xEUg-1; Tue, 08 Feb 2022 07:22:03 -0500 X-MC-Unique: x6yvXJl3OdOO0MUAf2xEUg-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 43E3992510 for ; Tue, 8 Feb 2022 12:22:02 +0000 (UTC) Received: from localhost.localdomain (unknown [10.40.192.15]) by smtp.corp.redhat.com (Postfix) with ESMTP id 7ECE27CAF4; Tue, 8 Feb 2022 12:22:00 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Maxim Levitsky Subject: [PATCH 7/7] svm: add tests for case when L1 intercepts various hardware interrupts (an interrupt, SMI, NMI), but lets L2 control either EFLAG.IF or GIF Date: Tue, 8 Feb 2022 14:21:48 +0200 Message-Id: <20220208122148.912913-8-mlevitsk@redhat.com> In-Reply-To: <20220208122148.912913-1-mlevitsk@redhat.com> References: <20220208122148.912913-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Signed-off-by: Maxim Levitsky --- x86/svm.h | 11 +++ x86/svm_tests.c | 194 ++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 205 insertions(+) diff --git a/x86/svm.h b/x86/svm.h index 
58b9410..df1b1ac 100644 --- a/x86/svm.h +++ b/x86/svm.h @@ -426,6 +426,17 @@ void test_set_guest(test_guest_func func); extern struct vmcb *vmcb; extern struct svm_test svm_tests[]; +static inline void stgi(void) +{ + asm volatile ("stgi"); +} + +static inline void clgi(void) +{ + asm volatile ("clgi"); +} + + #define SAVE_GPR_C \ "xchg %%rbx, regs+0x8\n\t" \ diff --git a/x86/svm_tests.c b/x86/svm_tests.c index b2ba283..ef8b5ee 100644 --- a/x86/svm_tests.c +++ b/x86/svm_tests.c @@ -3312,6 +3312,195 @@ static void svm_lbrv_nested_test2(void) check_lbr(&host_branch4_from, &host_branch4_to); } + +// test that a nested guest which does enable INTR interception +// but doesn't enable virtual interrupt masking works + +static volatile int dummy_isr_recevied; +static void dummy_isr(isr_regs_t *regs) +{ + dummy_isr_recevied++; + eoi(); +} + + +static volatile int nmi_recevied; +static void dummy_nmi_handler(struct ex_regs *regs) +{ + nmi_recevied++; +} + + +static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected_vmexit) +{ + if (counter) + *counter = 0; + + sti(); // host IF value should not matter + clgi(); // vmrun will set back GI to 1 + + svm_vmrun(); + + if (counter) + report(!*counter, "No interrupt expected"); + + stgi(); + + if (counter) + report(*counter == 1, "Interrupt is expected"); + + report (vmcb->control.exit_code == expected_vmexit, "Test expected VM exit"); + report(vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now"); + cli(); +} + + +// subtest: test that enabling EFLAGS.IF is enought to trigger an interrupt +static void svm_intr_intercept_mix_if_guest(struct svm_test *test) +{ + asm volatile("nop;nop;nop;nop"); + report(!dummy_isr_recevied, "No interrupt expected"); + sti(); + asm volatile("nop"); + report(0, "must not reach here"); +} + +static void svm_intr_intercept_mix_if(void) +{ + // make a physical interrupt to be pending + handle_irq(0x55, dummy_isr); + + vmcb->control.intercept |= (1 << 
INTERCEPT_INTR); + vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; + vmcb->save.rflags &= ~X86_EFLAGS_IF; + + test_set_guest(svm_intr_intercept_mix_if_guest); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0); + svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR); +} + + +// subtest: test that a clever guest can trigger an interrupt by setting GIF +// if GIF is not intercepted +static void svm_intr_intercept_mix_gif_guest(struct svm_test *test) +{ + + asm volatile("nop;nop;nop;nop"); + report(!dummy_isr_recevied, "No interrupt expected"); + + // clear GIF and enable IF + // that should still not cause VM exit + clgi(); + sti(); + asm volatile("nop"); + report(!dummy_isr_recevied, "No interrupt expected"); + + stgi(); + asm volatile("nop"); + report(0, "must not reach here"); +} + +static void svm_intr_intercept_mix_gif(void) +{ + handle_irq(0x55, dummy_isr); + + vmcb->control.intercept |= (1 << INTERCEPT_INTR); + vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; + vmcb->save.rflags &= ~X86_EFLAGS_IF; + + test_set_guest(svm_intr_intercept_mix_gif_guest); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0); + svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR); +} + + + +// subtest: test that a clever guest can trigger an interrupt by setting GIF +// if GIF is not intercepted and interrupt comes after guest +// started running +static void svm_intr_intercept_mix_gif_guest2(struct svm_test *test) +{ + asm volatile("nop;nop;nop;nop"); + report(!dummy_isr_recevied, "No interrupt expected"); + + clgi(); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0); + report(!dummy_isr_recevied, "No interrupt expected"); + + stgi(); + asm volatile("nop"); + report(0, "must not reach here"); +} + +static void svm_intr_intercept_mix_gif2(void) +{ + handle_irq(0x55, dummy_isr); + + vmcb->control.intercept |= (1 << INTERCEPT_INTR); + vmcb->control.int_ctl &= 
~V_INTR_MASKING_MASK; + vmcb->save.rflags |= X86_EFLAGS_IF; + + test_set_guest(svm_intr_intercept_mix_gif_guest2); + svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR); +} + + +// subtest: test that pending NMI will be handled when guest enables GIF +static void svm_intr_intercept_mix_nmi_guest(struct svm_test *test) +{ + asm volatile("nop;nop;nop;nop"); + report(!nmi_recevied, "No NMI expected"); + cli(); // should have no effect + + clgi(); + asm volatile("nop"); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI, 0); + sti(); // should have no effect + asm volatile("nop"); + report(!nmi_recevied, "No NMI expected"); + + stgi(); + asm volatile("nop"); + report(0, "must not reach here"); +} + +static void svm_intr_intercept_mix_nmi(void) +{ + handle_exception(2, dummy_nmi_handler); + + vmcb->control.intercept |= (1 << INTERCEPT_NMI); + vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; + vmcb->save.rflags |= X86_EFLAGS_IF; + + test_set_guest(svm_intr_intercept_mix_nmi_guest); + svm_intr_intercept_mix_run_guest(&nmi_recevied, SVM_EXIT_NMI); +} + +// test that pending SMI will be handled when guest enables GIF +// TODO: can't really count #SMIs so just test that guest doesn't hang +// and VMexits on SMI +static void svm_intr_intercept_mix_smi_guest(struct svm_test *test) +{ + asm volatile("nop;nop;nop;nop"); + + clgi(); + asm volatile("nop"); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_SMI, 0); + sti(); // should have no effect + asm volatile("nop"); + stgi(); + asm volatile("nop"); + report(0, "must not reach here"); +} + +static void svm_intr_intercept_mix_smi(void) +{ + vmcb->control.intercept |= (1 << INTERCEPT_SMI); + vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; + test_set_guest(svm_intr_intercept_mix_smi_guest); + svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI); +} + struct svm_test svm_tests[] = { { "null", default_supported, default_prepare, default_prepare_gif_clear, null_test, @@ -3439,5 +3628,10 @@ 
struct svm_test svm_tests[] = { TEST(svm_lbrv_test2), TEST(svm_lbrv_nested_test1), TEST(svm_lbrv_nested_test2), + TEST(svm_intr_intercept_mix_if), + TEST(svm_intr_intercept_mix_gif), + TEST(svm_intr_intercept_mix_gif2), + TEST(svm_intr_intercept_mix_nmi), + TEST(svm_intr_intercept_mix_smi), { NULL, NULL, NULL, NULL, NULL, NULL, NULL } };