From patchwork Tue Mar 22 20:56:05 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12789022
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 1/9] pmu_lbr: few fixes
Date: Tue, 22 Mar 2022 22:56:05 +0200
Message-Id: <20220322205613.250925-2-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References: <20220322205613.250925-1-mlevitsk@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

* Don't run this test on AMD, since AMD's LBR is not the same as
  Intel's LBR and needs a different test.

* Don't run this test on 32 bit, as it is not built for 32 bit anyway.

Signed-off-by: Maxim Levitsky
---
 x86/pmu_lbr.c     | 6 ++++++
 x86/unittests.cfg | 1 +
 2 files changed, 7 insertions(+)

diff --git a/x86/pmu_lbr.c b/x86/pmu_lbr.c
index 5ff805a..688634d 100644
--- a/x86/pmu_lbr.c
+++ b/x86/pmu_lbr.c
@@ -68,6 +68,12 @@ int main(int ac, char **av)
 	int max, i;
 
 	setup_vm();
+
+	if (!is_intel()) {
+		report_skip("PMU_LBR test is for intel CPU's only");
+		return 0;
+	}
+
 	perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
 	eax.full = id.a;
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 9a70ba3..89ff949 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -179,6 +179,7 @@ check = /proc/sys/kernel/nmi_watchdog=0
 accel = kvm
 
 [pmu_lbr]
+arch = x86_64
 file = pmu_lbr.flat
 extra_params = -cpu host,migratable=no
 check = /sys/module/kvm/parameters/ignore_msrs=N
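For context, the `is_intel()` check the patch adds keys off the CPUID vendor string. A minimal user-space sketch of the same decision (helper name hypothetical; assumes a little-endian host, which holds on x86, where CPUID leaf 0 returns the vendor string in EBX:EDX:ECX):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/*
 * Given the three CPUID leaf-0 registers, rebuild the 12-byte vendor
 * string and compare it against "GenuineIntel" -- the same decision
 * is_intel() makes before the test skips itself on non-Intel parts.
 * The byte order relies on little-endian layout, as on x86.
 */
static bool vendor_is_intel(uint32_t ebx, uint32_t edx, uint32_t ecx)
{
	char vendor[13];

	memcpy(vendor + 0, &ebx, 4);
	memcpy(vendor + 4, &edx, 4);
	memcpy(vendor + 8, &ecx, 4);
	vendor[12] = '\0';

	return strcmp(vendor, "GenuineIntel") == 0;
}
```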
From patchwork Tue Mar 22 20:56:06 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12789027
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 2/9] svm: Fix reg_corruption test, to avoid timer interrupt firing in later tests.
Date: Tue, 22 Mar 2022 22:56:06 +0200
Message-Id: <20220322205613.250925-3-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References: <20220322205613.250925-1-mlevitsk@redhat.com>

The test was setting up an APIC periodic timer but was not disabling
it later, so the timer could keep firing during later tests.

Fixes: da338a3 ("SVM: add test for nested guest RIP corruption")

Signed-off-by: Maxim Levitsky
---
 x86/svm_tests.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 0707786..7a97847 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1847,7 +1847,7 @@ static bool reg_corruption_finished(struct svm_test *test)
 		report_pass("No RIP corruption detected after %d timer interrupts",
 			    isr_cnt);
 		set_test_stage(test, 1);
-		return true;
+		goto cleanup;
 	}
 
 	if (vmcb->control.exit_code == SVM_EXIT_INTR) {
@@ -1861,11 +1861,16 @@ static bool reg_corruption_finished(struct svm_test *test)
 		if (guest_rip == insb_instruction_label && io_port_var != 0xAA) {
 			report_fail("RIP corruption detected after %d timer interrupts",
 				    isr_cnt);
-			return true;
+			goto cleanup;
 		}
 	}
 	return false;
+cleanup:
+	apic_write(APIC_LVTT, APIC_LVT_TIMER_MASK);
+	apic_write(APIC_TMICT, 0);
+	return true;
+
 }
 
 static bool reg_corruption_check(struct svm_test *test)
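The cleanup path silences the timer by writing the LVT mask bit into APIC_LVTT. A small sketch of why that works (constant values assumed to follow the usual apicdef.h layout: bit 16 of every local vector table entry is the mask bit, and a masked entry never delivers its interrupt):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed LVT bit layout (as in Linux's apicdef.h): bit 16 masks the
 * entry, bit 17 selects periodic mode, bits 0-7 hold the vector. */
#define APIC_LVT_MASKED		(1u << 16)
#define APIC_LVT_TIMER_PERIODIC	(1u << 17)

/* A masked LVT entry delivers nothing, regardless of mode or vector. */
static int lvt_delivers(uint32_t lvt_entry)
{
	return !(lvt_entry & APIC_LVT_MASKED);
}
```

Writing the mask bit (plus zeroing APIC_TMICT, the initial-count register) is what stops the periodic timer from leaking interrupts into later tests.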
From patchwork Tue Mar 22 20:56:07 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12789024
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 3/9] svm: NMI is an "exception" and not interrupt in x86 land
Date: Tue, 22 Mar 2022 22:56:07 +0200
Message-Id: <20220322205613.250925-4-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References: <20220322205613.250925-1-mlevitsk@redhat.com>

Registering the NMI handler with handle_irq() can interfere with later
tests, which do treat it as an exception.

Fixes: d4db486 ("svm: Add test cases around NMI injection")

Signed-off-by: Maxim Levitsky
---
 x86/svm_tests.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 7a97847..7586ef7 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1384,17 +1384,16 @@ static bool interrupt_check(struct svm_test *test)
 
 static volatile bool nmi_fired;
 
-static void nmi_handler(isr_regs_t *regs)
+static void nmi_handler(struct ex_regs *regs)
 {
 	nmi_fired = true;
-	apic_write(APIC_EOI, 0);
 }
 
 static void nmi_prepare(struct svm_test *test)
 {
 	default_prepare(test);
 	nmi_fired = false;
-	handle_irq(NMI_VECTOR, nmi_handler);
+	handle_exception(NMI_VECTOR, nmi_handler);
 	set_test_stage(test, 0);
 }
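The fix re-registers NMI through handle_exception() and drops the APIC EOI write: NMI is vector 2, which sits in the CPU's reserved exception range rather than the external-interrupt range that the local APIC drives (and EOIs). A tiny sketch of that distinction (helper name hypothetical; the vector boundary is architectural):

```c
#include <assert.h>

#define NMI_VECTOR		2
/* Vectors 0-31 are reserved by the architecture for exceptions;
 * external (APIC-delivered) interrupts start at 32. */
#define FIRST_EXTERNAL_VECTOR	32

/* Exception vectors are dispatched by the CPU itself and must not
 * write an APIC EOI; only external-interrupt vectors need one. */
static int is_exception_vector(int vector)
{
	return vector >= 0 && vector < FIRST_EXTERNAL_VECTOR;
}
```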
From patchwork Tue Mar 22 20:56:08 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12789019
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 4/9] svm: intercept shutdown in all svm tests by default
Date: Tue, 22 Mar 2022 22:56:08 +0200
Message-Id: <20220322205613.250925-5-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References: <20220322205613.250925-1-mlevitsk@redhat.com>

If L1 doesn't intercept a shutdown, then L1 itself receives it, which
doesn't allow it to report the error that happened.

Signed-off-by: Maxim Levitsky
---
 x86/svm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/x86/svm.c b/x86/svm.c
index 3f94b2a..62da2af 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -174,7 +174,9 @@ void vmcb_ident(struct vmcb *vmcb)
 	save->cr2 = read_cr2();
 	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
 	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
-	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) | (1ULL << INTERCEPT_VMMCALL);
+	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
+			  (1ULL << INTERCEPT_VMMCALL) |
+			  (1ULL << INTERCEPT_SHUTDOWN);
 	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
 	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
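The one-line change above ORs INTERCEPT_SHUTDOWN into the default intercept word. A sketch of the resulting 64-bit mask, assuming the bit positions from the enum ordering in Linux's arch/x86/include/asm/svm.h (SHUTDOWN = 31, VMRUN = 32, VMMCALL = 33 — an assumption here, not quoted from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed bit positions in the VMCB intercept word, per the enum
 * ordering in Linux's asm/svm.h. */
enum {
	INTERCEPT_SHUTDOWN = 31,
	INTERCEPT_VMRUN    = 32,
	INTERCEPT_VMMCALL  = 33,
};

/* Same expression vmcb_ident() now builds: VMRUN and VMMCALL were
 * already intercepted; SHUTDOWN is the new addition. */
static uint64_t default_intercepts(void)
{
	return (1ULL << INTERCEPT_VMRUN) |
	       (1ULL << INTERCEPT_VMMCALL) |
	       (1ULL << INTERCEPT_SHUTDOWN);
}
```

With these positions, bits 31-33 are set and nothing else, so a shutdown in L2 now exits to the test harness instead of resetting the whole guest.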
From patchwork Tue Mar 22 20:56:09 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12789026
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 5/9] svm: add SVM_BARE_VMRUN
Date: Tue, 22 Mar 2022 22:56:09 +0200
Message-Id: <20220322205613.250925-6-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References: <20220322205613.250925-1-mlevitsk@redhat.com>

This will be useful in nested LBR tests to ensure that no extra
branches are made in the guest entry.

Signed-off-by: Maxim Levitsky
---
 x86/svm.c | 32 --------------------------------
 x86/svm.h | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+), 32 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 62da2af..6f4e023 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -196,41 +196,9 @@ struct regs get_regs(void)
 
 // rax handled specially below
 
-#define SAVE_GPR_C \
-	"xchg %%rbx, regs+0x8\n\t" \
-	"xchg %%rcx, regs+0x10\n\t" \
-	"xchg %%rdx, regs+0x18\n\t" \
-	"xchg %%rbp, regs+0x28\n\t" \
-	"xchg %%rsi, regs+0x30\n\t" \
-	"xchg %%rdi, regs+0x38\n\t" \
-	"xchg %%r8, regs+0x40\n\t" \
-	"xchg %%r9, regs+0x48\n\t" \
-	"xchg %%r10, regs+0x50\n\t" \
-	"xchg %%r11, regs+0x58\n\t" \
-	"xchg %%r12, regs+0x60\n\t" \
-	"xchg %%r13, regs+0x68\n\t" \
-	"xchg %%r14, regs+0x70\n\t" \
-	"xchg %%r15, regs+0x78\n\t"
-
-#define LOAD_GPR_C SAVE_GPR_C
 
 struct svm_test *v2_test;
 
-#define ASM_PRE_VMRUN_CMD \
-	"vmload %%rax\n\t" \
-	"mov regs+0x80, %%r15\n\t" \
-	"mov %%r15, 0x170(%%rax)\n\t" \
-	"mov regs, %%r15\n\t" \
-	"mov %%r15, 0x1f8(%%rax)\n\t" \
-	LOAD_GPR_C \
-
-#define ASM_POST_VMRUN_CMD \
-	SAVE_GPR_C \
-	"mov 0x170(%%rax), %%r15\n\t" \
-	"mov %%r15, regs+0x80\n\t" \
-	"mov 0x1f8(%%rax), %%r15\n\t" \
-	"mov %%r15, regs\n\t" \
-	"vmsave %%rax\n\t" \
 
 u64 guest_stack[10000];
diff --git a/x86/svm.h b/x86/svm.h
index f74b13a..6d072f4 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -416,10 +416,57 @@ void vmcb_ident(struct vmcb *vmcb);
 struct regs get_regs(void);
 void vmmcall(void);
 int __svm_vmrun(u64 rip);
+void __svm_bare_vmrun(void);
 int svm_vmrun(void);
 void test_set_guest(test_guest_func func);
 
 extern struct vmcb *vmcb;
 extern struct svm_test svm_tests[];
+
+#define SAVE_GPR_C \
+	"xchg %%rbx, regs+0x8\n\t" \
+	"xchg %%rcx, regs+0x10\n\t" \
+	"xchg %%rdx, regs+0x18\n\t" \
+	"xchg %%rbp, regs+0x28\n\t" \
+	"xchg %%rsi, regs+0x30\n\t" \
+	"xchg %%rdi, regs+0x38\n\t" \
+	"xchg %%r8, regs+0x40\n\t" \
+	"xchg %%r9, regs+0x48\n\t" \
+	"xchg %%r10, regs+0x50\n\t" \
+	"xchg %%r11, regs+0x58\n\t" \
+	"xchg %%r12, regs+0x60\n\t" \
+	"xchg %%r13, regs+0x68\n\t" \
+	"xchg %%r14, regs+0x70\n\t" \
+	"xchg %%r15, regs+0x78\n\t"
+
+#define LOAD_GPR_C SAVE_GPR_C
+
+#define ASM_PRE_VMRUN_CMD \
+	"vmload %%rax\n\t" \
+	"mov regs+0x80, %%r15\n\t" \
+	"mov %%r15, 0x170(%%rax)\n\t" \
+	"mov regs, %%r15\n\t" \
+	"mov %%r15, 0x1f8(%%rax)\n\t" \
+	LOAD_GPR_C \
+
+#define ASM_POST_VMRUN_CMD \
+	SAVE_GPR_C \
+	"mov 0x170(%%rax), %%r15\n\t" \
+	"mov %%r15, regs+0x80\n\t" \
+	"mov 0x1f8(%%rax), %%r15\n\t" \
+	"mov %%r15, regs\n\t" \
+	"vmsave %%rax\n\t" \
+
+
+#define SVM_BARE_VMRUN \
+	asm volatile ( \
+		ASM_PRE_VMRUN_CMD \
+		"vmrun %%rax\n\t" \
+		ASM_POST_VMRUN_CMD \
+		: \
+		: "a" (virt_to_phys(vmcb)) \
+		: "memory", "r15") \
+
 #endif
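The hard-coded offsets in SAVE_GPR_C (regs+0x8 for rbx, regs+0x78 for r15, regs+0x80 for rflags, with a hole at +0x20 that the xchg list skips) imply a flat array of u64 slots. A sketch validating that layout with offsetof — the struct shape below is inferred from the offsets, not quoted from svm.h, so field names other than the GPRs are assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * The layout the SAVE_GPR_C / ASM_PRE_VMRUN_CMD offsets imply:
 * GPRs in order, a slot at +0x20 the xchg list never touches
 * (assumed here to be cr2), and rflags at +0x80 -- the value the
 * macros shuttle to/from VMCB save-area offset 0x170.
 */
struct regs_layout {
	uint64_t rax, rbx, rcx, rdx;
	uint64_t cr2;			/* +0x20: skipped by SAVE_GPR_C */
	uint64_t rbp, rsi, rdi;
	uint64_t r8, r9, r10, r11, r12, r13, r14, r15;
	uint64_t rflags;		/* +0x80 */
};
```

If this layout ever drifted from the macros' literal offsets, VMRUN would silently corrupt guest registers — which is exactly why SVM_BARE_VMRUN keeps the save/restore inline rather than calling out to a helper that would add branches.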
From patchwork Tue Mar 22 20:56:10 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12789020
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 6/9] svm: add tests for LBR virtualization
Date: Tue, 22 Mar 2022 22:56:10 +0200
Message-Id: <20220322205613.250925-7-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References: <20220322205613.250925-1-mlevitsk@redhat.com>

Signed-off-by: Maxim Levitsky
---
 lib/x86/processor.h |   1 +
 x86/svm.c           |   5 +
 x86/svm.h           |   5 +-
 x86/svm_tests.c     | 239 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 249 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 117032a..b01c3d0 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -187,6 +187,7 @@ static inline bool is_intel(void)
 #define X86_FEATURE_RDPRU	(CPUID(0x80000008, 0, EBX, 4))
 #define X86_FEATURE_AMD_IBPB	(CPUID(0x80000008, 0, EBX, 12))
 #define X86_FEATURE_NPT		(CPUID(0x8000000A, 0, EDX, 0))
+#define X86_FEATURE_LBRV	(CPUID(0x8000000A, 0, EDX, 1))
 #define X86_FEATURE_NRIPS	(CPUID(0x8000000A, 0, EDX, 3))
 #define X86_FEATURE_VGIF	(CPUID(0x8000000A, 0, EDX, 16))
diff --git a/x86/svm.c b/x86/svm.c
index 6f4e023..bb58d7c 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -70,6 +70,11 @@ bool vgif_supported(void)
 	return this_cpu_has(X86_FEATURE_VGIF);
 }
 
+bool lbrv_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_LBRV);
+}
+
 void default_prepare(struct svm_test *test)
 {
 	vmcb_ident(vmcb);
diff --git a/x86/svm.h b/x86/svm.h
index 6d072f4..58b9410 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -98,7 +98,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 	u32 event_inj;
 	u32 event_inj_err;
 	u64 nested_cr3;
-	u64 lbr_ctl;
+	u64 virt_ext;
 	u32 clean;
 	u32 reserved_5;
 	u64 next_rip;
@@ -360,6 +360,8 @@ struct __attribute__ ((__packed__)) vmcb {
 
 #define MSR_BITMAP_SIZE 8192
 
+#define LBR_CTL_ENABLE_MASK BIT_ULL(0)
+
 struct svm_test {
 	const char *name;
 	bool (*supported)(void);
@@ -405,6 +407,7 @@ u64 *npt_get_pml4e(void);
 bool smp_supported(void);
 bool default_supported(void);
 bool vgif_supported(void);
+bool lbrv_supported(void);
 void default_prepare(struct svm_test *test);
 void default_prepare_gif_clear(struct svm_test *test);
 bool default_finished(struct svm_test *test);
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 7586ef7..b2ba283 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3078,6 +3078,240 @@ static void svm_nm_test(void)
 	       "fnop with CR0.TS and CR0.EM unset no #NM excpetion");
 }
 
+
+static bool check_lbr(u64 *from_expected, u64 *to_expected)
+{
+	u64 from = rdmsr(MSR_IA32_LASTBRANCHFROMIP);
+	u64 to = rdmsr(MSR_IA32_LASTBRANCHTOIP);
+
+	if ((u64)from_expected != from) {
+		report(false, "MSR_IA32_LASTBRANCHFROMIP, expected=0x%lx, actual=0x%lx",
+		       (u64)from_expected, from);
+		return false;
+	}
+
+	if ((u64)to_expected != to) {
+		report(false, "MSR_IA32_LASTBRANCHTOIP, expected=0x%lx, actual=0x%lx",
+		       (u64)to_expected, to);
+		return false;
+	}
+
+	return true;
+}
+
+static bool check_dbgctl(u64 dbgctl, u64 dbgctl_expected)
+{
+	if (dbgctl != dbgctl_expected) {
+		report(false, "Unexpected MSR_IA32_DEBUGCTLMSR value 0x%lx", dbgctl);
+		return false;
+	}
+	return true;
+}
+
+
+#define DO_BRANCH(branch_name) \
+	asm volatile ( \
+		# branch_name "_from:" \
+		"jmp " # branch_name "_to\n" \
+		"nop\n" \
+		"nop\n" \
+		# branch_name "_to:" \
+		"nop\n" \
+	)
+
+
+extern u64 guest_branch0_from, guest_branch0_to;
+extern u64 guest_branch2_from, guest_branch2_to;
+
+extern u64 host_branch0_from, host_branch0_to;
+extern u64 host_branch2_from, host_branch2_to;
+extern u64 host_branch3_from, host_branch3_to;
+extern u64 host_branch4_from, host_branch4_to;
+
+u64 dbgctl;
+
+static void svm_lbrv_test_guest1(void)
+{
+	/*
+	 * This guest expects the LBR to be already enabled when it starts,
+	 * it does a branch, and then disables the LBR and then checks.
+	 */
+
+	DO_BRANCH(guest_branch0);
+
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (dbgctl != DEBUGCTLMSR_LBR)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_DEBUGCTLMSR) != 0)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&guest_branch0_from)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&guest_branch0_to)
+		asm volatile("ud2\n");
+
+	asm volatile ("vmmcall\n");
+}
+
+static void svm_lbrv_test_guest2(void)
+{
+	/*
+	 * This guest expects the LBR to be disabled when it starts,
+	 * enables it, does a branch, disables it and then checks.
+	 */
+
+	DO_BRANCH(guest_branch1);
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+
+	if (dbgctl != 0)
+		asm volatile("ud2\n");
+
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&host_branch2_from)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&host_branch2_to)
+		asm volatile("ud2\n");
+
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	DO_BRANCH(guest_branch2);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (dbgctl != DEBUGCTLMSR_LBR)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHFROMIP) != (u64)&guest_branch2_from)
+		asm volatile("ud2\n");
+	if (rdmsr(MSR_IA32_LASTBRANCHTOIP) != (u64)&guest_branch2_to)
+		asm volatile("ud2\n");
+
+	asm volatile ("vmmcall\n");
+}
+
+static void svm_lbrv_test0(void)
+{
+	report(true, "Basic LBR test");
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch0);
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	check_dbgctl(dbgctl, DEBUGCTLMSR_LBR);
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	check_dbgctl(dbgctl, 0);
+
+	check_lbr(&host_branch0_from, &host_branch0_to);
+}
+
+static void svm_lbrv_test1(void)
+{
+	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
+
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vmcb->control.virt_ext = 0;
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch1);
+	SVM_BARE_VMRUN;
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
+		       vmcb->control.exit_code);
+		return;
+	}
+
+	check_dbgctl(dbgctl, 0);
+	check_lbr(&guest_branch0_from, &guest_branch0_to);
+}
+
+static void svm_lbrv_test2(void)
+{
+	report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
+
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vmcb->control.virt_ext = 0;
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch2);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+	SVM_BARE_VMRUN;
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
+		       vmcb->control.exit_code);
+		return;
+	}
+
+	check_dbgctl(dbgctl, 0);
+	check_lbr(&guest_branch2_from, &guest_branch2_to);
+}
+
+static void svm_lbrv_nested_test1(void)
+{
+	if (!lbrv_supported()) {
+		report_skip("LBRV not supported in the guest");
+		return;
+	}
+
+	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest1;
+	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+	vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch3);
+	SVM_BARE_VMRUN;
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
+		       vmcb->control.exit_code);
+		return;
+	}
+
+	if (vmcb->save.dbgctl != 0) {
+		report(false, "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx",
+		       vmcb->save.dbgctl);
+		return;
+	}
+
+	check_dbgctl(dbgctl, DEBUGCTLMSR_LBR);
+	check_lbr(&host_branch3_from, &host_branch3_to);
+}
+
+static void svm_lbrv_nested_test2(void)
+{
+	if (!lbrv_supported()) {
+		report_skip("LBRV not supported in the guest");
+		return;
+	}
+
+	report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
+	vmcb->save.rip = (ulong)svm_lbrv_test_guest2;
+	vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+
+	vmcb->save.dbgctl = 0;
+	vmcb->save.br_from = (u64)&host_branch2_from;
+	vmcb->save.br_to = (u64)&host_branch2_to;
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	DO_BRANCH(host_branch4);
+	SVM_BARE_VMRUN;
+	dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+	if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+		report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x",
+		       vmcb->control.exit_code);
+		return;
+	}
+
+	check_dbgctl(dbgctl, DEBUGCTLMSR_LBR);
+	check_lbr(&host_branch4_from, &host_branch4_to);
+}
+
 struct svm_test svm_tests[] = {
 	{ "null", default_supported, default_prepare,
 	  default_prepare_gif_clear, null_test,
@@ -3200,5 +3434,10 @@ struct svm_test svm_tests[] = {
 	TEST(svm_nm_test),
 	TEST(svm_int3_test),
 	TEST(svm_into_test),
+	TEST(svm_lbrv_test0),
+	TEST(svm_lbrv_test1),
+	TEST(svm_lbrv_test2),
+	TEST(svm_lbrv_nested_test1),
+	TEST(svm_lbrv_nested_test2),
 	{ NULL, NULL, NULL, NULL, NULL, NULL, NULL }
 };
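The new lbrv_supported() helper keys off CPUID leaf 0x8000000A EDX bit 1, and the tests toggle DEBUGCTL bit 0 (DEBUGCTLMSR_LBR). A small sketch of both bit tests against synthetic register values (the bit positions are taken from the patch's X86_FEATURE_LBRV definition; helper names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define DEBUGCTLMSR_LBR		(1ULL << 0)	/* IA32_DEBUGCTL.LBR */
#define SVM_FEATURE_LBRV_BIT	1		/* CPUID 0x8000000A, EDX bit 1 */

/* Would lbrv_supported() return true for this EDX value? */
static int lbrv_supported_edx(uint32_t edx)
{
	return (edx >> SVM_FEATURE_LBRV_BIT) & 1;
}

/* Is LBR recording currently enabled in this DEBUGCTL value? */
static int lbr_recording_enabled(uint64_t dbgctl)
{
	return (dbgctl & DEBUGCTLMSR_LBR) != 0;
}
```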
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 7/9] svm: add tests for case when L1 intercepts various hardware interrupts
Date: Tue, 22 Mar 2022 22:56:11 +0200
Message-Id: <20220322205613.250925-8-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References: <20220322205613.250925-1-mlevitsk@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Add tests for the case when L1 intercepts a hardware interrupt (an external interrupt, SMI, or NMI) but lets L2 control either EFLAGS.IF or GIF.

Signed-off-by: Maxim Levitsky
---
 x86/svm.h       |  11 +++
 x86/svm_tests.c | 194 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 205 insertions(+)

diff --git a/x86/svm.h b/x86/svm.h
index 58b9410..df1b1ac 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -426,6 +426,17 @@ void test_set_guest(test_guest_func func);
 extern struct vmcb *vmcb;
 extern struct svm_test svm_tests[];
 
+static inline void stgi(void)
+{
+	asm volatile ("stgi");
+}
+
+static
inline void clgi(void)
+{
+	asm volatile ("clgi");
+}
+
+
 #define SAVE_GPR_C \
 	"xchg %%rbx, regs+0x8\n\t" \

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index b2ba283..ef8b5ee 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3312,6 +3312,195 @@ static void svm_lbrv_nested_test2(void)
 	check_lbr(&host_branch4_from, &host_branch4_to);
 }
 
+
+// Test that a nested guest which enables INTR interception, but does not
+// enable virtual interrupt masking, works correctly.
+
+static volatile int dummy_isr_received;
+static void dummy_isr(isr_regs_t *regs)
+{
+	dummy_isr_received++;
+	eoi();
+}
+
+
+static volatile int nmi_received;
+static void dummy_nmi_handler(struct ex_regs *regs)
+{
+	nmi_received++;
+}
+
+
+static void svm_intr_intercept_mix_run_guest(volatile int *counter, int expected_vmexit)
+{
+	if (counter)
+		*counter = 0;
+
+	sti();			// host IF value should not matter
+	clgi();			// vmrun will set GIF back to 1
+
+	svm_vmrun();
+
+	if (counter)
+		report(!*counter, "No interrupt expected");
+
+	stgi();
+
+	if (counter)
+		report(*counter == 1, "Interrupt is expected");
+
+	report(vmcb->control.exit_code == expected_vmexit, "Test expected VM exit");
+	report(vmcb->save.rflags & X86_EFLAGS_IF, "Guest should have EFLAGS.IF set now");
+	cli();
+}
+
+
+// subtest: test that enabling EFLAGS.IF is enough to trigger an interrupt
+static void svm_intr_intercept_mix_if_guest(struct svm_test *test)
+{
+	asm volatile("nop;nop;nop;nop");
+	report(!dummy_isr_received, "No interrupt expected");
+	sti();
+	asm volatile("nop");
+	report(0, "must not reach here");
+}
+
+static void svm_intr_intercept_mix_if(void)
+{
+	// make a physical interrupt pending
+	handle_irq(0x55, dummy_isr);
+
+	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vmcb->save.rflags &= ~X86_EFLAGS_IF;
+
+	test_set_guest(svm_intr_intercept_mix_if_guest);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
+	svm_intr_intercept_mix_run_guest(&dummy_isr_received, SVM_EXIT_INTR);
+}
+
+
+// subtest: test that a clever guest can trigger an interrupt by setting GIF
+// if GIF is not intercepted
+static void svm_intr_intercept_mix_gif_guest(struct svm_test *test)
+{
+	asm volatile("nop;nop;nop;nop");
+	report(!dummy_isr_received, "No interrupt expected");
+
+	// clear GIF and enable IF
+	// that should still not cause a VM exit
+	clgi();
+	sti();
+	asm volatile("nop");
+	report(!dummy_isr_received, "No interrupt expected");
+
+	stgi();
+	asm volatile("nop");
+	report(0, "must not reach here");
+}
+
+static void svm_intr_intercept_mix_gif(void)
+{
+	handle_irq(0x55, dummy_isr);
+
+	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vmcb->save.rflags &= ~X86_EFLAGS_IF;
+
+	test_set_guest(svm_intr_intercept_mix_gif_guest);
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
+	svm_intr_intercept_mix_run_guest(&dummy_isr_received, SVM_EXIT_INTR);
+}
+
+
+// subtest: test that a clever guest can trigger an interrupt by setting GIF
+// if GIF is not intercepted and the interrupt arrives after the guest
+// has started running
+static void svm_intr_intercept_mix_gif_guest2(struct svm_test *test)
+{
+	asm volatile("nop;nop;nop;nop");
+	report(!dummy_isr_received, "No interrupt expected");
+
+	clgi();
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_FIXED | 0x55, 0);
+	report(!dummy_isr_received, "No interrupt expected");
+
+	stgi();
+	asm volatile("nop");
+	report(0, "must not reach here");
+}
+
+static void svm_intr_intercept_mix_gif2(void)
+{
+	handle_irq(0x55, dummy_isr);
+
+	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vmcb->save.rflags |= X86_EFLAGS_IF;
+
+	test_set_guest(svm_intr_intercept_mix_gif_guest2);
+	svm_intr_intercept_mix_run_guest(&dummy_isr_received, SVM_EXIT_INTR);
+}
+
+
+// subtest: test that a pending NMI will be handled when the guest enables GIF
+static void svm_intr_intercept_mix_nmi_guest(struct svm_test *test)
+{
+	asm volatile("nop;nop;nop;nop");
+	report(!nmi_received, "No NMI expected");
+	cli();			// should have no effect
+
+	clgi();
+	asm volatile("nop");
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI, 0);
+	sti();			// should have no effect
+	asm volatile("nop");
+	report(!nmi_received, "No NMI expected");
+
+	stgi();
+	asm volatile("nop");
+	report(0, "must not reach here");
+}
+
+static void svm_intr_intercept_mix_nmi(void)
+{
+	handle_exception(2, dummy_nmi_handler);
+
+	vmcb->control.intercept |= (1 << INTERCEPT_NMI);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	vmcb->save.rflags |= X86_EFLAGS_IF;
+
+	test_set_guest(svm_intr_intercept_mix_nmi_guest);
+	svm_intr_intercept_mix_run_guest(&nmi_received, SVM_EXIT_NMI);
+}
+
+// test that a pending SMI will be handled when the guest enables GIF
+// TODO: we can't really count #SMIs, so just test that the guest doesn't hang
+// and VMexits on SMI
+static void svm_intr_intercept_mix_smi_guest(struct svm_test *test)
+{
+	asm volatile("nop;nop;nop;nop");
+
+	clgi();
+	asm volatile("nop");
+	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_SMI, 0);
+	sti();			// should have no effect
+	asm volatile("nop");
+	stgi();
+	asm volatile("nop");
+	report(0, "must not reach here");
+}
+
+static void svm_intr_intercept_mix_smi(void)
+{
+	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+	test_set_guest(svm_intr_intercept_mix_smi_guest);
+	svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI);
+}
+
 struct svm_test svm_tests[] = {
 	{ "null", default_supported, default_prepare,
 	  default_prepare_gif_clear, null_test,
@@ -3439,5 +3628,10 @@ struct svm_test svm_tests[] = {
 	TEST(svm_lbrv_test2),
 	TEST(svm_lbrv_nested_test1),
 	TEST(svm_lbrv_nested_test2),
+	TEST(svm_intr_intercept_mix_if),
+	TEST(svm_intr_intercept_mix_gif),
+	TEST(svm_intr_intercept_mix_gif2),
+	
TEST(svm_intr_intercept_mix_nmi),
+	TEST(svm_intr_intercept_mix_smi),
 	{ NULL, NULL, NULL, NULL, NULL, NULL, NULL }
 };

From patchwork Tue Mar 22 20:56:12 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12789021
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 8/9] svm: add test for nested tsc scaling
Date: Tue, 22 Mar 2022 22:56:12 +0200
Message-Id: <20220322205613.250925-9-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References: <20220322205613.250925-1-mlevitsk@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Signed-off-by: Maxim Levitsky
---
 lib/x86/msr.h       |  1 +
 lib/x86/processor.h |  2 ++
 x86/svm.c           |  5 ++++
 x86/svm.h           |  3 +++
 x86/svm_tests.c     | 65 +++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 76 insertions(+)

diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index 5001b16..fa1c0c8 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -431,6 +431,7 @@
 
 /* AMD-V MSRs */
 
+#define MSR_AMD64_TSC_RATIO	0xc0000104
 #define MSR_VM_CR		0xc0010114
 #define MSR_VM_IGNNE		0xc0010115
 #define MSR_VM_HSAVE_PA		0xc0010117

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index b01c3d0..b3fe924 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -189,9 +189,11 @@ static inline bool is_intel(void)
 #define X86_FEATURE_NPT		(CPUID(0x8000000A, 0, EDX, 0))
 #define X86_FEATURE_LBRV	(CPUID(0x8000000A, 0, EDX, 1))
 #define X86_FEATURE_NRIPS	(CPUID(0x8000000A, 0, EDX, 3))
+#define X86_FEATURE_TSCRATEMSR	(CPUID(0x8000000A, 0, EDX, 4))
 #define X86_FEATURE_VGIF	(CPUID(0x8000000A, 0, EDX, 16))
 
+
 static inline bool this_cpu_has(u64 feature)
 {
 	u32
input_eax = feature >> 32;

diff --git a/x86/svm.c b/x86/svm.c
index bb58d7c..460fc59 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -75,6 +75,11 @@ bool lbrv_supported(void)
 	return this_cpu_has(X86_FEATURE_LBRV);
 }
 
+bool tsc_scale_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
+}
+
 void default_prepare(struct svm_test *test)
 {
 	vmcb_ident(vmcb);

diff --git a/x86/svm.h b/x86/svm.h
index df1b1ac..d92c4f2 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -147,6 +147,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK 0x0010ULL
 
+#define TSC_RATIO_DEFAULT 0x0100000000ULL
+
 struct __attribute__ ((__packed__)) vmcb_seg {
 	u16 selector;
 	u16 attrib;
@@ -408,6 +410,7 @@ bool smp_supported(void);
 bool default_supported(void);
 bool vgif_supported(void);
 bool lbrv_supported(void);
+bool tsc_scale_supported(void);
 void default_prepare(struct svm_test *test);
 void default_prepare_gif_clear(struct svm_test *test);
 bool default_finished(struct svm_test *test);

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index ef8b5ee..e7bd788 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -918,6 +918,70 @@ static bool tsc_adjust_check(struct svm_test *test)
 	return ok && adjust <= -2 * TSC_ADJUST_VALUE;
 }
 
+
+static u64 guest_tsc_delay_value;
+/* number of bits to shift the TSC right for a stable result */
+#define TSC_SHIFT 24
+#define TSC_SCALE_ITERATIONS 10
+
+static void svm_tsc_scale_guest(struct svm_test *test)
+{
+	u64 start_tsc = rdtsc();
+
+	while (rdtsc() - start_tsc < guest_tsc_delay_value)
+		cpu_relax();
+}
+
+static void svm_tsc_scale_run_testcase(u64 duration,
+		double tsc_scale, u64 tsc_offset)
+{
+	u64 start_tsc, actual_duration;
+
+	guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale;
+
+	test_set_guest(svm_tsc_scale_guest);
+	vmcb->control.tsc_offset = tsc_offset;
+	wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32)));
+
+	start_tsc = rdtsc();
+
+	if (svm_vmrun() !=
SVM_EXIT_VMMCALL)
+		report_fail("unexpected vm exit code 0x%x", vmcb->control.exit_code);
+
+	actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT;
+
+	report(duration == actual_duration, "tsc delay (expected: %lu, actual: %lu)",
+	       duration, actual_duration);
+}
+
+static void svm_tsc_scale_test(void)
+{
+	int i;
+
+	if (!tsc_scale_supported()) {
+		report_skip("TSC scale not supported in the guest");
+		return;
+	}
+
+	report(rdmsr(MSR_AMD64_TSC_RATIO) == TSC_RATIO_DEFAULT,
+	       "initial TSC scale ratio");
+
+	for (i = 0 ; i < TSC_SCALE_ITERATIONS; i++) {
+
+		double tsc_scale = (double)(rdrand() % 100 + 1) / 10;
+		int duration = rdrand() % 50 + 1;
+		u64 tsc_offset = rdrand();
+
+		report_info("duration=%d, tsc_scale=%d, tsc_offset=%ld",
+			    duration, (int)(tsc_scale * 100), tsc_offset);
+
+		svm_tsc_scale_run_testcase(duration, tsc_scale, tsc_offset);
+	}
+
+	svm_tsc_scale_run_testcase(50, 255, rdrand());
+	svm_tsc_scale_run_testcase(50, 0.0001, rdrand());
+}
+
 static void latency_prepare(struct svm_test *test)
 {
 	default_prepare(test);
@@ -3633,5 +3697,6 @@ struct svm_test svm_tests[] = {
 	TEST(svm_intr_intercept_mix_gif2),
 	TEST(svm_intr_intercept_mix_nmi),
 	TEST(svm_intr_intercept_mix_smi),
+	TEST(svm_tsc_scale_test),
 	{ NULL, NULL, NULL, NULL, NULL, NULL, NULL }
 };

From patchwork Tue Mar 22 20:56:13 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12789025
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Maxim Levitsky, Cathy Avery, Paolo Bonzini
Subject: [kvm-unit-tests PATCH 9/9] svm: add test for pause filter and threshold
Date: Tue, 22 Mar 2022 22:56:13 +0200
Message-Id: <20220322205613.250925-10-mlevitsk@redhat.com>
In-Reply-To: <20220322205613.250925-1-mlevitsk@redhat.com>
References:
<20220322205613.250925-10-mlevitsk@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Signed-off-by: Maxim Levitsky
---
 lib/x86/processor.h |  3 +-
 x86/svm.c           | 11 +++++++
 x86/svm.h           |  5 +++-
 x86/svm_tests.c     | 70 +++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  7 ++++-
 5 files changed, 93 insertions(+), 3 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index b3fe924..9a0dad6 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -190,10 +190,11 @@ static inline bool is_intel(void)
 #define X86_FEATURE_LBRV	(CPUID(0x8000000A, 0, EDX, 1))
 #define X86_FEATURE_NRIPS	(CPUID(0x8000000A, 0, EDX, 3))
 #define X86_FEATURE_TSCRATEMSR	(CPUID(0x8000000A, 0, EDX, 4))
+#define X86_FEATURE_PAUSEFILTER	(CPUID(0x8000000A, 0, EDX, 10))
+#define X86_FEATURE_PFTHRESHOLD	(CPUID(0x8000000A, 0, EDX, 12))
 #define X86_FEATURE_VGIF	(CPUID(0x8000000A, 0, EDX, 16))
 
-
 static inline bool this_cpu_has(u64 feature)
 {
 	u32 input_eax = feature >> 32;

diff --git a/x86/svm.c b/x86/svm.c
index 460fc59..f6896f0 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -80,6 +80,17 @@ bool tsc_scale_supported(void)
 	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
 }
 
+bool pause_filter_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
+}
+
+bool pause_threshold_supported(void)
+{
+	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
+}
+
+
 void default_prepare(struct svm_test *test)
 {
 	vmcb_ident(vmcb);

diff --git a/x86/svm.h b/x86/svm.h
index d92c4f2..e93822b 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -75,7 +75,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 	u16 intercept_dr_write;
 	u32 intercept_exceptions;
 	u64 intercept;
-	u8 reserved_1[42];
+	u8 reserved_1[40];
+	u16 pause_filter_thresh;
 	u16 pause_filter_count;
 	u64 iopm_base_pa;
 	u64 msrpm_base_pa;
@@ -411,6 +412,8 @@ bool default_supported(void);
 bool vgif_supported(void);
 bool lbrv_supported(void);
 bool tsc_scale_supported(void);
+bool pause_filter_supported(void);
+bool pause_threshold_supported(void);
 void default_prepare(struct svm_test *test);
 void default_prepare_gif_clear(struct svm_test *test);
 bool default_finished(struct svm_test *test);

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index e7bd788..6a9b03b 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3030,6 +3030,75 @@ static bool vgif_check(struct svm_test *test)
 	return get_test_stage(test) == 3;
 }
 
+
+static int pause_test_counter;
+static int wait_counter;
+
+static void pause_filter_test_guest_main(struct svm_test *test)
+{
+	int i;
+
+	for (i = 0 ; i < pause_test_counter ; i++)
+		pause();
+
+	if (!wait_counter)
+		return;
+
+	for (i = 0; i < wait_counter; i++)
+		;
+
+	for (i = 0 ; i < pause_test_counter ; i++)
+		pause();
+}
+
+static void pause_filter_run_test(int pause_iterations, int filter_value, int wait_iterations, int threshold)
+{
+	test_set_guest(pause_filter_test_guest_main);
+
+	pause_test_counter = pause_iterations;
+	wait_counter = wait_iterations;
+
+	vmcb->control.pause_filter_count = filter_value;
+	vmcb->control.pause_filter_thresh = threshold;
+	svm_vmrun();
+
+	if (filter_value <= pause_iterations || wait_iterations < threshold)
+		report(vmcb->control.exit_code == SVM_EXIT_PAUSE, "expected PAUSE vmexit");
+	else
+		report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "expected no PAUSE vmexit");
+}
+
+static void pause_filter_test(void)
+{
+	if (!pause_filter_supported()) {
+		report_skip("PAUSE filter not supported in the guest");
+		return;
+	}
+
+	vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
+
+	// filter count smaller than the pause count - expect a PAUSE vmexit
+	pause_filter_run_test(10, 9, 0, 0);
+
+	// filter count larger than the pause count - no PAUSE vmexit
+	pause_filter_run_test(20, 21, 0, 0);
+
+	if (pause_threshold_supported()) {
+		// filter count larger than the pause count, and a wait between the
+		// bursts long enough for the filter counter to reset - no PAUSE vmexit
+		pause_filter_run_test(20, 21, 1000, 10);
+
+		// filter count larger than the pause count, but a wait shorter than
+		// the threshold, so the filter counter doesn't reset - PAUSE vmexit
+		pause_filter_run_test(20, 21, 10, 1000);
+	} else {
+		report_skip("PAUSE threshold not supported in the guest");
+		return;
+	}
+}
+
+
 static int of_test_counter;
 
 static void guest_test_of_handler(struct ex_regs *r)
@@ -3698,5 +3767,6 @@ struct svm_test svm_tests[] = {
 	TEST(svm_intr_intercept_mix_nmi),
 	TEST(svm_intr_intercept_mix_smi),
 	TEST(svm_tsc_scale_test),
+	TEST(pause_filter_test),
 	{ NULL, NULL, NULL, NULL, NULL, NULL, NULL }
 };

diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 89ff949..c277088 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -238,7 +238,12 @@ arch = x86_64
 [svm]
 file = svm.flat
 smp = 2
-extra_params = -cpu max,+svm -m 4g
+extra_params = -cpu max,+svm -m 4g -append "-pause_filter_test"
+arch = x86_64
+
+[svm_pause_filter]
+file = svm.flat
+extra_params = -cpu max,+svm -overcommit cpu-pm=on -m 4g -append pause_filter_test
 arch = x86_64
 
 [taskswitch]