From patchwork Mon Dec 11 18:55:48 2023
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 13487936
Date: Mon, 11 Dec 2023 10:55:48 -0800
Message-ID: <20231211185552.3856862-2-jmattson@google.com>
In-Reply-To: <20231211185552.3856862-1-jmattson@google.com>
Subject: [kvm-unit-tests PATCH 1/5] nVMX: Enable x2APIC mode for virtual-interrupt delivery tests
From: Jim Mattson
To: seanjc@google.com, kvm@vger.kernel.org, pbonzini@redhat.com
Cc: Jim Mattson

Since "virtualize x2APIC mode" is enabled for these tests, call
enable_x2apic() so that the x2apic_ops function table will be installed.
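For context, the reason the ops table matters: with "virtualize x2APIC mode" enabled, the guest reaches its APIC through the x2APIC MSR range (0x800-0x8ff) rather than through the xAPIC MMIO page, so the library's register accessors have to be switched before the tests touch APIC registers. Below is a minimal sketch of the difference, using invented sketch_* names and uint32_t from <stdint.h>; the real library instead swaps in its x2apic_ops table when enable_x2apic() is called.

    /* xAPIC: registers are memory-mapped at the APIC base (0xfee00000 by default). */
    static inline uint32_t sketch_xapic_read(unsigned int reg)
    {
            return *(volatile uint32_t *)(0xfee00000UL + reg);
    }

    /* x2APIC: the same register lives at MSR 0x800 + (MMIO offset >> 4). */
    static inline uint32_t sketch_x2apic_read(unsigned int reg)
    {
            uint32_t lo, hi;

            asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(0x800 + (reg >> 4)));
            (void)hi;       /* only the ICR is a 64-bit register in x2APIC mode */
            return lo;
    }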
Signed-off-by: Jim Mattson
---
 x86/vmx_tests.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index c1540d396da8..e5ed79b7da4a 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -9305,6 +9305,7 @@ static void enable_vid(void)
 	assert(cpu_has_apicv());
+	enable_x2apic();
 	disable_intercept_for_x2apic_msrs();
 	virtual_apic_page = alloc_page();

From patchwork Mon Dec 11 18:55:49 2023
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 13487937
Date: Mon, 11 Dec 2023 10:55:49 -0800
Message-ID: <20231211185552.3856862-3-jmattson@google.com>
In-Reply-To: <20231211185552.3856862-1-jmattson@google.com>
Subject: [kvm-unit-tests PATCH 2/5] nVMX: test nested "virtual-interrupt delivery"
From: Jim Mattson
To: seanjc@google.com, kvm@vger.kernel.org, pbonzini@redhat.com
Cc: "Marc Orr (Google)", Oliver Upton, Jim Mattson

From: "Marc Orr (Google)"

Add test coverage for recognizing and delivering virtual interrupts via VMX's
"virtual-interrupt delivery" feature, in the following two scenarios: 1. There's a pending interrupt at VM-entry. 2. There's a pending interrupt during TPR virtualization. Signed-off-by: Marc Orr (Google) Co-developed-by: Oliver Upton Signed-off-by: Oliver Upton Co-developed-by: Jim Mattson Signed-off-by: Jim Mattson --- lib/x86/apic.h | 5 ++ x86/unittests.cfg | 2 +- x86/vmx_tests.c | 165 ++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 171 insertions(+), 1 deletion(-) diff --git a/lib/x86/apic.h b/lib/x86/apic.h index c389d40e169a..8df889b2d1e4 100644 --- a/lib/x86/apic.h +++ b/lib/x86/apic.h @@ -81,6 +81,11 @@ static inline bool apic_lvt_entry_supported(int idx) return GET_APIC_MAXLVT(apic_read(APIC_LVR)) >= idx; } +static inline u8 task_priority_class(u8 vector) +{ + return vector >> 4; +} + enum x2apic_reg_semantics { X2APIC_INVALID = 0, X2APIC_READABLE = BIT(0), diff --git a/x86/unittests.cfg b/x86/unittests.cfg index 3fe59449b650..dd086d9e2bf4 100644 --- a/x86/unittests.cfg +++ b/x86/unittests.cfg @@ -361,7 +361,7 @@ timeout = 10 [vmx_apicv_test] file = vmx.flat -extra_params = -cpu max,+vmx -append "apic_reg_virt_test virt_x2apic_mode_test" +extra_params = -cpu max,+vmx -append "apic_reg_virt_test virt_x2apic_mode_test vmx_basic_vid_test" arch = x86_64 groups = vmx timeout = 10 diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c index e5ed79b7da4a..0fb7e1466c50 100644 --- a/x86/vmx_tests.c +++ b/x86/vmx_tests.c @@ -10711,6 +10711,170 @@ static void vmx_exception_test(void) test_set_guest_finished(); } +enum Vid_op { + VID_OP_SET_ISR, + VID_OP_NOP, + VID_OP_SET_CR8, + VID_OP_TERMINATE, +}; + +struct vmx_basic_vid_test_guest_args { + enum Vid_op op; + u8 nr; + bool isr_fired; +} vmx_basic_vid_test_guest_args; + +static void vmx_vid_test_isr(isr_regs_t *regs) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + + args->isr_fired = true; + barrier(); + eoi(); +} + +static void vmx_basic_vid_test_guest(void) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + + sti_nop(); + for (;;) { + enum Vid_op op = args->op; + u8 nr = args->nr; + + switch (op) { + case VID_OP_TERMINATE: + return; + case VID_OP_SET_ISR: + handle_irq(nr, vmx_vid_test_isr); + break; + case VID_OP_SET_CR8: + write_cr8(nr); + break; + default: + break; + } + + vmcall(); + } +} + +/* + * Test virtual interrupt delivery (VID) at VM-entry or TPR virtualization + * + * Args: + * nr: vector under test + * tpr: task priority under test + * tpr_virt: If true, then test VID during TPR virtualization. Otherwise, + * test VID during VM-entry. + */ +static void test_basic_vid(u8 nr, u8 tpr, bool tpr_virt) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + bool isr_fired_want = + task_priority_class(nr) > task_priority_class(tpr); + u16 rvi_want = isr_fired_want ? 0 : nr; + u16 int_status; + + /* + * From the SDM: + * IF "interrupt-window exiting" is 0 AND + * RVI[7:4] > VPPR[7:4] (see Section 29.1.1 for definition of VPPR) + * THEN recognize a pending virtual interrupt; + * ELSE + * do not recognize a pending virtual interrupt; + * FI; + * + * Thus, VPPR dictates whether a virtual interrupt is recognized. + * However, PPR virtualization, which occurs before virtual interrupt + * delivery, sets VPPR to VTPR, when SVI is 0. 
+ */ + vmcs_write(GUEST_INT_STATUS, nr); + args->isr_fired = false; + if (tpr_virt) { + args->op = VID_OP_SET_CR8; + args->nr = task_priority_class(tpr); + set_vtpr(0xff); + } else { + args->op = VID_OP_NOP; + set_vtpr(tpr); + } + + enter_guest(); + skip_exit_vmcall(); + TEST_ASSERT_EQ(args->isr_fired, isr_fired_want); + int_status = vmcs_read(GUEST_INT_STATUS); + TEST_ASSERT_EQ(int_status, rvi_want); +} + +/* + * Test recognizing and delivering virtual interrupts via "Virtual-interrupt + * delivery" for two scenarios: + * 1. When there is a pending interrupt at VM-entry. + * 2. When there is a pending interrupt during TPR virtualization. + */ +static void vmx_basic_vid_test(void) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + u8 nr_class; + u16 nr; + + if (!cpu_has_apicv()) { + report_skip("%s : Not all required APICv bits supported", __func__); + return; + } + + enable_vid(); + test_set_guest(vmx_basic_vid_test_guest); + + /* + * kvm-unit-tests uses vector 32 for IPIs, so don't install a test ISR + * for that vector. + */ + for (nr = 0x21; nr < 0x100; nr++) { + vmcs_write(GUEST_INT_STATUS, 0); + args->op = VID_OP_SET_ISR; + args->nr = nr; + args->isr_fired = false; + enter_guest(); + skip_exit_vmcall(); + TEST_ASSERT(!args->isr_fired); + } + report(true, "Set ISR for vectors 33-255."); + + for (nr_class = 2; nr_class < 16; nr_class++) { + u8 nr_sub_class; + + for (nr_sub_class = 0; nr_sub_class < 16; nr_sub_class++) { + u16 tpr; + + nr = (nr_class << 4) | nr_sub_class; + + /* + * Don't test the reserved IPI vector, as the test ISR + * was not installed. + */ + if (nr == 0x20) + continue; + + for (tpr = 0; tpr < 256; tpr++) { + test_basic_vid(nr, tpr, /*tpr_virt=*/false); + test_basic_vid(nr, tpr, /*tpr_virt=*/true); + } + report(true, "TPR 0-255 for vector 0x%x.", nr); + } + } + + /* Terminate the guest */ + args->op = VID_OP_TERMINATE; + enter_guest(); + assert_exit_reason(VMX_VMCALL); +} + #define TEST(name) { #name, .v2 = name } /* name/init/guest_main/exit_handler/syscall_handler/guest_regs */ @@ -10765,6 +10929,7 @@ struct vmx_test vmx_tests[] = { TEST(vmx_hlt_with_rvi_test), TEST(apic_reg_virt_test), TEST(virt_x2apic_mode_test), + TEST(vmx_basic_vid_test), /* APIC pass-through tests */ TEST(vmx_apic_passthrough_test), TEST(vmx_apic_passthrough_thread_test), From patchwork Mon Dec 11 18:55:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jim Mattson X-Patchwork-Id: 13487938 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="gmWvikfE" Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 94190B8 for ; Mon, 11 Dec 2023 10:56:17 -0800 (PST) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5df5d595287so14717697b3.1 for ; Mon, 11 Dec 2023 10:56:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1702320977; x=1702925777; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=KkpDC5cK3MCsLQsdfJOqTSEJVLLYusM8q6HHUh7HS2Y=; b=gmWvikfEsTOxIWuao5oxGI4Xm1XVWLx39e74YjRd6HsS8WKu44Hu4/LFsdIX3r8OV/ +f9HMajODcilGd0b++k8MuiBMu+EkOOjHUqi2YNKWm509s56qp+OYwlfeVog6+fzgkH+ viPtyu2raIgo9glccvOByTUh4KIftntxuBIPVAsgyq/byEBUms4b2J/4zUWk6m5dgJxQ 
cDoXwR7/uRGxuV9vVdu3iLO2nFKDVF1216zDSB22aqrlRo1IQ5KkbjMhOyc2IBzsG1UU ZL9CYfLPA5PAequW+6HfILK+XgWR9hZWcIK9IQVRuJ6wbHhND/aR5rU00vVT/KQaxcqS fdyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702320977; x=1702925777; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=KkpDC5cK3MCsLQsdfJOqTSEJVLLYusM8q6HHUh7HS2Y=; b=WUx8mJ229NyH66jugwlPNVGyb9McFOEucK44KPYRAoYYbdqY2ZyurIjSavFp+hYXQe vroNO5YouF9CyILRjaEhqedc+VEmf9H5Gl632T/npYys+MABkGbaXoa6NxFXJB5IGchu qJbQCm05Cc3OPI5Xow+7rDb6kFT9rBFvfqqIpgWMjp8w4ryCzYzp2yGds4y/lxpsX+dk phv3TZFilz0ny+fVQE2nw5LO7yslSpZ5EoHxK+3vLrkt8QS9iUneqTzvYfVj8kwGYKn5 o7H7CUyRLBTH1ReMBnhfmkdU7ixbsGVDhwTc9sX9p/ZKXZS5Yg3kYG9LkxCsfQMiNsE/ Kn6g== X-Gm-Message-State: AOJu0YxUAfG7n8CsTv56H/SnSPTiIVsDoF4kF+N1WF6PSC/sYKb0yk0o FYwExYlRSF83gaUJp9LNmV2W9L5N4rARgw== X-Google-Smtp-Source: AGHT+IFXbQ965HW9sgG9wnyjhatwSKKxs44CcGMAEeE3lxoLQVxYgE0YBnkb+LA7v0SjhCGFaZkRzPmfrstH4Q== X-Received: from loggerhead.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:29a]) (user=jmattson job=sendgmr) by 2002:a05:690c:2901:b0:5e1:90b4:dd9b with SMTP id eg1-20020a05690c290100b005e190b4dd9bmr10863ywb.2.1702320976879; Mon, 11 Dec 2023 10:56:16 -0800 (PST) Date: Mon, 11 Dec 2023 10:55:50 -0800 In-Reply-To: <20231211185552.3856862-1-jmattson@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231211185552.3856862-1-jmattson@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231211185552.3856862-4-jmattson@google.com> Subject: [kvm-unit-tests PATCH 3/5] nVMX: test nested EOI virtualization From: Jim Mattson To: seanjc@google.com, kvm@vger.kernel.org, pbonzini@redhat.com Cc: "Marc Orr (Google)" , Oliver Upton , Jim Mattson From: "Marc Orr (Google)" Add a test for nested VMs that invoke EOI virtualization. Specifically, check that a pending low-priority interrupt, masked by a higher-priority interrupt, is scheduled via "virtual-interrupt delivery," after the higher-priority interrupt executes EOI. 
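To make the expected behavior concrete, the flow under test follows the SDM's EOI-virtualization and interrupt-evaluation steps: the guest's EOI clears the in-service bit indicated by SVI in the virtual ISR, PPR virtualization recomputes VPPR from VTPR and the new SVI, and if the highest vector still pending in the virtual IRR (RVI) is in a higher priority class than VPPR, the lower-priority interrupt is delivered next. Below is a condensed, self-contained sketch of that model; struct vapic_sketch, highest_vector(), and the field names are invented for illustration (the real vIRR/vISR live in the virtual-APIC page):

    struct vapic_sketch {
            unsigned char virr[32], visr[32];       /* 256-bit bitmaps */
            unsigned char vtpr, vppr;
    };

    static int highest_vector(const unsigned char *bitmap)
    {
            int i;

            for (i = 255; i >= 0; i--)
                    if (bitmap[i / 8] & (1 << (i % 8)))
                            return i;
            return 0;
    }

    static void sketch_virtual_eoi(struct vapic_sketch *v)
    {
            int done = highest_vector(v->visr);     /* SVI: vector being EOI'd */
            int svi, rvi;

            v->visr[done / 8] &= ~(1 << (done % 8));
            /* If the EOI-exit bitmap bit for 'done' were set, an EOI-induced
             * VM exit would occur here instead (the eoi_exit_induced cases). */
            svi = highest_vector(v->visr);
            /* PPR virtualization: VPPR is the higher of VTPR[7:4] and SVI[7:4]. */
            v->vppr = (v->vtpr & 0xf0) >= (svi & 0xf0) ? (v->vtpr & 0xf0) : (svi & 0xf0);

            rvi = highest_vector(v->virr);          /* highest pending vector */
            if ((rvi >> 4) > (v->vppr >> 4)) {
                    /* Recognize and deliver the masked low-priority interrupt:
                     * move it from vIRR to vISR and run its ISR, as VID does. */
            }
    }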
Signed-off-by: Marc Orr (Google) Co-developed-by: Oliver Upton Signed-off-by: Oliver Upton Co-developed-by: Jim Mattson Signed-off-by: Jim Mattson --- x86/unittests.cfg | 2 +- x86/vmx_tests.c | 161 ++++++++++++++++++++++++++++++++++++++-------- 2 files changed, 136 insertions(+), 27 deletions(-) diff --git a/x86/unittests.cfg b/x86/unittests.cfg index dd086d9e2bf4..f307168b0e01 100644 --- a/x86/unittests.cfg +++ b/x86/unittests.cfg @@ -361,7 +361,7 @@ timeout = 10 [vmx_apicv_test] file = vmx.flat -extra_params = -cpu max,+vmx -append "apic_reg_virt_test virt_x2apic_mode_test vmx_basic_vid_test" +extra_params = -cpu max,+vmx -append "apic_reg_virt_test virt_x2apic_mode_test vmx_basic_vid_test vmx_eoi_virt_test" arch = x86_64 groups = vmx timeout = 10 diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c index 0fb7e1466c50..ce480431bf58 100644 --- a/x86/vmx_tests.c +++ b/x86/vmx_tests.c @@ -60,6 +60,11 @@ static inline void vmcall(void) asm volatile("vmcall"); } +static u32 *get_vapic_page(void) +{ + return (u32 *)phys_to_virt(vmcs_read(APIC_VIRT_ADDR)); +} + static void basic_guest_main(void) { report_pass("Basic VMX test"); @@ -10721,15 +10726,36 @@ enum Vid_op { struct vmx_basic_vid_test_guest_args { enum Vid_op op; u8 nr; - bool isr_fired; + u32 isr_exec_cnt; } vmx_basic_vid_test_guest_args; +/* + * From the SDM, Bit x of the VIRR is + * at bit position (x & 1FH) + * at offset (200H | ((x & E0H) >> 1)). + */ +static void set_virr_bit(volatile u32 *virtual_apic_page, u8 nr) +{ + u32 page_offset = (0x200 | ((nr & 0xE0) >> 1)) / sizeof(u32); + u32 mask = 1 << (nr & 0x1f); + + virtual_apic_page[page_offset] |= mask; +} + +static bool get_virr_bit(volatile u32 *virtual_apic_page, u8 nr) +{ + u32 page_offset = (0x200 | ((nr & 0xE0) >> 1)) / sizeof(u32); + u32 mask = 1 << (nr & 0x1f); + + return virtual_apic_page[page_offset] & mask; +} + static void vmx_vid_test_isr(isr_regs_t *regs) { volatile struct vmx_basic_vid_test_guest_args *args = &vmx_basic_vid_test_guest_args; - args->isr_fired = true; + args->isr_exec_cnt++; barrier(); eoi(); } @@ -10761,6 +10787,27 @@ static void vmx_basic_vid_test_guest(void) } } +static void set_isrs_for_vmx_basic_vid_test(void) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + u16 nr; + + /* + * kvm-unit-tests uses vector 32 for IPIs, so don't install a test ISR + * for that vector. + */ + for (nr = 0x21; nr < 0x100; nr++) { + vmcs_write(GUEST_INT_STATUS, 0); + args->op = VID_OP_SET_ISR; + args->nr = nr; + args->isr_exec_cnt = 0; + enter_guest(); + skip_exit_vmcall(); + } + report(true, "Set ISR for vectors 33-255."); +} + /* * Test virtual interrupt delivery (VID) at VM-entry or TPR virtualization * @@ -10770,13 +10817,12 @@ static void vmx_basic_vid_test_guest(void) * tpr_virt: If true, then test VID during TPR virtualization. Otherwise, * test VID during VM-entry. */ -static void test_basic_vid(u8 nr, u8 tpr, bool tpr_virt) +static void test_basic_vid(u8 nr, u8 tpr, bool tpr_virt, u32 isr_exec_cnt_want, + bool eoi_exit_induced) { volatile struct vmx_basic_vid_test_guest_args *args = &vmx_basic_vid_test_guest_args; - bool isr_fired_want = - task_priority_class(nr) > task_priority_class(tpr); - u16 rvi_want = isr_fired_want ? 0 : nr; + u16 rvi_want = isr_exec_cnt_want ? 0 : nr; u16 int_status; /* @@ -10793,7 +10839,7 @@ static void test_basic_vid(u8 nr, u8 tpr, bool tpr_virt) * delivery, sets VPPR to VTPR, when SVI is 0. 
*/ vmcs_write(GUEST_INT_STATUS, nr); - args->isr_fired = false; + args->isr_exec_cnt = 0; if (tpr_virt) { args->op = VID_OP_SET_CR8; args->nr = task_priority_class(tpr); @@ -10804,8 +10850,18 @@ static void test_basic_vid(u8 nr, u8 tpr, bool tpr_virt) } enter_guest(); + if (eoi_exit_induced) { + u32 exit_cnt; + + assert_exit_reason(VMX_EOI_INDUCED); + for (exit_cnt = 1; exit_cnt < isr_exec_cnt_want; exit_cnt++) { + enter_guest(); + assert_exit_reason(VMX_EOI_INDUCED); + } + enter_guest(); + } skip_exit_vmcall(); - TEST_ASSERT_EQ(args->isr_fired, isr_fired_want); + TEST_ASSERT_EQ(args->isr_exec_cnt, isr_exec_cnt_want); int_status = vmcs_read(GUEST_INT_STATUS); TEST_ASSERT_EQ(int_status, rvi_want); } @@ -10821,7 +10877,6 @@ static void vmx_basic_vid_test(void) volatile struct vmx_basic_vid_test_guest_args *args = &vmx_basic_vid_test_guest_args; u8 nr_class; - u16 nr; if (!cpu_has_apicv()) { report_skip("%s : Not all required APICv bits supported", __func__); @@ -10830,23 +10885,10 @@ static void vmx_basic_vid_test(void) enable_vid(); test_set_guest(vmx_basic_vid_test_guest); - - /* - * kvm-unit-tests uses vector 32 for IPIs, so don't install a test ISR - * for that vector. - */ - for (nr = 0x21; nr < 0x100; nr++) { - vmcs_write(GUEST_INT_STATUS, 0); - args->op = VID_OP_SET_ISR; - args->nr = nr; - args->isr_fired = false; - enter_guest(); - skip_exit_vmcall(); - TEST_ASSERT(!args->isr_fired); - } - report(true, "Set ISR for vectors 33-255."); + set_isrs_for_vmx_basic_vid_test(); for (nr_class = 2; nr_class < 16; nr_class++) { + u16 nr; u8 nr_sub_class; for (nr_sub_class = 0; nr_sub_class < 16; nr_sub_class++) { @@ -10862,8 +10904,16 @@ static void vmx_basic_vid_test(void) continue; for (tpr = 0; tpr < 256; tpr++) { - test_basic_vid(nr, tpr, /*tpr_virt=*/false); - test_basic_vid(nr, tpr, /*tpr_virt=*/true); + u32 isr_exec_cnt_want = + task_priority_class(nr) > + task_priority_class(tpr) ? 1 : 0; + + test_basic_vid(nr, tpr, /*tpr_virt=*/false, + isr_exec_cnt_want, + /*eoi_exit_induced=*/false); + test_basic_vid(nr, tpr, /*tpr_virt=*/true, + isr_exec_cnt_want, + /*eoi_exit_induced=*/false); } report(true, "TPR 0-255 for vector 0x%x.", nr); } @@ -10875,6 +10925,64 @@ static void vmx_basic_vid_test(void) assert_exit_reason(VMX_VMCALL); } +static void test_eoi_virt(u8 nr, u8 lo_pri_nr, bool eoi_exit_induced) +{ + u32 *virtual_apic_page = get_vapic_page(); + + set_virr_bit(virtual_apic_page, lo_pri_nr); + test_basic_vid(nr, /*tpr=*/0, /*tpr_virt=*/false, + /*isr_exec_cnt_want=*/2, eoi_exit_induced); + TEST_ASSERT(!get_virr_bit(virtual_apic_page, lo_pri_nr)); + TEST_ASSERT(!get_virr_bit(virtual_apic_page, nr)); +} + +static void vmx_eoi_virt_test(void) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + u16 nr; + u16 lo_pri_nr; + + if (!cpu_has_apicv()) { + report_skip("%s : Not all required APICv bits supported", __func__); + return; + } + + enable_vid(); /* Note, enable_vid sets APIC_VIRT_ADDR field in VMCS. */ + test_set_guest(vmx_basic_vid_test_guest); + set_isrs_for_vmx_basic_vid_test(); + + /* Now test EOI virtualization without induced EOI exits. */ + for (nr = 0x22; nr < 0x100; nr++) { + for (lo_pri_nr = 0x21; lo_pri_nr < nr; lo_pri_nr++) + test_eoi_virt(nr, lo_pri_nr, + /*eoi_exit_induced=*/false); + + report(true, "Low priority nrs 0x21-0x%x for nr 0x%x.", + nr - 1, nr); + } + + /* Finally, test EOI virtualization with induced EOI exits. 
*/ + vmcs_write(EOI_EXIT_BITMAP0, GENMASK_ULL(63, 0)); + vmcs_write(EOI_EXIT_BITMAP1, GENMASK_ULL(63, 0)); + vmcs_write(EOI_EXIT_BITMAP2, GENMASK_ULL(63, 0)); + vmcs_write(EOI_EXIT_BITMAP3, GENMASK_ULL(63, 0)); + for (nr = 0x22; nr < 0x100; nr++) { + for (lo_pri_nr = 0x21; lo_pri_nr < nr; lo_pri_nr++) + test_eoi_virt(nr, lo_pri_nr, + /*eoi_exit_induced=*/true); + + report(true, + "Low priority nrs 0x21-0x%x for nr 0x%x, with induced EOI exits.", + nr - 1, nr); + } + + /* Terminate the guest */ + args->op = VID_OP_TERMINATE; + enter_guest(); + assert_exit_reason(VMX_VMCALL); +} + #define TEST(name) { #name, .v2 = name } /* name/init/guest_main/exit_handler/syscall_handler/guest_regs */ @@ -10930,6 +11038,7 @@ struct vmx_test vmx_tests[] = { TEST(apic_reg_virt_test), TEST(virt_x2apic_mode_test), TEST(vmx_basic_vid_test), + TEST(vmx_eoi_virt_test), /* APIC pass-through tests */ TEST(vmx_apic_passthrough_test), TEST(vmx_apic_passthrough_thread_test), From patchwork Mon Dec 11 18:55:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jim Mattson X-Patchwork-Id: 13487939 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="OlDU8Www" Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6FBE5B4 for ; Mon, 11 Dec 2023 10:56:19 -0800 (PST) Received: by mail-pl1-x649.google.com with SMTP id d9443c01a7336-1d09a64eaebso42352875ad.3 for ; Mon, 11 Dec 2023 10:56:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1702320979; x=1702925779; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=VkVfXdG+QchtPbEMXByOmfoGEr5gxV3bgAp3Ue9WKBk=; b=OlDU8WwwYirlPxIPiUALjFzGr74ysf3deCI8s6KIV14bvMk7HvEIDopw+YfUiPl+TJ 4f2Wd2vgjic5a9kbtfW9kV/Zjd5apkffWIpNkLzrzbcQoxdg+mYkc7Vu7BvzVane9Hge TNwFTUMv1+npugwjNkmIxv+GfORQAXCRldgxaMIm3rfgfNJyDa1MfOoMgi64CfNvq3ht UC4qaXRqDe3kV5TKFUhbIi2AizckghXciHW2RcPhCq2c18fy4+GVIGGprNmfTG4S1is7 nhJ/GBejxBAoa+BcXF1Fwr6fZZ4gDQ+lBzFQa2Fo94D4vSLakYedtEU4871C4dytnj3+ vEjg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702320979; x=1702925779; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=VkVfXdG+QchtPbEMXByOmfoGEr5gxV3bgAp3Ue9WKBk=; b=UuECLm3YQWyNsWHhs4aXYGt5+SdQc+Zp95d7OGaf7Yhm2iFelY90TR4nemI/0GOVxf 2nk4/nvMRC0G5SmyHJw6BFz6GpKTUxJUfwFpf57Z3grkkK35jkjX+DISfl2BZ+oCd39E wSHMxZiGrPEJMEWFpYpRoxeu1EmPcHKKyjU4qloN/YJ2IJgyjgfXgOddO0BS1QUaHods MeNgVrx7XPVnEu7OzeFFPZK8J3Smau334VKwcsjBCQubR7pq3fLzEJS6MVxoezCUpBnR R1pvz7KMHi1YaDQYxNysZzrP9cFJvTeBpCfbmHpxgecGwzruph67vCXMvKuEC4m97glH BkiA== X-Gm-Message-State: AOJu0Yy4bRO0DO5lma/t8dJ957JODIH/CZ20pK7HSWMJcWKUidt7HZf/ q6SOPoPWjWZJ+TvBj0d7thyZLvWJSIMSUg== X-Google-Smtp-Source: AGHT+IFL4UQzOvYAyDIIpaw0aAqRTom59VMXLtIlj+xaSPYrZIOI0q5ojn0k3THzW2KqLplKG0RsLxjFEcNI7w== X-Received: from loggerhead.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:29a]) (user=jmattson job=sendgmr) by 2002:a17:902:cecf:b0:1d0:5878:d4e4 with SMTP id d15-20020a170902cecf00b001d05878d4e4mr36544plg.3.1702320978937; Mon, 11 Dec 2023 10:56:18 -0800 (PST) Date: Mon, 11 Dec 2023 10:55:51 -0800 In-Reply-To: <20231211185552.3856862-1-jmattson@google.com> Precedence: bulk X-Mailing-List: 
kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231211185552.3856862-1-jmattson@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231211185552.3856862-5-jmattson@google.com> Subject: [kvm-unit-tests PATCH 4/5] nVMX: add self-IPI tests to vmx_basic_vid_test From: Jim Mattson To: seanjc@google.com, kvm@vger.kernel.org, pbonzini@redhat.com Cc: "Marc Orr (Google)" , Jim Mattson From: "Marc Orr (Google)" Extend the VMX "virtual-interrupt delivery test", vmx_basic_vid_test, to verify that virtual-interrupt delivery is triggered by a self-IPI in L2. Signed-off-by: Marc Orr (Google) Co-developed-by: Jim Mattson Signed-off-by: Jim Mattson --- x86/vmx_tests.c | 35 +++++++++++++++++++++++++---------- 1 file changed, 25 insertions(+), 10 deletions(-) diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c index ce480431bf58..a26f77e92f72 100644 --- a/x86/vmx_tests.c +++ b/x86/vmx_tests.c @@ -10720,6 +10720,7 @@ enum Vid_op { VID_OP_SET_ISR, VID_OP_NOP, VID_OP_SET_CR8, + VID_OP_SELF_IPI, VID_OP_TERMINATE, }; @@ -10779,6 +10780,9 @@ static void vmx_basic_vid_test_guest(void) case VID_OP_SET_CR8: write_cr8(nr); break; + case VID_OP_SELF_IPI: + vmx_x2apic_write(APIC_SELF_IPI, nr); + break; default: break; } @@ -10817,7 +10821,7 @@ static void set_isrs_for_vmx_basic_vid_test(void) * tpr_virt: If true, then test VID during TPR virtualization. Otherwise, * test VID during VM-entry. */ -static void test_basic_vid(u8 nr, u8 tpr, bool tpr_virt, u32 isr_exec_cnt_want, +static void test_basic_vid(u8 nr, u8 tpr, enum Vid_op op, u32 isr_exec_cnt_want, bool eoi_exit_induced) { volatile struct vmx_basic_vid_test_guest_args *args = @@ -10838,15 +10842,23 @@ static void test_basic_vid(u8 nr, u8 tpr, bool tpr_virt, u32 isr_exec_cnt_want, * However, PPR virtualization, which occurs before virtual interrupt * delivery, sets VPPR to VTPR, when SVI is 0. */ - vmcs_write(GUEST_INT_STATUS, nr); args->isr_exec_cnt = 0; - if (tpr_virt) { - args->op = VID_OP_SET_CR8; + args->op = op; + switch (op) { + case VID_OP_SELF_IPI: + vmcs_write(GUEST_INT_STATUS, 0); + args->nr = nr; + set_vtpr(0); + break; + case VID_OP_SET_CR8: + vmcs_write(GUEST_INT_STATUS, nr); args->nr = task_priority_class(tpr); set_vtpr(0xff); - } else { - args->op = VID_OP_NOP; + break; + default: + vmcs_write(GUEST_INT_STATUS, nr); set_vtpr(tpr); + break; } enter_guest(); @@ -10903,15 +10915,18 @@ static void vmx_basic_vid_test(void) if (nr == 0x20) continue; + test_basic_vid(nr, /*tpr=*/0, VID_OP_SELF_IPI, + /*isr_exec_cnt_want=*/1, + /*eoi_exit_induced=*/false); for (tpr = 0; tpr < 256; tpr++) { u32 isr_exec_cnt_want = task_priority_class(nr) > task_priority_class(tpr) ? 
1 : 0; - test_basic_vid(nr, tpr, /*tpr_virt=*/false, + test_basic_vid(nr, tpr, VID_OP_NOP, isr_exec_cnt_want, /*eoi_exit_induced=*/false); - test_basic_vid(nr, tpr, /*tpr_virt=*/true, + test_basic_vid(nr, tpr, VID_OP_SET_CR8, isr_exec_cnt_want, /*eoi_exit_induced=*/false); } @@ -10930,8 +10945,8 @@ static void test_eoi_virt(u8 nr, u8 lo_pri_nr, bool eoi_exit_induced) u32 *virtual_apic_page = get_vapic_page(); set_virr_bit(virtual_apic_page, lo_pri_nr); - test_basic_vid(nr, /*tpr=*/0, /*tpr_virt=*/false, - /*isr_exec_cnt_want=*/2, eoi_exit_induced); + test_basic_vid(nr, /*tpr=*/0, VID_OP_NOP, /*isr_exec_cnt_want=*/2, + eoi_exit_induced); TEST_ASSERT(!get_virr_bit(virtual_apic_page, lo_pri_nr)); TEST_ASSERT(!get_virr_bit(virtual_apic_page, nr)); } From patchwork Mon Dec 11 18:55:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jim Mattson X-Patchwork-Id: 13487940 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="ay9WgdUX" Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F4DFB8 for ; Mon, 11 Dec 2023 10:56:21 -0800 (PST) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5de8c2081d1so39045227b3.1 for ; Mon, 11 Dec 2023 10:56:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1702320980; x=1702925780; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=Po2NuzeRSr6e6XvJUH45J91ZQBQs42rgxiRE1lTRmPo=; b=ay9WgdUXrLyoG+M3jfRPqmx1nRYxndPXXLP1q2CO9IWAXyGVomyMhUyBZo+UgtpzrL LG/Yla83y7OhdVGnMiEBwhItJD8/akhwkueH83YOZ6ZigBAQTJ07+oqpZ7ZkPqnFbhW+ b7EFI7dOaa2SdH10bYTjyQKouCmJQsMU5LG/p1jgELqn1qioUW1+L/fI9nPA9OVpvBzU nREzjDZ5Sq66TXV/aoDXw9JJZf7HjC3W4zlEznN0FJtsx21wc68XN3vYzLq9smgRtPa3 0mE/r4cgWQ3b9oRG139oMlJaRh1PBJ1m7q5QHuAqJY9IHxfPRkr5fHUFihvRzcuFcCl8 0vHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702320980; x=1702925780; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=Po2NuzeRSr6e6XvJUH45J91ZQBQs42rgxiRE1lTRmPo=; b=QdsFW39iqJuMO5t5EnLILcZ5nN3hzZk4vJhWYaShSwJCzxwVc1VxMC2SNeFNXixCgb 3IgL5pYZ4+hlWshZtcMKYity7Bw/jBuSD/fkc9aBV/WTHLB/OyNjFmw+/S1cvwEcs49K PrKN1a4n/gfxjEDoT4ik/f0viOva08kr2NIN5CUt1yi2+f42IXlpQA4RW99eYkWgqHIN fPQMtjRh53I7XPPL4f5oK7TYyKzTmh3KOBLOzyY15hBCfYYccyMnuOHQRTsNQxP7bJiP CUYt0psfAMH4nG6M2+ICeaBu4omnEZYyyyFIVYeDxWGpFlheUhcVz05tomoFAYkmG7kA PA1g== X-Gm-Message-State: AOJu0YwXYMHGr50EC77NbzL0b+2to7dYHDnCENQXwsCtOc0sICA2Qawf 7dYm1qgrlph5RgwC8SvcfSzTNTdm/ntmpA== X-Google-Smtp-Source: AGHT+IGwoNxDxJen0x0iEmE/EAOpNBvnKl0zodWypqrdQw7SMOVqC4Y0UwsgY+IsGJ0sQ0pBpg5+/v+okoPmKA== X-Received: from loggerhead.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:29a]) (user=jmattson job=sendgmr) by 2002:a05:6902:dc5:b0:db4:5d34:fa5 with SMTP id de5-20020a0569020dc500b00db45d340fa5mr40161ybb.0.1702320980637; Mon, 11 Dec 2023 10:56:20 -0800 (PST) Date: Mon, 11 Dec 2023 10:55:52 -0800 In-Reply-To: <20231211185552.3856862-1-jmattson@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20231211185552.3856862-1-jmattson@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: 
<20231211185552.3856862-6-jmattson@google.com> Subject: [kvm-unit-tests PATCH 5/5] nVMX: add test for posted interrupts From: Jim Mattson To: seanjc@google.com, kvm@vger.kernel.org, pbonzini@redhat.com Cc: Oliver Upton , Jim Mattson From: Oliver Upton Test virtual posted interrupts under the following conditions: - vTPR[7:4] >= VECTOR[7:4]: Expect the L2 interrupt to be blocked. The bit corresponding to the posted interrupt should be set in L2's vIRR. Test with a running guest. - vTPR[7:4] < VECTOR[7:4]: Expect the interrupt to be delivered and the ISR to execute once. Test with a running and halted guest. Signed-off-by: Oliver Upton Co-developed-by: Jim Mattson Signed-off-by: Jim Mattson --- lib/x86/asm/bitops.h | 8 +++ x86/unittests.cfg | 8 +++ x86/vmx_tests.c | 133 +++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 149 insertions(+) diff --git a/lib/x86/asm/bitops.h b/lib/x86/asm/bitops.h index 13a25ec9853d..54ec9c424cd6 100644 --- a/lib/x86/asm/bitops.h +++ b/lib/x86/asm/bitops.h @@ -13,4 +13,12 @@ #define HAVE_BUILTIN_FLS 1 +static inline void test_and_set_bit(long nr, unsigned long *addr) +{ + asm volatile("lock; bts %1,%0" + : "+m" (*addr) + : "Ir" (nr) + : "memory"); +} + #endif diff --git a/x86/unittests.cfg b/x86/unittests.cfg index f307168b0e01..9598c61ef7ac 100644 --- a/x86/unittests.cfg +++ b/x86/unittests.cfg @@ -366,6 +366,14 @@ arch = x86_64 groups = vmx timeout = 10 +[vmx_posted_intr_test] +file = vmx.flat +smp = 2 +extra_params = -cpu max,+vmx -append "vmx_posted_interrupts_test" +arch = x86_64 +groups = vmx +timeout = 10 + [vmx_apic_passthrough_thread] file = vmx.flat smp = 2 diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c index a26f77e92f72..1a3da59632dc 100644 --- a/x86/vmx_tests.c +++ b/x86/vmx_tests.c @@ -65,6 +65,11 @@ static u32 *get_vapic_page(void) return (u32 *)phys_to_virt(vmcs_read(APIC_VIRT_ADDR)); } +static u64 *get_pi_desc(void) +{ + return (u64 *)phys_to_virt(vmcs_read(POSTED_INTR_DESC_ADDR)); +} + static void basic_guest_main(void) { report_pass("Basic VMX test"); @@ -9327,6 +9332,18 @@ static void enable_vid(void) vmcs_set_bits(CPU_EXEC_CTRL1, CPU_VINTD | CPU_VIRT_X2APIC); } +#define PI_VECTOR 255 + +static void enable_posted_interrupts(void) +{ + void *pi_desc = alloc_page(); + + vmcs_set_bits(PIN_CONTROLS, PIN_POST_INTR); + vmcs_set_bits(EXI_CONTROLS, EXI_INTA); + vmcs_write(PINV, PI_VECTOR); + vmcs_write(POSTED_INTR_DESC_ADDR, (u64)pi_desc); +} + static void trigger_ioapic_scan_thread(void *data) { /* Wait until other CPU entered L2 */ @@ -10722,12 +10739,18 @@ enum Vid_op { VID_OP_SET_CR8, VID_OP_SELF_IPI, VID_OP_TERMINATE, + VID_OP_SPIN, + VID_OP_HLT, }; struct vmx_basic_vid_test_guest_args { enum Vid_op op; u8 nr; u32 isr_exec_cnt; + u32 *virtual_apic_page; + u64 *pi_desc; + u32 dest; + bool in_guest; } vmx_basic_vid_test_guest_args; /* @@ -10743,6 +10766,14 @@ static void set_virr_bit(volatile u32 *virtual_apic_page, u8 nr) virtual_apic_page[page_offset] |= mask; } +static void clear_virr_bit(volatile u32 *virtual_apic_page, u8 nr) +{ + u32 page_offset = (0x200 | ((nr & 0xE0) >> 1)) / sizeof(u32); + u32 mask = 1 << (nr & 0x1f); + + virtual_apic_page[page_offset] &= ~mask; +} + static bool get_virr_bit(volatile u32 *virtual_apic_page, u8 nr) { u32 page_offset = (0x200 | ((nr & 0xE0) >> 1)) / sizeof(u32); @@ -10783,6 +10814,24 @@ static void vmx_basic_vid_test_guest(void) case VID_OP_SELF_IPI: vmx_x2apic_write(APIC_SELF_IPI, nr); break; + case VID_OP_HLT: + cli(); + barrier(); + args->in_guest = true; + barrier(); + safe_halt(); + break; + 
case VID_OP_SPIN: { + u32 *virtual_apic_page = args->virtual_apic_page; + u32 prev_cnt = args->isr_exec_cnt; + u8 nr = args->nr; + + args->in_guest = true; + while (args->isr_exec_cnt == prev_cnt && + !get_virr_bit(virtual_apic_page, nr)) + pause(); + clear_virr_bit(virtual_apic_page, nr); + } default: break; } @@ -10803,6 +10852,7 @@ static void set_isrs_for_vmx_basic_vid_test(void) */ for (nr = 0x21; nr < 0x100; nr++) { vmcs_write(GUEST_INT_STATUS, 0); + args->virtual_apic_page = get_vapic_page(); args->op = VID_OP_SET_ISR; args->nr = nr; args->isr_exec_cnt = 0; @@ -10812,6 +10862,27 @@ static void set_isrs_for_vmx_basic_vid_test(void) report(true, "Set ISR for vectors 33-255."); } +static void post_interrupt(u8 vector, u32 dest) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + + test_and_set_bit(vector, args->pi_desc); + test_and_set_bit(256, args->pi_desc); + apic_icr_write(PI_VECTOR, dest); +} + +static void vmx_posted_interrupts_test_worker(void *data) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + + while (!args->in_guest) + pause(); + + post_interrupt(args->nr, args->dest); +} + /* * Test virtual interrupt delivery (VID) at VM-entry or TPR virtualization * @@ -10843,6 +10914,7 @@ static void test_basic_vid(u8 nr, u8 tpr, enum Vid_op op, u32 isr_exec_cnt_want, * delivery, sets VPPR to VTPR, when SVI is 0. */ args->isr_exec_cnt = 0; + args->virtual_apic_page = get_vapic_page(); args->op = op; switch (op) { case VID_OP_SELF_IPI: @@ -10855,6 +10927,15 @@ static void test_basic_vid(u8 nr, u8 tpr, enum Vid_op op, u32 isr_exec_cnt_want, args->nr = task_priority_class(tpr); set_vtpr(0xff); break; + case VID_OP_SPIN: + case VID_OP_HLT: + vmcs_write(GUEST_INT_STATUS, 0); + args->nr = nr; + set_vtpr(tpr); + args->in_guest = false; + barrier(); + on_cpu_async(1, vmx_posted_interrupts_test_worker, NULL); + break; default: vmcs_write(GUEST_INT_STATUS, nr); set_vtpr(tpr); @@ -10998,6 +11079,57 @@ static void vmx_eoi_virt_test(void) assert_exit_reason(VMX_VMCALL); } +static void vmx_posted_interrupts_test(void) +{ + volatile struct vmx_basic_vid_test_guest_args *args = + &vmx_basic_vid_test_guest_args; + u16 vector; + u8 class; + + if (!cpu_has_apicv()) { + report_skip("%s : Not all required APICv bits supported", __func__); + return; + } + + if (cpu_count() < 2) { + report_skip("%s : CPU count < 2", __func__); + return; + } + + enable_vid(); + enable_posted_interrupts(); + args->pi_desc = get_pi_desc(); + args->dest = apic_id(); + + test_set_guest(vmx_basic_vid_test_guest); + set_isrs_for_vmx_basic_vid_test(); + + for (class = 0; class < 16; class++) { + for (vector = 33; vector < 256; vector++) { + u32 isr_exec_cnt_want = + (task_priority_class(vector) > class) ? + 1 : 0; + + test_basic_vid(vector, class << 4, VID_OP_SPIN, + isr_exec_cnt_want, false); + + /* + * Only test posted interrupts to a halted vCPU if we + * expect the interrupt to be serviced. Otherwise, the + * vCPU could HLT indefinitely. 
+ */ if (isr_exec_cnt_want) test_basic_vid(vector, class << 4, VID_OP_HLT, isr_exec_cnt_want, false); } } report(true, "Posted vectors 33-255 cross TPR classes 0-0xf, running and sometimes halted\n"); /* Terminate the guest */ args->op = VID_OP_TERMINATE; enter_guest(); } #define TEST(name) { #name, .v2 = name } /* name/init/guest_main/exit_handler/syscall_handler/guest_regs */ @@ -11054,6 +11186,7 @@ struct vmx_test vmx_tests[] = { TEST(virt_x2apic_mode_test), TEST(vmx_basic_vid_test), TEST(vmx_eoi_virt_test), + TEST(vmx_posted_interrupts_test), /* APIC pass-through tests */ TEST(vmx_apic_passthrough_test), TEST(vmx_apic_passthrough_thread_test),
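For reference, the 64-byte posted-interrupt descriptor that enable_posted_interrupts() allocates, and that post_interrupt() manipulates with test_and_set_bit(), is laid out in the SDM as a 256-bit request bitmap plus control bits. The struct below is only an annotated sketch of that layout; the test itself simply treats the page as a u64 array:

    /* Posted-interrupt descriptor (64 bytes). post_interrupt(vector, dest) does:
     *   test_and_set_bit(vector, pi_desc);  - set one bit in PIR
     *   test_and_set_bit(256, pi_desc);     - set ON (outstanding notification)
     *   apic_icr_write(PI_VECTOR, dest);    - send the notification vector (255)
     * While the target CPU is running L2, receiving PINV makes it copy PIR into
     * the virtual IRR and re-evaluate pending virtual interrupts, which is why
     * vTPR[7:4] gates delivery exactly as in vmx_basic_vid_test. */
    struct pi_desc_sketch {
            u32 pir[8];             /* bits 0-255: posted-interrupt requests */
            u32 control;            /* bit 0 here is descriptor bit 256: ON  */
            u32 reserved[7];        /* pads the descriptor to 64 bytes       */
    };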