From patchwork Tue Dec 14 01:18:20 2021
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12675187
Date: Tue, 14 Dec 2021 01:18:20 +0000
In-Reply-To: <20211214011823.3277011-1-aaronlewis@google.com>
Message-Id: <20211214011823.3277011-2-aaronlewis@google.com>
References: <20211214011823.3277011-1-aaronlewis@google.com>
Subject: [kvm-unit-tests PATCH v2 1/4] x86: Fix a #GP from occurring in
 usermode library's exception handlers
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com

When handling an exception in usermode.c, the exception handler #GPs when
executing IRET to return from the exception handler. This happens because
the stack segment selector does not have the same privilege level as the
return code segment selector. Set the stack segment selector to match the
code segment selector's privilege level to fix the issue.
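To illustrate the constraint, here is a minimal software model of the 64-bit exception stack frame and the selector check involved. The struct layout and helper names are illustrative, not the kvm-unit-tests definitions, and the check is a simplification of the full IRET privilege checks.

```c
#include <assert.h>

/*
 * Sketch of the 64-bit interrupt/exception stack frame consumed by IRET.
 * In IA-32e mode the CPU unconditionally pushes (and IRET pops) SS:RSP,
 * so a handler that rewrites CS must keep SS consistent with CS's
 * privilege level or IRET will #GP. Hypothetical layout for illustration.
 */
struct iret_frame64 {
	unsigned long rip;
	unsigned long cs;
	unsigned long rflags;
	unsigned long rsp;	/* always present in 64-bit mode */
	unsigned long ss;	/* must agree with cs's privilege level */
};

/* The requested privilege level lives in the low two selector bits. */
static inline int selector_rpl(unsigned long sel)
{
	return sel & 3;
}

/* Simplified model of the mismatch that made IRET fault here. */
static inline int iret_would_gp(const struct iret_frame64 *f)
{
	return selector_rpl(f->ss) != selector_rpl(f->cs);
}
```

With a kernel CS (RPL 0) but a leftover user SS (RPL 3), `iret_would_gp()` is true; pointing SS at a kernel data selector, as the patch does with KERNEL_DS, clears the mismatch.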
This problem has been disguised in kvm-unit-tests because a #GP exception
handler has been registered with run_in_user() for the tests that are
currently using this feature. With a #GP exception handler registered, the
first exception will be processed, then #GP on the IRET. The IRET from the
second #GP will then succeed, and the subsequent longjmp() will restore RSP
to a sane value. But if no #GP handler is installed, e.g. if a test wants
to handle only #ACs, the #GP on the initial IRET will be fatal.

This is only a problem in 64-bit mode because 64-bit mode unconditionally
pops SS:RSP (SDM vol 3, 6.14.3 "IRET in IA-32e Mode"). In 32-bit mode
SS:RSP is not popped because there is no privilege level change when
executing IRET at the end of the #GP handler.

Signed-off-by: Aaron Lewis
Reviewed-by: Sean Christopherson
---
 lib/x86/desc.h     | 4 ++++
 lib/x86/usermode.c | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/lib/x86/desc.h b/lib/x86/desc.h
index b65539e..9b81da0 100644
--- a/lib/x86/desc.h
+++ b/lib/x86/desc.h
@@ -18,6 +18,10 @@ struct ex_regs {
 	unsigned long rip;
 	unsigned long cs;
 	unsigned long rflags;
+#ifdef __x86_64__
+	unsigned long rsp;
+	unsigned long ss;
+#endif
 };
 
 typedef void (*handler)(struct ex_regs *regs);
diff --git a/lib/x86/usermode.c b/lib/x86/usermode.c
index 2e77831..57a017d 100644
--- a/lib/x86/usermode.c
+++ b/lib/x86/usermode.c
@@ -26,6 +26,9 @@ static void restore_exec_to_jmpbuf_exception_handler(struct ex_regs *regs)
 	/* longjmp must happen after iret, so do not do it now.
 	 */
 	regs->rip = (unsigned long)&restore_exec_to_jmpbuf;
 	regs->cs = KERNEL_CS;
+#ifdef __x86_64__
+	regs->ss = KERNEL_DS;
+#endif
 }
 
 uint64_t run_in_user(usermode_func func, unsigned int fault_vector,

From patchwork Tue Dec 14 01:18:21 2021
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12675189
Date: Tue, 14 Dec 2021 01:18:21 +0000
In-Reply-To: <20211214011823.3277011-1-aaronlewis@google.com>
Message-Id: <20211214011823.3277011-3-aaronlewis@google.com>
References: <20211214011823.3277011-1-aaronlewis@google.com>
Subject: [kvm-unit-tests PATCH v2 2/4] x86: Align L2's stacks
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com

Setting the stack to PAGE_SIZE - 1 leaves the stack 1-byte aligned, which
fails in usermode with alignment checks enabled (i.e., with CR0.AM and
EFLAGS.AC set). This was causing an #AC in usermode.c when preparing to
call the callback in run_in_user(). Aligning the stack fixes the issue.
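A minimal sketch of the arithmetic involved, using a hypothetical page allocator stand-in (`aligned_alloc`) rather than the kvm-unit-tests `alloc_page()`: the old expression produced an odd stack top, while base + PAGE_SIZE keeps every natural alignment of the page.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* New scheme: top of a page-aligned page is itself 16-byte aligned. */
static uintptr_t stack_top(void *page)
{
	return (uintptr_t)page + PAGE_SIZE;
}

/* Old scheme: base + PAGE_SIZE - 1 yields a 1-byte-aligned stack top. */
static uintptr_t old_stack_top(void *page)
{
	return (uintptr_t)page + PAGE_SIZE - 1;
}
```

Any page-aligned base makes `stack_top()` a multiple of 16, so both the 8-byte requirement that triggered the #AC and the usual 16-byte x86_64 stack alignment are satisfied.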
For the purposes of fixing the #AC in usermode.c, the stack has to be
aligned to at least an 8-byte boundary. Setting it to a page-aligned
boundary ensures any stack alignment requirements are met, as x86_64
stacks generally want to be 16-byte aligned.

Signed-off-by: Aaron Lewis
Reviewed-by: Sean Christopherson
---
 x86/vmx.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/x86/vmx.c b/x86/vmx.c
index 6dc9a55..f4fbb94 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -42,7 +42,7 @@ u64 *bsp_vmxon_region;
 struct vmcs *vmcs_root;
 u32 vpid_cnt;
 
-void *guest_stack, *guest_syscall_stack;
+u64 guest_stack_top, guest_syscall_stack_top;
 u32 ctrl_pin, ctrl_enter, ctrl_exit, ctrl_cpu[2];
 struct regs regs;
@@ -1241,8 +1241,7 @@ static void init_vmcs_guest(void)
 	vmcs_write(GUEST_CR3, guest_cr3);
 	vmcs_write(GUEST_CR4, guest_cr4);
 	vmcs_write(GUEST_SYSENTER_CS,  KERNEL_CS);
-	vmcs_write(GUEST_SYSENTER_ESP,
-		(u64)(guest_syscall_stack + PAGE_SIZE - 1));
+	vmcs_write(GUEST_SYSENTER_ESP, guest_syscall_stack_top);
 	vmcs_write(GUEST_SYSENTER_EIP, (u64)(&entry_sysenter));
 	vmcs_write(GUEST_DR7, 0);
 	vmcs_write(GUEST_EFER, rdmsr(MSR_EFER));
@@ -1292,7 +1291,7 @@ static void init_vmcs_guest(void)
 
 	/* 26.3.1.4 */
 	vmcs_write(GUEST_RIP, (u64)(&guest_entry));
-	vmcs_write(GUEST_RSP, (u64)(guest_stack + PAGE_SIZE - 1));
+	vmcs_write(GUEST_RSP, guest_stack_top);
 	vmcs_write(GUEST_RFLAGS, X86_EFLAGS_FIXED);
 
 	/* 26.3.1.5 */
@@ -1388,8 +1387,8 @@ void init_vmx(u64 *vmxon_region)
 static void alloc_bsp_vmx_pages(void)
 {
 	bsp_vmxon_region = alloc_page();
-	guest_stack = alloc_page();
-	guest_syscall_stack = alloc_page();
+	guest_stack_top = (uintptr_t)alloc_page() + PAGE_SIZE;
+	guest_syscall_stack_top = (uintptr_t)alloc_page() + PAGE_SIZE;
 	vmcs_root = alloc_page();
 }

From patchwork Tue Dec 14 01:18:22 2021
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12675191
Date: Tue, 14 Dec 2021 01:18:22 +0000
In-Reply-To: <20211214011823.3277011-1-aaronlewis@google.com>
Message-Id: <20211214011823.3277011-4-aaronlewis@google.com>
References: <20211214011823.3277011-1-aaronlewis@google.com>
Subject: [kvm-unit-tests PATCH v2 3/4] x86: Add a test framework for
 nested_vmx_reflect_vmexit() testing
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com

Set up a test framework that verifies an exception occurring in L2 is
forwarded to the right place (L0? L1? L2?). To add a test to this
framework, just add the exception and callbacks to the vmx_exception_tests
array.

This framework tests two things:
  1) That an exception is handled by L2.
  2) That an exception is handled by L1.

To test that this happens, each exception is triggered twice: once with
just an L2 exception handler registered, and again with both an L2
exception handler registered and L1's exception bitmap set. The
expectation is that the first exception will be handled by L2 and the
second by L1.
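The routing decision the framework exercises can be sketched as a small model. This is a simplification of the architectural behavior (it ignores #PF error-code matching and NMI special cases) and the names are illustrative, not taken from the test code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified model of nested exception routing: if the exception's bit
 * is clear in L1's exception bitmap, the fault is delivered through L2's
 * IDT; if the bit is set, it instead causes a VM-exit to L1.
 */
enum who_handles { HANDLED_BY_L2, HANDLED_BY_L1 };

static enum who_handles route_exception(uint32_t exc_bitmap, uint8_t vector)
{
	return (exc_bitmap & (1u << vector)) ? HANDLED_BY_L1 : HANDLED_BY_L2;
}
```

The two runs per vector correspond to the two branches: the first run leaves the bitmap bit clear (L2 handles the fault), the second sets `1u << vector` in EXC_BITMAP (L1 sees a VM-exit).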
To implement this, support was added to vmx.c to allow more than one L2
test to be run in a single test. Previously there was a hard limit of only
being allowed to set the L2 guest code once in a given test. That is no
longer a limitation with the addition of test_set_guest_restartable().

Support was also added to allow a test to complete without running through
the entirety of the L2 guest code. Calling test_set_guest_finished() marks
the guest code as completed, allowing it to end without running to the end.

Signed-off-by: Aaron Lewis
---
 lib/x86/desc.c    |  2 +-
 lib/x86/desc.h    |  1 +
 x86/unittests.cfg |  7 ++++
 x86/vmx.c         | 17 +++++++++
 x86/vmx.h         |  2 ++
 x86/vmx_tests.c   | 88 +++++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 116 insertions(+), 1 deletion(-)

diff --git a/lib/x86/desc.c b/lib/x86/desc.c
index 16b7256..c2eb16e 100644
--- a/lib/x86/desc.c
+++ b/lib/x86/desc.c
@@ -91,7 +91,7 @@ struct ex_record {
 
 extern struct ex_record exception_table_start, exception_table_end;
 
-static const char* exception_mnemonic(int vector)
+const char* exception_mnemonic(int vector)
 {
 	switch(vector) {
 	case 0: return "#DE";
diff --git a/lib/x86/desc.h b/lib/x86/desc.h
index 9b81da0..ad6277b 100644
--- a/lib/x86/desc.h
+++ b/lib/x86/desc.h
@@ -224,6 +224,7 @@ void set_intr_alt_stack(int e, void *fn);
 void print_current_tss_info(void);
 handler handle_exception(u8 v, handler fn);
 void unhandled_exception(struct ex_regs *regs, bool cpu);
+const char* exception_mnemonic(int vector);
 bool test_for_exception(unsigned int ex, void (*trigger_func)(void *data),
 			void *data);
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 9fcdcae..0353b69 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -368,6 +368,13 @@ arch = x86_64
 groups = vmx nested_exception
 check = /sys/module/kvm_intel/parameters/allow_smaller_maxphyaddr=Y
 
+[vmx_exception_test]
+file = vmx.flat
+extra_params = -cpu max,+vmx -append vmx_exception_test
+arch = x86_64
+groups = vmx nested_exception
+timeout = 10
+
 [debug]
 file = debug.flat
 arch = x86_64
diff --git a/x86/vmx.c b/x86/vmx.c
index f4fbb94..9908746 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1895,6 +1895,23 @@ void test_set_guest(test_guest_func func)
 	v2_guest_main = func;
 }
 
+/*
+ * Set the target of the first enter_guest call and reset the RIP so 'func'
+ * will start from the beginning. This can be called multiple times per test.
+ */
+void test_set_guest_restartable(test_guest_func func)
+{
+	assert(current->v2);
+	v2_guest_main = func;
+	init_vmcs_guest();
+	guest_finished = 0;
+}
+
+void test_set_guest_finished(void)
+{
+	guest_finished = 1;
+}
+
 static void check_for_guest_termination(union exit_reason exit_reason)
 {
 	if (is_hypercall(exit_reason)) {
diff --git a/x86/vmx.h b/x86/vmx.h
index 4423986..5321a7e 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -1055,7 +1055,9 @@ void hypercall(u32 hypercall_no);
 typedef void (*test_guest_func)(void);
 typedef void (*test_teardown_func)(void *data);
 void test_set_guest(test_guest_func func);
+void test_set_guest_restartable(test_guest_func func);
 void test_add_teardown(test_teardown_func func, void *data);
 void test_skip(const char *msg);
+void test_set_guest_finished(void);
 
 #endif
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 3d57ed6..018db2f 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -10701,6 +10701,93 @@ static void vmx_pf_vpid_test(void)
 	__vmx_pf_vpid_test(invalidate_tlb_new_vpid, 1);
 }
 
+struct vmx_exception_test {
+	u8 vector;
+	void (*guest_code)(void);
+	void (*init_test)(void);
+	void (*uninit_test)(void);
+};
+
+struct vmx_exception_test vmx_exception_tests[] = {
+};
+
+static u8 vmx_exception_test_vector;
+
+static void vmx_exception_handler(struct ex_regs *regs)
+{
+	report(regs->vector == vmx_exception_test_vector,
+	       "Handling %s in L2's exception handler",
+	       exception_mnemonic(vmx_exception_test_vector));
+	vmcall();
+}
+
+static void handle_exception_in_l2(u8 vector)
+{
+	handler old_handler = handle_exception(vector,
+					       vmx_exception_handler);
+
+	vmx_exception_test_vector = vector;
+
+	enter_guest();
+	report(vmcs_read(EXI_REASON) == VMX_VMCALL,
+	       "%s handled by L2", exception_mnemonic(vector));
+
+	test_set_guest_finished();
+
+	handle_exception(vector, old_handler);
+}
+
+static void handle_exception_in_l1(u32 vector)
+{
+	handler old_handler = handle_exception(vector, vmx_exception_handler);
+	u32 old_eb = vmcs_read(EXC_BITMAP);
+
+	vmx_exception_test_vector = 0xff;
+
+	vmcs_write(EXC_BITMAP, old_eb | (1u << vector));
+
+	enter_guest();
+
+	report((vmcs_read(EXI_REASON) == VMX_EXC_NMI) &&
+	       ((vmcs_read(EXI_INTR_INFO) & 0xff) == vector),
+	       "%s handled by L1", exception_mnemonic(vector));
+
+	test_set_guest_finished();
+
+	vmcs_write(EXC_BITMAP, old_eb);
+	handle_exception(vector, old_handler);
+}
+
+static void vmx_exception_test(void)
+{
+	struct vmx_exception_test *t;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vmx_exception_tests); i++) {
+		t = &vmx_exception_tests[i];
+
+		TEST_ASSERT(t->guest_code);
+		test_set_guest_restartable(t->guest_code);
+
+		if (t->init_test)
+			t->init_test();
+
+		handle_exception_in_l2(t->vector);
+
+		if (t->uninit_test)
+			t->uninit_test();
+
+		test_set_guest_restartable(t->guest_code);
+
+		if (t->init_test)
+			t->init_test();
+
+		handle_exception_in_l1(t->vector);
+
+		if (t->uninit_test)
+			t->uninit_test();
+	}
+}
+
 #define TEST(name) { #name, .v2 = name }
 
 /* name/init/guest_main/exit_handler/syscall_handler/guest_regs */
@@ -10810,5 +10897,6 @@ struct vmx_test vmx_tests[] = {
 	TEST(vmx_pf_no_vpid_test),
 	TEST(vmx_pf_invvpid_test),
 	TEST(vmx_pf_vpid_test),
+	TEST(vmx_exception_test),
 	{ NULL, NULL, NULL, NULL, NULL, {0} },
 };

From patchwork Tue Dec 14 01:18:23 2021
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12675195
Date: Tue, 14 Dec 2021 01:18:23 +0000
In-Reply-To: <20211214011823.3277011-1-aaronlewis@google.com>
Message-Id: <20211214011823.3277011-5-aaronlewis@google.com>
References: <20211214011823.3277011-1-aaronlewis@google.com>
Subject: [kvm-unit-tests PATCH v2 4/4] x86: Add test coverage for
 nested_vmx_reflect_vmexit() testing
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com

Add test cases to ensure exceptions that occur in L2 are forwarded to the
correct place. Add testing for exceptions: #GP, #UD, #DE, #DB, #BP, and
#AC.
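Of these, #AC has the most preconditions. As a reference for the setup the #AC case needs, here is a small model of the trigger condition; the predicate is a simplification of the architectural rules and the names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit positions per the x86 architecture. */
#define X86_CR0_AM	(1ul << 18)
#define X86_EFLAGS_AC	(1ul << 18)

/*
 * Simplified model of when an alignment check fires: it requires CPL 3
 * with both CR0.AM and EFLAGS.AC set, plus an access whose address is
 * not a multiple of its operand size.
 */
static bool raises_ac(uint64_t cr0, uint64_t rflags, int cpl,
		      uint64_t addr, unsigned int size)
{
	return (cr0 & X86_CR0_AM) && (rflags & X86_EFLAGS_AC) &&
	       cpl == 3 && (addr % size) != 0;
}
```

This is why the #AC test below must set CR0.AM and EFLAGS.AC, drop to usermode via run_in_user(), and then perform an 8-byte store to a 4-byte-aligned address.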
Signed-off-by: Aaron Lewis
---
 x86/vmx_tests.c | 73 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 018db2f..f795330 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -21,6 +21,7 @@
 #include "smp.h"
 #include "delay.h"
 #include "access.h"
+#include "x86/usermode.h"
 
 #define VPID_CAP_INVVPID_TYPES_SHIFT 40
 
@@ -10701,6 +10702,72 @@ static void vmx_pf_vpid_test(void)
 	__vmx_pf_vpid_test(invalidate_tlb_new_vpid, 1);
 }
 
+static void vmx_l2_gp_test(void)
+{
+	*(volatile u64 *)NONCANONICAL = 0;
+}
+
+static void vmx_l2_ud_test(void)
+{
+	asm volatile ("ud2");
+}
+
+static void vmx_l2_de_test(void)
+{
+	asm volatile (
+		"xor %%eax, %%eax\n\t"
+		"xor %%ebx, %%ebx\n\t"
+		"xor %%edx, %%edx\n\t"
+		"idiv %%ebx\n\t"
+		::: "eax", "ebx", "edx");
+}
+
+static void vmx_l2_bp_test(void)
+{
+	asm volatile ("int3");
+}
+
+static void vmx_db_init(void)
+{
+	enable_tf();
+}
+
+static void vmx_db_uninit(void)
+{
+	disable_tf();
+}
+
+static void vmx_l2_db_test(void)
+{
+}
+
+static uint64_t usermode_callback(void)
+{
+	/* Trigger an #AC by writing 8 bytes to a 4-byte aligned address.
+	 */
+	asm volatile(
+		"sub $0x10, %rsp\n\t"
+		"movq $0, 0x4(%rsp)\n\t"
+		"add $0x10, %rsp\n\t");
+
+	return 0;
+}
+
+static void vmx_l2_ac_test(void)
+{
+	u64 old_cr0 = read_cr0();
+	u64 old_rflags = read_rflags();
+	bool raised_vector = false;
+
+	write_cr0(old_cr0 | X86_CR0_AM);
+	write_rflags(old_rflags | X86_EFLAGS_AC);
+
+	run_in_user(usermode_callback, AC_VECTOR, 0, 0, 0, 0, &raised_vector);
+	report(raised_vector, "#AC vector raised from usermode in L2");
+
+	write_cr0(old_cr0);
+	write_rflags(old_rflags);
+}
+
 struct vmx_exception_test {
 	u8 vector;
 	void (*guest_code)(void);
@@ -10709,6 +10776,12 @@ struct vmx_exception_test {
 };
 
 struct vmx_exception_test vmx_exception_tests[] = {
+	{ GP_VECTOR, vmx_l2_gp_test },
+	{ UD_VECTOR, vmx_l2_ud_test },
+	{ DE_VECTOR, vmx_l2_de_test },
+	{ DB_VECTOR, vmx_l2_db_test, vmx_db_init, vmx_db_uninit },
+	{ BP_VECTOR, vmx_l2_bp_test },
+	{ AC_VECTOR, vmx_l2_ac_test },
 };
 
 static u8 vmx_exception_test_vector;