From patchwork Thu Feb 4 00:01:06 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065797
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:06 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-2-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 01/12] KVM: x86: Set so called 'reserved CR3 bits in LM mask' at vCPU reset
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe

Set cr3_lm_rsvd_bits, which is
effectively an invalid GPA mask, at vCPU reset. The reserved bits check needs to be done even if userspace never configures the guest's CPUID model.

Cc: stable@vger.kernel.org
Fixes: 0107973a80ad ("KVM: x86: Introduce cr3_lm_rsvd_bits in kvm_vcpu_arch")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 667d0042d0b7..e6fbf2f574a6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10091,6 +10091,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	fx_init(vcpu);
 
 	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
+	vcpu->arch.cr3_lm_rsvd_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
 
 	vcpu->arch.pat = MSR_IA32_CR_PAT_DEFAULT;

From patchwork Thu Feb 4 00:01:07 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065803
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:07 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-3-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 02/12] KVM: nSVM: Don't strip host's C-bit from guest's CR3 when reading PDPTRs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe

Don't clear the SME C-bit when reading a guest PDPTR, as the GPA (CR3) is in the guest domain.

Barring a bizarre paravirtual use case, this is likely a benign bug. SME is not emulated by KVM, loading SEV guest PDPTRs is doomed as KVM can't use the correct key to read guest memory, and setting guest MAXPHYADDR higher than the host, i.e. overlapping the C-bit, would cause faults in the guest.

Note, for SEV guests, stripping the C-bit is technically aligned with CPU behavior, but for KVM it's the greater of two evils. Because KVM doesn't have access to the guest's encryption key, ignoring the C-bit would at best result in KVM reading garbage. By keeping the C-bit, KVM will fail its read (unless userspace creates a memslot with the C-bit set). The guest will still undoubtedly die, as KVM will use '0' for the PDPTR value, but that's preferable to interpreting encrypted data as a PDPTR.
Fixes: d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")
Cc: Tom Lendacky
Cc: Brijesh Singh
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 1ffb28cfe39d..70c72fe61e02 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -58,7 +58,7 @@ static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
 	u64 pdpte;
 	int ret;
 
-	ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(__sme_clr(cr3)), &pdpte,
+	ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(cr3), &pdpte,
				       offset_in_page(cr3) + index * 8, 8);
 	if (ret)
 		return 0;

From patchwork Thu Feb 4 00:01:08 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065801
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:08 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-4-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 03/12] KVM: x86: Add a helper to check for a legal GPA
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe

Add a helper to check for a legal GPA, and use it to consolidate code in existing, related helpers. Future patches will extend usage to VMX and SVM code, properly handle exceptions to the maxphyaddr rule, and add more helpers.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/cpuid.h | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index dc921d76e42e..674d61079f2d 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -36,9 +36,19 @@ static inline int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
 	return vcpu->arch.maxphyaddr;
 }
 
+static inline bool kvm_vcpu_is_legal_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
+{
+	return !(gpa >> cpuid_maxphyaddr(vcpu));
+}
+
 static inline bool kvm_vcpu_is_illegal_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
 {
-	return (gpa >= BIT_ULL(cpuid_maxphyaddr(vcpu)));
+	return !kvm_vcpu_is_legal_gpa(vcpu, gpa);
+}
+
+static inline bool page_address_valid(struct kvm_vcpu *vcpu, gpa_t gpa)
+{
+	return PAGE_ALIGNED(gpa) && kvm_vcpu_is_legal_gpa(vcpu, gpa);
 }
 
 struct cpuid_reg {
@@ -324,11 +334,6 @@ static __always_inline void kvm_cpu_cap_check_and_set(unsigned int x86_feature)
 	kvm_cpu_cap_set(x86_feature);
 }
 
-static inline bool page_address_valid(struct kvm_vcpu *vcpu, gpa_t gpa)
-{
-	return PAGE_ALIGNED(gpa) && !(gpa >> cpuid_maxphyaddr(vcpu));
-}
-
 static __always_inline bool guest_pv_has(struct kvm_vcpu *vcpu,
					 unsigned int kvm_feature)
 {

From patchwork Thu Feb 4 00:01:09 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065821
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:09 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-5-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 04/12] KVM: x86: Add a helper to handle legal GPA with an alignment requirement
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe

Add a helper to genericize checking for a legal GPA that also must conform to an arbitrary alignment, and use it in the existing page_address_valid(). Future patches will replace open coded variants in VMX and SVM.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/cpuid.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 674d61079f2d..a9d55ab51e3c 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -46,9 +46,15 @@ static inline bool kvm_vcpu_is_illegal_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
 	return !kvm_vcpu_is_legal_gpa(vcpu, gpa);
 }
 
+static inline bool kvm_vcpu_is_legal_aligned_gpa(struct kvm_vcpu *vcpu,
+						 gpa_t gpa, gpa_t alignment)
+{
+	return IS_ALIGNED(gpa, alignment) && kvm_vcpu_is_legal_gpa(vcpu, gpa);
+}
+
 static inline bool page_address_valid(struct kvm_vcpu *vcpu, gpa_t gpa)
 {
-	return PAGE_ALIGNED(gpa) && kvm_vcpu_is_legal_gpa(vcpu, gpa);
+	return kvm_vcpu_is_legal_aligned_gpa(vcpu, gpa, PAGE_SIZE);
 }
 
 struct cpuid_reg {

From patchwork Thu Feb 4 00:01:10 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065819
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:10 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-6-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 05/12] KVM: VMX: Use GPA legality helpers to replace open coded equivalents
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe

Replace a variety of open coded GPA checks with the recently introduced common helpers.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 26 +++++++-------------------
 arch/x86/kvm/vmx/vmx.c    |  2 +-
 2 files changed, 8 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b14fc19ceb36..b25ce704a2aa 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -775,8 +775,7 @@ static int nested_vmx_check_apicv_controls(struct kvm_vcpu *vcpu,
 	    (CC(!nested_cpu_has_vid(vmcs12)) ||
 	     CC(!nested_exit_intr_ack_set(vcpu)) ||
 	     CC((vmcs12->posted_intr_nv & 0xff00)) ||
-	     CC((vmcs12->posted_intr_desc_addr & 0x3f)) ||
-	     CC((vmcs12->posted_intr_desc_addr >> cpuid_maxphyaddr(vcpu)))))
+	     CC(!kvm_vcpu_is_legal_aligned_gpa(vcpu, vmcs12->posted_intr_desc_addr, 64))))
 		return -EINVAL;
 
 	/* tpr shadow is needed by all apicv features. */
@@ -789,13 +788,11 @@ static int nested_vmx_check_apicv_controls(struct kvm_vcpu *vcpu,
 static int nested_vmx_check_msr_switch(struct kvm_vcpu *vcpu,
				       u32 count, u64 addr)
 {
-	int maxphyaddr;
-
 	if (count == 0)
 		return 0;
-	maxphyaddr = cpuid_maxphyaddr(vcpu);
-	if (!IS_ALIGNED(addr, 16) || addr >> maxphyaddr ||
-	    (addr + count * sizeof(struct vmx_msr_entry) - 1) >> maxphyaddr)
+
+	if (!kvm_vcpu_is_legal_aligned_gpa(vcpu, addr, 16) ||
+	    !kvm_vcpu_is_legal_gpa(vcpu, (addr + count * sizeof(struct vmx_msr_entry) - 1)))
 		return -EINVAL;
 
 	return 0;
@@ -1093,14 +1090,6 @@ static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
 	}
 }
 
-static bool nested_cr3_valid(struct kvm_vcpu *vcpu, unsigned long val)
-{
-	unsigned long invalid_mask;
-
-	invalid_mask = (~0ULL) << cpuid_maxphyaddr(vcpu);
-	return (val & invalid_mask) == 0;
-}
-
 /*
  * Returns true if the MMU needs to be sync'd on nested VM-Enter/VM-Exit.
  * tl;dr: the MMU needs a sync if L0 is using shadow paging and L1 didn't
@@ -1152,7 +1141,7 @@ static bool nested_vmx_transition_mmu_sync(struct kvm_vcpu *vcpu)
 static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool nested_ept,
			       enum vm_entry_failure_code *entry_failure_code)
 {
-	if (CC(!nested_cr3_valid(vcpu, cr3))) {
+	if (CC(kvm_vcpu_is_illegal_gpa(vcpu, cr3))) {
 		*entry_failure_code = ENTRY_FAIL_DEFAULT;
 		return -EINVAL;
 	}
@@ -2666,7 +2655,6 @@ static int nested_vmx_check_nmi_controls(struct vmcs12 *vmcs12)
 static bool nested_vmx_check_eptp(struct kvm_vcpu *vcpu, u64 new_eptp)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	int maxphyaddr = cpuid_maxphyaddr(vcpu);
 
 	/* Check for memory type validity */
 	switch (new_eptp & VMX_EPTP_MT_MASK) {
@@ -2697,7 +2685,7 @@ static bool nested_vmx_check_eptp(struct kvm_vcpu *vcpu, u64 new_eptp)
 	}
 
 	/* Reserved bits should not be set */
-	if (CC(new_eptp >> maxphyaddr || ((new_eptp >> 7) & 0x1f)))
+	if (CC(kvm_vcpu_is_illegal_gpa(vcpu, new_eptp) || ((new_eptp >> 7) & 0x1f)))
 		return false;
 
 	/* AD, if set, should be supported */
@@ -2881,7 +2869,7 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
 	if (CC(!nested_host_cr0_valid(vcpu, vmcs12->host_cr0)) ||
 	    CC(!nested_host_cr4_valid(vcpu, vmcs12->host_cr4)) ||
-	    CC(!nested_cr3_valid(vcpu, vmcs12->host_cr3)))
+	    CC(kvm_vcpu_is_illegal_gpa(vcpu, vmcs12->host_cr3)))
 		return -EINVAL;
 
 	if (CC(is_noncanonical_address(vmcs12->host_ia32_sysenter_esp, vcpu)) ||
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index beb5a912014d..cbeb0748f25f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1114,7 +1114,7 @@ static inline bool pt_can_write_msr(struct vcpu_vmx *vmx)
 static inline bool pt_output_base_valid(struct kvm_vcpu *vcpu, u64 base)
 {
 	/* The base must be 128-byte aligned and a legal physical address. */
-	return !kvm_vcpu_is_illegal_gpa(vcpu, base) && !(base & 0x7f);
+	return kvm_vcpu_is_legal_aligned_gpa(vcpu, base, 128);
 }
 
 static inline void pt_load_msr(struct pt_ctx *ctx, u32 addr_range)

From patchwork Thu Feb 4 00:01:11 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065817
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:11 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-7-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 06/12] KVM: nSVM: Use common GPA helper to check for illegal CR3
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe
X-Mailing-List: kvm@vger.kernel.org

Replace an open coded check for an invalid CR3 with its equivalent
helper.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 70c72fe61e02..ac662964cee5 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -367,7 +367,7 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
 static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 			       bool nested_npt)
 {
-	if (cr3 & rsvd_bits(cpuid_maxphyaddr(vcpu), 63))
+	if (kvm_vcpu_is_illegal_gpa(vcpu, cr3))
 		return -EINVAL;
 
 	if (!nested_npt && is_pae_paging(vcpu) &&

From patchwork Thu Feb 4 00:01:12 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065813
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:12 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-8-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 07/12] KVM: x86: SEV: Treat C-bit as legal GPA bit regardless of vCPU mode
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe

Rename cr3_lm_rsvd_bits to reserved_gpa_bits, and use it for all GPA
legality checks.  AMD's APM states:

  If the C-bit is an address bit, this bit is masked from the guest
  physical address when it is translated through the nested page tables.

Thus, any access that can conceivably be run through NPT should ignore
the C-bit when checking for validity.

For features that KVM emulates in software, e.g. MTRRs, there is no
clear direction in the APM for how the C-bit should be handled.  For
such cases, follow the SME behavior inasmuch as possible, since SEV is
essentially a VM-specific variant of SME.  For SME, the APM states:

  In this case the upper physical address bits are treated as reserved
  when the feature is enabled except where otherwise indicated.

Collecting the various relevant SME snippets in the APM and cross-
referencing the omissions with Linux kernel code, this leaves MTRRs and
APIC_BASE as the only flows that KVM emulates that should _not_ ignore
the C-bit.

Note, this means the reserved bit checks in the page tables are
technically broken.  This will be remedied in a future patch.
Although the page table checks are technically broken, in practice, it's
all but guaranteed to be irrelevant.  NPT is required for SEV, i.e.
shadowing page tables isn't needed in the common case.  Theoretically,
the checks could be in play for nested NPT, but it's extremely unlikely
that anyone is running nested VMs on SEV, as doing so would require L1
to expose sensitive data to L0, e.g. the entire VMCB.  And if anyone is
running nested VMs, L0 can't read the guest's encrypted memory, i.e. L1
would need to put its NPT in shared memory, in which case the C-bit will
never be set.  Or, L1 could use shadow paging, but again, if L0 needs to
read page tables, e.g. to load PDPTRs, the memory can't be encrypted if
L1 has any expectation of L0 doing the right thing.

Cc: Tom Lendacky
Cc: Brijesh Singh
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/cpuid.c            | 2 +-
 arch/x86/kvm/cpuid.h            | 2 +-
 arch/x86/kvm/svm/nested.c       | 2 +-
 arch/x86/kvm/svm/svm.c          | 2 +-
 arch/x86/kvm/x86.c              | 7 +++----
 6 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 915f716e78e6..1653d49a66ff 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -654,7 +654,7 @@ struct kvm_vcpu_arch {
 	int cpuid_nent;
 	struct kvm_cpuid_entry2 *cpuid_entries;
 
-	unsigned long cr3_lm_rsvd_bits;
+	u64 reserved_gpa_bits;
 	int maxphyaddr;
 	int max_tdp_level;
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 944f518ca91b..7bd1331c1bbc 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -194,7 +194,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	vcpu->arch.cr4_guest_rsvd_bits =
 	    __cr4_reserved_bits(guest_cpuid_has, vcpu);
 
-	vcpu->arch.cr3_lm_rsvd_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
+	vcpu->arch.reserved_gpa_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
 
 	/* Invoke the vendor callback only after the above state is updated.
 	 */
 	static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index a9d55ab51e3c..f673f45bdf52 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -38,7 +38,7 @@ static inline int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
 
 static inline bool kvm_vcpu_is_legal_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
 {
-	return !(gpa >> cpuid_maxphyaddr(vcpu));
+	return !(gpa & vcpu->arch.reserved_gpa_bits);
 }
 
 static inline bool kvm_vcpu_is_illegal_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index ac662964cee5..add3cd4295e1 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -241,7 +241,7 @@ static bool nested_vmcb_check_cr3_cr4(struct vcpu_svm *svm,
 	 */
 	if ((save->efer & EFER_LME) && (save->cr0 & X86_CR0_PG)) {
 		if (!(save->cr4 & X86_CR4_PAE) || !(save->cr0 & X86_CR0_PE) ||
-		    (save->cr3 & vcpu->arch.cr3_lm_rsvd_bits))
+		    kvm_vcpu_is_illegal_gpa(vcpu, save->cr3))
 			return false;
 	}

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f53e6377a933..50ad5a3bf880 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4079,7 +4079,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	if (sev_guest(vcpu->kvm)) {
 		best = kvm_find_cpuid_entry(vcpu, 0x8000001F, 0);
 		if (best)
-			vcpu->arch.cr3_lm_rsvd_bits &= ~(1UL << (best->ebx & 0x3f));
+			vcpu->arch.reserved_gpa_bits &= ~(1UL << (best->ebx & 0x3f));
 	}
 
 	if (!kvm_vcpu_apicv_active(vcpu))

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e6fbf2f574a6..1da7ed093650 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1082,8 +1082,7 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 		return 0;
 	}
 
-	if (is_long_mode(vcpu) &&
-	    (cr3 & vcpu->arch.cr3_lm_rsvd_bits))
+	if (is_long_mode(vcpu) && kvm_vcpu_is_illegal_gpa(vcpu, cr3))
 		return 1;
 	else if (is_pae_paging(vcpu) &&
 		 !load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
@@ -9712,7 +9711,7 @@ static bool
kvm_is_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 		 */
 		if (!(sregs->cr4 & X86_CR4_PAE) || !(sregs->efer & EFER_LMA))
 			return false;
-		if (sregs->cr3 & vcpu->arch.cr3_lm_rsvd_bits)
+		if (kvm_vcpu_is_illegal_gpa(vcpu, sregs->cr3))
 			return false;
 	} else {
 		/*
@@ -10091,7 +10090,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	fx_init(vcpu);
 
 	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
-	vcpu->arch.cr3_lm_rsvd_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
+	vcpu->arch.reserved_gpa_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
 
 	vcpu->arch.pat = MSR_IA32_CR_PAT_DEFAULT;

From patchwork Thu Feb 4 00:01:13 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065815
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:13 -0800
In-Reply-To:
<20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-9-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 08/12] KVM: x86: Use reserved_gpa_bits to calculate reserved PxE bits
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe

Use reserved_gpa_bits, which accounts for exceptions to the maxphyaddr
rule, e.g. SEV's C-bit, for the page {table,directory,etc...} entry
(PxE) reserved bits checks.  For SEV, the C-bit is ignored by hardware
when walking page tables, e.g. the APM states:

  Note that while the guest may choose to set the C-bit explicitly on
  instruction pages and page table addresses, the value of this bit is
  a don't-care in such situations as hardware always performs these as
  private accesses.

Such behavior is expected to hold true for other features that repurpose
GPA bits, e.g. KVM could theoretically emulate SME or MKTME, which both
allow non-zero repurposed bits in the page tables.  Conceptually, KVM
should apply reserved GPA checks universally, and any features that do
not adhere to the basic rule should be explicitly handled, i.e. if a GPA
bit is repurposed but not allowed in page tables for whatever reason.

Refactor __reset_rsvds_bits_mask() to take the pre-generated reserved
bits mask, and opportunistically clean up its code, e.g. to align lines
and comments.

Practically speaking, this change is likely a glorified nop given the
current KVM code base.  SEV's C-bit is the only repurposed GPA bit, and
KVM doesn't support shadowing encrypted page tables (which is
theoretically possible via SEV debug APIs).
Cc: Rick Edgecombe
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/cpuid.c   |  10 ++--
 arch/x86/kvm/mmu/mmu.c | 104 ++++++++++++++++++++---------------------
 arch/x86/kvm/x86.c     |   3 +-
 3 files changed, 58 insertions(+), 59 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 7bd1331c1bbc..d313b1804278 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -188,16 +188,20 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	kvm_update_pv_runtime(vcpu);
 
 	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
-	kvm_mmu_reset_context(vcpu);
+	vcpu->arch.reserved_gpa_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
 
 	kvm_pmu_refresh(vcpu);
 	vcpu->arch.cr4_guest_rsvd_bits =
 	    __cr4_reserved_bits(guest_cpuid_has, vcpu);
 
-	vcpu->arch.reserved_gpa_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
-
 	/* Invoke the vendor callback only after the above state is updated. */
 	static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
+
+	/*
+	 * Except for the MMU, which needs to be reset after any vendor
+	 * specific adjustments to the reserved GPA bits.
+	 */
+	kvm_mmu_reset_context(vcpu);
 }
 
 static int is_efer_nx(void)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e8bfff9acd5e..d462db3bc742 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3985,20 +3985,27 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
 static void
 __reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 			struct rsvd_bits_validate *rsvd_check,
-			int maxphyaddr, int level, bool nx, bool gbpages,
+			u64 pa_bits_rsvd, int level, bool nx, bool gbpages,
 			bool pse, bool amd)
 {
-	u64 exb_bit_rsvd = 0;
 	u64 gbpages_bit_rsvd = 0;
 	u64 nonleaf_bit8_rsvd = 0;
+	u64 high_bits_rsvd;
 
 	rsvd_check->bad_mt_xwr = 0;
 
-	if (!nx)
-		exb_bit_rsvd = rsvd_bits(63, 63);
 	if (!gbpages)
 		gbpages_bit_rsvd = rsvd_bits(7, 7);
 
+	if (level == PT32E_ROOT_LEVEL)
+		high_bits_rsvd = pa_bits_rsvd & rsvd_bits(0, 62);
+	else
+		high_bits_rsvd = pa_bits_rsvd & rsvd_bits(0, 51);
+
+	/* Note, NX doesn't exist in PDPTEs, this is handled below. */
+	if (!nx)
+		high_bits_rsvd |= rsvd_bits(63, 63);
+
 	/*
 	 * Non-leaf PML4Es and PDPEs reserve bit 8 (which would be the G bit for
 	 * leaf entries) on AMD CPUs only.
@@ -4027,45 +4034,39 @@ __reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 		rsvd_check->rsvd_bits_mask[1][1] = rsvd_bits(13, 21);
 		break;
 	case PT32E_ROOT_LEVEL:
-		rsvd_check->rsvd_bits_mask[0][2] =
-			rsvd_bits(maxphyaddr, 63) |
-			rsvd_bits(5, 8) | rsvd_bits(1, 2);	/* PDPTE */
-		rsvd_check->rsvd_bits_mask[0][1] = exb_bit_rsvd |
-			rsvd_bits(maxphyaddr, 62);	/* PDE */
-		rsvd_check->rsvd_bits_mask[0][0] = exb_bit_rsvd |
-			rsvd_bits(maxphyaddr, 62);	/* PTE */
-		rsvd_check->rsvd_bits_mask[1][1] = exb_bit_rsvd |
-			rsvd_bits(maxphyaddr, 62) |
-			rsvd_bits(13, 20);		/* large page */
+		rsvd_check->rsvd_bits_mask[0][2] = rsvd_bits(63, 63) |
+						   high_bits_rsvd |
+						   rsvd_bits(5, 8) |
+						   rsvd_bits(1, 2);	/* PDPTE */
+		rsvd_check->rsvd_bits_mask[0][1] = high_bits_rsvd;	/* PDE */
+		rsvd_check->rsvd_bits_mask[0][0] = high_bits_rsvd;	/* PTE */
+		rsvd_check->rsvd_bits_mask[1][1] = high_bits_rsvd |
+						   rsvd_bits(13, 20);	/* large page */
 		rsvd_check->rsvd_bits_mask[1][0] =
 			rsvd_check->rsvd_bits_mask[0][0];
 		break;
 	case PT64_ROOT_5LEVEL:
-		rsvd_check->rsvd_bits_mask[0][4] = exb_bit_rsvd |
-			nonleaf_bit8_rsvd | rsvd_bits(7, 7) |
-			rsvd_bits(maxphyaddr, 51);
+		rsvd_check->rsvd_bits_mask[0][4] = high_bits_rsvd |
+						   nonleaf_bit8_rsvd |
+						   rsvd_bits(7, 7);
 		rsvd_check->rsvd_bits_mask[1][4] =
 			rsvd_check->rsvd_bits_mask[0][4];
 		fallthrough;
 	case PT64_ROOT_4LEVEL:
-		rsvd_check->rsvd_bits_mask[0][3] = exb_bit_rsvd |
-			nonleaf_bit8_rsvd | rsvd_bits(7, 7) |
-			rsvd_bits(maxphyaddr, 51);
-		rsvd_check->rsvd_bits_mask[0][2] = exb_bit_rsvd |
-			gbpages_bit_rsvd |
-			rsvd_bits(maxphyaddr, 51);
-		rsvd_check->rsvd_bits_mask[0][1] = exb_bit_rsvd |
-			rsvd_bits(maxphyaddr, 51);
-		rsvd_check->rsvd_bits_mask[0][0] = exb_bit_rsvd |
-			rsvd_bits(maxphyaddr, 51);
+		rsvd_check->rsvd_bits_mask[0][3] = high_bits_rsvd |
+						   nonleaf_bit8_rsvd |
+						   rsvd_bits(7, 7);
+		rsvd_check->rsvd_bits_mask[0][2] = high_bits_rsvd |
+						   gbpages_bit_rsvd;
+		rsvd_check->rsvd_bits_mask[0][1] = high_bits_rsvd;
+		rsvd_check->rsvd_bits_mask[0][0] = high_bits_rsvd;
 		rsvd_check->rsvd_bits_mask[1][3] =
 			rsvd_check->rsvd_bits_mask[0][3];
-		rsvd_check->rsvd_bits_mask[1][2] = exb_bit_rsvd |
-			gbpages_bit_rsvd | rsvd_bits(maxphyaddr, 51) |
-			rsvd_bits(13, 29);
-		rsvd_check->rsvd_bits_mask[1][1] = exb_bit_rsvd |
-			rsvd_bits(maxphyaddr, 51) |
-			rsvd_bits(13, 20);		/* large page */
+		rsvd_check->rsvd_bits_mask[1][2] = high_bits_rsvd |
+						   gbpages_bit_rsvd |
+						   rsvd_bits(13, 29);
+		rsvd_check->rsvd_bits_mask[1][1] = high_bits_rsvd |
+						   rsvd_bits(13, 20);	/* large page */
 		rsvd_check->rsvd_bits_mask[1][0] =
 			rsvd_check->rsvd_bits_mask[0][0];
 		break;
@@ -4076,8 +4077,8 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 				  struct kvm_mmu *context)
 {
 	__reset_rsvds_bits_mask(vcpu, &context->guest_rsvd_check,
-				cpuid_maxphyaddr(vcpu), context->root_level,
-				context->nx,
+				vcpu->arch.reserved_gpa_bits,
+				context->root_level, context->nx,
 				guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
 				is_pse(vcpu),
 				guest_cpuid_is_amd_or_hygon(vcpu));
@@ -4085,27 +4086,22 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 static void
 __reset_rsvds_bits_mask_ept(struct rsvd_bits_validate *rsvd_check,
-			    int maxphyaddr, bool execonly)
+			    u64 pa_bits_rsvd, bool execonly)
 {
+	u64 high_bits_rsvd = pa_bits_rsvd & rsvd_bits(0, 51);
 	u64 bad_mt_xwr;
 
-	rsvd_check->rsvd_bits_mask[0][4] =
-		rsvd_bits(maxphyaddr, 51) | rsvd_bits(3, 7);
-	rsvd_check->rsvd_bits_mask[0][3] =
-		rsvd_bits(maxphyaddr, 51) | rsvd_bits(3, 7);
-	rsvd_check->rsvd_bits_mask[0][2] =
-		rsvd_bits(maxphyaddr, 51) | rsvd_bits(3, 6);
-	rsvd_check->rsvd_bits_mask[0][1] =
-		rsvd_bits(maxphyaddr, 51) | rsvd_bits(3, 6);
-	rsvd_check->rsvd_bits_mask[0][0] = rsvd_bits(maxphyaddr, 51);
+	rsvd_check->rsvd_bits_mask[0][4] = high_bits_rsvd | rsvd_bits(3, 7);
+	rsvd_check->rsvd_bits_mask[0][3] = high_bits_rsvd | rsvd_bits(3, 7);
+	rsvd_check->rsvd_bits_mask[0][2] = high_bits_rsvd | rsvd_bits(3, 6);
+	rsvd_check->rsvd_bits_mask[0][1] = high_bits_rsvd | rsvd_bits(3, 6);
+	rsvd_check->rsvd_bits_mask[0][0] =
high_bits_rsvd;
 
 	/* large page */
 	rsvd_check->rsvd_bits_mask[1][4] =
 		rsvd_check->rsvd_bits_mask[0][4];
 	rsvd_check->rsvd_bits_mask[1][3] =
 		rsvd_check->rsvd_bits_mask[0][3];
-	rsvd_check->rsvd_bits_mask[1][2] =
-		rsvd_bits(maxphyaddr, 51) | rsvd_bits(12, 29);
-	rsvd_check->rsvd_bits_mask[1][1] =
-		rsvd_bits(maxphyaddr, 51) | rsvd_bits(12, 20);
+	rsvd_check->rsvd_bits_mask[1][2] = high_bits_rsvd | rsvd_bits(12, 29);
+	rsvd_check->rsvd_bits_mask[1][1] = high_bits_rsvd | rsvd_bits(12, 20);
 	rsvd_check->rsvd_bits_mask[1][0] =
 		rsvd_check->rsvd_bits_mask[0][0];
 
 	bad_mt_xwr = 0xFFull << (2 * 8);	/* bits 3..5 must not be 2 */
@@ -4124,7 +4120,7 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
 				      struct kvm_mmu *context, bool execonly)
 {
 	__reset_rsvds_bits_mask_ept(&context->guest_rsvd_check,
-				    cpuid_maxphyaddr(vcpu), execonly);
+				    vcpu->arch.reserved_gpa_bits, execonly);
 }
 
 /*
@@ -4146,7 +4142,7 @@ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 	 */
 	shadow_zero_check = &context->shadow_zero_check;
 	__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-				shadow_phys_bits,
+				rsvd_bits(shadow_phys_bits, 63),
 				context->shadow_root_level, uses_nx,
 				guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
 				is_pse(vcpu), true);
@@ -4183,13 +4179,13 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 
 	if (boot_cpu_is_amd())
 		__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-					shadow_phys_bits,
+					rsvd_bits(shadow_phys_bits, 63),
 					context->shadow_root_level, false,
 					boot_cpu_has(X86_FEATURE_GBPAGES),
 					true, true);
 	else
 		__reset_rsvds_bits_mask_ept(shadow_zero_check,
-					    shadow_phys_bits,
+					    rsvd_bits(shadow_phys_bits, 63),
 					    false);
 
 	if (!shadow_me_mask)
@@ -4210,7 +4206,7 @@
 reset_ept_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 				struct kvm_mmu *context, bool execonly)
 {
 	__reset_rsvds_bits_mask_ept(&context->shadow_zero_check,
-				    shadow_phys_bits, execonly);
+				    rsvd_bits(shadow_phys_bits, 63), execonly);
 }
 
 #define BYTE_MASK(access) \

diff --git a/arch/x86/kvm/x86.c
b/arch/x86/kvm/x86.c
index 1da7ed093650..82a70511c0d3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -761,8 +761,7 @@ static int kvm_read_nested_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 
 static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
 {
-	return rsvd_bits(cpuid_maxphyaddr(vcpu), 63) | rsvd_bits(5, 8) |
-	       rsvd_bits(1, 2);
+	return vcpu->arch.reserved_gpa_bits | rsvd_bits(5, 8) | rsvd_bits(1, 2);
 }
 
 /*

From patchwork Thu Feb 4 00:01:14 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065805
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:14 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-10-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
X-Mailer: git-send-email
2.30.0.365.g02bc693789-goog
Subject: [PATCH 09/12] KVM: x86/mmu: Add helper to generate mask of reserved HPA bits
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe

Add a helper to generate the mask of reserved PA bits in the host.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d462db3bc742..86af58294272 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4123,6 +4123,11 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
 				    vcpu->arch.reserved_gpa_bits, execonly);
 }
 
+static inline u64 reserved_hpa_bits(void)
+{
+	return rsvd_bits(shadow_phys_bits, 63);
+}
+
 /*
  * the page table on host is the shadow page table for the page
  * table in guest or amd nested guest, its mmu features completely
@@ -4142,7 +4147,7 @@ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 	 */
 	shadow_zero_check = &context->shadow_zero_check;
 	__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-				rsvd_bits(shadow_phys_bits, 63),
+				reserved_hpa_bits(),
 				context->shadow_root_level, uses_nx,
 				guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
 				is_pse(vcpu), true);
@@ -4179,14 +4184,13 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 
 	if (boot_cpu_is_amd())
 		__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-					rsvd_bits(shadow_phys_bits, 63),
+					reserved_hpa_bits(),
 					context->shadow_root_level, false,
 					boot_cpu_has(X86_FEATURE_GBPAGES),
 					true, true);
 	else
 		__reset_rsvds_bits_mask_ept(shadow_zero_check,
-					    rsvd_bits(shadow_phys_bits, 63),
-					    false);
+					    reserved_hpa_bits(), false);
 
 	if (!shadow_me_mask)
 		return;
@@ -4206,7 +4210,7 @@
reset_ept_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 				struct kvm_mmu *context, bool execonly)
 {
 	__reset_rsvds_bits_mask_ept(&context->shadow_zero_check,
-				    rsvd_bits(shadow_phys_bits, 63), execonly);
+				    reserved_hpa_bits(), execonly);
 }
 
 #define BYTE_MASK(access) \

From patchwork Thu Feb 4 00:01:15 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065807
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:15 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-11-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 10/12] KVM: x86: Add helper to consolidate "raw" reserved GPA mask calculations
From: Sean Christopherson
To: Paolo
Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky , Brijesh Singh , Rick Edgecombe Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a helper to generate the mask of reserved GPA bits _without_ any adjustments for repurposed bits, and use it to replace a variety of open coded variants in the MTRR and APIC_BASE flows. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/cpuid.c | 12 +++++++++++- arch/x86/kvm/cpuid.h | 1 + arch/x86/kvm/mtrr.c | 12 ++++++------ arch/x86/kvm/x86.c | 4 ++-- 4 files changed, 20 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index d313b1804278..dd9406450696 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -188,7 +188,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu) kvm_update_pv_runtime(vcpu); vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu); - vcpu->arch.reserved_gpa_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63); + vcpu->arch.reserved_gpa_bits = kvm_vcpu_reserved_gpa_bits_raw(vcpu); kvm_pmu_refresh(vcpu); vcpu->arch.cr4_guest_rsvd_bits = @@ -242,6 +242,16 @@ int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu) return 36; } +/* + * This "raw" version returns the reserved GPA bits without any adjustments for + * encryption technologies that usurp bits. The raw mask should be used if and + * only if hardware does _not_ strip the usurped bits, e.g. in virtual MTRRs. 
+ */ +u64 kvm_vcpu_reserved_gpa_bits_raw(struct kvm_vcpu *vcpu) +{ + return rsvd_bits(cpuid_maxphyaddr(vcpu), 63); +} + /* when an old userspace process fills a new kernel module */ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid *cpuid, diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index f673f45bdf52..2a0c5064497f 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -30,6 +30,7 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx, u32 *ecx, u32 *edx, bool exact_only); int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu); +u64 kvm_vcpu_reserved_gpa_bits_raw(struct kvm_vcpu *vcpu); static inline int cpuid_maxphyaddr(struct kvm_vcpu *vcpu) { diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c index f472fdb6ae7e..a8502e02f479 100644 --- a/arch/x86/kvm/mtrr.c +++ b/arch/x86/kvm/mtrr.c @@ -75,7 +75,7 @@ bool kvm_mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data) /* variable MTRRs */ WARN_ON(!(msr >= 0x200 && msr < 0x200 + 2 * KVM_NR_VAR_MTRR)); - mask = (~0ULL) << cpuid_maxphyaddr(vcpu); + mask = kvm_vcpu_reserved_gpa_bits_raw(vcpu); if ((msr & 1) == 0) { /* MTRR base */ if (!valid_mtrr_type(data & 0xff)) @@ -351,14 +351,14 @@ static void set_var_mtrr_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data) if (var_mtrr_range_is_valid(cur)) list_del(&mtrr_state->var_ranges[index].node); - /* Extend the mask with all 1 bits to the left, since those - * bits must implicitly be 0. The bits are then cleared - * when reading them. + /* + * Set all illegal GPA bits in the mask, since those bits must + * implicitly be 0. The bits are then cleared when reading them. */ if (!is_mtrr_mask) cur->base = data; else - cur->mask = data | (-1LL << cpuid_maxphyaddr(vcpu)); + cur->mask = data | kvm_vcpu_reserved_gpa_bits_raw(vcpu); /* add it to the list if it's enabled. 
*/ if (var_mtrr_range_is_valid(cur)) { @@ -426,7 +426,7 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata) else *pdata = vcpu->arch.mtrr_state.var_ranges[index].mask; - *pdata &= (1ULL << cpuid_maxphyaddr(vcpu)) - 1; + *pdata &= ~kvm_vcpu_reserved_gpa_bits_raw(vcpu); } return 0; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 82a70511c0d3..28fea7ff7a86 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -408,7 +408,7 @@ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info) { enum lapic_mode old_mode = kvm_get_apic_mode(vcpu); enum lapic_mode new_mode = kvm_apic_mode(msr_info->data); - u64 reserved_bits = ((~0ULL) << cpuid_maxphyaddr(vcpu)) | 0x2ff | + u64 reserved_bits = kvm_vcpu_reserved_gpa_bits_raw(vcpu) | 0x2ff | (guest_cpuid_has(vcpu, X86_FEATURE_X2APIC) ? 0 : X2APIC_ENABLE); if ((msr_info->data & reserved_bits) != 0 || new_mode == LAPIC_MODE_INVALID) @@ -10089,7 +10089,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) fx_init(vcpu); vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu); - vcpu->arch.reserved_gpa_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63); + vcpu->arch.reserved_gpa_bits = kvm_vcpu_reserved_gpa_bits_raw(vcpu); vcpu->arch.pat = MSR_IA32_CR_PAT_DEFAULT; From patchwork Thu Feb 4 00:01:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12065809 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:16 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-12-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 11/12] KVM: x86: Move nVMX's consistency check macro to common code
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe
X-Mailing-List: kvm@vger.kernel.org

Move KVM's CC() macro to x86.h so that it can be reused by nSVM.
Debugging VM-Enter is as painful on SVM as it is on VMX.  Rename the
more visible macro to KVM_NESTED_VMENTER_CONSISTENCY_CHECK to avoid any
collisions with the uber-concise "CC".
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 8 +-------
 arch/x86/kvm/x86.h        | 8 ++++++++
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b25ce704a2aa..dbca1687ae8e 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -21,13 +21,7 @@ module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
 static bool __read_mostly nested_early_check = 0;
 module_param(nested_early_check, bool, S_IRUGO);
 
-#define CC(consistency_check)						\
-({									\
-	bool failed = (consistency_check);				\
-	if (failed)							\
-		trace_kvm_nested_vmenter_failed(#consistency_check, 0);	\
-	failed;								\
-})
+#define CC KVM_NESTED_VMENTER_CONSISTENCY_CHECK
 
 /*
  * Hyper-V requires all of these, so mark them as supported even though
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 4f875f8d93b3..a14da36a30ed 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -8,6 +8,14 @@
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
 
+#define KVM_NESTED_VMENTER_CONSISTENCY_CHECK(consistency_check)		\
+({									\
+	bool failed = (consistency_check);				\
+	if (failed)							\
+		trace_kvm_nested_vmenter_failed(#consistency_check, 0);	\
+	failed;								\
+})
+
 #define KVM_DEFAULT_PLE_GAP		128
 #define KVM_VMX_DEFAULT_PLE_WINDOW	4096
 #define KVM_DEFAULT_PLE_WINDOW_GROW	2

From patchwork Thu Feb 4 00:01:17 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12065811
Reply-To: Sean Christopherson
Date: Wed, 3 Feb 2021 16:01:17 -0800
In-Reply-To: <20210204000117.3303214-1-seanjc@google.com>
Message-Id: <20210204000117.3303214-13-seanjc@google.com>
References: <20210204000117.3303214-1-seanjc@google.com>
Subject: [PATCH 12/12] KVM: nSVM: Trace VM-Enter consistency check failures
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Tom Lendacky, Brijesh Singh, Rick Edgecombe
X-Mailing-List: kvm@vger.kernel.org

Use trace_kvm_nested_vmenter_failed() and its macro magic to trace
consistency check failures on nested VMRUN.  Tracing such failures by
running the buggy VMM as a KVM guest is often the only way to get a
precise explanation of why VMRUN failed.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/nested.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index add3cd4295e1..16fea02471a7 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -29,6 +29,8 @@
 #include "lapic.h"
 #include "svm.h"
 
+#define CC KVM_NESTED_VMENTER_CONSISTENCY_CHECK
+
 static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
 				       struct x86_exception *fault)
 {
@@ -216,14 +218,13 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
 
 static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
 {
-	if ((vmcb_is_intercept(control, INTERCEPT_VMRUN)) == 0)
+	if (CC(!vmcb_is_intercept(control, INTERCEPT_VMRUN)))
 		return false;
 
-	if (control->asid == 0)
+	if (CC(control->asid == 0))
 		return false;
 
-	if ((control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
-	    !npt_enabled)
+	if (CC((control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) && !npt_enabled))
 		return false;
 
 	return true;
@@ -240,32 +241,36 @@ static bool nested_vmcb_check_cr3_cr4(struct vcpu_svm *svm,
 	 * CR0.PG && EFER.LME.
 	 */
 	if ((save->efer & EFER_LME) && (save->cr0 & X86_CR0_PG)) {
-		if (!(save->cr4 & X86_CR4_PAE) || !(save->cr0 & X86_CR0_PE) ||
-		    kvm_vcpu_is_illegal_gpa(vcpu, save->cr3))
+		if (CC(!(save->cr4 & X86_CR4_PAE)) ||
+		    CC(!(save->cr0 & X86_CR0_PE)) ||
+		    CC(kvm_vcpu_is_illegal_gpa(vcpu, save->cr3)))
 			return false;
 	}
 
-	return kvm_is_valid_cr4(&svm->vcpu, save->cr4);
+	if (CC(!kvm_is_valid_cr4(vcpu, save->cr4)))
+		return false;
+
+	return true;
 }
 
 /* Common checks that apply to both L1 and L2 state. */
 static bool nested_vmcb_valid_sregs(struct vcpu_svm *svm,
 				    struct vmcb_save_area *save)
 {
-	if (!(save->efer & EFER_SVME))
+	if (CC(!(save->efer & EFER_SVME)))
 		return false;
 
-	if (((save->cr0 & X86_CR0_CD) == 0 && (save->cr0 & X86_CR0_NW)) ||
-	    (save->cr0 & ~0xffffffffULL))
+	if (CC((save->cr0 & X86_CR0_CD) == 0 && (save->cr0 & X86_CR0_NW)) ||
+	    CC(save->cr0 & ~0xffffffffULL))
 		return false;
 
-	if (!kvm_dr6_valid(save->dr6) || !kvm_dr7_valid(save->dr7))
+	if (CC(!kvm_dr6_valid(save->dr6)) || CC(!kvm_dr7_valid(save->dr7)))
 		return false;
 
 	if (!nested_vmcb_check_cr3_cr4(svm, save))
 		return false;
 
-	if (!kvm_valid_efer(&svm->vcpu, save->efer))
+	if (CC(!kvm_valid_efer(&svm->vcpu, save->efer)))
 		return false;
 
 	return true;
@@ -367,12 +372,12 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
 static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 			       bool nested_npt)
 {
-	if (kvm_vcpu_is_illegal_gpa(vcpu, cr3))
+	if (CC(kvm_vcpu_is_illegal_gpa(vcpu, cr3)))
 		return -EINVAL;
 
 	if (!nested_npt && is_pae_paging(vcpu) &&
 	    (cr3 != kvm_read_cr3(vcpu) || pdptrs_changed(vcpu))) {
-		if (!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
+		if (CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3)))
 			return -EINVAL;
 	}